Many leaders are wondering when their companies will see returns on their AI investments. On this episode of the At the Edge podcast, Elad Gil—technology angel investor, AI expert, and entrepreneur—says many companies and organizations will need at least ten years to fully integrate AI into their everyday processes, even as its value is already proving itself. In his conversation with Senior Partner Lareina Yee, Gil talks about major drivers of AI’s impact, from market shifts and new business models to the persistent challenge of change management—and why data may be overrated.
The following transcript has been edited for clarity and length.
Evolution and market dynamics of AI technology
Lareina Yee: You are legendary in terms of how you’re shaping the AI market. Give us a snapshot of the AI journey we’ve been on.
Elad Gil: I was at McKinsey way back when, and then I joined Google. Before this, I was a technology investor, and I got involved with companies like Airbnb, Coinbase, Stripe, et cetera.
As everybody probably knows, in 2017, there was a new type of model architecture invented at Google called the Transformer. And that led to the revolution we’re seeing right now in terms of large language models and foundation labs.
The T in GPT, when you talk about ChatGPT, stands for transformer. Around 2021 or so, I started doing a lot of investing in these transformer-based model companies. It could be companies building the models themselves; it could be companies making use of them.
But we’ve gone through these waves where things have gone from being very certain to very uncertain to very certain again for different AI markets. What happened was OpenAI came out with GPT-3. A bunch of nerds, such as myself, thought it was really interesting, but most of the world ignored it.
But you started to see this interesting scaling law or scaling curve where you realized that if you threw more compute at one of these types of models, it got smarter and smarter, and it was unclear where the limit to that was.
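The scaling relationship Gil describes is often modeled as a power law: loss falls smoothly as training compute grows, with no obvious ceiling in sight. A minimal sketch of that idea, with made-up constants rather than coefficients from any real model:

```python
# Illustrative scaling-law curve: predicted loss as a power law of compute
# plus an irreducible floor. The constants a, alpha, and floor are invented
# for illustration; real values come from fitting many training runs.

def scaling_loss(compute_flops: float, a: float = 13.0,
                 alpha: float = 0.05, floor: float = 1.7) -> float:
    """Predicted loss as a function of training compute."""
    return floor + a * compute_flops ** -alpha

# Each jump in compute keeps lowering predicted loss; the curve flattens
# but never clearly stops improving, which is the observation Gil points to.
for flops in (1e20, 1e22, 1e24):
    print(f"{flops:.0e} FLOPs -> predicted loss {scaling_loss(flops):.3f}")
```

The takeaway is less the exact numbers than the shape: as long as the curve keeps bending down with more compute, throwing capital at training remains rational.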
That was the moment, I think, where a lot of good researchers who had entrepreneurial tendencies started companies. About two years ago, things started to get very crowded in a lot of markets that were kind of obvious markets.
Those crowded markets included the foundation model labs themselves: Anthropic versus Google versus xAI versus Mistral versus Meta, et cetera. They also included things like customer success and coding. So that was an era of great uncertainty, at least for me.
In that era, I kept telling people that the more I learned about AI, the less I knew, while with other markets, the more you learned about something, the more you knew. I feel like last year we suddenly had crystallization of some of these markets in terms of who the main contenders are.
This has kind of been a year of clarification. One big trend is that market clarification. The second big trend is probably the move to more agentic-related workflows, although it’s very early. A third one would be a move toward different types of models besides language: physics, biology, chemistry, materials, et cetera. Self-driving is suddenly working for the first time in reality.
AI enterprise adoption takes time
Lareina Yee: Let’s dive into this concept that you see more clarity. I think for a lot of enterprises, as they’re thinking about trying to adopt these things, they see less clarity.
Elad Gil: I think we always have to remind ourselves that technology adoption takes a while. The start-ups in Silicon Valley will adopt something really early and really quickly, and this happened with the internet, it happened with mobile, and it happened with the cloud.
And then it takes another ten years to propagate everywhere else. Even within certain early-adopting companies, it takes a decade sometimes for that to really turn into a great product. And AI’s going to be the same. It’s going to take time to really bake it in properly.
And there are two reasons for that. One is the technology curve—just how good the technology is. But honestly, the second thing is change management. It takes time to reorganize an organization around a new technology and to figure out how to integrate it with existing tools, how to integrate it with incentives if you’re a sales leader, or how to deal with the impact on your team in different ways.
If you look at the way enterprises are adopting AI, there are really three ways. The first is in their own software or supply chain, or the outsourcing of labor in some sense. You’re not building something yourself; you’re just integrating it into existing tools and augmenting your team.
The second way people are adopting AI is through internal tooling. They’re building a specialized underwriting system for financial risk if you’re a bank, for example, or compliance processing for certain types of documents if you’re a pharma company.
And then the third way that people are incorporating AI is into their own products they sell to customers.
Lareina Yee: But if you’re looking into customer care software for the first time, I have heard a lot of people say, “How do I choose?” There are companies that are two years old, and they bring a first-principles, beginner mindset. There are also constant releases at the foundation model, the LLM [large language model], layer. How does someone make sense of that if they’re not as close to the technology as you are?
Elad Gil: The guidance I give to start-ups—which could be applied to enterprises assessing a start-up and whether to use them—is whether an improvement in the foundation model makes that product better or worse. If the product is just taking and reformatting data from an LLM in different ways, it’s not going to be that useful for you, because eventually, to your point, the model will get there. If it’s more than just a wrapper around an LLM, it provides you with deeper functionality. It provides you with a workflow. It hooks into and integrates with all your tooling. That’s when it’s unique.
Lareina Yee: What are examples of applications that are typical things a bank or retail company would want to accomplish?
Elad Gil: One example would be compliance workflows, which pop up in all sorts of different types of businesses. Anytime you’re doing a lot of repetitive tasks with documents, you’re effectively doing something that a language model can do very well.
There are different types of permitting for governments. For example, we’ve been involved with a project to help with construction permits, where you can upload all the detailed documents that are needed to get a permit approved. And instead of taking three months to do it, it’ll just do it in a couple of hours and crunch it, and then a human will review it and approve it or not.
Lareina Yee: You’re not automating the tasks, if I understand correctly. You’re looking at the compliance workflow from start to finish and redesigning how that works with new technology.
Elad Gil: We’ve tried to be cautious about not overdoing the redesign part of it, because we want to map into what people do normally. The biggest issue with adopting AI, besides the implementation of the technology, is dealing with the internal processes that already exist in the organization.
You can choose to blow up those processes, but then you won’t make very much progress. Or you could say, “How do we adapt to it so that we make it dramatically easier, better, and faster for the people already doing this—or automate all sorts of things for them so they become the final arbiter of it but they can just get through a lot more?”
If you have a two-year backlog on permitting, wouldn’t it be great if you just moved to real time? As a government, you’d get better revenue because people may be willing to pay more for a permitting process that’s expedited. As a construction firm, maybe you’re actually the one paying for it. We think it’s alignment across different stakeholders in that type of example.
How informative is your data?
Lareina Yee: I get this question about data a lot: “How valuable is my enterprise data versus the data across the entirety of the internet?”
Elad Gil: I think that data is both dramatically underrated and overrated. Data as some useful input to do something better for your business is incredibly valuable. Data as a core differentiator of your business is rarer. And data as a primary competitive advantage [applies to] a very, very small number of companies.
Lareina Yee: So if you’re in the software world, how do you differentiate?
Elad Gil: I think it comes back to how hard it is to reproduce that data set. There are two types of data. One is when we have a giant table of customer data that helps with customer insights, helps us adjust products for them, and helps us serve them better.
There’s a second type of data. It’s proprietary data sets plus a brand behind it that people view as creating some uniqueness around the data. And then the question is, “How much would it cost you and how long would it take to reproduce that data set?” If you sent runners down to every single city hall to collect X, Y, Z information, how much would it cost you to get that back into a comprehensive data set?
In some cases, relative to venture scale, it’s actually not that much money. It’s in the tens of millions of dollars. Maybe it’s $100 [million], $200 million. That’s very doable in today’s venture capital world. It may take two or three years to do it properly. There may be quality issues. But it’s still doable. I’m surprised by how few people are following that as a strategy.
How AI is benefiting companies
Lareina Yee: We recently came out with a survey of the kinds of AI benefits small, medium, and large enterprises are seeing. The top was innovation, customer experience was the second, third was employee experience, fourth was time to market, then cost, and then also competitive differentiation. In your experience, what kinds of benefits are you seeing?
Elad Gil: We’re seeing benefits across the board. People talk about AI as a cost-cutting thing, but we think that’s just a minor part of it. It’s important, but we see a lot of revenue growth through the adoption of AI and better practices around that—or just getting to a customer faster. So if you could do something faster, cheaper, with higher quality, in a way that’s very positive for your end customer or user, and that makes your employee do less of the things they view as drudgery, it’s a win for everyone.
Anytime you’re early in a technology adoption cycle, there’s an enormous amount of low-hanging fruit. So just go do the easy stuff that’s valuable. This is where a lot of start-up founders get confused. Or sometimes smart engineers in big organizations will say, “I need to go do the really hard thing.” And you’re, like, “Don’t go do the hard thing. Do the easy thing that’s valuable, because you can still do that right now, since it’s wide open. It’s a new market.”
Workforce transformation
Lareina Yee: Let’s talk about how to deploy AI at the enterprise level.
Elad Gil: More broadly, there is an interesting skill gap for AI. This happened before with mobile and the internet. Even within technology companies, there’s this viewpoint that you want more AI-native people, and that people who grew up without AI are going to be less savvy in terms of how to adopt it, or they may not be as smart about how to iterate quickly with it.
I’ve had a few different CEOs tell me their VP [vice president] of product is no longer scaling relative to the AI era, because the young product managers are showing up and they’ve vibe coded a UI [user interface]. They’ve already gone to an application, written out what they want, and it’ll generate it for them. And they’ll build a workflow with it, and you can interact with it. It doesn’t quite work. It doesn’t have a real backend to it. But it gives you a sense of what the product would be.
While the VP of product wants to do the, “Let’s write a PRD [product requirements document] for a month. Let’s do mock-ups for one or two months after that, and let’s sit down and review it,” the young people are just showing up with the thing built in an afternoon and saying, “This is how it feels.”
There is a lot of thinking about talent in this context. I see this even with technology companies. I remember talking to a five-year-old machine learning healthcare start-up. The CEO said their product wasn’t modern, because the right team wasn’t in place to adopt the AI properly.
Lareina Yee: But if you’re a 40-year-old or a 100-year-old company, it can be intimidating. Are there some practical things you would suggest?
Elad Gil: This is why it takes a decade sometimes. When I see these reports saying, “Oh, AI hasn’t had an impact”: (a) that’s not true, because you see the revenue of all sorts of companies adopting AI, and (b) there just hasn’t been that much time.
What a lot of companies do is, number one, they’ll figure out what areas they can outsource to an AI vendor.
Two is that sometimes people will do an internal hackathon or something else to get everyone using it. And it’s under the full realization that these things may not turn into real products or real tools internally.
Related to that, some people will buy an enterprise license for ChatGPT or Perplexity or some other tool for all their employees. I know one company that, for their engineering team, will measure how much of the code that’s written by the engineer is being written on a modern AI-enabled IDE [integrated development environment]. They’ll measure what other tooling they’re using. They’re real sticklers for pinging engineers and saying, “Why aren’t you using these products?”
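The engineering metric Gil mentions, the share of code written in an AI-enabled IDE, reduces to a simple aggregation. A sketch under an assumed data shape (commits tagged with the editor that produced them; the tags and names are hypothetical):

```python
from collections import Counter

# Hypothetical commit log: (author, lines_changed, editor_tag).
commits = [
    ("ana", 120, "ai_ide"),
    ("ana", 40, "classic_ide"),
    ("ben", 200, "ai_ide"),
    ("ben", 300, "classic_ide"),
]

def ai_ide_share(commits):
    """Fraction of changed lines written in an AI-enabled IDE, per author."""
    totals, ai = Counter(), Counter()
    for author, lines, editor in commits:
        totals[author] += lines
        if editor == "ai_ide":
            ai[author] += lines
    return {author: ai[author] / totals[author] for author in totals}

print(ai_ide_share(commits))
```

A leader can then do exactly what Gil describes: ping the engineers whose share stays low and ask why they aren’t using the tools.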
Business models and AI-driven rollups
Lareina Yee: Moving to models, what do you think of the various neolabs?
Elad Gil: I think there’s a lot of differentiation in terms of what the neolabs are doing. Thinking Machines, for example, is helping to create models that work much better for very specific use cases. They take existing models and make them fine-tuned or work much better for your specific need, and hopefully that means it’s smaller, it’s faster, it’s cheaper to run, et cetera—and more accurate in some cases.
Reflection is talking about really becoming an open-source, US-based foundation lab model. They’re going a bit more directly against these big labs but saying they’re going to be the open-source provider.
The remarkable thing that’s happened over the last year or so is that the open-source market has really flipped to Chinese models. People may have heard of DeepSeek or Qwen or others. But that’s where the world is headed. These models are also massively backed by commercial funding and the Chinese government. It looks like China is making ownership of the open-source world for AI an imperative. There’s a lot of discussion about how important it is for the US to maintain its lead in that area.
Lareina Yee: Looking ahead over the next, say, three years, what are some of the benefits from more competition and more mixed approaches to solving different things that you foresee?
Elad Gil: I think it only benefits the end users. Competition is very good for users and very bad for companies, in some sense. Obviously, competition can help make companies better.
But in general, oligopoly structures are better for businesses if you just look at the microeconomics of it. I always thought the foundation lab market would end up being an oligopoly market.
I wrote a blog post about this about three years ago because it was clear that if you believed that scale was important, then it became a capital game. To have the biggest models or the most performant models with the most training data, you need the most GPUs [graphics processing units]. You need the biggest data sets. You need a lot of researchers. So that means you need a lot of capital. If you just extrapolate out along that curve, you’re, like, “OK, eventually these models will go from tens of millions to hundreds of millions to billions to tens of billions of dollars,” at which point only a very small number of parties can afford to keep going. So it has to collapse into an oligopoly market in the short to medium term. This was from a couple of years ago, which I think has largely proved to be correct.
Lareina Yee: Do you see the emergence of new business models, or is it still too early?
Elad Gil: One of the biggest underdiscussed transitions is in software. Traditionally, you buy seats or, in some cases, pay based on usage. What’s changing is that what’s being sold are units of cognition, or labor equivalents. Instead of buying customer success seats, you’re buying customer support queries that are answered. You’re buying outcomes that reproduce parts of what reps do or make them dramatically more efficient. That’s why I’m very bullish on AI-driven rollups—acquiring traditionally people-intensive services businesses and using AI to radically change workflows. That shift can dramatically increase margins and turn services businesses into software margin businesses.
At the core, AI-driven rollups are about productivity. Many of these workflows—sometimes called “email jobs”—involve back-and-forth communication, document processing, data entry, and report generation. Those are well suited to this new type of AI.
If you own the business and understand people management and workflows, you can redesign the organization around AI, increase leverage per person, and significantly improve margins—moving from a low gross-margin business to a software margin business through automation.
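The margin shift Gil describes can be made concrete with simple gross-margin arithmetic. A sketch with assumed numbers, not figures from the episode:

```python
def gross_margin(revenue: float, cost_of_delivery: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_delivery) / revenue

# A people-intensive services business: most revenue goes to labor.
before = gross_margin(revenue=10_000_000, cost_of_delivery=7_000_000)

# After automating the "email job" workflows, suppose (illustratively) each
# person handles 3x the volume, cutting labor cost to a third, plus some
# added compute spend for the AI tooling.
after = gross_margin(revenue=10_000_000,
                     cost_of_delivery=7_000_000 / 3 + 400_000)

print(f"before: {before:.0%}, after: {after:.0%}")
```

Under these assumptions the business moves from roughly a 30 percent to a roughly 73 percent gross margin, which is the services-to-software-margin transition the rollup thesis depends on.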
Lareina Yee: When companies think about doing an AI rollup or think about this within their own portfolio, what are the markers of functions or areas that are better suited to today’s technology?
Elad Gil: Today’s technology can address a lot of things. Often, the question is more about where you can rework an organization, what tooling it has, and how stuck people are in their ways. A lot of this is change management. AI can do a lot today, which is why the rollup works, because the software is adaptable more broadly.
It’s really hard to get people to adopt it. So the question is: “How do you cram a decade of change management and slow adoption into six months?” That’s really what AI rollups are doing. Often, the challenge is more of a people challenge than a technology challenge.
Lareina Yee: Let’s talk more about the people challenge. How are you seeing jobs change? Are they being replaced?
Elad Gil: There are a lot of different aspects to this. If you go back 2,000 years to Roman times, luxury goods looked very different. Having a sauna or a bath in your house was a luxury. Now everyone turns on a faucet and has hot water.
To some extent, technology takes luxury goods and makes them cheap and available to everyone. A good example is legal services. When I started my first company, I sent my legal contracts to a friend’s cousin to mark them up as a favor because I couldn’t afford a real lawyer. Imagine if AI did that for you—suddenly you’d have the equivalent of the world’s best lawyers available to every small and medium business.
A lot of this is about broadening the market potential and impact of things that only a small number of people can afford today.
Lareina Yee: That sounds amazing. But what about the luxury good itself? What about all the bright young minds studying for their final exams in law school?
Elad Gil: I honestly don’t know. I don’t understand all the intricacies across different legal categories. You can create a giant matrix—different customers, small businesses versus enterprises, different verticals, and different types of law: employment, corporate, M&A. For each of those cross-sections, you’ll end up with different outcomes. In some cases, it may increase the number of people needed because you still need humans reviewing things and making judgment calls.
AI today is like a really eager intern—sometimes it knocks it out of the park, and sometimes the work is bad. Over time, it’ll get better, like a first-year graduate, and then stronger still.
There’s still a lot of need for human review and reinforcement of outcomes. In some areas, that need will grow; in others, it may shrink. It will also change the nature of firms. If AI reduces the number of associates needed or dramatically augments them, firms can either grow their book of business or shrink teams. But associates are future partners. Shrinking the bench raises questions.
Lareina Yee: Apprenticeship matters.
Elad Gil: You want a deep enough bench so the best people rise to the top. If you shrink it, maybe that changes things—maybe it doesn’t. I don’t know. These are real questions.
Lareina Yee: We see different signals in the data as well. They’re not deterministic. If you think about the average person, what would you suggest they do to think proactively about their own learning and their ability to manage these stormy waters?
Elad Gil: It’s very hard to say, in a generic way, what every person should do. The reality is that adopting some of these tools early, making use of them, and understanding them can give people leverage over their own time and careers.
It also helps people see where the blind spots are—where they can add value as a person. Trying these tools out and using them day-to-day is very useful for informing how you think about your own path.
Lareina Yee: How are you using AI personally in your day-to-day?
Elad Gil: I use it for coding and research. Sometimes I’ll ask it for another perspective. For example, if I’m thinking through a managerial interaction, I’ll say, “Tell me five interpretations of what this person could mean.” It helps me think about business problems, managerial issues, and other situations where I might not have considered every angle.
Lareina Yee: Looking ahead, what are one or two AI developments you’re personally watching over the next year where we might see a significant leap in capability?
Elad Gil: I’m really interested in other types of models and applications for AI. That includes physics and simulation for large industrial companies and material science. One big theme for me is foundation models outside of language.
A second theme is self-driving work. After 15 years of talking about it, autonomous vehicles are happening.
We’ll also continue to see broader adoption of language models across more use cases, verticals, and applications.
In defense, machine learning and AI are incredibly important, and that’s accelerating. The US and its adversaries are moving toward more drone-based warfare, which is a major global issue.
We’re also seeing people use models for important health-related information—patients uploading data about themselves, concerns they have, and questions about options. These models are increasingly being trained with reinforcement learning to get very good at medical data and medical information.
We’ll see more of this wave in healthcare. I’m especially excited about global health equity—the ability for anyone in the world to access some of the world’s most important medical information through these models.


