As more investment dollars seek great opportunities, every competitive edge matters. Gen AI presents a seemingly powerful option: the ability to leverage large language models (LLMs) to quickly analyze investment ideas with previously unimaginable speed. Already, some 67 percent of investors believe gen AI will have a “transformational” impact on their business in five years’ time, and 82 percent view its use as a high priority.1
LLM-derived investment analysis is indisputably rapid, and the landscape of companies offering products for investment firms and dealmakers has never been more diverse or fast-moving. But intelligent investors need to harness that speed through a first-class process, one that serves the firm’s culture by quickly surfacing risks and reallocating resources toward the very best opportunities. As one investment firm leader told us: “I’m paid to be right. Not to be fast.”
Recognizing the limitations of gen AI
Investment decisions demand precision. One challenge that investment teams face with the largest enterprise LLMs is that when these models draw from a wide data pool (such as the public internet), they are subject to biases, unintended divergences, and contradictions. They may present answers as authoritative while offering the kind of false precision that investors are trained to filter out. That can be a serious problem: overly optimistic or simplistic early reads, for instance, can lead teams to spend time on deals that would otherwise have been a quick “pencils down.”
A better approach, based on our recent analysis, is for investment teams to use proprietary research data to compare and contrast the answers provided by the largest enterprise LLMs. For example, notes from an investment team’s meetings with management teams or financial advisers can be valuable supplementary sources of detail on the development of various companies, industry unit economics, or the potential of new products. Indeed, when we examined expert interviews alongside the output of large enterprise LLMs, conducting more than 100 separate structured queries on given topics, products, or companies, we found the following:
‘Happy talk’ bias
In seven out of ten industries analyzed, gen AI deep-research reports portrayed a much more optimistic outlook, or “happy talk,” than reports based on expert interviews. The expert-based reports were more likely to ground their analysis in cautious realism, reflecting both market potential and on-the-ground challenges. For instance, the penetration of a given product might be presented as universally successful by a large enterprise LLM. By contrast, industry participants (experts) may add nuance, indicating strong performance among enterprise buyers but weaker demand among small and medium-size businesses.
Divergences and contradictions
We identified insights in LLM-generated reports that diverged from what industry experts shared. These divergences were often not minor: they spanned core metrics such as market size, growth rates, pricing dynamics, and margin structures, highlighting the risk of relying on unvalidated public data. Such misalignments warrant further work by investment teams, or a decision to move on.
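A divergence check of this kind can be mechanized. The sketch below, which is purely illustrative (the metric names, figures, and 25 percent tolerance threshold are hypothetical assumptions, not drawn from our analysis), flags core metrics where an LLM-derived estimate and an expert-derived estimate disagree materially, and notes metrics the LLM omitted entirely:

```python
# Illustrative sketch: compare LLM-derived and expert-derived estimates
# of core deal metrics and flag material disagreements.
# All figures and the tolerance threshold are hypothetical.

def flag_divergences(llm_estimates, expert_estimates, tolerance=0.25):
    """Return metrics where the two sources disagree by more than
    `tolerance`, expressed as a fraction of the expert estimate."""
    flags = {}
    for metric, expert_value in expert_estimates.items():
        llm_value = llm_estimates.get(metric)
        if llm_value is None:
            flags[metric] = "missing from LLM answer"  # a key omission
            continue
        relative_gap = abs(llm_value - expert_value) / abs(expert_value)
        if relative_gap > tolerance:
            flags[metric] = f"diverges by {relative_gap:.0%}"
    return flags

# Hypothetical inputs for a single deal screen
llm = {"market_size_usd_bn": 12.0, "growth_rate": 0.09, "gross_margin": 0.55}
expert = {"market_size_usd_bn": 8.0, "growth_rate": 0.04,
          "gross_margin": 0.52, "inventory_turns": 4.0}

flags = flag_divergences(llm, expert)
print(flags)
```

In this hypothetical screen, market size and growth rate would be flagged for follow-up, gross margin would pass, and inventory turns would surface as an omission, mirroring the triage logic described above.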
Key omissions
About 40 percent of important data points uncovered in expert interviews were absent from the corresponding LLM answers on the same topics or questions and could not be surfaced with additional user prompting. These missing insights included data that are often critical in deals but invisible to the corpora accessible to public LLMs, from conventional contract structures in an industry to unit economics, channel breakdowns, and regulatory hurdles. When we examined the US market for baby and kids’ apparel, for instance, the report based on expert-interview insights painted a picture of a resilient, incrementally expanding sector. It characterized the market as “stable and growing” in the low single digits annually, with some experts citing incremental postpandemic tailwinds. Spending per child was said to be rising faster than inflation, and nascent online penetration signaled room for future growth.
Yet the gen AI report confidently asserted that online penetration was far higher, at about half of total sales. It further suggested average annual spending per child of just a few hundred dollars, compared with experts’ assessment of more than $1,000. The LLM-sourced report also stated that the market had shrunk from 2010 to 2020 and risked stagnating or declining. Finally, it was missing details needed to make informed investment decisions, such as gross margin benchmarks, sales by age group, the economics of boutique retailers, and norms for inventory turns. Much of this information was included in the expert-interview report.
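The omission rate itself can be estimated with a simple coverage check: take the list of data points experts surfaced and measure how many appear nowhere in the LLM’s answer. The sketch below is a minimal illustration of that idea; the data-point labels and sample answer text are hypothetical, and a real screen would use fuzzier matching than literal substrings:

```python
# Illustrative sketch of the omission check: what share of expert-sourced
# data points appear nowhere in the LLM answer? Labels and sample text
# are hypothetical; substring matching is a deliberate simplification.

def omission_rate(expert_points, llm_answer_text):
    """Return (fraction missing, list of missing data-point labels)."""
    text = llm_answer_text.lower()
    missing = [p for p in expert_points if p.lower() not in text]
    return len(missing) / len(expert_points), missing

expert_points = [
    "gross margin", "inventory turns", "contract structure",
    "channel breakdown", "regulatory hurdles",
]
llm_answer = (
    "The market is growing steadily; gross margin is healthy and the "
    "channel breakdown favors online retail."
)

rate, missing = omission_rate(expert_points, llm_answer)
print(f"{rate:.0%} of expert data points missing: {missing}")
```

A rising omission rate on a given topic is a signal to lean harder on proprietary sources before drawing conclusions from the LLM output alone.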
Adopting a balanced approach
In none of these three categories (happy-talk bias, divergences and contradictions, and key omissions) are we contending that a user’s proprietary data is universally correct and a public LLM’s answer wrong. In fact, often neither source on its own yields a full and exhaustive answer for an investment team. We believe these misalignments are something investment teams should actively look for and use as a tool for allocating time and resources in the research and underwriting process. Through that process, teams can form their own view of the most material open questions that will determine the success of an opportunity.
An approach in which investment teams complement LLM products with proprietary data provides more balance. Investment teams should consider using industry-specific, expert-driven insights to gain a realistic understanding of operational realities and market-specific risks. Furthermore, investment teams should adopt a rigorous culture and set of processes for checking all sources of information. Gen AI may accelerate the discovery of information, but it will not safeguard quality without user guidance.
This approach can yield analytical thoroughness, allowing stakeholders to confidently identify genuine growth opportunities and anticipate potential operational pitfalls. It also instills investment rigor by supporting teams’ due diligence with comprehensive, industry-specific data that can directly affect valuation accuracy and risk assessment.
When stakeholders have a large amount of granular, traceable data at their fingertips, they are able to make better decisions about how to allocate capital and time throughout the investment process. Indeed, optimizing investment teams’ information intake is becoming a critical part of managing a strong investment research process.
One leader at a North American investment fund used a nutrition analogy to describe changing dynamics in the investment process. “[In the past] we were always hunting and gathering for those few pieces of information with which we could gain conviction for our deals,” he said. “Now, [with gen AI] we have to put ourselves on a deliberate diet. We have to be careful what information we consume and allow into our investment process and not to overdo it on the junk.”