This article was originally published on LinkedIn.
Earlier this week I wrote about how my work in the NetSuite and AI space has opened some unexpected doors. One of them has been the chance to work directly with private equity firms and PE-backed companies. Not just the portfolio companies, but the investment teams themselves.
One conversation in particular stood out to me. During an AI consulting session, one firm asked me something I wasn't expecting:
"When we're evaluating a new acquisition, how should we be thinking about technology and AI?"
It's a good question. So I started asking other PE firms the same thing: what are you asking about AI when you're evaluating a deal?
What I found was that some firms weren't asking anything about AI at all. Most of the firms that were asking stuck to surface-level questions, like "Do you have an AI strategy?" But that's not really diligence.
What follows is the advice that I've been giving private equity firms about AI and diligence. The questions that I think they should be asking today, and how they'll need to update them as the impact of AI continues to unfold.
Start With the "So What" Test
Before anything else, you need a filter. It's tempting to get excited when a management team talks about their AI initiatives. New tools, pilot programs, a dedicated task force. It all sounds good.
But performative AI adoption and meaningful AI adoption look exactly the same from the outside. The question that cuts through is simple: what specific, measurable outcome does this produce for revenue, cost, or quality?
No specific answer means no real outcome. That's the test.
Apply it to every initiative a company describes. If the answer is vague, the initiative is probably vague too.
Three Lenses. Every Deal.
Once you've cleared the "so what" filter, you need to figure out what role AI really plays in the business. In my experience, that role falls into one of three categories, and each has different implications for value creation and risk.
Lens 1: AI as Core Product
The product itself is powered by AI. Analytics, automation, generative features. AI isn't a feature here; it's the foundation. These are the cases where diligence needs to go deepest.
- Proprietary vs. commodity? A product built on thin wrappers around foundation-model API calls is replicable in months. A product with proprietary training data or fine-tuned models is not.
- Does the company own a data asset? Proprietary data competitors can't replicate is the real moat. The AI is just the mechanism.
- What happens to margins at scale? Inference costs are real. A business model that looks great at current revenue can compress badly as volume grows.
- How fast is the feature gap closing? AI capabilities commoditize quickly. If competitors are catching up, what sustains the premium?
Lens 2: AI as Operational Lever
The company uses AI internally to reduce costs, increase throughput, or improve quality. The product doesn't change, but the economics do. This is often where the most durable value creation opportunities are. But you have to verify that, not just believe it.
- Where specifically is it deployed? Finance, customer success, sales, operations? "We use AI across the company" is not a good answer.
- What's the measured gain? Ask for the headcount avoidance numbers. Ask for the before-and-after productivity data. If they don't have it, the adoption isn't mature.
- Is it company-wide or a few enthusiasts? One power user running AI-assisted workflows doesn't create enterprise value. Scaled adoption does.
- Is there executive ownership? A formal AI adoption roadmap with someone accountable for results is a positive signal. Ad hoc experimentation is not.
Lens 3: AI as Threat Vector
Here's the one that I think most teams underweight. Sometimes AI isn't an opportunity. It's a risk. Competitors are using it to undercut, disintermediate, or just do what this company does at a lower cost.
- Is the core value proposition automatable? Think honestly about this over a 3-to-5-year horizon. Some business models are genuinely exposed.
- Are competitors using AI to create pricing pressure? If so, it's already happening, and management may not be tracking it closely enough.
- Does management have a credible response? Awareness alone isn't enough. The question is whether they have a plan and the capability to execute it.
- Does the moat survive disruption? Relationships, proprietary data, and regulatory barriers tend to hold. Feature-based advantages tend not to.
In my AI-related work, I'm finding that most companies are some combination of all three. The challenge is to understand which lens is primary, and what that means for value creation.
The Questions to Ask in the Management Meeting
The management meeting is your highest-leverage moment in the diligence process. It's your one chance to get unscripted, real-time answers from the people who actually run the business. What you ask, and how carefully you listen, matters more than most teams realize.
Here are five questions I'd ask in those meetings.
- How is your team currently using AI tools, and are there formal policies governing usage and data security?
- Have you identified specific use cases where AI could measurably reduce headcount growth or improve output quality in the next 18 months?
- Where does your proprietary data live, and have you explored how it could be used as a training or personalization asset?
- Are any competitors using AI in a way that is creating pricing pressure or eroding your win rates?
- What percentage of your product roadmap over the next 18 months involves AI or ML capabilities?
Pay as much attention to the confidence and specificity of the answers as to the answers themselves. A founder who knows their AI strategy cold, who can tell you the exact workflow that went from 4 hours to 20 minutes and how they're tracking it, is a different kind of operator than one giving you a compelling narrative without numbers behind it.
The best management teams can describe AI adoption the same way they describe their financial performance: with specifics, with accountability, and with awareness of where they still have gaps.
A Framework for Maturity
Not every company you look at will be an AI leader, and that shouldn't immediately disqualify them. What matters is whether they're on a credible, logical path to AI maturity, and whether the gap between where they are and where they need to be is closable within your hold period.
Crawl: Awareness Without Deployment. Management knows AI is important, and may have piloted a tool or two. But they have no formal policy, no measured outcomes, and no designated ownership. This isn't a reason to pass on them, but it's a value creation opportunity you need to plan for, not assume away.
Walk: Targeted Adoption with Early Results. The company has specific use cases in production, ideally with some measurable outcomes. A roadmap exists, and it has executive sponsorship. Governance is forming. I've found that this is the most common state for well-run middle-market businesses. The open question is how quickly they can mature.
Run: Scaled, Measured, Governed. AI adoption is embedded in how the company operates. KPIs are well defined and tracked. Governance policies are documented. The team can articulate ROI by initiative. And, just as important, they're thinking not only about where AI creates opportunity but also where it creates risk.
I suspect most of the businesses you evaluate will fall somewhere between Crawl and Walk. The question is whether, given your hold period and your post-close value creation capacity, the gap is closable in an acceptable timeframe.
The Benchmark Is Moving
AI capability is moving fast. The questions that matter today during diligence will very likely be different in six months.
That's not a reason to skip the AI-related diligence. It's exactly the reason to start asking the right questions now.