Don’t Let AI Eat Your Lunch
- Duncan Welling
- Jan 23
- 2 min read

AI Platforms Don’t Just Enable Value - They Claim It
AI is often sold to portfolio companies as leverage: faster decisions, better insight, lower cost. And initially, that’s often true.
But over time, many AI implementations quietly shift from being enablers to becoming intermediaries.
The early phase looks attractive:
- rapid deployment
- minimal internal capability required
- impressive demos
- variable cost instead of fixed investment
For PE, this is seductive - speed to impact matters.
But then gravity sets in.
As the AI platform becomes embedded:
- business processes adapt around its logic
- learning loops happen inside the vendor's environment
- switching costs rise
- pricing power quietly shifts
At that point, the portfolio company may still “own the outcome”, but it no longer fully controls the capability.
This matters at exit.
Strategic buyers and sophisticated sponsors increasingly ask:
- Which AI capabilities are genuinely proprietary?
- What happens if this vendor relationship changes?
- Can we replicate this capability post-acquisition, and at what cost?
When the honest answer is “it’s complicated”, valuation risk follows.
There’s a familiar pattern here:
- The SaaS contract that looked like leverage starts to look like a toll booth
- AI capability becomes inseparable from vendor IP
- Margin improvement is partially offset by long-term licensing drag
- Buyers apply a discount for opacity and dependency
In extreme cases, AI platforms begin to resemble the very intermediaries they were meant to remove - standing between the company and its customers, data, or decisions.
From a PE value-creation lens, the critical question isn't "best-in-class or not?" It's: where does the learning accrue? If learning, fine-tuning, and behavioural insight accumulate outside the portfolio company, value leaks even as performance improves.
The most durable AI strategies in PE-backed environments follow a simple principle:
rent speed, but own the learning.
That usually means:
- internalising critical models over time
- separating decision logic from vendor tooling
- ensuring data and feedback loops remain portable
- designing AI as a capability, not a dependency
AI doesn’t kill value at exit. Unexamined dependency does.
And buyers are getting much better at spotting it.