We’ve been here before. This time, it’s AI.
AI is everywhere. But the value? That's still far from clear.
The conversations at recent FinOps events echo a familiar pattern. Just like the early days of cloud and Kubernetes, organizations are deep into experimentation—but struggling to quantify what AI is actually delivering.
The challenge isn’t just financial—it’s conceptual.
- Are we using the right models?
- Are we sending too many tokens?
- Are we making the most efficient use of our capital?
From an engineering lens, it looks like another optimization problem. From a finance lens, it’s a question of unit economics. And in most cases, those two views haven’t aligned yet.
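To ground that finance lens, here is a minimal back-of-the-envelope sketch of the unit-economics view: blended token spend divided by a business outcome. The per-token prices, call counts, and "tickets resolved" outcome below are illustrative assumptions, not real vendor rates or a prescribed metric.

```python
# Illustrative unit-economics sketch: cost per business outcome from token usage.
# All prices and counts are made-up assumptions for the example.

INPUT_PRICE_PER_1K = 0.003   # assumed $ per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.015  # assumed $ per 1K output tokens

def token_cost(input_tokens: int, output_tokens: int) -> float:
    """Blended cost of a single model call, in dollars."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A period's worth of (input_tokens, output_tokens) per call,
# plus the business outcome those calls supported.
calls = [(1200, 350), (900, 500), (2400, 700)]
tickets_resolved = 2  # hypothetical outcome for the same period

spend = sum(token_cost(i, o) for i, o in calls)
cost_per_ticket = spend / tickets_resolved

print(f"AI spend this period: ${spend:.4f}")
print(f"Cost per ticket resolved: ${cost_per_ticket:.4f}")
```

The specific outcome metric will differ by team; the point is that the engineering question (tokens, models) and the finance question (cost per outcome) meet in a calculation like this one.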
As one speaker put it: “Eventually, someone’s going to look at that OPEX line item and ask: what are we really getting here?”
Sound familiar?
It’s the same cycle we’ve seen before:
- A new technology emerges (cloud, containers, now AI).
- Teams adopt quickly to gain an edge.
- Costs climb—and visibility doesn’t keep up.
- Everyone scrambles to connect spend to value.
So how do we avoid repeating the same mistakes?
A few ideas emerged from the conversations:
- Build AI telemetry into your FinOps tooling early—don’t wait until usage scales (a minimal sketch follows this list).
- Treat productivity gains as a trackable KPI, not just a narrative.
- Ask practical questions: Is this model helping us ship faster? Is it improving output quality?
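What "AI telemetry" might look like in practice: a thin wrapper that records every model call's token usage alongside cost-allocation tags, so spend can later be attributed to teams and features. This is a minimal sketch under assumed field names and tags; the record schema and the emit() destination are illustrative, not a standard.

```python
# Minimal AI telemetry sketch: tag every model call with who made it and why,
# so token spend can later be attributed to teams and features.
# The record fields, tag values, and emit() destination are assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AICallRecord:
    timestamp: float
    team: str            # cost-allocation tag, e.g. "support-platform"
    feature: str         # what the call was for, e.g. "ticket-summarization"
    model: str
    input_tokens: int
    output_tokens: int

def emit(record: AICallRecord) -> None:
    """Ship the record to whatever your FinOps tooling ingests (stdout here)."""
    print(json.dumps(asdict(record)))

def record_call(team: str, feature: str, model: str,
                input_tokens: int, output_tokens: int) -> None:
    emit(AICallRecord(time.time(), team, feature, model,
                      input_tokens, output_tokens))

# Example: log a hypothetical call's usage as reported by the model API.
record_call("support-platform", "ticket-summarization",
            "example-model", input_tokens=1200, output_tokens=350)
```

Capturing this from day one makes the later questions—whose spend is this, and what did it buy—answerable with data instead of anecdotes.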
Because without those answers, you’re not running AI—you’re funding it.
Hear more in the video clip above.


