Development teams and FinOps aren’t always on the same page, and lately developers have been feeling the effects of not having proper visibility into their cloud spend.

In a recent episode of our podcast, we interviewed Martin Reynolds, field CTO at Harness, about the company’s recent FinOps in Focus 2025 report, which explored the ways in which development teams and FinOps have been misaligned.

Here is an edited and abridged version of that conversation: 

One interesting thing in the report is that 55% of developers said their cloud purchasing commitments are based on guesswork. So what is holding them back from having the proper information to be able to make more informed decisions?

That’s actually a really interesting question, and a lot of it is really around when they have visibility of that data. A lot of the data around how much something costs when it’s running in production and customers are using it comes after the fact, and it’s difficult for them to understand those costs, because they don’t see the costs all the way through the life cycle, or what impact the software they’re releasing has.

So when they’re guessing, they’re literally saying, “I think it’s going to use this much,” because they just don’t know, and they don’t have the raw data to back it up upfront, because cost isn’t in the process from day one, from design forwards.

Similarly, another finding was that less than half of developers have data on their idle cloud resources, their unused resources, or their over- or under-provisioned workloads. So is that a similar reason why they don’t have that data too?

Yeah, it’s visibility, and idle resources especially are one of those things that are sometimes hard to spot as a human. Just because something is idle now, you don’t know if it’s idle all the time. Computers in general, but AI especially, are great at that kind of thing, of saying, “I can see that nobody’s used this for two weeks. You should really be turning it off.”

And sometimes it’s hard to gather that kind of hard information, or they just don’t see it. There’s no notification coming into their work stack that says, “hey, you’ve got these idle resources,” or, even better, just automatically turns them off.
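
To make that concrete, here is a minimal sketch of what an automated idle-resource check could look like, assuming AWS EC2 and CloudWatch via boto3. The two-week window, the CPU threshold, and the tag convention are all illustrative assumptions, not anything Harness or the report prescribes.

```python
import datetime
import boto3  # assumes AWS credentials are already configured in the environment

IDLE_DAYS = 14          # "nobody's used this for two weeks"
CPU_IDLE_THRESHOLD = 2  # average CPU % below which an instance is treated as idle

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

now = datetime.datetime.utcnow()
start = now - datetime.timedelta(days=IDLE_DAYS)

# Only look at running, non-production instances (the tag convention is hypothetical).
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:environment", "Values": ["dev", "test"]},
             {"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        # Average CPU over the whole window, one datapoint per day.
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=now,
            Period=86400,
            Statistics=["Average"],
        )["Datapoints"]
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints) if datapoints else 0.0
        if avg_cpu < CPU_IDLE_THRESHOLD:
            print(f"{instance_id}: idle for ~{IDLE_DAYS} days (avg CPU {avg_cpu:.1f}%), stopping")
            ec2.stop_instances(InstanceIds=[instance_id])
```

In practice a FinOps tool would weigh more signals than CPU alone, such as network traffic or request counts, but the shape of the automation is the same: detect sustained idleness, then notify or act.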

In an ideal world, what would it look like for development and FinOps teams to be perfectly aligned?

I think there’s a couple of things, and I feel like I have a little bit of an advantage here, because part of my responsibilities in a previous role was running the cloud cost function across engineering teams and helping them have that visibility. Really it is about having shared outcomes. Businesses want to be profitable. The report mentions that our CFO, John Bonney, talks about how cloud spend is quite often the second biggest line item of spend for a company, after salaries.

It’s about having that kind of overall vision of how cloud costs should be managed and having it shared, not just with the FinOps teams who are trying to get the right balance of cost and performance of the application, but also making sure the engineering teams understand what that balance is.

Where I’ve seen this work is where they get that visibility all the way to the left. So engineers understand what their software is costing them in development, what it’s costing them in testing, and what it costs them when it moves to production. They have that visibility, but they also understand what the goals of the business are in terms of managing that cost, and that helps align their incentives.

One of the things I’ve seen work really well, for example, is going to the product teams, the product managers, and saying, “Hey, this is how much revenue your product is bringing in, and your cloud cost can’t be more than this percentage of that revenue.” And that then feeds into an alignment of, “okay, if we add this new thing, how much is it going to cost? And how are we going to balance that against what this product makes?”

The engineers are aware of what the overall goal is and what cost envelope they have for what they’re building, and they can design with cost in mind. That doesn’t mean inhibiting things based on the cost. It just means balancing those two things out: we’re going to bring in more revenue, but we’re also going to do this in an efficient way, so that we’re not wasting money on cloud spend.
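
As a rough illustration of that kind of revenue-based guardrail, the check itself is simple arithmetic; the 15% ratio and the dollar figures below are invented for the example.

```python
# Hypothetical guardrail: a product's cloud cost can't exceed a fixed share of its revenue.
MAX_CLOUD_COST_RATIO = 0.15  # e.g. 15% of revenue, set per product (illustrative)

def within_budget(monthly_revenue: float, current_cloud_cost: float,
                  proposed_feature_cost: float = 0.0) -> bool:
    """True if current spend plus a proposed new feature stays inside the budget."""
    budget = monthly_revenue * MAX_CLOUD_COST_RATIO
    return (current_cloud_cost + proposed_feature_cost) <= budget

# $500k/month revenue gives a $75k cloud budget at 15%.
print(within_budget(500_000, 50_000, 20_000))  # True: the new feature fits
print(within_budget(500_000, 50_000, 40_000))  # False: time to rebalance cost against revenue
```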

How can implementing more automation help address some of these issues?

That is actually one of my favorite topics, mostly because, when I was doing this myself, automating idle resources and shutting down test environments automatically really helped drive costs down and deliver savings.

And I can give you a specific example. We set up some rules around, you know, if things were idle, they would turn off, and then they would turn back on automatically. It’s a bit like the stop-start in your car: if you still have a petrol car, you stop at the lights and the engine shuts off; you push the gas pedal and it turns back on. That’s how you want your cloud resources to work, especially in those non-customer-facing environments. We had some teams that were saying, “no, no, no, these environments are used all the time.” And then we’d show them the data and say, “well, actually, it’s just used every two weeks when you do your testing.” So turning off a bunch of servers and networking and ingress and all the things that go with it can have a huge impact on the overall cost.
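
A minimal sketch of that “stop-start” rule for non-production environments might look like the following, again assuming AWS via boto3; the working-hours window and the environment=test tag are hypothetical conventions, not a feature of any particular product.

```python
import datetime
import boto3  # assumes AWS credentials are already configured in the environment

# Hypothetical "stop-start" rule: test environments run only during working hours.
WORK_START, WORK_END = 8, 19  # 08:00-19:00 UTC, purely illustrative

def stop_start_test_environments() -> None:
    ec2 = boto3.client("ec2")
    in_hours = WORK_START <= datetime.datetime.utcnow().hour < WORK_END
    # If we're in working hours, start anything stopped; otherwise stop anything running.
    wanted_state = "stopped" if in_hours else "running"
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:environment", "Values": ["test"]},
                 {"Name": "instance-state-name", "Values": [wanted_state]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not ids:
        return
    if in_hours:
        ec2.start_instances(InstanceIds=ids)   # "push the gas pedal"
    else:
        ec2.stop_instances(InstanceIds=ids)    # "stop at the lights"

stop_start_test_environments()  # run this on a schedule, e.g. every 15 minutes
```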

Is AI making the problem worse? As development teams start experimenting with it, they’re having to spin up more infrastructure and pay for tokens and things like that, maybe without having insight into the overall cost they’re racking up. So how does that factor into this spending disconnect?

It’s like another dimension on top of what’s already there. But you’re right, it can be disconnected, especially when it’s credits versus what’s actually going on under the covers, and whether they’re buying it from a third party or provisioning it on their own cloud infrastructure. I think, again, being able to break out what that costs against the overall cost they’re spending, so they can see how it all fits together, is really key.

There has to be a value conversation. Teams love to try new things. Engineers love to innovate. They want to try all these new things, but there has to be a balance between delivering value, ultimately, to the customer and doing it in a way that is cost efficient. So I think having that visibility up front, and seeing what it’s costing even while they’re testing and playing with it and learning that technology, will help them understand the implications of what it will cost when they roll it out at scale.

We’ve got 20 people in a team using this right now. What’s that going to be like when we have 20,000 people using it constantly? What does that cost look like? And is what we’re going to charge for it actually going to bring that money back in?
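
The back-of-the-envelope math behind that question is simple; every figure below is invented purely to show the shape of it, and a straight linear scale-up is itself a simplification.

```python
# Back-of-the-envelope projection; every number here is invented for illustration.
pilot_users = 20
pilot_monthly_cost = 1_200.0                      # observed infra + token spend for the pilot team
cost_per_user = pilot_monthly_cost / pilot_users  # $60 per user per month

target_users = 20_000
projected_cost = cost_per_user * target_users     # naive linear scale-up: $1.2M per month
planned_price_per_user = 70.0                     # what we intend to charge per user
projected_revenue = planned_price_per_user * target_users

print(f"Projected monthly cost at scale:    ${projected_cost:,.0f}")
print(f"Projected monthly revenue at scale: ${projected_revenue:,.0f}")
print("Charging enough to bring that money back in?", projected_revenue >= projected_cost)
```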