How continuous learning, ‘small steps’, and architectural optimisation take the lead in FinOps transformation
By François Pasquet, Technical Account Manager, DoiT
When I started, FinOps was considered a secondary activity, something to be done when there was time. Now, with the growing adoption of the cloud and the emergence of multi-cloud solutions, FinOps practices are becoming more sophisticated, far better integrated, and better understood within companies. These companies are primarily looking to optimise their costs, often without knowing where to start. They want to spend less, have better governance, and gain better control over their expenses. These were the most recurring requests, and they became a top priority in 2024 for nearly all my clients (and, unsurprisingly, these concerns remain just as pressing for nearly all organisations in 2025!). Clients now tell me: “We want to focus on saving costs while maximising the performance and resilience of our infrastructures.”
Now, none of this comes without effort, because it involves continuous monitoring of expenses, the automation of cost management processes, and the use of advanced tools to analyse and forecast future spending. The rise of multi-cloud solutions adds yet another layer of complexity, requiring a more global and interoperable approach to manage costs across different cloud providers and, far more often than one might think, hybrid cloud/on-prem environments.
FinOps practices must also adapt to the diversity of services offered by the various cloud providers. Each provider has its own pricing models, its own cost and usage management and tracking tools, and its own best practices to instil across the organisation. Consequently, FinOps practitioners need to be able to navigate this complex environment and take advantage of the best offerings from each provider. This can include commitment-based discounts, optimising resources to the specific needs of applications, and implementing governance strategies to ensure efficient and cost-effective use of the cloud.
The adoption of multi-cloud solutions requires genuinely close collaboration between technical and financial teams. This is not always the case, and it is not always easy - not for lack of willingness, but often because teams do not speak the same language, do not base their analyses on the same data, and above all do not interpret that data in the same way. Technical teams need to be aware of the financial implications of their decisions (a single click can have so many consequences), while financial teams need to understand the technical aspects of cloud services in order to correctly evaluate costs and benefits. This collaboration can be facilitated by cost management tools that provide real-time visibility into expenses and enable informed decision-making.
The evolution of FinOps practices with the adoption of cloud and multi-cloud solutions also calls for continuous training. Cloud technologies evolve rapidly, and it is essential for FinOps teams to stay up to date with the latest innovations and best practices. This may include participating in training sessions, conferences, and FinOps communities of practice, as well as setting up mentoring and knowledge-sharing programmes - including within the Tech Rocks community we have in France.
On the measurement side, the specific KPIs I use include cost per unit of work (for example, cost per transaction, per user, or per million requests), because it is important to know how much a user or client “costs” - there is an underlying notion of resource usage rate. Other KPIs include the savings achieved through optimisations, as well as the return on investment (ROI) of FinOps initiatives. These indicators make it possible to measure not only direct costs but also the effectiveness and impact of optimisations on the organisation’s overall performance.
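To make the arithmetic behind these KPIs concrete, here is a minimal sketch using purely hypothetical figures (none of these numbers or names come from a real client):

```python
# Minimal sketch of the unit-cost and ROI KPIs described above.
# All figures are hypothetical placeholders, not real client data.

monthly_cloud_cost = 42_000.00        # total cloud spend for the month (EUR)
monthly_transactions = 12_500_000     # units of work served that month
monthly_active_users = 85_000         # active users that month

# Cost per unit of work: how much one transaction (or one user) "costs".
cost_per_million_requests = monthly_cloud_cost / (monthly_transactions / 1_000_000)
cost_per_user = monthly_cloud_cost / monthly_active_users

# ROI of a FinOps initiative: savings achieved versus the effort invested.
savings_from_optimisations = 6_300.00   # e.g. rightsizing + commitment discounts
finops_initiative_cost = 1_800.00       # tooling + engineering time spent

finops_roi = (savings_from_optimisations - finops_initiative_cost) / finops_initiative_cost

print(f"Cost per million requests: {cost_per_million_requests:.2f} EUR")
print(f"Cost per active user:      {cost_per_user:.4f} EUR")
print(f"FinOps initiative ROI:     {finops_roi:.0%}")
```

Tracking these ratios over time matters more than any single value: a rising cost per transaction while traffic stays flat is usually the first sign that an optimisation conversation is needed.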
For example, with one of my clients, we implemented dashboards to track costs by project and by application, allowing us to quickly identify areas where savings can be made. We also use monitoring and reporting tools to track resource usage in real time and adjust strategies accordingly. Furthermore, we regularly analyse the savings achieved through optimisations, enabling us to measure the impact of our FinOps initiatives and demonstrate their value to the organisation.
Now, this isn’t always easy, especially when the existing or legacy infrastructure does not initially offer effective tracking mechanisms - particularly when everything is mixed together and working out how much a client costs within the total spend becomes a long and complex task because of technical debt. Often, addressing this technical debt requires an investment that can seem daunting at first, since all one sees are expenses, risks to production, and a lack of time, staff, sometimes skills, or sometimes historical data… and that is completely understandable, and in no way a criticism. The role of FinOps professionals is to provide a gentle, structured, strategic, and methodical approach.
All of this leads to integrating the FinOps dimension into the product development lifecycle, beginning as early as the planning and design phases. Whenever possible, I collaborate closely with DevOps and CI/CD teams to embed cost management practices from the outset; this is the key to success - the earlier you start, the simpler it is and the sooner you reap the benefits. This includes automating cost monitoring, optimising the resources used for testing and deployment, and building in feedback loops to adjust strategies in real time. The goal is to foster a culture of financial responsibility, where every team is aware of the cost impact of its decisions. People love it when they can monitor their entire infrastructure in real time on dashboards - it's like watching the weather report on TV!
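As an illustration of what “embedding cost management from the outset” can look like in practice, here is a minimal, hypothetical sketch of a budget guardrail that could run as a CI/CD step. The cost export file, the budget figures, and the thresholds are all assumptions, not any specific tool’s API:

```python
"""Hypothetical CI/CD cost guardrail: read a daily cost export and warn or fail
the pipeline when an environment drifts past its budget. File name, budgets,
and thresholds are illustrative assumptions."""

import csv
import sys
from collections import defaultdict

# Illustrative monthly budgets per environment (EUR).
BUDGETS = {"production": 30_000.0, "staging": 6_000.0, "dev": 2_000.0}
ALERT_THRESHOLD = 0.8  # warn once 80% of the budget is consumed


def month_to_date_costs(export_path: str) -> dict[str, float]:
    """Sum month-to-date cost per environment from a CSV export
    with columns: date, environment, cost."""
    totals: dict[str, float] = defaultdict(float)
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["environment"]] += float(row["cost"])
    return totals


def main() -> int:
    totals = month_to_date_costs("cost_export.csv")  # hypothetical export file
    exit_code = 0
    for env, budget in BUDGETS.items():
        spent = totals.get(env, 0.0)
        ratio = spent / budget
        if ratio >= 1.0:
            print(f"[FAIL] {env}: {spent:.0f} EUR spent, budget {budget:.0f} EUR exceeded")
            exit_code = 1  # block the deployment, or page the owning team
        elif ratio >= ALERT_THRESHOLD:
            print(f"[WARN] {env}: {ratio:.0%} of monthly budget consumed")
        else:
            print(f"[OK]   {env}: {ratio:.0%} of monthly budget consumed")
    return exit_code


if __name__ == "__main__":
    sys.exit(main())
```

The value is not in the script itself but in where it runs: a check like this, surfaced in the same pipeline engineers already watch, is exactly the kind of real-time feedback loop mentioned above.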
For example, with one of my clients, we worked on integrating FinOps into the development cycle by using tools like Karpenter to optimise Spot instance usage and reduce costs. We also implemented resource tagging practices to track costs by environment (production, staging, etc.) and by application, allowing us to quickly identify optimisation opportunities. Finally, we collaborated with DevOps teams to automate cost monitoring and integrate real-time alerts, enabling us to react rapidly to anomalies and keep costs under control.
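To show what the tagging practice enables, here is a small, hypothetical illustration of breaking spend down by environment and application and surfacing untagged cost. The column names and the pandas-based approach are assumptions, not a particular provider’s billing schema:

```python
import pandas as pd

# Hypothetical billing export: one row per resource per day, with the tags we enforce.
billing = pd.DataFrame(
    {
        "environment": ["production", "production", "staging", None, "production"],
        "application": ["checkout", "search", "checkout", None, "search"],
        "cost": [120.0, 80.0, 15.0, 9.5, 95.0],
    }
)

# Cost by environment and application: the view used to spot optimisation opportunities.
by_env_app = (
    billing.dropna(subset=["environment", "application"])
    .groupby(["environment", "application"], as_index=False)["cost"]
    .sum()
    .sort_values("cost", ascending=False)
)
print(by_env_app)

# Untagged spend: cost that cannot be attributed - a classic target for tagging policies.
untagged = billing[billing["environment"].isna() | billing["application"].isna()]["cost"].sum()
print(f"Untagged spend: {untagged:.2f} ({untagged / billing['cost'].sum():.0%} of total)")
```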
In conclusion, I would say the most common mistake is to view FinOps solely through the lens of cost optimisation. Other common errors include insufficient (or non-existent) collaboration between financial and technical teams from the outset, the absence of continuous monitoring, and underestimating the importance of automation. To avoid these pitfalls, it is necessary to promote open and ongoing communication between teams, implement robust monitoring and reporting tools, invest in automating cost management processes, and, perhaps most importantly, foster mutual trust.
Recently, with one of my clients, to address a major visibility problem with expenses, we established detailed dashboards and regular reports to track costs by project and application - monthly, daily, and even factoring in days of the week to identify patterns, much like an investigator looking for clues. On the human side, I encouraged close collaboration between the finance and technical teams by organising workshops and training sessions on FinOps best practices. Building internal momentum, finding allies to help convince others, and demonstrating the value of FinOps to trigger a virtuous cycle: this, too, is part of the FinOps approach. Finally, my client invested time in automating cost management processes, which reduced human error and improved the overall efficiency of their FinOps initiatives.
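For the “investigator” part, the sketch below shows one possible way to look for monthly and day-of-week patterns in a daily cost export; the file layout and column names are assumptions, not the client’s actual setup:

```python
import pandas as pd

# Hypothetical daily cost export with columns: date, project, application, cost.
costs = pd.read_csv("daily_costs.csv", parse_dates=["date"])

# Monthly view per project: the backbone of the regular reports.
monthly = (
    costs.assign(month=costs["date"].dt.to_period("M"))
    .groupby(["month", "project"], as_index=False)["cost"]
    .sum()
)

# Day-of-week view: weekend spend that stays flat often points to resources
# (dev/staging clusters, batch jobs) that could be scheduled or scaled down.
by_weekday = (
    costs.assign(weekday=costs["date"].dt.day_name())
    .groupby(["project", "weekday"], as_index=False)["cost"]
    .mean()
    .rename(columns={"cost": "avg_daily_cost"})
)

print(monthly.tail())
print(by_weekday.sort_values(["project", "avg_daily_cost"], ascending=[True, False]))
```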
To avoid these mistakes, I believe it's crucial to adopt a gradual approach to building a FinOps culture. Avoid aggressive strategies in rapidly growing environments with technical debt; instead, concentrate on methodically optimising infrastructure to manage scalability and resilience.
FinOps should embrace the “small steps” method: test hypotheses on a small scale, learn, adjust, repeat, and gradually roll out changes. This minimises risk while maximising learning and adaptability. Diverse expertise - from both managers and technical experts - and ongoing peer exchange within communities add significant strategic value.
Finally, one of the main errors to avoid is not embedding FinOps in the technological and product vision, especially when aiming to redefine complex business operations while remaining ambitious and aligned with market needs. Ultimately, using FinOps as a compass strongly contributes to building an elastic and scalable infrastructure that can absorb multiple times the current load or future changes - which is essential to achieving your performance and agility goals. Integrating FinOps practices strengthens resilience.
Need another concrete case study to demonstrate the tangible benefits of a FinOps-led transformation?
A global dating app faced escalating cloud costs driven by a complex Elasticsearch cluster architecture, particularly due to high inter-zone network egress and compute expenses. The inter-zone setup, initially intended as a resilience measure, introduced additional operational complexity and rising financial pressure. These challenges not only impacted the company’s budget but also threatened their ability to deliver reliable user experiences.
To resolve this, the organisation partnered with us to conduct a comprehensive assessment of both architectural and operational practices. It was discovered that the existing disaster recovery arrangements were insufficient, with the inter-zone structure acting as a misguided failsafe.
The solution centered around several key FinOps-aligned initiatives:
- Enhanced visibility into cloud network usage and spend using detailed reporting and analytics, empowering engineers to make more cost-aware decisions
- A systematic evaluation of reliability strategies, weighing their costs against potential downtime (a simplified version of this trade-off is sketched after this list), and deep-diving into the specific drivers behind cloud expenditure
- The adoption of FinOps best practices, promoting collaboration between technical and business stakeholders, and aligning cloud strategy with organisational goals
- Architectural optimisation, including reducing the Elasticsearch cluster size and rolling out a proper disaster recovery plan, all managed through a gradual, low-disruption process
- Continued monitoring and iterative optimisation to sustain improvements in both reliability and cost-effectiveness
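For the reliability trade-off mentioned above, a back-of-the-envelope comparison might look like the sketch below. Every number here is a hypothetical placeholder; a real evaluation uses the provider’s actual inter-zone transfer rates and measured traffic:

```python
# Simplified sketch of the "cost of resilience vs cost of downtime" trade-off.
# All numbers are hypothetical placeholders, not the client's figures.

# Annual cost of keeping the inter-zone replication in place.
inter_zone_traffic_gb_per_month = 400_000      # assumed replication + query traffic across zones
inter_zone_rate_per_gb = 0.01                  # assumed inter-zone transfer rate (EUR/GB)
extra_compute_per_month = 4_000.0              # assumed extra nodes kept for this layout

annual_resilience_cost = 12 * (
    inter_zone_traffic_gb_per_month * inter_zone_rate_per_gb + extra_compute_per_month
)

# Expected annual cost of the downtime this setup is supposed to prevent.
expected_outages_per_year = 0.5                # assumed likelihood of a zonal incident
avg_outage_duration_hours = 2.0
downtime_cost_per_hour = 15_000.0              # assumed lost revenue + recovery effort

expected_annual_downtime_cost = (
    expected_outages_per_year * avg_outage_duration_hours * downtime_cost_per_hour
)

print(f"Annual cost of the inter-zone setup: {annual_resilience_cost:,.0f} EUR")
print(f"Expected annual cost of downtime:    {expected_annual_downtime_cost:,.0f} EUR")
# If the first number dwarfs the second, a leaner architecture plus a proper
# disaster recovery plan is usually the better trade.
```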
As a result, the company achieved a substantial reduction in network and infrastructure expenses, improved operational efficiency, and established a culture of financial accountability. Automated disaster recovery enhanced resilience, and the savings realised enabled business expansion.