The promise versus the reality of the public cloud


The cloud continues to drive efficiencies and innovation, but its growing complexity leaves many companies struggling to realize its benefits.

The emergence of the public cloud in 2006 marked a transformative shift — not just for computing, but for the way the world does business. By enabling organizations to access services at the exact time and scale they need them, the cloud opened the door to previously unimaginable levels of agility, reliability, scalability and speed.

Without the cloud, the modern digital economy we know today would not exist — and the pandemic would have brought the world to a halt. Instead, we saw businesses pivot and thrive, finding innovative ways to develop and distribute their products and services. Yet, as the public cloud matures, many companies still struggle to leverage its benefits.

While the cloud offers enormous potential for transformative growth, companies hoping to realize it must first understand why doing so is challenging, and then what they can do to leverage the cloud more effectively.

The promise of the cloud

On paper, the concept of the cloud is simple: Businesses can purchase all their computing resources as a service, available whenever and wherever they need them. This promise is driving global spend on public cloud services to an expected $397.5 billion in 2022. The companies spending this money have seen the world-changing results achieved by high-performing cloud adopters. Now they too are seeking the operational and economic efficiencies and increased innovation delivered through the agility, scalability, reliability and speed they know the cloud can bring.

Agility
Businesses need to be able to adapt quickly and efficiently to ever-changing market conditions. Moving to the cloud advances this kind of flexibility because companies no longer have to invest in hardware and software that require ongoing manual maintenance. Instead, they can rely on their cloud provider for an ecosystem that’s secure, continuously updated and scales to meet their needs.

Apart from startups born in the cloud, most organizations have legacy applications, some of which are rarely used but must be available immediately when needed. By decomposing these applications into services and moving them to serverless technology in the public cloud, organizations can have multiple teams working independently on the separate components to develop new features and enhancements.
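As a hedged illustration, a component carved out of a legacy application might become a small serverless function. The sketch below follows the AWS Lambda Python handler convention; the `order_id` field and the lookup logic are hypothetical placeholders, not part of any real system described here.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler for one decomposed service.

    The `order_id` field and the echo logic are illustrative placeholders;
    a real service would validate input and query a datastore.
    """
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    # Echo the ID back in place of a real lookup.
    return {"statusCode": 200,
            "body": json.dumps({"order_id": order_id, "status": "found"})}
```

Because each such function is independently deployable, separate teams can ship changes to their own components without coordinating a monolithic release.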

From an accounting perspective, operating in the cloud makes it easier to forecast spend and align budgets with business expansion plans. The entire IT spend can be treated as Opex (Operational Expenditure) outlay rather than Capex (Capital Expenditure), giving businesses more financial agility and freedom.

Scalability
Another key driver of cloud growth is the ability to increase or decrease IT resources as required to meet changing demand. Businesses can scale their data storage capacity, processing power and networking with little disruption through the use of virtual machines (VMs). VMs can be scaled up or down with ease, with workloads and applications transferred to bigger VMs if necessary.

With on-premises physical infrastructure, this kind of scaling is exceedingly expensive, slow and difficult to manage, involving the purchase of new server hardware and disk arrays along with all the associated administration headaches and delays. Companies face inflated IT costs because they end up provisioning more resources than they need in order to cover potential peak demand. If they instead provision just enough resources for daily use, they risk serious performance problems when demand spikes, and revenue inevitably suffers.
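The scale-out decision itself is simple arithmetic, which is why the cloud can automate it. The sketch below is a minimal illustration, assuming hypothetical per-VM capacity and headroom figures rather than any provider's defaults.

```python
import math

def instances_needed(current_rps: float, rps_per_vm: float,
                     headroom: float = 0.2) -> int:
    """Number of VMs needed for the observed load.

    `rps_per_vm` (requests per second one VM can serve) and `headroom`
    (20% spare capacity) are illustrative assumptions.
    """
    if current_rps <= 0:
        return 1  # keep one instance warm even when idle
    return max(1, math.ceil(current_rps * (1 + headroom) / rps_per_vm))

# 450 req/s against VMs rated at 100 req/s, with 20% headroom
print(instances_needed(450, 100))  # 6
```

An on-premises team running the same calculation would face weeks of procurement to add the sixth server; in the cloud the adjustment is near-instant in either direction.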

Reliability
Cloud migration can help reduce downtime and the risk of data loss because the major cloud providers operate under service-level agreements (SLAs) that typically guarantee 99.9% or higher uptime, and they also assume responsibility for backups and disaster recovery. Distributed solutions maximize uptime and minimize access issues while absorbing rapid changes in user demand.
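SLA percentages translate directly into allowed downtime, and the arithmetic shows why each extra "nine" matters:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def max_downtime_hours(sla_percent: float) -> float:
    """Maximum downtime per year permitted by an uptime SLA."""
    return HOURS_PER_YEAR * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {max_downtime_hours(sla):.2f} h/year down")
```

A 99% SLA still permits roughly 87.6 hours of downtime a year, while 99.9% allows under 9 hours, which is why the exact SLA tier of each service is worth checking before migration.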

No system is completely reliable: Server downtime, security breaches, user error and faulty software are facts of life, but cloud offerings are more likely to remain up and running without interruption. And when faults do occur, proper planning can stop them from escalating into outages that keep users from accessing products. By allocating additional resources for redundancy, fault tolerance can be built into a cloud infrastructure with relative ease.
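The payoff from redundancy can be quantified. Assuming failures are independent (which separate availability zones only approximate), the combined availability of replicated instances follows a simple formula:

```python
def combined_availability(single: float, replicas: int) -> float:
    """Availability of `replicas` redundant instances: 1 - (1 - a)^n.

    Assumes independent failures, which real deployments only approximate.
    """
    return 1 - (1 - single) ** replicas

# Two instances that are each 99% available
print(combined_availability(0.99, 2))  # ≈ 0.9999
```

Doubling up two 99%-available instances yields roughly 99.99% combined availability, which is why allocating a modest amount of extra capacity buys a disproportionate reliability gain.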

Speed
The headline reason organizations move to the cloud is faster time to market. In the cloud, end-to-end automation means companies can release code into production thousands of times a day. This allows them to release new features and offerings quickly and embrace a fail-fast approach for rapid testing of new ideas. Most cloud computing services are available on a self-service and on-demand basis, so huge amounts of computing resources can be provisioned almost instantly.

But it’s not just about automation: The cloud is a streamlined platform where everything companies need to help them innovate is in one place. This means they can process and analyze data on a larger scale and quickly generate the insights they need to get products to the market fast. Teams are freed up to work on new products and features in a more collaborative way, harnessing their shared creativity to add business value.

The reality for many enterprises

The promise of the cloud remains elusive for many organizations. Multicloud deployments, multiple vendors and mounting pressures on management and security make it difficult to capture the value on offer.

Tools are emerging and maturing to deal with issues around cloud management, but organizations continue to struggle with factors such as cost, inflexibility and security.

High cost

For many SaaS companies, cloud computing costs make up the bulk of their cost of revenue calculation, reducing their margins. As the cost of cloud represents an ever bigger share of the total cost of revenue (COR) or cost of goods sold (COGS), companies such as Dropbox are even leaving the public cloud.
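The margin pressure is easy to see with a back-of-the-envelope calculation. The figures below are hypothetical, chosen only to illustrate how cloud spend as a share of revenue flows straight into gross margin:

```python
def gross_margin(revenue: float, cloud_cost: float, other_cor: float) -> float:
    """Gross margin after cost of revenue; all inputs are illustrative $M figures."""
    return (revenue - cloud_cost - other_cor) / revenue

# Hypothetical SaaS company with $100M revenue and $10M non-cloud COR:
print(gross_margin(100, 25, 10))  # cloud at 25% of revenue -> 0.65
print(gross_margin(100, 15, 10))  # cloud at 15% of revenue -> 0.75
```

In this sketch, trimming cloud spend from 25% to 15% of revenue lifts gross margin by ten full points, which is the kind of gap that drives decisions like Dropbox's.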

Such dramatic steps are not necessary if a disciplined approach to cost optimization is taken. But it can be difficult to strike a balance between reducing costs and investing in cloud resources that support business objectives. Too many organizations are unprepared to manage the opportunities for saving and are likely to overspend.

Inflexibility due to vendor lock-in

Organizations can be vulnerable to vendor lock-in when they implement cloud transformations. It may be physically possible to leave the vendor, but if a company becomes too reliant on the vendor's services, it may perceive the cost of switching as too high. This can be avoided by employing cloud-agnostic services developed with open, common standards.

Open-source applications are based on source code that is publicly available for inspection, editing and improvement. So-called as-a-service cloud solutions based on open-source code can be distributed across private, public and hybrid cloud environments, giving companies more control over their cloud solutions. Interoperability can also be advanced by reusing software stacks, libraries and components to create more common ground between applications.

Security issues

Security is a commonly cited concern among organizations considering the cloud. Cyberattacks can shut down smaller businesses and inflict significant damage on larger organizations' reputations and revenues. However, the leading cloud providers maintain levels of security far more robust than anything an individual business could hope to implement on its own — and they regularly upgrade their services in line with industry standards and regulations. Concerns around security failures are valid, but up to 99% of cloud security failures through 2025 will be caused by customer error.

Managing complexity, unlocking growth

The potential benefits of the cloud remain game-changing, but unlocking them is more complicated than it first appears. Users have a huge amount of control over variables such as architecture, networking, storage, zones, DNS, CPU and memory, but that control can backfire if not exercised carefully. Cloud customers need to find the right balance between performance and cost to harness the cloud in a way that drives true value.

The promise of the public cloud can’t be realized by throwing unlimited amounts of money at it. The approach has to be based on a mix of people, process and architecture, and for many businesses, it pays to partner with experts who have done it all before.

The kind of agility the cloud facilitates is a powerful enabler of developers’ creativity and innovation. However, there must be standards around IT fundamentals such as security, management, monitoring and operations. Managing the growing complexity of the public cloud must be done in a consistent way that still allows people to move as quickly as possible. It will be difficult and there will be tradeoffs, but getting the balance right is the key to transformational growth.
