In the past few years, cloud computing has become more expensive than ever. Initially attracted by the promise of cutting infrastructure costs, companies flocked to behemoths like AWS and Google Cloud to host their services. Engineering teams were told this would reduce engineering costs and increase developer productivity, and in some cases it did.
Fundamental shifts in AI/ML were made possible by the ability to batch and run jobs in parallel in the cloud, which reduced the time it took to train certain types of models and led to faster innovation cycles. Another example was the shift in how software is actually designed: from monolithic applications running on VMs to a microservices- and container-based infrastructure paradigm.
But while the adoption of the cloud has fundamentally changed the way we build, manage and run technology products, it has also led to an unforeseen consequence: runaway cloud costs.
Total operating expenses in billions. Numbers are approximate, based on data from Synergy Research Group. Image Credits: Chelsea Goddard
While the promise of spending less prompted companies to migrate services to the cloud, many teams were unsure how to do so efficiently and, by extension, cost-effectively. This created the first investment opportunity we’ve seen behind the recent surge in venture capital for cloud observability platforms such as Chronosphere ($255 million), Observe ($70 million) and Cribl ($150 million).
The basic thesis here is simple: By providing insight into the cost of services, these platforms can help teams reduce their spend. It echoes the age-old saying, “You can’t change what you can’t see.” This has also been the main driver for larger companies acquiring smaller observability players: reducing the risk of customer churn by enticing customers with additional observability features, then increasing their average contract value (ACV).