Sustainable Cloud Computing:

How Smart Scaler is leading the charge
Olyvia Rakshit

VP Marketing & Product (UX)

23 February, 2023

3 min read



A Tale of Two Cities

“It was the best of times, it was the worst of times,        
It was the age of wisdom, it was the age of foolishness,        
It was the epoch of belief, it was the epoch of incredulity,        
It was the season of light, it was the season of darkness,
It was the spring of hope, it was the winter of despair.”

Charles Dickens' famous lines have been used throughout history to describe contrasting situations. In the world of cloud computing, the Dickensian analogy is both thought-provoking and relatable.

Cloud and the Carbon Footprint

The Cloud has enabled businesses to scale faster and reach more customers than ever before - "the best of times". But the environmental costs of running huge cloud data centers are staggering - "the worst of times"? This article from MIT notes that the carbon footprint of the cloud is so high that it surpasses that of the airline industry.

Which brings us to this question: What if there was a way to reduce cloud costs and save the environment at the same time? That's where continuous predictive autoscaling of Kubernetes resources driven by Reinforcement Learning (RL) comes in.

How do applications scale (today)?

Kubernetes is used to manage the deployment, scaling, and operation of containerized applications, whose containers run in pods. Kubernetes includes a feature called the Horizontal Pod Autoscaler (HPA) that scales the number of pods, up or down, based on observed CPU utilization (or other metrics). This means if demand for the application goes up, HPA scales up the number of pods, and when demand goes down, HPA scales the number back down.
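The scaling rule HPA applies is simple, and it is worth seeing why it is reactive: it only responds after utilization has already moved. A minimal sketch of the documented HPA formula (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)):

```python
import math

def desired_replicas(current_replicas: int, current_cpu: float, target_cpu: float) -> int:
    """Replica count HPA would request, per the documented formula:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_cpu / target_cpu)

# CPU utilization hits 80% against a 40% target: HPA doubles the pods.
print(desired_replicas(4, 0.80, 0.40))  # 8
# Utilization falls to 20%: HPA halves them again.
print(desired_replicas(8, 0.20, 0.40))  # 4
```

Note that the correction only happens after the metric has drifted, which is exactly why admins overprovision headroom.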

So where’s the problem? This setup requires close collaboration between the application developer and DevOps (the cluster admin). The developer specifies the resource needs of the application, and the cluster admin estimates and sets the pod counts for peak demand and for bare-minimum demand. If the pod count is set too low, the application will crash. If it is set too high, it wastes money and contributes to the carbon footprint in the cloud. Moreover, existing HPA solutions are “reactive” in nature, so cluster admins have to overprovision to allow time for HPA to react to increased demand and the need for more resources.

Smart Scaler - “predictive” autoscaling

So, coming back to the question. What if there was a way to reduce cloud costs and save the environment at the same time?

That's where Avesha’s Smart Scaler comes in. It uses “predictive” analytics to “precisely” estimate the number of pods an application will need ahead of time. This allows the application to scale up or down accurately based on demand, and by doing that, Smart Scaler can avoid the waste of overprovisioning resources. When you don’t overprovision resources you’re not wasting energy on servers that are not being used. When you don’t waste energy, you’re not contributing to the carbon footprint in the cloud. It’s a win-win for businesses and for the environment.

Smart Scaler - “automated” autoscaling

In the new CI/CD paradigm, where applications are deployed frequently, each application version may have new performance characteristics. This makes it harder to manually tune the HPA parameters for each new deployment, escalating costs through increased DevOps time or underutilized infrastructure. And yes, you got it: it also contributes to the carbon footprint.

Smart Scaler solves these problems with continuous optimization and automation. It uses reinforcement learning (RL) to “continuously” optimize the number of pods. RL is a type of machine learning that makes decisions based on rewards and punishments. In the case of Smart Scaler, the rewards are cost savings and a reduced carbon footprint.
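Smart Scaler’s actual reward design is internal to the product, but the idea of rewards and punishments can be illustrated with a toy reward function: every provisioned pod costs something (money and energy), and under-provisioning is punished hard as an SLO breach. The names and weights below are purely illustrative assumptions:

```python
def reward(pods_provisioned: int, pods_needed: int,
           cost_per_pod: float = 1.0, slo_penalty: float = 50.0) -> float:
    """Toy RL reward: pay for every pod you run; take a large penalty
    if you provisioned fewer pods than the workload needed (SLO breach)."""
    r = -cost_per_pod * pods_provisioned
    if pods_provisioned < pods_needed:
        r -= slo_penalty
    return r

# Overprovisioning wastes money and energy...
print(reward(20, 10))  # -20.0
# ...right-sizing scores best...
print(reward(10, 10))  # -10.0
# ...and under-provisioning is punished hardest.
print(reward(5, 10))   # -55.0
```

An agent maximizing this kind of reward learns to track demand closely rather than pad the pod count, which is the cost-and-carbon win described above.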

Smart Scaler extracts performance data from Prometheus and utilizes a “Pod Capacity Estimator” to forecast the number of pods necessary for a given load, as well as a “Traffic Pattern Predictor” that makes predictions based on learned patterns. These two components work in tandem, feeding data into the RL engine, which accurately predicts the Kubernetes resources needed “ahead of time”.
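The predict-then-size pipeline described above can be sketched in a few lines. This is a stand-in, not Smart Scaler’s API: the function names, the naive trend extrapolation, and the per-pod capacity figure are all assumptions for illustration. The real product learns traffic patterns and pod capacity from Prometheus data rather than hard-coding them:

```python
import math

def predict_traffic(recent_rps: list[float]) -> float:
    """Stand-in "Traffic Pattern Predictor": naive linear extrapolation
    over recent requests-per-second samples (e.g., from Prometheus)."""
    if len(recent_rps) < 2:
        return recent_rps[-1]
    trend = recent_rps[-1] - recent_rps[-2]
    return max(0.0, recent_rps[-1] + trend)

def pods_for_load(rps: float, rps_per_pod: float = 100.0) -> int:
    """Stand-in "Pod Capacity Estimator": pods needed for a given load,
    assuming each pod sustains rps_per_pod requests per second."""
    return max(1, math.ceil(rps / rps_per_pod))

# Traffic ramping from 300 to 400 rps suggests ~500 rps next interval,
# so five pods are provisioned ahead of time instead of reactively.
forecast = predict_traffic([300.0, 400.0])
print(pods_for_load(forecast))  # 5
```

The point of the sketch is the ordering: capacity is sized from a forecast of the next interval’s load, not from the current interval’s CPU readings as reactive HPA does.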

Spring of Hope

Smart Scaler is an exciting development in the world of cloud computing. By using predictive approaches, automation, and reinforcement learning to optimize Kubernetes resources, it reduces cloud costs and the carbon footprint of the cloud. As we work toward a more sustainable future, technologies like Smart Scaler will play an important role in reducing our impact on the environment.

Reflecting on the opening lines of Charles Dickens, we are reminded of the opportunity and the responsibility to create a brighter future for all, to turn the winter of despair into a spring of hope.