
Announcing Smart Karpenter Version 1.0.0

Veena Jayaram

Staff Technical Writer/Program Manager

We are proud to announce the release of Smart Karpenter version 1.0.0, which went live on April 2, 2025. It is our next-generation autoscaling solution, designed to simplify Kubernetes scaling, reduce cloud costs, and let DevOps teams focus on building rather than on constant tuning.

Key Highlights in Smart Karpenter Version 1.0.0

This release introduces major features and enhancements across infrastructure, application scaling, and AI-driven decision making.

Expanded Cloud and Platform Support

  • Oracle Cloud support: Smart Karpenter can now be deployed on Oracle Cloud clusters under a commercial license.
  • Rancher on Linode support: Teams using Rancher on Linode can now adopt Smart Karpenter under a commercial license.

Contact Avesha Sales at avesha@sales.io for licensing details and deployment guidance.

AI/Smart Scaling Enhancements

  • Predictive Pod Scaling: Uses app-level metrics such as latency, RPS (requests per second), and service dependencies to forecast demand.
  • Dynamic Node Provisioning: Automatically provisions the right number of nodes based on forecasted pod demand.
  • No Manual CPU Thresholds: Smart Karpenter moves away from static thresholds, making scaling decisions with AI models and reducing the need to manually tune HPA settings or CPU limits (see the sketch after this list).
  • Reinforcement Learning: The system continuously learns from usage patterns and, over time, improves its forecasts and resource utilization.
  • Safe Rollouts via Observation and Optimize Modes:
    • Observation mode monitors environments and metrics without acting, helping teams understand behavior without risk.
    • Optimize (or Run) mode activates the AI-driven scaling to provision nodes/pods according to predictions.
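
To make the prediction-driven approach concrete, here is a minimal Python sketch of the general idea: forecast near-term demand from request-rate samples and derive a pod count from it, rather than reacting to a CPU threshold. This is an illustration only, not Smart Karpenter's actual model; the trend heuristic, window, and per-pod capacity figure are assumptions made for the example.

```python
import math
from statistics import mean

def forecast_rps(recent_rps: list[float], horizon_steps: int = 5) -> float:
    """Naive linear-trend forecast of requests per second a few steps ahead.

    Illustrative only: Smart Karpenter uses AI models, not this heuristic.
    """
    if len(recent_rps) < 2:
        return recent_rps[-1] if recent_rps else 0.0
    deltas = [b - a for a, b in zip(recent_rps, recent_rps[1:])]
    return max(0.0, recent_rps[-1] + mean(deltas) * horizon_steps)

def desired_replicas(recent_rps: list[float],
                     rps_per_pod: float = 50.0,  # assumed per-pod capacity
                     min_replicas: int = 2) -> int:
    """Turn a demand forecast into a pod count instead of a CPU threshold."""
    predicted = forecast_rps(recent_rps)
    return max(min_replicas, math.ceil(predicted / rps_per_pod))

# Traffic ramping from 200 to 400 RPS: scale ahead of the spike, not after it.
print(desired_replicas([200, 250, 310, 360, 400]))  # -> 13
```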

Key Benefits

With these features, organizations can expect:

  • Up to 70% reduction in cloud costs by ensuring right-sized node and pod provisioning and eliminating over-provisioning.
  • Better SLO (Service Level Objective) compliance, especially during unpredictable traffic or spikes.
  • Less manual effort: no more tweaking thresholds, running dummy pods, or maintaining large static node pools.

How It Works

Smart Karpenter combines two main layers:

  • Smart Scaler (AI / Prediction Layer)  
    Deployed via Helm in Observation mode initially. It monitors real-time metrics across app services, builds a service graph, forecasts demand, and suggests optimal pod/node counts.
  • Karpenter (Node Provisioning Layer)  
    Once confidence is established, Optimize (or Run) mode feeds Smart Scaler's predictions into Karpenter, which provisions nodes just in time. Nodes are started when needed, used efficiently, and scaled down as demand falls (a simplified sketch of the capacity math follows).
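
As a rough illustration of the second layer, the sketch below shows the kind of capacity math involved in turning a forecast pod count into a node count. It is a deliberately simplified, hypothetical example: real Karpenter provisioning weighs many instance types, scheduling constraints, and disruption settings, and the pod and node sizes here are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class PodSpec:
    cpu_millicores: int
    memory_mib: int

@dataclass
class NodeShape:
    cpu_millicores: int
    memory_mib: int

def nodes_needed(forecast_pods: list[PodSpec], node: NodeShape) -> int:
    """How many nodes of one shape cover the forecast pods (simplified)."""
    total_cpu = sum(p.cpu_millicores for p in forecast_pods)
    total_mem = sum(p.memory_mib for p in forecast_pods)
    return max(math.ceil(total_cpu / node.cpu_millicores),
               math.ceil(total_mem / node.memory_mib), 1)

def scaling_action(current_nodes: int, forecast_pods: list[PodSpec],
                   node: NodeShape) -> str:
    """Compare forecast capacity needs against the current node count."""
    target = nodes_needed(forecast_pods, node)
    if target > current_nodes:
        return f"provision {target - current_nodes} node(s) ahead of demand"
    if target < current_nodes:
        return f"let {current_nodes - target} node(s) scale down"
    return "no change"

# 30 forecast pods of 500m CPU / 512Mi each, on 4-vCPU / 16-GiB nodes.
pods = [PodSpec(500, 512)] * 30
print(scaling_action(current_nodes=3, forecast_pods=pods,
                     node=NodeShape(4000, 16384)))  # -> provision 1 node(s) ahead of demand
```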

Getting Started

Smart Karpenter v1.0.0 is already helping teams in production, and new users can get started today.  
For detailed installation instructions and documentation, visit our Smart Karpenter documentation.