Why multi-tenancy
Raj Nair

Founder & CEO

14 April, 2022

2 min read

Use case:

With a growing number of applications being modernized, organizations must keep the activities of one team from impacting other teams. For a SaaS provider, "team" can be read as "customer". There are several reasons for this requirement; the most obvious is to ensure that traffic or CPU usage from one team does not overwhelm capacity and choke off the activities of other teams sharing resources with it. This is frequently referred to as limiting the "blast radius", or keeping a "chatty neighbor" from degrading everyone else's experience.

Limitations:

Kubernetes offers namespaces, with resource quotas and network policies, to impose limits on resource usage and isolate traffic. However, setting these up is currently a manual process (and therefore subject to drift across locations) that requires cluster-admin privileges. Other current approaches to multi-tenancy require you to spin up a separate cluster for each tenant, which adds cost and complexity.
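
For illustration only (not from the article), here is a hedged sketch of what that manual setup looks like using the official Kubernetes Python client: a cluster admin creates a namespace per team, caps its CPU and memory with a ResourceQuota, and blocks cross-team traffic with a default-deny NetworkPolicy. The namespace name, quota values, and policy shown are assumptions, and something like this has to be repeated for every team in every location.

```python
# Hedged sketch of the manual, cluster-admin-driven approach described above.
# Requires the "kubernetes" package and cluster-admin credentials in kubeconfig.
from kubernetes import client, config


def fence_off_team(team: str) -> None:
    config.load_kube_config()                 # cluster-admin credentials assumed
    core = client.CoreV1Api()
    net = client.NetworkingV1Api()

    # 1. One namespace per team/tenant.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=team)))

    # 2. Cap how much of the shared cluster the team can consume.
    core.create_namespaced_resource_quota(
        namespace=team,
        body=client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name=f"{team}-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}),
        ),
    )

    # 3. Deny all ingress into the team's pods by default.
    net.create_namespaced_network_policy(
        namespace=team,
        body=client.V1NetworkPolicy(
            metadata=client.V1ObjectMeta(name="default-deny-ingress"),
            spec=client.V1NetworkPolicySpec(
                pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
                policy_types=["Ingress"],               # no ingress rules listed => deny all ingress
            ),
        ),
    )


if __name__ == "__main__":
    fence_off_team("team-a")  # repeated per team, per cluster -- the drift the article warns about
```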

Better way:

A better way is through automation. An "application slice" (available in Avesha's KubeSlice product) is a building block for making Kubernetes multi-tenant in an automated fashion. The slice operator (a Kubernetes automation mechanism) lets the platform team set resource usage limits and automates their enforcement. These limits can be applied to one or more locations, using one or more slices per tenant. Each slice also carries a QoE profile corresponding to a desired application performance, which can be specified manually or learned by an ML/RL algorithm observing the application state (available in an upcoming Avesha product, Smart Application Load Balancer). The slice then enforces the QoE using traffic prioritization at its egress traffic scheduler. Moreover, a slice gives you multi-tenancy without incurring the overhead of running multiple clusters.
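
To make the operator idea concrete, here is a hedged sketch of the pattern, written with the open-source kopf framework and the Kubernetes Python client. It is not KubeSlice's actual implementation; the "TenantSlice" resource, its group "example.avesha.io", and its fields are invented for illustration. The controller watches the custom resource and stamps the declared limits into every member namespace, which is what removes the manual, cluster-admin-only step.

```python
# Hypothetical sketch of the operator pattern: a controller watches a made-up
# "TenantSlice" custom resource and applies its CPU/memory limits to each
# member namespace. NOT KubeSlice's API; group, kind, and fields are invented.
import kopf
from kubernetes import client, config
from kubernetes.client.rest import ApiException


@kopf.on.create("example.avesha.io", "v1alpha1", "tenantslices")
@kopf.on.update("example.avesha.io", "v1alpha1", "tenantslices")
def reconcile_slice(spec, name, logger, **_):
    """Apply the slice's declared resource limits to every member namespace."""
    config.load_incluster_config()  # assumes the operator runs in-cluster; use load_kube_config() locally
    core = client.CoreV1Api()

    limits = spec.get("resourceLimits", {})  # e.g. {"requests.cpu": "4", "requests.memory": "8Gi"}
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{name}-quota"),
        spec=client.V1ResourceQuotaSpec(hard=limits),
    )

    for ns in spec.get("namespaces", []):
        try:
            core.create_namespaced_resource_quota(namespace=ns, body=quota)
        except ApiException as e:
            if e.status == 409:  # quota already exists: replace it with the new limits
                core.replace_namespaced_resource_quota(
                    name=f"{name}-quota", namespace=ns, body=quota)
            else:
                raise
        logger.info(f"applied {name}-quota to namespace {ns}")
```

In this sketch the controller would be started with `kopf run slice_operator.py` against a cluster where the hypothetical CRD is installed; the same reconcile loop applies the limits consistently across locations instead of letting hand-edited quotas drift.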

The application slice opens the door to a self-service portal that lets tenants (e.g., application developers from "teams" or "customers") create slices and multi-cluster namespaces on their own, across multiple locations, without impeding developer velocity. Keeping the all-important east-west traffic within the slice simplifies the application developer's work and speeds up workload delivery. Application workloads can also freely "roam" anywhere on a slice without any change, something that was unthinkable before.

 

