KubeSlice Office Hours
Tanisha Banik

Contributor @KubeSlice

21 September, 2022

5 min read

The KubeSlice project, recently open-sourced by Avesha, hosted the second episode of its Office Hours series on Thursday, 25th August 2022. The series brings the community together every other Thursday to explore different facets of KubeSlice. Visit our YouTube channel to watch recordings of earlier sessions, including our very first office hours.

In this episode, Prabhu Navali, director of engineering at Avesha, covered how to configure KubeSlice and use it to provide multi-tenancy in a multi-cluster Kubernetes environment.

KubeSlice is built on the idea of partitioning a cluster so that different teams within an organization get a dedicated set of resources, and it uses pre-existing Kubernetes primitives to define tenancy. With the aid of an overlay network, on which you can specify namespaces and applications, you can construct a wider workspace, called a slice, that spans many clusters and lets applications move easily between them.

In addition to the above, KubeSlice addresses the following challenges when providing multi-tenancy in a multi-cluster Kubernetes environment:

  • How can clusters that are dispersed across numerous cloud providers, geographies, and edges be connected?
  • How can the same multi-tenancy be maintained across these clusters?

To learn more about how it achieves this, refer to the KubeSlice documentation.
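To make the tenancy model concrete, here is a minimal sketch of a SliceConfig applied to the KubeSlice controller cluster. The field names follow the SliceConfig CRD described in the documentation, but every value below (the project namespace, the worker cluster names, the subnet) is an illustrative placeholder rather than something taken from the demo:

    # Sketch only: all values are placeholders; field names follow the
    # KubeSlice SliceConfig CRD
    cat <<'EOF' | kubectl --context kind-controller apply -f -
    apiVersion: controller.kubeslice.io/v1alpha1
    kind: SliceConfig
    metadata:
      name: demo-slice
      namespace: kubeslice-avesha    # project namespace on the controller
    spec:
      sliceSubnet: 10.1.0.0/16       # overlay network shared by the clusters
      sliceType: Application
      sliceGatewayProvider:
        sliceGatewayType: OpenVPN    # tunnels that connect the clusters
        sliceCaType: Local
      sliceIpamType: Local
      clusters:                      # worker clusters the slice spans
        - worker-1
        - worker-2
      namespaceIsolationProfile:
        isolationEnabled: true       # keep slice traffic isolated
        applicationNamespaces:       # namespaces onboarded onto the slice
          - namespace: book-info
            clusters:
              - '*'
    EOF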

With this foundation laid, Prabhu demonstrated the inner workings of KubeSlice with the help of two use cases.

If you want to follow along with the demo, you will need the usual tooling for a local kind setup installed first: a container runtime such as Docker, plus kind, kubectl, and Helm.

For his live demo, Prabhu used the kind cluster bash automation script available in the GitHub repo, although you can also construct the clusters and install the slice manually. The script sets up Kubernetes clusters using kind and installs the necessary KubeSlice components.

After installing the prerequisites, you can follow along by cloning the repository locally and running the kind.sh script. To exercise the multi-cluster functionality, the script additionally configures an iPerf client-server application. As an alternative, Terraform can be used to install kind and Kubernetes on AWS EC2 instances.
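For reference, the follow-along steps look roughly like this. The repository URL and the script's location are assumptions based on the names mentioned above, and the iPerf pod, namespace, and service names are the defaults used in the KubeSlice docs, so adjust them to match what the script actually creates:

    # Clone the repo carrying the kind automation (location assumed)
    git clone https://github.com/kubeslice/examples.git
    cd examples

    # Create the kind clusters, install the KubeSlice components, and
    # deploy the iPerf client/server pair used for the connectivity test
    ./kind.sh

    # Verify inter-cluster reachability over the slice from the iPerf client
    kubectl --context kind-worker-1 -n iperf exec -it deploy/iperf-sleep \
      -c iperf -- iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1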

P.S. We are in the process of optimizing and simplifying these scripts for a better user experience. If this is something you are interested in helping with, do hop into the #kubeslice channel on the Kubernetes Slack.

After testing the connectivity using iPerf, Prabhu deployed a simple web application called book-info using a YAML config file. This application displays information about a book, such as the author, year of publication, reviews, and ratings, in the browser. The config file used to deploy the application is available in the kubeslice/examples repository on GitHub.
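As a hedged sketch of that step, assuming the config has been split into per-cluster manifests with these hypothetical file names, and that the kind contexts are named kind-worker-1 and kind-worker-2 (the next paragraph explains which pieces land where):

    # Apply the product page to worker cluster 1 and the remaining
    # services to worker cluster 2; file names here are illustrative
    kubectl --context kind-worker-1 -n book-info apply -f productpage.yaml
    kubectl --context kind-worker-2 -n book-info apply -f details-reviews-ratings.yaml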

The product page is deployed in worker cluster-1, and the book-info details, reviews, service exports, and ratings are deployed in worker cluster-2. A NodePort service exposes the product page deployed in worker cluster-1. Since the kind.sh script creates a book-info namespace in each cluster, the application is automatically onboarded onto the slice once we deploy the YAML file. Once created, it can be accessed across the slice while remaining isolated from the other namespaces within the cluster, solving the noisy-neighbor problem. Beyond this, granular quotas can be set to prevent resource hogging by any particular application, ensuring an equitable distribution of resources in a multi-cluster, multi-tenant setup.
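To reach the product page from outside the cluster, look up the NodePort on worker cluster 1 and browse to a node address with it. The productpage service name and the /productpage path follow the upstream bookinfo sample and are assumptions here:

    # Find the NodePort assigned to the product page service
    kubectl --context kind-worker-1 -n book-info get svc productpage

    # kind nodes run as Docker containers; grab a node IP, then open
    # http://<node-ip>:<node-port>/productpage in a browser
    kubectl --context kind-worker-1 get nodes -o wide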

If you face any issues while following along, or want to know more about the project, we encourage you to ask questions in the #kubeslice channel on the Kubernetes Slack, where we hang out. The Office Hours are also intended to be as interactive as possible, and we'd love for you to join us at the next one, to be held on 8th September 2022. To receive an invite in your inbox and stay updated with the latest goings-on in the project, join our Google Group.

Until next time!

 
