KubeSlice Office Hours – 25th August 2022

Tanisha Banik

Contributor @KubeSlice

21 September, 2022

5 min read


The KubeSlice project recently hosted the second episode of its Office Hours series on Thursday, 25th August 2022. This project, which was just open sourced by Avesha, brings together the entire community every other Thursday to learn more about different facets of KubeSlice. Visit our YouTube channel to watch recordings of earlier events, including our very first office hours.

In this episode, Prabhu Navali, Director of Engineering at Avesha, covered how to configure KubeSlice and how it can be used to provide multi-tenancy in a multi-cluster Kubernetes environment.

KubeSlice is built on the idea of partitioning a cluster so that different teams within an organization can use a dedicated set of resources, and it uses pre-existing Kubernetes primitives to define tenancy. With the aid of an overlay network, onto which you can attach namespaces and applications, it constructs a wider workspace over many clusters, so that a set of applications can easily span different clusters.
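
To make the idea concrete, here is a minimal sketch of what a slice definition looks like as a SliceConfig resource applied to the controller cluster. The slice name, project namespace, subnet, cluster names, and application namespace below are illustrative placeholders, so treat the KubeSlice documentation as the source of truth for the exact schema.

    apiVersion: controller.kubeslice.io/v1alpha1
    kind: SliceConfig
    metadata:
      name: demo-slice                # illustrative slice name
      namespace: kubeslice-demo       # project namespace on the controller cluster (illustrative)
    spec:
      sliceSubnet: 10.1.0.0/16        # overlay subnet the slice spans across clusters
      sliceType: Application
      sliceGatewayProvider:
        sliceGatewayType: OpenVPN     # gateways that stitch the overlay network together
        sliceCaType: Local
      sliceIpamType: Local
      clusters:                       # registered worker clusters the slice stretches over
        - worker-1
        - worker-2
      namespaceIsolationProfile:
        applicationNamespaces:        # namespaces (and their apps) onboarded onto the slice
          - namespace: book-info
            clusters:
              - '*'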

In addition to the above, KubeSlice also addresses the following challenges while providing multi-tenancy in a multi-cluster Kubernetes environment:

  • How can clusters that are dispersed across numerous cloud providers, geographies, and edges be connected?
  • How can the same multi-tenancy be maintained throughout these clusters?

To learn more about how it achieves this, refer to the KubeSlice documentation.

With this foundation laid, Prabhu demonstrated the inner workings of KubeSlice with the help of two use cases.

If you want to follow along with the demo, the prerequisites are listed below:

 

  • Infrastructure Requirements:
    • Minimum of 8 vCPUs and 8 GB of RAM

  • Requirements for Hosting the KubeSlice Controller:
    • Cluster Requirements: 1 Kubernetes cluster
    • Supported Kubernetes Versions: 1.21 and 1.22
    • Supported Helm Version: 3.7.0

  • Requirements for Worker Clusters:
    • Minimum Clusters Required: 2 Kubernetes clusters
    • Nodes per Cluster: 1 node per cluster
    • Supported Kubernetes Versions: 1.21 and 1.22
    • Supported Helm Version: 3.7.0

For his live demo, Prabhu used the kind cluster bash automation script available in the GitHub repo, although you can also construct the clusters manually and install the slice yourself. The script sets up Kubernetes clusters using kind and installs the necessary KubeSlice components.
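
If you would rather take the manual route that the script automates, the installation is essentially a Helm-driven flow along these lines. The chart repository URL, chart names, and values files here are assumptions based on the KubeSlice docs, so verify them there before running anything.

    # Add the KubeSlice Helm repository (URL assumed; check the KubeSlice docs)
    helm repo add kubeslice https://kubeslice.github.io/kubeslice/
    helm repo update

    # cert-manager is a prerequisite for the KubeSlice Controller
    helm install cert-manager kubeslice/cert-manager \
      --namespace cert-manager --create-namespace --set installCRDs=true

    # Install the KubeSlice Controller on the controller cluster
    helm install kubeslice-controller kubeslice/kubeslice-controller \
      -f controller-values.yaml --namespace kubeslice-controller --create-namespace

    # Install the Slice Operator on each of the worker clusters
    helm install kubeslice-worker kubeslice/kubeslice-worker \
      -f worker-values.yaml --namespace kubeslice-system --create-namespace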

After installing the prerequisites mentioned above, you can follow along by cloning the repository locally and running the kind.sh script. To exercise the multi-cluster functionality, the script additionally configures an iPerf client-server application. As an alternative, Terraform can be used to install kind and Kubernetes on AWS EC2 instances.
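
To give a feel for what following along looks like, here is a rough sketch. The repository, the location of kind.sh inside it, and the iPerf namespace, deployment, and service names are assumptions, so defer to the repo's README for the exact commands.

    # Clone the repo that ships the kind automation script (path assumed)
    git clone https://github.com/kubeslice/examples.git
    cd examples

    # Bring up the kind clusters and install the KubeSlice components
    ./kind.sh

    # Verify cross-cluster connectivity from the iPerf client to the iPerf
    # server exported over the slice (names assumed from the demo setup)
    kubectl exec -it deploy/iperf-sleep -c iperf -n iperf -- \
      iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb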

P.S. We are in the process of optimizing and simplifying these for a better user experience. If this is something you are interested in helping with, do hop on to the #kubeslice channel on the Kubernetes Slack.

After testing the connectivity with iPerf, Prabhu deployed a simple web application called book-info using a YAML config file. This application displays information about a book, such as the author, year of publication, reviews, and ratings, in the browser. The config file used to deploy the application is available in the kubeslice/examples repository on GitHub.
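
Deploying it is essentially a kubectl apply of those manifests into the onboarded namespace on each worker cluster. The file names and kind context names below are illustrative; the real manifests live in the kubeslice/examples repository.

    # Product page on worker cluster-1 (context and file names are illustrative)
    kubectl --context kind-worker-1 apply -n book-info -f bookinfo-productpage.yaml

    # Details, reviews, and ratings (plus their service exports) on worker cluster-2
    kubectl --context kind-worker-2 apply -n book-info -f bookinfo-backends.yaml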

The product page is deployed in worker cluster-1, while the book-info details, reviews, and ratings services, along with their service exports, are deployed in worker cluster-2. A NodePort service exposes the product page deployed in worker cluster-1. Since the kind.sh script creates a book-info namespace in each cluster, the application is automatically onboarded onto the slice once we deploy the YAML file. Once onboarded, it can be accessed across the slice while remaining isolated from the other namespaces within the cluster, solving the noisy neighbor problem. Not only this, granular quotas can be set to prevent any single application from hogging resources, ensuring an equitable distribution of resources in a multi-cluster, multi-tenant setup.
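
Two pieces make this work: a ServiceExport object that publishes each backend service onto the slice, and the NodePort service that exposes the product page outside the cluster. Here is a minimal ServiceExport sketch; the API version, labels, and port are assumed from the KubeSlice docs rather than taken verbatim from the demo.

    apiVersion: networking.kubeslice.io/v1beta1
    kind: ServiceExport
    metadata:
      name: reviews
      namespace: book-info
    spec:
      slice: demo-slice               # slice onto which the service is exported
      selector:
        matchLabels:
          app: reviews                # pods backing the exported service
      ports:
        - name: http
          containerPort: 9080
          protocol: TCP

Once exported, the product page in worker cluster-1 can resolve the backend at a slice-local DNS name along the lines of reviews.book-info.svc.slice.local, and you can open the application itself at http://<node-ip>:<nodeport>/productpage after looking up the NodePort with kubectl get svc -n book-info.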

If you face any issues while following along or want to know more about the project, we encourage you to raise questions in the #kubeslice channel on the Kubernetes Slack, where we hang out. Additionally, the Office Hours are intended to be as interactive as possible, and we would love for you to join us for the next one, to be held on 8th September 2022. To receive the invite in your inbox and stay updated with the latest goings-on in the project, join our Google Group.

Until next time!