Trying out KubeSlice
Prianna Sharan

23 November, 2022

4 min read


Introduction

Introducing Sandbox: the pre-packaged playground for KubeSlice. Experience universal secure connectivity and rapid, easy networking in a ready-made environment, with no infrastructure requirements or prerequisites to worry about. It's built so everybody can play with KubeSlice. Register here:
www.kubeslice.io

Why KubeSlice?

KubeSlice creates a "virtual" flat, secure network for applications over existing infrastructure. Why do you need such a layer?

  1. To reduce the complexity of getting distributed workloads to communicate across Kubernetes clusters by creating a unified trusted domain.
  2. To reduce the time it takes to deploy distributed workloads.
  3. To make pod-to-pod networking more secure by segmenting clusters into isolated trusted domains.

But don't take our word for it: try it out yourself in the sandbox. This article takes you through the creation and deployment of an application using KubeSlice. The concept of a distributed, multi-region, multi-cloud, multi-cluster pod-to-pod network is integral to understanding the benefits of KubeSlice.

What we’re going to cover

  1. Connecting to the sandbox
  2. Using the KubeSlice-CLI to install two worker clusters and one controller cluster
  3. Using the KubeSlice-CLI to connect these clusters via a slice
  4. Verifying the ease of inter-cluster communication by using iPerf to send traffic between the front-end worker cluster and the back-end worker cluster

Logging into the Sandbox

https://community.aveshalabs.io/

Register with your credentials in the Avesha portal. You will receive an email with a .pem key file: your gateway into the playground. The key is valid for 4 hours, during which you are free to experiment in the Sandbox without limits. Download this file.
Run the following commands in your local command line to establish connectivity to the Sandbox.

  chmod 400 ~/kubeslice-playground-key.pem
  ssh -i ~/kubeslice-playground-key.pem ubuntu@3.86.25.253

Expected result:

  < Welcome to Ubuntu message >

You’ve now entered the playground.    
The playground comes with these components preinstalled:

  1. The infrastructure required to run over three clusters and deploy multiple slices
  2. The software (Helm, Kind, etc.) needed to deploy and interact with clusters
  3. The KubeSlice-CLI, which simplifies slice deployment
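
If you want to confirm the tooling before you start, a quick check like the sketch below should work. Exact subcommands and flags may vary by release, so treat this as an assumption rather than verified output:

  # Sanity-check the preinstalled tooling (exact flags may vary by release)
  kind version
  helm version --short
  kubeslice-cli --help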

Creating the worker clusters, controller cluster, and slice

Now it's time to set up the infrastructure so you can interact with the slice. The CLI's full demo will install all the necessary clusters for you and deploy the application on the slice. Run the command below to do so.

  kubeslice-cli install --profile=full-demo

A lot happens while the installation runs. Let's break it down. This command does the following:

  1. Creates three kind clusters: one controller cluster named ks-ctrl and two worker clusters named ks-w-1 and ks-w-2.
  2. Installs Calico networking on the controller and worker clusters.
  3. Downloads the open-source KubeSlice Helm charts.
  4. Installs the KubeSlice Controller on the ks-ctrl cluster.
  5. Creates a kubeslice-demo project namespace on the controller cluster.
  6. Registers the ks-w-1 and ks-w-2 worker clusters with this project.
  7. Installs the Slice Operator on the worker clusters.
  8. Deploys a demo slice on the worker clusters.
  9. Deploys the iPerf demo application onto the slice.
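
Once the installation finishes, you can sanity-check the result. The commands below are a minimal sketch: the cluster names follow from the steps above, the kind- context prefix and kubeconfig path match the verification command later in this article, and the kubeslice-system namespace is an assumption about where the worker components run.

  # List the kind clusters the demo created (expected: ks-ctrl, ks-w-1, ks-w-2)
  kind get clusters

  # Check the slice components on a worker cluster; the kubeslice-system
  # namespace is assumed here
  kubectl --context=kind-ks-w-1 --kubeconfig=kubeslice/kubeconfig.yaml \
    get pods -n kubeslice-system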

Verifying the Inter-Cluster Communication

One of the core tenets of KubeSlice is the simplicity of inter-cluster communication. Pods in different clusters can send data directly to each other without an intermediary, IP address translation, or extra configuration. This is where the "multi-region, multi-cloud, multi-cluster pod-to-pod network" truly shines.

The iPerf application constantly sends kilobytes of data between its front-end and back-end clusters, the clusters we just set up. The slice is all that's needed to establish communication between them: no more hours spent on IP address planning. Here's how you can simulate and experiment with two clusters communicating on a slice. To verify the iPerf connectivity, use the following command:

  /usr/local/bin/kubectl --context=kind-ks-w-2 --kubeconfig=kubeslice/kubeconfig.yaml \
    exec -it deploy/iperf-sleep -c iperf -n iperf -- \
    iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb

This returns a log of the traffic passing between the two clusters. Expected output:

 ------------------------------------------------------------
 Client connecting to iperf-server.iperf.svc.slice.local, TCP port 5201
 TCP window size: 45.0 KByte (default)
 ------------------------------------------------------------
 [  1] local 10.1.2.5 port 49188 connected with 10.1.1.5 port 5201
 [ ID] Interval       Transfer     Bandwidth
 [  1] 0.00-1.00 sec   640 KBytes  5.24 Mbits/sec
 [  1] 1.00-2.00 sec   512 KBytes  4.19 Mbits/sec
 [  1] 2.00-3.00 sec   512 KBytes  4.19 Mbits/sec
 [  1] 3.00-4.00 sec   640 KBytes  5.24 Mbits/sec
 [  1] 4.00-5.00 sec   512 KBytes  4.19 Mbits/sec
 [  1] 5.00-6.00 sec   640 KBytes  5.24 Mbits/sec
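
To watch the receiving side, you can tail the iPerf server's logs on the other worker cluster. This is a sketch: the deployment name iperf-server is inferred from the service name iperf-server.iperf.svc.slice.local used above, not confirmed by the demo output.

  # Tail the iPerf server logs on the other worker cluster
  # (deployment name iperf-server is assumed from the service name above)
  kubectl --context=kind-ks-w-1 --kubeconfig=kubeslice/kubeconfig.yaml \
    logs deploy/iperf-server -n iperf --tail=20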

Conclusion

You just set up three clusters and a slice, and deployed an application across them, in under five minutes. These clusters communicate without intermediary services or manual IP configuration. You now have roughly four more hours to play with KubeSlice as much as you want; once you're done, join our Slack channel to learn more about how KubeSlice can simplify application deployments.
