Simplify Your Hybrid/Multi-Cloud Strategy
Prasad Dorbala

Co-Founder & CPO at Avesha

7 April 2022

7 min read


Introduction

A multi-cloud or hybrid strategy gives enterprises the freedom to use the best possible cloud native services for revenue-generating workloads. Organizations use hybrid cloud, multi-cloud, or polycloud Kubernetes deployments for use cases such as (a) cloud bursting, (b) disaster recovery, and (c) multi-site active-active deployments.

Each cloud provider has a unique value proposition for enterprise workloads. For example, Oracle Cloud, beyond its compute capabilities, brings best-in-class database management functionality, a surrounding ecosystem of analytics, and other cloud native products. The diverse value propositions offered by cloud providers make hybrid cloud, multi-cloud, and polycloud Kubernetes deployments an attractive option for enterprises. However, simplifying the deployment and connection of workloads across clusters remains a big challenge. With Avesha’s KubeSlice, in partnership with Oracle Cloud Infrastructure and its managed Oracle Kubernetes Engine (OKE), enterprises can solve their multi-cluster networking challenges.

KubeSlice creates a virtual cluster spanning multiple clusters, accelerating application velocity by removing the friction of network layout, multi-tenancy, and microservice reachability across clusters.


Proliferation of Kubernetes

In the context of hybrid and multi-cloud deployments, it is important to note that the adoption of containerized application architectures has been profoundly shaped by Kubernetes and its flexible platform. Kubernetes has become the de facto standard for orchestrating ephemeral containerized workloads, and this proliferation of microservices in enterprise environments has driven the growth of the Kubernetes ecosystem.

Challenges

Deploying and effectively managing Kubernetes at scale demands robust design processes and rigorous administrative discipline. Enterprises face decisions based on the type of workload: highly transactional, data-centric, or location-specific. Workload distribution also needs to take into account factors like latency, resiliency, compliance, and cost. In addition, the complexity of microservices has made interconnection the focal point of modern application architectures. Kubernetes handles these complexities well as long as you deploy all microservices in a single cluster and maintain trust between all the services running within it.

By default, Kubernetes treats resources such as CPU, memory, StorageClasses, and PersistentVolumes as shared objects in a cluster. Here’s where things get complex. Multiple teams deploying applications in the same cluster leads to painful marshaling of shared resources, including security concerns and resource contention from potentially “noisy” neighbors. Kubernetes offers a soft construct, the namespace, which provides isolation between applications, but managing and assigning namespace resources for teams with multiple applications leads to operational challenges that must be constantly governed. Natively, Kubernetes does not treat multi-tenancy as a first-class construct for users and resources.
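To make the "soft isolation" point concrete, here is a minimal sketch using the official Kubernetes Python client: it carves out a namespace for a team and attaches a ResourceQuota so one tenant cannot starve its neighbors of shared CPU and memory. The namespace name and quota values are made-up examples, and a reachable cluster with a valid kubeconfig is assumed.

```python
# Minimal sketch: a "soft" tenant boundary via a namespace plus a ResourceQuota.
# Names and limits are illustrative; assumes a valid kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

TEAM_NS = "team-a"                 # hypothetical team namespace

# 1. Namespace: Kubernetes' soft isolation boundary.
core.create_namespace(body={
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": TEAM_NS, "labels": {"team": "a"}},
})

# 2. ResourceQuota: cap the shared CPU/memory this tenant may claim,
#    limiting the blast radius of "noisy neighbor" workloads.
core.create_namespaced_resource_quota(
    namespace=TEAM_NS,
    body={
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "team-a-quota"},
        "spec": {"hard": {"requests.cpu": "8", "requests.memory": "16Gi", "pods": "50"}},
    },
)
```

Per-namespace quotas and RBAC help within a single cluster, but they do not extend tenancy or connectivity across clusters, which is exactly the gap described below.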

Mature deployments – the challenges grow

As organizations reach maturity in running Kubernetes, they expand their ecosystems to include multi-cluster deployments. These deployments may be hosted in a single data center or distributed across data centers and hyperscalers’ clouds. Enterprises choose multi-cluster environments for reasons such as team boundaries, latency sensitivity (where applications need to be closer to customers), geographic resiliency, and data jurisdiction (regulations such as GDPR and the California Consumer Privacy Act, CCPA, that prevent user data from crossing geographic boundaries). As the deployment of applications across clusters increases, so does the need for those applications to reach back to applications in other clusters.

Platform teams face tedious operational management challenges when providing infrastructure to application developers: (1) extending the construct of namespace sameness to multi-cluster deployments while maintaining tenancy, (2) governing cluster resources, and (3) keeping cluster configuration consistent without environmental drift.

Kubernetes Networking – the ultimate challenge

Traditional multi-cluster networking for Kubernetes deployments is no small feat. While we are used to defining domains and firewalls at the places where we need boundaries and control, mapping that onto Kubernetes is challenging, and you need flexibility beyond the namespace. Connecting workloads across cluster boundaries requires networking constructs that Kubernetes does not provide natively.

KubeSlice – the efficient hybrid/multi-cloud cluster connectivity & management solution

Avesha’s KubeSlice combines Kubernetes network services and manageability in a framework to accelerate application deployment in a Kubernetes environment. KubeSlice achieves this by creating logical application “Slice” boundaries which allow pods and services to communicate seamlessly.

Let’s take a deeper look at how KubeSlice simplifies the management and operation of Kubernetes environments.

KubeSlice brings manageability, connectivity, and governance into one framework for deployments that need tenancy within a cluster, and it extends that tenancy across clusters, whether they are in data centers, in a single cloud, or across multiple clouds.

KubeSlice is driven by a Custom Resource Definition (CRD) operator that defines the “Slice” construct, which is analogous to a tenant. A tenant can be defined as (1) a set of applications that needs to be isolated from the traffic of other applications, or (2) a team that needs different security secrets and requires strict isolation from others who have access to applications.
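Because the slice is expressed as a custom resource, it can be created and versioned like any other Kubernetes object. The sketch below is illustrative only: the group, version, kind, and spec fields are hypothetical placeholders chosen for the example, not the actual KubeSlice CRD schema, which is defined by the KubeSlice operator and its documentation.

```python
# Illustrative sketch: registering a tenant "slice" as a custom resource.
# The group/version/plural and spec fields below are hypothetical placeholders,
# NOT the official KubeSlice schema; consult the KubeSlice docs for the real CRD.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

slice_object = {
    "apiVersion": "example.avesha.io/v1alpha1",   # hypothetical group/version
    "kind": "Slice",                              # hypothetical kind
    "metadata": {"name": "team-a-slice"},
    "spec": {
        # The tenant: namespaces whose applications should be isolated together
        "namespaces": ["team-a", "team-a-batch"],
        # Clusters across which this tenant's connectivity is extended
        "clusters": ["oke-phoenix", "onprem-dc1"],
    },
}

crd_api.create_cluster_custom_object(
    group="example.avesha.io",      # hypothetical
    version="v1alpha1",
    plural="slices",
    body=slice_object,
)
```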

KubeSlice brings three types of functions to the pods within a slice: Kubernetes Services, Network Services, and Multitenancy & Isolation.

Kubernetes Services: KubeSlice governs namespace management and application isolation by managing Kubernetes objects, namespaces, and RBAC rules for better operational efficiency. KubeSlice enables sharding, or “slicing,” of clusters for a set of environments, teams, or applications that expect to be reasonably isolated while sharing common compute resources and the Kubernetes control plane.

Network Services: KubeSlice preserves the first principle of Kubernetes, i.e., the ability for any pod to talk to any other pod in a cluster, through the creation of slices, which enable seamless pod-to-pod communication across clusters (see the reachability sketch after this list). The KubeSlice slice interconnect also solves the complex problem of overlapping IP addresses between cloud providers, data centers, and edge locations.

Multitenancy & Isolation: KubeSlice enables the creation of multiple logical Slices in a single cluster or a group of clusters to address true isolation all the way from the network to the application domain.
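To make the network-services point concrete: from the application’s perspective, a service exported onto the same slice from another cluster should be reachable by name, as if it were local. The hostname below is a hypothetical slice-scoped DNS name used purely for illustration; the actual service-discovery naming scheme is defined by the KubeSlice documentation.

```python
# Minimal reachability sketch: call a service that runs in another cluster
# but is exported onto the same slice. The DNS name is a hypothetical example.
import requests

# Hypothetical slice-scoped name for an "inventory" service in the remote
# (e.g., on-prem) cluster; no public IP or per-cluster firewall rule is needed
# from the application's point of view.
REMOTE_SERVICE = "http://inventory.team-a-slice.example.local:8080/healthz"

resp = requests.get(REMOTE_SERVICE, timeout=5)
print(resp.status_code, resp.text)
```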

Use Cases

KubeSlice, because of its simple deployment and robust capabilities, can be the platform of choice for the following enterprise use cases:


1. Isolation for enterprise teams: Each team operates multiple workloads, such as services or batch jobs. These workloads frequently need to communicate with each other and require different preferential treatment. KubeSlice enables isolation for such a set of applications by defining compute resources (especially GPU resources) dedicated to each team.

2. Single operator for a multi-customer enterprise (aka a SaaS provider).

3. Hybrid deployment: Enterprises that have a data-centric application in a data center (e.g., an Oracle database), want to keep an instance of the database in the data center for compliance reasons, and run the rest of their workloads in managed OKE (see the connectivity sketch after this use-case list).

4. Cloud bursting: Per the Flexera 2021 State of the Cloud report, as summarized by Accenture, 31% of enterprises are looking for hybrid solutions for workload bursting (cloud bursting). Enterprises may need extra capacity on demand while retaining connectivity back to the data store in the data center, or web properties running in the cloud (OCI) may need to reach back to the data center for database access. This strategy underscores the importance of multi-cluster networking.

5. Multi-cloud deployment: According to a report by Jean Atelsek at 451 Research, 76% of companies are adopting multi-cloud and hybrid-cloud approaches. In a similar Accenture survey, 45% of enterprises cited ‘data integration between clouds’ as one of their use cases for multi-cloud architecture. This applies to enterprises that want to take advantage of services like MySQL HeatWave, e-business suites, or Oracle Autonomous Database in OCI while running workloads in GCP, Azure, AWS, or any other cloud provider. Such an approach often requires a multi-cluster, multi-cloud pod-to-pod networking strategy to ensure seamless communication between services.

6. Other use cases: The integration of multi-cluster Kafka systems can enhance data streaming capabilities across these diverse environments. Partial migration of workloads can also be implemented for polycloud deployments.
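For the hybrid-deployment use case above (item 3), the application can stay oblivious to where the database physically runs. The sketch below uses the python-oracledb driver with a placeholder hostname that is assumed to be resolvable and reachable from the OKE workload over the slice; the credentials, DSN, and table name are likewise placeholders.

```python
# Hypothetical sketch: an OKE-hosted workload reaching back to an Oracle database
# that stays in the data center for compliance. Hostname, credentials, and service
# name are placeholders; cross-cluster reachability is assumed to be provided by the slice.
import os
import oracledb

conn = oracledb.connect(
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    dsn="oracle-db.datacenter.example.local:1521/ordersdb",  # placeholder DSN
)

with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")   # placeholder table
    print("orders:", cur.fetchone()[0])

conn.close()
```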

To address the challenges of cluster sprawl, organizations must adopt strategies that allow for efficient resource utilization and management across multiple clusters. This includes understanding the intricacies of multi-cluster CNI setups and ensuring that a clear multi-cluster networking strategy is in place.

In conclusion, as enterprises continue to adopt Kubernetes and expand their deployments across multiple clusters and clouds, it is imperative to have a clear strategy in place. This includes understanding the challenges and benefits of hybrid cloud, multi-cloud, and polycloud Kubernetes deployments and ensuring efficient pod-to-pod networking. With tools like KubeSlice, organizations can simplify these complexities and ensure seamless operations across their Kubernetes environments.
