Integrating Avesha's KubeSlice with Dell PowerFlex for Multi-Cluster Connectivity
Ryan Wallner
Lead Developer Advocate, Dell
Rob Croteau
Director of Enablement, Avesha
8 November 2023
10 min read
This blog is written as a joint effort between Ryan Wallner from Dell Technologies and Rob Croteau of Avesha.
In today's rapidly evolving tech landscape, the ability to seamlessly connect and manage multiple Kubernetes clusters is paramount. This is where Avesha's KubeSlice comes into play, offering a robust solution for multi-cluster connectivity. In our recent endeavors, we explored the integration of KubeSlice with Dell PowerFlex storage across multiple Kubernetes environments.
Dell, with its forward-thinking approach, is keen on establishing a robust ecosystem around its Cloud Platforms, such as its recent release of APEX Cloud Platform (ACP) for OpenShift. The goal of the project outlined in this blog post was to understand the value-add Avesha brings to the table in Kubernetes environments, especially in the context of ACP platforms and use cases.
In our quest to delve deep into the capabilities of multi-cluster connectivity, we've embarked on two distinct yet equally compelling experiments. The first experiment is set within the vast expanse of AWS (Amazon Web Services), where we aim to demonstrate multi-cluster connectivity across regions. This endeavor is amplified by harnessing the robust capabilities of Dell's PowerFlex platform, further enriched by the integrative strengths of Dell’s CSI (Container Storage Interface) and CSM (Container Storage Modules).
For our second experiment, we venture into a more diverse setup, bridging the world of Google's GKE cluster with an OpenShift cluster configured in a hybrid configuration. This experiment is not just about connectivity; it's about identity and access. By integrating Google's IDP and leveraging OpenShift's OIDC, we aim to showcase KubeSlice's prowess in RBAC controls and its ability to segment workloads and teams effectively.
In our first experiment with multi-cluster connectivity, the setup involving two OpenShift clusters on VMs in AWS, connected cross-region, stood out prominently. This configuration showcased the power of seamless inter-region communication without the need for traditional AWS networking adjustments, such as peering or transit gateways. Here's a detailed breakdown:
AWS Cross-Region Connectivity:
In the dynamic world of Kubernetes, the KubeSlice Manager UI stands out as a game-changer. Crafted for simplicity and efficiency, this interface streamlines the process of connecting multiple Kubernetes clusters. With just a few clicks, users can effortlessly link clusters and deploy slices across them. This ease of deployment empowers customers to strategically position their workloads in locations that best suit their operational needs. Beyond its connectivity prowess, the UI offers a comprehensive dashboard for monitoring, resource management, and advanced networking functionalities. By making complex tasks feel straightforward, the KubeSlice Manager UI ensures that users can fully leverage the versatility of Kubernetes without the usual intricacies.
Using the KubeSlice Manager UI, users can seamlessly create an application slice and onboard namespaces.
Slice Creation:
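Behind the UI, a slice is defined by a SliceConfig resource applied on the KubeSlice controller cluster. The sketch below shows the general shape of that resource; the slice name, project namespace, cluster names, and subnet are placeholders, and the exact field names should be verified against the KubeSlice version you have installed.

```yaml
# SliceConfig applied on the KubeSlice controller cluster
# (slice name, project namespace, clusters, and subnet are placeholders)
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: demo-slice
  namespace: kubeslice-demo-project
spec:
  sliceSubnet: 10.1.0.0/16        # overlay subnet used across the slice
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - aws-us-east-1
    - aws-us-west-2
```

The UI generates and applies an equivalent resource when you create a slice with a few clicks.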
Add namespace to Slice:
Upon adding the namespace(s), you can configure namespace sameness, which ensures consistent naming and configuration across multiple clusters for unified management.
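Onboarding a namespace with sameness corresponds to an entry in the SliceConfig's namespace isolation profile. A sketch of the relevant excerpt, with a placeholder namespace name:

```yaml
# Excerpt of the SliceConfig spec: onboarding a namespace with
# "sameness" across every cluster attached to the slice
# (the namespace name below is a placeholder)
spec:
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: demo-app       # "*" onboards it on all slice clusters
        clusters:
          - "*"
    isolationEnabled: false       # set true to enforce namespace isolation
```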
Then, with the kubectl command, you can deploy the application onto the slice. In this case, the application is a Postgres Database using a PowerFlex volume storage class and a NodeJS web front end to interact with the database.
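The Postgres data volume is claimed against a PowerFlex-backed storage class provisioned by Dell's CSI driver. A minimal sketch of such a claim; the storage class name "vxflexos-sc" and namespace are placeholders for whatever your CSI PowerFlex installation created:

```yaml
# PVC for the Postgres data volume, backed by a PowerFlex CSI
# storage class (class name and namespace are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: demo-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vxflexos-sc
  resources:
    requests:
      storage: 8Gi
```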
Once a service export is created, KubeSlice automatically imports the service into all connected clusters within that slice.
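A ServiceExport for the Postgres service might look like the sketch below; names are placeholders, and the slice DNS convention (commonly `<service>.<namespace>.svc.slice.local`) should be confirmed against your KubeSlice version:

```yaml
# ServiceExport for the Postgres service; once applied, KubeSlice
# imports it into every other cluster on the slice, where it is
# reachable at a slice DNS name
apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: postgres
  namespace: demo-app
spec:
  slice: demo-slice
  selector:
    matchLabels:
      app: postgres
  ports:
    - name: tcp
      containerPort: 5432
      protocol: TCP
```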
Using iperf on Kubernetes provides a tool to measure network bandwidth performance between pods, helping diagnose connectivity and speed issues within the cluster. iperf is set up with a server pod on one cluster and a client (sleep) pod on another. The diagrams below illustrate the status of iperf. They also depict the process of accessing the sleep pod and initiating a connection to the server pod using the slice's DNS name to test communication.
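The server side of that setup can be sketched as the pod below; the image, namespace, and slice DNS name in the comment are illustrative, not the exact ones used in our test:

```yaml
# iperf server pod on cluster A; a sleep (client) pod on cluster B
# then runs something like:
#   iperf3 -c iperf-server.iperf.svc.slice.local -i 1
# (image, namespace, and the slice DNS name are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: iperf-server
  namespace: iperf
  labels:
    app: iperf-server
spec:
  containers:
    - name: iperf
      image: networkstatic/iperf3
      args: ["-s"]              # run in server mode
      ports:
        - containerPort: 5201   # iperf3 default port
```

Exporting the server's service onto the slice makes it resolvable from the client cluster by its slice DNS name.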
KubeSlice's Quality of Service (QoS) rate limiting feature stands as a testament to its advanced capabilities in ensuring consistent and optimal service delivery. This feature allows administrators to set specific bandwidth limits for different services or workloads, ensuring that no single service monopolizes the network resources. By dynamically allocating bandwidth based on predefined rules, KubeSlice ensures that critical services receive the necessary resources they need, even during peak times. This not only guarantees smooth and uninterrupted service delivery but also provides a mechanism to prevent network congestion, ensuring that all services within the cluster operate at their peak efficiency.
In the displayed iperf test screenshot above, one can observe the bandwidth fluctuating between 93 and 209 Mbits/sec, ultimately settling at 134 Mbits/sec.
To further explore KubeSlice's capabilities, we'll introduce a constraint: a high watermark of 20 Mbits/sec on the slice. To achieve this, we'll modify the slice settings, adjusting the QoS profile to cap the bandwidth at 20 Mbits/sec.
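In SliceConfig terms, that cap lives in the QoS profile. A sketch of the relevant excerpt; field names follow the KubeSlice documentation at the time of writing and should be verified against your installed version:

```yaml
# QoS excerpt of the SliceConfig: capping slice bandwidth at
# 20 Mbits/sec (20000 Kbps); the guaranteed value and DSCP class
# below are illustrative choices
spec:
  qosProfileDetails:
    queueType: HTB
    priority: 1
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 20000     # high watermark: 20 Mbits/sec
    bandwidthGuaranteedKbps: 10000
    dscpClass: AF11
```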
Upon executing the iperf test again, we'll closely monitor the traffic behavior:
The test results show the bandwidth capped at 20 Mbits/sec, making it evident that KubeSlice's QoS rate limiting feature offers precise control over bandwidth and ensures that resources are utilized optimally. This level of granularity not only guarantees consistent performance but also underscores the platform's adaptability and efficiency in diverse scenarios.
In our AWS multi-cluster communication test, the seamless integration and performance were evident. By leveraging platforms like Dell PowerFlex storage in tandem with Kubernetes, and harnessing the capabilities of Dell's CSI and CSM integrations, we were able to achieve optimal storage and networking efficiencies. KubeSlice was the catalyst in the configuration, effortlessly bridging the clusters and illuminating the dynamic capabilities of multi-cluster deployments.
In our quest to explore the capabilities of multi-cluster Kubernetes deployments, we embarked on a unique experiment that brought together the strengths of Google Cloud's GKE and an on-premise OpenShift cluster. This experiment was not just about connectivity; it was about showcasing the power of integration, resource management, security, and team segregation in a hybrid environment. Here's a deep dive into our findings:
1. Multi-Cluster Connectivity:
2. Workload Resource Management:
KubeSlice's resource management empowers users to allocate and monitor resources at the slice level, ensuring optimal performance and efficient utilization across multi-cluster deployments.
4. Network Policies:
KubeSlice's network policy isolation ensures granular control over pod communication, enhancing security and data integrity within multi-cluster environments.
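KubeSlice manages its own policies when isolation is enabled on a slice, but the effect is comparable to a standard Kubernetes NetworkPolicy like the sketch below, which restricts pods in a namespace to same-namespace ingress only (names are placeholders):

```yaml
# Illustration of the kind of isolation a slice-level policy
# enforces, expressed as a vanilla Kubernetes NetworkPolicy:
# pods in demo-app accept ingress only from the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-namespace
  namespace: demo-app
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # allow only pods in this namespace
```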
6. Integration with Google's IDP:
OpenShift's OAuth configuration offers versatile authentication options; while we opted for Google IDP in our setup, it seamlessly supports a wide range of other identity providers for flexible integration.
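Configuring Google as the identity provider is done through the cluster's OAuth resource. A sketch, following OpenShift's documented OAuth configuration; the client ID, secret name, and hosted domain are placeholders for your own Google OAuth credentials:

```yaml
# OpenShift cluster OAuth configured with Google as an identity
# provider (clientID, secret name, and hostedDomain are placeholders;
# the referenced Secret lives in the openshift-config namespace)
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: google
      mappingMethod: claim
      type: Google
      google:
        clientID: "<your-client-id>.apps.googleusercontent.com"
        clientSecret:
          name: google-client-secret
        hostedDomain: "example.com"
```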
7. Team Segregation:
KubeSlice offers a seamless process for role creation, allowing not only the definition of new roles but also the import of pre-existing ones, ensuring flexibility and continuity in multi-cluster environments.
KubeSlice streamlines role assignment, ensuring that users and teams are granted precise permissions tailored to their responsibilities within the multi-cluster environment.
Dell Role Assignment: User rob.croteau@aveshasystems.com has a role assignment added.
Engineering Role Assignment: User rob.croteau@aveshasystems.com has NOT been added.
Slice Namespace Assignments:
KubeSlice facilitates namespace assignment, maintaining namespace sameness across clusters, ensuring consistency and simplifying multi-cluster management.
Configured with namespace sameness across both clusters.
Logged in as user rob.croteau@aveshasystems.com. This user is only assigned a role to access the dell slice with the dell-dev-space configured.
Attempting to list pod resources in the “default” namespace of the cluster:
Roberts-MacBook-Pro-2 .kube % k get pods -n default
Error from server (Forbidden): pods is forbidden: User
"rob.croteau@aveshasystems.com" cannot list resource "pods" in API
group "" in the namespace "default"
Result: Action Forbidden
Attempting to list pod resources in the “engineering” namespace of the cluster on the engineering slice:
Roberts-MacBook-Pro-2 .kube % k get pods -n engineering
Error from server (Forbidden): pods is forbidden:
User "rob.croteau@aveshasystems.com" cannot list resource "pods" in API
group "" in the namespace "engineering"
Result: Action Forbidden
Attempting to list pod resources in the “dell-dev-space” namespace of the cluster on the dell slice:
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
NAME READY STATUS RESTARTS AGE
nginx 2/2 Running 0 5m1s
Result: Action Permitted
Attempting to delete a pod resource in the “dell-dev-space” namespace of the cluster on the dell slice:
Roberts-MacBook-Pro-2 .kube % k delete pod nginx -n dell-dev-space
pod "nginx" deleted
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
No resources found in dell-dev-space namespace.
Result: Action Permitted
Attempting to create a pod resource in the “dell-dev-space” namespace of the cluster on the dell slice:
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
No resources found in dell-dev-space namespace.
Roberts-MacBook-Pro-2 .kube % kubectl run nginx --image=nginx -n dell-dev-space
pod/nginx created
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
NAME READY STATUS RESTARTS AGE
nginx 0/2 Init:0/1 0 5s
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
NAME READY STATUS RESTARTS AGE
nginx 0/2 Init:0/1 0 8s
Roberts-MacBook-Pro-2 .kube % k get pods -n dell-dev-space
NAME READY STATUS RESTARTS AGE
nginx 2/2 Running 0 15s
Result: Action Permitted
Attempting to create a pod resource in the “engineering” namespace of the cluster on the engineering slice:
Roberts-MacBook-Pro-2 .kube % kubectl run nginx --image=nginx -n \
engineering
Error from server (Forbidden): pods is forbidden: User
"rob.croteau@aveshasystems.com" cannot create resource "pods" in API
group "" in the namespace "engineering"
Result: Action Forbidden
In concluding our hybrid experiment, it's evident that KubeSlice's capabilities, when combined with the strengths of both Google Cloud's GKE and an on-premise OpenShift cluster, offer a transformative approach to multi-cluster management. This synergy not only streamlines operations but also paves the way for innovative deployment strategies, setting a new benchmark for hybrid Kubernetes deployments.
In our comprehensive exploration spanning both the AWS and hybrid experiments, the collaboration between Avesha's KubeSlice and Dell PowerFlex storage stood out as a valued integration. These experiments highlighted the seamless integration capabilities of KubeSlice, further amplified by the robustness of Dell's CSI and CSM integrations. As businesses evolve and expand their multi-cluster strategies, the combined strengths of KubeSlice and Dell PowerFlex will undeniably be at the forefront, ensuring cohesive, secure, and efficient operations across varied deployment scenarios.