Role of an application fabric in hybrid cloud
Raj Nair
Founder & CEO
16 September 2022
4 min read
The advent of the “cloud-native” age has enabled the creation of open-compute platforms that allow
individual workloads[1] to be run in different locations based on the properties that best meet business objectives: lower cost, data governance, closer proximity to end users, resiliency, or regulatory needs. However, the tooling to support this, from simple connectivity to workload isolation, is still lacking. In this writeup, we examine the underlying issues and develop the key concepts behind an application fabric.
Existing tools such as service meshes or application gateways are built primarily for web interactions
based on HTTP. Yet many workloads communicate with each other over non-HTTP protocols, and creating sidecars specifically for this purpose impedes the fluidity needed to place workloads at different locations. This is where an application fabric comes in. It is a super-cluster abstraction that extends the familiar notion of a cluster, a flat network enabling seamless application communication, into one that transcends all network boundaries and works across the Internet.
In short, an application fabric solves the issue of workload migration once and for all. A flat network is one in which there are no gateways, IP address overlaps[2], or translations to worry about. Traffic flows seamlessly from a pod in one cluster to a pod in another, faraway cluster. The “magic” of seamless connectivity is provided by an overlay that connects the two clusters and is created automatically at installation. Services in the remote cluster are discoverable just like local services, through a gateway pod. IP address overlaps are avoided by giving the fabric a single non-overlapping address range (with its own DNS) that may be reused multiple times in different application fabrics. This reuse is possible because an application fabric is a unit of tenancy: all communication inside an application fabric is controlled by RBAC (role-based access control) on the namespaces that have been added to it.
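To make this concrete, here is a minimal sketch of what publishing a service onto a fabric might look like. The resource kind, API group, and field names are illustrative assumptions rather than any specific product's API; the point is that a workload in another cluster reaches the service through the fabric's own DNS while the overlay handles the routing.

# Hypothetical manifest: publish the "payments" service from one cluster
# to every other cluster joined to the "demo-fabric" application fabric.
# Kind, API group, and fields are illustrative, not a real CRD.
apiVersion: fabric.example.io/v1alpha1
kind: ServiceExport
metadata:
  name: payments
  namespace: finance        # a namespace that was added to the fabric
spec:
  fabric: demo-fabric       # the unit of tenancy; scopes RBAC and addressing
  ports:
    - name: grpc            # non-HTTP traffic is carried as-is over the overlay
      port: 50051
      protocol: TCP

A client pod in any other cluster joined to demo-fabric would then reach the service at a fabric-scoped DNS name (for example, a hypothetical payments.finance.svc.demo-fabric.local), with the gateway pods and overlay routing the traffic transparently.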
Security is of course the next important aspect that needs to be addressed. Here again, the available
tools are based on application firewalls, which impose an S = 2(X·n)² problem, where X is the average number of workload pods in a cluster, n is the number of clusters that need to be connected, and S is the number of security assertions that need to be created and approved by the compliance team: a lot of work. The application fabric, on the other hand, solves this by using an overlay built from NIST-compliant, automated VPN tunnels that need to be certified once and can be reused multiple times.
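To put that in perspective, a modest deployment averaging 50 workload pods per cluster (X = 50) across 4 clusters (n = 4) already yields S = 2 × (50 × 4)² = 80,000 assertions to write and get approved, whereas the fabric's tunnels are certified once and simply reused.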
A hybrid deployment poses multiple challenges, including connectivity and security, that are solved by
an application fabric. Typically, hybrid deployments are used to meet regulatory requirements for financial entities such as banks and insurance companies: data simply cannot leave the premises unencrypted at any point. The security model of the application fabric ensures that the mTLS (mutual TLS) association between pods is maintained across the overlay, which is opaque to all entities along the way. This ensures high-integrity communication between workloads. The isolation provided by the application fabric also limits the blast radius of a compromised workload to the fabric's boundaries, so it cannot affect others; this is possible because an application fabric enforces fabric-wide resource limits. In addition, the application fabric naturally creates a boundary that limits the noise when viewing or monitoring the status of an application's entire deployment, which is extremely useful for troubleshooting problems and gauging the overall health of the deployment.
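As an illustration of how a fabric boundary caps blast radius, the sketch below applies a standard Kubernetes ResourceQuota to a namespace that belongs to the fabric; an application fabric would enforce equivalent limits across every namespace it spans, so a compromised or runaway workload cannot starve anything outside the fabric. The namespace and values here are hypothetical.

# Illustrative quota on a fabric member namespace (names and values are
# hypothetical). A fabric enforces limits like these in every namespace it
# spans, in every cluster, so a misbehaving workload stays contained.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-fabric-quota
  namespace: finance
spec:
  hard:
    requests.cpu: "8"            # total CPU requested by all pods in the namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"                   # capping pod count limits runaway scaling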
Finally, we at Avesha are creating an intelligent and futuristic application fabric with all of the capabilities described above. We call it KubeSlice. Onboarding an application onto an application fabric takes two easy steps: (1) identify the specific fabric via namespace membership (in the YAML file); and (2) redeploy the application pod. Everything else is automated and requires no support from the IT/platform team.
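As a rough sketch of those two steps (the resource kind and fields below are illustrative and not the exact KubeSlice schema): first, the namespace is declared a member of the fabric; then the application is redeployed so its pods are admitted onto the fabric overlay.

# Step 1: declare namespace membership in the fabric (illustrative schema,
# not the exact KubeSlice resource).
apiVersion: fabric.example.io/v1alpha1
kind: FabricConfig
metadata:
  name: demo-fabric
spec:
  namespaces:
    - finance        # every workload redeployed in this namespace joins the fabric

Step 2 is then a routine redeploy (for example, kubectl rollout restart deployment/payments -n finance in this hypothetical setup); the fabric components attach the new pods to the overlay automatically.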
Get Started with KubeSlice (Github)
Learn more about Avesha (Website)
Simplify your Hybrid/Multi-Cluster, Multi-Cloud Kubernetes deployments with KubeSlice (Blog)
[1] A workload is the term used to describe a microservice running in a pod in a cluster.
[2] IP address overlap is a major impediment in connecting clusters across regions or cloud providers
because clusters typically use private IP addresses that are independently assigned by providers.