Why Multi-Cluster?
Organizations undertaking modernization must support a hybrid deployment model in which data and workloads are often in different locations: on premises (data center or edge) and in the public cloud. There are several reasons for this, including (1) data provenance, (2) data security, (3) processing specialization, (4) data compliance, and (5) latency requirements for applications.
For compliance, data sometimes needs to reside in a specific geography to meet regulations and security standards. Certain public clouds offer advantages for particular kinds of processing, such as ads (Google), databases (Oracle), mail campaigns (Azure), bursting (AWS), and resiliency (IBM Cloud). To address these scenarios, workloads must be deployable across multiple cloud locations without breaking basic assumptions about latency or security for inter-service communication. Inter-service latency is therefore an important deployment consideration: respecting it preserves the functionality of existing services and avoids costly, lengthy application-rewrite projects.
Current approaches to inter-service communication across locations are fraught with difficulties: firewall configuration and the latency of API gateways in each direction. Firewalls must manage exceptions for all services in one large configuration that serves as the common entry point for all traffic at each location. Exceptions for each new service and location must be specified separately at each of these locations, resulting in a number of (mostly manual) updates, each subject to the risk of misconfiguration: the classic "fat finger" problem.
Connectivity at each location follows the traditional North-South path, in which communication between services must pass through API gateways in each direction, and through firewalls, resulting in greater latency because traditional protocols must be translated or encapsulated into HTTP. A more direct, low-latency East-West connection is a better architecture, but it requires expensive (and hard-to-find) DevOps and NetOps resources to manage IP address overlaps, create secure tunnels, and configure service discovery and routing.
Clearly, there is a need for a more automated solution that simplifies inter-service communication across locations. The answer is an "application slice" (available in Avesha's KubeSlice product): a concept that offers automated, direct Layer 3 pod-to-pod connectivity as an overlay of secure (zero-trust) tunnels, with service discovery and routing, without the need to traverse API gateways or firewalls. Each slice is isolated by default, and exceptions need to be made only on a per-slice basis; this is easier to manage because a slice is defined to capture the needs of only one application or one logically related group of (distributed) endpoints. The slice automation lets individual app developers access these capabilities with a mere annotation in the application YAML, a great improvement over existing DevOps and NetOps best practices for easy-to-configure, secure, low-latency inter-service communication.
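To illustrate the developer experience described above, the sketch below shows what opting a workload into a slice via a single annotation might look like. The annotation key (`slice.example.io/slice-name`), the slice name, and the image are hypothetical placeholders for illustration only, not KubeSlice's actual API; consult the product documentation for the real onboarding mechanism.

```yaml
# Hypothetical sketch: opting a Deployment into an application slice.
# "slice.example.io/slice-name" and "payments-slice" are illustrative
# placeholders, not KubeSlice's actual annotation or API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
  annotations:
    slice.example.io/slice-name: payments-slice  # opt this app into the slice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-service
  template:
    metadata:
      labels:
        app: payments-service
    spec:
      containers:
        - name: payments-service
          image: registry.example.com/payments-service:1.0  # placeholder image
```

The point of the sketch is the scope of the change: the developer touches only the application manifest, while tunnel setup, service discovery, and routing across clusters are handled by the slice automation.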
Installation & how-to video: