
Installing Slices
Introduction

In this step, you will configure and install a slice across your registered clusters. This is the final step to be completed before you can get started deploying your applications with KubeSlice.

Note
This step must be completed for each cluster that will participate in the slice. While the slice deployment .yaml files will be largely identical, the site location will change from cluster to cluster. This is denoted with an asterisk * in the table below, and with a comment in the provided template.
Creating the Slice .yaml File

<slice name> (String)
    Name of the slice you are installing. Each slice must have a unique name.

<slice display name> (String)
    Short description of the slice you are installing. This must be wrapped in quotes "".

<slice subnet> (String, /16 subnet in CIDR notation)
    Subnet that the pods in the KubeSlice overlay network will communicate over. Example: 192.168.0.0/16. This must be wrapped in quotes "".

<qos profile> (HIGH_PRIORITY_PROFILE, MED_PRIORITY_PROFILE, or LOW_PRIORITY_PROFILE)
    QoS profiles allow for traffic management within a slice as well as prioritization across slices. A QoS profile specifies the amount and priority of traffic that can be sourced between clusters. The profiles have upper limits of 25 MB, 15 MB, and 5 MB, respectively.

<site location>* (String: region or city)
    Either the cloud region or the city your cluster is located in; this can differ for each cluster belonging to a slice. Examples: us-east-1, us-west-2, boston, cupertino.

<enable isolation> (Bool)
    When set to true, namespace isolation is enforced on the slice, so that only the namespaces bound to the slice under applicationNamespaces can communicate over it.

applicationNamespaces (YAML List)
    Namespaces which should be bound to the slice. This is a standard YAML list containing <cluster name>:<namespace> or *:<namespace> key/value pairs (see the example below).

allowedNamespaces (YAML List)
    If <enable isolation> is set to true, you can still make exceptions to allow traffic from certain namespaces within a cluster that are not bound to the slice. This is a standard YAML list containing <cluster name>:<namespace> or *:<namespace> key/value pairs.

<cluster name> (String, or wildcard *)
    Name of the cluster being specified in either applicationNamespaces or allowedNamespaces.

<namespace> (String)
    Namespace being specified in either applicationNamespaces or allowedNamespaces.

<enable istio> (Bool)
    If Istio has been installed on your clusters, this value should be set to true.

<enable egress> (Bool)
    Set to true to use an istio-egress gateway for east-west traffic on your slice.

<egress gateway> (String)
    Name for the istio-egress gateway.

<enable ingress> (Bool)
    Set to true to use an istio-ingress gateway for east-west traffic on your slice.

<ingress gateway> (String)
    Name for the istio-ingress gateway.

Note
Changes to the namespaceIsolationProfile configurations are not permitted once the slice has been installed. Please include all needed namespaces here.
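
For example, a namespaceIsolationProfile that binds one namespace on every cluster and another on a single cluster might look like the following sketch (the cluster and namespace names here are illustrative, not defaults):

namespaceIsolationProfile:
  isolationEnabled: true
  applicationNamespaces:
    - '*:bookinfo'        # bind the bookinfo namespace on every cluster in the slice
    - worker-1:payments   # bind the payments namespace only on the worker-1 cluster
  allowedNamespaces:
    - '*:kube-system'     # allow traffic from kube-system everywhere, even with isolation on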

To create a slice, create a slice .yaml file for each cluster participating in the slice using the template below. All fillable fields are denoted by < > and are defined in detail in the table above. If there are any issues, please contact support@avesha.io.

slice:
  name: <slice name>
  displayName: "<slice display name>"
  subnet: "<slice subnet>"
  type: SLICE_TYPE_APPLICATION
  gatewayProvider:
    type: SLICE_GATEWAY_TYPE_OPEN_VPN
    caType: SLICE_CA_TYPE_LOCAL
    idP: SLICE_GATEWAY_IDP_COGNITO
    maxGateways: 256
    subnetSize: 256
    gatewayAutoCreate: true
    gatewayNamePrefix: avesha
  qosProfile: <qos profile>
  ipamType: SLICE_IPAM_TYPE_LOCAL
  site: <site location> # Can differ by cluster

  namespaceIsolationProfile:
    isolationEnabled: <enable isolation>
    applicationNamespaces:
      - <cluster name>:<namespace>
    allowedNamespaces:
      - <cluster name>:<namespace>

istio:
  enabled: <enable istio>

istio-egress:
  enabled: <enable egress>
  slice: <slice name>
  gateways:
    istio-egressgateway:
      name: <egress gateway>

istio-ingress:
  enabled: <enable ingress>
  slice: <slice name>
  gateways:
    istio-ingressgateway:
      name: <ingress gateway>
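
As an illustration, a completed file for a hypothetical slice named demo-slice, deployed to a cluster in us-east-1 with Istio enabled, might look like this (every name and value here is an example, not a default):

slice:
  name: demo-slice
  displayName: "Demo Slice"
  subnet: "192.168.0.0/16"
  type: SLICE_TYPE_APPLICATION
  gatewayProvider:
    type: SLICE_GATEWAY_TYPE_OPEN_VPN
    caType: SLICE_CA_TYPE_LOCAL
    idP: SLICE_GATEWAY_IDP_COGNITO
    maxGateways: 256
    subnetSize: 256
    gatewayAutoCreate: true
    gatewayNamePrefix: avesha
  qosProfile: HIGH_PRIORITY_PROFILE
  ipamType: SLICE_IPAM_TYPE_LOCAL
  site: us-east-1 # Can differ by cluster

  namespaceIsolationProfile:
    isolationEnabled: true
    applicationNamespaces:
      - '*:bookinfo'
    allowedNamespaces:
      - '*:kube-system'

istio:
  enabled: true

istio-egress:
  enabled: true
  slice: demo-slice
  gateways:
    istio-egressgateway:
      name: demo-slice-egress

istio-ingress:
  enabled: true
  slice: demo-slice
  gateways:
    istio-ingressgateway:
      name: demo-slice-ingress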
Applying the Slice .yaml File

<slice name>
    Name of the slice being installed.

<slice yaml>
    Path to the slice .yaml file created above.

<cluster name>
    Name of the cluster the slice will be installed on.

For each cluster in the slice, you will need to apply the cluster-specific slice .yaml file created above. To begin, switch contexts to the first cluster you would like to apply the slice .yaml file to:

kubectx <cluster name>
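
If you do not have kubectx installed, the equivalent built-in kubectl command is:

kubectl config use-context <cluster name>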

You can now use the KubeSlice chart from the Avesha Helm repository you added in a previous section to apply the slice .yaml file:

helm install <slice name> avesha/slice -n avesha-system -f <slice yaml>

You should see output similar to the following:

NAME: <slice name>
LAST DEPLOYED: Wed Feb 02 14:23:54 2022
NAMESPACE: avesha-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Slice Created Successfully
Name: <slice name>
Subnet: <slice subnet>
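
If your slice spans several clusters, you can wrap the two commands above in a small shell loop. A minimal sketch, assuming two clusters named worker-1 and worker-2, a slice named demo-slice, and per-cluster files named slice-worker-1.yaml and slice-worker-2.yaml (adjust all of these to your environment):

# Apply the cluster-specific slice file to each participating cluster
for cluster in worker-1 worker-2; do
  kubectx "$cluster"
  helm install demo-slice avesha/slice -n avesha-system -f "slice-${cluster}.yaml"
done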
Validating your Installation

Before moving on to installing the slice on your next cluster or onboarding applications, it is important to validate the slice was installed successfully.

Validating your Slice Installation

Use the command below to validate that the cluster has been successfully connected to the slice:

kubectl get slice -n avesha-system

Expected Output:

NAME           DISPLAY NAME              SUBNET          QOS PROFILE                  
<slice name>   <slice display name>      172.20.0.0/16   HIGH_PRIORITY_PROFILE
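
For a more detailed view of the slice object, including its full spec and status, you can also describe it:

kubectl describe slice <slice name> -n avesha-system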
Validating your SliceGateway Installation

Use the command below to validate that the slice gateway has been created on the cluster:

kubectl get slicegw -n avesha-system

Expected Output:

NAME         SUBNET          REMOTE SUBNET   REMOTE CLUSTER                GW STATUS
<slice name> 172.20.1.0/24   172.20.2.0/24   <remote cluster identifier>   SLICE_GATEWAY_STATUS_REGISTERED
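
The GW STATUS column should show SLICE_GATEWAY_STATUS_REGISTERED, as above. As an additional check, you can list the pods in the avesha-system namespace and confirm that the slice gateway pods are in the Running state:

kubectl get pods -n avesha-system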
Note
You have successfully installed a KubeSlice slice across your chosen clusters.
Next Steps

Now that you have successfully installed your slice, you are ready to begin onboarding your applications and deployments to the slice. Continue to the next section, Onboarding Applications, which guides you through onboarding an application deployment to a slice and setting up ServiceExports where necessary.

