
Prepare Clusters
Introduction

In this step, you will prepare your environment for cluster registration, setting up your command line environment and clusters for registration with KubeSlice.

Cluster Authentication
Note
If you have already retrieved your credentials for each cluster, you can continue to Cluster Context Switching below.
Note
This step must be performed on each cluster you will be registering with KubeSlice.

Before registering your clusters with KubeSlice, you must authenticate with each cloud provider that will be used in your installation. Each of the below commands will retrieve the relevant kubeconfig and add it to your default kubeconfig path.

Google Kubernetes Engine

The following information is required in order to retrieve your GKE kubeconfig:

Variable          Description
<cluster name>    The name of the cluster you would like to get credentials for.
<region>          The region the cluster belongs to.
<project id>      The project id that the cluster belongs to.

The below command will retrieve your GKE cluster kubeconfig and add it to your default kubeconfig path. Complete this step for each GKE cluster you would like to work with.

gcloud container clusters get-credentials <cluster name> --region <region> --project <project id>

Expected:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for <cluster name>
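
For example, with a hypothetical cluster named demo-cluster-1 in region us-east1 under project my-project-id (illustrative values, not from your environment):

gcloud container clusters get-credentials demo-cluster-1 --region us-east1 --project my-project-id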

For support with Google Kubernetes Engine, visit the official documentation.

Amazon Elastic Kubernetes Service

The following information is required in order to retrieve your EKS kubeconfig:

Variable          Description
<cluster name>    The name of the cluster you would like to get credentials for.
<region>          The region the cluster belongs to.

The below command will retrieve your EKS cluster kubeconfig and add it to your default kubeconfig path. Complete this step for each EKS cluster you would like to work with.

aws eks update-kubeconfig --region <region> --name <cluster name>

Expected:

Updated context arn:aws:eks:<region>:<id>:cluster/<cluster name> in <default kubeconfig path>

For support with Amazon Elastic Kubernetes Service, visit the official documentation.

IBM Cloud Kubernetes Service

The following information is required in order to retrieve your IKS kubeconfig:

Variable          Description
<cluster name>    The name of the cluster you would like to get credentials for.

The below command will retrieve your IKS cluster kubeconfig and add it to your default kubeconfig path. Complete this step for each IKS cluster you would like to work with.

ibmcloud ks cluster config --cluster <cluster name>
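
As an optional sanity check (assuming kubectl is installed), confirm that your active context now points at the IKS cluster:

kubectl config current-context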

For support with IBM Cloud Kubernetes Service, visit the official documentation.

Microsoft Azure Kubernetes Service

The following information is required in order to retrieve your AKS kubeconfig:

Variable               Description
<resource group name>  The name of the resource group the cluster belongs to.
<cluster name>         The name of the cluster you would like to get credentials for.

The below command will retrieve your AKS cluster kubeconfig and add it to your default kubeconfig path. Complete this step for each AKS cluster you would like to work with.

az aks get-credentials --resource-group <resource group name> --name <cluster name>
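
Once you have retrieved credentials for every cluster, each one should appear as a context in your default kubeconfig. A quick way to confirm this, assuming kubectl is installed, is to list all contexts:

kubectl config get-contexts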

For support with Microsoft Azure Kubernetes Service, visit the official documentation.

Labeling Avesha Gateway Nodes
Note
If a cluster contains only one node pool, follow the below instructions for Labeling Individual Nodes.
Note
This step must be performed on each cluster you will be registering with KubeSlice.

Google Kubernetes Engine

The following information is required in order to label the GKE cluster nodepools:

Variable         Description
<nodepool name>  The name of the nodepool being labeled.
<cluster name>   The name of the cluster the nodepool being labeled belongs to.
<region>         The Compute Engine region for the cluster the nodepool belongs to.
<zone>           The Compute Engine zone for the cluster the nodepool belongs to.

The below command will label the GKE cluster node pool:

gcloud container node-pools update <nodepool name> \
    --node-labels=avesha/node-type=gateway \
    --cluster=<cluster name> \
    [--region=<region> | --zone=<zone>]
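
The --region and --zone flags are mutually exclusive; pass whichever matches how the cluster was created. For example, labeling a hypothetical nodepool named gateway-pool on a regional cluster (illustrative names):

gcloud container node-pools update gateway-pool \
    --node-labels=avesha/node-type=gateway \
    --cluster=demo-cluster-1 \
    --region=us-east1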

Amazon Elastic Kubernetes Service

The following information is required in order to label the EKS cluster nodepools:

Variable          Description
<nodegroup name>  The name of the node group being labeled.
<cluster name>    The name of the cluster the node group being labeled belongs to.

The below command will label the EKS cluster node pool:

eksctl set labels --labels avesha/node-type=gateway -n <nodegroup name> --cluster <cluster name>
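
For example, with a hypothetical node group named gateway-nodes in a cluster named demo-cluster-2 (illustrative names):

eksctl set labels --labels avesha/node-type=gateway -n gateway-nodes --cluster demo-cluster-2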

IBM Cloud Kubernetes Service

The following information is required in order to label the IKS cluster nodepools:

Variable         Description
<workerpool id>  The id of the workerpool to update.
<cluster id>     The id of the cluster the workerpool being labeled belongs to.

The below command will label the IKS cluster workerpool:

ibmcloud ks worker-pool label set --cluster <cluster id> \
    --worker-pool <workerpool id> \
    --label avesha/node-type=gateway
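
For example, with a hypothetical cluster named my-iks-cluster and its default worker pool (illustrative names; --cluster also accepts the cluster id):

ibmcloud ks worker-pool label set --cluster my-iks-cluster \
    --worker-pool default \
    --label avesha/node-type=gateway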

Microsoft Azure Kubernetes Service

AKS node pool labels can only be set during node pool creation. The node pool must contain the label avesha/node-type=gateway. For instructions on creating a labeled node pool, visit the AKS documentation, or see the sketch below.
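
As a sketch, a labeled node pool can be added to an existing AKS cluster with az aks nodepool add; the resource group, cluster, and pool names below are illustrative:

az aks nodepool add \
    --resource-group demo-rg \
    --cluster-name demo-cluster-3 \
    --name gatewaypool \
    --node-count 1 \
    --labels avesha/node-type=gateway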

Labeling Individual Nodes

Note
We recommend using a dedicated nodepool and following the above instructions for labeling. If you must use a single node pool, the below steps will label individual nodes instead.

The following information is required in order to label an individual node:

Variable        Description
<cluster name>  Name of the cluster the node belongs to.
<node name>     Name of the node you will be labeling (will be fetched below).

First, switch contexts to the cluster that the node you would like to label belongs to:

kubectx <cluster name>

If you need to find out the <node name> of the node you wish to label, you can run the below command to return a list of nodes belonging to the cluster:

kubectl get nodes

Once you have the two required variables, you are able to run the below command to label your node:

kubectl label node <node name> avesha/node-type=gateway
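
For example, if kubectl get nodes returned a node named gke-demo-pool-1a2b-x9yz (an illustrative name), you would run:

kubectl label node gke-demo-pool-1a2b-x9yz avesha/node-type=gateway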

Verify Your Labels

Note
This step should be performed on each cluster you will be registering with KubeSlice.

To verify the label was set correctly, first switch to the context you wish to verify:

kubectx <cluster name>

Then run the below command to get all nodes with the avesha/node-type=gateway label:

kubectl get no -l avesha/node-type=gateway

If you successfully set your labels, you will get a list of the labeled nodes in the cluster.
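
The output should resemble the following; the node name, age, and version shown here are illustrative:

NAME                      STATUS   ROLES    AGE   VERSION
gke-demo-pool-1a2b-x9yz   Ready    <none>   2d    v1.22.8-gke.200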

Each gateway node should have an external IP address configured. To verify this, run:

kubectl get no -o wide
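
The EXTERNAL-IP column of the wide output should show an address for each gateway node. Alternatively, this jsonpath query (a sketch; adjust to your environment) prints only node names and their external IPs:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'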
Add the Avesha Helm Repository

For easy installation, Avesha provides self-hosted Helm charts which are required to register your clusters. Using the below commands, add the Avesha Helm repository to your local configuration, update the Helm repositories, then verify the repository was added.

Add the Helm chart to your local configuration:

helm repo add avesha https://nexus.aveshalabs.io/repository/avesha-helm/

Expected:

"avesha" has been added to your repositories

Update the repositories on your system with the below command:

helm repo update

Expected:

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "avesha" chart repository
Update Complete. ⎈Happy Helming!⎈

View the available Avesha charts using the command below to verify the repository was added successfully:

helm search repo avesha

Expected:

NAME                    CHART VERSION   APP VERSION   DESCRIPTION 
avesha/avesha-mesh      1.6.8           1.14.0        Avesha Mesh setup 
avesha/bookinfo         0.1.0                         A Helm chart for Kubernetes 
avesha/cert-manager     v1.6.1          v1.6.1        A Helm chart for cert-manager 
avesha/iperf            0.1.0                         A Helm chart for Kubernetes 
avesha/istio-base       1.9.0                         Helm chart for deploying Istio cluster resource...
avesha/istio-discovery  1.9.0                         Helm chart for istio control plane 
avesha/slice            1.5.0           1.5.0         Avesha Slice
Installing Istio
Note
For applications requiring Istio, this step must be performed on each cluster you will be registering with KubeSlice.

Istio is an open source service mesh that is frequently used to connect and secure microservices within a cluster. The below instructions will install Istio from the Avesha Helm repository charts.

First, switch to the cluster you will be installing Istio on:

kubectx <cluster name>

Each cluster will need an istio-system namespace to install the Istio helm charts in:

kubectl create namespace istio-system

Next, you will install the istio-base chart from the Avesha Helm repository added in the previous section:

helm install istio-base avesha/istio-base -n istio-system

And finally, you will install the istio-discovery chart from the Avesha helm repository added in the previous section:

helm install istiod avesha/istio-discovery -n istio-system
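
As an optional check that the control plane came up, list the pods in the istio-system namespace and confirm the istiod pod reaches the Running state:

kubectl get pods -n istio-system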
Note
You have successfully prepared your clusters for KubeSlice installation.
Next Steps

Great job! Now you will move on to registering your clusters with KubeSlice, one step closer to modernizing your infrastructure with Avesha.

Registering Clusters

