
Application: iPerf

Introduction

iPerf is a tool commonly used to measure network performance and perform network tuning. The iPerf application consists of two services: iperf-sleep (the client) and iperf-server. This guide walks through installing the iperf-sleep and iperf-server services on two clusters within a KubeSlice configuration. You will then use the installed iPerf tool to verify cross-cluster communication over the KubeSlice.

Prerequisites

To install iPerf, you must first have a KubeSlice configuration with two or more clusters completely installed. If you still need to install KubeSlice, see the KubeSlice installation guide.
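
As an optional sanity check, confirm that both clusters are reachable from your workstation before continuing. The placeholders below stand for your own context names:

kubectx #Lists the available contexts; both clusters should appear.
kubectl --context=<sleep cluster> get nodes
kubectl --context=<server cluster> get nodes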

Deploying iPerf

In this tutorial, iperf-sleep and iperf-server will each be deployed to their own cluster in the KubeSlice configuration. The cluster used for iperf-sleep will be referred to as <sleep cluster>, and the cluster used for iperf-server will be referred to as <server cluster>.

iPerf Sleep

Using the template below, create a deployment file named iperf-sleep.yaml. All fields in the template will remain the same except for <slice name>, which must be replaced with the name of your KubeSlice configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf-sleep
  namespace: iperf
  labels:
    app: iperf-sleep
  annotations:
    avesha.io/slice: <slice name> #Replace Slice Name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf-sleep
  template:
    metadata:
      labels:
        app: iperf-sleep
    spec:
      containers:
      - name: iperf
        image: mlabbe/iperf
        imagePullPolicy: Always
        command: ["/bin/sleep", "3650d"]
      - name: sidecar
        image: nicolaka/netshoot
        imagePullPolicy: IfNotPresent
        command: ["/bin/sleep", "3650d"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
          allowPrivilegeEscalation: true
          privileged: true

Before applying the iperf-sleep.yaml, ensure you are targeting the cluster that will be used for the iperf-sleep service:

kubectx <sleep cluster>

Create the iperf namespace to install the iperf-sleep service in:

kubectl create ns iperf

Lastly, apply the iperf-sleep.yaml deployment:

kubectl apply -f iperf-sleep.yaml -n iperf

Expected:

deployment.apps/iperf-sleep created
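
Optionally, confirm that the iperf-sleep pod has started before moving on; the label below comes from the deployment template above:

kubectl get pods -n iperf -l app=iperf-sleep

The pod should report a Running status. The READY container count may vary depending on which sidecars are injected into pods on the slice.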

iPerf Server

Using the template below, create a deployment file named iperf-server.yaml. All fields in the template will remain the same except for the two <slice name> instances, which must be replaced with the name of your KubeSlice configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf-server
  namespace: iperf
  labels:
    app: iperf-server
  annotations:
    avesha.io/slice: <slice name> #Replace Slice Name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf-server
  template:
    metadata:
      labels:
        app: iperf-server
    spec:
      containers:
      - name: iperf
        image: mlabbe/iperf
        imagePullPolicy: Always
        args:
          - '-s'
          - '-p'
          - '5201'
        ports:
        - containerPort: 5201
          name: server
      - name: sidecar
        image: nicolaka/netshoot
        imagePullPolicy: IfNotPresent
        command: ["/bin/sleep", "3650d"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
          allowPrivilegeEscalation: true
          privileged: true
---
apiVersion: mesh.avesha.io/v1beta1
kind: ServiceExport
metadata:
  name: iperf-server
  namespace: iperf
spec:
  slice: <slice name> #Replace Slice Name
  selector:
    matchLabels:
      app: iperf-server
  meshType: none
  ingressEnabled: false
  ports:
  - name: tcp
    containerPort: 5201
    protocol: TCP

Before applying the iperf-server.yaml, ensure you are targeting the cluster that will be used for the iperf-server service:

kubectx <server cluster>

Create the iperf namespace to install the iperf-server service in:

kubectl create ns iperf

Lastly, apply the iperf-server.yaml deployment:

kubectl apply -f iperf-server.yaml -n iperf

Expected:

deployment.apps/iperf-server created
serviceexport.mesh.avesha.io/iperf-server created
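
Optionally, while still targeting the server cluster, confirm that the ServiceExport object was created:

kubectl get serviceexport -n iperf
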
Verifying Your iPerf Installation

To verify our iPerf installation, we will first switch contexts to the cluster where iperf-sleep.yaml was applied:

kubectx <sleep cluster>

Verifying the ServiceExport and ServiceImport

Using the below command, verify the iperf-server service was imported successfully from the server cluster:

kubectl get si -n iperf

Expected:

NAME          SLICE         PORT(S)   ENDPOINTS  STATUS
iperf-server  <slice name>  5201/TCP  1          READY

Getting the Service DNS Name

Next, run the below command to describe the iperf-server service and retrieve the short and full DNS names for the service. We will use the short DNS name later to verify inter-cluster communication:

kubectl describe si iperf-server -n iperf | grep "Dns Name:"

Expected:

Dns Name: iperf-server.iperf.svc.slice.local #Short DNS Name; use this name in the connectivity test below.
Dns Name: <iperf server service>.<cluster identifier>.iperf-server.iperf.svc.slice.local #Full DNS Name
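
If you would like to confirm that the short DNS name resolves before running the test, you can query it from the netshoot sidecar in the sleep pod. This optional check assumes the container name sidecar from the deployment above; substitute the short DNS name you retrieved:

kubectl exec -it deploy/iperf-sleep -c sidecar -n iperf -- nslookup <short iperf-server DNS Name>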

Verifying Inter-Cluster Communication

Using the below command, list the pods in the iperf namespace to get the full name of the iperf-sleep pod:

kubectl get pods -n iperf

Your output should look similar to the below, but your pod name identifier suffix will differ:

NAME                          READY   STATUS   RESTARTS  AGE
iperf-sleep-c4b96d6b9-wrdh2   4/4     Running  0         21m
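
If you prefer not to copy the pod name by hand, you can capture it in a shell variable instead; the label app=iperf-sleep comes from the deployment above:

SLEEP_POD=$(kubectl get pods -n iperf -l app=iperf-sleep -o jsonpath='{.items[0].metadata.name}')
echo $SLEEP_POD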

Using the pod name you just retrieved, exec into the iperf-sleep pod with the below command:

kubectl exec -it <full iperf-sleep pod name> -c iperf -n iperf -- sh

Once attached to the pod, use the short DNS Name retrieved above to connect to the server from the sleep pod:

iperf -c <short iperf-server DNS Name> -p 5201 -i 1 -b 10Mb

If the iperf-sleep pod is able to reach the iperf-server pod across clusters, you should see similar output to that below:

------------------------------------------------------------
Client connecting to iperf-server.iperf.svc.slice.local, TCP port 5201
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 1] local 172.20.1.5 port 46272 connected with 172.20.2.5 port 5201
[ ID] Interval Transfer Bandwidth
[ 1] 0.00-1.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 1.00-2.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 2.00-3.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 3.00-4.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 4.00-5.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 7.00-8.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 8.00-9.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 9.00-10.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 0.00-10.00 sec 12.8 MBytes 10.7 Mbits/sec
/ $
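
The command above runs a 10-second TCP test at roughly 10 Mbit/s, reporting once per second. You can optionally vary the test from the same shell using standard iPerf client flags, for example a longer run with parallel streams:

iperf -c <short iperf-server DNS Name> -p 5201 -i 1 -t 30 -P 4
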
Note
You have successfully deployed iPerf on a KubeSlice configuration containing at least two clusters.
Uninstalling iPerf

If you would like to uninstall iPerf from your KubeSlice configuration, follow the instructions in the corresponding uninstall guide.
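
If you only want to remove the resources created in this tutorial, a manual cleanup along the following lines should also work; run each block against the cluster where the corresponding manifest was applied, and only delete the iperf namespace if nothing else uses it:

kubectx <sleep cluster>
kubectl delete -f iperf-sleep.yaml -n iperf
kubectl delete ns iperf

kubectx <server cluster>
kubectl delete -f iperf-server.yaml -n iperf
kubectl delete ns iperf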

