
Rancher-based Kubernetes Cluster Set-up on OpenStack VMs with NFS Storage Class

Introduction

This document describes how to deploy PNDA with Helm on a Rancher-based Kubernetes cluster running on OpenStack (Kilo) VMs.

There are two parts to this document:

  1. Rancher-based K8S cluster set-up on OpenStack (along with NFS as the storage class)
  2. Helm-based PNDA deployment on the K8S cluster

Rancher-based K8S cluster set-up on OpenStack

In the first part, we set up a Rancher-based Kubernetes cluster with the necessary software components. The diagram below gives a high-level depiction of the Rancher Kubernetes cluster components on OpenStack VMs.

Cluster Components and Versions

All Rancher Kubernetes cluster components are deployed across six OpenStack VMs. The resource configuration for each OpenStack VM is shown in the diagram.

Following are the versions of the various components that can be used:

Software       Version
CentOS         7.6
Docker         19.03.1
Rancher        2.2.7
Kubernetes     1.14.5
Helm           2.14.3
Tiller         2.14.3

Docker Installation

Install Docker on all the Rancher Kubernetes Engine nodes, including the load balancer node. Execute the following commands on all the nodes. The commands assume CentOS as the underlying OS, but you can also use Ubuntu (with the apt equivalents).

$ yum install -y yum-utils device-mapper-persistent-data lvm2

$ yum-config-manager  --add-repo https://download.docker.com/linux/centos/docker-ce.repo

$ yum install -y docker-ce docker-ce-cli containerd.io

$ systemctl start docker

$ systemctl enable docker

$ usermod -aG docker centos

The following cluster components need to be installed on the respective nodes or VMs, as depicted in the diagram above:

  • NGINX Layer 4 (TCP) Load Balancer
  • Rancher Kubernetes Engine Nodes
  • Helm and Tiller
  • Rancher Server
  • Network File System (NFS)

NGINX Layer 4 (TCP) Load Balancer

NGINX is configured as a Layer 4 (TCP) load balancer that forwards connections to one of the Rancher nodes. Importantly, do not use one of the Rancher nodes as the load balancer. The following steps are executed on the NGINX node or VM.

Step 1: Create a repo file as below - 
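A minimal sketch of the repo file, assuming the official NGINX package repository for CentOS 7 (create it as /etc/yum.repos.d/nginx.repo and adjust the baseurl for your OS version):

[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1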

Step 2: List the repo with the below command to check for any errors - 
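Assuming standard yum tooling, this is:

$ yum repolist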

Step 3: Install Nginx package with the below command - 
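For example:

$ yum install -y nginx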

Step 4: Update the configuration file “/etc/nginx/nginx.conf” with the IP addresses of your Rancher Kubernetes nodes as shown below - 
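A sketch of the relevant nginx.conf contents, modeled on the upstream Rancher guidance for an NGINX TCP load balancer; the <node-N-ip> addresses are placeholders for your Rancher Kubernetes node IPs:

worker_processes 4;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server <node-1-ip>:80 max_fails=3 fail_timeout=5s;
        server <node-2-ip>:80 max_fails=3 fail_timeout=5s;
        server <node-3-ip>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server <node-1-ip>:443 max_fails=3 fail_timeout=5s;
        server <node-2-ip>:443 max_fails=3 fail_timeout=5s;
        server <node-3-ip>:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}

Note that this configuration relies on the stream module, so the nginx build must include it (the packages from nginx.org do).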

Please refer to the following link for additional details: Nginx Load Balancer Set-up

Rancher Kubernetes Engine Nodes

The next activity is to install Rancher Kubernetes Engine (RKE) on all the designated Kubernetes nodes. The following steps can be executed from any node or VM (the installer node) that has access to all the Rancher Kubernetes Engine nodes.

Step 1: Create the rancher-cluster.yml file -

Using the sample below, create the rancher-cluster.yml file. Replace the IP addresses in the nodes list with the IP addresses or DNS names of the nodes you created.
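A minimal sketch for a three-node cluster (the addresses, SSH user, and roles are placeholders to adapt):

nodes:
  - address: <node-1-ip>
    user: centos
    role: [controlplane, worker, etcd]
  - address: <node-2-ip>
    user: centos
    role: [controlplane, worker, etcd]
  - address: <node-3-ip>
    user: centos
    role: [controlplane, worker, etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h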

Step 2: Download and install ‘rke’ utility - 

Download the utility that matches your node hardware type, for example as shown below.
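For example, for an x86-64 Linux node (the release version here is illustrative; pick a release from https://github.com/rancher/rke/releases that supports your Kubernetes version):

$ wget https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64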

Install 'rke' using the following command –

$ install rke_linux-amd64 /usr/bin/rke

Step 3: Run the 'rke' utility on the Rancher cluster configuration.

$ rke up --config ./rancher-cluster.yml

When finished, it should end with the line: Finished building Kubernetes cluster successfully.

Step 4: Install the kubectl utility –

You need the kubectl utility to manage your Kubernetes cluster.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

$ install kubectl /usr/bin/kubectl

Step 5: Testing Your Cluster

The 'rke' command execution in Step 3 above should have created the file kube_config_rancher-cluster.yml, which contains the credentials that kubectl and helm need. Update your .bash_profile as below -

 $ export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml

$ source .bash_profile

Test your connectivity with kubectl and check that all your nodes are in the Ready state, as shown below.
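For example (the output below is illustrative):

$ kubectl get nodes
NAME          STATUS   ROLES                      AGE   VERSION
<node-1-ip>   Ready    controlplane,etcd,worker   10m   v1.14.5
<node-2-ip>   Ready    controlplane,etcd,worker   10m   v1.14.5
<node-3-ip>   Ready    controlplane,etcd,worker   10m   v1.14.5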

Please refer to the following link for additional details, such as connecting to and testing your Kubernetes cluster and checking the health of the cluster Pods: Rancher Kubernetes Engine Installation

Helm and Tiller 

The following steps need to be executed from any node or VM (the installer node) that has access to all Rancher Kubernetes Engine nodes.

Step 1: Install Helm and Tiller on the installer node

$ wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

$ tar zxvf helm-v2.14.3-linux-amd64.tar.gz

$ install linux-amd64/helm /usr/bin/helm

$ install linux-amd64/tiller /usr/bin/tiller

Step 2: Copy or install the kubectl, Helm, and Tiller binaries to the same locations on all the Rancher Kubernetes nodes, as described in the steps above.

Step 3: Finish the Tiller installation on the cluster from the installer node –

  • Create the ServiceAccount in the kube-system namespace.
  • Create the ClusterRoleBinding to give the tiller account access to the cluster.
  • Finally, use helm to install the Tiller service.

$ kubectl -n kube-system create serviceaccount tiller

$ kubectl create clusterrolebinding tiller   --clusterrole=cluster-admin   --serviceaccount=kube-system:tiller

$ helm init --service-account tiller

Test your Tiller installation –
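For example, helm version should report both the client and the server (Tiller) versions (output abbreviated):

$ helm version
Client: &version.Version{SemVer:"v2.14.3", ...}
Server: &version.Version{SemVer:"v2.14.3", ...}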

Important Files

  • rancher-cluster.yml: The RKE cluster configuration file.
  • kube_config_rancher-cluster.yml: The Kubeconfig file for the cluster; it contains credentials for full access to the cluster.
  • rancher-cluster.rkestate: The Kubernetes cluster state file; it also contains credentials for full access to the cluster.

Rancher Server

In this activity, the Rancher server and UI are installed as Pods inside the Kubernetes cluster.

Step 1: Add the Helm Chart Repository -

$ helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

Step 2: Install cert-manager from the Kubernetes Helm chart repository -

$ helm install stable/cert-manager --name cert-manager --namespace kube-system --version v0.5.2

 Step 3: Wait for cert-manager to be rolled out -

$ kubectl -n kube-system rollout status deploy/cert-manager

 Step 4: Install Rancher Server – Stable version

$ helm install rancher-stable/rancher  --name rancher --namespace cattle-system --set hostname="your_host_name" --set ingress.tls.source=letsEncrypt --set letsEncrypt.email="your-email@domain.com"

This should print the message "Rancher Server has been installed", followed by the Rancher admin UI URL that you can browse to.

Important Note: The hostname specified in the above command needs to be registered with your organization’s DNS.

Step 5: Rancher Rollout

$ kubectl -n cattle-system rollout status deploy/rancher

This waits for the Rancher deployment to finish rolling out.

Step 6: Check Ingress

$ kubectl -n cattle-system get ingress

Step 7: Check Rancher Deployment Status

$ kubectl -n cattle-system get deploy rancher

Network File System (NFS) Server

The following are the general steps to get the NFS server and share created; a sketch of the commands follows the steps.

Step 1: Set up an NFS server on one of the nodes or VMs.

Step 2: Create and export an NFS share.

Step 3: Mount the exported NFS share on all the Rancher Kubernetes nodes.
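As a rough sketch on CentOS, assuming /nfsshare as the export path (adapt the paths and export options to your environment):

On the NFS server node -

$ yum install -y nfs-utils
$ mkdir -p /nfsshare
$ echo "/nfsshare *(rw,sync,no_root_squash)" >> /etc/exports
$ systemctl enable nfs-server && systemctl start nfs-server
$ exportfs -ra

On each Rancher Kubernetes node -

$ yum install -y nfs-utils
$ mkdir -p /mnt/nfsshare
$ mount -t nfs <nfs-server-ip>:/nfsshare /mnt/nfsshare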

Kubernetes NFS-Client Provisioner

The nfs-client provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. You can use a Helm chart to deploy the nfs-client provisioner.

$ helm install stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/nfsshare

For the source code and additional details, please refer to - https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

For additional details on the Helm chart, please refer to - https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner

Helm-based PNDA deployment on K8S cluster

This is the second part of the PNDA Helm deployment on a Rancher-based K8S cluster. In this part, we will see how to configure kubectl for cluster communication and add persistent volume claims for the required components.

Prerequisites

Before configuring kubectl for cluster communication, make sure your host machine meets the following prerequisites:

  1. A Rancher Kubernetes cluster is installed and in a functional state.
  2. Docker and kubectl are installed on the edge node to communicate with Pods, Deployments, Services, and PVCs.
  3. Helm is installed on the edge node for the deployment of the PNDA application on the Rancher-based Kubernetes cluster.

Step-by-step Kubectl Configuration for Cluster Communication

Step 1: Create a .kube directory

Step 2: Get the config file from the Rancher cluster and place it under the .kube directory.
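A likely sequence, assuming the kubeconfig was generated on the installer node as kube_config_rancher-cluster.yml (the user and host are placeholders):

$ mkdir -p ~/.kube
$ scp centos@<installer-node-ip>:~/kube_config_rancher-cluster.yml ~/.kube/config
$ kubectl get nodes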

Helm Repository from Git

Download the PNDA Helm repo from the Git repository using the following link:

https://github.com/pndaproject/pnda-helm-repo
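For example:

$ git clone https://github.com/pndaproject/pnda-helm-repo.git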

Configuration of Persistent Volume Claims (PVC) for PNDA

Pods use Persistent Volume Claims (PVCs) to request physical storage from the platform. You must create a PersistentVolumeClaim requesting a volume of at least three gibibytes with read-write access. Here, we use the nfs-client provisioner for storage. To configure nfs-client storage, you need to modify the values.yaml files for the following components.

 

Step 1: For Redis, HDFS Name Node, HDFS Data Node, Confluent-platform, and JupyterHub, modify /pnda-helm-repo-master/pnda/values.yaml. The components to update are listed below, with a sketch after the list:


  • Redis


  • HDFS


  • Confluent-platform


  • JupyterHub
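As an illustrative sketch only (the actual key names depend on the chart version, so verify against your copy of values.yaml), the change for each component typically points its persistence settings at the NFS storage class, for example:

redis:
  persistence:
    storageClass: nfs-client
    size: 3Gi

hdfs:
  persistence:
    storageClass: nfs-client
    size: 3Gi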


 

Step 2: For the package repository, modify the values.yaml under /pnda-helm-repo-master/pnda/package-repository/ in the same way.

PNDA Deployment Steps

Please check the following link for the PNDA deployment steps:

https://github.com/pndaproject/pnda-helm-repo

After a successful PNDA deployment, you can see the workloads, services, and volumes under the namespace you specified, in the Rancher UI.

