How to install Kubernetes on CentOS

Kubernetes, originally developed by Google, is a cluster management and orchestration engine for Docker containers.

In this session I tried kubeadm to deploy a Kubernetes cluster. I used my OpenStack environment for this PoC and provisioned two CentOS compute nodes as follows:

k8s-master will run the API server, the kubectl utility, the scheduler, etcd, and the controller manager.

k8s-worker will be our worker node and will run kubelet, kube-proxy, and our pods.

On both systems, do the following (example commands follow the list):

  • yum update -y
  • set SELinux to disabled (/etc/selinux/config)
  • update /etc/hosts, making sure entries for both systems exist
  • Reboot, Reboot, Reboot!
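
A minimal sketch of those prep steps, assuming the master IP taken from the kubeadm init output further below; the worker IP is a placeholder you would replace with your own:

# run on both k8s-master and k8s-worker
yum update -y
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
cat >> /etc/hosts <<EOF
192.168.2.48 k8s-master
<k8s-worker-ip> k8s-worker
EOF
reboot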

Configure the Kubernetes repo by adding the following (both nodes need it, since both install kubeadm)

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
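
If you prefer to script it rather than edit the file by hand, one possible way to drop the same content in place on both nodes is a heredoc:

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF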

Configure Kubernetes Master Node

Execute the following on the Master Node

yum install docker kubeadm -y
systemctl restart kubelet && systemctl enable kubelet
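
The commands above only enable kubelet; kubeadm init also expects the Docker daemon to be running, so you will likely want to start and enable docker on the master as well (the worker section below does this explicitly):

systemctl restart docker && systemctl enable docker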

Initialize Kubernetes Master with

kubeadm init

You should see something similar to the following

[root@k8s-master etc]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "k8s-master" could not be reached
[preflight] WARNING: hostname "k8s-master" lookup k8s-master on 8.8.8.8:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 437.011125 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5cf1b4.23d95a40a9d5f674
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106

As instructed in the init output above, execute the following to set up kubectl access (I ran these as root)

[root@k8s-master kubernetes]# cd ~
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) .kube/config
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    9m        v1.8.0
[root@k8s-master ~]#

Configure Network

As you can see from the output of kubectl get nodes, our Kubernetes Master still shows NotReady. This is because we haven’t deployed our overlay network. If you look at your /var/log/messages, you’ll see entries similar to the one below

Oct 4 15:41:09 [localhost] kubelet: E1004 15:41:09.589532 2515 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
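
You can see the same condition from kubectl itself; a quick check with standard kubectl commands (output omitted here):

kubectl get pods --all-namespaces      # kube-dns will sit in Pending until a pod network exists
kubectl describe node k8s-master       # the Ready condition reports "network plugin is not ready"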

To fix this, run the following to deploy our network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Checking our Kubernetes Master node again,

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    18m       v1.8.0
[root@k8s-master ~]#

Configure Worker Node

Time to configure our worker node. Log in to the worker node and execute the following command

yum install kubeadm docker -y

After successfully installing kubeadm and docker on our Worker Node, run the following command

systemctl restart docker && systemctl enable docker
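
The join preflight below will warn if the kubelet and docker services are not enabled, so it doesn't hurt to enable kubelet on the worker at this point as well:

systemctl enable kubelet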

We need to join this worker node to our Kubernetes cluster. Take the "kubeadm join" command from the end of the kubeadm init output above and execute it on the worker node.

[root@k8s-worker ~]# kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.2.48:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.48:6443"
[discovery] Requesting info from "https://192.168.2.48:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.48:6443"
[discovery] Successfully established connection with API Server "192.168.2.48:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@k8s-worker ~]#

Using the same steps, you can add more worker nodes to the Kubernetes cluster.

As suggested, let's check from the Kubernetes master node whether the worker node was added to the cluster successfully.

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
k8s-master             Ready     master    52m       v1.8.0
k8s-worker.novalocal   Ready     <none>    7m        v1.8.0
[root@k8s-master ~]#

We can see from the output above that both the Kubernetes master and the worker node are in the Ready state.

We have successfully installed a Kubernetes cluster using kubeadm and joined a worker node to it. With this environment we can now create pods and services.
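
As a quick smoke test, not part of the original walkthrough, you could deploy something simple using the public nginx image and expose it; with kubectl 1.8, kubectl run creates a Deployment:

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx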

Configuring a Block Device on a Ceph Client

First, using the ceph-admin machine, check that the cluster is in a healthy state:

[root@ceph-1 ~]# ceph -s
 cluster 376331d2-4da0-4f41-8040-0cf433148a08
 health HEALTH_OK
 monmap e1: 3 mons at {ceph-1=192.168.0.42:6789/0,ceph-2=192.168.0.43:6789/0,ceph-3=192.168.0.44:6789/0}
 election epoch 10, quorum 0,1,2 ceph-1,ceph-2,ceph-3
 osdmap e73: 9 osds: 9 up, 9 in
 flags sortbitwise,require_jewel_osds
 pgmap v603: 112 pgs, 7 pools, 8833 kB data, 184 objects
 344 MB used, 8832 MB / 9176 MB avail
 112 active+clean
 [root@ceph-1 ~]#

Create the block device

[root@ceph-1 ~]# rbd create myblock --size 200 --image-format 1
 rbd: image format 1 is deprecated
 [root@ceph-1 ~]#

In the command above, myblock is the name of the RBD image and 200 is the size in MB. The image has to be in format 1 here: when I tried without that option, mapping the RBD device on my client host failed with "write error: No such device or address".
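
As an aside, a format 2 image can usually be made mappable by the old CentOS 7 kernel client by disabling the newer image features. A sketch, untested in this walkthrough and using a hypothetical image name myblock2:

rbd create myblock2 --size 200
rbd feature disable myblock2 exclusive-lock object-map fast-diff deep-flatten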

We can check the block device by issuing the following command.

[root@ceph-1 ~]# rbd list
 myblock
 [root@ceph-1 ~]#

From one of the monitor nodes, retrieve the client name and key from the /etc/ceph/ceph.client.admin.keyring file.

[root@ceph-1 ~]# cat /etc/ceph/ceph.client.admin.keyring 
 [client.admin]
 key = AQByd5xYfFmqFBAABrv/q2mUKrQdS2Uo5nVq+g==
 caps mds = "allow *"
 caps mon = "allow *"
 caps osd = "allow *"
 [root@ceph-1 ~]#

On the client host, verify that the kernel supports the rbd module.

[root@client ~]# modprobe rbd
 [root@client ~]#

If this gives an error, the rbd module is not available. Install the kmod-rbd and kmod-libceph packages and load the module again.

Map the RBD device on the client host.

[root@client ~]# echo "192.168.0.42,192.168.0.43,192.168.0.44 name=admin,secret=AQByd5xYfFmqFBAABrv/q2mUKrQdS2Uo5nVq+g== rbd myblock" > /sys/bus/rbd/add

The above command will create a new device on the client host.

[root@client ~]# ll /dev/rbd*
 brw-rw---- 1 root disk 252, 0 Feb 16 16:36 /dev/rbd0
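
If the ceph-common package is installed on the client (with ceph.conf and the admin keyring in /etc/ceph), the rbd CLI offers an equivalent to the sysfs echo above, plus a matching unmap for cleanup; a sketch under that assumption:

rbd map myblock --name client.admin    # creates /dev/rbd0 (or the next free index)
rbd unmap /dev/rbd0                    # when you are done with the device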

Time to format the device.

[root@client ~]# mkfs.ext4 /dev/rbd0 
 mke2fs 1.42.9 (28-Dec-2013)
 Discarding device blocks: done 
 Filesystem label=
 OS type: Linux
 Block size=1024 (log=0)
 Fragment size=1024 (log=0)
 Stride=4096 blocks, Stripe width=4096 blocks
 51200 inodes, 204800 blocks
 10240 blocks (5.00%) reserved for the super user
 First data block=1
 Maximum filesystem blocks=33816576
 25 block groups
 8192 blocks per group, 8192 fragments per group
 2048 inodes per group
 Superblock backups stored on blocks: 
 8193, 24577, 40961, 57345, 73729

Allocating group tables: done 
 Writing inode tables: done 
 Creating journal (4096 blocks): done
 Writing superblocks and filesystem accounting information: done

Create a mount point and mount the device

[root@client ~]# mkdir /mnt/cephblock
 [root@client ~]# mount /dev/rbd0 /mnt/cephblock/

The device is mounted and ready to be used.

[root@client ~]# lsblk
 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
 vda 253:0 0 20G 0 disk 
 └─vda1 253:1 0 20G 0 part /
 vdb 253:16 0 1G 0 disk 
 rbd0 252:0 0 200M 0 disk /mnt/cephblock
 [root@client ~]#
[root@client ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/vda1 20G 1.7G 19G 9% /
 devtmpfs 902M 0 902M 0% /dev
 tmpfs 920M 0 920M 0% /dev/shm
 tmpfs 920M 25M 896M 3% /run
 tmpfs 920M 0 920M 0% /sys/fs/cgroup
 tmpfs 184M 0 184M 0% /run/user/0
 /dev/rbd0 190M 1.6M 175M 1% /mnt/cephblock

When needed, you can unmount the filesystem and remove the RBD device (the "0" below is the device index, i.e. /dev/rbd0):

umount /mnt/cephblock
echo "0" > /sys/bus/rbd/remove

Creating and attaching a Cinder volume to an OpenStack instance

Using the openstack-cli tools, let's create a new 1 GB Cinder volume with a display name of repvolume

[root@localhost ~(keystone_demo)]# cinder create 1 --display-name repvolume
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-09-16T15:59:45.025134 |
| display_description | None |
| display_name | repvolume |
| encrypted | False |
| id | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
| metadata | {} |
| multiattach | false |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+

Check if the volume was created successfully

[root@localhost ~(keystone_demo)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 4efa3212-55bc-417d-a8ed-e88fa63f05d3 | available | repvolume | 1 | - | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Let's try to attach repvolume to one of our instances.

[root@localhost ~(keystone_demo)]# nova list
+--------------------------------------+----------+---------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+---------+------------+-------------+------------------+
| 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 | client01 | ACTIVE | - | Running | private=10.0.0.6 |
| 431ba9e5-616c-401e-8dd6-e8269420246c | web01 | SHUTOFF | - | Shutdown | private=10.0.0.3 |
| 99584428-2d5d-4d3d-bd19-5a7a77cb7f24 | web02 | SHUTOFF | - | Shutdown | private=10.0.0.4 |
+--------------------------------------+----------+---------+------------+-------------+------------------+

Using the client01 instance ID and the repvolume volume ID, let's attach the Cinder volume to our instance by issuing the following command.

[root@localhost ~(keystone_demo)]# nova volume-attach 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 4efa3212-55bc-417d-a8ed-e88fa63f05d3 auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
| serverId | 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 |
| volumeId | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
+----------+--------------------------------------+

Issuing cinder list again should show that our volume is attached to client01

[root@localhost ~(keystone_demo)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 4efa3212-55bc-417d-a8ed-e88fa63f05d3 | in-use | repvolume | 1 | - | false | 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Next, let's log in to client01.

Issue lsblk to see the available block devices. Since this is a new block device, we need to format it

mkfs.ext4 /dev/vdb

Now let's create a new directory where we can mount the new volume.

mkdir /repvolume

Using the mount command, let’s mount the device.

mount /dev/vdb /repvolume

Issuing the df command should show the new volume mounted on /repvolume.
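
When you no longer need the volume on this instance, a possible cleanup sequence looks like the following; the unmount runs inside client01 and the rest from the openstack-cli host, reusing the IDs shown earlier:

umount /repvolume                                     # inside client01
nova volume-detach 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 4efa3212-55bc-417d-a8ed-e88fa63f05d3
cinder delete 4efa3212-55bc-417d-a8ed-e88fa63f05d3    # only if you want to remove the volume entirely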

Launching new instances using openstack-cli

Since we will be using the out-of-the-box demo project/account, execute the following to set the needed environment variables.

[root@localhost ~]# source keystonerc_demo
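
For reference, a Packstack-generated keystonerc_demo typically looks something like the following; the values here are placeholders, not the ones from this environment:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=<demo-password>
export OS_AUTH_URL=http://<controller-ip>:5000/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '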

Let's list the available instances

[root@localhost ~(keystone_demo)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Based on the output above, we don't have any instances under our project.

Let's list the available images (Glance)

[root@localhost ~(keystone_demo)]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 411d5791-c29a-47e3-8d73-c9ec91765beb | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+

We also need to know the available networks (Neutron), so issue the following command

[root@localhost ~(keystone_demo)]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+------------------------------------------------------+
| bf7bfa1d-4f15-4883-a49e-9bfea12f30db | public | 23ab53f6-a805-42a9-a9c9-9ba2af16dc59 172.24.4.224/28 |
| b3c770f6-0d7e-4361-b644-0d73f8a152f3 | private | 3c05d2f6-9469-4e56-ad32-f7df9a414336 10.0.0.0/24 |
+--------------------------------------+---------+------------------------------------------------------+

Time to create a new instance named web01, using the cirros image and attaching it to the private network.

[root@localhost ~(keystone_demo)]# nova boot --image cirros --flavor m1.tiny web01 --nic net-id=b3c770f6-0d7e-4361-b644-0d73f8a152f3
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 9KTrQ2mxzjGQ |
| config_drive | |
| created | 2015-09-15T14:34:30Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | a6926f88-6787-4559-b872-37572ad39b26 |
| image | cirros (411d5791-c29a-47e3-8d73-c9ec91765beb) |
| key_name | - |
| metadata | {} |
| name | web01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 27314e6ab1024e0b9ed1c83d9459352a |
| updated | 2015-09-15T14:34:32Z |
| user_id | 8eecc18e9ec2424c97170d3073d36847 |
+--------------------------------------+-----------------------------------------------+

Create a second instance named web02 and attach it to the private network as well

[root@localhost ~(keystone_demo)]# nova boot --image cirros --flavor m1.tiny web02 --nic net-id=b3c770f6-0d7e-4361-b644-0d73f8a152f3
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | MPi5MQYnJB4u |
| config_drive | |
| created | 2015-09-15T14:34:55Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 7d7200f3-7350-43b7-a5f0-01c60af1aacb |
| image | cirros (411d5791-c29a-47e3-8d73-c9ec91765beb) |
| key_name | - |
| metadata | {} |
| name | web02 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 27314e6ab1024e0b9ed1c83d9459352a |
| updated | 2015-09-15T14:34:56Z |
| user_id | 8eecc18e9ec2424c97170d3073d36847 |
+--------------------------------------+-----------------------------------------------+

You can check the status of the build by issuing nova list.

[root@localhost ~(keystone_demo)]# nova list
+--------------------------------------+----------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------+
| a6926f88-6787-4559-b872-37572ad39b26 | web01 | BUILD | spawning | NOSTATE | |
| 7d7200f3-7350-43b7-a5f0-01c60af1aacb | web02 | BUILD | spawning | NOSTATE | |
+--------------------------------------+----------+--------+------------+-------------+----------+
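
Once spawning completes, nova list should report both instances as ACTIVE/Running. If a build appears stuck, these standard nova CLI commands are handy for digging in (shown here as a suggestion, output omitted):

nova show web01          # full details, including fault information if the build failed
nova console-log web01   # boot console output of the instance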