Deploying Docker Image in OpenShift

In this series on Kubernetes/Docker/containers, I've shown how to deploy a Docker image on a vanilla Kubernetes platform and how to deploy the same image on Amazon EC2 Container Service. This time, I wanted to test deploying that same image on Red Hat's OpenShift Container Platform.

OpenShift is Red Hat's offering to bring Docker and Kubernetes to the enterprise. There is also an upstream community project called Origin that provides an open source container application platform.

OpenShift Online

Red Hat offers the cloud-based OpenShift Online. You can sign up for free OpenShift Online access to try it out. This gives you a limited environment, but it is sufficient for our test deployment. Alternatively, Red Hat OpenShift Container Platform can be deployed on-premises on a RHEL environment.

Once you have access, let's start by creating a new project.
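
If you prefer the command line over the web console, the same step can be done with the oc client. This is only a sketch: the login URL, token, and project name below are placeholders for your own values.

# log in to OpenShift Online using the API URL and token shown in the web console (placeholders)
oc login https://api.your-cluster.openshift.com --token=YOUR_TOKEN
# create a new project to hold the deployment
oc new-project dockerflask-demo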

OpenShift lets you create an application from scratch using templates, import a YAML/JSON specification of your application, or, as in this example, deploy an already built image.

Once your project has been created after clicking the Create button, select Deploy Image.

I will be using the same dockerflask image used in the previous examples. (Note: I made some minor changes to the application and the Dockerfile.)

After you hit the search button, OpenShift displays some information about the image.

Time to hit Deploy

and that should deploy our image.

 

OpenShift automatically deployed our image. As we can see here, we have one pod running our application. OpenShift also created a service that talks to our pod(s) on port 8000/TCP. Later we will increase the number of pods and see that the service automatically load balances requests. The change I made to the application was to display the hostname of the pod serving the request.
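
For reference, the same deployment can also be driven from the CLI; the image path below is an assumption (use wherever your dockerflask image is published), and the commands simply mirror what the console did.

# deploy an existing image from an external registry (image path is a placeholder)
oc new-app --docker-image=docker.io/youruser/dockerflask --name=dockerflask
# verify the pod and the service that OpenShift created
oc get pods
oc get svc dockerflask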

For now, let’s create a Route to our application so we can access it externally.
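
The route can also be created from the CLI by exposing the service; the service name dockerflask is an assumption based on the image name.

# create a route that exposes the service outside the cluster
oc expose svc/dockerflask
# show the generated hostname for the route
oc get route dockerflask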

Once you hit Create, you’ll be provided with a link for your application.

We can now access our application externally.

Let’s try increasing the number of our pods. Here I set it to run 2 pods.
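
From the CLI, the equivalent would be scaling the deployment configuration (again assuming it is named dockerflask):

# scale the deployment configuration to two replicas
oc scale dc/dockerflask --replicas=2
# confirm that two pods are running and see where they landed
oc get pods -o wide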

 

Hitting that URL again

Let's use curl to check whether the service is indeed load balancing.
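
A small loop like the one below makes the alternating hostnames easy to spot. The route hostname is a placeholder, and the grep assumes the page prints a line containing the host name.

# set this to the hostname shown by "oc get route" (placeholder)
ROUTE=dockerflask-myproject.example.com
# hit the route several times and pull out the hostname line from each response
for i in $(seq 1 6); do
  curl -s http://$ROUTE/ | grep -i host
done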

As you can see above, the OpenShift service did load balance the requests: the hostname changes depending on which pod processed the request. We could also put auto-scaling configuration in place to increase the number of pods as the load grows.
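
As a sketch, a horizontal pod autoscaler could be attached to the deployment configuration like this (the name and thresholds are examples, and CPU metrics must be available in the cluster):

# scale between 1 and 4 pods, targeting 80% CPU utilization
oc autoscale dc/dockerflask --min=1 --max=4 --cpu-percent=80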

Red Hat OpenShift Container Platform gives enterprises a container platform both in the cloud (OpenShift Online) and on-premises. This series has shown what containers are and how the technology is moving away from the traditional way of deploying and managing applications.

 

Trying out Amazon EC2 Container Service (Amazon ECS)

In the previous post, I showed how to build and configure a Kubernetes platform where we can run Docker images/containers. Container technology gives us a consistent way to package our applications, and we can expect them to run the same way regardless of the environment. With this in mind, I wanted to take the same application and check out what cloud providers such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) offer in this space.

Amazon EC2 Container Service (AWS ECS)

Amazon ECS is an AWS service that makes it simple to store, manage, and deploy Docker containers. Using this service, we don't have to install a container platform and orchestration software to run our container images. Since Amazon ECS is tightly integrated with other AWS services, we can also take advantage of offerings such as Elastic Load Balancing, IAM, S3, and so on.

Amazon EC2 Container Registry

Amazon EC2 Container Registry (Amazon ECR) provides a container registry where we can store, manage, and deploy our Docker images. Amazon ECR eliminates the need to set up and manage a repository for our container images. Since it uses S3 on the back end, it gives us a highly available and accessible platform to serve our images. It is also secure: images are transferred over HTTPS and encrypted at rest, and with AWS IAM we can control access to the repository. So let's get started.

Under the Compute Section, click EC2 Container Service.

We will create a new image and deploy our application, so leave the default selection and click Continue.

On the next page, I'll use awscontainerio as the name of the repository.
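
If you prefer the AWS CLI over the console, the repository can also be created this way (the name and region match what is used in this walkthrough):

# create the ECR repository from the CLI instead of the console
aws ecr create-repository --repository-name awscontainerio --region us-east-1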

After clicking Next Step, you should be presented with something similar to what is shown below. Using the AWS CLI, we can now push our Docker image to the repository by following the steps listed.

I will be using the application and Dockerfile from the previous post to test AWS ECS.

[root@k8s-master dockerFlask]# aws ecr get-login --no-include-email --region us-east-1
docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
[root@k8s-master dockerFlask]# docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded
[root@k8s-master dockerFlask]# docker build -t awscontainerio .
Sending build context to Docker daemon 128.5 kB
Step 1 : FROM alpine:3.1
---> f13c92c2f447
Step 2 : RUN apk add --update python py-pip
---> Using cache
---> 988086eeb89d
Step 3 : RUN pip install Flask
---> Using cache
---> 4e4232df96c2
Step 4 : COPY app.py /src/app.py
---> Using cache
---> 9567163717b6
Step 5 : COPY app/main.py /src/app/main.py
---> Using cache
---> 993765657104
Step 6 : COPY app/__init__.py /src/app/__init__.py
---> Using cache
---> 114239a47d67
Step 7 : COPY app/templates/index.html /src/app/templates/index.html
---> Using cache
---> 5f9e85b36b98
Step 8 : COPY app/templates/about.html /src/app/templates/about.html
---> Using cache
---> 96c6ac480d98
Step 9 : EXPOSE 8000
---> Using cache
---> c79dcdddf6c1
Step 10 : CMD python /src/app.py
---> Using cache
---> 0dcfd15189f1
Successfully built 0dcfd15189f1
[root@k8s-master dockerFlask]# docker tag awscontainerio:latest 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
[root@k8s-master dockerFlask]# docker push 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
The push refers to a repository [823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio]
596bab3c12e4: Pushed
e24802fe0ea0: Pushed
fdee42dc503e: Pushed
2be9bf2ec52c: Pushed
9211d7b219b7: Pushed
239f9a7fd5b0: Pushed
8ab8949d0d88: Pushed
03b625132c33: Pushed
latest: digest: sha256:8f0e2417c90ba493ce93f24add18697b60d34bfea60bc37b0c30c0459f09977b size: 1986
[root@k8s-master dockerFlask]#
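
As an optional check, we can confirm that the push landed in the registry by listing the stored images from the CLI:

# list the images stored in the awscontainerio repository
aws ecr describe-images --repository-name awscontainerio --region us-east-1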


How to install Kubernetes on CentOS

Kubernetes, originally developed by Google, is a clustering and orchestration engine for Docker containers.

In this session I used kubeadm to deploy a Kubernetes cluster. I used my OpenStack environment for this PoC and provisioned two CentOS compute nodes as follows:

k8s-master will run the API server, the kubectl utility, the scheduler, etcd, and the controller manager.

k8s-worker will be our worker node and will run the kubelet, kube-proxy, and our pods.

On both systems, execute the following (a command sketch follows the list):

  • yum update -y
  • set SELinux to disabled (/etc/selinux/config)
  • update /etc/hosts, making sure entries for both systems exist
  • Reboot, Reboot, Reboot!
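
A rough sketch of those preparation steps as commands is below; the worker IP address is an assumption, so adjust the /etc/hosts entries to your own environment.

# run on both nodes
yum update -y
# disable SELinux permanently (takes effect after the reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# make sure both hosts can resolve each other (worker IP below is an example)
cat >> /etc/hosts <<EOF
192.168.2.48 k8s-master
192.168.2.49 k8s-worker
EOF
reboot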

Configure the Kubernetes repo by adding the following:

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Configure Kubernetes Master Node

Execute the following on the Master Node

yum install docker kubeadm -y
systemctl restart kubelet && systemctl enable kubelet
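
The worker section later in this post also starts and enables Docker explicitly; it does no harm to do the same on the master so that Docker is running before kubeadm init:

# make sure Docker itself is running and enabled on the master as well
systemctl restart docker && systemctl enable docker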

Initialize Kubernetes Master with

kubeadm init

You should see something similar to the following

[root@k8s-master etc]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "k8s-master" could not be reached
[preflight] WARNING: hostname "k8s-master" lookup k8s-master on 8.8.8.8:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 437.011125 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5cf1b4.23d95a40a9d5f674
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106

As shown in the first highlighted section of the output above, execute the following:

[root@k8s-master kubernetes]# cd ~
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) .kube/config
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 9m v1.8.0
[root@k8s-master ~]#

Configure Network

As you can see from the output of kubectl get nodes, our Kubernetes master still shows NotReady. This is because we haven't deployed our overlay network yet. If you look at /var/log/messages, you'll see entries similar to the one below:

Oct 4 15:41:09 [localhost] kubelet: E1004 15:41:09.589532 2515 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

To fix this, run the following to deploy our network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Checking our Kubernetes Master node again,

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18m v1.8.0
[root@k8s-master ~]#

Configure Worker Node

Time to configure our worker node. Log in to the worker node and execute the following command:

yum install kubeadm docker -y

After successfully installing kubeadm and docker on our Worker Node, run the following command

systemctl restart docker && systemctl enable docker

We need to join this worker node to our Kubernetes cluster. From the second highlighted section of the kubeadm init output above, execute the "kubeadm join" command on the worker node.

[root@k8s-worker ~]# kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.2.48:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.48:6443"
[discovery] Requesting info from "https://192.168.2.48:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.48:6443"
[discovery] Successfully established connection with API Server "192.168.2.48:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@k8s-worker ~]#

Using the same steps, you can add multiple worker nodes to the cluster.

As suggested, let's now check from the Kubernetes master node whether the worker node was added to the cluster successfully.

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 52m v1.8.0
k8s-worker.novalocal Ready <none> 7m v1.8.0
[root@k8s-master ~]#

We can see from the output above that our Kubernetes master and worker nodes are both in Ready status.

We have successfully installed a Kubernetes cluster using kubeadm and joined a worker node to it. With this environment we can now create pods and services.
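
As a quick smoke test of the new cluster, a deployment and service could be created along these lines; the image path and port are placeholders, not specific to this walkthrough.

# create a deployment running a sample image (image path is a placeholder)
kubectl run dockerflask --image=docker.io/youruser/dockerflask --port=8000
# expose it as a NodePort service so it is reachable from outside the cluster
kubectl expose deployment dockerflask --port=8000 --type=NodePort
# check that the pod and the service are up
kubectl get pods,svc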