Docker Image + Automated Builds

In the previous post I listed the steps for setting up a Kubernetes cluster. In this post, I will run through how to create a Docker image, build it, and use that same image to create our pods. I will also show how Automated Builds work in Docker Hub.

Create a Docker Hub account if you haven’t done so. We will also need a repository for our code; I am using GitHub to store mine.

Create a new GitHub repo

In your GitHub account, create a new repository and upload your code. Our Node.js application will simply output some text as an HTTP response.
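The exact file is in my repo, but a minimal index.js along these lines does the job (this is an illustrative sketch; the response text is a placeholder):

// index.js - respond to every request with a short text message on port 8080
var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from my Node.js container!\n');
});

server.listen(8080);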

We will also need a Dockerfile. A Dockerfile contains the instructions for building our image. Here’s the structure of my GitHub repo.

Our Dockerfile basically contains the following steps: start from the alpine base image and install the nodejs package on it, copy index.js to the /src directory of the image, expose web port 8080, and start the Node.js application.
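The actual Dockerfile lives in the repo; a minimal version following those steps might look like this sketch (the paths simply match the description above):

FROM alpine
RUN apk add --update nodejs
COPY index.js /src/index.js
EXPOSE 8080
CMD ["node", "/src/index.js"]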

Configure Source Provider

Log in to your Docker Hub account (http://cloud.docker.com) and go to the Cloud Settings section.

Under Source Providers, configure the GitHub provider using your GitHub details.

Now that our source provider is configured, we can create a new Docker Hub repository.

In the Build Settings section, select the Source Provider and the repo

Click Create & Build to start building our Docker image.

The Timeline section shows the steps being executed to build our image. You can click a specific step to view more details.

Once our image is built, you can see under the General section that we now have a new Docker image tagged latest.

We can now use this image to create our pods. In my repo, I created a YAML file that I’ll use to create a pod in our Kubernetes cluster. Issue the command shown below to create our pod.
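The actual manifest is in my repo; a minimal pod definition pointing at the Docker Hub image would look roughly like this (the pod name and image name are placeholders, so substitute your own Docker Hub repository):

# node-app-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: node-app
spec:
  containers:
  - name: node-app
    image: <your-dockerhub-user>/node-app:latest
    ports:
    - containerPort: 8080

And the command to create the pod from that file:

kubectl create -f node-app-pod.yml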

After a couple of minutes, we can see that a pod has been created.

Using the pod IP address, we can curl port 8080 to test our image.
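If you need to look up the pod IP first, kubectl can show it; the address in the curl line is only a placeholder, so use the IP reported for your pod:

kubectl get pods -o wide
curl http://10.44.0.1:8080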

As you can see from the above, we got an HTTP response from the Node.js application running in our pod.

Let’s try updating our code in GitHub. Update index.js to add “version 2” to the response string.

Once you commit the changes, you will see in Docker Hub that an automated build is triggered to update our Docker image.

We could have created a Deployment to roll out the update (or simulate a blue-green deployment), but for simplicity we will stick to creating pods manually. Delete the currently running pod, then re-create it by passing in the same YAML file; we should then see the updated Node.js application.
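The round trip would look something like this (pod and file names follow the sketch shown earlier):

kubectl delete pod node-app
kubectl create -f node-app-pod.yml
kubectl get pods -o wide
curl http://<new-pod-ip>:8080     # should now return the "version 2" response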

I’m still new to this technology, but I hope this post provided some understanding of the basics of containers, images, and Kubernetes. There’s so much more to learn and explore in container technology. Connect with me on LinkedIn; I would like to know how others implement their CI/CD process and what frameworks, methodologies, or tools they follow.

 

How to install Kubernetes on CentOS

Kubernetes, originally developed by Google, is a clustering and orchestration engine for Docker containers.

In this post I used kubeadm to deploy a Kubernetes cluster. I used my OpenStack environment for this PoC and provisioned two CentOS compute nodes as follows:

k8s-master will run the API Server, the kubectl utility, the Scheduler, etcd, and the Controller Manager.

k8s-worker will be our worker node and will run the kubelet, kube-proxy, and our pods.

On both systems, do the following:

  • yum update -y
  • set SELinux to disabled (/etc/selinux/config)
  • update /etc/hosts, making sure entries for both systems exist (see the example after this list)
  • Reboot, Reboot, Reboot!
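A minimal /etc/hosts would look like the following; the master IP matches the one used later in this post, while the worker IP is just a placeholder for your own:

192.168.2.48   k8s-master
192.168.2.50   k8s-worker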

Configure the Kubernetes repo by adding the following:

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Configure Kubernetes Master Node

Execute the following on the Master Node

yum install docker kubeadm -y
systemctl restart kubelet && systemctl enable kubelet

Initialize Kubernetes Master with

kubeadm init

You should see something similar to the following

[root@k8s-master etc]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "k8s-master" could not be reached
[preflight] WARNING: hostname "k8s-master" lookup k8s-master on 8.8.8.8:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 437.011125 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5cf1b4.23d95a40a9d5f674
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106

As shown in the kubeadm init output above (the mkdir/cp/chown block), execute the following:

[root@k8s-master kubernetes]# cd ~
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) .kube/config
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 9m v1.8.0
[root@k8s-master ~]#

Configure Network

As you can see from the output of kubectl get nodes, our Kubernetes Master still shows NotReady. This is because we haven’t deployed our overlay network. If you look at your /var/log/messages, you’ll see entries similar to the one below

Oct 4 15:41:09 [localhost] kubelet: E1004 15:41:09.589532 2515 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

To fix this, run the following to deploy the Weave Net overlay network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Checking our Kubernetes Master node again,

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18m v1.8.0
[root@k8s-master ~]#

Configure Worker Node

Time to configure our worker node. Log in to the worker node and execute the following command:

yum install kubeadm docker -y

After successfully installing kubeadm and Docker on the worker node, run the following command:

systemctl restart docker && systemctl enable docker

We need to join this worker node to our Kubernetes cluster. Using the kubeadm join command from the end of the kubeadm init output above, run the following on the worker node.

[root@k8s-worker ~]# kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.2.48:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.48:6443"
[discovery] Requesting info from "https://192.168.2.48:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.48:6443"
[discovery] Successfully established connection with API Server "192.168.2.48:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@k8s-worker ~]#

Using the same steps, you can add multiple worker nodes to the cluster.

As suggested by the output, let’s check from the master node whether the worker node was added successfully to our cluster.

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 52m v1.8.0
k8s-worker.novalocal Ready <none> 7m v1.8.0
[root@k8s-master ~]#

We can see from the output above that our Kubernetes master and worker nodes are both in Ready status.

We have successfully installed a Kubernetes cluster using kubeadm and joined a worker node to it. With this environment we can now create pods and services.

Developing RESTful APIs with AWS API Gateway

If you followed the previous post, you now have a functioning AWS Lambda function. But how do we expose or trigger this function, say for a web application/client?

Amazon API Gateway is an AWS service that allows developers to create, publish, monitor, and secure APIs. These APIs can front another AWS service, in this case an AWS Lambda function, other web services, or even data stored in the cloud. We can create RESTful APIs to enable applications to access AWS Cloud services.

Let’s start building

To start, let’s build a basic web API to invoke our Lambda function using an HTTP GET request. Go to the Application Services section or search for API Gateway in the AWS Services search box.

It’s a good idea to choose the same region you used previously for your Lambda function. Click Get Started on the API Gateway home page.

In the next page, give your API a name. I’m calling this API manageBooksAPI. Click Create API.

Leave the default resource (/) and create a new one by clicking Create Resource from the Actions menu.

In the New Child Resource page, give it a name. As shown below, I’m calling this resource books. Leave the Resource Path as is. Make sure Enable API Gateway CORS is checked. Proceed by clicking Create Resource.

The books resource will now appear under the default resource. We can now create a method. Choose Create Method from the Actions menu.

Select the GET HTTP verb.

In the Integration Type page, select Lambda Function. In the Lambda Function text box, type the name of the Lambda function you created and select it from the list. Click Save.

In the next page, just click OK. This is just providing permission for API Gateway to invoke our Lambda function.

Once created, you should have something similar to the one below.

Click TEST at the top of the Client section of the books GET method execution, then click Test on the next page.

You should see something similar to the one below.

We can now see the output of our Lambda function. Take note of the Response Headers, which show that the Content-Type is JSON.
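Based on the fields consumed by the jQuery snippet later in this post, the response body has roughly this shape (the titles here are placeholders; the real values come from the Lambda function):

{
  "catalogue": [
    { "id": 1, "title": "Sample Book One", "author": "Author One" },
    { "id": 2, "title": "Sample Book Two", "author": "Author Two" }
  ]
}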

Deploy our API

We are now ready to deploy our API. Under the Actions menu, click Deploy API.

We have the option to create multiple stages where we deploy our API. Let’s create a Production deployment stage by selecting New Stage and giving it Production as its Stage Name. Click Deploy.

Note: whenever we update our API, we need to re-deploy it.

Once created, you should see the Invoke URL for the newly created stage.

Open your web browser. Using the URL provided and appending the books resource, you should see the JSON values provided by our Lambda function.

We’ve successfully created an API endpoint for our Lambda function. By creating an HTML file stored in Amazon S3 and with the help of jQuery, we can now use the same endpoint in our web application and process the returned JSON data.

 // Fetch the book catalogue from the API Gateway endpoint and render a card for each entry
 $.getJSON("https://9xvlao852a.execute-api.us-east-1.amazonaws.com/Production/books", function(result){ 
    for (var i = 0; i < result['catalogue'].length; i++) { 
      // Append a Bootstrap card showing the book's title and author, plus a details link
      $("#deck").append('<div class="col-md-3"><div class="card-block"><h4 class="card-title">'+ result['catalogue'][i].title +'</h4><p class="card-text">' + result['catalogue'][i].author + '</p><a href="card/'+ result['catalogue'][i].id + '" class="btn btn-primary">Learn More</a></div></div></div>');
    }
 });

 

With AWS Lambda and API Gateway (plus S3), we can create a serverless application. We can add methods that accept parameters over HTTP or format the response of a function in our web API. Imagine running applications which scale without the complexity of managing compute nodes. Remember, we didn’t even have to set up a single web server instance for this project!

Yet another AWS Lambda tutorial

I briefly discussed AWS Lambda months ago, but I feel that example was too simple. Let’s tackle a slightly more complex task: a function that lists books, which we will use behind an API Gateway endpoint in the next post.

To create an AWS Lambda function, login to your AWS console and select Lambda from the Compute Section or select Lambda in the AWS Services search box.

Click Create a function on the AWS Lambda home page.

To simplify the creation of Lambda functions, AWS provides sample blueprints which we could use. For this post, we will be creating a function from scratch, so click Author from scratch.

On the next screen, we can add a trigger for this Lambda function. We will discuss creating a trigger and associating a Lambda function with it later in this tutorial. For now just click Next.

In Step 3, give your function a distinct name. I’m calling it manageBooks. For this example, the runtime I will be using is Python 2.7.

It is possible to develop your serverless functions locally through the Serverless Framework and upload them as an archive file. For this post, we are just going to type our code inline. In the Lambda function code section, copy the code here and paste it into the code area.

What we have here is a method (get_all_lesson) which returns an array of books in JSON format. Take note of the name of the method, as we will use that same name below for the Handler name (lambda_function.get_all_lesson).
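The linked code is the source of truth; just to illustrate its shape, a handler like the sketch below would behave as described (the book entries are made-up sample data):

# lambda_function.py - illustrative sketch, not the actual linked code
def get_all_lesson(event, context):
    catalogue = [
        {"id": 1, "title": "Sample Book One", "author": "Author One"},
        {"id": 2, "title": "Sample Book Two", "author": "Author Two"},
    ]
    # With the Lambda integration, this dictionary is returned to the caller as JSON
    return {"catalogue": catalogue}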

Specifying other settings

Everything that’s executed by AWS Lambda needs permission to do what it’s supposed to do. This is managed by AWS Identity and Access Management (IAM) through roles and policies. We need to create a new basic execution role using the Role menu. Choose Create a new role; I am using myBasicRole for the role name. Don’t select a policy template.

You need to configure two important settings for a Lambda function. The amount of memory affects how much CPU power is allocated and the cost of executing the function; for this simple function, 128 MB is more than enough. The timeout, after which the function is automatically terminated, helps avoid mistakes that could leave a function running far longer than intended; three seconds is fine for this simple function.

You can select Next to review all the configurations, and then select Create function.

In the next screen, after successfully creating the function, select Test to check our function.

Since we are not passing any arguments to our function, we can just use the Hello World event template. To test, click Save and test.

We should see the result of the test execution from the web console with the summary of the execution and the log output.

In the next post, we will create an API Gateway endpoint to consume this AWS Lambda function, and using that endpoint in a page hosted in an S3 bucket, we will display the list of books.


Setup IPSec VPN Server with Raspberry Pi

In my previous post, I shared how to configure a direct connection between my private home network and Google Cloud VPN. In that setup I was using my Raspberry Pi as my local VPN gateway running OpenSwan. In this tutorial I’m going to show how I configured that.

Why did I choose Raspberry Pi?

First, I don’t own or have access to a dedicated VPN device/appliance. A Cisco ASA would have been a good choice just to have that “enterprise grade” experience, but since this is just a PoC, I think the Raspberry Pi is very well suited for it. Second, the low power consumption of this pocket-sized computer makes it a better choice: instead of running a power-hungry x86 server or DLXXX hardware, I can leave it up and running all night without worrying about my electricity bill going up. And since we are using OpenSwan, you can definitely run this on any commodity hardware.

On with the installation

I already have my Pi up and running Raspbian Jessie Lite, since I was using it as my Kodi media server. All I need to do now is install OpenSwan.

root@gateway:~# apt-get install openswan

When prompted ‘Use an X.509 certificate for this host?’, answer ‘No’. If you want to add one later, run ‘dpkg-reconfigure openswan’ to come back to this prompt.

Once installed, let’s configure our ipsec.secrets

root@gateway:~# vi /etc/ipsec.secrets

Add the following line at the end of the file. Replace <raspberrypi_IP> with the IP address of your Pi and <pre-shared-key-password> with your own secret. This PSK will be used by both peers for authentication (RFC 2409); generate a long PSK of at least 30 characters to mitigate brute-force attacks.

<raspberrypi_IP> %any: PSK "<pre-shared-key-password>"

We now need to define our VPN connection. Edit ipsec.conf

root@gateway:~# vi /etc/ipsec.conf

Add the following connection definition at the bottom of the config file.

## connection definition in vpc-google ##
conn vpc-google     #vpc-google is the name of the connection
 auto=add
 authby=secret     #since we are using PSK, this is set to secret 
 type=tunnel #OpenSwan support l2tpd as well. For site-to-site use tunnel
 leftsubnet=192.168.0.0/24 # This is our local subnet.
 rightsubnet=10.128.0.0/20 # Remote site subnet.
 leftid=xx.xx.xxx.xx # My public IP
 left=192.168.0.100 # Raspberry PI ip address
 leftsourceip=192.168.0.100 # Raspberry PI ip address
 right=%any
 aggrmode=no

 

Under the default connection section, I also set the following:

keyexchange=ike
nat_traversal=yes

I forgot to mention that this Raspberry Pi is behind my router, so I had to set up port forwarding. IPsec uses UDP ports 500 and 4500; you will need to forward these if your gateway sits behind a router.

Restart the OpenSwan service.

root@gateway:~# service ipsec restart

All we need to do now is to configure a VPN Connection in GCP.

Once configured, we can do the following to check if it’s working as expected.

Check ipsec status

root@gateway:~# service ipsec status
● ipsec.service - LSB: Start Openswan IPsec at boot time
 Loaded: loaded (/etc/init.d/ipsec)
 Active: active (running) since Thu 2017-07-20 14:23:39 UTC; 18s ago
 Process: 6866 ExecStop=/etc/init.d/ipsec stop (code=exited, status=0/SUCCESS)
 Process: 6964 ExecStart=/etc/init.d/ipsec start (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/ipsec.service
 ├─7090 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --force_busy no --nocrsend no --strictcrlpolicy no --nat_traversal yes -...
 ├─7091 logger -s -p daemon.error -t ipsec__plutorun
 ├─7094 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --force_busy no --nocrsend no --strictcrlpolicy no --nat_traversal yes -...
 ├─7095 /bin/sh /usr/lib/ipsec/_plutoload --wait no --post
 ├─7096 /usr/lib/ipsec/pluto --nofork --secretsfile /etc/ipsec.secrets --ipsecdir /etc/ipsec.d --use-auto --uniqueids --nat_traversal --v...
 ├─7100 pluto helper # 0 
 └─7188 _pluto_adns

Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #1: new NAT mapping for #1, was 35.188.205.71:500, now 35.188.205.71:4500
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #1: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHAR...modp1024}
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #1: the peer proposed: 192.168.0.0/24:0/0 -> 10.128.0.0/20:0/0
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: responding to Quick Mode proposal {msgid:3e4ab184}
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: us: 192.168.0.0/24===192.168.0.100<192.168.0.100>[xx.xxx.xx.xx]
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: them: 35.188.205.71===10.128.0.0/20
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
Jul 20 14:23:45 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
Jul 20 14:23:45 gateway pluto[7096]: "vpc-google"[1] 35.188.205.71 #2: STATE_QUICK_R2: IPsec SA established tunnel mode {ESP/NAT=>0x1baf69...DPD=none}
Hint: Some lines were ellipsized, use -l to show in full.
root@gateway:~#

35.188.205.71 is my GCP VPN gateway IP. We need to see “IPsec SA established tunnel mode” to confirm everything is working fine.

ipsec auto --status

root@gateway:~# ipsec auto --status
000 using kernel interface: netkey
000 interface lo/lo ::1
000 interface lo/lo 127.0.0.1
000 interface lo/lo 127.0.0.1
000 interface eth0/eth0 192.168.0.100
000 interface eth0/eth0 192.168.0.100
000 interface wlan0/wlan0 192.168.1.1
000 interface wlan0/wlan0 192.168.1.1
000 %myid = (none)
000 debug none
000 
000 virtual_private (%priv):
000 - allowed 6 subnets: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, 25.0.0.0/8, fd00::/8, fe80::/10
000 - disallowed 0 subnets: 
000 WARNING: Disallowed subnets in virtual_private= is empty. If you have 
000 private address space in internal use, it should be excluded!
000 
000 algorithm ESP encrypt: id=2, name=ESP_DES, ivlen=8, keysizemin=64, keysizemax=64
000 algorithm ESP encrypt: id=3, name=ESP_3DES, ivlen=8, keysizemin=192, keysizemax=192
000 algorithm ESP encrypt: id=6, name=ESP_CAST, ivlen=8, keysizemin=40, keysizemax=128
000 algorithm ESP encrypt: id=11, name=ESP_NULL, ivlen=0, keysizemin=0, keysizemax=0
000 algorithm ESP encrypt: id=12, name=ESP_AES, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=13, name=ESP_AES_CTR, ivlen=8, keysizemin=160, keysizemax=288
000 algorithm ESP encrypt: id=14, name=ESP_AES_CCM_A, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=15, name=ESP_AES_CCM_B, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=16, name=ESP_AES_CCM_C, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=18, name=ESP_AES_GCM_A, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=19, name=ESP_AES_GCM_B, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=20, name=ESP_AES_GCM_C, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP auth attr: id=1, name=AUTH_ALGORITHM_HMAC_MD5, keysizemin=128, keysizemax=128
000 algorithm ESP auth attr: id=2, name=AUTH_ALGORITHM_HMAC_SHA1, keysizemin=160, keysizemax=160
000 algorithm ESP auth attr: id=5, name=AUTH_ALGORITHM_HMAC_SHA2_256, keysizemin=256, keysizemax=256
000 algorithm ESP auth attr: id=6, name=AUTH_ALGORITHM_HMAC_SHA2_384, keysizemin=384, keysizemax=384
000 algorithm ESP auth attr: id=7, name=AUTH_ALGORITHM_HMAC_SHA2_512, keysizemin=512, keysizemax=512
000 algorithm ESP auth attr: id=9, name=AUTH_ALGORITHM_AES_CBC, keysizemin=128, keysizemax=128
000 algorithm ESP auth attr: id=251, name=AUTH_ALGORITHM_NULL_KAME, keysizemin=0, keysizemax=0
000 
000 algorithm IKE encrypt: id=0, name=(null), blocksize=16, keydeflen=131
000 algorithm IKE encrypt: id=5, name=OAKLEY_3DES_CBC, blocksize=8, keydeflen=192
000 algorithm IKE encrypt: id=7, name=OAKLEY_AES_CBC, blocksize=16, keydeflen=128
000 algorithm IKE hash: id=1, name=OAKLEY_MD5, hashsize=16
000 algorithm IKE hash: id=2, name=OAKLEY_SHA1, hashsize=20
000 algorithm IKE hash: id=4, name=OAKLEY_SHA2_256, hashsize=32
000 algorithm IKE hash: id=6, name=OAKLEY_SHA2_512, hashsize=64
000 algorithm IKE dh group: id=2, name=OAKLEY_GROUP_MODP1024, bits=1024
000 algorithm IKE dh group: id=5, name=OAKLEY_GROUP_MODP1536, bits=1536
000 algorithm IKE dh group: id=14, name=OAKLEY_GROUP_MODP2048, bits=2048
000 algorithm IKE dh group: id=15, name=OAKLEY_GROUP_MODP3072, bits=3072
000 algorithm IKE dh group: id=16, name=OAKLEY_GROUP_MODP4096, bits=4096
000 algorithm IKE dh group: id=17, name=OAKLEY_GROUP_MODP6144, bits=6144
000 algorithm IKE dh group: id=18, name=OAKLEY_GROUP_MODP8192, bits=8192
000 algorithm IKE dh group: id=22, name=OAKLEY_GROUP_DH22, bits=1024
000 algorithm IKE dh group: id=23, name=OAKLEY_GROUP_DH23, bits=2048
000 algorithm IKE dh group: id=24, name=OAKLEY_GROUP_DH24, bits=2048
000 
000 stats db_ops: {curr_cnt, total_cnt, maxsz} :context={0,0,0} trans={0,0,0} attrs={0,0,0} 
000 
000 "vpc-google": 192.168.0.0/24===192.168.0.100<192.168.0.100>[xx.xx.xx.xx]...%any===10.128.0.0/20; unrouted; eroute owner: #0
000 "vpc-google": myip=192.168.0.100; hisip=unset;
000 "vpc-google": ike_life: 3600s; ipsec_life: 1200s; rekey_margin: 180s; rekey_fuzz: 100%; keyingtries: 3 
000 "vpc-google": policy: PSK+ENCRYPT+TUNNEL+PFS+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,20; interface: eth0; 
000 "vpc-google": newest ISAKMP SA: #0; newest IPsec SA: #0; 
000 "vpc-google": IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)_000-MODP1024(2); flags=-strict
000 "vpc-google": IKE algorithms found: AES_CBC(7)_256-SHA1(2)_160-MODP1024(2)
000 "vpc-google": ESP algorithms wanted: AES(12)_256-SHA1(2)_000; flags=-strict
000 "vpc-google": ESP algorithms loaded: AES(12)_256-SHA1(2)_160
000 "vpc-google"[1]: 192.168.0.0/24===192.168.0.100<192.168.0.100>[xx.xx.xxx.xx]...35.188.205.71===10.128.0.0/20; erouted; eroute owner: #2
000 "vpc-google"[1]: myip=192.168.0.100; hisip=unset;
000 "vpc-google"[1]: ike_life: 3600s; ipsec_life: 1200s; rekey_margin: 180s; rekey_fuzz: 100%; keyingtries: 3 
000 "vpc-google"[1]: policy: PSK+ENCRYPT+TUNNEL+PFS+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,20; interface: eth0; 
000 "vpc-google"[1]: newest ISAKMP SA: #1; newest IPsec SA: #2; 
000 "vpc-google"[1]: IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)_000-MODP1024(2); flags=-strict
000 "vpc-google"[1]: IKE algorithms found: AES_CBC(7)_256-SHA1(2)_160-MODP1024(2)
000 "vpc-google"[1]: IKE algorithm newest: AES_CBC_128-SHA1-MODP1024
000 "vpc-google"[1]: ESP algorithms wanted: AES(12)_256-SHA1(2)_000; flags=-strict
000 "vpc-google"[1]: ESP algorithms loaded: AES(12)_256-SHA1(2)_160
000 "vpc-google"[1]: ESP algorithm newest: AES_128-HMAC_SHA1; pfsgroup=<Phase1>
000 
000 #2: "vpc-google"[1] 35.188.205.71:4500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 671s; newest IPSEC; eroute owner; isakmp#1; idle; import:not set
000 #2: "vpc-google"[1] 35.188.205.71 esp.1baf698c@35.188.205.71 esp.c810f1e5@192.168.0.100 tun.0@35.188.205.71 tun.0@192.168.0.100 ref=0 refhim=4294901761
000 #1: "vpc-google"[1] 35.188.205.71:4500 STATE_MAIN_R3 (sent MR3, ISAKMP SA established); EVENT_SA_REPLACE in 3070s; newest ISAKMP; lastdpd=20s(seq in:0 out:0); idle; import:not set
000 
root@gateway:~#

If the tunnel isn’t coming up, your best pal is tcpdump. Initiate a ping or some other traffic from the remote site to your local network; I prefer to start by pinging my local VPN gateway from one of my cloud instances.

root@gateway:~# tcpdump -n "port 4500" -vvvv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:35:24.609126 IP (tos 0x0, ttl 64, id 448, offset 0, flags [DF], proto UDP (17), length 29)
 192.168.0.100.4500 > 35.188.205.71.4500: [bad udp cksum 0xb22a -> 0x2ba3!] isakmp-nat-keep-alive
14:35:24.609953 IP (tos 0x0, ttl 64, id 449, offset 0, flags [DF], proto UDP (17), length 29)
 192.168.0.100.4500 > 35.188.205.71.4500: [bad udp cksum 0xb22a -> 0x2ba3!] isakmp-nat-keep-alive

If everything is OK, we can test connectivity from our local network to any of our remote instances.

root@gateway:~# ping 10.128.0.3
PING 10.128.0.3 (10.128.0.3) 56(84) bytes of data.
64 bytes from 10.128.0.3: icmp_seq=1 ttl=64 time=210 ms
64 bytes from 10.128.0.3: icmp_seq=2 ttl=64 time=211 ms
^C
--- 10.128.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 210.571/210.973/211.375/0.402 ms
root@gateway:~# ssh root@10.128.0.3
The authenticity of host '10.128.0.3 (10.128.0.3)' can't be established.
ECDSA key fingerprint is 8f:f7:62:4f:1e:85:ad:1e:50:cc:bc:21:fd:ae:bb:9e.
Are you sure you want to continue connecting (yes/no)? 

From the above, we can see that we are able to connect to one of my instances in GCP.


Connecting On-Premise network to a Virtual Private Cloud Network

The last time I worked on connecting two sites using OpenSwan was more than 10 years ago. That deployment was a success, with good throughput using commodity hardware and open source software. This time I wanted to test whether I could do the same using a Raspberry Pi.

In this post I want to show how I was able to configure Google Compute Engine VPN and connect my home network.

Under Networking, click VPN, then click Create VPN connection. This opens a page which guides you through creating a VPN connection.

Under the Create a VPN connection form, we must first create a static IP Address that can be used by our VPN Gateway. Under the IP Address dropdown list box, select Create IP Address.

 

Put in a name to distinguish this IP Address and click RESERVE.

Complete the form by giving it a Name and selecting a Region where we will deploy this VPN gateway. Here I am using us-central1.

Put your VPN gateway’s public IP address in the Remote peer IP address field. This is the IP address of your home network VPN gateway; I am using a Raspberry Pi running OpenSwan. Since this gateway is behind my router, I am using port forwarding (UDP 500/4500). (Installing and configuring OpenSwan/IPsec on the Raspberry Pi deserves a separate post.)

Select the IKE version. Mine is using IKEv1.

In the Remote network IP ranges, enter your home network range. Select the Local subnetworks (Google Cloud side) which you want to associate this tunnel to.

Click Create. Deploying this could take a minute or two to complete.

Once done, you should be able to see that the Remote peer IP address is up with a green check icon.

In one of my compute instances, I can verify that the tunnel is up by pinging my home network.

Or by running tcpdump on my local VPN gateway.
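Those checks amount to something like the following; the addresses come from this setup (the Pi is 192.168.0.100) and will differ in yours:

# from a GCP compute instance, ping a host on the home network
ping 192.168.0.100

# on the Raspberry Pi (the local VPN gateway), watch the IPsec traffic come in
tcpdump -n "port 4500"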

We now have a secure way of connecting our on-premise network to our Virtual Private Cloud network.

One thing to note: if you delete the VPN connection, you must also release the static IP address you reserved for the VPN gateway, so that you don’t incur additional cost.


Let’s Git it on!

Install Git

# yum install git
Loaded plugins: fastestmirror, langpacks
base | 3.6 kB 00:00:00
epel/x86_64/metalink | 5.4 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 3.4 kB 00:00:00
google-chrome | 951 B 00:00:00
nux-dextop | 2.9 kB 00:00:00
updates | 3.4 kB 00:00:00
epel/x86_64/primary_db FAILED ] 0.0 B/s | 1.2 MB --:--:-- ETA
http://mirror.rise.ph/fedora-epel/7/x86_64/repodata/167fde3ffebcbd63c6850b6c2301b20d575eb884d2657a26003f078878c52a77-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

https://access.redhat.com/articles/1320623

If above article doesn’t help to resolve this issue please create a bug on https://bugs.centos.org/

(1/5): epel/x86_64/group_gz | 170 kB 00:00:00
(2/5): updates/7/x86_64/primary_db | 4.8 MB 00:00:01
(3/5): epel/x86_64/updateinfo | 799 kB 00:00:04
(4/5): epel/x86_64/primary_db | 4.7 MB 00:00:07
(5/5): nux-dextop/x86_64/primary_db | 1.7 MB 00:00:11
Loading mirror speeds from cached hostfile
* base: mirror.qoxy.com
* epel: mirror.rise.ph
* extras: mirror.qoxy.com
* nux-dextop: mirror.li.nux.ro
* updates: mirror.qoxy.com
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.8.3.1-6.el7_2.1 will be installed
--> Processing Dependency: perl-Git = 1.8.3.1-6.el7_2.1 for package: git-1.8.3.1-6.el7_2.1.x86_64
--> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-6.el7_2.1.x86_64
--> Processing Dependency: perl(Git) for package: git-1.8.3.1-6.el7_2.1.x86_64
--> Processing Dependency: perl(Error) for package: git-1.8.3.1-6.el7_2.1.x86_64
--> Running transaction check
---> Package perl-Error.noarch 1:0.17020-2.el7 will be installed
---> Package perl-Git.noarch 0:1.8.3.1-6.el7_2.1 will be installed
---> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================
Installing:
git x86_64 1.8.3.1-6.el7_2.1 base 4.4 M
Installing for dependencies:
perl-Error noarch 1:0.17020-2.el7 base 32 k
perl-Git noarch 1.8.3.1-6.el7_2.1 base 53 k
perl-TermReadKey x86_64 2.30-20.el7 base 31 k

Transaction Summary
======================================================================================================================================================
Install 1 Package (+3 Dependent packages)

Total download size: 4.5 M
Installed size: 22 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): git-1.8.3.1-6.el7_2.1.x86_64.rpm | 4.4 MB 00:00:07
(2/4): perl-Git-1.8.3.1-6.el7_2.1.noarch.rpm | 53 kB 00:00:00
(3/4): perl-TermReadKey-2.30-20.el7.x86_64.rpm | 31 kB 00:00:00
(4/4): perl-Error-0.17020-2.el7.noarch.rpm | 32 kB 00:00:10
------------------------------------------------------------------------------------------------------------------------------------------------------
Total 425 kB/s | 4.5 MB 00:00:10
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:perl-Error-0.17020-2.el7.noarch 1/4
Installing : perl-TermReadKey-2.30-20.el7.x86_64 2/4
Installing : git-1.8.3.1-6.el7_2.1.x86_64 3/4
Installing : perl-Git-1.8.3.1-6.el7_2.1.noarch 4/4
Verifying : perl-Git-1.8.3.1-6.el7_2.1.noarch 1/4
Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/4
Verifying : 1:perl-Error-0.17020-2.el7.noarch 3/4
Verifying : git-1.8.3.1-6.el7_2.1.x86_64 4/4

Installed:
git.x86_64 0:1.8.3.1-6.el7_2.1

Dependency Installed:
perl-Error.noarch 1:0.17020-2.el7 perl-Git.noarch 0:1.8.3.1-6.el7_2.1 perl-TermReadKey.x86_64 0:2.30-20.el7

Complete!

Generate SSH Keys

# ssh-keygen -t rsa -b 4096 -C "youremail@address.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fe:41:cc:88:23:0f:9d:0d:8b:f4:b6:a6:2f:70:45:4f youremail@address.com
The key's randomart image is:
+--[ RSA 4096]----+
| |
| . E |
| ...o |
| . +.*.+ |
| +.O S + |
| . .= + . |
| o + . . |
| .o . . |
| .o. . |
+-----------------+

Add your SSH key to your SSH agent.

Start SSH agent in the background

# eval "$(ssh-agent -s)"
Agent pid 11120

Add your SSH private key to the ssh-agent

#ssh-add ~/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa:
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

Add the SSH key to your Github account

First, copy the SSH key to your clipboard.

# xclip -sel clip < ~/.ssh/id_rsa.pub

Log in to your GitHub account and, in the upper-right corner of any page, click your profile photo, then click Settings.

In the user settings sidebar, click SSH and GPG keys.

Click New SSH key or Add SSH key

In the Title field, add a descriptive name for this SSH key. Paste your key into the Key field.

Click Add SSH key

If prompted, enter your GitHub password.

You now have Git configured!
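As a quick sanity check (not part of the original steps above, but a standard GitHub test), you can verify that the key is picked up when connecting to GitHub; you should get back a short greeting with your username:

# ssh -T git@github.com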

Google Cloud Platform Functions

I want to share my experience testing Google Cloud Platform’s serverless offering, Cloud Functions. I’ve been playing around with GCP for more than a week now (moving my site, resizing it, etc.) and I had this itch to test Cloud Functions after coming out of an AWS SNG User Group meeting that talked about AWS Lambda.

Let’s get started.

Log in to your Cloud Console (if you haven’t signed up yet, go create an account and make use of that $300 free trial offering, valid for a year!).

Under COMPUTE section, click Cloud Functions.

Click Create Function

Give your function a name. I’m going to use storageTriggeredImageProcessor. I’ll be using the us-central1 region (since it’s covered by the free tier). You need to consider latency when selecting a region. I’m not sure if GCP will have something like Lambda@Edge in the future.

Remember that for Cloud Functions you are billed by how long your function executes and by the amount of memory you allocate to it. I’ll choose 128 MB for this test.

For Timeout, I’ll stick with the default, which is 60s.

We are going to use a Cloud Storage bucket under the Trigger section. This will ask us to select which bucket to “monitor”.

Under the source code section, we are presented with a skeleton template of our function. package.json declares which other client libraries (APIs) you want to use. You can copy the code and package.json from my GitHub repository.
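As an illustration of what that looks like (the real files are in the repo, and the version number here is only an example), a package.json pulling in the Cloud Vision client library might be:

{
  "name": "storage-triggered-image-processor",
  "version": "1.0.0",
  "dependencies": {
    "@google-cloud/vision": "^0.12.0"
  }
}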

Cloud Functions need to be staged in a Cloud Storage bucket. Select or create one where we can store our code.

Click Create.

It would take a minute or two for GCP to stage our function.

Once done, we can test it. Since we are only logging the output of this function, go to Function Details and click View Logs.

On another window, upload an image file to your Storage bucket and watch the log entries.

I will be uploading the image below

And on the logs, we can see what Google Vision detected on this image.

We’ve successfully created and tested a GCP Cloud Function. Remember, we ran an application without provisioning any application servers (serverless!).

Note:

If you want to use the Google Vision API or any other API, you need to enable it under the APIs section. Otherwise your function won’t execute correctly and will throw an exception.


Deploying a LAMP server in Google Cloud Platform

Since my Raspberry Pi, which hosts my WordPress site, has been quirky the past couple of weeks, I decided to try out Google Cloud Platform.

Like the AWS 12-month free tier, GCP gives out a $300 free trial to get you started. You need a credit card to sign up, and you can read more about the free trial at this link.

Using GCP’s Cloud Launcher, we can deploy solutions and applications in an instant (think AWS Marketplace). In this tutorial, we are deploying a LAMP server which could later host a WordPress site.

Log in to the GCP Console and select/create a project. Under Compute Engine – VM instances, click CREATE INSTANCE.

 

Select a zone in any of the following regions: us-east1, us-west1, or us-central1. Using an f1-micro machine type keeps us well within the “744 hours monthly free” limit. The capacity might not be much, but it’s enough to get a LAMP server started.

I want to use CentOS this time, so under Boot disk, select the CentOS 7 image. The 10 GB persistent disk is plenty for my site, so I’ll just go ahead and click Select.

 

I’ll check both options to allow HTTP and HTTPS traffic under the Firewall section and hit Create to start deploying this instance.

 

Once GCP is done, you should see the following

Click SSH. This will open a new window which automatically gives us an SSH connection to our instance. (No more managing keys?)

Run sudo su to become root, and keep things up to date by running yum update -y.

Let’s install Apache by running the following:

yum install httpd

Install MariaDB by executing the following:

yum install mariadb-server mariadb

Install php and php-mysql

yum install php php-mysql

Let’s now configure our MySQL/MariaDB server. Set the root password and remove the test user and test database when prompted after running the following:

mysql_secure_installation

Make sure Apache and MariaDB start at boot time:

chkconfig httpd on
chkconfig mariadb on

If they’re not yet started, start the Apache and MariaDB servers:

service mariadb start
service httpd start

Using the external IP of our instance, let’s check if our web server is accessible.
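From the command line, a quick check would be something like this (replace the placeholder with your instance’s external IP):

curl http://<EXTERNAL_IP>/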

We could create a simple page to test if PHP is working

[root@wordpress paulinomreyes]# cat /var/www/html/info.php
<?php

phpinfo();

?>
[root@wordpress paulinomreyes]# 

And navigating to info.php we should see the following

We just deployed a LAMP server on Google Cloud Platform. Using this instance, we could then deploy a WordPress site.

Google Cloud Platform, in my opinion, is “minimalist” (compared to how much AWS offers), but the platform gives me the basics I need for now. In the future, I want to try out how to configure load balancing, and how Container Engine and Cloud Functions (the equivalent, I guess, of AWS Lambda) work.


Ansible-Vault How-To

A short tutorial on using ansible-vault to store sensitive information.

Here I have an Ansible inventory file (hosts) that utilizes group_vars, where I store connection details and credentials.

[root@ansible vault]# tree .
.
├── inventory
│   ├── group_vars
│   │   └── web
│   └── hosts
└── wget.yml

 

[root@ansible vault]# cat inventory/hosts
[web]
192.168.0.54

[root@ansible vault]# cat inventory/group_vars/web

ansible_connection: ssh
ansible_user: root
ansible_ssh_pass: P@ssw0rd

 

Since the credentials are in plain text, the contents are visible to anyone who has access to this file. We can use ansible-vault, which is provided by the ansible-core package, to encrypt it with a password.

[root@ansible vault]# ansible-vault encrypt inventory/group_vars/web
New Vault password:
Confirm New Vault password:
Encryption successful

The web file is now encrypted (AES-256):

[root@ansible vault]# cat inventory/group_vars/web
$ANSIBLE_VAULT;1.1;AES256
66613032646237636338346230363465653436313539313235393331663434666637303031323864
6331656237323166376336396431333666316335353764380a313937356336616265646562336237
65616631346661623566633734303664646138636335643466393534623661393261383238303633
3136356131616239640a626635633466383234366130643031393034623165313938393066373237
63363562393530336234373237393464356439643731346538323834616166363864656337613539
38643263396335623831316236303933383532636663373138353433633638613838623933396134
65343964653934366632663031393265316661656238653662313539313234316536303464303737
30653265303439303465
[root@ansible vault]#

 

When we run the ansible (or ansible-playbook) command, we can add --ask-vault-pass and, when prompted, enter the password we used to encrypt the file.

[root@ansible vault]# ansible web -i inventory/hosts -m ping --ask-vault-pass

Vault password:
192.168.0.54 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
}
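To avoid typing the password interactively (for example, when running from a script), ansible-vault also supports a password file, and encrypted files can be viewed or edited in place. These commands are not from the run above, but they are standard ansible-vault usage:

# store the vault password in a file readable only by you
echo 'MyVaultPassword' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt

# run the same ad-hoc command without the interactive prompt
ansible web -i inventory/hosts -m ping --vault-password-file ~/.vault_pass.txt

# view or edit the encrypted file without permanently decrypting it
ansible-vault view inventory/group_vars/web
ansible-vault edit inventory/group_vars/web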