How to install Kubernetes on CentOS

Kubernetes, originally developed by Google, is a cluster management and orchestration engine for Docker containers.

In this session I tried kubeadm to deploy a Kubernetes cluster. I used my OpenStack environment for this PoC and provisioned two CentOS compute nodes as follows:

k8s-master will run the API Server, the kubectl utility, the Scheduler, etcd, and the Controller Manager.

k8s-worker will be our worker node and will run the kubelet, kube-proxy, and our pods.

On both systems, execute the following:

  • yum update -y
  • set SELinux to disabled (/etc/selinux/config)
  • and update /etc/hosts, making sure entries for both systems exist
  • Reboot, Reboot, Reboot!

Configure the Kubernetes repo by adding the following:

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
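The contents of the repo file were lost in the capture; the stock definition from the official Kubernetes install docs of that era (note these URLs have since been retired in favor of looked like this:

[kubernetes]
enabled = 1
gpgcheck = 1
repo_gpgcheck = 1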

Configure Kubernetes Master Node

Execute the following on the Master Node

yum install docker kubeadm -y
systemctl restart kubelet && systemctl enable kubelet

Initialize Kubernetes Master with

kubeadm init

You should see something similar to the following

[root@k8s-master etc]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname “k8s-master” could not be reached
[preflight] WARNING: hostname “k8s-master” lookup k8s-master on no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs []
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 437.011125 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value:""
[bootstraptoken] Using token: 5cf1b4.23d95a40a9d5f674
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 5cf1b4.23d95a40a9d5f674 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106

As shown in the first highlighted section of the output above, execute the following:

[root@k8s-master kubernetes]# cd ~
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) .kube/config
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    9m        v1.8.0
[root@k8s-master ~]#

Configure Network

As you can see from the output of kubectl get nodes, our Kubernetes Master still shows NotReady. This is because we haven’t deployed our overlay network. If you look at your /var/log/messages, you’ll see entries similar to the one below

Oct 4 15:41:09 [localhost] kubelet: E1004 15:41:09.589532 2515 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

To fix this, run the following to deploy our Weave Net overlay network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "$kubever"
serviceaccount “weave-net” created
clusterrole “weave-net” created
clusterrolebinding “weave-net” created
daemonset “weave-net” created
[root@k8s-master ~]#

Checking our Kubernetes Master node again,

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    18m       v1.8.0
[root@k8s-master ~]#

Configure Worker Node

Time to configure our Worker Node. Log in to the Worker Node and execute the following command:

yum install kubeadm docker -y

After successfully installing kubeadm and docker on our Worker Node, run the following command

systemctl restart docker && systemctl enable docker

We need to join this Worker Node to our Kubernetes cluster. From the second highlighted section of the kubeadm init output above, execute the kubeadm join command on the Worker Node.

[root@k8s-worker ~]# kubeadm join --token 5cf1b4.23d95a40a9d5f674 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server “”
[discovery] Created cluster-info discovery client, requesting info from “”
[discovery] Requesting info from “” again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server “”
[discovery] Successfully established connection with API Server “”
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (

Node join complete:
* Certificate signing request sent to master and response
* Kubelet informed of new secure connection details.

Run ‘kubectl get nodes’ on the master to see this machine join.
[root@k8s-worker ~]#

Using the same steps, you can add multiple Worker Nodes to our Kubernetes cluster.

As suggested, let’s now check from our Kubernetes Master Node if the Worker Node was added successfully to our cluster.

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
k8s-master             Ready     master    52m       v1.8.0
k8s-worker.novalocal   Ready     <none>    7m        v1.8.0
[root@k8s-master ~]#

We can see from the output above that our Kubernetes Master and Worker node are in Ready Status.

We have successfully installed a Kubernetes cluster using kubeadm and joined a Worker Node to it. With this environment we can now create pods and services.

Developing RESTful APIs with AWS API Gateway

If you followed the previous post, you now have a functioning AWS Lambda function. But how do we expose or trigger this function, say for a web application/client?

AWS API Gateway is an AWS service that lets developers create, publish, monitor, and secure APIs. These APIs can front another AWS service, in this case AWS Lambda functions, other web services, or even data stored in the cloud. We can create RESTful APIs to enable applications to access AWS Cloud services.

Let’s start building

To start, let's build a basic web API that invokes our Lambda function via an HTTP GET request. Go to the Application services section, or search for API Gateway in the AWS Services search box.

It's a good idea to choose the same region you used previously for your Lambda function. Click Get Started on the API Gateway home page.

In the next page, give your API a name. I’m calling this API manageBooksAPI. Click Create API.

Leave the default resource (/) and create a new one by clicking Create Resource from the Actions menu.

In the New Child Resource page, give it a name. As shown below, I'm calling this resource books. Leave the Resource Path as is. Make sure Enable API Gateway CORS is checked. Proceed by clicking Create Resource.

The books resource will now appear under the default resource. We can now create a method. Choose Create Method under the Actions menu.

Select the GET HTTP verb.

In the Integration Type page, select Lambda Function. In the Lambda Function text box, type the name of the Lambda function you created and select it from the list. Click Save.

On the next page, just click OK. This simply grants API Gateway permission to invoke our Lambda function.

Once created, you should have something similar to the one below.

Click TEST at the top of the Client section of the books GET Method Execution, then click Test on the next page.

You should see something similar to the one below.

We can now see the output of our Lambda function. Take note of the Response Headers, which show that the Content-Type is application/json.

Deploy our API

We are now ready to deploy our API. Under the Actions menu, click Deploy API.

We have the option to create multiple stage environments to deploy our API to. Let's create a production deployment stage by selecting New Stage and entering Production as its Stage Name. Click Deploy.

Note: whenever we update our API, we need to re-deploy it.

Once created, you should see the Invoke URL for the newly created  stage environment.

Open your web browser. Using the URL provided and appending the books resource, you should see the JSON values provided by our Lambda function.
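For illustration, the invoke URL follows API Gateway's standard pattern; with a hypothetical API id, region, and our Production stage, the full request for the books resource would look like:
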

We've successfully created an API endpoint for our Lambda function. By creating an HTML file stored in Amazon S3 and with the help of jQuery, we can use the same endpoint in our web application and process the returned JSON data.

 $.getJSON("", function(result){
    // Loop over the returned catalogue and append a card for each book.
    for (var i = 0; i < result['catalogue'].length; i++) {
      $("#deck").append('<div class="col-md-3"><div class="card-block"><h4 class="card-title">'+ result['catalogue'][i].title +'</h4><p class="card-text">' + result['catalogue'][i].author + '</p><a href="card/'+ result['catalogue'][i].id + '" class="btn btn-primary">Learn More</a></div></div></div>');
    }
 });

With AWS Lambda and API Gateway (+ S3), we can create a serverless application. We can add methods that accept parameters over HTTP or that format a function's response in our web API. Imagine running applications that scale without the complexity of managing compute nodes. Remember, we didn't even have to set up a single web server instance for this project!

Yet another AWS Lambda tutorial

I briefly discussed AWS Lambda months ago, but that example was too simple. Let's create a slightly more complex task: a function that lists books, which we will use in our API Gateway endpoint in the next post.

To create an AWS Lambda function, log in to your AWS console and select Lambda from the Compute section, or select Lambda in the AWS Services search box.

Click Create a function on the AWS Lambda home page.

To simplify the creation of Lambda functions, AWS provides sample blueprints we could use. For this session we will create a function from scratch, so click Author from scratch.

On the next screen, we can add a trigger for this Lambda function. We will discuss creating a trigger and associating a Lambda function with it later in this tutorial. For now, just click Next.

In Step 3, give your function a distinct name. I'm calling it manageBooks. For this example, I will use the Python 2.7 runtime.

It is possible to develop your serverless functions locally through the Serverless Framework and upload them as an archive file. For this session, we are just going to type our code inline. In the Lambda function code section, copy the code here and paste it into the code area.

What we have here is a method (get_all_lesson) that returns an array of books in JSON format. Take note of the method name, as we will use the same name below for the Handler name (lambda_function.get_all_lesson).
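The original code listing isn't reproduced in this capture; a minimal sketch of what such a handler could look like follows. The book entries and field names below are placeholders (chosen to match the id/title/author fields used by the jQuery snippet in the API Gateway post), not the post's actual data.

```python
# Hypothetical reconstruction of the manageBooks handler -- the post's
# actual listing is not shown. The method name must match the Handler
# configured in the console: lambda_function.get_all_lesson

def get_all_lesson(event, context):
    # Return a catalogue of books; Lambda/API Gateway serialize this
    # dict to JSON for the caller.
    return {
        "catalogue": [
            {"id": 1, "title": "Sample Book One", "author": "Author A"},
            {"id": 2, "title": "Sample Book Two", "author": "Author B"},
        ]
    }
```

Since the function ignores its event, any test event will produce the same catalogue.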

Specifying other settings

Everything executed by AWS Lambda needs permission to do what it's supposed to do. This is managed by AWS Identity and Access Management (IAM) through roles and policies. We need to create a new basic execution role using the Role menu. Choose Create a new role. I am using myBasicRole for the role name. Don't select a policy template.

You need to configure two important settings for a Lambda function. The amount of memory affects both the CPU power allocated and the cost of executing the function; for this simple function, 128 MB is more than enough. The timeout, after which the function is automatically terminated, guards against mistakes that could start a long-running function. Three seconds is fine here.

Select Next to review all the configurations, then select Create function.

In the next screen, after successfully creating the function, select Test to check our function.

Since we are not passing any arguments to our function, we can just use the Hello World event template. To test, click Save and test.
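For reference, the Hello World template is just a small JSON document similar to the one below; its contents don't matter here, since our function never reads the incoming event.

```json
{
  "key3": "value3",
  "key2": "value2",
  "key1": "value1"
}
```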

We should see the result of the test execution from the web console with the summary of the execution and the log output.

In the next post, we will create an API Gateway endpoint to consume this AWS Lambda function, and, using that endpoint in a page hosted in an S3 bucket, we will display the list of books.



Setup IPSec VPN Server with Raspberry Pi

In my previous post, I shared how to configure a direct connection between my private home network and Google Cloud VPN. In that setup I was using my Raspberry Pi as my local VPN gateway running Openswan. In this tutorial I'm going to show how I configured it.

Why did I choose Raspberry Pi?

First, I don't own or have access to a dedicated VPN device/appliance. A Cisco ASA would have been a good choice just to get that "enterprise grade" experience, but since this is just a PoC, the Raspberry Pi is very well suited. Second, the low power consumption of this pocket-sized computer makes it a better choice: instead of running a power-hungry x86 server or DLXXX hardware, I can leave it up and running all night without worrying about my electricity bill going up. And since we are using Openswan, you can definitely run this on any commodity hardware.

On with the installation

I have my Pi up and running Raspbian Jessie Lite, since I was using it as my Kodi media server. All I need to do now is install Openswan.

root@gateway:~# apt-get install openswan

When prompted 'Use an X.509 certificate for this host?', answer 'No'. If you want to add one later, run 'dpkg-reconfigure openswan'.

Once installed, let’s configure our ipsec.secrets

root@gateway:~# vi /etc/ipsec.secrets

Add the following at the end of the file. Replace raspberrypi_IP with the IP address of your Pi, and change pre-shared-key-password to something else. This key will be used by both peers for authentication (RFC 2409). Generate a long PSK of at least 30 characters to mitigate brute-force attacks.

<raspberrypi_IP> %any: PSK "<pre-shared-key-password>"

We now need to define our VPN connection. Edit ipsec.conf

root@gateway:~# vi /etc/ipsec.conf

Add the following connection definition at the bottom part of the config file.

## connection definition in vpc-google ##
conn vpc-google     #vpc-google is the name of the connection
 authby=secret     #since we are using PSK, this is set to secret 
 type=tunnel #Openswan supports L2TP as well. For site-to-site use tunnel
 leftsubnet= # This is our local subnet.
 rightsubnet= # Remote site subnet. # My public IP
 left= # Raspberry PI ip address
 leftsourceip= # Raspberry PI ip address
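The address values in the snippet above were lost in the capture. For reference, a filled-in version with hypothetical placeholder addresses (the Pi's LAN address, the home subnet, and the GCP VPC range below are all made up) might look like:

```ini
conn vpc-google
 authby=secret            # authenticate using the PSK from ipsec.secrets
 type=tunnel              # site-to-site tunnel mode
 left=          # Raspberry Pi LAN address (placeholder)
 leftsourceip=  # source address for traffic from the gateway itself
 leftsubnet=  # local home subnet (placeholder)
 right=%any               # or pin this to the GCP VPN gateway's public IP
 rightsubnet=  # GCP VPC subnet (placeholder)
 auto=add                 # load the connection when the ipsec service starts
```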


Under the default connection, I actually set the following


I forgot to mention that this Raspberry Pi is behind my router, so I had to set up port forwarding. IPsec uses UDP ports 500 and 4500; you need to forward them if your gateway sits behind a router.

Restart openswan service.

root@gateway:~# service ipsec restart

All we need to do now is to configure a VPN Connection in GCP.

Once configured,  we can do the following to check if it’s working as expected.

Check ipsec status

root@gateway:~# service ipsec status
● ipsec.service - LSB: Start Openswan IPsec at boot time
 Loaded: loaded (/etc/init.d/ipsec)
 Active: active (running) since Thu 2017-07-20 14:23:39 UTC; 18s ago
 Process: 6866 ExecStop=/etc/init.d/ipsec stop (code=exited, status=0/SUCCESS)
 Process: 6964 ExecStart=/etc/init.d/ipsec start (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/ipsec.service
 ├─7090 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --force_busy no --nocrsend no --strictcrlpolicy no --nat_traversal yes -...
 ├─7091 logger -s -p daemon.error -t ipsec__plutorun
 ├─7094 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --force_busy no --nocrsend no --strictcrlpolicy no --nat_traversal yes -...
 ├─7095 /bin/sh /usr/lib/ipsec/_plutoload --wait no --post
 ├─7096 /usr/lib/ipsec/pluto --nofork --secretsfile /etc/ipsec.secrets --ipsecdir /etc/ipsec.d --use-auto --uniqueids --nat_traversal --v...
 ├─7100 pluto helper # 0 
 └─7188 _pluto_adns

Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #1: new NAT mapping for #1, was, now
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #1: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHAR...modp1024}
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #1: the peer proposed: ->
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #2: responding to Quick Mode proposal {msgid:3e4ab184}
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #2: us:<>[]
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #2: them:
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #2: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
Jul 20 14:23:44 gateway pluto[7096]: "vpc-google"[1] #2: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
Jul 20 14:23:45 gateway pluto[7096]: "vpc-google"[1] #2: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
Jul 20 14:23:45 gateway pluto[7096]: "vpc-google"[1] #2: STATE_QUICK_R2: IPsec SA established tunnel mode {ESP/NAT=>0x1baf69...DPD=none}
Hint: Some lines were ellipsized, use -l to show in full.
root@gateway:~#

The peer address in the log above is my GCP VPN Gateway IP. We need to see "IPsec SA established tunnel mode" to confirm everything is working fine.

ipsec auto --status

root@gateway:~# ipsec auto --status
000 using kernel interface: netkey
000 interface lo/lo ::1
000 interface lo/lo
000 interface lo/lo
000 interface eth0/eth0
000 interface eth0/eth0
000 interface wlan0/wlan0
000 interface wlan0/wlan0
000 %myid = (none)
000 debug none
000 virtual_private (%priv):
000 - allowed 6 subnets:,,,, fd00::/8, fe80::/10
000 - disallowed 0 subnets: 
000 WARNING: Disallowed subnets in virtual_private= is empty. If you have 
000 private address space in internal use, it should be excluded!
000 algorithm ESP encrypt: id=2, name=ESP_DES, ivlen=8, keysizemin=64, keysizemax=64
000 algorithm ESP encrypt: id=3, name=ESP_3DES, ivlen=8, keysizemin=192, keysizemax=192
000 algorithm ESP encrypt: id=6, name=ESP_CAST, ivlen=8, keysizemin=40, keysizemax=128
000 algorithm ESP encrypt: id=11, name=ESP_NULL, ivlen=0, keysizemin=0, keysizemax=0
000 algorithm ESP encrypt: id=12, name=ESP_AES, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=13, name=ESP_AES_CTR, ivlen=8, keysizemin=160, keysizemax=288
000 algorithm ESP encrypt: id=14, name=ESP_AES_CCM_A, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=15, name=ESP_AES_CCM_B, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=16, name=ESP_AES_CCM_C, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=18, name=ESP_AES_GCM_A, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=19, name=ESP_AES_GCM_B, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP encrypt: id=20, name=ESP_AES_GCM_C, ivlen=8, keysizemin=128, keysizemax=256
000 algorithm ESP auth attr: id=1, name=AUTH_ALGORITHM_HMAC_MD5, keysizemin=128, keysizemax=128
000 algorithm ESP auth attr: id=2, name=AUTH_ALGORITHM_HMAC_SHA1, keysizemin=160, keysizemax=160
000 algorithm ESP auth attr: id=5, name=AUTH_ALGORITHM_HMAC_SHA2_256, keysizemin=256, keysizemax=256
000 algorithm ESP auth attr: id=6, name=AUTH_ALGORITHM_HMAC_SHA2_384, keysizemin=384, keysizemax=384
000 algorithm ESP auth attr: id=7, name=AUTH_ALGORITHM_HMAC_SHA2_512, keysizemin=512, keysizemax=512
000 algorithm ESP auth attr: id=9, name=AUTH_ALGORITHM_AES_CBC, keysizemin=128, keysizemax=128
000 algorithm ESP auth attr: id=251, name=AUTH_ALGORITHM_NULL_KAME, keysizemin=0, keysizemax=0
000 algorithm IKE encrypt: id=0, name=(null), blocksize=16, keydeflen=131
000 algorithm IKE encrypt: id=5, name=OAKLEY_3DES_CBC, blocksize=8, keydeflen=192
000 algorithm IKE encrypt: id=7, name=OAKLEY_AES_CBC, blocksize=16, keydeflen=128
000 algorithm IKE hash: id=1, name=OAKLEY_MD5, hashsize=16
000 algorithm IKE hash: id=2, name=OAKLEY_SHA1, hashsize=20
000 algorithm IKE hash: id=4, name=OAKLEY_SHA2_256, hashsize=32
000 algorithm IKE hash: id=6, name=OAKLEY_SHA2_512, hashsize=64
000 algorithm IKE dh group: id=2, name=OAKLEY_GROUP_MODP1024, bits=1024
000 algorithm IKE dh group: id=5, name=OAKLEY_GROUP_MODP1536, bits=1536
000 algorithm IKE dh group: id=14, name=OAKLEY_GROUP_MODP2048, bits=2048
000 algorithm IKE dh group: id=15, name=OAKLEY_GROUP_MODP3072, bits=3072
000 algorithm IKE dh group: id=16, name=OAKLEY_GROUP_MODP4096, bits=4096
000 algorithm IKE dh group: id=17, name=OAKLEY_GROUP_MODP6144, bits=6144
000 algorithm IKE dh group: id=18, name=OAKLEY_GROUP_MODP8192, bits=8192
000 algorithm IKE dh group: id=22, name=OAKLEY_GROUP_DH22, bits=1024
000 algorithm IKE dh group: id=23, name=OAKLEY_GROUP_DH23, bits=2048
000 algorithm IKE dh group: id=24, name=OAKLEY_GROUP_DH24, bits=2048
000 stats db_ops: {curr_cnt, total_cnt, maxsz} :context={0,0,0} trans={0,0,0} attrs={0,0,0} 
000 "vpc-google":<>[xx.xx.xx.xx]...%any===; unrouted; eroute owner: #0
000 "vpc-google": myip=; hisip=unset;
000 "vpc-google": ike_life: 3600s; ipsec_life: 1200s; rekey_margin: 180s; rekey_fuzz: 100%; keyingtries: 3 
000 "vpc-google": policy: PSK+ENCRYPT+TUNNEL+PFS+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,20; interface: eth0; 
000 "vpc-google": newest ISAKMP SA: #0; newest IPsec SA: #0; 
000 "vpc-google": IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)_000-MODP1024(2); flags=-strict
000 "vpc-google": IKE algorithms found: AES_CBC(7)_256-SHA1(2)_160-MODP1024(2)
000 "vpc-google": ESP algorithms wanted: AES(12)_256-SHA1(2)_000; flags=-strict
000 "vpc-google": ESP algorithms loaded: AES(12)_256-SHA1(2)_160
000 "vpc-google"[1]:<>[]...; erouted; eroute owner: #2
000 "vpc-google"[1]: myip=; hisip=unset;
000 "vpc-google"[1]: ike_life: 3600s; ipsec_life: 1200s; rekey_margin: 180s; rekey_fuzz: 100%; keyingtries: 3 
000 "vpc-google"[1]: policy: PSK+ENCRYPT+TUNNEL+PFS+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,20; interface: eth0; 
000 "vpc-google"[1]: newest ISAKMP SA: #1; newest IPsec SA: #2; 
000 "vpc-google"[1]: IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)_000-MODP1024(2); flags=-strict
000 "vpc-google"[1]: IKE algorithms found: AES_CBC(7)_256-SHA1(2)_160-MODP1024(2)
000 "vpc-google"[1]: IKE algorithm newest: AES_CBC_128-SHA1-MODP1024
000 "vpc-google"[1]: ESP algorithms wanted: AES(12)_256-SHA1(2)_000; flags=-strict
000 "vpc-google"[1]: ESP algorithms loaded: AES(12)_256-SHA1(2)_160
000 "vpc-google"[1]: ESP algorithm newest: AES_128-HMAC_SHA1; pfsgroup=<Phase1>
000 #2: "vpc-google"[1] STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 671s; newest IPSEC; eroute owner; isakmp#1; idle; import:not set
000 #2: "vpc-google"[1] esp.1baf698c@ esp.c810f1e5@ tun.0@ tun.0@ ref=0 refhim=4294901761
000 #1: "vpc-google"[1] STATE_MAIN_R3 (sent MR3, ISAKMP SA established); EVENT_SA_REPLACE in 3070s; newest ISAKMP; lastdpd=20s(seq in:0 out:0); idle; import:not set

If the tunnel isn't coming up or establishing, your best pal is tcpdump. Initiate a ping or some traffic from the remote site to your local network; I prefer to start by pinging my local VPN gateway from one of my cloud instances.

root@gateway:~# tcpdump -n "port 4500" -vvvv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:35:24.609126 IP (tos 0x0, ttl 64, id 448, offset 0, flags [DF], proto UDP (17), length 29) > [bad udp cksum 0xb22a -> 0x2ba3!] isakmp-nat-keep-alive
14:35:24.609953 IP (tos 0x0, ttl 64, id 449, offset 0, flags [DF], proto UDP (17), length 29) > [bad udp cksum 0xb22a -> 0x2ba3!] isakmp-nat-keep-alive

If everything is OK, we can test connectivity from our local network to any of our remote instances.

root@gateway:~# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=210 ms
64 bytes from icmp_seq=2 ttl=64 time=211 ms
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 210.571/210.973/211.375/0.402 ms
root@gateway:~# ssh root@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is 8f:f7:62:4f:1e:85:ad:1e:50:cc:bc:21:fd:ae:bb:9e.
Are you sure you want to continue connecting (yes/no)? 

From the above, we can see that we are able to connect to one of my instances in GCP.



Connecting On-Premise network to a Virtual Private Cloud Network

The last time I worked on connecting two sites with Openswan was more than 10 years ago. I had good success deploying that solution with good throughput using commodity hardware and Open Source software. This time I wanted to see if I could do the same using a Raspberry Pi.

In this post I want to show how I was able to configure a Google Compute Engine VPN and connect my home network to it.

Under Networking, click VPN, then click Create VPN connection. This opens a page that will guide you through creating a VPN connection.

In the Create a VPN connection form, we must first reserve a static IP address for our VPN gateway. In the IP Address dropdown list, select Create IP Address.


Put in a name to distinguish this IP Address and click RESERVE.

Complete the form by giving it a Name and selecting a Region where we want to deploy this VPN gateway. Here I am using us-central1.

Put your VPN gateway's IP address in the Remote peer IP address field. This is the IP address of your home network's VPN gateway. I am using a Raspberry Pi running Openswan, with port forwarding (UDP 500/4500) since this gateway is behind my router. (Installation and configuration of Openswan/IPsec on the Raspberry Pi deserves a separate post.)

Select the IKE version. Mine is using IKEv1.

In the Remote network IP ranges field, enter your home network range. Select the Local subnetworks (on the Google Cloud side) that you want to associate with this tunnel.

Click Create. Deploying this can take a minute or two to complete.

Once done, you should be able to see that the Remote peer IP address is up with a green check icon.

From one of my compute instances, I can verify that the tunnel is up by pinging my home network.

Or by running tcpdump on my local VPN gateway.

We now have a secure way of connecting our on-premise network to our Virtual Private Cloud network.

One thing to note: if you delete the VPN connection, you must also release the static IP address you allocated to the VPN gateway, so as not to incur additional cost.




Let’s Git it on!

Install Git

# yum install git
Loaded plugins: fastestmirror, langpacks
base | 3.6 kB 00:00:00
epel/x86_64/metalink | 5.4 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 3.4 kB 00:00:00
google-chrome | 951 B 00:00:00
nux-dextop | 2.9 kB 00:00:00
updates | 3.4 kB 00:00:00
epel/x86_64/primary_db FAILED ] 0.0 B/s | 1.2 MB --:--:-- ETA [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

If above article doesn’t help to resolve this issue please create a bug on

(1/5): epel/x86_64/group_gz | 170 kB 00:00:00
(2/5): updates/7/x86_64/primary_db | 4.8 MB 00:00:01
(3/5): epel/x86_64/updateinfo | 799 kB 00:00:04
(4/5): epel/x86_64/primary_db | 4.7 MB 00:00:07
(5/5): nux-dextop/x86_64/primary_db | 1.7 MB 00:00:11
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* nux-dextop:
* updates:
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0: will be installed
--> Processing Dependency: perl-Git = for package: git-
--> Processing Dependency: perl(Term::ReadKey) for package: git-
--> Processing Dependency: perl(Git) for package: git-
--> Processing Dependency: perl(Error) for package: git-
--> Running transaction check
---> Package perl-Error.noarch 1:0.17020-2.el7 will be installed
---> Package perl-Git.noarch 0: will be installed
---> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
Installing:
 git x86_64 base 4.4 M
Installing for dependencies:
perl-Error noarch 1:0.17020-2.el7 base 32 k
perl-Git noarch base 53 k
perl-TermReadKey x86_64 2.30-20.el7 base 31 k

Transaction Summary
Install 1 Package (+3 Dependent packages)

Total download size: 4.5 M
Installed size: 22 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): git- | 4.4 MB 00:00:07
(2/4): perl-Git- | 53 kB 00:00:00
(3/4): perl-TermReadKey-2.30-20.el7.x86_64.rpm | 31 kB 00:00:00
(4/4): perl-Error-0.17020-2.el7.noarch.rpm | 32 kB 00:00:10
Total 425 kB/s | 4.5 MB 00:00:10
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:perl-Error-0.17020-2.el7.noarch 1/4
Installing : perl-TermReadKey-2.30-20.el7.x86_64 2/4
Installing : git- 3/4
Installing : perl-Git- 4/4
Verifying : perl-Git- 1/4
Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/4
Verifying : 1:perl-Error-0.17020-2.el7.noarch 3/4
Verifying : git- 4/4

Installed:
 git.x86_64 0:

Dependency Installed:
perl-Error.noarch 1:0.17020-2.el7 perl-Git.noarch 0: perl-TermReadKey.x86_64 0:2.30-20.el7


Generate SSH Keys

# ssh-keygen -t rsa -b 4096 -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/
The key fingerprint is:
The key's randomart image is:
+--[ RSA 4096]----+
|                 |
|       . E       |
|      ...o       |
|  . +.*.+        |
|   +.O S +       |
|  . .= + .       |
|   o + . .       |
|    .o . .       |
|    .o. .        |
+-----------------+

Add your SSH key to your SSH agent.

Start SSH agent in the background

# eval "$(ssh-agent -s)"
Agent pid 11120

Add your SSH private key to the ssh-agent

# ssh-add ~/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa:
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
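If you script your provisioning, the prompts above can be skipped entirely. A non-interactive sketch (the comment, file name, and empty passphrase are example choices; use a real passphrase in practice):

```shell
# Generate a 4096-bit RSA key pair without any prompts.
# -f sets the output file, -N "" sets an empty passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$HOME/.ssh/id_rsa_demo" -N ""

# Both the private and public key should now exist:
ls -l "$HOME/.ssh/id_rsa_demo" "$HOME/.ssh/id_rsa_demo.pub"
```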

Add the SSH key to your GitHub account

First, copy the SSH key to your clipboard.

# xclip -sel clip < ~/.ssh/

Log in to your GitHub account and, in the upper-right corner of any page, click your profile photo, then click Settings.

In the user settings sidebar, click SSH and GPG keys.

Click New SSH key or Add SSH key

In the Title field, add a descriptive name for this SSH key. Paste your key into the "Key" field.

Click Add SSH key

If prompted, enter your GitHub password.

You now have Git configured!

Google Cloud Platform Functions

I want to share my experience testing Google Cloud Platform's Serverless offering, Cloud Functions. I've been playing around with GCP for more than a week now (moving my site, resizing it, etc.) and I had the itch to test Cloud Functions after coming out of an AWS SNG User Group meeting that covered AWS Lambda.

Let’s get started.

Log in to your Cloud Console (if you haven't subscribed yet, go sign up for an account and make use of that $300 Free Tier offering for a year!).

Under COMPUTE section, click Cloud Functions.

Click Create Function

Give your function a name. I'm going to use storageTriggeredImageProcessor. I'll be using the us-central1 region (since it's under the Free Tier). You need to consider latency when selecting a region. Not sure if GCP will have something like Lambda@Edge in the future.

Remember that for Cloud Functions, you are billed by how long your function executes and by the amount of memory you allocate to it. I'll choose 128 MB for this test.

For Timeout, I'll stick with what's shown, which is 60s.

We are going to use a Cloud Storage bucket under the Trigger section. This will ask us to select which bucket to "monitor".

Under the source code section, we are presented with a skeleton template of our function. package.json holds the information about which other APIs you want to use. You can copy the code and package.json from my GitHub repository.
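For reference, a minimal package.json for a function that calls the Vision client library could be created as below; the package name and version pin are assumptions on my part, so check the repository for the actual file:

```shell
# Write a bare-bones package.json declaring the Vision client library
# (the version is a guess; adjust to whatever the repo specifies).
cat > package.json <<'EOF'
{
  "name": "storage-triggered-image-processor",
  "version": "1.0.0",
  "dependencies": {
    "@google-cloud/vision": "^0.11.0"
  }
}
EOF
cat package.json
```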

Cloud Functions need to be staged in Google Storage buckets. Select or create one where we can store our code.

Click Create.

It may take a minute or two for GCP to stage our function.

Once done, we can test it. Since we are only logging the output of this function, under Function Details, click View Logs.

On another window, upload an image file to your Storage bucket and watch the log entries.

I will be uploading the image below

And on the logs, we can see what Google Vision detected on this image.

We've successfully created and tested a GCP Cloud Function. Remember, we ran an application without provisioning application servers (Serverless!).


If you want to use the Google Vision API or any other API, you need to enable it under the API section. Otherwise your function won't execute correctly and will throw an exception.



Deploying a LAMP server in Google Cloud Platform

Since my Raspberry Pi, which hosts my WordPress site, has been quirky the past couple of weeks I decided to try out Google Cloud Platform.

Like AWS's 12-month Free Tier, GCP gives out a $300 free trial to get you started. You need a credit card to sign up, and you can read more about the Free Trial in this link.

Using GCP's Cloud Launcher, we can deploy solutions and applications in an instant (AWS Marketplace?). In this tutorial, we are deploying a LAMP server which could host a WordPress site.

Log in to the GCP Console and select/create a project. Under Compute Engine – VM instances, click CREATE INSTANCE.


Select a zone in any of the following: us-east1, us-west1, or us-central1. Using an f1-micro machine type keeps us well within the "744 hours monthly free limit". Capacity might not be much, but it's enough to get a LAMP server started.

I want to use CentOS this time, so under Boot disk, select the CentOS 7 image. The 10GB persistent disk is enough for my site, so I'll just go ahead and click Select.


I'll check both options to allow HTTP and HTTPS traffic under the Firewall section and hit Create to start deploying this instance.


Once GCP is done, you should see the following

Click SSH. This will open a new window which automatically gives us an SSH connection to our instance. (No more managing keys?)

sudo su to become root and keep things up to date by running yum update -y

Let’s install apache by running the following

yum install httpd

Install mariadb by executing the following

yum install mariadb-server mariadb

Install php and php-mysql

yum install php php-mysql

Let's now configure our MySQL/MariaDB server. Set the root password, and remove the test user and database when prompted after running the following

mysql_secure_installation

Make sure Apache and MariaDB start at boot time

chkconfig httpd on
chkconfig mariadb on

If it’s not yet started, start Apache and MariaDB server.

service mariadb start
service httpd start

Using the External IP of our instance, let's check whether our web server is accessible.

We could create a simple page to test if PHP is working

[root@wordpress paulinomreyes]# cat /var/www/html/info.php
<?php phpinfo(); ?>
[root@wordpress paulinomreyes]#

And navigating to info.php we should see the following

We just deployed a LAMP server on Google Cloud Platform. Using this instance, we could then deploy a WordPress site.

Google Cloud Platform, in my opinion, is "minimalist" (compared to AWS's huge catalog of offerings), but the platform gives me the basics I need for now. In the future, I want to try out how to configure Load Balancing, and how Container Engine and Cloud Functions (roughly equivalent, I guess, to AWS Lambda) work.



Ansible-Vault How-To

A short tutorial in using ansible-vault for storing sensitive information.

Here I have an Ansible inventory file, hosts, which utilizes group_vars, where I store connection details/credentials.

[root@ansible vault]# tree .
.
├── inventory
│   ├── group_vars
│   │   └── web
│   └── hosts
└── wget.yml

2 directories, 3 files
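The wget.yml playbook itself isn't shown above; a hypothetical version targeting the web group might be created like this (the task and URL are placeholders of my own):

```shell
# Sketch of a playbook that fetches a page onto the web hosts.
cat > wget.yml <<'EOF'
---
- hosts: web
  tasks:
    - name: download a page (URL is a placeholder)
      get_url:
        url: https://example.com/index.html
        dest: /tmp/index.html
EOF
cat wget.yml
```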


[root@ansible vault]# cat inventory/hosts

[root@ansible vault]# cat inventory/group_vars/web

ansible_connection: ssh
ansible_user: root
ansible_ssh_pass: P@ssw0rd


Since the credentials are in plain text, the contents are visible to anyone who has access to this file. We can use ansible-vault, which ships with Ansible, and pass in a password to encrypt it.

[root@ansible vault]# ansible-vault encrypt inventory/group_vars/web
New Vault password:
Confirm New Vault password:
Encryption successful

web is now encrypted (AES 256)

[root@ansible vault]# cat inventory/group_vars/web
[root@ansible vault]#


When we run the ansible (or ansible-playbook) command, we can add --ask-vault-pass and, when prompted, enter the password we used when we encrypted the file.

[root@ansible vault]# ansible web -i inventory/hosts -m ping --ask-vault-pass

Vault password:
 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
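Typing the vault password on every run gets tedious. ansible-vault also accepts a password file via --vault-password-file; here is a sketch of a full encrypt/decrypt round trip (the file names and passwords are examples):

```shell
# Work in a scratch directory so we don't clobber real files.
cd "$(mktemp -d)"

# Store the vault password in a file only its owner can read.
echo 'MyVaultPassword' > .vault_pass
chmod 600 .vault_pass

# Encrypt a vars file, then decrypt it again, without any prompts.
echo 'ansible_ssh_pass: P@ssw0rd' > web
ansible-vault encrypt --vault-password-file .vault_pass web
head -1 web          # encrypted files start with a $ANSIBLE_VAULT header
ansible-vault decrypt --vault-password-file .vault_pass web
cat web
```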


Serverless – AWS Lambda

Serverless is a computing concept also known as Function as a Service (FaaS). Despite its name, it does not exactly mean running code without physical servers. AWS Lambda is Amazon's service that executes code, scales automatically when needed, and where you pay only for the time your code executes. Server and operating system maintenance, as well as capacity provisioning, are all handled by Amazon. There are other Serverless frameworks out there: OpenWhisk, Fission, and Funktion, to name a few.

In this topic, I'll show you how to create an AWS Lambda function and consume it through Amazon API Gateway calls. So let's get started by logging in to your AWS account.


Under the Compute section, click Lambda.



Click Get Started Now.

You will be asked to select a runtime and a Blueprint. Blueprints are much like patterns, available for you to start developing functions. There are several blueprints available for each AWS Lambda-supported language that target the use of DynamoDB or Amazon Kinesis, for example. For now, let's select a blank Node.js blueprint.

Functions can be invoked by other AWS services. Think of S3: if someone uploads an image file to S3, you can trigger a Lambda function that automatically creates a thumbnail of the image and saves it to another bucket. We will configure this later. For now, just click Next.


The next section is where you put your code. Give your function a name and a description of what it does. Be sure to select the correct runtime. Here I will be using Node.js 4.3 for my random number generator function.

You can copy and paste this into the code section. Under the Lambda function handler and role section, note the Handler value, as it corresponds to the function in the code. Choose an existing role or create a new one. You can learn more about roles in this section.

In the Advanced settings, you can leave the default values shown. These values affect the performance of your code: changing the resource settings as well as the timeout settings affects your function's cost. Remember, you are charged by the number of requests and by how long your code executes.

In the Review section, check the details of your function. Click Create Function.

On the Function page, you can test your function by clicking the Test button. Here you can see that the function returned the number 7.
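Under the hood the function just returns a small random integer. Purely as an illustration of that behavior, the same logic can be sketched locally in shell (my 1-10 range is an assumption; the real function is Node.js, not shell):

```shell
# Mimic the Lambda function's random-number logic locally.
random_number() {
  # Return an integer between 1 and 10 inclusive.
  echo $(( RANDOM % 10 + 1 ))
}
random_number
```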

With the above steps, we have created a “microservice” that returns a random number. Let’s now create an API Gateway by creating a trigger for this function.

Under the Triggers tab, click the Add Trigger link. Remember, an AWS Lambda function can be triggered by other AWS services. Let's select API Gateway.

In the next section, we can define who can access our API. For this example I am setting it to Open, which means it is available to the public.

Click Done.

You will be presented with a URL which you can directly access. That URL will call your function and return the value.

Go to the API Gateway service and you can visualize how your AWS Lambda function is triggered by API Gateway.

Remember, we created this "microservice" without provisioning an instance or server to handle our requests. You can also trigger a Lambda function when there's a new insert or update on an RDS or DynamoDB table. Imagine running an application where you don't have to deal with the complexity of managing an instance, or even thinking about what size of instance you need, before you develop your application.