Deploying applications on AWS Elastic Beanstalk

AWS Elastic Beanstalk allows us to quickly deploy, monitor, scale, and manage our applications in the AWS Cloud, without worrying about the capacity management, scaling, and health-monitoring complexities that come with deploying applications. It runs on top of highly reliable and scalable AWS services.

Although this simplifies deployment and reduces the management burden, it still leaves us full control of the underlying infrastructure, unlike container technologies. AWS Elastic Beanstalk supports different software and hardware stacks, and it supports applications written in Java, Python, PHP, Node.js, and more.

For this example we will build a simple Java web application and deploy it to AWS Elastic Beanstalk. You can copy the source files from this repository. As a prerequisite, you need the JDK and Apache Ant installed in your development environment.

Build the project by issuing the following

# ant war
Buildfile: /home/project/build.xml

prepare:
[mkdir] Created dir: /home/project/build
[mkdir] Created dir: /home/project/build/WEB-INF
[mkdir] Created dir: /home/project/build/WEB-INF/classes
[mkdir] Created dir: /home/project/build/WEB-INF/lib
[copy] Copying 2 files to /home/project/build

compile:

war:
[war] Building war: /home/project/dist/test.war

BUILD SUCCESSFUL
Total time: 0 seconds

Now that we have a sample WAR file, let’s log in to the AWS Console.

Under Compute services, click Elastic Beanstalk.

We could deploy our application easily by selecting Tomcat from the “Select a platform” drop-down list and hitting Launch Now. Our application would be deployed “auto-magically,” but to get more control over the deployment process, click Create New Application at the top right of the page.

 

In the Application Information page, provide a name for this deployment and click Next.

We need to choose an Environment Tier. As stated, a Web Server environment typically supports standard web applications, while a Worker environment is better suited for applications that involve queues and background processing. Since our dummy application is a Java web application, select the Web Server environment tier.

AWS Elastic Beanstalk supports different web platforms. Select Tomcat from the Predefined configuration list. Under the environment type, we can configure this deployment to run on a single instance or under an auto-scaling, load-balancing setup.

On the next page, upload the sample Java web application we just built. In a future post, we’ll look in detail at the deployment policy settings we can configure when deploying an application in Elastic Beanstalk. Click Next.

On the Environment Information page, we can define which logical environment this deployment is for, such as testing or production. We can also set a fitting URL for the environment this deployment targets.

 

Under Additional Resources, we can select whether we need an RDS database in this deployment or whether it should run in a specific or new VPC. Since we won’t be using an RDS database, we can just click Next.

 

Under Configuration details is where we can really control how our application is deployed. We can choose the instance type. We also have the option to configure access to our instance by associating or creating an SSH key pair that allows us to log in remotely to our EC2 instance.

 

As with most AWS services, we can define tags for this deployment.

Under Permissions, select the role you want to associate with our instance. If the application needs to access other AWS services, it is best to give the instance (and our application) a role rather than shared credentials or access keys, which open up security concerns.

 

Review your deployment configuration and hit Launch.

Take note of the Environment URL, since we will be using that URL to access our application.

The AWS Elastic Beanstalk service will now process and deploy our application. Everything is logged, and you can see what’s happening (creating the instance, the ELB, etc.) during the deployment process.

Once it is successfully deployed, you can browse to the environment URL and check out the application.
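
The same workflow can also be scripted with the Elastic Beanstalk CLI. This is only a rough sketch, assuming the EB CLI is installed and configured with your credentials; the application and environment names are examples, and for a WAR build you would also point the CLI at dist/test.war (for example via the deploy artifact setting in .elasticbeanstalk/config.yml):

# initialize the project for Elastic Beanstalk (names/region are examples)
eb init sample-java-app --platform tomcat --region us-east-1
# create an environment and deploy the current build
eb create sample-java-env
# redeploy after rebuilding the WAR
eb deploy
# open the environment URL in a browser
eb open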

Deploying Docker Image in OpenShift

In this series on Kubernetes/Docker/containers, I’ve shown how to deploy a Docker image on a vanilla Kubernetes platform and how to deploy the same image on Amazon EC2 Container Service. This time, I wanted to test deploying the same image on Red Hat’s OpenShift Container Platform.

OpenShift is Red Hat’s offering to bring Docker and Kubernetes to the enterprise. There is an upstream community project called Origin, which provides the open source container application platform.

OpenShift Online

Red Hat offers the cloud-based OpenShift Online. You can sign up for free OpenShift Online access to try it out. This gives you a limited environment, but it is sufficient for our test deployment. OpenShift Container Platform can also be deployed on-premises on a RHEL environment.

Once you have access, let’s start by creating a new project.

OpenShift lets you create an application from scratch using templates, import a YAML/JSON specification of your application, or, as in this example, deploy an already built image.

Once your project is created after clicking the Create button, select Deploy Image.

I will be using the same dockerflask image used in the previous examples. (Note: I made some minor changes to the application and the Dockerfile.)

After hitting the search button, OpenShift displays some information about our image.

Time to hit Deploy

and that should deploy our image.

 

OpenShift automatically deployed our image. As we can see here, we have one pod running our application. It also created a service that automatically talks to our pod(s) on TCP port 8000. Later we will increase the number of pods and see that the service automatically load balances requests. The change I made to the application is to display the hostname of the pod serving the request.

For now, let’s create a Route to our application so we can access it externally.

Once you hit Create, you’ll be provided with a link for your application.

We can now access our application externally.
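
For reference, roughly the same flow can be driven from the oc command-line client instead of the web console. This is only a sketch; the login URL, token, registry path, and project name below are placeholders:

# log in to OpenShift Online and create a project (URL, token, and names are placeholders)
oc login <openshift-online-api-url> --token=<your-token>
oc new-project dockerflask-demo
# deploy an existing image pulled from an external registry
oc new-app <your-registry>/<your-user>/dockerflask --name=dockerflask
# expose the service externally (equivalent to creating a Route) and scale to two pods
oc expose service dockerflask
oc scale dc/dockerflask --replicas=2
oc get pods,svc,routes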

Let’s try increasing the number of our pods. Here I set it to run 2 pods.

 

Hitting that URL again

Let’s use curl to check whether the service is indeed load balancing.
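
A simple loop like the one below (the route hostname is a placeholder) makes this easy to see, since the application prints the hostname of the pod that served each request:

# hit the route several times; the reported hostname should alternate between pods
for i in 1 2 3 4 5 6; do curl -s http://<your-route-hostname>/; echo; done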

As you can see from the above, the OpenShift service did load balance the requests: the hostname changes depending on which pod processed each one. We could also add auto-scaling configuration to increase the number of pods as the load grows.

Red Hat OpenShift Container Platform gives enterprises a container platform in the cloud (OpenShift Online) or on-premises. This series has shown what containers are and how the technology is moving us away from the traditional way of deploying and managing applications.

 

Trying out Amazon EC2 Container Service (Amazon ECS)

In the previous post, I showed how to build and configure a Kubernetes platform where we can run Docker images/containers. Container technology gives us a consistent way to package our application, and we can expect it to run the same way regardless of the environment. With this, I wanted to take the previous application and check out what cloud providers such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) offer in this space.

Amazon EC2 Container Service (AWS ECS)

Amazon ECS is an AWS service that makes it simple to run, manage, and deploy Docker containers. Using this service, we don’t have to install a container platform and orchestration software to run our container images. Since Amazon ECS is tightly integrated with other AWS services, we can utilize services such as Elastic Load Balancing, IAM, S3, and more.

Amazon EC2 Container Registry

Amazon EC2 Container Registry (Amazon ECR) provides a container registry where we can store, manage, and deploy our Docker images. Amazon ECR eliminates the need to set up and manage a repository for our container images. Since it uses S3 on the back end, it gives us a highly available and accessible platform to serve our images. It is also secure: images are transferred over HTTPS and encrypted at rest, and by leveraging AWS IAM we can control access to our image repository. So let’s get started.

Under the Compute Section, click EC2 Container Service.

We will create a new image and deploy our application so leave the default selection and click Continue.

On the next page, I’ll be using awscontainerio as the name of this repository.
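
As a side note, the same repository can be created with the AWS CLI instead of the console; a one-line sketch, assuming your credentials and default region are already configured:

# create the ECR repository from the command line
aws ecr create-repository --repository-name awscontainerio --region us-east-1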

After clicking Next Step, you should be presented with something similar to the screen below. Using the AWS CLI, we can now push our Docker image to the repository by following the steps listed.

I will be using the application and Dockerfile from the previous post to test AWS ECS.

[root@k8s-master dockerFlask]# aws ecr get-login --no-include-email --region us-east-1
docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
[root@k8s-master dockerFlask]# docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded
[root@k8s-master dockerFlask]# docker build -t awscontainerio .
Sending build context to Docker daemon 128.5 kB
Step 1 : FROM alpine:3.1
---> f13c92c2f447
Step 2 : RUN apk add --update python py-pip
---> Using cache
---> 988086eeb89d
Step 3 : RUN pip install Flask
---> Using cache
---> 4e4232df96c2
Step 4 : COPY app.py /src/app.py
---> Using cache
---> 9567163717b6
Step 5 : COPY app/main.py /src/app/main.py
---> Using cache
---> 993765657104
Step 6 : COPY app/__init__.py /src/app/__init__.py
---> Using cache
---> 114239a47d67
Step 7 : COPY app/templates/index.html /src/app/templates/index.html
---> Using cache
---> 5f9e85b36b98
Step 8 : COPY app/templates/about.html /src/app/templates/about.html
---> Using cache
---> 96c6ac480d98
Step 9 : EXPOSE 8000
---> Using cache
---> c79dcdddf6c1
Step 10 : CMD python /src/app.py
---> Using cache
---> 0dcfd15189f1
Successfully built 0dcfd15189f1
[root@k8s-master dockerFlask]# docker tag awscontainerio:latest 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
[root@k8s-master dockerFlask]# docker push 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
The push refers to a repository [823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio]
596bab3c12e4: Pushed
e24802fe0ea0: Pushed
fdee42dc503e: Pushed
2be9bf2ec52c: Pushed
9211d7b219b7: Pushed
239f9a7fd5b0: Pushed
8ab8949d0d88: Pushed
03b625132c33: Pushed
latest: digest: sha256:8f0e2417c90ba493ce93f24add18697b60d34bfea60bc37b0c30c0459f09977b size: 1986
[root@k8s-master dockerFlask]#


How to install Kubernetes on CentOS

Kubernetes, originally developed by Google, is a cluster management and orchestration engine for Docker containers.

In this session I used kubeadm to deploy a Kubernetes cluster. I used my OpenStack environment for this PoC and provisioned two CentOS compute nodes as follows:

k8s-master will run the API Server, the kubectl utility, the Scheduler, etcd, and the Controller Manager.

k8s-worker will be our worker node and will run the kubelet, kube-proxy, and our pods.

On both systems, do the following (a scripted version of these steps is sketched after the list):

  • yum update -y
  • set SELinux to disabled (/etc/selinux/config)
  • update /etc/hosts, making sure entries for both systems exist
  • Reboot, Reboot, Reboot!
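
A rough scripted equivalent of those steps (the worker’s IP address below is just an example for this PoC):

# run on both nodes: update packages, disable SELinux, add host entries, then reboot
yum update -y
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
cat >> /etc/hosts << EOF
192.168.2.48 k8s-master
192.168.2.49 k8s-worker
EOF
reboot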

Configure Kubernetes Repo by adding the following

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Configure Kubernetes Master Node

Execute the following on the Master Node

yum install docker kubeadm -y
systemctl restart kubelet && systemctl enable kubelet

Initialize Kubernetes Master with

kubeadm init

You should see something similar to the following

[root@k8s-master etc]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname “k8s-master” could not be reached
[preflight] WARNING: hostname “k8s-master” lookup k8s-master on 8.8.8.8:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[kubeconfig] Wrote KubeConfig file to disk: “admin.conf”
[kubeconfig] Wrote KubeConfig file to disk: “kubelet.conf”
[kubeconfig] Wrote KubeConfig file to disk: “controller-manager.conf”
[kubeconfig] Wrote KubeConfig file to disk: “scheduler.conf”
[controlplane] Wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane] Wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd] Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 437.011125 seconds
[uploadconfig] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=””
[bootstraptoken] Using token: 5cf1b4.23d95a40a9d5f674
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106

As shown in the first highlighted section of the output above, execute the following:

[root@k8s-master kubernetes]# cd ~
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) .kube/config
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    9m        v1.8.0
[root@k8s-master ~]#

Configure Network

As you can see from the output of kubectl get nodes, our Kubernetes Master still shows NotReady. This is because we haven’t deployed our overlay network. If you look at your /var/log/messages, you’ll see entries similar to the one below

Oct 4 15:41:09 [localhost] kubelet: E1004 15:41:09.589532 2515 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

To fix this, run the following to deploy our network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Checking our Kubernetes Master node again,

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    18m       v1.8.0
[root@k8s-master ~]#

Configure Worker Node

Time to configure our worker node. Log in to the worker node and execute the following command:

yum install kubeadm docker -y

After successfully installing kubeadm and docker on our Worker Node, run the following command

systemctl restart docker && systemctl enable docker

We need to join this Worker Node into our Kubernetes Cluster. From the second highlighted section of the kubeadm init output above, execute the “kubeadm join” command in our Worker Node.

[root@k8s-worker ~]# kubeadm join --token 5cf1b4.23d95a40a9d5f674 192.168.2.48:6443 --discovery-token-ca-cert-hash sha256:beb0b1ba0edbc76b0288b5de57949e0aa728aa1149c4c5b548b2b59e5d6a7106
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server “192.168.2.48:6443”
[discovery] Created cluster-info discovery client, requesting info from “https://192.168.2.48:6443”
[discovery] Requesting info from “https://192.168.2.48:6443” again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server “192.168.2.48:6443”
[discovery] Successfully established connection with API Server “192.168.2.48:6443”
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.

Run ‘kubectl get nodes’ on the master to see this machine join.
[root@k8s-worker ~]#

Using the same steps, you can add multiple worker nodes to our Kubernetes cluster.

As suggested, let’s now check from our Kubernetes Master Node if the Worker Node was added successfully to our cluster.

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
k8s-master             Ready     master    52m       v1.8.0
k8s-worker.novalocal   Ready     <none>    7m        v1.8.0
[root@k8s-master ~]#

We can see from the output above that our Kubernetes master and worker nodes are both in Ready status.

We have successfully installed a Kubernetes cluster using kubeadm and joined a worker node to it. With this environment, we can now create pods and services.
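
As a quick smoke test of the new cluster, we can create a deployment and a service from the command line; a minimal sketch using the stock nginx image:

# on Kubernetes 1.8, kubectl run creates a Deployment with the requested replicas
kubectl run nginx --image=nginx --replicas=2 --port=80
# expose the deployment as a NodePort service and check where the pods landed
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx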

Developing RESTful APIs with AWS API Gateway

If you followed the previous post, you now have a functioning AWS Lambda function. But how do we expose or trigger this function, say for a web application/client?

AWS API Gateway is an AWS service that allows developers to create, publish, monitor, and secure APIs. These APIs can front another AWS service, in this case AWS Lambda functions, other web services, or even data stored in the cloud. We can create RESTful APIs that enable applications to access AWS Cloud services.

Let’s start building

To start, let’s build a basic web API to invoke our Lambda function using an HTTP GET request. Go to the Application Services section or search for API Gateway in the AWS Services search box.

It’s a good idea to choose the same region you used previously for your Lambda function. Click Get Started in the API Gateway home page.

In the next page, give your API a name. I’m calling this API manageBooksAPI. Click Create API.

Leave the default resource (/) and create a new one by clicking Create Resource from the Actions menu.

On the New Child Resource page, give it a name. As shown below, I’m calling this resource books. Leave the Resource Path as is, and make sure Enable API Gateway CORS is checked. Proceed by clicking Create Resource.

The books resource will now appear under the default resource. We can now create a method: choose Create Method under the Actions menu.

Select the GET HTTP verb.

On the Integration Type page, select Lambda Function. In the Lambda Function text box, type the name of the Lambda function you created and select it from the list. Click Save.

On the next page, just click OK. This grants API Gateway permission to invoke our Lambda function.

Once created, you should have something similar to the one below.

Click TEST at the top of the Client section on the books GET Method execution and click Test in the next page.

You should see something similar to the one below.

We can now see the output of our Lambda function. Take note of the Response Headers, which show that the Content-Type is JSON.

Deploy our API

We are now ready to deploy our API. Under the Action menu, click Deploy API.

We have the option to create multiple stage environments where we deploy our API. Let’s create a production deployment stage by selecting New Stage and giving it Production as its Stage Name. Click Deploy.

Note: whenever we update our API, we need to re-deploy it.

Once created, you should see the Invoke URL for the newly created  stage environment.

Open your web browser. Using the URL provided and appending the books resource, you should see the JSON values provided by our Lambda function.
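
The same check works from the command line; the URL below is just the placeholder pattern for your own API ID, region, and stage:

# GET the books resource from the deployed Production stage
curl https://<api-id>.execute-api.us-east-1.amazonaws.com/Production/books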

We’ve successfully created an API endpoint for our Lambda function. By creating an HTML file stored in Amazon S3 and with the help of jQuery, we can now use the same endpoint in our web application and process the returned JSON data.

 $.getJSON("https://9xvlao852a.execute-api.us-east-1.amazonaws.com/Production/books", function(result){ 
    for (i = 0; i < result['catalogue'].length; i++) { 
      $("#deck").append('<div class="col-md-3"><div class="card-block"><h4 class="card-title">'+ result['catalogue'][i].title +'</h4><p class="card-text">' + result['catalogue'][i].author + '</p><a href="card/'+ result['catalogue'][i].id + '" class="btn btn-primary">Learn More</a></div></div></div>');
    }
 });

 

With AWS Lambda and API Gateway (plus S3), we can create a serverless application. We can add methods that handle passing parameters over HTTP or format the response of a function in our web API. Imagine running applications that scale without the complexity of managing compute nodes. Remember, we didn’t even have to set up a single web server instance for this project!

Connecting On-Premise network to a Virtual Private Cloud Network

The last time I worked on connecting two sites using Openswan was more than 10 years ago. I had good success deploying that solution with good throughput using commodity hardware and open source software. This time I wanted to test whether I could do the same using a Raspberry Pi.

In this post I want to show how I was able to configure a Google Cloud VPN and connect my home network to it.

Under Networking, click VPN, then click Create VPN connection. This opens a page that guides you through creating a VPN connection.

On the Create a VPN connection form, we must first reserve a static IP address to be used by our VPN gateway. In the IP Address drop-down list, select Create IP Address.

 

Put in a name to distinguish this IP Address and click RESERVE.

Complete the form by giving it a name and selecting a region where we want to deploy this VPN gateway. Here I am using us-central1.

Put your VPN gateway’s IP address in the Remote peer IP address field. This is the IP address of your home network VPN gateway. I am using a Raspberry Pi running Openswan, with port forwarding (UDP 500/4500) since this gateway sits behind my router. (Installing and configuring Openswan/IPsec on a Raspberry Pi deserves a separate post.)

Select the IKE version. Mine is using IKEv1.

In the Remote network IP ranges, enter your home network range. Select the local subnetworks (on the Google Cloud side) that you want to associate with this tunnel.

Click Create. Deploying this could take a minute or two to complete.

Once done, you should be able to see that the Remote peer IP address is up with a green check icon.

From one of my compute instances, I can verify that the tunnel is up by pinging my home network.

Or by running tcpdump on my local VPN gateway:
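
The checks I ran look roughly like the following; the addresses and interface name are placeholders for my home network and gateway:

# from a GCE instance, ping a host on the home network through the tunnel
ping -c 3 <home-network-host-ip>
# on the Raspberry Pi gateway, watch for IKE (UDP 500/4500) and ESP (IP protocol 50) traffic
tcpdump -ni <lan-interface> "udp port 500 or udp port 4500 or ip proto 50"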

We now have a secure way of connecting our on-premise network to our Virtual Private Cloud network.

One thing to note: if you delete the VPN connection, you must also release the static IP address you allocated to the VPN gateway, so you don’t keep incurring the cost of an unused static IP.
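
If you have the Cloud SDK installed, the reserved address can also be released from the command line; the address name and region below are examples:

# release the reserved static IP after the VPN connection is deleted
gcloud compute addresses delete vpn-gateway-ip --region us-central1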

 

 

 

Deploying a LAMP server in Google Cloud Platform

Since my Raspberry Pi, which hosts my WordPress site, has been quirky the past couple of weeks, I decided to try out Google Cloud Platform.

Like the AWS 12-month Free Tier, GCP offers a $300 free trial to get you started. You need a credit card to sign up, and you can read more about the free trial at this link.

Using GCP’s Cloud Launcher, we can deploy solutions and applications in an instant (think AWS Marketplace). In this tutorial, we are deploying a LAMP server that could host a WordPress site.

Log in to the GCP Console and select or create a project. Under Compute Engine – VM instances, click CREATE INSTANCE.

 

Select a zone in any of the following regions: us-east1, us-west1, or us-central1. Using an f1-micro machine type keeps us well within the 744-hours-per-month free limit. The capacity might not be much, but it is enough to get a LAMP server started.

I want to use CentOS this time, so under Boot disk, select the CentOS 7 image. The 10 GB persistent disk is plenty for my site, so I’ll go ahead and click Select.

 

I’ll check both options to allow HTTP and HTTPS traffic under the Firewall section and hit Create to start deploying this instance.

 

Once GCP is done, you should see the following

Click SSH. This will open a new window which automatically gives us an SSH connection to our instance. (No more managing keys?)

Run sudo su to become root, and keep things up to date by running yum update -y.

Let’s install Apache by running the following:

yum install httpd

Install MariaDB by executing the following:

yum install mariadb-server mariadb

Install PHP and the php-mysql package:

yum install php php-mysql

Let’s now configure our MySQL/MariaDB server. Set the root password and remove the test user and test database when prompted after running the following:

mysql_secure_installation

Make sure Apache and MariaDB start at boot time:

chkconfig httpd on
chkconfig mariadb on

If they’re not yet started, start the Apache and MariaDB servers:

service mariadb start
service httpd start

Using the External IP of our instance, let’s check if our Web server is accessible

We could create a simple page to test if PHP is working

[root@wordpress paulinomreyes]# cat /var/www/html/info.php
<?php

phpinfo();

?>
[root@wordpress paulinomreyes]# 

And navigating to info.php we should see the following

We just deployed a LAMP server on Google Cloud Platform. Using this instance, we could then deploy a WordPress site.
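
A rough sketch of what that next step could look like; the database name, user, and password are placeholders, and the wp-config.php setup is left out:

# create a database and user for WordPress (values are examples)
mysql -u root -p -e "CREATE DATABASE wordpress; GRANT ALL ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'ChangeMe123'; FLUSH PRIVILEGES;"
# download WordPress and unpack it into the web root
curl -O https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz -C /var/www/html --strip-components=1
chown -R apache:apache /var/www/html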

Google Cloud Platform, in my opinion, is “minimalist” (compared to how much AWS offers), but it gives me the basics I need for now. In the future, I want to try out how to configure load balancing, and how Container Engine and Cloud Functions (roughly equivalent to AWS Lambda) work.

 

 

Serverless – AWS Lambda

Serverless is a computing concept also known as Function as a Service (FaaS). Despite its name, it does not mean running code without physical servers. AWS Lambda is Amazon’s service that executes code, scales automatically when needed, and charges only for the time your code runs. Server and operating system maintenance, as well as capacity provisioning, are all handled by Amazon. There are other serverless frameworks out there: OpenWhisk, Fission, and Funktion, to name a few.

In this topic, I’ll show you how to create an AWS Lambda function and consume it through API Gateway calls. So let’s get started by logging in to your AWS account.

 

Under the Compute section, click Lambda.

 

 

Click Get Started Now.

You will be asked to select a runtime and a blueprint. Blueprints are much like patterns available to get you started developing functions. There are several blueprints available for each Lambda-supported language that target the use of DynamoDB or Amazon Kinesis, for example. For now, let’s select a blank Node.js blueprint.

Functions can be invoked by other AWS services. Think of S3: if someone uploads an image file to S3, you can trigger a Lambda function that automatically creates a thumbnail of the image and saves it to another bucket. We will configure a trigger later. For now, just click Next.

 

The next section is where you put your code. Give your function a name and a description of what it does, and be sure to select the correct runtime. Here I will be using Node.js 4.3 for my random number generator function.

You can copy and paste the function code into the code section. Under the Lambda function handler and role section, note the Handler value, as it corresponds to the exported function in the code. Choose an existing role or create a new one. You can learn more about roles in this section.

In the Advanced settings, you can leave the default values shown. These values affect the performance of your code, and changing the memory and timeout settings also affects your function’s cost. Remember, you are charged by the number of requests and by how long your code executes.

In the Review section, check the details of your function. Click Create Function.

On the Function page, you can test your function by clicking the Test button. Here you can see that the function returned the number 7.
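
The same test can also be run from the AWS CLI; a small sketch, where the function name is whatever you named your function earlier:

# invoke the function directly and print the returned value
aws lambda invoke --function-name <your-function-name> output.json
cat output.json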

With the above steps, we have created a “microservice” that returns a random number. Let’s now expose it through API Gateway by creating a trigger for this function.

Under the Triggers tab, click the Add Trigger link. Remember, Lambda functions can be triggered by other AWS services. Let’s select API Gateway.

In the next section, we can define who can access our API. For this example I am setting it to Open which means it is available to the public.

Click Done.

You will be presented with a URL which you can directly access. That URL will call your function and return the value.

Go to the API Gateway service and you can visualize how your Lambda function is triggered by API Gateway.

Remember, we created this “microservice” without provisioning an instance or server to handle our requests. You can also trigger a Lambda function when there’s a new insert or update on a DynamoDB table. Imagine running an application where you don’t have to deal with the complexity of managing an instance, let alone deciding what size of instance you need, before you develop your application.