Deploying applications on AWS Elastic Beanstalk

AWS Elastic Beanstalk allows us to quickly deploy, monitor, scale, and manage our applications in the AWS Cloud, without worrying about the capacity-management, scaling, and health-monitoring complexities that come with deploying applications. It runs on highly reliable and scalable AWS services.

Although it simplifies deployment and reduces the management overhead, it still leaves us full control of the underlying infrastructure, in contrast to container technologies. AWS Elastic Beanstalk supports different software and hardware stacks, and applications written in Java, Python, PHP, Node.js, and more.

For this example we will build a simple Java web application and deploy it to AWS Elastic Beanstalk. You can copy the source files from this repository. As prerequisites, you need the JDK and Ant installed on your development environment.

Build the project by issuing the following command:

# ant war
Buildfile: /home/project/build.xml

prepare:
[mkdir] Created dir: /home/project/build
[mkdir] Created dir: /home/project/build/WEB-INF
[mkdir] Created dir: /home/project/build/WEB-INF/classes
[mkdir] Created dir: /home/project/build/WEB-INF/lib
[copy] Copying 2 files to /home/project/build

compile:

war:
[war] Building war: /home/project/dist/test.war

BUILD SUCCESSFUL
Total time: 0 seconds

Now that we have a sample WAR file, let's log in to the AWS Console.

Under Compute services, click Elastic Beanstalk.

We could deploy our application easily by selecting Tomcat from the “Select a platform” drop-down list and hitting Launch Now. Our application would be deployed “auto-magically”, but to get more control over the deployment process, click Create New Application at the top right of the page.

 

On the Application Information page, provide a name for this deployment and click Next.

We need to choose an Environment Tier. As stated, a Web Server environment typically supports standard web applications, while Worker environments are better suited to applications that involve queues and background processing. Since our dummy application is a Java web application, select a Web Server environment tier.

AWS Elastic Beanstalk supports different web platforms. Select Tomcat from the Predefined configuration list. Under the environment type, we can configure this deployment to use a single instance or to run under an Auto Scaling, load-balancing setup.

On the next page, upload the sample Java web application we just built. In a future post, we'll look in detail at the deployment policy settings we can configure when deploying an application in Elastic Beanstalk. Click Next.

On the Environment Information page, we can define which logical environment this deployment is for, such as testing or production, and set an appropriately named URL for it.

 

Under Additional Resources, we can select whether this deployment needs an RDS database, or whether it should run in a specific or new VPC. Since we won't be using an RDS database, we can just click Next.

 

Configuration details is the part where we can really control how our application is deployed. We can choose the instance type, and we also have the option to configure access to our instance by associating or creating an SSH key pair that allows us to log in to the EC2 instance remotely.

 

As with most AWS services, we can define tags for this deployment.

Under Permissions, select the role you want to associate with the instance. If the application needs to access other AWS services, it is best to give the instance (and our application) a role rather than using shared credentials or access keys, which would open up security concerns.

 

Review your deployment configuration and hit Launch.

Take note of the Environment URL, since we will be using that URL to access our application.

The AWS Elastic Beanstalk service will process and deploy our application. Everything is logged, and you can see what's happening (creating the instance, the ELB, etc.) during the deployment process.

Once it's successfully deployed, you can browse to the environment URL and check out the application.

Trying out Amazon EC2 Container Service (Amazon ECS)

In the previous post, I showed how to build and configure a Kubernetes platform where we could run Docker images and containers. Container technology gives us a consistent way to package our application, so we can expect it to always run the same way regardless of the environment. With this, I wanted to take our previous application and check out what cloud providers such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) offer in this space.

Amazon EC2 Container Service (AWS ECS)

Amazon ECS is an AWS service that makes it simple to store, manage, and deploy Docker containers. Using this service, we don't have to install a container platform and orchestration software to run our container images. And since Amazon ECS is tightly integrated with other AWS services, we can utilize services such as Elastic Load Balancing, IAM, S3, and so on.

Amazon EC2 Container Registry

Amazon EC2 Container Registry (Amazon ECR) provides a container registry where we can store, manage, and deploy our Docker images, eliminating the need to set up and manage a repository for our container images ourselves. Since it uses S3 on the back end, it gives us a highly available and accessible platform to serve our images. It is also a secure platform: images are transferred over HTTPS and secured at rest. By leveraging AWS IAM, we can control access to our image repository. So let's get started.

Under the Compute Section, click EC2 Container Service.

We will create a new image and deploy our application, so leave the default selection and click Continue.

On the next page, I'll be using awscontainerio as the name of this repository.

After clicking Next Step, you should be presented with something similar to the screen below. Using the AWS CLI, we can now push our Docker image to the repository by following the steps listed.
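The image URI we tag and push follows ECR's registry naming scheme: the account ID and region form the registry hostname, and the repository name and tag are appended as in any Docker image reference. A small sketch (the helper function is my own, not part of the AWS CLI):

```python
def ecr_image_uri(account_id, region, repo, tag="latest"):
    # ECR registry hostnames follow <account>.dkr.ecr.<region>.amazonaws.com;
    # repository name and tag are appended like any Docker image reference.
    return "{0}.dkr.ecr.{1}.amazonaws.com/{2}:{3}".format(
        account_id, region, repo, tag)

print(ecr_image_uri("823355006218", "us-east-1", "awscontainerio"))
```

This produces exactly the URI used in the tag and push commands below.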

I will be using the application and Dockerfile from the previous post to test AWS ECS.

[root@k8s-master dockerFlask]# aws ecr get-login --no-include-email --region us-east-1
docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
[root@k8s-master dockerFlask]# docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded
[root@k8s-master dockerFlask]# docker build -t awscontainerio .
Sending build context to Docker daemon 128.5 kB
Step 1 : FROM alpine:3.1
 ---> f13c92c2f447
Step 2 : RUN apk add --update python py-pip
 ---> Using cache
 ---> 988086eeb89d
Step 3 : RUN pip install Flask
 ---> Using cache
 ---> 4e4232df96c2
Step 4 : COPY app.py /src/app.py
 ---> Using cache
 ---> 9567163717b6
Step 5 : COPY app/main.py /src/app/main.py
 ---> Using cache
 ---> 993765657104
Step 6 : COPY app/__init__.py /src/app/__init__.py
 ---> Using cache
 ---> 114239a47d67
Step 7 : COPY app/templates/index.html /src/app/templates/index.html
 ---> Using cache
 ---> 5f9e85b36b98
Step 8 : COPY app/templates/about.html /src/app/templates/about.html
 ---> Using cache
 ---> 96c6ac480d98
Step 9 : EXPOSE 8000
 ---> Using cache
 ---> c79dcdddf6c1
Step 10 : CMD python /src/app.py
 ---> Using cache
 ---> 0dcfd15189f1
Successfully built 0dcfd15189f1
[root@k8s-master dockerFlask]# docker tag awscontainerio:latest 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
[root@k8s-master dockerFlask]# docker push 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
The push refers to a repository [823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio]
596bab3c12e4: Pushed
e24802fe0ea0: Pushed
fdee42dc503e: Pushed
2be9bf2ec52c: Pushed
9211d7b219b7: Pushed
239f9a7fd5b0: Pushed
8ab8949d0d88: Pushed
03b625132c33: Pushed
latest: digest: sha256:8f0e2417c90ba493ce93f24add18697b60d34bfea60bc37b0c30c0459f09977b size: 1986
[root@k8s-master dockerFlask]#


Developing RESTful APIs with AWS API Gateway

If you followed the previous post, you now have a functioning AWS Lambda function. But how do we expose or trigger this function, say, for a web application or client?

Amazon API Gateway is an AWS service that allows developers to create, publish, monitor, and secure APIs. These APIs can front other AWS services (in this case, AWS Lambda functions) or other web services, and can even serve data stored in the cloud. We can create RESTful APIs to let applications access AWS Cloud services.

Let’s start building

To start, let's build a basic web API that invokes our Lambda function using an HTTP GET query. Go to the Application Services section, or search for API Gateway in the AWS Services search box.

It's a good idea to choose the same region you used previously for your Lambda function. Click Get Started on the API Gateway home page.

On the next page, give your API a name. I'm calling this API manageBooksAPI. Click Create API.

Leave the default resource (/) and create a new one by clicking Create Resource from the Actions menu.

On the New Child Resource page, give it a name. As shown below, I'm calling this resource books. Leave the Resource Path as is, and make sure Enable API Gateway CORS is checked. Proceed by clicking Create Resource.

The books resource will now appear under the default resource. We can now create a method: choose Create Method under the Actions menu.

Select the GET HTTP verb.

On the Integration Type page, select Lambda Function. In the Lambda Function text box, type the name of the Lambda function you created and select it from the list. Click Save.

On the next page, just click OK. This simply grants API Gateway permission to invoke our Lambda function.

Once created, you should have something similar to the one below.

Click TEST at the top of the Client section of the books GET method execution, then click Test on the next page.

You should see something similar to the one below.

We can now see the output of our Lambda function. Take note of the response headers, which show that the content type is JSON.

Deploy our API

We are now ready to deploy our API. Under the Action menu, click Deploy API.

We have the option to create multiple stage environments to deploy our API into. Let's create a production deployment stage by selecting New Stage and giving it Production as its Stage Name. Click Deploy.

Note: whenever we update our API, we need to redeploy it.

Once created, you should see the Invoke URL for the newly created stage environment.

Open your web browser. Using the URL provided and appending the books resource, you should see the JSON values returned by our Lambda function.
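The invoke URL follows a predictable pattern: the API ID and region form the hostname, the stage name is the first path segment, and the resource path is appended after it. A small sketch (the helper function is hypothetical, just to make the structure explicit):

```python
def invoke_url(api_id, region, stage, resource):
    # API Gateway invoke URLs look like:
    # https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>
    return "https://{0}.execute-api.{1}.amazonaws.com/{2}/{3}".format(
        api_id, region, stage, resource)

print(invoke_url("9xvlao852a", "us-east-1", "Production", "books"))
```

With the values from this walkthrough, this builds the same URL we will use in the browser and in the jQuery example.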

We've successfully created an API endpoint for our Lambda function. By creating an HTML file stored in Amazon S3 and with the help of jQuery, we can now use the same endpoint in our web application and process the returned JSON data.

 $.getJSON("https://9xvlao852a.execute-api.us-east-1.amazonaws.com/Production/books", function(result){
    for (var i = 0; i < result['catalogue'].length; i++) {
      $("#deck").append('<div class="col-md-3"><div class="card-block"><h4 class="card-title">'+ result['catalogue'][i].title +'</h4><p class="card-text">' + result['catalogue'][i].author + '</p><a href="card/'+ result['catalogue'][i].id + '" class="btn btn-primary">Learn More</a></div></div></div>');
    }
 });

 

With AWS Lambda and API Gateway (plus S3), we can create a serverless application. We can create methods that handle parameters passed over HTTP or format the response of a function in our web API. Imagine running applications that scale without the complexity of managing compute nodes. Remember, we didn't even have to set up a single web server instance for this project!
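As a sketch of that idea, here is one common way to handle a query-string parameter and format the response in a Python Lambda handler, using API Gateway's Lambda proxy integration (the book data and the author parameter are made up for illustration, not the code from this series):

```python
import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway passes query-string
    # parameters inside the event (assumed parameter: "author").
    params = (event or {}).get("queryStringParameters") or {}
    author = params.get("author")

    # Made-up catalogue data standing in for the real book list.
    catalogue = [
        {"id": 1, "title": "Example Book", "author": "Jane Doe"},
        {"id": 2, "title": "Another Book", "author": "John Roe"},
    ]
    if author:
        catalogue = [b for b in catalogue if b["author"] == author]

    # Proxy integration expects statusCode / headers / body,
    # with the body already serialized to a JSON string.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"catalogue": catalogue}),
    }
```

Calling the endpoint with ?author=Jane%20Doe would then return only the matching books, while keeping the same JSON shape the jQuery snippet above consumes.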

Yet another AWS Lambda tutorial

I briefly discussed AWS Lambda months ago, but I feel that example was too simple. Let's create a slightly more complex task: a function that lists books, which we will use with our API Gateway endpoint in the next post.

To create an AWS Lambda function, log in to your AWS Console and select Lambda from the Compute section, or search for Lambda in the AWS Services search box.

Click Create a function on the AWS Lambda home page.

To simplify the creation of Lambda functions, AWS provides sample blueprints we could use. For this session, we will be creating a function from scratch, so click Author from scratch.

On the next screen, we can add a trigger for this Lambda function. We will discuss creating a trigger and associating a Lambda function with it later in this tutorial. For now, just click Next.

In Step 3, give your function a distinct name; I'm calling it manageBooks. For this example, the runtime I will be using is Python 2.7.

It is possible to develop your serverless functions locally through the Serverless Framework and upload them as an archive file. For this session, we are just going to type our code inline. In the Lambda function code section, copy the code here and paste it into the code area.

What we have here is a method (get_all_lesson) that returns an array of books in JSON format. Take note of the name of the method, as we will use that same name in the section below for the handler name (lambda_function.get_all_lesson).
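The actual code is linked above; as a rough sketch of its shape (the titles and authors here are placeholders, not the real data), a handler named get_all_lesson might look like this:

```python
def get_all_lesson(event, context):
    # With a non-proxy integration, returning a dict lets
    # API Gateway serialize the payload to JSON for us.
    return {
        "catalogue": [
            {"id": 1, "title": "Example Book One", "author": "Jane Doe"},
            {"id": 2, "title": "Example Book Two", "author": "John Roe"},
        ]
    }
```

Because the handler setting is module.function, saving this as lambda_function.py gives the handler name lambda_function.get_all_lesson used below.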

Specifying other settings

Everything that's executed by AWS Lambda needs permission to do what it's supposed to do. This is managed by AWS Identity and Access Management through roles and policies. We need to create a new basic execution role using the Role menu: choose to create a new role; I am using myBasicRole for the role name. Don't select a policy template.

You need to configure two important settings of a Lambda function. The amount of memory affects how much CPU power is allocated and the cost of executing the function; for this simple function, 128 MB is more than enough. The timeout, after which the function is automatically terminated, helps avoid mistakes that could leave a function running for a long time; three seconds is fine for this simple function.
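To see why memory and duration matter for cost, here is a back-of-the-envelope sketch. The rates are assumptions based on Lambda's historical published pricing model (GB-seconds of compute plus a per-request charge) and will not match current prices; check the AWS pricing page for real numbers.

```python
# Assumed rates, for illustration only; not current AWS pricing.
PRICE_PER_GB_SECOND = 0.00001667
PRICE_PER_MILLION_REQUESTS = 0.20

def monthly_cost(memory_mb, avg_duration_s, invocations):
    # Compute is billed in GB-seconds: memory (in GB) times total run time.
    gb_seconds = (memory_mb / 1024.0) * avg_duration_s * invocations
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1000000.0) * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# One million 100 ms invocations at 128 MB: a fraction of a dollar.
print(round(monthly_cost(128, 0.1, 1000000), 4))
```

Doubling the memory or the average duration doubles the compute portion, which is why small functions like this one are kept at the minimum settings.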

You can select Next to review all the configurations, and then select Create function.

In the next screen, after successfully creating the function, select Test to check our function.

Since we are not passing any arguments to our function, we can just use the Hello World event template. To test, click Save and test.

We should see the result of the test execution in the web console, with a summary of the execution and the log output.

In the next post, we will create an API Gateway endpoint to consume this AWS Lambda function. And using that endpoint in a page hosted in an S3 bucket, we will display the list of books.
Serverless – AWS Lambda

Serverless is a computing concept also known as Function as a Service (FaaS). Despite its name, it does not mean running code without physical servers. AWS Lambda is Amazon's service that executes code, scales automatically when needed, and where you pay only for the time your code executes. Server and operating system maintenance, as well as capacity provisioning, are all handled by Amazon. There are other serverless frameworks out there: OpenWhisk, Fission, and Funktion, to name a few.

In this topic, I'll show you how to create an AWS Lambda function and consume that function through API Gateway calls. So let's get started by logging in to your AWS account.

 

Under the Compute section, click Lambda.
Click Get Started Now.

You will be asked to select a runtime and a blueprint. Blueprints are much like patterns, available for you to start developing functions from. There are several blueprints available for each AWS Lambda-supported language, targeting the use of DynamoDB or Amazon Kinesis, for example. For now, let's select a blank Node.js blueprint.

Functions can be invoked by other AWS services. Think of S3: if someone uploads an image file to S3, you can trigger a Lambda function that automatically creates a thumbnail of the image and saves it to another bucket. We will configure this later. For now, just click Next.

 

The next section is where you put your code. Give your function a name and a short description of what it does, and be sure to select the correct runtime. Here I will be using Node.js 4.3 for my random number generator function.

You can copy and paste this into the code section. Under the Lambda function handler and role section, note the Handler value, as it corresponds to the function in the code. Choose an existing role or create a new one. You can learn more about roles in this section.

In the Advanced settings, you can leave the default values shown. These values affect the performance of your code, and changing the resource settings as well as the timeout setting affects your function's cost. Remember, you are charged by the number of requests and by how long your code executes. For now, leave the defaults as shown.

In the Review section, check the details of your function. Click Create Function.

On the Function page, you can test your function by clicking the Test button. Here you can see that the function returned the number 7.

With the above steps, we have created a “microservice” that returns a random number. Let's now expose it through API Gateway by creating a trigger for this function.

Under the Triggers tab, click the Add Trigger link. Remember, an AWS Lambda function can be triggered by other AWS services. Let's select API Gateway.

In the next section, we can define who can access our API. For this example, I am setting it to Open, which means it is available to the public.

Click Done.

You will be presented with a URL you can access directly. That URL will call your function and return the value.

Go to the API Gateway service and you can visualize how your AWS Lambda function is triggered by API Gateway.

Remember, we created this “microservice” without provisioning an instance or server to handle our requests. You can also trigger a Lambda function when there's a new insert or update on an RDS or DynamoDB table. Imagine running an application where you don't have to deal with the complexity of managing an instance, let alone thinking about what size of instance you need, before you develop your application.