Trying out Amazon EC2 Container Service (Amazon ECS)

In my previous post, I showed how to build and configure a Kubernetes platform where we could run Docker images/containers. Container technology gives us a consistent way to package our application, and we can expect it to run the same way regardless of the environment. With this in mind, I wanted to take our previous application and check out what cloud providers such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) offer in this space.

Amazon EC2 Container Service (AWS ECS)

Amazon ECS is an AWS service that makes it simple to store, manage and deploy Docker containers. Using this service, we don't have to install a container platform or orchestration software ourselves to run our container images. And since Amazon ECS is tightly integrated with other AWS services, we can utilize services such as Elastic Load Balancing, IAM, S3, etc.

Amazon EC2 Container Registry

Amazon EC2 Container Registry (Amazon ECR) provides a container registry where we can store, manage and deploy our Docker images. Amazon ECR eliminates the need to set up and manage a repository for our container images. Since it uses S3 at the back end, it gives us a highly available and accessible platform from which to serve our images. It is also secure: images are transferred over HTTPS and encrypted at rest, and by leveraging AWS IAM we can control access to our image repository. So let's get started.

Under the Compute Section, click EC2 Container Service.

We will create a new image and deploy our application, so leave the default selection and click Continue.

In the next page, I’ll be using awscontainerio as the name of this repository.
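If you prefer working from the terminal, the same repository can also be created with the AWS CLI (this is a sketch and assumes the CLI is already configured with credentials that are allowed to manage ECR):

```shell
# Create the ECR repository; the region should match where you want the images stored
aws ecr create-repository --repository-name awscontainerio --region us-east-1

# Verify the repository exists and note the repositoryUri for tagging/pushing
aws ecr describe-repositories --region us-east-1
```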

After clicking Next Step, you should be presented with something similar to the below. Using the AWS CLI, we can now push our Docker image to our repository by following the steps listed.

I will be using the application and Dockerfile from the previous post to test AWS ECS.

[root@k8s-master dockerFlask]# aws ecr get-login --no-include-email --region us-east-1
docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
[root@k8s-master dockerFlask]# docker login -u AWS -p <very-long-key> https://823355006218.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded
[root@k8s-master dockerFlask]# docker build -t awscontainerio .
Sending build context to Docker daemon 128.5 kB
Step 1 : FROM alpine:3.1
 ---> f13c92c2f447
Step 2 : RUN apk add --update python py-pip
 ---> Using cache
 ---> 988086eeb89d
Step 3 : RUN pip install Flask
 ---> Using cache
 ---> 4e4232df96c2
Step 4 : COPY app.py /src/app.py
 ---> Using cache
 ---> 9567163717b6
Step 5 : COPY app/main.py /src/app/main.py
 ---> Using cache
 ---> 993765657104
Step 6 : COPY app/__init__.py /src/app/__init__.py
 ---> Using cache
 ---> 114239a47d67
Step 7 : COPY app/templates/index.html /src/app/templates/index.html
 ---> Using cache
 ---> 5f9e85b36b98
Step 8 : COPY app/templates/about.html /src/app/templates/about.html
 ---> Using cache
 ---> 96c6ac480d98
Step 9 : EXPOSE 8000
 ---> Using cache
 ---> c79dcdddf6c1
Step 10 : CMD python /src/app.py
 ---> Using cache
 ---> 0dcfd15189f1
Successfully built 0dcfd15189f1
[root@k8s-master dockerFlask]# docker tag awscontainerio:latest 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
[root@k8s-master dockerFlask]# docker push 823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest
The push refers to a repository [823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio]
596bab3c12e4: Pushed
e24802fe0ea0: Pushed
fdee42dc503e: Pushed
2be9bf2ec52c: Pushed
9211d7b219b7: Pushed
239f9a7fd5b0: Pushed
8ab8949d0d88: Pushed
03b625132c33: Pushed
latest: digest: sha256:8f0e2417c90ba493ce93f24add18697b60d34bfea60bc37b0c30c0459f09977b size: 1986
[root@k8s-master dockerFlask]#

Once completed, we can now see our image in Amazon ECR.

We need to create a new Cluster where we can deploy our container image. Click Create Cluster on the AWS ECS – Clusters main page. Here I am going to use awscontainerio-cluster as the Cluster name, and two On-Demand t2.micro instances as the nodes that will be part of our cluster.
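As a side note, the logical cluster can also be created from the AWS CLI, though unlike the console wizard this does not provision the EC2 instances, VPC, or security group for you; it only registers an empty cluster:

```shell
# Register the (empty) ECS cluster
aws ecs create-cluster --cluster-name awscontainerio-cluster --region us-east-1

# Confirm it shows up
aws ecs list-clusters --region us-east-1
```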

I am going to create a new VPC and will add a new security group allowing port 8000. As you can see below, we can leverage the different AWS services to provide a secure, highly available platform for our ECS Cluster.

Once you click Create you should have a similar result page as the one below.

We now have an ECS Cluster.

To use the image we created, we need to configure a Service. But in order to do so, we must first create a Task Definition. An Amazon ECS Task Definition contains information such as which Docker image to use, how many containers we want, which ports should be exposed for our containers, volumes, etc. Under the AWS ECS – Task Definitions main page, click Create new Task Definition.

In the Task Definition Name, put in awscontainerio-taskdef.

Click the Add container button. I am going to use awscontainerio-container as the Container name for this example. In the Image field, enter the URI of the Docker image we just created. I am using the default 128 MiB Memory Limit. Under the Port mappings section, I am setting it to port 8000, which our Docker image/application is using. Click the Add button to add this Container Definition to our Task Definition.

Going back to our Task Definition, click Create.
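For reference, here is roughly what the same Task Definition looks like when registered through the AWS CLI; the JSON mirrors the values we entered in the console (container name, image URI, 128 MiB memory limit, port 8000):

```shell
# Register a task definition equivalent to the console form above
aws ecs register-task-definition \
  --family awscontainerio-taskdef \
  --container-definitions '[
    {
      "name": "awscontainerio-container",
      "image": "823355006218.dkr.ecr.us-east-1.amazonaws.com/awscontainerio:latest",
      "memory": 128,
      "portMappings": [ { "containerPort": 8000, "hostPort": 8000 } ]
    }
  ]'
```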

Now that we have a Task Definition, we can continue to deploy a Service in our ECS Cluster. In the Services tab under our Cluster information page, click Create.

Select the Task Definition and Cluster name which we created earlier. I am going to set the Number of tasks to 2.

In the next step, you can configure this Service to use an Elastic Load Balancer to distribute traffic across our tasks (similar to pods in Kubernetes). For this example I won't be using an ELB, so just click Next Step.

I tried this along with an ELB, and it is one setting you should really check out and try. By defining an Auto Scaling policy, we can scale the number of tasks up or down. For this example I won't configure Auto Scaling, so just click Next Step.

In the Review page, click Create Service.
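The equivalent CLI call is sketched below; note that the service name awscontainerio-service is just an example name I picked for illustration:

```shell
# Create a service running two copies of our task on the cluster
aws ecs create-service \
  --cluster awscontainerio-cluster \
  --service-name awscontainerio-service \
  --task-definition awscontainerio-taskdef \
  --desired-count 2
```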

Under Cluster info, you should be able to see the status and additional information of our cluster.

Under the Task details, using the DNS information of our nodes, we can check whether our application/image has been deployed correctly.
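For example, a quick curl against port 8000 should return our Flask application's index page (replace the placeholder with your container instance's actual Public DNS name from the console):

```shell
# <node-public-dns> is the Public DNS of the container instance running the task
curl http://<node-public-dns>:8000/
```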

There you have it. Using the same Docker image/definition, we managed to deploy it on Amazon ECS and saw our sample Flask application running. Again, the intent of this example is to show that by packaging our application as a Docker image, we can be sure it will execute the same way regardless of which environment or platform we deploy it on.

Amazon EC2 Container Service provides a ready, highly available and scalable platform for moving into container technology.
