Docker on AWS – Part 1 – Creating the Docker Image

In my last post, I was learning to use Docker for Windows. I created a trivial Spring Boot web app, added it to a Docker image by issuing instructions via a Dockerfile, and then ran my Docker image in a container locally.

Now I want to learn how to host my Docker image on Amazon Web Services. So where do I start?

A quick Google search for “how to host a docker image on AWS” pointed me to the following tutorial:

http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

I already have Docker for Windows installed locally, so I’ll skip the first section of the tutorial and go directly to the “Create a Docker Image” section.

Creating a simple Docker image for AWS

The AWS “Docker Basics” tutorial that I referenced above provides a simple Dockerfile that will produce a very simple Docker image. The Dockerfile content is:

FROM ubuntu:12.04

# Install Apache
RUN apt-get update -y
RUN apt-get install -y apache2

# Write the hello world message
RUN echo "Hello World!" > /var/www/index.html

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

I read this as:

  • Create our image using Ubuntu 12.04 as the base image
  • Install Apache Web Server
  • Create a single web page (index.html) that just says “Hello World!”
  • Configure Apache Web Server
  • Expose port 80 to the outside world
  • Run Apache Web Server

The tutorial is written as if one will be doing this on Linux. I’m using Windows, so the “touch Dockerfile” command isn’t relevant, and I’ll be running the Docker commands from PowerShell.

I’m just going to make one small change to the Dockerfile, so I know I’m hitting my own Hello World example:

# Write the hello world message
RUN echo "Hello, JTOUGH!" > /var/www/index.html

I put my Dockerfile into an empty folder on my PC, and ran the following command:

docker build -t aws-hello-world .

This kicks off the build process, starting with the download of the base Ubuntu image and its dependencies. The output in PowerShell looks like this on my PC:

DOAWS-screenshot-001

and finishes up like this…

DOAWS-screenshot-002

Note the security warning at the very end:

SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have ‘-rwxr-xr-x’ permissions. It is recommended to double check and reset permissions for sensitive files and directories.

I’m not concerned about security issues for this tutorial, but this might be an annoying problem in the future when I’m building images on a Windows-based workstation.
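If it ever does matter for a real image, one mitigation would be to reset permissions explicitly inside the Dockerfile after copying files in. A minimal sketch, assuming a hypothetical sensitive file named app-secrets.conf (not part of this tutorial):

# Copy a (hypothetical) sensitive file into the image, then tighten its
# permissions, since files added from a Windows build context arrive as -rwxr-xr-x
COPY app-secrets.conf /etc/myapp/app-secrets.conf
RUN chmod 600 /etc/myapp/app-secrets.conf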

After the build completes, nothing new appears in the folder that contains the Dockerfile.

DOAWS-screenshot-003

Perhaps you are wondering: “Where is the image stored locally?” I was curious about that myself, so I Googled it. Apparently the images are stored inside the Moby Linux VM that is used by Docker for Windows. You can see the VM running in the Hyper-V Manager window when Docker is running:

DOAWS-screenshot-004
Hyper-V Manager in Windows 10 Pro
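Docker can also report how much of that VM disk it is consuming. The ‘docker system df’ command prints a summary; illustrative output below, with made-up numbers:

docker system df

TYPE                TOTAL    ACTIVE   SIZE     RECLAIMABLE
Images              6        1        1.2GB    950MB (79%)
Containers          1        1        12B      0B (0%)
Local Volumes       0        0        0B       0B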

I need to run the “docker images” command to see what images are in my local repository.

DOAWS-screenshot-005

The first in the list is the image that I just created. Below it are some images that I created previously for other tutorials, as well as some base images. Those old tutorial images are taking up more space than I expected. Now that I know they are stored in the Moby Linux VM that powers Docker locally, I want that space back: the VM’s disk is allocated from my primary SSD drive, where space is at a premium.

How can I get rid of old images that I no longer want to keep? I address that question in a separate post:

Removing old Docker images and containers
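In short (that post has the details and caveats), the basic commands look like this — ‘old-tutorial-image’ is just a placeholder name:

docker image prune              # remove dangling (untagged) images
docker rmi old-tutorial-image   # remove a specific image by name or ID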

Running the simple Docker image locally

I’ll run my image locally, using the following command:

docker run --rm -p 80:80 aws-hello-world

The ‘--rm’ flag tells Docker to automatically remove the container once it exits. By default, Docker keeps the stopped container instance around, which is a waste of space in the local VM that drives Docker.
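One related note: the ‘-p 80:80’ argument publishes the container’s port 80 on host port 80. The EXPOSE line in the Dockerfile only documents the port; it’s the ‘-p’ flag that actually makes it reachable. If host port 80 were already in use, the container could be mapped to a different host port, for example:

docker run --rm -p 8080:80 aws-hello-world

and the page would then be served at http://localhost:8080 instead.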

DOAWS-screenshot-013

The AWS tutorial instructions say that the Apache warning message can be safely ignored. Now I’ll verify that I can connect to this locally running instance of my image:

DOAWS-screenshot-014

All good. Now I’ll stop the running instance. Docker generated the name ‘wizardly_poitras’ for this container instance, so the command to stop it is:

docker stop wizardly_poitras

DOAWS-screenshot-015

Because I ran the image with the ‘--rm’ argument in the ‘docker run’ command, there is no dead container instance left over after I issue the ‘docker stop’ command.
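Incidentally, if the generated container name isn’t visible in the run output, the ‘docker ps’ command lists running containers along with their names. Illustrative output, with a made-up container ID:

docker ps

CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS         PORTS                NAMES
f2a91732366c   aws-hello-world   "/usr/sbin/apache2 -…"   10 seconds ago   Up 9 seconds   0.0.0.0:80->80/tcp   wizardly_poitras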

Deploying the Docker image to Amazon Web Services

The AWS “Docker Basics” tutorial (URL is at the top of this post) states that:

Amazon ECR is a managed AWS Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images.

https://aws.amazon.com/ecr/

I think this is the AWS equivalent of “Docker Hub”: a secure, reliable place to publish Docker images that are intended for deployment to AWS.

The next statement in the tutorial is this:

This section requires the AWS CLI. If you do not have the AWS CLI installed on your system, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

Ugh… Another utility setup task. Well, it has to be done before we can move on to the fun stuff.

How DOES one set up the AWS CLI? I address that question in a separate post:

Setting up AWS CLI on Windows 10
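The short version, for anyone skimming: once the CLI is installed, it gets wired up to an AWS account with the ‘aws configure’ command, which prompts for an access key pair, a default region, and an output format:

aws configure

AWS Access Key ID [None]: <access-key-id>
AWS Secret Access Key [None]: <secret-access-key>
Default region name [None]: ca-central-1
Default output format [None]: json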

Pushing the Docker Image to AWS

The AWS “Docker Basics” tutorial says to run the following command:

aws ecr create-repository --repository-name hello-world

I named my project and Docker image ‘aws-hello-world’, so I’m going to tweak this and name my repository the same way:

aws ecr create-repository --repository-name aws-hello-world

DOAWS-screenshot-030

The response back from the CLI should look something like this:

{
    "repository": {
        "registryId": "<aws-account-id>",
        "repositoryName": "aws-hello-world",
        "repositoryArn": "arn:aws:ecr:ca-central-1:<aws-account-id>:repository/aws-hello-world",
        "createdAt": 1514064542.0,
        "repositoryUri": "<aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/aws-hello-world"
    }
}

My AWS account id appears in three places, and the AWS region that I use (ca-central-1) appears in two. Some of the values from the JSON-format response will be needed in subsequent steps, so it’s necessary to save the content of the response somewhere.
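A small tip: if only the repository URI is needed later, the CLI’s ‘--query’ option (a JMESPath expression) can extract it directly instead of saving the whole response — for example:

aws ecr describe-repositories --repository-names aws-hello-world --query "repositories[0].repositoryUri" --output text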

In my AWS Management Console, I can now see the repository that I just created via the CLI:

DOAWS-screenshot-031

Next, the “Docker Basics” tutorial instructions say to issue the following Docker command:

docker tag aws-hello-world <aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/aws-hello-world

I don’t understand the purpose of this, but I’ll do it anyway and see what happens:

DOAWS-screenshot-032

Interesting. The ‘docker tag’ command, used in this way, doesn’t create a copy of the image; it adds a second name for the same image, one that is associated with a different target repository. In this case, the target repository is my private AWS ECR repository. The tag name is still set to ‘latest’.
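Running ‘docker images’ again confirms this: both entries share the same IMAGE ID, so no extra disk space is consumed. Illustrative output, with a made-up ID and size:

docker images

REPOSITORY                                                            TAG      IMAGE ID       SIZE
<aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/aws-hello-world   latest   3f1c9a27e0b5   262MB
aws-hello-world                                                       latest   3f1c9a27e0b5   262MB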

The next instruction provided by the tutorial is this command:

aws ecr get-login --no-include-email

When I run this command, I get back a response in the form of a ‘docker login’ command, with a very long password argument. The AWS tutorial says:

Run the docker login command that was returned in the previous step. This command provides an authorization token that is valid for 12 hours.

This makes sense. We now have a programmatic method for obtaining temporary credentials for our private AWS ECR Docker repository. Let’s see what happens when I copy-and-paste that command into PowerShell and run it:

DOAWS-screenshot-033

Looks good. Now this PowerShell session should be authorized to push images to my AWS ECR repository for the next 12 hours.
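For reference, the command returned by ‘aws ecr get-login --no-include-email’ has roughly this shape (the real password argument is hundreds of characters long):

docker login -u AWS -p <very-long-token> https://<aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com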

The next step is to push my image to AWS. This is the command that the tutorial says to use (adjusted for my repository name and region):

docker push <aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/aws-hello-world

DOAWS-screenshot-034

After all the uploading is completed, it should look something like this:

DOAWS-screenshot-035

In the AWS Management Console, I can see that my image appears in the list for my private repository:

DOAWS-screenshot-036
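The same check can be done from the CLI: ‘aws ecr list-images’ shows what the repository contains. The digest below is truncated:

aws ecr list-images --repository-name aws-hello-world

{
    "imageIds": [
        {
            "imageDigest": "sha256:<digest>",
            "imageTag": "latest"
        }
    ]
}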

Running the Docker Image on AWS

The Docker image has been pushed to my private AWS ECR image repository. Now what? How do I configure AWS to actually run the image?

The AWS “Docker Basics” tutorial has a “Next Steps” section at the bottom that says:

After the image push is finished, you can use your image in your Amazon ECS task definitions, which you can use to run tasks with.

I think it’s weird that they frame this as “Next Steps”. What was the point of creating a Docker repository and uploading the image, if not to run it inside AWS? Anyway…

The tutorial provides a ‘task definition’, in JSON format, to associate with my Docker image. I have to tweak the provided definition to match my own image name and AWS account settings, so this is what I’ll use:

{
    "family": "aws-hello-world",
    "containerDefinitions": [
        {
            "name": "aws-hello-world",
            "image": "<aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/aws-hello-world",
            "cpu": 10,
            "memory": 500,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                }
            ],
            "entryPoint": [
                "/usr/sbin/apache2",
                "-D",
                "FOREGROUND"
            ],
            "essential": true
        }
    ]
}

The tutorial says to put this into a text file named “hello-world-task-def.json”. I’m going to call mine aws-hello-world-task-def.json, in keeping with the naming that I’ve used up to this point.

DOAWS-screenshot-037
These are the only files that have been created for this Docker tutorial

And now, the command that submits that JSON configuration to AWS, registering a task definition that references my uploaded Docker image:

aws ecs register-task-definition --cli-input-json file://aws-hello-world-task-def.json

DOAWS-screenshot-038

This looks promising. The JSON-formatted response from AWS mirrors my submitted configuration. It also includes some placeholders for other settings that I didn’t provide, such as volumes and mount points.

If I go back to the AWS Management Console, I can see the new task definition.

DOAWS-screenshot-039
The new task definition now appears in AWS
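The registered definition can also be fetched back through the CLI. Note that ECS assigns each registration a revision number within the family (1, for a brand-new family like this one):

aws ecs describe-task-definition --task-definition aws-hello-world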

Next, the tutorial says:

Important

Before you can run tasks in Amazon ECS, you need to launch container instances into a default cluster. For more information about how to set up and launch container instances, see Setting Up with Amazon ECS

Boooooo!! More setup tasks. 🙁

I’m going to end this post here. In the next post, I’ll cover the process of creating an AWS ECS Cluster, and using it to deploy the Docker image.
