Docker on AWS – Part 8 – Adding an Amazon EFS Filesystem

As I learn to use Docker and Amazon ECS, I’m wishing that my Dockerized application had access to a file system that exists outside of my Docker/ECS container. I would initially like to use this file system as a place to write files, but I might also use this location to read incoming files. I can think of all sorts of uses for this as I add more functionality to my application, so I want to set up this infrastructure before I move on.

I found something called “Amazon EFS” (Elastic File System) that might suit my needs.

REFERENCE: Using Amazon EFS File Systems with Amazon ECS tutorial

This is the AWS tutorial that I’ll follow to learn about Amazon EFS.

Not supported in all regions

Heads up before you begin. I found out that EFS is not supported in all AWS regions when I attempted to follow the tutorial on the Central (Canada) region. I’ve since switched to the N. Virginia region (Docker on AWS – Part 7 – Switching AWS Regions).

Where is the code?

Use this URL to clone my GitHub repository:

The code for this post was committed under Git branch SBDAWS-PART-08.

Let’s do the tutorial!

I’ll follow the tutorial steps as closely as possible, adapting it to fit my setup as needed. Let’s begin…

Step 1: Gather Cluster Information

The first step is to find my VPC ID and ECS Security Group ID.

The VPC ID is found on the EC2 console page.


Mine is vpc-02059316c2860c518.

If I click on the “Security groups” link on the EC2 instance page, I can get the Security Group ID.


Mine is sg-03f13ca0b2cdca71a.

Step 2: Create a Security Group for the EFS File System

I’m already in the right place to create a new security group, so I’ll just click the big blue button.


Step 3: Create an Amazon EFS File System



Start by navigating to the EFS console page. Find the big blue “Create file system” button. It was front-and-center in the console for me, because I haven’t used EFS before.

After clicking the button, you’ll be presented with step 1 of the EFS file system setup, “Configure file system access”. I only have one VPC defined for this AWS Region, so it has been pre-populated for me.

These values were all pre-populated at the start, after I clicked the “Create file system” button

The tutorial instructions seem to diverge from how the console is laid out today. I believe this is where I need to change the Security Groups values from the defaults shown above to the new Security Group that I created in Step 2.


On the next page, I just provide a descriptive name for my file system and leave the performance mode set to the default “General Purpose” mode. I don’t need encryption either, so I leave that unchecked.


After clicking through the confirmation page, I end up back at the EFS console with my new file system information displayed.


Step 4: Configure Container Instances

The tutorial instructions say the following at the start of this step:

After you’ve created your Amazon EFS file system in the same VPC as your container instances, you must configure the container instances to access and use the file system. Your container instances must mount the Amazon EFS file system before the Docker daemon starts, or you can restart the Docker daemon after the file system is mounted.

Alright then… let’s get to it.

I already have an active container, so I’m going to follow the instructions to “Configure a running container instance to use an Amazon EFS file system”.

The first thing to do is SSH into my EC2 instance that hosts the ECS/Docker container. When I created the ECS Cluster in my previous post, Docker on AWS – Part 7 – Switching AWS Regions, I enabled access to ports 22 through 80 and specified a keypair. I did that to allow SSH connections on port 22. I’ll attempt to connect to my EC2 instance with PuTTY.

REFERENCE: How to SSH into an AWS ECS container server with PuTTY on Windows

Logged in to EC2 instance that hosts my ECS/Docker container

The first step is to create a new directory to use as a Linux mount point for the EFS.

sudo mkdir /efs


Next I need to install the NFS client software on my container instance. I’m using a standard Amazon Linux EC2 server instance, so the instructions tell me to do this:

sudo yum install -y nfs-utils


Next I need to mount the EFS file system. The command will contain my EFS id (obtained from the EFS console page), my AWS region (us-east-1), and the directory that I created above as my mount point (/efs).

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 (my-efs-id).efs.us-east-1.amazonaws.com:/ /efs

After executing the mount command, I need to verify that it was successful.

mount | grep efs


The EFS is now mounted, but if my EC2 instance was rebooted, then it would have to be remounted manually again. This can be addressed by adding the mount command to the Linux /etc/fstab file.

The /etc/fstab file is an important Linux configuration file, so I should make a backup before I attempt to modify it.

sudo cp /etc/fstab /etc/fstab.bak

Now I use the following command to append my new mount point to /etc/fstab.

echo '(my-efs-id).efs.us-east-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' | sudo tee -a /etc/fstab

Then I use the following command to reload the file system and verify that my change to /etc/fstab was valid.

sudo mount -a
No errors reported, so all is well

I’ll use the touch command to see if I can create an empty file in the mounted directory.

sudo touch /efs/new-empty-file.txt
Looks like I’m able to create files in there

Next, I need to restart Docker so it can see the new file system.

sudo stop ecs
sudo service docker restart
sudo start ecs



The AWS instructions have a section titled “Bootstrap an instance to use Amazon EFS with user data”. I might come back and check it out later, but for now I’m going to skip that section and move on.

Until I learn how to automate the EFS volume mount procedure, I’ll need to manually repeat the process above for every EC2 instance that is created to host my ECS containers. That will happen if a ‘spot’ instance is interrupted and later replaced, or if I delete a cluster and replace it with a new one.
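For my own reference, I suspect the manual steps above could eventually be collapsed into an EC2 user data script along these lines. This is just an untested sketch assembled from the commands I ran by hand, not the script from the AWS tutorial, and (my-efs-id) is a placeholder:

```shell
#!/bin/bash
# Hypothetical EC2 user data sketch: automate the manual EFS mount steps above.
# (my-efs-id) and us-east-1 are placeholders; this is an untested assumption.
yum install -y nfs-utils
mkdir -p /efs
echo '(my-efs-id).efs.us-east-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' >> /etc/fstab
mount -a
# Restart Docker and the ECS agent so containers can see the new mount
stop ecs
service docker restart
start ecs
```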

Using the EFS in my Docker App

Now I need to figure out how to enable my Docker application to access the new EFS mount location on the container server. I found documentation on Docker “bind mounts”, which sounds like what I’m trying to do.


My first thought was: “Is this something that I can incorporate into my Dockerfile?”

The answer is no. The Dockerfile is baked into the image that I build, but a bind mount maps a directory on the container host into the running container, so it requires knowledge of the host — something the image shouldn’t know about. Modifying the Dockerfile is still part of the solution, though, and I’ll get to that below.

The bind mount will have to be part of the ‘docker run’ command, as described on the page that I referenced above. In AWS, I’ll have to figure out how to achieve the same using the ECS Task Definition. Before I take that step, I’d like to get this working locally.

Using a bind mount on my Windows 10 PC

Before I can use the EFS from my containerized application, I’ll need to declare a new volume in my Dockerfile. In the screenshot below, I’ve declared a new volume at the path /datafiles. Inside the containerized application, any file reads/writes to the directory /datafiles will happen inside that volume.

Defining a new volume in the Dockerfile
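The screenshot isn’t reproduced here, but the relevant change is just a VOLUME instruction. A minimal sketch of what such a Dockerfile might look like — the base image and jar name are assumptions for illustration, not my exact file:

```dockerfile
# Minimal sketch -- base image and jar name are assumptions
FROM openjdk:8-jdk-alpine
# Declare /datafiles as a volume so it can be bind mounted at run time
VOLUME /datafiles
COPY target/sbdaws-0.1.0.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```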

On my PC, I’ll try running my Docker image in a container that has been created with a bind mount for this volume. I’ll make the /datafiles volume read/write to the folder at E:\DOCKER_DATA_FILES on my PC.

This is the command that I use to run the image with the bind mount:

docker run --rm -p 80:80 --mount type=bind,source="e:\DOCKER_DATA_FILES",target="/datafiles" jtough/sbdaws:0.1.0

In the mount part of the docker run command, note the following:

  • The ‘source’ parameter is the path on my PC (the container host) where I want to map the volume
  • The ‘target’ parameter is the path my containerized application will use to access the volume, and matches the value that I declared in my Dockerfile

On my first attempt to run the command, I got this error message back from Docker:

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: invalid mount config for type "bind": bind source path does not exist.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.

Uh oh. What have I done wrong?

As it turns out, I did nothing wrong. The problem was the Norton Security app on my PC. Its “Smart Firewall” feature was preventing Docker from making the bind mount. I’m only going to be experimenting locally for a short time, so I just disable the Smart Firewall temporarily rather than turning it off outright, in case I forget to re-enable it later.



Now if I re-run the same command again, success!

Successful startup after I temporarily disable my Norton Security firewall

Now let’s see what happens when I open a browser and go to localhost:


I’ve made a few additions to my Java code. One change is to read the names and attributes of all the files/directories under /datafiles, and display them on the page. Another is to ‘touch’ a file each time someone accesses my application. The Linux ‘touch’ command can do several things, but I’m using it to create (or replace) an empty file at the path /datafiles/touchme.txt.

The code that does this is all crammed into the Application class right now.
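My actual Application class isn’t shown here, but the touch-and-list logic boils down to something like this standalone sketch. The directory path is made configurable here purely for illustration (the real app reads and writes /datafiles directly), and the class name is hypothetical:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class DataFilesDemo {
    // Hypothetical stand-in for the Application class logic.
    // The real app uses /datafiles; the path is configurable here for illustration.
    static final Path DATA_DIR =
            Paths.get(System.getProperty("datafiles.dir", "/tmp/datafiles"));

    public static void main(String[] args) throws IOException {
        Files.createDirectories(DATA_DIR);

        // Like the Linux 'touch' command: create (or replace) an empty file
        Path touchMe = DATA_DIR.resolve("touchme.txt");
        Files.deleteIfExists(touchMe);
        Files.createFile(touchMe);

        // Read the names and sizes of everything under the data directory
        try (Stream<Path> entries = Files.list(DATA_DIR)) {
            entries.forEach(p -> {
                try {
                    System.out.println(p.getFileName() + " (" + Files.size(p) + " bytes)");
                } catch (IOException e) {
                    System.out.println(p.getFileName() + " (unreadable)");
                }
            });
        }
    }
}
```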


If I’ve properly defined both the volume and the bind mount, then I should be able to see the file “touchme.txt” inside the local folder on my PC that I specified in the bind mount.

Whatever happened to Samantha Fox?

Perfect. Everything is working as expected locally, so now I need to build my Docker image, upload it to AWS, and then figure out how to configure ECS to do the same as I’ve done here.

Building and uploading to AWS

It’s time to rebuild the Docker image, and then redeploy to AWS. Before I do so, I’m going to change the version number of my app to 0.8.0 in the pom.xml file:

	<version>0.8.0</version>
	<description>Spring Boot/Docker/AWS</description>

Next I’ll do a clean build of the image.

mvn clean package

And then I need to tag the image before I upload it to my private AWS repository. Note that I have updated the version in the tag to 0.8.0 so it matches the new value in the POM.

docker tag jtough/sbdaws:0.8.0 (my-aws-account-id).dkr.ecr.us-east-1.amazonaws.com/jtough/sbdaws:0.8.0



Now for the image upload. I’ll start by getting my temporary ECR credentials from the AWS command line interface:

aws ecr get-login --no-include-email

Then I execute the ‘docker login’ that it provides back to me, and finally I can execute the ‘docker push’ to upload my new image:



Inside the ECS console in AWS, I can see my new image and the old image from my previous version.


Next I need to figure out how to configure a new Task Definition to make this image work with my EFS volume.

Configuring the new image to run on Amazon ECS

My updated Docker image has been uploaded to my private AWS repository, so now I need to create a new Task Definition to run it.

Creating the new Task Definition

I’ve covered the basics of creating a Task Definition in several previous posts, including:

I won’t go through all that again here. Instead I’ll just include some screenshots to highlight the important changes needed to bind mount the EFS volume with the ECS container.

I changed my Task Definition name to have v02 at the end, so it won’t be confused with my previous pre-EFS definition
This bit near the bottom is important! It defines a path on the container host that can be mounted as a volume by the container.

Task Definition Container settings

The Container setup inside the Task Definition will be similar to what was used before, but now includes a volume mapping.

The basics are the same, except for the updated image name
No changes here. The IAM user key id and key value are still needed to run the app.
This is the important part! The volume that was defined above is selected, and mapped to the /datafiles path that the containerized app uses.

Running the Docker Image

I don’t need to create a new cluster to run the updated Task Definition. I just use the existing cluster and create a new Service to run the new Task Definition.

I kept my old service definition, but updated it to run zero tasks. Then I created a new service. I can’t have both running at the same time, because they both run apps that use port 80.

Now I’ll open a web browser and see if it works.

Notice the file named “new-empty-file.txt” in my EFS volume mount directory. That was something I created manually from my SSH session on the container host when I set it up.

Looks good! Unlike AWS S3 buckets, there is no out-of-the-box way to peek at the contents of an EFS volume in the AWS Management Console. If I want to confirm without a doubt that files are really going into EFS, then I’ll need to do some extra work.

I’ll start by making an SSH connection to my container host again.

Just two boring empty files in there right now…

And now I’ll check my EFS details in the AWS Management Console.

It shows an insignificant “Metered size” for my EFS

Now I’ll try copying some arbitrary file from the container host to the EFS directory.

Copied a roughly 11K file into the EFS mount directory

And now I’ll refresh my browser and see what files are shown.

Now I can see the file that I moved to the EFS mount directory on the container host

I can see the new file, as expected, and the file size is exactly right.

What’s Next?

I’m going to call it quits for this post, because this one was a lot more work than expected. I had to change my AWS region in order to use EFS at all (see Docker on AWS – Part 7 – Switching AWS Regions), so that was an unexpected roadblock.

In order to do this properly, I’ll need to go back to the EFS tutorial (here) and learn how to “Bootstrap an instance to use Amazon EFS with user data”. If I don’t do this, then every time I create a new ECS cluster, I’ll need to manually configure it to work with my EFS. I will do that in a future post.

Leave a Reply