Docker on AWS – Part 7 – Switching AWS Regions

I started my Docker on AWS experimentation in the Canada (Central) AWS region. This seemed reasonable, given that I’m in Canada. Unfortunately, I’ve run into multiple dead-ends when attempting to use AWS features in the Canada region. The latest was when I attempted to follow the Using Amazon EFS File Systems with Amazon ECS tutorial. I was only getting started on the tutorial when I hit this roadblock:

Well that sucks…

I’ve reached my disappointment limit with trying to use an AWS Region that only supports a subset of features. Since Canada isn’t working out, I’m going to switch to the US East (N. Virginia) region. That region seems to be the best supported, from what I’ve read so far.

What AWS stuff is “Global”?

Most things in AWS appear to be locked to the region where they were created. There are a few things that are global, such as:

  • My AWS account itself, and the root user
  • IAM users and user groups
  • Billing information

At least I won’t have to mess with creating new IAM users or setting up the Command Line Interface and access keys again.

I’ll assume that everything else will need to be done over again. sigh

Updating target region in the Command Line Interface

I set up the Command Line Interface (CLI) to work in PowerShell in my Setting up AWS CLI on Windows 10 post. When I did so, I bound my CLI configuration to use the Canada region. I’ll need to switch that to the N. Virginia region now, using its region name “us-east-1”, before I start issuing CLI commands.

aws configure
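Rather than re-answering all of the interactive prompts, the region alone can also be changed with `aws configure set region us-east-1`. Either way, the CLI stores the default region in the `~/.aws/config` file, which should end up looking roughly like this (a sketch of the default profile; the output format line depends on how the CLI was originally set up):

```ini
[default]
region = us-east-1
output = json
```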


Let’s start with ECS


In the N. Virginia region, my ECS console is in virgin condition. I’ll need to create my repository, task definition and cluster from scratch. Nothing to do on this welcome page but click the “Get started” button.


The first page is different from what I got in the Canada region. N. Virginia offers the “Fargate” service for ECS. The ECS console in the Canada region just shows ads for Fargate, and tells you that it is available in the N. Virginia region. But am I required to use Fargate here?


Nope. If I click the “Cancel” link at the bottom of the page, then I’m shown the standard ECS console page that I was using before in the Canada region.


I also see the same ad for Fargate at the top of the console page. I might want to try out Fargate, but first I want to get things working as they were in the Canada region.

Recreating the Docker repository

Each ECR image repository is region-specific, so I’ll need to recreate mine in ‘us-east-1’.


Now I have a new repository in ‘us-east-1’ for my Docker images.
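For the record, the same repository can be created from the CLI. This is a sketch only: the repository name below is a placeholder matching my image name, and the `aws` call is shown as a comment since it needs live credentials to run.

```shell
# Placeholder repository name -- substitute your own.
REPO_NAME="jtough/sbdaws"
# aws ecr create-repository --repository-name "${REPO_NAME}" --region us-east-1
echo "${REPO_NAME}"
```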

Retagging the Docker image

Next, I’ll need to re-tag my Docker image with the new target repository name.

docker tag jtough/sbdaws:0.1.0 (my-aws-account-id)
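The full target name follows the ECR URI convention `<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>`. A sketch of the complete command, with a placeholder account id and my repository name assumed (the `docker tag` call itself is commented out since it needs Docker and the image locally):

```shell
ACCOUNT_ID=123456789012   # placeholder -- use your own AWS account id
TARGET="${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/jtough/sbdaws:0.1.0"
# docker tag jtough/sbdaws:0.1.0 "${TARGET}"
echo "${TARGET}"
```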


Pushing the Docker image to my AWS repository

Now comes the two-stage process: obtain temporary credentials so the CLI can upload an image, then execute the image push.
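A sketch of the two stages, using the AWS CLI v2 syntax (`aws ecr get-login-password`); the older `aws ecr get-login` command works similarly. The account id is a placeholder, and the live calls are shown as comments since they need real credentials:

```shell
ACCOUNT_ID=123456789012   # placeholder account id
ECR_HOST="${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com"
# Stage 1: get a temporary token (valid for 12 hours) and log Docker in:
# aws ecr get-login-password --region us-east-1 \
#   | docker login --username AWS --password-stdin "${ECR_HOST}"
# Stage 2: push the retagged image:
# docker push "${ECR_HOST}/jtough/sbdaws:0.1.0"
echo "${ECR_HOST}"
```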


And now I’ll use the AWS Management Console to verify that it all looks good.


Recreating the Task Definition

I’ll need to make a nearly identical Task Definition in ECS on ‘us-east-1’. I can peek at the entire set of Task Definition configuration values in the JSON tab in my settings for ‘ca-central-1’.


I could simply copy the entire JSON configuration, change the ‘ca-central-1’ parts to ‘us-east-1’, and save it. I’m not going to do it that way, because some parts of the original configuration were auto-generated by AWS, and I don’t know whether AWS also set up other things automatically at the same time. I’ll play it safe and use the JSON config as a guide while I manually recreate my Task Definition in the user interface of the ECS management console.
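For orientation, these are the kinds of region-bearing fields to watch for in the task definition JSON. This fragment is hypothetical (placeholder account id, and names assumed from my setup), trimmed to just the relevant parts:

```json
{
  "containerDefinitions": [{
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/jtough/sbdaws:0.1.0",
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/sbdaws",
        "awslogs-region": "us-east-1"
      }
    }
  }]
}
```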




The rest of the Task Definition setup is the same as that used in my previous posts:

Just remember to change any references to ‘ca-central-1’ to be ‘us-east-1’.

Recreating the CloudWatch Log Group

In my previous post, I set up logging to CloudWatch. I’ll need to quickly recreate a log group in this region with the same name that I used in the old region.
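The CLI equivalent is a one-liner; a sketch, where the group name is a placeholder for whatever name the Canada-region setup used (the `aws` call is commented out since it needs live credentials):

```shell
LOG_GROUP="/ecs/sbdaws"   # placeholder -- match the name from the old region
# aws logs create-log-group --log-group-name "${LOG_GROUP}" --region us-east-1
echo "${LOG_GROUP}"
```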



Generating a Keypair

A keypair is needed to allow SSH connections to my EC2 server. I need to create the keypair before I create the ECS cluster. Start from the EC2 dashboard.


I’ll name mine ‘us-east-1-keypair’.


And make sure to save the .pem file when prompted. You’ll need it later.
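For reference, the same keypair can be generated from the CLI, which writes the private key straight to a .pem file. A sketch (the calls are commented out, since they need live credentials):

```shell
KEY_NAME="us-east-1-keypair"
# aws ec2 create-key-pair --key-name "${KEY_NAME}" \
#   --query 'KeyMaterial' --output text > "${KEY_NAME}.pem"
# chmod 400 "${KEY_NAME}.pem"   # ssh will refuse a world-readable key file
echo "${KEY_NAME}.pem"
```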


Creating a new ECS Cluster

The ECS repository has been created, and populated with a Docker image. A task definition has also been created. The next thing to do is create a new cluster. I’ll start with a similar approach to what I did in my Docker on AWS – Part 4 – Simple Spring Boot app post. Afterward, I’ll use some of the knowledge I’ve gained since then to tweak the configuration of the Virtual Private Cloud (VPC) that AWS automatically creates for the ECS Cluster.




Unlike before, I now allow multiple EC2 instance types to be chosen. I’m more restrictive on my maximum price. The highlighted “Spot prices” link is helpful.
Don’t forget to select your keypair
Allow AWS to create a new VPC. Trying to create your own when using “spot” instances is a headache. Note that I have changed the open port range to go from 22 (SSH) to 80 (HTTP).


The cluster is created! The stuff that I highlighted at the bottom of the screenshot is what makes it difficult to manually create your own VPC. AWS sets all this up for you when you allow it to create a new VPC. Best to do that, and then customize things afterward.

Now I’ll switch to the CloudFormation console to see how some of this magic happened.


ECS used a CloudFormation template to create a “Stack” of resources for my ECS Cluster. If I delete the cluster, then CloudFormation will know how to destroy the entire stack with it.

I’m also curious about how my “spot” instances are created. Those details can be found in the EC2 console, in the “Spot Requests” section.


Well this is a problem. My cluster won’t be able to do anything without a spot instance.


Side topic: EC2 Spot Instance Limits

The problem that I encountered above was due to a limitation on my AWS account. It seems that my account was set to allow only 1 Spot instance at a time, across all AWS Regions. I documented how to request a Spot limit increase in this post:

Dealing with AWS ‘spotInstanceCountLimitExceeded’ errors

There is no charge to have the limit increased, but you do have to ask.


I eventually figured out how to solve the ECS Cluster “Spot” instance problem.


One EC2 Spot instance is ready to go in my cluster, waiting to host a Docker image

Creating a new Service for the Cluster

The cluster has an idle instance waiting to host something, so now I’ll create a Service configuration that will use my Task Definition to launch the Docker image.


Only the “Launch type”, “Service name” and “Number of tasks” fields need to be changed. The rest can keep the default value. “Task Placement” is irrelevant since I have only 1 EC2 instance.
Just accept the default values and click “Next step” on this step
Again, accept the defaults and click “Next step” on this step
And now, to create the service…
Looking good so far

Connecting to the application

The service appears to be running. So how do I connect to my application from a web browser? I’ll need to drill down to find the public IP address of my EC2 server.

First I click on the “Task” link on the “Tasks” tab of the service page.
Next, I click on the “EC2 instance id” link on the task details page
And now I have the IP for my running instance

If my Docker image is running properly, then the application should be reachable via a web browser on the default HTTP port. All I need to do is paste this IP address into the browser.
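From a shell, the same check can be done with curl against the default HTTP port. The IP below is a documentation placeholder, not my real instance address, so the curl call is shown as a comment:

```shell
PUBLIC_IP="203.0.113.10"   # placeholder (TEST-NET-3 documentation range)
URL="http://${PUBLIC_IP}/"
# curl -s "${URL}"   # should return the app's response on port 80
echo "${URL}"
```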


Everything is working properly, but the output from the application is confusing. It is showing a different name for the ECS cluster, and those S3 buckets don’t exist in the N. Virginia region on my account. Why is this happening?

This is because the current version of my Docker-ized Spring Boot application has the AWS Region hardcoded to “Canada (Central)”. Even though this instance of the application is running in N. Virginia, the app is using the AWS SDK to query assets in the Canada region. In a future revision of the application code I’ll stop hardcoding the region, and perhaps read that property from an environment variable instead, which would let me set the value in the ECS Task Definition.
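A task definition can pass environment variables into the container, so one way to do this would be a fragment like the following in the container definition. This is hypothetical: the variable name AWS_REGION is my assumption about how the app might read it, not what the code currently does.

```json
"environment": [
  { "name": "AWS_REGION", "value": "us-east-1" }
]
```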

For now, this is a success. The app is running as designed in the new region. I just have a few cleanup tasks to do before I’m finished.

Creating a new Elastic IP for my instance

When I originally deployed my Docker application in the Canada region, I assigned an Elastic IP to the running instance. This allowed me to associate my domain with that IP. I’d like to do the same thing here, so the next step is to create a new Elastic IP in the N. Virginia region and link it to my EC2 instance.

I start by going to the EC2 Dashboard.



Choose the “VPC” scope
The Elastic IP has been created, and now belongs to my account until I release it. If I leave it like this, unassociated with a running instance, AWS will bill me a small penalty charge every hour until I associate or release it.


I only have one running EC2 instance, so I only get one option here.


I like naming things, but this is optional
Elastic IP is created, and associated with my lone EC2 instance
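The CLI version of the allocate-and-associate dance looks roughly like this; a sketch, with a placeholder instance id and the live calls commented out since they need real credentials:

```shell
INSTANCE_ID="i-0123456789abcdef0"   # placeholder instance id
# ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
#   --query 'AllocationId' --output text)
# aws ec2 associate-address --instance-id "${INSTANCE_ID}" \
#   --allocation-id "${ALLOC_ID}"
echo "${INSTANCE_ID}"
```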


New IP address works in a browser

Shutting off the lights in Canada

Everything that was working in the Canada (Central) region is now working in the N. Virginia region. I don’t need to have the EC2 instance in Canada running anymore, racking up hourly charges, so it’s time to delete the cluster in that region.




Goodbye, dear Cluster

It will likely take several minutes to complete this operation. AWS has to remove the EC2 instance, and then tear down all the infrastructure that was created for the cluster (the VPC and all of its pieces).



