Docker on AWS – Part 5 – Spring Boot and AWS SDK for Java

In my previous post, I deployed an existing Spring Boot/Docker “Hello World” app to my AWS account. It worked fine, but that Spring Boot app did nothing more than output a static message when accessed with a browser.

I’d like to take the boilerplate code from the Hello World app and use it as a base to start building something more interesting. Adding the AWS SDK for Java seems like a good place to start.

Where is the code?

Use this URL to clone my Github repository: https://github.com/jimtough/SBDAWS.git

The code for this post was committed under Git branch SBDAWS-PART-05.

If you’re using Eclipse, and you’ve cloned the Git repo, then switch to the branch using the option shown in the screenshot below:

SBDAWS-P05-screenshot-01

References used for this post

I used information from the following pages when writing the code.

Adding AWS SDK to my Maven POM file

NOTE: At the time I’m writing this, Amazon has labeled the 2.0 version of their Java SDK as a “developer preview”. They state that “This is a preview release and is not recommended for production environments”. Not a problem for me. By the time I’m ready to use this for anything important they will likely have a production-ready release. I don’t want to waste time learning how to use the 1.x SDK when something better is on the horizon.

The first step in adding the AWS SDK for Java to my application is to add the Bill of Materials (BOM) for the SDK to my Maven POM file.

	<dependencyManagement>
		<dependencies>
			<dependency>
				<!-- This adds a Maven "Bill of Materials" (BOM) for a particular version of the AWS SDK. In the "dependencies" section below, the version of individual AWS SDK components should NOT be specified! The BOM will ensure that an appropriate version of each requested component is used. -->
				<groupId>software.amazon.awssdk</groupId>
				<artifactId>bom</artifactId>
				<!-- TODO: Update the version when Amazon releases a stable production build -->
				<version>2.0.0-preview-1</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

The BOM will provide a list of compatible versions of AWS SDK libraries that can be added together in the application. I’ll add the ones that interest me in the dependencies section of the POM.

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<!-- AWS SDK modules -->
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>s3</artifactId>
		</dependency>
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>iam</artifactId>
		</dependency>
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>ec2</artifactId>
		</dependency>
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>ecs</artifactId>
		</dependency>
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>ecr</artifactId>
		</dependency>

		<!-- ... -->
	</dependencies>

In Eclipse, I can see a cluster of AWS SDK libraries in my list of Maven dependencies.

New properties files

This version of the Spring Boot app has three new properties files.

  • application.properties – Spring Boot looks for this file on the classpath at runtime by default. I use this file to change the Spring Boot HTTP port from (the default) 8080 to 80.
  • app.properties – I load this file via a Spring “configuration bean” at application startup. The version in Git contains placeholders. The app.properties file is used by both the application and an integration test during the Maven build.
  • maven-filter-values.properties – This file is intended to be edited before the Maven build is executed. It supplies the values that go into the app.properties file placeholders. The placeholder substitution is done by Maven using the “filter” feature. Some of these properties will likely not be needed in a future version of the app.
application.properties:

server.port=80

app.properties:

#-------------------------------------------------------------------------
# Jim Tough - 2018-01-07
#
# AWS IAM credentials that are valid for your account must be stored here.
# Maven will replace the placeholder values when building the application.
#
# This is hopefully a temporary means of authentication for the AWS SDK.
#
# IMPORTANT
# Maven filter property placeholders are usually in the format:
#   ${filter.property.name}
# Because this is a Spring Boot application, the Spring Boot parent POM
# changes this behaviour. The placeholder format is instead:
#   @filter.property.name@
#-------------------------------------------------------------------------
sdk.user.aws.accessKeyId=@iam.user.accessKeyId@
sdk.user.aws.secretAccessKey=@iam.user.secretAccessKey@
aws.target.region=@aws.region.name@
my.name=@my.name@

maven-filter-values.properties:

#-------------------------------------------------------------------------
# Jim Tough - 2018-01-07
#
# Edit this file and fill in property values that are appropriate for you.
#-------------------------------------------------------------------------
iam.user.accessKeyId=XXXXXXXXXXXXXXXXXXXX
iam.user.secretAccessKey=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
aws.region.name=ca-central-1
my.name=yournamegoeshere
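The placeholder substitution only happens if Maven resource filtering is enabled for the folder that holds app.properties. I haven’t reproduced the whole POM here, but the relevant build section looks roughly like this (a sketch only — the filter file path below is an assumption, so check the actual POM in the repo for the real location):

```xml
<build>
	<!-- Sketch only: enable Maven resource filtering so the @...@ placeholders
	     in app.properties are replaced at build time with values from the
	     filter file. The filter file path is an assumption; check the project
	     POM for the actual location. -->
	<filters>
		<filter>src/main/filters/maven-filter-values.properties</filter>
	</filters>
	<resources>
		<resource>
			<directory>src/main/resources</directory>
			<filtering>true</filtering>
		</resource>
	</resources>
</build>
```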

Integration test

I created a JUnit test that verifies the application can connect to AWS via the SDK. Note that this will only work if you have edited maven-filter-values.properties to contain values that are valid in your own AWS account. You’ll need a pair of IAM user programmatic access keys, and the name of the AWS region that you’re using. If you don’t have an AWS account, then you won’t be able to run the test.

Here’s an example of a test method from the integration test class.

	@Test
	public void testS3Client() throws Exception {
		try (
			S3Client s3Client = S3Client.builder()
				.region(targetAwsRegion)
				.build();
		) {
			assertNotNull(s3Client);

			ListBucketsResponse response = s3Client.listBuckets(ListBucketsRequest.builder().build());

			assertNotNull(response);
			List<Bucket> myBucketList = response.buckets();
			assertNotNull(myBucketList);
			LOGGER.info("My AWS account has {} S3 buckets", myBucketList.size());
			for (Bucket bucket : myBucketList) {
				LOGGER.info(" --> bucket: {}", bucket.name());
			}
		}
	}

The output from this test looks like this on my machine:

Setting up the IAM user authentication properties

The AWS SDK has more than one way of getting its authentication credentials. This application uses the scheme where the system properties “aws.accessKeyId” and “aws.secretAccessKey” are set to the access key id and secret access key of my AWS IAM user. Those system properties must hold a valid pair of values before any AWS calls are made.

I use a Spring “configuration bean” to read the values at application startup, and populate the system properties at that time. This seems sloppy, but I’ll find a cleaner way later.

package com.jimtough.sbdaws;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

import software.amazon.awssdk.regions.Region;

@Configuration
@PropertySource("classpath:/app.properties")
public class ConfigurationBean {

	private static final Logger LOGGER = LoggerFactory.getLogger(ConfigurationBean.class);

	/**
	 * The AWS SDK looks for this System property to get the IAM user access key id
	 */
	public static final String SYSPROPKEY_AWS_ACCESS_KEY_ID = "aws.accessKeyId";
	/**
	 * The AWS SDK looks for this System property to get the IAM user secret key
	 */
	public static final String SYSPROPKEY_AWS_SECRET_ACCESS_KEY = "aws.secretAccessKey";

	@Value("${my.name}")
	private String myName;

	@Value("${sdk.user.aws.accessKeyId}")
	private String awsAccessKeyId;

	@Value("${sdk.user.aws.secretAccessKey}")
	private String awsSecretAccessKey;

	@Value("${aws.target.region}")
	private String awsTargetRegionName;

	private Region awsTargetRegion;

	@PostConstruct
	void postConstruct() {
		awsTargetRegion = Region.of(awsTargetRegionName);
		LOGGER.debug("App configuration properties loaded | AWS target region: [{}] | IAM user id: [{}]",
				this.awsTargetRegion.value(), this.awsAccessKeyId);
		System.setProperty(SYSPROPKEY_AWS_ACCESS_KEY_ID, awsAccessKeyId);
		System.setProperty(SYSPROPKEY_AWS_SECRET_ACCESS_KEY, awsSecretAccessKey);
	}

	public String getMyName() {
		return myName;
	}

	public Region getAwsTargetRegion() {
		return awsTargetRegion;
	}

}

Calling the AWS SDK from the application

I encapsulated the SDK calls inside a Spring component class. Any other class that needs to interact with the SDK will do so via this component.

package com.jimtough.sbdaws.awssdk;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ecs.ECSClient;
import software.amazon.awssdk.services.ecs.model.Cluster;
import software.amazon.awssdk.services.ecs.model.DescribeClustersRequest;
import software.amazon.awssdk.services.ecs.model.DescribeClustersResponse;
import software.amazon.awssdk.services.ecs.model.ListClustersRequest;
import software.amazon.awssdk.services.ecs.model.ListClustersResponse;
import software.amazon.awssdk.services.iam.IAMClient;
import software.amazon.awssdk.services.iam.model.GetUserRequest;
import software.amazon.awssdk.services.iam.model.GetUserResponse;
import software.amazon.awssdk.services.iam.model.User;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Bucket;
import software.amazon.awssdk.services.s3.model.ListBucketsRequest;
import software.amazon.awssdk.services.s3.model.ListBucketsResponse;

/**
 * Uses the AWS SDK to get information about the environment that is running this app instance.
 *
 * Note that the AWS SDK relies on AWS IAM user credentials being defined in system properties
 * (or another documented AWS SDK credential scheme) at runtime.
 *
 * @author JTOUGH
 */
@Component
public class AwsEnvironmentInterrogator {

	private static final Logger LOGGER = LoggerFactory.getLogger(AwsEnvironmentInterrogator.class);

	/**
	 * Retrieve details on the AWS IAM user that is being used by this application when calling the AWS SDK.
	 *
	 * Note that the AWS SDK relies on AWS IAM user credentials being defined in system properties
	 * (or another documented AWS SDK credential scheme) at runtime.
	 *
	 * @return User
	 * @throws AwsSdkException Thrown if the SDK call throws an exception
	 */
	public User getIAMUser() throws AwsSdkException {
		try (
			IAMClient iamClient = IAMClient.builder()
				// The IAM users are considered "Global" in AWS, rather than region-specific
				.region(Region.AWS_GLOBAL)
				.build()
		) {
			GetUserResponse response = iamClient.getUser(GetUserRequest.builder().build());
			User user = response.user();
			LOGGER.debug("IAM user information retrieved successfully | username: [{}]", user.userName());
			return user;
		} catch (Exception e) {
			throw new AwsSdkException("Unable to retrieve information on IAM user", e);
		}
	}

	/**
	 * Retrieves list of ECS clusters owned by this AWS account on the target AWS region
	 *
	 * @param targetRegion Non-null
	 * @return Non-null (possibly empty) list
	 * @throws AwsSdkException Thrown if the SDK call throws an exception
	 */
	public List<Cluster> getECSClusterList(Region targetRegion) throws AwsSdkException {
		if (targetRegion == null) {
			throw new IllegalArgumentException("targetRegion cannot be null");
		}
		try (
			ECSClient ecsClient = ECSClient.builder().region(targetRegion).build();
		) {
			ListClustersResponse response = ecsClient.listClusters(ListClustersRequest.builder().build());
			List<String> clusterArns = response.clusterArns();
			if (clusterArns == null) {
				return Collections.emptyList();
			}
			List<Cluster> clusterList = new ArrayList<>(clusterArns.size());
			LOGGER.debug("My AWS account has {} ECS clusters defined", clusterArns.size());
			for (String clusterArn : clusterArns) {
				LOGGER.debug(" --> clusterArn: {}", clusterArn);
				DescribeClustersResponse descResponse =
						ecsClient.describeClusters(
								DescribeClustersRequest.builder().clusters(clusterArn).build());
				List<Cluster> clusters = descResponse.clusters();
				// Assume that a describe request with a valid 'arn' will return exactly one result
				Cluster cluster = clusters.get(0);
				LOGGER.debug("   DETAILS | name: {} | arn: {} | containers: {} | services: {} | tasks: {}",
						cluster.clusterName(),
						cluster.clusterArn(),
						cluster.registeredContainerInstancesCount(),
						cluster.activeServicesCount(),
						cluster.runningTasksCount());
				clusterList.add(cluster);
			}
			return clusterList;
		} catch (Exception e) {
			throw new AwsSdkException("Unable to retrieve information on ECS clusters", e);
		}
	}

	/**
	 * Retrieves the list of S3 buckets owned by this AWS account (the bucket list is account-wide; the target region only selects the endpoint used for the call)
	 *
	 * @param targetRegion Non-null
	 * @return Non-null (possibly empty) list
	 * @throws AwsSdkException Thrown if the SDK call throws an exception
	 */
	public List<Bucket> getS3BucketList(Region targetRegion) throws AwsSdkException {
		if (targetRegion == null) {
			throw new IllegalArgumentException("targetRegion cannot be null");
		}
		try (
			S3Client s3Client = S3Client.builder().region(targetRegion).build();
		) {
			ListBucketsResponse response = s3Client.listBuckets(ListBucketsRequest.builder().build());

			List<Bucket> myBucketList = response.buckets();
			if (myBucketList == null) {
				return Collections.emptyList();
			}
			LOGGER.debug("S3 buckets information retrieved successfully | Number of buckets: [{}]", myBucketList.size());
			return myBucketList;
		} catch (Exception e) {
			throw new AwsSdkException("Unable to retrieve information on S3 buckets", e);
		}
	}

}

Generating the HTML reply

The Application class has the AwsEnvironmentInterrogator injected via an @Autowired annotation. Every time an HTTP GET is received, the AwsEnvironmentInterrogator is used to retrieve information from AWS, and that information is returned to the caller as SDK objects. The Application class just needs to format the information into a (very basic) HTML response.

Here is the new code for the Application class.

package com.jimtough.sbdaws;

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.jimtough.sbdaws.awssdk.AwsEnvironmentInterrogator;
import com.jimtough.sbdaws.awssdk.AwsSdkException;

import software.amazon.awssdk.services.ecs.model.Cluster;
import software.amazon.awssdk.services.iam.model.User;
import software.amazon.awssdk.services.s3.model.Bucket;

@SpringBootApplication
@RestController
public class Application {

	private static final Logger LOGGER = LoggerFactory.getLogger(Application.class);

	private final String EOL = System.lineSeparator();
	private final String HTML_HELLO_MESSAGE =
			"<h2>Hello! This is the Spring Boot webapp that %s created.</h2>" + EOL;

	private final String HTML_IAM_USER_HEADER =
			"<h3>My IAM User: <b>%s</b></h3>" + EOL;
	private final String HTML_IAM_USER_DETAILS =
			"<ul>" + EOL +
			"	<li><b>arn:</b> %s</li>" + EOL +
			"	<li><b>creation date:</b> %s</li>" + EOL +
			"</ul>" + EOL;
	private final String HTML_IAM_USER_SDK_FAILURE =
			"<h3><b>SDK request for IAM user details failed!</b></h3>" + EOL;

	private final String HTML_ECS_CLUSTER_HEADER =
			"<h3>My ECS clusters (%d in total)</h3>" + EOL;
	private final String HTML_ECS_CLUSTER_DETAILS =
			"	<li><b>name:</b> %s | <b>containers:</b> %s | <b>services:</b> %s | <b>tasks:</b> %s</li>" + EOL;
	private final String HTML_ECS_CLUSTER_SDK_FAILURE =
			"<h3><b>SDK request for ECS cluster details failed!</b></h3>" + EOL;

	private final String HTML_S3_BUCKETS_HEADER =
			"<h3>My S3 buckets (%d in total)</h3>" + EOL;
	private final String HTML_S3_BUCKET_DETAILS =
			"	<li><b>name:</b> %s | <b>creation date:</b> %s</li>" + EOL;
	private final String HTML_S3_BUCKETS_SDK_FAILURE =
			"<h3><b>SDK request for S3 buckets list failed!</b></h3>" + EOL;

	@Autowired
	private ConfigurationBean configurationBean;

	@Autowired
	private AwsEnvironmentInterrogator interrogator;

	// This is the only HTTP request handler defined in the application.
	// It will return a kind-of-ugly HTML reply with details from the AWS SDK method call responses.
	@RequestMapping("/")
	public String home() {
		LOGGER.info("Request received");
		StringBuilder sb = new StringBuilder();
		sb.append("<html><body>");

		// Start with a simple greeting that includes one of the values from the properties file.
		// This should insert YOUR name if you edited the "maven-filter-values.properties" file.
		sb.append(String.format(HTML_HELLO_MESSAGE, configurationBean.getMyName()));

		// Add the IAM user details to the HTML response
		try {
			User user = interrogator.getIAMUser();
			sb.append(String.format(HTML_IAM_USER_HEADER, user != null ? user.userName() : "UNKNOWN"));
			if (user != null) {
				sb.append(String.format(HTML_IAM_USER_DETAILS, user.arn(), user.createDate()));
			}
		} catch (AwsSdkException e) {
			sb.append(HTML_IAM_USER_SDK_FAILURE);
		}

		// Add the ECS cluster details to the HTML response
		try {
			List<Cluster> clusterList = interrogator.getECSClusterList(configurationBean.getAwsTargetRegion());
			sb.append(String.format(HTML_ECS_CLUSTER_HEADER, clusterList.size()));
			if (!clusterList.isEmpty()) {
				sb.append("<ul>").append(EOL);
				for (Cluster cluster : clusterList) {
					sb.append(String.format(HTML_ECS_CLUSTER_DETAILS,
							cluster.clusterName(),
							cluster.registeredContainerInstancesCount(),
							cluster.activeServicesCount(),
							cluster.runningTasksCount())
						).append(EOL);
				}
				sb.append("</ul>").append(EOL);
			}
		} catch (AwsSdkException e) {
			sb.append(HTML_ECS_CLUSTER_SDK_FAILURE);
		}

		// Add the S3 bucket details to the HTML response
		try {
			List<Bucket> bucketList = interrogator.getS3BucketList(configurationBean.getAwsTargetRegion());
			sb.append(String.format(HTML_S3_BUCKETS_HEADER, bucketList.size()));
			if (!bucketList.isEmpty()) {
				sb.append("<ul>").append(EOL);
				for (Bucket bucket : bucketList) {
					sb.append(String.format(HTML_S3_BUCKET_DETAILS, bucket.name(), bucket.creationDate())).append(EOL);
				}
				sb.append("</ul>").append(EOL);
			}
		} catch (AwsSdkException e) {
			sb.append(HTML_S3_BUCKETS_SDK_FAILURE);
		}

		sb.append("</body></html>");
		return sb.toString();
	}

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}

Building and running the app

This version of the application gets its IAM credentials from a properties file that is bundled with it, so everything can be run from outside the AWS infrastructure. I’ll build and run it locally to make sure.

mvn clean package

The Maven “package” target does the same work as in my previous posts, plus an added stage where my single integration test is executed.

  1. Compile the Spring Boot application code
  2. Run the integration test (the build stops here if the test fails)
  3. Build the “this-is-the-app.jar” binary that is the Spring Boot application JAR
  4. Build the Docker image (which includes the Spring Boot application JAR)

This time, let’s skip the step of testing the Spring Boot JAR by itself and just run the Docker image locally.

docker run --rm -p 80:80 -t jtough/sbdaws:0.1.0

There are a few differences to note from the version of the Spring Boot app that I used in my last post:

  • This version of the Spring Boot app runs on port 80, so we need to map that port instead of 8080 as we used in the previous version. I changed the Spring Boot port by adding an application.properties file to the project, and setting a special property in that file to specify the HTTP port.
  • The Maven artifact is now named “sbdaws”. This is short for “Spring Boot/Docker/AWS”.
  • The Maven version is now “0.1.0”, and I’m allowing the Docker plugin to use the Maven version as the Docker image tag.

SBDAWS-P05-screenshot-04

And this is what I see when I connect to “localhost” in a browser:

SBDAWS-P05-screenshot-05b

I’ve cheated a bit here. I already created my ECS cluster and deployed the Docker image for this version of the app. That’s why a cluster named “sbdaws-cluster” already exists in my AWS account, and already has a service running. I came back and ran the image locally so I could get the screenshots for this post.

Running the Docker image on AWS

In my post, Docker on AWS – Part 4 – Simple Spring Boot app, I walked through the steps that are needed to get a Docker image into your AWS account via the Command Line Interface (CLI). I’m not going to discuss those details again, but I will quickly show the commands to use, since this version of the app changes things slightly.

Tagging the Docker image

docker tag jtough/sbdaws:0.1.0 .dkr.ecr.ca-central-1.amazonaws.com/sbdaws:0.1.0

This command does three things (and pardon my terminology, as it might be incorrect):

  1. Creates a new “tag” for my existing Docker image, with the same “IMAGE ID” as the original image
  2. Points the new “tag” at a different target repository, as specified in the command
  3. Gives the new “tag” the tag name “0.1.0”, because I explicitly specified that in the command

Note that at this point I have only created the tag. The tagged image has not yet been pushed to AWS.

Creating the target Docker repository on AWS

I’ve renamed the application to “sbdaws”, so now I’ll need to create a new ECR repository on my AWS account with a matching name. This can be done manually in the AWS Management Console, or via the CLI.

Here is the CLI command that I used:

aws ecr create-repository --repository-name sbdaws

Pushing the Docker image to AWS

I’ve created my target ECR repository, and tagged my Docker image to match. Now it’s time to push the image. The first step is to request temporary authentication credentials from AWS for my private ECR Docker repo.

aws ecr get-login --no-include-email

After I’m authenticated, I execute the push.
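A quick note on get-login: it doesn’t log you in by itself. It prints a `docker login` command containing temporary credentials, and it’s that printed command you actually run. On a Unix-like shell the two steps can be combined in one line (a common idiom with AWS CLI v1, not something specific to this project):

```shell
# Run the docker login command that 'aws ecr get-login' prints.
# Requires AWS CLI v1; the temporary ECR credentials are valid for 12 hours.
$(aws ecr get-login --no-include-email --region ca-central-1)
```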

docker push .dkr.ecr.ca-central-1.amazonaws.com/sbdaws:0.1.0

It should look something like this:

SBDAWS-P05-screenshot-06b

Running the Docker image on AWS ECS

In my post, Docker on AWS – Part 4 – Simple Spring Boot app, I walked through the steps to create the ECS task definition, cluster and service that will run the Docker image. I’m not going to repeat any of that here, because the process will be the same except for two things:

  1. The new setup will refer to the newly uploaded Docker image for “sbdaws”
  2. In the “Networking” configuration for the new cluster, make sure to set the Port Range value to “80” rather than “8080” this time.
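I did those steps through the Management Console, but for reference the CLI equivalents look roughly like this (an untested sketch — the task definition JSON file and the service settings are placeholder names, and the console wizard fills in far more detail, such as the networking configuration):

```shell
# Hypothetical CLI equivalents of the console steps (names are examples only)
aws ecs create-cluster --cluster-name sbdaws-cluster
aws ecs register-task-definition --cli-input-json file://sbdaws-task-definition.json
aws ecs create-service --cluster sbdaws-cluster --service-name sbdaws-service \
	--task-definition sbdaws --desired-count 1
```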

My running service is shown in the screenshot below.

SBDAWS-P05-screenshot-07
The “sbdaws-service” up and running in ECS

Now I’ll connect from a browser.

SBDAWS-P05-screenshot-08
Browsing to my service hosted on AWS ECS

It works! Notice that I have associated my service with a custom A-record for my “jimtough.com” domain. I covered how to do this in a previous post (Docker on AWS – Part 3 – Assigning an Elastic IP and Domain Name), so I won’t repeat that procedure here.

That’s all for this post. The application is primed to have more added to it, so now I just need to decide what to experiment with next.
