Using Docker containers has become an integral part of everyday developer workflows. Containers provide a consistent, isolated environment, ensuring that an application behaves the same way regardless of the underlying system. This is particularly useful for microservices, as each service can run in its own container with its required dependencies. It also reduces the need for fully provisioned shared environments, speeds up testing, and improves collaboration across teams. By leveraging Docker, developers can focus more on coding and less on managing infrastructure.
This seamless integration of Docker into development workflows not only simplifies managing microservices but also prepares teams for the complexities of modern cloud-based architectures. As companies move more services to the cloud to cut development and infrastructure costs, new challenges arise, particularly around testing and running environments locally. Developers often need to replicate cloud-based services for testing purposes, which can be difficult given the intricacies of cloud configurations and dependencies. Tools like Docker and LocalStack address these challenges by simulating cloud services locally, enabling developers to test and debug applications without direct access to cloud environments.
In this article, I will demonstrate how to create a containerized environment with an application that retrieves files from LocalStack’s simulated S3 service.
First, we need a Docker Compose file to define the containers for our environment. This file will specify the services, networks, and configurations required to run the application and LocalStack together.
x-shared-env: &aws-envs
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  AWS_REGION: us-west-2
  AWS_ENDPOINT_URL: http://192.168.1.100:4566

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      <<: *aws-envs
    depends_on:
      - localstack
    ports:
      - "5000:5000"
    volumes:
      - ~/.aws:/root/.aws
    networks:
      - app-network
  localstack:
    image: localstack/localstack
    container_name: localstack
    volumes:
      - ./localstack/init/:/etc/localstack/init/ready.d
      - ./localstack/resources:/home/localstack/resources
    environment:
      <<: *aws-envs
      SERVICES: s3
      DEBUG: 1
    ports:
      - "4566:4566"
      - "4572:4572"
    networks:
      app-network:
        ipv4_address: 192.168.1.100

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.1.0/24
Notice that the IP address is explicitly assigned to the localstack container on the app-network. This is because, according to the AWS SDK documentation:

"The default behaviour is to detect which access style to use based on the configured endpoint (an IP will result in path-style access)."

Using a DNS name like localstack would result in virtual-hosted-style requests, which are more complex to reroute than path-style requests.
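If assigning a static IP feels heavy-handed, the AWS SDK for JavaScript v3 can also be told to force path-style access on the client, so a DNS endpoint such as http://localstack:4566 would work as well. Here is a minimal sketch; the endpoint value assumes the Compose service name resolves via Docker's internal DNS:

import { S3Client } from '@aws-sdk/client-s3';

// forcePathStyle makes the SDK request http://localstack:4566/dev-bucket/key
// instead of the virtual-hosted form http://dev-bucket.localstack:4566/key.
const s3Client = new S3Client({
  endpoint: 'http://localstack:4566',
  forcePathStyle: true,
});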
Let’s now take a closer look at the volumes section:
volumes:
  - ./localstack/init/:/etc/localstack/init/ready.d
  - ./localstack/resources:/home/localstack/resources
LocalStack supports custom initialization scripts to configure its services when the container starts. Any scripts or commands placed in the ready.d directory inside the container are executed after LocalStack is fully initialized.

Let’s create an s3.sh script in the /localstack/init/ directory to initialize our S3 bucket and upload files to it. The files to be uploaded are located in the second bound volume (/home/localstack/resources).
#!/bin/sh
# Create the bucket, then upload everything from the mounted resources directory.
awslocal --region=us-west-2 --endpoint-url=http://127.0.0.1:4566 s3 mb s3://dev-bucket
awslocal s3 cp /home/localstack/resources s3://dev-bucket --region us-west-2 --recursive
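Once the container reports ready, you can verify the upload from the host with the plain AWS CLI pointed at LocalStack's edge port (assuming the CLI is installed locally):

aws --endpoint-url=http://localhost:4566 --region us-west-2 s3 ls s3://dev-bucket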
All that is left is a simple piece of TypeScript code to retrieve a file from LocalStack’s S3. The AWS environment variables are defined in the docker-compose configuration, keeping them decoupled from the code and easy to modify when needed.
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

// Credentials, region, and endpoint are read from the AWS_* environment variables.
const s3Client = new S3Client({});

async function pullAndPrintFile(bucketName: string, key: string) {
  console.log(`Fetching file '${key}' from bucket '${bucketName}'...`);
  const command = new GetObjectCommand({ Bucket: bucketName, Key: key });
  const result = await s3Client.send(command);
  const bodyString = await result.Body?.transformToString();
  console.log(`Content of '${key}':\n`);
  console.log(bodyString);
}

const [bucketName, key] = process.argv.slice(2);
pullAndPrintFile(bucketName, key);
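For completeness, a minimal Dockerfile for the app service could look like the sketch below; the Node base image and file layout are assumptions, so adjust them to your project.

FROM node:20-alpine
WORKDIR /app
# Install dependencies first so the layer is cached across code changes.
COPY package*.json ./
RUN npm install
COPY . .
# Fetch test.txt from dev-bucket on startup.
CMD ["npx", "ts-node", "src/app.ts", "dev-bucket", "test.txt"]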
The app container simply runs the following command:

npx ts-node src/app.ts dev-bucket test.txt

which produces the following output:
npm warn exec The following package was not found and will be installed: ts-node@10.9.2
Fetching file 'test.txt' from bucket 'dev-bucket'...
Content of 'test.txt':
Hello World!
That’s it! This mocked S3 setup can be used in CI/CD pipelines or to simplify local development workflows.
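For example, in a CI pipeline the whole environment can be spun up with a single command; --exit-code-from propagates the app container's exit code so the pipeline fails when the app does:

docker compose up --build --abort-on-container-exit --exit-code-from app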