Guide to Building an S3 Local Development Environment Using MinIO (AWS SDK for Java 2.x) with Docker Compose and Health Checks
Additions & Revisions
The initial version of this post included Java code that used AWS SDK for Java 1.x, but since 1.x has entered maintenance mode, a warning message started appearing when starting Spring Boot. We therefore migrated the source code to AWS SDK for Java 2.x. We have also added code snippets so that frequently used operations are easily accessible via copy-paste.
Additionally, when setting up MinIO with Docker Compose on GitHub Actions, we hit an issue where mc (MinIO Client) ran before MinIO had fully started, resulting in an error. To resolve this, we added a health check so that mc runs only after MinIO is fully up.
Introduction and Summary
Hello. I am Miyashita, a membership management engineer in the Common Services Development Group[1][2][3][4] at KINTO Technologies.
Today, I'd like to talk about how we solved the challenges we faced when building S3-compatible local storage for development.
Specifically, I'll share a practical approach on how to leverage the open source MinIO to emulate AWS S3 features.
I hope this article will be helpful for engineers confronting similar challenges.
What is MinIO?
MinIO is an open-source object storage server with an S3-compatible API. Just like a NAS, you can upload files to it and download files from it.
There is also a similar service in this area called LocalStack. LocalStack is a tool specialized in AWS emulation and can emulate services such as S3, Lambda, SQS, and DynamoDB locally.
Although these two tools serve different purposes, both meet the requirements for setting up an S3-compatible environment locally.
MinIO website
LocalStack website
Tool Selection: MinIO vs. LocalStack
Development requirements
As a development requirement, it was necessary to automatically create an arbitrary S3 bucket simply by running docker-compose, and register email templates, CSV files, etc., in the bucket.
This is because it’s cumbersome to register files with commands or GUI after a container is started.
Also, when conducting automated local S3 connection testing, the bucket and files must be ready as soon as the container starts.
Tool Comparison
We compared how easily each tool could meet these requirements. With LocalStack, you create buckets and manipulate files through the aws-cli, whereas MinIO provides a dedicated command-line tool, mc (MinIO Client), which made the setup easier to build.
In addition, I found MinIO's GUI-based management console more polished, and a comparison on Google Trends shows that MinIO is the more popular of the two. For these reasons, we decided to adopt MinIO.
Compose Files
To set up a MinIO local environment, a "compose.yaml" file must first be prepared. Follow the steps below.
- Create a directory.
- Create a text file in the directory with the filename "compose.yaml".
- Copy and paste the contents of compose.yaml below and save it.
The filename docker-compose.yml is not recommended. Click here for the Compose file specifications.
*docker-compose.yml still works, for backwards compatibility. For more information, click here.
services:
  # Configure the MinIO server container
  minio:
    container_name: minio_test
    image: minio/minio:latest
    # Start the MinIO server and specify the access port for the management console (GUI)
    command: ['server', '/data', '--console-address', ':9001']
    ports:
      - "9000:9000" # for API access
      - "9001:9001" # for the management console (GUI)
    # USER and PASSWORD can be omitted;
    # in that case they default to minioadmin / minioadmin.
    environment:
      - "MINIO_ROOT_USER=minio"
      - "MINIO_ROOT_PASSWORD=minio123"
    # Health check to verify that MinIO has fully started
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:9000/minio/health/live" ]
      interval: 1s
      timeout: 20s
      retries: 20
    # MinIO-managed configuration files and uploaded files.
    # Mount a local directory if you want to browse the files locally
    # or persist the registered files.
    # volumes:
    #   - ./minio/data:/data
    # Enable this if you want the MinIO container to start automatically
    # whenever it is stopped, e.g., after the PC restarts.
    # restart: unless-stopped

  # Configure the MinIO Client (mc) container
  mc:
    image: minio/mc:latest
    container_name: mc_test
    depends_on:
      minio:
        # Run mc only after the health check confirms that MinIO has fully started
        condition: service_healthy
    environment:
      - "MINIO_ROOT_USER=minio" # Same user name as above
      - "MINIO_ROOT_PASSWORD=minio123" # Same password as above
    # Create buckets with mc and place the initial files in them.
    # First, register an alias so that subsequent commands can refer to
    # the MinIO server concisely; here the alias name is myminio.
    # mb (short for "make bucket") creates a new bucket.
    # cp copies local files to MinIO.
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 minio minio123;
      mc mb myminio/mail-template;
      mc mb myminio/image;
      mc mb myminio/csv;
      mc cp init_data/mail-template/* myminio/mail-template/;
      mc cp init_data/image/* myminio/image/;
      mc cp init_data/csv/* myminio/csv/;
      "
    # Mount the directory containing the files you want to upload to MinIO.
    volumes:
      - ./myData/init_data:/init_data
Directory and File Structure
Create appropriate dummy files and start the containers with the following directory and file structure.
minio_test# tree .
.
├── compose.yaml
└── myData
    └── init_data
        ├── csv
        │   └── example.csv
        ├── image
        │   ├── slide_01.jpg
        │   └── slide_04.jpg
        └── mail-template
            └── mail.vm
Startup and Operation Check
The following is the flow of running MinIO and its client on Docker and checking its operation.
Start the Docker containers in the background (the -d flag) with the following command. If Docker Desktop (for Windows) is installed, you can run it from a command-line interface such as Command Prompt or PowerShell. *Download Docker Desktop here
docker compose up -d
*The docker-compose command with the hyphen in the middle is no longer used; it is now docker compose. For more information, click here.
Docker Desktop
Open Docker Desktop and check the container status.
You can see that the minio_test container is running, while the mc_test container has stopped. This is expected: mc_test exits once the commands in its entrypoint finish.
Check the execution log of the mc_test container.
MC Execution Log
The logs show that the MinIO Client (mc) ran and that all commands completed successfully.
Management Console
Next, let's explore the MinIO GUI management console. Access port 9001 on localhost with a browser. http://127.0.0.1:9001
When the login screen appears, enter the username and password configured in compose.yaml (minio and minio123 in this example).
List of Buckets
Select "Object Browser" from the menu on the left.
You will see a list of buckets created and the number of files stored in them.
List of Files
Select the "image" bucket as an example and look inside.
You will see the pre-uploaded files.
You can directly view the file by selecting "Preview" from the action menu next to the file.
File Preview Function
Our mascot, Kumobii, appears in the preview.
Being able to preview images directly in the MinIO management console is very useful.
Installation of MC (MinIO Client)
When handling large numbers of files, the command line can be more efficient than the GUI.
It is also very handy for checking file paths when an error occurs while your application code accesses MinIO during development.
This section describes how to install the MinIO Client and covers its basic operations.
*If the GUI management console is sufficient for your needs, feel free to skip this section.
# Download mc with the following command. The executable file is stored in a directory of your choice.
minio_test/mc# curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs \
  -o ./minio-binaries/mc

# Operation check
# Run update to confirm that the installed mc is the latest version, and display the version
# to confirm that it was installed correctly.
# Whether to add the mc command to your PATH is up to you; I won't this time.
minio_test/mc# ./minio-binaries/mc update
> You are already running the most recent version of ‘mc’.
minio_test/mc# ./minio-binaries/mc -version
> mc version RELEASE.2023-10-30T18-43-32Z (commit-id=9f2fb2b6a9f86684cbea0628c5926dafcff7de28)
> Runtime: go1.21.3 linux/amd64
> Copyright (c) 2015-2023 MinIO, Inc.
> License GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
# Set alias
# Set the alias required to access the MinIO server.
minio_test/mc# ./minio-binaries/mc alias set myminio http://localhost:9000 minio minio123;
> Added `myminio` successfully.
# Example of file operation
# Display a list of files in the bucket
minio_test/mc# ./minio-binaries/mc ls myminio/image
> [2023-11-07 21:18:54 JST] 11KiB STANDARD slide_01.jpg
> [2023-11-07 21:18:54 JST] 18KiB STANDARD slide_04.jpg
minio_test/mc# ./minio-binaries/mc ls myminio/csv
> [2023-11-07 21:18:54 JST] 71B STANDARD example.csv
# Screen output of file contents
minio_test/mc# ./minio-binaries/mc cat myminio/csv/example.csv
> name,age,job
> tanaka,30,engineer
> suzuki,25,designer
> satou,40,manager
# Batch file upload
minio_test/mc# ./minio-binaries/mc cp ../myData/init_data/image/* myminio/image/;
> ...t_data/image/slide_04.jpg: 28.62 KiB / 28.62 KiB
# File deletion
minio_test/mc# ./minio-binaries/mc ls myminio/mail-template
> [2023-11-15 11:46:25 JST] 340B STANDARD mail.txt
minio_test/mc# ./minio-binaries/mc rm myminio/mail-template/mail.txt
> Removed `myminio/mail-template/mail.txt`.
List of MC Commands
For more detailed documentation on the MinIO Client, refer to the official manual: click here for the official MinIO Client manual.
Lastly, Access from Java Source Code (AWS SDK for Java 2.x)
After building an S3-compatible development environment using MinIO locally, I'll demonstrate how to access MinIO from a real Java application.
First, configure Gradle.
plugins {
    id 'java'
}

java {
    sourceCompatibility = '17'
}

repositories {
    mavenCentral()
}

dependencies {
    // The initial version of the blog used AWS SDK for Java 1.x,
    // but it has been updated to AWS SDK for Java 2.x
    // https://mvnrepository.com/artifact/software.amazon.awssdk/s3
    implementation 'software.amazon.awssdk:s3:2.28.28'
}
Next, create a Java class to access MinIO.
I wrote down the commonly used operations so they can easily be copied and reused as needed.
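Before the listing, one pattern worth noting: getObject returns an InputStream, and the code always reads it inside try-with-resources so the underlying connection is released. The reading pattern itself is plain JDK and works for any stream; here is a minimal stdlib-only sketch (the class and method names are my own, not from the project):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class StreamRead {

  /** Reads an entire InputStream into a String, joining lines with '\n'. */
  static String readAll(InputStream in) {
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
      return reader.lines().collect(Collectors.joining("\n"));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    // A ByteArrayInputStream stands in here for the S3 response stream.
    String csv = "name,age,job\ntanaka,30,engineer";
    System.out.println(readAll(new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8))));
  }
}
```

The same try-with-resources shape appears in getStringFromS3File and downloadObject below; only the source of the stream changes.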
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.CreateBucketConfiguration;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.NoSuchBucketException;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable;

public class MainEn {

  public static void main(String[] args) {
    try {
      new MainEn().execute();
    } catch (Exception e) {
      System.out.println("An error occurred: " + e.getMessage());
    }
  }

  /** S3 Client */
  private S3Client s3Client;

  /**
   * Intended to be switched based on Spring Boot profiles.
   * For batch processing, switch based on startup arguments.
   */
  private final boolean isLocal = true;

  /** MinIO S3 compatibility test */
  private void execute() throws IOException {
    System.out.println("----- Start -----");

    // Initialize the S3 client. Connect to MinIO if local, otherwise connect to AWS.
    if (isLocal) {
      s3Client = getS3ClientForLocal();
    } else {
      s3Client = getS3ClientForAwsS3();
    }

    final String bucketName = "csv";

    // 1. Retrieve a list of files in the bucket
    List<String> fileList = getFileList(bucketName);
    System.out.println("File list in the " + bucketName + " bucket:");
    fileList.forEach(file -> System.out.println(" - " + file));

    // 2. Retrieve file content as a stream (without downloading the file)
    for (String fileKey : fileList) {
      System.out.println("\nRetrieve line-by-line as a stream, File name: " + fileKey);
      try (InputStream s3is =
              s3Client.getObject(
                  GetObjectRequest.builder().bucket(bucketName).key(fileKey).build());
          BufferedReader reader = new BufferedReader(new InputStreamReader(s3is))) {
        String line;
        while ((line = reader.readLine()) != null) {
          System.out.println(line);
        }
      }
    }

    // 3. Retrieve file content all at once as a String
    final String sourceKey = "example.csv";
    System.out.println("\nRetrieve all at once, File name: " + sourceKey);
    String fileContents = getStringFromS3File(bucketName, sourceKey);
    System.out.println(fileContents);

    // 4. Copy a file within the same bucket
    final String destinationKey = "example_copy.csv";
    copyObject(bucketName, sourceKey, destinationKey);
    System.out.println(
        "\nFile copy successful: "
            + bucketName
            + "/"
            + sourceKey
            + " -> "
            + bucketName
            + "/"
            + destinationKey);

    // 5. Create a bucket
    final String tmpBucketName = "tmp-bucket";
    createBucketIfNotExists(tmpBucketName);
    System.out.println(
        "\nBucket creation successful: " + tmpBucketName + ": " + doesBucketExist(tmpBucketName));

    // 6. Move a file between buckets
    moveObjectBetweenBuckets(bucketName, destinationKey, tmpBucketName, destinationKey);
    System.out.println(
        "\nMove successful: "
            + tmpBucketName
            + "/"
            + destinationKey
            + ": "
            + doesFileExistInS3(tmpBucketName, destinationKey));

    // 7. Upload a file from a String
    String newFileName = "string.txt";
    String fileContent = "memo memo memo";
    putObject(tmpBucketName, newFileName, fileContent);
    System.out.println("\nFile upload successful (String): " + tmpBucketName + "/" + newFileName);

    // 8. Upload a file from a File object
    Path filePath = Files.writeString(Paths.get("file.txt"), "This is a sample file content.");
    putFileObject(tmpBucketName, filePath.toFile());
    System.out.println(
        "\nFile upload successful (File): " + tmpBucketName + "/" + filePath.toFile().getName());

    // 9. Rename a file
    String renameFileName = "renamed.txt";
    renameObject(tmpBucketName, filePath.toFile().getName(), renameFileName);
    System.out.println(
        "\nFile rename successful: "
            + renameFileName
            + ": "
            + doesFileExistInS3(tmpBucketName, renameFileName));

    // 10. Download a file locally
    String downloadFileName = "download.csv";
    downloadObject(bucketName, sourceKey, downloadFileName);
    System.out.println(
        "\nFile download successful: "
            + downloadFileName
            + ": "
            + new File(downloadFileName).exists());

    // 11. Create a directory with today's date and back up the file
    String backupFilePath =
        LocalDateTime.now().format(DateTimeFormatter.ofPattern("/yyyy/MM/dd/"))
            + filePath.toFile().getName();
    putFileObjectWithKey(tmpBucketName, backupFilePath, filePath.toFile());
    System.out.println("\nBackup successful: " + doesFileExistInS3(tmpBucketName, backupFilePath));

    // 12. Delete a file (renamed.txt lives in tmp-bucket, so delete it from there)
    deleteObject(tmpBucketName, renameFileName);
    System.out.println("\nFile deletion successful: " + renameFileName);

    System.out.println("----- End -----");
  }

  /** Configure S3 Client (for local) */
  private S3Client getS3ClientForLocal() {
    final String id = "minio";
    final String pass = "minio123";
    final String endpoint = "http://127.0.0.1:9000";
    return S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(id, pass)))
        .endpointOverride(URI.create(endpoint))
        .region(Region.AP_NORTHEAST_1)
        .forcePathStyle(true)
        .build();
  }

  /** Configure S3 Client (for AWS) */
  private S3Client getS3ClientForAwsS3() {
    return S3Client.builder()
        .credentialsProvider(InstanceProfileCredentialsProvider.builder().build())
        .region(Region.AP_NORTHEAST_1)
        .build();
  }

  /** Retrieve a list of files in the specified bucket */
  public List<String> getFileList(String bucketName) throws S3Exception {
    List<String> fileNameList = new ArrayList<>();
    ListObjectsV2Request request = ListObjectsV2Request.builder().bucket(bucketName).build();
    ListObjectsV2Iterable response = s3Client.listObjectsV2Paginator(request);
    response.stream()
        .forEach(result -> result.contents().forEach(s3Object -> fileNameList.add(s3Object.key())));
    return fileNameList;
  }

  /** Retrieve the content of an S3 file as a String */
  public String getStringFromS3File(String bucketName, String s3key) throws IOException {
    String fileContentsString;
    GetObjectRequest getObjectRequest =
        GetObjectRequest.builder().bucket(bucketName).key(s3key).build();
    try (ResponseInputStream<GetObjectResponse> s3InputStream =
            s3Client.getObject(getObjectRequest);
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(s3InputStream, StandardCharsets.UTF_8))) {
      fileContentsString = reader.lines().collect(Collectors.joining("\n"));
    }
    return fileContentsString;
  }

  /** Create a bucket if it does not exist */
  public void createBucketIfNotExists(String bucketName) {
    if (doesBucketExist(bucketName)) {
      System.out.println("The bucket already exists: " + bucketName);
    } else {
      createBucket(bucketName);
    }
  }

  /** Check if a bucket exists */
  private boolean doesBucketExist(String bucketName) {
    try {
      s3Client.headBucket(HeadBucketRequest.builder().bucket(bucketName).build());
      return true;
    } catch (NoSuchBucketException e) {
      return false;
    }
  }

  /** Create a bucket */
  private void createBucket(String bucketName) {
    CreateBucketRequest createBucketRequest =
        CreateBucketRequest.builder()
            .bucket(bucketName)
            .createBucketConfiguration(
                CreateBucketConfiguration.builder()
                    .locationConstraint(Region.AP_NORTHEAST_1.id())
                    .build())
            .build();
    s3Client.createBucket(createBucketRequest);
  }

  /** Move a file between buckets */
  public void moveObjectBetweenBuckets(
      String sourceBucket, String sourceKey, String targetBucket, String targetKey) {
    copyObjectBetweenBuckets(sourceBucket, sourceKey, targetBucket, targetKey);
    deleteObject(sourceBucket, sourceKey);
  }

  /** Copy a file between buckets */
  public void copyObjectBetweenBuckets(
      String sourceBucket, String sourceKey, String destinationBucket, String destinationKey)
      throws S3Exception {
    CopyObjectRequest copyRequest =
        CopyObjectRequest.builder()
            .sourceBucket(sourceBucket)
            .sourceKey(sourceKey)
            .destinationBucket(destinationBucket)
            .destinationKey(destinationKey)
            .build();
    s3Client.copyObject(copyRequest);
  }

  /** Copy a file within the same bucket */
  public void copyObject(String sourceBucket, String sourceKey, String destinationKey)
      throws S3Exception {
    CopyObjectRequest copyRequest =
        CopyObjectRequest.builder()
            .sourceBucket(sourceBucket)
            .sourceKey(sourceKey)
            .destinationBucket(sourceBucket)
            .destinationKey(destinationKey)
            .build();
    s3Client.copyObject(copyRequest);
  }

  /** Upload a file to S3 from a String */
  public void putObject(String bucketName, String s3key, String content) {
    PutObjectRequest putObjectRequest =
        PutObjectRequest.builder().bucket(bucketName).key(s3key).build();
    s3Client.putObject(putObjectRequest, RequestBody.fromString(content));
  }

  /** Upload a file to S3 from a File object */
  public void putFileObject(String bucketName, File file) {
    PutObjectRequest putObjectRequest =
        PutObjectRequest.builder().bucket(bucketName).key(file.getName()).build();
    s3Client.putObject(putObjectRequest, RequestBody.fromFile(file));
  }

  /** Upload a File object to S3 with a specified key */
  public void putFileObjectWithKey(String bucketName, String key, File file) {
    PutObjectRequest putObjectRequest =
        PutObjectRequest.builder().bucket(bucketName).key(key).build();
    s3Client.putObject(putObjectRequest, RequestBody.fromFile(file));
  }

  /** Download a file from S3 to local storage */
  public void downloadObject(String bucketName, String s3key, String localFilePath)
      throws IOException {
    GetObjectRequest getObjectRequest =
        GetObjectRequest.builder().bucket(bucketName).key(s3key).build();
    try (ResponseInputStream<GetObjectResponse> s3InputStream =
        s3Client.getObject(getObjectRequest)) {
      Files.copy(s3InputStream, Path.of(localFilePath), StandardCopyOption.REPLACE_EXISTING);
    }
  }

  /** Rename a file */
  public void renameObject(String bucketName, String oldKey, String newKey) {
    copyObject(bucketName, oldKey, newKey);
    deleteObject(bucketName, oldKey);
  }

  /** Check if a file exists in S3 */
  public String doesFileExistInS3(String bucketName, String key) {
    try {
      s3Client.headObject(HeadObjectRequest.builder().bucket(bucketName).key(key).build());
      return "exists";
    } catch (NoSuchKeyException e) {
      return "not exists";
    }
  }

  /** Delete a file */
  public void deleteObject(String bucketName, String s3key) throws S3Exception {
    DeleteObjectRequest deleteObjectRequest =
        DeleteObjectRequest.builder().bucket(bucketName).key(s3key).build();
    s3Client.deleteObject(deleteObjectRequest);
  }
}
Execution Result
----- Start -----
File list in the csv bucket:
- example.csv
Retrieve line-by-line as a stream, File name: example.csv
name,age,job
tanaka,30,engineer
suzuki,25,designer
satou,40,manager
Retrieve all at once, File name: example.csv
name,age,job
tanaka,30,engineer
suzuki,25,designer
satou,40,manager
File copy successful: csv/example.csv -> csv/example_copy.csv
Bucket creation successful: tmp-bucket: true
Move successful: tmp-bucket/example_copy.csv: exists
File upload successful (String): tmp-bucket/string.txt
File upload successful (File): tmp-bucket/file.txt
File rename successful: renamed.txt: exists
File download successful: download.csv: true
Backup successful: exists
File deletion successful: renamed.txt
----- End -----
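The "Backup successful" line above comes from step 11, which prefixes the key with today's date so backups land in date-partitioned "directories" (S3 and MinIO have no real directories; the slashes in the key act as one). Isolated from the SDK, the key construction is plain java.time; the helper class and method names below are my own, for illustration:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class BackupKey {

  /** Builds a date-partitioned S3 key such as "/2023/11/15/file.txt". */
  static String backupKey(LocalDateTime now, String fileName) {
    // Literal '/' characters in the pattern become path separators in the key.
    return now.format(DateTimeFormatter.ofPattern("/yyyy/MM/dd/")) + fileName;
  }

  public static void main(String[] args) {
    System.out.println(backupKey(LocalDateTime.of(2023, 11, 15, 11, 46), "file.txt"));
    // -> /2023/11/15/file.txt
  }
}
```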
Source Code Description
A notable feature of this code is that the AWS SDK for Java 2.x supports both MinIO and AWS S3.
When connecting to a local MinIO instance, use the getS3ClientForLocal method;
when connecting to AWS S3, use the getS3ClientForAwsS3 method to initialize the client.
This approach makes it possible to use the same SDK, and the same interface, across different backend environments.
It is nice to be able to easily test an application, at no additional cost, before deploying it to the AWS environment.
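Concretely, the two factory methods differ in just two settings: getS3ClientForLocal adds an endpointOverride pointing at MinIO and enables forcePathStyle, because MinIO serves buckets under the path (http://host:9000/bucket/key) rather than as a virtual-host subdomain. That decision can be isolated from the SDK itself; here is a minimal stdlib-only sketch of the switch, with hypothetical names of my own:

```java
import java.net.URI;
import java.util.Optional;

/** Hypothetical value class holding the two settings the local and AWS factory methods differ in. */
record S3Target(Optional<URI> endpointOverride, boolean forcePathStyle) {

  /** Local runs point at MinIO with path-style access; AWS runs keep the SDK defaults. */
  static S3Target forEnvironment(boolean isLocal) {
    return isLocal
        ? new S3Target(Optional.of(URI.create("http://127.0.0.1:9000")), true)
        : new S3Target(Optional.empty(), false);
  }

  public static void main(String[] args) {
    System.out.println(S3Target.forEnvironment(true));  // MinIO endpoint, path-style on
    System.out.println(S3Target.forEnvironment(false)); // SDK defaults (real AWS S3)
  }
}
```

In a Spring Boot application, the same switch would typically hang off a profile (for example, a local profile supplying the MinIO-flavored settings), as hinted at by the comment on the isLocal field in the listing above.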
I hope you find this guide helpful.
Thank you for reading my article all the way to the end. 🙇‍♂️
Post #1 by a teammate from the Common Services Development Group: [Incorporating Domain-Driven Design (DDD) into a Payment Platform with an Eye Toward Global Expansion] ↩︎
Post #2 by a teammate from the Common Services Development Group: [How a Team of Members All with Less Than a Year at the Company Succeeded in Developing a New System Through Remote Mob Programming] ↩︎
Post #3 by a teammate from the Common Services Development Group: [Improving Deployment Traceability Across Multiple Environments with JIRA and GitHub Actions] ↩︎
Post #4 by a teammate from the Common Services Development Group: [Building a Development Environment with VSCode Dev Container] ↩︎