KINTO Tech Blog
AWS

Guide to Building an S3 Local Development Environment Using MinIO (RELEASE.2023-10)


Introduction and Summary

Hello. I am Miyashita, a membership management engineer in the Common Services Development Group[1][2][3][4] at KINTO Technologies.
Today, I'd like to talk about how we solved the challenges we faced in building an S3-compatible local storage environment for development.
Specifically, I'll share a practical approach on how to leverage the open source MinIO to emulate AWS S3 features.
I hope this article will be helpful for engineers confronting similar challenges.

What is MinIO?

MinIO is an open-source object storage server with S3-compatible features. Just like a NAS, it lets you upload and download files.
There is also a similar service in this area called LocalStack. LocalStack is a tool specialized in AWS emulation and can emulate services such as S3, Lambda, SQS, and DynamoDB locally.
Although these two tools serve different purposes, both meet the requirements for setting up an S3-compatible environment locally.
For details, see the MinIO and LocalStack websites.

Tool Selection with MinIO and LocalStack

Development requirements

As a development requirement, it was necessary to automatically create arbitrary S3 buckets simply by running docker compose, and to register email templates, CSV files, and so on in those buckets.
This is because it’s cumbersome to register files with commands or GUI after a container is started.
Also, when conducting automated local S3 connection testing, the bucket and files must be ready as soon as the container starts.
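To make this requirement concrete, an automated local connection test of this kind might look like the following sketch. This is a hypothetical JUnit 5 test, not our actual codebase; the endpoint http://127.0.0.1:9000, the credentials minio/minio123, and the csv bucket correspond to the compose setup shown later in this article, and the AWS SDK for Java v1 dependency is introduced in the Gradle section below.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

class LocalS3ConnectionTest {

  // Build a client pointed at the local MinIO container.
  private AmazonS3 localClient() {
    return AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("minio", "minio123")))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://127.0.0.1:9000", "ap-northeast-1"))
        .build();
  }

  // Because `docker compose up -d` pre-populates the bucket,
  // the test can assert on the registered files right after startup.
  @Test
  void csvBucketIsPopulatedOnStartup() {
    AmazonS3 s3 = localClient();
    assertFalse(s3.listObjectsV2("csv").getObjectSummaries().isEmpty(),
        "expected init_data files to be present at container startup");
  }
}
```

This only works if the bucket and files exist the moment the container is up, which is exactly why we wanted the initialization automated in compose.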

Tool Comparison

We compared how easily each tool could meet these requirements: LocalStack uses the aws-cli to create buckets and operate on files, whereas MinIO provides a dedicated command-line tool, mc (MinIO Client), which made the setup easier to build.
In addition, I found MinIO's GUI-based management console to be more polished, and a comparison on Google Trends shows that MinIO is more popular. For these reasons, we decided to adopt MinIO.

Compose Files

To set up a MinIO local environment, a "compose.yaml" file must first be prepared. Follow the steps below.

  1. Create a directory.
  2. Create a text file in the directory with the filename "compose.yaml".
  3. Copy and paste the contents of compose.yaml below and save it.
    *The filename docker-compose.yml is not recommended; compose.yaml is the name used in the Compose file specification.
    *docker-compose.yml still works for backward compatibility.
compose.yaml
services:
# Configure the MinIO server container
  minio:
    container_name: minio_test
    image: minio/minio:latest
# Start the MinIO server and specify the access port for the management console (GUI)
    command: ['server', '/data', '--console-address', ':9001']
    ports:
      - "9000:9000" # for API access
      - "9001:9001" # for the management console (GUI)
# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD can be omitted;
# in that case, both default to minioadmin.
    environment:
      - "MINIO_ROOT_USER=minio"
      - "MINIO_ROOT_PASSWORD=minio123"
# /data holds MinIO-managed configuration files and uploaded files.
# Mount a local directory if you want to browse the files locally
# or keep registered files persistent.
#    volumes:
#      - ./minio/data:/data
# Enable this if you want the MinIO container to start automatically
# whenever it is stopped, e.g., after restarting your PC.
#    restart: unless-stopped
# Configure the MinIO Client (mc) container
  mc:
    image: minio/mc:latest
    container_name: mc_test
    depends_on:
      - minio
    environment:
      - "MINIO_ROOT_USER=minio" # Same user name as above
      - "MINIO_ROOT_PASSWORD=minio123" # Same password as above
# Create a bucket with the mc command and place the file in the created bucket.
# First, set the alias so that subsequent commands can easily
# specify MinIO itself.
# This time, the alias name is myminio.
# mb creates a new bucket. Abbreviation for make bucket
# cp copies local files to MinIO.
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 minio minio123;
      mc mb myminio/mail-template;
      mc mb myminio/image;
      mc mb myminio/csv;
      mc cp init_data/mail-template/* myminio/mail-template/;
      mc cp init_data/image/* myminio/image/;
      mc cp init_data/csv/* myminio/csv/;
      "
# Mount the directory containing the files you want to upload to MinIO.
    volumes:
      - ./myData/init_data:/init_data

Directory and File Structure

Create appropriate dummy files and start the containers with the following directory and file structure.

tree command
minio_test# tree .
.
├── compose.yaml
└── myData
    └── init_data
        ├── csv
        │   └── example.csv
        ├── image
        │   ├── slide_01.jpg
        │   └── slide_04.jpg
        └── mail-template
            └── mail.vm

Startup and Operation Check

The following is the flow for running MinIO and its client on Docker and checking that they work.
Start the containers in the background (using the -d flag) with the following command. If Docker Desktop (for Windows) is installed, you can run it from a command-line interface such as Command Prompt or PowerShell. *Docker Desktop can be downloaded from the Docker website.

docker compose up -d

*The hyphenated docker-compose command has been replaced by the docker compose subcommand. See the Docker documentation for details.

Docker Desktop

Open Docker Desktop and check the container status.
You can see that the minio_test container is running, while the mc_test container has stopped. This is expected: mc_test exits once its initialization commands have finished.
Check the execution log of the mc_test container.

MC Execution Log

The logs indicate that the MinIO Client (mc) ran and that all commands completed successfully.

Management Console

Next, let's explore the MinIO GUI management console. Access port 9001 on localhost with a browser. http://127.0.0.1:9001
When the login screen appears, enter the username and password configured in compose.yaml (minio and minio123 in this example).

List of Buckets

Select "Object Browser" from the menu on the left.
You will see a list of the buckets created and the number of files stored in them.

List of Files

Select the "image" bucket as an example and look inside.
You will see the pre-uploaded files.
You can view a file directly by selecting "Preview" from the action menu next to it.

File Preview Function

Our mascot character, the mysterious creature K, appears in the preview.
The ability to preview images directly in the MinIO management console is very useful.

Installation of MC (MinIO Client)

Using the command line can be more efficient than the GUI when handling large numbers of files.
It is also very useful for checking file paths when an error occurs
while your source code accesses MinIO during development.
This section describes how to install MinIO Client and its basic operations.
*If you are satisfied with the GUI management console, feel free to skip this section.

# Use the following command to download mc to a directory of your choice.
minio_test/mc# curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs \
  -o ./minio-binaries/mc

# Operation check
# Confirm that the installed mc is the latest version, and display the version to verify the installation.
# Adding mc to your PATH is optional; I won't add it this time.
minio_test/mc# ./minio-binaries/mc update
> You are already running the most recent version of ‘mc’.
minio_test/mc# ./minio-binaries/mc -version
> mc version RELEASE.2023-10-30T18-43-32Z (commit-id=9f2fb2b6a9f86684cbea0628c5926dafcff7de28)
> Runtime: go1.21.3 linux/amd64
> Copyright (c) 2015-2023 MinIO, Inc.
> License GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>

# Set alias
# Set the alias required to access the MinIO server.
minio_test/mc# ./minio-binaries/mc alias set myminio http://localhost:9000 minio minio123;
> Added `myminio` successfully.

# Example of file operation
# Display a list of files in the bucket
minio_test/mc# ./minio-binaries/mc ls myminio/image
> [2023-11-07 21:18:54 JST]  11KiB STANDARD slide_01.jpg
> [2023-11-07 21:18:54 JST]  18KiB STANDARD slide_04.jpg
minio_test/mc# ./minio-binaries/mc ls myminio/csv
> [2023-11-07 21:18:54 JST]    71B STANDARD example.csv

# Screen output of file contents
minio_test/mc# ./minio-binaries/mc cat myminio/csv/example.csv
> name,age,job
> tanaka,30,engineer
> suzuki,25,designer
> satou,40,manager

# Batch file upload
minio_test/mc# ./minio-binaries/mc cp ../myData/init_data/image/* myminio/image/;
> ...t_data/image/slide_04.jpg: 28.62 KiB / 28.62 KiB

# File deletion
minio_test/mc# ./minio-binaries/mc ls myminio/mail-template
> [2023-11-15 11:46:25 JST]   340B STANDARD mail.txt
minio_test/mc# ./minio-binaries/mc rm myminio/mail-template/mail.txt
> Removed `myminio/mail-template/mail.txt`.

List of MC Commands

For more detailed documentation on the MinIO Client, please refer to the official MinIO Client manual.

Lastly, Access from Java Source Code

After building an S3-compatible development environment using MinIO locally, I'll demonstrate how to access MinIO from a real Java application.
First, configure Gradle.

build.gradle
plugins {
    id 'java'
}
java {
    sourceCompatibility = '17'
}
repositories {
    mavenCentral()
}
dependencies {
// https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3
    implementation 'com.amazonaws:aws-java-sdk-s3:1.12.582'
}

Next, create a Java class to access MinIO.

Main.java
package com.example.miniotest;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.regions.Regions;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;
public class Main {
  public static void main(String... args) {
    new Main().execute();
  }
  /**
   * S3 compatibility test of MinIO.
   * Obtains the list of files in a bucket and displays their contents.
   */
  private void execute() {
    System.out.println("--- Start ---");
    // Switch between connecting to a local MinIO
    // and connecting to AWS S3.
    // In a real application this would be driven by, e.g., a Spring Boot profile.
    boolean isLocal = true;
    // Since MinIO is compatible with AWS S3,
    // you can connect from the AWS library.
    AmazonS3 s3Client = null;
    if (isLocal) {
      s3Client = getAmazonS3ClientForLocal();
    } else {
      s3Client = getAmazonS3ClientForAwsS3();
    }
    // Bucket name
    final String bucketName = "csv";
    // List all objects in the bucket.
    ListObjectsV2Result result = s3Client.listObjectsV2(bucketName);
    List<S3ObjectSummary> objects = result.getObjectSummaries();
    // Loop over the retrieved objects.
    for (S3ObjectSummary os : objects) {
      System.out.println("Filename retrieved from bucket: " + os.getKey());
      // Obtain the contents of the file in the stream.
      // Of course, files can also be downloaded.
      try (S3Object s3object = 
             s3Client.getObject(
               new GetObjectRequest(bucketName, os.getKey()));
           BufferedReader reader = 
             new BufferedReader(
               new InputStreamReader(s3object.getObjectContent()))) {
        String line;
        while ((line = reader.readLine()) != null) {
          // Screen output of file contents one line at a time
          System.out.println(line);
        }
      } catch (IOException e) {
        e.printStackTrace();
      }
      // Insert a blank line at the file switching.
      System.out.println();
    }
    System.out.println("--- End ---");
  }
  /**
   * Creates a client for connecting to the local MinIO.
   * @return AmazonS3 client instance, an implementation of the AmazonS3 interface.
   */
  private AmazonS3 getAmazonS3ClientForLocal() {
    final String id = "minio";
    final String pass = "minio123";
    final String endpoint = "http://127.0.0.1:9000";
    return AmazonS3ClientBuilder.standard()
        .withCredentials(
            new AWSStaticCredentialsProvider(
              new BasicAWSCredentials(id, pass)))
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
              endpoint, Regions.AP_NORTHEAST_1.getName()))
        .build();
  }
  /**
   * Obtain an Amazon S3 client and set up a connection to the AWS S3 service.
   * This method uses the IAM role at runtime on an Amazon EC2 instance to automatically
   * obtain credentials and establish a connection with S3.
   * The IAM role must have a policy allowing access to S3.
   *
   * The client is configured as follows:
   * - Region: Regions.AP_NORTHEAST_1 (Asia Pacific (Tokyo))
   * - Maximum connections: 500
   * - Connection timeout: 120 seconds
   * - Number of error retries: Up to 15 times
   *
   * Note: This method is intended to be executed on an EC2 instance.
   * When running on anything other than EC2, AWS credentials must be provided separately.
   *
   * @return AmazonS3 client instance, an implementation of the AmazonS3 interface.
   * @see com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper
   * @see com.amazonaws.services.s3.AmazonS3
   * @see com.amazonaws.services.s3.AmazonS3ClientBuilder
   * @see com.amazonaws.regions.Regions
   */
  private AmazonS3 getAmazonS3ClientForAwsS3() {
    return AmazonS3ClientBuilder.standard()
        .withCredentials(new EC2ContainerCredentialsProviderWrapper())
        .withRegion(Regions.AP_NORTHEAST_1)
        .withClientConfiguration(
            new ClientConfiguration()
                .withMaxConnections(500)
                .withConnectionTimeout(120 * 1000)
                .withMaxErrorRetry(15))
        .build();
  }
}

Execution Result

--- Start ---
Filename retrieved from bucket: example.csv
name,age,job
tanaka,30,engineer
suzuki,25,designer
satou,40,manager
--- End ---

Source Code Description

A notable feature of this code is that the AWS SDK for Java supports both MinIO and AWS S3.
When connecting to a local MinIO instance, use the getAmazonS3ClientForLocal method;
when connecting to AWS S3, use the getAmazonS3ClientForAwsS3 method to initialize the client.
This approach makes it possible to use the same SDK, and the same interface, across different backend environments.
It is nice to be able to test an application easily before deploying it to the AWS environment, without incurring additional costs.
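As a complement to the read-only example above, uploads go through the same interface. The following is a minimal sketch, not part of the article's original code: it reuses the same endpoint and credentials as getAmazonS3ClientForLocal in Main.java, and the file name new_members.csv is hypothetical, chosen only for illustration.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

public class UploadExample {
  public static void main(String... args) {
    // Same local MinIO connection settings as getAmazonS3ClientForLocal() above.
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("minio", "minio123")))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://127.0.0.1:9000", "ap-northeast-1"))
        .build();
    // putObject uploads a local file; the key becomes the object name in the bucket.
    // "new_members.csv" is a hypothetical file used for illustration.
    s3Client.putObject("csv", "new_members.csv", new File("new_members.csv"));
    System.out.println("Uploaded new_members.csv to the csv bucket.");
  }
}
```

After running it, the file should appear both in `mc ls myminio/csv` and in the Object Browser of the management console.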
I hope you find this guide helpful.
Thank you for reading my article all the way to the end. 🙇‍♂️

Footnotes
  1. Post #1 by a teammate from the Common Services Development Group
    [Adopting Domain-Driven Design (DDD) in a Payment Platform Built with Global Expansion in Mind] ↩︎

  2. Post #2 by a teammate from the Common Services Development Group
    [How a Team Made Up Entirely of Members with Less Than a Year at the Company Succeeded in Developing a New System Through Remote Mob Programming] ↩︎

  3. Post #3 by a teammate from the Common Services Development Group
    [Improving Deployment Traceability Across Multiple Environments Using JIRA and GitHub Actions] ↩︎

  4. Post #4 by a teammate from the Common Services Development Group
    [Building a Development Environment with VS Code Dev Containers] ↩︎
