Continuous Delivery of Kubernetes Applications Using Only GitHub Actions

Hello. My name is Narazaki, and I work in the Toyota Woven City Payment Solution Development Group.

Our team is responsible for developing the payment infrastructure application for Woven by Toyota at Toyota Woven City. We build cross-functional payment solutions, covering everything from the backend to the web front end and mobile applications.

The payment backend runs on Kubernetes and is developed using various cloud-native tools.

In this post, we follow GitOps, an approach in which infrastructure configuration files are managed and modified through Git, and which is key to building and maintaining stable Kubernetes applications. Instead of the commonly used cloud-native CD tools, we aim to implement the continuous delivery (CD) process using only GitHub Actions. The CD process in this setup is limited to:

  • Applying changes to Kubernetes configuration files
  • Updating the container image

While there are more advanced CD strategies such as Blue/Green and Canary deployments, this approach starts small. It is designed for teams that already have a DevOps workflow and want to deliver Kubernetes applications continuously and efficiently with a small number of developers and no additional tools, using only the GitHub Actions they already rely on daily. The setup assumes that both the application code and the Kubernetes configuration files are maintained in the same repository. (Technically, it might be possible to run this across repositories depending on permission settings, but let's not get into that here.)

For GitLab users, there's an excellent equivalent called Auto DevOps, so don't worry: this isn't a 'GitHub and GitHub Actions are the best!' kind of post.

Cloud-Native CI/CD Tools for Kubernetes

What tools come to mind when you think of CI/CD for Kubernetes applications?

  • Argo CD
  • Flux

And so on. Both tools are powerful and highly useful for leveraging Kubernetes to its full potential. They also allow for flexible and secure updates to Kubernetes configuration files and application images, enabling GitOps practices.

On the other hand, they require tool-specific knowledge and expertise. For smaller teams without dedicated DevOps specialists, maintaining them continuously can be a challenge, wouldn't you agree? Running a CD tool itself requires Kubernetes, and the tool in turn needs its own Kubernetes configuration files in order to manage the very configuration files it deploys.

In this article, we'll explore how to set up the pipeline described below using only GitHub Actions. Kubernetes runs on a generic cluster, not tied to any specific cloud provider. The setup requires a container registry. The configuration files use Kustomize as an example, but the approach can be adapted to other tools such as Helm or Terraform.

Demo

Consider a repository that includes folders for both the Kubernetes configuration files and the applications. Specific code, Dockerfile contents, and application source code are omitted here. The folder structure is as follows:

├── .github
│   ├── actions
│   │   └── image-tag-update
│   │       └── action.yaml
│   └── workflows
│       ├── build-go.yaml
│       ├── build-java.yaml
│       ├── build-node.yaml
│       └── kubectl.yaml
├── go-app
│   ├── src/
│   └── Dockerfile
├── java-app
│   ├── src/
│   └── Dockerfile
├── k8s
│   ├── service-go.yaml
│   ├── service-java.yaml
│   ├── service-node.yaml
│   └── kustomization.yaml
└── node-app
    ├── src/
    └── Dockerfile
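
The Dockerfile contents are omitted, but purely for orientation, here is a minimal sketch of what go-app/Dockerfile might look like (the multi-stage build and base images are my assumptions, not taken from the original setup):

# go-app/Dockerfile (hypothetical sketch)
FROM golang:1.22 AS build
WORKDIR /src
COPY src/ .
RUN CGO_ENABLED=0 go build -o /bin/server .

# Minimal runtime image containing only the compiled binary
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/server /server
ENTRYPOINT ["/server"]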

Each application follows the structure below:

k8s/service-*.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
...
  template:
...
    spec:
      containers:
      - name: some-server
        image: go-placeholder # use the same string as in kustomization.yaml as a placeholder

All placeholders are centrally managed in kustomization.yaml:

k8s/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: techblog
resources:
  - service-go.yaml
  - service-java.yaml
  - service-node.yaml
images:
  - name: go-placeholder
    newName: go-app
    newTag: v1.1.1
  - name: java-placeholder
    newName: java-app
    newTag: v2.7.9alpha
  - name: node-placeholder
    newName: node-app
    newTag: latest
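
To confirm the substitution, you can render the manifests locally; kustomize swaps each placeholder image name for the configured newName and newTag:

$ kubectl kustomize k8s/
...
      containers:
      - name: some-server
        image: go-app:v1.1.1
...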

First, to apply the Kubernetes configuration file, configure the following GitHub Actions workflow.

.github/workflows/kubectl.yaml
name: kubectl

on:
  pull_request:
    branches:
    - "**"
    paths:
    - "K8s/**" the location of the #Kubernetes manifest file
  push:
    branches:
    - main
    paths:
    - "k8s/**"

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: azure/setup-kubectl@v4
    - env:
        KUBECONFIG_CONTENTS: ${{ secrets.KUBECONFIG_CONTENTS }} # Put kubeconfig in GitHub secrets beforehand
      run: |
        mkdir -p $HOME/.kube
        echo "${KUBECONFIG_CONTENTS}" > $HOME/.kube/config
        chmod 600 $HOME/.kube/config
    - run: kubectl apply --dry-run=server -k ./k8s >> $GITHUB_STEP_SUMMARY # Preview the changes in the job summary

    - if: github.ref == 'refs/heads/main' # Changes are actually applied only on the main branch
      run: kubectl apply -k k8s/

This pipeline applies a standard Kubernetes configuration using a kubeconfig with administrator privileges. Adjust how the kubeconfig is obtained to match your cluster's setup, such as for different cloud environments.
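
As one sketch of such an adjustment, on Amazon EKS the secret-based kubeconfig step could be replaced with something like the following (the cluster name, region, and IAM role here are hypothetical, and the job additionally needs the id-token: write permission for OIDC):

    - uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy # hypothetical role
        aws-region: ap-northeast-1
    - run: aws eks update-kubeconfig --name my-cluster # writes the kubeconfig for kubectl

Next, set up a composite action that updates the container image tags and automatically creates a pull request whenever an application image is pushed.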

.github/actions/image-tag-update/action.yaml
name: image-tag-update
description: 'Task to update image tags in kustomization.yaml when container images are updated.'
inputs:
  target_app:
    description: 'Target application'
    required: true
  tag_value:
    description: 'New container image tag'
    required: true
  token:
    description: 'Token with pull request and content update privileges'
    required: true
runs:
  using: 'composite'
  steps:
    - uses: actions/checkout@v4
      id: check-branch-exists
      continue-on-error: true
      with:
        ref: "image-tag-update" # Default branch name for tag updates
    - uses: actions/checkout@v4 # Checkout cannot fall back to the default branch if the specified branch is missing
      if: steps.check-branch-exists.outcome == 'failure'
      with:
        ref: main
    - uses: mikefarah/yq@master # Replace the value of the target placeholder tag with yq
      with:
        cmd: yq eval '(.images[] | select(.name == "'"${{ inputs.target_app }}-placeholder"'")).newTag = "'"${{ inputs.tag_value }}"'"' -i k8s/kustomization.yaml
    - uses: peter-evans/create-pull-request@v6
      if: steps.check-branch-exists.outcome == 'failure' # Create a new pull request if none exists yet
      with:
        token: ${{ inputs.token }}
        title: 'Update container image'
        body: |
          Update `${{ inputs.target_app }}`
        branch: "image-tag-update"
    - uses: stefanzweifel/git-auto-commit-action@v5
      if: steps.check-branch-exists.outcome == 'success' # Add a commit to the existing branch if checkout succeeds
      with:
        commit_message: "Image update for ${{ inputs.target_app }}"
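
To see what the yq step does, you can run the same expression locally; against the kustomization.yaml above it rewrites only the matching placeholder entry:

$ yq eval '(.images[] | select(.name == "go-placeholder")).newTag = "v1.1.2"' -i k8s/kustomization.yaml
$ yq eval '.images[] | select(.name == "go-placeholder") | .newTag' k8s/kustomization.yaml
v1.1.2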

The composite action runs as part of each application's image build. If you have multiple applications, it's a good idea to invoke it right after each image is built.

.github/workflows/build-go.yaml
...
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          file: ./Dockerfile
          push: true
          tags: ${{ env.tag }} # some tag
      - uses: ./.github/actions/image-tag-update
        if: github.ref == 'refs/heads/main'
        with:
          target_app: go
          tag_value: ${{ env.tag }}
          token: ${{ secrets.GITHUB_TOKEN }} # A GitHub token with content and pull request editing privileges

Now, whenever an application image is built, the container image tag is updated automatically, letting you deploy a new version with a pull request! (Tag derivation is handled by your build workflow; a sketch follows the diff below.) The example below shows a patch version increment:

  - name: go-placeholder
    newName: go-app
-    newTag: v1.1.1
+    newTag: v1.1.2 
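
As a hypothetical sketch of that tag derivation, a build step like the following bumps the patch version based on the most recent git tag (assuming vX.Y.Z tags and a checkout with fetch-depth: 0 so tags are available):

      - name: Derive next tag # hypothetical patch-version bump
        run: |
          latest=$(git describe --tags --abbrev=0)                 # e.g. v1.1.1
          patch=${latest##*.}                                      # 1
          echo "tag=${latest%.*}.$((patch + 1))" >> "$GITHUB_ENV"  # tag=v1.1.2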

Operational considerations

Timing of deployment

Image update pull requests are deployed as soon as they are merged. If you want to release image updates together with infrastructure changes, you can either add those changes to the same branch or merge when the timing is right.

Adding a new container application

For example, if you add a Python application to the above setup while an image update pull request is still open, the Python image tag update won't take effect unless that pull request branch also contains the latest changes.

Rollback

Rolling back is easy: just revert the commit that updated the tag.
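
For instance, assuming the image update landed on main as a merge commit, the flow is simply:

$ git revert -m 1 <merge-commit-sha> # revert the image-update merge on main
$ git push                           # the push to main re-triggers the kubectl workflow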

Timing of reconciliation

While many GitOps tools offer near real-time reconciliation to minimize drift, this method reconciles only while the CD pipeline is running. It's important to choose the right tool based on the number of teammates and their permissions to update the Kubernetes cluster.

You're interacting with the container registry only indirectly

While some CD tools retrieve the latest container image directly from the container registry, this approach never queries the registry itself. It's advisable to include a verification step for your container registry to ensure the image actually exists.
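
As one hypothetical verification step, docker manifest inspect fails when the referenced tag does not exist in the registry (log in to the registry first if it is private; the image reference below is an assumption, not part of the original workflow):

      - name: Verify image exists
        env:
          IMAGE: registry.example.com/go-app:${{ env.tag }} # hypothetical full image reference
        run: docker manifest inspect "${IMAGE}" > /dev/null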

About permission settings for GitHub Actions

You'll need update permissions for contents and pull-requests. Grant them through the workflow's permissions settings, a GitHub App token, or similar; see the GitHub Actions documentation for details.
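
With the default GITHUB_TOKEN, that corresponds to a block like this at the workflow or job level:

permissions:
  contents: write      # push the tag-update branch and commits
  pull-requests: write # create and update the pull request

Note that pull requests created with the default GITHUB_TOKEN do not trigger other workflows; use a GitHub App token or a personal access token if the image-update pull request itself needs to run CI.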

Overwritten by a container image built later

Dedicated CD tools determine which version is newer by comparing container image tags, following conventions such as Semantic Versioning. The workflow above instead overwrites the image tag with whatever the later-executed pipeline produced, regardless of the tag's value. If this behavior is an issue, consider comparing the tags before deciding whether to overwrite.
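
As a sketch of such a guard (assuming tags that GNU sort -V can compare, run before the yq update in the composite action):

current=$(yq eval '.images[] | select(.name == "go-placeholder") | .newTag' k8s/kustomization.yaml)
new="v1.1.2" # the incoming tag, e.g. ${{ inputs.tag_value }}
# Overwrite only when the incoming tag sorts strictly higher than the current one
if [ "$current" != "$new" ] && [ "$(printf '%s\n%s\n' "$current" "$new" | sort -V | tail -n1)" = "$new" ]; then
  yq eval '(.images[] | select(.name == "go-placeholder")).newTag = "'"$new"'"' -i k8s/kustomization.yaml
fi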

Summary

With this approach, GitOps can be managed entirely on GitHub, enabling a simple and efficient continuous delivery process for Kubernetes applications. Since CD errors are also consolidated in GitHub Actions, you can check execution results and failures just as in the usual CI process. Kubernetes offers a wide range of tools, which makes selection challenging, but by choosing the right ones for my workflow I aim to keep improving the productivity of Kubernetes application development.
