Logical Database Backup & Rollback manually using a script
Hello, My name is Shweta Oza.
I joined KINTO in April 2022 as part of the Global Development Group. I am an application developer who has recently taken an interest in DevOps, and I am currently working in the Coupon Team.
Our team develops and maintains Coupon APIs for customers and merchants all over the world, with many features continuously being developed and deployed. For any project, it is important to maintain the databases, including data backup and rollback. This way, if anything goes wrong when releasing a new version, you can safely roll back the changes and restore the backup.
For database migrations, our company uses open-source tools such as Flyway. Flyway strongly favors simplicity and convention over configuration, and is built around just seven basic commands: Migrate, Clean, Info, Validate, Undo, Baseline, and Repair.
Below are a few concepts and simplified procedures about how to back up the database manually using a script.
Overview
Coupon System is a tool that easily issues and manages coupons that can be used for KINTO services and other partner services. The coupon system is under continuous development to improve it and add new functionality. With every new change, it is necessary to manage the databases the coupon system uses, which plays a key role in any software project. In our case, we use MySQL.
In Coupon, whenever we have a new release, the steps we follow are:
- Develop the new functionality.
- Test it locally and then on the AWS test environments.
- Take a backup of the DB before the release.
- Release the new functionality.
- If any issues arise, roll back to the previous release.
To achieve this, we have backup and rollback scripts that reduce the number of CLI instructions to follow. This saves time and keeps the structure of backups and rollbacks consistent.
Let's learn some basics
What is a database backup?
A database backup is a copy of data that is stored on a server.
What is the purpose of a database backup?
Backups are used to prevent unexpected data loss.
In the event of a failure, if the original data is lost or corrupted, the backup makes it easy to restore the data.
What are the types of database backup?
Physical backup [1]
Physical database backups are copies of the physical files used to store and recover a database, such as data files, control files, and archived redo logs. Typically, physical backup data is kept in the cloud, in offline storage, on magnetic tape, or on disk.
There are two methods to perform a physical backup:
- Operating system utilities
- Recovery manager
Advantages:
- Total control of the data.
- Storage cost is cheaper.
- Fast retrieval of backed-up data.
Disadvantages:
- Data stored on any device can become corrupted, making it hard to recover.
- Data can be ruined by natural disasters.
- Loss of storage devices.
Logical backup [2]
Logical backups contain logical data retrieved from the database, such as views, procedures, functions, and tables. They are useful when you want to restore or transfer a copy of the database to a different location.
Logical backups are not as secure as physical backups in preventing data loss; they only capture the database's logical contents. Complete logical backups should be performed every week.
Logical backups are used as a supplement to a physical backup.
Advantages:
- Useful when the user needs to restore the complete database to a previous instance.
- Provides more granular recovery capabilities.
Disadvantages:
- Critical for recovery of special components.
- Less secure compared to physical backup.
- It only provides structural details.
Below, we will look at logical database backups both locally and via the AWS CLI, because operating on the data in the DB is an important step from a developer's perspective.
Managing Amazon RDS (Relational Database Service) using the AWS CLI
Amazon RDS[3]
It is a service that lets you safely use a relational database in the cloud without setting up a server. Amazon RDS supports MySQL, MariaDB, and PostgreSQL, which are open source and widely used, as well as Oracle Database and Microsoft SQL Server. You pay for only what you use.
AWS CLI[4]
The AWS CLI is an integrated tool for managing AWS services that you run from a terminal on Linux or macOS or a command prompt on Windows.
Commands are usually entered and executed one at a time from the tool, but they can also be written into scripts to automate processes.
One thing to keep in mind when managing Amazon RDS from the AWS CLI is that an executed command may not be reflected immediately. This is because Amazon RDS has a number of failure countermeasures, and it takes time for the settings to propagate to all of them.
In the same spirit, we will write simple scripts to run backup and rollback tasks easily across different environments.
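The scripts later in this post fetch MySQL credentials with `aws ssm get-parameter` and extract the value with `jq -r .Parameter.Value`. As a hedged sketch of that extraction step, the JSON below is a fake, simplified stand-in for a real SSM response:

```shell
# Fake stand-in for the output of:
#   aws ssm get-parameter --with-decryption --name /stg/MYSQL_USER
RESPONSE='{"Parameter":{"Name":"/stg/MYSQL_USER","Value":"dbuser","Type":"SecureString"}}'

# The real scripts use `jq -r .Parameter.Value`; if jq is unavailable,
# a crude sed extraction works for this simple, single-line shape:
MYSQL_USER=$(printf '%s' "$RESPONSE" | sed -n 's/.*"Value":"\([^"]*\)".*/\1/p')
echo "$MYSQL_USER"   # prints: dbuser
```

In real use, prefer jq: it handles escaping and nested structures correctly, which the sed one-liner does not.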
How exactly can we back up and roll back the DB from the CLI as developers?
Using mysqldump in SQL Format
What is mysqldump?[5]
The mysqldump client is a logical backup program originally written by Igor Romanenko. It dumps a database, usually as a list of SQL statements (an "SQL dump"), for backup or for transfer to another database server (not necessarily MariaDB or MySQL). The dump typically contains SQL statements to create the tables, populate them, or both.
Using a dump file
What is a dump file?[6]
A dump file is also useful when you upgrade from one release to the next: you can dump a database using the older release and load it under the new release as part of the upgrade process.
QUICK HIGHLIGHTS
To take a DB dump with data (i.e. both DDL & DML)
mysqldump -u root -p DB_Name > backup.sql
# AWS CLI
mysqldump -u $user -p$password -h $rds_endpoint $DB_Name > backup.dump
# To avoid warnings like below you can use the following options and add the login credentials in a CNF file.
# mysqldump: [Warning] Using a password on the command line interface can be insecure.
# --set-gtid-purged=OFF
# Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.
# --defaults-extra-file=path/to/extra.cnf
# cat << EOF > path/to/extra.cnf
# [client]
# user=${MYSQL_USER}
# password=${MYSQL_PASSWORD}
# host=${RDS_ENDPOINT}
# EOF
mysqldump --defaults-extra-file=path/to/extra.cnf --skip-column-statistics --single-transaction --set-gtid-purged=OFF --databases $DB > backup.dump
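Putting the pieces above together, here is a minimal sketch of the option-file approach. The user, password, and endpoint values are placeholders, not real credentials:

```shell
# Placeholder credentials; in practice these come from AWS Parameter Store
MYSQL_USER="dbuser"
MYSQL_PASSWORD="example-password"
RDS_ENDPOINT="mydb.example.ap-northeast-1.rds.amazonaws.com"

# Write the MySQL client credentials to an option file
cat << EOF > extra.cnf
[client]
user=${MYSQL_USER}
password=${MYSQL_PASSWORD}
host=${RDS_ENDPOINT}
EOF

# Restrict permissions so other users on the host cannot read the password
chmod 600 extra.cnf

# mysqldump can now authenticate without a password on the command line, e.g.:
#   mysqldump --defaults-extra-file=extra.cnf --single-transaction DB_Name > backup.dump
```

Note that `--defaults-extra-file` must be the first option on the mysqldump command line, and deleting the file after use (as the backup script below does) keeps credentials off disk.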
To take a DB dump without data (i.e. only DDL)
mysqldump -d -u root -p DB_Name > backup.sql
# AWS CLI
mysqldump -d -u $user -p$password -h $rds_endpoint $DB_Name > backup.dump
# Below are some useful options to consider
# --no-data, -d
# Do not write any table row information (that is, do not dump table contents). This is useful if you want to dump only the CREATE TABLE statement for the table. For example, when you want to create an empty copy of the table by loading the dump file.
# --defaults-extra-file=file_name
# Read this option file after the global option file but (on Unix) before the user option file. If the file does not exist or is otherwise inaccessible, an error occurs. If file_name is not an absolute path name, it is interpreted relative to the current directory.
# The --single-transaction option and the --lock-tables option are mutually exclusive because LOCK TABLES causes any pending transactions to be committed implicitly.
mysqldump --no-data --defaults-extra-file=path/to/extra.cnf --skip-column-statistics --single-transaction --set-gtid-purged=OFF --databases $DB > backup.dump
To roll back to the previous version
mysql -u root -p DB_Name < backup.dump
# AWS CLI
mysql -u $user -p$password -h $rds_endpoint < backup.dump
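Before feeding a dump back into mysql, it is worth a quick sanity check that the file is complete: by default, mysqldump's output ends with a "-- Dump completed" comment on success. A small sketch, where the dump content is fabricated for illustration:

```shell
DUMP_FILE="backup.dump"

# Fabricated miniature dump standing in for real mysqldump output
printf '%s\n' \
  '-- MySQL dump 10.13' \
  'CREATE TABLE coupon (id INT PRIMARY KEY);' \
  '-- Dump completed on 2022-08-12' > "${DUMP_FILE}"

# Abort the rollback if the dump looks truncated
if tail -n 1 "${DUMP_FILE}" | grep -q 'Dump completed'; then
  echo "Dump looks complete, safe to restore"
else
  echo "Dump appears truncated, do NOT restore" >&2
  exit 1
fi
```

This catches the common failure mode of a dump that was cut off mid-way (disk full, dropped connection) before it silently restores half a database.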
SCRIPT for Backup
Things you will need to edit in the script before using it directly:
- pathToParameterStoreKeyValueMYSQL_USER
- pathToParameterStoreKeyValueMYSQL_PASSWORD
- pathToParameterStoreKeyValueRDS_ENDPOINT
- DB_Name
#########################################################################
#########################################################################
#####
##### Description: Shell Script to take backup of existing data AWS
##### 1. Choose the env and switch the db connection
##### 2. Create backup directory with today's date
##### 3. Create dump file to backup folder with current db
##### 4. Check dump file size
#####
#########################################################################
#########################################################################
read -p "Enter Your Env(env): " x
echo "Welcome to ${x} Env!"
# MySQL server credentials
MYSQL_USER=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueMYSQL_USER | jq -r .Parameter.Value)
MYSQL_PASSWORD=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueMYSQL_PASSWORD | jq -r .Parameter.Value)
RDS_ENDPOINT=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueRDS_ENDPOINT | jq -r .Parameter.Value)
# Create a configuration file to maintain MySQL login details
cat << EOF > mysql-dbaccess.cnf
[client]
user=$MYSQL_USER
password=$MYSQL_PASSWORD
host=$RDS_ENDPOINT
EOF
# Set the folder name with date format (example: 2022-08-15)
DATE_FORMAT=$(date +"%Y-%m-%d")
TIMESTAMP=$(date +%H%M%s)
# Path to local backup directory
BACKUP_DIR="tmp/release/backup/${x}/dbbackup/${DATE_FORMAT}"
# Use database's names
DB="DB_Name"
echo "########################################DATABASE########################################"
echo "Using Database (DB_Name)"
echo "########################################DATABASE########################################"
# Create backup directory with today's date
mkdir -p ${BACKUP_DIR}
FILENAME_PREFIX="backup_${x}_DDL_DML_${TIMESTAMP}_"
FILENAME_POSTFIX=".dump"
read -p "Enter version eg: v0.0.1: " d
FILENAME=$FILENAME_PREFIX${d}$FILENAME_POSTFIX
echo "########################################FILEPATH########################################"
echo "Created directory" ${BACKUP_DIR}
echo "File will be saved as ${FILENAME} "
mysqldump --defaults-extra-file=mysql-dbaccess.cnf --single-transaction --set-gtid-purged=OFF --databases $DB > ${BACKUP_DIR}/${FILENAME}
echo "File saved at ${BACKUP_DIR}/${FILENAME}"
echo "########################################FILESPATH########################################"
# check File size
file=${BACKUP_DIR}/${FILENAME}
filesize=$(ls -lh $file )
echo "########################################FILESIZE########################################"
echo "$file has a size of $filesize"
echo "########################################FILESIZE########################################"
# Remove the file after executing shell
rm mysql-dbaccess.cnf
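The directory and file naming the script produces can be sketched in isolation; the env and version values below are illustrative:

```shell
# Reproduce the backup script's naming scheme
x="stg"                           # environment entered at the prompt
DATE_FORMAT=$(date +"%Y-%m-%d")   # one folder per day, e.g. 2022-08-15
TIMESTAMP=$(date +%H%M%s)         # hour+minute+epoch seconds keeps files unique
d="v0.0.1"                        # release version entered at the prompt

BACKUP_DIR="tmp/release/backup/${x}/dbbackup/${DATE_FORMAT}"
FILENAME="backup_${x}_DDL_DML_${TIMESTAMP}_${d}.dump"

mkdir -p "${BACKUP_DIR}"
echo "${BACKUP_DIR}/${FILENAME}"
```

Because the timestamp includes the epoch seconds, two dumps taken on the same day never collide, and the date folder keeps each day's backups grouped together.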
Procedure for taking a backup
Place the shell script in AWS CLI
- Log in to AWS
- Open the AWS Management Console → AWS Systems Manager → Session Manager → click Start session
- Select the instance of the env you want to back up, e.g.:
{env}-{project_name}-{maintenance_server_name}
- Start the session
- Check if the dbbackupDDL_DML.sh file is present
- If the file is not present, place dbbackupDDL_DML.sh on the instance
- Execute the file:
sh-4.2$ ls
dbbackupDDL_DML.sh mysql.sh tmp
sh-4.2$ sh dbbackupDDL_DML.sh
Enter Your Env(env): env
Welcome to env Env!
########################################DATABASE########################################
Using Database (db_name)
########################################DATABASE########################################
Enter version eg: v0.0.1: v0.0.1
########################################FILEPATH########################################
Created directory tmp/release/backup/env/dbbackup/2022-08-12
File will be saved as backup_env_DDL_DML_06311660285870_v0.0.1.dump
File saved at tmp/release/backup/env/dbbackup/2022-08-12/backup_env_DDL_DML_06311660285870_v0.0.1.dump
########################################FILESPATH########################################
########################################FILESIZE########################################
tmp/release/backup/env/dbbackup/2022-08-12/backup_env_DDL_DML_06311660285870_v0.0.1.dump has a size of -rw-r--r-- 1 ssm-user ssm-user 1.7M Aug 12 06:31 tmp/release/backup/env/dbbackup/2022-08-12/backup_env_DDL_DML_06311660285870_v0.0.1.dump
########################################FILESIZE########################################
sh-4.2$
To check the content of the dump file
sh-4.2$ less tmp/release/backup/env/dbbackup/2022-08-12/backup_env_DDL_DML_06311660285870_v0.0.1.dump
sh-4.2$
The backup is saved under tmp/release/backup/env/dbbackup/{currentDate}/{FileNameWithTimestamp&Version}.
If you take a dump multiple times in a day, each file in that day's folder is distinguished by its timestamp.
sh-4.2$ cd tmp/release/backup/env/dbbackup/
sh-4.2$ ls
2022-08-09 2022-08-10 2022-08-12
sh-4.2$ cd tmp/release/backup/env/dbbackup/2022-08-12/
sh-4.2$ ls
backup_env_DDL_DML_06311660285870_v0.0.1.dump backup_env_DDL_DML_06371660286257_v0.0.1.dump
sh-4.2$
SCRIPT for Rollback
Things you will need to edit in the script before using it directly:
- pathToParameterStoreKeyValueMYSQL_USER
- pathToParameterStoreKeyValueMYSQL_PASSWORD
- pathToParameterStoreKeyValueRDS_ENDPOINT
- DB_Name
#########################################################################
#########################################################################
#####
##### Description: Shell Script to rollback to target SQL
##### 1. Choose the env and switch the db connection
##### 2. Create rollback directory with today's date
##### 3. Choose and input the backup file
##### 4. Input the version of dump file
##### 5. Copy backup dump file to ROLLBACK folder
##### 6. Create dump file to ROLLBACK folder with current db
##### 7. Rollback db with backup dump file
##### 8. Comparison....
#####
#########################################################################
#########################################################################
read -p "Enter Your Env(env): " x
echo "Welcome to ${x} Env!"
# MySQL server credentials
MYSQL_USER=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueMYSQL_USER | jq -r .Parameter.Value)
MYSQL_PASSWORD=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueMYSQL_PASSWORD | jq -r .Parameter.Value)
RDS_ENDPOINT=$(aws ssm get-parameter --with-decryption --name /${x}/pathToParameterStoreKeyValueRDS_ENDPOINT | jq -r .Parameter.Value)
# Set the folder name with date format(eg: 2022-08-15)
DATE_FORMAT=$(date +"%Y-%m-%d")
TIMESTAMP=$(date +%H%M%s)
# Path to local rollback directory history
ROLLBACK_DIR="tmp/release/rollback/${x}/dbRollback/${DATE_FORMAT}"
# Use database's names
DB="DB_Name"
echo "########################################DATABASE########################################"
echo "Using Database (DB_Name)"
echo "########################################DATABASE########################################"
# Create rollback directory with today's date
mkdir -p ${ROLLBACK_DIR}
read -p "Enter full dumpFile Path to which you want to rollback: " df
echo "dumpFile ${df} selected!"
FILENAME_ROLLBACK_PREFIX="rollback_${x}_DDL_DML_${TIMESTAMP}_"
FILENAME_BACKUP_PREFIX="backup_${x}_DDL_DML_${TIMESTAMP}_"
FILENAME_POSTFIX=".dump"
read -p "Enter version eg: v0.0.1: " d
FILENAME_ROLLBACK=${FILENAME_ROLLBACK_PREFIX}${d}${FILENAME_POSTFIX}
FILENAME_BACKUP=${FILENAME_BACKUP_PREFIX}${d}${FILENAME_POSTFIX}
echo "########################################FILEPATH########################################"
echo "Created directory" ${ROLLBACK_DIR}
# Copy the chosen dump file to the rollback folder
cp ${df} ${ROLLBACK_DIR}/${FILENAME_ROLLBACK}
ROLLBACK_FILE=${ROLLBACK_DIR}/${FILENAME_ROLLBACK}
BEFORE_ROLLBACK_DUMP=${ROLLBACK_DIR}/"BeforeRollback_${FILENAME_BACKUP}"
AFTER_ROLLBACK_DUMP=${ROLLBACK_DIR}/"AfterRollback_${FILENAME_BACKUP}"
mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD -h $RDS_ENDPOINT --databases $DB > ${BEFORE_ROLLBACK_DUMP}
echo "Dump Before Rollback ${BEFORE_ROLLBACK_DUMP}"
echo "Rollback to DDL_DML of sql file located at ${ROLLBACK_FILE} "
mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -h $RDS_ENDPOINT --database=$DB < ${ROLLBACK_FILE}
echo "Rollback successfully done with ${ROLLBACK_FILE}"
mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD -h $RDS_ENDPOINT --databases $DB > ${AFTER_ROLLBACK_DUMP}
echo "Dump After Rollback ${AFTER_ROLLBACK_DUMP}"
echo "########################################FILESPATH########################################"
# check File size before Rollback
fileBeforeRollback=${BEFORE_ROLLBACK_DUMP}
filesizeBeforeRollback=$(ls -lh $fileBeforeRollback)
echo "########################################FILESIZE BEFORE ROLLBACK########################################"
echo "$fileBeforeRollback has a size of $filesizeBeforeRollback"
echo "########################################FILESIZE BEFORE ROLLBACK########################################"
# check File size after Rollback
fileAfterRollback=${AFTER_ROLLBACK_DUMP}
filesizeAfterRollback=$(ls -lh $fileAfterRollback)
echo "########################################FILESIZE AFTER ROLLBACK########################################"
echo "$fileAfterRollback has a size of $filesizeAfterRollback"
echo "########################################FILESIZE AFTER ROLLBACK########################################"
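Step 8 ("Comparison") in the script header is left open above. One hedged way to fill it in is a plain diff of the before/after dumps; the file contents below are fabricated stand-ins for the BeforeRollback_*/AfterRollback_* dumps:

```shell
# Fabricated stand-ins for the before/after rollback dumps
printf 'CREATE TABLE coupon (id INT, code VARCHAR(10));\n' > before.dump
printf 'CREATE TABLE coupon (id INT);\n' > after.dump

# Record schema/data drift between the pre- and post-rollback states.
# diff exits 1 when the files differ, so guard it to survive `set -e`.
if diff -u before.dump after.dump > rollback_diff.txt; then
  echo "No differences: rollback was a no-op"
else
  echo "Differences recorded in rollback_diff.txt"
fi
```

Real dumps differ trivially in timestamp comments, so in practice you may want to filter out lines starting with `--` before diffing.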
Procedure for Rollback
Place the shell script in AWS CLI
- Log in to AWS
- Open the AWS Management Console → AWS Systems Manager → Session Manager → click Start session
- Select the instance of the env you want to roll back, e.g.:
{env}-{project_name}-{maintenance_server_name}
- Start the session
- Check if the dbRollbackDDL_DML.sh file is present
- If the file is not present, place dbRollbackDDL_DML.sh on the instance
- Execute the file:
sh-4.2$ ls
dbRollbackDDL_DML.sh mysql.sh tmp
sh-4.2$ sh dbRollbackDDL_DML.sh
Enter Your Env: env
Welcome to env Env!
########################################DATABASE########################################
Using Database (DB_Name)
########################################DATABASE########################################
Enter full dumpFile Path to which you want to rollback: tmp/release/backup/env/dbbackup/2022-08-12/backup_env_DDL_DML_06311660285870_v0.0.1.dump
Enter version eg: v0.0.1: v0.0.1
########################################FILEPATH########################################
Created directory tmp/release/rollback/env/dbRollback/2022-08-13
Dump Before Rollback tmp/release/rollback/env/dbRollback/2022-08-13/BeforeRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump
Rollback to DDL_DML of sql file located at tmp/release/rollback/env/dbRollback/2022-08-13/rollback_env_DDL_DML_06311660285870_v0.0.1.dump
Rollback successfully done with tmp/release/rollback/env/dbRollback/2022-08-13/rollback_env_DDL_DML_06311660285870_v0.0.1.dump
Dump After Rollback tmp/release/rollback/env/dbRollback/2022-08-13/AfterRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump
########################################FILESPATH########################################
########################################FILESIZE BEFORE ROLLBACK########################################
tmp/release/rollback/env/dbRollback/2022-08-13/BeforeRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump has a size of -rw-r--r-- 1 ssm-user ssm-user 1.7M Aug 13 06:31 tmp/release/rollback/env/dbRollback/2022-08-13/BeforeRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump
########################################FILESIZE BEFORE ROLLBACK########################################
########################################FILESIZE AFTER ROLLBACK########################################
tmp/release/rollback/env/dbRollback/2022-08-13/AfterRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump has a size of -rw-r--r-- 1 ssm-user ssm-user 1.6M Aug 13 06:31 tmp/release/rollback/env/dbRollback/2022-08-13/AfterRollback_backup_env_DDL_DML_06311660285870_v0.0.1.dump
########################################FILESIZE AFTER ROLLBACK########################################
sh-4.2$
How can you use this in your daily work?
- You can take quick backups using scripts or commands like the above, avoiding repeated manual inputs and hunting through documents and login links.
- This saves time and prevents errors from typing commands by hand.
- It gives you a systematic, clutter-free folder structure to refer back to whenever you need a backup.
- You will need this procedure when your DB is upgraded, when there is data inconsistency, when you release the next version of your system, or when you need to roll back the data after a failure.