diff --git a/CHANGELOG.md b/CHANGELOG.md
index b2ce9eb..88d30e2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,13 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [v2.0.0] - 16-12-2018
+### Added support for multiple database backups
+- Implemented the ability to back up multiple databases from a single host
+- Renamed the TARGET_DATABASE_NAME variable to TARGET_DATABASE_NAMES
+- Updated the format of AWS_BUCKET_BACKUP_PATH so that the trailing / is no longer required
+- Removed the $AWS_BUCCKET_BACKUP_NAME variable (which had a typo). Database backups are now saved using their database names
+
 ## [v1.1.1] - 16-12-2018
 ### Fix issue with Slack Alerts
 - Fixed issue with failed Slack alerts when log messages contained special characters
diff --git a/README.md b/README.md
index 1556464..5b2f09c 100644
--- a/README.md
+++ b/README.md
@@ -2,13 +2,13 @@
 aws-database-backup is a container image based on Alpine Linux. This container is designed to run in Kubernetes as a cronjob to perform automatic backups of MySQL databases to Amazon S3.
 
 It was created to meet my requirements for regular and automatic database backups. Having started with a relatively basic feature set, it is gradually growing to add more and more features.
 
-Currently, aws-database-backup supports the backing up of a single MySQL Database. When triggered, a full database dump is performed using the `mysqldump` command. The backup is then uploaded to an Amazon S3 Bucket. aws-database-backup features Slack Integration, and can post messages into a channel detailing if the backup was successful or not.
-
-Over time, aws-database-backup will be updated to support more features and functionality. The most immediate feature on the roadmap is the ability to perform backups of multiple databases. I currently use this container as part of my Kubernetes Architecture which you can read about [here](https://benjamin.maynard.io/this-blog-now-runs-on-kubernetes-heres-the-architecture/).
+Currently, aws-database-backup supports backing up multiple MySQL databases from a single database host. When triggered, a full database dump is performed using the `mysqldump` command for each configured database. The backups are then uploaded to an Amazon S3 Bucket. aws-database-backup features Slack Integration, and can post messages into a channel detailing whether each backup was successful.
+Over time, aws-database-backup will be updated to support more features and functionality. I currently use this container as part of my Kubernetes Architecture which you can read about [here](https://benjamin.maynard.io/this-blog-now-runs-on-kubernetes-heres-the-architecture/).
 
 All changes are captured in the [changelog](CHANGELOG.md), which adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+
 ## Environment Variables
 
 The below table lists all of the Environment Variables that are configurable for aws-database-backup.
@@ -19,11 +19,10 @@ The below table lists all of the Environment Variables that are configurable for
 | AWS_SECRET_ACCESS_KEY | **(Required)** AWS IAM Secret Access Key. Should have very limited IAM permissions (see below for example) and should be configured using a Secret in Kubernetes. |
 | AWS_DEFAULT_REGION | **(Required)** Region of the S3 Bucket (e.g. eu-west-2). |
 | AWS_BUCKET_NAME | **(Required)** The name of the S3 bucket. |
-| AWS_BUCKET_BACKUP_PATH | **(Required)** Path the backup file should be saved to in S3. E.g. `/database/myblog/backups/`. **Requires the trailing / and should not include the file name.** |
-| AWS_BUCCKET_BACKUP_NAME | **(Required)** File name of the backup file. E.g. `database_dump.sql`. |
+| AWS_BUCKET_BACKUP_PATH | **(Required)** Path in S3 where the backup files should be saved. E.g. `/database/myblog/backups`. **Do not include a trailing / or a file name.** |
 | TARGET_DATABASE_HOST | **(Required)** Hostname or IP address of the MySQL Host. |
 | TARGET_DATABASE_PORT | **(Optional)** Port MySQL is listening on (Default: 3306). |
-| TARGET_DATABASE_NAME | **(Required)** Name of the database to dump. |
+| TARGET_DATABASE_NAMES | **(Required)** Names of the databases to dump. These should be comma separated (e.g. `database1,database2`). |
 | TARGET_DATABASE_USER | **(Required)** Username to authenticate to the database with. |
 | TARGET_DATABASE_PASSWORD | **(Required)** Password to authenticate to the database with. Should be configured using a Secret in Kubernetes. |
 | SLACK_ENABLED | **(Optional)** (true/false) Enable or disable the Slack Integration (Default False). |
@@ -31,6 +30,7 @@ The below table lists all of the Environment Variables that are configurable for
 | SLACK_CHANNEL | **(Required if Slack enabled)** Slack Channel the WebHook is configured for. |
 | SLACK_WEBHOOK_URL | **(Required if Slack enabled)** What is the Slack WebHook URL to post to? Should be configured using a Secret in Kubernetes. |
 
+
 ## Slack Integration
 
 aws-database-backup supports posting into Slack after each backup job completes. The message posted into the Slack Channel varies as detailed below:
@@ -68,6 +68,7 @@ An IAM Users should be created, with API Credentials. An example Policy to attac
 }
 ```
 
+
 ## Example Kubernetes Cronjob
 
 An example of how to schedule this container in Kubernetes as a cronjob is below. This would configure a database backup to run each day at 01:00am. The AWS Secret Access Key, and Target Database Password are stored in secrets.
@@ -125,14 +126,12 @@ spec:
             value: ""
           - name: AWS_BUCKET_BACKUP_PATH
             value: ""
-          - name: AWS_BUCCKET_BACKUP_NAME
-            value: ""
           - name: TARGET_DATABASE_HOST
             value: ""
          - name: TARGET_DATABASE_PORT
             value: ""
-          - name: TARGET_DATABASE_NAME
-            value: ""
+          - name: TARGET_DATABASE_NAMES
+            value: ""
           - name: TARGET_DATABASE_USER
             value: ""
           - name: TARGET_DATABASE_PASSWORD
@@ -150,4 +149,4 @@ spec:
                 name: SLACK_WEBHOOK_URL
                 key: slack_webhook_url
       restartPolicy: Never
-```
+```
\ No newline at end of file
diff --git a/resources/perform-backup.sh b/resources/perform-backup.sh
index 81179f8..2492ed0 100644
--- a/resources/perform-backup.sh
+++ b/resources/perform-backup.sh
@@ -5,25 +5,32 @@
 has_failed=false
 
-# Perform the database backup. Put the output to a variable. If successful upload the backup to S3, if unsuccessful print an entry to the console and the log, and set has_failed to true.
-if sqloutput=$(mysqldump -u $TARGET_DATABASE_USER -h $TARGET_DATABASE_HOST -p$TARGET_DATABASE_PASSWORD -P $TARGET_DATABASE_PORT $TARGET_DATABASE_NAME 2>&1 > /tmp/$AWS_BUCCKET_BACKUP_NAME)
-then
-
-    echo -e "Database backup successfully completed for $TARGET_DATABASE_NAME at $(date +'%d-%m-%Y %H:%M:%S')."
+# Loop through all of the defined databases, separating on the ,
+for CURRENT_DATABASE in ${TARGET_DATABASE_NAMES//,/ }
+do
 
-    # Perform the upload to S3. Put the output to a variable. If successful, print an entry to the console and the log. If unsuccessful, set has_failed to true and print an entry to the console and the log
-    if awsoutput=$(aws s3 cp /tmp/$AWS_BUCCKET_BACKUP_NAME s3://$AWS_BUCKET_NAME$AWS_BUCKET_BACKUP_PATH$AWS_BUCCKET_BACKUP_NAME 2>&1)
+    # Perform the database backup and capture the output in a variable. If successful, upload the backup to S3; if unsuccessful, print an entry to the console and the log, and set has_failed to true.
+    if sqloutput=$(mysqldump -u $TARGET_DATABASE_USER -h $TARGET_DATABASE_HOST -p$TARGET_DATABASE_PASSWORD -P $TARGET_DATABASE_PORT $CURRENT_DATABASE 2>&1 > /tmp/$CURRENT_DATABASE.sql)
     then
-        echo -e "Database backup successfully uploaded for $TARGET_DATABASE_NAME at $(date +'%d-%m-%Y %H:%M:%S')."
+
+        echo -e "Database backup successfully completed for $CURRENT_DATABASE at $(date +'%d-%m-%Y %H:%M:%S')."
+
+        # Perform the upload to S3 and capture the output in a variable. If successful, print an entry to the console and the log. If unsuccessful, set has_failed to true and print an entry to the console and the log.
+        if awsoutput=$(aws s3 cp /tmp/$CURRENT_DATABASE.sql s3://$AWS_BUCKET_NAME$AWS_BUCKET_BACKUP_PATH/$CURRENT_DATABASE.sql 2>&1)
+        then
+            echo -e "Database backup successfully uploaded for $CURRENT_DATABASE at $(date +'%d-%m-%Y %H:%M:%S')."
+        else
+            echo -e "Database backup failed to upload for $CURRENT_DATABASE at $(date +'%d-%m-%Y %H:%M:%S'). Error: $awsoutput" | tee -a /tmp/aws-database-backup.log
+            has_failed=true
+        fi
+
     else
-        echo -e "Database backup failed to upload for $TARGET_DATABASE_NAME at $(date +'%d-%m-%Y %H:%M:%S'). Error: $awsoutput" | tee -a /tmp/aws-database-backup.log
+        echo -e "Database backup FAILED for $CURRENT_DATABASE at $(date +'%d-%m-%Y %H:%M:%S'). Error: $sqloutput" | tee -a /tmp/aws-database-backup.log
         has_failed=true
     fi
-else
-    echo -e "Database backup FAILED for $TARGET_DATABASE_NAME at $(date +'%d-%m-%Y %H:%M:%S'). Error: $sqloutput" | tee -a /tmp/aws-database-backup.log
-    has_failed=true
-fi
+done
+
 
 # Check if any of the backups have failed. If so, exit with a status of 1. Otherwise exit cleanly with a status of 0.
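
For context on the new loop: `${TARGET_DATABASE_NAMES//,/ }` is a shell parameter expansion that replaces every `,` with a space, and the unquoted expansion then word-splits so the `for` loop receives one iteration per database (which also means database names must not contain spaces or commas). A minimal sketch of the behaviour, using illustrative values rather than anything taken from the change:

```bash
#!/bin/bash
# Illustrative values only - not part of the change.
TARGET_DATABASE_NAMES="database1,database2"
AWS_BUCKET_NAME="my-bucket"
AWS_BUCKET_BACKUP_PATH="/database/myblog/backups"  # new convention: no trailing /

# ${VAR//,/ } replaces every comma with a space; the unquoted expansion
# then word-splits, yielding one loop iteration per database name.
for CURRENT_DATABASE in ${TARGET_DATABASE_NAMES//,/ }
do
    # Destination key = bucket name + backup path + "/" + database name + ".sql"
    echo "s3://$AWS_BUCKET_NAME$AWS_BUCKET_BACKUP_PATH/$CURRENT_DATABASE.sql"
done

# Prints:
# s3://my-bucket/database/myblog/backups/database1.sql
# s3://my-bucket/database/myblog/backups/database2.sql
```

Note that `${VAR//pattern/replacement}` is a bash feature rather than strict POSIX (BusyBox ash on Alpine also accepts it), so the script's shebang needs to point at a shell that supports it.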
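The script hunk is truncated just after its final comment, which describes the remaining step: exit non-zero if any backup failed. A hypothetical sketch of such a check (the actual lines sit outside the truncated hunk) could look like:

```bash
# Hypothetical sketch of the failure check described by the final comment;
# the real implementation is not shown in this diff.
if [ "$has_failed" = true ]
then
    exit 1
else
    exit 0
fi
```

With `restartPolicy: Never` in the example cronjob, a non-zero exit marks the pod (and its Job) as failed, so unsuccessful backups remain visible via `kubectl get jobs`.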