Backupbot II

This Time, It's Easily Configurable

Automatically takes backups of all volumes of running Docker Swarm services and runs pre- and post-commands.

  • Category: Utilities
  • Status: 0, work-in-progress
  • Image: thecoopcloud/backup-bot-two, 4, upstream
  • Healthcheck: No
  • Backups: N/A
  • Email: N/A
  • Tests: No
  • SSO: N/A

Background

There are lots of Docker volume backup systems; all of them have one or both of these limitations:

  • You need to define all the volumes to back up in the configuration system
  • Backups require services to be stopped to take consistent copies

Backupbot II tries to help by

  1. letting you define backups using Docker labels, so you can easily collect your backups for use with another system like docker-volume-backup.
  2. running pre- and post-commands before and after backups, for example to use database tools to take a backup from a running service.

Deployment

With Co-op Cloud

  • abra app new backup-bot-two
  • abra app config <app-name>
    • set storage options. Either configure CRON_SCHEDULE, or set up swarm-cronjob
  • abra app secret generate -a <backupbot_name>
  • abra app deploy <app-name>
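
A full session might look like this (the app name backup-bot-two.example.com is a placeholder):

abra app new backup-bot-two
abra app config backup-bot-two.example.com
abra app secret generate -a backup-bot-two.example.com
abra app deploy backup-bot-two.example.com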

Configuration

By default, Backupbot stores the backups locally in the repository /backups/restic, which is accessible as a volume at /var/lib/docker/volumes/<backupbot_name>_backups/_data/restic/

The backup location can be changed using the RESTIC_REPOSITORY env variable.
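
For example, a minimal .env excerpt (the path shown is the default; any repository string supported by restic works here):

RESTIC_REPOSITORY=/backups/restic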

S3 Storage

To use S3 storage as the backup location, set the following environment variables:

RESTIC_REPOSITORY=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"

and add your <SECRET_ACCESS_KEY> as a Docker secret:

abra app secret insert <backupbot_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>

See restic s3 docs for more information.

SFTP Storage

With SFTP it is not possible to prevent the backupbot from deleting backups if the machine is compromised. We therefore recommend using an S3, REST, or rclone server without delete permissions.

To use SFTP storage as the backup location, set the following environment variables:

RESTIC_REPOSITORY=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"

To get the SSH_HOST_KEY, run the following command:

ssh-keyscan <hostname>
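
For example, ssh-keyscan example.com prints lines such as (hostname and key material are illustrative and shortened):

example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...

Use one of these lines, quoted, as the value of SSH_HOST_KEY.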

Generate an SSH keypair:

ssh-keygen -t ed25519 -f backupkey -P ''

Add the key to your authorized_keys:

ssh-copy-id -i backupkey <user>@<hostname>

Add your SSH_KEY as a Docker secret:

abra app secret insert <backupbot_name> ssh_key v1 """$(cat backupkey)
"""

Attention: This command needs to be executed exactly as stated above, because it places a trailing newline at the end of the key. If the newline is missing, you will get the following error: Load key "/run/secrets/ssh_key": error in libcrypto

Restic REST server Storage

You can simply set the RESTIC_REPOSITORY variable to your REST server URL, e.g. rest:http://host:8000/. If you access the REST server with a password (rest:https://user:pass@host:8000/), you should hide the whole URL, including the password, inside a secret. Uncomment these lines:

SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"

Add your REST server URL as a secret:

abra app secret insert <backupbot_name> restic_repo v1 "rest:https://user:pass@host:8000/"

The secret will override the RESTIC_REPOSITORY variable.

See restic REST docs for more information.

Push notifications

The following environment variables can be used to set up push notifications for backups. PUSH_URL_START is requested just before the backup starts, PUSH_URL_SUCCESS is requested only if the backup succeeds, and PUSH_URL_FAIL is requested if the backup fails. Each variable is optional and independent of the others.

PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
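
Conceptually, the surrounding flow looks like this (a sketch for illustration, not the actual entrypoint.sh):

curl -fsS "$PUSH_URL_START"      # requested just before the backup starts
if backup create; then
  curl -fsS "$PUSH_URL_SUCCESS"  # requested only on success
else
  curl -fsS "$PUSH_URL_FAIL"     # requested only on failure
fi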

Usage

Run the cronjob that creates a backup, including the push notifications and Docker logging:

abra app cmd <backupbot_name> app run_cron

Create a backup of all apps:

abra app run <backupbot_name> app -- backup create

The apps to be backed up need to be deployed.

Create an individual backup:

abra app run <backupbot_name> app -- backup --host <target_app_name> create

Create a backup to a local repository:

abra app run <backupbot_name> app -- backup create -r /backups/restic

It is recommended to shut down/undeploy an app before restoring its data.

Restore the latest snapshot of all included apps:

abra app run <backupbot_name> app -- backup restore

Restore a specific snapshot of an individual app:

abra app run <backupbot_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>

Show all snapshots:

abra app run <backupbot_name> app -- backup snapshots

Show all snapshots containing a specific app:

abra app run <backupbot_name> app -- backup --host <target_app_name> snapshots

Show all files inside the latest snapshot (can be very verbose):

abra app run <backupbot_name> app -- backup ls

Show specific files inside a selected snapshot:

abra app run <backupbot_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/

Download files from a snapshot:

filename=$(abra app run <backupbot_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <backupbot_name> app:$filename .

Run restic

abra app run <backupbot_name> app bash
# inside the container, load credentials from the mounted secrets:
export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)  # only needed for S3 storage
export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
restic snapshots
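
With the credentials exported, any restic subcommand works against the configured repository, for example:

restic stats   # summarize repository size
restic check   # verify repository integrity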

Recipe Configuration

Like Traefik or swarm-cronjob, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:

services:
  db:
    deploy:
      labels:
        backupbot.backup: ${BACKUP:-"true"}
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -r /volume_path/dump.db'
        backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
  • backupbot.backup -- set to true to back up this service (REQUIRED)
  • backupbot.backup.pre-hook -- command to run before copying files (optional), e.g. a database dump that saves its output into the service's volumes
  • backupbot.backup.post-hook -- command to run after copying files (optional)

As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in /run/secrets directly.
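
For a service backed by Postgres, a pre-hook could look like the following sketch (database name, dump path, and secret name are illustrative):

services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        backupbot.backup.pre-hook: 'PGPASSWORD="$(cat /run/secrets/db_password)" pg_dump -U postgres -f /volume_path/dump.sql postgres'
        backupbot.backup.post-hook: "rm -f /volume_path/dump.sql"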