# Backupbot II

_This Time, It's Easily Configurable_
Automatically takes backups from all volumes of running Docker Swarm services and runs pre- and post-commands.
- **Category**: Utilities
- **Status**: 0, work-in-progress
- **Image**: `thecoopcloud/backup-bot-two`, 4, upstream
- **Healthcheck**: No
- **Backups**: N/A
- **Email**: N/A
- **Tests**: No
- **SSO**: N/A
## Background
There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies
Backupbot II tries to help by:
- letting you define backups using Docker labels, so you can easily collect your backups for use with another system like docker-volume-backup.
- running pre- and post-commands before and after backups, for example to use database tools to take a backup from a running service.
## Deployment

### With Co-op Cloud
1. `abra app new backup-bot-two`
2. `abra app config <app-name>`
   - set storage options: either configure `CRON_SCHEDULE`, or set up `swarm-cronjob` (see the example env snippet below)
3. `abra app secret generate -a <app_name>`
4. `abra app deploy <app-name>`
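For example, a minimal storage setup in the app's env file might look like this (the schedule value is illustrative; any cron expression works):

```
# take a backup every day at 03:30
CRON_SCHEDULE='30 3 * * *'
# keep the default local restic repository
RESTIC_REPO=/backups/restic
```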
## Configuration
By default, Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`. The backup location can be changed using the `RESTIC_REPO` env variable.
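To check that backups are actually being written, you can inspect the repository files on the Docker host (the path assumes the default volume name):

```
# on the Docker host: list the restic repository inside the backups volume
ls /var/lib/docker/volumes/<app_name>_backups/_data/restic/
```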
### S3 Storage
To use S3 storage as backup location, set the following envs:

```
RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```
and add your `<SECRET_ACCESS_KEY>` as a docker secret:

```
abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>
```
See the restic S3 docs for more information.
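As a concrete sketch, with an illustrative endpoint, bucket and credentials (all values are placeholders, not recipe defaults):

```
# env file
RESTIC_REPO=s3:https://s3.example.com/my-backups
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"

# matching secret
abra app secret insert <app_name> aws_secret_access_key v1 wJalrEXAMPLEKEY
```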
### SFTP Storage
With SFTP it is not possible to prevent Backupbot from deleting backups if the machine is compromised. We therefore recommend using an S3, REST or rclone server without delete permissions.
To use SFTP storage as backup location, set the following envs:

```
RESTIC_REPO=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```
To get the `SSH_HOST_KEY`, run the following command: `ssh-keyscan <hostname>`

Generate an ssh keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`

Add the key to your `authorized_keys`:

```
ssh-copy-id -i backupkey <user>@<hostname>
```

Add your `SSH_KEY` as a docker secret:

```
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
```
### Restic REST server Storage
You can simply set the `RESTIC_REPO` variable to your REST server URL: `rest:http://host:8000/`. If you access the REST server with a password, e.g. `rest:https://user:pass@host:8000/`, you should hide the whole URL containing the password inside a secret.
Uncomment these lines:

```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```
Add your REST server URL as a secret:

```
abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"
```
The secret will overwrite the `RESTIC_REPO` variable.
See restic REST docs for more information.
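As a sketch, for a REST server protected with basic auth (host, port and credentials are illustrative):

```
# env file: route the repo URL through a secret
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"

# store the full URL, credentials included, as a secret
abra app secret insert <app_name> restic_repo v1 "rest:https://backup:hunter2@restic.example.com:8000/"
```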
## Usage
Create a backup of all apps:

```
abra app run <app_name> app -- backup create
```
The apps to back up need to be deployed.
Create an individual backup:

```
abra app run <app_name> app -- backup --host <target_app_name> create
```

Create a backup to a local repository:

```
abra app run <app_name> app -- backup create -r /backups/restic
```
It is recommended to shut down/undeploy an app before restoring its data.
Restore the latest snapshot of all included apps:

```
abra app run <app_name> app -- backup restore
```

Restore a specific snapshot of an individual app:

```
abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>
```
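For instance, a full restore of a single app (app names are illustrative) could look like this, undeploying it first as recommended above:

```
# stop the app so no files change during the restore
abra app undeploy my_wordpress

# restore its latest snapshot
abra app run backup-bot-two app -- backup --host my_wordpress restore
```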
Show all snapshots:

```
abra app run <app_name> app -- backup snapshots
```

Show all snapshots containing a specific app:

```
abra app run <app_name> app -- backup --host <target_app_name> snapshots
```

Show all files inside the latest snapshot (can be very verbose):

```
abra app run <app_name> app -- backup ls
```

Show specific files inside a selected snapshot:

```
abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/
```
Download files from a snapshot:

```
filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <app_name> app:$filename .
```
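As a worked example (app name, snapshot ID and path are illustrative), fetching a single file out of a known snapshot might look like:

```
# find the snapshot that contains the target app
abra app run backup-bot-two app -- backup --host my_wordpress snapshots

# download the file and copy it out of the backup container
filename=$(abra app run backup-bot-two app -- backup download --snapshot 4f3c5d2a --path /var/lib/docker/volumes/my_wordpress_data/_data/wp-config.php)
abra app cp backup-bot-two app:$filename .
```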
## Recipe Configuration

Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
```
services:
  db:
    deploy:
      labels:
        backupbot.backup: ${BACKUP:-"true"}
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" --all-databases --result-file=/volume_path/dump.db'
        backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```
- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional), save all dumps into the volumes
- `backupbot.backup.post-hook` -- command to run after copying files (optional)
As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
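As a further sketch, the same pattern for a hypothetical Postgres service (service name, paths and secret name are assumptions, not recipe defaults):

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        # dump the database into the data volume before the files are copied
        backupbot.backup.pre-hook: 'PGPASSWORD="$(cat /run/secrets/db_password)" pg_dump -U postgres -f /var/lib/postgresql/data/dump.sql postgres'
        # remove the dump again once the backup is done
        backupbot.backup.post-hook: "rm -f /var/lib/postgresql/data/dump.sql"
```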