Backupbot II
This Time, It's Easily Configurable
Automatically takes backups from all volumes of running Docker Swarm services and runs pre- and post-commands.
- Category: Utilities
- Status: 0, work-in-progress
- Image: git.coopcloud.tech/coop-cloud/backup-bot-two, 4, upstream
- Healthcheck: No
- Backups: N/A
- Email: N/A
- Tests: No
- SSO: N/A
Background
There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies
Backupbot II tries to help by:
- letting you define backups using Docker labels, so you can easily collect your backups for use with another system like docker-volume-backup.
- running pre- and post-commands before and after backups, for example to use database tools to take a backup from a running service.
Deployment
With Co-op Cloud
abra app new backup-bot-two
abra app config <app-name>
- set storage options. Either configure CRON_SCHEDULE, or set up swarm-cronjob
abra app secret generate -a <backupbot_name>
abra app deploy <app-name>
Configuration
By default, Backupbot stores backups locally in the repository /backups/restic, which is accessible as a volume at /var/lib/docker/volumes/<backupbot_name>_backups/_data/restic/. The backup location can be changed using the RESTIC_REPOSITORY env variable.
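For example, to keep backups in a different local repository inside the container, set the variable in the env file (the path below is purely illustrative):
RESTIC_REPOSITORY=/backups/my-local-repo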
S3 Storage
To use S3 storage as the backup location, set the following environment variables:
RESTIC_REPOSITORY=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
and add your <SECRET_ACCESS_KEY> as a Docker secret:
abra app secret insert <backupbot_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>
See the restic S3 docs for more information.
SFTP Storage
With SFTP it is not possible to prevent the backupbot from deleting backups in case of a compromised machine. Therefore we recommend using an S3, REST or rclone server without delete permissions.
To use SFTP storage as the backup location, set the following environment variables:
RESTIC_REPOSITORY=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
To get the SSH_HOST_KEY, run the following command:
ssh-keyscan <hostname>
Generate an SSH keypair:
ssh-keygen -t ed25519 -f backupkey -P ''
Add the key to your authorized_keys:
ssh-copy-id -i backupkey <user>@<hostname>
Add your SSH_KEY as a Docker secret:
abra app secret insert <backupbot_name> ssh_key v1 """$(cat backupkey)
"""
Attention: this command needs to be executed exactly as stated above, because it places a trailing newline at the end. If this newline is missing, you will get the following error:
Load key "/run/secrets/ssh_key": error in libcrypto
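Put together, a filled-in SFTP configuration might look like this (hostname, user and repository path are illustrative examples, not defaults):
RESTIC_REPOSITORY=sftp:backup@backup.example.com:/srv/restic-repo
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="backup.example.com ssh-ed25519 AAAAC3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"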
Restic REST server Storage
You can simply set the RESTIC_REPOSITORY variable to your REST server URL, e.g. rest:http://host:8000/.
If you access the REST server with a password (rest:https://user:pass@host:8000/), you should hide the whole URL, which contains the password, inside a secret.
Uncomment these lines:
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
Add your REST server URL as a secret:
abra app secret insert <backupbot_name> restic_repo v1 "rest:https://user:pass@host:8000/"
The secret will overwrite the RESTIC_REPOSITORY variable.
See the restic REST docs for more information.
Push notifications
It is possible to configure three push events that are triggered by the backup cronjob. They can be used by monitoring systems to detect failures. The events are:
- start
- success
- fail
Using a Prometheus Push Gateway
A Prometheus push gateway can be used by setting the following environment variable:
PUSH_PROMETHEUS_URL=pushgateway.example.com/metrics/job/backup
Using custom URLs
The following environment variables can be used to set up push notifications for backups. PUSH_URL_START is requested just before the backup starts, PUSH_URL_SUCCESS is only requested if the backup was successful, and PUSH_URL_FAIL is requested if the backup fails. Each variable is optional and independent of the others.
PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
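To check that an endpoint responds as expected before relying on it, you can request it manually with a plain HTTP client; the URL is just the placeholder from above:
curl -fsS "https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start"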
Push endpoint behind basic auth
Insert the basic auth secret:
abra app secret insert <backupbot_name> push_basicauth v1 "user:password"
Enable basic auth in the env file by uncommenting the following lines:
#COMPOSE_FILE="$COMPOSE_FILE:compose.pushbasicauth.yml"
#SECRET_PUSH_BASICAUTH=v1
Usage
Run the cronjob that creates a backup, including the push notifications and Docker logging:
abra app cmd <backupbot_name> app run_cron
Create a backup of all apps:
abra app run <backupbot_name> app -- backup create
The apps to back up need to be deployed.
Create an individual backup:
abra app run <backupbot_name> app -- backup --host <target_app_name> create
Create a backup to a local repository:
abra app run <backupbot_name> app -- backup create -r /backups/restic
It is recommended to shut down/undeploy an app before restoring its data.
Restore the latest snapshot of all included apps:
abra app run <backupbot_name> app -- backup restore
Restore a specific snapshot of an individual app:
abra app run <backupbot_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>
Show all snapshots:
abra app run <backupbot_name> app -- backup snapshots
Show all snapshots containing a specific app:
abra app run <backupbot_name> app -- backup --host <target_app_name> snapshots
Show all files inside the latest snapshot (can be very verbose):
abra app run <backupbot_name> app -- backup ls
Show specific files inside a selected snapshot:
abra app run <backupbot_name> app -- backup ls --snapshot <snapshot_id> /var/lib/docker/volumes/
Download files from a snapshot:
filename=$(abra app run <backupbot_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <backupbot_name> app:$filename .
Run restic
abra app run <backupbot_name> app bash
export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)
export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
restic snapshots
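With the credentials exported as above, other restic commands can be run against the repository in the same shell, for example these standard (read-only) restic subcommands:
restic stats
restic check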
Recipe Configuration
Like Traefik or swarm-cronjob, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
- Add ENABLE_BACKUPS=true to .env.sample
- Add backupbot labels to the compose file:
services:
db:
deploy:
labels:
backupbot.backup: "${ENABLE_BACKUPS:-true}"
backupbot.backup.pre-hook: "/pg_backup.sh backup"
backupbot.backup.volumes.db.path: "backup.sql"
backupbot.restore.post-hook: '/pg_backup.sh restore'
backupbot.backup.volumes.redis: "false"
- backupbot.backup -- set to true to back up this service (REQUIRED)
  - this is the only required backup label; per default it will back up all volumes (see the minimal example below)
- backupbot.backup.volumes.<volume_name>.path -- only backup the listed relative paths from <volume_name>
- backupbot.backup.volumes.<volume_name>: false -- exclude <volume_name> from the backup
- backupbot.backup.pre-hook -- command to run before copying files
  - i.e. save all database dumps into the volumes
- backupbot.backup.post-hook -- command to run after copying files
- backupbot.restore.pre-hook -- command to run before restoring files
- backupbot.restore.post-hook -- command to run after restoring files
  - i.e. read all database dumps from the volumes
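Since backupbot.backup is the only required label, a service that should simply have all of its volumes backed up, without any hooks, can get by with a single label (the service name is illustrative):
services:
  app:
    deploy:
      labels:
        backupbot.backup: "true"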
- (Optional) add backup/restore scripts to the compose file
services:
db:
configs:
- source: pg_backup
target: /pg_backup.sh
mode: 0555
configs:
pg_backup:
name: ${STACK_NAME}_pg_backup_${PG_BACKUP_VERSION}
file: pg_backup.sh
Version the config file in abra.sh:
export PG_BACKUP_VERSION=v1
As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in /run/secrets directly.
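As a rough illustration, a hook script in that style could look like the following minimal sketch. It assumes a PostgreSQL service whose password is available as the db_password secret and whose data volume is mounted at /var/lib/postgresql/data; the secret name, user, database and paths are assumptions for the example, not the recipe's actual pg_backup.sh:
#!/bin/bash
# Hypothetical pre-/post-hook sketch: read the database password from a
# Docker secret and dump/restore the file that the
# backupbot.backup.volumes.db.path label points at.
set -e
export PGPASSWORD=$(cat /run/secrets/db_password)   # assumed secret name
BACKUP_FILE=/var/lib/postgresql/data/backup.sql     # assumed volume mount + label path
case "$1" in
  backup)
    pg_dump -U postgres postgres > "$BACKUP_FILE"
    ;;
  restore)
    psql -U postgres postgres < "$BACKUP_FILE"
    ;;
esac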
Backupbot Development
- Copy modified backupbot.py into the container:
cp backupbot.py /tmp/backupbot.py; git stash; abra app cp <backupbot_name> /tmp/backupbot.py app:/usr/bin/backupbot.py; git checkout main; git stash pop
- Test things with the Python interpreter inside the container:
abra app run <backupbot_name> app bash
cd /usr/bin/
python
from backupbot import *
Versioning
- App version: changes to backup.py (build a new image)
- Co-op Cloud package version: changes to the recipe
For example, starting with 1.0.0+2.0.0:
- "patch" change to recipe: 1.0.1+2.0.0
- "patch" change to backup.py: increment both, so 1.1.0+2.0.1, because bumping the image version would result in a minor recipe release