# Backupbot II

[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)

_This Time, It's Easily Configurable_

Automatically takes backups from all volumes of running Docker Swarm services and runs pre- and post-commands.

* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`git.coopcloud.tech/coop-cloud/backup-bot-two`](https://git.coopcloud.tech/coop-cloud/-/packages/container/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A

## Background

There are lots of Docker volume backup systems; all of them have one or both of these limitations:

- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies

Backupbot II tries to help by:

1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.

## Deployment

### With Co-op Cloud

* `abra app new backup-bot-two`
* `abra app config <app-name>` - set storage options; either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app-name>`
* `abra app deploy <app-name>`

## Configuration

By default Backupbot stores backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app-name>_backups/_data/restic/`.

The backup location can be changed using the `RESTIC_REPOSITORY` env variable.

### S3 Storage

To use S3 storage as the backup location, set the following envs:

```
RESTIC_REPOSITORY=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<AWS-ACCESS-KEY-ID>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```

and add your `<AWS-SECRET-ACCESS-KEY>` as a Docker secret:

`abra app secret insert <app-name> aws_secret_access_key v1 <AWS-SECRET-ACCESS-KEY>`

See the [restic S3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.

### SFTP Storage

> With SFTP it is not possible to prevent the backupbot from deleting backups in case of a compromised machine. Therefore we recommend using an S3, REST or rclone server without delete permissions.

To use SFTP storage as the backup location, set the following envs:

```
RESTIC_REPOSITORY=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```

To get the `SSH_HOST_KEY`, run: `ssh-keyscan <hostname>`

Generate an SSH keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`

Add the key to your `authorized_keys`: `ssh-copy-id -i backupkey <user>@<hostname>`

Add your `SSH_KEY` as a Docker secret:

```
abra app secret insert <app-name> ssh_key v1 """$(cat backupkey)
"""
```

> Attention: this command needs to be executed exactly as stated above, because it places a trailing newline at the end of the secret. If that newline is missing you will get the following error: `Load key "/run/secrets/ssh_key": error in libcrypto`

### Restic REST server Storage

You can simply set the `RESTIC_REPOSITORY` variable to your REST server URL: `rest:http://host:8000/`.

If you access the REST server with a password (`rest:https://user:pass@host:8000/`) you should hide the whole URL, containing the password, inside a secret. Uncomment these lines:

```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```

Add your REST server URL as a secret:

```
abra app secret insert <app-name> restic_repo v1 "rest:https://user:pass@host:8000/"
```

The secret will overwrite the `RESTIC_REPOSITORY` variable.

See the [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.

## Push notifications

It is possible to configure three push events that are triggered by the backup cronjob. These can be used to detect failures from monitoring systems. The events are:

- start
- success
- fail

### Using a Prometheus Push Gateway

[A Prometheus push gateway](https://git.coopcloud.tech/coop-cloud/monitoring-ng#setup-push-gateway) can be used by setting the following env variable:

- `PUSH_PROMETHEUS_URL=pushgateway.example.com/metrics/job/backup`

### Using custom URLs

The following env variables can be used to set up push notifications for backups. `PUSH_URL_START` is requested just before the backup starts, `PUSH_URL_SUCCESS` is only requested if the backup was successful, and `PUSH_URL_FAIL` is requested if the backup fails. Each variable is optional and independent of the others.

```
PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
```

### Push endpoint behind basic auth

Insert the basic auth secret:

`abra app secret insert <app-name> push_basicauth v1 "user:password"`

Enable basic auth in the env file by uncommenting the following lines:

```
#COMPOSE_FILE="$COMPOSE_FILE:compose.pushbasicauth.yml"
#SECRET_PUSH_BASICAUTH=v1
```
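Taken together, the push notifications behave roughly like the following shell sketch. This is illustrative only, not the actual backupbot implementation; `run_backup` is a placeholder for the real backup routine:

```
# Illustrative sketch of the push-notification flow (not the real implementation).
curl -fsS "$PUSH_URL_START"        # requested just before the backup starts

if run_backup; then                # `run_backup` stands in for the actual backup run
    curl -fsS "$PUSH_URL_SUCCESS"  # requested only if the backup succeeded
else
    curl -fsS "$PUSH_URL_FAIL"     # requested only if the backup failed
fi
```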
## Usage

Run the cronjob that creates a backup, including the push notifications and Docker logging:

`abra app cmd <app-name> app run_cron`

Create a backup of all apps:

`abra app run <app-name> app -- backup create`

> The apps to back up need to be deployed.

Create an individual backup:

`abra app run <app-name> app -- backup --host <target-app-name> create`

Create a backup to a local repository:

`abra app run <app-name> app -- backup create -r /backups/restic`

> It is recommended to shut down/undeploy an app before restoring its data.

Restore the latest snapshot of all apps:

`abra app run <app-name> app -- backup restore`

Restore a specific snapshot of an individual app:

`abra app run <app-name> app -- backup --host <target-app-name> restore --snapshot <snapshot-id>`

Show all snapshots:

`abra app run <app-name> app -- backup snapshots`

Show all snapshots containing a specific app:

`abra app run <app-name> app -- backup --host <target-app-name> snapshots`

Show all files inside the latest snapshot (can be very verbose):

`abra app run <app-name> app -- backup ls`

Show specific files inside a selected snapshot:

`abra app run <app-name> app -- backup ls --snapshot <snapshot-id> /var/lib/docker/volumes/`

Download files from a snapshot:

```
filename=$(abra app run <app-name> app -- backup download --snapshot <snapshot-id> --path <file-path>)
abra app cp <app-name> app:$filename .
```

## Run restic

```
abra app run <app-name> app bash
export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)
export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
restic snapshots
```

## Recipe Configuration

Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:

1. Add `ENABLE_BACKUPS=true` to `.env.sample`.

2. Add backupbot labels to the compose file:

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "${ENABLE_BACKUPS:-true}"
        backupbot.backup.pre-hook: "/pg_backup.sh backup"
        backupbot.backup.volumes.db.path: "backup.sql"
        backupbot.restore.post-hook: "/pg_backup.sh restore"
        backupbot.backup.volumes.redis: "false"
```

- `backupbot.backup` -- set to `true` to back up this service (REQUIRED); this is the only required backup label, and by default all volumes are backed up
- `backupbot.backup.volumes.<volume_name>.path` -- only back up the listed relative paths from `<volume_name>`
- `backupbot.backup.volumes.<volume_name>: false` -- exclude `<volume_name>` from the backup
- `backupbot.backup.pre-hook` -- command to run before copying files (e.g. save all database dumps into the volumes)
- `backupbot.backup.post-hook` -- command to run after copying files
- `backupbot.restore.pre-hook` -- command to run before restoring files
- `backupbot.restore.post-hook` -- command to run after restoring files (e.g. read all database dumps from the volumes)

3. (Optional) add backup/restore scripts to the compose file:

```
services:
  db:
    configs:
      - source: pg_backup
        target: /pg_backup.sh
        mode: 0555

configs:
  pg_backup:
    name: ${STACK_NAME}_pg_backup_${PG_BACKUP_VERSION}
    file: pg_backup.sh
```

Version the config file in `abra.sh`:

```
export PG_BACKUP_VERSION=v1
```

As in the above example, you can reference Docker secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
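For reference, a hook script along the lines of the `pg_backup.sh` referenced above might look like this minimal sketch. The secret name `db_password`, the `postgres` user/database, and the volume mount path are assumptions for illustration; the real recipe ships its own script:

```
#!/bin/sh
# Minimal sketch of a backup/restore hook script (cf. /pg_backup.sh above).
# Assumes a `db_password` secret, a `postgres` user and database, and the
# `db` volume mounted at /var/lib/postgresql/data -- adjust to your recipe.
set -e

BACKUP_FILE=/var/lib/postgresql/data/backup.sql
export PGPASSWORD=$(cat /run/secrets/db_password)

case "$1" in
  backup)
    # Called via backupbot.backup.pre-hook: dump the database into the volume,
    # where backupbot.backup.volumes.db.path picks it up as "backup.sql".
    pg_dump -U postgres postgres > "$BACKUP_FILE"
    ;;
  restore)
    # Called via backupbot.restore.post-hook: load the dump back in.
    psql -U postgres postgres < "$BACKUP_FILE"
    ;;
esac
```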
## Backupbot Development

1. Copy the modified `backupbot.py` into the container:

```
cp backupbot.py /tmp/backupbot.py; git stash; abra app cp <app-name> /tmp/backupbot.py app:/usr/bin/backupbot.py; git checkout main; git stash pop
```

2. Test things with the Python interpreter inside the container:

```
abra app run <app-name> app bash
cd /usr/bin/
python
from backupbot import *
```

### Versioning

- App version: changes to `backupbot.py` (build a new image)
- Co-op Cloud package version: changes to the recipe

For example, starting with `1.0.0+2.0.0`:

- a "patch" change to the recipe: `1.0.1+2.0.0`
- a "patch" change to `backupbot.py`: increment both, so `1.1.0+2.0.1`, because bumping the image version would result in a minor recipe release

https://git.coopcloud.tech/coop-cloud/backup-bot-two/issues/4
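As a sketch of the scheme above (tag names purely illustrative, matching the two cases from the example):

```
git tag 1.0.1+2.0.0   # "patch" change to the recipe only: bump the recipe part
git tag 1.1.0+2.0.1   # "patch" change to backupbot.py: bump both parts, and the
                      # new image forces at least a minor recipe release
```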