# Backupbot II

[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)

_This Time, It's Easily Configurable_

Automatically takes backups from all volumes of running Docker Swarm services and runs pre- and post-commands.

* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A

## Background

There are lots of Docker volume backup systems; all of them have one or both of these limitations:

- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies

Backupbot II tries to help by:

1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.

## Deployment

### With Co-op Cloud

* `abra app new backup-bot-two`
* `abra app config <app-name>` - set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app-name>`
* `abra app deploy <app-name>`

## Configuration

By default, Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app-name>_backups/_data/restic/`. The backup location can be changed using the `RESTIC_REPOSITORY` env variable.
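As a minimal sketch, the relevant settings in the app's env config might look like this (the values are illustrative assumptions, not defaults shipped by the recipe):

```
# run the backup cronjob every night at 03:30 (standard cron syntax)
CRON_SCHEDULE='30 3 * * *'
# override the default local repository /backups/restic (hypothetical path)
RESTIC_REPOSITORY=/backups/restic-alt
```

The following sections show how to point `RESTIC_REPOSITORY` at remote storage instead.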
### S3 Storage

To use S3 storage as backup location, set the following envs:

```
RESTIC_REPOSITORY=s3:<s3-service-url>/<bucket-name>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<access-key-id>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```

and add your `<aws-secret-access-key>` as a docker secret:

`abra app secret insert <app-name> aws_secret_access_key v1 <aws-secret-access-key>`

See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.

### SFTP Storage

> With sftp it is not possible to prevent the backupbot from deleting backups in case of a compromised machine. Therefore we recommend using an S3, REST or rclone server without delete permissions.

To use SFTP storage as backup location, set the following envs:

```
RESTIC_REPOSITORY=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```

To get the `SSH_HOST_KEY`, run the following command:

`ssh-keyscan <hostname>`

Generate an ssh keypair:

`ssh-keygen -t ed25519 -f backupkey -P ''`

Add the key to your `authorized_keys`:

`ssh-copy-id -i backupkey <user>@<hostname>`

Add your `SSH_KEY` as a docker secret:

```
abra app secret insert <app-name> ssh_key v1 """$(cat backupkey)
"""
```

> Attention: This command needs to be executed exactly as stated above, because it places a trailing newline at the end. If this is missing, you will get the following error: `Load key "/run/secrets/ssh_key": error in libcrypto`

### Restic REST server Storage

You can simply set the `RESTIC_REPOSITORY` variable to your REST server URL `rest:http://host:8000/`. If you access the REST server with a password (`rest:https://user:pass@host:8000/`), you should hide the whole URL containing the password inside a secret.

Uncomment these lines:

```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```

Add your REST server URL as a secret:

`abra app secret insert <app-name> restic_repo v1 "rest:https://user:pass@host:8000/"`

The secret will overwrite the `RESTIC_REPOSITORY` variable.
See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.

## Push notifications

The following env variables can be used to set up push notifications for backups. `PUSH_URL_START` is requested just before the backup starts, `PUSH_URL_SUCCESS` is only requested if the backup was successful, and `PUSH_URL_FAIL` is requested if the backup fails. Each variable is optional and independent of the others.

```
PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
```

## Usage

Run the cronjob that creates a backup, including the push notifications and docker logging:

`abra app cmd <app-name> app run_cron`

Create a backup of all apps:

`abra app run <app-name> app -- backup create`

> The apps to be backed up need to be deployed

Create an individual backup:

`abra app run <app-name> app -- backup --host <target-app> create`

Create a backup to a local repository:

`abra app run <app-name> app -- backup create -r /backups/restic`

> It is recommended to shut down/undeploy an app before restoring the data

Restore the latest snapshot of all included apps:

`abra app run <app-name> app -- backup restore`

Restore a specific snapshot of an individual app:

`abra app run <app-name> app -- backup --host <target-app> restore --snapshot <snapshot-id>`

Show all snapshots:

`abra app run <app-name> app -- backup snapshots`

Show all snapshots containing a specific app:

`abra app run <app-name> app -- backup --host <target-app> snapshots`

Show all files inside the latest snapshot (can be very verbose):

`abra app run <app-name> app -- backup ls`

Show specific files inside a selected snapshot:

`abra app run <app-name> app -- backup ls --snapshot <snapshot-id> --path /var/lib/docker/volumes/<volume-name>`

Download files from a snapshot:

```
filename=$(abra app run <app-name> app -- backup download --snapshot <snapshot-id> --path <file-path>)
abra app cp <app-name> app:$filename .
```

## Run restic

```
abra app run <app-name> app bash
export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)
export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
restic snapshots
```

## Recipe Configuration

Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: ${BACKUP:-"true"}
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" --all-databases --result-file=/volume_path/dump.db'
        backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```

- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional), e.g. to save database dumps into the volumes
- `backupbot.backup.post-hook` -- command to run after copying files (optional)

As in the above example, you can reference Docker secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.

[abra]: https://git.autonomic.zone/autonomic-cooperative/abra
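The same label pattern can be sketched for other databases. For example, a Postgres service might look like this (the service name, secret name and dump path are illustrative assumptions, not part of this recipe):

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        # dump the database into the volume before the files are copied
        # (secret name db_password and path /volume_path are hypothetical)
        backupbot.backup.pre-hook: 'PGPASSWORD="$(cat /run/secrets/db_password)" pg_dump -U postgres -f /volume_path/dump.sql postgres'
        # remove the dump again after the backup has been taken
        backupbot.backup.post-hook: "rm -f /volume_path/dump.sql"
```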