# Backupbot II
[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)

_This Time, It's Easily Configurable_
Automatically takes backups from all volumes of running Docker Swarm services and runs pre- and post-commands.
<!-- metadata -->

* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A

<!-- endmetadata -->
## Background

There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies

Backupbot II tries to help by:

1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.
## Deployment

### With Co-op Cloud

* `abra app new backup-bot-two`
* `abra app config <app-name>`
  - set storage options: either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app_name>`
* `abra app deploy <app-name>`
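For example, a `CRON_SCHEDULE` entry for a nightly backup could look like the following (the time is only illustrative; it uses standard five-field cron syntax):

```
CRON_SCHEDULE='0 3 * * *'
```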
## Configuration

By default, Backupbot stores backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`.

The backup location can be changed using the `RESTIC_REPO` env variable.
### S3 Storage

To use S3 storage as the backup location, set the following envs:

```
RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```

and add your `<SECRET_ACCESS_KEY>` as a docker secret:

`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`

See the [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
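As a concrete illustration, the S3 repo value is just `s3:` plus the service URL and bucket name joined with a slash (the endpoint and bucket below are made-up placeholders, not real values):

```shell
# Illustrative values only - substitute your own S3 endpoint and bucket.
S3_SERVICE_URL="s3.example.com"
BUCKET_NAME="my-app-backups"
RESTIC_REPO="s3:${S3_SERVICE_URL}/${BUCKET_NAME}"
echo "$RESTIC_REPO"   # s3:s3.example.com/my-app-backups
```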
### SFTP Storage

> With sftp it is not possible to prevent the backupbot from deleting backups in case of a compromised machine. Therefore we recommend using an S3, REST or rclone server without delete permissions.

To use SFTP storage as the backup location, set the following envs:

```
RESTIC_REPO=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```

To get the `SSH_HOST_KEY`, run the following command: `ssh-keyscan <hostname>`

Generate an ssh keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`

Add the key to your `authorized_keys`:

`ssh-copy-id -i backupkey <user>@<hostname>`

Add your `SSH_KEY` as a docker secret:

```
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
```
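Putting the key steps together, a minimal local dry run looks like this (it uses a throwaway temp directory and skips `ssh-copy-id`, which needs a reachable host):

```shell
# Generate a throwaway ed25519 keypair in a temp dir (illustrative only).
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -f "$tmp/backupkey" -P '' -q

# The private key at "$tmp/backupkey" is what goes into the ssh_key secret;
# "$tmp/backupkey.pub" is what ssh-copy-id would append to authorized_keys.
ls "$tmp"
```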
## Usage

Create a backup of all apps:

`abra app run <app_name> app -- backup create`

> The apps to back up need to be deployed

Create an individual backup:

`abra app run <app_name> app -- backup --host <target_app_name> create`

Create a backup to a local repository:

`abra app run <app_name> app -- backup create -r /backups/restic`

> It is recommended to shut down/undeploy an app before restoring its data

Restore the latest snapshot of all included apps:

`abra app run <app_name> app -- backup restore`

Restore a specific snapshot of an individual app:

`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`

Show all snapshots:

`abra app run <app_name> app -- backup snapshots`

Show all snapshots containing a specific app:

`abra app run <app_name> app -- backup --host <target_app_name> snapshots`

Show all files inside the latest snapshot (can be very verbose):

`abra app run <app_name> app -- backup ls`

Show specific files inside a selected snapshot:

`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`

Download files from a snapshot:

```
filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <app_name> app:$filename .
```
## Recipe Configuration

Like Traefik, or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: ${BACKUP:-"true"}
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /volume_path/dump.db'
        backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```

- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional); save all dumps into the volumes
- `backupbot.backup.post-hook` -- command to run after copying files (optional)

As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
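To make the hook ordering concrete, here is a minimal simulation of the pre-hook → copy → post-hook sequence using plain files instead of a real database (the directories and commands are stand-ins for illustration, not Backupbot internals):

```shell
# Simulated service volume and backup target (stand-ins for
# /volume_path and the restic repository).
volume=$(mktemp -d)
backup=$(mktemp -d)

# 1. pre-hook: produce a consistent dump inside the volume
#    (stands in for the mysqldump command in the label above).
echo "db contents" > "$volume/dump.db"

# 2. backup: Backupbot copies the volume contents into the repository
#    (here: a plain cp into a second directory).
cp "$volume/dump.db" "$backup/dump.db"

# 3. post-hook: clean the dump out of the volume again.
rm -f "$volume/dump.db"

cat "$backup/dump.db"   # prints "db contents" - the dump survives in the backup
```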