# Backupbot II
[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)
_This Time, It's Easily Configurable_
Automatically takes backups of all volumes of running Docker Swarm services and runs pre- and post-commands.
<!-- metadata -->
* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A
<!-- endmetadata -->
## Background
There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies
Backupbot II tries to help by:

1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.
## Deployment
### With Co-op Cloud
* `abra app new backup-bot-two`
* `abra app config <app-name>`
  - set storage options: either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <backupbot_name>`
* `abra app deploy <app-name>`
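For example, a nightly backup at 3:30 a.m. could be scheduled in the app's `.env` file like this (`CRON_SCHEDULE` is this recipe's variable; the example value is an illustration, not the recipe's default):

```
CRON_SCHEDULE='30 3 * * *'
```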
## Configuration
By default, Backupbot stores backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<backupbot_name>_backups/_data/restic/`.
The backup location can be changed using the `RESTIC_REPOSITORY` env variable.
### S3 Storage
To use S3 storage as the backup location, set the following env variables:
```
RESTIC_REPOSITORY=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```
and add your `<SECRET_ACCESS_KEY>` as a Docker secret:
`abra app secret insert <backupbot_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
### SFTP Storage
> With SFTP it is not possible to prevent Backupbot from deleting backups if the machine is compromised. We therefore recommend using an S3, REST or rclone server without delete permissions.
To use SFTP storage as the backup location, set the following env variables:
```
RESTIC_REPOSITORY=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```
To get the `SSH_HOST_KEY`, run `ssh-keyscan <hostname>`.

Generate an SSH keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`

Add the public key to the remote host's `authorized_keys`:
`ssh-copy-id -i backupkey <user>@<hostname>`

Add your SSH key as a Docker secret:
```
abra app secret insert <backupbot_name> ssh_key v1 """$(cat backupkey)
"""
```
> Attention: this command must be executed exactly as stated above, because it places a trailing newline at the end of the key. If that newline is missing, you will get the following error: `Load key "/run/secrets/ssh_key": error in libcrypto`
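The trailing newline matters because shell command substitution `$(...)` strips trailing newlines, while OpenSSH requires private keys to end with one. A quick local demonstration using a throwaway file (not a real key):

```shell
# $(...) strips the trailing newline that OpenSSH private keys need;
# the odd triple-quote style in the command above re-adds it.
printf 'KEY MATERIAL\n' > /tmp/backupkey   # stand-in for a real key file

stripped="$(cat /tmp/backupkey)"           # trailing newline removed by $(...)

with_newline="""$(cat /tmp/backupkey)
"""                                        # literal newline inside the quotes
```

Here `stripped` ends without a newline, while `with_newline` ends with one, which is exactly the difference between a broken and a working `ssh_key` secret.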
### Restic REST server Storage
You can simply set the `RESTIC_REPOSITORY` variable to your REST server URL: `rest:http://host:8000/`.
If you access the REST server with a password (`rest:https://user:pass@host:8000/`), you should hide the whole URL, including the password, inside a secret.
Uncomment these lines:
```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```
Add your REST server URL as a secret:
```
abra app secret insert <backupbot_name> restic_repo v1 "rest:https://user:pass@host:8000/"
```
The secret will overwrite the `RESTIC_REPOSITORY` variable.
See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
## Push notifications
The following env variables can be used to set up push notifications for backups: `PUSH_URL_START` is requested just before the backup starts, `PUSH_URL_SUCCESS` is requested only if the backup succeeded, and `PUSH_URL_FAIL` is requested if the backup failed.
Each variable is optional and independent of the others.
```
PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
```
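The lifecycle above can be sketched as follows. This is not the bot's actual code: `run_backup` is a hypothetical stand-in for the backup step, and `notify` only echoes the URL where the real bot would request it (e.g. with `curl`):

```shell
# Sketch of how the three push URLs map onto the backup lifecycle.
PUSH_URL_START='https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start'
PUSH_URL_SUCCESS='https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK'
PUSH_URL_FAIL='https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail'

notify() { echo "would request: $1"; }   # real bot: perform an HTTP GET instead

run_backup() { false; }                  # hypothetical: pretend this run failed

notify "$PUSH_URL_START"                 # always fired before the backup
if run_backup; then
  notify "$PUSH_URL_SUCCESS"             # only on success
else
  notify "$PUSH_URL_FAIL"                # only on failure
fi
```

Unset variables are simply skipped, which is why each URL is optional.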
## Usage
Run the cronjob that creates a backup, including push notifications and Docker logging:

`abra app cmd <backupbot_name> app run_cron`
Create a backup of all apps:

`abra app run <backupbot_name> app -- backup create`

> The apps to be backed up need to be deployed

Create an individual backup:

`abra app run <backupbot_name> app -- backup --host <target_app_name> create`
Create a backup to a local repository:

`abra app run <backupbot_name> app -- backup create -r /backups/restic`
> It is recommended to shut down/undeploy an app before restoring its data

Restore the latest snapshot of all included apps:

`abra app run <backupbot_name> app -- backup restore`
Restore a specific snapshot of an individual app:

`abra app run <backupbot_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
Show all snapshots:

`abra app run <backupbot_name> app -- backup snapshots`
Show all snapshots containing a specific app:

`abra app run <backupbot_name> app -- backup --host <target_app_name> snapshots`
Show all files inside the latest snapshot (can be very verbose):

`abra app run <backupbot_name> app -- backup ls`
Show specific files inside a selected snapshot:

`abra app run <backupbot_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
Download files from a snapshot:
```
filename=$(abra app run <backupbot_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <backupbot_name> app:$filename .
```
## Run restic
```
abra app run <backupbot_name> app bash
export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)
export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
restic snapshots
```
## Recipe Configuration
Like Traefik, or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
```
services:
  db:
    deploy:
      labels:
        backupbot.backup: ${BACKUP:-"true"}
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /volume_path/dump.db'
        backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```
- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional); save any dumps into the service's volumes
- `backupbot.backup.post-hook` -- command to run after copying files (optional)

As in the example above, you can reference Docker secrets, e.g. to look up database passwords, by reading the files in `/run/secrets` directly.