forked from coop-cloud/backup-bot-two
Compare commits: dockerize...enable-lab (82 commits)
12  .drone.yml  Normal file
@@ -0,0 +1,12 @@
---
kind: pipeline
name: linters
steps:
  - name: run shellcheck
    image: koalaman/shellcheck-alpine
    commands:
      - shellcheck backup.sh

trigger:
  branch:
    - main
29  .env.sample  Normal file
@@ -0,0 +1,29 @@
TYPE=backup-bot-two

SECRET_RESTIC_PASSWORD_VERSION=v1

COMPOSE_FILE=compose.yml

RESTIC_REPO=/backups/restic

CRON_SCHEDULE='30 3 * * *'

# swarm-cronjob, instead of built-in cron
#COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"

# SSH storage
#SECRET_SSH_KEY_VERSION=v1
#SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
#COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"

# S3 storage
#SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
#AWS_ACCESS_KEY_ID=something-secret
#COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"

# Secret restic repository
# use a secret to store the RESTIC_REPO if the repository location contains a secret value
# e.g. rest:https://user:SECRET_PASSWORD@host:8000/
# it overwrites the RESTIC_REPO variable
#SECRET_RESTIC_REPO_VERSION=v1
#COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
17  .envrc.sample  Normal file
@@ -0,0 +1,17 @@
export RESTIC_HOST="user@domain.tld"
export RESTIC_PASSWORD_FILE=/run/secrets/restic-password
export BACKUP_DEST=/backups

export SERVER_NAME=domain.tld
export DOCKER_CONTEXT=$SERVER_NAME

# uncomment either this:
#export SSH_KEY_FILE=~/.ssh/id_rsa
# or this:
#export AWS_SECRET_ACCESS_KEY_FILE=s3
#export AWS_ACCESS_KEY_ID=easter-october-emphatic-tug-urgent-customer
# or this:
#export HTTPS_PASSWORD_FILE=/run/secrets/https_password

# optionally limit subset of services for testing
#export SERVICES_OVERRIDE="ghost_domain_tld_app ghost_domain_tld_db"
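The `*_FILE` variables above follow a common Docker-secrets convention: the variable points at a file whose contents are the real secret. A minimal sketch of that convention (a hypothetical helper of ours, mirroring what `export_secrets` in `backupbot.py` does at runtime):

```python
import tempfile

def export_file_secrets(environ):
    """For every VAR_FILE entry, read the file it points to and set VAR to
    the file's contents. COMPOSE_FILE is a file list, not a secret, so skip it."""
    for name in list(environ):
        if name.endswith('_FILE') and name != 'COMPOSE_FILE':
            with open(environ[name]) as fh:
                environ[name.removesuffix('_FILE')] = fh.read().strip()
    return environ

# usage sketch: a throwaway file standing in for /run/secrets/restic-password
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as fh:
    fh.write("hunter2")
env = export_file_secrets({'RESTIC_PASSWORD_FILE': fh.name})
print(env['RESTIC_PASSWORD'])
```

This is why `.envrc.sample` only ever exports `*_FILE` paths: the secret values themselves stay out of the environment file.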
1  .gitignore  vendored  Normal file
@@ -0,0 +1 @@
/testing
194  README.md
@@ -1,45 +1,175 @@
# Backupbot II: This Time It's Easily Configurable
# Backupbot II

Automatically backup files from running Docker Swarm services based on labels.
[](https://build.coopcloud.tech/coop-cloud/backup-bot-two)

## TODO
_This Time, It's Easily Configurable_

- [ ] Make a Docker image of this
- [ ] Rip out or improve Restic stuff
- [ ] Add secret handling for database backups
- [ ] Continuous linting with shellcheck
Automatically take backups from all volumes of running Docker Swarm services and run pre- and post-commands.

## Label format
<!-- metadata -->

(Haven't done secrets yet, here are two options)
* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A

v1:
<!-- endmetadata -->

## Background

There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies

Backupbot II tries to help, by
1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.
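The label-driven selection can be sketched as a pure function over a service's deploy labels (the helper name `backup_plan` is ours for illustration, not backupbot's API; the real implementation lives in `backupbot.py`):

```python
def backup_plan(labels):
    """Decide from a service's deploy labels whether and how to back it up."""
    if labels.get('backupbot.backup') != 'true':
        return None  # service opted out (or never opted in)
    return {
        'pre-hook': labels.get('backupbot.backup.pre-hook'),
        'post-hook': labels.get('backupbot.backup.post-hook'),
        'path': labels.get('backupbot.backup.path'),
    }

# a service that opted in via its compose labels
plan = backup_plan({
    'backupbot.backup': 'true',
    'backupbot.backup.path': '/tmp/dump/',
})
print(plan['path'])
```

Because the decision is read from labels at backup time, adding a new app to the backup rotation needs no change to backupbot's own configuration.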
## Deployment

### With Co-op Cloud

* `abra app new backup-bot-two`
* `abra app config <app-name>`
  - set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app_name>`
* `abra app deploy <app-name>`
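`CRON_SCHEDULE` uses plain five-field cron syntax (minute, hour, day-of-month, month, day-of-week). A toy matcher for the subset seen in this recipe (`*`, `*/n`, and plain numbers) makes the semantics concrete; this is an illustration of ours, not the parser `crond` actually uses:

```python
def field_matches(spec, value):
    # '*' matches anything; '*/n' matches multiples of n; otherwise exact number
    if spec == '*':
        return True
    if spec.startswith('*/'):
        return value % int(spec[2:]) == 0
    return value == int(spec)

def cron_matches(expr, minute, hour, dom, month, dow):
    specs = expr.split()
    return all(field_matches(s, v)
               for s, v in zip(specs, (minute, hour, dom, month, dow)))

# '30 3 * * *' (the .env.sample default) fires daily at 03:30
print(cron_matches('30 3 * * *', 30, 3, 15, 6, 2))   # True
print(cron_matches('*/5 * * * *', 25, 12, 1, 1, 1))  # True
```

With `swarm-cronjob` instead of built-in cron, the equivalent expression goes in the `swarm.cronjob.schedule` label (see `compose.swarm-cronjob.yml` below).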
## Configuration

By default, Backupbot stores backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`.

The backup location can be changed using the `RESTIC_REPO` env variable.

### S3 Storage

To use S3 storage as the backup location, set the following environment variables:
```
RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```
and add your `<SECRET_ACCESS_KEY>` as a Docker secret:
`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`

See the [restic S3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.

### SFTP Storage

> With SFTP it is not possible to prevent Backupbot from deleting backups in case of a compromised machine. We therefore recommend an S3, REST or rclone server without delete permissions.

To use SFTP storage as the backup location, set the following environment variables:
```
RESTIC_REPO=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```
To get the `SSH_HOST_KEY`, run `ssh-keyscan <hostname>`.

Generate an SSH keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`
Add the key to your `authorized_keys`:
`ssh-copy-id -i backupkey <user>@<hostname>`
Add your `SSH_KEY` as a Docker secret:
```
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
```

### Restic REST server Storage

You can simply set the `RESTIC_REPO` variable to your REST server URL, e.g. `rest:http://host:8000/`.
If you access the REST server with a password, e.g. `rest:https://user:pass@host:8000/`, you should hide the whole URL (it contains the password) inside a secret.
Uncomment these lines:
```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```
Add your REST server URL as a secret:
```
abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"
```
The secret will overwrite the `RESTIC_REPO` variable.

See the [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
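The reason the whole URL goes into a secret is that the password is embedded in the URL itself. A small sketch (a hypothetical helper of ours) shows which part would leak if the URL sat in a plain environment variable:

```python
from urllib.parse import urlsplit

def mask_password(repo_url):
    """Replace the password in a rest:https://user:pass@host/ URL with ***."""
    scheme_prefix, _, url = repo_url.partition(':')  # strip restic's 'rest:' prefix
    parts = urlsplit(url)
    if parts.password is None:
        return repo_url  # nothing secret embedded
    netloc = f"{parts.username}:***@{parts.hostname}"
    if parts.port:
        netloc += f":{parts.port}"
    return f"{scheme_prefix}:{parts.scheme}://{netloc}{parts.path}"

print(mask_password("rest:https://user:pass@host:8000/"))
```

Anything that dumps the environment (logs, `docker inspect`, crash reports) would expose the `pass` segment, which is exactly what routing the URL through `compose.secret.yml` avoids.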
## Usage

Create a backup of all apps:

`abra app run <app_name> app -- backup create`

> The apps to back up need to be deployed.

Create an individual backup:

`abra app run <app_name> app -- backup --host <target_app_name> create`

Create a backup to a local repository:

`abra app run <app_name> app -- backup create -r /backups/restic`

> It is recommended to shut down/undeploy an app before restoring the data.

Restore the latest snapshot of all included apps:

`abra app run <app_name> app -- backup restore`

Restore a specific snapshot of an individual app:

`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`

Show all snapshots:

`abra app run <app_name> app -- backup snapshots`

Show all snapshots containing a specific app:

`abra app run <app_name> app -- backup --host <target_app_name> snapshots`

Show all files inside the latest snapshot (can be very verbose):

`abra app run <app_name> app -- backup ls`

Show specific files inside a selected snapshot:

`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`

Download files from a snapshot:

```
filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <app_name> app:$filename .
```

## Recipe Configuration

Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        backupbot.backup.repos: "$some_thing"
        backupbot.backup.at: "* * * * *"
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /tmp/dump/dump.db'
        backupbot.backup.post-hook: "rm -rf /tmp/dump/dump.db"
        backupbot.backup.path: "/tmp/dump/"
```
v2:
```
deploy:
  labels:
    backupbot.backup: "true"
    backupbot.backup.repos: "$some_thing"
    backupbot.backup.at: "* * * * *"
    backupbot.backup.post-hook: "rm -rf /tmp/dump/dump.db"
    backupbot.backup.secrets: "db_root_password"
    backupbot.backup.pre-hook: 'mysqldump -u root -p"$DB_ROOT_PASSWORD" -f /tmp/dump/dump.db'
    backupbot.backup: ${BACKUP:-"true"}
    backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /volume_path/dump.db'
    backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```

## Questions:
- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional); save all dumps into the volumes
- `backupbot.backup.post-hook` -- command to run after copying files (optional)

- Should frequency be configurable per service, centrally, or both?

As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.

```
backupbot.backup.at: "* * * * *"
```
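Before executing a hook, backupbot wraps it in `bash -c 'set -o pipefail;…'` so a failure inside a pipe is not silently swallowed. A standalone sketch of that rewrite (the same string logic as `run_commands` in `backupbot.py`, extracted here for illustration):

```python
def wrap_hook(command):
    """Normalize a label hook and wrap it so pipe failures propagate."""
    # strip any shell wrapper the label author already added
    command = command.removeprefix("bash -c '").removeprefix("sh -c '")
    command = command.removesuffix("'")
    # re-wrap with pipefail enabled
    return f"bash -c 'set -o pipefail;{command}'"

print(wrap_hook('mysqldump -u root mydb | gzip > /tmp/dump/dump.sql.gz'))
```

Without `pipefail`, a pipeline like `mysqldump … | gzip` would report gzip's exit code even when the dump itself failed, and the backup would contain a truncated dump.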
[abra]: https://git.autonomic.zone/autonomic-cooperative/abra
3  abra.sh  Normal file
@@ -0,0 +1,3 @@
export ENTRYPOINT_VERSION=v1
export BACKUPBOT_VERSION=v1
export SSH_CONFIG_VERSION=v1
50  backup.sh
@@ -1,50 +0,0 @@
#!/bin/bash

# FIXME: just for testing
backup_path=backups

# FIXME: just for testing
export DOCKER_CONTEXT=demo.coopcloud.tech

mapfile -t services < <(docker service ls --format '{{ .Name }}')

# FIXME: just for testing
services=( "ghost_demo_app" "ghost_demo_db" )

for service in "${services[@]}"; do
  echo "service: $service"
  details=$(docker service inspect "$service" --format "{{ json .Spec.Labels }}")
  if echo "$details" | jq -r '.["backupbot.backup"]' | grep -q 'true'; then
    pre=$(echo "$details" | jq -r '.["backupbot.backup.pre-hook"]')
    post=$(echo "$details" | jq -r '.["backupbot.backup.post-hook"]')
    path=$(echo "$details" | jq -r '.["backupbot.backup.path"]')

    if [ "$path" = "null" ]; then
      echo "ERROR: missing 'path' for $service"
      continue # or maybe exit?
    fi

    container=$(docker container ls -f "name=$service" --format '{{ .ID }}')

    echo "backing up $service"
    test -d "$backup_path/$service" || mkdir "$backup_path/$service"

    if [ "$pre" != "null" ]; then
      # run the precommand
      # shellcheck disable=SC2086
      docker exec "$container" $pre
    fi

    # run the backup
    docker cp "$container:$path" "$backup_path/$service"

    if [ "$post" != "null" ]; then
      # run the postcommand
      # shellcheck disable=SC2086
      docker exec "$container" $post
    fi
  fi
  restic -p restic-password \
    backup --quiet -r sftp:u272979@u272979.your-storagebox.de:/demo.coopcloud.tech \
    --tag coop-cloud "$backup_path"
done
274  backupbot.py  Executable file
@@ -0,0 +1,274 @@
#!/usr/bin/python3

import os
import click
import json
import subprocess
import logging
import docker
import restic
from datetime import datetime, timezone
from restic.errors import ResticFailedError
from pathlib import Path
from shutil import copyfile, rmtree
# logging.basicConfig(level=logging.INFO)

VOLUME_PATH = "/var/lib/docker/volumes/"
SECRET_PATH = '/secrets/'
SERVICE = None


@click.group()
@click.option('-l', '--log', 'loglevel')
@click.option('service', '--host', '-h', envvar='SERVICE')
@click.option('repository', '--repo', '-r', envvar='RESTIC_REPO', required=True)
def cli(loglevel, service, repository):
    global SERVICE
    if service:
        SERVICE = service.replace('.', '_')
    if repository:
        os.environ['RESTIC_REPO'] = repository
    if loglevel:
        numeric_level = getattr(logging, loglevel.upper(), None)
        if not isinstance(numeric_level, int):
            raise ValueError('Invalid log level: %s' % loglevel)
        logging.basicConfig(level=numeric_level)
    export_secrets()
    init_repo()


def init_repo():
    repo = os.environ['RESTIC_REPO']
    logging.debug(f"set restic repository location: {repo}")
    restic.repository = repo
    restic.password_file = '/var/run/secrets/restic_password'
    try:
        restic.cat.config()
    except ResticFailedError as error:
        if 'unable to open config file' in str(error):
            result = restic.init()
            logging.info(f"Initialized restic repo: {result}")
        else:
            raise error


def export_secrets():
    for env in os.environ:
        if env.endswith('FILE') and "COMPOSE_FILE" not in env:
            logging.debug(f"exported secret: {env}")
            with open(os.environ[env]) as file:
                secret = file.read()
                os.environ[env.removesuffix('_FILE')] = secret
                # logging.debug(f"Read secret value: {secret}")


@cli.command()
def create():
    pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
    copy_secrets(apps)
    backup_paths.append(SECRET_PATH)
    run_commands(pre_commands)
    backup_volumes(backup_paths, apps)
    run_commands(post_commands)


def get_backup_cmds():
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    backup_paths = set()
    backup_apps = set()
    pre_commands = {}
    post_commands = {}
    services = client.services.list()
    for s in services:
        labels = s.attrs['Spec']['Labels']
        if (backup := labels.get('backupbot.backup')) and bool(backup):
            stack_name = labels['com.docker.stack.namespace']
            if SERVICE and SERVICE != stack_name:
                continue
            backup_apps.add(stack_name)
            container = container_by_service.get(s.name)
            if not container:
                logging.error(
                    f"Container {s.name} is not running, hooks can not be executed")
            if prehook := labels.get('backupbot.backup.pre-hook'):
                pre_commands[container] = prehook
            if posthook := labels.get('backupbot.backup.post-hook'):
                post_commands[container] = posthook
            backup_paths = backup_paths.union(
                Path(VOLUME_PATH).glob(f"{stack_name}_*"))
    return pre_commands, post_commands, list(backup_paths), list(backup_apps)


def copy_secrets(apps):
    rmtree(SECRET_PATH, ignore_errors=True)
    os.mkdir(SECRET_PATH)
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    services = client.services.list()
    for s in services:
        app_name = s.attrs['Spec']['Labels']['com.docker.stack.namespace']
        if (app_name in apps and
                (app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
            if not container_by_service.get(s.name):
                logging.error(
                    f"Container {s.name} is not running, secrets can not be copied.")
                continue
            container_id = container_by_service[s.name].id
            for sec in app_secs:
                src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
                dst = SECRET_PATH + sec['SecretName']
                copyfile(src, dst)


def run_commands(commands):
    for container, command in commands.items():
        if not command:
            continue
        # Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
        command = command.removeprefix('bash -c \'').removeprefix('sh -c \'')
        command = command.removesuffix('\'')
        command = f"bash -c 'set -o pipefail;{command}'"
        result = container.exec_run(command)
        logging.info(f"run command in {container.name}")
        logging.info(command)
        if result.exit_code:
            logging.error(
                f"Failed to run command {command} in {container.name}: {result.output.decode()}")
        else:
            logging.info(result.output.decode())


def backup_volumes(backup_paths, apps, dry_run=False):
    result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
    print(result)
    logging.info(result)


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('target', '--target', '-t', envvar='TARGET', default='/')
@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', default=False)
def restore(snapshot, target, noninteractive):
    # Todo: recommend to shutdown the container
    service_paths = VOLUME_PATH
    if SERVICE:
        service_paths = service_paths + f'{SERVICE}_*'
    snapshots = restic.snapshots(snapshot_id=snapshot)
    if not snapshots:
        logging.error(f"No snapshots with ID {snapshot}")
        exit(1)
    if not noninteractive:
        snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
        delta = datetime.now(tz=timezone.utc) - snapshot_date
        print(f"You are going to restore Snapshot {snapshot} of {service_paths} at {target}")
        print(f"This snapshot is {delta} old")
        print(f"THIS COMMAND WILL IRREVERSIBLY OVERWRITE {target}{service_paths.removeprefix('/')}")
        prompt = input("Type YES (uppercase) to continue: ")
        if prompt != 'YES':
            logging.error("Restore aborted")
            exit(1)
    print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
    result = restic.restore(snapshot_id=snapshot,
                            include=service_paths, target_dir=target)
    logging.debug(result)


@cli.command()
def snapshots():
    snapshots = restic.snapshots()
    for snap in snapshots:
        if not SERVICE or (tags := snap.get('tags')) and SERVICE in tags:
            print(snap['time'], snap['id'])


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
def ls(snapshot, path):
    results = list_files(snapshot, path)
    for r in results:
        if r.get('path'):
            print(f"{r['ctime']}\t{r['path']}")


def list_files(snapshot, path):
    cmd = restic.cat.base_command() + ['ls']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd.append(snapshot)
    if path:
        cmd.append(path)
    output = restic.internal.command_executor.execute(cmd)
    output = output.replace('}\n{', '}|{')
    results = list(map(json.loads, output.split('|')))
    return results


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
@click.option('volumes', '--volumes', '-v', is_flag=True)
@click.option('secrets', '--secrets', '-c', is_flag=True)
def download(snapshot, path, volumes, secrets):
    if sum(map(bool, [path, volumes, secrets])) != 1:
        logging.error("Please specify exactly one of '--path', '--volumes', '--secrets'")
        exit(1)
    if path:
        path = path.removesuffix('/')
        files = list_files(snapshot, path)
        filetype = [f.get('type') for f in files if f.get('path') == path][0]
        filename = "/tmp/" + Path(path).name
        if filetype == 'dir':
            filename = filename + ".tar"
        output = dump(snapshot, path)
        with open(filename, "wb") as file:
            file.write(output)
        print(filename)
    elif volumes:
        if not SERVICE:
            logging.error("Please specify '--host' when using '--volumes'")
            exit(1)
        filename = f"/tmp/{SERVICE}.tar"
        files = list_files(snapshot, VOLUME_PATH)
        for f in files[1:]:
            path = f['path']
            if SERVICE in path and f['type'] == 'dir':
                content = dump(snapshot, path)
                # Concatenate tar files (extract with tar -xi)
                with open(filename, "ab") as file:
                    file.write(content)
    elif secrets:
        if not SERVICE:
            logging.error("Please specify '--host' when using '--secrets'")
            exit(1)
        filename = f"/tmp/SECRETS_{SERVICE}.json"
        files = list_files(snapshot, SECRET_PATH)
        secrets = {}
        for f in files[1:]:
            path = f['path']
            if SERVICE in path and f['type'] == 'file':
                secret = dump(snapshot, path).decode()
                secret_name = path.removeprefix(f'{SECRET_PATH}{SERVICE}_')
                secrets[secret_name] = secret
        with open(filename, "w") as file:
            json.dump(secrets, file)
        print(filename)


def dump(snapshot, path):
    cmd = restic.cat.base_command() + ['dump']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd = cmd + [snapshot, path]
    logging.debug(f"Dumping {path} from snapshot '{snapshot}'")
    output = subprocess.run(cmd, capture_output=True)
    if output.returncode:
        logging.error(f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
        exit(1)
    return output.stdout


if __name__ == '__main__':
    cli()
14  compose.s3.yml  Normal file
@@ -0,0 +1,14 @@
---
version: "3.8"
services:
  app:
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY_FILE=/run/secrets/aws_secret_access_key
    secrets:
      - aws_secret_access_key

secrets:
  aws_secret_access_key:
    external: true
    name: ${STACK_NAME}_aws_secret_access_key_${SECRET_AWS_SECRET_ACCESS_KEY_VERSION}
13  compose.secret.yml  Normal file
@@ -0,0 +1,13 @@
---
version: "3.8"
services:
  app:
    environment:
      - RESTIC_REPO_FILE=/run/secrets/restic_repo
    secrets:
      - restic_repo

secrets:
  restic_repo:
    external: true
    name: ${STACK_NAME}_restic_repo_${SECRET_RESTIC_REPO_VERSION}
23  compose.ssh.yml  Normal file
@@ -0,0 +1,23 @@
---
version: "3.8"
services:
  app:
    environment:
      - SSH_KEY_FILE=/run/secrets/ssh_key
      - SSH_HOST_KEY
    secrets:
      - source: ssh_key
        mode: 0400
    configs:
      - source: ssh_config
        target: /root/.ssh/config

secrets:
  ssh_key:
    external: true
    name: ${STACK_NAME}_ssh_key_${SECRET_SSH_KEY_VERSION}

configs:
  ssh_config:
    name: ${STACK_NAME}_ssh_config_${SSH_CONFIG_VERSION}
    file: ssh_config
15  compose.swarm-cronjob.yml  Normal file
@@ -0,0 +1,15 @@
---
version: "3.8"
services:
  app:
    deploy:
      mode: replicated
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        # Note(3wc): every 5m, testing
        - "swarm.cronjob.schedule=*/5 * * * *"
        # Note(3wc): blank label to be picked up by `abra recipe sync`
      restart_policy:
        condition: none
    entrypoint: [ "/usr/bin/backup.sh" ]
54  compose.yml  Normal file
@@ -0,0 +1,54 @@
---
version: "3.8"
services:
  app:
    image: docker:24.0.2-dind
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/"
      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
      - backups:/backups
    environment:
      - CRON_SCHEDULE
      - RESTIC_REPO
      - RESTIC_PASSWORD_FILE=/run/secrets/restic_password
    secrets:
      - restic_password
    deploy:
      labels:
        - coop-cloud.${STACK_NAME}.version=0.1.0+latest
        - coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-300}
        - coop-cloud.backupbot.enabled=true
    configs:
      - source: entrypoint
        target: /entrypoint.sh
        mode: 0555
      - source: backupbot
        target: /usr/bin/backup
        mode: 0555
    entrypoint: ['/entrypoint.sh']
    deploy:
      labels:
        - "coop-cloud.backupbot.enabled=true"
    healthcheck:
      test: "pgrep crond"
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 5m

secrets:
  restic_password:
    external: true
    name: ${STACK_NAME}_restic_password_${SECRET_RESTIC_PASSWORD_VERSION}

volumes:
  backups:

configs:
  entrypoint:
    name: ${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}
    file: entrypoint.sh
  backupbot:
    name: ${STACK_NAME}_backupbot_${BACKUPBOT_VERSION}
    file: backupbot.py
20  entrypoint.sh  Normal file
@@ -0,0 +1,20 @@
#!/bin/sh

set -e -o pipefail

apk add --upgrade --no-cache restic bash python3 py3-pip

# Todo use requirements file with specific versions
pip install click==8.1.7 docker==6.1.3 resticpy==1.0.2

if [ -n "$SSH_HOST_KEY" ]
then
    echo "$SSH_HOST_KEY" > /root/.ssh/known_hosts
fi

cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"

echo "$cron_schedule backup create" | crontab -
crontab -l

crond -f -d8 -L /dev/stdout
3  release/1.0.0+latest  Normal file
@@ -0,0 +1,3 @@
Breaking Change: the variables `SERVER_NAME` and `RESTIC_HOST` are merged into `RESTIC_REPO`. The format can be looked up here: https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
ssh/sftp: `sftp:user@host:/repo-path`
S3: `s3:https://s3.example.com/bucket_name`
3  renovate.json  Normal file
@@ -0,0 +1,3 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json"
}
4  ssh_config  Normal file
@@ -0,0 +1,4 @@
Host *
    IdentityFile /run/secrets/ssh_key
    ServerAliveInterval 60
    ServerAliveCountMax 240