82 Commits

Author SHA1 Message Date
4c2304a962 feat: add backupbot label 2023-10-10 07:53:26 +02:00
69e7f07978 Merge pull request 'Backupbot Revolution' (#23) from backupbot_revolution into main
Reviewed-on: coop-cloud/backup-bot-two#23
2023-10-09 10:54:22 +00:00
d25688f312 add backupbot label 2023-10-09 12:53:28 +02:00
b3cbb8bb46 rm unused compose.https.yml as its replaced with compose.secret.yml 2023-10-04 19:11:42 +02:00
bb1237f9ad fix secret name 2023-10-04 19:08:47 +02:00
972a2c2314 extend download to download the secrets or all app volumes at once 2023-10-04 19:08:47 +02:00
4240318d20 remove package versions, to avoid conflicts 2023-10-04 19:08:47 +02:00
c3f3d1a6fe restic_repo as secret option #31 2023-10-04 19:08:39 +02:00
ab6c06d423 Prompt before restore 2023-10-04 19:07:57 +02:00
9398e0d83d release note for migration 2023-10-04 19:07:57 +02:00
6fc62b5516 fix typo 2023-10-04 19:07:57 +02:00
1f06af95eb fix error messages 2023-10-04 19:07:57 +02:00
15a552ef8b formatting 2023-10-04 19:07:57 +02:00
5d4def6143 feat: Backup Secrets (copy secrets) #28 2023-10-04 19:07:57 +02:00
ebc0ea5d84 small fixes 2023-10-04 19:07:57 +02:00
488c59f667 Revert "feat: Backup Secrets #28"
This reverts commit 2838a36d43f44f80aa76095863f463d6aae57403.
2023-10-04 19:07:57 +02:00
825565451a feat: Backup Secrets #28 2023-10-04 19:07:57 +02:00
6fa9440c76 fix restic version, timeout and cron default timer 2023-10-04 19:07:57 +02:00
33ce3c58aa fix entrypoint 2023-10-04 19:07:57 +02:00
06ad03c1d5 specify program versions to prevent future breakage 2023-10-04 19:07:57 +02:00
bd8398e7dd add healthcheck 2023-10-04 19:07:57 +02:00
75a93c5456 add sftp storage 2023-10-04 19:07:57 +02:00
d32337cf3a update README 2023-10-04 19:07:57 +02:00
61ffb67686 update README 2023-10-04 19:07:57 +02:00
a86ac15363 README 2023-10-04 19:07:57 +02:00
5fa8f821c1 choos specific restore target 2023-10-04 19:07:57 +02:00
203719c224 change repo per option 2023-10-04 19:07:57 +02:00
3009159c82 use latest snapshot as default 2023-10-04 19:07:57 +02:00
28334a4241 mount volumes read/write to restore backups 2023-10-04 19:07:57 +02:00
447a808849 initial rewrite 2023-10-04 19:07:57 +02:00
42ae6a6b9b remove unused traefik labels 2023-10-04 19:07:57 +02:00
3261d67dca mount volume ro 2023-10-04 19:07:38 +02:00
6355f3572f Backup volumes from host instead of copying paths
* Backupbot will now copy all volumes of a service with the
  backupbot.enabled = 'true' label directly from the
  /var/lib/docker/volumes/ path. This reduces the resource overhead of
  copying data from one volume to another.
  Recipes need to be adjusted so that db dumps are saved into a volume
  now!
* Remove the Dockerfile and move everything into an entrypoint. This
  simplifies the whole versioning thing and makes this "just"
  a recipe.

Co-authored-by: Moritz <moritz.m@local-it.org>
2023-10-04 19:07:16 +02:00
3wc
451c511554 Hopefully fix REMOVE_BACKUP_VOLUME_AFTER_UPLOAD 2023-09-28 10:18:18 +01:00
87d584e4e8 REALLY disable shellcheck 2023-09-26 16:48:29 +02:00
a171d9eea0 disable shellcheck 2023-09-26 16:45:58 +02:00
620ab4e3d7 add to .envrc.sample 2023-09-26 16:43:57 +02:00
3wc
83a3d82ea5 More HTTPS fixes 2023-09-19 15:45:37 +01:00
3wc
6450c80236 Add more HTTPS support 2023-09-19 15:40:20 +01:00
3wc
6f6a82153a Add HTTPS storage support 2023-09-19 15:39:56 +01:00
efc942c041 chore(deps): update docker docker tag to v24.0.6 2023-09-06 07:03:13 +00:00
0c4bc19e2a chore(deps): update docker docker tag to v24.0.5 2023-07-25 07:07:04 +00:00
dde9987de6 chore(deps): update docker docker tag to v24.0.4 2023-07-11 07:02:51 +00:00
5f734bc371 chore(deps): update docker docker tag to v24.0.3 2023-07-07 07:03:08 +00:00
27e2e61d7f chore(deps): update docker docker tag to v24.0.2 2023-05-29 07:03:02 +00:00
1bb1917e18 Merge pull request 'chore(deps): update docker docker tag to v24 (main)' (#14) from renovate/main-docker-24.x into main
Reviewed-on: coop-cloud/backup-bot-two#14
2023-05-28 14:23:14 +00:00
7b8b3b1acd chore(deps): update docker docker tag to v24 2023-05-22 07:06:36 +00:00
9c5ba87232 chore(deps): update docker docker tag to v23.0.6 2023-05-10 07:02:21 +00:00
9064bebb56 chore(deps): update docker docker tag to v23.0.5 2023-04-27 07:06:03 +00:00
4fdb585825 Merge pull request 'chore(deps): update docker docker tag to v23 (main)' (#11) from renovate/main-docker-23.x into main
Reviewed-on: coop-cloud/backup-bot-two#11
2023-04-24 08:17:36 +00:00
bde63b3f6f chore(deps): update docker docker tag to v23 2023-04-18 07:02:30 +00:00
92dfd23b26 feat: backupvolume can be pruned after upload 2023-03-01 13:29:00 +01:00
bab224ab96 chore(deps): update docker docker tag to v19.03.15 2023-01-19 08:03:49 +00:00
36928c34ac Merge pull request 'Configure Renovate' (#8) from renovate/configure into main
Reviewed-on: coop-cloud/backup-bot-two#8
2023-01-18 17:37:22 +00:00
9b324476c2 chore(deps): add renovate.json 2023-01-18 17:24:09 +00:00
7aa464e271 chore: publish 0.1.0+latest release 2022-10-11 15:28:50 +02:00
59c071734a add labels to get around abra checks 2022-10-11 15:24:35 +02:00
940b6bde1a Merge pull request 'backup multiple paths' (#5) from multi_path into main
Reviewed-on: coop-cloud/backup-bot-two#5
2021-12-14 14:44:05 +00:00
d6faffcbbd move rm up, to keep the latest backup in the volume 2021-12-13 11:37:52 +01:00
5a20ef4349 Merge branch 'main' into multi_path 2021-11-24 10:29:27 +00:00
ce42fb06fd fix docker cp paths 2021-11-24 11:17:13 +01:00
f2472bd0d3 make backup.path comma separated list 2021-11-23 17:38:31 +01:00
3wc
f7cbbf04c0 Goodbye, emojis! 😢
[ci skip]
2021-11-23 12:19:04 +02:00
3wc
ab03d2d7cc Ignore testing folder 2021-11-22 14:25:56 +02:00
3wc
32ba0041d1 chore: fix README bullet formatting
[ci skip]
2021-11-22 13:42:03 +02:00
3wc
394cc4f47c Mass README update
[ci skip]
2021-11-21 21:31:24 +02:00
53c4f1956a fix push docker image to correct destination 2021-11-16 12:14:21 +01:00
c750d6402f fix s3 restic_repo 2021-11-16 11:44:10 +01:00
b2e2fc9d13 fix typo 2021-11-16 11:43:47 +01:00
3wc
9e818ed021 Revert to previous, probably-working cron set-up 2021-11-11 12:05:04 +02:00
3wc
f6d1da8899 Working cron again, d'oh 2021-11-11 02:10:17 +02:00
3wc
d3e9001597 Add badge, tidy docs 2021-11-11 00:18:50 +02:00
3wc
f7db376377 Appease shellcheck 2021-11-11 00:02:44 +02:00
3wc
d6e90e04ba SSH_HOST_KEY_DISABLE, add drone pipline 2021-11-11 00:00:39 +02:00
3wc
a990dc27c7 SSH host keys, split out swarm-cronjob 2021-11-10 22:02:13 +02:00
3wc
721c393d2d Allow overriding cron schedule, fix vars 2021-11-10 21:17:12 +02:00
3wc
c9de239e93 Bash command line handling showdown 2021-11-09 15:25:43 +02:00
3wc
489ef570dd Fix AWS S3 settings 2021-11-09 14:30:19 +02:00
3wc
23b092776f More progress towards S3/SSH 2021-11-09 14:20:11 +02:00
3wc
ed76e6164b Work-in-progress: split S3 & SSH storage 2021-11-09 12:37:56 +02:00
3wc
f5e87f396a Update TODO list and tidy up README 2021-11-09 12:27:54 +02:00
3wc
8317f50a8a Variables, Dockerfile, better syntax, etc. 2021-11-06 19:45:39 +02:00
17 changed files with 647 additions and 82 deletions

.drone.yml (new file)
@@ -0,0 +1,12 @@
---
kind: pipeline
name: linters

steps:
  - name: run shellcheck
    image: koalaman/shellcheck-alpine
    commands:
      - shellcheck backup.sh

trigger:
  branch:
    - main

.env.sample (new file)
@@ -0,0 +1,29 @@
TYPE=backup-bot-two
SECRET_RESTIC_PASSWORD_VERSION=v1
COMPOSE_FILE=compose.yml
RESTIC_REPO=/backups/restic
CRON_SCHEDULE='30 3 * * *'
# swarm-cronjob, instead of built-in cron
#COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"
# SSH storage
#SECRET_SSH_KEY_VERSION=v1
#SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
#COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
# S3 storage
#SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
#AWS_ACCESS_KEY_ID=something-secret
#COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
# Secret restic repository
# use a secret to store the RESTIC_REPO if the repository location contains a secret value
# e.g. rest:https://user:SECRET_PASSWORD@host:8000/
# it overwrites the RESTIC_REPO variable
#SECRET_RESTIC_REPO_VERSION=v1
#COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"

.envrc.sample (new file)
@@ -0,0 +1,17 @@
export RESTIC_HOST="user@domain.tld"
export RESTIC_PASSWORD_FILE=/run/secrets/restic-password
export BACKUP_DEST=/backups
export SERVER_NAME=domain.tld
export DOCKER_CONTEXT=$SERVER_NAME
# uncomment either this:
#export SSH_KEY_FILE=~/.ssh/id_rsa
# or this:
#export AWS_SECRET_ACCESS_KEY_FILE=s3
#export AWS_ACCESS_KEY_ID=easter-october-emphatic-tug-urgent-customer
# or this:
#export HTTPS_PASSWORD_FILE=/run/secrets/https_password
# optionally limit subset of services for testing
#export SERVICES_OVERRIDE="ghost_domain_tld_app ghost_domain_tld_db"

.gitignore (new file)
@@ -0,0 +1 @@
/testing

README.md
@@ -1,45 +1,175 @@
# Backupbot II: This Time It's Easily Configurable
# Backupbot II
Automatically backup files from running Docker Swarm services based on labels.
[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)
## TODO
_This Time, It's Easily Configurable_
- [ ] Make a Docker image of this
- [ ] Rip out or improve Restic stuff
- [ ] Add secret handling for database backups
- [ ] Continuous linting with shellcheck
Automatically takes backups of all volumes of running Docker Swarm services and runs pre- and post-commands.
## Label format
<!-- metadata -->
(Haven't done secrets yet, here are two options)
* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A
v1:
<!-- endmetadata -->
## Background
There are lots of Docker volume backup systems; all of them have one or both of these limitations:
- You need to define all the volumes to back up in the configuration system
- Backups require services to be stopped to take consistent copies
Backupbot II tries to help, by
1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.
## Deployment
### With Co-op Cloud
* `abra app new backup-bot-two`
* `abra app config <app-name>`
- set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app_name>`
* `abra app deploy <app-name>`
## Configuration
By default, Backupbot stores backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`.
The backup location can be changed using the `RESTIC_REPO` env variable.
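For example, a hypothetical `.env` entry (the path is a placeholder) that keeps the repository inside the `backups` volume but under a different directory could look like this:
```
RESTIC_REPO=/backups/my-restic-repo
```
The following sections show how to point `RESTIC_REPO` at remote storage instead.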
### S3 Storage
To use S3 storage as the backup location, set the following env variables:
```
RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```
and add your `<SECRET_ACCESS_KEY>` as a Docker secret:
`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
### SFTP Storage
> With SFTP it is not possible to prevent Backupbot from deleting backups if the machine running it is compromised. We therefore recommend using an S3, REST or rclone backend without delete permissions.
To use SFTP storage as the backup location, set the following env variables:
```
RESTIC_REPO=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```
To get the `SSH_HOST_KEY`, run `ssh-keyscan <hostname>`.
Generate an ssh keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`
Add the key to your `authorized_keys`:
`ssh-copy-id -i backupkey <user>@<hostname>`
Add your `SSH_KEY` as docker secret:
```
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
```
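Taken together, a hypothetical SFTP setup session (hostnames, user and app name are placeholders) might look like this:
```
ssh-keyscan <hostname>                      # copy one line of the output into SSH_HOST_KEY
ssh-keygen -t ed25519 -f backupkey -P ''    # dedicated keypair for the backup bot
ssh-copy-id -i backupkey <user>@<hostname>  # authorize the public key on the storage host
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
abra app deploy <app_name>
```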
### Restic REST server Storage
You can simply set the `RESTIC_REPO` variable to your REST server URL `rest:http://host:8000/`.
If you access the REST server with a password (`rest:https://user:pass@host:8000/`), you should hide the whole URL, including the password, inside a secret.
Uncomment these lines:
```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```
Add your REST server URL as a secret:
```
abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"
```
The secret will overwrite the `RESTIC_REPO` variable.
See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
## Usage
Create a backup of all apps:
`abra app run <app_name> app -- backup create`
> The apps to be backed up need to be deployed
Create an individual backup:
`abra app run <app_name> app -- backup --host <target_app_name> create`
Create a backup to a local repository:
`abra app run <app_name> app -- backup create -r /backups/restic`
> It is recommended to shut down/undeploy an app before restoring its data
Restore the latest snapshot of all included apps:
`abra app run <app_name> app -- backup restore`
Restore a specific snapshot of an individual app:
`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
Show all snapshots:
`abra app run <app_name> app -- backup snapshots`
Show all snapshots containing a specific app:
`abra app run <app_name> app -- backup --host <target_app_name> snapshots`
Show all files inside the latest snapshot (can be very verbose):
`abra app run <app_name> app -- backup ls`
Show specific files inside a selected snapshot:
`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
Download files from a snapshot:
```
filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <app_name> app:$filename .
```
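The `--volumes` and `--secrets` flags of `download` work the same way but require `--host` (`--secrets` writes the app's secrets to a JSON file instead). A possible session (app names are placeholders) could look like this:
```
# all volumes of one app, concatenated into a single tar archive
filename=$(abra app run <app_name> app -- backup --host <target_app_name> download --volumes)
abra app cp <app_name> app:$filename .
tar -xif "$(basename "$filename")"   # concatenated tar archives need -i to extract fully
```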
## Recipe Configuration
Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        backupbot.backup.repos: "$some_thing"
        backupbot.backup.at: "* * * * *"
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /tmp/dump/dump.db'
        backupbot.backup.post-hook: "rm -rf /tmp/dump/dump.db"
        backupbot.backup.path: "/tmp/dump/"
```
v2:
```
deploy:
labels:
backupbot.backup: "true"
backupbot.backup.repos: "$some_thing"
backupbot.backup.at: "* * * * *"
backupbot.backup.post-hook: "rm -rf /tmp/dump/dump.db"
backupbot.backup.secrets": "db_root_password",
backupbot.backup.pre-hook: 'mysqldump -u root -p"$DB_ROOT_PASSWORD" -f /tmp/dump/dump.db'
backupbot.backup: ${BACKUP:-"true"}
backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /volume_path/dump.db'
backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```
## Questions:
- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional); make sure any dumps it creates are saved into a volume
- `backupbot.backup.post-hook` -- command to run after copying files (optional)
- Should frequency be configurable per service, centrally, or both?
As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
```
- "backupbot.backup.at: "* * * * *"
```
[abra]: https://git.autonomic.zone/autonomic-cooperative/abra

abra.sh (new file)
@@ -0,0 +1,3 @@
export ENTRYPOINT_VERSION=v1
export BACKUPBOT_VERSION=v1
export SSH_CONFIG_VERSION=v1

backup.sh (deleted)
@@ -1,50 +0,0 @@
#!/bin/bash

# FIXME: just for testing
backup_path=backups

# FIXME: just for testing
export DOCKER_CONTEXT=demo.coopcloud.tech

mapfile -t services < <(docker service ls --format '{{ .Name }}')

# FIXME: just for testing
services=( "ghost_demo_app" "ghost_demo_db" )

for service in "${services[@]}"; do
  echo "service: $service"

  details=$(docker service inspect "$service" --format "{{ json .Spec.Labels }}")

  if echo "$details" | jq -r '.["backupbot.backup"]' | grep -q 'true'; then
    pre=$(echo "$details" | jq -r '.["backupbot.backup.pre-hook"]')
    post=$(echo "$details" | jq -r '.["backupbot.backup.post-hook"]')
    path=$(echo "$details" | jq -r '.["backupbot.backup.path"]')

    if [ "$path" = "null" ]; then
      echo "ERROR: missing 'path' for $service"
      continue # or maybe exit?
    fi

    container=$(docker container ls -f "name=$service" --format '{{ .ID }}')

    echo "backing up $service"

    test -d "$backup_path/$service" || mkdir "$backup_path/$service"

    if [ "$pre" != "null" ]; then
      # run the precommand
      # shellcheck disable=SC2086
      docker exec "$container" $pre
    fi

    # run the backup
    docker cp "$container:$path" "$backup_path/$service"

    if [ "$post" != "null" ]; then
      # run the postcommand
      # shellcheck disable=SC2086
      docker exec "$container" $post
    fi
  fi

  restic -p restic-password \
    backup --quiet -r sftp:u272979@u272979.your-storagebox.de:/demo.coopcloud.tech \
    --tag coop-cloud "$backup_path"
done

backupbot.py (new executable file)
@@ -0,0 +1,274 @@
#!/usr/bin/python3

import os
import click
import json
import subprocess
import logging
import docker
import restic
from datetime import datetime, timezone
from restic.errors import ResticFailedError
from pathlib import Path
from shutil import copyfile, rmtree

# logging.basicConfig(level=logging.INFO)

VOLUME_PATH = "/var/lib/docker/volumes/"
SECRET_PATH = '/secrets/'
SERVICE = None


@click.group()
@click.option('-l', '--log', 'loglevel')
@click.option('service', '--host', '-h', envvar='SERVICE')
@click.option('repository', '--repo', '-r', envvar='RESTIC_REPO', required=True)
def cli(loglevel, service, repository):
    global SERVICE
    if service:
        SERVICE = service.replace('.', '_')
    if repository:
        os.environ['RESTIC_REPO'] = repository
    if loglevel:
        numeric_level = getattr(logging, loglevel.upper(), None)
        if not isinstance(numeric_level, int):
            raise ValueError('Invalid log level: %s' % loglevel)
        logging.basicConfig(level=numeric_level)
    export_secrets()
    init_repo()


def init_repo():
    repo = os.environ['RESTIC_REPO']
    logging.debug(f"set restic repository location: {repo}")
    restic.repository = repo
    restic.password_file = '/var/run/secrets/restic_password'
    try:
        restic.cat.config()
    except ResticFailedError as error:
        if 'unable to open config file' in str(error):
            result = restic.init()
            logging.info(f"Initialized restic repo: {result}")
        else:
            raise error


def export_secrets():
    for env in os.environ:
        if env.endswith('FILE') and not "COMPOSE_FILE" in env:
            logging.debug(f"exported secret: {env}")
            with open(os.environ[env]) as file:
                secret = file.read()
                os.environ[env.removesuffix('_FILE')] = secret
                # logging.debug(f"Read secret value: {secret}")


@cli.command()
def create():
    pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
    copy_secrets(apps)
    backup_paths.append(SECRET_PATH)
    run_commands(pre_commands)
    backup_volumes(backup_paths, apps)
    run_commands(post_commands)


def get_backup_cmds():
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    backup_paths = set()
    backup_apps = set()
    pre_commands = {}
    post_commands = {}
    services = client.services.list()
    for s in services:
        labels = s.attrs['Spec']['Labels']
        if (backup := labels.get('backupbot.backup')) and bool(backup):
            stack_name = labels['com.docker.stack.namespace']
            if SERVICE and SERVICE != stack_name:
                continue
            backup_apps.add(stack_name)
            container = container_by_service.get(s.name)
            if not container:
                logging.error(
                    f"Container {s.name} is not running, hooks can not be executed")
            if prehook := labels.get('backupbot.backup.pre-hook'):
                pre_commands[container] = prehook
            if posthook := labels.get('backupbot.backup.post-hook'):
                post_commands[container] = posthook
            backup_paths = backup_paths.union(
                Path(VOLUME_PATH).glob(f"{stack_name}_*"))
    return pre_commands, post_commands, list(backup_paths), list(backup_apps)


def copy_secrets(apps):
    rmtree(SECRET_PATH, ignore_errors=True)
    os.mkdir(SECRET_PATH)
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    services = client.services.list()
    for s in services:
        app_name = s.attrs['Spec']['Labels']['com.docker.stack.namespace']
        if (app_name in apps and
                (app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
            if not container_by_service.get(s.name):
                logging.error(
                    f"Container {s.name} is not running, secrets can not be copied.")
                continue
            container_id = container_by_service[s.name].id
            for sec in app_secs:
                src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
                dst = SECRET_PATH + sec['SecretName']
                copyfile(src, dst)


def run_commands(commands):
    for container, command in commands.items():
        if not command:
            continue
        # Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
        command = command.removeprefix('bash -c \'').removeprefix('sh -c \'')
        command = command.removesuffix('\'')
        command = f"bash -c 'set -o pipefail;{command}'"
        result = container.exec_run(command)
        logging.info(f"run command in {container.name}")
        logging.info(command)
        if result.exit_code:
            logging.error(
                f"Failed to run command {command} in {container.name}: {result.output.decode()}")
        else:
            logging.info(result.output.decode())


def backup_volumes(backup_paths, apps, dry_run=False):
    result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
    print(result)
    logging.info(result)


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('target', '--target', '-t', envvar='TARGET', default='/')
@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', default=False)
def restore(snapshot, target, noninteractive):
    # Todo: recommend to shutdown the container
    service_paths = VOLUME_PATH
    if SERVICE:
        service_paths = service_paths + f'{SERVICE}_*'
    snapshots = restic.snapshots(snapshot_id=snapshot)
    if not snapshot:
        logging.error("No Snapshots with ID {snapshots}")
        exit(1)
    if not noninteractive:
        snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
        delta = datetime.now(tz=timezone.utc) - snapshot_date
        print(f"You are going to restore Snapshot {snapshot} of {service_paths} at {target}")
        print(f"This snapshot is {delta} old")
        print(f"THIS COMMAND WILL IRREVERSIBLY OVERWRITES {target}{service_paths.removeprefix('/')}")
        prompt = input("Type YES (uppercase) to continue: ")
        if prompt != 'YES':
            logging.error("Restore aborted")
            exit(1)
    print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
    result = restic.restore(snapshot_id=snapshot,
                            include=service_paths, target_dir=target)
    logging.debug(result)


@cli.command()
def snapshots():
    snapshots = restic.snapshots()
    for snap in snapshots:
        if not SERVICE or (tags := snap.get('tags')) and SERVICE in tags:
            print(snap['time'], snap['id'])


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
def ls(snapshot, path):
    results = list_files(snapshot, path)
    for r in results:
        if r.get('path'):
            print(f"{r['ctime']}\t{r['path']}")


def list_files(snapshot, path):
    cmd = restic.cat.base_command() + ['ls']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd.append(snapshot)
    if path:
        cmd.append(path)
    output = restic.internal.command_executor.execute(cmd)
    output = output.replace('}\n{', '}|{')
    results = list(map(json.loads, output.split('|')))
    return results


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
@click.option('volumes', '--volumes', '-v', is_flag=True)
@click.option('secrets', '--secrets', '-c', is_flag=True)
def download(snapshot, path, volumes, secrets):
    if sum(map(bool, [path, volumes, secrets])) != 1:
        logging.error("Please specify exactly one of '--path', '--volumes', '--secrets'")
        exit(1)
    if path:
        path = path.removesuffix('/')
        files = list_files(snapshot, path)
        filetype = [f.get('type') for f in files if f.get('path') == path][0]
        filename = "/tmp/" + Path(path).name
        if filetype == 'dir':
            filename = filename + ".tar"
        output = dump(snapshot, path)
        with open(filename, "wb") as file:
            file.write(output)
        print(filename)
    elif volumes:
        if not SERVICE:
            logging.error("Please specify '--host' when using '--volumes'")
            exit(1)
        filename = f"/tmp/{SERVICE}.tar"
        files = list_files(snapshot, VOLUME_PATH)
        for f in files[1:]:
            path = f['path']
            if SERVICE in path and f['type'] == 'dir':
                content = dump(snapshot, path)
                # Concatenate tar files (extract with tar -xi)
                with open(filename, "ab") as file:
                    file.write(content)
    elif secrets:
        if not SERVICE:
            logging.error("Please specify '--host' when using '--secrets'")
            exit(1)
        filename = f"/tmp/SECRETS_{SERVICE}.json"
        files = list_files(snapshot, SECRET_PATH)
        secrets = {}
        for f in files[1:]:
            path = f['path']
            if SERVICE in path and f['type'] == 'file':
                secret = dump(snapshot, path).decode()
                secret_name = path.removeprefix(f'{SECRET_PATH}{SERVICE}_')
                secrets[secret_name] = secret
        with open(filename, "w") as file:
            json.dump(secrets, file)
        print(filename)


def dump(snapshot, path):
    cmd = restic.cat.base_command() + ['dump']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd = cmd + [snapshot, path]
    logging.debug(f"Dumping {path} from snapshot '{snapshot}'")
    output = subprocess.run(cmd, capture_output=True)
    if output.returncode:
        logging.error(f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
        exit(1)
    return output.stdout


if __name__ == '__main__':
    cli()

compose.s3.yml (new file)
@@ -0,0 +1,14 @@
---
version: "3.8"

services:
  app:
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY_FILE=/run/secrets/aws_secret_access_key
    secrets:
      - aws_secret_access_key

secrets:
  aws_secret_access_key:
    external: true
    name: ${STACK_NAME}_aws_secret_access_key_${SECRET_AWS_SECRET_ACCESS_KEY_VERSION}

compose.secret.yml (new file)
@@ -0,0 +1,13 @@
---
version: "3.8"

services:
  app:
    environment:
      - RESTIC_REPO_FILE=/run/secrets/restic_repo
    secrets:
      - restic_repo

secrets:
  restic_repo:
    external: true
    name: ${STACK_NAME}_restic_repo_${SECRET_RESTIC_REPO_VERSION}

compose.ssh.yml (new file)
@@ -0,0 +1,23 @@
---
version: "3.8"

services:
  app:
    environment:
      - SSH_KEY_FILE=/run/secrets/ssh_key
      - SSH_HOST_KEY
    secrets:
      - source: ssh_key
        mode: 0400
    configs:
      - source: ssh_config
        target: /root/.ssh/config

secrets:
  ssh_key:
    external: true
    name: ${STACK_NAME}_ssh_key_${SECRET_SSH_KEY_VERSION}

configs:
  ssh_config:
    name: ${STACK_NAME}_ssh_config_${SSH_CONFIG_VERSION}
    file: ssh_config

compose.swarm-cronjob.yml (new file)
@@ -0,0 +1,15 @@
---
version: "3.8"

services:
  app:
    deploy:
      mode: replicated
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        # Note(3wc): every 5m, testing
        - "swarm.cronjob.schedule=*/5 * * * *"
        # Note(3wc): blank label to be picked up by `abra recipe sync`
      restart_policy:
        condition: none
    entrypoint: [ "/usr/bin/backup.sh" ]

compose.yml (new file)
@@ -0,0 +1,54 @@
---
version: "3.8"

services:
  app:
    image: docker:24.0.2-dind
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/"
      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
      - backups:/backups
    environment:
      - CRON_SCHEDULE
      - RESTIC_REPO
      - RESTIC_PASSWORD_FILE=/run/secrets/restic_password
    secrets:
      - restic_password
    deploy:
      labels:
        - coop-cloud.${STACK_NAME}.version=0.1.0+latest
        - coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-300}
        - coop-cloud.backupbot.enabled=true
    configs:
      - source: entrypoint
        target: /entrypoint.sh
        mode: 0555
      - source: backupbot
        target: /usr/bin/backup
        mode: 0555
    entrypoint: ['/entrypoint.sh']
    deploy:
      labels:
        - "coop-cloud.backupbot.enabled=true"
    healthcheck:
      test: "pgrep crond"
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 5m

secrets:
  restic_password:
    external: true
    name: ${STACK_NAME}_restic_password_${SECRET_RESTIC_PASSWORD_VERSION}

volumes:
  backups:

configs:
  entrypoint:
    name: ${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}
    file: entrypoint.sh
  backupbot:
    name: ${STACK_NAME}_backupbot_${BACKUPBOT_VERSION}
    file: backupbot.py

entrypoint.sh (new file)
@@ -0,0 +1,20 @@
#!/bin/sh

set -e -o pipefail

apk add --upgrade --no-cache restic bash python3 py3-pip

# Todo use requirements file with specific versions
pip install click==8.1.7 docker==6.1.3 resticpy==1.0.2

if [ -n "$SSH_HOST_KEY" ]
then
    echo "$SSH_HOST_KEY" > /root/.ssh/known_hosts
fi

cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"

echo "$cron_schedule backup create" | crontab -
crontab -l

crond -f -d8 -L /dev/stdout

release/1.0.0+latest (new file)
@@ -0,0 +1,3 @@
Breaking Change: the variables `SERVER_NAME` and `RESTIC_HOST` are merged into `RESTIC_REPO`. The format can be looked up here: https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
ssh/sftp: `sftp:user@host:/repo-path`
S3: `s3:https://s3.example.com/bucket_name`
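A hypothetical migration of an existing env file (hostnames are placeholders) could look like this:
# before: separate host variables
#SERVER_NAME=domain.tld
#RESTIC_HOST=user@domain.tld
# after: one restic repository URL
RESTIC_REPO=sftp:user@domain.tld:/restic-repo-path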

renovate.json (new file)
@@ -0,0 +1,3 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json"
}

ssh_config (new file)
@@ -0,0 +1,4 @@
Host *
    IdentityFile /run/secrets/ssh_key
    ServerAliveInterval 60
    ServerAliveCountMax 240