4 Commits

SHA1 Message Date
e360c3d8f8 remove unused traefik labels 2023-06-13 14:34:53 +02:00
50b317c12d fix shellcheck prevent globbing 2023-06-13 11:57:42 +02:00
9cb3a469f3 mount volume ro 2023-06-13 11:48:25 +02:00
24d2c0e85b Backup volumes from host instead of copying paths
* Backupbot will now copy all volumes from a service with
  backupbot.enabled = 'true' label from the /var/lib/docker/volumes/
  path directly. This reduces the resource overhead of copying
  stuff from one volume to another.
  Recipes need to be adjusted so that db-dumps are saved into a
  volume now!
* Remove the Dockerfile and move stuff into an entrypoint. This
  simplifies the whole versioning thing and makes this "just"
  a recipe.

Co-authored-by: Moritz <moritz.m@local-it.org>
2023-06-05 11:15:54 +02:00
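
The recipe adjustment the first bullet calls for might look like the following sketch, using the `backupbot.backup` labels that `backup.sh` in this changeset reads (the commit message's `backupbot.enabled` name differs); the service name, dump command, and paths are hypothetical:

```bash
# Hypothetical example: point a db-dump pre-hook at a path that lives on a
# volume, so the dump lands under /var/lib/docker/volumes/<stack>_* and is
# picked up by the volume backup.
docker service update \
  --label-add "backupbot.backup=true" \
  --label-add "backupbot.backup.pre-hook=mysqldump --all-databases > /var/lib/mysql/dump.sql" \
  myapp_db
```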
15 changed files with 302 additions and 539 deletions

12
.drone.yml Normal file

@@ -0,0 +1,12 @@
---
kind: pipeline
name: linters
steps:
  - name: run shellcheck
    image: koalaman/shellcheck-alpine
    commands:
      - shellcheck backup.sh
trigger:
  branch:
    - main
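
To run the same check locally before pushing, one option is ShellCheck's official image (tag and mount path per the ShellCheck docs; treat as a sketch):

```bash
docker run --rm -v "$PWD:/mnt" koalaman/shellcheck:stable backup.sh
```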

.env.sample

@@ -1,10 +1,24 @@
-STACK_NAME=backup-bot-two
-RESTIC_REPOSITORY=/backups/restic
-CRON_SCHEDULE='30 3 * * *'
-# Push Notifications
-#PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
-#PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
-#PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
+TYPE=backup-bot-two
+SECRET_RESTIC_PASSWORD_VERSION=v1
+COMPOSE_FILE=compose.yml
+SERVER_NAME=example.com
+RESTIC_HOST=minio.example.com
+CRON_SCHEDULE='*/5 * * * *'
+REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
+# swarm-cronjob, instead of built-in cron
+#COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"
+# SSH storage
+#SECRET_SSH_KEY_VERSION=v1
+#SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
+#COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
+# S3 storage
+#SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
+#AWS_ACCESS_KEY_ID=something-secret
+#COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"

15
.envrc.sample Normal file

@@ -0,0 +1,15 @@
export RESTIC_HOST="user@domain.tld"
export RESTIC_PASSWORD_FILE=/run/secrets/restic-password
export BACKUP_DEST=/backups
export SERVER_NAME=domain.tld
export DOCKER_CONTEXT=$SERVER_NAME
# uncomment either this:
#export SSH_KEY_FILE=~/.ssh/id_rsa
# or this:
#export AWS_SECRET_ACCESS_KEY_FILE=s3
#export AWS_ACCESS_KEY_ID=easter-october-emphatic-tug-urgent-customer
# optionally limit subset of services for testing
#export SERVICES_OVERRIDE="ghost_domain_tld_app ghost_domain_tld_db"
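
A minimal sketch of the direnv flow these variables assume (the `docker context` check is an extra sanity step):

```bash
cp .envrc.sample .envrc
direnv allow        # approve .envrc so the exports load on cd
docker context ls   # DOCKER_CONTEXT should name one of these contexts
```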

2
.gitignore vendored

@@ -1 +1 @@
-.env
+/testing

136
README.md

@@ -1,115 +1,48 @@
 # Backupbot II
-Wiki Cafe's configuration for a Backupbot II deployment. Originally slimmed down from an `abra` [recipe](https://git.coopcloud.tech/coop-cloud/backup-bot-two) by [Co-op Cloud](https://coopcloud.tech/).
-## Deploying the app with Docker Swarm
-Set the environment variables from the .env file during the shell session.
-```
-set -a && source .env && set +a
-```
-Set the secrets.
-```
-printf "SECRET_HERE" | docker secret create SECRET_NAME -
-```
-Deploy using the `-c` flag to specify one or multiple compose files.
-```
-docker stack deploy backup-bot-two -c compose.yaml
-```
-## Push notifications
-The following env variables can be used to set up push notifications for backups. `PUSH_URL_START` is requested just before the backup starts, `PUSH_URL_SUCCESS` is only requested if the backup was successful, and if the backup fails, `PUSH_URL_FAIL` will be requested.
-Each variable is optional and independent of the others.
-```
-PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
-PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
-PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
-```
-## Commands
-1. Find the ID or name of the backup container:
-```
-docker ps --filter "name=backup-bot-two_app"
-```
-2. Run the desired command using `docker exec`:
-```
-docker exec -it <container_id_or_name> backup <command> [options]
-```
-Replace `<container_id_or_name>` with the ID or name of the backup container.
-Available commands:
-- `create`: Initiate the backup process.
-- `restore`: Restore a specific snapshot to a target directory.
-- `snapshots`: List available snapshots.
-- `ls`: List files in a specific snapshot.
-- `download`: Download specific files, volumes, or secrets from a snapshot.
-Options:
-- `--host`, `-h`: Specify the service name (e.g., `app`).
-- `--repo`, `-r`: Specify the Restic repository location (e.g., `/run/secrets/restic_repo`).
-- `--log`, `-l`: Set the log level (e.g., `debug`, `info`, `warning`, `error`).
-- `--machine-logs`, `-m`: Enable machine-readable JSON logging.
-## Examples
-Create a backup:
-```
-docker exec -it <container_id_or_name> backup create --host app
-```
-Restore a snapshot:
-```
-docker exec -it <container_id_or_name> backup restore --snapshot <snapshot_id> --target /path/to/restore
-```
-List snapshots:
-```
-docker exec -it <container_id_or_name> backup snapshots
-```
-List files in a snapshot:
-```
-docker exec -it <container_id_or_name> backup ls --snapshot <snapshot_id> --path /path/to/directory
-```
-Download files, volumes, or secrets from a snapshot:
-```
-docker exec -it <container_id_or_name> backup download --snapshot <snapshot_id> [--path /path/to/file] [--volumes] [--secrets]
-```
-Note: Make sure to replace `<container_id_or_name>` and `<snapshot_id>` with the appropriate values for your setup.
-Remember to review and adjust the Docker Compose file and environment variables according to your specific requirements before running the backup commands.
-When using `docker exec`, you don't need to specify the volume mounts or the Restic repository location as command-line arguments because they are already defined in the Docker Compose file and are available within the running container.
-If you need to access the downloaded files, volumes, or secrets from the backup, you can use `docker cp` to copy them from the container to the host machine:
-```
-docker cp <container_id_or_name>:/path/to/backup/file /path/on/host
-```
-This allows you to retrieve the backed-up data from the container.
-## Recipe Configuration
-Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
+[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/backup-bot-two/status.svg)](https://build.coopcloud.tech/coop-cloud/backup-bot-two)
+_This Time, It's Easily Configurable_
+Automatically takes backups from all volumes of running Docker Swarm services, and runs pre- and post-commands.
+## Background
+There are lots of Docker volume backup systems; all of them have one or both of these limitations:
+- You need to define all the volumes to back up in the configuration system
+- Backups require services to be stopped to take consistent copies
+Backupbot II tries to help by:
+1. **letting you define backups using Docker labels**, so you can **easily collect your backups for use with another system** like docker-volume-backup.
+2. **running pre- and post-commands** before and after backups, for example to use database tools to take a backup from a running service.
+## Deployment
+### With Co-op Cloud
+1. Set up Docker Swarm and [`abra`][abra]
+2. `abra app new backup-bot-two`
+3. `abra app config <your-app-name>`, and set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
+4. `abra app secret generate <your-app-name> restic-password v1`, optionally with `--pass` before `<your-app-name>` to save the generated secret in `pass`.
+5. `abra app secret insert <your-app-name> ssh-key v1 ...` or similar, to load required secrets.
+6. `abra app deploy <your-app-name>`
+<!-- metadata -->
+* **Category**: Utilities
+* **Status**: 0, work-in-progress
+* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
+* **Healthcheck**: No
+* **Backups**: N/A
+* **Email**: N/A
+* **Tests**: No
+* **SSO**: N/A
+<!-- endmetadata -->
+## Configuration
+Like Traefik, or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
 ```
 services:
@@ -126,3 +59,12 @@ services:
 - `backupbot.backup.post-hook` -- command to run after copying files (optional)
 As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
+## Development
+1. Install `direnv`
+2. `cp .envrc.sample .envrc`
+3. Edit `.envrc` as appropriate, including setting `DOCKER_CONTEXT` to a remote Docker context, if you're not running a swarm server locally.
+4. Run `./backup.sh` -- you can add the `--skip-backup` or `--skip-upload` options if you just want to test one step or the other.
+[abra]: https://git.autonomic.zone/autonomic-cooperative/abra
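
A minimal sketch of checking which running services carry the `backupbot.backup` label that the Configuration section above describes (assumes `jq` on the host):

```bash
# List services and report which ones the backup run would include
for svc in $(docker service ls --format '{{ .Name }}'); do
  labels=$(docker service inspect "$svc" --format '{{ json .Spec.Labels }}')
  if echo "$labels" | jq -e '.["backupbot.backup"] == "true"' >/dev/null; then
    echo "$svc will be backed up"
  fi
done
```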

2
abra.sh Normal file

@@ -0,0 +1,2 @@
export ENTRYPOINT_VERSION=v1
export BACKUP_VERSION=v1

119
backup.sh Executable file

@@ -0,0 +1,119 @@
#!/bin/bash

set -e

server_name="${SERVER_NAME:?SERVER_NAME not set}"
restic_password_file="${RESTIC_PASSWORD_FILE:?RESTIC_PASSWORD_FILE not set}"
restic_host="${RESTIC_HOST:?RESTIC_HOST not set}"

backup_paths=()
# shellcheck disable=SC2153
ssh_key_file="${SSH_KEY_FILE}"
s3_key_file="${AWS_SECRET_ACCESS_KEY_FILE}"

restic_repo=
restic_extra_options=

if [ -n "$ssh_key_file" ] && [ -f "$ssh_key_file" ]; then
    restic_repo="sftp:$restic_host:/$server_name"

    # Only check server against provided SSH_HOST_KEY, if set
    if [ -n "$SSH_HOST_KEY" ]; then
        tmpfile=$(mktemp)
        echo "$SSH_HOST_KEY" >>"$tmpfile"
        echo "using host key $SSH_HOST_KEY"
        ssh_options="-o 'UserKnownHostsFile $tmpfile'"
    elif [ "$SSH_HOST_KEY_DISABLE" = "1" ]; then
        echo "disabling SSH host key checking"
        ssh_options="-o 'StrictHostKeyChecking=No'"
    else
        echo "neither SSH_HOST_KEY nor SSH_HOST_KEY_DISABLE set"
    fi
    restic_extra_options="sftp.command=ssh $ssh_options -i $ssh_key_file $restic_host -s sftp"
fi

if [ -n "$s3_key_file" ] && [ -f "$s3_key_file" ] && [ -n "$AWS_ACCESS_KEY_ID" ]; then
    AWS_SECRET_ACCESS_KEY="$(cat "${s3_key_file}")"
    export AWS_SECRET_ACCESS_KEY
    restic_repo="s3:$restic_host:/$server_name"
fi

if [ -z "$restic_repo" ]; then
    echo "you must configure either SFTP or S3 storage, see README"
    exit 1
fi

echo "restic_repo: $restic_repo"

# Pre-bake-in some default restic options
_restic() {
    if [ -z "$restic_extra_options" ]; then
        # shellcheck disable=SC2068
        restic -p "$restic_password_file" \
            --quiet -r "$restic_repo" \
            $@
    else
        # shellcheck disable=SC2068
        restic -p "$restic_password_file" \
            --quiet -r "$restic_repo" \
            -o "$restic_extra_options" \
            $@
    fi
}

if [ -n "$SERVICES_OVERRIDE" ]; then
    # this is fine because docker service names should never include spaces or
    # glob characters
    # shellcheck disable=SC2206
    services=($SERVICES_OVERRIDE)
else
    mapfile -t services < <(docker service ls --format '{{ .Name }}')
fi

post_commands=()

if [[ " $* " != *" --skip-backup "* ]]; then
    for service in "${services[@]}"; do
        echo "service: $service"
        details=$(docker service inspect "$service" --format "{{ json .Spec.Labels }}")
        if echo "$details" | jq -r '.["backupbot.backup"]' | grep -q 'true'; then
            pre=$(echo "$details" | jq -r '.["backupbot.backup.pre-hook"]')
            post=$(echo "$details" | jq -r '.["backupbot.backup.post-hook"]')
            container=$(docker container ls -f "name=$service" --format '{{ .ID }}')
            stack_name=$(echo "$details" | jq -r '.["com.docker.stack.namespace"]')

            if [ "$pre" != "null" ]; then
                # run the precommand
                echo "executing precommand $pre in container $container"
                docker exec "$container" sh -c "$pre"
            fi

            if [ "$post" != "null" ]; then
                # append post command
                post_commands+=("docker exec $container sh -c \"$post\"")
            fi

            # add volume paths to backup path
            backup_paths+=(/var/lib/docker/volumes/"${stack_name}"_*)
        fi
    done

    # check if restic repo exists, initialise if not
    if [ -z "$(_restic cat config)" ] 2>/dev/null; then
        echo "initializing restic repo"
        _restic init
    fi
fi

if [[ " $* " != *" --skip-upload "* ]]; then
    echo "${backup_paths[@]}"
    _restic backup --host "$server_name" --tag coop-cloud "${backup_paths[@]}"
fi

# run post commands
for post in "${post_commands[@]}"; do
    echo "executing postcommand $post"
    eval "$post"
done
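
Following the README's Development section, the two flags exercise the halves of this script separately; a sketch:

```bash
./backup.sh --skip-upload   # scan labels, run pre-hooks, init the repo; no restic upload
./backup.sh --skip-backup   # skip the label scan and hooks; only run the restic upload
```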

backupbot.py

@@ -1,369 +0,0 @@
#!/usr/bin/python3

import os
import sys
import click
import json
import subprocess
import logging
import docker
import restic
import tarfile
import io
from pythonjsonlogger import jsonlogger
from datetime import datetime, timezone
from restic.errors import ResticFailedError
from pathlib import Path
from shutil import copyfile, rmtree

VOLUME_PATH = "/var/lib/docker/volumes/"
SECRET_PATH = '/secrets/'
SERVICE = None

logger = logging.getLogger("backupbot")
# custom log level above CRITICAL, used for the end-of-backup summary record
logging.addLevelName(55, 'SUMMARY')
setattr(logging, 'SUMMARY', 55)
setattr(logger, 'summary', lambda message, *args, **kwargs:
        logger.log(55, message, *args, **kwargs))


def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.critical("Uncaught exception", exc_info=(
        exc_type, exc_value, exc_traceback))


sys.excepthook = handle_exception


@click.group()
@click.option('-l', '--log', 'loglevel')
@click.option('-m', '--machine-logs', 'machine_logs', is_flag=True)
@click.option('service', '--host', '-h', envvar='SERVICE')
@click.option('repository', '--repo', '-r', envvar='RESTIC_REPOSITORY')
def cli(loglevel, service, repository, machine_logs):
    global SERVICE
    if service:
        SERVICE = service.replace('.', '_')
    if repository:
        os.environ['RESTIC_REPOSITORY'] = repository
    if loglevel:
        numeric_level = getattr(logging, loglevel.upper(), None)
        if not isinstance(numeric_level, int):
            raise ValueError('Invalid log level: %s' % loglevel)
        logger.setLevel(numeric_level)
    if machine_logs:
        logHandler = logging.StreamHandler()
        formatter = jsonlogger.JsonFormatter(
            "%(levelname)s %(filename)s %(lineno)s %(process)d %(message)s",
            rename_fields={"levelname": "message_type"})
        logHandler.setFormatter(formatter)
        logger.addHandler(logHandler)
    export_secrets()
    init_repo()


def init_repo():
    repo = os.environ['RESTIC_REPOSITORY']
    logger.debug(f"set restic repository location: {repo}")
    restic.repository = repo
    restic.password_file = '/var/run/secrets/restic_password'
    try:
        restic.cat.config()
    except ResticFailedError as error:
        if 'unable to open config file' in str(error):
            result = restic.init()
            logger.info(f"Initialized restic repo: {result}")
        else:
            raise error


def export_secrets():
    for env in os.environ:
        if env.endswith('FILE') and not "COMPOSE_FILE" in env:
            logger.debug(f"exported secret: {env}")
            with open(os.environ[env]) as file:
                secret = file.read()
                os.environ[env.removesuffix('_FILE')] = secret
                if env == 'RESTIC_REPOSITORY_FILE':
                    # RESTIC_REPOSITORY_FILE and RESTIC_REPOSITORY are mutually exclusive
                    logger.info("RESTIC_REPOSITORY set to RESTIC_REPOSITORY_FILE. Unsetting RESTIC_REPOSITORY_FILE.")
                    del os.environ['RESTIC_REPOSITORY_FILE']


@cli.command()
@click.option('retries', '--retries', '-r', envvar='RETRIES', default=1)
def create(retries):
    pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
    copy_secrets(apps)
    backup_paths.append(SECRET_PATH)
    run_commands(pre_commands)
    backup_volumes(backup_paths, apps, int(retries))
    run_commands(post_commands)


def get_backup_cmds():
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    backup_paths = set()
    backup_apps = set()
    pre_commands = {}
    post_commands = {}
    services = client.services.list()
    for s in services:
        labels = s.attrs['Spec']['Labels']
        if (backup := labels.get('backupbot.backup')) and bool(backup):
            # volumes: s.attrs['Spec']['TaskTemplate']['ContainerSpec']['Mounts'][0]['Source']
            stack_name = labels['com.docker.stack.namespace']
            # Remove these lines to back up only a specific service
            # This will unfortunately decrease restic performance
            # if SERVICE and SERVICE != stack_name:
            #     continue
            backup_apps.add(stack_name)
            backup_paths = backup_paths.union(
                Path(VOLUME_PATH).glob(f"{stack_name}_*"))
            if not (container := container_by_service.get(s.name)):
                logger.error(
                    f"Container {s.name} is not running, hooks can not be executed")
                continue
            if prehook := labels.get('backupbot.backup.pre-hook'):
                pre_commands[container] = prehook
            if posthook := labels.get('backupbot.backup.post-hook'):
                post_commands[container] = posthook
    return pre_commands, post_commands, list(backup_paths), list(backup_apps)


def copy_secrets(apps):
    # TODO: check if it is deployed
    rmtree(SECRET_PATH, ignore_errors=True)
    os.mkdir(SECRET_PATH)
    client = docker.from_env()
    container_by_service = {
        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
    services = client.services.list()
    for s in services:
        app_name = s.attrs['Spec']['Labels']['com.docker.stack.namespace']
        if (app_name in apps and
                (app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
            if not container_by_service.get(s.name):
                logger.error(
                    f"Container {s.name} is not running, secrets can not be copied.")
                continue
            container_id = container_by_service[s.name].id
            for sec in app_secs:
                src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
                if not Path(src).exists():
                    logger.error(
                        f"For the secret {sec['SecretName']} the file {src} does not exist for {s.name}")
                    continue
                dst = SECRET_PATH + sec['SecretName']
                copyfile(src, dst)


def run_commands(commands):
    for container, command in commands.items():
        if not command:
            continue
        # Remove bash/sh wrapping
        command = command.removeprefix('bash -c').removeprefix('sh -c').removeprefix(' ')
        # Remove quotes surrounding the command
        if (len(command) >= 2 and command[0] == command[-1] and (command[0] == "'" or command[0] == '"')):
            command = command[1:-1]
        # Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
        command = f"bash -c 'set -o pipefail;{command}'"
        logger.info(f"run command in {container.name}:")
        logger.info(command)
        result = container.exec_run(command)
        if result.exit_code:
            logger.error(
                f"Failed to run command {command} in {container.name}: {result.output.decode()}")
        else:
            logger.info(result.output.decode())


def backup_volumes(backup_paths, apps, retries, dry_run=False):
    while True:
        try:
            result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
            logger.summary("backup finished", extra=result)
            return
        except ResticFailedError as error:
            logger.error(
                f"Backup failed for {apps}. Could not back up these paths: {backup_paths}")
            logger.error(error, exc_info=True)
            if retries > 0:
                retries -= 1
            else:
                exit(1)


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('target', '--target', '-t', envvar='TARGET', default='/')
@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', is_flag=True)
def restore(snapshot, target, noninteractive):
    # Todo: recommend to shutdown the container
    service_paths = VOLUME_PATH
    if SERVICE:
        service_paths = service_paths + f'{SERVICE}_*'
    snapshots = restic.snapshots(snapshot_id=snapshot)
    if not snapshots:
        logger.error(f"No Snapshots with ID {snapshot}")
        exit(1)
    if not noninteractive:
        snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
        delta = datetime.now(tz=timezone.utc) - snapshot_date
        print(
            f"You are going to restore Snapshot {snapshot} of {service_paths} at {target}")
        print(f"This snapshot is {delta} old")
        print(
            f"THIS COMMAND WILL IRREVERSIBLY OVERWRITE {target}{service_paths.removeprefix('/')}")
        prompt = input("Type YES (uppercase) to continue: ")
        if prompt != 'YES':
            logger.error("Restore aborted")
            exit(1)
    print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
    # TODO: use tags if no snapshot is selected, to use a snapshot including SERVICE
    result = restic.restore(snapshot_id=snapshot,
                            include=service_paths, target_dir=target)
    logger.debug(result)


@cli.command()
def snapshots():
    snapshots = restic.snapshots()
    no_snapshots = True
    for snap in snapshots:
        if not SERVICE or (tags := snap.get('tags')) and SERVICE in tags:
            print(snap['time'], snap['id'])
            no_snapshots = False
    if no_snapshots:
        err_msg = "No Snapshots found"
        if SERVICE:
            service_name = SERVICE.replace('_', '.')
            err_msg += f' for app {service_name}'
        logger.warning(err_msg)


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
def ls(snapshot, path):
    results = list_files(snapshot, path)
    for r in results:
        if r.get('path'):
            print(f"{r['ctime']}\t{r['path']}")


def list_files(snapshot, path):
    cmd = restic.cat.base_command() + ['ls']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd.append(snapshot)
    if path:
        cmd.append(path)
    try:
        output = restic.internal.command_executor.execute(cmd)
    except ResticFailedError as error:
        if 'no snapshot found' in str(error):
            err_msg = f'There is no snapshot "{snapshot}"'
            if SERVICE:
                err_msg += f' for the app "{SERVICE}"'
            logger.error(err_msg)
            exit(1)
        else:
            raise error
    output = output.replace('}\n{', '}|{')
    results = list(map(json.loads, output.split('|')))
    return results


@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
@click.option('volumes', '--volumes', '-v', envvar='VOLUMES')
@click.option('secrets', '--secrets', '-c', is_flag=True, envvar='SECRETS')
def download(snapshot, path, volumes, secrets):
    file_dumps = []
    if not any([path, volumes, secrets]):
        volumes = secrets = True
    if path:
        path = path.removesuffix('/')
        binary_output = dump(snapshot, path)
        files = list_files(snapshot, path)
        filetype = [f.get('type') for f in files if f.get('path') == path][0]
        filename = Path(path).name
        if filetype == 'dir':
            filename = filename + ".tar"
        tarinfo = tarfile.TarInfo(name=filename)
        tarinfo.size = len(binary_output)
        file_dumps.append((binary_output, tarinfo))
    if volumes:
        if not SERVICE:
            logger.error("Please specify '--host' when using '--volumes'")
            exit(1)
        files = list_files(snapshot, VOLUME_PATH)
        for f in files[1:]:
            path = f['path']
            if Path(path).name.startswith(SERVICE) and f['type'] == 'dir':
                binary_output = dump(snapshot, path)
                filename = f"{Path(path).name}.tar"
                tarinfo = tarfile.TarInfo(name=filename)
                tarinfo.size = len(binary_output)
                file_dumps.append((binary_output, tarinfo))
    if secrets:
        if not SERVICE:
            logger.error("Please specify '--host' when using '--secrets'")
            exit(1)
        filename = f"{SERVICE}.json"
        files = list_files(snapshot, SECRET_PATH)
        secrets = {}
        for f in files[1:]:
            path = f['path']
            if Path(path).name.startswith(SERVICE) and f['type'] == 'file':
                secret = dump(snapshot, path).decode()
                secret_name = path.removeprefix(f'{SECRET_PATH}{SERVICE}_')
                secrets[secret_name] = secret
        binary_output = json.dumps(secrets).encode()
        tarinfo = tarfile.TarInfo(name=filename)
        tarinfo.size = len(binary_output)
        file_dumps.append((binary_output, tarinfo))
    with tarfile.open('/tmp/backup.tar.gz', "w:gz") as tar:
        print("Writing files to /tmp/backup.tar.gz...")
        for binary_output, tarinfo in file_dumps:
            tar.addfile(tarinfo, fileobj=io.BytesIO(binary_output))
    size = get_formatted_size('/tmp/backup.tar.gz')
    print(
        f"Backup has been written to /tmp/backup.tar.gz with a size of {size}")


def get_formatted_size(file_path):
    file_size = os.path.getsize(file_path)
    units = ['Bytes', 'KB', 'MB', 'GB', 'TB']
    for unit in units:
        if file_size < 1024:
            return f"{round(file_size, 3)} {unit}"
        file_size /= 1024
    return f"{round(file_size, 3)} {units[-1]}"


def dump(snapshot, path):
    cmd = restic.cat.base_command() + ['dump']
    if SERVICE:
        cmd = cmd + ['--tag', SERVICE]
    cmd = cmd + [snapshot, path]
    print(f"Dumping {path} from snapshot '{snapshot}'")
    output = subprocess.run(cmd, capture_output=True)
    if output.returncode:
        logger.error(
            f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
        exit(1)
    return output.stdout


if __name__ == '__main__':
    cli()
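
Since this deletion also removes the old CLI, the old README's invocation pattern is worth recording here; a sketch (container name per the old compose stack, snapshot ID a placeholder):

```bash
container=$(docker ps -qf "name=backup-bot-two_app")
docker exec -it "$container" backup create --host app
docker exec -it "$container" backup snapshots
docker exec -it "$container" backup download --snapshot <snapshot_id> --volumes --host app
```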

14
compose.s3.yml Normal file

@@ -0,0 +1,14 @@
---
version: "3.8"

services:
  app:
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY_FILE=/run/secrets/aws_secret_access_key
    secrets:
      - aws_secret_access_key

secrets:
  aws_secret_access_key:
    external: true
    name: ${STACK_NAME}_aws_secret_access_key_${SECRET_AWS_SECRET_ACCESS_KEY_VERSION}
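
Creating the referenced external secret by hand follows the old README's `docker secret create` pattern; the key value is a placeholder and the versioned name assumes `STACK_NAME=backup-bot-two` and version `v1`:

```bash
printf 'YOUR-AWS-SECRET-KEY' | docker secret create \
  backup-bot-two_aws_secret_access_key_v1 -
```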

16
compose.ssh.yml Normal file

@@ -0,0 +1,16 @@
---
version: "3.8"

services:
  app:
    environment:
      - SSH_KEY_FILE=/run/secrets/ssh_key
      - SSH_HOST_KEY
      - SSH_HOST_KEY_DISABLE
    secrets:
      - source: ssh_key
        mode: 0400

secrets:
  ssh_key:
    external: true
    name: ${STACK_NAME}_ssh_key_${SECRET_SSH_KEY_VERSION}
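
`SSH_HOST_KEY` (see `.env.sample` above) expects a `known_hosts`-style line; `ssh-keyscan` can produce one (hostname assumed):

```bash
ssh-keyscan -t rsa minio.example.com   # prints: minio.example.com ssh-rsa AAAAB3...
```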

15
compose.swarm-cronjob.yml Normal file

@@ -0,0 +1,15 @@
---
version: "3.8"

services:
  app:
    deploy:
      mode: replicated
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        # Note(3wc): every 5m, testing
        - "swarm.cronjob.schedule=*/5 * * * *"
        # Note(3wc): blank label to be picked up by `abra recipe sync`
      restart_policy:
        condition: none
    entrypoint: [ "/usr/bin/backup.sh" ]
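
With `replicas: 0`, the service sits idle until `swarm-cronjob` scales it up on the labelled schedule; the scheduler itself is a separate service — a sketch based on the upstream `crazymax/swarm-cronjob` image:

```bash
docker service create --name swarm-cronjob \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  crazymax/swarm-cronjob
```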

compose.yaml

@@ -1,44 +0,0 @@
services:
  app:
    image: docker:24.0.7-dind
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/"
      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
    environment:
      - CRON_SCHEDULE
      - RESTIC_REPOSITORY_FILE=/run/secrets/restic_repo
      - RESTIC_PASSWORD_FILE=/run/secrets/restic_password
    secrets:
      - restic_repo
      - restic_password
    configs:
      - source: entrypoint
        target: /entrypoint.sh
        mode: 0555
      - source: backupbot
        target: /usr/bin/backup
        mode: 0555
    entrypoint: ['/entrypoint.sh']
    healthcheck:
      test: "pgrep crond"
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 5m

secrets:
  restic_repo:
    external: true
    name: ${STACK_NAME}_restic_repo
  restic_password:
    external: true
    name: ${STACK_NAME}_restic_password

configs:
  entrypoint:
    name: ${STACK_NAME}_entrypoint
    file: entrypoint.sh
  backupbot:
    name: ${STACK_NAME}_backupbot
    file: backupbot.py

42
compose.yml Normal file

@@ -0,0 +1,42 @@
---
version: "3.8"

services:
  app:
    image: docker:24.0.2-dind
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/:ro"
    environment:
      - CRON_SCHEDULE
      - RESTIC_REPO
      - RESTIC_PASSWORD_FILE=/run/secrets/restic_password
      - BACKUP_DEST=/backups
      - RESTIC_HOST
      - SERVER_NAME
      - REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
    secrets:
      - restic_password
    deploy:
      labels:
        - coop-cloud.${STACK_NAME}.version=0.1.0+latest
    configs:
      - source: entrypoint
        target: /entrypoint.sh
        mode: 0555
      - source: backup
        target: /backup.sh
        mode: 0555
    entrypoint: ['/entrypoint.sh']

secrets:
  restic_password:
    external: true
    name: ${STACK_NAME}_restic_password_${SECRET_RESTIC_PASSWORD_VERSION}

configs:
  entrypoint:
    name: ${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}
    file: entrypoint.sh
  backup:
    name: ${STACK_NAME}_backup_${BACKUP_VERSION}
    file: backup.sh
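
The interpolated names resolve from `.env` (`STACK_NAME`, `SECRET_RESTIC_PASSWORD_VERSION`) and `abra.sh` (`ENTRYPOINT_VERSION`, `BACKUP_VERSION`); a sketch with the sample values:

```bash
STACK_NAME=backup-bot-two ENTRYPOINT_VERSION=v1 \
  sh -c 'echo "${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}"'
# -> backup-bot-two_entrypoint_v1
```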

entrypoint.sh

@@ -1,30 +1,12 @@
 #!/bin/sh
-set -e -o pipefail
+set -e
-apk add --upgrade --no-cache restic bash python3 py3-pip py3-click py3-docker-py py3-json-logger curl
+apk add --upgrade --no-cache bash curl jq restic
-# Todo use requirements file with specific versions
-pip install --break-system-packages resticpy==1.0.2
 cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"
-if [ -n "$PUSH_URL_START" ]
-then
-    push_start_notification="curl -s '$PUSH_URL_START' &&"
-fi
-if [ -n "$PUSH_URL_FAIL" ]
-then
-    push_fail_notification="|| curl -s '$PUSH_URL_FAIL'"
-fi
-if [ -n "$PUSH_URL_SUCCESS" ]
-then
-    push_notification=" && (grep -q 'backup finished' /tmp/backup.log && curl -s '$PUSH_URL_SUCCESS' $push_fail_notification)"
-fi
-echo "$cron_schedule $push_start_notification backup --machine-logs create 2>&1 | tee /tmp/backup.log $push_notification" | crontab -
+echo "$cron_schedule /backup.sh" | crontab -
 crontab -l
 crond -f -d8 -L /dev/stdout
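
With the sample `CRON_SCHEDULE='*/5 * * * *'` from `.env.sample`, the new entrypoint would install a one-line crontab (visible via the `crontab -l` output in the container logs):

```
*/5 * * * * /backup.sh
```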

3
renovate.json Normal file

@ -0,0 +1,3 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json"
}