Merge pull request 'Backupbot Revolution' (#23) from backupbot_revolution into main
continuous-integration/drone/push: Build is failing

Reviewed-on: #23
moritz 2023-10-09 10:54:22 +00:00
commit 69e7f07978
15 changed files with 499 additions and 250 deletions


@@ -7,22 +7,6 @@ steps:
commands:
- shellcheck backup.sh
- name: publish image
image: plugins/docker
settings:
auto_tag: true
username: thecoopcloud
password:
from_secret: thecoopcloud_password
repo: thecoopcloud/backup-bot-two
tags: latest
depends_on:
- run shellcheck
when:
event:
exclude:
- pull_request
trigger:
branch:
- main


@@ -4,11 +4,9 @@ SECRET_RESTIC_PASSWORD_VERSION=v1
COMPOSE_FILE=compose.yml
SERVER_NAME=example.com
RESTIC_HOST=minio.example.com
RESTIC_REPO=/backups/restic
CRON_SCHEDULE='*/5 * * * *'
REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
CRON_SCHEDULE='30 3 * * *'
# swarm-cronjob, instead of built-in cron
#COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"
@@ -23,7 +21,9 @@ REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
#AWS_ACCESS_KEY_ID=something-secret
#COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
# HTTPS storage
#SECRET_HTTPS_PASSWORD_VERSION=v1
#COMPOSE_FILE="$COMPOSE_FILE:compose.https.yml"
#RESTIC_USER=<somebody>
# Secret restic repository
# use a secret to store the RESTIC_REPO if the repository location contains a secret value
# e.g. rest:https://user:SECRET_PASSWORD@host:8000/
# it overwrites the RESTIC_REPO variable
#SECRET_RESTIC_REPO_VERSION=v1
#COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"


@@ -1,13 +0,0 @@
FROM docker:24.0.6-dind
RUN apk add --upgrade --no-cache \
bash \
curl \
jq \
restic
COPY backup.sh /usr/bin/backup.sh
COPY setup-cron.sh /usr/bin/setup-cron.sh
RUN chmod +x /usr/bin/backup.sh /usr/bin/setup-cron.sh
ENTRYPOINT [ "/usr/bin/setup-cron.sh" ]

README.md

@@ -4,7 +4,21 @@
_This Time, It's Easily Configurable_
Automatically take backups from running Docker Swarm services into a volume.
Automatically takes backups from all volumes of running Docker Swarm services, and runs pre- and post-commands.
<!-- metadata -->
* **Category**: Utilities
* **Status**: 0, work-in-progress
* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
* **Healthcheck**: No
* **Backups**: N/A
* **Email**: N/A
* **Tests**: No
* **SSO**: N/A
<!-- endmetadata -->
## Background
@@ -20,28 +34,126 @@ Backupbot II tries to help, by
### With Co-op Cloud
1. Set up Docker Swarm and [`abra`][abra]
2. `abra app new backup-bot-two`
3. `abra app config <your-app-name>`, and set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
4. `abra app secret generate <your-app-name> restic-password v1`, optionally with `--pass` before `<your-app-name>` to save the generated secret in `pass`.
5. `abra app secret insert <your-app-name> ssh-key v1 ...` or similar, to load required secrets.
4. `abra app deploy <your-app-name>`
* `abra app new backup-bot-two`
* `abra app config <app-name>`
- set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
* `abra app secret generate -a <app_name>`
* `abra app deploy <app-name>`
## Configuration
By default, Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as a volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`.
The backup location can be changed using the `RESTIC_REPO` env variable.
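For example (a minimal sketch; `user`, `host` and the bucket below are placeholders, and the repository formats are documented in the [restic docs](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html)):
```
# local repository (the default)
RESTIC_REPO=/backups/restic
# ssh/sftp
#RESTIC_REPO=sftp:user@host:/restic-repo-path
# S3
#RESTIC_REPO=s3:https://s3.example.com/bucket_name
```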
### S3 Storage
To use S3 storage as the backup location, set the following environment variables:
```
RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
```
and add your `<SECRET_ACCESS_KEY>` as a Docker secret:
`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
### SFTP Storage
> With SFTP, it is not possible to prevent Backupbot from deleting backups if the machine is compromised. We therefore recommend using an S3, REST or rclone server without delete permissions.
To use SFTP storage as the backup location, set the following environment variables:
```
RESTIC_REPO=sftp:user@host:/restic-repo-path
SECRET_SSH_KEY_VERSION=v1
SSH_HOST_KEY="hostname ssh-rsa AAAAB3..."
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
```
To get the `SSH_HOST_KEY`, run `ssh-keyscan <hostname>`.
Generate an ssh keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`
Add the key to your `authorized_keys`:
`ssh-copy-id -i backupkey <user>@<hostname>`
Add your SSH key as a Docker secret:
```
abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
"""
```
### Restic REST Server Storage
You can simply set the `RESTIC_REPO` variable to your REST server URL `rest:http://host:8000/`.
If you access the REST server with a password (`rest:https://user:pass@host:8000/`), you should hide the whole URL inside a secret, since it contains the password.
Uncomment these lines:
```
SECRET_RESTIC_REPO_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
```
Add your REST server URL as a secret:
```
abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"
```
The secret will overwrite the `RESTIC_REPO` variable.
See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
## Usage
Create a backup of all apps:
`abra app run <app_name> app -- backup create`
> The apps to be backed up need to be deployed.
Create an individual backup:
`abra app run <app_name> app -- backup --host <target_app_name> create`
Create a backup to a local repository:
`abra app run <app_name> app -- backup create -r /backups/restic`
> It is recommended to shut down/undeploy an app before restoring its data.
Restore the latest snapshot of all included apps:
`abra app run <app_name> app -- backup restore`
Restore a specific snapshot of an individual app:
`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
Show all snapshots:
`abra app run <app_name> app -- backup snapshots`
Show all snapshots containing a specific app:
`abra app run <app_name> app -- backup --host <target_app_name> snapshots`
Show all files inside the latest snapshot (can be very verbose):
`abra app run <app_name> app -- backup ls`
Show specific files inside a selected snapshot:
`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
Download files from a snapshot:
```
filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
abra app cp <app_name> app:$filename .
```
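If `--path` points at a directory, the downloaded file is a tar archive, and the `--volumes` variant of `download` (see `backupbot.py`) concatenates one archive per volume into a single file. GNU tar can unpack both with `--ignore-zeros` (a sketch, reusing `$filename` from above):
```
# -i / --ignore-zeros lets tar read past the end-of-archive markers
# between concatenated archives
tar -xi -f "$filename"
```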
## Recipe Configuration
Like Traefik or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
```
@@ -49,24 +161,15 @@ services:
db:
deploy:
labels:
backupbot.backup: "true"
backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /tmp/dump/dump.db'
backupbot.backup.post-hook: "rm -rf /tmp/dump/dump.db"
backupbot.backup.path: "/tmp/dump/,/etc/foo/"
backupbot.backup: ${BACKUP:-"true"}
backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" -f /volume_path/dump.db'
backupbot.backup.post-hook: "rm -rf /volume_path/dump.db"
```
- `backupbot.backup` -- set to `true` to back up this service (REQUIRED)
- `backupbot.backup.path` -- comma separated list of file paths within the service to copy (REQUIRED)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional)
- `backupbot.backup.pre-hook` -- command to run before copying files (optional); make sure any dumps are saved into the volumes, since only volume contents are backed up
- `backupbot.backup.post-hook` -- command to run after copying files (optional)
As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
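For instance, a pre-hook for a PostgreSQL service could read its password the same way (a hypothetical sketch; the `db_password` secret, the database name and the dump path are placeholders):
```
# hypothetical pre-hook; assumes a db_password secret and a postgres superuser
PGPASSWORD="$(cat /run/secrets/db_password)" pg_dump -U postgres mydb -f /volume_path/dump.db
```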
## Development
1. Install `direnv`
2. `cp .envrc.sample .envrc`
3. Edit `.envrc` as appropriate, including setting `DOCKER_CONTEXT` to a remote Docker context, if you're not running a swarm server locally; a sketch of a possible `.envrc` follows this list.
4. Run `./backup.sh` -- you can add the `--skip-backup` or `--skip-upload` options if you just want to test one step or the other.
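A minimal `.envrc` might look like this (purely a sketch; `.envrc.sample` in the repository is authoritative, and the context name is a placeholder):
```
# hypothetical .envrc -- see .envrc.sample for the real variables
export DOCKER_CONTEXT=my-remote-swarm   # omit when running a swarm server locally
export SERVER_NAME=example.com
export RESTIC_HOST=minio.example.com
```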
[abra]: https://git.autonomic.zone/autonomic-cooperative/abra

abra.sh Normal file

@@ -0,0 +1,3 @@
export ENTRYPOINT_VERSION=v1
export BACKUPBOT_VERSION=v1
export SSH_CONFIG_VERSION=v1

backup.sh

@@ -1,139 +0,0 @@
#!/bin/bash
server_name="${SERVER_NAME:?SERVER_NAME not set}"
restic_password_file="${RESTIC_PASSWORD_FILE:?RESTIC_PASSWORD_FILE not set}"
restic_host="${RESTIC_HOST:?RESTIC_HOST not set}"
backup_path="${BACKUP_DEST:?BACKUP_DEST not set}"
# shellcheck disable=SC2153
ssh_key_file="${SSH_KEY_FILE}"
s3_key_file="${AWS_SECRET_ACCESS_KEY_FILE}"
# shellcheck disable=SC2153
https_password_file="${HTTPS_PASSWORD_FILE}"
restic_repo=
restic_extra_options=
if [ -n "$ssh_key_file" ] && [ -f "$ssh_key_file" ]; then
restic_repo="sftp:$restic_host:/$server_name"
# Only check server against provided SSH_HOST_KEY, if set
if [ -n "$SSH_HOST_KEY" ]; then
tmpfile=$(mktemp)
echo "$SSH_HOST_KEY" >>"$tmpfile"
echo "using host key $SSH_HOST_KEY"
ssh_options="-o 'UserKnownHostsFile $tmpfile'"
elif [ "$SSH_HOST_KEY_DISABLE" = "1" ]; then
echo "disabling SSH host key checking"
ssh_options="-o 'StrictHostKeyChecking=No'"
else
echo "neither SSH_HOST_KEY nor SSH_HOST_KEY_DISABLE set"
fi
restic_extra_options="sftp.command=ssh $ssh_options -i $ssh_key_file $restic_host -s sftp"
fi
if [ -n "$s3_key_file" ] && [ -f "$s3_key_file" ] && [ -n "$AWS_ACCESS_KEY_ID" ]; then
AWS_SECRET_ACCESS_KEY="$(cat "${s3_key_file}")"
export AWS_SECRET_ACCESS_KEY
restic_repo="s3:$restic_host:/$server_name"
fi
if [ -n "$https_password_file" ] && [ -f "$https_password_file" ]; then
HTTPS_PASSWORD="$(cat "${https_password_file}")"
export HTTPS_PASSWORD
restic_user="${RESTIC_USER:?RESTIC_USER not set}"
restic_repo="rest:https://$restic_user:$HTTPS_PASSWORD@$restic_host"
fi
if [ -z "$restic_repo" ]; then
echo "you must configure either SFTP, S3, or HTTPS storage, see README"
exit 1
fi
echo "restic_repo: $restic_repo"
# Pre-bake-in some default restic options
_restic() {
if [ -z "$restic_extra_options" ]; then
# shellcheck disable=SC2068
restic -p "$restic_password_file" \
--quiet -r "$restic_repo" \
$@
else
# shellcheck disable=SC2068
restic -p "$restic_password_file" \
--quiet -r "$restic_repo" \
-o "$restic_extra_options" \
$@
fi
}
if [ -n "$SERVICES_OVERRIDE" ]; then
# this is fine because docker service names should never include spaces or
# glob characters
# shellcheck disable=SC2206
services=($SERVICES_OVERRIDE)
else
mapfile -t services < <(docker service ls --format '{{ .Name }}')
fi
if [[ " $* " != *" --skip-backup "* ]]; then
rm -rf "${backup_path}"
for service in "${services[@]}"; do
echo "service: $service"
details=$(docker service inspect "$service" --format "{{ json .Spec.Labels }}")
if echo "$details" | jq -r '.["backupbot.backup"]' | grep -q 'true'; then
pre=$(echo "$details" | jq -r '.["backupbot.backup.pre-hook"]')
post=$(echo "$details" | jq -r '.["backupbot.backup.post-hook"]')
path=$(echo "$details" | jq -r '.["backupbot.backup.path"]')
if [ "$path" = "null" ]; then
echo "ERROR: missing 'path' for $service"
continue # or maybe exit?
fi
container=$(docker container ls -f "name=$service" --format '{{ .ID }}')
echo "backing up $service"
if [ "$pre" != "null" ]; then
# run the precommand
# shellcheck disable=SC2086
docker exec "$container" sh -c "$pre"
fi
# run the backup
for p in ${path//,/ }; do
# creates the parent folder, so `docker cp` has reliable behaviour no matter if $p ends with `/` or `/.`
dir=$backup_path/$service/$(dirname "$p")
test -d "$dir" || mkdir -p "$dir"
docker cp -a "$container:$p" "$dir/$(basename "$p")"
done
if [ "$post" != "null" ]; then
# run the postcommand
# shellcheck disable=SC2086
docker exec "$container" sh -c "$post"
fi
fi
done
# check if restic repo exists, initialise if not
if [ -z "$(_restic cat config)" ] 2>/dev/null; then
echo "initializing restic repo"
_restic init
fi
fi
if [[ " $* " != *" --skip-upload "* ]]; then
_restic backup --host "$server_name" --tag coop-cloud "$backup_path"
if [ "$REMOVE_BACKUP_VOLUME_AFTER_UPLOAD" -eq 1 ]; then
echo "Cleaning up ${backup_path}"
rm -rf "${backup_path:?}"/*
fi
fi

backupbot.py Executable file

@@ -0,0 +1,274 @@
#!/usr/bin/python3
import os
import click
import json
import subprocess
import logging
import docker
import restic
from datetime import datetime, timezone
from restic.errors import ResticFailedError
from pathlib import Path
from shutil import copyfile, rmtree
# logging.basicConfig(level=logging.INFO)
VOLUME_PATH = "/var/lib/docker/volumes/"
SECRET_PATH = '/secrets/'
SERVICE = None
@click.group()
@click.option('-l', '--log', 'loglevel')
@click.option('service', '--host', '-h', envvar='SERVICE')
@click.option('repository', '--repo', '-r', envvar='RESTIC_REPO', required=True)
def cli(loglevel, service, repository):
global SERVICE
if service:
SERVICE = service.replace('.', '_')
if repository:
os.environ['RESTIC_REPO'] = repository
if loglevel:
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError('Invalid log level: %s' % loglevel)
logging.basicConfig(level=numeric_level)
export_secrets()
init_repo()
def init_repo():
repo = os.environ['RESTIC_REPO']
logging.debug(f"set restic repository location: {repo}")
restic.repository = repo
restic.password_file = '/var/run/secrets/restic_password'
try:
restic.cat.config()
except ResticFailedError as error:
if 'unable to open config file' in str(error):
result = restic.init()
logging.info(f"Initialized restic repo: {result}")
else:
raise error
def export_secrets():
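# each *_FILE environment variable points at a mounted Docker secret;
# read the file and export its contents under the same name without the
# _FILE suffix (e.g. AWS_SECRET_ACCESS_KEY_FILE -> AWS_SECRET_ACCESS_KEY)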
for env in os.environ:
if env.endswith('FILE') and "COMPOSE_FILE" not in env:
logging.debug(f"exported secret: {env}")
with open(os.environ[env]) as file:
secret = file.read()
os.environ[env.removesuffix('_FILE')] = secret
# logging.debug(f"Read secret value: {secret}")
@cli.command()
def create():
pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
copy_secrets(apps)
backup_paths.append(SECRET_PATH)
run_commands(pre_commands)
backup_volumes(backup_paths, apps)
run_commands(post_commands)
def get_backup_cmds():
client = docker.from_env()
container_by_service = {
c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
backup_paths = set()
backup_apps = set()
pre_commands = {}
post_commands = {}
services = client.services.list()
for s in services:
labels = s.attrs['Spec']['Labels']
if labels.get('backupbot.backup') == 'true':  # label values are strings, not booleans
stack_name = labels['com.docker.stack.namespace']
if SERVICE and SERVICE != stack_name:
continue
backup_apps.add(stack_name)
container = container_by_service.get(s.name)
if not container:
logging.error(
f"Container {s.name} is not running, hooks can not be executed")
if prehook := labels.get('backupbot.backup.pre-hook'):
pre_commands[container] = prehook
if posthook := labels.get('backupbot.backup.post-hook'):
post_commands[container] = posthook
backup_paths = backup_paths.union(
Path(VOLUME_PATH).glob(f"{stack_name}_*"))
return pre_commands, post_commands, list(backup_paths), list(backup_apps)
def copy_secrets(apps):
rmtree(SECRET_PATH, ignore_errors=True)
os.mkdir(SECRET_PATH)
client = docker.from_env()
container_by_service = {
c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
services = client.services.list()
for s in services:
app_name = s.attrs['Spec']['Labels']['com.docker.stack.namespace']
if (app_name in apps and
(app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
if not container_by_service.get(s.name):
logging.error(
f"Container {s.name} is not running, secrets can not be copied.")
continue
container_id = container_by_service[s.name].id
for sec in app_secs:
src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
dst = SECRET_PATH + sec['SecretName']
copyfile(src, dst)
def run_commands(commands):
for container, command in commands.items():
if not command:
continue
# Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
command = command.removeprefix('bash -c \'').removeprefix('sh -c \'')
command = command.removesuffix('\'')
command = f"bash -c 'set -o pipefail;{command}'"
result = container.exec_run(command)
logging.info(f"run command in {container.name}")
logging.info(command)
if result.exit_code:
logging.error(
f"Failed to run command {command} in {container.name}: {result.output.decode()}")
else:
logging.info(result.output.decode())
def backup_volumes(backup_paths, apps, dry_run=False):
result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
print(result)
logging.info(result)
@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('target', '--target', '-t', envvar='TARGET', default='/')
@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', default=False)
def restore(snapshot, target, noninteractive):
# Todo: recommend to shutdown the container
service_paths = VOLUME_PATH
if SERVICE:
service_paths = service_paths + f'{SERVICE}_*'
snapshots = restic.snapshots(snapshot_id=snapshot)
if not snapshots:
logging.error(f"No snapshots found with ID {snapshot}")
exit(1)
if not noninteractive:
snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
delta = datetime.now(tz=timezone.utc) - snapshot_date
print(f"You are going to restore Snapshot {snapshot} of {service_paths} at {target}")
print(f"This snapshot is {delta} old")
print(f"THIS COMMAND WILL IRREVERSIBLY OVERWRITES {target}{service_paths.removeprefix('/')}")
prompt = input("Type YES (uppercase) to continue: ")
if prompt != 'YES':
logging.error("Restore aborted")
exit(1)
print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
result = restic.restore(snapshot_id=snapshot,
include=service_paths, target_dir=target)
logging.debug(result)
@cli.command()
def snapshots():
snapshots = restic.snapshots()
for snap in snapshots:
if not SERVICE or (tags := snap.get('tags')) and SERVICE in tags:
print(snap['time'], snap['id'])
@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
def ls(snapshot, path):
results = list_files(snapshot, path)
for r in results:
if r.get('path'):
print(f"{r['ctime']}\t{r['path']}")
def list_files(snapshot, path):
cmd = restic.cat.base_command() + ['ls']
if SERVICE:
cmd = cmd + ['--tag', SERVICE]
cmd.append(snapshot)
if path:
cmd.append(path)
output = restic.internal.command_executor.execute(cmd)
output = output.replace('}\n{', '}|{')
results = list(map(json.loads, output.split('|')))
return results
@cli.command()
@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
@click.option('volumes', '--volumes', '-v', is_flag=True)
@click.option('secrets', '--secrets', '-c', is_flag=True)
def download(snapshot, path, volumes, secrets):
if sum(map(bool, [path, volumes, secrets])) != 1:
logging.error("Please specify exactly one of '--path', '--volumes', '--secrets'")
exit(1)
if path:
path = path.removesuffix('/')
files = list_files(snapshot, path)
filetype = [f.get('type') for f in files if f.get('path') == path][0]
filename = "/tmp/" + Path(path).name
if filetype == 'dir':
filename = filename + ".tar"
output = dump(snapshot, path)
with open(filename, "wb") as file:
file.write(output)
print(filename)
elif volumes:
if not SERVICE:
logging.error("Please specify '--host' when using '--volumes'")
exit(1)
filename = f"/tmp/{SERVICE}.tar"
files = list_files(snapshot, VOLUME_PATH)
for f in files[1:]:
path = f['path']
if SERVICE in path and f['type'] == 'dir':
content = dump(snapshot, path)
# Concatenate tar files (extract with tar -xi)
with open(filename, "ab") as file:
file.write(content)
elif secrets:
if not SERVICE:
logging.error("Please specify '--host' when using '--secrets'")
exit(1)
filename = f"/tmp/SECRETS_{SERVICE}.json"
files = list_files(snapshot, SECRET_PATH)
secrets = {}
for f in files[1:]:
path = f['path']
if SERVICE in path and f['type'] == 'file':
secret = dump(snapshot, path).decode()
secret_name = path.removeprefix(f'{SECRET_PATH}{SERVICE}_')
secrets[secret_name] = secret
with open(filename, "w") as file:
json.dump(secrets, file)
print(filename)
def dump(snapshot, path):
cmd = restic.cat.base_command() + ['dump']
if SERVICE:
cmd = cmd + ['--tag', SERVICE]
cmd = cmd + [snapshot, path]
logging.debug(f"Dumping {path} from snapshot '{snapshot}'")
output = subprocess.run(cmd, capture_output=True)
if output.returncode:
logging.error(f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
exit(1)
return output.stdout
if __name__ == '__main__':
cli()


@@ -1,15 +0,0 @@
---
version: "3.8"
services:
app:
environment:
- HTTPS_PASSWORD_FILE=/run/secrets/https_password
- RESTIC_USER
secrets:
- source: https_password
mode: 0400
secrets:
https_password:
external: true
name: ${STACK_NAME}_https_password_${SECRET_HTTPS_PASSWORD_VERSION}

compose.secret.yml Normal file

@@ -0,0 +1,13 @@
---
version: "3.8"
services:
app:
environment:
- RESTIC_REPO_FILE=/run/secrets/restic_repo
secrets:
- restic_repo
secrets:
restic_repo:
external: true
name: ${STACK_NAME}_restic_repo_${SECRET_RESTIC_REPO_VERSION}


@@ -5,12 +5,19 @@ services:
environment:
- SSH_KEY_FILE=/run/secrets/ssh_key
- SSH_HOST_KEY
- SSH_HOST_KEY_DISABLE
secrets:
- source: ssh_key
mode: 0400
configs:
- source: ssh_config
target: /root/.ssh/config
secrets:
ssh_key:
external: true
name: ${STACK_NAME}_ssh_key_${SECRET_SSH_KEY_VERSION}
configs:
ssh_config:
name: ${STACK_NAME}_ssh_config_${SSH_CONFIG_VERSION}
file: ssh_config


@@ -2,34 +2,50 @@
version: "3.8"
services:
app:
image: thecoopcloud/backup-bot-two:latest
# build: .
image: docker:24.0.2-dind
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "backups:/backups"
- "/var/lib/docker/volumes/:/var/lib/docker/volumes/"
- "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
- backups:/backups
environment:
- CRON_SCHEDULE
- RESTIC_REPO
- RESTIC_PASSWORD_FILE=/run/secrets/restic_password
- BACKUP_DEST=/backups
- RESTIC_HOST
- SERVER_NAME
- REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
secrets:
- restic_password
deploy:
labels:
- "traefik.enable=true"
- "traefik.http.services.${STACK_NAME}.loadbalancer.server.port=8008"
- "traefik.http.routers.${STACK_NAME}.rule="
- "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
- "traefik.http.routers.${STACK_NAME}.tls.certresolver=${LETS_ENCRYPT_ENV}"
- coop-cloud.${STACK_NAME}.version=0.1.0+latest
volumes:
backups:
- coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-300}
- coop-cloud.backupbot.enabled=true
configs:
- source: entrypoint
target: /entrypoint.sh
mode: 0555
- source: backupbot
target: /usr/bin/backup
mode: 0555
entrypoint: ['/entrypoint.sh']
healthcheck:
test: "pgrep crond"
interval: 30s
timeout: 10s
retries: 10
start_period: 5m
secrets:
restic_password:
external: true
name: ${STACK_NAME}_restic_password_${SECRET_RESTIC_PASSWORD_VERSION}
volumes:
backups:
configs:
entrypoint:
name: ${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}
file: entrypoint.sh
backupbot:
name: ${STACK_NAME}_backupbot_${BACKUPBOT_VERSION}
file: backupbot.py

entrypoint.sh Normal file

@@ -0,0 +1,20 @@
#!/bin/sh
set -e -o pipefail
apk add --upgrade --no-cache restic bash python3 py3-pip
# TODO: use a requirements file with specific versions
pip install click==8.1.7 docker==6.1.3 resticpy==1.0.2
if [ -n "$SSH_HOST_KEY" ]
then
echo "$SSH_HOST_KEY" > /root/.ssh/known_hosts
fi
cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"
echo "$cron_schedule backup create" | crontab -
crontab -l
crond -f -d8 -L /dev/stdout

release/1.0.0+latest Normal file

@@ -0,0 +1,3 @@
Breaking Change: the variables `SERVER_NAME` and `RESTIC_HOST` are merged into `RESTIC_REPO`. The format can be looked up here: https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
ssh/sftp: `sftp:user@host:/repo-path`
S3: `s3:https://s3.example.com/bucket_name`
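For example, an SFTP setup that previously set `SERVER_NAME=example.com` and `RESTIC_HOST=minio.example.com` (which the old script combined into `sftp:$RESTIC_HOST:/$SERVER_NAME`) would now be configured with a single variable (the `user` part is a placeholder):
```
RESTIC_REPO=sftp:user@minio.example.com:/example.com
```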


@@ -1,11 +0,0 @@
#!/bin/bash
set -e
set -o pipefail
cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"
echo "$cron_schedule /usr/bin/backup.sh" | crontab -
crontab -l
crond -f -d8 -L /dev/stdout

ssh_config Normal file

@@ -0,0 +1,4 @@
Host *
IdentityFile /run/secrets/ssh_key
ServerAliveInterval 60
ServerAliveCountMax 240