Compare commits

22 Commits
feature/se...main

Author | SHA1
---|---
Moritz | 7f14698824
Moritz | 2a9a98172f
Moritz | 282215cf9c
Moritz | ae7a14b6f1
Moritz | 8acdb20e5b
Moritz | 5582744073
3wc | 84d606fa80
Moritz | 7865907811
Moritz | dc66c02e23
Moritz | f730c70bfe
Moritz | faa7ae3dd1
Moritz | 79eeec428a
Moritz | 4164760dc6
Moritz | e644679b8b
Moritz | 0c587ac926
Moritz | 65686cd891
Moritz | ac055c932e
Moritz | 64328c79b1
Moritz | 15275b2571
moritz | 4befebba38
p4u1 | d2087a441e
Moritz | f4d96b0875
@@ -8,6 +8,11 @@ RESTIC_REPOSITORY=/backups/restic
 CRON_SCHEDULE='30 3 * * *'
 
+# Push Notifications
+#PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
+#PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
+#PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
+
 # swarm-cronjob, instead of built-in cron
 #COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"
@@ -0,0 +1,6 @@
+# Change log
+
+## 2.0.0 (unreleased)
+
+- Rewrite from Bash to Python
+- Add support for push notifications (#24)
README.md (49 lines changed)
@@ -38,12 +38,12 @@ Backupbot II tries to help, by
 * `abra app new backup-bot-two`
 * `abra app config <app-name>`
   - set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
-* `abra app secret generate -a <app_name>`
+* `abra app secret generate -a <backupbot_name>`
 * `abra app deploy <app-name>`
 
 ## Configuration
 
-Per default Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`
+Per default Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as volume at `/var/lib/docker/volumes/<backupbot_name>_backups/_data/restic/`
 
 The backup location can be changed using the `RESTIC_REPOSITORY` env variable.
@@ -57,7 +57,7 @@ AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
 COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
 ```
 and add your `<SECRET_ACCESS_KEY>` as docker secret:
-`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
+`abra app secret insert <backupbot_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
 
 See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
@@ -79,9 +79,10 @@ Add the key to your `authorized_keys`:
 `ssh-copy-id -i backupkey <user>@<hostname>`
 Add your `SSH_KEY` as docker secret:
 ```
-abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
+abra app secret insert <backupbot_name> ssh_key v1 """$(cat backupkey)
 """
 ```
+> Attention: This command needs to be executed exactly as stated above, because it places a trailing newline at the end; if this newline is missing you will get the following error: `Load key "/run/secrets/ssh_key": error in libcrypto`
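The trailing-newline requirement called out in the note above can be verified before inserting the secret. This is a hypothetical helper for illustration, not part of backupbot; the filename is an example:

```python
# Check that a private key file ends with the trailing newline
# libcrypto expects; "backupkey.demo" is only an example path.
from pathlib import Path

def has_trailing_newline(key_path):
    """Return True if the file's last byte is a newline."""
    data = Path(key_path).read_bytes()
    return data.endswith(b"\n")

# A key written without a final newline fails the check.
Path("backupkey.demo").write_bytes(b"-----BEGIN OPENSSH PRIVATE KEY-----")
print(has_trailing_newline("backupkey.demo"))  # False
```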
 
 ### Restic REST server Storage
@@ -94,67 +95,81 @@ COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
 ```
 Add your REST server URL as secret:
 ```
-`abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"`
+`abra app secret insert <backupbot_name> restic_repo v1 "rest:https://user:pass@host:8000/"`
 ```
 The secret will overwrite the `RESTIC_REPOSITORY` variable.
 
 See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
 
+## Push notifications
+
+The following env variables can be used to set up push notifications for backups: `PUSH_URL_START` is requested just before the backup starts, `PUSH_URL_SUCCESS` is only requested if the backup was successful, and `PUSH_URL_FAIL` is requested if the backup fails.
+Each variable is optional and independent of the others.
+```
+PUSH_URL_START=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=start
+PUSH_URL_SUCCESS=https://status.example.com/api/push/xxxxxxxxxx?status=up&msg=OK
+PUSH_URL_FAIL=https://status.example.com/api/push/xxxxxxxxxx?status=down&msg=fail
+```
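The request ordering described in the push-notification section (start first, then exactly one of success or fail) can be sketched as follows. `run_with_notifications` and the `notify` transport are placeholders for illustration, not backupbot's actual implementation:

```python
import os

def notify(url, sink):
    # Stand-in for an HTTP GET (curl in the real entrypoint);
    # records which URL was hit.
    if url:
        sink.append(url)

def run_with_notifications(backup, sink):
    # PUSH_URL_START fires just before the backup, PUSH_URL_SUCCESS only
    # on success, PUSH_URL_FAIL only on failure; each is optional.
    notify(os.environ.get("PUSH_URL_START"), sink)
    try:
        backup()
    except Exception:
        notify(os.environ.get("PUSH_URL_FAIL"), sink)
    else:
        notify(os.environ.get("PUSH_URL_SUCCESS"), sink)

calls = []
os.environ["PUSH_URL_START"] = "https://status.example.com/start"
os.environ["PUSH_URL_SUCCESS"] = "https://status.example.com/ok"
run_with_notifications(lambda: None, calls)
print(calls)  # start URL first, then the success URL
```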
 ## Usage
 
+Run the cronjob that creates a backup, including the push notifications and docker logging:
+`abra app cmd <backupbot_name> app run_cron`
 
 Create a backup of all apps:
 
-`abra app run <app_name> app -- backup create`
+`abra app run <backupbot_name> app -- backup create`
 
 > The apps to back up need to be deployed
 
 Create an individual backup:
 
-`abra app run <app_name> app -- backup --host <target_app_name> create`
+`abra app run <backupbot_name> app -- backup --host <target_app_name> create`
 
 Create a backup to a local repository:
 
-`abra app run <app_name> app -- backup create -r /backups/restic`
+`abra app run <backupbot_name> app -- backup create -r /backups/restic`
 
 > It is recommended to shut down/undeploy an app before restoring the data
 
 Restore the latest snapshot of all included apps:
 
-`abra app run <app_name> app -- backup restore`
+`abra app run <backupbot_name> app -- backup restore`
 
 Restore a specific snapshot of an individual app:
 
-`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
+`abra app run <backupbot_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
 
 Show all snapshots:
 
-`abra app run <app_name> app -- backup snapshots`
+`abra app run <backupbot_name> app -- backup snapshots`
 
 Show all snapshots containing a specific app:
 
-`abra app run <app_name> app -- backup --host <target_app_name> snapshots`
+`abra app run <backupbot_name> app -- backup --host <target_app_name> snapshots`
 
 Show all files inside the latest snapshot (can be very verbose):
 
-`abra app run <app_name> app -- backup ls`
+`abra app run <backupbot_name> app -- backup ls`
 
 Show specific files inside a selected snapshot:
 
-`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
+`abra app run <backupbot_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
 
 Download files from a snapshot:
 
 ```
-filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
-abra app cp <app_name> app:$filename .
+filename=$(abra app run <backupbot_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
+abra app cp <backupbot_name> app:$filename .
 ```
 
 ## Run restic
 
 ```
-abra app run <app_name> app bash
+abra app run <backupbot_name> app bash
 export AWS_SECRET_ACCESS_KEY=$(cat $AWS_SECRET_ACCESS_KEY_FILE)
 export RESTIC_PASSWORD=$(cat $RESTIC_PASSWORD_FILE)
 restic snapshots
 ```
abra.sh (8 lines changed)
@@ -1,3 +1,11 @@
 export ENTRYPOINT_VERSION=v1
 export BACKUPBOT_VERSION=v1
 export SSH_CONFIG_VERSION=v1
+
+run_cron () {
+    schedule="$(crontab -l | tr -s " " | cut -d ' ' -f-5)"
+    rm -f /tmp/backup.log
+    echo "* * * * * $(crontab -l | tr -s " " | cut -d ' ' -f6-)" | crontab -
+    while [ ! -f /tmp/backup.log ]; do sleep 1; done
+    echo "$schedule $(crontab -l | tr -s " " | cut -d ' ' -f6-)" | crontab -
+}
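The `run_cron` function added to abra.sh swaps the crontab schedule for `* * * * *`, waits for `/tmp/backup.log` to appear, and then restores the original line. The `cut -d ' ' -f-5` / `cut -d ' ' -f6-` split it relies on (first five fields are the schedule, the rest is the command) can be mirrored in Python. This is an illustration, not code from the repo:

```python
def split_cron_line(line):
    # A crontab line is five schedule fields followed by the command,
    # matching `cut -d ' ' -f-5` and `cut -d ' ' -f6-` in run_cron.
    fields = line.split()
    schedule = " ".join(fields[:5])
    command = " ".join(fields[5:])
    return schedule, command

schedule, command = split_cron_line("30 3 * * * backup create")
print(schedule)  # 30 3 * * *
print(command)   # backup create
```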
backupbot.py (138 lines changed)
@@ -1,6 +1,7 @@
 #!/usr/bin/python3
 
 import os
+import sys
 import click
 import json
 import subprocess
@@ -9,22 +10,40 @@ import docker
 import restic
 import tarfile
 import io
+from pythonjsonlogger import jsonlogger
 from datetime import datetime, timezone
 from restic.errors import ResticFailedError
 from pathlib import Path
 from shutil import copyfile, rmtree
-# logging.basicConfig(level=logging.INFO)
 
 VOLUME_PATH = "/var/lib/docker/volumes/"
 SECRET_PATH = '/secrets/'
 SERVICE = None
 
+logger = logging.getLogger("backupbot")
+logging.addLevelName(55, 'SUMMARY')
+setattr(logging, 'SUMMARY', 55)
+setattr(logger, 'summary', lambda message, *args, **kwargs: logger.log(55, message, *args, **kwargs))
+
+
+def handle_exception(exc_type, exc_value, exc_traceback):
+    if issubclass(exc_type, KeyboardInterrupt):
+        sys.__excepthook__(exc_type, exc_value, exc_traceback)
+        return
+    logger.critical("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))
+
+
+sys.excepthook = handle_exception
+
+
 @click.group()
 @click.option('-l', '--log', 'loglevel')
+@click.option('-m', '--machine-logs', 'machine_logs', is_flag=True)
 @click.option('service', '--host', '-h', envvar='SERVICE')
-@click.option('repository', '--repo', '-r', envvar='RESTIC_REPOSITORY', required=True)
-def cli(loglevel, service, repository):
+@click.option('repository', '--repo', '-r', envvar='RESTIC_REPOSITORY')
+def cli(loglevel, service, repository, machine_logs):
     global SERVICE
     if service:
         SERVICE = service.replace('.', '_')
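The SUMMARY level registered above (numeric value 55, above CRITICAL at 50) gives the entrypoint a single greppable "backup finished" line per run. A minimal standalone version of the same trick, with a hypothetical logger name:

```python
import logging

# Register a custom level above CRITICAL (50) so it is never filtered out,
# then attach a convenience method to a logger, as backupbot.py does.
logging.addLevelName(55, 'SUMMARY')
logger = logging.getLogger("demo")
logger.summary = lambda msg, *a, **kw: logger.log(55, msg, *a, **kw)

logger.setLevel(logging.INFO)
print(logging.getLevelName(55))  # SUMMARY
```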
@@ -34,22 +53,33 @@ def cli(loglevel, service, repository):
     numeric_level = getattr(logging, loglevel.upper(), None)
     if not isinstance(numeric_level, int):
         raise ValueError('Invalid log level: %s' % loglevel)
-    logging.basicConfig(level=numeric_level)
+    logger.setLevel(numeric_level)
+    logHandler = logging.StreamHandler()
+    if machine_logs:
+        formatter = jsonlogger.JsonFormatter(
+            "%(levelname)s %(filename)s %(lineno)s %(process)d %(message)s", rename_fields={"levelname": "message_type"})
+        logHandler.setFormatter(formatter)
+    logger.addHandler(logHandler)
 
     export_secrets()
     init_repo()
 
 
 def init_repo():
-    repo = os.environ['RESTIC_REPOSITORY']
-    logging.debug(f"set restic repository location: {repo}")
-    restic.repository = repo
+    if repo := os.environ.get('RESTIC_REPOSITORY_FILE'):
+        # RESTIC_REPOSITORY_FILE and RESTIC_REPOSITORY are mutually exclusive
+        del os.environ['RESTIC_REPOSITORY']
+    else:
+        repo = os.environ['RESTIC_REPOSITORY']
+        restic.repository = repo
+    logger.debug(f"set restic repository location: {repo}")
     restic.password_file = '/var/run/secrets/restic_password'
     try:
         restic.cat.config()
     except ResticFailedError as error:
         if 'unable to open config file' in str(error):
             result = restic.init()
-            logging.info(f"Initialized restic repo: {result}")
+            logger.info(f"Initialized restic repo: {result}")
         else:
             raise error
@@ -57,20 +87,21 @@ def init_repo():
 def export_secrets():
     for env in os.environ:
         if env.endswith('FILE') and not "COMPOSE_FILE" in env:
-            logging.debug(f"exported secret: {env}")
+            logger.debug(f"exported secret: {env}")
             with open(os.environ[env]) as file:
                 secret = file.read()
                 os.environ[env.removesuffix('_FILE')] = secret
-                # logging.debug(f"Read secret value: {secret}")
+                # logger.debug(f"Read secret value: {secret}")
 
 
 @cli.command()
-def create():
+@click.option('retries', '--retries', '-r', envvar='RETRIES', default=1)
+def create(retries):
     pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
     copy_secrets(apps)
     backup_paths.append(SECRET_PATH)
     run_commands(pre_commands)
-    backup_volumes(backup_paths, apps)
+    backup_volumes(backup_paths, apps, int(retries))
     run_commands(post_commands)
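`export_secrets` above follows the common `*_FILE` convention: every environment variable ending in `FILE` points at a secret file whose contents are exported under the name without the `_FILE` suffix, with `COMPOSE_FILE` excluded because it is a colon-separated file list, not a secret. A standalone sketch of the same pattern, operating on a plain dict so it is safe to run (paths and variable names are illustrative):

```python
import tempfile

def export_file_secrets(environ):
    # For each VAR_FILE entry, read the file and set VAR to its contents,
    # skipping COMPOSE_FILE, which is a file list rather than a secret.
    for name in list(environ):
        if name.endswith('_FILE') and 'COMPOSE_FILE' not in name:
            with open(environ[name]) as f:
                environ[name.removesuffix('_FILE')] = f.read()

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('s3cret')
    path = f.name

env = {'RESTIC_PASSWORD_FILE': path, 'COMPOSE_FILE': 'compose.yml'}
export_file_secrets(env)
print(env['RESTIC_PASSWORD'])  # s3cret
```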
@@ -86,6 +117,7 @@ def get_backup_cmds():
     for s in services:
         labels = s.attrs['Spec']['Labels']
         if (backup := labels.get('backupbot.backup')) and bool(backup):
             # volumes: s.attrs['Spec']['TaskTemplate']['ContainerSpec']['Mounts'][0]['Source']
             stack_name = labels['com.docker.stack.namespace']
             # Remove these lines to back up only a specific service
             # This will unfortunately decrease restic performance
@@ -94,8 +126,8 @@ def get_backup_cmds():
             backup_apps.add(stack_name)
             backup_paths = backup_paths.union(
                 Path(VOLUME_PATH).glob(f"{stack_name}_*"))
-            if not (container:= container_by_service.get(s.name)):
-                logging.error(
+            if not (container := container_by_service.get(s.name)):
+                logger.error(
                     f"Container {s.name} is not running, hooks can not be executed")
                 continue
             if prehook := labels.get('backupbot.backup.pre-hook'):
@@ -106,7 +138,7 @@ def get_backup_cmds():
 
 
 def copy_secrets(apps):
-    #TODO: check if it is deployed
+    # TODO: check if it is deployed
     rmtree(SECRET_PATH, ignore_errors=True)
     os.mkdir(SECRET_PATH)
     client = docker.from_env()
@@ -118,16 +150,18 @@ def copy_secrets(apps):
         if (app_name in apps and
                 (app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
             if not container_by_service.get(s.name):
-                logging.error(
+                logger.warning(
                     f"Container {s.name} is not running, secrets can not be copied.")
                 continue
             container_id = container_by_service[s.name].id
             for sec in app_secs:
                 src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
                 if not Path(src).exists():
-                    logging.error(f"For the secret {sec['SecretName']} the file {src} does not exist for {s.name}")
+                    logger.error(
+                        f"For the secret {sec['SecretName']} the file {src} does not exist for {s.name}")
                     continue
                 dst = SECRET_PATH + sec['SecretName']
+                logger.debug(f"Copy Secret {sec['SecretName']}")
                 copyfile(src, dst)
@@ -136,37 +170,44 @@ def run_commands(commands):
         if not command:
             continue
         # Remove bash/sh wrapping
-        command = command.removeprefix('bash -c').removeprefix('sh -c')
+        command = command.removeprefix('bash -c').removeprefix('sh -c').removeprefix(' ')
         # Remove quotes surrounding the command
         if (len(command) >= 2 and command[0] == command[-1] and (command[0] == "'" or command[0] == '"')):
-            command[1:-1]
+            command = command[1:-1]
         # Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
         command = f"bash -c 'set -o pipefail;{command}'"
-        logging.info(f"run command in {container.name}:")
-        logging.info(command)
+        logger.info(f"run command in {container.name}:")
+        logger.info(command)
         result = container.exec_run(command)
         if result.exit_code:
-            logging.error(
+            logger.error(
                 f"Failed to run command {command} in {container.name}: {result.output.decode()}")
         else:
-            logging.info(result.output.decode())
+            logger.info(result.output.decode())
 
 
-def backup_volumes(backup_paths, apps, dry_run=False):
-    try:
-        result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
-        print(result)
-        logging.info(result)
-    except ResticFailedError as error:
-        logging.error(f"Backup failed for {apps}. Could not Backup these paths: {backup_paths}")
-        logging.error(error)
-        exit(1)
+def backup_volumes(backup_paths, apps, retries, dry_run=False):
+    while True:
+        try:
+            logger.info("Start volume backup")
+            logger.debug(backup_paths)
+            result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
+            logger.summary("backup finished", extra=result)
+            return
+        except ResticFailedError as error:
+            logger.error(
+                f"Backup failed for {apps}. Could not Backup these paths: {backup_paths}")
+            logger.error(error, exc_info=True)
+            if retries > 0:
+                retries -= 1
+            else:
+                exit(1)
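The new `backup_volumes` above wraps `restic.backup` in a bounded retry loop: on `ResticFailedError` it decrements `retries` and tries again, giving up only when the retries are exhausted. The control flow in isolation, with a generic callable standing in for restic (names here are illustrative):

```python
def with_retries(fn, retries):
    # Retry fn until it succeeds or `retries` extra attempts are used up,
    # mirroring the while/try structure of backup_volumes.
    while True:
        try:
            return fn()
        except Exception:
            if retries > 0:
                retries -= 1
            else:
                raise

attempts = []
def flaky():
    # Fails twice, then succeeds on the third call.
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky, retries=2))  # ok
```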
 @cli.command()
 @click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
 @click.option('target', '--target', '-t', envvar='TARGET', default='/')
-@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', default=False)
+@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', is_flag=True)
 def restore(snapshot, target, noninteractive):
     # Todo: recommend to shutdown the container
     service_paths = VOLUME_PATH
@@ -174,7 +215,7 @@ def restore(snapshot, target, noninteractive):
         service_paths = service_paths + f'{SERVICE}_*'
     snapshots = restic.snapshots(snapshot_id=snapshot)
     if not snapshots:
-        logging.error("No Snapshots with ID {snapshots}")
+        logger.error(f"No Snapshots with ID {snapshot}")
         exit(1)
     if not noninteractive:
         snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
@@ -186,12 +227,13 @@ def restore(snapshot, target, noninteractive):
             f"THIS COMMAND WILL IRREVERSIBLY OVERWRITES {target}{service_paths.removeprefix('/')}")
         prompt = input("Type YES (uppercase) to continue: ")
         if prompt != 'YES':
-            logging.error("Restore aborted")
+            logger.error("Restore aborted")
             exit(1)
     print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
+    # TODO: use tags if no snapshot is selected, to use a snapshot including SERVICE
     result = restic.restore(snapshot_id=snapshot,
                             include=service_paths, target_dir=target)
-    logging.debug(result)
+    logger.debug(result)
 
 
 @cli.command()
@@ -205,8 +247,9 @@ def snapshots():
     if no_snapshots:
         err_msg = "No Snapshots found"
         if SERVICE:
-            err_msg += f' for app {SERVICE}'
-        logging.warning(err_msg)
+            service_name = SERVICE.replace('_', '.')
+            err_msg += f' for app {service_name}'
+        logger.warning(err_msg)
 
 
 @cli.command()
@@ -230,10 +273,10 @@ def list_files(snapshot, path):
         output = restic.internal.command_executor.execute(cmd)
     except ResticFailedError as error:
         if 'no snapshot found' in str(error):
-            err_msg = f'There is no snapshot {snapshot}'
+            err_msg = f'There is no snapshot "{snapshot}"'
             if SERVICE:
-                err_msg += f'for the app {SERVICE}'
-            logging.error(err_msg)
+                err_msg += f' for the app "{SERVICE}"'
+            logger.error(err_msg)
             exit(1)
         else:
             raise error
@@ -245,8 +288,8 @@ def list_files(snapshot, path):
 @cli.command()
 @click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
 @click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
-@click.option('volumes', '--volumes', '-v', is_flag=True)
-@click.option('secrets', '--secrets', '-c', is_flag=True)
+@click.option('volumes', '--volumes', '-v', envvar='VOLUMES')
+@click.option('secrets', '--secrets', '-c', is_flag=True, envvar='SECRETS')
 def download(snapshot, path, volumes, secrets):
     file_dumps = []
     if not any([path, volumes, secrets]):
@@ -264,7 +307,7 @@ def download(snapshot, path, volumes, secrets):
             file_dumps.append((binary_output, tarinfo))
     if volumes:
         if not SERVICE:
-            logging.error("Please specify '--host' when using '--volumes'")
+            logger.error("Please specify '--host' when using '--volumes'")
             exit(1)
         files = list_files(snapshot, VOLUME_PATH)
         for f in files[1:]:
@@ -277,7 +320,7 @@ def download(snapshot, path, volumes, secrets):
             file_dumps.append((binary_output, tarinfo))
     if secrets:
         if not SERVICE:
-            logging.error("Please specify '--host' when using '--secrets'")
+            logger.error("Please specify '--host' when using '--secrets'")
             exit(1)
         filename = f"{SERVICE}.json"
         files = list_files(snapshot, SECRET_PATH)
@@ -297,7 +340,8 @@ def download(snapshot, path, volumes, secrets):
     for binary_output, tarinfo in file_dumps:
         tar.addfile(tarinfo, fileobj=io.BytesIO(binary_output))
     size = get_formatted_size('/tmp/backup.tar.gz')
-    print(f"Backup has been written to /tmp/backup.tar.gz with a size of {size}")
+    print(
+        f"Backup has been written to /tmp/backup.tar.gz with a size of {size}")
 
 
 def get_formatted_size(file_path):
@@ -318,7 +362,7 @@ def dump(snapshot, path):
     print(f"Dumping {path} from snapshot '{snapshot}'")
     output = subprocess.run(cmd, capture_output=True)
     if output.returncode:
-        logging.error(
+        logger.error(
             f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
         exit(1)
     return output.stdout
@@ -27,6 +27,7 @@ services:
         target: /usr/bin/backup
         mode: 0555
     entrypoint: ['/entrypoint.sh']
+    #entrypoint: ['tail', '-f','/dev/null']
     healthcheck:
       test: "pgrep crond"
       interval: 30s
@@ -2,10 +2,10 @@
 
 set -e -o pipefail
 
-apk add --upgrade --no-cache restic bash python3 py3-pip
+apk add --upgrade --no-cache restic bash python3 py3-pip py3-click py3-docker-py py3-json-logger curl
 
 # Todo use requirements file with specific versions
-pip install click==8.1.7 docker==6.1.3 resticpy==1.0.2
+pip install --break-system-packages resticpy==1.0.2
 
 if [ -n "$SSH_HOST_KEY" ]
 then
@@ -14,7 +14,22 @@ fi
 
 cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"
 
-echo "$cron_schedule backup create" | crontab -
+if [ -n "$PUSH_URL_START" ]
+then
+    push_start_notification="curl -s '$PUSH_URL_START' &&"
+fi
+
+if [ -n "$PUSH_URL_FAIL" ]
+then
+    push_fail_notification="|| curl -s '$PUSH_URL_FAIL'"
+fi
+
+if [ -n "$PUSH_URL_SUCCESS" ]
+then
+    push_notification=" && (grep -q 'backup finished' /tmp/backup.log && curl -s '$PUSH_URL_SUCCESS' $push_fail_notification)"
+fi
+
+echo "$cron_schedule $push_start_notification backup --machine-logs create 2>&1 | tee /tmp/backup.log $push_notification" | crontab -
 crontab -l
 
 crond -f -d8 -L /dev/stdout