forked from coop-cloud/backup-bot-two

Compare commits: enable-lab...backup_vol (4 commits)

Commits: e360c3d8f8, 50b317c12d, 9cb3a469f3, 24d2c0e85b
.env.sample (13 changed lines)
@@ -4,9 +4,11 @@ SECRET_RESTIC_PASSWORD_VERSION=v1
 
 COMPOSE_FILE=compose.yml
 
-RESTIC_REPO=/backups/restic
+SERVER_NAME=example.com
+RESTIC_HOST=minio.example.com
 
-CRON_SCHEDULE='30 3 * * *'
+CRON_SCHEDULE='*/5 * * * *'
+REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
 
 # swarm-cronjob, instead of built-in cron
 #COMPOSE_FILE="$COMPOSE_FILE:compose.swarm-cronjob.yml"
@@ -20,10 +22,3 @@ CRON_SCHEDULE='30 3 * * *'
 #SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
 #AWS_ACCESS_KEY_ID=something-secret
 #COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
-
-# Secret restic repository
-# use a secret to store the RESTIC_REPO if the repository location contains a secret value
-# i.E rest:https://user:SECRET_PASSWORD@host:8000/
-# it overwrites the RESTIC_REPO variable
-#SECRET_RESTIC_REPO_VERSION=v1
-#COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"

@@ -10,8 +10,6 @@ export DOCKER_CONTEXT=$SERVER_NAME
 # or this:
 #export AWS_SECRET_ACCESS_KEY_FILE=s3
 #export AWS_ACCESS_KEY_ID=easter-october-emphatic-tug-urgent-customer
-# or this:
-#export HTTPS_PASSWORD_FILE=/run/secrets/https_password
 
 # optionally limit subset of services for testing
 #export SERVICES_OVERRIDE="ghost_domain_tld_app ghost_domain_tld_db"
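Read together, the hunks above replace the single `RESTIC_REPO` setting with a server/host pair plus an upload-cleanup flag. As a rough sketch (hostnames and schedule are illustrative, not taken from the diff), a filled-in `.env` on this branch might look like:

```
SECRET_RESTIC_PASSWORD_VERSION=v1
COMPOSE_FILE=compose.yml
COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"   # assumption: enable the SFTP variant (compose.s3.yml for S3)
SERVER_NAME=myserver.example.org
RESTIC_HOST=backup-storage.example.org
CRON_SCHEDULE='30 3 * * *'
REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
```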
							
								
								
									
README.md (155 changed lines)

@@ -6,20 +6,6 @@ _This Time, It's Easily Configurable_
 
 Automatically take backups from all volumes of running Docker Swarm services and runs pre- and post commands.
 
-<!-- metadata -->
-
-* **Category**: Utilities
-* **Status**: 0, work-in-progress
-* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
-* **Healthcheck**: No
-* **Backups**: N/A
-* **Email**: N/A
-* **Tests**: No
-* **SSO**: N/A
-
-<!-- endmetadata -->
-
-
 ## Background
 
 There are lots of Docker volume backup systems; all of them have one or both of these limitations:
@@ -34,126 +20,28 @@ Backupbot II tries to help, by
 
 ### With Co-op Cloud
 
-* `abra app new backup-bot-two`
-* `abra app config <app-name>`
-    - set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
-* `abra app secret generate -a <app_name>`
-* `abra app deploy <app-name>`
+1. Set up Docker Swarm and [`abra`][abra]
+2. `abra app new backup-bot-two`
+3. `abra app config <your-app-name>`, and set storage options. Either configure `CRON_SCHEDULE`, or set up `swarm-cronjob`
+4. `abra app secret generate <your-app-name> restic-password v1`, optionally with `--pass` before `<your-app-name>` to save the generated secret in `pass`.
+5. `abra app secret insert <your-app-name> ssh-key v1 ...` or similar, to load required secrets.
+4. `abra app deploy <your-app-name>`
+
+<!-- metadata -->
+
+* **Category**: Utilities
+* **Status**: 0, work-in-progress
+* **Image**: [`thecoopcloud/backup-bot-two`](https://hub.docker.com/r/thecoopcloud/backup-bot-two), 4, upstream
+* **Healthcheck**: No
+* **Backups**: N/A
+* **Email**: N/A
+* **Tests**: No
+* **SSO**: N/A
+
+<!-- endmetadata -->
 
 ## Configuration
 
-Per default Backupbot stores the backups locally in the repository `/backups/restic`, which is accessible as volume at `/var/lib/docker/volumes/<app_name>_backups/_data/restic/`
-
-The backup location can be changed using the `RESTIC_REPO` env variable.
-
-### S3 Storage
-
-To use S3 storage as backup location set the following envs:
-```
-RESTIC_REPO=s3:<S3-SERVICE-URL>/<BUCKET-NAME>
-SECRET_AWS_SECRET_ACCESS_KEY_VERSION=v1
-AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
-COMPOSE_FILE="$COMPOSE_FILE:compose.s3.yml"
-```
-and add your `<SECRET_ACCESS_KEY>` as docker secret:
-`abra app secret insert <app_name> aws_secret_access_key v1 <SECRET_ACCESS_KEY>`
-
-See [restic s3 docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) for more information.
-
-### SFTP Storage
-
-> With sftp it is not possible to prevent the backupbot from deleting backups in case of a compromised machine. Therefore we recommend to use S3, REST or rclone server without delete permissions.
-
-To use SFTP storage as backup location set the following envs:
-```
-RESTIC_REPO=sftp:user@host:/restic-repo-path
-SECRET_SSH_KEY_VERSION=v1
-SSH_HOST_KEY="hostname ssh-rsa AAAAB3...
-COMPOSE_FILE="$COMPOSE_FILE:compose.ssh.yml"
-```
-To get the `SSH_HOST_KEY` run the following command `ssh-keyscan <hostname>`
-
-Generate an ssh keypair: `ssh-keygen -t ed25519 -f backupkey -P ''`
-Add the key to your `authorized_keys`:
-`ssh-copy-id -i backupkey <user>@<hostname>`
-Add your `SSH_KEY` as docker secret:
-```
-abra app secret insert <app_name> ssh_key v1 """$(cat backupkey)
-"""
-```
-
-### Restic REST server Storage
-
-You can simply set the `RESTIC_REPO` variable to your REST server URL `rest:http://host:8000/`.
-If you access the REST server with a password `rest:https://user:pass@host:8000/` you should hide the whole URL containing the password inside a secret.
-Uncomment these lines:
-```
-SECRET_RESTIC_REPO_VERSION=v1
-COMPOSE_FILE="$COMPOSE_FILE:compose.secret.yml"
-```
-Add your REST server url as secret:
-```
-`abra app secret insert <app_name> restic_repo v1 "rest:https://user:pass@host:8000/"`
-```
-The secret will overwrite the `RESTIC_REPO` variable.
-
-
-See [restic REST docs](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) for more information.
-
-## Usage
-
-
-Create a backup of all apps:
-
-`abra app run <app_name> app -- backup create`
-
-> The apps to backup up need to be deployed
-
-Create an individual backup:
-
-`abra app run <app_name> app -- backup --host <target_app_name> create`
-
-Create a backup to a local repository:
-
-`abra app run <app_name> app -- backup create -r /backups/restic`
-
-> It is recommended to shutdown/undeploy an app before restoring the data
-
-Restore the latest snapshot of all including apps:
-
-`abra app run <app_name> app -- backup restore`
-
-Restore a specific snapshot of an individual app:
-
-`abra app run <app_name> app -- backup --host <target_app_name> restore --snapshot <snapshot_id>`
-
-Show all snapshots:
-
-`abra app run <app_name> app -- backup snapshots`
-
-Show all snapshots containing a specific app:
-
-`abra app run <app_name> app -- backup --host <target_app_name> snapshots`
-
-Show all files inside the latest snapshot (can be very verbose):
-
-`abra app run <app_name> app -- backup ls`
-
-Show specific files inside a selected snapshot:
-
-`abra app run <app_name> app -- backup ls --snapshot <snapshot_id> --path /var/lib/docker/volumes/`
-
-Download files from a snapshot:
-
-```
-filename=$(abra app run <app_name> app -- backup download --snapshot <snapshot_id> --path <absolute_path>)
-abra app cp <app_name> app:$filename .
-```
-
-
-## Recipe Configuration
-
 Like Traefik, or `swarm-cronjob`, Backupbot II uses access to the Docker socket to read labels from running Docker Swarm services:
 
 ```
@@ -172,4 +60,11 @@ services:
 
 As in the above example, you can reference Docker Secrets, e.g. for looking up database passwords, by reading the files in `/run/secrets` directly.
 
+## Development
+
+1. Install `direnv`
+2. `cp .envrc.sample .envrc`
+3. Edit `.envrc` as appropriate, including setting `DOCKER_CONTEXT` to a remote Docker context, if you're not running a swarm server locally.
+4. Run `./backup.sh` -- you can add the `--skip-backup` or `--skip-upload` options if you just want to test one other step
+
 [abra]: https://git.autonomic.zone/autonomic-cooperative/abra
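The recipe-side example that the "Like Traefik" paragraph refers to is collapsed in this compare view (the `@@ -172,4 +60,11 @@` hunk only shows its tail). As a hedged sketch of the kind of labels Backupbot II looks for: the service name, secret name and hook commands below are illustrative, only the `backupbot.backup*` label keys come from this repo.

```
services:
  db:
    deploy:
      labels:
        backupbot.backup: "true"
        backupbot.backup.pre-hook: 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" --all-databases > /var/lib/mysql/backup.sql'
        backupbot.backup.post-hook: "rm -f /var/lib/mysql/backup.sql"
```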
							
								
								
									
abra.sh (3 changed lines)

@@ -1,3 +1,2 @@
 export ENTRYPOINT_VERSION=v1
-export BACKUPBOT_VERSION=v1
-export SSH_CONFIG_VERSION=v1
+export BACKUP_VERSION=v1
							
								
								
									
backup.sh (119 changed lines, new executable file)

@@ -0,0 +1,119 @@
+#!/bin/bash
+
+set -e
+
+server_name="${SERVER_NAME:?SERVER_NAME not set}"
+
+restic_password_file="${RESTIC_PASSWORD_FILE:?RESTIC_PASSWORD_FILE not set}"
+
+restic_host="${RESTIC_HOST:?RESTIC_HOST not set}"
+
+backup_paths=()
+
+# shellcheck disable=SC2153
+ssh_key_file="${SSH_KEY_FILE}"
+s3_key_file="${AWS_SECRET_ACCESS_KEY_FILE}"
+
+restic_repo=
+restic_extra_options=
+
+if [ -n "$ssh_key_file" ] && [ -f "$ssh_key_file" ]; then
+    restic_repo="sftp:$restic_host:/$server_name"
+
+    # Only check server against provided SSH_HOST_KEY, if set
+    if [ -n "$SSH_HOST_KEY" ]; then
+        tmpfile=$(mktemp)
+        echo "$SSH_HOST_KEY" >>"$tmpfile"
+        echo "using host key $SSH_HOST_KEY"
+        ssh_options="-o 'UserKnownHostsFile $tmpfile'"
+    elif [ "$SSH_HOST_KEY_DISABLE" = "1" ]; then
+        echo "disabling SSH host key checking"
+        ssh_options="-o 'StrictHostKeyChecking=No'"
+    else
+        echo "neither SSH_HOST_KEY nor SSH_HOST_KEY_DISABLE set"
+    fi
+    restic_extra_options="sftp.command=ssh $ssh_options -i $ssh_key_file $restic_host -s sftp"
+fi
+
+if [ -n "$s3_key_file" ] && [ -f "$s3_key_file" ] && [ -n "$AWS_ACCESS_KEY_ID" ]; then
+    AWS_SECRET_ACCESS_KEY="$(cat "${s3_key_file}")"
+    export AWS_SECRET_ACCESS_KEY
+    restic_repo="s3:$restic_host:/$server_name"
+fi
+
+if [ -z "$restic_repo" ]; then
+    echo "you must configure either SFTP or S3 storage, see README"
+    exit 1
+fi
+
+echo "restic_repo: $restic_repo"
+
+# Pre-bake-in some default restic options
+_restic() {
+    if [ -z "$restic_extra_options" ]; then
+        # shellcheck disable=SC2068
+        restic -p "$restic_password_file" \
+            --quiet -r "$restic_repo" \
+            $@
+    else
+        # shellcheck disable=SC2068
+        restic -p "$restic_password_file" \
+            --quiet -r "$restic_repo" \
+            -o "$restic_extra_options" \
+            $@
+    fi
+}
+
+if [ -n "$SERVICES_OVERRIDE" ]; then
+    # this is fine because docker service names should never include spaces or
+    # glob characters
+    # shellcheck disable=SC2206
+    services=($SERVICES_OVERRIDE)
+else
+    mapfile -t services < <(docker service ls --format '{{ .Name }}')
+fi
+
+post_commands=()
+if [[ \ $*\  != *\ --skip-backup\ * ]]; then
+
+    for service in "${services[@]}"; do
+        echo "service: $service"
+        details=$(docker service inspect "$service" --format "{{ json .Spec.Labels }}")
+        if echo "$details" | jq -r '.["backupbot.backup"]' | grep -q 'true'; then
+            pre=$(echo "$details" | jq -r '.["backupbot.backup.pre-hook"]')
+            post=$(echo "$details" | jq -r '.["backupbot.backup.post-hook"]')
+            container=$(docker container ls -f "name=$service" --format '{{ .ID }}')
+            stack_name=$(echo "$details" | jq -r '.["com.docker.stack.namespace"]')
+
+            if [ "$pre" != "null" ]; then
+                # run the precommand
+                echo "executing precommand $pre in container $container"
+                docker exec "$container" sh -c "$pre"
+            fi
+            if [ "$post" != "null" ]; then
+                # append post command
+                post_commands+=("docker exec $container sh -c \"$post\"")
+            fi
+
+            # add volume paths to backup path
+            backup_paths+=(/var/lib/docker/volumes/"${stack_name}"_*)
+        fi
+    done
+
+    # check if restic repo exists, initialise if not
+    if [ -z "$(_restic cat config)" ] 2>/dev/null; then
+        echo "initializing restic repo"
+        _restic init
+    fi
+fi
+
+if [[ \ $*\  != *\ --skip-upload\ * ]]; then
+    echo "${backup_paths[@]}"
+    _restic backup --host "$server_name" --tag coop-cloud "${backup_paths[@]}"
+fi
+
+# run post commands
+for post in "${post_commands[@]}"; do
+    echo "executing postcommand $post"
+    eval "$post"
+done
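A quick way to exercise this script from a workstation, along the lines of the Development section added to the README; the exported file locations below are placeholders, and the `--skip-upload` flow is a sketch:

```
# .envrc (via direnv) is assumed to export SERVER_NAME, RESTIC_HOST and DOCKER_CONTEXT
export RESTIC_PASSWORD_FILE=~/backupbot/restic_password   # placeholder path
export SSH_KEY_FILE=~/backupbot/backupkey                 # selects the SFTP branch above
export SSH_HOST_KEY_DISABLE=1                             # or export SSH_HOST_KEY instead

./backup.sh --skip-upload   # runs hooks, discovers volume paths, initialises the repo, skips the restic upload
./backup.sh                 # full run including the upload
```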
							
								
								
									
backupbot.py (274 changed lines, file deleted)

@@ -1,274 +0,0 @@
-#!/usr/bin/python3
-
-import os
-import click
-import json
-import subprocess
-import logging
-import docker
-import restic
-from datetime import datetime, timezone
-from restic.errors import ResticFailedError
-from pathlib import Path
-from shutil import copyfile, rmtree
-# logging.basicConfig(level=logging.INFO)
-
-VOLUME_PATH = "/var/lib/docker/volumes/"
-SECRET_PATH = '/secrets/'
-SERVICE = None
-
-
-@click.group()
-@click.option('-l', '--log', 'loglevel')
-@click.option('service', '--host', '-h', envvar='SERVICE')
-@click.option('repository', '--repo', '-r', envvar='RESTIC_REPO', required=True)
-def cli(loglevel, service, repository):
-    global SERVICE
-    if service:
-        SERVICE = service.replace('.', '_')
-    if repository:
-        os.environ['RESTIC_REPO'] = repository
-    if loglevel:
-        numeric_level = getattr(logging, loglevel.upper(), None)
-        if not isinstance(numeric_level, int):
-            raise ValueError('Invalid log level: %s' % loglevel)
-        logging.basicConfig(level=numeric_level)
-    export_secrets()
-    init_repo()
-
-
-def init_repo():
-    repo = os.environ['RESTIC_REPO']
-    logging.debug(f"set restic repository location: {repo}")
-    restic.repository = repo
-    restic.password_file = '/var/run/secrets/restic_password'
-    try:
-        restic.cat.config()
-    except ResticFailedError as error:
-        if 'unable to open config file' in str(error):
-            result = restic.init()
-            logging.info(f"Initialized restic repo: {result}")
-        else:
-            raise error
-
-
-def export_secrets():
-    for env in os.environ:
-        if env.endswith('FILE') and not "COMPOSE_FILE" in env:
-            logging.debug(f"exported secret: {env}")
-            with open(os.environ[env]) as file:
-                secret =  file.read()
-                os.environ[env.removesuffix('_FILE')] = secret
-                # logging.debug(f"Read secret value: {secret}")
-
-
-@cli.command()
-def create():
-    pre_commands, post_commands, backup_paths, apps = get_backup_cmds()
-    copy_secrets(apps)
-    backup_paths.append(SECRET_PATH)
-    run_commands(pre_commands)
-    backup_volumes(backup_paths, apps)
-    run_commands(post_commands)
-
-
-def get_backup_cmds():
-    client = docker.from_env()
-    container_by_service = {
-        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
-    backup_paths = set()
-    backup_apps = set()
-    pre_commands = {}
-    post_commands = {}
-    services = client.services.list()
-    for s in services:
-        labels = s.attrs['Spec']['Labels']
-        if (backup := labels.get('backupbot.backup')) and bool(backup):
-            stack_name = labels['com.docker.stack.namespace']
-            if SERVICE and SERVICE != stack_name:
-                continue
-            backup_apps.add(stack_name)
-            container = container_by_service.get(s.name)
-            if not container:
-                logging.error(
-                    f"Container {s.name} is not running, hooks can not be executed")
-            if prehook := labels.get('backupbot.backup.pre-hook'):
-                pre_commands[container] = prehook
-            if posthook := labels.get('backupbot.backup.post-hook'):
-                post_commands[container] = posthook
-            backup_paths = backup_paths.union(
-                Path(VOLUME_PATH).glob(f"{stack_name}_*"))
-    return pre_commands, post_commands, list(backup_paths), list(backup_apps)
-
-
-def copy_secrets(apps):
-    rmtree(SECRET_PATH, ignore_errors=True)
-    os.mkdir(SECRET_PATH)
-    client = docker.from_env()
-    container_by_service = {
-        c.labels['com.docker.swarm.service.name']: c for c in client.containers.list()}
-    services = client.services.list()
-    for s in services:
-        app_name = s.attrs['Spec']['Labels']['com.docker.stack.namespace']
-        if (app_name in apps and
-                (app_secs := s.attrs['Spec']['TaskTemplate']['ContainerSpec'].get('Secrets'))):
-            if not container_by_service.get(s.name):
-                logging.error(
-                    f"Container {s.name} is not running, secrets can not be copied.")
-                continue
-            container_id = container_by_service[s.name].id
-            for sec in app_secs:
-                src = f'/var/lib/docker/containers/{container_id}/mounts/secrets/{sec["SecretID"]}'
-                dst = SECRET_PATH + sec['SecretName']
-                copyfile(src, dst)
-
-
-def run_commands(commands):
-    for container, command in commands.items():
-        if not command:
-            continue
-        # Use bash's pipefail to return exit codes inside a pipe to prevent silent failure
-        command = command.removeprefix('bash -c \'').removeprefix('sh -c \'')
-        command = command.removesuffix('\'')
-        command = f"bash -c 'set -o pipefail;{command}'"
-        result = container.exec_run(command)
-        logging.info(f"run command in {container.name}")
-        logging.info(command)
-        if result.exit_code:
-            logging.error(
-                f"Failed to run command {command} in {container.name}: {result.output.decode()}")
-        else:
-            logging.info(result.output.decode())
-
-
-def backup_volumes(backup_paths, apps, dry_run=False):
-    result = restic.backup(backup_paths, dry_run=dry_run, tags=apps)
-    print(result)
-    logging.info(result)
-
-
-@cli.command()
-@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
-@click.option('target', '--target', '-t', envvar='TARGET', default='/')
-@click.option('noninteractive', '--noninteractive', envvar='NONINTERACTIVE', default=False)
-def restore(snapshot, target, noninteractive):
-    # Todo: recommend to shutdown the container
-    service_paths = VOLUME_PATH
-    if SERVICE:
-        service_paths = service_paths + f'{SERVICE}_*'
-    snapshots = restic.snapshots(snapshot_id=snapshot)
-    if not snapshot:
-        logging.error("No Snapshots with ID {snapshots}")
-        exit(1)
-    if not noninteractive:
-        snapshot_date = datetime.fromisoformat(snapshots[0]['time'])
-        delta = datetime.now(tz=timezone.utc) - snapshot_date
-        print(f"You are going to restore Snapshot {snapshot} of {service_paths} at {target}")
-        print(f"This snapshot is {delta} old")
-        print(f"THIS COMMAND WILL IRREVERSIBLY OVERWRITES {target}{service_paths.removeprefix('/')}")
-        prompt = input("Type YES (uppercase) to continue: ")
-        if prompt != 'YES':
-            logging.error("Restore aborted")
-            exit(1)
-    print(f"Restoring Snapshot {snapshot} of {service_paths} at {target}")
-    result = restic.restore(snapshot_id=snapshot,
-                            include=service_paths, target_dir=target)
-    logging.debug(result)
-
-
-@cli.command()
-def snapshots():
-    snapshots = restic.snapshots()
-    for snap in snapshots:
-        if not SERVICE or (tags := snap.get('tags')) and SERVICE in tags:
-            print(snap['time'], snap['id'])
-
-
-@cli.command()
-@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
-@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
-def ls(snapshot, path):
-    results = list_files(snapshot, path)
-    for r in results:
-        if r.get('path'):
-            print(f"{r['ctime']}\t{r['path']}")
-
-
-def list_files(snapshot, path):
-    cmd = restic.cat.base_command() + ['ls']
-    if SERVICE:
-        cmd = cmd + ['--tag', SERVICE]
-    cmd.append(snapshot)
-    if path:
-        cmd.append(path)
-    output = restic.internal.command_executor.execute(cmd)
-    output = output.replace('}\n{', '}|{')
-    results = list(map(json.loads, output.split('|')))
-    return results
-
-
-@cli.command()
-@click.option('snapshot', '--snapshot', '-s', envvar='SNAPSHOT', default='latest')
-@click.option('path', '--path', '-p', envvar='INCLUDE_PATH')
-@click.option('volumes', '--volumes', '-v', is_flag=True)
-@click.option('secrets', '--secrets', '-c', is_flag=True)
-def download(snapshot, path, volumes, secrets):
-    if sum(map(bool, [path, volumes, secrets])) != 1:
-        logging.error("Please specify exactly one of '--path', '--volumes', '--secrets'")
-        exit(1)
-    if path:
-        path = path.removesuffix('/')
-        files = list_files(snapshot, path)
-        filetype = [f.get('type') for f in files if f.get('path') == path][0]
-        filename = "/tmp/" + Path(path).name
-        if filetype == 'dir':
-            filename = filename + ".tar"
-        output = dump(snapshot, path)
-        with open(filename, "wb") as file:
-            file.write(output)
-        print(filename)
-    elif volumes:
-        if not SERVICE:
-            logging.error("Please specify '--host' when using '--volumes'")
-            exit(1)
-        filename = f"/tmp/{SERVICE}.tar"
-        files = list_files(snapshot, VOLUME_PATH)
-        for f in files[1:]:
-            path = f[ 'path' ]
-            if SERVICE in path and f['type'] == 'dir':
-                content = dump(snapshot, path)
-                # Concatenate tar files (extract with tar -xi)
-                with open(filename, "ab") as file:
-                    file.write(content)
-    elif secrets:
-        if not SERVICE:
-            logging.error("Please specify '--host' when using '--secrets'")
-            exit(1)
-        filename = f"/tmp/SECRETS_{SERVICE}.json"
-        files = list_files(snapshot, SECRET_PATH)
-        secrets = {}
-        for f in files[1:]:
-            path = f[ 'path' ]
-            if SERVICE in path and f['type'] == 'file':
-                secret = dump(snapshot, path).decode()
-                secret_name = path.removeprefix(f'{SECRET_PATH}{SERVICE}_')
-                secrets[secret_name] = secret
-        with open(filename, "w") as file:
-            json.dump(secrets, file)
-        print(filename)
-
-def dump(snapshot, path):
-    cmd = restic.cat.base_command() + ['dump']
-    if SERVICE:
-        cmd = cmd + ['--tag', SERVICE]
-    cmd = cmd +[snapshot, path]
-    logging.debug(f"Dumping {path} from snapshot '{snapshot}'")
-    output = subprocess.run(cmd, capture_output=True)
-    if output.returncode:
-        logging.error(f"error while dumping {path} from snapshot '{snapshot}': {output.stderr}")
-        exit(1)
-    return output.stdout
-
-
-if __name__ == '__main__':
-    cli()
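For reference while reviewing the deletion: the old compose.yml mounted this file at `/usr/bin/backup`, so the Click CLI above was driven roughly as follows inside the container (app names and snapshot IDs are placeholders, following the old README's usage section):

```
backup create                                       # back up all labelled apps
backup --host wordpress_example_com create          # back up a single app
backup snapshots                                    # list snapshots
backup --host wordpress_example_com restore --snapshot <snapshot_id>
```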

@@ -1,13 +0,0 @@
----
-version: "3.8"
-services:
-  app:
-    environment:
-      - RESTIC_REPO_FILE=/run/secrets/restic_repo
-    secrets:
-      - restic_repo
-
-secrets:
-  restic_repo:
-    external: true
-    name: ${STACK_NAME}_restic_repo_${SECRET_RESTIC_REPO_VERSION}

@@ -5,19 +5,12 @@ services:
     environment:
       - SSH_KEY_FILE=/run/secrets/ssh_key
       - SSH_HOST_KEY
+      - SSH_HOST_KEY_DISABLE
     secrets:
       - source: ssh_key
         mode: 0400
-    configs:
-      - source: ssh_config
-        target: /root/.ssh/config
 
 secrets:
   ssh_key:
     external: true
     name: ${STACK_NAME}_ssh_key_${SECRET_SSH_KEY_VERSION}
-
-configs:
-  ssh_config:
-    name: ${STACK_NAME}_ssh_config_${SSH_CONFIG_VERSION}
-    file: ssh_config
							
								
								
									
compose.yml (32 changed lines)

@@ -5,50 +5,38 @@ services:
     image: docker:24.0.2-dind
     volumes:
       - "/var/run/docker.sock:/var/run/docker.sock"
-      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/"
-      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
-      - backups:/backups
+      - "/var/lib/docker/volumes/:/var/lib/docker/volumes/:ro"
     environment:
       - CRON_SCHEDULE
       - RESTIC_REPO
       - RESTIC_PASSWORD_FILE=/run/secrets/restic_password
+      - BACKUP_DEST=/backups
+      - RESTIC_HOST
+      - SERVER_NAME
+      - REMOVE_BACKUP_VOLUME_AFTER_UPLOAD=1
     secrets:
       - restic_password
     deploy:
       labels:
         - coop-cloud.${STACK_NAME}.version=0.1.0+latest
-        - coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-300}
-        - coop-cloud.backupbot.enabled=true
     configs:
       - source: entrypoint
         target: /entrypoint.sh
         mode: 0555
-      - source: backupbot
-        target: /usr/bin/backup
+      - source: backup
+        target: /backup.sh
         mode: 0555
     entrypoint: ['/entrypoint.sh']
-    deploy:
-      labels:
-        - "coop-cloud.backupbot.enabled=true"
-    healthcheck:
-      test: "pgrep crond"
-      interval: 30s
-      timeout: 10s
-      retries: 10
-      start_period: 5m
 
 secrets:
   restic_password:
     external: true
     name: ${STACK_NAME}_restic_password_${SECRET_RESTIC_PASSWORD_VERSION}
 
-volumes:
-  backups:
-
 configs:
   entrypoint:
     name: ${STACK_NAME}_entrypoint_${ENTRYPOINT_VERSION}
     file: entrypoint.sh
-  backupbot:
-    name: ${STACK_NAME}_backupbot_${BACKUPBOT_VERSION}
-    file: backupbot.py
+  backup:
+    name: ${STACK_NAME}_backup_${BACKUP_VERSION}
+    file: backup.sh
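After deploying, a couple of hedged sanity checks are possible; the `_app` service-name suffix is an assumption about the recipe layout, not something this diff states:

```
container=$(docker ps -q -f "name=<stack_name>_app")
docker exec "$container" crontab -l          # expect: <CRON_SCHEDULE> /backup.sh
docker exec "$container" /backup.sh --skip-upload
```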

@@ -1,20 +1,12 @@
 #!/bin/sh
 
-set -e -o pipefail
+set -e
 
-apk add --upgrade --no-cache restic bash python3 py3-pip
+apk add --upgrade --no-cache bash curl jq restic
 
-# Todo use requirements file with specific versions
-pip install click==8.1.7 docker==6.1.3 resticpy==1.0.2
-
-if [ -n "$SSH_HOST_KEY" ]
-then
-    echo "$SSH_HOST_KEY" > /root/.ssh/known_hosts
-fi
-
 cron_schedule="${CRON_SCHEDULE:?CRON_SCHEDULE not set}"
 
-echo "$cron_schedule backup create" | crontab -
+echo "$cron_schedule /backup.sh" | crontab -
 crontab -l
 
 crond -f -d8 -L /dev/stdout
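As a worked example of the `echo ... | crontab -` line above: with the `CRON_SCHEDULE='*/5 * * * *'` value from the new `.env.sample`, the installed crontab entry becomes:

```
*/5 * * * * /backup.sh
```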

@@ -1,3 +0,0 @@
-Breaking Change: the variables `SERVER_NAME` and `RESTIC_HOST` are merged into `RESTIC_REPO`. The format can be looked up here: https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
-ssh/sftp: `sftp:user@host:/repo-path`
-S3:  `s3:https://s3.example.com/bucket_name`

@@ -1,4 +0,0 @@
-Host *
-    IdentityFile    /run/secrets/ssh_key
-    ServerAliveInterval 60
-    ServerAliveCountMax 240