WIP: Backup labels #9

Draft
wolcen wants to merge 9 commits from wolcen/hedgedoc:main into main
4 changed files with 21 additions and 24 deletions


@@ -8,7 +8,7 @@ DOMAIN=hedgedoc.example.com
LETS_ENCRYPT_ENV=production
SECRET_DB_PASSWORD_VERSION=v1
SECRET_CMD_SESSION_SECRET=v1
COMPOSE_FILE="compose.yml"
# OAuth, see https://docs.hedgedoc.org/guides/auth/keycloak/
@@ -42,6 +42,7 @@ COMPOSE_FILE="compose.yml"
# CMD_CSP_REPORTURI=undefined
# CMD_DEFAULT_PERMISSION=editable
# CMD_EMAIL=true
+ # CMD_REQUIRE_FREEURL_AUTHENTICATION=false
# CMD_SESSION_LIFE=1209600000
# Only present in config.json (no equivalent env var):
# DOCUMENT_MAX_LENGTH=100000


@@ -25,8 +25,8 @@
5. `abra app deploy YOURAPPDOMAIN`
6. Create initial user:
```
- abra app YOURAPPDOMAIN run app bash
- . /docker-entrypoint2.sh -e
+ abra app run YOURAPPDOMAIN app bash
+ . /docker-entrypoint.sh -e
bin/manage_users
[hedegedoc]: https://github.com/hedgedoc/hedgedoc

abra.sh

@@ -1,13 +1 @@
- export ENTRYPOINT_CONF_VERSION=v8
- abra_backup_app() {
-   _abra_backup_dir "app:/home/hackmd/app/public/uploads/"
- }
- abra_backup_db() {
-   _abra_backup_postgres "db" "codimd" "codimd" "db_password"
- }
- abra_backup() {
-   abra_backup_app && abra_backup_db
- }
+ export ENTRYPOINT_CONF_VERSION=v9


@@ -25,6 +25,7 @@ services:
- CMD_CSP_REPORTURI
- CMD_DEFAULT_PERMISSION
- CMD_EMAIL
+ - CMD_REQUIRE_FREEURL_AUTHENTICATION
- CMD_SESSION_LIFE
- DOCUMENT_MAX_LENGTH
depends_on:
@@ -33,7 +34,7 @@ services:
- proxy
- internal
volumes:

Shall we do `codimd_uploads` -> `hedgedoc_uploads` also while we're here? Would be a breaking change on the recipe, people would need to migrate their volumes, but idk how many folks are using this right now? Up to you!
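
If we do rename it, migrating an existing deployment could look roughly like this (a sketch only: the stack-prefixed volume names are illustrative, check `docker volume ls` on the server for the real ones):

```
# Stop the app, copy the old volume's contents into the new one, then redeploy.
abra app undeploy YOURAPPDOMAIN
docker volume create example_com_hedgedoc_uploads
docker run --rm \
  -v example_com_codimd_uploads:/old \
  -v example_com_hedgedoc_uploads:/new \
  alpine sh -c "cp -a /old/. /new/"
abra app deploy YOURAPPDOMAIN
```

The old `codimd_uploads` volume could then be dropped once the new one is confirmed to have the files.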

I'm not personally sure it's worth breaking people's configs, but there are bigger issues than that, really. I'm a bit concerned about the upgrade path for existing deployments - are there upgrade hooks for this? (For example, it would be great to do a one-time tar/gz of /hedgedoc/public/uploads if it's still sitting in the container.) People's volumes will be pointing at the wrong folder, backups won't have been working, and just restarting the instance would otherwise kill their uploaded files, if I understand correctly?

Backups were not previously working (for me, at least), but at least that's what the recipe reported, so hopefully no one was really trusting it for long-term pads (which we do intend to have).

FWIW, all the env vars for hedgedoc still use CodiMD's old `CMD_` prefix, but it would probably be nice to see `hedgedoc_uploads` in your volume list.

If you'd prefer, I can do them in a separate merge/PR, but I'm changing a few more things anyway: I've added a health check for Postgres, plus another `CMD_` setting that I'll be using. I'm a little unclear on how the versioning works at present, but just a note that if this requires another version bump, more settings are inbound - one of which is the session secret, so logins survive restarts.

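For the one-time safety net mentioned above, something along these lines might do (a sketch; it assumes the orphaned files sit at the new `/hedgedoc/public/uploads` path inside the container, which is my reading rather than anything the PR guarantees):

```
# Archive the uploads directory from inside the running app container before upgrading,
# since a restart would otherwise drop any files that never landed in the mounted volume.
abra app run YOURAPPDOMAIN app bash -c \
  "tar czf /tmp/uploads-backup.tar.gz -C /hedgedoc/public uploads"
```

The archive would still need copying out (e.g. `docker cp` on the host) before redeploying, since `/tmp` lives in the container filesystem too.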

Oh yeh, v reasonable - maybe bundling all your changes into a new major recipe release version is the way to go? The upgrade paths are sometimes a bit ad-hoc, yeh, so we have this "release notes" feature 👉 https://docs.coopcloud.tech/maintainers/handbook/#how-do-i-write-version-release-notes 👈 You can write a short guide which will turn up in the shell when people run `upgrade`, so they realise they need to do some work before upgrading. Anyway, up to you!
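
For reference, my understanding of that handbook page is that release notes are just a plain-text file in the recipe repo's `release/` directory, named after the new version - e.g. something like this for a hypothetical `1.0.0+1.9.9` bump:

```
# In the recipe repo: add a note that abra shows people before the upgrade runs.
mkdir -p release
cat > release/1.0.0+1.9.9 <<'EOF'
The uploads volume path and backup labels have changed. Before upgrading,
archive /hedgedoc/public/uploads from the running container, then redeploy
and restore the files into the uploads volume.
EOF
```
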
- - codimd_uploads:/home/hackmd/app/public/uploads
+ - codimd_uploads:/hedgedoc/public/uploads
secrets:
- db_password
entrypoint: /docker-entrypoint.sh
@@ -43,7 +44,6 @@ services:
mode: 0555
- source: config_json
target: /files/config.json
mode: 0555
deploy:
restart_policy:
condition: on-failure
@@ -57,6 +57,8 @@ services:
- "traefik.http.routers.${STACK_NAME}.middlewares=${STACK_NAME}-redirect"
- "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLForceHost=true"
- "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLHost=${DOMAIN}"
+ - "backupbot.backup=true"
+ - "backupbot.backup.path=/hedgedoc/public/uploads"
- coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-120}
- coop-cloud.${STACK_NAME}.version=0.6.0+1.9.9
healthcheck:
@@ -79,12 +81,18 @@ services:
- internal
deploy:
labels:
- backupbot.backup: "true"
- backupbot.backup.pre-hook: "mkdir -p /tmp/backup/ && PGPASSWORD=$$(cat $${POSTGRES_PASSWORD_FILE}) pg_dump -U $${POSTGRES_USER} $${POSTGRES_DB} > /tmp/backup/backup.sql"
- backupbot.backup.post-hook: "rm -rf /tmp/backup"
- backupbot.backup.path: "/tmp/backup/"
- backupbot.restore: "true"
- backupbot.restore.post-hook: "sh -c 'psql -U $${POSTGRES_USER} -d $${POSTGRES_DB} < ./backup.sql && rm -f ./backup.sql'"
+ backupbot.backup: "true"
+ backupbot.backup.pre-hook: "mkdir -p /tmp/backup/ && PGPASSWORD=$$(cat $${POSTGRES_PASSWORD_FILE}) pg_dump -U $${POSTGRES_USER} $${POSTGRES_DB} > /tmp/backup/backup.sql"
+ backupbot.backup.post-hook: "rm -rf /tmp/backup"
+ backupbot.backup.path: "/tmp/backup/"
+ backupbot.restore: "true"
+ backupbot.restore.post-hook: "sh -c 'psql -U $${POSTGRES_USER} -d $${POSTGRES_DB} < ./backup.sql && rm -f ./backup.sql'"
+ healthcheck:
+ test: "pg_isready"
+ interval: 30s
+ timeout: 10s
+ retries: 5
+ start_period: 1m
volumes:
postgres:
codimd_uploads: