generated from coop-cloud/example

Compare commits: simon-add-... → main (38 commits)
@@ -18,6 +18,8 @@ steps:
      STACK_NAME: outline
      LETS_ENCRYPT_ENV: production
      APP_ENTRYPOINT_VERSION: v1
      DB_ENTRYPOINT_VERSION: v1
      PG_BACKUP_VERSION: v1
      SECRET_DB_PASSWORD_VERSION: v1
      SECRET_SECRET_KEY_VERSION: v1 # length=64
      SECRET_UTILS_SECRET_VERSION: v1 # length=64

@@ -36,7 +38,7 @@ steps:
        from_secret: drone_abra-bot_token
      fork: true
      repositories:
-       - coop-cloud/auto-recipes-catalogue-json
+       - toolshed/auto-recipes-catalogue-json

trigger:
  event: tag
@@ -8,6 +8,8 @@ DOMAIN=outline.example.com
#EXTRA_DOMAINS=', `www.outline.example.com`'
LETS_ENCRYPT_ENV=production

ENABLE_BACKUPS=true

COMPOSE_FILE="compose.yml"

# –––––––––––––––– REQUIRED ––––––––––––––––

@@ -39,7 +41,7 @@ WEB_CONCURRENCY=1
# Override the maxium size of document imports, could be required if you have
# especially large Word documents with embedded imagery
-MAXIMUM_IMPORT_SIZE=5120000
+FILE_STORAGE_IMPORT_MAX_SIZE=5120000

# You can remove this line if your reverse proxy already logs incoming http
# requests and this ends up being duplicative
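For scale, the import limit is given in bytes: the sample value `5120000` is roughly 5 MB. A sketch of raising it in an app's `.env` (the `26214400` value simply mirrors the `FILE_STORAGE_UPLOAD_MAX_SIZE` example used later in the README; it is not a recipe default):

```
# ~25 MB; raise or lower to taste
FILE_STORAGE_IMPORT_MAX_SIZE=26214400
```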
README.md (53 changed lines)
@@ -22,13 +22,12 @@ Wiki and knowledge base for growing teams
3. `abra app new ${REPO_NAME}`
   - **WARNING**: Choose "n" when `abra` asks if you'd like to generate secrets
4. `abra app config YOURAPPNAME` - be sure to change `$DOMAIN` to something that resolves to
   your Docker swarm box. For Minio, you'll want:
   - `AWS_ACCESS_KEY_ID=<minio username>`
   - `AWS_REGION="us-east-1"`
   - `AWS_S3_UPLOAD_BUCKET_URL=https://minio.example.com`
   - `AWS_S3_UPLOAD_BUCKET_NAME=
5. `abra app deploy YOURAPPNAME`
7. Open the configured domain in your browser to finish set-up
   your Docker swarm box
5. Insert secrets:
   - `abra app secret insert YOURAPPNAME secret_key v1 $(openssl rand -hex 32)` #12
   - `abra app secret generate -a YOURAPPNAME`
6. `abra app deploy YOURAPPNAME`
8. Open the configured domain in your browser to finish set-up

[`abra`]: https://git.coopcloud.tech/coop-cloud/abra
[`coop-cloud/traefik`]: https://git.coopcloud.tech/coop-cloud/traefik
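Read end to end, the refreshed install steps amount to something like this (a sketch only; it assumes the recipe name is `outline` and uses `YOURAPPNAME` as a stand-in for your app's domain):

```
abra app new outline
abra app config YOURAPPNAME
abra app secret insert YOURAPPNAME secret_key v1 $(openssl rand -hex 32)
abra app secret generate -a YOURAPPNAME
abra app deploy YOURAPPNAME
```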
@@ -41,14 +40,6 @@ Wiki and knowledge base for growing teams
abra app cmd YOURAPPNAME app create_email_user test@example.com
```

### Post-deploy migration

```
abra app cmd YOURAPPNAME app migrate
```

_As of 2022-03-30, this requires `abra` RC version, run `abra upgrade --rc`._

### Setting up your `.env` config

Avoid the use of quotes (`"..."`) as much as possible; the NodeJS scripts flip out for some reason on some vars.
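In practice that means writing bare values wherever you can; a small illustration (the variable is picked arbitrarily from this recipe's env, the value is made up):

```
# preferred: bare value
OIDC_DISPLAY_NAME=Keycloak

# can trip up Outline's NodeJS config handling on some variables
OIDC_DISPLAY_NAME="Keycloak"
```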
@@ -61,14 +52,30 @@ Where `<username-to-delete>` is the username of the user to be removed, and
`<username-to-replace>` is the username of another user, to assign documents and
revisions to (instead of deleting them).

_As of 2022-03-30, this requires `abra` RC version, run `abra upgrade --rc`._

### Migrate from S3 to local storage

## Single Sign On with Keycloak

- `abra app config <domain>`, add
  - `COMPOSE_FILE="$COMPOSE_FILE:compose.local.yml"`
  - `FILE_STORAGE_UPLOAD_MAX_SIZE=26214400`
- `abra app deploy <domain> -f`
  - compose.aws.yml should still be deployed!
- `abra app undeploy <domain>`
- on the Docker host, find the mountpoint of the newly created volume via `docker volume ls` and `docker volume inspect`
  - the volume name is something like `<domain>_storage-data`
  - take note of which Linux user owns `<storage_mountpoint>` (likely `1001`)
- use s3cmd/rclone/... to sync your bucket to `<storage_mountpoint>` (see the sketch after this list)
- `chown -R <storage_user>:<storage_user> <storage_mountpoint>`
- `abra app config <domain>`, switch the storage backend:
  - remove the `AWS_*` vars, `SECRET_AWS_SECRET_KEY_VERSION` and `COMPOSE_FILE="$COMPOSE_FILE:compose.aws.yml"`
  - set `FILE_STORAGE=local`
- `abra app deploy <domain> -f`
- enjoy getting rid of S3 🥳
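A sketch of the sync and ownership steps on the Docker host (the rclone remote name `minio`, the bucket name, and the example volume name are assumptions; substitute your own):

```
# locate the mountpoint of the freshly created storage volume
MOUNT=$(docker volume inspect outline_example_com_storage-data --format '{{ .Mountpoint }}')

# pull the bucket contents into the volume (s3cmd or any other sync tool works as well)
rclone sync minio:outline-bucket "$MOUNT"

# hand the files to the user the app runs as (likely 1001)
chown -R 1001:1001 "$MOUNT"
```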
`abra app config YOURAPPNAME`, then uncomment everything in the `OIDC_` section.

## Single Sign On with Keycloak/Authentik

Create a new client in Keycloak:

- **Valid Redirect URIs**: `https://YOURAPPDOMAIN/auth/oidc.callback`

`abra app deploy YOURAPPDOMAIN`

- Create an OIDC client in Keycloak (in Authentik this is called a provider and application)
- Run `abra app config YOURAPPNAME`, then uncomment everything in the `OIDC_` section
- **Valid Redirect URIs**: `https://YOURAPPDOMAIN/auth/oidc.callback`
- Reference the client/provider info to populate the `_AUTH_URI`, `_TOKEN_URI` and `_USERINFO_URI` values
- Set the OIDC secret using the value from the client/provider: `abra app secret insert YOURAPPNAME oidc_client_secret v1 SECRETVALUE`
- `abra app deploy YOURAPPDOMAIN`
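Concretely, uncommenting the `OIDC_` section could end up looking roughly like this in the app's env (a sketch; the URIs simply reuse the Authentik examples from `alaconnect.yml` below, and the `OIDC_ENABLED` value is an assumption, so keep whatever the sample config ships):

```
COMPOSE_FILE="$COMPOSE_FILE:compose.oidc.yml"
OIDC_ENABLED=true            # assumed value; uncomment the line the sample env provides
OIDC_CLIENT_ID=outline
OIDC_AUTH_URI=https://authentik.example.com/application/o/authorize/
OIDC_TOKEN_URI=https://authentik.example.com/application/o/token/
OIDC_USERINFO_URI=https://authentik.example.com/application/o/userinfo/
OIDC_DISPLAY_NAME=Authentik
SECRET_OIDC_CLIENT_SECRET_VERSION=v1
```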
abra.sh (7 changed lines)
@@ -1,5 +1,6 @@
-export APP_ENTRYPOINT_VERSION=v8
+export APP_ENTRYPOINT_VERSION=v9
export DB_ENTRYPOINT_VERSION=v2
+export PG_BACKUP_VERSION=v1

create_email_user() {
  if [ -z "$1" ]; then

@@ -19,6 +20,10 @@ migrate() {
  yarn db:migrate --env=production-ssl-disabled
}

+generate_secret() {
+  abra app secret insert $DOMAIN secret_key v1 $(openssl rand -hex 32)
+}
+
delete_user_by_id() {
  if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Usage: ... delete_user_by_id <userid-to-delete> <userid-to-replace>"
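These helpers are run through `abra`, following the pattern the README already uses for `create_email_user` and `migrate` (a sketch; the user IDs are placeholders):

```
abra app cmd YOURAPPNAME app migrate
abra app cmd YOURAPPNAME app create_email_user test@example.com
abra app cmd YOURAPPNAME app delete_user_by_id <userid-to-delete> <userid-to-replace>
```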
alaconnect.yml (new file, 15 lines)
@@ -0,0 +1,15 @@
authentik:
  env:
    OIDC_CLIENT_ID: outline
    OIDC_AUTH_URI: https://authentik.example.com/application/o/authorize/
    OIDC_TOKEN_URI: https://authentik.example.com/application/o/token/
    OIDC_USERINFO_URI: https://authentik.example.com/application/o/userinfo/
    OIDC_DISPLAY_NAME: "Authentik"
  uncomment:
    - compose.oidc.yml
    - OIDC_ENABLED
    - OIDC_USERNAME_CLAIM
    - OIDC_SCOPES
    - SECRET_OIDC_CLIENT_SECRET_VERSION
  shared_secrets:
    outline_secret: oidc_client_secret
compose.yml (31 changed lines)
@@ -6,7 +6,7 @@ services:
    networks:
      - backend
      - proxy
-    image: outlinewiki/outline:0.73.1
+    image: outlinewiki/outline:0.82.0
    secrets:
      - db_password
      - secret_key

@@ -34,19 +34,20 @@ services:
      - "traefik.http.routers.${STACK_NAME}.rule=Host(`${DOMAIN}`${EXTRA_DOMAINS})"
      - "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
      - "traefik.http.routers.${STACK_NAME}.tls.certresolver=${LETS_ENCRYPT_ENV}"
-      - "coop-cloud.${STACK_NAME}.version=1.1.0+0.73.1"
-      ## Redirect from EXTRA_DOMAINS to DOMAIN
-      #- "traefik.http.routers.${STACK_NAME}.middlewares=${STACK_NAME}-redirect"
-      #- "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLForceHost=true"
-      #- "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLHost=${DOMAIN}"
+      - "coop-cloud.${STACK_NAME}.version=2.9.0+0.82.0"
+      # Redirect from EXTRA_DOMAINS to DOMAIN
+      - "traefik.http.routers.${STACK_NAME}.middlewares=${STACK_NAME}-redirect"
+      - "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLForceHost=true"
+      - "traefik.http.middlewares.${STACK_NAME}-redirect.headers.SSLHost=${DOMAIN}"
+      - "coop-cloud.${STACK_NAME}.timeout=${TIMEOUT:-80}"

  cache:
-    image: redis:7.2.3
+    image: redis:7.4.2
    networks:
      - backend

  db:
-    image: postgres:15.5
+    image: postgres:17.3
    networks:
      - backend
    secrets:

@@ -55,6 +56,9 @@ services:
      - source: db_entrypoint
        target: /docker-entrypoint.sh
        mode: 0555
+      - source: pg_backup
+        target: /pg_backup.sh
+        mode: 0555
    environment:
      POSTGRES_DB: outline
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

@@ -64,10 +68,10 @@ services:
    entrypoint: /docker-entrypoint.sh
    deploy:
      labels:
-        backupbot.backup: "true"
-        backupbot.backup.path: "/tmp/dump.sql.gz"
-        backupbot.backup.post-hook: "rm -f /tmp/dump.sql.gz"
-        backupbot.backup.pre-hook: "sh -c 'PGPASSWORD=$$(cat $${POSTGRES_PASSWORD_FILE}) pg_dump -U outline outline | gzip > /tmp/dump.sql.gz'"
+        backupbot.backup: "${ENABLE_BACKUPS:-true}"
+        backupbot.backup.pre-hook: "/pg_backup.sh backup"
+        backupbot.backup.volumes.postgres_data.path: "backup.sql"
+        backupbot.restore.post-hook: '/pg_backup.sh restore'

secrets:
  secret_key:

@@ -97,3 +101,6 @@ configs:
    name: ${STACK_NAME}_db_entrypoint_${DB_ENTRYPOINT_VERSION}
    file: entrypoint.postgres.sh.tmpl
    template_driver: golang
+  pg_backup:
+    name: ${STACK_NAME}_pg_backup_${PG_BACKUP_VERSION}
+    file: pg_backup.sh
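Since the backup label is now templated as `${ENABLE_BACKUPS:-true}`, scheduled dumps can be switched off per app from the env file alone (a sketch; `YOURAPPNAME` is a placeholder):

```
# in the app's .env, via `abra app config YOURAPPNAME`
ENABLE_BACKUPS=false

# redeploy so the stack picks up the changed label
abra app deploy YOURAPPNAME -f
```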
pg_backup.sh (new file, 34 lines)
@@ -0,0 +1,34 @@
#!/bin/bash

set -e

BACKUP_FILE='/var/lib/postgresql/data/backup.sql'

function backup {
    export PGPASSWORD=$(cat $POSTGRES_PASSWORD_FILE)
    pg_dump -U ${POSTGRES_USER} ${POSTGRES_DB} > $BACKUP_FILE
}

function restore {
    cd /var/lib/postgresql/data/
    restore_config(){
        # Restore allowed connections
        cat pg_hba.conf.bak > pg_hba.conf
        su postgres -c 'pg_ctl reload'
    }
    # Don't allow any other connections than local
    cp pg_hba.conf pg_hba.conf.bak
    echo "local all all trust" > pg_hba.conf
    su postgres -c 'pg_ctl reload'
    trap restore_config EXIT INT TERM

    # Recreate Database
    psql -U ${POSTGRES_USER} -d postgres -c "DROP DATABASE ${POSTGRES_DB} WITH (FORCE);"
    createdb -U ${POSTGRES_USER} ${POSTGRES_DB}
    psql -U ${POSTGRES_USER} -d ${POSTGRES_DB} -1 -f $BACKUP_FILE

    trap - EXIT INT TERM
    restore_config
}

$@
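The script is mounted at `/pg_backup.sh` in the db container and driven by the backupbot pre/post hooks above; it can also be run by hand, which could look like this (a sketch; the container lookup by service name is an assumption about your swarm's naming):

```
# write /var/lib/postgresql/data/backup.sql inside the running db container
# replace outline_db with <your stack name>_db
docker exec $(docker ps -q -f name=outline_db) /pg_backup.sh backup

# restore from that dump (temporarily limits pg_hba.conf to local connections)
docker exec $(docker ps -q -f name=outline_db) /pg_backup.sh restore
```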
release/2.0.0+0.74.0 (new file, 4 lines)
@@ -0,0 +1,4 @@
Due to the introduction of local storage, you need to adapt your config to continue using S3 storage. Just add the following lines to your config:

FILE_STORAGE=s3
COMPOSE_FILE="$COMPOSE_FILE:compose.aws.yml"
release/2.9.1+0.82.0 (new file, 1 line)
@@ -0,0 +1 @@
Fixes a problem where deployments were consistently giving a timeout response even though they were successful