Packaging handbook

Create a new recipe

You can run abra recipe new <recipe> to generate a new ~/.abra/recipes/<recipe> repository. The generated repository is a copy of coop-cloud/example.

Hacking on an existing recipe

If you want to make changes to an existing recipe then you can simply edit the files in ~/.abra/recipes/<recipe-name> and pass --chaos to the deploy command when deploying those changes. abra will not deploy unstaged changes to avoid instability but you can tell it to do so with --chaos. This means you can simply hack away on the existing recipe files on your local file system and then, when something is working, submit a change request to the recipe upstream.
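As a rough sketch of that workflow (the recipe name and app domain here are just placeholders):

cd ~/.abra/recipes/myrecipe
# ... edit compose.yml, .env.sample, abra.sh, etc. ...
abra app deploy --chaos myapp.example.com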

How is a recipe structured?

compose.yml

This is a compose specification compliant file that contains a list of: services, secrets, networks, volumes and configs. It describes what is needed to run an app. Whenever you deploy an app, abra reads this file.

.env.sample

This file is a skeleton for the environment variables that should be adjusted by the user. Examples include: domain or PHP extension list. Whenever you create a new app with abra app new, this file gets copied to ~/.abra/servers/<server-domain>/<app-domain>.env and when you run abra app config <app-domain> you're editing this file.
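As a purely illustrative sketch (the variable names besides DOMAIN and LETS_ENCRYPT_ENV are assumptions and will vary per recipe), a .env.sample might contain something like:

# .env.sample
TYPE=myrecipe
DOMAIN=myrecipe.example.com
LETS_ENCRYPT_ENV=production
PHP_EXTENSIONS=gd,imagick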

abra.sh

The abra.sh provides versions for configs that are vendored by the recipe maintainer. See this handbook entry for more.

entrypoint.sh

After Docker creates the filesystem and copies files into a new container, it runs what's called an entrypoint. This is usually a shell script that exports some variables and runs the application. Sometimes the vendor entrypoint doesn't do everything that we need it to do. In that case, you can write your own entrypoint, do whatever you need to do and then run the vendor entrypoint.

For a simple example, check the entrypoint.sh for croc. In this case, croc needs the password to be exported as an environment variable called CROC_PASS, and that is exactly what the entrypoint does before running the vendor entrypoint.
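A minimal sketch of that idea (not the actual croc entrypoint; the secret name and vendor entrypoint path are assumptions):

#!/bin/sh
# export the password from the Docker secret before handing over to the vendor
export CROC_PASS=$(cat /run/secrets/password)
# then run the vendor entrypoint (path is an assumption for this image)
exec /vendor-entrypoint.sh "$@"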

If you write your own entrypoint, it needs to be specified in the config section of compose.yml. See this handbook entry for more.

releases/ directory

This directory contains text files whose names correspond to the recipe versions which have been released and contain useful tips for operators who are doing upgrade work. See this handbook entry for more.

Optional compose files

For example, compose.smtp.yml. These are used to provide non-essential functionality such as (registration) e-mails or single sign-on. These are typically loaded by specifying COMPOSE_FILE="compose.yml:compose.smtp.yml" in your app .env configuration. abra then knows to include these optional files at deploy time. abra uses the usual docker-compose configuration merging technique when merging all the compose.**.yml files together at deploy time.
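For example, to enable the SMTP compose file for an app, the relevant line in the app's .env would look like:

# ~/.abra/servers/<server-domain>/<app-domain>.env
COMPOSE_FILE="compose.yml:compose.smtp.yml"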

Additional configs

If you look at a compose.yml file and see a configs section, that means this compose file is putting files in the container. This might be used for changing default (vendor) configuration, such as this fpm-tune.ini file used to adjust php-fpm. See this handbook entry for more.

Manage configs

To add additional files into the container, you can use Docker configs. This usually involves the following:

  1. Create the file and add it to your recipe repository
  2. Create an entry for this config in your configs: ... global stanza
  3. Create an entry on the service configuration configs: ... stanza
  4. Vendor a version in the abra.sh of the recipe

An example of a config is an entrypoint, a script run at container run time.

# compose.yml
services:
  app:
    configs:
      - source: nginx_config
        target: /etc/nginx/nginx.conf

configs:
  nginx_config:
    name: ${STACK_NAME}_nginx_config_${NGINX_CONFIG_VERSION}
    file: nginx.conf.tmpl
    template_driver: golang

Because configurations are maintained in-repository by maintainers, we version them ourselves. This means that config changes are seamless to operators unless they cause breaking changes, which should be signalled in the new version and release notes. This is in distinction to secrets, which are managed by operators. For example, operators may need to rotate secrets on a running deployment and should be able to do so at any time. We put the versions in the abra.sh file.

# abra.sh
export NGINX_CONFIG_VERSION=v1

Manage environment variables

When you define an environment variable in a .env.sample for a recipe, such as:

FOO=123

And you pass this via the environment stanza of a service config in the recipe like so:

services:
  app:
    environment:
      - FOO

Then your environment variable will be threaded into the running app at deploy time. If you run abra app run <domain> app env | grep FOO then you'll see it exposed.

You can also access it in your configs using the following syntax:

{{ env "FOO" }}

Manage secret data

Adding a secret to your recipe is done as follows:

  1. Create an entry in the secrets: ... global stanza
  2. Add the <SECRET-NAME>_VERSION=v1 to your .env.sample
  3. Ensure that the secret is listed on the service configuration under secrets: ...

It might look something like this:

# compose.yml
services:
  app:
    secrets:
      - db_password

secrets:
  db_password:
    external: true
    name: ${STACK_NAME}_db_password_${SECRET_DB_PASSWORD_VERSION}

Operators manage the secret versions themselves. So we provide a version hook in the environment variables which they control. This allows operators to deal with things like secret rotation without having to rely on recipe maintainers.

# .env.sample
SECRET_DB_PASSWORD_VERSION=v1

If you need to access this secret in a config, say:

configs:
  someconfig:
    name: ${STACK_NAME}_someconfig_${SOME_CONFIG_VERSION}
    file: entrypoint.sh.tmpl
    template_driver: golang

Don't forget template_driver: golang; it won't work otherwise.

Then you can use the following syntax to access the secret:

# someconfig.conf
{{ secret "db_password" }}

Entrypoints

Custom entrypoints

Custom entrypoints can be useful for installing additional dependencies or setting up configuration that upstream doesn't have or want to have.

Here's a trimmed down config. The general idea is to create a new config, insert it into the container at a specific location, and then have the compose configuration tell the underlying image to run this new script as the entrypoint.

You typically don't want to completely override the upstream entrypoint of the image you're using, so in the last line of your entrypoint, you can run the upstream entrypoint.

services:
  app:
    entrypoint: /docker-entrypoint.sh
    configs:
      - source: app_entrypoint
        target: /docker-entrypoint.sh
        mode: 0555

configs:
  app_entrypoint:
    name: ${STACK_NAME}_app_entrypoint_${APP_ENTRYPOINT_VERSION}
    file: entrypoint.sh.tmpl
    template_driver: golang
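The matching entrypoint.sh.tmpl could then be a sketch like the following (the upstream entrypoint path is an assumption and depends on the image you're using):

#!/bin/bash
# entrypoint.sh.tmpl

set -e

# ... your additional setup goes here ...

# hand over to the upstream entrypoint so the vendor logic still runs
exec /usr/local/bin/upstream-entrypoint.sh "$@"

And as with any config, remember to vendor a version such as APP_ENTRYPOINT_VERSION=v1 in the recipe's abra.sh.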

Exposing secrets

Sometimes apps expect to find a secret in their environment which is not possible with the default compose configuration approach. This requires a hack using an entrypoint. The hack is basically this (assume we want to expose a secret called db_password):

  1. Setup the secret as per usual in secrets: ...
  2. Pass a DB_PASSWORD_FILE=/run/secrets/db_password in via the environment: ...
  3. Create an entrypoint and inside it, use the following boilerplate.
#!/bin/bash

set -e

file_env() {
   local var="$1"
   local fileVar="${var}_FILE"
   local def="${2:-}"

   if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
      echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
      exit 1
   fi

   local val="$def"

   if [ "${!var:-}" ]; then
      val="${!var}"
   elif [ "${!fileVar:-}" ]; then
      val="$(< "${!fileVar}")"
   fi

   export "$var"="$val"
   unset "$fileVar"
}

And then, to expose your secret to the container environment, use the following in a line below this function:

file_env "DB_PASSWORD"

/bin/bash is missing?

Sometimes containers don't even have Bash installed on them. In that case, either use /bin/sh or install Bash in your entrypoint script :upside_down: The entrypoint secrets hack listed above doesn't work here (as it requires Bash), so instead you can just do export FOO=$(cat /run/secrets/<secret-name>).
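A minimal POSIX sh sketch of that approach (the secret, variable and vendor entrypoint names are placeholders):

#!/bin/sh

set -e

# no Bash available, so read the secret directly
export FOO=$(cat /run/secrets/foo)

exec /vendor-entrypoint.sh "$@"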

Reference services in configs?

When referencing an app service in a config file, you should prefix it with the STACK_NAME to avoid namespace conflicts (because all these containers sit on the traefik overlay network). You might want to do something like this: {{ env "STACK_NAME" }}_app (using the often obscure dark magic of the Golang templating language). You can find examples of this approach in the Peertube recipe.
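For instance, an nginx-style config template (purely illustrative; the service name and port are assumptions) might reference the app service like so:

# someconfig.conf.tmpl
upstream backend {
  server {{ env "STACK_NAME" }}_app:9000;
}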

How are recipes versioned?

We'll use an example to work through this. Let's use Gitea.

The Gitea project maintains a version, e.g. 1.14.3. This version uses the semver strategy for communicating what type of changes are included in each version, e.g., if there is a breaking change, Gitea will release a new version as 2.0.0.

However, there are other types of changes that can happen for a recipe. Perhaps the database image gets a breaking update or the actual recipe configuration changes some environment variable. This can mean that end-users of the recipe need to do some work to make sure their updates will deploy successfully.

Therefore, we maintain an additional version part in front of the project version. So, the first release of the Gitea recipe in the Co-op Cloud project has the version 1.0.0+1.14.3. The x.y.z+ part is the version that the recipe maintainer manages. If a new Gitea version comes out as 1.15, then the recipe maintainer will publish 1.1.0+1.15, as this is a backwards compatible update, following semantic versioning.

In all cases, we follow the semver semantics. So, if we upgrade the Gitea recipe from 1.14.3 to 1.15.3, we still publish 1.1.0+1.15.3 as our recipe version. In this case, we skipped a few patch releases but it was all backwards compatible, so we only increment the minor version part.

How do I release a new recipe version?

The commands used for dealing with recipe versioning in abra are:

  • abra recipe upgrade: upgrade the image tags in the compose configs of a recipe
  • abra recipe sync: upgrade the deploy labels to match the new recipe version
  • abra recipe release: publish a git tag for the recipe repo

The abra recipe publishing commands have been designed to complement a semi-automatic workflow. If abra breaks or doesn't understand what is going on, you can always finish the process manually with a few Git commands and a bit of luck. We designed abra this way due to the chaotic nature of container image versioning schemes.

Let's take a practical example, publishing a new version of Wordpress.

If we run abra recipe upgrade wordpress (at the time of writing), we end up with a prompt to upgrade Wordpress to 5.9.0. We can skip the database upgrade for now. Here is what that looks like:

➜  ~ abra recipe upgrade wordpress
? upgrade to which tag? (service: app, image: wordpress, tag: 5.8.3) 5.9.0
? upgrade to which tag? (service: db, image: mariadb, tag: 10.6) skip
WARN[0004] not upgrading mariadb, skipping as requested

Now, what happened? abra queried the upstream container repositories of all the images listed in the Wordpress recipe configuration and checked if there are new tags available. Once you make some choices on the prompt, abra will update the recipe configurations. Let's take a look by running cd ~/.abra/recipes/wordpress && git diff:

diff --git a/compose.yml b/compose.yml
index 1618ef5..6cd754d 100644
--- a/compose.yml
+++ b/compose.yml
@@ -3,7 +3,7 @@ version: "3.8"

 services:
   app:
-    image: "wordpress:5.8.3"
+    image: "wordpress:5.9.0"
     volumes:
       - "wordpress_content:/var/www/html/wp-content/"
     networks:

!!! warning "Here be versioning dragons"

`abra` doesn't understand all image tags unfortunately. There are limitations which we're still running into. You can pass `-a` to have `abra` list all available image tags from the upstream repository and then make a choice manually. See [`tagcmp`](https://git.coopcloud.tech/coop-cloud/tagcmp) for more info on how we implement image parsing.

Next, we need to update the deploy label in the recipe. We can do that with abra recipe sync wordpress. You'll be prompted with a question asking what kind of upgrade this is. Take a moment to read the output and if it still doesn't make sense, read this. Since we're upgrading from 5.8.3 -> 5.9.0, this is a minor release, so we choose minor:

➜  wordpress (master) ✗ abra recipe sync wordpress
...
INFO[0088] synced label coop-cloud.${STACK_NAME}.version=1.1.0+5.9.0 to service app

Once again, we can run cd ~/.abra/recipes/wordpress && git diff to see what abra has done for us:

diff --git a/compose.yml b/compose.yml
index 1618ef5..4a08db6 100644
--- a/compose.yml
+++ b/compose.yml
@@ -3,7 +3,7 @@ version: "3.8"

 services:
   app:
-    image: "wordpress:5.8.3"
+    image: "wordpress:5.9.0"
     volumes:
       - "wordpress_content:/var/www/html/wp-content/"
     networks:
@@ -48,7 +48,7 @@ services:
         #- "traefik.http.routers.${STACK_NAME}.rule=HostRegexp(`{subdomain:.+}.${DOMAIN}`, `${DOMAIN}`)"
         - "traefik.http.routers.${STACK_NAME}.tls.certresolver=${LETS_ENCRYPT_ENV}"
         - "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
-        - "coop-cloud.${STACK_NAME}.version=1.0.2+5.8.3"
+        - "coop-cloud.${STACK_NAME}.version=1.1.0+5.9.0"
         - "backupbot.backup=true"
         - "backupbot.backup.path=/var/www/html"

You'll notice that abra figured out how to upgrade the Co-op Cloud version label according to our choice: 1.0.2 -> 1.1.0 is a minor update.

At this point, we're all set, we can run abra recipe release --publish wordpress. This will do the following:

  1. run git commit to commit the new changes
  2. run git tag to create a new git tag named 1.1.0+5.9.0
  3. run git push to publish changes to the Wordpress repository

!!! warning "Here be more SSH dragons"

In order to have `abra` publish changes for you automatically, you'll have to have write permissions to the git.coopcloud.tech repository and your account must have a working SSH key configuration. `abra` will use the SSH based URL connection details for Git by automagically creating an `origin-ssh` remote in the repository and pushing to it.

Here is the output:

WARN[0000] discovered 1.1.0+5.9.0 as currently synced recipe label
WARN[0000] previous git tags detected, assuming this is a new semver release
? current: 1.0.2+5.8.3, new: 1.1.0+5.9.0, correct? Yes
new release published: https://git.coopcloud.tech/coop-cloud/wordpress/src/tag/1.1.0+5.9.0

And once more, we can validate this tag has been created with cd ~/.abra/recipes/wordpress && git tag -l.

How are new recipe versions tested?

This is currently a manual process. The best approach is to take a backup, run a test deployment and see how things go.

Following the entry above, before running abra recipe release --publish <recipe>, you can deploy the new version of the recipe. Find an app that relies on this recipe and pass -C/--chaos to the upgrade command so that it accepts the locally unstaged changes.
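For example (the app domain is a placeholder):

abra app upgrade --chaos myapp.example.com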

It is good practice to take note of all the issues you ran into and share them with other operators. See this entry for more.

If you don't have time or are not an operator, reach out on our communication channels for an operator willing to do some testing.

How do I write version release notes?

In the root of your recipe repository, run the following (if the folder doesn't already exist):

mkdir -p releases

And then create a text file which corresponds to the version release, e.g. 1.1.0+5.9.0 and write some notes. abra will show these when another operator runs abra app deploy / abra app upgrade.
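For example, following the Wordpress walkthrough above (the note text is just an illustration):

mkdir -p releases
echo "Minor upgrade, no manual steps required." > releases/1.1.0+5.9.0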

Generate the recipe catalogue

To generate an entire new copy of the catalogue:

abra catalogue generate

You will most likely want to pass --user/--username / --pass/--password with container registry credentials to avoid rate limiting.
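For example (with placeholder credentials):

abra catalogue generate --username foo --password bar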

If you just want to generate a catalogue entry for a single recipe:

abra catalogue generate <recipe>

The changes are generated and added to ~/.abra/catalogue. You can validate what has been done by running:

cd ~/.abra/catalogue
git diff

You can pass --publish to have abra automatically publish those changes.

!!! warning "Here be more SSH dragons"

In order to have `abra` publish changes for you automatically, you'll have to have write permissions to the git.coopcloud.tech repository and your account must have a working SSH key configuration. `abra` will use the SSH based URL connection details for Git by automagically creating an `origin-ssh` remote in the repository and pushing to it.

Enable healthchecks

A healthcheck is an important and often overlooked part of the recipe configuration. It is part of the configuration that the runtime uses to figure out if a container is really up-and-running. You can tweak what command to run, how often and how many times to try until you assume the container is not up.

There are no real universal configs and most maintainers just pick up what others are doing and try to adapt. There is some testing involved to see what works well. You can browse the existing recipe repositories and borrow from there.

You'll often find the same one used for things like caches & supporting services, such as Redis:

healthcheck:
  test: ["CMD", "redis-cli", "ping"]

If you're just starting off with packaging a recipe, you can disable the healthcheck until you have something working. It's definitely advised to work out your healthcheck as a last step; it can be a bit tricky.
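As a sketch, disabling looks like the following, and when you do write a real healthcheck, interval/timeout/retries are the usual knobs to tune (the values here are just placeholders):

# while developing
healthcheck:
  disable: true

# a fuller example once things work
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
  interval: 30s
  timeout: 10s
  retries: 10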

abra app errors -w <domain> will show what errors are being reported from a failing healthcheck setup.

Tuning deploy configs

A bit like healthchecks, there is no universal setup. A good default seems to be the following configuration:

deploy:
  update_config:
    failure_action: rollback
    order: start-first
  rollback_config:
    order: start-first
  restart_policy:
    max_attempts: 3

The start-first setting ensures that the container runtime tries to start up the new container and get it running before switching over to it.

Setting a restart policy is also good so that the runtime doesn't try to restart the container forever.

Best to read the docs on this one.

Tuning resource limits

If you don't place resource limits on your app, it will assume it can use the entire capacity of the server it is on. This can cause issues such as OOM errors for your entire swarm.

See the Docker documentation to get into this topic and check the other recipes to see what other maintainers are doing.
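A minimal sketch of a memory limit (the value is just a placeholder to tune per app):

deploy:
  resources:
    limits:
      memory: 512M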

Enable A+ SSL ratings

If you want to get the highest rating on SSL certs, you can use the following traefik labels which use a tweaked Traefik configuration.

- "traefik.http.routers.traefik.tls.options=default@file"
- "traefik.http.routers.traefik.middlewares=security@file"

See this PR for the technical details.

Tweaking secret generation length

It is possible to tell abra, from your recipe config, which length it should use when generating secrets.

You do this by adding an inline comment to the secret definition in the .env.sample / .env file.

Here are examples from the gitea recipe:

SECRET_INTERNAL_TOKEN_VERSION=v1 # length=105
SECRET_JWT_SECRET_VERSION=v1 # length=43
SECRET_SECRET_KEY_VERSION=v1 # length=64

When using this length specifier, abra will not use the "easy to remember word" style generator but instead a string of characters to match the exact length. This can be useful if you have to generate "key" style values instead of passwords which admins have to type out in database shells.

How are recipes added to the catalogue?

This is so far a manual process which requires a member of Autonomic. This is a temporary situation; we want to open up this process & also introduce some automation to make it more convenient. Please nag us to move things along.

  • Publish your new recipe on the git.coopcloud.tech listing
  • Run abra catalogue generate <recipe> -p
  • Run cd ~/.abra/catalogue && make

These minimal steps will publish a new recipe with no versions. You can also do the recipe release publishing dance which will then extend the versions: [...] section of the published JSON in the catalogue.

Recipes that are not included in the catalogue can still be deployed. It is not required to add your recipes to the catalogue but this will improve the visibility for other co-op hosters & end-users.

For now, it is best to get in touch if you want to add your recipe to the catalogue.

In the future, we'd like to support multiple catalogues.

How do I configure backup/restore?

From the perspective of the recipe maintainer, backup/restore is just more deploy: ... labels. Tools can read these labels and then perform the backup/restore logic.

Tools

Two of the current "blessed" options are backup-bot-two & abra.

abra

abra will read labels and store backups in ~/.abra/backups/....

backup-bot-two

Please see the README.md for the full docs.

Backup

For backup, here are the labels & some examples:

  • backupbot.backup=true: turn on backup logic
  • backupbot.backup.pre-hook=mysqldump -u root -pghost ghost --tab /var/lib/foo: command to run before backing up
  • backupbot.backup.post-hook=rm -rf /var/lib/mysql-files/*: command to run after backing up
  • backupbot.backup.path=/var/lib/foo,/var/lib/bar: paths to back up

You place these on your recipe configuration and then tools can run backups.
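Concretely, these end up as deploy labels in the recipe's compose configuration, something like this (values are illustrative):

# compose.yml
services:
  db:
    deploy:
      labels:
        - "backupbot.backup=true"
        - "backupbot.backup.path=/var/lib/foo,/var/lib/bar"
        - "backupbot.backup.pre-hook=mysqldump -u root -pghost ghost --tab /var/lib/foo"
        - "backupbot.backup.post-hook=rm -rf /var/lib/mysql-files/*"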

Restore

Restore, in this context, means "moving a compressed archive back to the container backup paths". So, if you set backupbot.backup.path=/var/lib/foo,/var/lib/bar and you have a backed up archive, tooling will unzip files in the archive back to those paths.

In the case of restoring database tables, you can use the pre-hook & post-hook commands to run the insertion logic.

Can I override a service within a recipe?

You can use this docker-compose trick to do this.

If you have a recipe that is using a mysql service and you'd like to use postgresql instead, you can create a compose.psql.yml!
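Operators would then opt into it via COMPOSE_FILE in the app's .env, for example:

# ~/.abra/servers/<server-domain>/<app-domain>.env
COMPOSE_FILE="compose.yml:compose.psql.yml"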

An example of this is the selfoss recipe. The default is sqlite but there is a postgresql compose configuration there too.