Fiddle with folder names

forked from toolshed/docs.coopcloud.tech (commit 11764163d1, parent c648f67cef)

File diff suppressed because it is too large
@@ -1,23 +1,23 @@
---
title: Maintainers
title: Operators
---

Welcome to the maintainers guide! Maintainers are typically individuals who have a stake in building up and maintaining our digital configuration commons, the recipe configurations. Maintainers help keep recipes configurations up to date, respond to issues in a timely manner, help new users within the community and recruit new maintainers when possible.
Welcome to the operators guide! Operators are typically individuals, members of tech co-ops or collectives who provide services powered by Co-op Cloud. This documentation is meant to help new & experienced operators manage their deployments as well as provide a space for sharing tricks & tips for keeping things running smoothly.

<div class="grid cards" markdown>

- __New Maintainers Tutorial__
- __New Operators Tutorial__

If you want to package a recipe and/or become a maintainer, start here :rocket:
If you want to become an operator, start your journey here :rocket:

[Get Started](/maintainers/tutorial){ .md-button .md-button--primary }
[Get started](tutorial.md){ .md-button .md-button--primary }

- __Packaging Handbook__
- __Operators Handbook__

One-stop shop for all you need to know to package recipes :package:
One-stop shop for all you need to know to manage a deployment :ribbon:

[Read Handbook](/maintainers/handbook){ .md-button .md-button--primary }
[Read Handbook](handbook.md){ .md-button .md-button--primary }

</div>

Maintainers are encouraged to submit documentation patches! Sharing is caring :sparkling_heart:
Operators are encouraged to submit documentation patches! Sharing is caring :sparkling_heart:
@@ -1,94 +1,279 @@
---
title: New maintainers tutorial
title: New Operators Tutorial
---

## Package your first recipe

This tutorial assumes you understand the [frequently asked questions](/intro/faq/) as well as [the moving parts](/intro/strategy/) of the technical problems _Co-op Cloud_ solves. If yes, proceed :smile:

### Overview

## Deploy your first app

Packaging a recipe is basically knowing a bag of about 20 tricks. Once you learn them, there is nothing more to learn. It can seem daunting at first but it's simple and easy to do once you know the tricks.

In order to deploy an app you need two things:

The nice thing about packaging is that only one person has to do it and then we all benefit. We've seen that over time, the core of the configuration doesn't really change. New options and versions might come but the config remains quite stable. This is good since it means that your packaging work stays relevant and useful for other maintainers & operators as time goes on.

1. a server with SSH access and a public IP address
2. a domain name pointing to that server

Depending on your familiarity with recipes, it might be worth reading [how a recipe is structured](/maintainers/handbook/#how-is-a-recipe-structured) and making sure you understand [what a recipe is](/glossary/#recipe) before continuing.

This tutorial tries to help you make choices about which server and which DNS setup you need to run a _Co-op Cloud_ deployment but it does not go into great depth about how to set up a new server.

### Making a plan

### Server setup

The ideal scenario is when the upstream project provides both the packaged image and a compose configuration which we can build from. If you're in luck, you'll typically find a `Dockerfile` and a `docker-compose.yml` file in the root of the upstream Git repository for the app.

Co-op Cloud itself has near-zero system requirements. You only need to worry about the system resource usage of your apps and the overhead of running containers with the Docker runtime, which is often negligible (if you want to know more, see [this FAQ entry](/intro/faq/#isnt-running-everything-in-containers-inefficient)).

- **Tired**: Write your own image and compose file from scratch :sleeping:
- **Wired**: Use someone else's image (& maybe compose file) :smirk_cat:
- **Inspired**: Upstream image, someone else's compose file :exploding_head:
- **On fire**: Upstream image, upstream compose file :fire:

We will deploy a new Nextcloud instance in this guide, so you will only need 1GB of RAM according to [their documentation](https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html). You may also be interested in this [FAQ entry](/intro/faq/#arent-containers-horrible-from-a-security-perspective) if you are curious about security in the context of containers.

### Writing / adapting the `compose.yml`

Most Co-op Cloud deployments have been run on Debian machines so far. Some experiments have been done on single board computers & servers with low resource capacities.

Let's take a practical example, [Matomo web analytics](https://matomo.org/). We'll be making a Docker "swarm-mode" `compose.yml` file.

You need to keep ports `:80` and `:443` free on your server for web proxying to your apps. Typically, you don't need to keep any other ports free as the core web proxy ([Traefik](https://traefik.io)) keeps all app ports internal to its network. Sometimes, however, you need to expose an app port when you need to use a transport which would perform better or more reliably without proxying.

Luckily, Matomo already has an example compose file in their repository. Like a lot of compose files, it's intended for use with `docker-compose`, instead of "swarm mode", but it should be a good start.

`abra` has support for creating servers (`abra server new`) but that is a more advanced automation feature which is covered in the [handbook](/operators/handbook). For this tutorial, we'll focus on the basics. Assuming you've managed to create a testing VPS with some `$hosting_provider`, you'll need to install Docker, add your user to the Docker group & set up swarm mode:

First, let's create a directory with the files we need:

!!! warning "You may need to log in/out"

    When running `usermod ...`, you may need to (depending on your system) log in and out again of your shell session to get the required permissions for Docker.

```
abra recipe new matomo
cd ~/.abra/recipes/matomo
# ssh into your server
ssh <server-domain>

# docker install convenience script
wget -O- https://get.docker.com | bash

# add user to docker group
sudo usermod -aG docker $USER

# exit and re-login to load the group
exit
ssh <server-domain>

# back on the server, setup swarm
docker swarm init
docker network create -d overlay proxy

# now you can exit and start using abra
exit
```
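If you want to double-check the swarm setup before moving on, here is an optional sanity check you can run on the server. It is not part of the official steps, just one way to verify the commands above worked:

```bash
# should print "active" once "docker swarm init" has run
docker info --format '{{.Swarm.LocalNodeState}}'

# the "proxy" overlay network created above should be listed
docker network ls --filter name=proxy
```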

Then, let's download and edit the `docker-compose.yml` file:

??? question "Do you support multiple web proxies?"

```
mkdir matomo && cd matomo
wget https://raw.githubusercontent.com/matomo-org/docker/master/.examples/apache/docker-compose.yml -O compose.yml
```

    We do not know if it is feasible and convenient to set things up on an existing server with another web proxy which uses ports `:80` & `:443`. We'd happily receive reports and documentation on how to do this if you manage to set it up!

### DNS setup

You'll need two A records, one to point to the server itself and another to support sub-domains for the apps. You can then support an app hosted on your root domain (e.g. `example.com`) and other apps on sub-domains (e.g. `foo.example.com`, `bar.example.com`).

Your entries in your DNS provider setup might look like the following:

    @  1800 IN A 116.203.211.204
    *. 1800 IN A 116.203.211.204

Where `116.203.211.204` can be replaced with the IP address of your server.

??? question "How do I know my DNS is working?"

    You can use a tool like `dig` on the command-line to check if your server has the necessary DNS records set up. Something like `dig +short <domain>` should show the IP address of your server if things are working.

### Install `abra`

Now we can install [`abra`](/abra) locally on your machine and hook it up to your server. We support a script-based installation method ([script source](https://git.coopcloud.tech/coop-cloud/abra/src/branch/main/scripts/installer/installer)):

```bash
curl https://install.abra.coopcloud.tech | bash
```

Open the `compose.yml` in your favourite editor and have a gander 🦢. There are a few things we're looking for, but some immediate changes could be:

The installer will verify the downloaded binary checksum. If you prefer, you can [manually verify](/abra/install/#manual-verification) the binary, and then manually place it in one of the directories in your `$PATH` variable. To validate that everything is working, try running `abra` with the `--help` or `-h` flag to view the help output:

1. Let's bump the version to `3.8`, to make sure we can use all the latest swarm coolness.
2. We load environment variables separately via [`abra`](/abra/), so we'll strip out `env_file`.
3. The `/var/www/html` volume definition on L21 is a bit overzealous; it means a copy of Matomo will be stored separately per app instance, which is a waste of space in most cases. We'll narrow it down according to the documentation. The developers have been nice enough to suggest `logs` and `config` volumes instead, which is a decent start.
4. The MySQL passwords are sent as variables which is fine for basic use, but if we replace them with Docker secrets we can keep them out of our env files if we want to publish those more widely.
5. The MariaDB service doesn't need to be exposed to the internet, so we can define an `internal` network for it to communicate with Matomo.
6. Lastly, we want to use `deploy.labels` and remove the `ports:` definition, to tell Traefik to forward requests to Matomo based on hostname and generate an SSL certificate. A rough sketch of these changes is shown below.
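To give a concrete feel for those changes, here is a heavily trimmed sketch of what the adapted swarm-mode `compose.yml` might look like. It is illustrative only (the image tags, label set and volume names are simplified), not the final recipe, which is linked just below:

```yaml
version: "3.8"

services:
  app:
    image: "matomo:x.y.z"  # pin a real upstream version here
    volumes:
      - "matomo_logs:/var/www/html/logs"
      - "matomo_config:/var/www/html/config"
    secrets:
      - db_password
    networks:
      - proxy
      - internal
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.${STACK_NAME}.rule=Host(`${DOMAIN}`)"
        - "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
        - "traefik.http.routers.${STACK_NAME}.tls.certresolver=${LETS_ENCRYPT_ENV}"

  db:
    image: "mariadb:10.6"
    secrets:
      - db_root_password
    networks:
      - internal  # not reachable via the web proxy

networks:
  proxy:
    external: true
  internal:

volumes:
  matomo_logs:
  matomo_config:

secrets:
  db_password:
    external: true
    name: ${STACK_NAME}_db_password_${DB_PASSWORD_VERSION}
  db_root_password:
    external: true
    name: ${STACK_NAME}_db_root_password_${DB_ROOT_PASSWORD_VERSION}
```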

The resulting `compose.yml` is available [here](https://git.autonomic.zone/coop-cloud/matomo/src/branch/main/compose.yml).

### Updating the `.env.sample`

Open the `.env.sample` file and add the following:

```
DB_PASSWORD_VERSION=v1
DB_ROOT_PASSWORD_VERSION=v1
```

```bash
abra -h
```

The resulting `.env.sample` is available [here](https://git.coopcloud.tech/coop-cloud/matomo/src/branch/main/.env.sample).

You may need to add the `~/.local/bin/` directory to your `$PATH` variable in order to run the executable. Also, run the `export PATH=...` line shown below in your terminal so that you have immediate access to `abra` in the current terminal session.

### Test deployment

!!! note "Running Co-op Cloud server required!"

    The rest of this guide assumes you have a Co-op Cloud server going -- we'll use `swarm.example.com`, but replace it with your own server address. Head over to [the operators tutorial](/operators/tutorial) if you need help setting one up.

Now, we're ready to create a testing instance of Matomo:

```
abra app new matomo --secrets \
  --domain matomo.swarm.example.com \
  --server swarm.example.com
```

```bash
export PATH=$PATH:$HOME/.local/bin
```

Depending on whether you defined any extra environment variables -- we didn't so far, in this example -- you might want to run `abra app config swarm.example.com` to check the configuration.

If you run into issues during installation, [please report a ticket](https://git.coopcloud.tech/coop-cloud/organising/issues/new) :pray: Once you're all set up, we **highly** recommend configuring command-line auto-completion for `abra`. See `abra autocomplete -h` for more on how to do this.

Otherwise, or once you've done that, go ahead and deploy the app:

??? question "Can I install `abra` on my server?"

```
abra app deploy swarm.example.com
```

    Yes, this is possible. However, the instructions for this setup are different. For more info see [this handbook entry](/operators/handbook/#running-abra-server-side).

### Add your server

Now you can connect `abra` with your server. You must have a working SSH configuration for your server before you can proceed. That means you can run `ssh <server-domain>` on your command-line and everything Works :tm:. See the [`abra` SSH troubleshooting](/abra/trouble/#ssh-connection-issues) for a working SSH configuration example.

??? warning "Beware of SSH dragons :dragon_face:"

    Under the hood `abra` uses plain ol' `ssh` and aims to make use of your existing SSH configurations in `~/.ssh/config` and interfaces with your running `ssh-agent` for password protected secret key files.

    Running `server add` with `-d` or `--debug` should help you debug what is going on under the hood. `ssh -v ...` should also help. If you're running into SSH connection issues with `abra`, take a moment to read [this troubleshooting entry](/abra/trouble/#ssh-connection-issues).

```bash
ssh <server-domain> # make sure it works
abra server add <server-domain>
```

Then, open the `DOMAIN` you configured (you might need to wait a while for Traefik to generate SSL certificates) to finish the set-up. Luckily, this container is (mostly) configurable via environment variables; if we want to auto-generate the configuration, we can use a `config` and / or a custom `entrypoint` (see [`coop-cloud/mediawiki`](https://git.autonomic.zone/coop-cloud/mediawiki) for examples of both).

It is important to note that `<server-domain>` here is a publicly accessible domain name which points to your server IP address. `abra` checks that this is the case, in order to avoid issues with HTTPS certificate rate limiting.

### Finishing up

??? warning "Can I use arbitrary server names?"

You've probably got more questions; check out the [packaging handbook](/maintainers/handbook)!

    Yes, this is possible. You need to pass `-D` to `server add` and ensure that your `Host ...` entry in your SSH configuration includes the name. So, for example:

        Host example.com example
            ...

    And then:

        abra server add -D example

You will now have a new `~/.abra/` folder on your local file system which stores all the configuration of your Co-op Cloud instance.

By now `abra` should have registered this server as managed. To confirm this, run:

```
abra server ls
```

??? question "How do I share my configs in `~/.abra`?"

    It's possible and quite easy; for more, see [this handbook entry](/operators/handbook/#understanding-app-and-server-configuration).

### Web proxy setup

In order to have your Co-op Cloud deployment serve the public internet, we need to install the core web proxy, [Traefik](https://doc.traefik.io/traefik/).

Traefik is the main entrypoint for all web requests (much like NGINX) and supports automatic SSL certificate configuration and other quality-of-life features which make deploying libre apps more enjoyable.

**1. To get started, you'll need to create a new app:**

```bash
abra app new traefik
```

Choose your newly registered server and specify a domain name. By default `abra` will suggest `<app-name>.server.org` or prompt you with a list of servers.

**2. Configure this new `traefik` app**

You will want to take a look at your generated configuration and tweak the `LETS_ENCRYPT_EMAIL` value. You can do that by running `abra app config`:

```bash
abra app config <traefik-domain>
```

Every app you deploy will have one of these `.env` files, which contain variables which will be injected into app configurations when deployed. These files exist at a predictably named path:

```bash
~/.abra/servers/<domain>/<traefik-domain>.env
```

Variables starting with `#` are optional, others are required. One thing to consider here is that, by default, our *Traefik* recipe exposes the metrics dashboard unauthenticated on the public internet at the `<traefik-domain>` it is deployed to, which is not ideal. You can disable this with:

```
DASHBOARD_ENABLED=false
```

**3. Now it is time to deploy your app:**

```
abra app deploy <traefik-domain>
```

Voila. Abracadabra :magic_wand: your first app is deployed :sparkles:

### Deploy Nextcloud

And now we can deploy apps. Let's create a new Nextcloud app.

```bash
abra app new nextcloud -S
```

The `-S` or `--secrets` flag is used to generate secrets for the app: database connection password, root password and admin password.

??? warning "Beware of password dragons :dragon:"

    Take care, these secrets are only shown once on the terminal so make sure to take note of them! `abra` makes use of the [Docker secrets](/operators/handbook/#managing-secret-data) mechanism to ship these secrets securely to the server and store them as encrypted data. Only the apps themselves have access to the values from here on; they're placed in `/run/secrets` on the container file system.

Then we can deploy Nextcloud:

```bash
abra app deploy <nextcloud-domain>
```

`abra app deploy` will wait nearly a minute for an app to deploy until it times out and shows some helpful commands for how to debug what is going on. If things don't come up in time, try running the following:

```
abra app ps -w <nextcloud-domain>     # status check
abra app logs <nextcloud-domain>      # logs trailing
abra app errors -w <nextcloud-domain> # error catcher
```

Your new `traefik` instance will detect that a new app is coming up and generate SSL certificates for it. You can see what `traefik` is up to using the same commands above but replacing `<nextcloud-domain>` with the `<traefik-domain>` you chose earlier (`abra app ls` will remind you what domains you chose :grinning:).

### Upgrade Nextcloud

To upgrade an app manually to the newest available version, run:

```bash
abra app upgrade <nextcloud-domain>
```

### Automatic Upgrades

`kadabra` the auto-updater is still under development; use it with care and don't use it in production environments. To set up the auto-updater, copy the `kadabra` binary to the server and configure a cronjob for regular app upgrades. The following script will configure ssmtp for email notifications and set up a cronjob. This cronjob checks daily for new app versions, notifies if any kind of update is available and upgrades all apps to the latest patch/minor version.

```bash
apt install ssmtp

cat > /etc/ssmtp/ssmtp.conf << EOF
mailhub=$MAIL_SERVER:587
hostname=$MAIL_DOMAIN
AuthUser=$USER
AuthPass=$PASSWORD
FromLineOverride=yes
UseSTARTTLS=yes
EOF

cat > /etc/cron.d/abra_updater << EOF
MAILTO=admin@example.com
MAILFROM=noreply@example.com

0 6 * * * root ~/kadabra notify --major
30 4 * * * root ~/kadabra upgrade --all
EOF
```

Add `ENABLE_AUTO_UPDATE=true` to the env config (`abra app config <app name>`) to enable the auto-updater for a specific app.

## Finishing up

Hopefully you got something running! Well done! The [operators handbook](/operators/handbook) would probably be the next place to check out if you're looking for more help, especially on topics of ongoing maintenance.

If not, please [get in touch](/intro/contact) or [raise a ticket](https://git.coopcloud.tech/coop-cloud/organising/issues/new/choose) and we'll try to help out. We want our operator onboarding to be as smooth as possible, so we do appreciate any feedback we receive.
docs/maintainers/handbook.md (new file, 781 lines)
@@ -0,0 +1,781 @@
|
||||
---
|
||||
title: Packaging handbook
|
||||
---
|
||||
|
||||
## Create a new recipe
|
||||
|
||||
You can run `abra recipe new <recipe>` to generate a new `~/.abra/recipes/<recipe>` repository. The generated repository is a copy of [`coop-cloud/example`](https://git.coopcloud.tech/coop-cloud/example).
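As a rough orientation, the generated folder usually contains the files described in the sections below; the exact contents depend on the current state of the `coop-cloud/example` template, so treat this as an illustrative layout:

```
~/.abra/recipes/<recipe>/
├── compose.yml      # the compose specification that describes the app
├── .env.sample      # sample environment variables for operators
├── abra.sh          # vendored config versions
├── entrypoint.sh    # optional custom entrypoint
└── release/         # release notes per published version
```
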
## Hacking on an existing recipe
|
||||
|
||||
!!! warning
|
||||
|
||||
It is *very advisable* to disable any `healthcheck: ...` configuration
|
||||
while hacking on new recipes. This is because it is very easy to mess up
|
||||
and it will stop Traefik or other web proxies routing the app. You can
|
||||
enable a specific healthcheck later when your recipe is stable. The default
|
||||
"unconfigured" healthcheck behaviour is much less strict and it's faster to
|
||||
get something up and running.
|
||||
|
||||
If you want to make changes to an existing recipe then you can simply edit the files in `~/.abra/recipes/<recipe-name>` and pass `--chaos` to the `deploy` command when deploying those changes. `abra` will not deploy unstaged changes to avoid instability but you can tell it to do so with `--chaos`. This means you can simply hack away on the existing recipe files on your local file system and then, when something is working, submit a change request to the recipe upstream.
|
||||
|
||||
## How is a recipe structured?
|
||||
|
||||
### `compose.yml`
|
||||
|
||||
This is a [compose specification](https://compose-spec.io/) compliant file that contains a list of: services, secrets, networks, volumes and configs. It describes what is needed to run an app. Whenever you deploy an app, `abra` reads this file.
|
||||
|
||||
### `.env.sample`
|
||||
|
||||
This file is a skeleton for environmental variables that should be adjusted by the user. Examples include: domain or PHP extension list. Whenever you create a new app with `abra app new` this file gets copied to the `~/.abra/servers/<server-domain>/<app-domain>.env` and when you run `abra app config <app-domain>` you're editing this file.
|
||||
|
||||
### `abra.sh`
|
||||
|
||||
The `abra.sh` provides versions for configs that are vendored by the recipe maintainer. See [this handbook entry](/maintainers/handbook/#manage-configs) for more.
|
||||
|
||||
### `entrypoint.sh`
|
||||
|
||||
After docker creates the filesystem and copies files into a new container it runs what's called an entrypoint. This is usually a shell script that exports some variables and runs the application. Sometimes the vendor entrypoint doesn't do everything that we need it to do. In that case you can write your own entrypoint, do whatever you need to do and then run the vendor entrypoint.
|
||||
|
||||
For a simple example check the [entrypoint.sh for `croc`](https://git.coopcloud.tech/coop-cloud/croc/src/commit/2f06e8aac52a3850d527434a26de0a242bea0c79/entrypoint.sh). In this case, `croc` needs the password to be exported as an environmental variable called `CROC_PASS`, and that is exactly what the entrypoint does before running vendor entrypoint.
|
||||
|
||||
If you write your own entrypoint, it needs to be specified in the `config` section of compose.yml. See [this handbook entry](/maintainers/handbook/#how-do-i-set-a-custom-entrypoint) for more.
|
||||
|
||||
### `release/` directory
|
||||
|
||||
This directory contains text files whose names correspond to the recipe versions which have been released and contain useful tips for operators who are doing upgrade work. See [this handbook entry](/maintainers/handbook/#how-do-i-write-version-release-notes) for more.
|
||||
|
||||
### Optional compose files
|
||||
|
||||
I.e. `compose.smtp.yml`. These are used to provide non-essential functionality such as (registration) e-mails or single sign on. These are typically loaded by specifying `COMPOSE_FILE="compose.yml:compose.smtp.yml"` in your app `.env` configuration. Then `abra` learns to include these optional files at deploy time. `abra` uses the usual `docker-compose` configuration merging technique when merging all the `compose.**.yml` files together at deploy time.
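As a sketch of how such an optional compose file hangs together (the variable names here are purely illustrative and not taken from any particular recipe), a `compose.smtp.yml` might simply add extra settings to the `app` service:

```yaml
# compose.smtp.yml (illustrative sketch)
version: "3.8"

services:
  app:
    environment:
      - SMTP_HOST  # hypothetical variable names,
      - SMTP_PORT  # set by the operator in the app .env file
```

An operator then opts in by setting `COMPOSE_FILE="compose.yml:compose.smtp.yml"` in the app `.env` configuration, as described above.
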
### Additional configs
|
||||
|
||||
If you look at a `compose.yml` file and see a `configs` section, that means this compose file is putting files in the container. This might be used for changing default (vendor) configuration, such as this [fpm-tune.ini file](https://git.coopcloud.tech/coop-cloud/nextcloud/src/commit/28425b6138603067021757de28c639ad464e9cf8/fpm-tune.ini) used to adjust `php-fpm`. See [this handbook entry](/maintainers/handbook/#manage-configs) for more.
|
||||
|
||||
## Manage configs
|
||||
|
||||
To add additional files into the container, you can use [Docker configs](https://docs.docker.com/engine/swarm/configs/). This usually involves the following:
|
||||
|
||||
1. Create the file and add it to your recipe repository
|
||||
1. Create an entry for this config in your `configs: ...` global stanza
|
||||
1. Create an entry on the service configuration `configs: ...` stanza
|
||||
1. Vendor a version in the `abra.sh` of the recipe
|
||||
|
||||
An example of a config is an [entrypoint](/maintainers/handbook/#entrypoints), a script run at container run time.
|
||||
|
||||
```yaml
|
||||
# compose.yml
|
||||
services:
|
||||
app:
|
||||
configs:
|
||||
- source: nginx_config
|
||||
target: /etc/nginx/nginx.conf
|
||||
|
||||
configs:
|
||||
nginx_config:
|
||||
name: ${STACK_NAME}_nginx_config_${NGINX_CONFIG_VERSION}
|
||||
file: nginx.conf.tmpl
|
||||
template_driver: golang
|
||||
```
|
||||
|
||||
Because configurations are maintained in-repository by maintainers, we version them ourselves. This means that configs changes are seamless to operators unless they cause breaking changes which should be signalled in the new version and release notes. This is in distinction to secrets, which are managed by the operators. For example, operators may need to rotate secrets on a running deployment and should be able to do so at any time. We put the versions in the [`abra.sh`](/maintainers/handbook/#abrash) file.
|
||||
|
||||
```bash
|
||||
# abra.sh
|
||||
export NGINX_CONFIG_VERSION=v1
|
||||
```
|
||||
|
||||
## Manage environment variables
|
||||
|
||||
!!! warning
|
||||
|
||||
Please read this section carefully to avoid deployment footguns for the
|
||||
operators who deploy your recipe configuration. It's important to
|
||||
understand how to add new env vars into the recipe configuration in a
|
||||
non-breaking manner. Thanks for reading!
|
||||
|
||||
When you define an environment variable in an `.env.sample` for a recipe, such as:
|
||||
|
||||
```bash
|
||||
FOO=123
|
||||
```
|
||||
|
||||
This defines an env var which then needs to be added by an operator to their app env file. If you would like to add an env var which is optional, you can do:
|
||||
|
||||
```bash
|
||||
#FOO=123
|
||||
```
|
||||
|
||||
In order to expose this env var to recipe configuration, you pass this via the `environment` stanza of a service config in the recipe like so:
|
||||
|
||||
```yaml
|
||||
service:
|
||||
app:
|
||||
environment:
|
||||
- FOO
|
||||
```
|
||||
|
||||
Then your environment variable will be threaded into the running app at deploy time. If you run `abra app run <domain> app env | grep FOO` then you'll see it exposed.
|
||||
|
||||
You can also access it in your configs using the following syntax:
|
||||
|
||||
```go
|
||||
{{ env "FOO" }}
|
||||
```
|
||||
|
||||
### Global environment variables
|
||||
|
||||
- `TYPE`: specifies the recipe name
|
||||
- `DOMAIN`: specifies the app domain
|
||||
- `LETS_ENCRYPT_ENV`: TODO
|
||||
- `TIMEOUT`: specifies the time in seconds to wait until all services have started and passed the health checks
|
||||
- `ENABLE_AUTO_UPDATE`: if set to `true`, the auto-updater `kadabra` can update this app (see [this auto updater entry](/operators/tutorial/#automatic-upgrades) for more)
|
||||
- `POST_DEPLOY_CMDS="<container> <command> <arguments>|<container> <command> <arguments>|... "` specifies commands that should be executed after each `abra app deploy`
|
||||
- `POST_UPGRADE_CMDS="<container> <command> <arguments>|<container> <command> <arguments>|... "` specifies commands that should be executed after each `abra app upgrade`
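For example, a hypothetical app `.env` entry using the `POST_DEPLOY_CMDS` format above might look like this (the container name and command are placeholders, not taken from a real recipe):

```bash
# run a placeholder command in the "app" container after every deploy
POST_DEPLOY_CMDS="app sh -c 'echo deployed'"
```
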
## Manage secret data
|
||||
|
||||
Adding a secret to your recipe is done as follows:
|
||||
|
||||
1. Create an entry in the `secrets: ...` global stanza
|
||||
1. Add the `<SECRET-NAME>_VERSION=v1` to your `.env.sample`
|
||||
1. Ensure that the secret is listed on the service configuration under `secrets: ...`
|
||||
|
||||
It might look something like this:
|
||||
|
||||
```yaml
|
||||
# compose.yml
|
||||
services:
|
||||
app:
|
||||
secrets:
|
||||
- db_password
|
||||
|
||||
secrets:
|
||||
db_password:
|
||||
external: true
|
||||
name: ${STACK_NAME}_db_password_${SECRET_DB_PASSWORD_VERSION}
|
||||
```
|
||||
|
||||
Operators manage the secret versions themselves. So we provide a version hook in the environment variables which they control. This allows operators to deal with things like secret rotation without having to rely on recipe maintainers.
|
||||
|
||||
```bash
|
||||
# .env.sample
|
||||
SECRET_DB_PASSWORD_VERSION=v1
|
||||
```
|
||||
|
||||
If you need to access this secret in a config, say:
|
||||
|
||||
```yaml
|
||||
configs:
|
||||
someconfig:
|
||||
name: ${STACK_NAME}_someconfig_${SOME_CONFIG_VERSION}
|
||||
file: entrypoint.sh.tmpl
|
||||
template_driver: golang
|
||||
```
|
||||
|
||||
Don't forget the `template_driver: golang`, it won't work otherwise.
|
||||
|
||||
Then you can use the following syntax to access the secret:
|
||||
|
||||
```go
|
||||
# someconfig.conf
|
||||
{{ secret "db_password"}}
|
||||
```
|
||||
|
||||
## Entrypoints
|
||||
|
||||
### Custom entrypoints
|
||||
|
||||
They can be useful to install additional dependencies or setup configuration that upstream doesn't have or want to have.
|
||||
|
||||
Here's a trimmed down config, the general idea is to create a new config and insert it into the container at a specific location and then have the compose configuration tell the underlying image to run this new script as the entrypoint.
|
||||
|
||||
You typically don't want to completely override the upstream entrypoint of the image you're using, so in the last line of your entrypoint, you can run the upstream entrypoint.
|
||||
|
||||
```yaml
|
||||
services:
|
||||
app:
|
||||
entrypoint: /docker-entrypoint.sh
|
||||
configs:
|
||||
- source: app_entrypoint
|
||||
target: /docker-entrypoint.sh
|
||||
mode: 0555
|
||||
|
||||
configs:
|
||||
app_entrypoint:
|
||||
name: ${STACK_NAME}_app_entrypoint_${APP_ENTRYPOINT_VERSION}
|
||||
file: entrypoint.sh.tmpl
|
||||
template_driver: golang
|
||||
```
|
||||
|
||||
### Exposing secrets
|
||||
|
||||
Sometimes apps expect to find a secret in their environment which is not possible with the default compose configuration approach. This requires a hack using an entrypoint. The hack is basically this (assume we want to expose a secret called `db_password`):
|
||||
|
||||
1. Setup the secret as per usual in `secrets: ...`
|
||||
2. Pass a `DB_PASSWORD_FILE=/run/secrets/db_password` in via the `environment: ...`
|
||||
3. Create an entrypoint and inside it, use the following boilerplate.
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
file_env() {
|
||||
local var="$1"
|
||||
local fileVar="${var}_FILE"
|
||||
local def="${2:-}"
|
||||
|
||||
if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
|
||||
echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
local val="$def"
|
||||
|
||||
if [ "${!var:-}" ]; then
|
||||
val="${!var}"
|
||||
elif [ "${!fileVar:-}" ]; then
|
||||
val="$(< "${!fileVar}")"
|
||||
fi
|
||||
|
||||
export "$var"="$val"
|
||||
unset "$fileVar"
|
||||
}
|
||||
```
|
||||
|
||||
And then to expose your secret to the container environment use the following in a line below this function:
|
||||
|
||||
```bash
|
||||
file_env "DB_PASSWORD"
|
||||
```
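Putting the steps above together, the compose side of this hack might look roughly like the following sketch (trimmed down, names illustrative):

```yaml
services:
  app:
    entrypoint: /docker-entrypoint.sh
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    configs:
      - source: app_entrypoint
        target: /docker-entrypoint.sh
        mode: 0555

secrets:
  db_password:
    external: true
    name: ${STACK_NAME}_db_password_${SECRET_DB_PASSWORD_VERSION}
```

With this in place, the `file_env "DB_PASSWORD"` call in the entrypoint exports `DB_PASSWORD` into the app environment at run time.
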
### `/bin/bash` is missing?
|
||||
|
||||
Sometimes the containers don't even have Bash installed on them. You had better just use `/bin/sh` or, in your entrypoint script, install Bash :upside_down: The entrypoint secrets hack listed above doesn't work in this case (as it requires Bash), so instead you can just do `export FOO=$(cat /run/secrets/<secret-name>)`.
|
||||
|
||||
|
||||
## How do I reference services in configs?
|
||||
|
||||
When referencing an `app` service in a config file, you should prefix with the `STACK_NAME` to avoid namespace conflicts (because all these containers sit on the traefik overlay network). You might want to do something like this `{{ env "STACK_NAME" }}_app` (using the often obscure dark magic of the Golang templating language). You can find examples of this approach used in the [Peertube recipe](https://git.coopcloud.tech/coop-cloud/peertube/src/commit/d1b297c5a6a23a06bf97bb954104ddfd7f736568/nginx.conf.tmpl#L9).
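For instance, a templated `nginx.conf.tmpl` snippet along those lines might look like this (the upstream port is just an example, not a value from the Peertube recipe):

```
upstream backend {
    server {{ env "STACK_NAME" }}_app:9000;  # "9000" is an illustrative port
}
```
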
## How are recipes versioned?
|
||||
|
||||
We'll use an example to work through this. Let's use [Gitea](https://hub.docker.com/r/gitea/gitea).
|
||||
|
||||
The Gitea project maintains a version, e.g. `1.14.3`. This version uses the [semver](https://semver.org) strategy for communicating what type of changes are included in each version, i.e., if there is a breaking change, Gitea will release a new version as `2.0.0`.
|
||||
|
||||
However, there are other types of changes that can happen for a recipe. Perhaps the database image gets a breaking update or the actual recipe configuration changes some environment variable. This can mean that end-users of the recipe need to do some work to make sure their updates will deploy successfully.
|
||||
|
||||
Therefore, we maintain an additional version part, in front of the project version. So, the first release of the Gitea recipe in the Co-op Cloud project has the version of `1.0.0+1.14.3`. This `x.y.z+` is the version part that the recipe maintainer manages. If a new available Gitea version comes out as `1.15` then the recipe maintainer will publish `1.1.0+1.15` as this is a backwards compatible update, following semantic versioning.
|
||||
|
||||
In all cases, we follow the semver semantics. So, if we upgrade the Gitea recipe from `1.14.3` to `1.15.3`, we still publish `1.1.0+1.15.3` as our recipe version. In this case, we skipped a few patch releases but it was all backwards compatible, so we only increment the minor version part.
|
||||
|
||||
## How do I release a new recipe version?
|
||||
|
||||
The commands used for dealing with recipe versioning in `abra` are:
|
||||
|
||||
- `abra recipe upgrade`: upgrade the image tags in the compose configs of a recipe
|
||||
- `abra recipe sync`: upgrade the deploy labels to match the new recipe version
|
||||
- `abra recipe release`: publish a git tag for the recipe repo
|
||||
|
||||
The `abra` recipe publishing commands have been designed to complement a semi-automatic workflow. If `abra` breaks or doesn't understand what is going on, you can always finish the process manually with a few Git commands and a bit of luck. We designed `abra` to work this way due to the chaotic nature of container publishing & versioning schemes.
|
||||
|
||||
Let's take a practical example, publishing a new version of [Wordpress](https://git.coopcloud.tech/coop-cloud/wordpress).
|
||||
|
||||
If we run `abra recipe upgrade wordpress` (at time of running), we end up with a prompt to upgrade Wordpress to `5.9.0`. We can skip the database upgrade for now. Here is what that looks like:
|
||||
|
||||
```
|
||||
➜ ~ abra recipe upgrade wordpress
|
||||
? upgrade to which tag? (service: app, image: wordpress, tag: 5.8.3) 5.9.0
|
||||
? upgrade to which tag? (service: db, image: mariadb, tag: 10.6) skip
|
||||
WARN[0004] not upgrading mariadb, skipping as requested
|
||||
```
|
||||
|
||||
Now, what happened? `abra` queried the upstream container repositories of all the images listed in the Wordpress recipe configuration and checked if there are new tags available. Once you make some choices on the prompt, `abra` will update the recipe configurations. Let's take a look by running `cd ~/.abra/recipes/wordpress && git diff`:
|
||||
|
||||
```diff
|
||||
diff --git a/compose.yml b/compose.yml
|
||||
index 1618ef5..6cd754d 100644
|
||||
--- a/compose.yml
|
||||
+++ b/compose.yml
|
||||
@@ -3,7 +3,7 @@ version: "3.8"
|
||||
|
||||
services:
|
||||
app:
|
||||
- image: "wordpress:5.8.3"
|
||||
+ image: "wordpress:5.9.0"
|
||||
volumes:
|
||||
- "wordpress_content:/var/www/html/wp-content/"
|
||||
networks:
|
||||
```
|
||||
|
||||
!!! warning "Here be versioning dragons"
|
||||
|
||||
`abra` doesn't understand all image tags unfortunately. There are limitations which we're still running into. You can pass `-a` to have `abra` list all available image tags from the upstream repository and then make a choice manually. See [`tagcmp`](https://git.coopcloud.tech/coop-cloud/tagcmp) for more info on how we implement image parsing.
|
||||
|
||||
Next, we need to update the label in the recipe; we can do that with `abra recipe sync wordpress`. You'll be prompted with a question asking what kind of upgrade this is. Take a moment to read the output and if it still doesn't make sense, read [this](/maintainers/handbook/#how-are-recipes-versioned). Since we're upgrading from `5.8.3` -> `5.9.0`, it is a minor release, so we choose `minor`:
|
||||
|
||||
```
|
||||
➜ wordpress (master) ✗ abra recipe sync wordpress
|
||||
...
|
||||
INFO[0088] synced label coop-cloud.${STACK_NAME}.version=1.1.0+5.9.0 to service app
|
||||
```
|
||||
|
||||
Once again, we can run `cd ~/.abra/recipes/wordpress && git diff` to see what `abra` has done for us:
|
||||
|
||||
```diff
|
||||
diff --git a/compose.yml b/compose.yml
|
||||
index 1618ef5..4a08db6 100644
|
||||
--- a/compose.yml
|
||||
+++ b/compose.yml
|
||||
@@ -3,7 +3,7 @@ version: "3.8"
|
||||
|
||||
services:
|
||||
app:
|
||||
- image: "wordpress:5.8.3"
|
||||
+ image: "wordpress:5.9.0"
|
||||
volumes:
|
||||
- "wordpress_content:/var/www/html/wp-content/"
|
||||
networks:
|
||||
@@ -48,7 +48,7 @@ services:
|
||||
#- "traefik.http.routers.${STACK_NAME}.rule=HostRegexp(`{subdomain:.+}.${DOMAIN}`, `${DOMAIN}`)"
|
||||
- "traefik.http.routers.${STACK_NAME}.tls.certresolver=${LETS_ENCRYPT_ENV}"
|
||||
- "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
|
||||
- - "coop-cloud.${STACK_NAME}.version=1.0.2+5.8.3"
|
||||
+ - "coop-cloud.${STACK_NAME}.version=1.1.0+5.9.0"
|
||||
- "backupbot.backup=true"
|
||||
- "backupbot.backup.path=/var/www/html"
|
||||
```
|
||||
|
||||
You'll notice that `abra` figured out how to upgrade the Co-op Cloud version label according to our choice, `1.0.2` -> `1.1.0` is a minor update.
|
||||
|
||||
At this point, we're all set, we can run `abra recipe release --publish wordpress`. This will do the following:
|
||||
|
||||
1. run `git commit` the new changes
|
||||
1. run `git tag` to create a new git tag named `1.1.0+5.9.0`
|
||||
1. run `git push` to publish changes to the Wordpress repository
|
||||
|
||||
!!! warning "Here be more SSH dragons"
|
||||
|
||||
In order to have `abra` publish changes for you automatically, you'll have to have write permissions to the git.coopcloud.tech repository and your account must have a working SSH key configuration. `abra` will use the SSH based URL connection details for Git by automagically creating an `origin-ssh` remote in the repository and pushing to it.
|
||||
|
||||
Here is the output:
|
||||
|
||||
```
|
||||
WARN[0000] discovered 1.1.0+5.9.0 as currently synced recipe label
|
||||
WARN[0000] previous git tags detected, assuming this is a new semver release
|
||||
? current: 1.0.2+5.8.3, new: 1.1.0+5.9.0, correct? Yes
|
||||
new release published: https://git.coopcloud.tech/coop-cloud/wordpress/src/tag/1.1.0+5.9.0
|
||||
```
|
||||
|
||||
And once more, we can validate this tag has been created with `cd ~/.abra/recipes/wordpress && git tag -l`.
|
||||
|
||||
## How are new recipe versions tested?
|
||||
|
||||
This is currently a manual process. Our best estimates are to do a backup and run a test deployment and see how things go.
|
||||
|
||||
Following the [entry above](/maintainers/handbook/#how-do-i-release-a-new-recipe-version), before running `abra recipe release --publish <recipe>`, you can deploy the new version of the recipe. Find an app that relies on this recipe and pass `-C/--chaos` to `upgrade` so that it accepts the locally unstaged changes.
|
||||
|
||||
!!! warning "Here be more SSH dragons"
|
||||
|
||||
In order to have `abra` publish changes for you automatically, you'll have to have write permissions to the git.coopcloud.tech repository and your account must have a working SSH key configuration. `abra` will use the SSH based URL connection details for Git by automagically creating an `origin-ssh` remote in the repository and pushing to it.
|
||||
|
||||
It is good practice to take note of all the issues you ran into and share them with other operators. See [this entry](/maintainers/handbook/#how-do-i-write-version-release-notes) for more.
|
||||
|
||||
If you don't have time or are not an operator, reach out on our communication channels for an operator willing to do some testing.
|
||||
|
||||
## How do I write version release notes?
|
||||
|
||||
In the root of your recipe repository, run the following (if the folder doesn't already exist):
|
||||
|
||||
```
|
||||
mkdir -p release
|
||||
```
|
||||
|
||||
And then create a text file which corresponds to the version release, e.g. `1.1.0+5.9.0` and write some notes. `abra` will show these when another operator runs `abra app deploy` / `abra app upgrade`.
|
||||
|
||||
You can also add release notes for the next release into a special file `release/next`. This file will be used when running `abra recipe release`.
|
||||
|
||||
!!! warning "Not available previous versions of Abra"
|
||||
|
||||
Using `release/next` is only available in > 0.9.x series of `abra`.
|
||||
|
||||
## How do I generate the recipe catalogue
|
||||
|
||||
To generate an entire new copy of the catalogue:
|
||||
|
||||
```
|
||||
abra catalogue generate
|
||||
```
|
||||
|
||||
You will most likely want to pass `--user/--username` / `--pass/--password` with container registry credentials to avoid rate limiting.
|
||||
|
||||
If you just want to generate a catalogue entry for a single recipe:
|
||||
|
||||
```
|
||||
abra catalogue generate <recipe>
|
||||
```
|
||||
|
||||
The changes are generated and added to `~/.abra/catalogue`, you can validate what is done by running:
|
||||
|
||||
```
|
||||
cd ~/.abra/catalogue
|
||||
git diff
|
||||
```
|
||||
|
||||
You can pass `--publish` to have `abra` automatically publish those changes.
|
||||
|
||||
!!! warning "Here be more SSH dragons"
|
||||
|
||||
In order to have `abra` publish changes for you automatically, you'll have to have write permissions to the git.coopcloud.tech repository and your account must have a working SSH key configuration. `abra` will use the SSH based URL connection details for Git by automagically creating an `origin-ssh` remote in the repository and pushing to it.
|
||||
|
||||
## How do I make the catalogue automatically regenerate after new versions are published?
|
||||
|
||||
"I'd like to make it so that whenever I push a new git tag to the
|
||||
[`coop-cloud/rallly` repository](https://git.coopcloud.tech/coop-cloud/rallly)
|
||||
(probably [using `abra recipe
|
||||
release`](#how-do-i-release-a-new-recipe-version)), it automatically does the
|
||||
[recipe catalogue generation steps](#how-do-i-generate-the-recipe-catalogue)"
|
||||
|
||||
1. Check whether tag builds are already trying to run: go to
|
||||
https://build.coopcloud.tech, search for the recipe name (in this case taking
|
||||
you to https://build.coopcloud.tech/coop-cloud/rallly/settings). If there are
|
||||
failing builds, or if you see builds succeeding but catalogue regeneration
|
||||
doesn't seem to be happening, then either dive in and try and fix it, or ask
|
||||
for help in [`#coopcloud-tech`](https://matrix.to/#/#coopcloud-tech:autonomic.zone)
|
||||
2. Otherwise, click "activate repository". You probably want to set the "disable pull
|
||||
requests" and "disable forks" options; they won't work anyway, but the
|
||||
failures might be confusing.
|
||||
3. Make sure there is a `generate recipe catalogue` step in the recipe's
|
||||
`.drone.yml` -- if there isn't, you can copy [the one from
|
||||
`coop-cloud/rallly`](https://git.coopcloud.tech/coop-cloud/rallly/src/branch/main/.drone.yml#L24-L38) unchanged.
|
||||
4. That's it! Now, when you push a new tag, the recipe catalogue will regenerate
|
||||
automatically. You can test this by re-pushing a tag (e.g. `git push origin
|
||||
:0.5.0+3.5.1 && git push origin 0.5.0+3.5.1`)
|
||||
|
||||
## How does automatic catalogue regeneration work?
|
||||
|
||||
TODO
|
||||
|
||||
## How do I enable healthchecks
|
||||
|
||||
A healthcheck is an important and often overlooked part of the recipe configuration. It is part of the configuration that the runtime uses to figure out if a container is really up-and-running. You can tweak what command to run, how often and how many times to try until you assume the container is not up.
|
||||
|
||||
There are no real universal configs and most maintainers just pick up what others are doing and try to adapt. There is some testing involved to see what works well. You can browse the existing recipe repositories and see from there.
|
||||
|
||||
You'll often find the same one used for things like caches & supporting services, such as Redis:
|
||||
|
||||
```yaml
|
||||
healthcheck:
|
||||
test: ["CMD", "redis-cli", "ping"]
|
||||
```
|
||||
|
||||
If you're just starting off with packaging a recipe, you can use `healthcheck: disable` until you have something working. It's definitely advised to work out your healthcheck as a last step, it can be a bit tricky.
|
||||
|
||||
`abra app errors -w <domain>` will show what errors are being reported from a failing healthcheck setup.
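As an illustration of those knobs (the values are examples to adapt, not recommendations), a more explicit healthcheck might look like:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 30s     # how often to run the check
  timeout: 10s      # how long each check may take
  retries: 10       # how many failures before the service is marked unhealthy
  start_period: 1m  # grace period while the app boots
```
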
## How do I tune deploy configs?
|
||||
|
||||
A bit like healthchecks, there is no universal setup. A good default seems to be the following configuration:
|
||||
|
||||
```yaml
|
||||
deploy:
|
||||
update_config:
|
||||
failure_action: rollback
|
||||
order: start-first
|
||||
rollback_config:
|
||||
order: start-first
|
||||
restart_policy:
|
||||
max_attempts: 3
|
||||
```
|
||||
|
||||
The `start-first` setting ensures that the container runtime tries to start up the new container and get it running before switching over to it.
|
||||
|
||||
Setting a restart policy is also good so that the runtime doesn't try to restart the container forever.
|
||||
|
||||
Best to [read](https://docs.docker.com/engine/reference/builder/#healthcheck) [the docs](https://docs.docker.com/compose/compose-file/compose-file-v3/#healthcheck) on this one.
|
||||
|
||||
## How do I tune resource limits?
|
||||
|
||||
If you don't place resource limits on your app it will assume it can use the entire capacity of the server it is on. This can cause issues such as Out-Of Memory errors for your entire swarm.
|
||||
|
||||
See the [Docker documentation](https://docs.docker.com/config/containers/resource_constraints/) to get into this topic and check the other recipes to see what other maintainers are doing.
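A minimal sketch of such limits under a service's `deploy:` stanza (the numbers are arbitrary examples, not recommendations) could be:

```yaml
deploy:
  resources:
    limits:
      memory: 512M
      cpus: "0.5"
    reservations:
      memory: 128M
```
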
## How do I enable A+ SSL ratings?
|
||||
|
||||
If you want to get the highest rating on SSL certs, you can use the following traefik labels which use a tweaked Traefik configuration.
|
||||
|
||||
```yaml
|
||||
- "traefik.http.routers.traefik.tls.options=default@file"
|
||||
- "traefik.http.routers.traefik.middlewares=security@file"
|
||||
```
|
||||
|
||||
See [this PR](https://git.coopcloud.tech/coop-cloud/traefik/pulls/8/files) for the technical details
|
||||
|
||||
## How do I change secret generation length?
|
||||
|
||||
It is possible to tell `abra` which length it should generate secrets with from your recipe config.
|
||||
|
||||
You do this by adding an inline comment to the secret definition in the `.env.sample` / `.env` file.
|
||||
|
||||
Here are examples from the gitea recipe:
|
||||
|
||||
```
|
||||
SECRET_INTERNAL_TOKEN_VERSION=v1 # length=105
|
||||
SECRET_JWT_SECRET_VERSION=v1 # length=43
|
||||
SECRET_SECRET_KEY_VERSION=v1 # length=64
|
||||
```
|
||||
|
||||
When using this length specifier, `abra` will not use the "easy to remember
|
||||
word" style generator but instead a string of characters to match the exact
|
||||
length. This can be useful if you have to generate "key" style values instead
|
||||
of passwords which admins have to type out in database shells.
|
||||
|
||||
## How are recipes added to the catalogue?
|
||||
|
||||
> This is so far a manual process which requires someone who's been added to the
|
||||
> `coop-cloud` "Organisation" on https://git.coopcloud.tech. This is a temporary
|
||||
> situation, we want to open out this process & also introduce some automation
|
||||
> to support making this process more convenient. Please nag us to move things
|
||||
> along.
|
||||
|
||||
- Publish your new recipe on the [git.coopcloud.tech](https://git.coopcloud.tech/coop-cloud) "Organisation"
|
||||
- Run `abra catalogue generate <recipe> -p`
|
||||
- Run `cd ~/.abra/catalogue && make`
|
||||
|
||||
These minimal steps will publish a new recipe with no versions. You can also do
|
||||
the [recipe release publishing dance](https://docs.coopcloud.tech/maintainers/handbook/#how-do-i-release-a-new-recipe-version)
|
||||
which will then extend the `versions: [...]` section of the published JSON in the catalogue.
|
||||
|
||||
Recipes that are not included in the catalogue can still be deployed. It is not
|
||||
required to add your recipes to the catalogue, but this will improve the
|
||||
visibility for other co-op hosters & end-users.
|
||||
|
||||
For now, it is best to [get in touch](https://docs.coopcloud.tech/intro/contact/) if you want to add your recipe to the catalogue.
|
||||
|
||||
In the future, we'd like to support [multiple catalogues](https://git.coopcloud.tech/coop-cloud/organising/issues/139).
|
||||
|
||||
## How do I configure backup/restore?
|
||||
|
||||
From the perspective of the recipe maintainer, backup/restore is just more
|
||||
`deploy: ...` labels. Tools can read these labels and then perform the
|
||||
backup/restore logic.
|
||||
|
||||
### Tools
|
||||
|
||||
Two of the current "blessed" options are
|
||||
[`backup-bot-two`](https://git.coopcloud.tech/coop-cloud/backup-bot-two) &
|
||||
[`abra`](https://git.coopcloud.tech/coop-cloud/abra).
|
||||
|
||||
#### `backup-bot-two`
|
||||
|
||||
Please see the [`README.md`](https://git.coopcloud.tech/coop-cloud/backup-bot-two#backupbot-ii) for the full docs.
|
||||
|
||||
#### `abra`
|
||||
|
||||
`abra` will read labels and store backups in `~/.abra/backups/...`.
|
||||
|
||||
### Backup
|
||||
|
||||
For backup, here are the labels & some examples:
|
||||
|
||||
- `backupbot.backup=true`: turn on backup logic
|
||||
- `backupbot.backup.pre-hook=mysqldump -u root -pghost ghost --tab /var/lib/foo`: command to run before backing up
|
||||
- `backupbot.backup.post-hook=rm -rf /var/lib/mysql-files/*`: command to run after backing up
|
||||
- `backupbot.backup.path=/var/lib/foo,/var/lib/bar`: paths to back up
|
||||
|
||||
You place these on your recipe configuration and then tools can run backups.
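For example, combined under a service's `deploy:` stanza, the labels above might look like this (the hook commands are copied from the example list and will differ per app):

```yaml
deploy:
  labels:
    - "backupbot.backup=true"
    - "backupbot.backup.pre-hook=mysqldump -u root -pghost ghost --tab /var/lib/foo"
    - "backupbot.backup.post-hook=rm -rf /var/lib/mysql-files/*"
    - "backupbot.backup.path=/var/lib/foo,/var/lib/bar"
```
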
### Restore
|
||||
|
||||
Restore, in this context means, "moving a compressed archive back to the
|
||||
container backup paths". So, if you set
|
||||
`backupbot.backup.path=/var/lib/foo,/var/lib/bar` and you have a backed up
|
||||
archive, tooling will unzip files in the archive back to those paths.
|
||||
|
||||
In the case of restoring database tables, you can use the `pre-hook` &
|
||||
`post-hook` commands to run the insertion logic.
|
||||
|
||||
## Can I override a service within a recipe?
|
||||
|
||||
You can use [this `docker-compose` trick](https://docs.docker.com/compose/extends/#understanding-multiple-compose-files) to do this.
|
||||
|
||||
If you have a recipe that is using a `mysql` service and you'd like to use `postgresql` instead, you can create a `compose.psql.yml`!
|
||||
|
||||
An example of this is the [`selfoss`](https://git.coopcloud.tech/coop-cloud/selfoss) recipe. The default is `sqlite` but there is a `postgresql` compose configuration there too.
|
||||
|
||||
## How do I set a custom entrypoint?
|
||||
|
||||
For more context, see the [`entrypoint.sh`](/maintainers/handbook/#entrypointsh) section. The following configuration example is ripped from the [`coop-cloud/peertube`](https://git.coopcloud.tech/coop-cloud/peertube) recipe but shortened down. Here are more or less the steps you need to take:
|
||||
|
||||
Define a config:
|
||||
|
||||
```yaml
|
||||
app:
|
||||
...
|
||||
configs:
|
||||
- source: app_entrypoint
|
||||
target: /docker-entrypoint.sh
|
||||
mode: 0555
|
||||
...
|
||||
|
||||
configs:
|
||||
app_entrypoint:
|
||||
name: ${STACK_NAME}_app_entrypoint_${APP_ENTRYPOINT_VERSION}
|
||||
file: entrypoint.sh.tmpl
|
||||
template_driver: golang
|
||||
```
|
||||
|
||||
Define a `entrypoint.sh.tmpl`:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
file_env() {
|
||||
local var="$1"
|
||||
local fileVar="${var}_FILE"
|
||||
local def="${2:-}"
|
||||
|
||||
if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
|
||||
echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
local val="$def"
|
||||
|
||||
if [ "${!var:-}" ]; then
|
||||
val="${!var}"
|
||||
elif [ "${!fileVar:-}" ]; then
|
||||
val="$(< "${!fileVar}")"
|
||||
fi
|
||||
|
||||
export "$var"="$val"
|
||||
unset "$fileVar"
|
||||
}
|
||||
|
||||
file_env "PEERTUBE_DB_PASSWORD"
|
||||
|
||||
{{ if eq (env "PEERTUBE_SMTP_ENABLED") "1" }}
|
||||
file_env "PEERTUBE_SMTP_PASSWORD"
|
||||
{{ end }}
|
||||
|
||||
{{ if eq (env "PEERTUBE_LIVE_CHAT_ENABLED") "1" }}
|
||||
apt -y update && apt install -y prosody && apt -y clean
|
||||
mkdir -p /run/prosody && chown prosody:prosody /run/prosody
|
||||
{{ end }}
|
||||
|
||||
# Copy the client files over to a named volume
|
||||
# so that they may be served by nginx directly
|
||||
cp -ar /app/client/dist /srv/client
|
||||
|
||||
# upstream entrypoint
|
||||
# https://github.com/Chocobozzz/PeerTube/blob/66f77f63437c6774acbd72584a9839a7636ea167/support/docker/production/entrypoint.sh
|
||||
/usr/local/bin/entrypoint.sh "$@"
|
||||
```
|
||||
|
||||
Please note:

1. The `file_env` / `_FILE` hack is to pass secrets into the container runtime without exposing them in plaintext in the configuration. See [this entry](/maintainers/handbook/#exposing-secrets) for more.

1. In order to pass execution back to the original entrypoint, it's a good idea to find the original entrypoint script and run it from your own entrypoint script. If there is none, you may want to reference the `CMD` definition, or if that isn't working, try actually specifying `cmd: ...` in the `compose.yml` definition (other recipes do this too).

1. If you're feeling reckless, you can also use the Golang templating engine to do things conditionally.

Then, wire up the vendored config version:

```
# abra.sh
export APP_ENTRYPOINT_VERSION=v5
```

You should be able to deploy this overridden configuration now.

## Linting rules

### R015: "long secret names"

Due to limitations placed by the Docker runtime, secret names must be < 64
characters long. Due to conventions in recipe configuration and how `abra`
works, several characters are appended to secret names during a deployment.
This means if you have a domain `example.org` and a secret `foo_pass`, you'll
end up with something like `example_org_foo_pass_v1` being used for the secret
name.

Based on a discussion in
[`#463`](https://git.coopcloud.tech/coop-cloud/organising/issues/463) and
looking at what is currently implemented in existing recipes, we came up with a
general rule of thumb that secret names in recipe configurations should be < 12
characters long to avoid errors on deployment.

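If you want to sanity-check a candidate secret name, you can estimate the length of the final deployed name yourself. Here is a small sketch following the convention described above (dots in the domain become underscores and a `_v1`-style version suffix is appended):

```bash
# estimate the deployed secret name length for a given domain + secret
domain="example.org"; secret="foo_pass"; version="v1"
name="$(echo "$domain" | tr '.' '_')_${secret}_${version}"
echo "$name (${#name} chars)"  # example_org_foo_pass_v1 (23 chars)
```
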
### R014: "invalid lightweight tag"

This is an issue related to the way Git/`go-git` handle Git tags internally. We
need to use "annotated tags" and not "lightweight tags" for our recipe version
tags. Otherwise, `abra` has a hard time parsing what is going on.

The `R014` linting error happens because the recipe in question has a
lightweight tag. This needs to be replaced. This is a manual process. Here's a
practical example with the Gitea recipe when we had this issue.

You can check which kind of tag each one is by running the following:

```
git for-each-ref refs/tags
734045872a57d795cd54b1992a1753893a4934f1 tag refs/tags/1.0.0+1.14.5-rootless
b2cefa5ccf2f2f77dae54cf6c304cccecb3547ca tag refs/tags/1.1.0+1.15.0-rootless
6d669112d8caafcdcf4eb1485f2d6afdb54a8e30 tag refs/tags/1.1.1+1.15.3-rootless
64761ad187cc7a3984a37dd9abd4fa16979f97b9 tag refs/tags/1.1.2+1.15.6-rootless
1ccb1cb6a63a08eebf6ba5508b676eaaccba7ed8 tag refs/tags/1.1.3+1.15.10-rootless
b86e1f6dfef3c464b16736274b3cd95f8978f66b tag refs/tags/1.2.0+1.16.3-rootless
b1d22f3c39ca768a4efa1a0b9b9f780268c924b3 tag refs/tags/1.2.1+1.16.8-rootless
85a45aa749427822a73ef62b6362d57bae1a61af tag refs/tags/1.3.0+1.17.2-rootless
f35689989c0b57575b8362e1252476d8133dc961 commit refs/tags/1.3.1+1.17.3-rootless
df015fae592fca7728a3f0835217e110da4dbafc tag refs/tags/2.0.0+1.18.0-rootless
71920adb0c25a59f7678894e39f1a705f0ad08dd tag refs/tags/2.0.1+1.18.2-rootless
1ab9a96922341c8e54bdb6d60850630cce4b9587 tag refs/tags/2.1.0+1.18.5-rootless
1e612d84a2ad7c9beb7aa064701a520c7e91eecc commit refs/tags/2.1.2+1.19.3-rootless
0bee99615a8bbd534a66a315ee088af3124e054b tag refs/tags/2.2.0+1.19.3-rootless
699378f53501b2d5079fa62cc7f8e79930da7540 tag refs/tags/2.3.0+1.20.1-rootless
c0dc5f82930d875c0a6e29abc016b4f6a53b83dd tag refs/tags/2.3.1+1.20.1-rootless
```

Where `f35689989c0b57575b8362e1252476d8133dc961` &
`1e612d84a2ad7c9beb7aa064701a520c7e91eecc` need to be removed ("commit"). We
will deal with `refs/tags/1.3.1+1.17.3-rootless` in this example.

```
# find the tag hash
git show 1.3.1+1.17.3-rootless
commit f35689989c0b57575b8362e1252476d8133dc961 (tag: 1.3.1+1.17.3-rootless)
Merge: af97db8 1d4dc8e
Author: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Date:   Sun Nov 13 21:54:01 2022 +0000

    Merge pull request 'Adding Oauth2 options and up on versions' (#29) from javielico/gitea:master into master

    Reviewed-on: https://git.coopcloud.tech/coop-cloud/gitea/pulls/29

# delete the tag locally / remotely
git tag -d 1.3.1+1.17.3-rootless
git push origin 1.3.1+1.17.3-rootless --delete

# re-tag, this time with `-a` (annotated)
git checkout f35689989c0b57575b8362e1252476d8133dc961
git tag -a 1.3.1+1.17.3-rootless

# push new tag
git checkout master # might be main on other recipes!
git push origin master --tags

# check everything works
git for-each-ref refs/tags | grep 1.3.1+1.17.3-rootless
964f1680000fbba6daa520aa8d533a53ad151ab8 tag refs/tags/1.3.1+1.17.3-rootless
```

That's it! Spread the word, use `-a` when tagging recipe versions manually! Or
just use `abra` which should handle this issue automagically for you in all
cases 🎉

docs/maintainers/index.md
@ -0,0 +1,23 @@
---
title: Maintainers
---

Welcome to the maintainers guide! Maintainers are typically individuals who have a stake in building up and maintaining our digital configuration commons, the recipe configurations. Maintainers help keep recipe configurations up to date, respond to issues in a timely manner, help new users within the community and recruit new maintainers when possible.

<div class="grid cards" markdown>

- __New Maintainers Tutorial__

    If you want to package a recipe and/or become a maintainer, start here :rocket:

    [Get Started](/maintainers/tutorial){ .md-button .md-button--primary }

- __Packaging Handbook__

    One-stop shop for all you need to know to package recipes :package:

    [Read Handbook](/maintainers/handbook){ .md-button .md-button--primary }

</div>

Maintainers are encouraged to submit documentation patches! Sharing is caring :sparkling_heart:

docs/maintainers/tutorial.md
@ -0,0 +1,94 @@
---
title: New maintainers tutorial
---

## Package your first recipe

### Overview

Packaging a recipe is basically knowing a bag of about 20 tricks. Once you learn them, there is nothing more to learn. It can seem daunting at first but it's simple and easy to do once you know the tricks.

The nice thing about packaging is that only one person has to do it and then we all benefit. We've seen that over time, the core of the configuration doesn't really change. New options and versions might come but the config remains quite stable. This is good since it means that your packaging work stays relevant and useful for other maintainers & operators as time goes on.

Depending on your familiarity with recipes, it might be worth reading [how a recipe is structured](/maintainers/handbook/#how-is-a-recipe-structured) and making sure you understand [what a recipe is](/glossary/#recipe) before continuing.

### Making a plan

The ideal scenario is when the upstream project provides both the packaged image and a compose configuration which we can build from. If you're in luck, you'll typically find a `Dockerfile` and a `docker-compose.yml` file in the root of the upstream Git repository for the app.

- **Tired**: Write your own image and compose file from scratch :sleeping:
- **Wired**: Use someone else's image (& maybe compose file) :smirk_cat:
- **Inspired**: Upstream image, someone else's compose file :exploding_head:
- **On fire**: Upstream image, upstream compose file :fire:

### Writing / adapting the `compose.yml`

Let's take a practical example, [Matomo web analytics](https://matomo.org/). We'll be making a Docker "swarm-mode" `compose.yml` file.

Luckily, Matomo already has an example compose file in their repository. Like a lot of compose files, it's intended for use with `docker-compose`, instead of "swarm mode", but it should be a good start.

First, let's create a directory with the files we need:

```
abra recipe new matomo
cd ~/.abra/recipes/matomo
```

Then, let's download and edit the `docker-compose.yml` file:

```
mkdir matomo && cd matomo
wget https://raw.githubusercontent.com/matomo-org/docker/master/.examples/apache/docker-compose.yml -O compose.yml
```

Open the `compose.yml` in your favourite editor and have a gander 🦢. There are a few things we're looking for, but some immediate changes could be:

1. Let's bump the version to `3.8`, to make sure we can use all the latest swarm coolness.
2. We load environment variables separately via [`abra`](/abra/), so we'll strip out `env_file`.
3. The `/var/www/html` volume definition on L21 is a bit overzealous; it means a copy of Matomo will be stored separately per app instance, which is a waste of space in most cases. We'll narrow it down according to the documentation. The developers have been nice enough to suggest `logs` and `config` volumes instead, which is a decent start.
4. The MySQL passwords are sent as variables which is fine for basic use, but if we replace them with Docker secrets we can keep them out of our env files if we want to publish those more widely.
5. The MariaDB service doesn't need to be exposed to the internet, so we can define an `internal` network for it to communicate with Matomo.
6. Lastly, we want to use `deploy.labels` and remove the `ports:` definition, to tell Traefik to forward requests to Matomo based on hostname and generate an SSL certificate (see the sketch just below this list).

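As a rough illustration of step 6, the Traefik wiring usually ends up looking something like the sketch below. The label keys follow Traefik's Docker provider conventions; the `web-secure` entrypoint and `production` certificate resolver mirror values used elsewhere in these docs, but the router name, port and exact values are assumptions to adapt to your recipe.

```yaml
services:
  app:
    networks:
      - proxy
      - internal
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.${STACK_NAME}.rule=Host(`${DOMAIN}`)"
        - "traefik.http.routers.${STACK_NAME}.entrypoints=web-secure"
        - "traefik.http.routers.${STACK_NAME}.tls.certresolver=production"
        - "traefik.http.services.${STACK_NAME}.loadbalancer.server.port=80"
```
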
The resulting `compose.yml` is available [here](https://git.autonomic.zone/coop-cloud/matomo/src/branch/main/compose.yml).

### Updating the `.env.sample`

Open the `.env.sample` file and add the following:

```
DB_PASSWORD_VERSION=v1
DB_ROOT_PASSWORD_VERSION=v1
```

The resulting `.env.sample` is available [here](https://git.coopcloud.tech/coop-cloud/matomo/src/branch/main/.env.sample).

### Test deployment

!!! note "Running Co-op Cloud server required!"

    The rest of this guide assumes you have a Co-op Cloud server going -- we'll use `swarm.example.com`, but replace it with your own server address. Head over to [the operators tutorial](/operators/tutorial) if you need help setting one up.

Now, we're ready to create a testing instance of Matomo:

```
abra app new matomo --secrets \
  --domain matomo.swarm.example.com \
  --server swarm.example.com
```

Depending on whether you defined any extra environment variables -- we didn't so
far, in this example -- you might want to run `abra app config matomo.swarm.example.com`
to check the configuration.

Otherwise, or once you've done that, go ahead and deploy the app:

```
abra app deploy matomo.swarm.example.com
```

Then, open the `DOMAIN` you configured (you might need to wait a while for Traefik to generate SSL certificates) to finish the set-up. Luckily, this container is (mostly) configurable via environment variables; if we want to auto-generate the configuration, we can use a `config` and / or a custom `entrypoint` (see [`coop-cloud/mediawiki`](https://git.autonomic.zone/coop-cloud/mediawiki) for examples of both).

### Finishing up

You've probably got more questions; check out the [packaging handbook](/maintainers/handbook)!

@ -1,502 +0,0 @@
---
title: Operators Handbook
---

## Understanding `~/.abra`

Co-op Cloud stores per-app configuration in the `~/.abra/servers` directory, on whichever machine you're running `abra` on (by default, your own work station). In other words, app configurations are grouped under their relevant server directory. This corresponds to the ordering of the output of `abra app ls`.

!!! question "What format do the `.env` files use?"

    `.env` files use the same format as used by Docker (with the `env_file:` statement in a `docker-compose.yml` file, or the `--env-file` option to `docker run`) and `direnv`. There is no `export ...=...` required since `abra` will take care to thread the values into the recipe configuration at deploy time.

`abra` doesn't mind if `~/.abra/servers`, or any of its subdirectories, is a [symlink](https://en.wikipedia.org/wiki/Symlink), so you can keep your app definitions wherever you like!

```
mv ~/.abra/servers/ ~/coop-cloud
ln -s ~/coop-cloud ~/.abra/servers
```

You don't need to worry about `~/.abra/{vendor,catalogue,recipes,autocompletion}`, `abra` manages those automagically.

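To make the format concrete, an app `.env` is just plain `KEY=value` lines. The following is purely illustrative (the keys and values are made up; real files are generated from each recipe's `.env.sample`):

```
# ~/.abra/servers/swarm.example.com/nextcloud.swarm.example.com.env (illustrative)
RECIPE=nextcloud
DOMAIN=nextcloud.swarm.example.com
LETS_ENCRYPT_EMAIL=admin@example.com
DB_PASSWORD_VERSION=v1
```
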
## Backing up `~/.abra`

Just make sure the `~/.abra/servers` directory is included in the configuration of your favourite backup tool. Because `~/.abra/servers` is a collection of plain-text files, it's easy to keep your backup configuration in a version control system (we use `git`, others would almost certainly work).

This is particularly recommended if you're collaborating with others, so that you can all run `abra app ...` commands without having to maintain your own separate, probably-conflicting, configuration files.

In the simple case where you only have one server configured with `abra`, or everyone in your team is using the same set of servers, you can version-control the whole `~/.abra/servers` directory:

```
cd ~/.abra/servers
git init
git add .
git commit -m "Initial import"
```

!!! warning "Test your revision-control self-discipline"

    `abra` does not yet help keep your `~/.abra/servers` configurations up-to-date! Make sure to run `git add` / `git commit` after making configuration changes, and `cd ~/.abra/servers && git pull` before running `abra app ...` commands. Patches to add some safety checks and auto-updates would be very welcome! 🙏

## Sharing `~/.abra`

In a more complex situation, where you're using Co-op Cloud to manage several servers, and you're collaborating with different people on different servers, you can set up **a separate repository for each subdirectory in `~/.abra/servers`**, or even a mixture of single-server and multi-server repositories:

```
ls -l ~/.abra/servers
# Example.com's own app configuration:
swarm.example.com -> /home/user/Example/coop-cloud-apps/swarm.example.com

# Configuration for one of Example.com's clients – part of the same repository:
swarm.client.com -> /home/user/Example/coop-cloud-apps/swarm.client.com

# A completely separate project, part of a different repository:
swarm.demonstration.com -> /home/user/Demonstration/coop-cloud-apps
```

To make setting up these symlinks easier, you might want to include a simple installer script in your configuration repositories.

Save this as `Makefile` in your repository:

```
# -s symlink, -f force creation, -F don't create symlink in the target dir
default:
	@mkdir -p ~/.abra/servers/
	@for SERVER in $$(find -maxdepth 1 -type d -name "[!.]*"); do \
		echo ln -sfF "$$(pwd)/$${SERVER#./}" ~/.abra/servers/ ; \
		ln -sfF "$$(pwd)/$${SERVER#./}" ~/.abra/servers/ ; \
	done
```

This will set up symlinks from each directory in your repository to a correspondingly-named directory in `~/.abra/servers` – if your repository has a `swarm.example.com` directory, it'll be linked as `~/.abra/servers/swarm.example.com`.

Then, tell your collaborators (e.g. in the repository's `README.md`) to run `make` in their repository check-out.

!!! warning "You're on your own!"

    As with the [simple repository set-up above](#backing-up-your-abra-configuration), `abra` doesn't yet help you update your version control system when you make changes, nor check version control to make sure you have the latest configuration. Make sure to `commit` and `push` after you make any configuration changes, and `pull` before running any `abra app ...` commands.

!!! question "Even more granularity?"

    The plain-text, file-based configuration format means that you could even keep the configuration for different apps on the same server in different repositories, e.g. having `git.example.com` configuration in a separate repository to `wordpress.example.com`, using per-file symlinks.

    We don't currently recommend this, because it might set inaccurate expectations about the security model – remember that, by default, **any user who can deploy apps to a Docker Swarm can manage _any_ app in that swarm**.

### Migrating a server into a repository

Even if you've got your existing server configs in version control, by default, `abra server add` will define the server locally. To move it -- taking the example of `newserver.example.com`:

```
mv ~/.abra/servers/newserver.example.com ~/coop-cloud-apps/
cd ~/coop-cloud-apps
git add newserver.example.com
git commit
make link
```

## Running abra server side

If you're in an environment where it's hard to run Docker, or command-line programs in general, you might want to install `abra` on a server instead of your local work station.

To install `abra` on the same server where you'll be hosting your apps, just follow the [getting started guide](/operators/tutorial#deploy-your-first-app) as normal except for one difference. Instead of providing your SSH connection details when you run `abra server add ...`, just pass `--local`.

```
abra server add --local
```

!!! note "Technical details"

    This will tell `abra` to look at the Docker system running on the server, instead of a remote one (using the Docker internal `default` context). Once this is wired up, `abra` knows that the deployment target is the local server and not a remote one. This will be handled seamlessly for all other deployments on this server.

Make sure to back up your `~/.abra` directory on the server, or put it in version control, as well as other files you'd like to keep safe.

## Managing secret data

Co-op Cloud uses [Docker Secrets](https://docs.docker.com/engine/swarm/secrets/) to handle sensitive data, like database passwords and API keys, securely.

`abra` includes several commands to make it easier to manage secrets:

- `abra app secret generate <domain>`: to auto-generate app secrets
- `abra app secret insert <domain>`: to insert a single secret
- `abra app secret rm <domain>`: to remove secrets

### Secret versions

Docker secrets are immutable, which means that their values can't be changed after they're set. To accommodate this, Co-op Cloud uses the established convention of "secret versions". Every time you change (rotate) a secret, you will insert it as a new version. Because secret versions are managed per-instance by the people deploying their apps, secret versions are stored in the `.env` file for each app:

```
find -L ~/.abra/servers/ -name '*.env' -print0 | xargs -0 grep -h SECRET
OIDC_CLIENT_SECRET_VERSION=v1
RPC_SECRET_VERSION=v1
CLIENT_SECRET_VERSION=v1
...
```

If you try to add a secret version which already exists, Docker will helpfully complain:

```
abra app secret insert mywordpress.com db_password v1 foobar
Error response from daemon: rpc error: code = AlreadyExists desc = secret mywordpress_com_db_password_v1 already exists
```

By default, new app instances will look for `v1` secrets.

### Generating secrets automatically

You can generate secrets in one of two ways:

1. While running `abra app new <recipe>`, by passing `-S/--secrets`
2. At any point once an app instance is defined, by running `abra app secret generate <domain> ...` (see `abra app secret generate -h` for more)

### Inserting secrets manually

For third-party API tokens, like OAuth client secrets, or keys for services like Mailgun, you will be storing values you already have as the appropriately-named Docker secrets. `abra` provides a convenient interface to the underlying `docker secret create` command:

```
abra app secret insert <domain> db_password v2 "your-secret-value-here"
```

### Rotating a secret

So, given how [secret versions](/operators/handbook/#secret-versions) work, here's how you change a secret:

1. Find out the current version number of the secret, e.g. by running `abra app config <domain>`, and choose a new one. Let's assume it's currently `v1`, so by convention the new secret will be `v2`
2. Generate or insert the new secret: `abra app secret generate <domain> db_password v2` or `abra app secret insert <domain> db_password v2 "foobar"`
3. Edit the app configuration to change which secret version the app will use: `abra app config <domain>`
4. Re-deploy the app with the new secret version: `abra app deploy <domain>` (see the worked example just below this list)

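Putting those steps together for a hypothetical `db_password` secret on `mywordpress.com` (the domain, value and `DB_PASSWORD_VERSION` variable name are illustrative; use whatever your app's `.env` actually contains):

```bash
# 1. check the current version in the app config
abra app config mywordpress.com            # look for e.g. DB_PASSWORD_VERSION=v1

# 2. insert (or generate) the new secret as the next version
abra app secret insert mywordpress.com db_password v2 "new-password-here"

# 3. bump the version in the .env via the editor, e.g. DB_PASSWORD_VERSION=v2
abra app config mywordpress.com

# 4. roll out the change
abra app deploy mywordpress.com
```
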
### Storing secrets in `pass`

The Co-op Cloud authors use the [UNIX `pass` tool](https://www.passwordstore.org) to share sensitive data, including Co-op Cloud secrets, and `abra app secret ...` commands include a `--pass` option to automatically manage generated / inserted secrets:

```
# Store generated secrets in `pass`:
abra app new wordpress --secrets --pass
abra app secret generate mywordpress.com --all --pass

# Store inserted secret in `pass`:
abra app secret insert mywordpress.com db_password v2 --pass

# Remove secrets from Docker, and `pass`:
abra app secret rm mywordpress.com --all --pass
```

This functionality currently relies on our specific `pass` storage conventions; patches to make that configurable are very welcome!

## Networking

!!! note "So dark the con of Docker Networking"

    Our understanding of Docker networking is probably wrong. We're working on it. Plz send halp :pray:

### Traefik networking

[Traefik](https://doc.traefik.io/traefik/) is our core web proxy; all traffic on a Co-op Cloud deployment goes through a running Traefik container. When setting up a new Co-op Cloud deployment, `abra` creates a "global" [overlay network](https://docs.docker.com/network/overlay/) which Traefik is hooked up to. This is the network that other apps use to speak to Traefik and get traffic routed to them. Not every service in every app is attached to this network, and services which aren't are therefore not internet-facing (by convention, those services sit on a per-app network we call `internal`, see more below).

### App networking

By convention, the main `app` service is wired up to the "global" Traefik overlay network. This container is the one that should be publicly reachable on the internet. The other services in the app, such as the database and caches, should not be publicly reachable or visible to other apps on the same instance.

To deal with this, we make an additional "internal" network for each app which is namespaced to that app. So, if you deploy a Wordpress instance called `my_wordpress_blog` then there will be a network called `my_wordpress_blog_internal` created. This allows all the services in an app to speak to each other but not be reachable on the public internet.

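In recipe terms, that wiring usually looks roughly like this sketch (not a complete recipe; the `proxy` name matches the overlay network created during server setup, and the service names are placeholders):

```yaml
services:
  app:
    # publicly reachable: attached to both the proxy and the per-app network
    networks:
      - proxy
      - internal
  db:
    # only reachable by other services within this app
    networks:
      - internal

networks:
  proxy:
    external: true  # the "global" overlay network Traefik is attached to
  internal:         # becomes <stack name>_internal when deployed
```
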
## Multiple apps on the same domain?

At time of writing (Jan 2022), we think there is a limitation in our design which doesn't support multiple apps sharing the same domain (e.g. `example.com/app1/` & `example.com/app2/`). `abra` treats each domain as unique and as the single reference for a single app.

This may be possible to overcome if someone really needs it; we encourage people to investigate. We've found that often there are limitations in the upstream software itself which prevent this anyway, and several of the current operators simply use a new domain per app.

## How do I bootstrap a server for running Co-op Cloud apps?

The requirements are:

1. Docker installed
1. User in Docker user group
1. Swarm mode initialised
1. Proxy network created

!!! warning "You may need to log in/out"

    When running `usermod ...`, you may need to (depending on your system) log
    in and out again of your shell session to get the required permissions for
    Docker.

```
# docker install convenience script
wget -O- https://get.docker.com | bash

# add user to docker group
usermod -aG docker $USER

# setup swarm
docker swarm init
docker network create -d overlay proxy

# on debian machines as of 2023-02-17
apt install apparmor
systemctl restart docker containerd
```

## How do I persist container logs after they go away?

This is a big topic but in general, if you're looking for something quick & easy, you can use the [journald logging driver](https://docs.docker.com/config/containers/logging/journald/). This will hook the container logs into systemd which can handle persistent log collection & managing log file size.

You need to add the following to your `/etc/docker/daemon.json` file on the server:

```json
{
  "log-driver": "journald",
  "log-opts": {
    "labels": "com.docker.swarm.service.name"
  }
}
```

And for log size management, edit `/etc/systemd/journald.conf`:

```
[Journal]
Storage=persistent
SystemMaxUse=5G
MaxFileSec=1month
```

Then restart `docker` & `journald`:

```
systemctl restart docker
systemctl restart systemd-journald
```

Now when you use `docker service logs` or `abra app logs`, it will read from the systemd journald logger seamlessly! Some useful `journalctl` commands are as follows, if you're doing some more fine grained logs investigation:

- `journalctl -f`
- `journalctl CONTAINER_NAME=my_git_com_app.1.jxn9r85el63pdz42ykjnmh792 -f`
- `journalctl COM_DOCKER_SWARM_SERVICE_NAME=my_git_com_app --since="2020-09-18 13:00:00" --until="2020-09-18 13:01:00"`
- `journalctl CONTAINER_ID=$(docker ps -qf name=my_git_com_app) -f`

Also, for more system wide analysis stuff:

- `journalctl --disk-usage`
- `du -sh /var/log/journal/*`
- `man journalctl` / `man systemd-journald` / `man journald.conf`

## How do I specify a custom user/port for SSH connections with `abra`?

`abra` uses plain ol' SSH under the hood and aims to make use of your existing SSH configurations in `~/.ssh/config` and interfaces with your running `ssh-agent` for password protected secret key files.

The `server add` command listed above assumes that you make SSH connections on port 22 using your current username. If that is not the case, pass the new values as positional arguments. See `abra server add -h` for more on this.

```bash
abra server add <domain> <user> <port> -p
```

Running `server add` with `-d/--debug` should help you debug what is going on under the hood. It's best to take a moment to read [this troubleshooting entry](/abra/trouble/#ssh-connection-issues) if you're running into SSH connection issues with `abra`.

## How do I attach to a running container?

If you need to run a command within a running container you can use `abra app run <domain> <service> <command>`. For example, you could run `abra app run cloud.lumbung.space app bash` to open a new bash terminal session inside your remote container.

## How do I attach to a non-running container?

If you need to run a command in a container that won't start (e.g. the container is stuck in a restart loop), you can temporarily disable its default entrypoint by setting it in `compose.yml` to something like `['tail', '-f', '/dev/null']`, then redeploy the stack (with `--force --chaos` so you don't need to commit), [get into the now-running container](#how-do-i-attach-to-a-running-container), do your business, and when done revert the `compose.yml` change and redeploy again.

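For example, the temporary override in the recipe's `compose.yml` might look like this (the service name is illustrative):

```yaml
services:
  app:
    # temporarily keep the container alive instead of running the real entrypoint
    entrypoint: ['tail', '-f', '/dev/null']
```
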
## Can I run Co-op Cloud on ARM?

`@Mayel`:

> FYI I've been running on ARM for a while with no troubles (as long as images
> used support it of course, `abra` doesn't work yet!) 😀 ... in cases where I
> couldn't find a multiarch image I simply have eg. image: ${DB_DOCKER_IMAGE}
> in the docker-compose and set that to a compatible image in the env config
> ... there was really nothing to it, apart from making sure to use multiarch
> or arm images

See [`#312`](https://git.coopcloud.tech/coop-cloud/organising/issues/312) for more.

## How do I backup/restore my app?

If your app [supports backup/restore](/maintainers/handbook/#how-do-i-configure-backuprestore) then you have two options: [`backup-bot-two`](https://git.coopcloud.tech/coop-cloud/backup-bot-two) & [`abra`](https://git.coopcloud.tech/coop-cloud/abra).

With `abra`, you can simply run the commands:

```
$ abra app backup <domain>
$ abra app restore <domain>
```

Pass `-h` for more information on the specific flags & arguments.

If your app recipe *does not support backups* you can do it manually with the
`abra cp` command. See the exact commands in the [abra
cheat sheet](/abra/cheat-sheet/#manually-restoring-app-data).

## How do I take a manual database backup?

MySQL / MariaDB:

```
abra app run foo.bar.com db mysqldump -u root <database> | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
```

Postgres:

```
abra app run foo.bar.com db pg_dump -U root <database> | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
```

If you get errors about database access:

- Make sure you've specified the right database user (`root` above) and db name
- If you have a database password set, you might need to load it from a secret,
  something like this:

```
abra app run foo.bar.com db bash -c 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" <database>' | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
```

## Can I deploy a recipe without `abra`?

Yes! It's a design goal to keep the recipes independent of `abra` or any
single tool that we develop. This means that the configuration commons can
still be useful beyond this project. You can deploy a recipe with standard
commands like so:

```
set -a
source example.com.env
cd ~/.abra/recipes/myrecipe
docker stack deploy -c compose.yml example_com
```

`abra` makes all of this more convenient.

## Proxying apps outside of Co-op Cloud with Traefik?

It's possible! It's actually always been possible but we just didn't have
spoons to investigate. Co-op Cloud can co-exist on the same server as bare
metal apps, non-swarm containers (plain `docker-compose up` deployments!),
Nginx installs etc. It's a bit gnarly with the networking but doable.

Enable the following in your Traefik `$domain.env` configuration:

```
FILE_PROVIDER_DIRECTORY_ENABLED=1
```

You must also have host mode networking enabled for Traefik:

```
COMPOSE_FILE="$COMPOSE_FILE:compose.host.yml"
```

And re-deploy your `traefik` app. You now have full control over the [file
provider](https://doc.traefik.io/traefik/providers/file/#directory)
configuration of Traefik. This also means you lose the defaults of the
`file-provider.yml.tmpl`, so this is a more involved approach.

The main change is that there is now a `/etc/traefik/file-providers` volume
being watched by Traefik for provider configurations. You can re-enable the
recipe defaults by copying the original over to the volume (this assumes you've
deployed `traefik` already without `FILE_PROVIDER_DIRECTORY_ENABLED`, which is
required for the following command):

```
abra app run $your-traefik app \
  cp /etc/traefik/file-provider.yml /etc/traefik/file-providers/
```

You don't need to re-deploy Traefik; it should automatically pick this up.

You can route requests to a bare metal / non-docker service by making a
`/etc/traefik/file-providers/$YOUR-SERVICE.yml` and putting something like this in
it:

```yaml
http:
  routers:
    myservice:
      rule: "Host(`my-service.example.com`)"
      service: "myservice"
      entryPoints:
        - web-secure
      tls:
        certResolver: production

  services:
    myservice:
      loadBalancer:
        servers:
          - url: "http://$YOUR-HOST-IP:8080/"
```

Where you should replace all instances of `myservice`.

You must use your host level IP address (replace `$YOUR-HOST-IP` in the
example). With host mode networking, your deployment can route out of the swarm
to the host.

If you're running a firewall (e.g. UFW) then it will likely block traffic from
the swarm to the host. You can typically add a specific UFW rule to allow traffic
from the swarm (typically, your `docker_gwbridge`) to the specific port of your
bare metal / non-docker app:

```
docker network inspect docker_gwbridge --format='{{( index .IPAM.Config 0).Gateway}}'
172.18.0.1
ufw allow from 172.18.0.0/16 proto tcp to any port $YOUR-APP-PORT
```

Notice that we turn `172.18.0.1` into `172.18.0.0/16`. It's advised to open the
firewall on a port-by-port basis to avoid expanding your attack surface.

Traefik should handle the usual automagic HTTPS certificate generation and
route requests after. You're free to make as many `$whatever.yml` files in your
`/etc/traefik/file-providers` directory. It should Just Work ™

Please note that we have to hardcode `production` and `web-secure` which are
typically configurable when not using `FILE_PROVIDER_DIRECTORY_ENABLED`.

## Can I use Caddy instead of Traefik?

Yes, it's possible although currently Quite Experimental! See
[`#388`](https://git.coopcloud.tech/coop-cloud/organising/issues/388) for more.

## Running an offline coop-cloud server

You may want to run Co-op Cloud directly on your device (or in a VM or machine on your LAN), whether that's for testing a recipe or to run coop-cloud apps outside of the cloud ;-)

In that case you might simply add some names to `/etc/hosts` (e.g. `127.0.0.1 myapp.localhost`), or configure them on a local DNS server - which means `traefik` won't be able to use `letsencrypt` to generate and verify SSL certificates. Here's what you can do instead:

1. In your traefik `.env` file, edit/uncomment the following lines:

    ```
    LETS_ENCRYPT_ENV=staging
    WILDCARDS_ENABLED=1
    SECRET_WILDCARD_CERT_VERSION=v1
    SECRET_WILDCARD_KEY_VERSION=v1
    COMPOSE_FILE="$COMPOSE_FILE:compose.wildcard.yml"
    ```

2. Generate a self-signed certificate using the [command listed here](https://letsencrypt.org/docs/certificates-for-localhost/#making-and-trusting-your-own-certificates). Unless using `localhost` you may want to edit that where it appears in the command, and/or add multiple (sub)domains to the certificate, e.g.: `subjectAltName=DNS:localhost,DNS:myapp.localhost`

3. Run these commands:

    ```
    abra app secret insert localhost ssl_cert v1 "$(cat localhost.crt)"
    abra app secret insert localhost ssl_key v1 "$(cat localhost.key)"
    ```

4. Re-deploy `traefik` with `--force` and voila!

## Remote recipes

!!! warning "This is only available in the currently unreleased version of `abra`"

    Please see [this issue](https://git.coopcloud.tech/coop-cloud/organising/issues/583) to track current progress towards a release. All feedback and testing are welcome on this new feature. The design is not finalised yet.

It is possible to specify a remote recipe in your `.env` file:

```
RECIPE=mygit.org/myorg/cool-recipe.git:1.3.12
```

Where `1.3.12` is an optional pinned version. When `abra` runs a deployment, it
will fetch the remote recipe and create a directory for it under `$ABRA_DIR`
(typically `~/.abra`):

```
$ABRA_DIR/recipes/mygit_org_myorg_cool-recipe
```

@ -1,23 +0,0 @@
---
title: Operators
---

Welcome to the operators guide! Operators are typically individuals, members of tech co-ops or collectives who provide services powered by Co-op Cloud. This documentation is meant to help new & experienced operators manage their deployments as well as provide a space for sharing tricks & tips for keeping things running smoothly.

<div class="grid cards" markdown>

- __New Operators Tutorial__

    If you want to become an operator, start your journey here :rocket:

    [Get started](tutorial.md){ .md-button .md-button--primary }

- __Operators Handbook__

    One-stop shop for all you need to know to manage a deployment :ribbon:

    [Read Handbook](handbook.md){ .md-button .md-button--primary }

</div>

Operators are encouraged to submit documentation patches! Sharing is caring :sparkling_heart:

@ -1,279 +0,0 @@
---
title: New Operators Tutorial
---

This tutorial assumes you understand the [frequently asked questions](/intro/faq/) as well as [the moving parts](/intro/strategy/) of the technical problems _Co-op Cloud_ solves. If yes, proceed :smile:

## Deploy your first app

In order to deploy an app you need two things:

1. a server with SSH access and a public IP address
2. a domain name pointing to that server

This tutorial tries to help you make choices about which server and which DNS setup you need to run a _Co-op Cloud_ deployment but it does not go into great depth about how to set up a new server.

### Server setup

Co-op Cloud itself has near-zero system requirements. You only need to worry about the system resource usage of your apps and the overhead of running containers with the docker runtime (often negligible; if you want to know more, see [this FAQ entry](/intro/faq/#isnt-running-everything-in-containers-inefficient)).

We will deploy a new Nextcloud instance in this guide, so you will only need 1GB of RAM according to [their documentation](https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html). You may also be interested in this [FAQ entry](/intro/faq/#arent-containers-horrible-from-a-security-perspective) if you are curious about security in the context of containers.

Most Co-op Cloud deployments have been run on Debian machines so far. Some experiments have been done on single board computers & servers with low resource capacities.

You need to keep ports `:80` and `:443` free on your server for web proxying to your apps. Typically, you don't need to keep any other ports free as the core web proxy ([Traefik](https://traefik.io)) keeps all app ports internal to its network. Sometimes, however, you need to expose an app port when you need to use a transport which would perform better or more reliably without proxying.

`abra` has support for creating servers (`abra server new`) but that is a more advanced automation feature which is covered in the [handbook](/operators/handbook). For this tutorial, we'll focus on the basics. Assuming you've managed to create a testing VPS with some `$hosting_provider`, you'll need to install Docker, add your user to the Docker group & set up swarm mode:

!!! warning "You may need to log in/out"

    When running `usermod ...`, you may need to (depending on your system) log
    in and out again of your shell session to get the required permissions for
    Docker.

```
# ssh into your server
ssh <server-domain>

# docker install convenience script
wget -O- https://get.docker.com | bash

# add user to docker group
sudo usermod -aG docker $USER

# exit and re-login to load the group
exit
ssh <server-domain>

# back on the server, setup swarm
docker swarm init
docker network create -d overlay proxy

# now you can exit and start using abra
exit
```

??? question "Do you support multiple web proxies?"

    We do not know if it is feasible and convenient to set things up on an existing server with another web proxy which uses ports `:80` & `:443`. We'd happily receive reports and documentation on how to do this if you manage to set it up!

### DNS setup

You'll need two A records, one to point to the server itself and another to support sub-domains for the apps. You can then support an app hosted on your root domain (e.g. `example.com`) and other apps on sub-domains (e.g. `foo.example.com`, `bar.example.com`).

Your entries in your DNS provider setup might look like the following:

    @  1800 IN A 116.203.211.204
    *. 1800 IN A 116.203.211.204

Where `116.203.211.204` can be replaced with the IP address of your server.

??? question "How do I know my DNS is working?"

    You can use a tool like `dig` on the command-line to check if your server has the necessary DNS records set up. Something like `dig +short <domain>` should show the IP address of your server if things are working.

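As a quick illustration of that `dig` check, using the example IP address from the records above (your output should show your own server's address):

```
dig +short foo.example.com
116.203.211.204
```
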
### Install `abra`

Now we can install [`abra`](/abra) locally on your machine and hook it up to
your server. We support a script-based installation method ([script source](https://git.coopcloud.tech/coop-cloud/abra/src/branch/main/scripts/installer/installer)):

```bash
curl https://install.abra.coopcloud.tech | bash
```

The installer will verify the downloaded binary checksum. If you prefer, you can
[manually verify](/abra/install/#manual-verification) the binary, and then
manually place it in one of the directories in your `$PATH` variable. To validate
that everything is working, try running `abra` with the `--help` / `-h` flag to
view the help output:

```bash
abra -h
```

You may need to add the `~/.local/bin/` directory to your `$PATH` variable, in
order to run the executable. Also, run the following line in your terminal so
that you have immediate access to `abra` in the current terminal session.

```bash
export PATH=$PATH:$HOME/.local/bin
```

If you run into issues during installation, [please report a ticket](https://git.coopcloud.tech/coop-cloud/organising/issues/new) :pray: Once you're all set up, we **highly** recommend configuring command-line auto-completion for `abra`. See `abra autocomplete -h` for more on how to do this.

??? question "Can I install `abra` on my server?"

    Yes, this is possible. However, the instructions for this setup are different. For more info see [this handbook entry](/operators/handbook/#running-abra-server-side).

### Add your server

Now you can connect `abra` with your server. You must have a working SSH configuration for your server before you can proceed. That means you can run `ssh <server-domain>` on your command-line and everything Works :tm:. See the [`abra` SSH troubleshooting](/abra/trouble/#ssh-connection-issues) for a working SSH configuration example.

??? warning "Beware of SSH dragons :dragon_face:"

    Under the hood `abra` uses plain ol' `ssh` and aims to make use of your
    existing SSH configurations in `~/.ssh/config` and interfaces with your
    running `ssh-agent` for password protected secret key files.

    Running `server add` with `-d` or `--debug` should help you debug what is
    going on under the hood. `ssh -v ...` should also help. If you're running
    into SSH connection issues with `abra` take a moment to read [this
    troubleshooting entry](/abra/trouble/#ssh-connection-issues).

```bash
ssh <server-domain>  # make sure it works
abra server add <server-domain>
```

It is important to note that `<server-domain>` here is a publicly accessible domain name which points to your server IP address. `abra` does make sure this is the case and this is done to avoid issues with HTTPS certificate rate limiting.

??? warning "Can I use arbitrary server names?"

    Yes, this is possible. You need to pass `-D` to `server add` and ensure
    that your `Host ...` entry in your SSH configuration includes the name.
    So, for example:

        Host example.com example
            ...

    And then:

        abra server add -D example

You will now have a new `~/.abra/` folder on your local file system which stores all the configuration of your Co-op Cloud instance.

By now `abra` should have registered this server as managed. To confirm this run:

```
abra server ls
```

??? question "How do I share my configs in `~/.abra`?"

    It's possible and quite easy, for more see [this handbook
    entry](/operators/handbook/#understanding-app-and-server-configuration).

### Web proxy setup

In order to have your Co-op Cloud deployment serve the public internet, we need to install the core web proxy, [Traefik](https://doc.traefik.io/traefik/).

Traefik is the main entrypoint for all web requests (e.g. like NGINX) and
supports automatic SSL certificate configuration and other quality-of-life
features which make deploying libre apps more enjoyable.

**1. To get started, you'll need to create a new app:**

```bash
abra app new traefik
```

Choose your newly registered server and specify a domain name. By default `abra`
will suggest `<app-name>.server.org` or prompt you with a list of servers.

**2. Configure this new `traefik` app**

You will want to take a look at your generated configuration and tweak the `LETS_ENCRYPT_EMAIL` value. You can do that by running `abra app config`:

```bash
abra app config <traefik-domain>
```

Every app you deploy will have one of these `.env` files, which contains
variables which will be injected into app configurations when deployed. These
files live at a correspondingly named path:

```bash
~/.abra/servers/<domain>/<traefik-domain>.env
```

Variables starting with `#` are optional, others are required. One thing to
consider here is that by default our *Traefik* recipe exposes the metrics
dashboard unauthenticated on the public internet at the URL `<traefik-domain>`
it is deployed to, which is not ideal. You can disable this with:

```
DASHBOARD_ENABLED=false
```

**3. Now it is time to deploy your app:**

```
abra app deploy <traefik-domain>
```

Voila. Abracadabra :magic_wand: your first app is deployed :sparkles:

### Deploy Nextcloud

And now we can deploy apps. Let's create a new Nextcloud app.

```bash
abra app new nextcloud -S
```

The `-S` or `--secrets` flag is used to generate secrets for the app: database connection password, root password and admin password.

??? warning "Beware of password dragons :dragon:"

    Take care: these secrets are only shown once on the terminal so make sure to take note of them! `abra` makes use of the [Docker secrets](/operators/handbook/#managing-secret-data) mechanism to ship these secrets securely to the server and store them as encrypted data. Only the apps themselves have access to the values from here on; they're placed in `/run/secrets` on the container file system.

Then we can deploy Nextcloud:

```bash
abra app deploy <nextcloud-domain>
```

`abra app deploy` will wait nearly a minute for an app to deploy until it times out and shows some helpful commands for how to debug what is going on. If things don't come up in time, try running the following:

```
abra app ps -w <nextcloud-domain>      # status check
abra app logs <nextcloud-domain>       # logs trailing
abra app errors -w <nextcloud-domain>  # error catcher
```

Your new `traefik` instance will detect that a new app is coming up and generate SSL certificates for it. You can see what `traefik` is up to using the same commands above but replacing `<nextcloud-domain>` with the `<traefik-domain>` you chose earlier (`abra app ls` will remind you what domains you chose :grinning:).

### Upgrade Nextcloud

To upgrade an app manually to the newest available version run:

```bash
abra app upgrade <nextcloud-domain>
```

### Automatic Upgrades

`kadabra`, the auto-updater, is still under development; use it with care and don't use it in production environments. To set up the auto-updater, copy the `kadabra` binary to the server and configure a cronjob for regular app upgrades. The following script configures ssmtp for email notifications and sets up a cronjob. The cronjob checks daily for new app versions, notifies you if any kind of update is available and upgrades all apps to the latest patch/minor version.

```bash
apt install ssmtp

cat > /etc/ssmtp/ssmtp.conf << EOF
mailhub=$MAIL_SERVER:587
hostname=$MAIL_DOMAIN
AuthUser=$USER
AuthPass=$PASSWORD
FromLineOverride=yes
UseSTARTTLS=yes
EOF

cat > /etc/cron.d/abra_updater << EOF
MAILTO=admin@example.com
MAILFROM=noreply@example.com

0 6 * * * root ~/kadabra notify --major
30 4 * * * root ~/kadabra upgrade --all
EOF
```

Add `ENABLE_AUTO_UPDATE=true` to the env config (`abra app config <app name>`) to enable the auto-updater for a specific app.

## Finishing up

Hopefully you got something running! Well done! The [operators handbook](/operators/handbook) would probably be the next place to go check out if you're looking for more help, especially on topics of ongoing maintenance.

If not, please [get in touch](/intro/contact) or [raise a ticket](https://git.coopcloud.tech/coop-cloud/organising/issues/new/choose) and we'll try to help out. We want our operator onboarding to be as smooth as possible, so we do appreciate any feedback we receive.