Add Hooks / Signals #687

Open
opened 2026-03-05 10:22:52 +00:00 by ineiti · 2 comments

I propose to add hooks (and signals?) to the installation / upgrade procedure, so that recipes can communicate while they're installed, upgraded, and removed. The first use-case is to improve handling of non-standard setups with traefik, as also discussed in #686. But I think a lot of other use-cases could be covered as well.

One thing I'm not sure about is whether it should be hooks or a more generic signalling system - here is my understanding of the difference (feel free to add your own definition :)

  • hooks are emitted by the system itself whenever something happens
  • signals can be emitted by the recipes themselves if they think something should happen

As a first step, we could define some default signals for the system:

  • network.http(FQDN, container:port) - requests forwarding incoming HTTP requests to the container:port
  • network.https(FQDN, container:port) - requests forwarding incoming HTTPS requests to the container:port - setup of TLS has to be done by whoever listens to this signal
  • network.port_tcp(port, container:port) - requests forwarding of TCP requests on a port to a container
  • network.port_udp(port, container:port) - requests forwarding of UDP requests on a port to a container
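To make the signal idea concrete, here is a minimal sketch of what a recipe-side listener could look like. Everything here is hypothetical (the handler name, the routes file, the invocation convention) - nothing like this exists in abra today; the idea is just that a proxy recipe ships a handler named after the signal and the system calls it with the signal's arguments:

```shell
# Hypothetical handler for a "network.https" signal (all names are
# illustrative). A proxy recipe could ship this as a script named after
# the signal; it is written as a function here so the idea is easy to try.
ROUTES_FILE="${ROUTES_FILE:-/tmp/proxy-routes}"

network_https() {
    fqdn="$1"       # e.g. "cloud.example.org"
    target="$2"     # e.g. "app_web:8080"
    # Record the requested route; the proxy's config generator would
    # later consume this table and reload the proxy with TLS enabled.
    printf '%s %s\n' "$fqdn" "$target" >> "$ROUTES_FILE"
}
```

Whichever proxy recipe is installed (traefik, Caddy, nginx, ...) would provide its own handler, which is exactly what decouples the app recipes from the TLS termination choice.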

For the hooks, this could be something like the following, giving the recipes more possibilities to interact during their lifecycle. Each hook has a default behaviour, which can be overridden by a shell script named after the hook.

  • install.before
  • install.after
  • update.before
  • update.after
  • remove.before
  • remove.after
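A dispatcher for these hooks could be very small. This is a sketch, not how abra works today - the "default behaviour" placeholder stands in for whatever the system normally does at that lifecycle step:

```shell
# Hypothetical hook dispatcher: before running the default action for a
# lifecycle step, look for a recipe-provided script named after the hook
# (e.g. "install.before") and run it instead of the default.
run_hook() {
    hook="$1"          # e.g. "install.before"
    recipe_dir="$2"    # recipe checkout that may contain hook scripts
    if [ -x "$recipe_dir/$hook" ]; then
        "$recipe_dir/$hook"    # recipe overrides the default behaviour
    else
        echo "no $hook script, running default behaviour"
    fi
}
```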

Recipes can then:

  • create a shell script with the name of the signal or the hook, and run their commands as required
  • emit signals to indicate what they're up to, and all recipes with the corresponding shell scripts will be informed (what happens if you replace traefik with nginx and nginx should be informed about previous signals? Cache them and replay them?)
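The cache-and-replay question in the second bullet could be answered by logging every emitted signal, so a newly installed listener (say, nginx replacing traefik) can be brought up to date from the history. A purely illustrative sketch:

```shell
# Sketch of signal caching and replay (all names hypothetical): every
# emitted signal is appended to a log; a newly installed listener is
# brought up to date by replaying the log through its handler.
SIGNAL_LOG="${SIGNAL_LOG:-/tmp/signal-log}"

emit_signal() {
    # Record the signal (delivery to current listeners would happen here too).
    printf '%s\n' "$*" >> "$SIGNAL_LOG"
}

replay_signals() {
    # Called when a new listener is installed: feed it the whole history.
    handler="$1"
    while IFS= read -r line; do
        # Word-split the logged line back into the handler's arguments.
        # shellcheck disable=SC2086
        "$handler" $line
    done < "$SIGNAL_LOG"
}
```

One open design question this sketch sidesteps: signals that were later superseded (e.g. a route registered and then removed) would need compaction before replay.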

This would allow using whatever TLS termination tool somebody likes, or supporting more complex setups (I run my co-op server in a VM, which sits behind the TLS-terminating traefik).

Owner

Thanks for opening @ineiti 👏 This issue here is heavily related: #682 /cc @notplants

I *think* I understand this proposal, which I fully support and would like to see progress on. We have some technical limitations based on our previous choices. Namely, we are currently held hostage by how Docker Swarm works.

`abra` does what the Docker CLI does: it merges the `compose.*yml` files from the recipe config and deploys them as a bunch of services which share a namespace (a Docker Stack). "The recipe" then becomes "An App" in our terminology.

So,

> recipes can communicate while they're installed, upgraded, and removed

This actually means that the services/stack emit signals from the Docker runtime. There is an API for this: https://docs.docker.com/reference/cli/docker/system/events/ However, it's quite a confusing API because of, again, the way Docker Swarm works with rolling updates. It's hard to get insight into whether a deployment succeeded, failed, was rolled back, etc. (explained more in #682)
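For a feel of what that stream looks like, the events can be watched from the CLI with the documented `--filter` and `--format` flags; the `DOCKER` indirection below is only there to make the call easy to stub or point at another context:

```shell
# Watch Docker system events for Swarm services, printing one line per
# event. --filter and --format are documented `docker events` flags.
DOCKER="${DOCKER:-docker}"

watch_service_events() {
    "$DOCKER" events \
        --filter 'type=service' \
        --format '{{.Time}} {{.Type}} {{.Action}} {{.Actor.Attributes.name}}'
}
```

The confusing part is exactly that these per-service events don't line up cleanly with "the app's deployment succeeded/failed" as a whole.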

> network.http(...)

I believe this would require some API interaction between `abra` and whatever `$proxy` you are using. This comes back to #686 and the question of whether we want to support multiple proxies or a single one. With single-proxy support (for me, ideally Caddy), we could ask Caddy whether the deployment is doing specific things from a traffic perspective.

A concrete example of this is how uncloud.run (a similar project) manages their proxy: https://uncloud.run/docs/concepts/ingress/managing-caddy
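If the single proxy were Caddy, a `network.https` handler could talk to Caddy's admin API (which listens on localhost:2019 by default; a POST to a config array path appends an element). This sketch only builds and prints the call rather than executing it, and the `srv0` server name is an assumption that depends on the actual Caddy config:

```shell
# Sketch: build a minimal Caddy reverse-proxy route for an FQDN/target
# pair and print the admin-API call that would append it. The config
# path (".../servers/srv0/routes") is assumed, not universal.
caddy_route_json() {
    fqdn="$1" target="$2"
    printf '{"match":[{"host":["%s"]}],"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"%s"}]}]}' \
        "$fqdn" "$target"
}

caddy_add_route_cmd() {
    # Dry run: print the curl invocation instead of performing it.
    echo "curl -X POST localhost:2019/config/apps/http/servers/srv0/routes" \
         "-H 'Content-Type: application/json'" \
         "-d '$(caddy_route_json "$1" "$2")'"
}
```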

> install.before
> install.after
> update.before
> update.after
> remove.before
> remove.after

I think you could figure out these moments, but it would require introspecting the Docker system events API and trying to pull out an event which matches each moment. I have struggled to figure this out, but hopefully someone who is better at programming could do it! The current implementation of `abra app deploy` does some watching of the state and could be a good entrypoint for other Go hackers: https://git.coopcloud.tech/toolshed/abra/src/branch/main/pkg/ui/deploy.go Lining up scripts to run at these moments was also discussed in #682.
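One possible shortcut for the `update.after` moment, instead of correlating raw events: poll the service's `UpdateStatus` via `docker service inspect` until the rolling update settles. A sketch under the caveat that `UpdateStatus` only exists once a service has actually been updated (the template errors out otherwise), with `DOCKER` overridable for testing:

```shell
# Sketch of an "update.after" detector: poll a Swarm service's
# UpdateStatus.State until the rolling update reaches a terminal state.
DOCKER="${DOCKER:-docker}"

wait_for_update() {
    service="$1"
    while :; do
        state="$("$DOCKER" service inspect \
            --format '{{.UpdateStatus.State}}' "$service")"
        case "$state" in
            completed|rollback_completed|paused)
                echo "$state"
                return 0
                ;;
        esac
        sleep 2
    done
}
```

This still doesn't cover `install.before`/`install.after` cleanly, which is where the events API (or hooks inside abra itself) would come in.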


To conclude, I hope this is useful! It's one of the major issues we're running up against, and we all want solutions. It's just a question of how to do it without breaking people's deployments and while maintaining backwards compatibility. I sadly have little time to investigate this because of `$life` atm, but I can support. Anything you want to experiment with, prototype, propose, show & tell, is welcome!

decentral1se added the
design
help wanted
question
labels 2026-03-05 11:18:55 +00:00
Author

Hmm - so you would have to translate the Docker `labels` in a way that they can be passed to `abra.sh`? So you could call every `abra.sh` from all other services on this server with a pre-defined command, e.g., `label([[key0, value0], [key1, value1]])`, even though that's a bit clumsy with bash...

And it would need some additional information like the server, service name, or such. Though this could also be put in the labels.

For the different services, you would have to define default labels, for example:

  • `abra.proxy.server.service.url` to register a `url` for the `service` on the `server`

All labels starting with `abra` are defined by Co-op Cloud; other labels can be defined by the community.
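The "call every `abra.sh` with a `label(...)` command" idea could look roughly like this - a sketch with hypothetical names, where each recipe opts in by defining a `label` function in its `abra.sh`:

```shell
# Sketch of the label-broadcast idea (all names hypothetical): source a
# recipe's abra.sh and, if it defines a label() handler, call it once per
# key=value pair. A subshell keeps one recipe's definitions from leaking.
broadcast_labels() {
    abra_sh="$1"; shift    # path to a recipe's abra.sh; rest: key=value pairs
    (
        . "$abra_sh"
        command -v label >/dev/null || exit 0   # recipe doesn't listen
        for kv in "$@"; do
            label "${kv%%=*}" "${kv#*=}"
        done
    )
}
```

Passing pairs one at a time avoids the clumsiness of encoding a nested list like `[[key0, value0], ...]` in bash.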

Reference: toolshed/organising#687