Automatic patch/security updates for deployed apps #236

Closed
opened 2021-11-05 09:08:00 +00:00 by decentral1se · 12 comments
Owner

Describe the problem to be solved

Like cloudron, it'd be nice to sort out having patch/security updates automatically deployed. This could be configurable.

Describe the solution you would like

Potentially an `abra` companion binary living on the server which scans deployed apps, checks for new recipe versions in the catalogue, and rolls them out if it can.
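For a rough idea, the main loop of such a companion binary might look something like this in Go (the catalogue URL here is a placeholder, not a real endpoint):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Placeholder URL; the real location of the published catalogue
// JSON would be filled in here.
const catalogueURL = "https://example.org/recipes.json"

// fetchCatalogue downloads the recipe catalogue as raw JSON,
// keyed by recipe name.
func fetchCatalogue() (map[string]json.RawMessage, error) {
	resp, err := http.Get(catalogueURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var catalogue map[string]json.RawMessage
	if err := json.NewDecoder(resp.Body).Decode(&catalogue); err != nil {
		return nil, err
	}
	return catalogue, nil
}

func main() {
	for {
		catalogue, err := fetchCatalogue()
		if err != nil {
			fmt.Println("catalogue fetch failed:", err)
		} else {
			fmt.Printf("catalogue lists %d recipes\n", len(catalogue))
			// TODO: compare deployed app versions against the catalogue
			// and roll out patch/security upgrades where configured.
		}
		time.Sleep(time.Hour) // polling interval, could be configurable
	}
}
```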

decentral1se added this to the Versioning and deploy stability milestone 2021-11-05 09:08:00 +00:00
decentral1se added the
enhancement
abra
labels 2021-11-05 09:08:00 +00:00
decentral1se added this to the Beta release (software) project 2021-11-05 09:08:01 +00:00
Author
Owner

Some related discussion in #208.

decentral1se modified the project from Beta release (software) to Spicing abra up 2021-12-21 23:25:56 +00:00
decentral1se removed this from the Versioning and deploy stability milestone 2021-12-21 23:26:04 +00:00
Member

Is someone still working on this issue?

I would start by writing a little Python daemon. I am open to a rewrite in Go, but for a first little explorative PoC, I think Python is the fastest way to go.

Member

I already wrote a little script that checks all applications for updates and differentiates between major, minor and patch versions.

I have some design proposals:

For each Docker service, the daemon needs to recognize the recipe. The recipe name could be passed as an env variable or as a label. I chose the label, because it's a little bit easier to access and labels are already used to pass the version number.

Furthermore, there should be a way to activate and deactivate automatic updates per instance. I would recommend setting an env variable like `ENABLE_AUTO_UPDATE=TRUE` to enable updates.
Should applications that are deployed with the `chaos` flag be updated automatically at all? Or should the user decide using an env variable?
I think automatically updating chaos deployments can lead to unexpected situations.
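A minimal sketch of the recognition step, assuming the Go Docker client and the label/env names proposed above (none of these keys are settled yet):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	services, err := cli.ServiceList(context.Background(), types.ServiceListOptions{})
	if err != nil {
		panic(err)
	}

	for _, service := range services {
		labels := service.Spec.Labels
		// "coop-cloud.recipe" is the proposed label key, nothing settled.
		recipe, ok := labels["coop-cloud.recipe"]
		if !ok {
			continue // not a coop-cloud managed service
		}
		// The per-instance opt-in is proposed as an env var
		// (ENABLE_AUTO_UPDATE=TRUE); checking a label instead here
		// purely to keep the sketch short.
		enabled := labels["coop-cloud.auto-update"] == "TRUE"
		fmt.Printf("%s: recipe=%s auto-update=%v\n", service.Spec.Name, recipe, enabled)
	}
}
```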

For the upgrade process itself, I have to dig a little deeper into how `abra` does it.

moritz self-assigned this 2023-01-17 18:18:31 +00:00
Owner

Thanks for the initiative @moritz! 💕

> Is someone still working on this issue?

Not to my knowledge.

> I would start by writing a little Python daemon. I am open to a rewrite in Go, but for a first little explorative PoC, I think Python is the fastest way to go.

Sounds fine to me!

> For each Docker service, the daemon needs to recognize the recipe.

This is an interesting gap, yes; currently we're missing a way to find out what recipe a stack is running from. We have the `RECIPE` variable in each app's `.env` file, but that's not made available to the containers.

I wonder if we can generate the label from that variable (it is available at deploy-time), e.g. `- "coop-cloud.recipe=${RECIPE}"`, to avoid needing to repeat the recipe name?

> Furthermore, there should be a way to activate and deactivate automatic updates per instance. I would recommend setting an env variable like `ENABLE_AUTO_UPDATE=TRUE` to enable updates.

Which do you mean, per app instance, or per server instance?

> Should applications that are deployed with the `chaos` flag be updated automatically at all? Or should the user decide using an env variable?
> I think automatically updating chaos deployments can lead to unexpected situations.

If we're hooking into `abra`'s versioning system then I think I agree with "no"; anything deployed with `--chaos` probably represents un-released changes to a recipe, and an upgrade seems very liable to break things. HOWEVER! I don't think there's currently an easy way of telling server-side if a recipe was deployed using `--chaos` 🤔

> For the upgrade process itself, I have to dig a little deeper into how `abra` does it.

We'll soon (again) have automatic recipe generation, see coop-cloud/recipes-catalogue-json#4. The usual process from there is to run `abra app upgrade`; unsure how to translate this to server-side where there isn't (usually) a copy of `~/.abra`.

Author
Owner

Great thoughts. Some more...

  • `abra` automation to add a `- "coop-cloud.recipe=${RECIPE}"` 👍

  • `abra` automation to label a deployment as "Chaos" 👍

I have some reservations about starting down the road with Python but I don't want to stifle the initiative! A lot of folks want this feature so anything we come up with could become "core", and Bash/Python have historically given us an increased workload due to install / portability problems. If this will be a system-level install, then one work-around with Python is to keep it very lean and only use the Python stdlib? But if it will be something that lives in the swarm, then whatever goes? Please do proceed ofc as you like but I wanted to share these concerns.

I'm kinda not sure how we bridge the "all state on individual workstations of the people running `abra`" and the "thing running on the server-side which has knowledge of that state". It seems like a separate server-side process could read the catalogue (just JSON) for published updates and speak to the Docker daemon, completely separately from the `abra` workflow?

This, for me, would be a strong case for modularising the `abra` code out into separate packages and wiring up a new binary which lives on the server and has a very limited feature set. A lot of the code we need is already living in `abra`. Watch for updates, do updates, notify, some form of remote control? `abra` could speak to that binary to get the latest state, remote control it (turn on/off auto-ness?) or read from the daemon as per usual.

Member

> Which do you mean, per app instance, or per server instance?

I mean per app instance. Per server you could simply deactivate the upgrade daemon.

> If we're hooking into `abra`'s versioning system then I think I agree with "no"; anything deployed with `--chaos` probably represents un-released changes to a recipe, and an upgrade seems very liable to break things. HOWEVER! I don't think there's currently an easy way of telling server-side if a recipe was deployed using `--chaos` 🤔

Maybe deploying with `--chaos` could add a label like `coop-cloud.chaos=TRUE`.

> We'll soon (again) have automatic recipe generation, see coop-cloud/recipes-catalogue-json#4. The usual process from there is to run `abra app upgrade`; unsure how to translate this to server-side where there isn't (usually) a copy of `~/.abra`.

This issue is not available anymore.

I tried to understand the `abra app upgrade` process; I would summarize it as follows (a version-check sketch follows the list):

  1. Lint the recipe for errors
  2. Check if the deployed version is smaller than the current catalogue version
  3. Check the release notes
  4. Checkout the recipe repo at the catalogue version tag
  5. Merge the `abra.sh` env vars with the app `.env` vars
  6. Read compose files from either the `COMPOSE_FILE` env var or `compose.yml`
    • Merge and validate them
  7. Run the deployment of the compose config
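Step 2 is essentially a version comparison; a quick illustration with `golang.org/x/mod/semver` (abra has its own version-comparison code, so this is just to show the idea):

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// needsUpgrade reports whether the catalogue version is newer than the
// deployed one; semver.Compare expects a leading "v".
func needsUpgrade(deployed, catalogue string) bool {
	return semver.Compare("v"+deployed, "v"+catalogue) < 0
}

func main() {
	fmt.Println(needsUpgrade("1.2.0", "1.2.1")) // true: patch update available
	fmt.Println(needsUpgrade("1.3.0", "1.2.1")) // false: already up to date
}
```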

I ask myself whether we need the `.env` files on the server side. The env variables exposed to the container are accessible from Docker.
But I think there are a few variables used only for the deployment process that are not exposed to the container, like `COMPOSE_FILE`, `RECIPE`...
This could be solved by adding a label for each of them.
As proposed in #381, these kinds of variables should have an `ABRA_` prefix to distinguish them from container variables.
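For illustration, pulling the env vars back out of a running container with the Go Docker client looks roughly like this (the container ID is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Placeholder ID; a real daemon would look up containers per stack.
	inspect, err := cli.ContainerInspect(context.Background(), "some-container-id")
	if err != nil {
		panic(err)
	}

	// Entries look like "KEY=value".
	for _, kv := range inspect.Config.Env {
		if parts := strings.SplitN(kv, "=", 2); len(parts) == 2 {
			fmt.Printf("%s = %s\n", parts[0], parts[1])
		}
	}
}
```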

Can someone tell me if the deployment process is simply a `docker compose up` with the merged compose files? Somewhere I read that it should be possible to deploy the recipes even without `abra`, or is there something entangled with `abra`?

> I have some reservations about starting down the road with Python but I don't want to stifle the initiative! A lot of folks want this feature so anything we come up with could become "core", and Bash/Python have historically given us an increased workload due to install / portability problems. If this will be a system-level install, then one work-around with Python is to keep it very lean and only use the Python stdlib? But if it will be something that lives in the swarm, then whatever goes? Please do proceed ofc as you like but I wanted to share these concerns.

The Python script is just for exploring the possible design concept. In the end there should be a binary without any dependencies that can run on any system.

> I'm kinda not sure how we bridge the "all state on individual workstations of the people running `abra`" and the "thing running on the server-side which has knowledge of that state". It seems like a separate server-side process could read the catalogue (just JSON) for published updates and speak to the Docker daemon, completely separately from the `abra` workflow?

I think as long as all the necessary `.env` variables are exposed to the Docker daemon, either directly to the container or using labels, the server-side process could be independent of the `abra` workflow.
I imagine it like this (a sketch of the clone step follows the list):

  1. Compare the version label with the catalogue recipe version
  2. If there is a minor/patch update, clone the specific version of the recipe
  3. Extract the env vars from the running container, using the labels for further information
  4. Merge the compose files according to the env vars and labels
  5. Deploy the merged compose files.
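Step 2 of this flow, cloning a recipe at a specific version tag, could look like this with go-git (repo URL and tag are made-up examples):

```go
package main

import (
	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	// Clone a recipe repository checked out at a specific version tag.
	// Repo URL and tag are illustrative only.
	_, err := git.PlainClone("/tmp/recipe", false, &git.CloneOptions{
		URL:           "https://git.coopcloud.tech/coop-cloud/somerecipe.git",
		ReferenceName: plumbing.NewTagReferenceName("1.2.1"),
		SingleBranch:  true,
		Depth:         1,
	})
	if err != nil {
		panic(err)
	}
}
```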

> This, for me, would be a strong case for modularising the `abra` code out into separate packages and wiring up a new binary which lives on the server and has a very limited feature set. A lot of the code we need is already living in `abra`. Watch for updates, do updates, notify, some form of remote control? `abra` could speak to that binary to get the latest state, remote control it (turn on/off auto-ness?) or read from the daemon as per usual.

I will start to extract all the `abra upgrade` related code into a separate package and try to run it on the server. It doesn't make sense to rewrite all of this process, even in Python.
But I'm not sure what would be the best way to handle this code duplication. Write a library that is shared between `abra` and the upgrade daemon?

Author
Owner

> Somewhere I read that it should be possible to deploy the recipes even without `abra`, or is there something entangled with `abra`?

Yeh you can do more or less the following:

```
set -a                           # export all variables sourced below
source example.com.env           # the app's env vars, used for interpolation
cd ~/.abra/recipes/myrecipe
docker stack deploy -c compose.yml example_com
```

> I think as long as all the necessary `.env` variables are exposed to the Docker daemon, either directly to the container or using labels, the server-side process could be independent of the `abra` workflow.

Nice! The steps look great.

> I will start to extract all the `abra upgrade` related code into a separate package and try to run it on the server. It doesn't make sense to rewrite all of this process, even in Python. But I'm not sure what would be the best way to handle this code duplication. Write a library that is shared between `abra` and the upgrade daemon?

I see, yeh. Well, `abra` more or less uses the Docker library bindings to do the work, so if you can find similar bindings for Python, then you'd probably not have to write much of that code yourself. There may be strange inconsistencies between different libraries, I'm not sure.

For going with Go, yeh, I'd look at the `internal.DeployAction` code; all `deploy`/`rollback`/`upgrade` commands rely on it afair and it's a centralised code path for deployment logic. There are other steps surrounding each command but that's the main part. I'm not sure how easy it will be to untangle it all.

If you want to try and propose a path for stripping it out, I'd be up for reviewing. Or we could do a co-working session one day and try to figure it out together. I think it'd be great to split this piece out into another package, since other tools are gonna use it.

Member

This is my first PoC auto updater:
https://git.coopcloud.tech/moritz/abra/src/branch/update_daemon / https://git.coopcloud.tech/moritz/abra/commit/667264b5bb3869f6a1562f2a4043c3f4da285e1c

I decided to write it as an extra `abra` component, so I can reuse as much of the `abra` stuff as possible.
`make build` now compiles an additional binary called `kadabra`.
This binary could be run by a cronjob.
The upgrade process is based on `abra app upgrade`.

To make a recipe usable for this autoupdate process, the label `coop-cloud.${STACK_NAME}.recipe=${RECIPE}` needs to be added and every env variable needs to be exposed to at least one container.

It is still an open design question how to retrieve all the necessary env variables.
There are some env vars that are typically not passed to the container and only used for variable substitution in the compose file, like `COMPOSE_FILE` and all the `SECRET_VERSION`s.
The simplest way would be to pass all the env vars to at least one container.
I don't see any problem with it because the env vars shouldn't contain any sensitive data. All sensitive data should be stored in secrets.
Is there a Docker Compose syntax to pass all env variables to a service?

Another idea is to pass env vars as labels. One option is to manually specify the non-container variables as labels, but this would be error-prone and produce more work to maintain a recipe.
Another is to automatically pass all env variables inside one label like `coop-cloud.${STACK_NAME}.env='COMPOSE_FILE=...'`.
Instead of passing all env vars, it would be enough to only pass the non-container env vars, prefixed with `ABRA_`.
But this is a new `.env` convention that would require rewriting every `.env` file.
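To make the one-label idea concrete, unpacking such a label could be as simple as this (the `;`-separated format is invented here, not a spec):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvLabel unpacks a label value like
// "COMPOSE_FILE=compose.yml:compose.smtp.yml;RECIPE=wordpress" into a map.
// The ';' separator and overall format are assumptions, not a spec.
func parseEnvLabel(value string) map[string]string {
	env := make(map[string]string)
	for _, entry := range strings.Split(value, ";") {
		if parts := strings.SplitN(entry, "=", 2); len(parts) == 2 {
			env[parts[0]] = parts[1]
		}
	}
	return env
}

func main() {
	label := "COMPOSE_FILE=compose.yml:compose.smtp.yml;RECIPE=wordpress"
	for k, v := range parseEnvLabel(label) {
		fmt.Printf("%s=%s\n", k, v)
	}
}
```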

Member

> I don't see any problem with it because the env vars shouldn't contain any sensitive data. All sensitive data should be stored in secrets.

So far I've been using secrets for auto-generated passwords or keys, but env vars for things like API keys. How does one use secrets for those?

Author
Owner

Great work @moritz!

> To make a recipe usable for this autoupdate process, the label `coop-cloud.${STACK_NAME}.recipe=${RECIPE}` needs to be added and

#391 is ready to go, I'd say. Putting it on the `app` service makes sense to me? I see no issue with appending it to all services if that's not complicated to do. Potentially a simpler mental model... it's just "everywhere"?

> every env variable needs to be exposed to at least one container ... It is still an open design question how to retrieve all the necessary env variables.

Have opened up #393, can discuss on that ticket.

@mayel you'd just use the usual secrets facility in the compose config? Are you saying these keys are things that should be kept private? You only put them in private git repos in the env var files or?

Author
Owner

@moritz I don't have time this week to dive into the code much deeper, but it's looking like a solid start and I like the approach of adding the binary in here for now. Once this PoC starts getting used and we see if it meets the needs of operators, we will know what we need to pull out into separate packages for further modularisation. I think this would still be a good idea because then we can expose more machine-readable output and open up tooling opportunities further beyond Go. Send a pull request when you're happy with it and I'll do my best to review ASAP!

Author
Owner

coop-cloud/abra#268 is merged 👏

We'll be iterating on this, please get testing! Gonna be great.
