Automatic patch/security updates for deployed apps #236
Describe the problem to be solved
Like Cloudron, it'd be nice to sort out having patch/security updates deployed automatically. This could be configurable.
Describe the solution you would like
Potentially an `abra` companion binary living on the server which scans deployed apps, checks for new recipe versions in the catalogue and rolls them out if it can. Some related discussion in #208.
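Sketched very roughly, such a daemon might boil down to a poll loop like the following. This is a minimal sketch for illustration only: the catalogue endpoint and JSON shape here are guesses, not a settled design.

```go
// Hypothetical sketch of a server-side auto-update daemon loop.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// Catalogue is an assumed shape: recipe name -> published versions.
type Catalogue map[string]struct {
	Versions []string `json:"versions"`
}

func fetchCatalogue(url string) (Catalogue, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var cat Catalogue
	return cat, json.NewDecoder(resp.Body).Decode(&cat)
}

func main() {
	for {
		// the endpoint is an assumption for illustration
		cat, err := fetchCatalogue("https://recipes.coopcloud.tech/recipes.json")
		if err != nil {
			log.Println("catalogue fetch failed:", err)
		} else {
			// TODO: list deployed apps via the Docker daemon, compare
			// their versions against cat, roll out patch/security upgrades
			log.Printf("catalogue has %d recipes", len(cat))
		}
		time.Sleep(time.Hour)
	}
}
```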
Is someone still working on this issue?
I would start by writing a little Python daemon. I am open to a rewrite in Go, but for a first little explorative PoC, I think Python is the fastest way to go.
I already wrote a little script that checks all applications for updates and differentiates between major, minor and patch versions.
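For illustration, distinguishing major/minor/patch boils down to a component-wise comparison of the two version tags. A minimal sketch of the idea in Go (the actual script is Python; the `+upstream` suffix handling is my guess at the recipe tag convention):

```go
// Classify the bump between two version tags like "1.2.3" or "1.2.3+4.5.6".
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits an "X.Y.Z" tag into its numeric components.
func parse(v string) ([3]int, error) {
	var out [3]int
	// recipe tags seem to carry the upstream version after "+"; drop it
	v, _, _ = strings.Cut(strings.TrimPrefix(v, "v"), "+")
	parts := strings.SplitN(v, ".", 3)
	if len(parts) != 3 {
		return out, fmt.Errorf("not an X.Y.Z version: %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}

// bump reports whether newV is a major, minor or patch change over oldV.
func bump(oldV, newV string) (string, error) {
	o, err := parse(oldV)
	if err != nil {
		return "", err
	}
	n, err := parse(newV)
	if err != nil {
		return "", err
	}
	switch {
	case n[0] != o[0]:
		return "major", nil
	case n[1] != o[1]:
		return "minor", nil
	case n[2] != o[2]:
		return "patch", nil
	}
	return "none", nil
}

func main() {
	kind, _ := bump("1.2.3", "1.2.4")
	fmt.Println(kind) // patch
}
```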
I have some design proposals:
For each Docker service the daemon needs to recognize the recipe. The recipe name could be passed as an env variable or as a label. I chose the label, because it's a little easier to access and labels are already used to pass the version number.
Further, there should be a way to activate and deactivate automatic updates per instance. I would recommend setting an env variable like `ENABLE_AUTO_UPDATE=TRUE` to enable updates.

Should applications that are deployed with the `--chaos` flag be updated automatically at all? Or should the user decide using an env variable? I think automatically updating chaos deployments can lead to unexpected situations.
For the upgrade process itself I have to dig a little deeper into how abra does it.
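For reference, reading such a label and env var back on the server side is not much code with the Docker Go bindings. A sketch, where both the `coop-cloud.recipe` label and `ENABLE_AUTO_UPDATE` are the names proposed above, not anything that exists yet:

```go
// List swarm services and read the proposed recipe label and opt-in env var.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	services, err := cli.ServiceList(context.Background(), types.ServiceListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range services {
		// "coop-cloud.recipe" is the proposed label, not an existing convention
		recipe := s.Spec.Labels["coop-cloud.recipe"]
		autoUpdate := false
		if cs := s.Spec.TaskTemplate.ContainerSpec; cs != nil {
			for _, e := range cs.Env {
				if e == "ENABLE_AUTO_UPDATE=TRUE" { // proposed opt-in env var
					autoUpdate = true
				}
			}
		}
		fmt.Printf("%s: recipe=%q auto-update=%v\n", s.Spec.Name, recipe, autoUpdate)
	}
}
```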
Thanks for the initiative @moritz! 💕
Not to my knowledge.
Sounds fine to me!
This is an interesting gap, yes, currently we're missing a way to find out what recipe a stack is running from. We have the `RECIPE` variable in each app's `.env` file, but that's not made available to the containers. I wonder if we can generate the label from that variable (it is available at deploy-time), e.g. `- "coop-cloud.recipe=${RECIPE}"`, to avoid needing to repeat the recipe name?

Which do you mean, per app instance, or per server instance?
If we're hooking into abra's versioning system then I think I agree with "no"; anything deployed with `--chaos` probably represents un-released changes to a recipe, and an upgrade seems very liable to break things. HOWEVER! I don't think there's currently an easy way of telling server-side if a recipe was deployed using `--chaos` 🤔

We'll soon (again) have automatic recipe generation, see coop-cloud/recipes-catalogue-json#4. The usual process from there is to run `abra app upgrade`; unsure how to translate this to server-side where there isn't (usually) a copy of `~/.abra`.

Great thoughts. Some more...
- `abra` automation to add a `- "coop-cloud.recipe=${RECIPE}"` label 👍
- `abra` automation to label a deployment as "Chaos" 👍

I have some reservations about starting down the road with Python but I don't want to stifle the initiative! A lot of folks want this feature, so anything we come up with could become "core", and Bash/Python have historically given us an increased workload due to install/portability problems. If this will be a system-level install, then one work-around with Python is to keep it very lean and only use the Python stdlib? But if it will be something that lives in the swarm, then whatever goes? Please do proceed ofc as you like but I wanted to share these concerns.
I'm kinda not sure how we bridge the "all state on individual workstations of the people running `abra`" and "thing running on the server-side which has knowledge of that state". It seems like a separate server-side process could read the catalogue (just JSON) for published updates and speak to the Docker daemon, completely separately from the `abra` workflow?

This, for me, would be a strong case for modularising the `abra` code out into separate packages and wiring up a new binary which lives on the server and has a very limited feature set. A lot of the code we need is already living in `abra`. Watch for updates, do updates, notify, some form of remote control? `abra` could speak to that binary to get the latest state, remote control it (turn on/off auto-ness?) or read from the daemon as per usual.

I mean per app instance. Per server you could simply deactivate the upgrade daemon.
Maybe deploying with `--chaos` could add a label like `coop-cloud.chaos=TRUE`.
I tried to understand the `abra app upgrade` process; I would summarize it as follows:

- determine the compose files from the `COMPOSE_FILE` env var or `compose.yml`

I ask myself if we need the `.env` files on the server side. The env variables exposed to the container are accessible from Docker.
But I think there are a few variables used only for the deployment process that are not exposed to the container, like `COMPOSE_FILE`, `RECIPE`... This could be solved by adding a label for each of them.
As proposed in #381, these kinds of variables should have an `ABRA_` prefix to distinguish them from container variables.

Can someone tell me if the deployment process is simply a `docker compose up` with the merged compose files? Somewhere I read that it should be possible to deploy the recipes even without abra, or is there something entangled with abra?

The Python script is just for exploring the possible design concept. In the end there should be a binary without any dependency, that can run on any system.
I think as long as all the necessary `.env` variables are exposed to the Docker daemon, either directly to the container or using labels, the server-side process could be independent of the `abra` workflow.

I imagine it like this: I will start to extract all the `abra upgrade` related code into a separate package and try to run it on the server. It doesn't make sense to rewrite all of this process, even in Python. But I'm not sure what would be the best way to handle this code duplication? Write a library that is shared between abra and the upgrade-daemon?
Yeh you can do more or less the following:
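A minimal sketch of a by-hand deploy along these lines, assuming the env substitution has to happen before handing the merged compose file to `docker stack deploy` (which, as far as I know, does no variable substitution itself); the file names here are placeholders:

```go
// Rough sketch: deploy a recipe without abra by substituting the app's
// .env vars into the compose file, then calling `docker stack deploy`.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// load KEY=VALUE pairs from the app's .env file (hypothetical name)
	env := map[string]string{}
	raw, err := os.ReadFile("myapp.env")
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(raw), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}

	// substitute ${VAR} references in the (already merged) compose file
	compose, err := os.ReadFile("compose.yml")
	if err != nil {
		log.Fatal(err)
	}
	resolved := os.Expand(string(compose), func(k string) string { return env[k] })
	if err := os.WriteFile("compose.resolved.yml", []byte(resolved), 0o600); err != nil {
		log.Fatal(err)
	}

	// hand the resolved file to the swarm
	cmd := exec.Command("docker", "stack", "deploy", "-c", "compose.resolved.yml", "myapp")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```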
Nice! The steps look great.
I see, yeh. Well, `abra` more or less uses the Docker lib bindings to do the work, so if you can find similar bindings for Python, then you'd probably not have to write much of that code yourself. There may be strange inconsistencies with different libraries, I'm not sure.

For going with Go, yeh, I'd look at the `internal.DeployAction` code; all the `deploy`/`rollback`/`upgrade` commands rely on it afair and it's a centralised code path for deployment logic. There are other steps surrounding each command but that's the main part. I'm not sure how easy it will be untangling it all.

If you want to try and propose a path for stripping it out, I'd be up for reviewing. Or we could do a co-working session one day and try to figure it out together. I think it'd be great to split this piece out into another package; other tools are gonna use it.
This is my first PoC auto updater: https://git.coopcloud.tech/moritz/abra/src/branch/update_daemon (commit 667264b5bb)

I decided to write it as an extra abra component, so I can reuse as much of the abra stuff as possible. `make build` now compiles an additional binary called `kadabra`. This binary could be used by a cronjob.
The upgrade process is written based on `abra app upgrade`. To make a recipe usable for this autoupdate process, the label `coop-cloud.${STACK_NAME}.recipe=${RECIPE}` needs to be added and every env variable needs to be exposed to at least one container.

It is still an open design question: how to retrieve all necessary env variables?
There are some env vars that are typically not passed to the container and only used for variable substitution in the compose file, like `COMPOSE_FILE` and all the `SECRET_VERSION`s. The simplest way would be to pass all the env vars to at least one container. I don't see any problem with it because the env vars shouldn't contain any sensitive data; all sensitive data should be stored in secrets. Is there a docker compose syntax to pass all env variables to a service?
Another idea is to pass env vars as labels. Either by manually specifying the non-container variables as labels, but this would be error-prone and produce more work to maintain a recipe. Or automatically, by simply passing all env variables inside one label like `coop-cloud.${STACK_NAME}.env='COMPOSE_FILE=...'`.

Instead of passing all env vars it would be enough to only pass non-container env vars that are prefixed with `ABRA_`. But this is a new `.env` convention that requires rewriting every `.env` file.
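Packing and unpacking such a label is simple enough; a sketch, assuming space-separated `KEY=VALUE` pairs as the (made-up) packing format:

```go
// Unpack deploy-time env vars packed into a single service label, e.g.
// coop-cloud.myapp.env="COMPOSE_FILE=compose.yml:compose.smtp.yml RECIPE=wordpress".
// The packing format (space-separated KEY=VALUE pairs) is an assumption.
package main

import (
	"fmt"
	"strings"
)

func unpackEnvLabel(label string) map[string]string {
	env := map[string]string{}
	for _, kv := range strings.Fields(label) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	env := unpackEnvLabel("COMPOSE_FILE=compose.yml:compose.smtp.yml RECIPE=wordpress")
	fmt.Println(env["COMPOSE_FILE"], env["RECIPE"])
}
```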
So far I've been using secrets for auto-generated passwords or keys, but env vars for things like API keys. How does one use secrets for those?
Great work @moritz!
#391 is ready to go I'd say. Putting it on the
app
service makes sense to me? I see no issue with appending to all services if not complicated to do. Potentially a simpler mental model... it's just "everywhere"?Have opened up #393, can discuss on that ticket.
@mayel you'd just use the usual secrets facility in the compose config? Are you saying these keys are things that should be kept private? You only put them in private git repos in the env var files or?
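For context, a swarm secret surfaces as a file inside the container, so the app reads it from `/run/secrets/<name>` instead of an env var; e.g. (the secret name here is hypothetical):

```go
// Read an API key provided as a Docker swarm secret.
// Secrets are mounted at /run/secrets/<name> inside the container.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/run/secrets/api_key") // "api_key" is a hypothetical secret name
	if err != nil {
		log.Fatal(err)
	}
	apiKey := strings.TrimSpace(string(b))
	fmt.Println("loaded API key of length", len(apiKey))
}
```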
@moritz don't have time this week to dive into the code much deeper, but it's looking like a solid start and I like the approach of adding in the binary here for now. Once this PoC starts getting used and we see if it meets the needs of operators, we will know what we need to pull out into separate packages for further modularisation. I think this would still be a good idea because then we can expose more machine-readable output and open up tooling opportunities further beyond Go. Send a pull request when you're happy with it and I'll do my best to review ASAP!
coop-cloud/abra#268 is merged 👏
We'll be iterating on this, please get testing! Gonna be great.