feat: Relaxes --force enforcement on deploy and instead error in some cases #459

Closed
p4u1 wants to merge 2 commits from p4u1/abra:deploy-relax-force into main
Member

@decentral1se what do you think?

Will finish when people think this is good

p4u1 added 1 commit 2024-12-30 15:41:14 +00:00
feat: Relaxes --force enforcement on deploy and instead error in some cases
Some checks failed
continuous-integration/drone/pr Build is failing
628a9a4b3f
p4u1 force-pushed deploy-relax-force from 3988730341 to 5a4dac7e76 2024-12-30 15:45:00 +00:00 Compare
decentral1se reviewed 2024-12-30 19:18:04 +00:00
decentral1se left a comment
Owner

This would be a breaking behaviour change in `abra app deploy`: it would re-deploy in some cases and not in others, vs. the current behaviour of refusing to re-deploy without `--force`/`--chaos`.

That's also more internal bookkeeping that we have to do in the code. And if we handle these specific cases, then we'll be expected to handle more? The code will get harder and harder to maintain.

In both cases, 1) re-deploying a chaos deploy and 2) re-deploying a downgrade, you can already see this information on the deploy overview. So, I'm not sure the break in behaviour and extra maintenance load is worth it?
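For context, a minimal sketch of the current guard behaviour being discussed, with hypothetical names (`deployMeta`, `checkDeployGuard`) that are not abra's real code: any re-deploy is refused unless `--force` or `--chaos` is passed, regardless of whether the running deployment is a chaos deploy or a downgrade.

```go
package main

import (
	"errors"
	"fmt"
)

// deployMeta is a hypothetical stand-in for the state abra tracks
// about an app's current deployment.
type deployMeta struct {
	IsDeployed bool
}

// checkDeployGuard mirrors the current behaviour: refuse any
// re-deploy unless --force or --chaos was passed. The PR proposes
// relaxing this in some cases, which is the breaking change at issue.
func checkDeployGuard(meta deployMeta, force, chaos bool) error {
	if meta.IsDeployed && !force && !chaos {
		return errors.New("already deployed, use --force/--chaos to re-deploy")
	}
	return nil
}

func main() {
	// Re-deploying without either flag is rejected.
	fmt.Println(checkDeployGuard(deployMeta{IsDeployed: true}, false, false))
	// With --force it proceeds.
	fmt.Println(checkDeployGuard(deployMeta{IsDeployed: true}, true, false))
}
```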

Instead, why can't we improve the deploy overview?

Here's an example of the current deploy overview for a simple `deploy -f`:

<img width="266" alt="image" src="attachments/ae1e3ee5-8c58-4e36-b3ea-913e4ea0e849">

A `<hash>` (with optional `+U`) is already shown to signal the user that it is a chaos deployment.

<img width="267" alt="image" src="attachments/27cb086f-c3d5-4777-9f5e-968ef4ea2216">

For the downgrade, we could add a visual element to the "TO DEPLOY" (?) to signal the "going down" 🔽. If you currently try to `deploy -f [version]` a downgrade, you do have a visual indication of it:

<img width="263" alt="image" src="attachments/9e2c8476-db55-4b4e-8d95-aa5d8640df7d">
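The "going down" indicator above would need the overview to detect a downgrade by comparing versions. A minimal sketch of such a check, using a naive three-part version comparison (hypothetical `isDowngrade` helper; abra's real code may well use a dedicated semver library instead):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVer splits a version like "2.1.0" (optionally "v"-prefixed)
// into three integer parts; non-numeric parts become 0.
func parseVer(v string) [3]int {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	for i := 0; i < len(parts) && i < 3; i++ {
		if n, err := strconv.Atoi(parts[i]); err == nil {
			out[i] = n
		}
	}
	return out
}

// isDowngrade reports whether deploying `next` over `current` would
// go down, i.e. when the overview could show the 🔽 marker.
func isDowngrade(current, next string) bool {
	c, n := parseVer(current), parseVer(next)
	for i := 0; i < 3; i++ {
		if n[i] != c[i] {
			return n[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("2.1.0", "2.0.3")) // a downgrade
	fmt.Println(isDowngrade("1.0.0", "1.0.1")) // an upgrade
}
```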

In general though, I think if you "go it alone" with an `app deploy -f`/`-C`, we can't offer many guarantees, because there could be just about anything going on with the local / remote / recipe / env / etc. state?

**UPDATE**: I'm iterating on the overview screen: coop-cloud/abra#460

decentral1se closed this pull request 2024-12-31 15:38:02 +00:00
