Clear versioning convention #578

Open
opened 2024-02-27 09:06:52 +00:00 by moritz · 9 comments
Member

I would like a clear convention for versioning recipes, especially the difference between the `minor` and `patch` versions, because I see recipe maintainers handling this differently.
This is the current description, and it's quite general:

    major: new features/bug fixes, backwards incompatible (e.g. 1.0.0 -> 2.0.0).
           the upgrade won't work without some preparation work and others need
           to take care when performing it. "it could go wrong".

    minor: new features/bug fixes, backwards compatible (e.g. 0.1.0 -> 0.2.0).
           the upgrade should Just Work and there are no breaking changes in
           the app and the recipe config. "it should go fine".

    patch: bug fixes, backwards compatible (e.g. 0.0.1 -> 0.0.2). this upgrade
           should also Just Work and is mostly to do with minor bug fixes
           and/or security patches. "nothing to worry about".

Related docs: https://docs.coopcloud.tech/maintainers/handbook/#how-are-recipes-versioned

I would suggest:

  • major: This update could break things because of a major image update or some invasive recipe changes (e.g. changing secrets or envs).
  • minor: At least one of the container images is updated, and there are no breaking changes.
  • patch: The container images are untouched but the recipe is changed, without breaking anything (e.g. adding some features, optional env variables, `abra.sh` commands, ...).
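The proposed rules amount to a simple decision procedure. A minimal sketch, assuming just two inputs (the `bump_type` helper is hypothetical, only here to make the convention explicit):

```python
def bump_type(breaking: bool, images_changed: bool) -> str:
    """Pick the version bump under the proposed convention.

    breaking:       major image update or invasive recipe changes
                    (e.g. changed secrets or env variables)
    images_changed: at least one container image was updated
    """
    if breaking:
        return "major"
    if images_changed:
        return "minor"
    # Recipe-only change: images untouched, nothing breaks.
    return "patch"

# A recipe tweak that leaves the images alone is a patch release.
print(bump_type(breaking=False, images_changed=False))  # patch
```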

For me the clear differentiation between `minor` and `patch` is quite important, because it allows patching/changing a recipe without updating the image versions. So it's possible to create an in-between version, like I've done for Nextcloud:
https://git.coopcloud.tech/coop-cloud/nextcloud/releases/tag/5.0.3+27.0.1-fpm
I added the same feature to the `5.0.2+27.0.1-fpm` recipe version and the `6.0.1+28.0.2-fpm` version. So we can use the `5.0.3+27.0.1-fpm` version tag to deploy the old Nextcloud version, be sure we don't break anything since the image version stays the same, and still apply the recipe patch. This wouldn't work if the `patch` version were increased along with an image update.


I like those more specific definitions, and maybe we could come up with a "reverse" guide based on scenarios encountered, e.g.

  • needs manual database migration -> major
  • new required env variable -> major?
  • changed a container image -> minor
  • packaged software upgraded -> minor
  • fixed passing of an env variable -> minor? patch?

... maybe some other scenarios too?

Which of course could also just be scenarios formatted under `major`, `minor`, `patch` headings... it's just that when I'm doing something myself, I encounter the scenario first.

Owner

@moritz sounds good, I like it! Going into specifics for typical scenarios, as @nicksellen mentioned, would also be super useful. Having a clear docs reference that we can help each other learn from and stick to will be great.

decentral1se added the documentation label 2024-02-27 12:56:37 +00:00
Owner

@moritz super appreciate the thoughts, thanks so much for writing this up. More guidance on versioning would definitely be helpful.

I think I don't understand the example so I'm struggling to evaluate the plan. Sorry for being slow!

> For me the clear differentiation between `minor` and `patch` is quite important, because it allows to patch/change a recipe without updating the image versions
>
> I added the same feature to the `5.0.2+27.0.1-fpm` recipe version and the `6.0.1+28.0.2-fpm` version. So we can use the `5.0.3+27.0.1-fpm` version tag to deploy the old nextcloud version, be sure we don't break anything with the same image version, while still applying the recipe patch. This won't work if the `patch` version would be increased with an image update.

So you added a feature to `5.0.2+27.0.1-fpm` and bumped the version to `5.0.3+27.0.1`, and added the same feature to `6.0.1+28.0.2-fpm` and bumped the version to... what? I don't see any `6.y.z` versions higher than `6.0.1` 🤔

Our current approach:

  1. recipe with `image: foobar:2.3.4`, initial recipe release `1.0.0+2.3.4`.
  2. major version upgrade to `image: foobar:3.0.0`, second recipe release `2.0.0+3.0.0`.
  3. add a recipe feature to both major versions of `foobar`, two new recipe releases `1.1.0+2.3.4` and `2.1.0+3.0.0`.

Then if there are subsequent patch releases of the image, `foobar:2.3.5` and `foobar:3.0.1`, new recipe releases `1.1.1+2.3.5` and `2.1.1+3.0.1`. And another minor image release, recipe versions `1.2.0+2.4.0` and `2.2.0+3.1.0`. Then a patch to the recipe, `1.2.1+2.4.0` and `2.2.1+3.1.0`.
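For reference, tags like these combine a recipe version and the packaged image version, joined by `+` (which semver treats as "build metadata"). A small sketch of how such a tag splits apart; the `parse_recipe_tag` helper is illustrative, not part of abra:

```python
def parse_recipe_tag(tag: str):
    """Split a recipe tag into (recipe version tuple, image version).

    e.g. "1.1.0+2.3.4" -> ((1, 1, 0), "2.3.4")
    """
    recipe, _, image = tag.partition("+")
    return tuple(int(part) for part in recipe.split(".")), image

print(parse_recipe_tag("2.1.1+3.0.1"))       # ((2, 1, 1), '3.0.1')
print(parse_recipe_tag("5.0.3+27.0.1-fpm"))  # ((5, 0, 3), '27.0.1-fpm')
```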

What from the Nextcloud example is not possible here?

I'm not tied to semantic versioning, personally, but moving away from it by redefining "minor recipe update" seems like a cognitive cost and I don't currently understand the benefit. Any help fixing my misunderstanding would be very welcome! 🙏

Author
Member

> So you added a feature to `5.0.2+27.0.1-fpm` and bumped the version to `5.0.3+27.0.1`, and added the same feature to `6.0.1+28.0.2-fpm` and bumped the version to... what? I don't see any `6.y.z` versions higher than `6.0.1` 🤔

`6.0.1+28.0.2-fpm` should be bumped to `6.0.2+28.0.2-fpm`. In the Nextcloud case I added the feature but haven't released the version yet, because I only release a version after testing it carefully on our systems, and I haven't tested Nextcloud 28 yet.

> Then if there are subsequent patch releases of the image, `foobar:2.3.5` and `foobar:3.0.1`, new recipe releases `1.1.1+2.3.5` and `2.1.1+3.0.1`. And another minor image release, recipe versions `1.2.0+2.4.0` and `2.2.0+3.1.0`. Then a patch to the recipe, `1.2.1+2.4.0` and `2.2.1+3.1.0`.
>
> What from the Nextcloud example is not possible here?

The problem if you released `1.1.1+2.3.5` and `2.1.1+3.0.1` is that you can't change the recipe for the images `foobar:2.3.4` and `foobar:3.0.0` anymore without also changing the image version. In-between versions (`1.1.x+2.3.4` and `2.1.x+3.0.0`) are no longer possible. Having both `1.1.1+2.3.5` and `1.1.1+2.3.4`, or a `1.1.2+2.3.4`, could be quite confusing and could lead to unintended downgrades.
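This ambiguity is baked into semver itself: the spec (Semantic Versioning 2.0.0, item 10) says build metadata, the part after `+`, must be ignored when determining precedence. So any tooling that orders releases by semver precedence alone cannot distinguish the two tags described above. A sketch:

```python
def precedence(tag: str):
    """Semver precedence key: build metadata after '+' is ignored."""
    return tuple(int(part) for part in tag.split("+")[0].split("."))

# Same recipe version, different image versions: equal precedence,
# so sorting cannot tell which release is "newer".
print(precedence("1.1.1+2.3.5") == precedence("1.1.1+2.3.4"))  # True
```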

Owner

Thanks for the reply @moritz, sorry for the long delay!

So if I understand you correctly, you're saying that if we have recipe versions:

  • `1.1.0+2.3.4`
  • `1.1.1+2.3.5`
  • `2.1.0+3.0.0`
  • `2.1.1+3.0.1`

That you would want to be able to release a change to the `2.3.4` / `3.0.0` version of the recipes, separately to the upgrade to `2.3.5` / `3.0.1`.

I agree with you about this:

> Having a version `1.1.1+2.3.5` and `1.1.1+2.3.4` or `1.1.2+2.3.4` could be quite confusing and could lead to unintended downgrades.

How often is the difference between `2.3.4` and `2.3.5` significant enough to justify backporting a recipe fix to a specific patch release like that? I agree that backporting to a separate major version (`2.x.y` and `3.x.y`) seems important, but in your example, what's wrong with releasing the change just to `2.3.5` and `3.0.1`? So the newest releases would be:

  • `1.1.2+2.3.5`
  • `2.1.2+3.0.1`
Author
Member

> So if I understand you correctly, you're saying that if we have recipe versions:
>
> • `1.1.0+2.3.4`
> • `1.1.1+2.3.5`
> • `2.1.0+3.0.0`
> • `2.1.1+3.0.1`
>
> That you would want to be able to release a change to the `2.3.4` / `3.0.0` version of the recipes, separately to the upgrade to `2.3.5` / `3.0.1`.

Yes that's the point.

> How often is the difference between `2.3.4` and `2.3.5` significant enough to justify backporting a recipe fix to a specific patch release like that? I agree that backporting to a separate major version (`2.x.y` and `3.x.y`) seems important, but in your example, what's wrong with releasing the change just to `2.3.5` and `3.0.1`?

We try to keep our systems as deterministic as possible, to be able to automate all the deployment and upgrading processes. Therefore we have a release cycle of two months, in which we keep the same image version for all the deployed apps. But sometimes it's important to fix the recipe without touching the image at all. Sometimes we want to patch the recipe for only one or two instances, for example adding an FTP feature to one WordPress instance, without having to deploy a different WordPress version. Keeping the same version across all instances is quite important to avoid batch upgrades breaking single instances. I agree that a minor image upgrade has only a small probability of breaking anything, but we try to avoid even this small probability. Furthermore, every software maintainer may handle `minor` changes differently.

Author
Member

@3wordchant your release https://git.coopcloud.tech/coop-cloud/authentik/releases/tag/6.3.1+2024.6.2 makes it confusing for us to patch our stable release. At the moment, until our next update cycle, we deploy Authentik with version https://git.coopcloud.tech/coop-cloud/authentik/releases/tag/6.3.0+2024.6.1. Now we want to add a recipe patch to some systems, but we don't want to update the image. We are now releasing a version `6.3.1+2024.6.1` containing the recipe patch, but this creates a duplicate `6.3.1` version, as it's already used by `6.3.1+2024.6.2`.

Owner

> But sometimes it's important to fix the recipe, without touching the image at all.

I feel like adding a Debian-style +release tag would be the best long-term solution for this case.

But, as long as the instructions during abra release sync are clear enough, it seems worth moving slightly further away from semver in order to support Local IT's use right now.

Owner

Semver was a baseline, I'm all for shifting to fit actual needs!

As always, change things! Make proposals, we can tear it all down.

Reference: toolshed/organising#578