Test suite for recipes? #514

Open
opened 2023-10-18 09:13:56 +00:00 by decentral1se · 3 comments
Owner

What we have today:

  • Maintainers release versions by hand
  • When the version is pushed, Drone-based CI runs on the repo if enabled
  • Those versions are automatically published to the catalogue
  • Operators deploy those versions

Problems with this: 1) it's up to maintainers to test their own new versions; 2) it's unclear whether the Drone CI is a reliable test suite for a new version (e.g. it tests a deploy, but it doesn't test an upgrade from the previous version); 3) versions are published automatically regardless of any of these questions.

I think it would be nice if we could think about how to get a test suite that checks some things for us before a version is published. One great example of this is how YunoHost does it: https://ci-apps.yunohost.org/ci/job/19464 👀

I think that with the work to build the `abra` test suite with https://bats-core.readthedocs.io, the way is clear to have a Bash-based test suite for recipes which actually uses `abra` itself. I think it will be important to keep it as close to Bash as possible, because we have breaking changes between recipe versions, so we'll need a way to programmatically apply patches which migrate recipe configurations, just like an operator would do it.
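To make that concrete, here's a minimal sketch of the skeleton I have in mind, written against bats-core. The recipe/domain values and the `apply-migration.sh` helper are made up for illustration, and the `abra` subcommands would need checking against the current CLI:

```bash
#!/usr/bin/env bats
# recipe-suite.bats -- rough sketch only, nothing here is wired up yet

setup_file() {
  export RECIPE="${RECIPE:-gitea}"           # recipe under test (example value)
  export DOMAIN="${DOMAIN:-ci.example.com}"  # throwaway test domain (example value)
}

@test "recipe passes lint" {
  run abra recipe lint "$RECIPE"
  [ "$status" -eq 0 ]
}

@test "config migrates between versions like an operator would do it" {
  # Hypothetical helper: applies whatever .env/config patch the release notes
  # tell operators to apply between the previous and the new version.
  run "./patches/$RECIPE/apply-migration.sh" "$DOMAIN"
  [ "$status" -eq 0 ]
}
```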

I could imagine a test suite would start with the following (sketched as bats tests below the list)...

  • Does the recipe config pass the lint checks?
  • Does the new version deploy from scratch successfully?
  • Does the new version upgrade from the previous version successfully?
  • Does the new version roll back to the previous version successfully?
  • Does backup and restore work?

And so on... YunoHost has their own set of questions, and for each "YES" the app gets a better "score", which is a clear indicator that the app is a reliable choice. We could use this in combination with the recipe metadata questions ("Email: no, SSO: yes") to generate a "score"?
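A very rough sketch of the scoring idea (everything here is invented: the results file, the metadata format and the weighting):

```bash
#!/usr/bin/env bash
# score.sh -- hypothetical: one point per passing check, one point per "yes"
# in the recipe metadata. Both input files and their formats are made up.
set -euo pipefail

score=0

# checks.txt: one PASS/FAIL per line, written out by the bats run
while read -r result; do
  [ "$result" = "PASS" ] && score=$((score + 1))
done < checks.txt

# metadata.txt: lines like "Email: no" / "SSO: yes"
while read -r _ answer; do
  [ "$answer" = "yes" ] && score=$((score + 1))
done < metadata.txt

echo "score: $score"
```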

I'm not sure how this would fit into our current workflow, or whether we need a web front-end.

Any thoughts? Anyone want to work on this?

decentral1se added the
enhancement
question
design
proposal
labels 2023-10-18 09:13:56 +00:00
decentral1se added this to the Medium/large enhancements project 2023-10-18 09:14:01 +00:00
Member

I really like this idea.
I'm working on an integration smoke test suite using `playwright`. With this I want to test whether all the changes and customizations we made inside the recipes still work after an upgrade or a new deployment.
Things I want to test:

  • SSO integration
  • Email delivery
  • Account creation/login/deletion
  • Are the theme/language or other special configurations applied
  • Are all Nextcloud apps successfully installed
  • ...

I'm not sure if it's a good idea to integrate it into this test process as well.
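If it does fit, the simplest integration I could picture is calling it from the same bats flow after the deploy check (sketch only; it assumes the smoke suite runs with the standard `npx playwright test` and reads its target URL from an environment variable whose name is made up here):

```bash
@test "playwright smoke tests pass against the deployed app" {
  # assumes the deploy check already brought the app up on $DOMAIN,
  # and that the smoke suite reads its target from BASE_URL (made-up name)
  export BASE_URL="https://$DOMAIN"
  run npx playwright test
  [ "$status" -eq 0 ]
}
```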

Author
Owner

@moritz oh shit, that could be seriously cool. I think with the Bash-based stuff we could work on really doing the operator & maintainer actions in a programmatic way. But this `playwright` approach could be an extra layer to really test stuff once things are up. Also very important. I'd be curious to see how difficult it is to set up and to integrate.

Author
Owner
Related: https://git.coopcloud.tech/coop-cloud/matrix-synapse/issues/42