Compare commits

...

54 Commits

Author SHA1 Message Date
decentral1se 03f94da2d8
docs: add fauno [ci skip] 2024-05-01 01:20:25 +02:00
f 766f69b0fd
feat: strip debug symbols
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
to produce smaller binaries
2024-04-30 14:05:03 -03:00
decentral1se 004cd70aed
fix: use unique rule number & wording [ci skip] 2024-04-06 23:52:56 +02:00
decentral1se a4de446f58
test: more verbose failure msg, use contains [ci skip] 2024-04-06 23:48:22 +02:00
Rich M d21c35965d fix: add warning for long secret names (!359)
continuous-integration/drone/push Build is passing Details
A start of a fix for coop-cloud/organising#463
Putting some code out to start a discussion. I've added a linting rule for recipes to establish a general principle, but I want to put some validation into cli/app/new.go, as that's the point where we have both the recipe and the domain and can say for sure whether or not the secret name lengths cause a problem. That will have to wait for a bit. Let me know if I've missed the mark somewhere.
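A rough sketch of the kind of validation being discussed (not abra's actual implementation; the 64-character limit and the `<stack>_<secret>` naming below are assumptions for illustration):

```go
package lint

import "fmt"

// maxSecretNameLength is an assumed illustrative limit, not abra's actual value.
const maxSecretNameLength = 64

// validateSecretNames checks that each generated <stack>_<secret> name stays
// under the assumed runtime limit once the stack name (derived from the
// domain) is prepended.
func validateSecretNames(stackName string, secretNames []string) error {
	for _, name := range secretNames {
		full := fmt.Sprintf("%s_%s", stackName, name)
		if len(full) > maxSecretNameLength {
			return fmt.Errorf("secret name %q is %d characters long, over the %d character limit",
				full, len(full), maxSecretNameLength)
		}
	}
	return nil
}
```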

Reviewed-on: #359
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: Rich M <r.p.makepeace@gmail.com>
Co-committed-by: Rich M <r.p.makepeace@gmail.com>
2024-04-06 21:41:37 +00:00
Mayel de Borniol 63ea58ffaa add relevant command to error message
continuous-integration/drone/push Build is passing Details
2024-04-01 18:51:53 +01:00
decentral1se 2ecace3e90
fix: add missing packages on final layer
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
Closes coop-cloud/organising#598
2024-04-01 13:57:51 +02:00
p4u1 d5ac3958a4 feat: add retries to app volume remove
continuous-integration/drone/push Build is passing Details
2024-03-27 05:38:24 +00:00
3wc 72c20e0039 fix: make installer work again
continuous-integration/drone/push Build is passing Details
2024-03-26 21:07:38 -03:00
decentral1se 575f9905f1
Revert "Revert "feat: backup revolution""
continuous-integration/drone/push Build is passing Details
This reverts commit 2c515ce70a.
2024-03-12 10:34:40 +01:00
decentral1se e3a0af5840
build: upgrade goreleaser
continuous-integration/drone/push Build is passing Details
Closes coop-cloud/organising#474
2024-03-12 10:11:14 +01:00
decentral1se 9a3a39a185
chore: new 0.9.x series
continuous-integration/drone/push Build was killed Details
2024-03-12 10:05:31 +01:00
decentral1se cea56dddde
fix: drop deprecated stanza (goreleaser) 2024-03-12 10:04:50 +01:00
decentral1se 2c515ce70a
Revert "feat: backup revolution"
This reverts commit c5687dfbd7.

This is a temporary measure to facilitate a release which won't
completely explode people's workflows (missing command logic). We will
reinstate this commit after the first 0.9.x release.
2024-03-12 10:03:42 +01:00
p4u1 40c0fb4bac fix-integration-tests (!403)
continuous-integration/drone/push Build is passing Details
In preparation for the new abra release, let's fix all integration tests

After merging, this needs to be cherry-picked into the release-0-9 branch.

  - [x] app_backup.bats (skip this one)
  - [x] app_check.bats (fixed by bd21014fed)
  - [x] app_cmd.bats (partially fixed in 08232b74f6), has known regression coop-cloud/organising#581
  - [x] app_config.bats (no changes needed)
  - [x] app_cp.bats (no changes needed)
  - [x] app_deploy.bats
  - [x] app_errors.bats (no changes needed)
  - [x] app_list.bats (no changes needed)
  - [x] app_logs.bats (no changes needed)
  - [x] app_new.bats (no changes needed)
  - [x] app_ps.bats (no changes needed)
  - [x] app_remove.bats (fixed by [2f29fbeb2e](#403/commits/2f29fbeb2e018656413fa25f8615b7a98cdcb083))
  - [x] app_restart.bats (no changes needed)
  - [x] app_restore.bats (fixed by [f2dd5afc38](#403/commits/f2dd5afc38a25a8316899fa0c6d59499445868d7))
  - [x] app_rollback.bats (partially fixed by 6e99b74c24)
  - [x] app_run.bats (no changes needed)
  - [x] app_secret.bats (fixed by bd069d32f6)
  - [x] app_services.bats (no changes needed)
  - [x] app_undeploy.bats (no changes needed)
  - [x] app_upgrade.bats (no changes needed)
  - [x] app_version.bats (partially fixed by ad323ad2bd)
  - [x] app_volume.bats (fixed by [03c3823770](#403/commits/03c38237707ae795b723180eb07a7edc84a8de35))
  - [x] autocomplete.bats (no changes needed)
  - [x] catalogue.bats (no changes needed)
  - [x] dirs.bats (no changes needed)
  - [x] install.bats (fails, but that is expected)
  - [x] recipe_diff.bats (no changes needed)
  - [x] recipe_fetch.bats (no changes needed)
  - [x] recipe_lint.bats (fixed by [b6b0808066](#403/commits/b6b0808066a11e4bcd77517ec39600d500bcb944))
  - [x] recipe_list.bats (no changes needed)
  - [x] recipe_new.bats (fixed by [0aac464ded](#403/commits/0aac464ded6b43afb3ec37ade2f64d6191b9838f))
  - [x] recipe_release.bats (no changes needed)
  - [x] recipe_reset.bats (no changes needed)
  - [x] recipe_sync.bats (no changes needed)
  - [x] recipe_upgrade.bats (fixed by [ab86904cf4](#403/commits/ab86904cf45db89c7c189ca1fd9971909bd446dd))
  - [x] recipe_version.bats (fixed by 81897bf4da)
  - [x] server_add.bats
  - [x] server_list.bats
  - [x] server_prune.bats (no changes needed)
  - [x] server_remove.bats
  - [x] upgrade.bats
  - [x] version.bats (no changes needed)

Co-authored-by: decentral1se <cellarspoon@riseup.net>
Reviewed-on: #403
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-03-11 13:27:21 +00:00
p4u1 0643df6d73 feat: fetch all recipes when no recipe is specified (!401)
continuous-integration/drone/push Build is passing Details
Closes coop-cloud/organising#530

Reviewed-on: #401
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-24 15:01:33 +00:00
basebuilder e9b99fe921 make installer save abra-download to /tmp/ directory
continuous-integration/drone/push Build is passing Details
The current download location is ~/.local/bin/, but this
conflicts with some security tools.
2024-01-24 14:27:09 +00:00
p4u1 4920dfedb3 fix: retry docker volume remove (!399)
continuous-integration/drone/push Build is passing Details
Closes coop-cloud/organising#509
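A minimal sketch of the retry idea using the Docker client API (illustrative only; abra's actual helper and retry count may differ):

```go
package volumeretry

import (
	"context"
	"fmt"
	"time"

	dockerClient "github.com/docker/docker/client"
)

// removeVolumeWithRetries retries removal because a volume can still be held
// by a stopping container for a short while after an undeploy.
func removeVolumeWithRetries(cl *dockerClient.Client, volumeName string, force bool, retries int) error {
	var err error
	for i := 0; i < retries; i++ {
		if err = cl.VolumeRemove(context.Background(), volumeName, force); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("failed to remove volume %s after %d attempts: %w", volumeName, retries, err)
}
```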

Reviewed-on: #399
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-19 15:09:00 +00:00
p4u1 0a3624c15b feat: add version input to abra app new (!400)
continuous-integration/drone/push Build is passing Details
Closes coop-cloud/organising#519

Reviewed-on: #400
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-19 15:08:41 +00:00
decentral1se c5687dfbd7
feat: backup revolution
continuous-integration/drone/push Build is passing Details
See coop-cloud/organising#485
2024-01-12 22:01:08 +01:00
p4u1 ca91abbed9 fix: correct append service name logic in Filters function (!396)
continuous-integration/drone/push Build is passing Details
This fixes a regression introduced by #395
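For context, a minimal sketch of what appending service names to a stack filter can look like (assumed shape, not the exact Filters implementation):

```go
package stackfilters

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

// stackFilters builds Docker filter arguments for a stack, optionally
// narrowed to specific services. The real abra logic may differ.
func stackFilters(stackName string, serviceNames ...string) filters.Args {
	f := filters.NewArgs()
	if len(serviceNames) == 0 {
		// no services given: match everything in the stack
		f.Add("name", stackName)
		return f
	}
	for _, service := range serviceNames {
		// one name filter per requested service
		f.Add("name", fmt.Sprintf("%s_%s", stackName, service))
	}
	return f
}
```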

Reviewed-on: #396
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-22 12:08:12 +00:00
p4u1 d4727db8f9 feat: abra app logs shows task errors (!395)
continuous-integration/drone/push Build is passing Details
The log command now checks for the ready state in the task list. If it is not ready, it shows the task errors. This might look like this:
```
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State preparing:
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
```
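A condensed sketch of that check, assuming the Docker swarm task API (the real logic is in the cli/app/logs.go diff further down):

```go
package logs

import (
	"context"
	"log"
	"sort"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/swarm"
	dockerClient "github.com/docker/docker/client"
)

// reportTaskErrors lists the tasks of a service and prints each task's error
// when the newest task is not in the running state.
func reportTaskErrors(cl *dockerClient.Client, serviceName string) error {
	f := filters.NewArgs()
	f.Add("service", serviceName)

	tasks, err := cl.TaskList(context.Background(), types.TaskListOptions{Filters: f})
	if err != nil {
		return err
	}
	if len(tasks) == 0 {
		return nil
	}

	// newest task first
	sort.Slice(tasks, func(i, j int) bool {
		return tasks[i].CreatedAt.After(tasks[j].CreatedAt)
	})

	if tasks[0].Status.State != swarm.TaskStateRunning {
		for _, task := range tasks {
			log.Printf("service %s: state %s: %s", serviceName, task.Status.State, task.Status.Err)
		}
	}
	return nil
}
```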

Closes coop-cloud/organising#518

Reviewed-on: #395
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-14 13:15:24 +00:00
p4u1 af8cd1f67a feat: abra release now asks for a release note (!393)
continuous-integration/drone/push Build is passing Details
This implements coop-cloud/organising#540 by checking if a `release/next` file exists and, if so, moving it to `release/<tag>`. When no release notes exist, it prompts for them.
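Roughly, the behaviour amounts to something like this (simplified sketch; the paths and the prompt handling are assumptions):

```go
package release

import (
	"fmt"
	"os"
	"path/filepath"
)

// moveReleaseNotes renames release/next to release/<tag> if it exists.
// It reports false when there are no pre-written notes, in which case the
// caller would prompt the user for them instead.
func moveReleaseNotes(recipeDir, tag string) (bool, error) {
	next := filepath.Join(recipeDir, "release", "next")
	if _, err := os.Stat(next); os.IsNotExist(err) {
		return false, nil
	}
	target := filepath.Join(recipeDir, "release", tag)
	if err := os.Rename(next, target); err != nil {
		return false, fmt.Errorf("moving release notes: %w", err)
	}
	return true, nil
}
```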

Reviewed-on: #393
Reviewed-by: moritz <moritz.m@local-it.org>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-12 14:46:20 +00:00
decentral1se cdd7516e54
chore: go mod tidy [ci skip] 2023-12-04 22:56:58 +01:00
test 99e3ed416f fix: secret name generation when secretId is not part of the secret name
continuous-integration/drone/push Build is passing Details
2023-12-04 21:52:09 +00:00
p4u1 02b726db02 add comments to better explain how the length modifier gets added to the secret
continuous-integration/drone/push Build is passing Details
2023-12-04 17:30:26 +00:00
p4u1 2de6934322 feat: abra app cp enhancements
continuous-integration/drone/push Build is passing Details
2023-12-02 15:39:27 +00:00
decentral1se cb49cf06d1
chore: drop old godotenv pointers [ci skip]
Follows 9affda8a70
2023-12-02 13:02:24 +01:00
decentral1se 9affda8a70
chore: update godotenv fork commit pointer
continuous-integration/drone/push Build is passing Details
Follows #391
2023-12-02 12:59:42 +01:00
p4u1 3957b7c965 proper env modifiers support
continuous-integration/drone/push Build is passing Details
This implements proper modifier support in the env file using this new fork of the godotenv library. The modifier implementation is quite basic for now but can be improved later if needed. See this commit for the actual implementation.

Because we now use proper modifier parsing, it does not affect the parsing of the value, so this is possible again:
```
MY_VAR="#foo"
```
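As a sketch of what that means in practice (shown here with the upstream github.com/joho/godotenv API; abra points at its own fork), a quoted value may contain `#` without being treated as a comment:

```go
package main

import (
	"fmt"

	"github.com/joho/godotenv"
)

func main() {
	// Parse an env snippet where the quoted value starts with "#".
	env, err := godotenv.Unmarshal(`MY_VAR="#foo"`)
	if err != nil {
		panic(err)
	}
	fmt.Println(env["MY_VAR"]) // prints: #foo
}
```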
Closes coop-cloud/organising#535
2023-12-01 11:03:52 +00:00
Moritz 0d83339d80 fix(ssh): increase connection timeout #482
continuous-integration/drone/push Build is passing Details
see coop-cloud/organising#482
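Abra reaches servers through the Docker CLI's SSH connection helpers; as a generic illustration of the underlying knob, raising the dial timeout with golang.org/x/crypto/ssh looks like this (sketch only, not abra's code):

```go
package sshdial

import (
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithLongerTimeout opens an SSH connection with a generous timeout so
// that slow-to-respond hosts are not aborted prematurely.
func dialWithLongerTimeout(addr, user string, auth ssh.AuthMethod) (*ssh.Client, error) {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{auth},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; verify host keys in real code
		Timeout:         60 * time.Second,
	}
	return ssh.Dial("tcp", addr, cfg)
}
```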
2023-11-30 16:35:53 +01:00
decentral1se 6e54ec7213
test: skip failing test for now
continuous-integration/drone/push Build is passing Details
See coop-cloud/organising#535.
2023-11-28 11:42:36 +01:00
decentral1se 66b40a9189
fix: just run it in place [ci skip] 2023-11-27 11:25:01 +01:00
decentral1se 049f02f063
docs: add p4u1 [ci skip] 2023-11-27 11:23:03 +01:00
decentral1se 15857e6453
fix: clean up after cp'ing script [ci skip]
Follows 31e0ed75b0.
2023-11-27 11:21:46 +01:00
decentral1se 31e0ed75b0
build: target for docker building
continuous-integration/drone/push Build is failing Details
Adapted from #384.

Thanks @cas.
2023-11-27 11:15:59 +01:00
p4u1 b1d3fcbb0b add integration test
continuous-integration/drone/push Build is failing Details
2023-11-27 10:01:33 +00:00
p4u1 7b6134f35e add bash completion for abra cmd 2023-11-27 10:01:33 +00:00
decentral1se 316b59b465
test: support local-first testing
continuous-integration/drone/push Build is failing Details
Cherry-picked from #389

Thanks @p4u1.
2023-11-27 10:41:46 +01:00
decentral1se 92b073d5b6
chore: go mod tidy
continuous-integration/drone/push Build is failing Details
2023-11-27 10:28:43 +01:00
Comrade Renovate Bot 9b0dd933b5 chore(deps): update module github.com/schollz/progressbar/v3 to v3.14.1
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-11-10 08:00:52 +00:00
Comrade Renovate Bot f255fa1555 chore(deps): update module github.com/hashicorp/go-retryablehttp to v0.7.5
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-11-09 08:00:33 +00:00
Comrade Renovate Bot 74200318ab chore(deps): update module github.com/schollz/progressbar/v3 to v3.14.0
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-11-07 08:01:11 +00:00
Comrade Renovate Bot 609656b4e1 chore(deps): update module golang.org/x/sys to v0.14.0
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-11-06 08:00:33 +00:00
decentral1se 856c9f2f7d
chore: go mod tidy
continuous-integration/drone/push Build is failing Details
2023-11-04 09:37:15 +01:00
Comrade Renovate Bot bd5cdd3443 chore(deps): update module github.com/docker/docker to v24.0.7
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-10-30 08:00:53 +00:00
Comrade Renovate Bot 79d274e074 chore(deps): update module github.com/docker/cli to v24.0.7
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-10-27 07:01:16 +00:00
Comrade Renovate Bot 51e3df17f1 chore(deps): update module github.com/go-git/go-git/v5 to v5.10.0
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing Details
continuous-integration/drone/push Build is failing Details
2023-10-26 07:00:33 +00:00
knoflook ccf0215495 hotfix: parse values starting with # correctly
continuous-integration/drone/push Build is failing Details
2023-10-23 19:21:45 +02:00
decentral1se 254df7f2be
feat: app cmd ls
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
See coop-cloud/organising#484
2023-10-17 21:16:31 +02:00
decentral1se 6a673ef101
refactor: filter by topic when building catalogue
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
See coop-cloud/organising#377
2023-10-16 18:42:38 +02:00
decentral1se 7f7f7224c6
feat: diff on release flow
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
Also, don't commit unstaged files.
2023-10-16 18:31:22 +02:00
decentral1se f96bf9a8ac
feat: `recipe reset`, `recipe diff`
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is passing Details
See coop-cloud/organising#511
2023-10-15 12:56:52 +02:00
decentral1se dcecf32999
chore: bump version for installer script [ci skip] 2023-10-11 19:31:28 +02:00
79 changed files with 2400 additions and 1235 deletions

View File

@ -29,7 +29,7 @@ steps:
event: tag
- name: release
image: goreleaser/goreleaser:v1.18.2
image: goreleaser/goreleaser:v1.24.0
environment:
GITEA_TOKEN:
from_secret: goreleaser_gitea_token

View File

@ -29,6 +29,8 @@ builds:
ldflags:
- "-X 'main.Commit={{ .Commit }}'"
- "-X 'main.Version={{ .Version }}'"
- "-s"
- "-w"
- id: kadabra
binary: kadabra
@ -50,12 +52,8 @@ builds:
ldflags:
- "-X 'main.Commit={{ .Commit }}'"
- "-X 'main.Version={{ .Version }}'"
archives:
- replacements:
386: i386
amd64: x86_64
format: binary
- "-s"
- "-w"
checksum:
name_template: "checksums.txt"

View File

@ -7,10 +7,12 @@
- cassowary
- codegod100
- decentral1se
- fauno
- frando
- kawaiipunk
- knoflook
- moritz
- p4u1
- rix
- roxxers
- vera

View File

@ -1,23 +1,29 @@
# Build image
FROM golang:1.21-alpine AS build
ENV GOPRIVATE coopcloud.tech
RUN apk add --no-cache \
ca-certificates \
gcc \
git \
make \
musl-dev
RUN update-ca-certificates
COPY . /app
WORKDIR /app
RUN CGO_ENABLED=0 make build
FROM scratch
# Release image ("slim")
FROM alpine:3.19.1
RUN apk add --no-cache \
ca-certificates \
git \
openssh
RUN update-ca-certificates
COPY --from=build /app/abra /abra

View File

@ -2,6 +2,7 @@ ABRA := ./cmd/abra
KADABRA := ./cmd/kadabra
COMMIT := $(shell git rev-list -1 HEAD)
GOPATH := $(shell go env GOPATH)
GOVERSION := 1.21
LDFLAGS := "-X 'main.Commit=$(COMMIT)'"
DIST_LDFLAGS := $(LDFLAGS)" -s -w"
@ -30,6 +31,12 @@ build-kadabra:
build: build-abra build-kadabra
build-docker-abra:
@docker run -it -v $(PWD):/abra golang:$(GOVERSION) \
bash -c 'cd /abra; ./scripts/docker/build.sh'
build-docker: build-docker-abra
clean:
@rm '$(GOPATH)/bin/abra'
@rm '$(GOPATH)/bin/kadabra'

View File

@ -1,414 +1,296 @@
package app
import (
"archive/tar"
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/pkg/archive"
"github.com/klauspost/pgzip"
"coopcloud.tech/abra/pkg/recipe"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type backupConfig struct {
preHookCmd string
postHookCmd string
backupPaths []string
var snapshot string
var snapshotFlag = &cli.StringFlag{
Name: "snapshot, s",
Usage: "Lists specific snapshot",
Destination: &snapshot,
}
var appBackupCommand = cli.Command{
Name: "backup",
Aliases: []string{"bk"},
Usage: "Run app backup",
ArgsUsage: "<domain> [<service>]",
var includePath string
var includePathFlag = &cli.StringFlag{
Name: "path, p",
Usage: "Include path",
Destination: &includePath,
}
var resticRepo string
var resticRepoFlag = &cli.StringFlag{
Name: "repo, r",
Usage: "Restic repository",
Destination: &resticRepo,
}
var appBackupListCommand = cli.Command{
Name: "list",
Aliases: []string{"ls"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
internal.ChaosFlag,
snapshotFlag,
includePathFlag,
},
Before: internal.SubCommandBefore,
Usage: "List all backups",
BashComplete: autocomplete.AppNameComplete,
Description: `
Run an app backup.
A backup command and pre/post hook commands are defined in the recipe
configuration. Abra reads this configuration and runs the commands in the context
of the deployed services. Pass <service> if you only want to back up a single
service. All backups are placed in the ~/.abra/backups directory.
A single backup file is produced for all backup paths specified for a service.
If we have the following backup configuration:
- "backupbot.backup.path=/var/lib/foo,/var/lib/bar"
And we run "abra app backup example.com app", Abra will produce a file that
looks like:
~/.abra/backups/example_com_app_609341138.tar.gz
This file is a compressed archive which contains all backup paths. To see paths, run:
tar -tf ~/.abra/backups/example_com_app_609341138.tar.gz
(Make sure to change the name of the backup file)
This single file can be used to restore your app. See "abra app restore" for more.
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
recipe, err := recipePkg.Get(app.Recipe, internal.Offline)
if err != nil {
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipePkg.EnsureIsClean(app.Recipe); err != nil {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipePkg.EnsureUpToDate(app.Recipe); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(app.Recipe); err != nil {
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
backupConfigs := make(map[string]backupConfig)
for _, service := range recipe.Config.Services {
if backupsEnabled, ok := service.Deploy.Labels["backupbot.backup"]; ok {
if backupsEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
bkConfig := backupConfig{}
logrus.Debugf("backup config detected for %s", fullServiceName)
if paths, ok := service.Deploy.Labels["backupbot.backup.path"]; ok {
logrus.Debugf("detected backup paths for %s: %s", fullServiceName, paths)
bkConfig.backupPaths = strings.Split(paths, ",")
}
if preHookCmd, ok := service.Deploy.Labels["backupbot.backup.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
bkConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.backup.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
bkConfig.postHookCmd = postHookCmd
}
backupConfigs[service.Name] = bkConfig
}
}
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
serviceName := c.Args().Get(1)
if serviceName != "" {
backupConfig, ok := backupConfigs[serviceName]
if !ok {
logrus.Fatalf("no backup config for %s? does %s exist?", serviceName, serviceName)
}
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
logrus.Infof("running backup for the %s service", serviceName)
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if includePath != "" {
logrus.Debugf("including INCLUDE_PATH=%s in backupbot exec invocation", includePath)
execEnv = append(execEnv, fmt.Sprintf("INCLUDE_PATH=%s", includePath))
}
if err := runBackup(cl, app, serviceName, backupConfig); err != nil {
logrus.Fatal(err)
}
} else {
if len(backupConfigs) == 0 {
logrus.Fatalf("no backup configs discovered for %s?", app.Name)
}
for serviceName, backupConfig := range backupConfigs {
logrus.Infof("running backup for the %s service", serviceName)
if err := runBackup(cl, app, serviceName, backupConfig); err != nil {
logrus.Fatal(err)
}
}
if err := internal.RunBackupCmdRemote(cl, "ls", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
// TimeStamp generates a file name friendly timestamp.
func TimeStamp() string {
ts := time.Now().UTC().Format(time.RFC3339)
return strings.Replace(ts, ":", "-", -1)
}
var appBackupDownloadCommand = cli.Command{
Name: "download",
Aliases: []string{"d"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
snapshotFlag,
includePathFlag,
},
Before: internal.SubCommandBefore,
Usage: "Download a backup",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
// runBackup does the actual backup logic.
func runBackup(cl *dockerClient.Client, app config.App, serviceName string, bkConfig backupConfig) error {
if len(bkConfig.backupPaths) == 0 {
return fmt.Errorf("backup paths are empty for %s?", serviceName)
}
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if bkConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return fmt.Errorf("failed to run %s on %s: %s", bkConfig.preHookCmd, targetContainer.ID, err.Error())
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, bkConfig.preHookCmd)
}
var tempBackupPaths []string
for _, remoteBackupPath := range bkConfig.backupPaths {
sanitisedPath := strings.ReplaceAll(remoteBackupPath, "/", "_")
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s%s_%s.tar.gz", fullServiceName, sanitisedPath, TimeStamp()))
logrus.Debugf("temporarily backing up %s:%s to %s", fullServiceName, remoteBackupPath, localBackupPath)
logrus.Infof("backing up %s:%s", fullServiceName, remoteBackupPath)
content, _, err := cl.CopyFromContainer(context.Background(), targetContainer.ID, remoteBackupPath)
cl, err := client.New(app.Server)
if err != nil {
logrus.Debugf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
logrus.Fatal(err)
}
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if includePath != "" {
logrus.Debugf("including INCLUDE_PATH=%s in backupbot exec invocation", includePath)
execEnv = append(execEnv, fmt.Sprintf("INCLUDE_PATH=%s", includePath))
}
if err := internal.RunBackupCmdRemote(cl, "download", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
remoteBackupDir := "/tmp/backup.tar.gz"
currentWorkingDir := "."
if err = CopyFromContainer(cl, targetContainer.ID, remoteBackupDir, currentWorkingDir); err != nil {
logrus.Fatal(err)
}
fmt.Println("backup successfully downloaded to current working directory")
return nil
},
}
var appBackupCreateCommand = cli.Command{
Name: "create",
Aliases: []string{"c"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
resticRepoFlag,
},
Before: internal.SubCommandBefore,
Usage: "Create a new backup",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
return fmt.Errorf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
}
defer content.Close()
_, srcBase := archive.SplitPathDirEntry(remoteBackupPath)
preArchive := archive.RebaseArchiveEntries(content, srcBase, remoteBackupPath)
if err := copyToFile(localBackupPath, preArchive); err != nil {
logrus.Debugf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
return fmt.Errorf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
}
tempBackupPaths = append(tempBackupPaths, localBackupPath)
}
logrus.Infof("compressing and merging archives...")
if err := mergeArchives(tempBackupPaths, fullServiceName); err != nil {
logrus.Debugf("failed to merge archive files: %s", err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to merge archive files: %s", err.Error())
}
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
if bkConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, bkConfig.postHookCmd)
}
return nil
}
func copyToFile(outfile string, r io.Reader) error {
tmpFile, err := os.CreateTemp(filepath.Dir(outfile), ".tar_temp")
if err != nil {
return err
}
tmpPath := tmpFile.Name()
_, err = io.Copy(tmpFile, r)
tmpFile.Close()
if err != nil {
os.Remove(tmpPath)
return err
}
if err = os.Rename(tmpPath, outfile); err != nil {
os.Remove(tmpPath)
return err
}
return nil
}
func cleanupTempArchives(tarPaths []string) error {
for _, tarPath := range tarPaths {
if err := os.RemoveAll(tarPath); err != nil {
return err
}
logrus.Debugf("remove temporary archive file %s", tarPath)
}
return nil
}
func mergeArchives(tarPaths []string, serviceName string) error {
var out io.Writer
var cout *pgzip.Writer
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s_%s.tar.gz", serviceName, TimeStamp()))
fout, err := os.Create(localBackupPath)
if err != nil {
return fmt.Errorf("Failed to open %s: %s", localBackupPath, err)
}
defer fout.Close()
out = fout
cout = pgzip.NewWriter(out)
out = cout
tw := tar.NewWriter(out)
for _, tarPath := range tarPaths {
if err := addTar(tw, tarPath); err != nil {
return fmt.Errorf("failed to merge %s: %v", tarPath, err)
}
}
if err := tw.Close(); err != nil {
return fmt.Errorf("failed to close tar writer %v", err)
}
if cout != nil {
if err := cout.Flush(); err != nil {
return fmt.Errorf("failed to flush: %s", err)
} else if err = cout.Close(); err != nil {
return fmt.Errorf("failed to close compressed writer: %s", err)
}
}
logrus.Infof("backed up %s to %s", serviceName, localBackupPath)
return nil
}
func addTar(tw *tar.Writer, pth string) (err error) {
var tr *tar.Reader
var rc io.ReadCloser
var hdr *tar.Header
if tr, rc, err = openTarFile(pth); err != nil {
return
}
for {
if hdr, err = tr.Next(); err != nil {
if err == io.EOF {
err = nil
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
break
}
if err = tw.WriteHeader(hdr); err != nil {
break
} else if _, err = io.Copy(tw, tr); err != nil {
break
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
}
if err == nil {
err = rc.Close()
} else {
rc.Close()
}
return
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if resticRepo != "" {
logrus.Debugf("including RESTIC_REPO=%s in backupbot exec invocation", resticRepo)
execEnv = append(execEnv, fmt.Sprintf("RESTIC_REPO=%s", resticRepo))
}
if err := internal.RunBackupCmdRemote(cl, "create", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
func openTarFile(pth string) (tr *tar.Reader, rc io.ReadCloser, err error) {
var fin *os.File
var n int
buff := make([]byte, 1024)
var appBackupSnapshotsCommand = cli.Command{
Name: "snapshots",
Aliases: []string{"s"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
snapshotFlag,
},
Before: internal.SubCommandBefore,
Usage: "List backup snapshots",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if fin, err = os.Open(pth); err != nil {
return
}
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if n, err = fin.Read(buff); err != nil {
fin.Close()
return
} else if n == 0 {
fin.Close()
err = fmt.Errorf("%s is empty", pth)
return
}
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if _, err = fin.Seek(0, 0); err != nil {
fin.Close()
return
}
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
rc = fin
tr = tar.NewReader(rc)
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
return tr, rc, nil
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if err := internal.RunBackupCmdRemote(cl, "snapshots", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
var appBackupCommand = cli.Command{
Name: "backup",
Aliases: []string{"b"},
Usage: "Manage app backups",
ArgsUsage: "<domain>",
Subcommands: []cli.Command{
appBackupListCommand,
appBackupSnapshotsCommand,
appBackupDownloadCommand,
appBackupCreateCommand,
},
}

View File

@ -6,9 +6,11 @@ import (
"os"
"os/exec"
"path"
"sort"
"strings"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/app"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
@ -22,8 +24,7 @@ var appCmdCommand = cli.Command{
Name: "command",
Aliases: []string{"cmd"},
Usage: "Run app commands",
Description: `
Run an app specific command.
Description: `Run an app specific command.
These commands are bash functions, defined in the abra.sh of the recipe itself.
They can be run within the context of a service (e.g. app) or locally on your
@ -43,8 +44,19 @@ Example:
internal.OfflineFlag,
internal.ChaosFlag,
},
BashComplete: autocomplete.AppNameComplete,
Before: internal.SubCommandBefore,
Before: internal.SubCommandBefore,
Subcommands: []cli.Command{appCmdListCommand},
BashComplete: func(ctx *cli.Context) {
args := ctx.Args()
switch len(args) {
case 0:
autocomplete.AppNameComplete(ctx)
case 1:
autocomplete.ServiceNameComplete(args.Get(0))
case 2:
cmdNameComplete(args.Get(0))
}
},
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
@ -186,3 +198,76 @@ func parseCmdArgs(args []string, isLocal bool) (bool, string) {
return hasCmdArgs, parsedCmdArgs
}
func cmdNameComplete(appName string) {
app, err := app.Get(appName)
if err != nil {
return
}
cmdNames, _ := getShCmdNames(app)
if err != nil {
return
}
for _, n := range cmdNames {
fmt.Println(n)
}
}
var appCmdListCommand = cli.Command{
Name: "list",
Aliases: []string{"ls"},
Usage: "List all available commands",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
internal.ChaosFlag,
},
BashComplete: autocomplete.AppNameComplete,
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipePkg.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipePkg.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
cmdNames, err := getShCmdNames(app)
if err != nil {
logrus.Fatal(err)
}
for _, cmdName := range cmdNames {
fmt.Println(cmdName)
}
return nil
},
}
func getShCmdNames(app config.App) ([]string, error) {
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
cmdNames, err := config.ReadAbraShCmdNames(abraShPath)
if err != nil {
return nil, err
}
sort.Strings(cmdNames)
return cmdNames, nil
}

View File

@ -2,19 +2,24 @@ package app
import (
"context"
"errors"
"fmt"
"io"
"os"
"path"
"path/filepath"
"strings"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/container"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
@ -49,46 +54,14 @@ And if you want to copy that file back to your current working directory locally
dst := c.Args().Get(2)
if src == "" {
logrus.Fatal("missing <src> argument")
} else if dst == "" {
}
if dst == "" {
logrus.Fatal("missing <dest> argument")
}
parsedSrc := strings.SplitN(src, ":", 2)
parsedDst := strings.SplitN(dst, ":", 2)
errorMsg := "one of <src>/<dest> arguments must take $SERVICE:$PATH form"
if len(parsedSrc) == 2 && len(parsedDst) == 2 {
logrus.Fatal(errorMsg)
} else if len(parsedSrc) != 2 {
if len(parsedDst) != 2 {
logrus.Fatal(errorMsg)
}
} else if len(parsedDst) != 2 {
if len(parsedSrc) != 2 {
logrus.Fatal(errorMsg)
}
}
var service string
var srcPath string
var dstPath string
isToContainer := false // <container:src> <dst>
if len(parsedSrc) == 2 {
service = parsedSrc[0]
srcPath = parsedSrc[1]
dstPath = dst
logrus.Debugf("assuming transfer is coming FROM the container")
} else if len(parsedDst) == 2 {
service = parsedDst[0]
dstPath = parsedDst[1]
srcPath = src
isToContainer = true // <src> <container:dst>
logrus.Debugf("assuming transfer is going TO the container")
}
if isToContainer {
if _, err := os.Stat(srcPath); os.IsNotExist(err) {
logrus.Fatalf("%s does not exist locally?", srcPath)
}
srcPath, dstPath, service, toContainer, err := parseSrcAndDst(src, dst)
if err != nil {
logrus.Fatal(err)
}
cl, err := client.New(app.Server)
@ -96,7 +69,18 @@ And if you want to copy that file back to your current working directory locally
logrus.Fatal(err)
}
if err := configureAndCp(c, cl, app, srcPath, dstPath, service, isToContainer); err != nil {
container, err := containerPkg.GetContainerFromStackAndService(cl, app.StackName(), service)
if err != nil {
logrus.Fatal(err)
}
logrus.Debugf("retrieved %s as target container on %s", formatter.ShortenID(container.ID), app.Server)
if toContainer {
err = CopyToContainer(cl, container.ID, srcPath, dstPath)
} else {
err = CopyFromContainer(cl, container.ID, srcPath, dstPath)
}
if err != nil {
logrus.Fatal(err)
}
@ -104,46 +88,292 @@ And if you want to copy that file back to your current working directory locally
},
}
func configureAndCp(
c *cli.Context,
cl *dockerClient.Client,
app config.App,
srcPath string,
dstPath string,
service string,
isToContainer bool) error {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), service))
var errServiceMissing = errors.New("one of <src>/<dest> arguments must take $SERVICE:$PATH form")
container, err := container.GetContainer(context.Background(), cl, filters, internal.NoInput)
// parseSrcAndDst parses src and dest string. One of src or dst must be of the form $SERVICE:$PATH
func parseSrcAndDst(src, dst string) (srcPath string, dstPath string, service string, toContainer bool, err error) {
parsedSrc := strings.SplitN(src, ":", 2)
parsedDst := strings.SplitN(dst, ":", 2)
if len(parsedSrc)+len(parsedDst) != 3 {
return "", "", "", false, errServiceMissing
}
if len(parsedSrc) == 2 {
return parsedSrc[1], dst, parsedSrc[0], false, nil
}
if len(parsedDst) == 2 {
return src, parsedDst[1], parsedDst[0], true, nil
}
return "", "", "", false, errServiceMissing
}
// CopyToContainer copies a file or directory from the local file system to the container.
// See the possible copy modes and their documentation.
func CopyToContainer(cl *dockerClient.Client, containerID, srcPath, dstPath string) error {
srcStat, err := os.Stat(srcPath)
if err != nil {
logrus.Fatal(err)
return fmt.Errorf("local %s ", err)
}
logrus.Debugf("retrieved %s as target container on %s", formatter.ShortenID(container.ID), app.Server)
dstStat, err := cl.ContainerStatPath(context.Background(), containerID, dstPath)
dstExists := true
if err != nil {
if errdefs.IsNotFound(err) {
dstExists = false
} else {
return fmt.Errorf("remote path: %s", err)
}
}
if isToContainer {
toTarOpts := &archive.TarOptions{NoOverwriteDirNonDir: true, Compression: archive.Gzip}
content, err := archive.TarWithOptions(srcPath, toTarOpts)
if err != nil {
logrus.Fatal(err)
}
mode, err := copyMode(srcPath, dstPath, srcStat.Mode(), dstStat.Mode, dstExists)
if err != nil {
return err
}
movePath := ""
switch mode {
case CopyModeDirToDir:
// Add the src directory to the destination path
_, srcDir := path.Split(srcPath)
dstPath = path.Join(dstPath, srcDir)
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), container.ID, dstPath, content, copyOpts); err != nil {
logrus.Fatal(err)
}
} else {
content, _, err := cl.CopyFromContainer(context.Background(), container.ID, srcPath)
// Make sure the dst directory exists.
dcli, err := command.NewDockerCli()
if err != nil {
logrus.Fatal(err)
return err
}
defer content.Close()
fromTarOpts := &archive.TarOptions{NoOverwriteDirNonDir: true, Compression: archive.Gzip}
if err := archive.Untar(content, dstPath, fromTarOpts); err != nil {
logrus.Fatal(err)
if _, err := container.RunExec(dcli, cl, containerID, &types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"mkdir", "-p", dstPath},
Detach: false,
Tty: true,
}); err != nil {
return fmt.Errorf("create remote directory: %s", err)
}
case CopyModeFileToFile:
// Remove the file component from the path, since docker can only copy
// to a directory.
dstPath, _ = path.Split(dstPath)
case CopyModeFileToFileRename:
// Copy the file to the temp directory and move it to its dstPath
// afterwards.
movePath = dstPath
dstPath = "/tmp"
}
toTarOpts := &archive.TarOptions{IncludeSourceDir: true, NoOverwriteDirNonDir: true, Compression: archive.Gzip}
content, err := archive.TarWithOptions(srcPath, toTarOpts)
if err != nil {
return err
}
logrus.Debugf("copy %s from local to %s on container", srcPath, dstPath)
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), containerID, dstPath, content, copyOpts); err != nil {
return err
}
if movePath != "" {
_, srcFile := path.Split(srcPath)
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
if _, err := container.RunExec(dcli, cl, containerID, &types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"mv", path.Join("/tmp", srcFile), movePath},
Detach: false,
Tty: true,
}); err != nil {
return fmt.Errorf("create remote directory: %s", err)
}
}
return nil
}
// CopyFromContainer copies a file or directory from the given container to the local file system.
// See the possible copy modes and their documentation.
func CopyFromContainer(cl *dockerClient.Client, containerID, srcPath, dstPath string) error {
srcStat, err := cl.ContainerStatPath(context.Background(), containerID, srcPath)
if err != nil {
if errdefs.IsNotFound(err) {
return fmt.Errorf("remote: %s does not exist", srcPath)
} else {
return fmt.Errorf("remote path: %s", err)
}
}
dstStat, err := os.Stat(dstPath)
dstExists := true
var dstMode os.FileMode
if err != nil {
if os.IsNotExist(err) {
dstExists = false
} else {
return fmt.Errorf("remote path: %s", err)
}
} else {
dstMode = dstStat.Mode()
}
mode, err := copyMode(srcPath, dstPath, srcStat.Mode, dstMode, dstExists)
if err != nil {
return err
}
moveDstDir := ""
moveDstFile := ""
switch mode {
case CopyModeFileToFile:
// Remove the file component from the path, since docker can only copy
// to a directory.
dstPath, _ = path.Split(dstPath)
case CopyModeFileToFileRename:
// Copy the file to the temp directory and move it to its dstPath
// afterwards.
moveDstFile = dstPath
dstPath = "/tmp"
case CopyModeFilesToDir:
// Copy the directory to the temp directory and move it to its
// dstPath afterwards.
moveDstDir = path.Join(dstPath, "/")
dstPath = "/tmp"
// Make sure the temp directory always gets removed
defer os.Remove(path.Join("/tmp"))
}
content, _, err := cl.CopyFromContainer(context.Background(), containerID, srcPath)
if err != nil {
return fmt.Errorf("copy: %s", err)
}
defer content.Close()
if err := archive.Untar(content, dstPath, &archive.TarOptions{
NoOverwriteDirNonDir: true,
Compression: archive.Gzip,
NoLchown: true,
}); err != nil {
return fmt.Errorf("untar: %s", err)
}
if moveDstFile != "" {
_, srcFile := path.Split(strings.TrimSuffix(srcPath, "/"))
if err := moveFile(path.Join("/tmp", srcFile), moveDstFile); err != nil {
return err
}
}
if moveDstDir != "" {
_, srcDir := path.Split(strings.TrimSuffix(srcPath, "/"))
if err := moveDir(path.Join("/tmp", srcDir), moveDstDir); err != nil {
return err
}
}
return nil
}
var (
ErrCopyDirToFile = fmt.Errorf("can't copy dir to file")
ErrDstDirNotExist = fmt.Errorf("destination directory does not exist")
)
type CopyMode int
const (
// Copy a src file to a dest file. The src and dest file names are the same.
// <dir_src>/<file> + <dir_dst>/<file> -> <dir_dst>/<file>
CopyModeFileToFile = CopyMode(iota)
// Copy a src file to a dest file. The src and dest file names are not the same.
// <dir_src>/<file_src> + <dir_dst>/<file_dst> -> <dir_dst>/<file_dst>
CopyModeFileToFileRename
// Copy a src file to dest directory. The dest file gets created in the dest
// folder with the src filename.
// <dir_src>/<file> + <dir_dst> -> <dir_dst>/<file>
CopyModeFileToDir
// Copy a src directory to dest directory.
// <dir_src> + <dir_dst> -> <dir_dst>/<dir_src>
CopyModeDirToDir
// Copy all files in the src directory to the dest directory. This works recursively.
// <dir_src>/ + <dir_dst> -> <dir_dst>/<files_from_dir_src>
CopyModeFilesToDir
)
// copyMode takes a src and dest path and file mode to determine the copy mode.
// See the possible copy modes and their documentation.
func copyMode(srcPath, dstPath string, srcMode os.FileMode, dstMode os.FileMode, dstExists bool) (CopyMode, error) {
_, srcFile := path.Split(srcPath)
_, dstFile := path.Split(dstPath)
if srcMode.IsDir() {
if !dstExists {
return -1, ErrDstDirNotExist
}
if dstMode.IsDir() {
if strings.HasSuffix(srcPath, "/") {
return CopyModeFilesToDir, nil
}
return CopyModeDirToDir, nil
}
return -1, ErrCopyDirToFile
}
if dstMode.IsDir() {
return CopyModeFileToDir, nil
}
if srcFile != dstFile {
return CopyModeFileToFileRename, nil
}
return CopyModeFileToFile, nil
}
// moveDir moves all files from a source path to the destination path recursively.
func moveDir(sourcePath, destPath string) error {
return filepath.Walk(sourcePath, func(p string, info os.FileInfo, err error) error {
if err != nil {
return err
}
newPath := path.Join(destPath, strings.TrimPrefix(p, sourcePath))
if info.IsDir() {
err := os.Mkdir(newPath, info.Mode())
if err != nil {
if os.IsExist(err) {
return nil
}
return err
}
}
if info.Mode().IsRegular() {
return moveFile(p, newPath)
}
return nil
})
}
// moveFile moves a file from a source path to a destination path.
func moveFile(sourcePath, destPath string) error {
inputFile, err := os.Open(sourcePath)
if err != nil {
return err
}
outputFile, err := os.Create(destPath)
if err != nil {
inputFile.Close()
return err
}
defer outputFile.Close()
_, err = io.Copy(outputFile, inputFile)
inputFile.Close()
if err != nil {
return err
}
// Remove file after successful copy.
err = os.Remove(sourcePath)
if err != nil {
return err
}
return nil
}

cli/app/cp_test.go (new file, 113 lines)
View File

@ -0,0 +1,113 @@
package app
import (
"os"
"testing"
)
func TestParse(t *testing.T) {
tests := []struct {
src string
dst string
srcPath string
dstPath string
service string
toContainer bool
err error
}{
{src: "foo", dst: "bar", err: errServiceMissing},
{src: "app:foo", dst: "app:bar", err: errServiceMissing},
{src: "app:foo", dst: "bar", srcPath: "foo", dstPath: "bar", service: "app", toContainer: false},
{src: "foo", dst: "app:bar", srcPath: "foo", dstPath: "bar", service: "app", toContainer: true},
}
for i, tc := range tests {
srcPath, dstPath, service, toContainer, err := parseSrcAndDst(tc.src, tc.dst)
if srcPath != tc.srcPath {
t.Errorf("[%d] srcPath: want (%s), got(%s)", i, tc.srcPath, srcPath)
}
if dstPath != tc.dstPath {
t.Errorf("[%d] dstPath: want (%s), got(%s)", i, tc.dstPath, dstPath)
}
if service != tc.service {
t.Errorf("[%d] service: want (%s), got(%s)", i, tc.service, service)
}
if toContainer != tc.toContainer {
t.Errorf("[%d] toConainer: want (%t), got(%t)", i, tc.toContainer, toContainer)
}
if err == nil && tc.err != nil && err.Error() != tc.err.Error() {
t.Errorf("[%d] err: want (%s), got(%s)", i, tc.err, err)
}
}
}
func TestCopyMode(t *testing.T) {
tests := []struct {
srcPath string
dstPath string
srcMode os.FileMode
dstMode os.FileMode
dstExists bool
mode CopyMode
err error
}{
{
srcPath: "foo.txt",
dstPath: "foo.txt",
srcMode: os.ModePerm,
dstMode: os.ModePerm,
dstExists: true,
mode: CopyModeFileToFile,
},
{
srcPath: "foo.txt",
dstPath: "bar.txt",
srcMode: os.ModePerm,
dstExists: true,
mode: CopyModeFileToFileRename,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModeDir,
dstExists: true,
mode: CopyModeDirToDir,
},
{
srcPath: "foo/",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModeDir,
dstExists: true,
mode: CopyModeFilesToDir,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstExists: false,
mode: -1,
err: ErrDstDirNotExist,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModePerm,
dstExists: true,
mode: -1,
err: ErrCopyDirToFile,
},
}
for i, tc := range tests {
mode, err := copyMode(tc.srcPath, tc.dstPath, tc.srcMode, tc.dstMode, tc.dstExists)
if mode != tc.mode {
t.Errorf("[%d] mode: want (%d), got(%d)", i, tc.mode, mode)
}
if err != tc.err {
t.Errorf("[%d] err: want (%s), got(%s)", i, tc.err, err)
}
}
}

View File

@ -2,75 +2,26 @@ package app
import (
"context"
"fmt"
"io"
"os"
"slices"
"sync"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/service"
"coopcloud.tech/abra/pkg/upstream/stack"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var logOpts = types.ContainerLogsOptions{
ShowStderr: true,
ShowStdout: true,
Since: "",
Until: "",
Timestamps: true,
Follow: true,
Tail: "20",
Details: false,
}
// stackLogs lists logs for all stack services
func stackLogs(c *cli.Context, app config.App, client *dockerClient.Client) {
filters, err := app.Filters(true, false)
if err != nil {
logrus.Fatal(err)
}
serviceOpts := types.ServiceListOptions{Filters: filters}
services, err := client.ServiceList(context.Background(), serviceOpts)
if err != nil {
logrus.Fatal(err)
}
var wg sync.WaitGroup
for _, service := range services {
wg.Add(1)
go func(s string) {
if internal.StdErrOnly {
logOpts.ShowStdout = false
}
logs, err := client.ServiceLogs(context.Background(), s, logOpts)
if err != nil {
logrus.Fatal(err)
}
defer logs.Close()
_, err = io.Copy(os.Stdout, logs)
if err != nil && err != io.EOF {
logrus.Fatal(err)
}
}(service.ID)
}
wg.Wait()
os.Exit(0)
}
var appLogsCommand = cli.Command{
Name: "logs",
Aliases: []string{"l"},
@ -105,46 +56,84 @@ var appLogsCommand = cli.Command{
logrus.Fatalf("%s is not deployed?", app.Name)
}
logOpts.Since = internal.SinceLogs
serviceName := c.Args().Get(1)
if serviceName == "" {
logrus.Debugf("tailing logs for all %s services", app.Recipe)
stackLogs(c, app, cl)
} else {
logrus.Debugf("tailing logs for %s", serviceName)
if err := tailServiceLogs(c, cl, app, serviceName); err != nil {
logrus.Fatal(err)
}
serviceNames := []string{}
if serviceName != "" {
serviceNames = []string{serviceName}
}
err = tailLogs(cl, app, serviceNames)
if err != nil {
logrus.Fatal(err)
}
return nil
},
}
func tailServiceLogs(c *cli.Context, cl *dockerClient.Client, app config.App, serviceName string) error {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("%s_%s", app.StackName(), serviceName))
chosenService, err := service.GetService(context.Background(), cl, filters, internal.NoInput)
// tailLogs prints logs for the given app with optional service names to be
// filtered on. It also checks if the latest task is not running and then
// prints the past tasks.
func tailLogs(cl *dockerClient.Client, app config.App, serviceNames []string) error {
f, err := app.Filters(true, false, serviceNames...)
if err != nil {
logrus.Fatal(err)
return err
}
if internal.StdErrOnly {
logOpts.ShowStdout = false
}
logs, err := cl.ServiceLogs(context.Background(), chosenService.ID, logOpts)
services, err := cl.ServiceList(context.Background(), types.ServiceListOptions{Filters: f})
if err != nil {
logrus.Fatal(err)
return err
}
defer logs.Close()
_, err = io.Copy(os.Stdout, logs)
if err != nil && err != io.EOF {
logrus.Fatal(err)
var wg sync.WaitGroup
for _, service := range services {
filters := filters.NewArgs()
filters.Add("name", service.Spec.Name)
tasks, err := cl.TaskList(context.Background(), types.TaskListOptions{Filters: f})
if err != nil {
return err
}
if len(tasks) > 0 {
// Sort the tasks by the CreatedAt field in descending order so the
// most recent task comes first.
slices.SortFunc[[]swarm.Task](tasks, func(t1, t2 swarm.Task) int {
return int(t2.Meta.CreatedAt.Unix() - t1.Meta.CreatedAt.Unix())
})
lastTask := tasks[0].Status
if lastTask.State != swarm.TaskStateRunning {
for _, task := range tasks {
logrus.Errorf("[%s] %s State %s: %s", service.Spec.Name, task.Meta.CreatedAt.Format(time.RFC3339), task.Status.State, task.Status.Err)
}
}
}
// Collect the logs in a go routine, so the logs from all services are
// collected in parallel.
wg.Add(1)
go func(serviceID string) {
logs, err := cl.ServiceLogs(context.Background(), serviceID, types.ContainerLogsOptions{
ShowStderr: true,
ShowStdout: !internal.StdErrOnly,
Since: internal.SinceLogs,
Until: "",
Timestamps: true,
Follow: true,
Tail: "20",
Details: false,
})
if err != nil {
logrus.Fatal(err)
}
defer logs.Close()
_, err = io.Copy(os.Stdout, logs)
if err != nil && err != io.EOF {
logrus.Fatal(err)
}
}(service.ID)
}
// Wait for all log streams to be closed.
wg.Wait()
return nil
}

View File

@ -54,9 +54,17 @@ var appNewCommand = cli.Command{
internal.OfflineFlag,
internal.ChaosFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "[<recipe>]",
BashComplete: autocomplete.RecipeNameComplete,
Before: internal.SubCommandBefore,
ArgsUsage: "[<recipe>] [<version>]",
BashComplete: func(ctx *cli.Context) {
args := ctx.Args()
switch len(args) {
case 0:
autocomplete.RecipeNameComplete(ctx)
case 1:
autocomplete.RecipeVersionComplete(ctx.Args().Get(0))
}
},
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipe(c)
@ -69,8 +77,14 @@ var appNewCommand = cli.Command{
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(recipe.Name); err != nil {
logrus.Fatal(err)
if c.Args().Get(1) == "" {
if err := recipePkg.EnsureLatest(recipe.Name); err != nil {
logrus.Fatal(err)
}
} else {
if err := recipePkg.EnsureVersion(recipe.Name, c.Args().Get(1)); err != nil {
logrus.Fatal(err)
}
}
}
@ -97,7 +111,7 @@ var appNewCommand = cli.Command{
var secrets AppSecrets
var secretTable *jsontable.JSONTable
if internal.Secrets {
sampleEnv, err := recipe.SampleEnv(config.ReadEnvOptions{})
sampleEnv, err := recipe.SampleEnv()
if err != nil {
logrus.Fatal(err)
}
@ -108,7 +122,7 @@ var appNewCommand = cli.Command{
}
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
secretsConfig, err := secret.ReadSecretsConfig(envSamplePath, composeFiles, recipe.Name)
secretsConfig, err := secret.ReadSecretsConfig(envSamplePath, composeFiles, config.StackName(internal.Domain))
if err != nil {
return err
}
@ -168,14 +182,14 @@ var appNewCommand = cli.Command{
type AppSecrets map[string]string
// createSecrets creates all secrets for a new app.
func createSecrets(cl *dockerClient.Client, secretsConfig map[string]string, sanitisedAppName string) (AppSecrets, error) {
func createSecrets(cl *dockerClient.Client, secretsConfig map[string]secret.Secret, sanitisedAppName string) (AppSecrets, error) {
// NOTE(d1): trim to match app.StackName() implementation
if len(sanitisedAppName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", sanitisedAppName, sanitisedAppName[:45])
sanitisedAppName = sanitisedAppName[:45]
if len(sanitisedAppName) > config.MAX_SANITISED_APP_NAME_LENGTH {
logrus.Debugf("trimming %s to %s to avoid runtime limits", sanitisedAppName, sanitisedAppName[:config.MAX_SANITISED_APP_NAME_LENGTH])
sanitisedAppName = sanitisedAppName[:config.MAX_SANITISED_APP_NAME_LENGTH]
}
secrets, err := secret.GenerateSecrets(cl, secretsConfig, sanitisedAppName, internal.NewAppServer)
secrets, err := secret.GenerateSecrets(cl, secretsConfig, internal.NewAppServer)
if err != nil {
return nil, err
}
@ -217,7 +231,7 @@ func ensureDomainFlag(recipe recipe.Recipe, server string) error {
}
// promptForSecrets asks if we should generate secrets for a new app.
func promptForSecrets(recipeName string, secretsConfig map[string]string) error {
func promptForSecrets(recipeName string, secretsConfig map[string]secret.Secret) error {
if len(secretsConfig) == 0 {
logrus.Debugf("%s has no secrets to generate, skipping...", recipeName)
return nil

View File

@ -3,6 +3,7 @@ package app
import (
"context"
"fmt"
"log"
"os"
"coopcloud.tech/abra/cli/internal"
@ -11,7 +12,6 @@ import (
stack "coopcloud.tech/abra/pkg/upstream/stack"
"github.com/AlecAivazis/survey/v2"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/volume"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@ -110,26 +110,19 @@ flag.
logrus.Fatal(err)
}
volumeListOptions := volume.ListOptions{fs}
volumeListOKBody, err := cl.VolumeList(context.Background(), volumeListOptions)
volumeList := volumeListOKBody.Volumes
volumeList, err := client.GetVolumes(cl, context.Background(), app.Server, fs)
if err != nil {
logrus.Fatal(err)
}
volumeNames := client.GetVolumeNames(volumeList)
var vols []string
for _, vol := range volumeList {
vols = append(vols, vol.Name)
}
if len(vols) > 0 {
for _, vol := range vols {
err := cl.VolumeRemove(context.Background(), vol, internal.Force) // last argument is for force removing
if err != nil {
logrus.Fatal(err)
}
logrus.Info(fmt.Sprintf("volume %s removed", vol))
if len(volumeNames) > 0 {
err := client.RemoveVolumes(cl, context.Background(), volumeNames, internal.Force, 5)
if err != nil {
log.Fatalf("removing volumes failed: %s", err)
}
logrus.Infof("%d volumes removed successfully", len(volumeNames))
} else {
logrus.Info("no volumes to remove")
}

View File

@ -1,223 +1,82 @@
package app
import (
"context"
"errors"
"fmt"
"os"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/recipe"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type restoreConfig struct {
preHookCmd string
postHookCmd string
var targetPath string
var targetPathFlag = &cli.StringFlag{
Name: "target, t",
Usage: "Target path",
Destination: &targetPath,
}
var appRestoreCommand = cli.Command{
Name: "restore",
Aliases: []string{"rs"},
Usage: "Run app restore",
ArgsUsage: "<domain> <service> <file>",
Usage: "Restore an app backup",
ArgsUsage: "<domain> <service>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
internal.ChaosFlag,
targetPathFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
Description: `
Run an app restore.
Pre/post hook commands are defined in the recipe configuration. Abra reads this
configuration and runs the commands in the context of the service before
restoring the backup.
Unlike "abra app backup", restore must be run on a per-service basis. You can
not restore all services in one go. Backup files produced by Abra are
compressed archives which use absolute paths. This allows Abra to restore
according to standard tar command logic, i.e. the backup will be restored to
the path it was originally backed up from.
Example:
abra app restore example.com app ~/.abra/backups/example_com_app_609341138.tar.gz
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
recipe, err := recipe.Get(app.Recipe, internal.Offline)
if err != nil {
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipePkg.EnsureIsClean(app.Recipe); err != nil {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipePkg.EnsureUpToDate(app.Recipe); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(app.Recipe); err != nil {
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
serviceName := c.Args().Get(1)
if serviceName == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <service>?"))
}
backupPath := c.Args().Get(2)
if backupPath == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <file>?"))
}
if _, err := os.Stat(backupPath); err != nil {
if os.IsNotExist(err) {
logrus.Fatalf("%s doesn't exist?", backupPath)
}
}
restoreConfigs := make(map[string]restoreConfig)
for _, service := range recipe.Config.Services {
if restoreEnabled, ok := service.Deploy.Labels["backupbot.restore"]; ok {
if restoreEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
rsConfig := restoreConfig{}
logrus.Debugf("restore config detected for %s", fullServiceName)
if preHookCmd, ok := service.Deploy.Labels["backupbot.restore.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
rsConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.restore.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
rsConfig.postHookCmd = postHookCmd
}
restoreConfigs[service.Name] = rsConfig
}
}
}
rsConfig, ok := restoreConfigs[serviceName]
if !ok {
rsConfig = restoreConfig{}
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
if err := runRestore(cl, app, backupPath, serviceName, rsConfig); err != nil {
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if targetPath != "" {
logrus.Debugf("including TARGET=%s in backupbot exec invocation", targetPath)
execEnv = append(execEnv, fmt.Sprintf("TARGET=%s", targetPath))
}
if err := internal.RunBackupCmdRemote(cl, "restore", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
// runRestore does the actual restore logic.
func runRestore(cl *dockerClient.Client, app config.App, backupPath, serviceName string, rsConfig restoreConfig) error {
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if rsConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, rsConfig.preHookCmd)
}
backupReader, err := os.Open(backupPath)
if err != nil {
return err
}
content, err := archive.DecompressStream(backupReader)
if err != nil {
return err
}
// NOTE(d1): we use absolute paths so tar knows what to do. it will restore
// files according to the paths set in the compressed archive
restorePath := "/"
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), targetContainer.ID, restorePath, content, copyOpts); err != nil {
return err
}
logrus.Infof("restored %s to %s", backupPath, fullServiceName)
if rsConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, rsConfig.postHookCmd)
}
return nil
}

View File

@ -91,7 +91,7 @@ var appRunCommand = cli.Command{
logrus.Fatal(err)
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
logrus.Fatal(err)
}

View File

@ -20,19 +20,23 @@ import (
"github.com/urfave/cli"
)
var allSecrets bool
var allSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &allSecrets,
Usage: "Generate all secrets",
}
var (
allSecrets bool
allSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &allSecrets,
Usage: "Generate all secrets",
}
)
var rmAllSecrets bool
var rmAllSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &rmAllSecrets,
Usage: "Remove all secrets",
}
var (
rmAllSecrets bool
rmAllSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &rmAllSecrets,
Usage: "Remove all secrets",
}
)
var appSecretGenerateCommand = cli.Command{
Name: "generate",
@ -87,28 +91,22 @@ var appSecretGenerateCommand = cli.Command{
logrus.Fatal(err)
}
secretsConfig, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secrets, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
logrus.Fatal(err)
}
secretsToCreate := make(map[string]string)
if allSecrets {
secretsToCreate = secretsConfig
} else {
if !allSecrets {
secretName := c.Args().Get(1)
secretVersion := c.Args().Get(2)
matches := false
for name := range secretsConfig {
if secretName == name {
secretsToCreate[name] = secretVersion
matches = true
}
}
if !matches {
s, ok := secrets[secretName]
if !ok {
logrus.Fatalf("%s doesn't exist in the env config?", secretName)
}
s.Version = secretVersion
secrets = map[string]secret.Secret{
secretName: s,
}
}
cl, err := client.New(app.Server)
@ -116,7 +114,7 @@ var appSecretGenerateCommand = cli.Command{
logrus.Fatal(err)
}
secretVals, err := secret.GenerateSecrets(cl, secretsToCreate, app.StackName(), app.Server)
secretVals, err := secret.GenerateSecrets(cl, secrets, app.Server)
if err != nil {
logrus.Fatal(err)
}
@ -276,7 +274,7 @@ Example:
logrus.Fatal(err)
}
secretsConfig, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secrets, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
logrus.Fatal(err)
}
@ -311,12 +309,7 @@ Example:
match := false
secretToRm := c.Args().Get(1)
for secretName, secretValue := range secretsConfig {
val, err := secret.ParseSecretValue(secretValue)
if err != nil {
logrus.Fatal(err)
}
for secretName, val := range secrets {
secretRemoteName := fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, val.Version)
if _, ok := remoteSecretNames[secretRemoteName]; ok {
if secretToRm != "" {

View File

@ -2,6 +2,7 @@ package app
import (
"context"
"log"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
@ -131,12 +132,12 @@ Passing "--force/-f" will select all volumes for removal. Be careful.
}
if len(volumesToRemove) > 0 {
err = client.RemoveVolumes(cl, context.Background(), app.Server, volumesToRemove, internal.Force)
err := client.RemoveVolumes(cl, context.Background(), volumesToRemove, internal.Force, 5)
if err != nil {
logrus.Fatal(err)
log.Fatalf("removing volumes failed: %s", err)
}
logrus.Info("volumes removed successfully")
logrus.Infof("%d volumes removed successfully", len(volumesToRemove))
} else {
logrus.Info("no volumes removed")
}

View File

@ -98,11 +98,6 @@ keys configured on your account.
continue
}
if _, exists := catalogue.CatalogueSkipList[recipeMeta.Name]; exists {
catlBar.Add(1)
continue
}
versions, err := recipe.GetRecipeVersions(recipeMeta.Name, internal.Offline)
if err != nil {
logrus.Warn(err)
@ -173,7 +168,7 @@ keys configured on your account.
}
msg := "chore: publish new catalogue release changes"
if err := gitPkg.Commit(cataloguePath, "**.json", msg, internal.Dry); err != nil {
if err := gitPkg.Commit(cataloguePath, msg, internal.Dry); err != nil {
logrus.Fatal(err)
}

View File

@ -1,35 +1,67 @@
package internal
import (
"strings"
"context"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/service"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
)
// SafeSplit splits up a string into a list of commands safely.
func SafeSplit(s string) []string {
split := strings.Split(s, " ")
var result []string
var inquote string
var block string
for _, i := range split {
if inquote == "" {
if strings.HasPrefix(i, "'") || strings.HasPrefix(i, "\"") {
inquote = string(i[0])
block = strings.TrimPrefix(i, inquote) + " "
} else {
result = append(result, i)
}
} else {
if !strings.HasSuffix(i, inquote) {
block += i + " "
} else {
block += strings.TrimSuffix(i, inquote)
inquote = ""
result = append(result, block)
block = ""
}
}
// RetrieveBackupBotContainer gets the deployed backupbot container.
func RetrieveBackupBotContainer(cl *dockerClient.Client) (types.Container, error) {
ctx := context.Background()
chosenService, err := service.GetServiceByLabel(ctx, cl, config.BackupbotLabel, NoInput)
if err != nil {
return types.Container{}, err
}
return result
logrus.Debugf("retrieved %s as backup enabled service", chosenService.Spec.Name)
filters := filters.NewArgs()
filters.Add("name", chosenService.Spec.Name)
targetContainer, err := containerPkg.GetContainer(
ctx,
cl,
filters,
NoInput,
)
if err != nil {
return types.Container{}, err
}
return targetContainer, nil
}
// RunBackupCmdRemote runs a backup related command on a remote backupbot container.
func RunBackupCmdRemote(cl *dockerClient.Client, backupCmd string, containerID string, execEnv []string) error {
execBackupListOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"/usr/bin/backup", "--", backupCmd},
Detach: false,
Env: execEnv,
Tty: true,
}
logrus.Debugf("running backup %s on %s with exec config %v", backupCmd, containerID, execBackupListOpts)
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
if _, err := container.RunExec(dcli, cl, containerID, &execBackupListOpts); err != nil {
return err
}
return nil
}

View File

@ -60,7 +60,7 @@ func RunCmdRemote(cl *dockerClient.Client, app config.App, abraSh, serviceName,
Tty: false,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
logrus.Infof("%s does not exist for %s, use /bin/sh as fallback", shell, app.Name)
shell = "/bin/sh"
}
@ -85,7 +85,7 @@ func RunCmdRemote(cl *dockerClient.Client, app config.App, abraSh, serviceName,
execCreateOpts.Tty = false
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
return err
}

40
cli/recipe/diff.go Normal file
View File

@ -0,0 +1,40 @@
package recipe
import (
"path"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/config"
gitPkg "coopcloud.tech/abra/pkg/git"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var recipeDiffCommand = cli.Command{
Name: "diff",
Usage: "Show unstaged changes in recipe config",
Description: "Due to limitations in our underlying Git dependency, this command requires /usr/bin/git.",
Aliases: []string{"d"},
ArgsUsage: "<recipe>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipeName := c.Args().First()
if recipeName != "" {
internal.ValidateRecipe(c)
}
recipeDir := path.Join(config.RECIPES_DIR, recipeName)
if err := gitPkg.DiffUnstaged(recipeDir); err != nil {
logrus.Fatal(err)
}
return nil
},
}

View File

@ -3,6 +3,7 @@ package recipe
import (
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/recipe"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
@ -17,26 +18,31 @@ var recipeFetchCommand = cli.Command{
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.OfflineFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipeName := c.Args().First()
if recipeName != "" {
internal.ValidateRecipe(c)
if err := recipe.Ensure(recipeName); err != nil {
logrus.Fatal(err)
}
return nil
}
if err := recipe.EnsureExists(recipeName); err != nil {
catalogue, err := recipe.ReadRecipeCatalogue(internal.Offline)
if err != nil {
logrus.Fatal(err)
}
if err := recipe.EnsureUpToDate(recipeName); err != nil {
logrus.Fatal(err)
}
if err := recipe.EnsureLatest(recipeName); err != nil {
logrus.Fatal(err)
catlBar := formatter.CreateProgressbar(len(catalogue), "fetching latest recipes...")
for recipeName := range catalogue {
if err := recipe.Ensure(recipeName); err != nil {
logrus.Error(err)
}
catlBar.Add(1)
}
return nil

View File

@ -30,5 +30,7 @@ manner. Abra supports convenient automation for recipe maintenance, see the
recipeSyncCommand,
recipeUpgradeCommand,
recipeVersionCommand,
recipeResetCommand,
recipeDiffCommand,
},
}

View File

@ -1,7 +1,9 @@
package recipe
import (
"errors"
"fmt"
"os"
"path"
"strconv"
"strings"
@ -106,6 +108,18 @@ your SSH keys configured on your account.
}
}
isClean, err := gitPkg.IsClean(recipe.Dir())
if err != nil {
logrus.Fatal(err)
}
if !isClean {
logrus.Infof("%s currently has these unstaged changes 👇", recipe.Name)
if err := gitPkg.DiffUnstaged(recipe.Dir()); err != nil {
logrus.Fatal(err)
}
}
if len(tags) > 0 {
logrus.Warnf("previous git tags detected, assuming this is a new semver release")
if err := createReleaseFromPreviousTag(tagString, mainAppVersion, recipe, tags); err != nil {
@ -128,7 +142,7 @@ your SSH keys configured on your account.
// getImageVersions retrieves image versions for a recipe
func getImageVersions(recipe recipe.Recipe) (map[string]string, error) {
var services = make(map[string]string)
services := make(map[string]string)
missingTag := false
for _, service := range recipe.Config.Services {
@ -195,6 +209,10 @@ func createReleaseFromTag(recipe recipe.Recipe, tagString, mainAppVersion string
tagString = fmt.Sprintf("%s+%s", tag.String(), mainAppVersion)
}
if err := addReleaseNotes(recipe, tagString); err != nil {
logrus.Fatal(err)
}
if err := commitRelease(recipe, tagString); err != nil {
logrus.Fatal(err)
}
@ -225,6 +243,82 @@ func getTagCreateOptions(tag string) (git.CreateTagOptions, error) {
return git.CreateTagOptions{Message: msg}, nil
}
// addReleaseNotes checks if the release/next release note exists and moves the
// file to release/<tag>.
func addReleaseNotes(recipe recipe.Recipe, tag string) error {
repoPath := path.Join(config.RECIPES_DIR, recipe.Name)
tagReleaseNotePath := path.Join(repoPath, "release", tag)
if _, err := os.Stat(tagReleaseNotePath); err == nil {
// Release note for the current tag already exists.
return nil
} else if !errors.Is(err, os.ErrNotExist) {
return err
}
nextReleaseNotePath := path.Join(repoPath, "release", "next")
if _, err := os.Stat(nextReleaseNotePath); err == nil {
// release/next note exists. Move it to release/<tag>
if internal.Dry {
logrus.Debugf("dry run: move release note from 'next' to %s", tag)
return nil
}
if !internal.NoInput {
prompt := &survey.Input{
Message: "Use release note in release/next?",
}
var addReleaseNote bool
if err := survey.AskOne(prompt, &addReleaseNote); err != nil {
return err
}
if !addReleaseNote {
return nil
}
}
err := os.Rename(nextReleaseNotePath, tagReleaseNotePath)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", "next"), internal.Dry)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", tag), internal.Dry)
if err != nil {
return err
}
} else if !errors.Is(err, os.ErrNotExist) {
return err
}
// No release note exists for the current release.
if internal.NoInput {
return nil
}
prompt := &survey.Input{
Message: "Release Note (leave empty for no release note)",
}
var releaseNote string
if err := survey.AskOne(prompt, &releaseNote); err != nil {
return err
}
if releaseNote == "" {
return nil
}
err := os.WriteFile(tagReleaseNotePath, []byte(releaseNote), 0o644)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", tag), internal.Dry)
if err != nil {
return err
}
return nil
}
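A hedged sketch of calling the new helper during a release (the tag string is invented; recipe is the recipe being released):

// release/<tag> already present: nothing to do. release/next present: it is
// moved to release/<tag> and staged. Neither present: prompt for a note,
// unless internal.NoInput is set.
if err := addReleaseNotes(recipe, "1.2.3+2.0.0"); err != nil { // hypothetical tag
logrus.Fatal(err)
}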
func commitRelease(recipe recipe.Recipe, tag string) error {
if internal.Dry {
logrus.Debugf("dry run: no changes committed")
@ -244,7 +338,7 @@ func commitRelease(recipe recipe.Recipe, tag string) error {
msg := fmt.Sprintf("chore: publish %s release", tag)
repoPath := path.Join(config.RECIPES_DIR, recipe.Name)
if err := gitPkg.Commit(repoPath, ".", msg, internal.Dry); err != nil {
if err := gitPkg.Commit(repoPath, msg, internal.Dry); err != nil {
return err
}
@ -392,6 +486,10 @@ func createReleaseFromPreviousTag(tagString, mainAppVersion string, recipe recip
}
}
if err := addReleaseNotes(recipe, tagString); err != nil {
logrus.Fatal(err)
}
if err := commitRelease(recipe, tagString); err != nil {
logrus.Fatalf("failed to commit changes: %s", err.Error())
}

56
cli/recipe/reset.go Normal file
View File

@ -0,0 +1,56 @@
package recipe
import (
"path"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/config"
"github.com/go-git/go-git/v5"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var recipeResetCommand = cli.Command{
Name: "reset",
Usage: "Remove all unstaged changes from recipe config",
Description: "WARNING, this will delete your changes. Be Careful.",
Aliases: []string{"rs"},
ArgsUsage: "<recipe>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipeName := c.Args().First()
if recipeName != "" {
internal.ValidateRecipe(c)
}
repoPath := path.Join(config.RECIPES_DIR, recipeName)
repo, err := git.PlainOpen(repoPath)
if err != nil {
logrus.Fatal(err)
}
ref, err := repo.Head()
if err != nil {
logrus.Fatal(err)
}
worktree, err := repo.Worktree()
if err != nil {
logrus.Fatal(err)
}
opts := &git.ResetOptions{Commit: ref.Hash(), Mode: git.HardReset}
if err := worktree.Reset(opts); err != nil {
logrus.Fatal(err)
}
return nil
},
}

View File

@ -8,6 +8,7 @@ import (
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/config"
gitPkg "coopcloud.tech/abra/pkg/git"
"coopcloud.tech/tagcmp"
"github.com/AlecAivazis/survey/v2"
"github.com/go-git/go-git/v5"
@ -198,6 +199,17 @@ likely to change.
logrus.Infof("dry run: not syncing label %s for recipe %s", nextTag, recipe.Name)
}
isClean, err := gitPkg.IsClean(recipe.Dir())
if err != nil {
logrus.Fatal(err)
}
if !isClean {
logrus.Infof("%s currently has these unstaged changes 👇", recipe.Name)
if err := gitPkg.DiffUnstaged(recipe.Dir()); err != nil {
logrus.Fatal(err)
}
}
return nil
},
}

View File

@ -14,6 +14,7 @@ import (
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/formatter"
gitPkg "coopcloud.tech/abra/pkg/git"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/tagcmp"
"github.com/AlecAivazis/survey/v2"
@ -326,6 +327,7 @@ You may invoke this command in "wizard" mode and be prompted for input:
}
fmt.Println(string(jsonstring))
return nil
}
@ -336,6 +338,18 @@ You may invoke this command in "wizard" mode and be prompted for input:
}
}
}
isClean, err := gitPkg.IsClean(recipeDir)
if err != nil {
logrus.Fatal(err)
}
if !isClean {
logrus.Infof("%s currently has these unstaged changes 👇", recipe.Name)
if err := gitPkg.DiffUnstaged(recipeDir); err != nil {
logrus.Fatal(err)
}
}
return nil
},
}

26
go.mod
View File

@ -4,19 +4,20 @@ go 1.21
require (
coopcloud.tech/tagcmp v0.0.0-20211103052201-885b22f77d52
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355
github.com/AlecAivazis/survey/v2 v2.3.7
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7
github.com/Gurpartap/logrus-stack v0.0.0-20170710170904-89c00d8a28f4
github.com/docker/cli v24.0.6+incompatible
github.com/docker/cli v24.0.7+incompatible
github.com/docker/distribution v2.8.3+incompatible
github.com/docker/docker v24.0.6+incompatible
github.com/docker/docker v24.0.7+incompatible
github.com/docker/go-units v0.5.0
github.com/go-git/go-git/v5 v5.9.0
github.com/go-git/go-git/v5 v5.10.0
github.com/google/go-cmp v0.5.9
github.com/moby/sys/signal v0.7.0
github.com/moby/term v0.5.0
github.com/olekukonko/tablewriter v0.0.5
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.13.1
github.com/schollz/progressbar/v3 v3.14.1
github.com/sirupsen/logrus v1.9.3
gotest.tools/v3 v3.5.1
)
@ -47,7 +48,6 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
@ -56,7 +56,7 @@ require (
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.14.2 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect
@ -71,18 +71,18 @@ require (
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/skeema/knownhosts v1.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
golang.org/x/crypto v0.13.0 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.15.0 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/term v0.12.0 // indirect
golang.org/x/term v0.14.0 // indirect
golang.org/x/text v0.13.0 // indirect
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
golang.org/x/tools v0.13.0 // indirect
@ -104,7 +104,7 @@ require (
github.com/fvbommel/sortorder v1.0.2 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4
github.com/hashicorp/go-retryablehttp v0.7.5
github.com/klauspost/pgzip v1.2.6
github.com/moby/patternmatcher v0.5.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
@ -116,5 +116,5 @@ require (
github.com/theupdateframework/notary v0.7.0 // indirect
github.com/urfave/cli v1.22.9
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b // indirect
golang.org/x/sys v0.13.0
golang.org/x/sys v0.14.0
)

52
go.sum
View File

@ -51,12 +51,12 @@ coopcloud.tech/tagcmp v0.0.0-20211103052201-885b22f77d52/go.mod h1:ESVm0wQKcbcFi
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355 h1:tCv2B4qoN6RMheKDnCzIafOkWS5BB1h7hwhmo+9bVeE=
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355/go.mod h1:Q8V1zbtPAlzYSr/Dvky3wS6x58IQAl3rot2me1oSO2Q=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 h1:EKPd1INOIyr5hWOWhvpmQpY6tKjeG0hT1s3AMC/9fic=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1/go.mod h1:VzwV+t+dZ9j/H867F1M2ziD+yLHtB46oM35FxxMJ4d0=
github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ=
github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo=
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7 h1:asQtdXYbxEYWcwAQqJTVYC/RltB4eqoWKvqWg/LFPOg=
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7/go.mod h1:oZRCMMRS318l07ei4DTqbZoOawfJlJ4yyo8juk2v4Rk=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
@ -339,16 +339,16 @@ github.com/distribution/reference v0.5.0 h1:/FUIFXtfc/x2gpa5/VGfiGLuOIdYa1t65IKK
github.com/distribution/reference v0.5.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/docker/cli v0.0.0-20191017083524-a8ff7f821017/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v24.0.6+incompatible h1:fF+XCQCgJjjQNIMjzaSmiKJSCcfcXb3TWTcc7GAneOY=
github.com/docker/cli v24.0.6+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1xfI36MSkFg=
github.com/docker/cli v24.0.7+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v1.4.2-0.20190924003213-a8608b5b67c7/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v24.0.6+incompatible h1:hceabKCtUgDqPu+qm0NgsaXf28Ljf4/pWFL7xjWWDgE=
github.com/docker/docker v24.0.6+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.4 h1:axCks+yV+2MR3/kZhAmy07yC56WZ2Pwu/fKWtKuZB0o=
github.com/docker/docker-credential-helpers v0.6.4/go.mod h1:ofX3UI0Gz1TteYBjtgs07O36Pyasyp66D2uKT7H8W1c=
@ -417,10 +417,10 @@ github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66D
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.5.0 h1:yEY4yhzCDuMGSv83oGxiBotRzhwhNr8VZyphhiu+mTU=
github.com/go-git/go-billy/v5 v5.5.0/go.mod h1:hmexnoNsr2SJU1Ju67OaNz5ASJY3+sHgFRpCtpDCKow=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20230305113008-0c11038e723f h1:Pz0DHeFij3XFhoBRGUDPzSJ+w2UcK5/0JvF8DRI58r8=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20230305113008-0c11038e723f/go.mod h1:8LHG1a3SRW71ettAD/jW13h8c6AqjVSeL11RAdgaqpo=
github.com/go-git/go-git/v5 v5.9.0 h1:cD9SFA7sHVRdJ7AYck1ZaAa/yeuBvGPxwXDL8cxrObY=
github.com/go-git/go-git/v5 v5.9.0/go.mod h1:RKIqga24sWdMGZF+1Ekv9kylsDz6LzdTSI2s/OsZWE0=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
github.com/go-git/go-git/v5 v5.10.0 h1:F0x3xXrAWmhwtzoCokU4IMPcBdncG+HAAqi9FcOOjbQ=
github.com/go-git/go-git/v5 v5.10.0/go.mod h1:1FOZ/pQnqw24ghP2n7cunVl0ON55BsjPYvhWHvZGhoo=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@ -590,8 +590,8 @@ github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHh
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.7.4 h1:ZQgVdpTdAL7WpMIwLzCfbalOcSUdkDZnpUv3/+BxzFA=
github.com/hashicorp/go-retryablehttp v0.7.4/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-retryablehttp v0.7.5 h1:bJj+Pj19UZMIweq/iie+1u5YCdGrnxCT9yvm0e+Nd5M=
github.com/hashicorp/go-retryablehttp v0.7.5/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
@ -705,8 +705,8 @@ github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcME
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU=
@ -885,8 +885,9 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/prometheus/procfs v0.10.1 h1:kYK1Va/YMlutzCGazswoHKo//tZVlFpKYh+PymziUAg=
github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
@ -900,8 +901,8 @@ github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb
github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
github.com/sagikazarmark/crypt v0.3.0/go.mod h1:uD/D+6UF4SrIR1uGEv7bBNkNqLGqUr43MRiaGWX1Nig=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/schollz/progressbar/v3 v3.13.1 h1:o8rySDYiQ59Mwzy2FELeHY5ZARXZTVJC7iHD6PEFUiE=
github.com/schollz/progressbar/v3 v3.13.1/go.mod h1:xvrbki8kfT1fzWzBT/UZd9L6GA+jdL7HAgq2RFnO6fQ=
github.com/schollz/progressbar/v3 v3.14.1 h1:VD+MJPCr4s3wdhTc7OEJ/Z3dAeBzJ7yKH/P4lC5yRTI=
github.com/schollz/progressbar/v3 v3.14.1/go.mod h1:Zc9xXneTzWXF81TGoqL71u0sBPjULtEHYtj/WVgVy8E=
github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
@ -1069,8 +1070,8 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.13.0 h1:mvySKfSWJ+UKUii46M40LOvyWfN0s2U+46/jDd0e6Ck=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -1168,8 +1169,8 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.15.0 h1:ugBLEUaxABaB5AJqW9enI0ACdci2RUd4eP51NTBvuJ8=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -1310,21 +1311,20 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.12.0 h1:/ZfYdc3zq+q02Rv9vGqTeSItdzZTSNDmfTi0mBAuidU=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.14.0 h1:LGK9IlZ8T9jvdy6cTdfKUCltatMFOehAQo9SRC46UQ8=
golang.org/x/term v0.14.0/go.mod h1:TySc+nGkYR6qt8km8wUhuFRTVSMIX3XPR58y2lC8vww=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

View File

@ -25,6 +25,16 @@ func AppNameComplete(c *cli.Context) {
}
}
// ServiceNameComplete completes service names for the given app.
func ServiceNameComplete(appName string) {
serviceNames, err := config.GetAppServiceNames(appName)
if err != nil {
return
}
for _, s := range serviceNames {
fmt.Println(s)
}
}
// RecipeNameComplete completes recipe names.
func RecipeNameComplete(c *cli.Context) {
catl, err := recipe.ReadRecipeCatalogue(false)
@ -41,6 +51,20 @@ func RecipeNameComplete(c *cli.Context) {
}
}
// RecipeVersionComplete completes versions for the recipe.
func RecipeVersionComplete(recipeName string) {
catl, err := recipe.ReadRecipeCatalogue(false)
if err != nil {
logrus.Warn(err)
}
for _, v := range catl[recipeName].Versions {
for v2 := range v {
fmt.Println(v2)
}
}
}
// ServerNameComplete completes server names.
func ServerNameComplete(c *cli.Context) {
files, err := config.LoadAppFiles("")

View File

@ -12,46 +12,6 @@ import (
"github.com/sirupsen/logrus"
)
// CatalogueSkipList is all the repos that are not recipes.
var CatalogueSkipList = map[string]bool{
"abra": true,
"abra-apps": true,
"abra-aur": true,
"abra-bash": true,
"abra-capsul": true,
"abra-gandi": true,
"abra-hetzner": true,
"abra-test-recipe": true,
"apps": true,
"aur-abra-git": true,
"auto-mirror": true,
"auto-recipes-catalogue-json": true,
"backup-bot": true,
"backup-bot-two": true,
"beta.coopcloud.tech": true,
"comrade-renovate-bot": true,
"coopcloud.tech": true,
"coturn": true,
"docker-cp-deploy": true,
"docker-dind-bats-kcov": true,
"docs.coopcloud.tech": true,
"drone-abra": true,
"example": true,
"gardening": true,
"go-abra": true,
"organising": true,
"pyabra": true,
"radicle-seed-node": true,
"recipes-catalogue-json": true,
"recipes-wishlist": true,
"recipes.coopcloud.tech": true,
"stack-ssh-deploy": true,
"swarm-cronjob": true,
"tagcmp": true,
"traefik-cert-dumper": true,
"tyop": true,
}
// EnsureCatalogue ensures that the catalogue is cloned locally & present.
func EnsureCatalogue() error {
catalogueDir := path.Join(config.ABRA_DIR, "catalogue")

View File

@ -2,15 +2,17 @@ package client
import (
"context"
"fmt"
"time"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/client"
"github.com/sirupsen/logrus"
)
func GetVolumes(cl *client.Client, ctx context.Context, server string, fs filters.Args) ([]*volume.Volume, error) {
volumeListOptions := volume.ListOptions{fs}
volumeListOKBody, err := cl.VolumeList(ctx, volumeListOptions)
volumeListOKBody, err := cl.VolumeList(ctx, volume.ListOptions{Filters: fs})
volumeList := volumeListOKBody.Volumes
if err != nil {
return volumeList, err
@ -29,13 +31,32 @@ func GetVolumeNames(volumes []*volume.Volume) []string {
return volumeNames
}
func RemoveVolumes(cl *client.Client, ctx context.Context, server string, volumeNames []string, force bool) error {
func RemoveVolumes(cl *client.Client, ctx context.Context, volumeNames []string, force bool, retries int) error {
for _, volName := range volumeNames {
err := cl.VolumeRemove(ctx, volName, force)
err := retryFunc(retries, func() error {
return cl.VolumeRemove(ctx, volName, force)
})
if err != nil {
return err
return fmt.Errorf("volume %s: %s", volName, err)
}
}
return nil
}
// retryFunc calls the given function up to the given number of retries. After
// the nth attempt it waits (n + 1)^2 seconds before the next one (starting with n=0).
// It returns an error if the function still fails after the last retry.
func retryFunc(retries int, fn func() error) error {
for i := 0; i < retries; i++ {
err := fn()
if err == nil {
return nil
}
if i+1 < retries {
sleep := time.Duration(i+1) * time.Duration(i+1)
logrus.Infof("%s: waiting %d seconds before next retry", err, sleep)
time.Sleep(sleep * time.Second)
}
}
return fmt.Errorf("%d retries failed", retries)
}
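To make the retry behaviour concrete, a minimal sketch of a caller (the volume name is invented and cl is assumed to be a connected *client.Client):

// With retries=5, a persistently failing removal is attempted five times,
// sleeping 1, 4, 9 and 16 seconds between attempts before giving up with
// "5 retries failed".
volumeNames := []string{"example_com_app_data"} // hypothetical volume name
if err := client.RemoveVolumes(cl, context.Background(), volumeNames, false, 5); err != nil {
logrus.Fatalf("removing volumes failed: %s", err)
}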

View File

@ -0,0 +1,26 @@
package client
import (
"fmt"
"testing"
)
func TestRetryFunc(t *testing.T) {
err := retryFunc(1, func() error { return nil })
if err != nil {
t.Errorf("should not return an error: %s", err)
}
i := 0
fn := func() error {
i++
return fmt.Errorf("oh no, something went wrong!")
}
err = retryFunc(2, fn)
if err == nil {
t.Error("should return an error")
}
if i != 2 {
t.Errorf("The function should have been called 1 times, got %d", i)
}
}

View File

@ -29,7 +29,7 @@ func UpdateTag(pattern, image, tag, recipeName string) (bool, error) {
opts := stack.Deploy{Composefiles: []string{composeFile}}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return false, err
}
@ -97,7 +97,7 @@ func UpdateLabel(pattern, serviceName, label, recipeName string) error {
opts := stack.Deploy{Composefiles: []string{composeFile}}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return err
}

View File

@ -25,6 +25,9 @@ import (
// AppEnv is a map of the values in an apps env config
type AppEnv = map[string]string
// AppModifiers is a map of modifiers in an apps env config
type AppModifiers = map[string]map[string]string
// AppName is the name of an app.
type AppName = string
@ -47,34 +50,61 @@ type App struct {
Path string
}
// StackName gets whatever the docker safe (uses the right delimiting
// character, e.g. "_") stack name is for the app. In general, you don't want
// to use this to show anything to end-users, you want to use a.Name instead.
// See documentation of config.StackName
func (a App) StackName() string {
if _, exists := a.Env["STACK_NAME"]; exists {
return a.Env["STACK_NAME"]
}
stackName := SanitiseAppName(a.Name)
if len(stackName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", stackName, stackName[:45])
stackName = stackName[:45]
}
stackName := StackName(a.Name)
a.Env["STACK_NAME"] = stackName
return stackName
}
// Filters retrieves exact app filters for querying the container runtime. Due
// to upstream issues, filtering works differently depending on what you're
// StackName gets whatever the docker safe (uses the right delimiting
// character, e.g. "_") stack name is for the app. In general, you don't want
// to use this to show anything to end-users, you want to use a.Name instead.
func StackName(appName string) string {
stackName := SanitiseAppName(appName)
if len(stackName) > MAX_SANITISED_APP_NAME_LENGTH {
logrus.Debugf("trimming %s to %s to avoid runtime limits", stackName, stackName[:MAX_SANITISED_APP_NAME_LENGTH])
stackName = stackName[:MAX_SANITISED_APP_NAME_LENGTH]
}
return stackName
}
// Filters retrieves app filters for querying the container runtime. By default
// it filters on all services in the app. It is also possible to pass an
optional list of service names, which are filtered on instead.
//
// Due to upstream issues, filtering works differently depending on what you're
// querying. So, for example, secrets don't work with regex! The caller needs
// to implement their own validation that the right secrets are matched. In
// order to handle these cases, we provide the `appendServiceNames` /
// `exactMatch` modifiers.
func (a App) Filters(appendServiceNames, exactMatch bool) (filters.Args, error) {
func (a App) Filters(appendServiceNames, exactMatch bool, services ...string) (filters.Args, error) {
filters := filters.NewArgs()
if len(services) > 0 {
for _, serviceName := range services {
filters.Add("name", ServiceFilter(a.StackName(), serviceName, exactMatch))
}
return filters, nil
}
// When not appending the service name, just add one filter for the whole
// stack.
if !appendServiceNames {
f := fmt.Sprintf("%s", a.StackName())
if exactMatch {
f = fmt.Sprintf("^%s", f)
}
filters.Add("name", f)
return filters, nil
}
composeFiles, err := GetComposeFiles(a.Recipe, a.Env)
if err != nil {
@ -88,28 +118,23 @@ func (a App) Filters(appendServiceNames, exactMatch bool) (filters.Args, error)
}
for _, service := range compose.Services {
var filter string
if appendServiceNames {
if exactMatch {
filter = fmt.Sprintf("^%s_%s", a.StackName(), service.Name)
} else {
filter = fmt.Sprintf("%s_%s", a.StackName(), service.Name)
}
} else {
if exactMatch {
filter = fmt.Sprintf("^%s", a.StackName())
} else {
filter = fmt.Sprintf("%s", a.StackName())
}
}
filters.Add("name", filter)
f := ServiceFilter(a.StackName(), service.Name, exactMatch)
filters.Add("name", f)
}
return filters, nil
}
// ServiceFilter creates a filter string for filtering a service in the docker
// container runtime. When exact match is true, it uses regex to match the
// string exactly.
func ServiceFilter(stack, service string, exact bool) string {
if exact {
return fmt.Sprintf("^%s_%s", stack, service)
}
return fmt.Sprintf("%s_%s", stack, service)
}
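For illustration, the filter strings this helper produces for a made-up stack and service:

ServiceFilter("example_com", "app", false) // "example_com_app" (substring match)
ServiceFilter("example_com", "app", true) // "^example_com_app" (anchored match)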
// ByServer sort a slice of Apps
type ByServer []App
@ -150,7 +175,7 @@ func (a ByName) Less(i, j int) bool {
}
func ReadAppEnvFile(appFile AppFile, name AppName) (App, error) {
env, err := ReadEnv(appFile.Path, ReadEnvOptions{})
env, err := ReadEnv(appFile.Path)
if err != nil {
return App{}, fmt.Errorf("env file for %s couldn't be read: %s", name, err.Error())
}
@ -330,7 +355,7 @@ func TemplateAppEnvSample(recipeName, appName, server, domain string) error {
return fmt.Errorf("%s already exists?", appEnvPath)
}
err = ioutil.WriteFile(appEnvPath, envSample, 0664)
err = ioutil.WriteFile(appEnvPath, envSample, 0o664)
if err != nil {
return err
}
@ -592,7 +617,7 @@ func GetLabel(compose *composetypes.Config, stackName string, label string) stri
// GetTimeoutFromLabel reads the timeout value from docker label "coop-cloud.${STACK_NAME}.TIMEOUT" and returns 50 as the default value
func GetTimeoutFromLabel(compose *composetypes.Config, stackName string) (int, error) {
var timeout = 50 // Default Timeout
timeout := 50 // Default Timeout
var err error = nil
if timeoutLabel := GetLabel(compose, stackName, "timeout"); timeoutLabel != "" {
logrus.Debugf("timeout label: %s", timeoutLabel)

View File

@ -1,12 +1,15 @@
package config_test
import (
"encoding/json"
"fmt"
"reflect"
"testing"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"github.com/docker/docker/api/types/filters"
"github.com/google/go-cmp/cmp"
"github.com/stretchr/testify/assert"
)
@ -106,3 +109,89 @@ func TestGetComposeFilesError(t *testing.T) {
}
}
}
func TestFilters(t *testing.T) {
oldDir := config.RECIPES_DIR
config.RECIPES_DIR = "./testdir"
defer func() {
config.RECIPES_DIR = oldDir
}()
app, err := config.NewApp(config.AppEnv{
"DOMAIN": "test.example.com",
"RECIPE": "test-recipe",
}, "test_example_com", config.AppFile{
Path: "./testdir/filtertest.end",
Server: "local",
})
if err != nil {
t.Fatal(err)
}
f, err := app.Filters(false, false)
if err != nil {
t.Error(err)
}
compareFilter(t, f, map[string]map[string]bool{
"name": {
"test_example_com": true,
},
})
f2, err := app.Filters(false, true)
if err != nil {
t.Error(err)
}
compareFilter(t, f2, map[string]map[string]bool{
"name": {
"^test_example_com": true,
},
})
f3, err := app.Filters(true, false)
if err != nil {
t.Error(err)
}
compareFilter(t, f3, map[string]map[string]bool{
"name": {
"test_example_com_bar": true,
"test_example_com_foo": true,
},
})
f4, err := app.Filters(true, true)
if err != nil {
t.Error(err)
}
compareFilter(t, f4, map[string]map[string]bool{
"name": {
"^test_example_com_bar": true,
"^test_example_com_foo": true,
},
})
f5, err := app.Filters(false, false, "foo")
if err != nil {
t.Error(err)
}
compareFilter(t, f5, map[string]map[string]bool{
"name": {
"test_example_com_foo": true,
},
})
}
func compareFilter(t *testing.T, f1 filters.Args, f2 map[string]map[string]bool) {
t.Helper()
j1, err := f1.MarshalJSON()
if err != nil {
t.Error(err)
}
j2, err := json.Marshal(f2)
if err != nil {
t.Error(err)
}
if diff := cmp.Diff(string(j2), string(j1)); diff != "" {
t.Errorf("filters mismatch (-want +got):\n%s", diff)
}
}

View File

@ -12,7 +12,7 @@ import (
"sort"
"strings"
"github.com/Autonomic-Cooperative/godotenv"
"git.coopcloud.tech/coop-cloud/godotenv"
"github.com/sirupsen/logrus"
)
@ -36,6 +36,11 @@ var REPOS_BASE_URL = "https://git.coopcloud.tech/coop-cloud"
var CATALOGUE_JSON_REPO_NAME = "recipes-catalogue-json"
var SSH_URL_TEMPLATE = "ssh://git@git.coopcloud.tech:2222/coop-cloud/%s.git"
const MAX_SANITISED_APP_NAME_LENGTH = 45
const MAX_DOCKER_SECRET_LENGTH = 64
var BackupbotLabel = "coop-cloud.backupbot.enabled"
// envVarModifiers is a list of env var modifier strings. These are added to
// env vars as comments and modify their processing by Abra, e.g. determining
// how long secrets should be.
@ -55,45 +60,34 @@ func GetServers() ([]string, error) {
return servers, nil
}
// ReadEnvOptions modifies the ReadEnv processing of env vars.
type ReadEnvOptions struct {
IncludeModifiers bool
}
// ContainsEnvVarModifier determines if an env var contains a modifier.
func ContainsEnvVarModifier(envVar string) bool {
for _, mod := range envVarModifiers {
if strings.Contains(envVar, fmt.Sprintf("%s=", mod)) {
return true
}
}
return false
}
// ReadEnv loads an app environment into a map.
func ReadEnv(filePath string, opts ReadEnvOptions) (AppEnv, error) {
func ReadEnv(filePath string) (AppEnv, error) {
var envVars AppEnv
envVars, err := godotenv.Read(filePath)
envVars, _, err := godotenv.Read(filePath)
if err != nil {
return nil, err
}
for idx, envVar := range envVars {
if strings.Contains(envVar, "#") {
if opts.IncludeModifiers && ContainsEnvVarModifier(envVar) {
continue
}
vals := strings.Split(envVar, "#")
envVars[idx] = strings.TrimSpace(vals[0])
}
}
logrus.Debugf("read %s from %s", envVars, filePath)
return envVars, nil
}
// ReadEnvWithModifiers loads an app environment and its modifiers into two separate maps.
func ReadEnvWithModifiers(filePath string) (AppEnv, AppModifiers, error) {
var envVars AppEnv
envVars, mods, err := godotenv.Read(filePath)
if err != nil {
return nil, mods, err
}
logrus.Debugf("read %s from %s", envVars, filePath)
return envVars, mods, nil
}
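A rough usage sketch, assuming the modifiers map returned by the godotenv fork is keyed by variable name (the file path and variable are hypothetical):

// Given a line like `SECRET_DB_PASSWORD_VERSION=v1 # length=32` in the env file,
// the value comes back without the comment and the modifier is kept separately.
env, mods, err := ReadEnvWithModifiers("/path/to/example.env")
if err != nil {
logrus.Fatal(err)
}
logrus.Debugf("value: %s, modifiers: %v", env["SECRET_DB_PASSWORD_VERSION"], mods["SECRET_DB_PASSWORD_VERSION"])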
// ReadServerNames retrieves all server names.
func ReadServerNames() ([]string, error) {
serverNames, err := GetAllFoldersInDirectory(SERVERS_DIR)
@ -227,7 +221,7 @@ func CheckEnv(app App) ([]EnvVar, error) {
return envVars, err
}
envSample, err := ReadEnv(envSamplePath, ReadEnvOptions{})
envSample, err := ReadEnv(envSamplePath)
if err != nil {
return envVars, err
}
@ -249,3 +243,39 @@ func CheckEnv(app App) ([]EnvVar, error) {
return envVars, nil
}
// ReadAbraShCmdNames reads the names of commands.
func ReadAbraShCmdNames(abraSh string) ([]string, error) {
var cmdNames []string
file, err := os.Open(abraSh)
if err != nil {
if os.IsNotExist(err) {
return cmdNames, nil
}
return cmdNames, err
}
defer file.Close()
cmdNameRegex, err := regexp.Compile(`(\w+)(\(\).*\{)`)
if err != nil {
return cmdNames, err
}
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
matches := cmdNameRegex.FindStringSubmatch(line)
if len(matches) > 0 {
cmdNames = append(cmdNames, matches[1])
}
}
if len(cmdNames) > 0 {
logrus.Debugf("read %s from %s", strings.Join(cmdNames, " "), abraSh)
} else {
logrus.Debugf("read 0 command names from %s", abraSh)
}
return cmdNames, nil
}
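To illustrate what the regular expression above captures, assuming an abra.sh that defines a function such as test_cmd() { ... }:
cmdNameRegex := regexp.MustCompile(`(\w+)(\(\).*\{)`)
matches := cmdNameRegex.FindStringSubmatch("test_cmd() {")
// matches[1] == "test_cmd"; ReadAbraShCmdNames collects this first capture group per line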

View File

@ -5,6 +5,7 @@ import (
"os"
"path"
"reflect"
"slices"
"strings"
"testing"
@ -12,15 +13,21 @@ import (
"coopcloud.tech/abra/pkg/recipe"
)
var TestFolder = os.ExpandEnv("$PWD/../../tests/resources/test_folder")
var ValidAbraConf = os.ExpandEnv("$PWD/../../tests/resources/valid_abra_config")
var (
TestFolder = os.ExpandEnv("$PWD/../../tests/resources/test_folder")
ValidAbraConf = os.ExpandEnv("$PWD/../../tests/resources/valid_abra_config")
)
// make sure these are in alphabetical order
var TFolders = []string{"folder1", "folder2"}
var TFiles = []string{"bar.env", "foo.env"}
var (
TFolders = []string{"folder1", "folder2"}
TFiles = []string{"bar.env", "foo.env"}
)
var AppName = "ecloud"
var ServerName = "evil.corp"
var (
AppName = "ecloud"
ServerName = "evil.corp"
)
var ExpectedAppEnv = config.AppEnv{
"DOMAIN": "ecloud.evil.corp",
@ -70,7 +77,7 @@ func TestGetAllFilesInDirectory(t *testing.T) {
}
func TestReadEnv(t *testing.T) {
env, err := config.ReadEnv(ExpectedAppFile.Path, config.ReadEnvOptions{})
env, err := config.ReadEnv(ExpectedAppFile.Path)
if err != nil {
t.Fatal(err)
}
@ -115,6 +122,31 @@ func TestReadAbraShEnvVars(t *testing.T) {
}
}
func TestReadAbraShCmdNames(t *testing.T) {
offline := true
r, err := recipe.Get("abra-test-recipe", offline)
if err != nil {
t.Fatal(err)
}
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, r.Name, "abra.sh")
cmdNames, err := config.ReadAbraShCmdNames(abraShPath)
if err != nil {
t.Fatal(err)
}
if len(cmdNames) == 0 {
t.Error("at least one command name should be found")
}
expectedCmdNames := []string{"test_cmd", "test_cmd_args"}
for _, cmdName := range expectedCmdNames {
if !slices.Contains(cmdNames, cmdName) {
t.Fatalf("%s should have been found in %s", cmdName, abraShPath)
}
}
}
func TestCheckEnv(t *testing.T) {
offline := true
r, err := recipe.Get("abra-test-recipe", offline)
@ -123,7 +155,7 @@ func TestCheckEnv(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -157,7 +189,7 @@ func TestCheckEnvError(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -185,16 +217,6 @@ func TestCheckEnvError(t *testing.T) {
}
}
func TestContainsEnvVarModifier(t *testing.T) {
if ok := config.ContainsEnvVarModifier("FOO=bar # bing"); ok {
t.Fatal("FOO contains no env var modifier")
}
if ok := config.ContainsEnvVarModifier("FOO=bar # length=3"); !ok {
t.Fatal("FOO contains an env var modifier (length)")
}
}
func TestEnvVarCommentsRemoved(t *testing.T) {
offline := true
r, err := recipe.Get("abra-test-recipe", offline)
@ -203,7 +225,7 @@ func TestEnvVarCommentsRemoved(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -235,12 +257,19 @@ func TestEnvVarModifiersIncluded(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{IncludeModifiers: true})
envSample, modifiers, err := config.ReadEnvWithModifiers(envSamplePath)
if err != nil {
t.Fatal(err)
}
if !strings.Contains(envSample["SECRET_TEST_PASS_TWO_VERSION"], "length") {
t.Fatal("comment from env var SECRET_TEST_PASS_TWO_VERSION should not be removed")
if !strings.Contains(envSample["SECRET_TEST_PASS_TWO_VERSION"], "v1") {
t.Errorf("value should be 'v1', got: '%s'", envSample["SECRET_TEST_PASS_TWO_VERSION"])
}
if modifiers == nil || modifiers["SECRET_TEST_PASS_TWO_VERSION"] == nil {
t.Errorf("no modifiers included")
} else {
if modifiers["SECRET_TEST_PASS_TWO_VERSION"]["length"] != "10" {
t.Errorf("length modifier should be '10', got: '%s'", modifiers["SECRET_TEST_PASS_TWO_VERSION"]["length"])
}
}
}

View File

@ -0,0 +1,2 @@
RECIPE=test-recipe
DOMAIN=test.example.com

View File

@ -0,0 +1,6 @@
version: "3.8"
services:
foo:
image: debian
bar:
image: debian

View File

@ -28,7 +28,7 @@ func GetContainer(c context.Context, cl *client.Client, filters filters.Args, no
return types.Container{}, fmt.Errorf("no containers matching the %v filter found?", filter)
}
if len(containers) != 1 {
if len(containers) > 1 {
var containersRaw []string
for _, container := range containers {
containerName := strings.Join(container.Names, " ")
@ -68,3 +68,15 @@ func GetContainer(c context.Context, cl *client.Client, filters filters.Args, no
return containers[0], nil
}
// GetContainerFromStackAndService retrieves the container for the given stack and service.
func GetContainerFromStackAndService(cl *client.Client, stack, service string) (types.Container, error) {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", stack, service))
container, err := GetContainer(context.Background(), cl, filters, true)
if err != nil {
return types.Container{}, err
}
return container, nil
}
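A rough usage sketch (the package qualifier, stack and service names are illustrative); the helper builds a "^<stack>_<service>" name filter and hands it to GetContainer:
c, err := container.GetContainerFromStackAndService(cl, "test_example_com", "app")
if err != nil {
    logrus.Fatal(err)
}
// c is the container whose name starts with "test_example_com_app"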

pkg/git/add.go (new file, 27 lines)
View File

@ -0,0 +1,27 @@
package git
import (
"github.com/go-git/go-git/v5"
"github.com/sirupsen/logrus"
)
// Add adds a file to the git index.
func Add(repoPath, path string, dryRun bool) error {
repo, err := git.PlainOpen(repoPath)
if err != nil {
return err
}
worktree, err := repo.Worktree()
if err != nil {
return err
}
if dryRun {
logrus.Debugf("dry run: adding %s", path)
} else {
if _, err := worktree.Add(path); err != nil {
return err
}
}
return nil
}

View File

@ -8,7 +8,7 @@ import (
)
// Commit runs a git commit
func Commit(repoPath, glob, commitMessage string, dryRun bool) error {
func Commit(repoPath, commitMessage string, dryRun bool) error {
if commitMessage == "" {
return fmt.Errorf("no commit message specified?")
}
@ -33,17 +33,8 @@ func Commit(repoPath, glob, commitMessage string, dryRun bool) error {
}
if !dryRun {
err = commitWorktree.AddGlob(glob)
if err != nil {
return err
}
logrus.Debugf("staged %s for commit", glob)
} else {
logrus.Debugf("dry run: did not stage %s for commit", glob)
}
if !dryRun {
_, err = commitWorktree.Commit(commitMessage, &git.CommitOptions{})
// NOTE(d1): `All: true` does not include untracked files
_, err = commitWorktree.Commit(commitMessage, &git.CommitOptions{All: true})
if err != nil {
return err
}
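A hedged sketch of how the two helpers are meant to compose (paths and message are illustrative): since CommitOptions{All: true} only stages changes to already-tracked files, new files still need an explicit Add before Commit:
if err := git.Add(recipeDir, "release/next", dryRun); err != nil {
    return err
}
if err := git.Commit(recipeDir, "chore: add release notes", dryRun); err != nil {
    return err
}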

pkg/git/diff.go (new file, 42 lines)
View File

@ -0,0 +1,42 @@
package git
import (
"fmt"
"os/exec"
"github.com/sirupsen/logrus"
)
// getGitDiffArgs builds the `git diff` invocation args. It removes the usage
// of a pager and ensures that colours are specified even when Git might detect
// otherwise.
func getGitDiffArgs(repoPath string) []string {
return []string{
"-C",
repoPath,
"--no-pager",
"-c",
"color.diff=always",
"diff",
}
}
// DiffUnstaged shows a `git diff`. Due to limitations in the underlying go-git
// library, this implementation requires the /usr/bin/git binary. It gracefully
// skips if it cannot find the command on the system.
func DiffUnstaged(path string) error {
if _, err := exec.LookPath("git"); err != nil {
logrus.Warnf("unable to locate git command, cannot output diff")
return nil
}
gitDiffArgs := getGitDiffArgs(path)
diff, err := exec.Command("git", gitDiffArgs...).Output()
if err != nil {
return nil
}
fmt.Print(string(diff))
return nil
}
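For reference, the argument list built above is equivalent to running git -C <repoPath> --no-pager -c color.diff=always diff on the command line; any error from the invocation itself is swallowed rather than surfaced, so a missing or failing git never blocks the calling command.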

View File

@ -115,6 +115,13 @@ var LintRules = map[string][]LintRule{
HowToResolve: "upload your recipe to git.coopcloud.tech/coop-cloud/...",
Function: LintHasRecipeRepo,
},
{
Ref: "R015",
Level: "warn",
Description: "long secret names",
HowToResolve: "reduce length of secret names to 12 chars",
Function: LintSecretLengths,
},
},
"error": {
{
@ -227,7 +234,7 @@ func LintAppService(recipe recipe.Recipe) (bool, error) {
// therefore no matching traefik deploy label will be present.
func LintTraefikEnabledSkipCondition(recipe recipe.Recipe) (bool, error) {
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return false, fmt.Errorf("Unable to discover .env.sample for %s", recipe.Name)
}
@ -401,6 +408,16 @@ func LintHasRecipeRepo(recipe recipe.Recipe) (bool, error) {
return true, nil
}
func LintSecretLengths(recipe recipe.Recipe) (bool, error) {
for name := range recipe.Config.Secrets {
if len(name) > 12 {
return false, fmt.Errorf("secret %s is longer than 12 characters", name)
}
}
return true, nil
}
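A rough arithmetic sketch of where the 12-character threshold sits (an assumption, not stated by the rule itself): remote secret names take the form <stack>_<secret>_<version> and must fit within MAX_DOCKER_SECRET_LENGTH (64). With a sanitised app name of up to MAX_SANITISED_APP_NAME_LENGTH (45) characters, a 12-character secret name and a short version such as v1, that is roughly 45 + 1 + 12 + 1 + 2 = 61 characters, which still stays under the 64-character cap.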
func LintValidTags(recipe recipe.Recipe) (bool, error) {
recipeDir := path.Join(config.RECIPES_DIR, recipe.Name)

View File

@ -7,6 +7,7 @@ import (
"os"
"path"
"path/filepath"
"slices"
"sort"
"strconv"
"strings"
@ -31,7 +32,7 @@ import (
// RecipeCatalogueURL is the only current recipe catalogue available.
const RecipeCatalogueURL = "https://recipes.coopcloud.tech/recipes.json"
// ReposMetadataURL is the recipe repository metadata
// ReposMetadataURL is the recipe repository metadata.
const ReposMetadataURL = "https://git.coopcloud.tech/api/v1/orgs/coop-cloud/repos"
// tag represents a git tag.
@ -63,6 +64,11 @@ type RecipeMeta struct {
Website string `json:"website"`
}
// TopicMeta represents a list of topics for a repository.
type TopicMeta struct {
Topics []string `json:"topics"`
}
// LatestVersion returns the latest version of a recipe.
func (r RecipeMeta) LatestVersion() string {
var version string
@ -221,7 +227,7 @@ func Get(recipeName string, offline bool) (Recipe, error) {
}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return Recipe{}, err
}
@ -249,15 +255,29 @@ func Get(recipeName string, offline bool) (Recipe, error) {
}, nil
}
func (r Recipe) SampleEnv(opts config.ReadEnvOptions) (map[string]string, error) {
func (r Recipe) SampleEnv() (map[string]string, error) {
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, opts)
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return sampleEnv, fmt.Errorf("unable to discover .env.sample for %s", r.Name)
}
return sampleEnv, nil
}
// Ensure makes sure the recipe exists, is up to date and has the latest version checked out.
func Ensure(recipeName string) error {
if err := EnsureExists(recipeName); err != nil {
return err
}
if err := EnsureUpToDate(recipeName); err != nil {
return err
}
if err := EnsureLatest(recipeName); err != nil {
return err
}
return nil
}
// EnsureExists ensures that a recipe is locally cloned
func EnsureExists(recipeName string) error {
recipeDir := path.Join(config.RECIPES_DIR, recipeName)
@ -822,7 +842,16 @@ func ReadReposMetadata() (RepoCatalogue, error) {
}
for idx, repo := range reposList {
reposMeta[repo.Name] = reposList[idx]
var topicMeta TopicMeta
topicsURL := getReposTopicUrl(repo.Name)
if err := web.ReadJSON(topicsURL, &topicMeta); err != nil {
return reposMeta, err
}
if slices.Contains(topicMeta.Topics, "recipe") && repo.Name != "example" {
reposMeta[repo.Name] = reposList[idx]
}
}
pageIdx++
@ -1002,14 +1031,8 @@ func UpdateRepositories(repos RepoCatalogue, recipeName string) error {
retrieveBar.Add(1)
return
}
if _, exists := catalogue.CatalogueSkipList[rm.Name]; exists {
ch <- rm.Name
retrieveBar.Add(1)
return
}
recipeDir := path.Join(config.RECIPES_DIR, rm.Name)
if err := gitPkg.Clone(recipeDir, rm.CloneURL); err != nil {
logrus.Fatal(err)
}
@ -1025,3 +1048,8 @@ func UpdateRepositories(repos RepoCatalogue, recipeName string) error {
return nil
}
// getReposTopicUrl retrieves the repository specific topic listing.
func getReposTopicUrl(repoName string) string {
return fmt.Sprintf("https://git.coopcloud.tech/api/v1/repos/coop-cloud/%s/topics", repoName)
}
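For a hypothetical repository name such as gitea, this yields https://git.coopcloud.tech/api/v1/repos/coop-cloud/gitea/topics; ReadReposMetadata then only records repositories whose topic list contains "recipe" and always skips the "example" repository.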

View File

@ -21,11 +21,24 @@ import (
"github.com/sirupsen/logrus"
)
// secretValue represents a parsed `SECRET_FOO=v1 # length=bar` env var config
// secret definition.
type secretValue struct {
// Secret represents a secret.
type Secret struct {
// Version comes from the secret version environment variable.
// For example:
// SECRET_FOO=v1
Version string
Length int
// Length comes from the length modifier on the secret version environment
// variable. For example:
// SECRET_FOO=v1 # length=12
Length int
// RemoteName is the name of the secret on the server. For example:
// name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
// With the following values:
// STACK_NAME=test_example_com
// SECRET_TEST_PASS_TWO_VERSION=v2
// the remote name becomes:
// test_example_com_test_pass_two_v2
RemoteName string
}
// GeneratePasswords generates passwords.
@ -35,7 +48,6 @@ func GeneratePasswords(count, length uint) ([]string, error) {
length,
passgen.AlphabetDefault,
)
if err != nil {
return nil, err
}
@ -54,7 +66,6 @@ func GeneratePassphrases(count uint) ([]string, error) {
passgen.PassphraseCasingDefault,
passgen.WordListDefault,
)
if err != nil {
return nil, err
}
@ -69,22 +80,27 @@ func GeneratePassphrases(count uint) ([]string, error) {
// and sometimes you don't (as the caller). We need to be able to handle the
// "app new" case where we pass in the .env.sample and the "secret generate"
// case where the app is created.
func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName string) (map[string]string, error) {
secretConfigs := make(map[string]string)
appEnv, err := config.ReadEnv(appEnvPath, config.ReadEnvOptions{IncludeModifiers: true})
func ReadSecretsConfig(appEnvPath string, composeFiles []string, stackName string) (map[string]Secret, error) {
appEnv, appModifiers, err := config.ReadEnvWithModifiers(appEnvPath)
if err != nil {
return secretConfigs, err
return nil, err
}
// Set the STACK_NAME to be able to generate the remote name correctly.
appEnv["STACK_NAME"] = stackName
opts := stack.Deploy{Composefiles: composeFiles}
config, err := loader.LoadComposefile(opts, appEnv)
composeConfig, err := loader.LoadComposefile(opts, appEnv)
if err != nil {
return secretConfigs, err
return nil, err
}
// Read the compose files without injecting environment variables.
configWithoutEnv, err := loader.LoadComposefile(opts, map[string]string{}, loader.SkipInterpolation)
if err != nil {
return nil, err
}
var enabledSecrets []string
for _, service := range config.Services {
for _, service := range composeConfig.Services {
for _, secret := range service.Secrets {
enabledSecrets = append(enabledSecrets, secret.Source)
}
@ -92,12 +108,13 @@ func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName stri
if len(enabledSecrets) == 0 {
logrus.Debugf("not generating app secrets, none enabled in recipe config")
return secretConfigs, nil
return nil, nil
}
for secretId, secretConfig := range config.Secrets {
secretValues := map[string]Secret{}
for secretId, secretConfig := range composeConfig.Secrets {
if string(secretConfig.Name[len(secretConfig.Name)-1]) == "_" {
return secretConfigs, fmt.Errorf("missing version for secret? (%s)", secretId)
return nil, fmt.Errorf("missing version for secret? (%s)", secretId)
}
if !(slices.Contains(enabledSecrets, secretId)) {
@ -107,68 +124,62 @@ func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName stri
lastIdx := strings.LastIndex(secretConfig.Name, "_")
secretVersion := secretConfig.Name[lastIdx+1:]
secretConfigs[secretId] = secretVersion
value := Secret{Version: secretVersion, RemoteName: secretConfig.Name}
if len(value.RemoteName) > config.MAX_DOCKER_SECRET_LENGTH {
return nil, fmt.Errorf("secret %s is > %d chars when combined with %s", secretId, config.MAX_DOCKER_SECRET_LENGTH, stackName)
}
// Check if the length modifier is set for this secret.
for envName, modifierValues := range appModifiers {
// configWithoutEnv contains the raw name as defined in the compose.yaml
// The name will look something like this:
// name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
// To check if the current modifier is for the current secret we check
// if the raw name contains the env name (e.g. SECRET_TEST_PASS_TWO_VERSION).
if !strings.Contains(configWithoutEnv.Secrets[secretId].Name, envName) {
continue
}
lengthRaw, ok := modifierValues["length"]
if ok {
length, err := strconv.Atoi(lengthRaw)
if err != nil {
return nil, err
}
value.Length = length
}
break
}
secretValues[secretId] = value
}
return secretConfigs, nil
}
func ParseSecretValue(secret string) (secretValue, error) {
values := strings.Split(secret, "#")
if len(values) == 0 {
return secretValue{}, fmt.Errorf("unable to parse %s", secret)
}
if len(values) == 1 {
return secretValue{Version: values[0], Length: 0}, nil
}
split := strings.Split(values[1], "=")
parsed := split[len(split)-1]
stripped := strings.ReplaceAll(parsed, " ", "")
length, err := strconv.Atoi(stripped)
if err != nil {
return secretValue{}, err
}
version := strings.ReplaceAll(values[0], " ", "")
logrus.Debugf("parsed version %s and length '%v' from %s", version, length, secret)
return secretValue{Version: version, Length: length}, nil
return secretValues, nil
}
// GenerateSecrets generates secrets locally and sends them to a remote server for storage.
func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]string, appName, server string) (map[string]string, error) {
secrets := make(map[string]string)
func GenerateSecrets(cl *dockerClient.Client, secrets map[string]Secret, server string) (map[string]string, error) {
secretsGenerated := map[string]string{}
var mutex sync.Mutex
var wg sync.WaitGroup
ch := make(chan error, len(secretsFromConfig))
for n, v := range secretsFromConfig {
ch := make(chan error, len(secrets))
for n, v := range secrets {
wg.Add(1)
go func(secretName, secretValue string) {
go func(secretName string, secret Secret) {
defer wg.Done()
parsedSecretValue, err := ParseSecretValue(secretValue)
if err != nil {
ch <- err
return
}
logrus.Debugf("attempting to generate and store %s on %s", secret.RemoteName, server)
secretRemoteName := fmt.Sprintf("%s_%s_%s", appName, secretName, parsedSecretValue.Version)
logrus.Debugf("attempting to generate and store %s on %s", secretRemoteName, server)
if parsedSecretValue.Length > 0 {
passwords, err := GeneratePasswords(1, uint(parsedSecretValue.Length))
if secret.Length > 0 {
passwords, err := GeneratePasswords(1, uint(secret.Length))
if err != nil {
ch <- err
return
}
if err := client.StoreSecret(cl, secretRemoteName, passwords[0], server); err != nil {
if err := client.StoreSecret(cl, secret.RemoteName, passwords[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
logrus.Warnf("%s already exists, moving on...", secret.RemoteName)
ch <- nil
} else {
ch <- err
@ -178,7 +189,7 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passwords[0]
secretsGenerated[secretName] = passwords[0]
} else {
passphrases, err := GeneratePassphrases(1)
if err != nil {
@ -186,9 +197,9 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
return
}
if err := client.StoreSecret(cl, secretRemoteName, passphrases[0], server); err != nil {
if err := client.StoreSecret(cl, secret.RemoteName, passphrases[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
logrus.Warnf("%s already exists, moving on...", secret.RemoteName)
ch <- nil
} else {
ch <- err
@ -198,7 +209,7 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passphrases[0]
secretsGenerated[secretName] = passphrases[0]
}
ch <- nil
}(n, v)
@ -206,16 +217,16 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
wg.Wait()
for range secretsFromConfig {
for range secrets {
err := <-ch
if err != nil {
return nil, err
}
}
logrus.Debugf("generated and stored %s on %s", secrets, server)
logrus.Debugf("generated and stored %v on %s", secrets, server)
return secrets, nil
return secretsGenerated, nil
}
type secretStatus struct {
@ -237,7 +248,7 @@ func PollSecretsStatus(cl *dockerClient.Client, app config.App) (secretStatuses,
return secStats, err
}
secretsConfig, err := ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secretsConfig, err := ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
return secStats, err
}
@ -257,14 +268,9 @@ func PollSecretsStatus(cl *dockerClient.Client, app config.App) (secretStatuses,
remoteSecretNames[cont.Spec.Annotations.Name] = true
}
for secretName, secretValue := range secretsConfig {
for secretName, val := range secretsConfig {
createdRemote := false
val, err := ParseSecretValue(secretValue)
if err != nil {
return secStats, err
}
secretRemoteName := fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, val.Version)
if _, ok := remoteSecretNames[secretRemoteName]; ok {
createdRemote = true

View File

@ -1,42 +1,39 @@
package secret
import (
"path"
"testing"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/stack"
loader "coopcloud.tech/abra/pkg/upstream/stack"
"github.com/stretchr/testify/assert"
)
func TestReadSecretsConfig(t *testing.T) {
offline := true
recipe, err := recipe.Get("matrix-synapse", offline)
composeFiles := []string{"./testdir/compose.yaml"}
secretsFromConfig, err := ReadSecretsConfig("./testdir/.env.sample", composeFiles, "test_example_com")
if err != nil {
t.Fatal(err)
}
sampleEnv, err := recipe.SampleEnv(config.ReadEnvOptions{})
if err != nil {
t.Fatal(err)
}
// Simple secret
assert.Equal(t, "test_example_com_test_pass_one_v2", secretsFromConfig["test_pass_one"].RemoteName)
assert.Equal(t, "v2", secretsFromConfig["test_pass_one"].Version)
assert.Equal(t, 0, secretsFromConfig["test_pass_one"].Length)
composeFiles := []string{path.Join(config.RECIPES_DIR, recipe.Name, "compose.yml")}
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
secretsFromConfig, err := ReadSecretsConfig(envSamplePath, composeFiles, recipe.Name)
if err != nil {
t.Fatal(err)
}
// Has a length modifier
assert.Equal(t, "test_example_com_test_pass_two_v1", secretsFromConfig["test_pass_two"].RemoteName)
assert.Equal(t, "v1", secretsFromConfig["test_pass_two"].Version)
assert.Equal(t, 10, secretsFromConfig["test_pass_two"].Length)
opts := stack.Deploy{Composefiles: composeFiles}
config, err := loader.LoadComposefile(opts, sampleEnv)
if err != nil {
t.Fatal(err)
}
for secretId := range config.Secrets {
assert.Contains(t, secretsFromConfig, secretId)
}
// Secret name does not include the secret id
assert.Equal(t, "test_example_com_pass_three_v2", secretsFromConfig["test_pass_three"].RemoteName)
assert.Equal(t, "v2", secretsFromConfig["test_pass_three"].Version)
assert.Equal(t, 0, secretsFromConfig["test_pass_three"].Length)
}
func TestReadSecretsConfigWithLongDomain(t *testing.T) {
composeFiles := []string{"./testdir/compose.yaml"}
_, err := ReadSecretsConfig("./testdir/.env.sample", composeFiles, "should_break_on_forty_eight_char_stack_nameeeeee")
if err == nil {
t.Fatal("expected failure, stack name is too long")
}
assert.Contains(t, err.Error(), "is > 64 chars")
}

View File

@ -0,0 +1,3 @@
SECRET_TEST_PASS_ONE_VERSION=v2
SECRET_TEST_PASS_TWO_VERSION=v1 # length=10
SECRET_TEST_PASS_THREE_VERSION=v2

View File

@ -0,0 +1,21 @@
---
version: "3.8"
services:
app:
image: nginx:1.21.0
secrets:
- test_pass_one
- test_pass_two
- test_pass_three
secrets:
test_pass_one:
external: true
name: ${STACK_NAME}_test_pass_one_${SECRET_TEST_PASS_ONE_VERSION} # should be removed
test_pass_two:
external: true
name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
test_pass_three:
external: true
name: ${STACK_NAME}_pass_three_${SECRET_TEST_PASS_THREE_VERSION} # secretId and name don't match

View File

@ -14,6 +14,70 @@ import (
"github.com/sirupsen/logrus"
)
// GetServiceByLabel retrieves a swarm service based on a label. If prompt is
// true and the retrieved count of services does not match 1, then a prompt is
// presented to let the user choose. An error is returned when no matching
// service is found.
func GetServiceByLabel(c context.Context, cl *client.Client, label string, prompt bool) (swarm.Service, error) {
services, err := cl.ServiceList(c, types.ServiceListOptions{})
if err != nil {
return swarm.Service{}, err
}
if len(services) == 0 {
return swarm.Service{}, fmt.Errorf("no services deployed?")
}
var matchingServices []swarm.Service
for _, service := range services {
if enabled, exists := service.Spec.Labels[label]; exists && enabled == "true" {
matchingServices = append(matchingServices, service)
}
}
if len(matchingServices) == 0 {
return swarm.Service{}, fmt.Errorf("no services deployed matching label '%s'?", label)
}
if len(matchingServices) > 1 {
var servicesRaw []string
for _, service := range matchingServices {
serviceName := service.Spec.Name
created := formatter.HumanDuration(service.CreatedAt.Unix())
servicesRaw = append(servicesRaw, fmt.Sprintf("%s (created %v)", serviceName, created))
}
if !prompt {
err := fmt.Errorf("expected 1 service but found %v: %s", len(matchingServices), strings.Join(servicesRaw, " "))
return swarm.Service{}, err
}
logrus.Warnf("ambiguous service list received, prompting for input")
var response string
prompt := &survey.Select{
Message: "which service are you looking for?",
Options: servicesRaw,
}
if err := survey.AskOne(prompt, &response); err != nil {
return swarm.Service{}, err
}
chosenService := strings.TrimSpace(strings.Split(response, " ")[0])
for _, service := range matchingServices {
serviceName := strings.ToLower(service.Spec.Name)
if serviceName == chosenService {
return service, nil
}
}
logrus.Panic("failed to match chosen service")
}
return matchingServices[0], nil
}
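A hedged usage sketch (the package qualifier is assumed); BackupbotLabel is defined earlier in this changeset, and only services that set it to "true" are matched:
s, err := service.GetServiceByLabel(context.Background(), cl, config.BackupbotLabel, true)
if err != nil {
    logrus.Fatal(err)
}
// with prompt=true the user is asked to choose when more than one service matches
logrus.Debugf("found service %s", s.Spec.Name)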
// GetService retrieves a service container. If prompt is true and the retrieved
// count of service containers does not match 1, then a prompt is presented to
// let the user choose. A count of 0 is handled gracefully.

View File

@ -18,7 +18,7 @@ import (
//
// ssh://<user>@<host> URL requires Docker 18.09 or later on the remote host.
func GetConnectionHelper(daemonURL string) (*connhelper.ConnectionHelper, error) {
return getConnectionHelper(daemonURL, []string{"-o ConnectTimeout=5"})
return getConnectionHelper(daemonURL, []string{"-o ConnectTimeout=60"})
}
func getConnectionHelper(daemonURL string, sshFlags []string) (*connhelper.ConnectionHelper, error) {

View File

@ -13,7 +13,10 @@ import (
"github.com/sirupsen/logrus"
)
func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string, execConfig *types.ExecConfig) error {
// RunExec runs a command inside a remote container. The returned io.Writer
// corresponds to the command output.
func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string,
execConfig *types.ExecConfig) (io.Writer, error) {
ctx := context.Background()
// We need to check the tty _before_ we do the ContainerExecCreate, because
@ -21,22 +24,22 @@ func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string
// there's no easy way to clean those up). But also in order to make "not
// exist" errors take precedence we do a dummy inspect first.
if _, err := client.ContainerInspect(ctx, containerID); err != nil {
return err
return nil, err
}
if !execConfig.Detach {
if err := dockerCli.In().CheckTty(execConfig.AttachStdin, execConfig.Tty); err != nil {
return err
return nil, err
}
}
response, err := client.ContainerExecCreate(ctx, containerID, *execConfig)
if err != nil {
return err
return nil, err
}
execID := response.ID
if execID == "" {
return errors.New("exec ID empty")
return nil, errors.New("exec ID empty")
}
if execConfig.Detach {
@ -44,13 +47,13 @@ func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string
Detach: execConfig.Detach,
Tty: execConfig.Tty,
}
return client.ContainerExecStart(ctx, execID, execStartCheck)
return nil, client.ContainerExecStart(ctx, execID, execStartCheck)
}
return interactiveExec(ctx, dockerCli, client, execConfig, execID)
}
func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclient.Client,
execConfig *types.ExecConfig, execID string) error {
execConfig *types.ExecConfig, execID string) (io.Writer, error) {
// Interactive exec requested.
var (
out, stderr io.Writer
@ -76,7 +79,7 @@ func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclie
}
resp, err := client.ContainerExecAttach(ctx, execID, execStartCheck)
if err != nil {
return err
return out, err
}
defer resp.Close()
@ -107,10 +110,10 @@ func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclie
if err := <-errCh; err != nil {
logrus.Debugf("Error hijack: %s", err)
return err
return out, err
}
return getExecExitStatus(ctx, client, execID)
return out, getExecExitStatus(ctx, client, execID)
}
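A sketch of the new calling convention (the package qualifier and command are illustrative): detached execs still return a nil writer, while interactive execs hand back the writer their output was attached to:
out, err := container.RunExec(dockerCli, cl, containerID, &types.ExecConfig{
    AttachStdout: true,
    AttachStderr: true,
    Cmd:          []string{"cat", "/etc/hostname"},
})
if err != nil {
    return err
}
_ = out // the writer that received the command output (nil for detached execs)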
func getExecExitStatus(ctx context.Context, client apiclient.ContainerAPIClient, execID string) error {

View File

@ -18,15 +18,24 @@ func DontSkipValidation(opts *loader.Options) {
opts.SkipValidation = false
}
// SkipInterpolation skips interpolating environment variables.
func SkipInterpolation(opts *loader.Options) {
opts.SkipInterpolation = true
}
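A short sketch of how the new variadic options parameter (shown just below) is used; the SkipInterpolation call mirrors the one in ReadSecretsConfig earlier in this changeset:
cfg, err := loader.LoadComposefile(opts, appEnv) // no options passed, so DontSkipValidation is applied
raw, err := loader.LoadComposefile(opts, map[string]string{}, loader.SkipInterpolation)
// raw keeps names such as "${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}" uninterpolated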
// LoadComposefile parses the compose file specified in the cli and returns its Config and version.
func LoadComposefile(opts Deploy, appEnv map[string]string) (*composetypes.Config, error) {
func LoadComposefile(opts Deploy, appEnv map[string]string, options ...func(*loader.Options)) (*composetypes.Config, error) {
configDetails, err := getConfigDetails(opts.Composefiles, appEnv)
if err != nil {
return nil, err
}
if options == nil {
options = []func(*loader.Options){DontSkipValidation}
}
dicts := getDictsFrom(configDetails.ConfigFiles)
config, err := loader.Load(configDetails, DontSkipValidation)
config, err := loader.Load(configDetails, options...)
if err != nil {
if fpe, ok := err.(*loader.ForbiddenPropertiesError); ok {
return nil, fmt.Errorf("compose file contains unsupported options: %s",

View File

@ -233,7 +233,7 @@ func validateExternalNetworks(ctx context.Context, client dockerClient.NetworkAP
network, err := client.NetworkInspect(ctx, networkName, types.NetworkInspectOptions{})
switch {
case dockerClient.IsErrNotFound(err):
return errors.Errorf("network %q is declared as external, but could not be found. You need to create a swarm-scoped network before the stack is deployed", networkName)
return errors.Errorf("network %q is declared as external, but could not be found. You need to create a swarm-scoped network before the stack is deployed, which you can do by running this on the server: docker network create -d overlay proxy", networkName)
case err != nil:
return err
case network.Scope != "swarm":

scripts/docker/build.sh (new executable file, 11 lines)
View File

@ -0,0 +1,11 @@
#!/bin/bash
if [ ! -f .envrc ]; then
. .envrc.sample
else
. .envrc
fi
git config --global --add safe.directory /abra # work around funky file permissions
make build

View File

@ -1,8 +1,8 @@
#!/usr/bin/env bash
ABRA_VERSION="0.8.0-beta"
ABRA_VERSION="0.9.0-beta"
ABRA_RELEASE_URL="https://git.coopcloud.tech/api/v1/repos/coop-cloud/abra/releases/tags/$ABRA_VERSION"
RC_VERSION="0.8.0-beta"
RC_VERSION="0.8.0-rc1-beta"
RC_VERSION_URL="https://git.coopcloud.tech/api/v1/repos/coop-cloud/abra/releases/tags/$RC_VERSION"
for arg in "$@"; do
@ -45,7 +45,9 @@ function install_abra_release {
fi
ARCH=$(uname -m)
if [[ $ARCH =~ "aarch64" ]]; then
if [[ $ARCH =~ "x86_64" ]]; then
ARCH="amd64"
elif [[ $ARCH =~ "aarch64" ]]; then
ARCH="arm64"
elif [[ $ARCH =~ "armv5l" ]]; then
ARCH="armv5"
@ -55,7 +57,7 @@ function install_abra_release {
ARCH="armv7"
fi
PLATFORM=$(uname -s | tr '[:upper:]' '[:lower:]')_$ARCH
FILENAME="abra_"$ABRA_VERSION"_"$PLATFORM""
FILENAME="abra_"$ABRA_VERSION"_"$PLATFORM".tar.gz"
sed_command_rel='s/.*"assets":\[\{[^]]*"name":"'$FILENAME'"[^}]*"browser_download_url":"([^"]*)".*\].*/\1/p'
sed_command_checksums='s/.*"assets":\[\{[^\]*"name":"checksums.txt"[^}]*"browser_download_url":"([^"]*)".*\].*/\1/p'
@ -65,17 +67,22 @@ function install_abra_release {
checksums=$(wget -q -O- $checksums_url)
checksum=$(echo "$checksums" | grep "$FILENAME" - | sed -En 's/([0-9a-f]{64})\s+'"$FILENAME"'.*/\1/p')
abra_download="/tmp/abra-download.tar.gz"
echo "downloading $ABRA_VERSION $PLATFORM binary release for abra..."
wget -q "$release_url" -O "$HOME/.local/bin/.abra-download"
localsum=$(sha256sum $HOME/.local/bin/.abra-download | sed -En 's/([0-9a-f]{64})\s+.*/\1/p')
wget -q "$release_url" -O $abra_download
localsum=$(sha256sum $abra_download | sed -En 's/([0-9a-f]{64})\s+.*/\1/p')
echo "checking if checksums match..."
if [[ "$localsum" != "$checksum" ]]; then
print_checksum_error
exit 1
fi
echo "$(tput setaf 2)check successful!$(tput sgr0)"
mv "$HOME/.local/bin/.abra-download" "$HOME/.local/bin/abra"
cd /tmp/
tar xf abra-download.tar.gz
mv abra "$HOME/.local/bin/abra"
tar tf abra-download.tar.gz | xargs rm -f
chmod +x "$HOME/.local/bin/abra"
x=$(echo $PATH | grep $HOME/.local/bin)

View File

@ -70,13 +70,13 @@ setup(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app check "$TEST_APP_DOMAIN"
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe
}
@ -86,7 +86,7 @@ setup(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 1'
assert_output --partial "Your branch is behind 'origin/main' by 1 commit"
# NOTE(d1): we can't quite tell if this will fail or not in the future, so,
# since it isn't an important part of what we're testing here, we don't check
@ -94,7 +94,7 @@ setup(){
run $ABRA app check "$TEST_APP_DOMAIN" --offline
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 1'
assert_output --partial "Your branch is behind 'origin/main' by 1 commit"
_reset_recipe
}

View File

@ -25,6 +25,24 @@ teardown(){
fi
}
# bats test_tags=slow
@test "autocomplete" {
run $ABRA app cmd --generate-bash-completion
assert_success
assert_output "$TEST_APP_DOMAIN"
run $ABRA app cmd "$TEST_APP_DOMAIN" --generate-bash-completion
assert_success
assert_output "app"
run $ABRA app cmd "$TEST_APP_DOMAIN" app --generate-bash-completion
assert_success
assert_output "test_cmd
test_cmd_arg
test_cmd_args
test_cmd_export"
}
@test "validate app argument" {
run $ABRA app cmd
assert_failure
@ -40,7 +58,7 @@ teardown(){
assert_success
assert_not_exists "$ABRA_DIR/recipes/$TEST_RECIPE"
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd
assert_success
assert_output --partial 'baz'
@ -52,7 +70,7 @@ teardown(){
assert_success
assert_exists "$ABRA_DIR/recipes/$TEST_RECIPE/foo"
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd
assert_failure
assert_output --partial 'locally unstaged changes'
@ -65,7 +83,7 @@ teardown(){
assert_success
assert_exists "$ABRA_DIR/recipes/$TEST_RECIPE/foo"
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local --chaos
run $ABRA app cmd --local --chaos "$TEST_APP_DOMAIN" test_cmd
assert_success
assert_output --partial 'baz'
@ -78,14 +96,14 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd
assert_success
assert_output --partial 'baz'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "up to date"
_reset_recipe "$TEST_RECIPE"
}
@ -95,14 +113,14 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local --offline
run $ABRA app cmd --local --offline "$TEST_APP_DOMAIN" test_cmd
assert_success
assert_output --partial 'baz'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe "$TEST_RECIPE"
}
@ -114,13 +132,13 @@ teardown(){
}
@test "error if missing arguments when passing --local" {
run $ABRA app cmd "$TEST_APP_DOMAIN" --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN"
assert_failure
assert_output --partial 'missing arguments'
}
@test "cannot use --local and --user at same time" {
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local --user root
run $ABRA app cmd --local --user root "$TEST_APP_DOMAIN" test_cmd
assert_failure
assert_output --partial 'cannot use --local & --user together'
}
@ -129,7 +147,7 @@ teardown(){
run rm -rf "$ABRA_DIR/recipes/$TEST_RECIPE/abra.sh"
assert_success
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local --chaos
run $ABRA app cmd --local --chaos "$TEST_APP_DOMAIN" test_cmd
assert_failure
assert_output --partial "$ABRA_DIR/recipes/$TEST_RECIPE/abra.sh does not exist"
@ -137,25 +155,25 @@ teardown(){
}
@test "error if missing command" {
run $ABRA app cmd "$TEST_APP_DOMAIN" doesnt_exist --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN" doesnt_exist
assert_failure
assert_output --partial "doesn't have a doesnt_exist function"
}
@test "run --local command" {
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd --local
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd
assert_success
assert_output --partial 'baz'
}
@test "run command with single arg" {
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd_arg --local -- bing
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd_arg -- bing
assert_success
assert_output --partial 'bing'
}
@test "run command with several args" {
run $ABRA app cmd "$TEST_APP_DOMAIN" test_cmd_args --local -- bong bang
run $ABRA app cmd --local "$TEST_APP_DOMAIN" test_cmd_args -- bong bang
assert_success
assert_output --partial 'bong bang'
}

View File

@ -5,9 +5,11 @@ setup_file(){
_common_setup
_add_server
_new_app
_deploy_app
}
teardown_file(){
_undeploy_app
_rm_app
_rm_server
}
@ -17,13 +19,6 @@ setup(){
_common_setup
}
teardown(){
# https://github.com/bats-core/bats-core/issues/383#issuecomment-738628888
if [[ -z "${BATS_TEST_COMPLETED}" ]]; then
_undeploy_app
fi
}
@test "validate app argument" {
run $ABRA app cp
assert_failure
@ -54,68 +49,120 @@ teardown(){
assert_output --partial 'arguments must take $SERVICE:$PATH form'
}
@test "detect 'coming FROM' syntax" {
run $ABRA app cp "$TEST_APP_DOMAIN" app:/myfile.txt . --debug
assert_failure
assert_output --partial 'coming FROM the container'
}
@test "detect 'going TO' syntax" {
run $ABRA app cp "$TEST_APP_DOMAIN" myfile.txt app:/somewhere --debug
assert_failure
assert_output --partial 'going TO the container'
}
@test "error if local file missing" {
run $ABRA app cp "$TEST_APP_DOMAIN" myfile.txt app:/somewhere
run $ABRA app cp "$TEST_APP_DOMAIN" thisfileshouldnotexist.txt app:/somewhere
assert_failure
assert_output --partial 'myfile.txt does not exist locally?'
assert_output --partial 'local stat thisfileshouldnotexist.txt: no such file or directory'
}
# bats test_tags=slow
@test "error if service doesn't exist" {
_deploy_app
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" doesnt_exist:/
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" doesnt_exist:/ --debug
assert_failure
assert_output --partial 'no containers matching'
run rm -rf "$BATS_TMPDIR/myfile.txt"
assert_success
_undeploy_app
_rm "$BATS_TMPDIR/myfile.txt"
}
# bats test_tags=slow
@test "copy to container" {
_deploy_app
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
assert_success
@test "copy local file to container directory" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc
assert_success
run rm -rf "$BATS_TMPDIR/myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "foo"
_undeploy_app
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy from container" {
_deploy_app
@test "copy local file to container file (and override on remote)" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
# create
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile.txt
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "foo"
_mkfile "$BATS_TMPDIR/myfile.txt" "bar"
# override
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile.txt
assert_success
run rm -rf "$BATS_TMPDIR/myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "bar"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy local file to container file (and rename)" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
# rename
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile2.txt
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile2.txt
assert_success
assert_output --partial "foo"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}
# bats test_tags=slow
@test "copy local directory to container directory (and creates missing directory)" {
_mkdir "$BATS_TMPDIR/mydir"
_mkfile "$BATS_TMPDIR/mydir/myfile.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/mydir" app:/etc
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/mydir
assert_success
assert_output --partial "myfile.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/mydir"
}
# bats test_tags=slow
@test "copy local files to container directory" {
_mkdir "$BATS_TMPDIR/mydir"
_mkfile "$BATS_TMPDIR/mydir/myfile.txt" "foo"
_mkfile "$BATS_TMPDIR/mydir/myfile2.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/mydir/" app:/etc
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/myfile.txt
assert_success
assert_output --partial "myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/myfile2.txt
assert_success
assert_output --partial "myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile*"
}
# bats test_tags=slow
@test "copy container file to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR"
@ -123,8 +170,76 @@ teardown(){
assert_exists "$BATS_TMPDIR/myfile.txt"
assert bash -c "cat $BATS_TMPDIR/myfile.txt | grep -q foo"
run rm -rf "$BATS_TMPDIR/myfile.txt"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container file to local file" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
_undeploy_app
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/myfile.txt"
assert bash -c "cat $BATS_TMPDIR/myfile.txt | grep -q foo"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container file to local file and rename" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR/myfile2.txt"
assert_success
assert_exists "$BATS_TMPDIR/myfile2.txt"
assert bash -c "cat $BATS_TMPDIR/myfile2.txt | grep -q foo"
_rm "$BATS_TMPDIR/myfile2.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container directory to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo bar > /etc/myfile2.txt"
assert_success
mkdir "$BATS_TMPDIR/mydir"
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc "$BATS_TMPDIR/mydir"
assert_success
assert_exists "$BATS_TMPDIR/mydir/etc/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/mydir/etc/myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}
# bats test_tags=slow
@test "copy container files to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo bar > /etc/myfile2.txt"
assert_success
mkdir "$BATS_TMPDIR/mydir"
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/ "$BATS_TMPDIR/mydir"
assert_success
assert_exists "$BATS_TMPDIR/mydir/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/mydir/myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}

View File

@ -16,6 +16,7 @@ teardown_file(){
setup(){
load "$PWD/tests/integration/helpers/common"
_common_setup
_reset_recipe
}
teardown(){
@ -82,13 +83,13 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app deploy "$TEST_APP_DOMAIN" --no-input --no-converge-checks
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
refute_output --regexp 'behind .* 3 commits'
_reset_recipe
_undeploy_app
@ -100,7 +101,7 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
# NOTE(d1): need to use --chaos to force same commit
run $ABRA app deploy "$TEST_APP_DOMAIN" \
@ -108,7 +109,7 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_undeploy_app
_reset_recipe
@ -116,6 +117,9 @@ teardown(){
# bats test_tags=slow
@test "deploy latest commit if no published versions and no --chaos" {
# TODO(d1): fix with a new test recipe which has no published versions?
skip "known issue, abra-test-recipe has published versions now"
latestCommit="$(git -C "$ABRA_DIR/recipes/$TEST_RECIPE" rev-parse --short HEAD)"
_remove_tags
@ -140,7 +144,7 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
threeCommitsBack="$(git -C "$ABRA_DIR/recipes/$TEST_RECIPE" rev-parse --short HEAD)"
@ -273,6 +277,10 @@ teardown(){
}
@test "ensure domain is checked" {
if [[ "$TEST_SERVER" == "default" ]]; then
skip "domain checks are disabled for local server"
fi
appDomain="custom-html.DOESNTEXIST"
run $ABRA app new custom-html \

View File

@ -18,9 +18,24 @@ setup(){
}
teardown(){
load "$PWD/tests/integration/helpers/common"
_rm_app
}
@test "autocomplete" {
run $ABRA app new --generate-bash-completion
assert_success
assert_output --partial "traefik"
assert_output --partial "abra-test-recipe"
# Note: this test needs to be updated when a new version of the test recipe is published.
run $ABRA app new abra-test-recipe --generate-bash-completion
assert_success
assert_output "0.1.0+1.20.0
0.1.1+1.20.2
0.2.0+1.21.0"
}
@test "create new app" {
run $ABRA app new "$TEST_RECIPE" \
--no-input \
@ -28,10 +43,29 @@ teardown(){
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial "up to date"
}
@test "create new app with version" {
run $ABRA app new "$TEST_RECIPE" 0.1.1+1.20.2 \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" log -1
assert_output --partial "453db7121c0a56a7a8f15378f18fe3bf21ccfdef"
}
@test "does not overwrite existing env files" {
_new_app
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN"
assert_success
run $ABRA app new "$TEST_RECIPE" \
--no-input \
@ -74,8 +108,7 @@ teardown(){
--no-input \
--chaos \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
@ -88,18 +121,17 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "up to date"
_reset_recipe
}
@ -109,7 +141,7 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
# NOTE(d1): need to use --chaos to force same commit
run $ABRA app new "$TEST_RECIPE" \
@ -117,13 +149,12 @@ teardown(){
--offline \
--chaos \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe
}

View File

@ -104,10 +104,7 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app volume rm "$TEST_APP_DOMAIN" --force
run $ABRA app volume rm "$TEST_APP_DOMAIN" --no-input
assert_success
run $ABRA app volume ls "$TEST_APP_DOMAIN"
@ -132,9 +129,6 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app rm "$TEST_APP_DOMAIN" --no-input
assert_success
assert_output --partial 'test-volume'

View File

@ -109,13 +109,13 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app restore "$TEST_APP_DOMAIN" app DOESNTEXIST
run $ABRA app restore "$TEST_APP_DOMAIN" app
assert_failure
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "up to date"
}
@test "ensure recipe not up to date if --offline" {
@ -126,19 +126,19 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app restore "$TEST_APP_DOMAIN" app DOESNTEXIST --offline
run $ABRA app restore "$TEST_APP_DOMAIN" app --offline
assert_failure
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" checkout "$latestCommit"
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "HEAD detached at $latestCommit"
}
@test "error if missing service" {

View File

@ -50,13 +50,13 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app rollback "$TEST_APP_DOMAIN" --no-input --no-converge-checks
assert_failure
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "up to date"
}
@test "ensure recipe not up to date if --offline" {
@ -67,14 +67,14 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA app rollback "$TEST_APP_DOMAIN" \
--no-input --no-converge-checks --offline
assert_failure
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" checkout "$latestCommit"
assert_success
@ -131,7 +131,7 @@ teardown(){
latestCommit="$(git -C "$ABRA_DIR/recipes/$TEST_RECIPE" rev-parse --short HEAD)"
run $ABRA app deploy "$TEST_APP_DOMAIN" \
--no-input --no-converge-checks --chaos
--no-input --chaos
assert_success
assert_output --partial "$latestCommit"
assert_output --partial 'chaos'

View File

@ -8,7 +8,7 @@ setup_file(){
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
}
@ -19,13 +19,6 @@ teardown_file(){
_reset_recipe
}
teardown(){
# https://github.com/bats-core/bats-core/issues/383#issuecomment-738628888
if [[ -z "${BATS_TEST_COMPLETED}" ]]; then
_undeploy_app
fi
}
setup(){
load "$PWD/tests/integration/helpers/common"
_common_setup

View File

@ -59,6 +59,8 @@ teardown(){
# bats test_tags=slow
@test "error if not in catalogue" {
skip "known issue, see https://git.coopcloud.tech/coop-cloud/recipes-catalogue-json/issues/6"
_deploy_app
run $ABRA app version "$TEST_APP_DOMAIN"
@ -92,7 +94,7 @@ teardown(){
assert_success
# NOTE(d1): to let the stack come down before nuking volumes
sleep 3
sleep 5
run $ABRA app volume remove "$appDomain" --no-input
assert_success

View File

@ -78,9 +78,6 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app volume rm "$TEST_APP_DOMAIN" --force
assert_success
assert_output --partial 'volumes removed successfully'
@ -92,9 +89,6 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app volume rm "$TEST_APP_DOMAIN" --force
assert_success
assert_output --partial 'volumes removed successfully'

View File

@ -49,7 +49,7 @@ _reset_app(){
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
}

View File

@ -1,10 +1,11 @@
#!/usr/bin/env bash
_common_setup() {
load '/usr/lib/bats/bats-support/load'
load '/usr/lib/bats/bats-assert/load'
load '/usr/lib/bats/bats-file/load'
bats_load_library bats-support
bats_load_library bats-assert
bats_load_library bats-file
load "$PWD/tests/integration/helpers/file"
load "$PWD/tests/integration/helpers/app"
load "$PWD/tests/integration/helpers/git"
load "$PWD/tests/integration/helpers/recipe"

View File

@ -0,0 +1,24 @@
_mkfile() {
run bash -c "echo $2 > $1"
assert_success
}
_mkfile_remote() {
run $ABRA app run "$TEST_APP_DOMAIN" app "bash -c \"echo $2 > $1\""
assert_success
}
_mkdir() {
run bash -c "mkdir -p $1"
assert_success
}
_rm() {
run rm -rf "$1"
assert_success
}
_rm_remote() {
run "$ABRA" app run "$TEST_APP_DOMAIN" app rm -rf "$1"
assert_success
}

View File

@ -28,3 +28,10 @@ _reset_tags() {
assert_success
refute_output '0'
}
_set_git_author() {
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" config --local user.email test@example.com
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" config --local user.name test
assert_success
}

View File

@ -1,13 +1,21 @@
#!/usr/bin/env bash
_add_server() {
run $ABRA server add "$TEST_SERVER"
if [[ "$TEST_SERVER" == "default" ]]; then
run $ABRA server add -l
else
run $ABRA server add "$TEST_SERVER"
fi
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER"
}
_rm_server() {
run $ABRA server remove --no-input "$TEST_SERVER"
if [[ "$TEST_SERVER" == "default" ]]; then
run rm -rf "$ABRA_DIR/servers/default"
else
run $ABRA server remove --no-input "$TEST_SERVER"
fi
assert_success
assert_not_exists "$ABRA_DIR/servers/$TEST_SERVER"
}

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bash

setup() {
  load "$PWD/tests/integration/helpers/common"
  _common_setup
}

@test "show unstaged changes" {
  run $ABRA recipe diff "$TEST_RECIPE"
  assert_success
  refute_output --partial 'traefik.enable'
  run sed -i '/traefik.enable=.*/d' "$ABRA_DIR/recipes/$TEST_RECIPE/compose.yml"
  assert_success
  run $ABRA recipe diff "$TEST_RECIPE"
  assert_success
  assert_output --partial 'traefik.enable'
  _reset_recipe
}

@@ -5,7 +5,17 @@ setup() {
_common_setup
}
@test "recipe fetch" {
@test "recipe fetch all" {
run rm -rf "$ABRA_DIR/recipes/matrix-synapse"
assert_success
assert_not_exists "$ABRA_DIR/recipes/matrix-synapse"
run $ABRA recipe fetch
assert_success
assert_exists "$ABRA_DIR/recipes/matrix-synapse"
}
@test "recipe fetch single recipe" {
run rm -rf "$ABRA_DIR/recipes/matrix-synapse"
assert_success
assert_not_exists "$ABRA_DIR/recipes/matrix-synapse"

@@ -66,13 +66,13 @@ setup() {
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA recipe lint "$TEST_RECIPE"
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe
}
@@ -82,13 +82,13 @@ setup() {
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA recipe lint "$TEST_RECIPE" --offline
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe
}

@@ -15,6 +15,11 @@ teardown_file(){
setup(){
load "$PWD/tests/integration/helpers/common"
_common_setup
_set_git_author
}
teardown() {
_reset_recipe
}
@test "validate recipe argument" {
@@ -51,8 +56,6 @@ setup(){
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" tag --list
assert_success
assert_output --partial '0.2.1+1.21.6'
_reset_recipe
}
# NOTE(d1): this test can't assert hardcoded versions since we upgrade a minor
@@ -81,6 +84,38 @@ setup(){
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" tag --list
assert_success
assert_output --regexp '0\.3\.0\+1\.2.*'
_reset_recipe "$TEST_RECIPE"
}
@test "unknown files not committed" {
run $ABRA recipe upgrade "$TEST_RECIPE" --no-input --patch
assert_success
run bash -c 'echo "unstaged changes" >> "$ABRA_DIR/recipes/$TEST_RECIPE/foo"'
assert_success
assert_exists "$ABRA_DIR/recipes/$TEST_RECIPE/foo"
run $ABRA recipe release "$TEST_RECIPE" --no-input --patch
assert_success
assert_output --partial 'no -p/--publish passed, not publishing'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" rm foo
assert_failure
assert_output --partial "fatal: pathspec 'foo' did not match any files"
}
# NOTE: relies on 0.2.x being the last minor version
@test "release with next release note" {
_mkfile "$ABRA_DIR/recipes/$TEST_RECIPE/release/next" "those are some release notes for the next release"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" add release/next
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" commit -m "added some release notes"
assert_success
run $ABRA recipe release "$TEST_RECIPE" --no-input --minor
assert_success
assert_output --partial 'no -p/--publish passed, not publishing'
assert_not_exists "$ABRA_DIR/recipes/$TEST_RECIPE/release/next"
assert_exists "$ABRA_DIR/recipes/$TEST_RECIPE/release/0.3.0+1.21.0"
assert_file_contains "$ABRA_DIR/recipes/$TEST_RECIPE/release/0.3.0+1.21.0" "those are some release notes for the next release"
}

@@ -0,0 +1,25 @@
#!/usr/bin/env bash

setup() {
  load "$PWD/tests/integration/helpers/common"
  _common_setup
}

@test "reset unstaged changes" {
  run $ABRA recipe fetch "$TEST_RECIPE"
  assert_success
  run sed -i '/traefik.enable=.*/d' "$ABRA_DIR/recipes/$TEST_RECIPE/compose.yml"
  assert_success
  run $ABRA recipe diff "$TEST_RECIPE"
  assert_success
  assert_output --partial 'traefik.enable'
  run $ABRA recipe reset "$TEST_RECIPE"
  assert_success
  run $ABRA recipe diff "$TEST_RECIPE"
  assert_success
  refute_output --partial 'traefik.enable'
}

@@ -61,14 +61,14 @@ setup(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
run $ABRA recipe upgrade "$TEST_RECIPE" --no-input
assert_success
assert_output --partial 'can upgrade service: app'
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --regexp 'behind .* 3 commits'
_reset_recipe
}

@@ -12,6 +12,8 @@ setup() {
}
@test "error if not present in catalogue" {
skip "known issue, see https://git.coopcloud.tech/coop-cloud/recipes-catalogue-json/issues/6"
run $ABRA recipe versions "$TEST_RECIPE"
assert_failure
assert_output --partial "is not published on the catalogue"