forked from toolshed/abra

Compare commits


27 Commits

Author SHA1 Message Date
0643df6d73 feat: fetch all recipes when no recipe is specified (!401)
Closes coop-cloud/organising#530
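Based on the diff to the recipe fetch command further down, usage looks roughly like this (the recipe name is illustrative):
```
# fetch a single recipe, as before
abra recipe fetch gitea

# with no recipe specified, fetch every recipe in the catalogue
abra recipe fetch
```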

Reviewed-on: coop-cloud/abra#401
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-24 15:01:33 +00:00
e9b99fe921 make installer save abra-download to /tmp/ directory
The current download location is ~/.local/bin/, but this conflicts with some security tools.
2024-01-24 14:27:09 +00:00
4920dfedb3 fix: retry docker volume remove (!399)
Closes coop-cloud/organising#509

Reviewed-on: coop-cloud/abra#399
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-19 15:09:00 +00:00
0a3624c15b feat: add version input to abra app new (!400)
Closes coop-cloud/organising#519
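The diff below adds an optional `[<version>]` argument to `abra app new`, so a recipe version can be pinned at creation time. A usage sketch (recipe and version values are illustrative):
```
# pin a specific recipe version instead of defaulting to the latest release
abra app new gitea 2.3.2+1.21.3
```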

Reviewed-on: coop-cloud/abra#400
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2024-01-19 15:08:41 +00:00
c5687dfbd7
feat: backup revolution
See coop-cloud/organising#485
2024-01-12 22:01:08 +01:00
ca91abbed9 fix: correct append service name logic in Filters function (!396)
This fixes a regression introduced by #395

Reviewed-on: coop-cloud/abra#396
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-22 12:08:12 +00:00
d4727db8f9 feat: abra app logs shows task errors (!395)
The log command now checks for the ready state in the task list. If a task is not ready, it shows the task logs. This might look like this:
```
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State preparing:
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
ERRO[0000] Service abra-test-recipe_default_app: State rejected: No such image: ngaaaax:1.21.0
```

Closes coop-cloud/organising#518

Reviewed-on: coop-cloud/abra#395
Reviewed-by: decentral1se <decentral1se@noreply.git.coopcloud.tech>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-14 13:15:24 +00:00
af8cd1f67a feat: abra release now asks for a release note (!393)
This implements coop-cloud/organising#540 by checking if a `release/next` file exists and, if so, moving it to `release/<tag>`. When no release notes exist, it prompts for them.
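A minimal sketch of that flow, assuming a recipe directory layout with a release/ folder (not the actual implementation):
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
)

// ensureReleaseNotes moves release/next to release/<tag> if it exists;
// otherwise it prompts for a note and writes it to release/<tag>.
func ensureReleaseNotes(recipeDir, tag string) error {
	next := filepath.Join(recipeDir, "release", "next")
	dst := filepath.Join(recipeDir, "release", tag)
	if _, err := os.Stat(next); err == nil {
		return os.Rename(next, dst)
	}
	fmt.Printf("no release notes found, enter a note for %s: ", tag)
	note, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return err
	}
	return os.WriteFile(dst, []byte(note), 0o644)
}
```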

Reviewed-on: coop-cloud/abra#393
Reviewed-by: moritz <moritz.m@local-it.org>
Co-authored-by: p4u1 <p4u1_f4u1@riseup.net>
Co-committed-by: p4u1 <p4u1_f4u1@riseup.net>
2023-12-12 14:46:20 +00:00
cdd7516e54
chore: go mod tidy [ci skip] 2023-12-04 22:56:58 +01:00
99e3ed416f fix: secret name generation when secretId is not part of the secret name 2023-12-04 21:52:09 +00:00
02b726db02 add comments to better explain how the length modifier gets added to the secret 2023-12-04 17:30:26 +00:00
2de6934322 feat: abra app cp enhancements 2023-12-02 15:39:27 +00:00
cb49cf06d1
chore: drop old godotenv pointers [ci skip]
Follows 9affda8a70270632ecea60ef592e7f3287bd0374
2023-12-02 13:02:24 +01:00
9affda8a70
chore: update godotenv fork commit pointer
Follows coop-cloud/abra#391
2023-12-02 12:59:42 +01:00
3957b7c965 proper env modifiers support
This implements proper modifier support in the env file using this new fork of the godotenv library. The modifier implementation is quite basic for now but can be improved later if needed. See this commit for the actual implementation.

Because we are now using proper modifier parsing, it does not affect the parsing of values, so this is possible again:
```
MY_VAR="#foo"
```
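For comparison, a modifier in an env file looks roughly like this (the `length` modifier syntax is an assumption based on the length-modifier commit elsewhere in this list):
```
SECRET_DB_PASSWORD_VERSION=v1 # length=32
```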
Closes coop-cloud/organising#535
2023-12-01 11:03:52 +00:00
0d83339d80 fix(ssh): increase connection timeout #482
see coop-cloud/organising#482
2023-11-30 16:35:53 +01:00
6e54ec7213
test: skip failing test for now
See coop-cloud/organising#535.
2023-11-28 11:42:36 +01:00
66b40a9189
fix: just run it in place [ci skip] 2023-11-27 11:25:01 +01:00
049f02f063
docs: add p4u1 [ci skip] 2023-11-27 11:23:03 +01:00
15857e6453
fix: clean up after cp'ing script [ci skip]
Follows 31e0ed75b0084dfc05388ee749537b1e0c0b4a39.
2023-11-27 11:21:46 +01:00
31e0ed75b0
build: target for docker building
Adapted from coop-cloud/abra#384.

Thanks @cas.
2023-11-27 11:15:59 +01:00
b1d3fcbb0b add integration test 2023-11-27 10:01:33 +00:00
7b6134f35e add bash completion for abra cmd 2023-11-27 10:01:33 +00:00
316b59b465
test: support local-first testing
Cherry-picked from coop-cloud/abra#389

Thanks @p4u1.
2023-11-27 10:41:46 +01:00
92b073d5b6
chore: go mod tidy 2023-11-27 10:28:43 +01:00
9b0dd933b5 chore(deps): update module github.com/schollz/progressbar/v3 to v3.14.1 2023-11-10 08:00:52 +00:00
f255fa1555 chore(deps): update module github.com/hashicorp/go-retryablehttp to v0.7.5 2023-11-09 08:00:33 +00:00
51 changed files with 1857 additions and 1052 deletions


@ -11,6 +11,7 @@
- kawaiipunk
- knoflook
- moritz
- p4u1
- rix
- roxxers
- vera


@ -2,6 +2,7 @@ ABRA := ./cmd/abra
KADABRA := ./cmd/kadabra
COMMIT := $(shell git rev-list -1 HEAD)
GOPATH := $(shell go env GOPATH)
GOVERSION := 1.21
LDFLAGS := "-X 'main.Commit=$(COMMIT)'"
DIST_LDFLAGS := $(LDFLAGS)" -s -w"
@ -30,6 +31,12 @@ build-kadabra:
build: build-abra build-kadabra
build-docker-abra:
@docker run -it -v $(PWD):/abra golang:$(GOVERSION) \
bash -c 'cd /abra; ./scripts/docker/build.sh'
build-docker: build-docker-abra
clean:
@rm '$(GOPATH)/bin/abra'
@rm '$(GOPATH)/bin/kadabra'


@ -1,414 +1,296 @@
package app
import (
"archive/tar"
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/pkg/archive"
"github.com/klauspost/pgzip"
"coopcloud.tech/abra/pkg/recipe"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type backupConfig struct {
preHookCmd string
postHookCmd string
backupPaths []string
var snapshot string
var snapshotFlag = &cli.StringFlag{
Name: "snapshot, s",
Usage: "Lists specific snapshot",
Destination: &snapshot,
}
var appBackupCommand = cli.Command{
Name: "backup",
Aliases: []string{"bk"},
Usage: "Run app backup",
ArgsUsage: "<domain> [<service>]",
var includePath string
var includePathFlag = &cli.StringFlag{
Name: "path, p",
Usage: "Include path",
Destination: &includePath,
}
var resticRepo string
var resticRepoFlag = &cli.StringFlag{
Name: "repo, r",
Usage: "Restic repository",
Destination: &resticRepo,
}
var appBackupListCommand = cli.Command{
Name: "list",
Aliases: []string{"ls"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
internal.ChaosFlag,
snapshotFlag,
includePathFlag,
},
Before: internal.SubCommandBefore,
Usage: "List all backups",
BashComplete: autocomplete.AppNameComplete,
Description: `
Run an app backup.
A backup command and pre/post hook commands are defined in the recipe
configuration. Abra reads this configuration and runs the commands in the context
of the deployed services. Pass <service> if you only want to back up a single
service. All backups are placed in the ~/.abra/backups directory.
A single backup file is produced for all backup paths specified for a service.
If we have the following backup configuration:
- "backupbot.backup.path=/var/lib/foo,/var/lib/bar"
And we run "abra app backup example.com app", Abra will produce a file that
looks like:
~/.abra/backups/example_com_app_609341138.tar.gz
This file is a compressed archive which contains all backup paths. To see paths, run:
tar -tf ~/.abra/backups/example_com_app_609341138.tar.gz
(Make sure to change the name of the backup file)
This single file can be used to restore your app. See "abra app restore" for more.
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
recipe, err := recipePkg.Get(app.Recipe, internal.Offline)
if err != nil {
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipePkg.EnsureIsClean(app.Recipe); err != nil {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipePkg.EnsureUpToDate(app.Recipe); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(app.Recipe); err != nil {
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
backupConfigs := make(map[string]backupConfig)
for _, service := range recipe.Config.Services {
if backupsEnabled, ok := service.Deploy.Labels["backupbot.backup"]; ok {
if backupsEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
bkConfig := backupConfig{}
logrus.Debugf("backup config detected for %s", fullServiceName)
if paths, ok := service.Deploy.Labels["backupbot.backup.path"]; ok {
logrus.Debugf("detected backup paths for %s: %s", fullServiceName, paths)
bkConfig.backupPaths = strings.Split(paths, ",")
}
if preHookCmd, ok := service.Deploy.Labels["backupbot.backup.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
bkConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.backup.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
bkConfig.postHookCmd = postHookCmd
}
backupConfigs[service.Name] = bkConfig
}
}
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
serviceName := c.Args().Get(1)
if serviceName != "" {
backupConfig, ok := backupConfigs[serviceName]
if !ok {
logrus.Fatalf("no backup config for %s? does %s exist?", serviceName, serviceName)
}
logrus.Infof("running backup for the %s service", serviceName)
if err := runBackup(cl, app, serviceName, backupConfig); err != nil {
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
} else {
if len(backupConfigs) == 0 {
logrus.Fatalf("no backup configs discovered for %s?", app.Name)
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if includePath != "" {
logrus.Debugf("including INCLUDE_PATH=%s in backupbot exec invocation", includePath)
execEnv = append(execEnv, fmt.Sprintf("INCLUDE_PATH=%s", includePath))
}
for serviceName, backupConfig := range backupConfigs {
logrus.Infof("running backup for the %s service", serviceName)
if err := runBackup(cl, app, serviceName, backupConfig); err != nil {
if err := internal.RunBackupCmdRemote(cl, "ls", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
}
}
return nil
},
}
// TimeStamp generates a file name friendly timestamp.
func TimeStamp() string {
ts := time.Now().UTC().Format(time.RFC3339)
return strings.Replace(ts, ":", "-", -1)
var appBackupDownloadCommand = cli.Command{
Name: "download",
Aliases: []string{"d"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
snapshotFlag,
includePathFlag,
},
Before: internal.SubCommandBefore,
Usage: "Download a backup",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
// runBackup does the actual backup logic.
func runBackup(cl *dockerClient.Client, app config.App, serviceName string, bkConfig backupConfig) error {
if len(bkConfig.backupPaths) == 0 {
return fmt.Errorf("backup paths are empty for %s?", serviceName)
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
cl, err := client.New(app.Server)
if err != nil {
return err
logrus.Fatal(err)
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
return err
logrus.Fatal(err)
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if bkConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if includePath != "" {
logrus.Debugf("including INCLUDE_PATH=%s in backupbot exec invocation", includePath)
execEnv = append(execEnv, fmt.Sprintf("INCLUDE_PATH=%s", includePath))
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return fmt.Errorf("failed to run %s on %s: %s", bkConfig.preHookCmd, targetContainer.ID, err.Error())
if err := internal.RunBackupCmdRemote(cl, "download", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, bkConfig.preHookCmd)
remoteBackupDir := "/tmp/backup.tar.gz"
currentWorkingDir := "."
if err = CopyFromContainer(cl, targetContainer.ID, remoteBackupDir, currentWorkingDir); err != nil {
logrus.Fatal(err)
}
var tempBackupPaths []string
for _, remoteBackupPath := range bkConfig.backupPaths {
sanitisedPath := strings.ReplaceAll(remoteBackupPath, "/", "_")
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s%s_%s.tar.gz", fullServiceName, sanitisedPath, TimeStamp()))
logrus.Debugf("temporarily backing up %s:%s to %s", fullServiceName, remoteBackupPath, localBackupPath)
fmt.Println("backup successfully downloaded to current working directory")
logrus.Infof("backing up %s:%s", fullServiceName, remoteBackupPath)
return nil
},
}
content, _, err := cl.CopyFromContainer(context.Background(), targetContainer.ID, remoteBackupPath)
var appBackupCreateCommand = cli.Command{
Name: "create",
Aliases: []string{"c"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
resticRepoFlag,
},
Before: internal.SubCommandBefore,
Usage: "Create a new backup",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Debugf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
}
defer content.Close()
_, srcBase := archive.SplitPathDirEntry(remoteBackupPath)
preArchive := archive.RebaseArchiveEntries(content, srcBase, remoteBackupPath)
if err := copyToFile(localBackupPath, preArchive); err != nil {
logrus.Debugf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
logrus.Fatal(err)
}
tempBackupPaths = append(tempBackupPaths, localBackupPath)
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
logrus.Infof("compressing and merging archives...")
if err := mergeArchives(tempBackupPaths, fullServiceName); err != nil {
logrus.Debugf("failed to merge archive files: %s", err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to merge archive files: %s", err.Error())
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if resticRepo != "" {
logrus.Debugf("including RESTIC_REPO=%s in backupbot exec invocation", resticRepo)
execEnv = append(execEnv, fmt.Sprintf("RESTIC_REPO=%s", resticRepo))
}
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
if bkConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, bkConfig.postHookCmd)
if err := internal.RunBackupCmdRemote(cl, "create", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
func copyToFile(outfile string, r io.Reader) error {
tmpFile, err := os.CreateTemp(filepath.Dir(outfile), ".tar_temp")
var appBackupSnapshotsCommand = cli.Command{
Name: "snapshots",
Aliases: []string{"s"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
snapshotFlag,
},
Before: internal.SubCommandBefore,
Usage: "List backup snapshots",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
cl, err := client.New(app.Server)
if err != nil {
return err
logrus.Fatal(err)
}
tmpPath := tmpFile.Name()
_, err = io.Copy(tmpFile, r)
tmpFile.Close()
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
os.Remove(tmpPath)
return err
logrus.Fatal(err)
}
if err = os.Rename(tmpPath, outfile); err != nil {
os.Remove(tmpPath)
return err
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if err := internal.RunBackupCmdRemote(cl, "snapshots", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
func cleanupTempArchives(tarPaths []string) error {
for _, tarPath := range tarPaths {
if err := os.RemoveAll(tarPath); err != nil {
return err
}
logrus.Debugf("remove temporary archive file %s", tarPath)
}
return nil
}
func mergeArchives(tarPaths []string, serviceName string) error {
var out io.Writer
var cout *pgzip.Writer
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s_%s.tar.gz", serviceName, TimeStamp()))
fout, err := os.Create(localBackupPath)
if err != nil {
return fmt.Errorf("Failed to open %s: %s", localBackupPath, err)
}
defer fout.Close()
out = fout
cout = pgzip.NewWriter(out)
out = cout
tw := tar.NewWriter(out)
for _, tarPath := range tarPaths {
if err := addTar(tw, tarPath); err != nil {
return fmt.Errorf("failed to merge %s: %v", tarPath, err)
}
}
if err := tw.Close(); err != nil {
return fmt.Errorf("failed to close tar writer %v", err)
}
if cout != nil {
if err := cout.Flush(); err != nil {
return fmt.Errorf("failed to flush: %s", err)
} else if err = cout.Close(); err != nil {
return fmt.Errorf("failed to close compressed writer: %s", err)
}
}
logrus.Infof("backed up %s to %s", serviceName, localBackupPath)
return nil
}
func addTar(tw *tar.Writer, pth string) (err error) {
var tr *tar.Reader
var rc io.ReadCloser
var hdr *tar.Header
if tr, rc, err = openTarFile(pth); err != nil {
return
}
for {
if hdr, err = tr.Next(); err != nil {
if err == io.EOF {
err = nil
}
break
}
if err = tw.WriteHeader(hdr); err != nil {
break
} else if _, err = io.Copy(tw, tr); err != nil {
break
}
}
if err == nil {
err = rc.Close()
} else {
rc.Close()
}
return
}
func openTarFile(pth string) (tr *tar.Reader, rc io.ReadCloser, err error) {
var fin *os.File
var n int
buff := make([]byte, 1024)
if fin, err = os.Open(pth); err != nil {
return
}
if n, err = fin.Read(buff); err != nil {
fin.Close()
return
} else if n == 0 {
fin.Close()
err = fmt.Errorf("%s is empty", pth)
return
}
if _, err = fin.Seek(0, 0); err != nil {
fin.Close()
return
}
rc = fin
tr = tar.NewReader(rc)
return tr, rc, nil
var appBackupCommand = cli.Command{
Name: "backup",
Aliases: []string{"b"},
Usage: "Manage app backups",
ArgsUsage: "<domain>",
Subcommands: []cli.Command{
appBackupListCommand,
appBackupSnapshotsCommand,
appBackupDownloadCommand,
appBackupCreateCommand,
},
}


@ -10,6 +10,7 @@ import (
"strings"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/app"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
@ -45,6 +46,17 @@ Example:
},
Before: internal.SubCommandBefore,
Subcommands: []cli.Command{appCmdListCommand},
BashComplete: func(ctx *cli.Context) {
args := ctx.Args()
switch len(args) {
case 0:
autocomplete.AppNameComplete(ctx)
case 1:
autocomplete.ServiceNameComplete(args.Get(0))
case 2:
cmdNameComplete(args.Get(0))
}
},
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
@ -187,6 +199,20 @@ func parseCmdArgs(args []string, isLocal bool) (bool, string) {
return hasCmdArgs, parsedCmdArgs
}
func cmdNameComplete(appName string) {
app, err := app.Get(appName)
if err != nil {
return
}
cmdNames, _ := getShCmdNames(app)
if err != nil {
return
}
for _, n := range cmdNames {
fmt.Println(n)
}
}
var appCmdListCommand = cli.Command{
Name: "list",
Aliases: []string{"ls"},
@ -222,13 +248,11 @@ var appCmdListCommand = cli.Command{
}
}
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
cmdNames, err := config.ReadAbraShCmdNames(abraShPath)
cmdNames, err := getShCmdNames(app)
if err != nil {
logrus.Fatal(err)
}
sort.Strings(cmdNames)
for _, cmdName := range cmdNames {
fmt.Println(cmdName)
}
@ -236,3 +260,14 @@ var appCmdListCommand = cli.Command{
return nil
},
}
func getShCmdNames(app config.App) ([]string, error) {
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
cmdNames, err := config.ReadAbraShCmdNames(abraShPath)
if err != nil {
return nil, err
}
sort.Strings(cmdNames)
return cmdNames, nil
}


@ -2,19 +2,24 @@ package app
import (
"context"
"errors"
"fmt"
"io"
"os"
"path"
"path/filepath"
"strings"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/container"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
@ -49,46 +54,14 @@ And if you want to copy that file back to your current working directory locally
dst := c.Args().Get(2)
if src == "" {
logrus.Fatal("missing <src> argument")
} else if dst == "" {
}
if dst == "" {
logrus.Fatal("missing <dest> argument")
}
parsedSrc := strings.SplitN(src, ":", 2)
parsedDst := strings.SplitN(dst, ":", 2)
errorMsg := "one of <src>/<dest> arguments must take $SERVICE:$PATH form"
if len(parsedSrc) == 2 && len(parsedDst) == 2 {
logrus.Fatal(errorMsg)
} else if len(parsedSrc) != 2 {
if len(parsedDst) != 2 {
logrus.Fatal(errorMsg)
}
} else if len(parsedDst) != 2 {
if len(parsedSrc) != 2 {
logrus.Fatal(errorMsg)
}
}
var service string
var srcPath string
var dstPath string
isToContainer := false // <container:src> <dst>
if len(parsedSrc) == 2 {
service = parsedSrc[0]
srcPath = parsedSrc[1]
dstPath = dst
logrus.Debugf("assuming transfer is coming FROM the container")
} else if len(parsedDst) == 2 {
service = parsedDst[0]
dstPath = parsedDst[1]
srcPath = src
isToContainer = true // <src> <container:dst>
logrus.Debugf("assuming transfer is going TO the container")
}
if isToContainer {
if _, err := os.Stat(srcPath); os.IsNotExist(err) {
logrus.Fatalf("%s does not exist locally?", srcPath)
}
srcPath, dstPath, service, toContainer, err := parseSrcAndDst(src, dst)
if err != nil {
logrus.Fatal(err)
}
cl, err := client.New(app.Server)
@ -96,7 +69,18 @@ And if you want to copy that file back to your current working directory locally
logrus.Fatal(err)
}
if err := configureAndCp(c, cl, app, srcPath, dstPath, service, isToContainer); err != nil {
container, err := containerPkg.GetContainerFromStackAndService(cl, app.StackName(), service)
if err != nil {
logrus.Fatal(err)
}
logrus.Debugf("retrieved %s as target container on %s", formatter.ShortenID(container.ID), app.Server)
if toContainer {
err = CopyToContainer(cl, container.ID, srcPath, dstPath)
} else {
err = CopyFromContainer(cl, container.ID, srcPath, dstPath)
}
if err != nil {
logrus.Fatal(err)
}
@ -104,46 +88,292 @@ And if you want to copy that file back to your current working directory locally
},
}
func configureAndCp(
c *cli.Context,
cl *dockerClient.Client,
app config.App,
srcPath string,
dstPath string,
service string,
isToContainer bool) error {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), service))
var errServiceMissing = errors.New("one of <src>/<dest> arguments must take $SERVICE:$PATH form")
container, err := container.GetContainer(context.Background(), cl, filters, internal.NoInput)
if err != nil {
logrus.Fatal(err)
// parseSrcAndDst parses the src and dst strings. One of src or dst must be of the form $SERVICE:$PATH
func parseSrcAndDst(src, dst string) (srcPath string, dstPath string, service string, toContainer bool, err error) {
parsedSrc := strings.SplitN(src, ":", 2)
parsedDst := strings.SplitN(dst, ":", 2)
if len(parsedSrc)+len(parsedDst) != 3 {
return "", "", "", false, errServiceMissing
}
if len(parsedSrc) == 2 {
return parsedSrc[1], dst, parsedSrc[0], false, nil
}
if len(parsedDst) == 2 {
return src, parsedDst[1], parsedDst[0], true, nil
}
return "", "", "", false, errServiceMissing
}
logrus.Debugf("retrieved %s as target container on %s", formatter.ShortenID(container.ID), app.Server)
// CopyToContainer copies a file or directory from the local file system to the container.
// See the possible copy modes and their documentation.
func CopyToContainer(cl *dockerClient.Client, containerID, srcPath, dstPath string) error {
srcStat, err := os.Stat(srcPath)
if err != nil {
return fmt.Errorf("local %s ", err)
}
if isToContainer {
toTarOpts := &archive.TarOptions{NoOverwriteDirNonDir: true, Compression: archive.Gzip}
dstStat, err := cl.ContainerStatPath(context.Background(), containerID, dstPath)
dstExists := true
if err != nil {
if errdefs.IsNotFound(err) {
dstExists = false
} else {
return fmt.Errorf("remote path: %s", err)
}
}
mode, err := copyMode(srcPath, dstPath, srcStat.Mode(), dstStat.Mode, dstExists)
if err != nil {
return err
}
movePath := ""
switch mode {
case CopyModeDirToDir:
// Add the src directory to the destination path
_, srcDir := path.Split(srcPath)
dstPath = path.Join(dstPath, srcDir)
// Make sure the dst directory exists.
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
if _, err := container.RunExec(dcli, cl, containerID, &types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"mkdir", "-p", dstPath},
Detach: false,
Tty: true,
}); err != nil {
return fmt.Errorf("create remote directory: %s", err)
}
case CopyModeFileToFile:
// Remove the file component from the path, since docker can only copy
// to a directory.
dstPath, _ = path.Split(dstPath)
case CopyModeFileToFileRename:
// Copy the file to the temp directory and move it to its dstPath
// afterwards.
movePath = dstPath
dstPath = "/tmp"
}
toTarOpts := &archive.TarOptions{IncludeSourceDir: true, NoOverwriteDirNonDir: true, Compression: archive.Gzip}
content, err := archive.TarWithOptions(srcPath, toTarOpts)
if err != nil {
logrus.Fatal(err)
return err
}
logrus.Debugf("copy %s from local to %s on container", srcPath, dstPath)
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), container.ID, dstPath, content, copyOpts); err != nil {
logrus.Fatal(err)
if err := cl.CopyToContainer(context.Background(), containerID, dstPath, content, copyOpts); err != nil {
return err
}
} else {
content, _, err := cl.CopyFromContainer(context.Background(), container.ID, srcPath)
if movePath != "" {
_, srcFile := path.Split(srcPath)
dcli, err := command.NewDockerCli()
if err != nil {
logrus.Fatal(err)
return err
}
defer content.Close()
fromTarOpts := &archive.TarOptions{NoOverwriteDirNonDir: true, Compression: archive.Gzip}
if err := archive.Untar(content, dstPath, fromTarOpts); err != nil {
logrus.Fatal(err)
if _, err := container.RunExec(dcli, cl, containerID, &types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"mv", path.Join("/tmp", srcFile), movePath},
Detach: false,
Tty: true,
}); err != nil {
return fmt.Errorf("create remote directory: %s", err)
}
}
return nil
}
// CopyFromContainer copies a file or directory from the given container to the local file system.
// See the possible copy modes and their documentation.
func CopyFromContainer(cl *dockerClient.Client, containerID, srcPath, dstPath string) error {
srcStat, err := cl.ContainerStatPath(context.Background(), containerID, srcPath)
if err != nil {
if errdefs.IsNotFound(err) {
return fmt.Errorf("remote: %s does not exist", srcPath)
} else {
return fmt.Errorf("remote path: %s", err)
}
}
dstStat, err := os.Stat(dstPath)
dstExists := true
var dstMode os.FileMode
if err != nil {
if os.IsNotExist(err) {
dstExists = false
} else {
return fmt.Errorf("remote path: %s", err)
}
} else {
dstMode = dstStat.Mode()
}
mode, err := copyMode(srcPath, dstPath, srcStat.Mode, dstMode, dstExists)
if err != nil {
return err
}
moveDstDir := ""
moveDstFile := ""
switch mode {
case CopyModeFileToFile:
// Remove the file component from the path, since docker can only copy
// to a directory.
dstPath, _ = path.Split(dstPath)
case CopyModeFileToFileRename:
// Copy the file to the temp directory and move it to its dstPath
// afterwards.
moveDstFile = dstPath
dstPath = "/tmp"
case CopyModeFilesToDir:
// Copy the directory to the temp directory and move it to its
// dstPath afterwards.
moveDstDir = path.Join(dstPath, "/")
dstPath = "/tmp"
// Make sure the temp directory always gets removed
defer os.Remove(path.Join("/tmp"))
}
content, _, err := cl.CopyFromContainer(context.Background(), containerID, srcPath)
if err != nil {
return fmt.Errorf("copy: %s", err)
}
defer content.Close()
if err := archive.Untar(content, dstPath, &archive.TarOptions{
NoOverwriteDirNonDir: true,
Compression: archive.Gzip,
NoLchown: true,
}); err != nil {
return fmt.Errorf("untar: %s", err)
}
if moveDstFile != "" {
_, srcFile := path.Split(strings.TrimSuffix(srcPath, "/"))
if err := moveFile(path.Join("/tmp", srcFile), moveDstFile); err != nil {
return err
}
}
if moveDstDir != "" {
_, srcDir := path.Split(strings.TrimSuffix(srcPath, "/"))
if err := moveDir(path.Join("/tmp", srcDir), moveDstDir); err != nil {
return err
}
}
return nil
}
var (
ErrCopyDirToFile = fmt.Errorf("can't copy dir to file")
ErrDstDirNotExist = fmt.Errorf("destination directory does not exist")
)
type CopyMode int
const (
// Copy a src file to a dest file. The src and dest file names are the same.
// <dir_src>/<file> + <dir_dst>/<file> -> <dir_dst>/<file>
CopyModeFileToFile = CopyMode(iota)
// Copy a src file to a dest file. The src and dest file names are not the same.
// <dir_src>/<file_src> + <dir_dst>/<file_dst> -> <dir_dst>/<file_dst>
CopyModeFileToFileRename
// Copy a src file to dest directory. The dest file gets created in the dest
// folder with the src filename.
// <dir_src>/<file> + <dir_dst> -> <dir_dst>/<file>
CopyModeFileToDir
// Copy a src directory to dest directory.
// <dir_src> + <dir_dst> -> <dir_dst>/<dir_src>
CopyModeDirToDir
// Copy all files in the src directory to the dest directory. This works recursively.
// <dir_src>/ + <dir_dst> -> <dir_dst>/<files_from_dir_src>
CopyModeFilesToDir
)
// copyMode takes a src and dest path and file mode to determine the copy mode.
// See the possible copy modes and their documentation.
func copyMode(srcPath, dstPath string, srcMode os.FileMode, dstMode os.FileMode, dstExists bool) (CopyMode, error) {
_, srcFile := path.Split(srcPath)
_, dstFile := path.Split(dstPath)
if srcMode.IsDir() {
if !dstExists {
return -1, ErrDstDirNotExist
}
if dstMode.IsDir() {
if strings.HasSuffix(srcPath, "/") {
return CopyModeFilesToDir, nil
}
return CopyModeDirToDir, nil
}
return -1, ErrCopyDirToFile
}
if dstMode.IsDir() {
return CopyModeFileToDir, nil
}
if srcFile != dstFile {
return CopyModeFileToFileRename, nil
}
return CopyModeFileToFile, nil
}
// moveDir moves all files from a source path to the destination path recursively.
func moveDir(sourcePath, destPath string) error {
return filepath.Walk(sourcePath, func(p string, info os.FileInfo, err error) error {
if err != nil {
return err
}
newPath := path.Join(destPath, strings.TrimPrefix(p, sourcePath))
if info.IsDir() {
err := os.Mkdir(newPath, info.Mode())
if err != nil {
if os.IsExist(err) {
return nil
}
return err
}
}
if info.Mode().IsRegular() {
return moveFile(p, newPath)
}
return nil
})
}
// moveFile moves a file from a source path to a destination path.
func moveFile(sourcePath, destPath string) error {
inputFile, err := os.Open(sourcePath)
if err != nil {
return err
}
outputFile, err := os.Create(destPath)
if err != nil {
inputFile.Close()
return err
}
defer outputFile.Close()
_, err = io.Copy(outputFile, inputFile)
inputFile.Close()
if err != nil {
return err
}
// Remove file after successful copy.
err = os.Remove(sourcePath)
if err != nil {
return err
}
return nil
}

cli/app/cp_test.go (new file, 113 lines)

@ -0,0 +1,113 @@
package app
import (
"os"
"testing"
)
func TestParse(t *testing.T) {
tests := []struct {
src string
dst string
srcPath string
dstPath string
service string
toContainer bool
err error
}{
{src: "foo", dst: "bar", err: errServiceMissing},
{src: "app:foo", dst: "app:bar", err: errServiceMissing},
{src: "app:foo", dst: "bar", srcPath: "foo", dstPath: "bar", service: "app", toContainer: false},
{src: "foo", dst: "app:bar", srcPath: "foo", dstPath: "bar", service: "app", toContainer: true},
}
for i, tc := range tests {
srcPath, dstPath, service, toContainer, err := parseSrcAndDst(tc.src, tc.dst)
if srcPath != tc.srcPath {
t.Errorf("[%d] srcPath: want (%s), got(%s)", i, tc.srcPath, srcPath)
}
if dstPath != tc.dstPath {
t.Errorf("[%d] dstPath: want (%s), got(%s)", i, tc.dstPath, dstPath)
}
if service != tc.service {
t.Errorf("[%d] service: want (%s), got(%s)", i, tc.service, service)
}
if toContainer != tc.toContainer {
t.Errorf("[%d] toConainer: want (%t), got(%t)", i, tc.toContainer, toContainer)
}
if err == nil && tc.err != nil && err.Error() != tc.err.Error() {
t.Errorf("[%d] err: want (%s), got(%s)", i, tc.err, err)
}
}
}
func TestCopyMode(t *testing.T) {
tests := []struct {
srcPath string
dstPath string
srcMode os.FileMode
dstMode os.FileMode
dstExists bool
mode CopyMode
err error
}{
{
srcPath: "foo.txt",
dstPath: "foo.txt",
srcMode: os.ModePerm,
dstMode: os.ModePerm,
dstExists: true,
mode: CopyModeFileToFile,
},
{
srcPath: "foo.txt",
dstPath: "bar.txt",
srcMode: os.ModePerm,
dstExists: true,
mode: CopyModeFileToFileRename,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModeDir,
dstExists: true,
mode: CopyModeDirToDir,
},
{
srcPath: "foo/",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModeDir,
dstExists: true,
mode: CopyModeFilesToDir,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstExists: false,
mode: -1,
err: ErrDstDirNotExist,
},
{
srcPath: "foo",
dstPath: "foo",
srcMode: os.ModeDir,
dstMode: os.ModePerm,
dstExists: true,
mode: -1,
err: ErrCopyDirToFile,
},
}
for i, tc := range tests {
mode, err := copyMode(tc.srcPath, tc.dstPath, tc.srcMode, tc.dstMode, tc.dstExists)
if mode != tc.mode {
t.Errorf("[%d] mode: want (%d), got(%d)", i, tc.mode, mode)
}
if err != tc.err {
t.Errorf("[%d] err: want (%s), got(%s)", i, tc.err, err)
}
}
}


@ -2,75 +2,26 @@ package app
import (
"context"
"fmt"
"io"
"os"
"slices"
"sync"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/service"
"coopcloud.tech/abra/pkg/upstream/stack"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var logOpts = types.ContainerLogsOptions{
ShowStderr: true,
ShowStdout: true,
Since: "",
Until: "",
Timestamps: true,
Follow: true,
Tail: "20",
Details: false,
}
// stackLogs lists logs for all stack services
func stackLogs(c *cli.Context, app config.App, client *dockerClient.Client) {
filters, err := app.Filters(true, false)
if err != nil {
logrus.Fatal(err)
}
serviceOpts := types.ServiceListOptions{Filters: filters}
services, err := client.ServiceList(context.Background(), serviceOpts)
if err != nil {
logrus.Fatal(err)
}
var wg sync.WaitGroup
for _, service := range services {
wg.Add(1)
go func(s string) {
if internal.StdErrOnly {
logOpts.ShowStdout = false
}
logs, err := client.ServiceLogs(context.Background(), s, logOpts)
if err != nil {
logrus.Fatal(err)
}
defer logs.Close()
_, err = io.Copy(os.Stdout, logs)
if err != nil && err != io.EOF {
logrus.Fatal(err)
}
}(service.ID)
}
wg.Wait()
os.Exit(0)
}
var appLogsCommand = cli.Command{
Name: "logs",
Aliases: []string{"l"},
@ -105,37 +56,70 @@ var appLogsCommand = cli.Command{
logrus.Fatalf("%s is not deployed?", app.Name)
}
logOpts.Since = internal.SinceLogs
serviceName := c.Args().Get(1)
if serviceName == "" {
logrus.Debugf("tailing logs for all %s services", app.Recipe)
stackLogs(c, app, cl)
} else {
logrus.Debugf("tailing logs for %s", serviceName)
if err := tailServiceLogs(c, cl, app, serviceName); err != nil {
logrus.Fatal(err)
serviceNames := []string{}
if serviceName != "" {
serviceNames = []string{serviceName}
}
err = tailLogs(cl, app, serviceNames)
if err != nil {
logrus.Fatal(err)
}
return nil
},
}
func tailServiceLogs(c *cli.Context, cl *dockerClient.Client, app config.App, serviceName string) error {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("%s_%s", app.StackName(), serviceName))
chosenService, err := service.GetService(context.Background(), cl, filters, internal.NoInput)
// tailLogs prints logs for the given app, optionally filtered by the given
// service names. It also checks whether the latest task is not running and,
// if so, prints the past tasks.
func tailLogs(cl *dockerClient.Client, app config.App, serviceNames []string) error {
f, err := app.Filters(true, false, serviceNames...)
if err != nil {
logrus.Fatal(err)
return err
}
if internal.StdErrOnly {
logOpts.ShowStdout = false
services, err := cl.ServiceList(context.Background(), types.ServiceListOptions{Filters: f})
if err != nil {
return err
}
logs, err := cl.ServiceLogs(context.Background(), chosenService.ID, logOpts)
var wg sync.WaitGroup
for _, service := range services {
filters := filters.NewArgs()
filters.Add("name", service.Spec.Name)
tasks, err := cl.TaskList(context.Background(), types.TaskListOptions{Filters: f})
if err != nil {
return err
}
if len(tasks) > 0 {
// Sort the tasks by the CreatedAt field in descending order so that the
// most recent task comes first.
slices.SortFunc[[]swarm.Task](tasks, func(t1, t2 swarm.Task) int {
return int(t2.Meta.CreatedAt.Unix() - t1.Meta.CreatedAt.Unix())
})
lastTask := tasks[0].Status
if lastTask.State != swarm.TaskStateRunning {
for _, task := range tasks {
logrus.Errorf("[%s] %s State %s: %s", service.Spec.Name, task.Meta.CreatedAt.Format(time.RFC3339), task.Status.State, task.Status.Err)
}
}
}
// Collect the logs in a go routine, so the logs from all services are
// collected in parallel.
wg.Add(1)
go func(serviceID string) {
logs, err := cl.ServiceLogs(context.Background(), serviceID, types.ContainerLogsOptions{
ShowStderr: true,
ShowStdout: !internal.StdErrOnly,
Since: internal.SinceLogs,
Until: "",
Timestamps: true,
Follow: true,
Tail: "20",
Details: false,
})
if err != nil {
logrus.Fatal(err)
}
@ -145,6 +129,11 @@ func tailServiceLogs(c *cli.Context, cl *dockerClient.Client, app config.App, se
if err != nil && err != io.EOF {
logrus.Fatal(err)
}
}(service.ID)
}
// Wait for all log streams to be closed.
wg.Wait()
return nil
}


@ -55,8 +55,16 @@ var appNewCommand = cli.Command{
internal.ChaosFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "[<recipe>]",
BashComplete: autocomplete.RecipeNameComplete,
ArgsUsage: "[<recipe>] [<version>]",
BashComplete: func(ctx *cli.Context) {
args := ctx.Args()
switch len(args) {
case 0:
autocomplete.RecipeNameComplete(ctx)
case 1:
autocomplete.RecipeVersionComplete(ctx.Args().Get(0))
}
},
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipe(c)
@ -69,9 +77,15 @@ var appNewCommand = cli.Command{
logrus.Fatal(err)
}
}
if c.Args().Get(1) == "" {
if err := recipePkg.EnsureLatest(recipe.Name); err != nil {
logrus.Fatal(err)
}
} else {
if err := recipePkg.EnsureVersion(recipe.Name, c.Args().Get(1)); err != nil {
logrus.Fatal(err)
}
}
}
if err := ensureServerFlag(); err != nil {
@ -97,7 +111,7 @@ var appNewCommand = cli.Command{
var secrets AppSecrets
var secretTable *jsontable.JSONTable
if internal.Secrets {
sampleEnv, err := recipe.SampleEnv(config.ReadEnvOptions{})
sampleEnv, err := recipe.SampleEnv()
if err != nil {
logrus.Fatal(err)
}
@ -108,7 +122,7 @@ var appNewCommand = cli.Command{
}
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
secretsConfig, err := secret.ReadSecretsConfig(envSamplePath, composeFiles, recipe.Name)
secretsConfig, err := secret.ReadSecretsConfig(envSamplePath, composeFiles, config.StackName(internal.Domain))
if err != nil {
return err
}
@ -168,14 +182,8 @@ var appNewCommand = cli.Command{
type AppSecrets map[string]string
// createSecrets creates all secrets for a new app.
func createSecrets(cl *dockerClient.Client, secretsConfig map[string]string, sanitisedAppName string) (AppSecrets, error) {
// NOTE(d1): trim to match app.StackName() implementation
if len(sanitisedAppName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", sanitisedAppName, sanitisedAppName[:45])
sanitisedAppName = sanitisedAppName[:45]
}
secrets, err := secret.GenerateSecrets(cl, secretsConfig, sanitisedAppName, internal.NewAppServer)
func createSecrets(cl *dockerClient.Client, secretsConfig map[string]secret.Secret, sanitisedAppName string) (AppSecrets, error) {
secrets, err := secret.GenerateSecrets(cl, secretsConfig, internal.NewAppServer)
if err != nil {
return nil, err
}
@ -217,7 +225,7 @@ func ensureDomainFlag(recipe recipe.Recipe, server string) error {
}
// promptForSecrets asks if we should generate secrets for a new app.
func promptForSecrets(recipeName string, secretsConfig map[string]string) error {
func promptForSecrets(recipeName string, secretsConfig map[string]secret.Secret) error {
if len(secretsConfig) == 0 {
logrus.Debugf("%s has no secrets to generate, skipping...", recipeName)
return nil


@ -3,7 +3,9 @@ package app
import (
"context"
"fmt"
"log"
"os"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
@ -124,9 +126,11 @@ flag.
if len(vols) > 0 {
for _, vol := range vols {
err := cl.VolumeRemove(context.Background(), vol, internal.Force) // last argument is for force removing
err = retryFunc(5, func() error {
return cl.VolumeRemove(context.Background(), vol, internal.Force) // last argument is for force removing
})
if err != nil {
logrus.Fatal(err)
log.Fatalf("removing volumes failed: %s", err)
}
logrus.Info(fmt.Sprintf("volume %s removed", vol))
}
@ -143,3 +147,21 @@ flag.
return nil
},
}
// retryFunc retries the given function up to the given number of retries. After
// the nth attempt it waits (n + 1)^2 seconds before the next retry (starting with
// n=0). It returns an error if the function still fails after the last retry.
func retryFunc(retries int, fn func() error) error {
for i := 0; i < retries; i++ {
err := fn()
if err == nil {
return nil
}
if i+1 < retries {
sleep := time.Duration(i+1) * time.Duration(i+1)
logrus.Infof("%s: waiting %d seconds before next retry", err, sleep)
time.Sleep(sleep * time.Second)
}
}
return fmt.Errorf("%d retries failed", retries)
}

cli/app/remove_test.go (new file, 26 lines)

@ -0,0 +1,26 @@
package app
import (
"fmt"
"testing"
)
func TestRetryFunc(t *testing.T) {
err := retryFunc(1, func() error { return nil })
if err != nil {
t.Errorf("should not return an error: %s", err)
}
i := 0
fn := func() error {
i++
return fmt.Errorf("oh no, something went wrong!")
}
err = retryFunc(2, fn)
if err == nil {
t.Error("should return an error")
}
if i != 2 {
t.Errorf("The function should have been called 1 times, got %d", i)
}
}


@ -1,223 +1,82 @@
package app
import (
"context"
"errors"
"fmt"
"os"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/recipe"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type restoreConfig struct {
preHookCmd string
postHookCmd string
var targetPath string
var targetPathFlag = &cli.StringFlag{
Name: "target, t",
Usage: "Target path",
Destination: &targetPath,
}
var appRestoreCommand = cli.Command{
Name: "restore",
Aliases: []string{"rs"},
Usage: "Run app restore",
ArgsUsage: "<domain> <service> <file>",
Usage: "Restore an app backup",
ArgsUsage: "<domain> <service>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.OfflineFlag,
internal.ChaosFlag,
targetPathFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
Description: `
Run an app restore.
Pre/post hook commands are defined in the recipe configuration. Abra reads this
configuration and runs the commands in the context of the service before
restoring the backup.
Unlike "abra app backup", restore must be run on a per-service basis. You can
not restore all services in one go. Backup files produced by Abra are
compressed archives which use absolute paths. This allows Abra to restore
according to standard tar command logic, i.e. the backup will be restored to
the path it was originally backed up from.
Example:
abra app restore example.com app ~/.abra/backups/example_com_app_609341138.tar.gz
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
recipe, err := recipe.Get(app.Recipe, internal.Offline)
if err != nil {
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Chaos {
if err := recipePkg.EnsureIsClean(app.Recipe); err != nil {
if err := recipe.EnsureIsClean(app.Recipe); err != nil {
logrus.Fatal(err)
}
if !internal.Offline {
if err := recipePkg.EnsureUpToDate(app.Recipe); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
if err := recipePkg.EnsureLatest(app.Recipe); err != nil {
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
serviceName := c.Args().Get(1)
if serviceName == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <service>?"))
}
backupPath := c.Args().Get(2)
if backupPath == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <file>?"))
}
if _, err := os.Stat(backupPath); err != nil {
if os.IsNotExist(err) {
logrus.Fatalf("%s doesn't exist?", backupPath)
}
}
restoreConfigs := make(map[string]restoreConfig)
for _, service := range recipe.Config.Services {
if restoreEnabled, ok := service.Deploy.Labels["backupbot.restore"]; ok {
if restoreEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
rsConfig := restoreConfig{}
logrus.Debugf("restore config detected for %s", fullServiceName)
if preHookCmd, ok := service.Deploy.Labels["backupbot.restore.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
rsConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.restore.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
rsConfig.postHookCmd = postHookCmd
}
restoreConfigs[service.Name] = rsConfig
}
}
}
rsConfig, ok := restoreConfigs[serviceName]
if !ok {
rsConfig = restoreConfig{}
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
if err := runRestore(cl, app, backupPath, serviceName, rsConfig); err != nil {
targetContainer, err := internal.RetrieveBackupBotContainer(cl)
if err != nil {
logrus.Fatal(err)
}
execEnv := []string{fmt.Sprintf("SERVICE=%s", app.Domain)}
if snapshot != "" {
logrus.Debugf("including SNAPSHOT=%s in backupbot exec invocation", snapshot)
execEnv = append(execEnv, fmt.Sprintf("SNAPSHOT=%s", snapshot))
}
if targetPath != "" {
logrus.Debugf("including TARGET=%s in backupbot exec invocation", targetPath)
execEnv = append(execEnv, fmt.Sprintf("TARGET=%s", targetPath))
}
if err := internal.RunBackupCmdRemote(cl, "restore", targetContainer.ID, execEnv); err != nil {
logrus.Fatal(err)
}
return nil
},
}
// runRestore does the actual restore logic.
func runRestore(cl *dockerClient.Client, app config.App, backupPath, serviceName string, rsConfig restoreConfig) error {
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if rsConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, rsConfig.preHookCmd)
}
backupReader, err := os.Open(backupPath)
if err != nil {
return err
}
content, err := archive.DecompressStream(backupReader)
if err != nil {
return err
}
// NOTE(d1): we use absolute paths so tar knows what to do. it will restore
// files according to the paths set in the compressed archive
restorePath := "/"
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), targetContainer.ID, restorePath, content, copyOpts); err != nil {
return err
}
logrus.Infof("restored %s to %s", backupPath, fullServiceName)
if rsConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, rsConfig.postHookCmd)
}
return nil
}


@ -91,7 +91,7 @@ var appRunCommand = cli.Command{
logrus.Fatal(err)
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
logrus.Fatal(err)
}


@ -20,19 +20,23 @@ import (
"github.com/urfave/cli"
)
var allSecrets bool
var allSecretsFlag = &cli.BoolFlag{
var (
allSecrets bool
allSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &allSecrets,
Usage: "Generate all secrets",
}
)
var rmAllSecrets bool
var rmAllSecretsFlag = &cli.BoolFlag{
var (
rmAllSecrets bool
rmAllSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &rmAllSecrets,
Usage: "Remove all secrets",
}
)
var appSecretGenerateCommand = cli.Command{
Name: "generate",
@ -87,28 +91,22 @@ var appSecretGenerateCommand = cli.Command{
logrus.Fatal(err)
}
secretsConfig, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secrets, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
logrus.Fatal(err)
}
secretsToCreate := make(map[string]string)
if allSecrets {
secretsToCreate = secretsConfig
} else {
if !allSecrets {
secretName := c.Args().Get(1)
secretVersion := c.Args().Get(2)
matches := false
for name := range secretsConfig {
if secretName == name {
secretsToCreate[name] = secretVersion
matches = true
}
}
if !matches {
s, ok := secrets[secretName]
if !ok {
logrus.Fatalf("%s doesn't exist in the env config?", secretName)
}
s.Version = secretVersion
secrets = map[string]secret.Secret{
secretName: s,
}
}
cl, err := client.New(app.Server)
@ -116,7 +114,7 @@ var appSecretGenerateCommand = cli.Command{
logrus.Fatal(err)
}
secretVals, err := secret.GenerateSecrets(cl, secretsToCreate, app.StackName(), app.Server)
secretVals, err := secret.GenerateSecrets(cl, secrets, app.Server)
if err != nil {
logrus.Fatal(err)
}
@ -276,7 +274,7 @@ Example:
logrus.Fatal(err)
}
secretsConfig, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secrets, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
logrus.Fatal(err)
}
@ -311,12 +309,7 @@ Example:
match := false
secretToRm := c.Args().Get(1)
for secretName, secretValue := range secretsConfig {
val, err := secret.ParseSecretValue(secretValue)
if err != nil {
logrus.Fatal(err)
}
for secretName, val := range secrets {
secretRemoteName := fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, val.Version)
if _, ok := remoteSecretNames[secretRemoteName]; ok {
if secretToRm != "" {


@ -1,35 +1,67 @@
package internal
import (
"strings"
"context"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/service"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
)
// SafeSplit splits up a string into a list of commands safely.
func SafeSplit(s string) []string {
split := strings.Split(s, " ")
var result []string
var inquote string
var block string
for _, i := range split {
if inquote == "" {
if strings.HasPrefix(i, "'") || strings.HasPrefix(i, "\"") {
inquote = string(i[0])
block = strings.TrimPrefix(i, inquote) + " "
} else {
result = append(result, i)
}
} else {
if !strings.HasSuffix(i, inquote) {
block += i + " "
} else {
block += strings.TrimSuffix(i, inquote)
inquote = ""
result = append(result, block)
block = ""
}
}
// RetrieveBackupBotContainer gets the deployed backupbot container.
func RetrieveBackupBotContainer(cl *dockerClient.Client) (types.Container, error) {
ctx := context.Background()
chosenService, err := service.GetServiceByLabel(ctx, cl, config.BackupbotLabel, NoInput)
if err != nil {
return types.Container{}, err
}
return result
logrus.Debugf("retrieved %s as backup enabled service", chosenService.Spec.Name)
filters := filters.NewArgs()
filters.Add("name", chosenService.Spec.Name)
targetContainer, err := containerPkg.GetContainer(
ctx,
cl,
filters,
NoInput,
)
if err != nil {
return types.Container{}, err
}
return targetContainer, nil
}
// RunBackupCmdRemote runs a backup related command on a remote backupbot container.
func RunBackupCmdRemote(cl *dockerClient.Client, backupCmd string, containerID string, execEnv []string) error {
execBackupListOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: []string{"/usr/bin/backup", "--", backupCmd},
Detach: false,
Env: execEnv,
Tty: true,
}
logrus.Debugf("running backup %s on %s with exec config %v", backupCmd, containerID, execBackupListOpts)
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
if _, err := container.RunExec(dcli, cl, containerID, &execBackupListOpts); err != nil {
return err
}
return nil
}
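
The two helpers above pair naturally: locate the deployed backupbot container, then exec a backup subcommand inside it. A minimal sketch of a caller follows, assuming it lives inside abra's `cli` tree (the `internal` package is not importable elsewhere); the server name, subcommand and env vars are illustrative and not part of this changeset.

```
package cli // hypothetical placement inside abra's cli tree

import (
	"coopcloud.tech/abra/cli/internal"
	"coopcloud.tech/abra/pkg/client"
)

// listBackups is a hypothetical helper: find the backupbot container on a
// server and run a backup subcommand inside it.
func listBackups(serverName string) error {
	cl, err := client.New(serverName)
	if err != nil {
		return err
	}

	// Resolve the service labelled coop-cloud.backupbot.enabled to a container.
	target, err := internal.RetrieveBackupBotContainer(cl)
	if err != nil {
		return err
	}

	// "ls" is an assumed subcommand; RunBackupCmdRemote invokes
	// /usr/bin/backup -- <cmd> inside the container with the given env.
	execEnv := []string{"MACHINE_LOGS=true"} // assumed, illustration only
	return internal.RunBackupCmdRemote(cl, "ls", target.ID, execEnv)
}
```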

View File

@ -60,7 +60,7 @@ func RunCmdRemote(cl *dockerClient.Client, app config.App, abraSh, serviceName,
Tty: false,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
logrus.Infof("%s does not exist for %s, use /bin/sh as fallback", shell, app.Name)
shell = "/bin/sh"
}
@ -85,7 +85,7 @@ func RunCmdRemote(cl *dockerClient.Client, app config.App, abraSh, serviceName,
execCreateOpts.Tty = false
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
if _, err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
return err
}

View File

@ -3,6 +3,7 @@ package recipe
import (
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/recipe"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
@ -17,26 +18,31 @@ var recipeFetchCommand = cli.Command{
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.OfflineFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipeName := c.Args().First()
if recipeName != "" {
internal.ValidateRecipe(c)
if err := recipe.Ensure(recipeName); err != nil {
logrus.Fatal(err)
}
return nil
}
if err := recipe.EnsureExists(recipeName); err != nil {
catalogue, err := recipe.ReadRecipeCatalogue(internal.Offline)
if err != nil {
logrus.Fatal(err)
}
if err := recipe.EnsureUpToDate(recipeName); err != nil {
logrus.Fatal(err)
catlBar := formatter.CreateProgressbar(len(catalogue), "fetching latest recipes...")
for recipeName := range catalogue {
if err := recipe.Ensure(recipeName); err != nil {
logrus.Error(err)
}
if err := recipe.EnsureLatest(recipeName); err != nil {
logrus.Fatal(err)
catlBar.Add(1)
}
return nil

View File

@ -1,7 +1,9 @@
package recipe
import (
"errors"
"fmt"
"os"
"path"
"strconv"
"strings"
@ -140,7 +142,7 @@ your SSH keys configured on your account.
// getImageVersions retrieves image versions for a recipe
func getImageVersions(recipe recipe.Recipe) (map[string]string, error) {
var services = make(map[string]string)
services := make(map[string]string)
missingTag := false
for _, service := range recipe.Config.Services {
@ -207,6 +209,10 @@ func createReleaseFromTag(recipe recipe.Recipe, tagString, mainAppVersion string
tagString = fmt.Sprintf("%s+%s", tag.String(), mainAppVersion)
}
if err := addReleaseNotes(recipe, tagString); err != nil {
logrus.Fatal(err)
}
if err := commitRelease(recipe, tagString); err != nil {
logrus.Fatal(err)
}
@ -237,6 +243,82 @@ func getTagCreateOptions(tag string) (git.CreateTagOptions, error) {
return git.CreateTagOptions{Message: msg}, nil
}
// addReleaseNotes checks if the release/next release note exists and moves the
// file to release/<tag>.
func addReleaseNotes(recipe recipe.Recipe, tag string) error {
repoPath := path.Join(config.RECIPES_DIR, recipe.Name)
tagReleaseNotePath := path.Join(repoPath, "release", tag)
if _, err := os.Stat(tagReleaseNotePath); err == nil {
// Release note for the current tag already exists.
return nil
} else if !errors.Is(err, os.ErrNotExist) {
return err
}
nextReleaseNotePath := path.Join(repoPath, "release", "next")
if _, err := os.Stat(nextReleaseNotePath); err == nil {
// release/next note exists. Move it to release/<tag>
if internal.Dry {
logrus.Debugf("dry run: move release note from 'next' to %s", tag)
return nil
}
if !internal.NoInput {
prompt := &survey.Input{
Message: "Use release note in release/next?",
}
var addReleaseNote bool
if err := survey.AskOne(prompt, &addReleaseNote); err != nil {
return err
}
if !addReleaseNote {
return nil
}
}
err := os.Rename(nextReleaseNotePath, tagReleaseNotePath)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", "next"), internal.Dry)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", tag), internal.Dry)
if err != nil {
return err
}
} else if !errors.Is(err, os.ErrNotExist) {
return err
}
// No release note exists for the current release.
if internal.NoInput {
return nil
}
prompt := &survey.Input{
Message: "Release Note (leave empty for no release note)",
}
var releaseNote string
if err := survey.AskOne(prompt, &releaseNote); err != nil {
return err
}
if releaseNote == "" {
return nil
}
err := os.WriteFile(tagReleaseNotePath, []byte(releaseNote), 0o644)
if err != nil {
return err
}
err = gitPkg.Add(repoPath, path.Join("release", tag), internal.Dry)
if err != nil {
return err
}
return nil
}
func commitRelease(recipe recipe.Recipe, tag string) error {
if internal.Dry {
logrus.Debugf("dry run: no changes committed")
@ -404,6 +486,10 @@ func createReleaseFromPreviousTag(tagString, mainAppVersion string, recipe recip
}
}
if err := addReleaseNotes(recipe, tagString); err != nil {
logrus.Fatal(err)
}
if err := commitRelease(recipe, tagString); err != nil {
logrus.Fatalf("failed to commit changes: %s", err.Error())
}

14
go.mod
View File

@ -4,19 +4,20 @@ go 1.21
require (
coopcloud.tech/tagcmp v0.0.0-20211103052201-885b22f77d52
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355
github.com/AlecAivazis/survey/v2 v2.3.7
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7
github.com/Gurpartap/logrus-stack v0.0.0-20170710170904-89c00d8a28f4
github.com/docker/cli v24.0.7+incompatible
github.com/docker/distribution v2.8.3+incompatible
github.com/docker/docker v24.0.7+incompatible
github.com/docker/go-units v0.5.0
github.com/go-git/go-git/v5 v5.10.0
github.com/google/go-cmp v0.5.9
github.com/moby/sys/signal v0.7.0
github.com/moby/term v0.5.0
github.com/olekukonko/tablewriter v0.0.5
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.14.0
github.com/schollz/progressbar/v3 v3.14.1
github.com/sirupsen/logrus v1.9.3
gotest.tools/v3 v3.5.1
)
@ -47,7 +48,6 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
@ -56,7 +56,7 @@ require (
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.14.2 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect
@ -71,7 +71,7 @@ require (
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/skeema/knownhosts v1.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
@ -82,7 +82,7 @@ require (
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/term v0.13.0 // indirect
golang.org/x/term v0.14.0 // indirect
golang.org/x/text v0.13.0 // indirect
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
golang.org/x/tools v0.13.0 // indirect
@ -104,7 +104,7 @@ require (
github.com/fvbommel/sortorder v1.0.2 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4
github.com/hashicorp/go-retryablehttp v0.7.5
github.com/klauspost/pgzip v1.2.6
github.com/moby/patternmatcher v0.5.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect

28
go.sum
View File

@ -51,12 +51,12 @@ coopcloud.tech/tagcmp v0.0.0-20211103052201-885b22f77d52/go.mod h1:ESVm0wQKcbcFi
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355 h1:tCv2B4qoN6RMheKDnCzIafOkWS5BB1h7hwhmo+9bVeE=
git.coopcloud.tech/coop-cloud/godotenv v1.5.2-0.20231130100509-01bff8284355/go.mod h1:Q8V1zbtPAlzYSr/Dvky3wS6x58IQAl3rot2me1oSO2Q=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 h1:EKPd1INOIyr5hWOWhvpmQpY6tKjeG0hT1s3AMC/9fic=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1/go.mod h1:VzwV+t+dZ9j/H867F1M2ziD+yLHtB46oM35FxxMJ4d0=
github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ=
github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo=
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7 h1:asQtdXYbxEYWcwAQqJTVYC/RltB4eqoWKvqWg/LFPOg=
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7/go.mod h1:oZRCMMRS318l07ei4DTqbZoOawfJlJ4yyo8juk2v4Rk=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
@ -590,8 +590,8 @@ github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHh
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.7.4 h1:ZQgVdpTdAL7WpMIwLzCfbalOcSUdkDZnpUv3/+BxzFA=
github.com/hashicorp/go-retryablehttp v0.7.4/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-retryablehttp v0.7.5 h1:bJj+Pj19UZMIweq/iie+1u5YCdGrnxCT9yvm0e+Nd5M=
github.com/hashicorp/go-retryablehttp v0.7.5/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
@ -705,8 +705,8 @@ github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcME
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU=
@ -885,8 +885,9 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/prometheus/procfs v0.10.1 h1:kYK1Va/YMlutzCGazswoHKo//tZVlFpKYh+PymziUAg=
github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
@ -900,8 +901,8 @@ github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb
github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
github.com/sagikazarmark/crypt v0.3.0/go.mod h1:uD/D+6UF4SrIR1uGEv7bBNkNqLGqUr43MRiaGWX1Nig=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/schollz/progressbar/v3 v3.13.1 h1:o8rySDYiQ59Mwzy2FELeHY5ZARXZTVJC7iHD6PEFUiE=
github.com/schollz/progressbar/v3 v3.13.1/go.mod h1:xvrbki8kfT1fzWzBT/UZd9L6GA+jdL7HAgq2RFnO6fQ=
github.com/schollz/progressbar/v3 v3.14.1 h1:VD+MJPCr4s3wdhTc7OEJ/Z3dAeBzJ7yKH/P4lC5yRTI=
github.com/schollz/progressbar/v3 v3.14.1/go.mod h1:Zc9xXneTzWXF81TGoqL71u0sBPjULtEHYtj/WVgVy8E=
github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
@ -1310,21 +1311,20 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/term v0.14.0 h1:LGK9IlZ8T9jvdy6cTdfKUCltatMFOehAQo9SRC46UQ8=
golang.org/x/term v0.14.0/go.mod h1:TySc+nGkYR6qt8km8wUhuFRTVSMIX3XPR58y2lC8vww=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

View File

@ -25,6 +25,16 @@ func AppNameComplete(c *cli.Context) {
}
}
// ServiceNameComplete completes service names for the given app.
func ServiceNameComplete(appName string) {
serviceNames, err := config.GetAppServiceNames(appName)
if err != nil {
return
}
for _, s := range serviceNames {
fmt.Println(s)
}
}
// RecipeNameComplete completes recipe names.
func RecipeNameComplete(c *cli.Context) {
catl, err := recipe.ReadRecipeCatalogue(false)
@ -41,6 +51,20 @@ func RecipeNameComplete(c *cli.Context) {
}
}
// RecipeVersionComplete completes versions for the recipe.
func RecipeVersionComplete(recipeName string) {
catl, err := recipe.ReadRecipeCatalogue(false)
if err != nil {
logrus.Warn(err)
}
for _, v := range catl[recipeName].Versions {
for v2 := range v {
fmt.Println(v2)
}
}
}
// ServerNameComplete completes server names.
func ServerNameComplete(c *cli.Context) {
files, err := config.LoadAppFiles("")

View File

@ -29,7 +29,7 @@ func UpdateTag(pattern, image, tag, recipeName string) (bool, error) {
opts := stack.Deploy{Composefiles: []string{composeFile}}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return false, err
}
@ -97,7 +97,7 @@ func UpdateLabel(pattern, serviceName, label, recipeName string) error {
opts := stack.Deploy{Composefiles: []string{composeFile}}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return err
}

View File

@ -25,6 +25,9 @@ import (
// AppEnv is a map of the values in an apps env config
type AppEnv = map[string]string
// AppModifiers is a map of modifiers in an apps env config
type AppModifiers = map[string]map[string]string
// AppName is AppName
type AppName = string
@ -47,34 +50,61 @@ type App struct {
Path string
}
// StackName gets whatever the docker safe (uses the right delimiting
// character, e.g. "_") stack name is for the app. In general, you don't want
// to use this to show anything to end-users, you want to use a.Name instead.
// See documentation of config.StackName
func (a App) StackName() string {
if _, exists := a.Env["STACK_NAME"]; exists {
return a.Env["STACK_NAME"]
}
stackName := SanitiseAppName(a.Name)
if len(stackName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", stackName, stackName[:45])
stackName = stackName[:45]
}
stackName := StackName(a.Name)
a.Env["STACK_NAME"] = stackName
return stackName
}
// Filters retrieves exact app filters for querying the container runtime. Due
// to upstream issues, filtering works different depending on what you're
// StackName gets whatever the docker safe (uses the right delimiting
// character, e.g. "_") stack name is for the app. In general, you don't want
// to use this to show anything to end-users, you want to use a.Name instead.
func StackName(appName string) string {
stackName := SanitiseAppName(appName)
if len(stackName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", stackName, stackName[:45])
stackName = stackName[:45]
}
return stackName
}
// Filters retrieves app filters for querying the container runtime. By default
// it filters on all services in the app. It is also possible to pass an
// optional list of service names; when given, only those services are filtered for.
//
// Due to upstream issues, filtering works differently depending on what you're
// querying. So, for example, secrets don't work with regex! The caller needs
// to implement their own validation that the right secrets are matched. In
// order to handle these cases, we provide the `appendServiceNames` /
// `exactMatch` modifiers.
func (a App) Filters(appendServiceNames, exactMatch bool) (filters.Args, error) {
func (a App) Filters(appendServiceNames, exactMatch bool, services ...string) (filters.Args, error) {
filters := filters.NewArgs()
if len(services) > 0 {
for _, serviceName := range services {
filters.Add("name", ServiceFilter(a.StackName(), serviceName, exactMatch))
}
return filters, nil
}
// When not appending the service name, just add one filter for the whole
// stack.
if !appendServiceNames {
f := fmt.Sprintf("%s", a.StackName())
if exactMatch {
f = fmt.Sprintf("^%s", f)
}
filters.Add("name", f)
return filters, nil
}
composeFiles, err := GetComposeFiles(a.Recipe, a.Env)
if err != nil {
@ -88,28 +118,23 @@ func (a App) Filters(appendServiceNames, exactMatch bool) (filters.Args, error)
}
for _, service := range compose.Services {
var filter string
if appendServiceNames {
if exactMatch {
filter = fmt.Sprintf("^%s_%s", a.StackName(), service.Name)
} else {
filter = fmt.Sprintf("%s_%s", a.StackName(), service.Name)
}
} else {
if exactMatch {
filter = fmt.Sprintf("^%s", a.StackName())
} else {
filter = fmt.Sprintf("%s", a.StackName())
}
}
filters.Add("name", filter)
f := ServiceFilter(a.StackName(), service.Name, exactMatch)
filters.Add("name", f)
}
return filters, nil
}
// ServiceFilter creates a filter string for filtering a service in the docker
// container runtime. When exact match is true, it uses regex to match the
// string exactly.
func ServiceFilter(stack, service string, exact bool) string {
if exact {
return fmt.Sprintf("^%s_%s", stack, service)
}
return fmt.Sprintf("%s_%s", stack, service)
}
// ByServer sort a slice of Apps
type ByServer []App
@ -150,7 +175,7 @@ func (a ByName) Less(i, j int) bool {
}
func ReadAppEnvFile(appFile AppFile, name AppName) (App, error) {
env, err := ReadEnv(appFile.Path, ReadEnvOptions{})
env, err := ReadEnv(appFile.Path)
if err != nil {
return App{}, fmt.Errorf("env file for %s couldn't be read: %s", name, err.Error())
}
@ -330,7 +355,7 @@ func TemplateAppEnvSample(recipeName, appName, server, domain string) error {
return fmt.Errorf("%s already exists?", appEnvPath)
}
err = ioutil.WriteFile(appEnvPath, envSample, 0664)
err = ioutil.WriteFile(appEnvPath, envSample, 0o664)
if err != nil {
return err
}
@ -592,7 +617,7 @@ func GetLabel(compose *composetypes.Config, stackName string, label string) stri
// GetTimeoutFromLabel reads the timeout value from docker label "coop-cloud.${STACK_NAME}.TIMEOUT" and returns 50 as default value
func GetTimeoutFromLabel(compose *composetypes.Config, stackName string) (int, error) {
var timeout = 50 // Default Timeout
timeout := 50 // Default Timeout
var err error = nil
if timeoutLabel := GetLabel(compose, stackName, "timeout"); timeoutLabel != "" {
logrus.Debugf("timeout label: %s", timeoutLabel)
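
For reference, a short sketch of what the newly exported `StackName` and `ServiceFilter` helpers produce; the domain-style app name is made up for illustration and the expected outputs follow the `TestFilters` cases below.

```
package main

import (
	"fmt"

	"coopcloud.tech/abra/pkg/config"
)

func main() {
	// StackName sanitises the app name and trims it to fit runtime limits.
	stack := config.StackName("test.example.com") // e.g. "test_example_com"

	// ServiceFilter builds the name filter; exact=true anchors it with "^".
	fmt.Println(config.ServiceFilter(stack, "foo", false)) // test_example_com_foo
	fmt.Println(config.ServiceFilter(stack, "foo", true))  // ^test_example_com_foo
}
```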

View File

@ -1,12 +1,15 @@
package config_test
import (
"encoding/json"
"fmt"
"reflect"
"testing"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"github.com/docker/docker/api/types/filters"
"github.com/google/go-cmp/cmp"
"github.com/stretchr/testify/assert"
)
@ -106,3 +109,89 @@ func TestGetComposeFilesError(t *testing.T) {
}
}
}
func TestFilters(t *testing.T) {
oldDir := config.RECIPES_DIR
config.RECIPES_DIR = "./testdir"
defer func() {
config.RECIPES_DIR = oldDir
}()
app, err := config.NewApp(config.AppEnv{
"DOMAIN": "test.example.com",
"RECIPE": "test-recipe",
}, "test_example_com", config.AppFile{
Path: "./testdir/filtertest.end",
Server: "local",
})
if err != nil {
t.Fatal(err)
}
f, err := app.Filters(false, false)
if err != nil {
t.Error(err)
}
compareFilter(t, f, map[string]map[string]bool{
"name": {
"test_example_com": true,
},
})
f2, err := app.Filters(false, true)
if err != nil {
t.Error(err)
}
compareFilter(t, f2, map[string]map[string]bool{
"name": {
"^test_example_com": true,
},
})
f3, err := app.Filters(true, false)
if err != nil {
t.Error(err)
}
compareFilter(t, f3, map[string]map[string]bool{
"name": {
"test_example_com_bar": true,
"test_example_com_foo": true,
},
})
f4, err := app.Filters(true, true)
if err != nil {
t.Error(err)
}
compareFilter(t, f4, map[string]map[string]bool{
"name": {
"^test_example_com_bar": true,
"^test_example_com_foo": true,
},
})
f5, err := app.Filters(false, false, "foo")
if err != nil {
t.Error(err)
}
compareFilter(t, f5, map[string]map[string]bool{
"name": {
"test_example_com_foo": true,
},
})
}
func compareFilter(t *testing.T, f1 filters.Args, f2 map[string]map[string]bool) {
t.Helper()
j1, err := f1.MarshalJSON()
if err != nil {
t.Error(err)
}
j2, err := json.Marshal(f2)
if err != nil {
t.Error(err)
}
if diff := cmp.Diff(string(j2), string(j1)); diff != "" {
t.Errorf("filters mismatch (-want +got):\n%s", diff)
}
}

View File

@ -12,7 +12,7 @@ import (
"sort"
"strings"
"github.com/Autonomic-Cooperative/godotenv"
"git.coopcloud.tech/coop-cloud/godotenv"
"github.com/sirupsen/logrus"
)
@ -36,6 +36,8 @@ var REPOS_BASE_URL = "https://git.coopcloud.tech/coop-cloud"
var CATALOGUE_JSON_REPO_NAME = "recipes-catalogue-json"
var SSH_URL_TEMPLATE = "ssh://git@git.coopcloud.tech:2222/coop-cloud/%s.git"
var BackupbotLabel = "coop-cloud.backupbot.enabled"
// envVarModifiers is a list of env var modifier strings. These are added to
// env vars as comments and modify their processing by Abra, e.g. determining
// how long secrets should be.
@ -55,45 +57,34 @@ func GetServers() ([]string, error) {
return servers, nil
}
// ReadEnvOptions modifies the ReadEnv processing of env vars.
type ReadEnvOptions struct {
IncludeModifiers bool
}
// ContainsEnvVarModifier determines if an env var contains a modifier.
func ContainsEnvVarModifier(envVar string) bool {
for _, mod := range envVarModifiers {
if strings.Contains(envVar, fmt.Sprintf("%s=", mod)) {
return true
}
}
return false
}
// ReadEnv loads an app environment into a map.
func ReadEnv(filePath string, opts ReadEnvOptions) (AppEnv, error) {
func ReadEnv(filePath string) (AppEnv, error) {
var envVars AppEnv
envVars, err := godotenv.Read(filePath)
envVars, _, err := godotenv.Read(filePath)
if err != nil {
return nil, err
}
// for idx, envVar := range envVars {
// if strings.Contains(envVar, "#") {
// if opts.IncludeModifiers && ContainsEnvVarModifier(envVar) {
// continue
// }
// vals := strings.Split(envVar, "#")
// envVars[idx] = strings.TrimSpace(vals[0])
// }
// }
logrus.Debugf("read %s from %s", envVars, filePath)
return envVars, nil
}
// ReadEnvWithModifiers loads an app environment and its modifiers into two separate maps.
func ReadEnvWithModifiers(filePath string) (AppEnv, AppModifiers, error) {
var envVars AppEnv
envVars, mods, err := godotenv.Read(filePath)
if err != nil {
return nil, mods, err
}
logrus.Debugf("read %s from %s", envVars, filePath)
return envVars, mods, nil
}
// ReadServerNames retrieves all server names.
func ReadServerNames() ([]string, error) {
serverNames, err := GetAllFoldersInDirectory(SERVERS_DIR)
@ -227,7 +218,7 @@ func CheckEnv(app App) ([]EnvVar, error) {
return envVars, err
}
envSample, err := ReadEnv(envSamplePath, ReadEnvOptions{})
envSample, err := ReadEnv(envSamplePath)
if err != nil {
return envVars, err
}
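
A minimal sketch of the new modifier-aware reader, matching the `length` modifier exercised in the tests below; the env file path is an assumption.

```
package main

import (
	"fmt"

	"coopcloud.tech/abra/pkg/config"
	"github.com/sirupsen/logrus"
)

func main() {
	// Given a line like: SECRET_TEST_PASS_TWO_VERSION=v1 # length=10
	env, mods, err := config.ReadEnvWithModifiers("/path/to/app.env") // assumed path
	if err != nil {
		logrus.Fatal(err)
	}

	fmt.Println(env["SECRET_TEST_PASS_TWO_VERSION"])            // "v1"
	fmt.Println(mods["SECRET_TEST_PASS_TWO_VERSION"]["length"]) // "10"
}
```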

View File

@ -13,15 +13,21 @@ import (
"coopcloud.tech/abra/pkg/recipe"
)
var TestFolder = os.ExpandEnv("$PWD/../../tests/resources/test_folder")
var ValidAbraConf = os.ExpandEnv("$PWD/../../tests/resources/valid_abra_config")
var (
TestFolder = os.ExpandEnv("$PWD/../../tests/resources/test_folder")
ValidAbraConf = os.ExpandEnv("$PWD/../../tests/resources/valid_abra_config")
)
// make sure these are in alphabetical order
var TFolders = []string{"folder1", "folder2"}
var TFiles = []string{"bar.env", "foo.env"}
var (
TFolders = []string{"folder1", "folder2"}
TFiles = []string{"bar.env", "foo.env"}
)
var AppName = "ecloud"
var ServerName = "evil.corp"
var (
AppName = "ecloud"
ServerName = "evil.corp"
)
var ExpectedAppEnv = config.AppEnv{
"DOMAIN": "ecloud.evil.corp",
@ -71,7 +77,7 @@ func TestGetAllFilesInDirectory(t *testing.T) {
}
func TestReadEnv(t *testing.T) {
env, err := config.ReadEnv(ExpectedAppFile.Path, config.ReadEnvOptions{})
env, err := config.ReadEnv(ExpectedAppFile.Path)
if err != nil {
t.Fatal(err)
}
@ -149,7 +155,7 @@ func TestCheckEnv(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -183,7 +189,7 @@ func TestCheckEnvError(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -211,16 +217,6 @@ func TestCheckEnvError(t *testing.T) {
}
}
func TestContainsEnvVarModifier(t *testing.T) {
if ok := config.ContainsEnvVarModifier("FOO=bar # bing"); ok {
t.Fatal("FOO contains no env var modifier")
}
if ok := config.ContainsEnvVarModifier("FOO=bar # length=3"); !ok {
t.Fatal("FOO contains an env var modifier (length)")
}
}
func TestEnvVarCommentsRemoved(t *testing.T) {
offline := true
r, err := recipe.Get("abra-test-recipe", offline)
@ -229,7 +225,7 @@ func TestEnvVarCommentsRemoved(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
envSample, err := config.ReadEnv(envSamplePath)
if err != nil {
t.Fatal(err)
}
@ -261,12 +257,19 @@ func TestEnvVarModifiersIncluded(t *testing.T) {
}
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
envSample, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{IncludeModifiers: true})
envSample, modifiers, err := config.ReadEnvWithModifiers(envSamplePath)
if err != nil {
t.Fatal(err)
}
if !strings.Contains(envSample["SECRET_TEST_PASS_TWO_VERSION"], "length") {
t.Fatal("comment from env var SECRET_TEST_PASS_TWO_VERSION should not be removed")
if !strings.Contains(envSample["SECRET_TEST_PASS_TWO_VERSION"], "v1") {
t.Errorf("value should be 'v1', got: '%s'", envSample["SECRET_TEST_PASS_TWO_VERSION"])
}
if modifiers == nil || modifiers["SECRET_TEST_PASS_TWO_VERSION"] == nil {
t.Errorf("no modifiers included")
} else {
if modifiers["SECRET_TEST_PASS_TWO_VERSION"]["length"] != "10" {
t.Errorf("length modifier should be '10', got: '%s'", modifiers["SECRET_TEST_PASS_TWO_VERSION"]["length"])
}
}
}

View File

@ -0,0 +1,2 @@
RECIPE=test-recipe
DOMAIN=test.example.com

View File

@ -0,0 +1,6 @@
version: "3.8"
services:
foo:
image: debian
bar:
image: debian

View File

@ -28,7 +28,7 @@ func GetContainer(c context.Context, cl *client.Client, filters filters.Args, no
return types.Container{}, fmt.Errorf("no containers matching the %v filter found?", filter)
}
if len(containers) != 1 {
if len(containers) > 1 {
var containersRaw []string
for _, container := range containers {
containerName := strings.Join(container.Names, " ")
@ -68,3 +68,15 @@ func GetContainer(c context.Context, cl *client.Client, filters filters.Args, no
return containers[0], nil
}
// GetContainerFromStackAndService retrieves the container for the given stack and service.
func GetContainerFromStackAndService(cl *client.Client, stack, service string) (types.Container, error) {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", stack, service))
container, err := GetContainer(context.Background(), cl, filters, true)
if err != nil {
return types.Container{}, err
}
return container, nil
}
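
As a usage sketch, the new wrapper saves callers from building the filter by hand; the stack and service names here are illustrative.

```
package example // illustrative only

import (
	containerPkg "coopcloud.tech/abra/pkg/container"
	"github.com/docker/docker/api/types"
	dockerClient "github.com/docker/docker/client"
)

// appContainer is a hypothetical helper: fetch the "app" service container
// of a deployed stack without constructing filters manually.
func appContainer(cl *dockerClient.Client, stackName string) (types.Container, error) {
	return containerPkg.GetContainerFromStackAndService(cl, stackName, "app")
}
```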

27
pkg/git/add.go Normal file
View File

@ -0,0 +1,27 @@
package git
import (
"github.com/go-git/go-git/v5"
"github.com/sirupsen/logrus"
)
// Add adds a file to the git index.
func Add(repoPath, path string, dryRun bool) error {
repo, err := git.PlainOpen(repoPath)
if err != nil {
return err
}
worktree, err := repo.Worktree()
if err != nil {
return err
}
if dryRun {
logrus.Debugf("dry run: adding %s", path)
} else {
worktree.Add(path)
}
return nil
}

View File

@ -227,7 +227,7 @@ func LintAppService(recipe recipe.Recipe) (bool, error) {
// therefore no matching traefik deploy label will be present.
func LintTraefikEnabledSkipCondition(recipe recipe.Recipe) (bool, error) {
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return false, fmt.Errorf("Unable to discover .env.sample for %s", recipe.Name)
}

View File

@ -227,7 +227,7 @@ func Get(recipeName string, offline bool) (Recipe, error) {
}
envSamplePath := path.Join(config.RECIPES_DIR, recipeName, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, config.ReadEnvOptions{})
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return Recipe{}, err
}
@ -255,15 +255,29 @@ func Get(recipeName string, offline bool) (Recipe, error) {
}, nil
}
func (r Recipe) SampleEnv(opts config.ReadEnvOptions) (map[string]string, error) {
func (r Recipe) SampleEnv() (map[string]string, error) {
envSamplePath := path.Join(config.RECIPES_DIR, r.Name, ".env.sample")
sampleEnv, err := config.ReadEnv(envSamplePath, opts)
sampleEnv, err := config.ReadEnv(envSamplePath)
if err != nil {
return sampleEnv, fmt.Errorf("unable to discover .env.sample for %s", r.Name)
}
return sampleEnv, nil
}
// Ensure makes sure the recipe exists, is up to date and has the latest version checked out.
func Ensure(recipeName string) error {
if err := EnsureExists(recipeName); err != nil {
return err
}
if err := EnsureUpToDate(recipeName); err != nil {
return err
}
if err := EnsureLatest(recipeName); err != nil {
return err
}
return nil
}
// EnsureExists ensures that a recipe is locally cloned
func EnsureExists(recipeName string) error {
recipeDir := path.Join(config.RECIPES_DIR, recipeName)

View File

@ -21,11 +21,24 @@ import (
"github.com/sirupsen/logrus"
)
// secretValue represents a parsed `SECRET_FOO=v1 # length=bar` env var config
// secret definition.
type secretValue struct {
// Secret represents a secret.
type Secret struct {
// Version comes from the secret version environment variable.
// For example:
// SECRET_FOO=v1
Version string
// Length comes from the length modifier on the secret version environment
// variable. For example:
// SECRET_FOO=v1 # length=12
Length int
// RemoteName is the name of the secret on the server. For example:
// name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
// With the following:
// STACK_NAME=test_example_com
// SECRET_TEST_PASS_TWO_VERSION=v2
// Will have this remote name:
// test_example_com_test_pass_two_v2
RemoteName string
}
// GeneratePasswords generates passwords.
@ -35,7 +48,6 @@ func GeneratePasswords(count, length uint) ([]string, error) {
length,
passgen.AlphabetDefault,
)
if err != nil {
return nil, err
}
@ -54,7 +66,6 @@ func GeneratePassphrases(count uint) ([]string, error) {
passgen.PassphraseCasingDefault,
passgen.WordListDefault,
)
if err != nil {
return nil, err
}
@ -69,18 +80,23 @@ func GeneratePassphrases(count uint) ([]string, error) {
// and sometimes you don't (as the caller). We need to be able to handle the
// "app new" case where we pass in the .env.sample and the "secret generate"
// case where the app is created.
func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName string) (map[string]string, error) {
secretConfigs := make(map[string]string)
appEnv, err := config.ReadEnv(appEnvPath, config.ReadEnvOptions{IncludeModifiers: true})
func ReadSecretsConfig(appEnvPath string, composeFiles []string, stackName string) (map[string]Secret, error) {
appEnv, appModifiers, err := config.ReadEnvWithModifiers(appEnvPath)
if err != nil {
return secretConfigs, err
return nil, err
}
// Set the STACK_NAME to be able to generate the remote name correctly.
appEnv["STACK_NAME"] = stackName
opts := stack.Deploy{Composefiles: composeFiles}
config, err := loader.LoadComposefile(opts, appEnv)
if err != nil {
return secretConfigs, err
return nil, err
}
// Read the compose files without injecting environment variables.
configWithoutEnv, err := loader.LoadComposefile(opts, map[string]string{}, loader.SkipInterpolation)
if err != nil {
return nil, err
}
var enabledSecrets []string
@ -92,12 +108,13 @@ func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName stri
if len(enabledSecrets) == 0 {
logrus.Debugf("not generating app secrets, none enabled in recipe config")
return secretConfigs, nil
return nil, nil
}
secretValues := map[string]Secret{}
for secretId, secretConfig := range config.Secrets {
if string(secretConfig.Name[len(secretConfig.Name)-1]) == "_" {
return secretConfigs, fmt.Errorf("missing version for secret? (%s)", secretId)
return nil, fmt.Errorf("missing version for secret? (%s)", secretId)
}
if !(slices.Contains(enabledSecrets, secretId)) {
@ -107,68 +124,58 @@ func ReadSecretsConfig(appEnvPath string, composeFiles []string, recipeName stri
lastIdx := strings.LastIndex(secretConfig.Name, "_")
secretVersion := secretConfig.Name[lastIdx+1:]
secretConfigs[secretId] = secretVersion
}
value := Secret{Version: secretVersion, RemoteName: secretConfig.Name}
return secretConfigs, nil
// Check if the length modifier is set for this secret.
for envName, modifierValues := range appModifiers {
// configWithoutEnv contains the raw name as defined in the compose.yaml
// The name will look something like this:
// name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
// To check if the current modifier is for the current secret we check
// if the raw name contains the env name (e.g. SECRET_TEST_PASS_TWO_VERSION).
if !strings.Contains(configWithoutEnv.Secrets[secretId].Name, envName) {
continue
}
func ParseSecretValue(secret string) (secretValue, error) {
values := strings.Split(secret, "#")
if len(values) == 0 {
return secretValue{}, fmt.Errorf("unable to parse %s", secret)
}
if len(values) == 1 {
return secretValue{Version: values[0], Length: 0}, nil
}
split := strings.Split(values[1], "=")
parsed := split[len(split)-1]
stripped := strings.ReplaceAll(parsed, " ", "")
length, err := strconv.Atoi(stripped)
lengthRaw, ok := modifierValues["length"]
if ok {
length, err := strconv.Atoi(lengthRaw)
if err != nil {
return secretValue{}, err
return nil, err
}
value.Length = length
}
break
}
secretValues[secretId] = value
}
version := strings.ReplaceAll(values[0], " ", "")
logrus.Debugf("parsed version %s and length '%v' from %s", version, length, secret)
return secretValue{Version: version, Length: length}, nil
return secretValues, nil
}
// GenerateSecrets generates secrets locally and sends them to a remote server for storage.
func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]string, appName, server string) (map[string]string, error) {
secrets := make(map[string]string)
func GenerateSecrets(cl *dockerClient.Client, secrets map[string]Secret, server string) (map[string]string, error) {
secretsGenerated := map[string]string{}
var mutex sync.Mutex
var wg sync.WaitGroup
ch := make(chan error, len(secretsFromConfig))
for n, v := range secretsFromConfig {
ch := make(chan error, len(secrets))
for n, v := range secrets {
wg.Add(1)
go func(secretName, secretValue string) {
go func(secretName string, secret Secret) {
defer wg.Done()
parsedSecretValue, err := ParseSecretValue(secretValue)
logrus.Debugf("attempting to generate and store %s on %s", secret.RemoteName, server)
if secret.Length > 0 {
passwords, err := GeneratePasswords(1, uint(secret.Length))
if err != nil {
ch <- err
return
}
secretRemoteName := fmt.Sprintf("%s_%s_%s", appName, secretName, parsedSecretValue.Version)
logrus.Debugf("attempting to generate and store %s on %s", secretRemoteName, server)
if parsedSecretValue.Length > 0 {
passwords, err := GeneratePasswords(1, uint(parsedSecretValue.Length))
if err != nil {
ch <- err
return
}
if err := client.StoreSecret(cl, secretRemoteName, passwords[0], server); err != nil {
if err := client.StoreSecret(cl, secret.RemoteName, passwords[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
logrus.Warnf("%s already exists, moving on...", secret.RemoteName)
ch <- nil
} else {
ch <- err
@ -178,7 +185,7 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passwords[0]
secretsGenerated[secretName] = passwords[0]
} else {
passphrases, err := GeneratePassphrases(1)
if err != nil {
@ -186,9 +193,9 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
return
}
if err := client.StoreSecret(cl, secretRemoteName, passphrases[0], server); err != nil {
if err := client.StoreSecret(cl, secret.RemoteName, passphrases[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
logrus.Warnf("%s already exists, moving on...", secret.RemoteName)
ch <- nil
} else {
ch <- err
@ -198,7 +205,7 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passphrases[0]
secretsGenerated[secretName] = passphrases[0]
}
ch <- nil
}(n, v)
@ -206,16 +213,16 @@ func GenerateSecrets(cl *dockerClient.Client, secretsFromConfig map[string]strin
wg.Wait()
for range secretsFromConfig {
for range secrets {
err := <-ch
if err != nil {
return nil, err
}
}
logrus.Debugf("generated and stored %s on %s", secrets, server)
logrus.Debugf("generated and stored %v on %s", secrets, server)
return secrets, nil
return secretsGenerated, nil
}
type secretStatus struct {
@ -237,7 +244,7 @@ func PollSecretsStatus(cl *dockerClient.Client, app config.App) (secretStatuses,
return secStats, err
}
secretsConfig, err := ReadSecretsConfig(app.Path, composeFiles, app.Recipe)
secretsConfig, err := ReadSecretsConfig(app.Path, composeFiles, app.StackName())
if err != nil {
return secStats, err
}
@ -257,14 +264,9 @@ func PollSecretsStatus(cl *dockerClient.Client, app config.App) (secretStatuses,
remoteSecretNames[cont.Spec.Annotations.Name] = true
}
for secretName, secretValue := range secretsConfig {
for secretName, val := range secretsConfig {
createdRemote := false
val, err := ParseSecretValue(secretValue)
if err != nil {
return secStats, err
}
secretRemoteName := fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, val.Version)
if _, ok := remoteSecretNames[secretRemoteName]; ok {
createdRemote = true
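
From the caller's side, the reworked API reads as below, mirroring the `app secret generate` changes earlier in this diff: read a `Secret` map keyed by secret id, then generate and store the values. The helper name is hypothetical.

```
package example // illustrative only

import (
	"coopcloud.tech/abra/pkg/client"
	"coopcloud.tech/abra/pkg/config"
	"coopcloud.tech/abra/pkg/secret"
)

// generateAppSecrets is a hypothetical wrapper around the new API.
func generateAppSecrets(app config.App, composeFiles []string) (map[string]string, error) {
	cl, err := client.New(app.Server)
	if err != nil {
		return nil, err
	}

	// Keyed by secret id; each Secret carries Version, Length and RemoteName.
	secrets, err := secret.ReadSecretsConfig(app.Path, composeFiles, app.StackName())
	if err != nil {
		return nil, err
	}

	// Returns the generated plaintext values keyed by secret id.
	return secret.GenerateSecrets(cl, secrets, app.Server)
}
```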

View File

@ -1,42 +1,30 @@
package secret
import (
"path"
"testing"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/stack"
loader "coopcloud.tech/abra/pkg/upstream/stack"
"github.com/stretchr/testify/assert"
)
func TestReadSecretsConfig(t *testing.T) {
offline := true
recipe, err := recipe.Get("matrix-synapse", offline)
composeFiles := []string{"./testdir/compose.yaml"}
secretsFromConfig, err := ReadSecretsConfig("./testdir/.env.sample", composeFiles, "test_example_com")
if err != nil {
t.Fatal(err)
}
sampleEnv, err := recipe.SampleEnv(config.ReadEnvOptions{})
if err != nil {
t.Fatal(err)
}
// Simple secret
assert.Equal(t, "test_example_com_test_pass_one_v2", secretsFromConfig["test_pass_one"].RemoteName)
assert.Equal(t, "v2", secretsFromConfig["test_pass_one"].Version)
assert.Equal(t, 0, secretsFromConfig["test_pass_one"].Length)
composeFiles := []string{path.Join(config.RECIPES_DIR, recipe.Name, "compose.yml")}
envSamplePath := path.Join(config.RECIPES_DIR, recipe.Name, ".env.sample")
secretsFromConfig, err := ReadSecretsConfig(envSamplePath, composeFiles, recipe.Name)
if err != nil {
t.Fatal(err)
}
// Has a length modifier
assert.Equal(t, "test_example_com_test_pass_two_v1", secretsFromConfig["test_pass_two"].RemoteName)
assert.Equal(t, "v1", secretsFromConfig["test_pass_two"].Version)
assert.Equal(t, 10, secretsFromConfig["test_pass_two"].Length)
opts := stack.Deploy{Composefiles: composeFiles}
config, err := loader.LoadComposefile(opts, sampleEnv)
if err != nil {
t.Fatal(err)
}
for secretId := range config.Secrets {
assert.Contains(t, secretsFromConfig, secretId)
}
// Secret name does not include the secret id
assert.Equal(t, "test_example_com_pass_three_v2", secretsFromConfig["test_pass_three"].RemoteName)
assert.Equal(t, "v2", secretsFromConfig["test_pass_three"].Version)
assert.Equal(t, 0, secretsFromConfig["test_pass_three"].Length)
}

View File

@ -0,0 +1,3 @@
SECRET_TEST_PASS_ONE_VERSION=v2
SECRET_TEST_PASS_TWO_VERSION=v1 # length=10
SECRET_TEST_PASS_THREE_VERSION=v2

View File

@ -0,0 +1,21 @@
---
version: "3.8"
services:
app:
image: nginx:1.21.0
secrets:
- test_pass_one
- test_pass_two
- test_pass_three
secrets:
test_pass_one:
external: true
name: ${STACK_NAME}_test_pass_one_${SECRET_TEST_PASS_ONE_VERSION} # should be removed
test_pass_two:
external: true
name: ${STACK_NAME}_test_pass_two_${SECRET_TEST_PASS_TWO_VERSION}
test_pass_three:
external: true
name: ${STACK_NAME}_pass_three_${SECRET_TEST_PASS_THREE_VERSION} # secretId and name don't match

View File

@ -14,6 +14,70 @@ import (
"github.com/sirupsen/logrus"
)
// GetServiceByLabel retrieves a swarm service based on a label. If prompt is true
// and the retrieved count of matching services does not equal 1, then a prompt
// is presented to let the user choose. An error is returned when no service is
// found.
func GetServiceByLabel(c context.Context, cl *client.Client, label string, prompt bool) (swarm.Service, error) {
services, err := cl.ServiceList(c, types.ServiceListOptions{})
if err != nil {
return swarm.Service{}, err
}
if len(services) == 0 {
return swarm.Service{}, fmt.Errorf("no services deployed?")
}
var matchingServices []swarm.Service
for _, service := range services {
if enabled, exists := service.Spec.Labels[label]; exists && enabled == "true" {
matchingServices = append(matchingServices, service)
}
}
if len(matchingServices) == 0 {
return swarm.Service{}, fmt.Errorf("no services deployed matching label '%s'?", label)
}
if len(matchingServices) > 1 {
var servicesRaw []string
for _, service := range matchingServices {
serviceName := service.Spec.Name
created := formatter.HumanDuration(service.CreatedAt.Unix())
servicesRaw = append(servicesRaw, fmt.Sprintf("%s (created %v)", serviceName, created))
}
if !prompt {
err := fmt.Errorf("expected 1 service but found %v: %s", len(matchingServices), strings.Join(servicesRaw, " "))
return swarm.Service{}, err
}
logrus.Warnf("ambiguous service list received, prompting for input")
var response string
prompt := &survey.Select{
Message: "which service are you looking for?",
Options: servicesRaw,
}
if err := survey.AskOne(prompt, &response); err != nil {
return swarm.Service{}, err
}
chosenService := strings.TrimSpace(strings.Split(response, " ")[0])
for _, service := range matchingServices {
serviceName := strings.ToLower(service.Spec.Name)
if serviceName == chosenService {
return service, nil
}
}
logrus.Panic("failed to match chosen service")
}
return matchingServices[0], nil
}
// GetService retrieves a service container. If prompt is true and the retrieved
// count of service containers does not match 1, then a prompt is presented to
// let the user choose. A count of 0 is handled gracefully.

View File

@ -18,7 +18,7 @@ import (
//
// ssh://<user>@<host> URL requires Docker 18.09 or later on the remote host.
func GetConnectionHelper(daemonURL string) (*connhelper.ConnectionHelper, error) {
return getConnectionHelper(daemonURL, []string{"-o ConnectTimeout=5"})
return getConnectionHelper(daemonURL, []string{"-o ConnectTimeout=60"})
}
func getConnectionHelper(daemonURL string, sshFlags []string) (*connhelper.ConnectionHelper, error) {

View File

@ -13,7 +13,10 @@ import (
"github.com/sirupsen/logrus"
)
func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string, execConfig *types.ExecConfig) error {
// RunExec runs a command on a remote container. The returned io.Writer
// corresponds to the command output.
func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string,
execConfig *types.ExecConfig) (io.Writer, error) {
ctx := context.Background()
// We need to check the tty _before_ we do the ContainerExecCreate, because
@ -21,22 +24,22 @@ func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string
// there's no easy way to clean those up). But also in order to make "not
// exist" errors take precedence we do a dummy inspect first.
if _, err := client.ContainerInspect(ctx, containerID); err != nil {
return err
return nil, err
}
if !execConfig.Detach {
if err := dockerCli.In().CheckTty(execConfig.AttachStdin, execConfig.Tty); err != nil {
return err
return nil, err
}
}
response, err := client.ContainerExecCreate(ctx, containerID, *execConfig)
if err != nil {
return err
return nil, err
}
execID := response.ID
if execID == "" {
return errors.New("exec ID empty")
return nil, errors.New("exec ID empty")
}
if execConfig.Detach {
@ -44,13 +47,13 @@ func RunExec(dockerCli command.Cli, client *apiclient.Client, containerID string
Detach: execConfig.Detach,
Tty: execConfig.Tty,
}
return client.ContainerExecStart(ctx, execID, execStartCheck)
return nil, client.ContainerExecStart(ctx, execID, execStartCheck)
}
return interactiveExec(ctx, dockerCli, client, execConfig, execID)
}
func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclient.Client,
execConfig *types.ExecConfig, execID string) error {
execConfig *types.ExecConfig, execID string) (io.Writer, error) {
// Interactive exec requested.
var (
out, stderr io.Writer
@ -76,7 +79,7 @@ func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclie
}
resp, err := client.ContainerExecAttach(ctx, execID, execStartCheck)
if err != nil {
return err
return out, err
}
defer resp.Close()
@ -107,10 +110,10 @@ func interactiveExec(ctx context.Context, dockerCli command.Cli, client *apiclie
if err := <-errCh; err != nil {
logrus.Debugf("Error hijack: %s", err)
return err
return out, err
}
return getExecExitStatus(ctx, client, execID)
return out, getExecExitStatus(ctx, client, execID)
}
func getExecExitStatus(ctx context.Context, client apiclient.ContainerAPIClient, execID string) error {

View File

@ -18,15 +18,24 @@ func DontSkipValidation(opts *loader.Options) {
opts.SkipValidation = false
}
// SkipInterpolation skips interpolating environment variables.
func SkipInterpolation(opts *loader.Options) {
opts.SkipInterpolation = true
}
// LoadComposefile parses the compose file specified in the cli and returns its Config and version.
func LoadComposefile(opts Deploy, appEnv map[string]string) (*composetypes.Config, error) {
func LoadComposefile(opts Deploy, appEnv map[string]string, options ...func(*loader.Options)) (*composetypes.Config, error) {
configDetails, err := getConfigDetails(opts.Composefiles, appEnv)
if err != nil {
return nil, err
}
if options == nil {
options = []func(*loader.Options){DontSkipValidation}
}
dicts := getDictsFrom(configDetails.ConfigFiles)
config, err := loader.Load(configDetails, DontSkipValidation)
config, err := loader.Load(configDetails, options...)
if err != nil {
if fpe, ok := err.(*loader.ForbiddenPropertiesError); ok {
return nil, fmt.Errorf("compose file contains unsupported options: %s",
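
The functional options keep the old behaviour as the default (validation on, interpolation on) while letting callers such as `ReadSecretsConfig` opt out of interpolation. A minimal sketch, assuming only the compose file paths are known:

```
package example // illustrative only

import (
	stack "coopcloud.tech/abra/pkg/upstream/stack"
)

// countRawSecrets is a hypothetical helper: load compose files without
// interpolating ${...} variables and count the secrets they define.
func countRawSecrets(composeFiles []string) (int, error) {
	opts := stack.Deploy{Composefiles: composeFiles}
	cfg, err := stack.LoadComposefile(opts, map[string]string{}, stack.SkipInterpolation)
	if err != nil {
		return 0, err
	}
	return len(cfg.Secrets), nil
}
```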

11
scripts/docker/build.sh Executable file
View File

@ -0,0 +1,11 @@
#!/bin/bash
if [ ! -f .envrc ]; then
. .envrc.sample
else
. .envrc
fi
git config --global --add safe.directory /abra # work around funky file permissions
make build

View File

@ -65,17 +65,19 @@ function install_abra_release {
checksums=$(wget -q -O- $checksums_url)
checksum=$(echo "$checksums" | grep "$FILENAME" - | sed -En 's/([0-9a-f]{64})\s+'"$FILENAME"'.*/\1/p')
abra_download="/tmp/abra-download"
echo "downloading $ABRA_VERSION $PLATFORM binary release for abra..."
wget -q "$release_url" -O "$HOME/.local/bin/.abra-download"
localsum=$(sha256sum $HOME/.local/bin/.abra-download | sed -En 's/([0-9a-f]{64})\s+.*/\1/p')
wget -q "$release_url" -O $abra_download
localsum=$(sha256sum $abra_download | sed -En 's/([0-9a-f]{64})\s+.*/\1/p')
echo "checking if checksums match..."
if [[ "$localsum" != "$checksum" ]]; then
print_checksum_error
exit 1
fi
echo "$(tput setaf 2)check successful!$(tput sgr0)"
mv "$HOME/.local/bin/.abra-download" "$HOME/.local/bin/abra"
mv "$abra_download" "$HOME/.local/bin/abra"
chmod +x "$HOME/.local/bin/abra"
x=$(echo $PATH | grep $HOME/.local/bin)

View File

@ -25,6 +25,24 @@ teardown(){
fi
}
# bats test_tags=slow
@test "autocomplete" {
run $ABRA app cmd --generate-bash-completion
assert_success
assert_output "$TEST_APP_DOMAIN"
run $ABRA app cmd "$TEST_APP_DOMAIN" --generate-bash-completion
assert_success
assert_output "app"
run $ABRA app cmd "$TEST_APP_DOMAIN" app --generate-bash-completion
assert_success
assert_output "test_cmd
test_cmd_arg
test_cmd_args
test_cmd_export"
}
@test "validate app argument" {
run $ABRA app cmd
assert_failure

View File

@ -5,9 +5,11 @@ setup_file(){
_common_setup
_add_server
_new_app
_deploy_app
}
teardown_file(){
_undeploy_app
_rm_app
_rm_server
}
@ -17,13 +19,6 @@ setup(){
_common_setup
}
teardown(){
# https://github.com/bats-core/bats-core/issues/383#issuecomment-738628888
if [[ -z "${BATS_TEST_COMPLETED}" ]]; then
_undeploy_app
fi
}
@test "validate app argument" {
run $ABRA app cp
assert_failure
@ -54,68 +49,120 @@ teardown(){
assert_output --partial 'arguments must take $SERVICE:$PATH form'
}
@test "detect 'coming FROM' syntax" {
run $ABRA app cp "$TEST_APP_DOMAIN" app:/myfile.txt . --debug
assert_failure
assert_output --partial 'coming FROM the container'
}
@test "detect 'going TO' syntax" {
run $ABRA app cp "$TEST_APP_DOMAIN" myfile.txt app:/somewhere --debug
assert_failure
assert_output --partial 'going TO the container'
}
@test "error if local file missing" {
run $ABRA app cp "$TEST_APP_DOMAIN" myfile.txt app:/somewhere
run $ABRA app cp "$TEST_APP_DOMAIN" thisfileshouldnotexist.txt app:/somewhere
assert_failure
assert_output --partial 'myfile.txt does not exist locally?'
assert_output --partial 'local stat thisfileshouldnotexist.txt: no such file or directory'
}
# bats test_tags=slow
@test "error if service doesn't exist" {
_deploy_app
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" doesnt_exist:/
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" doesnt_exist:/ --debug
assert_failure
assert_output --partial 'no containers matching'
run rm -rf "$BATS_TMPDIR/myfile.txt"
assert_success
_undeploy_app
_rm "$BATS_TMPDIR/myfile.txt"
}
# bats test_tags=slow
@test "copy to container" {
_deploy_app
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
assert_success
@test "copy local file to container directory" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc
assert_success
run rm -rf "$BATS_TMPDIR/myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "foo"
_undeploy_app
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy from container" {
_deploy_app
@test "copy local file to container file (and override on remote)" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
run bash -c "echo foo >> $BATS_TMPDIR/myfile.txt"
# create
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile.txt
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "foo"
_mkfile "$BATS_TMPDIR/myfile.txt" "bar"
# override
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile.txt
assert_success
run rm -rf "$BATS_TMPDIR/myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile.txt
assert_success
assert_output --partial "bar"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy local file to container file (and rename)" {
_mkfile "$BATS_TMPDIR/myfile.txt" "foo"
# rename
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/myfile.txt" app:/etc/myfile2.txt
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app cat /etc/myfile2.txt
assert_success
assert_output --partial "foo"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}
# bats test_tags=slow
@test "copy local directory to container directory (and creates missing directory)" {
_mkdir "$BATS_TMPDIR/mydir"
_mkfile "$BATS_TMPDIR/mydir/myfile.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/mydir" app:/etc
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/mydir
assert_success
assert_output --partial "myfile.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/mydir"
}
# bats test_tags=slow
@test "copy local files to container directory" {
_mkdir "$BATS_TMPDIR/mydir"
_mkfile "$BATS_TMPDIR/mydir/myfile.txt" "foo"
_mkfile "$BATS_TMPDIR/mydir/myfile2.txt" "foo"
run $ABRA app cp "$TEST_APP_DOMAIN" "$BATS_TMPDIR/mydir/" app:/etc
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/myfile.txt
assert_success
assert_output --partial "myfile.txt"
run $ABRA app run "$TEST_APP_DOMAIN" app ls /etc/myfile2.txt
assert_success
assert_output --partial "myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile*"
}
# bats test_tags=slow
@test "copy container file to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR"
@@ -123,8 +170,76 @@ teardown(){
assert_exists "$BATS_TMPDIR/myfile.txt"
assert bash -c "cat $BATS_TMPDIR/myfile.txt | grep -q foo"
run rm -rf "$BATS_TMPDIR/myfile.txt"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container file to local file" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
_undeploy_app
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/myfile.txt"
assert bash -c "cat $BATS_TMPDIR/myfile.txt | grep -q foo"
_rm "$BATS_TMPDIR/myfile.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container file to local file and rename" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/myfile.txt "$BATS_TMPDIR/myfile2.txt"
assert_success
assert_exists "$BATS_TMPDIR/myfile2.txt"
assert bash -c "cat $BATS_TMPDIR/myfile2.txt | grep -q foo"
_rm "$BATS_TMPDIR/myfile2.txt"
_rm_remote "/etc/myfile.txt"
}
# bats test_tags=slow
@test "copy container directory to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo bar > /etc/myfile2.txt"
assert_success
mkdir "$BATS_TMPDIR/mydir"
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc "$BATS_TMPDIR/mydir"
assert_success
assert_exists "$BATS_TMPDIR/mydir/etc/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/mydir/etc/myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}
# bats test_tags=slow
@test "copy container files to local directory" {
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo foo > /etc/myfile.txt"
assert_success
run $ABRA app run "$TEST_APP_DOMAIN" app bash -c "echo bar > /etc/myfile2.txt"
assert_success
mkdir "$BATS_TMPDIR/mydir"
run $ABRA app cp "$TEST_APP_DOMAIN" app:/etc/ "$BATS_TMPDIR/mydir"
assert_success
assert_exists "$BATS_TMPDIR/mydir/myfile.txt"
assert_success
assert_exists "$BATS_TMPDIR/mydir/myfile2.txt"
_rm "$BATS_TMPDIR/mydir"
_rm_remote "/etc/myfile.txt"
_rm_remote "/etc/myfile2.txt"
}
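All of the cases above exercise the `$SERVICE:$PATH` addressing that `abra app cp` expects; as a quick usage sketch (domain and paths are placeholders):

```
# Going TO the container: local source, SERVICE:PATH destination.
abra app cp example.com ./backup.sql app:/var/lib/backup.sql

# Coming FROM the container: SERVICE:PATH source, local destination.
abra app cp example.com app:/etc/nginx/nginx.conf ./nginx.conf
```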

View File

@@ -18,9 +18,24 @@ setup(){
}
teardown(){
load "$PWD/tests/integration/helpers/common"
_rm_app
}
@test "autocomplete" {
run $ABRA app new --generate-bash-completion
assert_success
assert_output --partial "traefik"
assert_output --partial "abra-test-recipe"
# Note: this test needs to be updated when a new version of the test recipe is published.
run $ABRA app new abra-test-recipe --generate-bash-completion
assert_success
assert_output "0.1.0+1.20.0
0.1.1+1.20.2
0.2.0+1.21.0"
}
@test "create new app" {
run $ABRA app new "$TEST_RECIPE" \
--no-input \
@@ -28,10 +43,29 @@ teardown(){
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial "Your branch is up to date with 'origin/main'."
}
@test "create new app with version" {
run $ABRA app new "$TEST_RECIPE" 0.1.1+1.20.2 \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" log -1
assert_output --partial "453db7121c0a56a7a8f15378f18fe3bf21ccfdef"
}
@test "does not overwrite existing env files" {
_new_app
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN"
assert_success
run $ABRA app new "$TEST_RECIPE" \
--no-input \
@@ -74,8 +108,7 @@ teardown(){
--no-input \
--chaos \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
@@ -88,18 +121,17 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --partial "Your branch is behind 'origin/main' by 3 commits, and can be fast-forwarded."
run $ABRA app new "$TEST_RECIPE" \
--no-input \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
refute_output --partial 'behind 3'
assert_output --partial "Your branch is up to date with 'origin/main'."
_reset_recipe
}
@@ -109,7 +141,7 @@ teardown(){
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --partial "Your branch is behind 'origin/main' by 3 commits, and can be fast-forwarded."
# NOTE(d1): need to use --chaos to force same commit
run $ABRA app new "$TEST_RECIPE" \
@@ -117,13 +149,12 @@ teardown(){
--offline \
--chaos \
--server "$TEST_SERVER" \
--domain "$TEST_APP_DOMAIN" \
--secrets
--domain "$TEST_APP_DOMAIN"
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER/$TEST_APP_DOMAIN.env"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" status
assert_output --partial 'behind 3'
assert_output --partial "Your branch is behind 'origin/main' by 3 commits, and can be fast-forwarded."
_reset_recipe
}

View File

@@ -104,10 +104,7 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app volume rm "$TEST_APP_DOMAIN" --force
run $ABRA app volume rm "$TEST_APP_DOMAIN"
assert_success
run $ABRA app volume ls "$TEST_APP_DOMAIN"
@@ -132,9 +129,6 @@ teardown(){
_undeploy_app
# NOTE(d1): to let the stack come down before nuking volumes
sleep 5
run $ABRA app rm "$TEST_APP_DOMAIN" --no-input
assert_success
assert_output --partial 'test-volume'

View File

@@ -1,10 +1,11 @@
#!/usr/bin/env bash
_common_setup() {
load '/usr/lib/bats/bats-support/load'
load '/usr/lib/bats/bats-assert/load'
load '/usr/lib/bats/bats-file/load'
bats_load_library bats-support
bats_load_library bats-assert
bats_load_library bats-file
load "$PWD/tests/integration/helpers/file"
load "$PWD/tests/integration/helpers/app"
load "$PWD/tests/integration/helpers/git"
load "$PWD/tests/integration/helpers/recipe"

View File

@@ -0,0 +1,24 @@
_mkfile() {
run bash -c "echo $2 > $1"
assert_success
}
_mkfile_remote() {
run $ABRA app run "$TEST_APP_DOMAIN" app "bash -c \"echo $2 > $1\""
assert_success
}
_mkdir() {
run bash -c "mkdir -p $1"
assert_success
}
_rm() {
run rm -rf "$1"
assert_success
}
_rm_remote() {
run "$ABRA" app run "$TEST_APP_DOMAIN" app rm -rf "$1"
assert_success
}
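A minimal sketch of how a test might combine these helpers for setup and cleanup (the test name and paths are illustrative):

```
# bats test_tags=slow
@test "illustrative helper usage" {
  _mkdir "$BATS_TMPDIR/mydir"                     # local scratch directory
  _mkfile "$BATS_TMPDIR/mydir/myfile.txt" "foo"   # local fixture file
  _mkfile_remote "/etc/myfile.txt" "bar"          # file inside the app container
  _rm "$BATS_TMPDIR/mydir"                        # local cleanup
  _rm_remote "/etc/myfile.txt"                    # remote cleanup
}
```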

View File

@@ -28,3 +28,10 @@ _reset_tags() {
assert_success
refute_output '0'
}
_set_git_author() {
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" config --local user.email test@example.com
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" config --local user.name test
assert_success
}

View File

@@ -1,7 +1,11 @@
#!/usr/bin/env bash
_add_server() {
if [[ "$TEST_SERVER" == "default" ]]; then
run $ABRA server add -l
else
run $ABRA server add "$TEST_SERVER"
fi
assert_success
assert_exists "$ABRA_DIR/servers/$TEST_SERVER"
}

View File

@@ -5,7 +5,17 @@ setup() {
_common_setup
}
@test "recipe fetch" {
@test "recipe fetch all" {
run rm -rf "$ABRA_DIR/recipes/matrix-synapse"
assert_success
assert_not_exists "$ABRA_DIR/recipes/matrix-synapse"
run $ABRA recipe fetch
assert_success
assert_exists "$ABRA_DIR/recipes/matrix-synapse"
}
@test "recipe fetch single recipe" {
run rm -rf "$ABRA_DIR/recipes/matrix-synapse"
assert_success
assert_not_exists "$ABRA_DIR/recipes/matrix-synapse"

View File

@@ -15,6 +15,11 @@ teardown_file(){
setup(){
load "$PWD/tests/integration/helpers/common"
_common_setup
_set_git_author
}
teardown() {
_reset_recipe
}
@test "validate recipe argument" {
@@ -51,8 +56,6 @@ setup(){
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" tag --list
assert_success
assert_output --partial '0.2.1+1.21.6'
_reset_recipe
}
# NOTE(d1): this test can't assert hardcoded versions since we upgrade a minor
@@ -81,8 +84,6 @@ setup(){
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" tag --list
assert_success
assert_output --regexp '0\.3\.0\+1\.2.*'
_reset_recipe "$TEST_RECIPE"
}
@test "unknown files not committed" {
@@ -100,6 +101,21 @@ setup(){
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" rm foo
assert_failure
assert_output --partial "fatal: pathspec 'foo' did not match any files"
_reset_recipe
}
# NOTE: relies on 0.2.x being the last minor version
@test "release with next release note" {
_mkfile "$ABRA_DIR/recipes/$TEST_RECIPE/release/next" "those are some release notes for the next release"
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" add release/next
assert_success
run git -C "$ABRA_DIR/recipes/$TEST_RECIPE" commit -m "added some release notes"
assert_success
run $ABRA recipe release "$TEST_RECIPE" --no-input --minor
assert_success
assert_output --partial 'no -p/--publish passed, not publishing'
assert_not_exists "$ABRA_DIR/recipes/$TEST_RECIPE/release/next"
assert_exists "$ABRA_DIR/recipes/$TEST_RECIPE/release/0.3.0+1.21.0"
assert_file_contains "$ABRA_DIR/recipes/$TEST_RECIPE/release/0.3.0+1.21.0" "those are some release notes for the next release"
}
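The behaviour asserted here amounts to moving `release/next` into place under the newly released tag; a rough shell equivalent of what the test expects (illustrative only, not abra's implementation):

```
# Rough equivalent of the release-notes handling the test above asserts.
recipe_dir="$ABRA_DIR/recipes/$TEST_RECIPE"
new_tag="0.3.0+1.21.0"   # the tag the --minor release produces in this test
if [ -f "$recipe_dir/release/next" ]; then
    mv "$recipe_dir/release/next" "$recipe_dir/release/$new_tag"
fi
```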