Compare commits

...

138 Commits

Author SHA1 Message Date
3a3f41988b chore: publish 0.4.0-alpha
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-19 14:36:56 +02:00
f6690a80bd build: upx release script [ci skip] 2022-04-19 14:34:06 +02:00
2337c4648b chore: remove unused command 2022-04-19 14:32:34 +02:00
a1190f1352 fix: show which service is getting backed up [ci skip] 2022-04-19 13:50:23 +02:00
e421922f5b fix: restore uses absolute paths & better docs
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-19 13:21:12 +02:00
10d5705d1a docs: better backup docs 2022-04-19 13:20:48 +02:00
a4f1634b24 fix: backups get gzip, absolute paths, single archive file 2022-04-19 12:52:30 +02:00
cbd924060f fix: better local changes message
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-19 10:29:05 +02:00
3c4bb6a55e fix: ensure we're on latest for recipe release dance
Closes coop-cloud/organising#313.
2022-04-19 10:28:49 +02:00
a0d7a76f9d fix: better error messages for release failures
See coop-cloud/organising#313
2022-04-19 10:20:35 +02:00
c71efb46ba feat: arm builds [ci skip]
See coop-cloud/organising#312
2022-04-19 10:06:14 +02:00
ce69967ec5 chore: go mod tidy
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-18 10:42:39 +02:00
1a04439b1f chore(deps): update module github.com/hashicorp/go-retryablehttp to v0.7.1
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-04-14 07:01:24 +00:00
979f417a63 chore: gpl this sucka [ci skip] 2022-04-05 12:18:34 +02:00
b27acb2f61 feat: backup/restore [ci skip]
All checks were successful
continuous-integration/drone/pr Build is passing
See coop-cloud/organising#30.
2022-04-03 18:24:09 +02:00
622ecc4885 docs: drop slash [ci skip] 2022-04-01 23:18:22 +02:00
ed5bbda811 docs: wording & emoji [ci skip] 2022-04-01 23:14:57 +02:00
7b627ea518 docs: nice gopher [ci skip] 2022-04-01 23:12:24 +02:00
1ac66da83f chore: go mod tidy
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-01 10:21:16 +02:00
061de96b62 chore(deps): update module github.com/kevinburke/ssh_config to v1.2.0
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is failing
2022-04-01 07:01:23 +00:00
6998298d32 chore: publish next tag 0.4.0-alpha-rc8
Some checks reported errors
continuous-integration/drone/push Build was killed
2022-03-30 16:28:55 +02:00
323f4467c8 fix: filtering requires case-by-case handling
Some checks reported errors
continuous-integration/drone/pr Build was killed
continuous-integration/drone/push Build was killed
See https://github.com/moby/moby/issues/32985.
2022-03-30 16:25:38 +02:00
e8e41850b5 fix: pass args to local function invocations too
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2022-03-30 11:31:16 +02:00
0e23ec53d7 refactor!: simple validation only 2022-03-30 11:30:51 +02:00
b943a8b9b1 feat: allow choosing user on remote commands 2022-03-30 11:30:36 +02:00
acc665f054 chore: publish next tag 0.4.0-alpha-rc7
Some checks reported errors
continuous-integration/drone/push Build was killed
2022-03-27 21:33:30 +02:00
860f1d6376 feat: bring back scripts interface
All checks were successful
continuous-integration/drone/push Build is passing
See coop-cloud/organising#301.
2022-03-27 19:30:48 +00:00
2122f0e67c fix: avoid short command alias conflicts 2022-03-27 19:30:48 +00:00
6aa23a76a1 fix: more precise filtering
All checks were successful
continuous-integration/drone/push Build is passing
Closes coop-cloud/organising#305.
2022-03-27 19:30:36 +00:00
338360096c feat: pass domain to new app envs
All checks were successful
continuous-integration/drone/push Build is passing
See coop-cloud/organising#304.
2022-03-27 21:06:48 +02:00
7a8c7cd50f ci: drop static check
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-27 13:51:40 +02:00
bafc8a8e34 chore: go mod tidy
Some checks failed
continuous-integration/drone/push Build is failing
2022-03-26 15:23:27 +01:00
3d44d8c9fd Merge remote-tracking branch 'origin/renovate/main-github.com-docker-docker-20.x' into main 2022-03-26 15:22:31 +01:00
b8b4616498 Merge remote-tracking branch 'origin/renovate/main-github.com-docker-cli-20.x' into main 2022-03-26 15:22:18 +01:00
da97117929 chore(deps): update module github.com/docker/docker to v20.10.14
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is failing
2022-03-24 08:01:35 +00:00
978297c464 chore(deps): update module github.com/docker/cli to v20.10.14
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is failing
2022-03-24 08:01:27 +00:00
11da4808fc chore(deps): update module github.com/alecaivazis/survey/v2 to v2.3.4
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is failing
2022-03-24 08:01:21 +00:00
4023e6a066 fix: wait until app created to check for secrets
Some checks failed
continuous-integration/drone/push Build is failing
2022-03-18 11:10:15 +01:00
f432bfdd23 fix: warn when no repo on git
Some checks failed
continuous-integration/drone/push Build is failing
2022-03-18 10:13:24 +01:00
848e17578d chore(deps): update golang docker tag to v1.18
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/pr Build was killed
2022-03-16 08:01:41 +00:00
1615130929 fix: skip prompt for no passwords
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-15 10:54:05 +01:00
7f315315f0 fix: better prompts & matching for secret removal
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-13 10:59:19 +01:00
6a50981120 fix: match on generation of single secret 2022-03-13 10:50:35 +01:00
c67471e6ca fix: show which secret was generated 2022-03-13 10:45:08 +01:00
f0fc1027e5 feat: more info on volumes. skip driver info
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-12 17:11:05 +01:00
c66695d55e fix: return err not logrus + new lines 2022-03-12 17:02:04 +01:00
262009701e fix: guard against concurrent write errors 2022-03-12 16:59:45 +01:00
b31cb6b866 feat: prompt for secret generation
All checks were successful
continuous-integration/drone/push Build is passing
Closes coop-cloud/organising#302.
2022-03-12 16:47:19 +01:00
f39e186b66 fix: match Force/NoInput where needed
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-12 16:15:20 +01:00
a8f35bdf2f fix: handle NoInput for volume removal 2022-03-12 16:09:05 +01:00
6e1e02ac28 chore: use same flag docs style 2022-03-12 16:08:44 +01:00
16fc5ee54b fix: can't force remove if it is already deployed 2022-03-12 16:08:26 +01:00
37a1fcc4af fix: delete all secrets if force/noinput 2022-03-12 16:01:42 +01:00
a9b522719f fix: use name not stack name for pass storage 2022-03-12 16:01:31 +01:00
ce70932a1c feat: single char short flag for volumes removal 2022-03-12 16:01:14 +01:00
d61e104536 fix: look at removal flag for pass logic 2022-03-12 15:48:43 +01:00
d5f30a3ae4 fix: use removal flag with correct help 2022-03-12 15:48:26 +01:00
2555096510 feat: short flags for run command 2022-03-12 15:42:29 +01:00
3797292b20 fix: no domain/converge check for deploy/upgrade/rollback 2022-03-12 15:36:43 +01:00
6333815b71 fix: remove unused flag 2022-03-12 15:32:23 +01:00
793a850fd5 refactor!: short flags for server add 2022-03-12 15:30:43 +01:00
42c1450384 refactor!: prefer short flags on release 2022-03-12 15:28:33 +01:00
a2377882f6 refacator!: use single char short flags 2022-03-12 15:27:19 +01:00
e78b395662 feat: new short flag for RC upgrading 2022-03-12 15:24:19 +01:00
cdec834ca9 reformat: remove extra line in CLI help 2022-03-12 10:20:37 +01:00
b4b0b464bd fix: only delete secrets from specific app
Some checks failed
continuous-integration/drone/push Build is failing
See coop-cloud/organising#300.
2022-03-12 09:39:30 +01:00
d8a1b0ccc1 doc: indicate storage location of secret in logs 2022-03-12 09:39:15 +01:00
3fbd381f55 fix: add pass remove flag & show name is optional 2022-03-12 09:17:24 +01:00
d3e127e5c8 fix: retain backwards compat with TYPE/RECIPE change
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-11 19:37:50 +01:00
e9cfb076c6 fix: strip length modifiers
All checks were successful
continuous-integration/drone/push Build is passing
See coop-cloud/organising#297.
2022-03-11 16:40:10 +01:00
8ccf856110 fix: lay out generated secrets with warning/clarification 2022-03-11 16:39:34 +01:00
d0945aa09d fix: handle NoInput for app removal 2022-03-11 16:39:20 +01:00
123619219e chore: go mod tidy
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-11 09:17:37 +01:00
a27410952e Merge remote-tracking branch 'origin/renovate/main-github.com-docker-docker-20.x' into main 2022-03-11 09:17:15 +01:00
13e0392af6 chore(deps): update module github.com/docker/docker to v20.10.13
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-03-11 08:01:57 +00:00
99a6135f72 chore(deps): update module github.com/docker/cli to v20.10.13
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is failing
2022-03-11 08:01:45 +00:00
a6b52c1354 chore: go mod tidy [ci skip] 2022-03-09 12:28:26 +01:00
fa51459191 chore(deps): update module github.com/docker/distribution to v2.8.1
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-03-09 08:01:26 +00:00
c529988427 feat: output success for secret insert [ci skip] 2022-03-08 18:10:37 +01:00
231cc3c718 fix: use StackName to filter volumes
All checks were successful
continuous-integration/drone/push Build is passing
2022-03-08 18:04:47 +01:00
3381b8936d fix: better error handling & proper context deletion for server rm
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-24 15:57:52 +01:00
823f869f1d fix: error out correctly from ValidateDomain 2022-02-24 15:57:40 +01:00
ecbeacf10f fix: prompt for container choice correctly on run [ci skip] 2022-02-22 11:47:36 +01:00
3f838038d5 chore: go mod tidy
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-22 10:52:14 +01:00
91b4e021d0 chore(deps): update module github.com/containers/image to v5
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-02-22 08:01:12 +00:00
598e87dca2 chore: skip new repositories
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-21 08:46:30 +00:00
001511876d chore: go mod tidy 2022-02-21 08:46:30 +00:00
b295958c17 fix: handle all container registries
See coop-cloud/organising#258

This also fixes how we read the digest of the image; I think it was
wrong before. Some registries restrict reading this info and we now just
default to "unknown" in that case.

This also appears to bring a wave of new dependencies due to the generic
handling logic of the containers/... package. The abra binary is now 1MB
larger.

The catalogue generation is unfortunately slower now, but it is more
robust.

The generic logic looks in ~/.docker/config.json for login details, so
you don't have to pass those in manually on the CLI anymore; we just
read those defaults. You can "docker login" to get credentials set up in
that file. Since most folks won't generate the catalogue, this seems
fine for now.
2022-02-21 08:46:30 +00:00
2fbdcfb958 refactor: try the meta for default branch too
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
Sometimes the Branch(...) call gets confused with state in the
repository. It's more robust to use the default value we get from Gitea.

See coop-cloud/organising#299.
2022-02-20 18:07:49 +01:00
09ac74d205 fix: check out default branch from tags
All checks were successful
continuous-integration/drone/push Build is passing
Also fix error handling to match function signatures.
2022-02-18 11:17:43 +01:00
5da4afa0ec fix: only ensure latest after cloning
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-18 09:55:07 +01:00
9d5e805748 chore: go mod tidy
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-16 13:53:09 +01:00
770ae5ed9b chore(deps): update module github.com/moby/sys/signal to v0.7.0
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-02-16 08:01:33 +00:00
e056d8dc44 fix: de-dupe dns resolver logging, more concise [ci skip] 2022-02-14 18:06:06 +01:00
c3442354e7 fix: skip dupe ipv4 check, done in EnsureDomainsResolveSameIPv4
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-14 17:44:15 +01:00
6b2a0011af fix: remove dupe logging on catalogue reading [ci skip] 2022-02-14 17:37:25 +01:00
46fca7cfa7 docs: less ambig wording [ci skip] 2022-02-14 17:35:42 +01:00
82d560a946 fix: prompt for input on app cp
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-14 17:10:53 +01:00
fc5107865b fix: typo
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-10 10:59:19 +01:00
53ed1fc545 chore: go mod tidy
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-09 09:59:23 +01:00
cc9e3d4e60 chore(deps): update module github.com/docker/distribution to v2.8.0 2022-02-09 09:59:23 +01:00
0557284461 fix: use new repo name
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-09 08:58:51 +00:00
b5f23d3791 feat: show latest published version on sync
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-09 08:58:20 +00:00
2b2dcc01b4 fix: dont checkout latest if we dont have a copy
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-09 09:54:02 +01:00
0a208d049e chore: go mod tidy + patch upgrades
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-04 10:50:55 +01:00
141711ecd0 Merge remote-tracking branch 'origin/renovate/main-github.com-schollz-progressbar-v3-3.x' into main 2022-02-04 10:50:36 +01:00
cd46d71ce4 chore(deps): update module github.com/schollz/progressbar/v3 to v3.8.6
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is failing
2022-02-04 08:01:17 +00:00
6fa090352d chore(deps): update module github.com/buger/goterm to v1.0.4
Some checks failed
renovate/artifacts Artifact file update failure
continuous-integration/drone/pr Build is running
continuous-integration/drone/push Build is failing
2022-02-04 08:01:11 +00:00
227c02cd09 refactor!: make common flags single char again
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-03 14:19:51 +01:00
bfeda40e34 fix: catch more ssh failure modes with help
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-03 13:43:11 +01:00
5237c7ed50 docs: focus more on straight ssh docs for server add 2022-02-03 13:42:49 +01:00
4e09f3b9a8 refactor: migrate authors to dedicated file [ci skip] 2022-02-02 21:00:00 +01:00
dfb32cbb68 fix: type -> recipe [ci skip] 2022-02-02 20:48:12 +01:00
bdd9b0a1aa fix: ensure recipes on latest for lint/generate
All checks were successful
continuous-integration/drone/push Build is passing
Follows b2d17a1829.
2022-01-29 14:06:25 +01:00
b2d17a1829 fix: ensure latest checked out for recipe upgrade
All checks were successful
continuous-integration/drone/push Build is passing
2022-01-29 13:35:42 +01:00
c905376472 refactor!: use "config" instead of "compose" [ci skip] 2022-01-27 12:24:33 +01:00
d316de218c feat: include recipe in deploy & friends overview 2022-01-27 12:23:02 +01:00
123475bd36 chore: remove old files [ci skip] 2022-01-27 12:14:01 +01:00
58e98f490d refactor!: type -> recipes
Some checks failed
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is passing
2022-01-27 12:06:32 +01:00
224b8865bf test: newlines for output when Y'ing & N'ing
Some checks failed
continuous-integration/drone/pr Build is running
continuous-integration/drone/push Build is failing
2022-01-27 12:05:22 +01:00
8fb9f42f13 test: add remaining scripts 2022-01-27 12:05:21 +01:00
dc5e2a5b24 test: fix pwd usage, PWD doesn't exist 2022-01-27 12:05:21 +01:00
40b4ef5ab2 test: disable debug, its too much noise 2022-01-27 12:05:21 +01:00
4a912ae3bc test: show how to run all tests 2022-01-27 12:05:21 +01:00
1150fcc595 test: remove manual test guide, using semi-automated now 2022-01-27 12:05:20 +01:00
45224d1349 test: use new flags + order for record/server 2022-01-27 12:05:20 +01:00
7a40e2d616 fix: remove duplicate flags on "server new" 2022-01-27 12:05:20 +01:00
2277e4ef72 refactor!: remove no-input flag where not needed 2022-01-27 12:05:19 +01:00
c0c3d9fe76 refactor!: make dry-run flag more convenient 2022-01-27 12:05:19 +01:00
2493921ade refactor!: de-duplicate record flags 2022-01-27 12:05:19 +01:00
22f9cf2be4 refactor: remove unused flag 2022-01-27 12:05:18 +01:00
a23124aede feat: auto strip domain names to avoid runtime limits
All checks were successful
continuous-integration/drone/push Build is passing
2022-01-27 10:33:21 +00:00
e670844b56 refactor!: app name -> domain 2022-01-27 10:33:21 +00:00
bc1729c5ca trim docs, point to new docs [ci skip] 2022-01-27 10:30:28 +01:00
fa8611b115 fix: respect NoInput on "app cp" & use app to get StackName
All checks were successful
continuous-integration/drone/push Build is passing
2022-01-25 11:39:38 +01:00
415df981ff test: long flags, drop docker, use run_tests for all tests
All checks were successful
continuous-integration/drone/push Build is passing
2022-01-24 16:49:51 +01:00
57728e58e8 test: improve semi-manual testing
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2022-01-21 16:48:42 +01:00
c7062e0494 fix: initial subcmd completion
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
Broken by migration to v1 API.
2022-01-20 11:42:04 +01:00
86 changed files with 2286 additions and 959 deletions

View File

@ -3,27 +3,17 @@ kind: pipeline
name: coopcloud.tech/abra
steps:
- name: make check
image: golang:1.17
image: golang:1.18
commands:
- make check
- name: make static
image: golang:1.17
ignore: true # until we decide we all want this check
environment:
STATIC_CHECK_URL: honnef.co/go/tools/cmd/staticcheck
STATIC_CHECK_VERSION: v0.2.0
commands:
- go install $STATIC_CHECK_URL@$STATIC_CHECK_VERSION
- make static
- name: make build
image: golang:1.17
image: golang:1.18
commands:
- make build
- name: make test
image: golang:1.17
image: golang:1.18
commands:
- make test
@ -55,7 +45,7 @@ steps:
event: tag
- name: release
image: golang:1.17
image: golang:1.18
environment:
GITEA_TOKEN:
from_secret: goreleaser_gitea_token

View File

@ -7,7 +7,6 @@ gitea_urls:
before:
hooks:
- go mod tidy
- go generate ./...
builds:
- env:
- CGO_ENABLED=0
@ -15,6 +14,15 @@ builds:
goos:
- linux
- darwin
goarch:
- 386
- amd64
- arm
- arm64
goarm:
- 5
- 6
- 7
ldflags:
- "-X 'main.Commit={{ .Commit }}'"
- "-X 'main.Version={{ .Version }}'"

10 AUTHORS.md Normal file
View File

@ -0,0 +1,10 @@
# authors
> If you're looking at this and you hack on Abra and you're not listed here,
> please do add yourself! This is a community project, let's show
- 3wordchant
- decentral1se
- kawaiipunk
- knoflook
- roxxers

15 LICENSE Normal file
View File

@ -0,0 +1,15 @@
Abra: The Co-op Cloud utility belt
Copyright (C) 2022 Co-op Cloud <helo@coopcloud.tech>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

View File

@ -5,7 +5,7 @@ LDFLAGS := "-X 'main.Commit=$(COMMIT)'"
DIST_LDFLAGS := $(LDFLAGS)" -s -w"
export GOPRIVATE=coopcloud.tech
all: format check static build test
all: format check build test
run:
@go run -ldflags=$(LDFLAGS) $(ABRA)
@ -28,9 +28,6 @@ format:
check:
@test -z $$(gofmt -l .) || (echo "gofmt: formatting issue - run 'make format' to resolve" && exit 1)
static:
@staticcheck $(ABRA)
test:
@go test ./... -cover -v

View File

@ -1,73 +1,12 @@
# abra
> https://coopcloud.tech
# `abra`
[![Build Status](https://build.coopcloud.tech/api/badges/coop-cloud/abra/status.svg?ref=refs/heads/main)](https://build.coopcloud.tech/coop-cloud/abra)
[![Go Report Card](https://goreportcard.com/badge/git.coopcloud.tech/coop-cloud/abra)](https://goreportcard.com/report/git.coopcloud.tech/coop-cloud/abra)
The Co-op Cloud utility belt 🎩🐇
`abra` is a command-line tool for managing your own [Co-op Cloud](https://coopcloud.tech). It can provision new servers, create apps, deploy them and a whole lot of other things. Please see [docs.coopcloud.tech](https://docs.coopcloud.tech) for more extensive documentation.
<a href="https://github.com/egonelbre/gophers"><img align="right" width="150" src="https://github.com/egonelbre/gophers/raw/master/.thumb/sketch/adventure/poking-fire.png"/></a>
## Quick install
`abra` is our flagship client & command-line tool which has been developed specifically in the context of the Co-op Cloud project for the purpose of making the day-to-day operations of [operators](https://docs.coopcloud.tech/operators/) and [maintainers](https://docs.coopcloud.tech/maintainers/) pleasant & convenient. It is libre software, written in [Go](https://go.dev) and maintained and extended by the community :heart:
```bash
curl https://install.abra.autonomic.zone | bash
```
Or using the latest release candidate (extra experimental!):
```bash
curl https://install.abra.autonomic.zone | bash -s -- --rc
```
Source for this script is in [scripts/installer/installer](./scripts/installer/installer).
## Hacking
### Getting started
Install [direnv](https://direnv.net), run `cp .envrc.sample .envrc`, then run `direnv allow` in this directory. This will set coopcloud repos as private due to [this bug](https://git.coopcloud.tech/coop-cloud/coopcloud.tech/issues/20#issuecomment-8201). Or you can run `go env -w GOPRIVATE=coopcloud.tech`, but I'm not sure how persistent this is.
Install [Go >= 1.16](https://golang.org/doc/install) and then:
- `make build` to build
- `./abra` to run commands
- `make test` will run tests
- `make install` will install it to `$GOPATH/bin`
- `go get <package>` and `go mod tidy` to add a new dependency
Our [Drone CI configuration](.drone.yml) runs a number of sanity checks on each pushed commit. See the [Makefile](./Makefile) for more handy targets.
Please use the [conventional commit format](https://www.conventionalcommits.org/en/v1.0.0/) for your commits so we can automate our change log.
### Versioning
We use [goreleaser](https://goreleaser.com) to help us automate releases. We use [semver](https://semver.org) for versioning all releases of the tool. While we are still in the public alpha release phase, we will maintain a `0.y.z-alpha` format. Change logs are generated from our commit logs. We are still working this out and aim to refine our release praxis as we go.
For developers, while using this `-alpha` format, the `y` part is the "major" version part. So, if you make breaking changes, you increment that and _not_ the `x` part. So, if you're on `0.1.0-alpha`, then you'd go to `0.1.1-alpha` for a backwards compatible change and `0.2.0-alpha` for a backwards incompatible change.
### Making a new release
- Change `ABRA_VERSION` to match the new tag in [`scripts`](./scripts/installer/installer) (use [semver](https://semver.org))
- Commit that change (e.g. `git commit -m 'chore: publish next tag x.y.z-alpha'`)
- Make a new tag (e.g. `git tag -a x.y.z-alpha`)
- Push the new tag (e.g. `git push && git push --tags`)
- Wait until the build finishes on [build.coopcloud.tech](https://build.coopcloud.tech/coop-cloud/abra)
- Deploy the new installer script (e.g. `cd ./scripts/installer && make`)
- Check the release worked (e.g. `abra upgrade; abra -v`); the whole procedure is sketched end-to-end below
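Taken together, the release steps above amount to something like the following (a sketch only; `0.4.1-alpha` stands in for whatever version you are actually releasing):
```bash
# edit ABRA_VERSION in scripts/installer/installer to match the new tag, then:
git commit -am 'chore: publish next tag 0.4.1-alpha'
git tag -a 0.4.1-alpha
git push && git push --tags
# wait for the build on build.coopcloud.tech to finish, then deploy the installer script
cd scripts/installer && make
# check the release worked
abra upgrade
abra -v
```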
### Fork maintenance
#### `godotenv`
We maintain a fork of [godotenv](https://github.com/Autonomic-Cooperative/godotenv) for two features:
1. multi-line env var support
2. inline comment parsing
You can upgrade the version here by running `go get github.com/Autonomic-Cooperative/godotenv@<commit>` where `<commit>` is the latest commit you want to pin to. At time of writing, `go get github.com/Autonomic-Cooperative/godotenv@b031ea1211e7fd297af4c7747ffb562ebe00cd33` is the command you want to run to maintain the above functionality.
#### `docker/client`
A number of modules in [pkg/upstream](./pkg/upstream) are copy/pasta'd from the upstream [docker/docker/client](https://pkg.go.dev/github.com/docker/docker/client). We had to do this because upstream does not expose this API publicly.
Please see [docs.coopcloud.tech/abra](https://docs.coopcloud.tech/abra) for help on install, upgrade, hacking, troubleshooting & more!

View File

@ -8,7 +8,7 @@ var AppCommand = cli.Command{
Name: "app",
Aliases: []string{"a"},
Usage: "Manage apps",
ArgsUsage: "<app>",
ArgsUsage: "<domain>",
Description: "This command provides functionality for managing the life cycle of your apps",
Subcommands: []cli.Command{
appNewCommand,
@ -29,5 +29,8 @@ var AppCommand = cli.Command{
appVolumeCommand,
appVersionCommand,
appErrorsCommand,
appCmdCommand,
appBackupCommand,
appRestoreCommand,
},
}

389 cli/app/backup.go Normal file
View File

@ -0,0 +1,389 @@
package app
import (
"archive/tar"
"context"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/system"
"github.com/klauspost/pgzip"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type backupConfig struct {
preHookCmd string
postHookCmd string
backupPaths []string
}
var appBackupCommand = cli.Command{
Name: "backup",
Aliases: []string{"bk"},
Usage: "Run app backup",
ArgsUsage: "<domain> [<service>]",
Flags: []cli.Flag{
internal.DebugFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
Description: `
This command runs an app backup.
A backup command and pre/post hook commands are defined in the recipe
configuration. Abra reads this configuration and runs the commands in the context
of the deployed services. Pass <service> if you only want to back up a single
service. All backups are placed in the ~/.abra/backups directory.
A single backup file is produced for all backup paths specified for a service.
If we have the following backup configuration:
- "backupbot.backup.path=/var/lib/foo,/var/lib/bar"
And we run "abra app backup example.com app", Abra will produce a file that
looks like:
~/.abra/backups/example_com_app_609341138.tar.gz
This file is a compressed archive which contains all backup paths. To see paths, run:
tar -tf ~/.abra/backups/example_com_app_609341138.tar.gz
(Make sure to change the name of the backup file)
This single file can be used to restore your app. See "abra app restore" for more.
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
recipe, err := recipe.Get(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
backupConfigs := make(map[string]backupConfig)
for _, service := range recipe.Config.Services {
if backupsEnabled, ok := service.Deploy.Labels["backupbot.backup"]; ok {
if backupsEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
bkConfig := backupConfig{}
logrus.Debugf("backup config detected for %s", fullServiceName)
if paths, ok := service.Deploy.Labels["backupbot.backup.path"]; ok {
logrus.Debugf("detected backup paths for %s: %s", fullServiceName, paths)
bkConfig.backupPaths = strings.Split(paths, ",")
}
if preHookCmd, ok := service.Deploy.Labels["backupbot.backup.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
bkConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.backup.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
bkConfig.postHookCmd = postHookCmd
}
backupConfigs[service.Name] = bkConfig
}
}
}
serviceName := c.Args().Get(1)
if serviceName != "" {
backupConfig, ok := backupConfigs[serviceName]
if !ok {
logrus.Fatalf("no backup config for %s? does %s exist?", serviceName, serviceName)
}
logrus.Infof("running backup for the %s service", serviceName)
if err := runBackup(app, serviceName, backupConfig); err != nil {
logrus.Fatal(err)
}
} else {
for serviceName, backupConfig := range backupConfigs {
logrus.Infof("running backup for the %s service", serviceName)
if err := runBackup(app, serviceName, backupConfig); err != nil {
logrus.Fatal(err)
}
}
}
return nil
},
}
// runBackup does the actual backup logic.
func runBackup(app config.App, serviceName string, bkConfig backupConfig) error {
if len(bkConfig.backupPaths) == 0 {
return fmt.Errorf("backup paths are empty for %s?", serviceName)
}
cl, err := client.New(app.Server)
if err != nil {
return err
}
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if bkConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return fmt.Errorf("failed to run %s on %s: %s", bkConfig.preHookCmd, targetContainer.ID, err.Error())
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, bkConfig.preHookCmd)
}
var tempBackupPaths []string
for _, remoteBackupPath := range bkConfig.backupPaths {
timestamp := strconv.Itoa(time.Now().Nanosecond())
sanitisedPath := strings.ReplaceAll(remoteBackupPath, "/", "_")
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s%s_%s.tar.gz", fullServiceName, sanitisedPath, timestamp))
logrus.Debugf("temporarily backing up %s:%s to %s", fullServiceName, remoteBackupPath, localBackupPath)
logrus.Infof("backing up %s:%s", fullServiceName, remoteBackupPath)
content, _, err := cl.CopyFromContainer(context.Background(), targetContainer.ID, remoteBackupPath)
if err != nil {
logrus.Debugf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to copy %s from container: %s", remoteBackupPath, err.Error())
}
defer content.Close()
_, srcBase := archive.SplitPathDirEntry(remoteBackupPath)
preArchive := archive.RebaseArchiveEntries(content, srcBase, remoteBackupPath)
if err := copyToFile(localBackupPath, preArchive); err != nil {
logrus.Debugf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to create tar archive (%s): %s", localBackupPath, err.Error())
}
tempBackupPaths = append(tempBackupPaths, localBackupPath)
}
logrus.Infof("compressing and merging archives...")
if err := mergeArchives(tempBackupPaths, fullServiceName); err != nil {
logrus.Debugf("failed to merge archive files: %s", err.Error())
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
return fmt.Errorf("failed to merge archive files: %s", err.Error())
}
if err := cleanupTempArchives(tempBackupPaths); err != nil {
return fmt.Errorf("failed to clean up temporary archives: %s", err.Error())
}
if bkConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(bkConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, bkConfig.postHookCmd)
}
return nil
}
func copyToFile(outfile string, r io.Reader) error {
tmpFile, err := system.TempFileSequential(filepath.Dir(outfile), ".tar_temp")
if err != nil {
return err
}
tmpPath := tmpFile.Name()
_, err = io.Copy(tmpFile, r)
tmpFile.Close()
if err != nil {
os.Remove(tmpPath)
return err
}
if err = os.Rename(tmpPath, outfile); err != nil {
os.Remove(tmpPath)
return err
}
return nil
}
func cleanupTempArchives(tarPaths []string) error {
for _, tarPath := range tarPaths {
if err := os.RemoveAll(tarPath); err != nil {
return err
}
logrus.Debugf("remove temporary archive file %s", tarPath)
}
return nil
}
func mergeArchives(tarPaths []string, serviceName string) error {
var out io.Writer
var cout *pgzip.Writer
timestamp := strconv.Itoa(time.Now().Nanosecond())
localBackupPath := filepath.Join(config.BACKUP_DIR, fmt.Sprintf("%s_%s.tar.gz", serviceName, timestamp))
fout, err := os.Create(localBackupPath)
if err != nil {
return fmt.Errorf("Failed to open %s: %s", localBackupPath, err)
}
defer fout.Close()
out = fout
cout = pgzip.NewWriter(out)
out = cout
tw := tar.NewWriter(out)
for _, tarPath := range tarPaths {
if err := addTar(tw, tarPath); err != nil {
return fmt.Errorf("failed to merge %s: %v", tarPath, err)
}
}
if err := tw.Close(); err != nil {
return fmt.Errorf("failed to close tar writer %v", err)
}
if cout != nil {
if err := cout.Flush(); err != nil {
return fmt.Errorf("failed to flush: %s", err)
} else if err = cout.Close(); err != nil {
return fmt.Errorf("failed to close compressed writer: %s", err)
}
}
logrus.Infof("backed up %s to %s", serviceName, localBackupPath)
return nil
}
func addTar(tw *tar.Writer, pth string) (err error) {
var tr *tar.Reader
var rc io.ReadCloser
var hdr *tar.Header
if tr, rc, err = openTarFile(pth); err != nil {
return
}
for {
if hdr, err = tr.Next(); err != nil {
if err == io.EOF {
err = nil
}
break
}
if err = tw.WriteHeader(hdr); err != nil {
break
} else if _, err = io.Copy(tw, tr); err != nil {
break
}
}
if err == nil {
err = rc.Close()
} else {
rc.Close()
}
return
}
func openTarFile(pth string) (tr *tar.Reader, rc io.ReadCloser, err error) {
var fin *os.File
var n int
buff := make([]byte, 1024)
if fin, err = os.Open(pth); err != nil {
return
}
if n, err = fin.Read(buff); err != nil {
fin.Close()
return
} else if n == 0 {
fin.Close()
err = fmt.Errorf("%s is empty", pth)
return
}
if _, err = fin.Seek(0, 0); err != nil {
fin.Close()
return
}
rc = fin
tr = tar.NewReader(rc)
return tr, rc, nil
}
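Read together with the restore command added further down in this diff (cli/app/restore.go), the intended round trip for the archives produced above looks roughly like this (a sketch reusing the example domain, service and timestamped file name from the help texts; real file names will differ):
```bash
abra app backup example.com app
tar -tf ~/.abra/backups/example_com_app_609341138.tar.gz    # list the archived paths
abra app restore example.com app ~/.abra/backups/example_com_app_609341138.tar.gz
```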

View File

@ -14,18 +14,17 @@ import (
var appCheckCommand = cli.Command{
Name: "check",
Aliases: []string{"c"},
Aliases: []string{"chk"},
Usage: "Check if app is configured correctly",
ArgsUsage: "<service>",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
envSamplePath := path.Join(config.RECIPES_DIR, app.Type, ".env.sample")
envSamplePath := path.Join(config.RECIPES_DIR, app.Recipe, ".env.sample")
if _, err := os.Stat(envSamplePath); err != nil {
if os.IsNotExist(err) {
logrus.Fatalf("%s does not exist?", envSamplePath)

233 cli/app/cmd.go Normal file
View File

@ -0,0 +1,233 @@
package app
import (
"context"
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path"
"strings"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var localCmd bool
var localCmdFlag = &cli.BoolFlag{
Name: "local, l",
Usage: "Run command locally",
Destination: &localCmd,
}
var remoteUser string
var remoteUserFlag = &cli.StringFlag{
Name: "user, u",
Value: "",
Usage: "User to run command within a service context",
Destination: &remoteUser,
}
var appCmdCommand = cli.Command{
Name: "command",
Aliases: []string{"cmd"},
Usage: "Run app commands",
Description: `
This command runs app-specific commands.
These commands are bash functions, defined in the abra.sh of the recipe itself.
They can be run within the context of a service (e.g. app) or locally on your
workstation by passing "--local". Arguments can be passed into these functions
using the "-- <args>" syntax.
Example:
abra app cmd example.com app create_user -- me@example.com
`,
ArgsUsage: "<domain> [<service>] <command>",
Flags: []cli.Flag{
internal.DebugFlag,
localCmdFlag,
remoteUserFlag,
},
BashComplete: autocomplete.AppNameComplete,
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if localCmd && remoteUser != "" {
internal.ShowSubcommandHelpAndError(c, errors.New("cannot use --local & <user> together"))
}
abraSh := path.Join(config.RECIPES_DIR, app.Recipe, "abra.sh")
if _, err := os.Stat(abraSh); err != nil {
if os.IsNotExist(err) {
logrus.Fatalf("%s does not exist for %s?", abraSh, app.Name)
}
logrus.Fatal(err)
}
var parsedCmdArgs string
var cmdArgsIdx int
var hasCmdArgs bool
for idx, arg := range c.Args() {
if arg == "--" {
cmdArgsIdx = idx
hasCmdArgs = true
}
if hasCmdArgs && idx > cmdArgsIdx {
parsedCmdArgs += fmt.Sprintf("%s ", c.Args().Get(idx))
}
}
if localCmd {
cmdName := c.Args().Get(1)
if err := ensureCommand(abraSh, app.Recipe, cmdName); err != nil {
logrus.Fatal(err)
}
logrus.Debugf("--local detected, running %s on local work station", cmdName)
var sourceAndExec string
if hasCmdArgs {
logrus.Debugf("parsed following command arguments: %s", parsedCmdArgs)
sourceAndExec = fmt.Sprintf("TARGET=local; APP_NAME=%s; . %s; %s %s", app.StackName(), abraSh, cmdName, parsedCmdArgs)
} else {
logrus.Debug("did not detect any command arguments")
sourceAndExec = fmt.Sprintf("TARGET=local; APP_NAME=%s; . %s; %s", app.StackName(), abraSh, cmdName)
}
cmd := exec.Command("/bin/sh", "-c", sourceAndExec)
if err := internal.RunCmd(cmd); err != nil {
logrus.Fatal(err)
}
} else {
targetServiceName := c.Args().Get(1)
cmdName := c.Args().Get(2)
if err := ensureCommand(abraSh, app.Recipe, cmdName); err != nil {
logrus.Fatal(err)
}
serviceNames, err := config.GetAppServiceNames(app.Name)
if err != nil {
logrus.Fatal(err)
}
matchingServiceName := false
for _, serviceName := range serviceNames {
if serviceName == targetServiceName {
matchingServiceName = true
}
}
if !matchingServiceName {
logrus.Fatalf("no service %s for %s?", targetServiceName, app.Name)
}
logrus.Debugf("running command %s within the context of %s_%s", cmdName, app.StackName(), targetServiceName)
if hasCmdArgs {
logrus.Debugf("parsed following command arguments: %s", parsedCmdArgs)
} else {
logrus.Debug("did not detect any command arguments")
}
if err := runCmdRemote(app, abraSh, targetServiceName, cmdName, parsedCmdArgs); err != nil {
logrus.Fatal(err)
}
}
return nil
},
}
func ensureCommand(abraSh, recipeName, execCmd string) error {
bytes, err := ioutil.ReadFile(abraSh)
if err != nil {
return err
}
if !strings.Contains(string(bytes), execCmd) {
return fmt.Errorf("%s doesn't have a %s function", recipeName, execCmd)
}
return nil
}
func runCmdRemote(app config.App, abraSh, serviceName, cmdName, cmdArgs string) error {
cl, err := client.New(app.Server)
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
logrus.Debugf("retrieved %s as target container on %s", formatter.ShortenID(targetContainer.ID), app.Server)
toTarOpts := &archive.TarOptions{NoOverwriteDirNonDir: true, Compression: archive.Gzip}
content, err := archive.TarWithOptions(abraSh, toTarOpts)
if err != nil {
return err
}
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), targetContainer.ID, "/tmp", content, copyOpts); err != nil {
return err
}
var cmd []string
if cmdArgs != "" {
cmd = []string{"/bin/sh", "-c", fmt.Sprintf("TARGET=%s; APP_NAME=%s; . /tmp/abra.sh; %s %s", serviceName, app.StackName(), cmdName, cmdArgs)}
} else {
cmd = []string{"/bin/sh", "-c", fmt.Sprintf("TARGET=%s; APP_NAME=%s; . /tmp/abra.sh; %s", serviceName, app.StackName(), cmdName)}
}
logrus.Debugf("running command: %s", strings.Join(cmd, " "))
execCreateOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: cmd,
Detach: false,
Tty: true,
}
if remoteUser != "" {
logrus.Debugf("running command with user %s", remoteUser)
execCreateOpts.User = remoteUser
}
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &execCreateOpts); err != nil {
return err
}
return nil
}
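The command above sources the recipe's abra.sh (locally with --local, otherwise inside the target service container) with TARGET and APP_NAME set, then calls the named bash function, so anything passed after "--" arrives as positional parameters. A minimal sketch of such a function (create_user and its body are illustrative, following the help text's example):
```bash
# abra.sh, shipped in the recipe repository (illustrative)
create_user() {
  # TARGET and APP_NAME are set by abra before this function runs;
  # "abra app cmd example.com app create_user -- me@example.com" arrives as $1
  email="$1"
  echo "creating user $email for $APP_NAME (service: $TARGET)"
}
```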

View File

@ -14,12 +14,12 @@ import (
)
var appConfigCommand = cli.Command{
Name: "config",
Aliases: []string{"c"},
Usage: "Edit app config",
Name: "config",
Aliases: []string{"cfg"},
Usage: "Edit app config",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {

View File

@ -22,7 +22,7 @@ import (
var appCpCommand = cli.Command{
Name: "cp",
Aliases: []string{"c"},
ArgsUsage: "<src> <dst>",
ArgsUsage: "<domain> <src> <dst>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
@ -34,12 +34,11 @@ This command supports copying files to and from any app service file system.
If you want to copy a myfile.txt to the root of the app service:
abra app cp <app> myfile.txt app:/
abra app cp <domain> myfile.txt app:/
And if you want to copy that file back to your current working directory locally:
abra app cp <app> app:/myfile.txt .
abra app cp <domain> app:/myfile.txt .
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
@ -106,25 +105,15 @@ func configureAndCp(
dstPath string,
service string,
isToContainer bool) error {
appFiles, err := config.LoadAppFiles("")
if err != nil {
logrus.Fatal(err)
}
appEnv, err := config.GetApp(appFiles, app.Name)
if err != nil {
logrus.Fatal(err)
}
cl, err := client.New(app.Server)
if err != nil {
logrus.Fatal(err)
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("%s_%s", appEnv.StackName(), service))
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), service))
container, err := container.GetContainer(context.Background(), cl, filters, true)
container, err := container.GetContainer(context.Background(), cl, filters, internal.NoInput)
if err != nil {
logrus.Fatal(err)
}
@ -157,5 +146,6 @@ func configureAndCp(
logrus.Fatal(err)
}
}
return nil
}

View File

@ -7,9 +7,10 @@ import (
)
var appDeployCommand = cli.Command{
Name: "deploy",
Aliases: []string{"d"},
Usage: "Deploy an app",
Name: "deploy",
Aliases: []string{"d"},
Usage: "Deploy an app",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
@ -21,7 +22,7 @@ var appDeployCommand = cli.Command{
Before: internal.SubCommandBefore,
Description: `
This command deploys an app. It does not support incrementing the version of a
deployed app, for this you need to look at the "abra app upgrade <app>"
deployed app, for this you need to look at the "abra app upgrade <domain>"
command.
You may pass "--force" to re-deploy the same version again. This can be useful

View File

@ -2,6 +2,7 @@ package app
import (
"context"
"fmt"
"strconv"
"strings"
"time"
@ -20,8 +21,9 @@ import (
)
var appErrorsCommand = cli.Command{
Name: "errors",
Usage: "List errors for a deployed app",
Name: "errors",
Usage: "List errors for a deployed app",
ArgsUsage: "<domain>",
Description: `
This command lists errors for a deployed app.
@ -40,15 +42,13 @@ Got any more ideas? Please let us know:
https://git.coopcloud.tech/coop-cloud/organising/issues/new/choose
This command is best accompanied by "abra app logs <app>" which may reveal
This command is best accompanied by "abra app logs <domain>" which may reveal
further information which can help you debug the cause of an app failure via
the logs.
`,
Aliases: []string{"e"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.WatchFlag,
},
Before: internal.SubCommandBefore,
@ -89,14 +89,15 @@ the logs.
}
func checkErrors(c *cli.Context, cl *dockerClient.Client, app config.App) error {
recipe, err := recipe.Get(app.Type)
recipe, err := recipe.Get(app.Recipe)
if err != nil {
return err
}
for _, service := range recipe.Config.Services {
filters := filters.NewArgs()
filters.Add("name", service.Name)
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), service.Name))
containers, err := cl.ContainerList(context.Background(), types.ContainerListOptions{Filters: filters})
if err != nil {
return err

View File

@ -22,12 +22,12 @@ var statusFlag = &cli.BoolFlag{
Destination: &status,
}
var appType string
var typeFlag = &cli.StringFlag{
Name: "type, t",
var appRecipe string
var recipeFlag = &cli.StringFlag{
Name: "recipe, r",
Value: "",
Usage: "Show apps of a specific type",
Destination: &appType,
Usage: "Show apps of a specific recipe",
Destination: &appRecipe,
}
var listAppServer string
@ -68,13 +68,12 @@ in ~/.abra/) to generate a report of all your apps.
By passing the "--status/-S" flag, you can query all your servers for the
actual live deployment status. Depending on how many servers you manage, this
can take some time.
`,
`,
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
statusFlag,
listAppServerFlag,
typeFlag,
recipeFlag,
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
@ -88,7 +87,7 @@ can take some time.
logrus.Fatal(err)
}
sort.Sort(config.ByServerAndType(apps))
sort.Sort(config.ByServerAndRecipe(apps))
statuses := make(map[string]map[string]string)
var catl recipe.RecipeCatalogue
@ -123,14 +122,14 @@ can take some time.
var ok bool
if stats, ok = allStats[app.Server]; !ok {
stats = serverStatus{}
if appType == "" {
if appRecipe == "" {
// count server, no filtering
totalServersCount++
}
}
if app.Type == appType || appType == "" {
if appType != "" {
if app.Recipe == appRecipe || appRecipe == "" {
if appRecipe != "" {
// only count server if matches filter
totalServersCount++
}
@ -161,7 +160,7 @@ can take some time.
var newUpdates []string
if version != "unknown" {
updates, err := recipe.GetRecipeCatalogueVersions(app.Type, catl)
updates, err := recipe.GetRecipeCatalogueVersions(app.Recipe, catl)
if err != nil {
logrus.Fatal(err)
}
@ -198,7 +197,7 @@ can take some time.
}
appStats.server = app.Server
appStats.recipe = app.Type
appStats.recipe = app.Recipe
appStats.appName = app.Name
appStats.domain = app.Domain
@ -216,7 +215,7 @@ can take some time.
serverStat := allStats[app.Server]
tableCol := []string{"recipe", "domain", "app name"}
tableCol := []string{"recipe", "domain"}
if status {
tableCol = append(tableCol, []string{"status", "version", "upgrade"}...)
}
@ -224,7 +223,7 @@ can take some time.
table := formatter.CreateTable(tableCol)
for _, appStat := range serverStat.apps {
tableRow := []string{appStat.recipe, appStat.domain, appStat.appName}
tableRow := []string{appStat.recipe, appStat.domain}
if status {
tableRow = append(tableRow, []string{appStat.status, appStat.version, appStat.upgrade}...)
}

View File

@ -29,9 +29,12 @@ var logOpts = types.ContainerLogsOptions{
}
// stackLogs lists logs for all stack services
func stackLogs(c *cli.Context, stackName string, client *dockerClient.Client) {
filters := filters.NewArgs()
filters.Add("name", stackName)
func stackLogs(c *cli.Context, app config.App, client *dockerClient.Client) {
filters, err := app.Filters(true, false)
if err != nil {
logrus.Fatal(err)
}
serviceOpts := types.ServiceListOptions{Filters: filters}
services, err := client.ServiceList(context.Background(), serviceOpts)
if err != nil {
@ -67,12 +70,11 @@ func stackLogs(c *cli.Context, stackName string, client *dockerClient.Client) {
var appLogsCommand = cli.Command{
Name: "logs",
Aliases: []string{"l"},
ArgsUsage: "[<service>]",
ArgsUsage: "<domain> [<service>]",
Usage: "Tail app logs",
Flags: []cli.Flag{
internal.StdErrOnlyFlag,
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
@ -86,8 +88,8 @@ var appLogsCommand = cli.Command{
serviceName := c.Args().Get(1)
if serviceName == "" {
logrus.Debugf("tailing logs for all %s services", app.Type)
stackLogs(c, app.StackName(), cl)
logrus.Debugf("tailing logs for all %s services", app.Recipe)
stackLogs(c, app, cl)
} else {
logrus.Debugf("tailing logs for %s", serviceName)
if err := tailServiceLogs(c, cl, app, serviceName); err != nil {
@ -101,7 +103,8 @@ var appLogsCommand = cli.Command{
func tailServiceLogs(c *cli.Context, cl *dockerClient.Client, app config.App, serviceName string) error {
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("%s_%s", app.StackName(), serviceName))
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
chosenService, err := service.GetService(context.Background(), cl, filters, internal.NoInput)
if err != nil {
logrus.Fatal(err)

View File

@ -11,7 +11,7 @@ This command takes a recipe and uses it to create a new app. This new app
configuration is stored in your ~/.abra directory under the appropriate server.
This command does not deploy your app for you. You will need to run "abra app
deploy <app>" to do so.
deploy <domain>" to do so.
You can see what recipes are available (i.e. values for the <recipe> argument)
by running "abra recipe ls".
@ -36,12 +36,11 @@ var appNewCommand = cli.Command{
internal.NoInputFlag,
internal.NewAppServerFlag,
internal.DomainFlag,
internal.NewAppNameFlag,
internal.PassFlag,
internal.SecretsFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "<recipe>",
ArgsUsage: "[<recipe>]",
Action: internal.NewAction,
BashComplete: autocomplete.RecipeNameComplete,
}

View File

@ -15,7 +15,6 @@ import (
"github.com/buger/goterm"
dockerFormatter "github.com/docker/cli/cli/command/formatter"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
@ -25,11 +24,11 @@ var appPsCommand = cli.Command{
Name: "ps",
Aliases: []string{"p"},
Usage: "Check app status",
ArgsUsage: "<domain>",
Description: "This command shows a more detailed status output of a specific deployed app.",
Flags: []cli.Flag{
internal.WatchFlag,
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
@ -67,8 +66,10 @@ var appPsCommand = cli.Command{
// showPSOutput renders ps output.
func showPSOutput(c *cli.Context, app config.App, cl *dockerClient.Client) {
filters := filters.NewArgs()
filters.Add("name", app.StackName())
filters, err := app.Filters(true, true)
if err != nil {
logrus.Fatal(err)
}
containers, err := cl.ContainerList(context.Background(), types.ContainerListOptions{Filters: filters})
if err != nil {

View File

@ -11,7 +11,6 @@ import (
stack "coopcloud.tech/abra/pkg/upstream/stack"
"github.com/AlecAivazis/survey/v2"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@ -21,14 +20,15 @@ var Volumes bool
// VolumesFlag is used to specify if volumes should be deleted when deleting an app
var VolumesFlag = &cli.BoolFlag{
Name: "volumes",
Name: "volumes, V",
Destination: &Volumes,
}
var appRemoveCommand = cli.Command{
Name: "remove",
Aliases: []string{"rm"},
Usage: "Remove an already undeployed app",
Name: "remove",
Aliases: []string{"rm"},
ArgsUsage: "<domain>",
Usage: "Remove an already undeployed app",
Flags: []cli.Flag{
VolumesFlag,
internal.ForceFlag,
@ -39,7 +39,7 @@ var appRemoveCommand = cli.Command{
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
if !internal.Force {
if !internal.Force && !internal.NoInput {
response := false
prompt := &survey.Confirm{
Message: fmt.Sprintf("about to remove %s, are you sure?", app.Name),
@ -62,11 +62,14 @@ var appRemoveCommand = cli.Command{
logrus.Fatal(err)
}
if isDeployed {
logrus.Fatalf("%s is still deployed. Run \"abra app undeploy %s \" or pass --force", app.Name, app.Name)
logrus.Fatalf("%s is still deployed. Run \"abra app undeploy %s\"", app.Name, app.Name)
}
fs, err := app.Filters(false, false)
if err != nil {
logrus.Fatal(err)
}
fs := filters.NewArgs()
fs.Add("name", app.StackName())
secretList, err := cl.SecretList(context.Background(), types.SecretListOptions{Filters: fs})
if err != nil {
logrus.Fatal(err)
@ -83,7 +86,7 @@ var appRemoveCommand = cli.Command{
if len(secrets) > 0 {
var secretNamesToRemove []string
if !internal.Force {
if !internal.Force && !internal.NoInput {
secretsPrompt := &survey.MultiSelect{
Message: "which secrets do you want to remove?",
Help: "'x' indicates selected, enter / return to confirm, ctrl-c to exit, vim mode is enabled",
@ -96,6 +99,10 @@ var appRemoveCommand = cli.Command{
}
}
if internal.Force || internal.NoInput {
secretNamesToRemove = secretNames
}
for _, name := range secretNamesToRemove {
err := cl.SecretRemove(context.Background(), secrets[name])
if err != nil {
@ -107,6 +114,11 @@ var appRemoveCommand = cli.Command{
logrus.Info("no secrets to remove")
}
fs, err = app.Filters(false, true)
if err != nil {
logrus.Fatal(err)
}
volumeListOKBody, err := cl.VolumeList(context.Background(), fs)
volumeList := volumeListOKBody.Volumes
if err != nil {
@ -121,7 +133,7 @@ var appRemoveCommand = cli.Command{
if len(vols) > 0 {
if Volumes {
var removeVols []string
if !internal.Force {
if !internal.Force && !internal.NoInput {
volumesPrompt := &survey.MultiSelect{
Message: "which volumes do you want to remove?",
Help: "'x' indicates selected, enter / return to confirm, ctrl-c to exit, vim mode is enabled",
@ -133,6 +145,7 @@ var appRemoveCommand = cli.Command{
logrus.Fatal(err)
}
}
for _, vol := range removeVols {
err := cl.VolumeRemove(context.Background(), vol, internal.Force) // last argument is for force removing
if err != nil {

View File

@ -18,10 +18,9 @@ var appRestartCommand = cli.Command{
Name: "restart",
Aliases: []string{"re"},
Usage: "Restart an app",
ArgsUsage: "<service>",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
Description: `This command restarts a service within a deployed app.`,

201
cli/app/restore.go Normal file
View File

@ -0,0 +1,201 @@
package app
import (
"context"
"errors"
"fmt"
"os"
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
containerPkg "coopcloud.tech/abra/pkg/container"
"coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/abra/pkg/upstream/container"
"github.com/docker/cli/cli/command"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type restoreConfig struct {
preHookCmd string
postHookCmd string
}
var appRestoreCommand = cli.Command{
Name: "restore",
Aliases: []string{"rs"},
Usage: "Run app restore",
ArgsUsage: "<domain> <service> <file>",
Flags: []cli.Flag{
internal.DebugFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
Description: `
This command runs an app restore.
Pre/post hook commands are defined in the recipe configuration. Abra reads this
configuration and runs the commands in the context of the service before
restoring the backup.
Unlike "abra app backup", restore must be run on a per-service basis. You can
not restore all services in one go. Backup files produced by Abra are
compressed archives which use absolute paths. This allows Abra to restore
according to standard tar command logic.
Example:
abra app restore example.com app ~/.abra/backups/example_com_app_609341138.tar.gz
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
serviceName := c.Args().Get(1)
if serviceName == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <service>?"))
}
backupPath := c.Args().Get(2)
if backupPath == "" {
internal.ShowSubcommandHelpAndError(c, errors.New("missing <file>?"))
}
if _, err := os.Stat(backupPath); err != nil {
if os.IsNotExist(err) {
logrus.Fatalf("%s doesn't exist?", backupPath)
}
}
recipe, err := recipe.Get(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
restoreConfigs := make(map[string]restoreConfig)
for _, service := range recipe.Config.Services {
if restoreEnabled, ok := service.Deploy.Labels["backupbot.restore"]; ok {
if restoreEnabled == "true" {
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), service.Name)
rsConfig := restoreConfig{}
logrus.Debugf("restore config detected for %s", fullServiceName)
if preHookCmd, ok := service.Deploy.Labels["backupbot.restore.pre-hook"]; ok {
logrus.Debugf("detected pre-hook command for %s: %s", fullServiceName, preHookCmd)
rsConfig.preHookCmd = preHookCmd
}
if postHookCmd, ok := service.Deploy.Labels["backupbot.restore.post-hook"]; ok {
logrus.Debugf("detected post-hook command for %s: %s", fullServiceName, postHookCmd)
rsConfig.postHookCmd = postHookCmd
}
restoreConfigs[service.Name] = rsConfig
}
}
}
rsConfig, ok := restoreConfigs[serviceName]
if !ok {
rsConfig = restoreConfig{}
}
if err := runRestore(app, backupPath, serviceName, rsConfig); err != nil {
logrus.Fatal(err)
}
return nil
},
}
// runRestore does the actual restore logic.
func runRestore(app config.App, backupPath, serviceName string, rsConfig restoreConfig) error {
cl, err := client.New(app.Server)
if err != nil {
return err
}
// FIXME: avoid instantiating a new CLI
dcli, err := command.NewDockerCli()
if err != nil {
return err
}
filters := filters.NewArgs()
filters.Add("name", fmt.Sprintf("^%s_%s", app.StackName(), serviceName))
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
if err != nil {
return err
}
fullServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
if rsConfig.preHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.preHookCmd)
logrus.Debugf("split pre-hook command for %s into %s", fullServiceName, splitCmd)
preHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &preHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s pre-hook command: %s", fullServiceName, rsConfig.preHookCmd)
}
backupReader, err := os.Open(backupPath)
if err != nil {
return err
}
content, err := archive.DecompressStream(backupReader)
if err != nil {
return err
}
// we use absolute paths so tar knows what to do. it will restore files
// according to the paths set in the compressed archive
restorePath := "/"
copyOpts := types.CopyToContainerOptions{AllowOverwriteDirWithFile: false, CopyUIDGID: false}
if err := cl.CopyToContainer(context.Background(), targetContainer.ID, restorePath, content, copyOpts); err != nil {
return err
}
logrus.Infof("restored %s to %s", backupPath, fullServiceName)
if rsConfig.postHookCmd != "" {
splitCmd := internal.SafeSplit(rsConfig.postHookCmd)
logrus.Debugf("split post-hook command for %s into %s", fullServiceName, splitCmd)
postHookExecOpts := types.ExecConfig{
AttachStderr: true,
AttachStdin: true,
AttachStdout: true,
Cmd: splitCmd,
Detach: false,
Tty: true,
}
if err := container.RunExec(dcli, cl, targetContainer.ID, &postHookExecOpts); err != nil {
return err
}
logrus.Infof("succesfully ran %s post-hook command: %s", fullServiceName, rsConfig.postHookCmd)
}
return nil
}
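For reference, the restore opt-in read by the loop over recipe.Config.Services above is driven by three deploy labels on the recipe's services. A small illustration of the convention follows; only the label keys come from the code, the values are hypothetical:

package main

import "fmt"

func main() {
	// Keys as read from service.Deploy.Labels above; values are examples only.
	labels := map[string]string{
		"backupbot.restore":           "true",
		"backupbot.restore.pre-hook":  "sh -c 'touch /tmp/restore-in-progress'",
		"backupbot.restore.post-hook": "sh -c 'chown -R mysql:mysql /var/lib/mysql'",
	}
	for key, value := range labels {
		fmt.Printf("%s=%s\n", key, value)
	}
}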

View File

@ -22,12 +22,13 @@ var appRollbackCommand = cli.Command{
Name: "rollback",
Aliases: []string{"rl"},
Usage: "Roll an app back to a previous version",
ArgsUsage: "<app>",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.ForceFlag,
internal.ChaosFlag,
internal.NoDomainChecksFlag,
internal.DontWaitConvergeFlag,
},
Before: internal.SubCommandBefore,
@ -50,12 +51,12 @@ recipes.
stackName := app.StackName()
if !internal.Chaos {
if err := recipe.EnsureUpToDate(app.Type); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
r, err := recipe.Get(app.Type)
r, err := recipe.Get(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
@ -85,13 +86,13 @@ recipes.
logrus.Fatal(err)
}
versions, err := recipe.GetRecipeCatalogueVersions(app.Type, catl)
versions, err := recipe.GetRecipeCatalogueVersions(app.Recipe, catl)
if err != nil {
logrus.Fatal(err)
}
if len(versions) == 0 && !internal.Chaos {
logrus.Fatalf("no published releases for %s in the recipe catalogue?", app.Type)
logrus.Fatalf("no published releases for %s in the recipe catalogue?", app.Recipe)
}
var availableDowngrades []string
@ -125,7 +126,7 @@ recipes.
var chosenDowngrade string
if !internal.Chaos {
if internal.Force {
if internal.Force || internal.NoInput {
chosenDowngrade = availableDowngrades[0]
logrus.Debugf("choosing %s as version to downgrade to (--force)", chosenDowngrade)
} else {
@ -140,7 +141,7 @@ recipes.
}
if !internal.Chaos {
if err := recipe.EnsureVersion(app.Type, chosenDowngrade); err != nil {
if err := recipe.EnsureVersion(app.Recipe, chosenDowngrade); err != nil {
logrus.Fatal(err)
}
}
@ -148,13 +149,13 @@ recipes.
if internal.Chaos {
logrus.Warn("chaos mode engaged")
var err error
chosenDowngrade, err = recipe.ChaosVersion(app.Type)
chosenDowngrade, err = recipe.ChaosVersion(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
}
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Type, "abra.sh")
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
abraShEnv, err := config.ReadAbraShEnvVars(abraShPath)
if err != nil {
logrus.Fatal(err)
@ -163,7 +164,7 @@ recipes.
app.Env[k] = v
}
composeFiles, err := config.GetAppComposeFiles(app.Type, app.Env)
composeFiles, err := config.GetAppComposeFiles(app.Recipe, app.Env)
if err != nil {
logrus.Fatal(err)
}

View File

@ -19,14 +19,14 @@ import (
var user string
var userFlag = &cli.StringFlag{
Name: "user",
Name: "user, u",
Value: "",
Destination: &user,
}
var noTTY bool
var noTTYFlag = &cli.BoolFlag{
Name: "no-tty",
Name: "no-tty, t",
Destination: &noTTY,
}
@ -35,12 +35,11 @@ var appRunCommand = cli.Command{
Aliases: []string{"r"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
noTTYFlag,
userFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "<service> <args>...",
ArgsUsage: "<domain> <service> <args>...",
Usage: "Run a command in a service container",
BashComplete: autocomplete.AppNameComplete,
Action: func(c *cli.Context) error {
@ -60,11 +59,11 @@ var appRunCommand = cli.Command{
}
serviceName := c.Args().Get(1)
stackAndServiceName := fmt.Sprintf("%s_%s", app.StackName(), serviceName)
stackAndServiceName := fmt.Sprintf("^%s_%s", app.StackName(), serviceName)
filters := filters.NewArgs()
filters.Add("name", stackAndServiceName)
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, true)
targetContainer, err := containerPkg.GetContainer(context.Background(), cl, filters, false)
if err != nil {
logrus.Fatal(err)
}

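One detail worth calling out from the hunk above: the "name" filter value gains a "^" prefix. Docker treats the value as a regular expression, so an unanchored "example_com_app" can also match containers of another stack whose name merely contains that string. A minimal sketch (the server and stack names are made up for the example):

package main

import (
	"context"

	"coopcloud.tech/abra/pkg/client"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/sirupsen/logrus"
)

func main() {
	cl, err := client.New("example.com") // hypothetical server
	if err != nil {
		logrus.Fatal(err)
	}
	f := filters.NewArgs()
	// "^" anchors the match: "example_com_app.1.xyz" matches,
	// "myexample_com_app.1.xyz" from another stack does not.
	f.Add("name", "^example_com_app")
	containers, err := cl.ContainerList(context.Background(), types.ContainerListOptions{Filters: f})
	if err != nil {
		logrus.Fatal(err)
	}
	logrus.Infof("found %d matching containers", len(containers))
}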
View File

@ -10,10 +10,11 @@ import (
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/secret"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
dockerClient "github.com/docker/docker/client"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@ -25,15 +26,22 @@ var allSecretsFlag = &cli.BoolFlag{
Usage: "Generate all secrets",
}
var rmAllSecrets bool
var rmAllSecretsFlag = &cli.BoolFlag{
Name: "all, a",
Destination: &rmAllSecrets,
Usage: "Remove all secrets",
}
var appSecretGenerateCommand = cli.Command{
Name: "generate",
Aliases: []string{"g"},
Usage: "Generate secrets",
ArgsUsage: "<secret> <version>",
ArgsUsage: "<domain> <secret> <version>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
allSecretsFlag, internal.PassFlag,
allSecretsFlag,
internal.PassFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.AppNameComplete,
@ -62,8 +70,10 @@ var appSecretGenerateCommand = cli.Command{
parsed := secret.ParseSecretEnvVarName(sec)
if secretName == parsed {
secretsToCreate[sec] = secretVersion
matches = true
}
}
if !matches {
logrus.Fatalf("%s doesn't exist in the env config?", secretName)
}
@ -76,7 +86,7 @@ var appSecretGenerateCommand = cli.Command{
if internal.Pass {
for name, data := range secretVals {
if err := secret.PassInsertSecret(data, name, app.StackName(), app.Server); err != nil {
if err := secret.PassInsertSecret(data, name, app.Name, app.Server); err != nil {
logrus.Fatal(err)
}
}
@ -105,11 +115,10 @@ var appSecretInsertCommand = cli.Command{
Usage: "Insert secret",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.PassFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "<app> <secret-name> <version> <data>",
ArgsUsage: "<domain> <secret-name> <version> <data>",
BashComplete: autocomplete.AppNameComplete,
Description: `
This command inserts a secret into an app environment.
@ -139,8 +148,10 @@ Example:
logrus.Fatal(err)
}
logrus.Infof("%s successfully stored on server", secretName)
if internal.Pass {
if err := secret.PassInsertSecret(data, name, app.StackName(), app.Server); err != nil {
if err := secret.PassInsertSecret(data, name, app.Name, app.Server); err != nil {
logrus.Fatal(err)
}
}
@ -149,6 +160,25 @@ Example:
},
}
// secretRm removes a secret.
func secretRm(cl *dockerClient.Client, app config.App, secretName, parsed string) error {
if err := cl.SecretRemove(context.Background(), secretName); err != nil {
return err
}
logrus.Infof("deleted %s successfully from server", secretName)
if internal.PassRemove {
if err := secret.PassRmSecret(parsed, app.StackName(), app.Server); err != nil {
return err
}
logrus.Infof("deleted %s successfully from local pass store", secretName)
}
return nil
}
var appSecretRmCommand = cli.Command{
Name: "remove",
Aliases: []string{"rm"},
@ -156,27 +186,28 @@ var appSecretRmCommand = cli.Command{
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
allSecretsFlag, internal.PassFlag,
rmAllSecretsFlag,
internal.PassRemoveFlag,
},
Before: internal.SubCommandBefore,
ArgsUsage: "<app> <secret-name>",
ArgsUsage: "<domain> [<secret-name>]",
BashComplete: autocomplete.AppNameComplete,
Description: `
This command removes a secret from an app environment.
This command removes app secrets.
Example:
abra app secret remove myapp db_pass
`,
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
secrets := secret.ReadSecretEnvVars(app.Env)
if c.Args().Get(1) != "" && allSecrets {
if c.Args().Get(1) != "" && rmAllSecrets {
internal.ShowSubcommandHelpAndError(c, errors.New("cannot use '<secret-name>' and '--all' together"))
}
if c.Args().Get(1) == "" && !allSecrets {
if c.Args().Get(1) == "" && !rmAllSecrets {
internal.ShowSubcommandHelpAndError(c, errors.New("no secret(s) specified?"))
}
@ -185,49 +216,59 @@ Example:
logrus.Fatal(err)
}
filters := filters.NewArgs()
filters.Add("name", app.StackName())
filters, err := app.Filters(false, false)
if err != nil {
logrus.Fatal(err)
}
secretList, err := cl.SecretList(context.Background(), types.SecretListOptions{Filters: filters})
if err != nil {
logrus.Fatal(err)
}
secretToRm := c.Args().Get(1)
remoteSecretNames := make(map[string]bool)
for _, cont := range secretList {
secretName := cont.Spec.Annotations.Name
parsed := secret.ParseGeneratedSecretName(secretName, app)
if allSecrets {
if err := cl.SecretRemove(context.Background(), secretName); err != nil {
logrus.Fatal(err)
}
logrus.Infof("deleted %s successfully from server", secretName)
remoteSecretNames[cont.Spec.Annotations.Name] = true
}
if internal.Pass {
if err := secret.PassRmSecret(parsed, app.StackName(), app.Server); err != nil {
logrus.Fatal(err)
}
match := false
secretToRm := c.Args().Get(1)
for sec := range secrets {
secretName := secret.ParseSecretEnvVarName(sec)
logrus.Infof("deleted %s successfully from local pass store", secretName)
}
} else {
if parsed == secretToRm {
if err := cl.SecretRemove(context.Background(), secretName); err != nil {
logrus.Fatal(err)
}
secVal, err := secret.ParseSecretEnvVarValue(secrets[sec])
if err != nil {
logrus.Fatal(err)
}
logrus.Infof("deleted %s successfully from server", secretName)
if internal.Pass {
if err := secret.PassRmSecret(parsed, app.StackName(), app.Server); err != nil {
secretRemoteName := fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, secVal.Version)
if _, ok := remoteSecretNames[secretRemoteName]; ok {
if secretToRm != "" {
if secretName == secretToRm {
if err := secretRm(cl, app, secretRemoteName, secretName); err != nil {
logrus.Fatal(err)
}
logrus.Infof("deleted %s successfully from local pass store", secretName)
return nil
}
} else {
match = true
if err := secretRm(cl, app, secretRemoteName, secretName); err != nil {
logrus.Fatal(err)
}
}
}
}
if !match && secretToRm != "" {
logrus.Fatalf("%s doesn't exist on server?", secretToRm)
}
if !match {
logrus.Fatal("no secrets to remove?")
}
return nil
},
}
@ -237,7 +278,6 @@ var appSecretLsCommand = cli.Command{
Aliases: []string{"ls"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
Usage: "List all secrets",
@ -253,8 +293,11 @@ var appSecretLsCommand = cli.Command{
logrus.Fatal(err)
}
filters := filters.NewArgs()
filters.Add("name", app.StackName())
filters, err := app.Filters(false, false)
if err != nil {
logrus.Fatal(err)
}
secretList, err := cl.SecretList(context.Background(), types.SecretListOptions{Filters: filters})
if err != nil {
logrus.Fatal(err)
@ -295,7 +338,7 @@ var appSecretCommand = cli.Command{
Name: "secret",
Aliases: []string{"s"},
Usage: "Manage app secrets",
ArgsUsage: "<command>",
ArgsUsage: "<domain>",
Subcommands: []cli.Command{
appSecretGenerateCommand,
appSecretInsertCommand,

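The removal logic above hinges on the on-server naming convention for secrets, built with fmt.Sprintf("%s_%s_%s", app.StackName(), secretName, secVal.Version). A tiny illustration with made-up values:

package main

import "fmt"

func main() {
	// Example values only: an app at example.com, a "db_pass" secret, version "v1".
	stackName := "example_com" // the sanitised stack name for example.com
	secretName := "db_pass"    // as parsed from the app's env config
	version := "v1"            // the version configured for that secret
	remote := fmt.Sprintf("%s_%s_%s", stackName, secretName, version)
	fmt.Println(remote) // example_com_db_pass_v1
}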
View File

@ -12,8 +12,9 @@ import (
)
var appUndeployCommand = cli.Command{
Name: "undeploy",
Aliases: []string{"un"},
Name: "undeploy",
Aliases: []string{"un"},
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,

View File

@ -21,13 +21,14 @@ var appUpgradeCommand = cli.Command{
Name: "upgrade",
Aliases: []string{"up"},
Usage: "Upgrade an app",
ArgsUsage: "<app>",
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.ForceFlag,
internal.ChaosFlag,
internal.NoDomainChecksFlag,
internal.DontWaitConvergeFlag,
},
Before: internal.SubCommandBefore,
Description: `
@ -35,7 +36,7 @@ This command supports upgrading an app. You can use it to choose and roll out a
new upgrade to an existing app.
This command specifically supports incrementing the version of running apps, as
opposed to "abra app deploy <app>" which will not change the version of a
opposed to "abra app deploy <domain>" which will not change the version of a
deployed app.
You may pass "--force/-f" to upgrade to the same version again. This can be
@ -53,12 +54,12 @@ recipes.
stackName := app.StackName()
if !internal.Chaos {
if err := recipe.EnsureUpToDate(app.Type); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
r, err := recipe.Get(app.Type)
r, err := recipe.Get(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
@ -88,17 +89,17 @@ recipes.
logrus.Fatal(err)
}
versions, err := recipe.GetRecipeCatalogueVersions(app.Type, catl)
versions, err := recipe.GetRecipeCatalogueVersions(app.Recipe, catl)
if err != nil {
logrus.Fatal(err)
}
if len(versions) == 0 && !internal.Chaos {
logrus.Fatalf("no published releases for %s in the recipe catalogue?", app.Type)
logrus.Fatalf("no published releases for %s in the recipe catalogue?", app.Recipe)
}
var availableUpgrades []string
if deployedVersion == "uknown" {
if deployedVersion == "unknown" {
availableUpgrades = versions
logrus.Warnf("failed to determine version of deployed %s", app.Name)
}
@ -128,7 +129,7 @@ recipes.
var chosenUpgrade string
if len(availableUpgrades) > 0 && !internal.Chaos {
if internal.Force {
if internal.Force || internal.NoInput {
chosenUpgrade = availableUpgrades[len(availableUpgrades)-1]
logrus.Debugf("choosing %s as version to upgrade to", chosenUpgrade)
} else {
@ -145,13 +146,13 @@ recipes.
// if release notes are written after the git tag is published, read them
// before we check out the tag, otherwise they'll appear to be missing. this
// covers the case where we forget to write release notes before publishing
releaseNotes, err := internal.GetReleaseNotes(app.Type, chosenUpgrade)
releaseNotes, err := internal.GetReleaseNotes(app.Recipe, chosenUpgrade)
if err != nil {
return err
}
if !internal.Chaos {
if err := recipe.EnsureVersion(app.Type, chosenUpgrade); err != nil {
if err := recipe.EnsureVersion(app.Recipe, chosenUpgrade); err != nil {
logrus.Fatal(err)
}
}
@ -159,13 +160,13 @@ recipes.
if internal.Chaos {
logrus.Warn("chaos mode engaged")
var err error
chosenUpgrade, err = recipe.ChaosVersion(app.Type)
chosenUpgrade, err = recipe.ChaosVersion(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
}
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Type, "abra.sh")
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
abraShEnv, err := config.ReadAbraShEnvVars(abraShPath)
if err != nil {
logrus.Fatal(err)
@ -174,7 +175,7 @@ recipes.
app.Env[k] = v
}
composeFiles, err := config.GetAppComposeFiles(app.Type, app.Env)
composeFiles, err := config.GetAppComposeFiles(app.Recipe, app.Env)
if err != nil {
logrus.Fatal(err)
}

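With "--force" or "--no-input", rollback and upgrade now skip the interactive prompt and pick a version automatically: rollback takes the first entry of its downgrade list, upgrade takes the last entry of its upgrade list. A small sketch with placeholder version strings:

package main

import "fmt"

func main() {
	// Placeholder version strings; only the indexing mirrors the hunks above.
	availableDowngrades := []string{"1.1.0+2.3.2", "1.0.0+2.3.1"}
	availableUpgrades := []string{"1.2.0+2.4.0", "1.3.0+2.5.0"}

	chosenDowngrade := availableDowngrades[0]                    // rollback
	chosenUpgrade := availableUpgrades[len(availableUpgrades)-1] // upgrade
	fmt.Println(chosenDowngrade, chosenUpgrade)
}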
View File

@ -31,8 +31,9 @@ func getImagePath(image string) (string, error) {
}
var appVersionCommand = cli.Command{
Name: "version",
Aliases: []string{"v"},
Name: "version",
Aliases: []string{"v"},
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
@ -68,7 +69,7 @@ Cloud recipe version.
logrus.Fatalf("%s is not deployed?", app.Name)
}
recipeMeta, err := recipe.GetRecipeMeta(app.Type)
recipeMeta, err := recipe.GetRecipeMeta(app.Recipe)
if err != nil {
logrus.Fatal(err)
}

View File

@ -13,8 +13,9 @@ import (
)
var appVolumeListCommand = cli.Command{
Name: "list",
Aliases: []string{"ls"},
Name: "list",
Aliases: []string{"ls"},
ArgsUsage: "<domain>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
@ -25,18 +26,20 @@ var appVolumeListCommand = cli.Command{
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
volumeList, err := client.GetVolumes(context.Background(), app.Server, app.Name)
filters, err := app.Filters(false, true)
if err != nil {
logrus.Fatal(err)
}
table := formatter.CreateTable([]string{"driver", "volume name"})
volumeList, err := client.GetVolumes(context.Background(), app.Server, filters)
if err != nil {
logrus.Fatal(err)
}
table := formatter.CreateTable([]string{"name", "created", "mounted"})
var volTable [][]string
for _, volume := range volumeList {
volRow := []string{
volume.Driver,
volume.Name,
}
volRow := []string{volume.Name, volume.CreatedAt, volume.Mountpoint}
volTable = append(volTable, volRow)
}
@ -58,15 +61,15 @@ var appVolumeRemoveCommand = cli.Command{
Description: `
This command supports removing volumes associated with an app. The app in
question must be undeployed before you try to remove volumes. See "abra app
undeploy <app>" for more.
undeploy <domain>" for more.
The command is interactive and will show a multiple select input which allows
you to make a selection. Use the "?" key to see more help on navigating this
interface.
Passing "--force" will select all volumes for removal. Be careful.
Passing "--force/-f" will select all volumes for removal. Be careful.
`,
ArgsUsage: "<app>",
ArgsUsage: "<domain>",
Aliases: []string{"rm"},
Flags: []cli.Flag{
internal.DebugFlag,
@ -77,14 +80,19 @@ Passing "--force" will select all volumes for removal. Be careful.
Action: func(c *cli.Context) error {
app := internal.ValidateApp(c)
volumeList, err := client.GetVolumes(context.Background(), app.Server, app.Name)
filters, err := app.Filters(false, true)
if err != nil {
logrus.Fatal(err)
}
volumeList, err := client.GetVolumes(context.Background(), app.Server, filters)
if err != nil {
logrus.Fatal(err)
}
volumeNames := client.GetVolumeNames(volumeList)
var volumesToRemove []string
if !internal.Force {
if !internal.Force && !internal.NoInput {
volumesPrompt := &survey.MultiSelect{
Message: "which volumes do you want to remove?",
Help: "'x' indicates selected, enter / return to confirm, ctrl-c to exit, vim mode is enabled",
@ -95,7 +103,9 @@ Passing "--force" will select all volumes for removal. Be careful.
if err := survey.AskOne(volumesPrompt, &volumesToRemove); err != nil {
logrus.Fatal(err)
}
} else {
}
if internal.Force || internal.NoInput {
volumesToRemove = volumeNames
}
@ -115,7 +125,7 @@ var appVolumeCommand = cli.Command{
Name: "volume",
Aliases: []string{"vl"},
Usage: "Manage app volumes",
ArgsUsage: "<command>",
ArgsUsage: "<domain>",
Subcommands: []cli.Command{
appVolumeListCommand,
appVolumeRemoveCommand,

View File

@ -20,40 +20,42 @@ import (
// CatalogueSkipList is all the repos that are not recipes.
var CatalogueSkipList = map[string]bool{
"abra": true,
"abra-apps": true,
"abra-aur": true,
"abra-bash": true,
"abra-capsul": true,
"abra-gandi": true,
"abra-hetzner": true,
"apps": true,
"aur-abra-git": true,
"auto-apps-json": true,
"auto-mirror": true,
"backup-bot": true,
"backup-bot-two": true,
"beta.coopcloud.tech": true,
"comrade-renovate-bot": true,
"coopcloud.tech": true,
"coturn": true,
"docker-cp-deploy": true,
"docker-dind-bats-kcov": true,
"docs.coopcloud.tech": true,
"drone-abra": true,
"example": true,
"gardening": true,
"go-abra": true,
"organising": true,
"outline-with-patch": true,
"pyabra": true,
"radicle-seed-node": true,
"recipes": true,
"stack-ssh-deploy": true,
"swarm-cronjob": true,
"tagcmp": true,
"traefik-cert-dumper": true,
"tyop": true,
"abra": true,
"abra-apps": true,
"abra-aur": true,
"abra-bash": true,
"abra-capsul": true,
"abra-gandi": true,
"abra-hetzner": true,
"apps": true,
"aur-abra-git": true,
"auto-apps-json": true,
"auto-mirror": true,
"backup-bot": true,
"backup-bot-two": true,
"beta.coopcloud.tech": true,
"comrade-renovate-bot": true,
"coopcloud.tech": true,
"coturn": true,
"docker-cp-deploy": true,
"docker-dind-bats-kcov": true,
"docs.coopcloud.tech": true,
"drone-abra": true,
"example": true,
"gardening": true,
"go-abra": true,
"organising": true,
"outline-with-patch": true,
"pyabra": true,
"radicle-seed-node": true,
"recipes-catalogue-json": true,
"recipes-wishlist": true,
"recipes.coopcloud.tech": true,
"stack-ssh-deploy": true,
"swarm-cronjob": true,
"tagcmp": true,
"traefik-cert-dumper": true,
"tyop": true,
}
var catalogueGenerateCommand = cli.Command{
@ -66,8 +68,6 @@ var catalogueGenerateCommand = cli.Command{
internal.PublishFlag,
internal.DryFlag,
internal.SkipUpdatesFlag,
internal.RegistryUsernameFlag,
internal.RegistryPasswordFlag,
},
Before: internal.SubCommandBefore,
Description: `
@ -94,7 +94,7 @@ keys configured on your account.
Action: func(c *cli.Context) error {
recipeName := c.Args().First()
if recipeName != "" {
internal.ValidateRecipe(c)
internal.ValidateRecipe(c, true)
}
repos, err := recipe.ReadReposMetadata()
@ -132,13 +132,9 @@ keys configured on your account.
continue
}
versions, err := recipe.GetRecipeVersions(
recipeMeta.Name,
internal.RegistryUsername,
internal.RegistryPassword,
)
versions, err := recipe.GetRecipeVersions(recipeMeta.Name)
if err != nil {
logrus.Fatal(err)
logrus.Warn(err)
}
features, category, err := recipe.GetRecipeFeaturesAndCategory(recipeMeta.Name)
@ -215,7 +211,7 @@ keys configured on your account.
logrus.Fatal(err)
}
sshURL := fmt.Sprintf(config.SSH_URL_TEMPLATE, "recipes")
sshURL := fmt.Sprintf(config.SSH_URL_TEMPLATE, config.CATALOGUE_JSON_REPO_NAME)
if err := gitPkg.CreateRemote(repo, "origin-ssh", sshURL, internal.Dry); err != nil {
logrus.Fatal(err)
}
@ -236,7 +232,7 @@ keys configured on your account.
}
if !internal.Dry && internal.Publish {
url := fmt.Sprintf("%s/recipes/commit/%s", config.REPOS_BASE_URL, head.Hash())
url := fmt.Sprintf("%s/%s/commit/%s", config.REPOS_BASE_URL, config.CATALOGUE_JSON_REPO_NAME, head.Hash())
logrus.Infof("new changes published: %s", url)
}

View File

@ -14,6 +14,7 @@ import (
"coopcloud.tech/abra/cli/recipe"
"coopcloud.tech/abra/cli/record"
"coopcloud.tech/abra/cli/server"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/web"
"github.com/sirupsen/logrus"
@ -39,12 +40,10 @@ Supported shells are as follows:
fizsh
zsh
bash
`,
ArgsUsage: "<shell>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Action: func(c *cli.Context) error {
shellType := c.Args().First()
@ -91,7 +90,7 @@ Supported shells are as follows:
sudo mkdir /etc/bash_completion.d/
sudo cp %s /etc/bash_completion.d/abra
echo "source /etc/bash_completion.d/abra" >> ~/.bashrc
# And finally run "abra app ps <hit tab key>" to test things are working, you should see app names listed!
# And finally run "abra app ps <hit tab key>" to test things are working, you should see app domains listed!
`, autocompletionFile))
case "zsh":
fmt.Println(fmt.Sprintf(`
@ -99,7 +98,7 @@ echo "source /etc/bash_completion.d/abra" >> ~/.bashrc
sudo mkdir /etc/zsh/completion.d/
sudo cp %s /etc/zsh/completion.d/abra
echo "PROG=abra\n_CLI_ZSH_AUTOCOMPLETE_HACK=1\nsource /etc/zsh/completion.d/abra" >> ~/.zshrc
# And finally run "abra app ps <hit tab key>" to test things are working, you should see app names listed!
# And finally run "abra app ps <hit tab key>" to test things are working, you should see app domains listed!
`, autocompletionFile))
}
@ -117,9 +116,9 @@ This command allows you to upgrade Abra in-place with the latest stable or
release candidate.
If you would like to install the latest release candidate, please pass the
"--rc" option. Please bear in mind that the latest release candidate may have
some catastrophic bugs contained in it. In any case, thank you very much for
the testing efforts!
"-r/--rc" option. Please bear in mind that the latest release candidate may
have some catastrophic bugs contained in it. In any case, thank you very much
for the testing efforts!
`,
Flags: []cli.Flag{internal.RCFlag},
Action: func(c *cli.Context) error {
@ -162,16 +161,7 @@ func newAbraApp(version, commit string) *cli.App {
UpgradeCommand,
AutoCompleteCommand,
},
Authors: []cli.Author{
// If you're looking at this and you hack on Abra and you're not listed
// here, please do add yourself! This is a community project, let's show
// some love
{Name: "3wordchant"},
{Name: "decentral1se"},
{Name: "kawaiipunk"},
{Name: "knoflook"},
{Name: "roxxers"},
},
BashComplete: autocomplete.SubcommandComplete,
}
app.EnableBashCompletion = true
@ -182,6 +172,7 @@ func newAbraApp(version, commit string) *cli.App {
path.Join(config.SERVERS_DIR),
path.Join(config.RECIPES_DIR),
path.Join(config.VENDOR_DIR),
path.Join(config.BACKUP_DIR),
}
for _, path := range paths {

35
cli/internal/backup.go Normal file
View File

@ -0,0 +1,35 @@
package internal
import (
"strings"
)
// SafeSplit splits up a string into a list of commands safely.
func SafeSplit(s string) []string {
split := strings.Split(s, " ")
var result []string
var inquote string
var block string
for _, i := range split {
if inquote == "" {
if strings.HasPrefix(i, "'") || strings.HasPrefix(i, "\"") {
inquote = string(i[0])
block = strings.TrimPrefix(i, inquote) + " "
} else {
result = append(result, i)
}
} else {
if !strings.HasSuffix(i, inquote) {
block += i + " "
} else {
block += strings.TrimSuffix(i, inquote)
inquote = ""
result = append(result, block)
block = ""
}
}
}
return result
}
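A quick way to see the quote handling in action is a small test in the same package (illustrative only):

package internal

import "testing"

// TestSafeSplit sketches the expected quote-aware behaviour.
func TestSafeSplit(t *testing.T) {
	got := SafeSplit(`sh -c 'pg_dump -U postgres mydb'`)
	want := []string{"sh", "-c", "pg_dump -U postgres mydb"}
	if len(got) != len(want) {
		t.Fatalf("want %v, got %v", want, got)
	}
	for i := range want {
		if got[i] != want[i] {
			t.Fatalf("want %v, got %v", want, got)
		}
	}
}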

View File

@ -13,7 +13,7 @@ var Secrets bool
// SecretsFlag turns on/off automatically generating secrets
var SecretsFlag = &cli.BoolFlag{
Name: "secrets, ss",
Name: "secrets, S",
Usage: "Automatically generate secrets",
Destination: &Secrets,
}
@ -28,14 +28,14 @@ var PassFlag = &cli.BoolFlag{
Destination: &Pass,
}
// Context is temp
var Context string
// PassRemove stores the variable for PassRemoveFlag
var PassRemove bool
// ContextFlag is temp
var ContextFlag = &cli.StringFlag{
Name: "context, c",
Value: "",
Destination: &Context,
// PassRemoveFlag turns on/off removing generated secrets from pass
var PassRemoveFlag = &cli.BoolFlag{
Name: "pass, p",
Usage: "Remove generated secrets from a local pass store",
Destination: &PassRemove,
}
// Force force functionality without asking.
@ -53,7 +53,7 @@ var Chaos bool
// ChaosFlag turns on/off chaos functionality.
var ChaosFlag = &cli.BoolFlag{
Name: "chaos, ch",
Name: "chaos, C",
Usage: "Deploy uncommitted recipes changes. Use with care!",
Destination: &Chaos,
}
@ -79,7 +79,7 @@ var NoInputFlag = &cli.BoolFlag{
var DNSType string
var DNSTypeFlag = &cli.StringFlag{
Name: "type, t",
Name: "record-type, rt",
Value: "",
Usage: "Domain name record type (e.g. A)",
Destination: &DNSType,
@ -88,7 +88,7 @@ var DNSTypeFlag = &cli.StringFlag{
var DNSName string
var DNSNameFlag = &cli.StringFlag{
Name: "name, n",
Name: "record-name, rn",
Value: "",
Usage: "Domain name record name (e.g. mysubdomain)",
Destination: &DNSName,
@ -97,7 +97,7 @@ var DNSNameFlag = &cli.StringFlag{
var DNSValue string
var DNSValueFlag = &cli.StringFlag{
Name: "value, v",
Name: "record-value, rv",
Value: "",
Usage: "Domain name record value (e.g. 192.168.1.1)",
Destination: &DNSValue,
@ -105,7 +105,7 @@ var DNSValueFlag = &cli.StringFlag{
var DNSTTL string
var DNSTTLFlag = &cli.StringFlag{
Name: "ttl, T",
Name: "record-ttl, rl",
Value: "600s",
Usage: "Domain name TTL value (seconds)",
Destination: &DNSTTL,
@ -114,7 +114,7 @@ var DNSTTLFlag = &cli.StringFlag{
var DNSPriority int
var DNSPriorityFlag = &cli.IntFlag{
Name: "priority, P",
Name: "record-priority, rp",
Value: 10,
Usage: "Domain name priority value",
Destination: &DNSPriority,
@ -248,35 +248,35 @@ var RC bool
// RCFlag chooses the latest release candidate for install
var RCFlag = &cli.BoolFlag{
Name: "rc",
Name: "rc, r",
Destination: &RC,
Usage: "Insatll the latest release candidate",
}
var Major bool
var MajorFlag = &cli.BoolFlag{
Name: "major, ma, x",
Name: "major, x",
Usage: "Increase the major part of the version",
Destination: &Major,
}
var Minor bool
var MinorFlag = &cli.BoolFlag{
Name: "minor, mi, y",
Name: "minor, y",
Usage: "Increase the minor part of the version",
Destination: &Minor,
}
var Patch bool
var PatchFlag = &cli.BoolFlag{
Name: "patch, pa, z",
Name: "patch, z",
Usage: "Increase the patch part of the version",
Destination: &Patch,
}
var Dry bool
var DryFlag = &cli.BoolFlag{
Name: "dry-run, dr",
Name: "dry-run, r",
Usage: "Only reports changes that would be made",
Destination: &Dry,
}
@ -290,7 +290,7 @@ var PublishFlag = &cli.BoolFlag{
var Domain string
var DomainFlag = &cli.StringFlag{
Name: "domain, dn",
Name: "domain, D",
Value: "",
Usage: "Choose a domain name",
Destination: &Domain,
@ -304,17 +304,9 @@ var NewAppServerFlag = &cli.StringFlag{
Destination: &NewAppServer,
}
var NewAppName string
var NewAppNameFlag = &cli.StringFlag{
Name: "app-name, a",
Value: "",
Usage: "Choose an app name",
Destination: &NewAppName,
}
var NoDomainChecks bool
var NoDomainChecksFlag = &cli.BoolFlag{
Name: "no-domain-checks, nd",
Name: "no-domain-checks, D",
Usage: "Disable app domain sanity checks",
Destination: &NoDomainChecks,
}
@ -328,7 +320,7 @@ var StdErrOnlyFlag = &cli.BoolFlag{
var DontWaitConverge bool
var DontWaitConvergeFlag = &cli.BoolFlag{
Name: "no-converge-checks, nc",
Name: "no-converge-checks, c",
Usage: "Don't wait for converge logic checks",
Destination: &DontWaitConverge,
}
@ -354,24 +346,6 @@ var SkipUpdatesFlag = &cli.BoolFlag{
Destination: &SkipUpdates,
}
var RegistryUsername string
var RegistryUsernameFlag = &cli.StringFlag{
Name: "username, user",
Value: "",
Usage: "Registry username",
EnvVar: "REGISTRY_USERNAME",
Destination: &RegistryUsername,
}
var RegistryPassword string
var RegistryPasswordFlag = &cli.StringFlag{
Name: "password, pass",
Value: "",
Usage: "Registry password",
EnvVar: "REGISTRY_PASSWORD",
Destination: &RegistryUsername,
}
var AllTags bool
var AllTagsFlag = &cli.BoolFlag{
Name: "all-tags, a",
@ -428,6 +402,24 @@ Good luck!
`
var ServerAddFailMsg = `
Failed to add server %s.
This could be caused by one of two things.
Abra isn't picking up your SSH configuration, or you need to specify it on the
command-line (e.g. you use a non-standard port or username to connect). Run
"server add" with "-d/--debug" to learn more about what Abra is doing under the
hood.
Docker is not installed on your server. You can pass "-p/--provision" to
install Docker and initialise Docker Swarm mode.
See "abra server add -h" for more.
`
// SubCommandBefore wires up pre-action machinery (e.g. --debug handling).
func SubCommandBefore(c *cli.Context) error {
if Debug {

View File

@ -26,12 +26,12 @@ func DeployAction(c *cli.Context) error {
app := ValidateApp(c)
if !Chaos {
if err := recipe.EnsureUpToDate(app.Type); err != nil {
if err := recipe.EnsureUpToDate(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
r, err := recipe.Get(app.Type)
r, err := recipe.Get(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
@ -66,24 +66,24 @@ func DeployAction(c *cli.Context) error {
if err != nil {
logrus.Fatal(err)
}
versions, err := recipe.GetRecipeCatalogueVersions(app.Type, catl)
versions, err := recipe.GetRecipeCatalogueVersions(app.Recipe, catl)
if err != nil {
logrus.Fatal(err)
}
if len(versions) > 0 {
version = versions[len(versions)-1]
logrus.Debugf("choosing %s as version to deploy", version)
if err := recipe.EnsureVersion(app.Type, version); err != nil {
if err := recipe.EnsureVersion(app.Recipe, version); err != nil {
logrus.Fatal(err)
}
} else {
head, err := git.GetRecipeHead(app.Type)
head, err := git.GetRecipeHead(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
version = formatter.SmallSHA(head.String())
logrus.Warn("no versions detected, using latest commit")
if err := recipe.EnsureLatest(app.Type); err != nil {
if err := recipe.EnsureLatest(app.Recipe); err != nil {
logrus.Fatal(err)
}
}
@ -91,13 +91,13 @@ func DeployAction(c *cli.Context) error {
if version == "unknown" && !Chaos {
logrus.Debugf("choosing %s as version to deploy", version)
if err := recipe.EnsureVersion(app.Type, version); err != nil {
if err := recipe.EnsureVersion(app.Recipe, version); err != nil {
logrus.Fatal(err)
}
}
if version != "unknown" && !Chaos {
if err := recipe.EnsureVersion(app.Type, version); err != nil {
if err := recipe.EnsureVersion(app.Recipe, version); err != nil {
logrus.Fatal(err)
}
}
@ -105,13 +105,13 @@ func DeployAction(c *cli.Context) error {
if Chaos {
logrus.Warnf("chaos mode engaged")
var err error
version, err = recipe.ChaosVersion(app.Type)
version, err = recipe.ChaosVersion(app.Recipe)
if err != nil {
logrus.Fatal(err)
}
}
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Type, "abra.sh")
abraShPath := fmt.Sprintf("%s/%s/%s", config.RECIPES_DIR, app.Recipe, "abra.sh")
abraShEnv, err := config.ReadAbraShEnvVars(abraShPath)
if err != nil {
logrus.Fatal(err)
@ -120,7 +120,7 @@ func DeployAction(c *cli.Context) error {
app.Env[k] = v
}
composeFiles, err := config.GetAppComposeFiles(app.Type, app.Env)
composeFiles, err := config.GetAppComposeFiles(app.Recipe, app.Env)
if err != nil {
logrus.Fatal(err)
}
@ -141,11 +141,6 @@ func DeployAction(c *cli.Context) error {
if !NoDomainChecks {
domainName := app.Env["DOMAIN"]
ipv4, err := dns.EnsureIPv4(domainName)
if err != nil || ipv4 == "" {
logrus.Fatalf("could not find an IP address assigned to %s?", domainName)
}
if _, err = dns.EnsureDomainsResolveSameIPv4(domainName, app.Server); err != nil {
logrus.Fatal(err)
}
@ -162,7 +157,7 @@ func DeployAction(c *cli.Context) error {
// DeployOverview shows a deployment overview
func DeployOverview(app config.App, version, message string) error {
tableCol := []string{"server", "compose", "domain", "app name", "version"}
tableCol := []string{"server", "recipe", "config", "domain", "version"}
table := formatter.CreateTable(tableCol)
deployConfig := "compose.yml"
@ -175,7 +170,7 @@ func DeployOverview(app config.App, version, message string) error {
server = "local"
}
table.Append([]string{server, deployConfig, app.Domain, app.Name, version})
table.Append([]string{server, app.Recipe, deployConfig, app.Domain, version})
table.Render()
if NoInput {
@ -200,7 +195,7 @@ func DeployOverview(app config.App, version, message string) error {
// NewVersionOverview shows an upgrade or downgrade overview
func NewVersionOverview(app config.App, currentVersion, newVersion, releaseNotes string) error {
tableCol := []string{"server", "compose", "domain", "app name", "current version", "to be deployed"}
tableCol := []string{"server", "recipe", "config", "domain", "current version", "to be deployed"}
table := formatter.CreateTable(tableCol)
deployConfig := "compose.yml"
@ -213,12 +208,12 @@ func NewVersionOverview(app config.App, currentVersion, newVersion, releaseNotes
server = "local"
}
table.Append([]string{server, deployConfig, app.Domain, app.Name, currentVersion, newVersion})
table.Append([]string{server, app.Recipe, deployConfig, app.Domain, currentVersion, newVersion})
table.Render()
if releaseNotes == "" {
var err error
releaseNotes, err = GetReleaseNotes(app.Type, newVersion)
releaseNotes, err = GetReleaseNotes(app.Recipe, newVersion)
if err != nil {
return err
}

View File

@ -4,6 +4,7 @@ import (
"fmt"
"path"
"coopcloud.tech/abra/pkg/app"
"coopcloud.tech/abra/pkg/config"
"coopcloud.tech/abra/pkg/formatter"
"coopcloud.tech/abra/pkg/recipe"
@ -11,6 +12,7 @@ import (
"coopcloud.tech/abra/pkg/secret"
"coopcloud.tech/abra/pkg/ssh"
"github.com/AlecAivazis/survey/v2"
"github.com/olekukonko/tablewriter"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@ -23,7 +25,7 @@ var RecipeName string
// createSecrets creates all secrets for a new app.
func createSecrets(sanitisedAppName string) (AppSecrets, error) {
appEnvPath := path.Join(config.ABRA_DIR, "servers", NewAppServer, fmt.Sprintf("%s.env", NewAppName))
appEnvPath := path.Join(config.ABRA_DIR, "servers", NewAppServer, fmt.Sprintf("%s.env", Domain))
appEnv, err := config.ReadEnv(appEnvPath)
if err != nil {
return nil, err
@ -38,7 +40,7 @@ func createSecrets(sanitisedAppName string) (AppSecrets, error) {
if Pass {
for secretName := range secrets {
secretValue := secrets[secretName]
if err := secret.PassInsertSecret(secretValue, secretName, sanitisedAppName, NewAppServer); err != nil {
if err := secret.PassInsertSecret(secretValue, secretName, Domain, NewAppServer); err != nil {
return nil, err
}
}
@ -65,6 +67,31 @@ func ensureDomainFlag(recipe recipe.Recipe, server string) error {
return nil
}
// promptForSecrets asks if we should generate secrets for a new app.
func promptForSecrets(appName string) error {
app, err := app.Get(appName)
if err != nil {
return err
}
secretEnvVars := secret.ReadSecretEnvVars(app.Env)
if len(secretEnvVars) == 0 {
logrus.Debugf("%s has no secrets to generate, skipping...", app.Recipe)
return nil
}
if !Secrets && !NoInput {
prompt := &survey.Confirm{
Message: "Generate app secrets?",
}
if err := survey.AskOne(prompt, &Secrets); err != nil {
return err
}
}
return nil
}
// ensureServerFlag checks if the server flag was used. if not, asks the user for it.
func ensureServerFlag() error {
servers, err := config.GetServers()
@ -89,28 +116,9 @@ func ensureServerFlag() error {
return nil
}
// ensureServerFlag checks if the AppName flag was used. if not, asks the user for it.
func ensureAppNameFlag() error {
if NewAppName == "" && !NoInput {
prompt := &survey.Input{
Message: "Specify app name:",
Default: Domain,
}
if err := survey.AskOne(prompt, &NewAppName); err != nil {
return err
}
}
if NewAppName == "" {
return fmt.Errorf("no app name provided")
}
return nil
}
// NewAction is the new app creation logic
func NewAction(c *cli.Context) error {
recipe := ValidateRecipeWithPrompt(c)
recipe := ValidateRecipeWithPrompt(c, false)
if err := recipePkg.EnsureUpToDate(recipe.Name); err != nil {
logrus.Fatal(err)
@ -124,48 +132,45 @@ func NewAction(c *cli.Context) error {
logrus.Fatal(err)
}
if err := ensureAppNameFlag(); err != nil {
sanitisedAppName := config.SanitiseAppName(Domain)
logrus.Debugf("%s sanitised as %s for new app", Domain, sanitisedAppName)
if err := config.TemplateAppEnvSample(recipe.Name, Domain, NewAppServer, Domain); err != nil {
logrus.Fatal(err)
}
sanitisedAppName := config.SanitiseAppName(NewAppName)
if len(sanitisedAppName) > 45 {
logrus.Fatalf("%s cannot be longer than 45 characters", sanitisedAppName)
}
logrus.Debugf("%s sanitised as %s for new app", NewAppName, sanitisedAppName)
if err := config.TemplateAppEnvSample(recipe.Name, NewAppName, NewAppServer, Domain); err != nil {
if err := promptForSecrets(Domain); err != nil {
logrus.Fatal(err)
}
var secrets AppSecrets
var secretTable *tablewriter.Table
if Secrets {
if err := ssh.EnsureHostKey(NewAppServer); err != nil {
logrus.Fatal(err)
}
secrets, err := createSecrets(sanitisedAppName)
var err error
secrets, err = createSecrets(sanitisedAppName)
if err != nil {
logrus.Fatal(err)
}
secretCols := []string{"Name", "Value"}
secretTable := formatter.CreateTable(secretCols)
secretTable = formatter.CreateTable(secretCols)
for secret := range secrets {
secretTable.Append([]string{secret, secrets[secret]})
}
if len(secrets) > 0 {
defer secretTable.Render()
}
}
if NewAppServer == "default" {
NewAppServer = "local"
}
tableCol := []string{"server", "type", "domain", "app name"}
tableCol := []string{"server", "recipe", "domain"}
table := formatter.CreateTable(tableCol)
table.Append([]string{NewAppServer, recipe.Name, Domain, NewAppName})
table.Append([]string{NewAppServer, recipe.Name, Domain})
fmt.Println("")
fmt.Println(fmt.Sprintf("A new %s app has been created! Here is an overview:", recipe.Name))
@ -173,11 +178,19 @@ func NewAction(c *cli.Context) error {
table.Render()
fmt.Println("")
fmt.Println("You can configure this app by running the following:")
fmt.Println(fmt.Sprintf("\n abra app config %s", NewAppName))
fmt.Println(fmt.Sprintf("\n abra app config %s", Domain))
fmt.Println("")
fmt.Println("You can deploy this app by running the following:")
fmt.Println(fmt.Sprintf("\n abra app deploy %s", NewAppName))
fmt.Println(fmt.Sprintf("\n abra app deploy %s", Domain))
fmt.Println("")
if len(secrets) > 0 {
fmt.Println("Here are your generated secrets:")
fmt.Println("")
secretTable.Render()
fmt.Println("")
logrus.Warn("generated secrets are not shown again, please take note of them *now*")
}
return nil
}

View File

@ -11,7 +11,7 @@ import (
)
// PromptBumpType prompts for version bump type
func PromptBumpType(tagString string) error {
func PromptBumpType(tagString, latestRelease string) error {
if (!Major && !Minor && !Patch) && tagString == "" {
fmt.Printf(`
You need to make a decision about what kind of an update this new recipe
@ -20,6 +20,8 @@ migration work or take care of some breaking changes? This can be signaled in
the version you specify on the recipe deploy label and is called a semantic
version.
The latest published version is %s.
Here is a semver cheat sheet (more on https://semver.org):
  major: new features/bug fixes, backwards incompatible (e.g. 1.0.0 -> 2.0.0).
@ -34,7 +36,7 @@ Here is a semver cheat sheet (more on https://semver.org):
should also Just Work and is mostly to do with minor bug fixes
and/or security patches. "nothing to worry about".
`)
`, latestRelease)
var chosenBumpType string
prompt := &survey.Select{

View File

@ -19,7 +19,7 @@ import (
var AppName string
// ValidateRecipe ensures the recipe arg is valid.
func ValidateRecipe(c *cli.Context) recipe.Recipe {
func ValidateRecipe(c *cli.Context, ensureLatest bool) recipe.Recipe {
recipeName := c.Args().First()
if recipeName == "" {
@ -38,6 +38,12 @@ func ValidateRecipe(c *cli.Context) recipe.Recipe {
}
}
if ensureLatest {
if err := recipe.EnsureLatest(recipeName); err != nil {
logrus.Fatal(err)
}
}
logrus.Debugf("validated %s as recipe argument", recipeName)
return chosenRecipe
@ -45,7 +51,7 @@ func ValidateRecipe(c *cli.Context) recipe.Recipe {
// ValidateRecipeWithPrompt ensures a recipe argument is present before
// validating, asking for input if required.
func ValidateRecipeWithPrompt(c *cli.Context) recipe.Recipe {
func ValidateRecipeWithPrompt(c *cli.Context, ensureLatest bool) recipe.Recipe {
recipeName := c.Args().First()
if recipeName == "" && !NoInput {
@ -99,6 +105,12 @@ func ValidateRecipeWithPrompt(c *cli.Context) recipe.Recipe {
logrus.Fatal(err)
}
if ensureLatest {
if err := recipe.EnsureLatest(recipeName); err != nil {
logrus.Fatal(err)
}
}
logrus.Debugf("validated %s as recipe argument", recipeName)
return chosenRecipe
@ -122,7 +134,7 @@ func ValidateApp(c *cli.Context) config.App {
logrus.Fatal(err)
}
if err := recipe.EnsureExists(app.Type); err != nil {
if err := recipe.EnsureExists(app.Recipe); err != nil {
logrus.Fatal(err)
}
@ -136,7 +148,7 @@ func ValidateApp(c *cli.Context) config.App {
}
// ValidateDomain ensures the domain name arg is valid.
func ValidateDomain(c *cli.Context) (string, error) {
func ValidateDomain(c *cli.Context) string {
domainName := c.Args().First()
if domainName == "" && !NoInput {
@ -145,7 +157,7 @@ func ValidateDomain(c *cli.Context) (string, error) {
Default: "example.com",
}
if err := survey.AskOne(prompt, &domainName); err != nil {
return domainName, err
logrus.Fatal(err)
}
}
@ -155,7 +167,7 @@ func ValidateDomain(c *cli.Context) (string, error) {
logrus.Debugf("validated %s as domain argument", domainName)
return domainName, nil
return domainName
}
// ValidateSubCmdFlags ensures flag order conforms to correct order
@ -173,12 +185,12 @@ func ValidateSubCmdFlags(c *cli.Context) bool {
}
// ValidateServer ensures the server name arg is valid.
func ValidateServer(c *cli.Context) (string, error) {
func ValidateServer(c *cli.Context) string {
serverName := c.Args().First()
serverNames, err := config.ReadServerNames()
if err != nil {
return serverName, err
logrus.Fatal(err)
}
if serverName == "" && !NoInput {
@ -187,17 +199,28 @@ func ValidateServer(c *cli.Context) (string, error) {
Options: serverNames,
}
if err := survey.AskOne(prompt, &serverName); err != nil {
return serverName, err
logrus.Fatal(err)
}
}
matched := false
for _, name := range serverNames {
if name == serverName {
matched = true
}
}
if !matched {
ShowSubcommandHelpAndError(c, errors.New("server doesn't exist?"))
}
if serverName == "" {
ShowSubcommandHelpAndError(c, errors.New("no server provided"))
}
logrus.Debugf("validated %s as server argument", serverName)
return serverName, nil
return serverName
}
// EnsureDNSProvider ensures a DNS provider is chosen.

View File

@ -19,13 +19,12 @@ var recipeLintCommand = cli.Command{
ArgsUsage: "<recipe>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.OnlyErrorFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipe(c)
recipe := internal.ValidateRecipe(c, true)
if err := recipePkg.EnsureUpToDate(recipe.Name); err != nil {
logrus.Fatal(err)

View File

@ -27,7 +27,6 @@ var recipeListCommand = cli.Command{
Aliases: []string{"ls"},
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
patternFlag,
},
Before: internal.SubCommandBefore,

View File

@ -59,7 +59,7 @@ your SSH keys configured on your account.
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipeWithPrompt(c)
recipe := internal.ValidateRecipeWithPrompt(c, false)
imagesTmp, err := getImageVersions(recipe)
if err != nil {
@ -322,12 +322,6 @@ func createReleaseFromPreviousTag(tagString, mainAppVersion string, recipe recip
}
var lastGitTag tagcmp.Tag
if tagString == "" {
if err := internal.PromptBumpType(tagString); err != nil {
return err
}
}
for _, tag := range tags {
parsed, err := tagcmp.Parse(tag)
if err != nil {
@ -368,6 +362,12 @@ func createReleaseFromPreviousTag(tagString, mainAppVersion string, recipe recip
newTag.Major = strconv.Itoa(now + 1)
}
if tagString == "" {
if err := internal.PromptBumpType(tagString, lastGitTag.String()); err != nil {
return err
}
}
if internal.Major || internal.Minor || internal.Patch {
newTag.Metadata = mainAppVersion
tagString = newTag.String()
@ -393,15 +393,15 @@ func createReleaseFromPreviousTag(tagString, mainAppVersion string, recipe recip
}
if err := commitRelease(recipe, tagString); err != nil {
logrus.Fatal(err)
logrus.Fatalf("failed to commit changes: %s", err.Error())
}
if err := tagRelease(tagString, repo); err != nil {
logrus.Fatal(err)
logrus.Fatalf("failed to tag release: %s", err.Error())
}
if err := pushRelease(recipe, tagString); err != nil {
logrus.Fatal(err)
logrus.Fatalf("failed to publish new release: %s", err.Error())
}
return nil

View File

@ -8,6 +8,7 @@ import (
"coopcloud.tech/abra/cli/internal"
"coopcloud.tech/abra/pkg/autocomplete"
"coopcloud.tech/abra/pkg/config"
recipePkg "coopcloud.tech/abra/pkg/recipe"
"coopcloud.tech/tagcmp"
"github.com/AlecAivazis/survey/v2"
"github.com/go-git/go-git/v5"
@ -41,7 +42,11 @@ auto-generate it for you. The <recipe> configuration will be updated on the
local file system.
`,
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipeWithPrompt(c)
recipe := internal.ValidateRecipeWithPrompt(c, false)
if err := recipePkg.EnsureUpToDate(recipe.Name); err != nil {
logrus.Fatal(err)
}
mainApp, err := internal.GetMainAppImage(recipe)
if err != nil {
@ -95,7 +100,8 @@ likely to change.
}
if nextTag == "" && (!internal.Major && !internal.Minor && !internal.Patch) {
if err := internal.PromptBumpType(""); err != nil {
latestRelease := tags[len(tags)-1]
if err := internal.PromptBumpType("", latestRelease); err != nil {
logrus.Fatal(err)
}
}

View File

@ -46,7 +46,6 @@ interface.
You may invoke this command in "wizard" mode and be prompted for input:
abra recipe upgrade
`,
BashComplete: autocomplete.RecipeNameComplete,
ArgsUsage: "<recipe>",
@ -60,7 +59,7 @@ You may invoke this command in "wizard" mode and be prompted for input:
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipeWithPrompt(c)
recipe := internal.ValidateRecipeWithPrompt(c, true)
bumpType := btoi(internal.Major)*4 + btoi(internal.Minor)*2 + btoi(internal.Patch)
if bumpType != 0 {
@ -113,13 +112,13 @@ You may invoke this command in "wizard" mode and be prompted for input:
logrus.Fatal(err)
}
image := reference.Path(img)
regVersions, err := client.GetRegistryTags(image)
regVersions, err := client.GetRegistryTags(img)
if err != nil {
logrus.Fatal(err)
}
logrus.Debugf("retrieved %s from remote registry for %s", regVersions, image)
image := reference.Path(img)
logrus.Debugf("retrieved %s from remote registry for %s", regVersions, image)
image = formatter.StripTagMeta(image)
switch img.(type) {
@ -142,7 +141,7 @@ You may invoke this command in "wizard" mode and be prompted for input:
var compatible []tagcmp.Tag
for _, regVersion := range regVersions {
other, err := tagcmp.Parse(regVersion.Name)
other, err := tagcmp.Parse(regVersion)
if err != nil {
continue // skip tags that cannot be parsed
}
@ -232,7 +231,7 @@ You may invoke this command in "wizard" mode and be prompted for input:
msg = fmt.Sprintf("upgrade to which tag? (service: %s, tag: %s)", service.Name, tag)
compatibleStrings = []string{"skip"}
for _, regVersion := range regVersions {
compatibleStrings = append(compatibleStrings, regVersion.Name)
compatibleStrings = append(compatibleStrings, regVersion)
}
}

View File

@ -16,12 +16,11 @@ var recipeVersionCommand = cli.Command{
ArgsUsage: "<recipe>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
BashComplete: autocomplete.RecipeNameComplete,
Action: func(c *cli.Context) error {
recipe := internal.ValidateRecipe(c)
recipe := internal.ValidateRecipe(c, false)
catalogue, err := recipePkg.ReadRecipeCatalogue()
if err != nil {

View File

@ -21,7 +21,6 @@ var RecordListCommand = cli.Command{
ArgsUsage: "<zone>",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.DNSProviderFlag,
},
Before: internal.SubCommandBefore,

View File

@ -45,7 +45,6 @@ Example:
You may also invoke this command in "wizard" mode and be prompted for input:
abra record new
`,
Action: func(c *cli.Context) error {
zone, err := internal.EnsureZoneArgument(c)

View File

@ -28,7 +28,6 @@ library documentation for more. It supports many existing providers and allows
to implement new provider support easily.
https://pkg.go.dev/github.com/libdns/libdns
`,
Subcommands: []cli.Command{
RecordListCommand,

View File

@ -41,7 +41,6 @@ such purposes. Docker stable is now installed by default by this script. The
source for this script can be seen here:
https://github.com/docker/docker-install
`
)
@ -61,7 +60,7 @@ var provisionFlag = &cli.BoolFlag{
var sshAuth string
var sshAuthFlag = &cli.StringFlag{
Name: "ssh-auth, sh",
Name: "ssh-auth, s",
Value: "identity-file",
Usage: "Select SSH authentication method (identity-file, password)",
Destination: &sshAuth,
@ -69,7 +68,7 @@ var sshAuthFlag = &cli.StringFlag{
var askSudoPass bool
var askSudoPassFlag = &cli.BoolFlag{
Name: "ask-sudo-pass, as",
Name: "ask-sudo-pass, a",
Usage: "Ask for sudo password",
Destination: &askSudoPass,
}
@ -372,39 +371,27 @@ var serverAddCommand = cli.Command{
Usage: "Add a server to your configuration",
Description: `
This command adds a new server to your configuration so that it can be managed
by Abra. This can be useful when you already have a server provisioned and want
to start running Abra commands against it.
by Abra. This command can also provision your server ("--provision/-p") with a
Docker installation so that it is capable of hosting Co-op Cloud apps.
This command can also provision your server ("--provision/-p") so that it is
capable of hosting Co-op Cloud apps. Abra will default to expecting that you
have a running ssh-agent and are using SSH keys to connect to your new server.
Abra will also read your SSH config (matching "Host" as <domain>). SSH
connection details precedence follows as such: command-line > SSH config >
guessed defaults.
Abra will default to expecting that you have a running ssh-agent and are using
SSH keys to connect to your new server. Abra will also read your SSH config
(matching "Host" as <domain>). SSH connection details precedence follows as
such: command-line > SSH config > guessed defaults.
If you have no SSH key configured for this host and are instead using password
authentication, you may pass "--ssh-auth password" to have Abra ask you for the
password. "--ask-sudo-pass" may be passed if you run your provisioning commands
via sudo privilege escalation.
If "--local" is passed, then Abra assumes that the current local server is
intended as the target server. This is useful when you want to have your entire
Co-op Cloud config located on the server itself, and not on your local
developer machine.
The <domain> argument must be a publicly accessible domain name which points to
your server. You should have working SSH access to this server already, Abra will
assume port 22 and will use your current system username to make an initial
connection. You can use the <user> and <port> arguments to adjust this.
Example:
abra server add --local
Otherwise, you may specify a remote server. The <domain> argument must be a
publicly accessible domain name which points to your server. You should have SSH
access to this server, Abra will assume port 22 and will use your current
system username to make an initial connection. You can use the <user> and
<port> arguments to adjust this.
Example:
abra server add --provision varia.zone glodemodem 12345
abra server add varia.zone glodemodem 12345 -p
Abra will construct the following SSH connection and Docker context:
@ -412,9 +399,10 @@ Abra will construct the following SSH connection and Docker context:
All communication between Abra and the server will use this SSH connection.
In this example, Abra will install Docker and initialise swarm mode.
You may omit flags to avoid performing this provisioning logic.
If "--local" is passed, then Abra assumes that the current local server is
intended as the target server. This is useful when you want to have your entire
Co-op Cloud config located on the server itself, and not on your local
developer machine.
`,
Flags: []cli.Flag{
internal.DebugFlag,
@ -437,6 +425,8 @@ You may omit flags to avoid performing this provisioning logic.
internal.ShowSubcommandHelpAndError(c, err)
}
domainName := internal.ValidateDomain(c)
if local {
if err := newLocalServer(c, "default"); err != nil {
logrus.Fatal(err)
@ -444,11 +434,6 @@ You may omit flags to avoid performing this provisioning logic.
return nil
}
domainName, err := internal.ValidateDomain(c)
if err != nil {
logrus.Fatal(err)
}
username := c.Args().Get(1)
if username == "" {
systemUser, err := user.Current()
@ -473,14 +458,17 @@ You may omit flags to avoid performing this provisioning logic.
cl, err := newClient(c, domainName)
if err != nil {
logrus.Fatal(err)
cleanUp(domainName)
logrus.Debugf("failed to construct client for %s, saw %s", domainName, err.Error())
logrus.Fatalf(fmt.Sprintf(internal.ServerAddFailMsg, domainName))
}
if provision {
logrus.Debugf("attempting to construct SSH client for %s", domainName)
sshCl, err := ssh.New(domainName, sshAuth, username, port)
if err != nil {
logrus.Fatal(err)
cleanUp(domainName)
logrus.Fatalf(fmt.Sprintf(internal.ServerAddFailMsg, domainName))
}
defer sshCl.Close()
logrus.Debugf("successfully created SSH client for %s", domainName)
@ -495,7 +483,7 @@ You may omit flags to avoid performing this provisioning logic.
if _, err := cl.Info(context.Background()); err != nil {
cleanUp(domainName)
logrus.Fatalf("couldn't make a remote docker connection to %s? use --provision/-p to attempt to install", domainName)
logrus.Fatalf(fmt.Sprintf(internal.ServerAddFailMsg, domainName))
}
return nil

View File

@ -18,7 +18,6 @@ var serverListCommand = cli.Command{
Usage: "List managed servers",
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {

View File

@ -223,10 +223,7 @@ Where "$provider_TOKEN" is the expected env var format.
Flags: []cli.Flag{
internal.DebugFlag,
internal.NoInputFlag,
internal.ServerProviderFlag,
internal.DebugFlag,
internal.NoInputFlag,
// Capsul
internal.CapsulInstanceURLFlag,

View File

@ -126,21 +126,24 @@ like tears in rain.
},
Before: internal.SubCommandBefore,
Action: func(c *cli.Context) error {
serverName := c.Args().Get(1)
if serverName != "" {
var err error
serverName, err = internal.ValidateServer(c)
if err != nil {
logrus.Fatal(err)
}
}
serverName := internal.ValidateServer(c)
warnMsg := `Did not pass -s/--server for actual server deletion, prompting!
Abra doesn't currently know if it helped you create this server with one of the
3rd party integrations (e.g. Capsul). You have a choice here to actually,
really and finally destroy this server using those integrations. If you want to
do this, choose Yes.
If you just want to remove the server config files & context, choose No.
`
if !rmServer {
logrus.Warn("did not pass -s/--server for actual server deletion, prompting")
logrus.Warn(fmt.Sprintf(warnMsg))
response := false
prompt := &survey.Confirm{
Message: "prompt to actual server deletion?",
Message: "delete actual live server?",
}
if err := survey.AskOne(prompt, &response); err != nil {
logrus.Fatal(err)
@ -164,21 +167,18 @@ like tears in rain.
logrus.Fatal(err)
}
}
}
if serverName != "" {
if err := client.DeleteContext(serverName); err != nil {
logrus.Fatal(err)
}
if err := os.RemoveAll(filepath.Join(config.SERVERS_DIR, serverName)); err != nil {
logrus.Fatal(err)
}
logrus.Infof("server at %s has been lost in time, like tears in rain", serverName)
if err := client.DeleteContext(serverName); err != nil {
logrus.Fatal(err)
}
if err := os.RemoveAll(filepath.Join(config.SERVERS_DIR, serverName)); err != nil {
logrus.Fatal(err)
}
logrus.Infof("server at %s has been lost in time, like tears in rain", serverName)
return nil
},
}

37
go.mod
View File

@ -4,20 +4,20 @@ go 1.16
require (
coopcloud.tech/tagcmp v0.0.0-20211103052201-885b22f77d52
github.com/AlecAivazis/survey/v2 v2.3.2
github.com/AlecAivazis/survey/v2 v2.3.4
github.com/Autonomic-Cooperative/godotenv v1.3.1-0.20210731094149-b031ea1211e7
github.com/Gurpartap/logrus-stack v0.0.0-20170710170904-89c00d8a28f4
github.com/docker/cli v20.10.12+incompatible
github.com/docker/distribution v2.7.1+incompatible
github.com/docker/docker v20.10.12+incompatible
github.com/docker/cli v20.10.14+incompatible
github.com/docker/distribution v2.8.1+incompatible
github.com/docker/docker v20.10.14+incompatible
github.com/docker/go-units v0.4.0
github.com/go-git/go-git/v5 v5.4.2
github.com/hetznercloud/hcloud-go v1.33.1
github.com/moby/sys/signal v0.6.0
github.com/moby/sys/signal v0.7.0
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6
github.com/olekukonko/tablewriter v0.0.5
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.8.5
github.com/schollz/progressbar/v3 v3.8.6
github.com/schultz-is/passgen v1.0.1
github.com/sirupsen/logrus v1.8.1
gotest.tools/v3 v3.1.0
@ -25,25 +25,30 @@ require (
require (
coopcloud.tech/libcapsul v0.0.0-20211022074848-c35e78fe3f3e
github.com/Microsoft/hcsshim v0.8.21 // indirect
github.com/buger/goterm v1.0.3
github.com/containerd/containerd v1.5.5 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20211112122917-428f8eabeeb3 // indirect
github.com/buger/goterm v1.0.4
github.com/containerd/containerd v1.5.9 // indirect
github.com/containers/image v3.0.2+incompatible
github.com/containers/storage v1.38.2 // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/facebookgo/stack v0.0.0-20160209184415-751773369052 // indirect
github.com/fvbommel/sortorder v1.0.2 // indirect
github.com/gliderlabs/ssh v0.3.3
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/go-retryablehttp v0.7.0
github.com/kevinburke/ssh_config v1.1.0
github.com/hashicorp/go-retryablehttp v0.7.1
github.com/kevinburke/ssh_config v1.2.0
github.com/klauspost/pgzip v1.2.5
github.com/libdns/gandi v1.0.2
github.com/libdns/libdns v0.2.1
github.com/moby/sys/mount v0.2.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/opencontainers/runc v1.0.2 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/spf13/cobra v1.3.0 // indirect
github.com/theupdateframework/notary v0.7.0 // indirect
github.com/urfave/cli v1.22.5
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e
github.com/xanzy/ssh-agent v0.3.1 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b // indirect
golang.org/x/crypto v0.0.0-20220131195533-30dcbda58838
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27
)

567
go.sum

File diff suppressed because it is too large

View File

@ -40,3 +40,24 @@ func RecipeNameComplete(c *cli.Context) {
fmt.Println(name)
}
}
// SubcommandComplete completes subcommands.
func SubcommandComplete(c *cli.Context) {
if c.NArg() > 0 {
return
}
subcmds := []string{
"app",
"autocomplete",
"catalogue",
"recipe",
"record",
"server",
"upgrade",
}
for _, cmd := range subcmds {
fmt.Println(cmd)
}
}

View File

@ -27,7 +27,11 @@ func New(contextName string) (*client.Client, error) {
return nil, err
}
helper := commandconnPkg.NewConnectionHelper(ctxEndpoint)
helper, err := commandconnPkg.NewConnectionHelper(ctxEndpoint)
if err != nil {
return nil, err
}
httpClient := &http.Client{
// No tls, no proxy
Transport: &http.Transport{

View File

@ -1,193 +1,57 @@
package client
import (
"encoding/base64"
"encoding/json"
"context"
"fmt"
"io/ioutil"
"net/http"
"strings"
"coopcloud.tech/abra/pkg/web"
"github.com/containers/image/docker"
"github.com/containers/image/types"
"github.com/docker/distribution/reference"
"github.com/docker/docker/client"
"github.com/hashicorp/go-retryablehttp"
"github.com/sirupsen/logrus"
)
type RawTag struct {
Layer string
Name string
}
// GetRegistryTags retrieves all tags of an image from a container registry.
func GetRegistryTags(img reference.Named) ([]string, error) {
var tags []string
type RawTags []RawTag
ref, err := docker.ParseReference(fmt.Sprintf("//%s", img))
if err != nil {
return tags, fmt.Errorf("failed to parse image %s, saw: %s", img, err.Error())
}
var registryURL = "https://registry.hub.docker.com/v1/repositories/%s/tags"
func GetRegistryTags(image string) (RawTags, error) {
var tags RawTags
tagsUrl := fmt.Sprintf(registryURL, image)
if err := web.ReadJSON(tagsUrl, &tags); err != nil {
ctx := context.Background()
tags, err = docker.GetRepositoryTags(ctx, &types.SystemContext{}, ref)
if err != nil {
return tags, err
}
return tags, nil
}
func basicAuth(username, password string) string {
auth := username + ":" + password
return base64.StdEncoding.EncodeToString([]byte(auth))
}
// GetTagDigest retrieves an image digest from a container registry.
func GetTagDigest(cl *client.Client, image reference.Named) (string, error) {
target := fmt.Sprintf("//%s", reference.Path(image))
// getRegv2Token retrieves a registry v2 authentication token.
func getRegv2Token(cl *client.Client, image reference.Named, registryUsername, registryPassword string) (string, error) {
img := reference.Path(image)
tokenURL := "https://auth.docker.io/token"
values := fmt.Sprintf("service=registry.docker.io&scope=repository:%s:pull", img)
fullURL := fmt.Sprintf("%s?%s", tokenURL, values)
req, err := retryablehttp.NewRequest("GET", fullURL, nil)
ref, err := docker.ParseReference(target)
if err != nil {
return "", err
return "", fmt.Errorf("failed to parse image %s, saw: %s", image, err.Error())
}
if registryUsername != "" && registryPassword != "" {
logrus.Debugf("using registry log in credentials for token request")
auth := basicAuth(registryUsername, registryPassword)
req.Header.Add("Authorization", fmt.Sprintf("Basic %s", auth))
}
client := web.NewHTTPRetryClient()
res, err := client.Do(req)
ctx := context.Background()
img, err := ref.NewImage(ctx, nil)
if err != nil {
return "", err
logrus.Debugf("failed to query remote registry for %s, saw: %s", image, err.Error())
return "", fmt.Errorf("unable to read digest for %s", image)
}
defer res.Body.Close()
defer img.Close()
if res.StatusCode != http.StatusOK {
_, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", nil
}
tokenRes := struct {
AccessToken string `json:"access_token"`
Expiry int `json:"expires_in"`
Issued string `json:"issued_at"`
Token string `json:"token"`
}{}
if err := json.Unmarshal(body, &tokenRes); err != nil {
return "", err
}
return tokenRes.Token, nil
}
// GetTagDigest retrieves an image digest from a v2 registry
func GetTagDigest(cl *client.Client, image reference.Named, registryUsername, registryPassword string) (string, error) {
img := reference.Path(image)
tag := image.(reference.NamedTagged).Tag()
manifestURL := fmt.Sprintf("https://index.docker.io/v2/%s/manifests/%s", img, tag)
req, err := retryablehttp.NewRequest("GET", manifestURL, nil)
if err != nil {
return "", err
}
token, err := getRegv2Token(cl, image, registryUsername, registryPassword)
if err != nil {
return "", err
}
if token == "" {
return "", fmt.Errorf("unable to retrieve registry token?")
}
req.Header = http.Header{
"Accept": []string{
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
},
"Authorization": []string{fmt.Sprintf("Bearer %s", token)},
}
client := web.NewHTTPRetryClient()
res, err := client.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
_, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
registryResT1 := struct {
SchemaVersion int
MediaType string
Manifests []struct {
MediaType string
Size int
Digest string
Platform struct {
Architecture string
Os string
}
}
}{}
registryResT2 := struct {
SchemaVersion int
MediaType string
Config struct {
MediaType string
Size int
Digest string
}
Layers []struct {
MediaType string
Size int
Digest string
}
}{}
if err := json.Unmarshal(body, &registryResT1); err != nil {
return "", err
}
var digest string
for _, manifest := range registryResT1.Manifests {
if string(manifest.Platform.Architecture) == "amd64" {
digest = strings.Split(manifest.Digest, ":")[1][:7]
}
}
digest := img.ConfigInfo().Digest.String()
if digest == "" {
if err := json.Unmarshal(body, &registryResT2); err != nil {
return "", err
}
digest = strings.Split(registryResT2.Config.Digest, ":")[1][:7]
return digest, fmt.Errorf("unable to read digest for %s", image)
}
if digest == "" {
return "", fmt.Errorf("Unable to retrieve amd64 digest for %s", image)
}
return digest, nil
return strings.Split(digest, ":")[1][:7], nil
}
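
As a point of reference, here is a minimal sketch (not part of this diff) of how the rewritten registry helpers could be called from another package. The image name, the "default" context and the `coopcloud.tech/abra/pkg/client` import path are assumptions for illustration.

```
package main

import (
	"fmt"

	"coopcloud.tech/abra/pkg/client"
	"github.com/docker/distribution/reference"
	"github.com/sirupsen/logrus"
)

func main() {
	// Normalise a short image name into a fully qualified reference.
	img, err := reference.ParseNormalizedNamed("nginx:1.21.6")
	if err != nil {
		logrus.Fatal(err)
	}

	// List every tag published for this image on its registry.
	tags, err := client.GetRegistryTags(img)
	if err != nil {
		logrus.Fatal(err)
	}
	fmt.Println(tags)

	// Resolve a short config digest for the image via containers/image.
	cl, err := client.New("default")
	if err != nil {
		logrus.Fatal(err)
	}
	digest, err := client.GetTagDigest(cl, img)
	if err != nil {
		logrus.Fatal(err)
	}
	fmt.Println(digest)
}
```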

View File

@ -5,23 +5,18 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/sirupsen/logrus"
)
func GetVolumes(ctx context.Context, server string, appName string) ([]*types.Volume, error) {
func GetVolumes(ctx context.Context, server string, fs filters.Args) ([]*types.Volume, error) {
cl, err := New(server)
if err != nil {
return nil, err
}
fs := filters.NewArgs()
fs.Add("name", appName)
volumeListOKBody, err := cl.VolumeList(ctx, fs)
volumeList := volumeListOKBody.Volumes
if err != nil {
logrus.Fatal(err)
return volumeList, err
}
return volumeList, nil
@ -29,9 +24,11 @@ func GetVolumes(ctx context.Context, server string, appName string) ([]*types.Vo
func GetVolumeNames(volumes []*types.Volume) []string {
var volumeNames []string
for _, vol := range volumes {
volumeNames = append(volumeNames, vol.Name)
}
return volumeNames
}
@ -40,12 +37,13 @@ func RemoveVolumes(ctx context.Context, server string, volumeNames []string, for
if err != nil {
return err
}
for _, volName := range volumeNames {
err := cl.VolumeRemove(ctx, volName, force)
if err != nil {
return err
}
}
return nil
return nil
}

View File

@ -13,6 +13,7 @@ import (
loader "coopcloud.tech/abra/pkg/upstream/stack"
stack "coopcloud.tech/abra/pkg/upstream/stack"
composetypes "github.com/docker/cli/cli/compose/types"
"github.com/docker/docker/api/types/filters"
"github.com/sirupsen/logrus"
)
@ -36,7 +37,7 @@ type AppFiles map[AppName]AppFile
// App represents an app with its env file read into memory
type App struct {
Name AppName
Type string
Recipe string
Domain string
Env AppEnv
Server string
@ -52,12 +53,59 @@ func (a App) StackName() string {
}
stackName := SanitiseAppName(a.Name)
if len(stackName) > 45 {
logrus.Debugf("trimming %s to %s to avoid runtime limits", stackName, stackName[:45])
stackName = stackName[:45]
}
a.Env["STACK_NAME"] = stackName
return stackName
}
// SORTING TYPES
// Filters retrieves exact app filters for querying the container runtime. Due
// to upstream issues, filtering works differently depending on what you're
// querying. So, for example, secrets don't work with regex! The caller needs
// to implement their own validation that the right secrets are matched. In
// order to handle these cases, we provide the `appendServiceNames` /
// `exactMatch` modifiers.
func (a App) Filters(appendServiceNames, exactMatch bool) (filters.Args, error) {
filters := filters.NewArgs()
composeFiles, err := GetAppComposeFiles(a.Recipe, a.Env)
if err != nil {
return filters, err
}
opts := stack.Deploy{Composefiles: composeFiles}
compose, err := GetAppComposeConfig(a.Recipe, opts, a.Env)
if err != nil {
return filters, err
}
for _, service := range compose.Services {
var filter string
if appendServiceNames {
if exactMatch {
filter = fmt.Sprintf("^%s_%s", a.StackName(), service.Name)
} else {
filter = fmt.Sprintf("%s_%s", a.StackName(), service.Name)
}
} else {
if exactMatch {
filter = fmt.Sprintf("^%s", a.StackName())
} else {
filter = fmt.Sprintf("%s", a.StackName())
}
}
filters.Add("name", filter)
}
return filters, nil
}
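
To make the intent of these modifiers concrete, here is a small sketch (not part of this diff) of calling the new Filters helper; the package name, the helper itself and the assumption that the recipe is already checked out locally are all illustrative.

```
package example

import (
	"context"

	"coopcloud.tech/abra/pkg/client"
	"coopcloud.tech/abra/pkg/config"
	"github.com/docker/docker/api/types"
)

// listAppContainers is a hypothetical helper: it builds per-service name
// filters for an app (service names appended, fuzzy rather than exact
// matching) and lists the matching containers on the app's server.
func listAppContainers(app config.App) ([]types.Container, error) {
	appFilters, err := app.Filters(true, false)
	if err != nil {
		return nil, err
	}

	cl, err := client.New(app.Server)
	if err != nil {
		return nil, err
	}

	opts := types.ContainerListOptions{Filters: appFilters}
	return cl.ContainerList(context.Background(), opts)
}
```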
// ByServer sort a slice of Apps
type ByServer []App
@ -68,25 +116,25 @@ func (a ByServer) Less(i, j int) bool {
return strings.ToLower(a[i].Server) < strings.ToLower(a[j].Server)
}
// ByServerAndType sort a slice of Apps
type ByServerAndType []App
// ByServerAndRecipe sort a slice of Apps
type ByServerAndRecipe []App
func (a ByServerAndType) Len() int { return len(a) }
func (a ByServerAndType) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByServerAndType) Less(i, j int) bool {
func (a ByServerAndRecipe) Len() int { return len(a) }
func (a ByServerAndRecipe) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByServerAndRecipe) Less(i, j int) bool {
if a[i].Server == a[j].Server {
return strings.ToLower(a[i].Type) < strings.ToLower(a[j].Type)
return strings.ToLower(a[i].Recipe) < strings.ToLower(a[j].Recipe)
}
return strings.ToLower(a[i].Server) < strings.ToLower(a[j].Server)
}
// ByType sort a slice of Apps
type ByType []App
// ByRecipe sort a slice of Apps
type ByRecipe []App
func (a ByType) Len() int { return len(a) }
func (a ByType) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByType) Less(i, j int) bool {
return strings.ToLower(a[i].Type) < strings.ToLower(a[j].Type)
func (a ByRecipe) Len() int { return len(a) }
func (a ByRecipe) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByRecipe) Less(i, j int) bool {
return strings.ToLower(a[i].Recipe) < strings.ToLower(a[j].Recipe)
}
// ByName sort a slice of Apps
@ -118,15 +166,18 @@ func readAppEnvFile(appFile AppFile, name AppName) (App, error) {
func newApp(env AppEnv, name string, appFile AppFile) (App, error) {
domain := env["DOMAIN"]
appType, exists := env["TYPE"]
recipe, exists := env["RECIPE"]
if !exists {
return App{}, fmt.Errorf("%s is missing the TYPE env var", name)
recipe, exists = env["TYPE"]
if !exists {
return App{}, fmt.Errorf("%s is missing the RECIPE env var", name)
}
}
return App{
Name: name,
Domain: domain,
Type: appType,
Recipe: recipe,
Env: env,
Server: appFile.Server,
Path: appFile.Path,
@ -213,13 +264,13 @@ func GetAppServiceNames(appName string) ([]string, error) {
return serviceNames, err
}
composeFiles, err := GetAppComposeFiles(app.Type, app.Env)
composeFiles, err := GetAppComposeFiles(app.Recipe, app.Env)
if err != nil {
return serviceNames, err
}
opts := stack.Deploy{Composefiles: composeFiles}
compose, err := GetAppComposeConfig(app.Type, opts, app.Env)
compose, err := GetAppComposeConfig(app.Recipe, opts, app.Env)
if err != nil {
return serviceNames, err
}
@ -281,7 +332,13 @@ func TemplateAppEnvSample(recipeName, appName, server, domain string) error {
return err
}
if err := tpl.Execute(file, struct{ Name string }{recipeName}); err != nil {
type templateVars struct {
Name string
Domain string
}
tvars := templateVars{Name: recipeName, Domain: domain}
if err := tpl.Execute(file, tvars); err != nil {
return err
}

View File

@ -16,10 +16,12 @@ import (
var ABRA_DIR = os.ExpandEnv("$HOME/.abra")
var SERVERS_DIR = path.Join(ABRA_DIR, "servers")
var RECIPES_DIR = path.Join(ABRA_DIR, "apps")
var RECIPES_DIR = path.Join(ABRA_DIR, "recipes")
var VENDOR_DIR = path.Join(ABRA_DIR, "vendor")
var BACKUP_DIR = path.Join(ABRA_DIR, "backups")
var RECIPES_JSON = path.Join(ABRA_DIR, "catalogue", "recipes.json")
var REPOS_BASE_URL = "https://git.coopcloud.tech/coop-cloud"
var CATALOGUE_JSON_REPO_NAME = "recipes-catalogue-json"
var SSH_URL_TEMPLATE = "ssh://git@git.coopcloud.tech:2222/coop-cloud/%s.git"
// GetServers retrieves all servers.

View File

@ -20,12 +20,12 @@ var serverName = "evil.corp"
var expectedAppEnv = AppEnv{
"DOMAIN": "ecloud.evil.corp",
"TYPE": "ecloud",
"RECIPE": "ecloud",
}
var expectedApp = App{
Name: appName,
Type: expectedAppEnv["TYPE"],
Recipe: expectedAppEnv["RECIPE"],
Domain: expectedAppEnv["DOMAIN"],
Env: expectedAppEnv,
Path: expectedAppFile.Path,
@ -74,11 +74,11 @@ func TestReadEnv(t *testing.T) {
}
if !reflect.DeepEqual(env, expectedAppEnv) {
t.Fatalf(
"did not get expected application settings. Expected: DOMAIN=%s TYPE=%s; Got: DOMAIN=%s TYPE=%s",
"did not get expected application settings. Expected: DOMAIN=%s RECIPE=%s; Got: DOMAIN=%s RECIPE=%s",
expectedAppEnv["DOMAIN"],
expectedAppEnv["TYPE"],
expectedAppEnv["RECIPE"],
env["DOMAIN"],
env["TYPE"],
env["RECIPE"],
)
}
}

View File

@ -13,10 +13,10 @@ import (
"github.com/sirupsen/logrus"
)
// GetContainer retrieves a container. If prompt is true and the retrieved count
// of containers does not match 1, then a prompt is presented to let the user
// choose. A count of 0 is handled gracefully.
func GetContainer(c context.Context, cl *client.Client, filters filters.Args, prompt bool) (types.Container, error) {
// GetContainer retrieves a container. If noInput is false and the retrieved
// count of containers does not match 1, then a prompt is presented to let the
// user choose. A count of 0 is handled gracefully.
func GetContainer(c context.Context, cl *client.Client, filters filters.Args, noInput bool) (types.Container, error) {
containerOpts := types.ContainerListOptions{Filters: filters}
containers, err := cl.ContainerList(c, containerOpts)
if err != nil {
@ -37,7 +37,7 @@ func GetContainer(c context.Context, cl *client.Client, filters filters.Args, pr
containersRaw = append(containersRaw, fmt.Sprintf("%s (created %v)", trimmed, created))
}
if !prompt {
if noInput {
err := fmt.Errorf("expected 1 container but found %v: %s", len(containers), strings.Join(containersRaw, " "))
return types.Container{}, err
}
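
For illustration, a minimal sketch (not part of this diff) of resolving a single container with the renamed noInput flag; the filter value, the "default" context and the `coopcloud.tech/abra/pkg/container` import path are assumptions.

```
package main

import (
	"context"

	"coopcloud.tech/abra/pkg/client"
	"coopcloud.tech/abra/pkg/container"
	"github.com/docker/docker/api/types/filters"
	"github.com/sirupsen/logrus"
)

func main() {
	cl, err := client.New("default")
	if err != nil {
		logrus.Fatal(err)
	}

	// Match containers belonging to one hypothetical service.
	fs := filters.NewArgs()
	fs.Add("name", "^myapp_db")

	// noInput=true: return an error instead of prompting when the match
	// is ambiguous (i.e. not exactly one container).
	target, err := container.GetContainer(context.Background(), cl, fs, true)
	if err != nil {
		logrus.Fatal(err)
	}

	logrus.Infof("found container %s", target.ID)
}
```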

View File

@ -47,8 +47,6 @@ func EnsureIPv4(domainName string) (string, error) {
},
}
logrus.Debugf("created DNS resolver via %s", freifunkDNS)
ctx := context.Background()
ips, err := resolver.LookupIPAddr(ctx, domainName)
if err != nil {
@ -60,7 +58,7 @@ func EnsureIPv4(domainName string) (string, error) {
}
ipv4 = ips[0].IP.To4().String()
logrus.Debugf("discovered the following ipv4 addr: %s", ipv4)
logrus.Debugf("%s points to %s (resolver: %s)", domainName, ipv4, freifunkDNS)
return ipv4, nil
}

View File

@ -78,6 +78,13 @@ var LintRules = map[string][]LintRule{
HowToResolve: "fill out all the metadata",
Function: LintMetadataFilledIn,
},
{
Ref: "R013",
Level: "warn",
Description: "git.coopcloud.tech repo exists",
HowToResolve: "upload your recipe to git.coopcloud.tech/coop-cloud/...",
Function: LintHasRecipeRepo,
},
},
"error": {
{
@ -115,13 +122,6 @@ var LintRules = map[string][]LintRule{
HowToResolve: "vendor config versions in an abra.sh",
Function: LintAbraShVendors,
},
{
Ref: "R013",
Level: "error",
Description: "git.coopcloud.tech repo exists",
HowToResolve: "upload your recipe to git.coopcloud.tech/coop-cloud/...",
Function: LintHasRecipeRepo,
},
},
}

View File

@ -26,7 +26,7 @@ import (
)
// RecipeCatalogueURL is the only current recipe catalogue available.
const RecipeCatalogueURL = "https://apps.coopcloud.tech"
const RecipeCatalogueURL = "https://recipes.coopcloud.tech"
// ReposMetadataURL is the recipe repository metadata
const ReposMetadataURL = "https://git.coopcloud.tech/api/v1/orgs/coop-cloud/repos"
@ -232,7 +232,11 @@ func Get(recipeName string) (Recipe, error) {
meta, err := GetRecipeMeta(recipeName)
if err != nil {
return Recipe{}, err
if strings.Contains(err.Error(), "does not exist") {
meta = RecipeMeta{}
} else {
return Recipe{}, err
}
}
return Recipe{
@ -355,7 +359,7 @@ func EnsureLatest(recipeName string) error {
return err
}
branch, err := gitPkg.GetCurrentBranch(repo)
branch, err := GetDefaultBranch(repo, recipeName)
if err != nil {
return err
}
@ -568,7 +572,7 @@ func EnsureUpToDate(recipeName string) error {
}
if !isClean {
return fmt.Errorf("%s has locally unstaged changes", recipeName)
return fmt.Errorf("%s (%s) has locally unstaged changes? please commit/remove your changes before proceeding", recipeName, recipeDir)
}
repo, err := git.PlainOpen(recipeDir)
@ -615,11 +619,15 @@ func EnsureUpToDate(recipeName string) error {
func GetDefaultBranch(repo *git.Repository, recipeName string) (plumbing.ReferenceName, error) {
recipeDir := path.Join(config.RECIPES_DIR, recipeName)
meta, _ := GetRecipeMeta(recipeName)
if meta.DefaultBranch != "" {
return plumbing.ReferenceName(fmt.Sprintf("refs/heads/%s", meta.DefaultBranch)), nil
}
branch := "master"
if _, err := repo.Branch("master"); err != nil {
if _, err := repo.Branch("main"); err != nil {
logrus.Debugf("failed to select branch in %s", recipeDir)
return "", err
return "", fmt.Errorf("failed to select default branch in %s", recipeDir)
}
branch = "main"
}
@ -689,7 +697,7 @@ func recipeCatalogueFSIsLatest() (bool, error) {
return false, nil
}
logrus.Debug("file system cached recipe catalogue is now up-to-date")
logrus.Debug("file system cached recipe catalogue is up-to-date")
return true, nil
}
@ -708,14 +716,12 @@ func ReadRecipeCatalogue() (RecipeCatalogue, error) {
}
if !recipeFSIsLatest {
logrus.Debugf("reading recipe catalogue from web to get latest")
if err := readRecipeCatalogueWeb(&recipes); err != nil {
return nil, err
}
return recipes, nil
}
logrus.Debugf("reading recipe catalogue from file system cache to get latest")
if err := readRecipeCatalogueFS(&recipes); err != nil {
return nil, err
}
@ -797,8 +803,7 @@ func GetRecipeMeta(recipeName string) (RecipeMeta, error) {
recipeMeta, ok := catl[recipeName]
if !ok {
err := fmt.Errorf("recipe %s does not exist?", recipeName)
return RecipeMeta{}, err
return RecipeMeta{}, fmt.Errorf("recipe %s does not exist?", recipeName)
}
if err := EnsureExists(recipeName); err != nil {
@ -923,7 +928,7 @@ func ReadReposMetadata() (RepoCatalogue, error) {
}
// GetRecipeVersions retrieves all recipe versions.
func GetRecipeVersions(recipeName, registryUsername, registryPassword string) (RecipeVersions, error) {
func GetRecipeVersions(recipeName string) (RecipeVersions, error) {
versions := RecipeVersions{}
recipeDir := path.Join(config.RECIPES_DIR, recipeName)
@ -937,7 +942,7 @@ func GetRecipeVersions(recipeName, registryUsername, registryPassword string) (R
worktree, err := repo.Worktree()
if err != nil {
logrus.Fatal(err)
return versions, err
}
gitTags, err := repo.Tags()
@ -967,9 +972,9 @@ func GetRecipeVersions(recipeName, registryUsername, registryPassword string) (R
return err
}
cl, err := client.New("default") // only required for docker.io registry calls
cl, err := client.New("default") // only required for container registry calls
if err != nil {
logrus.Fatal(err)
return err
}
queryCache := make(map[reference.Named]string)
@ -997,18 +1002,19 @@ func GetRecipeVersions(recipeName, registryUsername, registryPassword string) (R
var exists bool
var digest string
if digest, exists = queryCache[img]; !exists {
logrus.Debugf("looking up image: %s from %s", img, path)
logrus.Debugf("cache miss: querying for image: %s, tag: %s", path, tag)
var err error
digest, err = client.GetTagDigest(cl, img, registryUsername, registryPassword)
digest, err = client.GetTagDigest(cl, img)
if err != nil {
logrus.Warn(err)
continue
digest = "unknown"
}
logrus.Debugf("queried for image: %s, tag: %s, digest: %s", path, tag, digest)
queryCache[img] = digest
logrus.Debugf("cached image: %s, tag: %s, digest: %s", path, tag, digest)
logrus.Debugf("cached insert: %s, tag: %s, digest: %s", path, tag, digest)
} else {
logrus.Debugf("reading image: %s, tag: %s, digest: %s from cache", path, tag, digest)
logrus.Debugf("cache hit: image: %s, tag: %s, digest: %s", path, tag, digest)
}
versionMeta[service.Name] = ServiceMeta{
@ -1054,7 +1060,7 @@ func GetRecipeCatalogueVersions(recipeName string, catl RecipeCatalogue) ([]stri
func EnsureCatalogue() error {
catalogueDir := path.Join(config.ABRA_DIR, "catalogue")
if _, err := os.Stat(catalogueDir); err != nil && os.IsNotExist(err) {
url := fmt.Sprintf("%s/%s.git", config.REPOS_BASE_URL, "recipes")
url := fmt.Sprintf("%s/%s.git", config.REPOS_BASE_URL, config.CATALOGUE_JSON_REPO_NAME)
if err := gitPkg.Clone(catalogueDir, url); err != nil {
return err
}

View File

@ -8,6 +8,7 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"coopcloud.tech/abra/pkg/client"
"coopcloud.tech/abra/pkg/config"
@ -119,23 +120,32 @@ func ParseSecretEnvVarValue(secret string) (secretValue, error) {
func GenerateSecrets(secretEnvVars map[string]string, appName, server string) (map[string]string, error) {
secrets := make(map[string]string)
var mutex sync.Mutex
var wg sync.WaitGroup
ch := make(chan error, len(secretEnvVars))
for secretEnvVar := range secretEnvVars {
wg.Add(1)
go func(s string) {
defer wg.Done()
secretName := ParseSecretEnvVarName(s)
secretValue, err := ParseSecretEnvVarValue(secretEnvVars[s])
if err != nil {
ch <- err
return
}
secretRemoteName := fmt.Sprintf("%s_%s_%s", appName, secretName, secretValue.Version)
logrus.Debugf("attempting to generate and store %s on %s", secretRemoteName, server)
if secretValue.Length > 0 {
passwords, err := GeneratePasswords(1, uint(secretValue.Length))
if err != nil {
ch <- err
return
}
if err := client.StoreSecret(secretRemoteName, passwords[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
@ -145,6 +155,9 @@ func GenerateSecrets(secretEnvVars map[string]string, appName, server string) (m
}
return
}
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passwords[0]
} else {
passphrases, err := GeneratePassphrases(1)
@ -152,6 +165,7 @@ func GenerateSecrets(secretEnvVars map[string]string, appName, server string) (m
ch <- err
return
}
if err := client.StoreSecret(secretRemoteName, passphrases[0], server); err != nil {
if strings.Contains(err.Error(), "AlreadyExists") {
logrus.Warnf("%s already exists, moving on...", secretRemoteName)
@ -161,12 +175,17 @@ func GenerateSecrets(secretEnvVars map[string]string, appName, server string) (m
}
return
}
mutex.Lock()
defer mutex.Unlock()
secrets[secretName] = passphrases[0]
}
ch <- nil
}(secretEnvVar)
}
wg.Wait()
for range secretEnvVars {
err := <-ch
if err != nil {

View File

@ -67,13 +67,13 @@ func getConnectionHelper(daemonURL string, sshFlags []string) (*connhelper.Conne
return nil, err
}
func NewConnectionHelper(daemonURL string) *connhelper.ConnectionHelper {
func NewConnectionHelper(daemonURL string) (*connhelper.ConnectionHelper, error) {
helper, err := GetConnectionHelper(daemonURL)
if err != nil {
logrus.Fatal(err)
return nil, err
}
return helper
return helper, nil
}
func getDockerEndpoint(host string) (docker.Endpoint, error) {

View File

@ -420,6 +420,12 @@ func convertServiceSecrets(
return nil, err
}
// NOTE(d1): strip # length=... modifiers
if strings.Contains(obj.Name, "#") {
vals := strings.Split(obj.Name, "#")
obj.Name = strings.TrimSpace(vals[0])
}
file := swarm.SecretReferenceFileTarget(obj.File)
refs = append(refs, &swarm.SecretReference{
File: &file,

View File

@ -35,16 +35,21 @@ func LoadComposefile(opts Deploy, appEnv map[string]string) (*composetypes.Confi
return nil, err
}
recipeName, exists := appEnv["RECIPE"]
if !exists {
recipeName, _ = appEnv["TYPE"]
}
unsupportedProperties := loader.GetUnsupportedProperties(dicts...)
if len(unsupportedProperties) > 0 {
logrus.Warnf("%s: ignoring unsupported options: %s",
appEnv["TYPE"], strings.Join(unsupportedProperties, ", "))
recipeName, strings.Join(unsupportedProperties, ", "))
}
deprecatedProperties := loader.GetDeprecatedProperties(dicts...)
if len(deprecatedProperties) > 0 {
logrus.Warnf("%s: ignoring deprecated options: %s",
appEnv["TYPE"], propertyWarnings(deprecatedProperties))
recipeName, propertyWarnings(deprecatedProperties))
}
return config, nil
}

View File

@ -1,8 +1,8 @@
#!/usr/bin/env bash
ABRA_VERSION="0.3.0-alpha"
ABRA_VERSION="0.4.0-alpha"
ABRA_RELEASE_URL="https://git.coopcloud.tech/api/v1/repos/coop-cloud/abra/releases/tags/$ABRA_VERSION"
RC_VERSION="0.4.0-alpha-rc6"
RC_VERSION="0.4.0-alpha"
RC_VERSION_URL="https://git.coopcloud.tech/api/v1/repos/coop-cloud/abra/releases/tags/$RC_VERSION"
for arg in "$@"; do

5
scripts/release/upx.sh Executable file
View File

@ -0,0 +1,5 @@
#!/bin/bash
set -ex
upx ./dist/abra_*/abra

View File

@ -1 +0,0 @@
TYPE=gitea

View File

@ -1 +0,0 @@
TYPE=wordpress

View File

@ -1 +0,0 @@
TYPE=wordpress

View File

@ -1,7 +0,0 @@
FROM debian:bullseye-slim
RUN apt update && apt install -y wget curl git;
RUN git config --global user.email "integration-tests@coopcloud.tech";
RUN git config --global user.name "integration-tests";

View File

@ -1,4 +1,28 @@
# integration tests
- `cp .envrc.sample .envrc` (fill out values && `direnv allow`)
- `TARGET=install.sh make` (ensure `docker context use default`)
> You need to be a member of Autonomic Co-op to run these tests, sorry!
`testfunctions.sh` contains the functions necessary to save and manipulate
logs. Run `./test_all.sh logdir` to run the test scripts listed in `test_all.sh` and
save the logs to `logdir`.
When creating new tests, make sure the test command is a one-liner (you can use
`;` to separate commands). Include `testfunctions.sh` and then write your tests
like this:
```
run_test '$ABRA other stuff here'
```
By default, the testing script will ask after every command if the execution
succeeded. If you reply `n`, it will log the test in the `logdir`. If you want
all tests to run without questions, run `export logall=yes` before executing
the test script.
To run tests, you'll need to prepare your environment:
```
cp .envrc.sample .envrc # fill out values...
direnv allow
./test_all.sh logs
```

View File

@ -1,15 +1,14 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
echo "all apps, all servers"
$ABRA app ls
printf "\\n\\n\\n"
run_test '$ABRA app ls'
echo "all wordpress apps, all servers"
$ABRA app ls --type wordpress
printf "\\n\\n\\n"
run_test '$ABRA app ls --status'
echo "all wordpress apps, only server2"
$ABRA app ls --type wordpress --server server2
printf "\\n\\n\\n"
run_test '$ABRA app ls --type wordpress'
run_test '$ABRA app ls --type wordpress --server swarm.autonomic.zone'
run_test '$ABRA app ls --type wordpress --server swarm.autonomic.zone --status'

View File

@ -1,9 +1,10 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
$ABRA autocomplete bash
run_test '$ABRA autocomplete bash'
$ABRA autocomplete fizsh
run_test '$ABRA autocomplete fizsh'
$ABRA autocomplete zsh
run_test '$ABRA autocomplete zsh'

View File

@ -1,7 +1,8 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
$ABRA catalogue generate --debug
run_test '$ABRA catalogue generate'
$ABRA catalogue generate gitea --debug
run_test '$ABRA catalogue generate gitea'

View File

@ -3,15 +3,8 @@
set -e
function init() {
ABRA="$HOME/.local/bin/abra"
INSTALLER_URL="https://install.abra.coopcloud.tech"
for arg in "$@"; do
if [ "$arg" == "--dev" ]; then
ABRA="/src/abra"
INSTALLER_URL="https://git.coopcloud.tech/coop-cloud/abra/raw/branch/main/scripts/installer/installer"
fi
done
ABRA="$(pwd)/../../abra"
INSTALLER_URL="https://git.coopcloud.tech/coop-cloud/abra/raw/branch/main/scripts/installer/installer"
export PATH=$PATH:$HOME/.local/bin

View File

@ -1,15 +1,12 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
wget -O- https://install.abra.autonomic.zone | bash
~/.local/bin/abra -v
run_test 'wget -O- https://install.abra.autonomic.zone | bash; ~/.local/bin/abra -v'
wget -O- https://install.abra.autonomic.zone | bash -s -- --rc
~/.local/bin/abra -v
run_test 'wget -O- https://install.abra.autonomic.zone | bash -s -- --rc; ~/.local/bin/abra -v'
$ABRA upgrade
~/.local/bin/abra -v
run_test '$ABRA upgrade; ~/.local/bin/abra -v'
$ABRA upgrade --rc
~/.local/bin/abra -v
run_test '$ABRA upgrade --rc; ~/.local/bin/abra -v'

View File

@ -1,11 +0,0 @@
default:
@docker run \
-v $$(pwd)/../../:/src \
-v $$(pwd)/.abra:/root/.abra \
--env-file .envrc \
decentral1se/abra-int:latest \
sh -c '\
echo "Running $(TARGET)..."; \
cd /src/tests/integration; \
bash "$(TARGET)" -- --dev \
'

View File

@ -1,12 +1,14 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
$ABRA recipe new testrecipe
run_test '$ABRA recipe new testrecipe'
$ABRA recipe list
$ABRA recipe list -p cloud
run_test '$ABRA recipe list'
$ABRA recipe versions peertube
run_test '$ABRA recipe list --pattern cloud'
$ABRA recipe lint gitea
run_test '$ABRA recipe versions peertube'
run_test '$ABRA recipe lint gitea'

View File

@ -1,9 +1,21 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
$ABRA record new -p gandi -t A -n int-core -v 192.157.2.21 coopcloud.tech
run_test "$ABRA record new \
--provider gandi \
--record-type A \
--record-name integration-tests \
--record-value 192.157.2.21 \
--no-input coopcloud.tech \
"
$ABRA record list -p gandi coopcloud.tech | grep -q int-core
run_test '$ABRA record list --provider gandi coopcloud.tech'
$ABRA -n record rm -p gandi -t A -n int-core coopcloud.tech
run_test "$ABRA record rm \
--provider gandi \
--record-type A \
--record-name integration-tests \
--no-input coopcloud.tech
"

View File

@ -1,9 +1,10 @@
#!/bin/bash
source ./testfunctions.sh
source ./common.sh
$ABRA -n server new -p hetzner-cloud --hn int-core
run_test '$ABRA server new --provider hetzner-cloud --hetzner-name integration-tests --no-input'
$ABRA server ls | grep -q int-core
run_test '$ABRA server ls'
$ABRA -n server rm -s -p hetzner-cloud --hn int-core
run_test '$ABRA server rm --provider hetzner-cloud --hetzner-name int-core --server --no-input'

24
tests/integration/test_all.sh Executable file
View File

@ -0,0 +1,24 @@
#!/bin/bash
if [ -z $1 ]; then
echo "usage: ./test_all.sh logdir"
exit
fi
res_dir=$1/
if [[ ! -d "$res_dir" ]]; then
mkdir "$res_dir"
fi
# Usage: run_test [number] [name] [command]
run_test () {
logfile="$res_dir/$1-$2.log"
echo $logfile
}
testScripts=("app.sh" "autocomplete.sh" "catalogue.sh" "install.sh" "recipe.sh" "records.sh" "server.sh")
for i in "${testScripts[@]}"; do
cmd="./$i $res_dir${i/sh/log}"
eval $cmd
done

View File

@ -0,0 +1,35 @@
#!/bin/bash
if [ -z $1 ]; then
logfile=/dev/null
else
logfile=$1
fi
if [ -z $logall ]; then
logall=no
fi
run_test () {
if [ -z "$@" ]; then
echo "run_test needs a command to run"
else
tempLogfile=$(mktemp)
cmd=$(eval echo "$@")
echo -e "\\n------------ INPUT -------------------" | tee -a $tempLogfile
echo "$" "$cmd" | tee -a $tempLogfile
echo "------------ OUTPUT ------------------" | tee -a $tempLogfile
eval $cmd 2>&1 | tee -a $tempLogfile
if [ $logall = "yes" ]; then
cat $tempLogfile >> $logfile
echo -e "\\n\\n" >> $logfile
else
read -N 1 -p "Did the test pass? [y/n]: " pass
if [ $pass = 'n' ]; then
cat $tempLogfile >> $logfile
echo -e "\\n\\n" >> $logfile
fi
fi
rm $tempLogfile
fi
}

View File

@ -1,40 +0,0 @@
# manual test plan
## recipe publish
- `abra recipe upgrade <recipe>`
- `cd ~/.abra/apps/<recipe>/ && git diff` to ensure changes made
- `abra recipe sync <recipe>`
- `cd ~/.abra/apps/<recipe>/ && git diff` to ensure changes made
- `abra recipe release <recipe> --dry-run`
- prompts should be correct, read what `abra` asks you carefully
## deploy, upgrade, rollback
- `abra app deploy --chaos <app>`
- `abra app deploy --force <app>`
- `abra app deploy <app>`
- `abra app rollback <app>`
- `abra app upgrade <app>`
## app day-to-day ops
- `abra app check <app>`
- `abra app config <app>`
- `abra app cp <app>`
- `abra app errors -w <app>`
- `abra app logs <app>`
- `abra app ls --status <app>`
- `abra app new --secrets <recipe>`
- `abra app ps <app>`
- `abra app remove <app>`
- `abra app restart <app>`
- `abra app run <app>`
- `abra app secret generate --all`
- `abra app secret insert <app> foo v1 bar`
- `abra app secret ls <app>`
- `abra app secret remove <app> foo`
- `abra app volume ls <app>`
- `abra app volume remove --force <app>`

View File

@ -1,2 +1,2 @@
TYPE=ecloud
RECIPE=ecloud
DOMAIN=ecloud.evil.corp