forked from toolshed/abra-bash

Compare commits: prefer-fas ... requiremen

30 Commits:

6c7b53f585
32bf28e7a9
624815e5b1
e9fb9e56ad
283eb21e29
92f49d56dd
d9ff48b55b
3d8ce3492e
07696760b7
43b4a01f8a
bb3b324e07
eb9d1b883b
6f6140ced2
cb225908d0
f2892bad6f
480b1453ec
0ab2b3a652
93714a593b
57f3f96bbc
e1959506c7
7482362af1
e8510c8aeb
4042e10985
f7cd0eb54c
a571b839a8
fae13d9af8
9c9f7225e7
352cc0939b
2ca7884bbe
fa54705f79
.drone.yml (24 changed lines)

@@ -3,35 +3,43 @@ kind: pipeline
 name: linters
 steps:
   - name: run shellcheck
-    image: koalaman/shellcheck-alpine:v0.7.1
+    image: koalaman/shellcheck-alpine
     commands:
       - shellcheck abra
+      - shellcheck bin/*.sh
+      - shellcheck deploy/install.abra.coopcloud.tech/installer
 
   - name: run flake8
-    image: alpine/flake8:3.9.0
+    image: alpine/flake8
     commands:
-      - flake8 --max-line-length 100 bin/app-json.py
+      - flake8 --max-line-length 100 bin/*.py
 
   - name: run unit tests
     image: decentral1se/docker-dind-bats-kcov
     commands:
       - bats tests
 
+  - name: test installation script
+    image: debian:buster
+    commands:
+      - apt update && apt install -yqq sudo lsb-release
+      - deploy/install.abra.coopcloud.tech/installer --no-prompt
+      - ~/.local/bin/abra version
+
   - name: publish image
     image: plugins/docker
     settings:
       auto_tag: true
-      username:
-        from_secret: docker_reg_username
+      username: thecoopcloud
       password:
-        from_secret: docker_reg_passwd
-      repo: decentral1se/abra
+        from_secret: thecoopcloud_password
+      repo: thecoopcloud/abra
       tags: latest
     depends_on:
       - run shellcheck
      - run flake8
       - run unit tests
+      - test installation script
     when:
       event:
         exclude:

@@ -50,6 +58,7 @@ steps:
       - run shellcheck
       - run flake8
       - run unit tests
+      - test installation script
       - publish image
     when:
       event:

@@ -68,6 +77,7 @@ steps:
       - run shellcheck
       - run flake8
       - run unit tests
+      - test installation script
       - publish image
       - trigger downstream builds
     when:
CHANGELOG.md

@@ -11,6 +11,15 @@
+- Add `--bump` to `deploy` command to allow packagers to make minor package related releases ([#173](https://git.autonomic.zone/coop-cloud/abra/issues/173))
+- Drop `--skip-version-check`/`--no-domain-poll`/`--no-state-poll` in favour of `--fast` ([#169](https://git.autonomic.zone/coop-cloud/abra/issues/169))
+- Move `abra` image under the new `thecoopcloud/...` namespace ([#1](https://git.autonomic.zone/coop-cloud/auto-apps-json/issues/1))
+- Add a `--output` flag to the `app-json.py` app generator for the CI environment ([#2](https://git.autonomic.zone/coop-cloud/auto-apps-json/issues/2))
+- Support logging in as new `thecoopcloud` Docker account via `skopeo` when generating new `apps.json` ([7482362af1](https://git.autonomic.zone/coop-cloud/abra/commit/7482362af1d01cc02828abd45b1222fa643d1f80))
+- App deployment checks are somewhat more reliable (see [#193](https://git.autonomic.zone/coop-cloud/abra/issues/193) for remaining work) ([#165](https://git.autonomic.zone/coop-cloud/abra/issues/165))
+- Skip generation of commented out secrets and correctly fail deploy when secret generation fails ([#133](https://git.autonomic.zone/coop-cloud/abra/issues/133))
+- Fix logging for chaos deploys and recipe selection logic ([#185](https://git.autonomic.zone/coop-cloud/abra/issues/185))
+- Improve reliability of selecting when to download a new `apps.json` ([#170](https://git.autonomic.zone/coop-cloud/abra/issues/170))
+- Remove `pwgen`/`pwqgen` as password generator requirements ([#167](https://git.autonomic.zone/coop-cloud/abra/issues/167))
+- `abra` installer script will now try to install system requirements ([#196](https://git.autonomic.zone/coop-cloud/abra/issues/196))
 
 # abra 9.0.0 (2021-06-10)
README.md (21 changed lines)

@@ -25,36 +25,39 @@ See [CHANGELOG.md](./CHANGELOG.md).
 > [docs.coopcloud.tech](https://docs.coopcloud.tech)
 
-## Install
+## Requirements
 
-Requirements:
-
 - `pwqgen` (optional)
 - `pwgen` (optional)
 - `curl`
 - `docker`
 - `bash` >= 4
 
+## Install
+
 Install the latest stable release:
 
 ```sh
 curl https://install.abra.coopcloud.tech | bash
 ```
 
-or the bleeding-edge development version:
+The source for this script is [here](./deploy/install.abra.coopcloud.tech/installer).
+
+You can pass options to the script like so (e.g. install the bleeding edge development version):
 
 ```sh
 curl https://install.abra.coopcloud.tech | bash -s -- --dev
 ```
 
-The source for this script is [here](./deploy/install.abra.coopcloud.tech/installer).
+Other options available are as follows:
+
+- **--no-prompt**: non-interactive installation
+- **--no-deps**: do not attempt to install [requirements](#requirements)
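Since the script arrives on stdin, flags reach it via bash's `-s --` idiom. A quick local sketch (no network involved, hypothetical flags only) of how arguments pass through:

```shell
# `bash -s -- ARGS` makes bash read its script from stdin and hands ARGS
# over as positional parameters -- this is how `curl ... | bash -s -- --dev`
# forwards flags to the installer.
first_flag=$(echo 'echo "$1"' | bash -s -- --no-prompt --no-deps)
echo "$first_flag"   # prints: --no-prompt
```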
 
 ## Container
 
-An [image](https://hub.docker.com/r/decentral1se/abra) is also provided.
+An [image](https://hub.docker.com/r/thecoopcloud/abra) is also provided.
 
 ```
-docker run decentral1se/abra app ls
+docker run thecoopcloud/abra app ls
 ```
 
 ## Update
abra (231 changed lines)

@@ -1,5 +1,7 @@
 #!/usr/bin/env bash
 
+# shellcheck disable=SC2154
+
 GIT_URL="https://git.autonomic.zone/coop-cloud/"
 ABRA_APPS_URL="https://apps.coopcloud.tech"
 ABRA_DIR="${ABRA_DIR:-$HOME/.abra}"
@@ -13,7 +15,7 @@ ABRA_APPS_JSON="${ABRA_DIR}/apps.json"
 #######################################
 
 DOC="
-The cooperative cloud utility belt 🎩🐇
+The Co-op Cloud utility belt 🎩🐇
 
 Usage:
   abra [options] app (list|ls) [--status] [--server=<server>] [--type=<type>]
@@ -167,15 +169,15 @@ eval "var_$1+=($value)"; else eval "var_$1=$value"; fi; return 0; fi; done
 return 1; }; stdout() { printf -- "cat <<'EOM'\n%s\nEOM\n" "$1"; }; stderr() {
 printf -- "cat <<'EOM' >&2\n%s\nEOM\n" "$1"; }; error() {
 [[ -n $1 ]] && stderr "$1"; stderr "$usage"; _return 1; }; _return() {
-printf -- "exit %d\n" "$1"; exit "$1"; }; set -e; trimmed_doc=${DOC:1:2451}
-usage=${DOC:40:1842}; digest=c7bae
-shorts=(-e -b -s -C -U -h -d -v -n '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '')
-longs=(--env --branch --stack --skip-check --skip-update --help --debug --verbose --no-prompt --status --server --type --domain --app-name --pass --secrets --all --update --force --fast --chaos --volumes --no-tty --user --bump --dev)
-argcounts=(1 1 1 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0); node_0(){
-value __env 0; }; node_1(){ value __branch 1; }; node_2(){ value __stack 2; }
-node_3(){ switch __skip_check 3; }; node_4(){ switch __skip_update 4; }
-node_5(){ switch __help 5; }; node_6(){ switch __debug 6; }; node_7(){
-switch __verbose 7; }; node_8(){ switch __no_prompt 8; }; node_9(){
+printf -- "exit %d\n" "$1"; exit "$1"; }; set -e; trimmed_doc=${DOC:1:2445}
+usage=${DOC:34:1842}; digest=d420c
+shorts=(-d -C -h -U -e -s -v -n -b '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '' '')
+longs=(--debug --skip-check --help --skip-update --env --stack --verbose --no-prompt --branch --status --server --type --domain --app-name --pass --secrets --all --update --force --fast --chaos --volumes --no-tty --user --bump --dev)
+argcounts=(0 0 0 0 1 1 0 0 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0); node_0(){
+switch __debug 0; }; node_1(){ switch __skip_check 1; }; node_2(){
+switch __help 2; }; node_3(){ switch __skip_update 3; }; node_4(){ value __env 4
+}; node_5(){ value __stack 5; }; node_6(){ switch __verbose 6; }; node_7(){
+switch __no_prompt 7; }; node_8(){ value __branch 8; }; node_9(){
 switch __status 9; }; node_10(){ value __server 10; }; node_11(){
 value __type 11; }; node_12(){ value __domain 12; }; node_13(){
 value __app_name 13; }; node_14(){ switch __pass 14; }; node_15(){
@@ -244,50 +246,51 @@ node_154(){ optional 153; }; node_155(){ required 80 78 154; }; node_156(){
 required 80; }; node_157(){
 either 86 91 94 100 101 102 103 104 106 107 108 112 114 118 119 124 125 128 129 130 133 135 136 137 139 140 143 144 145 146 147 148 150 151 152 155 156
 }; node_158(){ required 157; }; cat <<<' docopt_exit() {
-[[ -n $1 ]] && printf "%s\n" "$1" >&2; printf "%s\n" "${DOC:40:1842}" >&2
-exit 1; }'; unset var___env var___branch var___stack var___skip_check \
-var___skip_update var___help var___debug var___verbose var___no_prompt \
-var___status var___server var___type var___domain var___app_name var___pass \
-var___secrets var___all var___update var___force var___fast var___chaos \
-var___volumes var___no_tty var___user var___bump var___dev var__type_ \
-var__app_ var__service_ var__version_ var__src_ var__dst_ var__backup_file_ \
-var__args_ var__secret_ var__cmd_ var__data_ var__volume_ var__command_ \
-var__recipe_ var__host_ var__user_ var__port_ var__provider_ var__subcommands_ \
-var_app var_list var_ls var_new var_backup var_deploy var_check var_version \
-var_config var_cp var_logs var_ps var_restore var_rm var_delete var_run \
-var_rollback var_secret var_generate var_insert var_undeploy var_volume \
-var_recipe var_create var_release var_versions var_server var_add var___ \
-var_init var_apps var_upgrade var_doctor var_help; parse 158 "$@"
-local prefix=${DOCOPT_PREFIX:-''}; unset "${prefix}__env" "${prefix}__branch" \
-"${prefix}__stack" "${prefix}__skip_check" "${prefix}__skip_update" \
-"${prefix}__help" "${prefix}__debug" "${prefix}__verbose" \
-"${prefix}__no_prompt" "${prefix}__status" "${prefix}__server" \
-"${prefix}__type" "${prefix}__domain" "${prefix}__app_name" "${prefix}__pass" \
-"${prefix}__secrets" "${prefix}__all" "${prefix}__update" "${prefix}__force" \
-"${prefix}__fast" "${prefix}__chaos" "${prefix}__volumes" "${prefix}__no_tty" \
-"${prefix}__user" "${prefix}__bump" "${prefix}__dev" "${prefix}_type_" \
-"${prefix}_app_" "${prefix}_service_" "${prefix}_version_" "${prefix}_src_" \
-"${prefix}_dst_" "${prefix}_backup_file_" "${prefix}_args_" \
-"${prefix}_secret_" "${prefix}_cmd_" "${prefix}_data_" "${prefix}_volume_" \
-"${prefix}_command_" "${prefix}_recipe_" "${prefix}_host_" "${prefix}_user_" \
-"${prefix}_port_" "${prefix}_provider_" "${prefix}_subcommands_" \
-"${prefix}app" "${prefix}list" "${prefix}ls" "${prefix}new" "${prefix}backup" \
-"${prefix}deploy" "${prefix}check" "${prefix}version" "${prefix}config" \
-"${prefix}cp" "${prefix}logs" "${prefix}ps" "${prefix}restore" "${prefix}rm" \
+[[ -n $1 ]] && printf "%s\n" "$1" >&2; printf "%s\n" "${DOC:34:1842}" >&2
+exit 1; }'; unset var___debug var___skip_check var___help var___skip_update \
+var___env var___stack var___verbose var___no_prompt var___branch var___status \
+var___server var___type var___domain var___app_name var___pass var___secrets \
+var___all var___update var___force var___fast var___chaos var___volumes \
+var___no_tty var___user var___bump var___dev var__type_ var__app_ \
+var__service_ var__version_ var__src_ var__dst_ var__backup_file_ var__args_ \
+var__secret_ var__cmd_ var__data_ var__volume_ var__command_ var__recipe_ \
+var__host_ var__user_ var__port_ var__provider_ var__subcommands_ var_app \
+var_list var_ls var_new var_backup var_deploy var_check var_version var_config \
+var_cp var_logs var_ps var_restore var_rm var_delete var_run var_rollback \
+var_secret var_generate var_insert var_undeploy var_volume var_recipe \
+var_create var_release var_versions var_server var_add var___ var_init \
+var_apps var_upgrade var_doctor var_help; parse 158 "$@"
+local prefix=${DOCOPT_PREFIX:-''}; unset "${prefix}__debug" \
+"${prefix}__skip_check" "${prefix}__help" "${prefix}__skip_update" \
+"${prefix}__env" "${prefix}__stack" "${prefix}__verbose" \
+"${prefix}__no_prompt" "${prefix}__branch" "${prefix}__status" \
+"${prefix}__server" "${prefix}__type" "${prefix}__domain" \
+"${prefix}__app_name" "${prefix}__pass" "${prefix}__secrets" "${prefix}__all" \
+"${prefix}__update" "${prefix}__force" "${prefix}__fast" "${prefix}__chaos" \
+"${prefix}__volumes" "${prefix}__no_tty" "${prefix}__user" "${prefix}__bump" \
+"${prefix}__dev" "${prefix}_type_" "${prefix}_app_" "${prefix}_service_" \
+"${prefix}_version_" "${prefix}_src_" "${prefix}_dst_" \
+"${prefix}_backup_file_" "${prefix}_args_" "${prefix}_secret_" \
+"${prefix}_cmd_" "${prefix}_data_" "${prefix}_volume_" "${prefix}_command_" \
+"${prefix}_recipe_" "${prefix}_host_" "${prefix}_user_" "${prefix}_port_" \
+"${prefix}_provider_" "${prefix}_subcommands_" "${prefix}app" "${prefix}list" \
+"${prefix}ls" "${prefix}new" "${prefix}backup" "${prefix}deploy" \
+"${prefix}check" "${prefix}version" "${prefix}config" "${prefix}cp" \
+"${prefix}logs" "${prefix}ps" "${prefix}restore" "${prefix}rm" \
 "${prefix}delete" "${prefix}run" "${prefix}rollback" "${prefix}secret" \
 "${prefix}generate" "${prefix}insert" "${prefix}undeploy" "${prefix}volume" \
 "${prefix}recipe" "${prefix}create" "${prefix}release" "${prefix}versions" \
 "${prefix}server" "${prefix}add" "${prefix}__" "${prefix}init" "${prefix}apps" \
 "${prefix}upgrade" "${prefix}doctor" "${prefix}help"
-eval "${prefix}"'__env=${var___env:-}'
-eval "${prefix}"'__branch=${var___branch:-}'
-eval "${prefix}"'__stack=${var___stack:-}'
-eval "${prefix}"'__skip_check=${var___skip_check:-false}'
-eval "${prefix}"'__skip_update=${var___skip_update:-false}'
-eval "${prefix}"'__help=${var___help:-false}'
+eval "${prefix}"'__debug=${var___debug:-false}'
+eval "${prefix}"'__skip_check=${var___skip_check:-false}'
+eval "${prefix}"'__help=${var___help:-false}'
+eval "${prefix}"'__skip_update=${var___skip_update:-false}'
+eval "${prefix}"'__env=${var___env:-}'
+eval "${prefix}"'__stack=${var___stack:-}'
+eval "${prefix}"'__verbose=${var___verbose:-false}'
+eval "${prefix}"'__no_prompt=${var___no_prompt:-false}'
+eval "${prefix}"'__branch=${var___branch:-}'
 eval "${prefix}"'__status=${var___status:-false}'
 eval "${prefix}"'__server=${var___server:-}'
 eval "${prefix}"'__type=${var___type:-}'
@@ -355,9 +358,9 @@ eval "${prefix}"'upgrade=${var_upgrade:-false}'
 eval "${prefix}"'doctor=${var_doctor:-false}'
 eval "${prefix}"'help=${var_help:-false}'; local docopt_i=1
 [[ $BASH_VERSION =~ ^4.3 ]] && docopt_i=2; for ((;docopt_i>0;docopt_i--)); do
-declare -p "${prefix}__env" "${prefix}__branch" "${prefix}__stack" \
-"${prefix}__skip_check" "${prefix}__skip_update" "${prefix}__help" \
-"${prefix}__debug" "${prefix}__verbose" "${prefix}__no_prompt" \
+declare -p "${prefix}__debug" "${prefix}__skip_check" "${prefix}__help" \
+"${prefix}__skip_update" "${prefix}__env" "${prefix}__stack" \
+"${prefix}__verbose" "${prefix}__no_prompt" "${prefix}__branch" \
 "${prefix}__status" "${prefix}__server" "${prefix}__type" "${prefix}__domain" \
 "${prefix}__app_name" "${prefix}__pass" "${prefix}__secrets" "${prefix}__all" \
 "${prefix}__update" "${prefix}__force" "${prefix}__fast" "${prefix}__chaos" \
@@ -508,8 +511,8 @@ require_apps_json() {
 
   if [ -f "$ABRA_APPS_JSON" ]; then
     modified=$(curl --silent --head "$ABRA_APPS_URL" | \
-      awk '/^Last-Modified/{print $0}' | \
-      sed 's/^Last-Modified: //')
+      awk '/^last-modified/{print tolower($0)}' | \
+      sed 's/^last-modified: //I')
     remote_ctime=$(date --date="$modified" +%s)
     local_ctime=$(stat -c %Z "$ABRA_APPS_JSON")
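The surrounding logic boils down to comparing two Unix timestamps: the parsed `Last-Modified` header and the local file's change time. A minimal sketch with a hypothetical header value (GNU `date` and `stat`):

```shell
# Hypothetical Last-Modified value; the real code reads it from curl --head output
modified="Wed, 21 Oct 2015 07:28:00 GMT"
remote_ctime=$(date --date="$modified" +%s)

tmpfile=$(mktemp)
local_ctime=$(stat -c %Z "$tmpfile")   # ctime of a freshly created file

# A local copy newer than the remote needs no re-download
[ "$remote_ctime" -lt "$local_ctime" ] && echo "local copy is fresh"
```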
@@ -652,6 +655,14 @@ checkout_main_or_master() {
   git checkout main > /dev/null 2>&1 || git checkout master > /dev/null 2>&1
 }
 
+pwgen_native() {
+  tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c "$1"
+}
+
+pwqgen_native() {
+  shuf -n 3 /usr/share/dict/words | tr -dc 'a-zA-Z0-9' | tr -d '\n'
+}
+
 # FIXME 3wc: update or remove
 if [ -z "$ABRA_ENV" ] && [ -f .env ] && type direnv > /dev/null 2>&1 && ! direnv status | grep -q 'Found RC allowed true'; then
   error "direnv is blocked, run direnv allow"
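The new `pwgen_native` helper replaces the external `pwgen` dependency with a `/dev/urandom` pipeline; its argument is the exact password length:

```shell
# Filter random bytes down to alphanumerics, then take exactly $1 characters
pwgen_native() {
  tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c "$1"
}

pw=$(pwgen_native 30)
echo "${#pw}"   # prints: 30
```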
@@ -685,7 +696,11 @@ get_recipe_version_latest() {
     info "No versions found"
   else
     VERSION="${RECIPE_VERSIONS[-1]}"
-    info "Chose version $VERSION"
+    if [ "$abra___chaos" = "true" ]; then
+      info "Not choosing a version and instead deploying from latest commit"
+    else
+      info "Chose version $VERSION"
+    fi
   fi
 }
@@ -792,49 +807,56 @@ output_version_summary() {
   fi
 }
 
-# Note(decentral1se): inspired by https://github.com/vitalets/docker-stack-wait-deploy
 ensure_stack_deployed() {
   STACK_NAME=$1
+  local -a HEALTHY  # mapping
+  local -a MISSING  # mapping
 
   warning "Polling deploy state to check for success"
+  TIMEOUT=60
+  idx=0
 
-  while true; do
-    all_services_done=1
-    has_errors=0
+  IFS=' ' read -r -a SERVICES <<< "$(docker stack services "${STACK_NAME}" --format "{{.ID}}" | tr '\n' ' ')"
+  debug "Considering the following service IDs: ${SERVICES[*]} for ${STACK_NAME} deployment"
 
-    service_ids=$(docker stack services -q "$STACK_NAME")
+  while [ ! $(( ${#HEALTHY[@]} + ${#MISSING[@]} )) -eq ${#SERVICES[@]} ]; do
+    for service in $(docker ps -f "name=$STACK_NAME" -q); do
+      debug "Polling $service for deployment status"
 
-    for service_id in $service_ids; do
-      # see: https://github.com/moby/moby/issues/28012
-      service_state=$(docker service inspect --format "{{if .UpdateStatus}}{{.UpdateStatus.State}}{{else}}created{{end}}" "$service_id")
+      healthcheck=$(docker inspect --format "{{ json .State }}" "$service" | jq "try(.Health.Status // \"missing\")")
+      name=$(docker inspect --format '{{ index .Config.Labels "com.docker.swarm.service.name" }}' "$service")
 
-      debug "$service_id has state: $service_state"
+      if [[ ${MISSING[*]} =~ ${name} ]] || [[ ${HEALTHY[*]} =~ ${name} ]]; then
+        debug "$name already marked as missing healthcheck / healthy status"
+        continue
+      fi
 
-      case "$service_state" in
-        created|completed)
-          ;;
-        paused|rollback_completed)
-          has_errors=1
-          ;;
-        *)
-          all_services_done=0
-          ;;
-      esac
+      if [[ "$healthcheck" == "\"missing\"" ]] && [[ ! "${MISSING[*]}" =~ $name ]]; then
+        MISSING+=("$name")
+        debug "No healthcheck configured for $name"
+        continue
+      fi
+
+      if [[ "$healthcheck" == "\"healthy\"" ]] && [[ ! "${HEALTHY[*]}" =~ $name ]]; then
+        HEALTHY+=("$name")
+        debug "Marking $name with healthy status"
+        continue
+      fi
+
+      if [[ "$healthcheck" == \""unhealthy"\" ]]; then
+        logs=$(docker inspect --format "{{ json .State.Health.Log }}" "$service")
+        exitcode="$(echo "$logs" | $JQ '.[-1] | .ExitCode')"
+        warning "Healthcheck for new instance of $name is failing (exit code: $exitcode)"
+        warning "$(echo "$logs" | $JQ -r '.[-1] | .Output')"
+        error "healthcheck for $name is failing, this deployment did not succeed :("
+      fi
     done
 
-    if [ "$all_services_done" == "1" ]; then
-      if [ "$has_errors" == "1" ]; then
-        warning "Deployment appears to have failed"
-        warning "Run \"abra app ${STACK_NAME} logs \" to see app logs"
-        warning "Run \"abra app ${STACK_NAME} ps \" to see app status"
-        break
-      else
-        warning "Deployment appears to have suceeded"
-        break
-      fi
-    else
-      sleep 1
+    idx=$(("$idx" + 1))
+    if [[ $idx -eq "$TIMEOUT" ]]; then
+      error "Waiting for healthy status timed out, this deployment did not succeed :("
     fi
 
+    sleep 1
+    debug "Deploying: $(( ${#HEALTHY[@]} + ${#MISSING[@]} ))/${#SERVICES[@]} (timeout: $idx/$TIMEOUT)"
   done
 }
@@ -863,14 +885,14 @@ get_servers() {
 
 get_app_secrets() {
   # FIXME 3wc: requires bash 4, use for loop instead
-  mapfile -t PASSWORDS < <(grep "SECRET.*VERSION.*" "$ENV_FILE")
+  mapfile -t PASSWORDS < <(grep "^SECRET.*VERSION.*" "$ENV_FILE")
 }
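The added `^` anchor is the whole fix: an unanchored `SECRET.*VERSION` also matches variables that merely mention those words. With a hypothetical env file:

```shell
envfile=$(mktemp)
printf 'SECRET_DB_PASSWORD_VERSION=v1\nMY_NOT_SECRET_VERSION=v1\n' > "$envfile"

unanchored=$(grep -c "SECRET.*VERSION" "$envfile")   # matches both lines
anchored=$(grep -c "^SECRET.*VERSION" "$envfile")    # matches only the first
echo "$unanchored $anchored"   # prints: 2 1
```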
 
 load_instance() {
   APP="$abra__app_"
 
   # load all files matching "$APP.env" into ENV_FILES array
-  mapfile -t ENV_FILES < <(find -L "$ABRA_DIR" -name "$APP.env")
+  mapfile -t ENV_FILES < <(find -L "$ABRA_DIR/servers/" -name "$APP.env")
   # FIXME 3wc: requires bash 4, use for loop instead
 
   case "${#ENV_FILES[@]}" in
@@ -1342,7 +1364,7 @@ sub_app_deploy (){
   if [ -n "$abra__version_" ]; then
     VERSION="$abra__version_"
     if ! printf '%s\0' "${RECIPE_VERSIONS[@]}" | grep -Fqxz -- "$VERSION"; then
-      error "'$version' doesn't appear to be a valid version of $TYPE"
+      error "'$VERSION' doesn't appear to be a valid version of $TYPE"
     fi
     info "Chose version $VERSION"
   else
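The `printf '%s\0' | grep -Fqxz` pattern used in that check does exact, NUL-delimited membership testing, so one version string can't sneak in as a substring or prefix of another. A sketch with made-up version tags:

```shell
RECIPE_VERSIONS=("1.0.0+2.3" "1.1.0+2.4")

is_valid_version() {
  # -F literal match, -x whole line, -z NUL-delimited entries, -q quiet
  printf '%s\0' "${RECIPE_VERSIONS[@]}" | grep -Fqxz -- "$1"
}

is_valid_version "1.1.0+2.4" && echo "exact tag accepted"
is_valid_version "1.1.0" || echo "partial tag rejected"
```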
@@ -1598,6 +1620,9 @@ sub_app_secret_insert() {
   # shellcheck disable=SC2059
   printf "$PW" | docker secret create "${STACK_NAME}_${SECRET}_${VERSION}" - > /dev/null
 
+  # shellcheck disable=SC2181
+  if [[ $? != 0 ]]; then exit 1; fi # exit if secret wasn't created
+
   if [ "$STORE_WITH_PASS" == "true" ] && type pass > /dev/null 2>&1; then
     echo "$PW" | pass insert "hosts/$DOCKER_CONTEXT/${STACK_NAME}/${SECRET}" -m > /dev/null
     success "pass: hosts/$DOCKER_CONTEXT/${STACK_NAME}/${SECRET}"
@@ -1692,11 +1717,9 @@ sub_app_secret_generate(){
   fi
 
   if [[ -n "$length" ]]; then
-    require_binary pwgen
-    abra__cmd_="pwgen -s $length 1"
+    abra__cmd_="pwgen_native $length"
   else
-    require_binary pwqgen
-    abra__cmd_=pwqgen
+    abra__cmd_=pwqgen_native
   fi
 
   PWGEN=${abra__cmd_}
@@ -1706,7 +1729,7 @@ sub_app_secret_generate(){
     error "Required arguments missing"
   fi
 
-  PW=$($PWGEN|tr -d "\n")
+  PW=$($PWGEN)
 
   success "Password: $PW"
@@ -2183,7 +2206,7 @@ sub_recipe_release() {
   fi
   info "Fetching $service_image metadata from Docker Hub"
   service_data=$(skopeo inspect "docker://$service_image")
-  service_digest=$(echo "$service_data" | jq -r '.Digest' | cut -d':' -f2 | cut -c-8)
+  service_digest=$(echo "$service_data" | $JQ -r '.Digest' | cut -d':' -f2 | cut -c-8)
 
   label="coop-cloud.\${STACK_NAME}.$service.version=${service_tag}-${service_digest}"
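The digest pipeline drops the `sha256:` prefix and keeps the first eight hex characters; with a hypothetical digest value:

```shell
# Hypothetical .Digest value as skopeo inspect would report it
digest="sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
short=$(echo "$digest" | cut -d':' -f2 | cut -c-8)
echo "$short"   # prints: a3ed95ca
```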
@@ -2218,7 +2241,7 @@ sub_recipe_release() {
 
   success "All compose files updated; new version is $new_version"
 
-  if [ "$abra___no_prompt" = "false" ]; then
+  if [ "$abra___no_prompt" = "false" ] && [ "$bump" = "false" ]; then
     read -rp "Commit your changes to git? [y/N]? " choice
 
     if [ "${choice,,}" != "y" ]; then

@@ -2226,7 +2249,7 @@ sub_recipe_release() {
     fi
   fi
 
-  if [ "$abra___no_prompt" = "false" ]; then
+  if [ "$abra___no_prompt" = "false" ] && [ "$bump" = "false" ]; then
     git commit -avem "Version $new_version; sync labels" || exit
   else
     git commit -am "Version $new_version; sync labels" || true
@@ -2255,7 +2278,7 @@ sub_recipe_release() {
   if [ "$abra___no_prompt" = "false" ]; then
     read -rp "Git push this new tag? [y/N]? " choice
 
-    if [ "${choice,,}" != "y" ]; then
+    if [ "${choice,,}" = "y" ]; then
       git push && git push --tags
     fi
   else
@@ -2542,18 +2565,6 @@ sub_network() {
 abra() {
   require_bash_4
 
-  # TODO (3wc): we either need to do this, or add 'shellcheck disable' all over
-  # the place to handle the dynamically-defined vars
-  declare abra___stack abra___env abra__command_ abra__args_ \
-    abra__secret_ abra__version_ abra__data_ abra___user abra__host_ \
-    abra__type_ abra__port_ abra__user_ abra__service_ abra__src_ abra__dst_ \
-    abra___server abra___domain abra___pass abra___secrets abra___status \
-    abra___no_tty abra___app_name abra__subcommands_ abra___skip_update \
-    abra___skip_check abra__backup_file_ abra___verbose abra___debug \
-    abra___help abra___branch abra___volumes abra__provider_ abra___type \
-    abra___dev abra___update abra___no_prompt abra___force \
-    abra__recipe_ abra___fast abra__volume_ abra___bump abra___chaos
-
   if ! type tput > /dev/null 2>&1; then
     tput() {
       echo -n
@@ -2617,7 +2628,7 @@ abra() {
   # Use abra__command_ in case `command` is provided (i.e. `volume` or `stack`)
   CMD="sub_${abra__command_}"
   if type "$CMD" > /dev/null 2>&1; then
-    # shellcheck disable=SC2086
+    # shellcheck disable=SC2086,SC2048
     "$CMD" ${abra__args_[*]}
   else
     docopt_exit
bin/app-json.py

@@ -6,9 +6,11 @@
 # ~/.abra/apps), and format it as JSON so that it can be hosted here:
 # https://apps.coopcloud.tech
 
+import argparse
 from json import dump
-from os import chdir, getcwd, listdir
+from os import chdir, environ, getcwd, listdir
 from os.path import basename
+from pathlib import Path
 from re import findall, search
 from subprocess import DEVNULL
@@ -25,6 +27,24 @@ from abralib import (
     log,
 )
 
+parser = argparse.ArgumentParser(description="Generate a new apps.json")
+parser.add_argument("--output", type=Path, default=f"{getcwd()}/apps.json")
+
+
+def skopeo_login():
+    """Log into the docker registry to avoid rate limits."""
+    user = environ.get("SKOPEO_USER")
+    password = environ.get("SKOPEO_PASSWORD")
+    registry = environ.get("SKOPEO_REGISTRY", "docker.io")
+
+    if not user or not password:
+        log.info("Failed to log in via Skopeo due to missing env vars")
+        return
+
+    login_cmd = f"skopeo login {registry} -u {user} -p {password}"
+    output = _run_cmd(login_cmd, shell=True)
+    log.info(f"Skopeo login attempt: {output}")
+
+
 def get_published_apps_json():
     """Retrieve already published apps json."""
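The argparse wiring added above is a single flag. A self-contained sketch of its behaviour (the default here is simplified to a relative path; the real script defaults to `apps.json` in the current working directory):

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(description="Generate a new apps.json")
parser.add_argument("--output", type=Path, default=Path("apps.json"))

# CI passes --output to redirect the generated file elsewhere
args = parser.parse_args(["--output", "/tmp/ci/apps.json"])
print(args.output)   # prints: /tmp/ci/apps.json
```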
@@ -195,11 +215,14 @@ def get_app_versions(app_path, cached_apps_json):
 
 def main():
     """Run the script."""
+    args = parser.parse_args()
+
+    skopeo_login()
+
     repos_json = get_repos_json()
     clone_all_apps(repos_json)
 
-    target = f"{getcwd()}/apps.json"
-    with open(target, "w", encoding="utf-8") as handle:
+    with open(args.output, "w", encoding="utf-8") as handle:
         dump(
             generate_apps_json(repos_json),
             handle,

@@ -208,7 +231,7 @@ def main():
             sort_keys=True,
         )
 
-    log.info(f"Successfully generated {target}")
+    log.info(f"Successfully generated {args.output}")
 
 
 main()
@@ -6,13 +6,15 @@
 
 from os import chdir, environ, listdir
 
-from abralib import (
-    CLONES_PATH,
-    REPOS_TO_SKIP,
-    _run_cmd,
-    clone_all_apps,
-    get_repos_json,
-    log,
-)
+from abralib import CLONES_PATH, _run_cmd, clone_all_apps, get_repos_json, log
+
+REPOS_TO_SKIP = (
+    "backup-bot",
+    "docker-dind-bats-kcov",
+    "docs.coopcloud.tech",
+    "pyabra",
+    "radicle-seed-node",
+    "swarm-cronjob",
+)
@@ -32,9 +34,7 @@ def main():
         log.info(f"Mirroring {app}...")
 
         token = environ.get("GITHUB_ACCESS_TOKEN")
-        remote = (
-            f"https://decentral1se:{token}@github.com/Autonomic-Cooperative/{app}.git"
-        )
+        remote = f"https://coopcloudbot:{token}@github.com/Coop-Cloud/{app}.git"
 
         _run_cmd(
             f"git remote add github {remote} || true",
deploy/install.abra.coopcloud.tech/installer

@@ -1,10 +1,180 @@
-#!/bin/bash
+#!/usr/bin/env bash
+
+# shellcheck disable=SC2154,SC2034
 
 ABRA_VERSION="9.0.0"
 GIT_URL="https://git.autonomic.zone/coop-cloud/abra"
 ABRA_SRC="$GIT_URL/raw/tag/$ABRA_VERSION/abra"
 ABRA_DIR="${ABRA_DIR:-$HOME/.abra}"
 
+DOC="
+abra command-line installer script
+
+Usage:
+  installer [options]
+
+Options:
+  -h, --help       Show this message and exit
+  -d, --dev        Install bleeding edge development version
+  -n, --no-prompt  Don't prompt for input and run non-interactively
+  -p, --no-deps    Don't attempt to install system dependencies
+"
+
+# docopt parser below, refresh this parser with `docopt.sh installer`
+# shellcheck disable=2016,1075
+docopt() { parse() { if ${DOCOPT_DOC_CHECK:-true}; then local doc_hash
+if doc_hash=$(printf "%s" "$DOC" | (sha256sum 2>/dev/null || shasum -a 256)); then
+if [[ ${doc_hash:0:5} != "$digest" ]]; then
+stderr "The current usage doc (${doc_hash:0:5}) does not match \
+what the parser was generated with (${digest})
+Run \`docopt.sh\` to refresh the parser."; _return 70; fi; fi; fi
+local root_idx=$1; shift; argv=("$@"); parsed_params=(); parsed_values=()
+left=(); testdepth=0; local arg; while [[ ${#argv[@]} -gt 0 ]]; do
+if [[ ${argv[0]} = "--" ]]; then for arg in "${argv[@]}"; do
+parsed_params+=('a'); parsed_values+=("$arg"); done; break
+elif [[ ${argv[0]} = --* ]]; then parse_long
+elif [[ ${argv[0]} = -* && ${argv[0]} != "-" ]]; then parse_shorts
+elif ${DOCOPT_OPTIONS_FIRST:-false}; then for arg in "${argv[@]}"; do
+parsed_params+=('a'); parsed_values+=("$arg"); done; break; else
+parsed_params+=('a'); parsed_values+=("${argv[0]}"); argv=("${argv[@]:1}"); fi
+done; local idx; if ${DOCOPT_ADD_HELP:-true}; then
+for idx in "${parsed_params[@]}"; do [[ $idx = 'a' ]] && continue
+if [[ ${shorts[$idx]} = "-h" || ${longs[$idx]} = "--help" ]]; then
+stdout "$trimmed_doc"; _return 0; fi; done; fi
+if [[ ${DOCOPT_PROGRAM_VERSION:-false} != 'false' ]]; then
+for idx in "${parsed_params[@]}"; do [[ $idx = 'a' ]] && continue
+if [[ ${longs[$idx]} = "--version" ]]; then stdout "$DOCOPT_PROGRAM_VERSION"
+_return 0; fi; done; fi; local i=0; while [[ $i -lt ${#parsed_params[@]} ]]; do
+left+=("$i"); ((i++)) || true; done
+if ! required "$root_idx" || [ ${#left[@]} -gt 0 ]; then error; fi; return 0; }
+parse_shorts() { local token=${argv[0]}; local value; argv=("${argv[@]:1}")
+[[ $token = -* && $token != --* ]] || _return 88; local remaining=${token#-}
+while [[ -n $remaining ]]; do local short="-${remaining:0:1}"
+remaining="${remaining:1}"; local i=0; local similar=(); local match=false
+for o in "${shorts[@]}"; do if [[ $o = "$short" ]]; then similar+=("$short")
+[[ $match = false ]] && match=$i; fi; ((i++)) || true; done
+if [[ ${#similar[@]} -gt 1 ]]; then
+error "${short} is specified ambiguously ${#similar[@]} times"
+elif [[ ${#similar[@]} -lt 1 ]]; then match=${#shorts[@]}; value=true
+shorts+=("$short"); longs+=(''); argcounts+=(0); else value=false
+if [[ ${argcounts[$match]} -ne 0 ]]; then if [[ $remaining = '' ]]; then
+if [[ ${#argv[@]} -eq 0 || ${argv[0]} = '--' ]]; then
+error "${short} requires argument"; fi; value=${argv[0]}; argv=("${argv[@]:1}")
+else value=$remaining; remaining=''; fi; fi; if [[ $value = false ]]; then
+value=true; fi; fi; parsed_params+=("$match"); parsed_values+=("$value"); done
+}; parse_long() { local token=${argv[0]}; local long=${token%%=*}
+local value=${token#*=}; local argcount; argv=("${argv[@]:1}")
+[[ $token = --* ]] || _return 88; if [[ $token = *=* ]]; then eq='='; else eq=''
+value=false; fi; local i=0; local similar=(); local match=false
+for o in "${longs[@]}"; do if [[ $o = "$long" ]]; then similar+=("$long")
+[[ $match = false ]] && match=$i; fi; ((i++)) || true; done
+if [[ $match = false ]]; then i=0; for o in "${longs[@]}"; do
+if [[ $o = $long* ]]; then similar+=("$long"); [[ $match = false ]] && match=$i
+fi; ((i++)) || true; done; fi; if [[ ${#similar[@]} -gt 1 ]]; then
+error "${long} is not a unique prefix: ${similar[*]}?"
+elif [[ ${#similar[@]} -lt 1 ]]; then
+[[ $eq = '=' ]] && argcount=1 || argcount=0; match=${#shorts[@]}
+[[ $argcount -eq 0 ]] && value=true; shorts+=(''); longs+=("$long")
+argcounts+=("$argcount"); else if [[ ${argcounts[$match]} -eq 0 ]]; then
+if [[ $value != false ]]; then
+error "${longs[$match]} must not have an argument"; fi
+elif [[ $value = false ]]; then
+if [[ ${#argv[@]} -eq 0 || ${argv[0]} = '--' ]]; then
|
||||
error "${long} requires argument"; fi; value=${argv[0]}; argv=("${argv[@]:1}")
|
||||
fi; if [[ $value = false ]]; then value=true; fi; fi; parsed_params+=("$match")
|
||||
parsed_values+=("$value"); }; required() { local initial_left=("${left[@]}")
|
||||
local node_idx; ((testdepth++)) || true; for node_idx in "$@"; do
|
||||
if ! "node_$node_idx"; then left=("${initial_left[@]}"); ((testdepth--)) || true
|
||||
return 1; fi; done; if [[ $((--testdepth)) -eq 0 ]]; then
|
||||
left=("${initial_left[@]}"); for node_idx in "$@"; do "node_$node_idx"; done; fi
|
||||
return 0; }; optional() { local node_idx; for node_idx in "$@"; do
|
||||
"node_$node_idx"; done; return 0; }; switch() { local i
|
||||
for i in "${!left[@]}"; do local l=${left[$i]}
|
||||
if [[ ${parsed_params[$l]} = "$2" ]]; then
|
||||
left=("${left[@]:0:$i}" "${left[@]:((i+1))}")
|
||||
[[ $testdepth -gt 0 ]] && return 0; if [[ $3 = true ]]; then
|
||||
eval "((var_$1++))" || true; else eval "var_$1=true"; fi; return 0; fi; done
|
||||
return 1; }; stdout() { printf -- "cat <<'EOM'\n%s\nEOM\n" "$1"; }; stderr() {
|
||||
printf -- "cat <<'EOM' >&2\n%s\nEOM\n" "$1"; }; error() {
|
||||
[[ -n $1 ]] && stderr "$1"; stderr "$usage"; _return 1; }; _return() {
|
||||
printf -- "exit %d\n" "$1"; exit "$1"; }; set -e; trimmed_doc=${DOC:1:333}
|
||||
usage=${DOC:37:28}; digest=36916; shorts=(-h -d -n -p)
|
||||
longs=(--help --dev --no-prompt --no-deps); argcounts=(0 0 0 0); node_0(){
|
||||
switch __help 0; }; node_1(){ switch __dev 1; }; node_2(){ switch __no_prompt 2
|
||||
}; node_3(){ switch __no_deps 3; }; node_4(){ optional 0 1 2 3; }; node_5(){
|
||||
optional 4; }; node_6(){ required 5; }; node_7(){ required 6; }
|
||||
cat <<<' docopt_exit() { [[ -n $1 ]] && printf "%s\n" "$1" >&2
|
||||
printf "%s\n" "${DOC:37:28}" >&2; exit 1; }'; unset var___help var___dev \
|
||||
var___no_prompt var___no_deps; parse 7 "$@"; local prefix=${DOCOPT_PREFIX:-''}
|
||||
unset "${prefix}__help" "${prefix}__dev" "${prefix}__no_prompt" \
|
||||
"${prefix}__no_deps"; eval "${prefix}"'__help=${var___help:-false}'
|
||||
eval "${prefix}"'__dev=${var___dev:-false}'
|
||||
eval "${prefix}"'__no_prompt=${var___no_prompt:-false}'
|
||||
eval "${prefix}"'__no_deps=${var___no_deps:-false}'; local docopt_i=1
|
||||
[[ $BASH_VERSION =~ ^4.3 ]] && docopt_i=2; for ((;docopt_i>0;docopt_i--)); do
|
||||
declare -p "${prefix}__help" "${prefix}__dev" "${prefix}__no_prompt" \
|
||||
"${prefix}__no_deps"; done; }
|
||||
# docopt parser above, complete command for generating this parser is `docopt.sh installer`
|
||||
|
||||
function prompt_confirm {
  if [ "$no_prompt" == "true" ]; then
    return
  fi

  read -rp "Continue? [y/N]? " choice

  case "$choice" in
    y|Y ) return ;;
    * ) exit;;
  esac
}

function show_banner {
  echo ""
  echo "   ____                            ____ _                 _ "
  echo "  / ___|___         ___  _ __     / ___| | ___  _   _  __| |"
  echo " | |   / _ \ _____ / _ \| '_ \   | |   | |/ _ \| | | |/ _' |"
  echo " | |__| (_) |_____| (_) | |_) |  | |___| | (_) | |_| | (_| |"
  echo "  \____\___/       \___/| .__/    \____|_|\___/ \__,_|\__,_|"
  echo "                        |_|"
  echo ""
}

function install_docker {
  sudo apt-get remove docker docker-engine docker.io containerd runc
  sudo apt-get install -yq \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  echo \
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
  sudo apt-get install -yq docker-ce docker-ce-cli containerd.io
}

function install_requirements {
  if [ -f "/etc/debian_version" ]; then
    echo "Detected Debian based distribution, attempting to install system requirements..."

    sudo apt update && sudo apt install -y \
      curl \
      passwdqc \
      pwgen

    echo "Install Docker (https://docs.docker.com/engine/install/debian/)?"
    prompt_confirm
    install_docker
  else
    echo "Sorry, we only support Debian based distributions at the moment"
    echo "You'll have to install the requirements manually for your distribution"
    echo "See https://git.autonomic.zone/coop-cloud/abra#requirements for more"
  fi
}

function install_abra_release {
  mkdir -p "$HOME/.local/bin"
  curl "$ABRA_SRC" > "$HOME/.local/bin/abra"
@ -24,7 +194,25 @@ function install_abra_dev {
}

function run_installation {
  if [ "$1" = "--dev" ]; then
  show_banner

  DOCOPT_PREFIX=installer_
  DOCOPT_ADD_HELP=false
  eval "$(docopt "$@")"

  dev="$installer___dev"
  no_prompt="$installer___no_prompt"
  no_deps="$installer___no_deps"

  if [ "$no_deps" == "false" ]; then
    install_requirements
  fi

  if ! type curl > /dev/null 2>&1; then
    error "'curl' program is not installed, cannot continue..."
  fi

  if [ "$dev" == "true" ]; then
    install_abra_dev
  else
    install_abra_release
24
makefile
@ -1,4 +1,4 @@
.PHONY: test shellcheck docopt release-installer build push
.PHONY: test shellcheck docopt release-installer build push deploy-docopt symlink

test:
	@sudo DOCKER_CONTEXT=default docker run \
@ -21,8 +21,9 @@ shellcheck:
		--rm \
		-v $$(pwd):/workdir \
		koalaman/shellcheck-alpine \
		shellcheck /workdir/abra && \
		shellcheck /workdir/bin/*.sh
		sh -c "shellcheck /workdir/abra && \
			shellcheck /workdir/bin/*.sh && \
			shellcheck /workdir/deploy/install.abra.coopcloud.tech/installer"

docopt:
	@if [ ! -d ".venv" ]; then \
@ -32,6 +33,14 @@ docopt:
	fi
	.venv/bin/docopt.sh abra

deploy-docopt:
	@if [ ! -d ".venv" ]; then \
		python3 -m venv .venv && \
		.venv/bin/pip install -U pip setuptools wheel && \
		.venv/bin/pip install docopt-sh; \
	fi
	.venv/bin/docopt.sh deploy/install.abra.coopcloud.tech/installer

release-installer:
	@DOCKER_CONTEXT=swarm.autonomic.zone \
		docker stack rm abra-installer-script && \
@ -39,7 +48,12 @@ release-installer:
	DOCKER_CONTEXT=swarm.autonomic.zone docker stack deploy -c compose.yml abra-installer-script

build:
	@docker build -t decentral1se/abra .
	@docker build -t thecoopcloud/abra .

push: build
	@docker push decentral1se/abra
	@docker push thecoopcloud/abra

symlink:
	@mkdir -p ~/.abra/servers/ && \
		ln -srf tests/default ~/.abra/servers && \
		ln -srf tests/apps/* ~/.abra/apps

84
tests/apps/works/compose.yml
Normal file
@ -0,0 +1,84 @@
---

# The goal of this compose file is to have a testing ground for understanding
# what cases we need to handle to get stable deployments. For that, we need to
# work with healthchecks and deploy configurations quite closely. If you run
# the `make symlink` target then this will be loaded into a "fake" app on your
# local machine which you can deploy with `abra`.

version: "3.8"
services:
  r1_should_work:
    image: redis:alpine
    deploy:
      update_config:
        failure_action: rollback
        order: start-first
      rollback_config:
        order: start-first
      restart_policy:
        max_attempts: 1
    healthcheck:
      test: redis-cli ping
      interval: 2s
      retries: 3
      start_period: 1s
      timeout: 3s

  r2_broken_health_check:
    image: redis:alpine
    deploy:
      update_config:
        failure_action: rollback
        order: start-first
      rollback_config:
        order: start-first
      restart_policy:
        max_attempts: 3
    healthcheck:
      test: foobar
      interval: 2s
      retries: 3
      start_period: 1s
      timeout: 3s

  r3_no_health_check:
    image: redis:alpine
    deploy:
      update_config:
        failure_action: rollback
        order: start-first
      rollback_config:
        order: start-first
      restart_policy:
        max_attempts: 3

  r4_disabled_health_check:
    image: redis:alpine
    deploy:
      update_config:
        failure_action: rollback
        order: start-first
      rollback_config:
        order: start-first
      restart_policy:
        max_attempts: 3
    healthcheck:
      disable: true

  r5_should_also_work:
    image: redis:alpine
    deploy:
      update_config:
        failure_action: rollback
        order: start-first
      rollback_config:
        order: start-first
      restart_policy:
        max_attempts: 1
    healthcheck:
      test: redis-cli ping
      interval: 2s
      retries: 3
      start_period: 1s
      timeout: 3s
1
tests/default/works.env
Normal file
@ -0,0 +1 @@
TYPE=works