Compare commits
181 commits: v19.03.0-b...v19.03.2-r
| SHA1 | Author | Date | |
|---|---|---|---|
| 578ab52ece | |||
| c8e9c04d19 | |||
| 2fead2a50f | |||
| df1fe15cf6 | |||
| be9adbd5c1 | |||
| 2907276eca | |||
| 59b02c04bf | |||
| 6a3eb417d5 | |||
| c30ccb308d | |||
| 1572845a2f | |||
| caad34cf58 | |||
| bf683dfe52 | |||
| 307befd7e2 | |||
| b58270ba69 | |||
| 0ecfcb5997 | |||
| 0ea69840c6 | |||
| 208de55a17 | |||
| 1a8077b814 | |||
| fa0e2597e6 | |||
| f357def036 | |||
| 792ce891be | |||
| d473c60571 | |||
| b020a36d10 | |||
| d2e8ff9e20 | |||
| 10a899b6bd | |||
| 41718b98f6 | |||
| caf21526a0 | |||
| 5b38d82aa0 | |||
| e303dfb6fd | |||
| 94b98bfa21 | |||
| 87e400e44e | |||
| 8cb2456248 | |||
| 11b15544c5 | |||
| 344adac7a6 | |||
| 2027d17a9d | |||
| ff881608fb | |||
| 8947ee2709 | |||
| 2f1931f9eb | |||
| 8164090257 | |||
| e803e487c3 | |||
| 5b636878fc | |||
| 3bc3f0390e | |||
| 296e10c0c5 | |||
| a63faebcf1 | |||
| 49236a4391 | |||
| 17b3250f0f | |||
| 5d246f4998 | |||
| f913afa98c | |||
| 90f256aeab | |||
| ee10970b05 | |||
| 35c929ed5e | |||
| 60eb4ceaf7 | |||
| 1b15368c47 | |||
| a720cf572f | |||
| ec7a9ad6e4 | |||
| 5e413159e5 | |||
| d4226d2f73 | |||
| a7c10adf4e | |||
| a4f41d94db | |||
| 71e1883ca0 | |||
| 06eb05570a | |||
| a1b83ffd2c | |||
| 649097ffe0 | |||
| 57f1de13b3 | |||
| c5431132d7 | |||
| c66cebee7a | |||
| c105a58f65 | |||
| 545fd2ad76 | |||
| 315f7d7d04 | |||
| 6aedc5e912 | |||
| 3ac398aa49 | |||
| 781c427788 | |||
| 47e66c5812 | |||
| 9933222452 | |||
| 3f5553548b | |||
| c8273616ee | |||
| 8aebc31806 | |||
| 57ef4e32f4 | |||
| c15fb3a8e5 | |||
| cb07256868 | |||
| 5ec13f81a2 | |||
| 394c393998 | |||
| a4ba5831a0 | |||
| ac45214f7d | |||
| 12a1cf4783 | |||
| 7fd21aefd8 | |||
| 3f9063e775 | |||
| 8758cdca10 | |||
| 529b1e7ec7 | |||
| b8bfba8dc6 | |||
| d6ddcdfa6a | |||
| 7380aae601 | |||
| 6a6cd35985 | |||
| 941a493f49 | |||
| 1e275568f1 | |||
| 2a78b4e9a3 | |||
| 8cf8fc27fa | |||
| 68d67f2cbf | |||
| c1754d9e5d | |||
| af9b8c1be3 | |||
| 292fc5c580 | |||
| 11f5e33a90 | |||
| f28d9cc929 | |||
| eb2bfeccf7 | |||
| c1a4fb4922 | |||
| e243174b30 | |||
| af053bc278 | |||
| 30cc5d96b3 | |||
| 70f48f2231 | |||
| 9a0b171192 | |||
| c94308fa99 | |||
| 1ed02c40fe | |||
| 8ca1f0bb7d | |||
| 59952a0146 | |||
| ba8388f052 | |||
| 6a562c9b33 | |||
| df4dc54374 | |||
| 84dc462ea4 | |||
| ac234326a6 | |||
| eeaa4e543a | |||
| 1962ec66bb | |||
| d365225c32 | |||
| fe19be2530 | |||
| 5ad82fafb3 | |||
| f99e0b00e9 | |||
| 04751fd58e | |||
| 438426e0fc | |||
| 71570160c1 | |||
| a3efd5d195 | |||
| 84b3805feb | |||
| 225c9b189a | |||
| 552e8d1a73 | |||
| 2432af701a | |||
| 49bd6b729d | |||
| 5b3f171482 | |||
| f02d94afbb | |||
| c61435b9c7 | |||
| d043ab5993 | |||
| 80d2496f99 | |||
| 337a9611e2 | |||
| 8c5460a2cc | |||
| cf47bb2cc2 | |||
| acb24f5164 | |||
| c30e94533c | |||
| 767fafdb32 | |||
| b6cee4567c | |||
| 34806a8b4c | |||
| 058f4337a4 | |||
| a9c26efc3c | |||
| 9d37657f34 | |||
| 34e119e571 | |||
| f07e16d42c | |||
| 40968111cc | |||
| c8d685457b | |||
| 25e6a64e2a | |||
| 58ec72afca | |||
| 42ec51e1ae | |||
| 4cacd1304a | |||
| 01f4f2e80a | |||
| 6511da877f | |||
| 8b9cdab4e6 | |||
| e0f20fd86a | |||
| 409c590fcf | |||
| cad20c759f | |||
| a125283e01 | |||
| 893f4a1194 | |||
| 9aa0d553c0 | |||
| 6026ce4a8b | |||
| c55c801faf | |||
| ac758d9f80 | |||
| 1cefe057cd | |||
| d6af3e143e | |||
| f019bdcace | |||
| ed8733a940 | |||
| 7945010874 | |||
| 5bc9f490a9 | |||
| ed838bff1f | |||
| c662ba03de | |||
| 89f9d806ff | |||
| 8bb152d967 | |||
| 62a15c16fc |
@@ -1,2 +1,6 @@
.dockerignore
.git
build
.gitignore
appveyor.yml
build
circle.yml
.mailmap (5 changes)
@@ -44,6 +44,7 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Ao Li <la9249@163.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
@@ -394,6 +395,8 @@ Stefan Berger <stefanb@linux.vnet.ibm.com>
Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Scherer <stefan.scherer@docker.com> <scherer_stefan@icloud.com>
Stephen Day <stevvooe@gmail.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
@@ -402,6 +405,8 @@ Steve Richards <steve.richards@docker.com> stevejr <>
Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com>
Sun Jianbo <wonderflow.sun@gmail.com> <wonderflow@zju.edu.cn>
Sunny Gogoi <indiasuny000@gmail.com>
Sunny Gogoi <indiasuny000@gmail.com> <me@darkowlzz.space>
Sven Dowideit <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <sven@t440s.home.gateway>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@docker.com>
AUTHORS (9 changes)
@@ -58,6 +58,7 @@ Anton Polonskiy <anton.polonskiy@gmail.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonis Kalipetis <akalipetis@gmail.com>
Anusha Ragunathan <anusha.ragunathan@docker.com>
Ao Li <la9249@163.com>
Arash Deshmeh <adeshmeh@ca.ibm.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Ashwini Oruganti <ashwini.oruganti@gmail.com>
@@ -158,6 +159,7 @@ David Cramer <davcrame@cisco.com>
David Dooling <dooling@gmail.com>
David Gageot <david@gageot.net>
David Lechner <david@lechnology.com>
David Scott <dave@recoil.org>
David Sheets <dsheets@docker.com>
David Williamson <david.williamson@docker.com>
David Xia <dxia@spotify.com>
@@ -300,6 +302,7 @@ Jim Galasyn <jim.galasyn@docker.com>
Jimmy Leger <jimmy.leger@gmail.com>
Jimmy Song <rootsongjc@gmail.com>
jimmyxian <jimmyxian2004@yahoo.com.cn>
Jintao Zhang <zhangjintao9020@gmail.com>
Joao Fernandes <joao.fernandes@docker.com>
Joe Doliner <jdoliner@pachyderm.io>
Joe Gordon <joe.gordon0@gmail.com>
@@ -471,9 +474,11 @@ Mrunal Patel <mrunalp@gmail.com>
muicoder <muicoder@gmail.com>
Muthukumar R <muthur@gmail.com>
Máximo Cuadros <mcuadros@gmail.com>
Mårten Cassel <marten.cassel@gmail.com>
Nace Oroz <orkica@gmail.com>
Nahum Shalman <nshalman@omniti.com>
Nalin Dahyabhai <nalin@redhat.com>
Nao YONASHIRO <owan.orisano@gmail.com>
Nassim 'Nass' Eddequiouaq <eddequiouaq.nassim@gmail.com>
Natalie Parker <nparker@omnifone.com>
Nate Brennand <nate.brennand@clever.com>
@@ -595,7 +600,7 @@ Spencer Brown <spencer@spencerbrown.org>
squeegels <1674195+squeegels@users.noreply.github.com>
Srini Brahmaroutu <srbrahma@us.ibm.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <scherer_stefan@icloud.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Weil <sw@weilnetz.de>
Stephane Jeandeaux <stephane.jeandeaux@gmail.com>
Stephen Day <stevvooe@gmail.com>
@@ -605,7 +610,9 @@ Steve Richards <steve.richards@docker.com>
Steven Burgess <steven.a.burgess@hotmail.com>
Subhajit Ghosh <isubuz.g@gmail.com>
Sun Jianbo <wonderflow.sun@gmail.com>
Sune Keller <absukl@almbrand.dk>
Sungwon Han <sungwon.han@navercorp.com>
Sunny Gogoi <indiasuny000@gmail.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
Sébastien HOUZÉ <cto@verylastroom.com>
Jenkinsfile (vendored, 3 changes)
@@ -5,8 +5,9 @@ wrappedNode(label: 'linux && x86_64', cleanWorkspace: true) {

stage "Run end-to-end test suite"
sh "docker version"
sh "docker info"
sh "E2E_UNIQUE_ID=clie2e${BUILD_NUMBER} \
IMAGE_TAG=clie2e${BUILD_NUMBER} \
make -f docker.Makefile test-e2e"
DOCKER_BUILDKIT=1 make -f docker.Makefile test-e2e"
}
}
@@ -4,7 +4,7 @@ clone_folder: c:\gopath\src\github.com\docker\cli

environment:
GOPATH: c:\gopath
GOVERSION: 1.12.1
GOVERSION: 1.12.8
DEPVERSION: v0.4.1

install:
@@ -20,4 +20,4 @@ build_script:
- ps: .\scripts\make.ps1 -Binary

test_script:
- ps: .\scripts\make.ps1 -TestUnit
- ps: .\scripts\make.ps1 -TestUnit
circle.yml (62 changes)
@@ -4,35 +4,39 @@ jobs:

lint:
working_directory: /work
docker: [{image: 'docker:18.03-git'}]
docker: [{image: 'docker:18.09-git'}]
environment:
DOCKER_BUILDKIT: 1
steps:
- checkout
- setup_remote_docker:
version: 18.03.1-ce
reusable: true
exclusive: false
version: 18.09.3
reusable: true
exclusive: false
- run:
command: docker version
- run:
name: "Lint"
command: |
docker build -f dockerfiles/Dockerfile.lint --tag cli-linter:$CIRCLE_BUILD_NUM .
docker build --progress=plain -f dockerfiles/Dockerfile.lint --tag cli-linter:$CIRCLE_BUILD_NUM .
docker run --rm cli-linter:$CIRCLE_BUILD_NUM

cross:
working_directory: /work
docker: [{image: 'docker:18.03-git'}]
docker: [{image: 'docker:18.09-git'}]
environment:
DOCKER_BUILDKIT: 1
parallelism: 3
steps:
- checkout
- setup_remote_docker:
version: 18.03.1-ce
reusable: true
exclusive: false
version: 18.09.3
reusable: true
exclusive: false
- run:
name: "Cross"
command: |
docker build -f dockerfiles/Dockerfile.cross --tag cli-builder:$CIRCLE_BUILD_NUM .
docker build --progress=plain -f dockerfiles/Dockerfile.cross --tag cli-builder:$CIRCLE_BUILD_NUM .
name=cross-$CIRCLE_BUILD_NUM-$CIRCLE_NODE_INDEX
docker run \
-e CROSS_GROUP=$CIRCLE_NODE_INDEX \
@@ -46,18 +50,20 @@ jobs:

test:
working_directory: /work
docker: [{image: 'docker:18.03-git'}]
docker: [{image: 'docker:18.09-git'}]
environment:
DOCKER_BUILDKIT: 1
steps:
- checkout
- setup_remote_docker:
version: 18.03.1-ce
reusable: true
exclusive: false
version: 18.09.3
reusable: true
exclusive: false
- run:
name: "Unit Test with Coverage"
command: |
mkdir -p test-results/unit-tests
docker build -f dockerfiles/Dockerfile.dev --tag cli-builder:$CIRCLE_BUILD_NUM .
docker build --progress=plain -f dockerfiles/Dockerfile.dev --tag cli-builder:$CIRCLE_BUILD_NUM .
docker run \
-e GOTESTSUM_JUNITFILE=/tmp/junit.xml \
--name \
@@ -77,37 +83,43 @@
echo 'Codecov failed to upload'
- store_test_results:
path: test-results
- store_artifacts:
path: test-results

validate:
working_directory: /work
docker: [{image: 'docker:18.03-git'}]
docker: [{image: 'docker:18.09-git'}]
environment:
DOCKER_BUILDKIT: 1
steps:
- checkout
- setup_remote_docker:
version: 18.03.1-ce
reusable: true
exclusive: false
version: 18.09.3
reusable: true
exclusive: false
- run:
name: "Validate Vendor, Docs, and Code Generation"
command: |
rm -f .dockerignore # include .git
docker build -f dockerfiles/Dockerfile.dev --tag cli-builder-with-git:$CIRCLE_BUILD_NUM .
docker build --progress=plain -f dockerfiles/Dockerfile.dev --tag cli-builder-with-git:$CIRCLE_BUILD_NUM .
docker run --rm cli-builder-with-git:$CIRCLE_BUILD_NUM \
make ci-validate
no_output_timeout: 15m
shellcheck:
working_directory: /work
docker: [{image: 'docker:18.03-git'}]
docker: [{image: 'docker:18.09-git'}]
environment:
DOCKER_BUILDKIT: 1
steps:
- checkout
- setup_remote_docker:
version: 18.03.1-ce
reusable: true
exclusive: false
version: 18.09.3
reusable: true
exclusive: false
- run:
name: "Run shellcheck"
command: |
docker build -f dockerfiles/Dockerfile.shellcheck --tag cli-validator:$CIRCLE_BUILD_NUM .
docker build --progress=plain -f dockerfiles/Dockerfile.shellcheck --tag cli-validator:$CIRCLE_BUILD_NUM .
docker run --rm cli-validator:$CIRCLE_BUILD_NUM \
make shellcheck
workflows:
@@ -101,5 +101,6 @@ func main() {
SchemaVersion: "0.1.0",
Vendor: "Docker Inc.",
Version: "testing",
Experimental: os.Getenv("HELLO_EXPERIMENTAL") != "",
})
}
@@ -12,9 +12,10 @@ import (
)

type fakeCandidate struct {
path string
exec bool
meta string
path string
exec bool
meta string
allowExperimental bool
}

func (c *fakeCandidate) Path() string {
@@ -35,9 +36,10 @@ func TestValidateCandidate(t *testing.T) {
builtinName = NamePrefix + "builtin"
builtinAlias = NamePrefix + "alias"

badPrefixPath = "/usr/local/libexec/cli-plugins/wobble"
badNamePath = "/usr/local/libexec/cli-plugins/docker-123456"
goodPluginPath = "/usr/local/libexec/cli-plugins/" + goodPluginName
badPrefixPath = "/usr/local/libexec/cli-plugins/wobble"
badNamePath = "/usr/local/libexec/cli-plugins/docker-123456"
goodPluginPath = "/usr/local/libexec/cli-plugins/" + goodPluginName
metaExperimental = `{"SchemaVersion": "0.1.0", "Vendor": "e2e-testing", "Experimental": true}`
)

fakeroot := &cobra.Command{Use: "docker"}
@@ -49,40 +51,46 @@ func TestValidateCandidate(t *testing.T) {
})

for _, tc := range []struct {
c *fakeCandidate
name string
c *fakeCandidate

// Either err or invalid may be non-empty, but not both (both can be empty for a good plugin).
err string
invalid string
}{
/* Each failing one of the tests */
{c: &fakeCandidate{path: ""}, err: "plugin candidate path cannot be empty"},
{c: &fakeCandidate{path: badPrefixPath}, err: fmt.Sprintf("does not have %q prefix", NamePrefix)},
{c: &fakeCandidate{path: badNamePath}, invalid: "did not match"},
{c: &fakeCandidate{path: builtinName}, invalid: `plugin "builtin" duplicates builtin command`},
{c: &fakeCandidate{path: builtinAlias}, invalid: `plugin "alias" duplicates an alias of builtin command "builtin"`},
{c: &fakeCandidate{path: goodPluginPath, exec: false}, invalid: fmt.Sprintf("failed to fetch metadata: faked a failure to exec %q", goodPluginPath)},
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `xyzzy`}, invalid: "invalid character"},
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{}`}, invalid: `plugin SchemaVersion "" is not valid`},
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "xyzzy"}`}, invalid: `plugin SchemaVersion "xyzzy" is not valid`},
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0"}`}, invalid: "plugin metadata does not define a vendor"},
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0", "Vendor": ""}`}, invalid: "plugin metadata does not define a vendor"},
{name: "empty path", c: &fakeCandidate{path: ""}, err: "plugin candidate path cannot be empty"},
{name: "bad prefix", c: &fakeCandidate{path: badPrefixPath}, err: fmt.Sprintf("does not have %q prefix", NamePrefix)},
{name: "bad path", c: &fakeCandidate{path: badNamePath}, invalid: "did not match"},
{name: "builtin command", c: &fakeCandidate{path: builtinName}, invalid: `plugin "builtin" duplicates builtin command`},
{name: "builtin alias", c: &fakeCandidate{path: builtinAlias}, invalid: `plugin "alias" duplicates an alias of builtin command "builtin"`},
{name: "fetch failure", c: &fakeCandidate{path: goodPluginPath, exec: false}, invalid: fmt.Sprintf("failed to fetch metadata: faked a failure to exec %q", goodPluginPath)},
{name: "metadata not json", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `xyzzy`}, invalid: "invalid character"},
{name: "empty schemaversion", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{}`}, invalid: `plugin SchemaVersion "" is not valid`},
{name: "invalid schemaversion", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "xyzzy"}`}, invalid: `plugin SchemaVersion "xyzzy" is not valid`},
{name: "no vendor", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0"}`}, invalid: "plugin metadata does not define a vendor"},
{name: "empty vendor", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0", "Vendor": ""}`}, invalid: "plugin metadata does not define a vendor"},
{name: "experimental required", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: metaExperimental}, invalid: "requires experimental CLI"},
// This one should work
{c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0", "Vendor": "e2e-testing"}`}},
{name: "valid", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0", "Vendor": "e2e-testing"}`}},
{name: "valid + allowing experimental", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: `{"SchemaVersion": "0.1.0", "Vendor": "e2e-testing"}`, allowExperimental: true}},
{name: "experimental + allowing experimental", c: &fakeCandidate{path: goodPluginPath, exec: true, meta: metaExperimental, allowExperimental: true}},
} {
p, err := newPlugin(tc.c, fakeroot)
if tc.err != "" {
assert.ErrorContains(t, err, tc.err)
} else if tc.invalid != "" {
assert.NilError(t, err)
assert.Assert(t, cmp.ErrorType(p.Err, reflect.TypeOf(&pluginError{})))
assert.ErrorContains(t, p.Err, tc.invalid)
} else {
assert.NilError(t, err)
assert.Equal(t, NamePrefix+p.Name, goodPluginName)
assert.Equal(t, p.SchemaVersion, "0.1.0")
assert.Equal(t, p.Vendor, "e2e-testing")
}
t.Run(tc.name, func(t *testing.T) {
p, err := newPlugin(tc.c, fakeroot, tc.c.allowExperimental)
if tc.err != "" {
assert.ErrorContains(t, err, tc.err)
} else if tc.invalid != "" {
assert.NilError(t, err)
assert.Assert(t, cmp.ErrorType(p.Err, reflect.TypeOf(&pluginError{})))
assert.ErrorContains(t, p.Err, tc.invalid)
} else {
assert.NilError(t, err)
assert.Equal(t, NamePrefix+p.Name, goodPluginName)
assert.Equal(t, p.SchemaVersion, "0.1.0")
assert.Equal(t, p.Vendor, "e2e-testing")
}
})
}
}
@@ -1,6 +1,7 @@
package manager

import (
"fmt"
"io/ioutil"
"os"
"os/exec"
@@ -27,10 +28,23 @@ func (e errPluginNotFound) Error() string {
return "Error: No such CLI plugin: " + string(e)
}

type errPluginRequireExperimental string

// Note: errPluginRequireExperimental implements notFound so that the plugin
// is skipped when listing the plugins.
func (e errPluginRequireExperimental) NotFound() {}

func (e errPluginRequireExperimental) Error() string {
return fmt.Sprintf("plugin candidate %q: requires experimental CLI", string(e))
}

type notFound interface{ NotFound() }

// IsNotFound is true if the given error is due to a plugin not being found.
func IsNotFound(err error) bool {
if e, ok := err.(*pluginError); ok {
err = e.Cause()
}
_, ok := err.(notFound)
return ok
}
@@ -117,12 +131,14 @@ func ListPlugins(dockerCli command.Cli, rootcmd *cobra.Command) ([]Plugin, error
continue
}
c := &candidate{paths[0]}
p, err := newPlugin(c, rootcmd)
p, err := newPlugin(c, rootcmd, dockerCli.ClientInfo().HasExperimental)
if err != nil {
return nil, err
}
p.ShadowedPaths = paths[1:]
plugins = append(plugins, p)
if !IsNotFound(p.Err) {
p.ShadowedPaths = paths[1:]
plugins = append(plugins, p)
}
}

return plugins, nil
@@ -159,11 +175,19 @@ func PluginRunCommand(dockerCli command.Cli, name string, rootcmd *cobra.Command
}

c := &candidate{path: path}
plugin, err := newPlugin(c, rootcmd)
plugin, err := newPlugin(c, rootcmd, dockerCli.ClientInfo().HasExperimental)
if err != nil {
return nil, err
}
if plugin.Err != nil {
// TODO: why are we not returning plugin.Err?

err := plugin.Err.(*pluginError).Cause()
// if an experimental plugin was invoked directly while experimental mode is off
// provide a more useful error message than "not found".
if err, ok := err.(errPluginRequireExperimental); ok {
return nil, err
}
return nil, errPluginNotFound(name)
}
cmd := exec.Command(plugin.Path, args...)
@@ -22,4 +22,7 @@ type Metadata struct {
ShortDescription string `json:",omitempty"`
// URL is a pointer to the plugin's homepage.
URL string `json:",omitempty"`
// Experimental specifies whether the plugin is experimental.
// Experimental plugins are not displayed on non-experimental CLIs.
Experimental bool `json:",omitempty"`
}
@@ -33,7 +33,9 @@ type Plugin struct {
// is set, and is always a `pluginError`, but the `Plugin` is still
// returned with no error. An error is only returned due to a
// non-recoverable error.
func newPlugin(c Candidate, rootcmd *cobra.Command) (Plugin, error) {
//
// nolint: gocyclo
func newPlugin(c Candidate, rootcmd *cobra.Command, allowExperimental bool) (Plugin, error) {
path := c.Path()
if path == "" {
return Plugin{}, errors.New("plugin candidate path cannot be empty")
@@ -94,7 +96,10 @@ func newPlugin(c Candidate, rootcmd *cobra.Command) (Plugin, error) {
p.Err = wrapAsPluginError(err, "invalid metadata")
return p, nil
}

if p.Experimental && !allowExperimental {
p.Err = &pluginError{errPluginRequireExperimental(p.Name)}
return p, nil
}
if p.Metadata.SchemaVersion != "0.1.0" {
p.Err = NewPluginError("plugin SchemaVersion %q is not valid, must be 0.1.0", p.Metadata.SchemaVersion)
return p, nil
@@ -114,11 +114,14 @@ func newPluginCommand(dockerCli *command.DockerCli, plugin *cobra.Command, meta
fullname := manager.NamePrefix + name

cmd := &cobra.Command{
Use: fmt.Sprintf("docker [OPTIONS] %s [ARG...]", name),
Short: fullname + " is a Docker CLI plugin",
SilenceUsage: true,
SilenceErrors: true,
PersistentPreRunE: PersistentPreRunE,
Use: fmt.Sprintf("docker [OPTIONS] %s [ARG...]", name),
Short: fullname + " is a Docker CLI plugin",
SilenceUsage: true,
SilenceErrors: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
// We can't use this as the hook directly since it is initialised later (in runPlugin)
return PersistentPreRunE(cmd, args)
},
TraverseChildren: true,
DisableFlagsInUseLine: true,
}
@@ -5,6 +5,7 @@ import (

"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/image"
)

// NewBuilderCommand returns a cobra command for `builder` subcommands
@@ -18,6 +19,7 @@ func NewBuilderCommand(dockerCli command.Cli) *cobra.Command {
}
cmd.AddCommand(
NewPruneCommand(dockerCli),
image.NewBuildCommand(dockerCli),
)
return cmd
}
@@ -14,7 +14,6 @@ import (
"github.com/docker/cli/cli/config/configfile"
dcontext "github.com/docker/cli/cli/context"
"github.com/docker/cli/cli/context/docker"
kubcontext "github.com/docker/cli/cli/context/kubernetes"
"github.com/docker/cli/cli/context/store"
"github.com/docker/cli/cli/debug"
cliflags "github.com/docker/cli/cli/flags"
@@ -210,11 +209,11 @@ func (cli *DockerCli) Initialize(opts *cliflags.ClientOptions, ops ...Initialize

cli.configFile = cliconfig.LoadDefaultConfigFile(cli.err)

baseContextSore := store.New(cliconfig.ContextStoreDir(), cli.contextStoreConfig)
baseContextStore := store.New(cliconfig.ContextStoreDir(), cli.contextStoreConfig)
cli.contextStore = &ContextStoreWithDefault{
Store: baseContextSore,
Store: baseContextStore,
Resolver: func() (*DefaultContext, error) {
return resolveDefaultContext(opts.Common, cli.ConfigFile(), cli.Err())
return ResolveDefaultContext(opts.Common, cli.ConfigFile(), cli.contextStoreConfig, cli.Err())
},
}
cli.currentContext, err = resolveContextName(opts.Common, cli.configFile, cli.contextStore)
@@ -259,10 +258,11 @@ func (cli *DockerCli) Initialize(opts *cliflags.ClientOptions, ops ...Initialize

// NewAPIClientFromFlags creates a new APIClient from command line flags
func NewAPIClientFromFlags(opts *cliflags.CommonOptions, configFile *configfile.ConfigFile) (client.APIClient, error) {
storeConfig := DefaultContextStoreConfig()
store := &ContextStoreWithDefault{
Store: store.New(cliconfig.ContextStoreDir(), defaultContextStoreConfig()),
Store: store.New(cliconfig.ContextStoreDir(), storeConfig),
Resolver: func() (*DefaultContext, error) {
return resolveDefaultContext(opts, configFile, ioutil.Discard)
return ResolveDefaultContext(opts, configFile, storeConfig, ioutil.Discard)
},
}
contextName, err := resolveContextName(opts, configFile, store)
@@ -290,8 +290,8 @@ func newAPIClientFromEndpoint(ep docker.Endpoint, configFile *configfile.ConfigF
return client.NewClientWithOpts(clientOpts...)
}

func resolveDockerEndpoint(s store.Store, contextName string) (docker.Endpoint, error) {
ctxMeta, err := s.GetContextMetadata(contextName)
func resolveDockerEndpoint(s store.Reader, contextName string) (docker.Endpoint, error) {
ctxMeta, err := s.GetMetadata(contextName)
if err != nil {
return docker.Endpoint{}, err
}
@@ -399,7 +399,7 @@ func (cli *DockerCli) CurrentContext() string {
// StackOrchestrator resolves which stack orchestrator is in use
func (cli *DockerCli) StackOrchestrator(flagValue string) (Orchestrator, error) {
currentContext := cli.CurrentContext()
ctxRaw, err := cli.ContextStore().GetContextMetadata(currentContext)
ctxRaw, err := cli.ContextStore().GetMetadata(currentContext)
if store.IsErrContextDoesNotExist(err) {
// case where the currentContext has been removed (CLI behavior is to fallback to using DOCKER_HOST based resolution)
return GetStackOrchestrator(flagValue, "", cli.ConfigFile().StackOrchestrator, cli.Err())
@@ -453,7 +453,7 @@ func NewDockerCli(ops ...DockerCliOption) (*DockerCli, error) {
WithContentTrustFromEnv(),
WithContainerizedClient(containerizedengine.NewClient),
}
cli.contextStoreConfig = defaultContextStoreConfig()
cli.contextStoreConfig = DefaultContextStoreConfig()
ops = append(defaultOps, ops...)
if err := cli.Apply(ops...); err != nil {
return nil, err
@@ -500,7 +500,7 @@ func UserAgent() string {
// - if DOCKER_CONTEXT is set, use this value
// - if Config file has a globally set "CurrentContext", use this value
// - fallbacks to default HOST, uses TLS config from flags/env vars
func resolveContextName(opts *cliflags.CommonOptions, config *configfile.ConfigFile, contextstore store.Store) (string, error) {
func resolveContextName(opts *cliflags.CommonOptions, config *configfile.ConfigFile, contextstore store.Reader) (string, error) {
if opts.Context != "" && len(opts.Hosts) > 0 {
return "", errors.New("Conflicting options: either specify --host or --context, not both")
}
@@ -517,7 +517,7 @@ func resolveContextName(opts *cliflags.CommonOptions, config *configfile.ConfigF
return ctxName, nil
}
if config != nil && config.CurrentContext != "" {
_, err := contextstore.GetContextMetadata(config.CurrentContext)
_, err := contextstore.GetMetadata(config.CurrentContext)
if store.IsErrContextDoesNotExist(err) {
return "", errors.Errorf("Current context %q is not found on the file system, please check your config file at %s", config.CurrentContext, config.Filename)
}
@@ -526,10 +526,22 @@ func resolveContextName(opts *cliflags.CommonOptions, config *configfile.ConfigF
return DefaultContextName, nil
}

func defaultContextStoreConfig() store.Config {
var defaultStoreEndpoints = []store.NamedTypeGetter{
store.EndpointTypeGetter(docker.DockerEndpoint, func() interface{} { return &docker.EndpointMeta{} }),
}

// RegisterDefaultStoreEndpoints registers a new named endpoint
// metadata type with the default context store config, so that
// endpoint will be supported by stores using the config returned by
// DefaultContextStoreConfig.
func RegisterDefaultStoreEndpoints(ep ...store.NamedTypeGetter) {
defaultStoreEndpoints = append(defaultStoreEndpoints, ep...)
}

// DefaultContextStoreConfig returns a new store.Config with the default set of endpoints configured.
func DefaultContextStoreConfig() store.Config {
return store.NewConfig(
func() interface{} { return &DockerContext{} },
store.EndpointTypeGetter(docker.DockerEndpoint, func() interface{} { return &docker.EndpointMeta{} }),
store.EndpointTypeGetter(kubcontext.KubernetesEndpoint, func() interface{} { return &kubcontext.EndpointMeta{} }),
defaultStoreEndpoints...,
)
}
@@ -7,7 +7,6 @@ import (
"strconv"

"github.com/docker/cli/cli/context/docker"
"github.com/docker/cli/cli/context/kubernetes"
"github.com/docker/cli/cli/context/store"
"github.com/docker/cli/cli/streams"
clitypes "github.com/docker/cli/types"
@@ -97,7 +96,7 @@ func WithContainerizedClient(containerizedFn func(string) (clitypes.Containerize
func WithContextEndpointType(endpointName string, endpointType store.TypeGetter) DockerCliOption {
return func(cli *DockerCli) error {
switch endpointName {
case docker.DockerEndpoint, kubernetes.KubernetesEndpoint:
case docker.DockerEndpoint:
return fmt.Errorf("cannot change %q endpoint type", endpointName)
}
cli.contextStoreConfig.SetEndpoint(endpointName, endpointType)
@@ -6,6 +6,7 @@ import (
 	"crypto/x509"
 	"fmt"
 	"io/ioutil"
+	"net/http"
 	"os"
 	"runtime"
 	"testing"
@@ -79,6 +80,24 @@ func TestNewAPIClientFromFlagsWithAPIVersionFromEnv(t *testing.T) {
 	assert.Check(t, is.Equal(customVersion, apiclient.ClientVersion()))
 }
 
+func TestNewAPIClientFromFlagsWithHttpProxyEnv(t *testing.T) {
+	defer env.Patch(t, "HTTP_PROXY", "http://proxy.acme.com:1234")()
+	defer env.Patch(t, "DOCKER_HOST", "tcp://docker.acme.com:2376")()
+
+	opts := &flags.CommonOptions{}
+	configFile := &configfile.ConfigFile{}
+	apiclient, err := NewAPIClientFromFlags(opts, configFile)
+	assert.NilError(t, err)
+	transport, ok := apiclient.HTTPClient().Transport.(*http.Transport)
+	assert.Assert(t, ok)
+	assert.Assert(t, transport.Proxy != nil)
+	request, err := http.NewRequest(http.MethodGet, "tcp://docker.acme.com:2376", nil)
+	assert.NilError(t, err)
+	url, err := transport.Proxy(request)
+	assert.NilError(t, err)
+	assert.Check(t, is.Equal("http://proxy.acme.com:1234", url.String()))
+}
+
 type fakeClient struct {
 	client.Client
 	pingFunc func() (types.Ping, error)
@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io/ioutil"
 	"path"
+	"reflect"
 	"regexp"
 	"strconv"
 	"strings"
@@ -485,6 +486,8 @@ func parse(flags *pflag.FlagSet, copts *containerOptions, serverOS string) (*con
 		return nil, err
 	}
 
+	securityOpts, maskedPaths, readonlyPaths := parseSystemPaths(securityOpts)
+
 	storageOpts, err := parseStorageOpts(copts.storageOpt.GetAll())
 	if err != nil {
 		return nil, err
@@ -635,6 +638,8 @@ func parse(flags *pflag.FlagSet, copts *containerOptions, serverOS string) (*con
 		Sysctls: copts.sysctls.GetAll(),
 		Runtime: copts.runtime,
 		Mounts: mounts,
+		MaskedPaths: maskedPaths,
+		ReadonlyPaths: readonlyPaths,
 	}
 
 	if copts.autoRemove && !hostConfig.RestartPolicy.IsNone() {
@@ -703,6 +708,15 @@ func parseNetworkOpts(copts *containerOptions) (map[string]*networktypes.Endpoin
 		if _, ok := endpoints[n.Target]; ok {
 			return nil, errdefs.InvalidParameter(errors.Errorf("network %q is specified multiple times", n.Target))
 		}
+
+		// For backward compatibility: if no custom options are provided for the network,
+		// and only a single network is specified, omit the endpoint-configuration
+		// on the client (the daemon will still create it when creating the container)
+		if i == 0 && len(copts.netMode.Value()) == 1 {
+			if ep == nil || reflect.DeepEqual(*ep, networktypes.EndpointSettings{}) {
+				continue
+			}
+		}
 		endpoints[n.Target] = ep
 	}
 	if hasUserDefined && hasNonUserDefined {
@@ -825,6 +839,25 @@ func parseSecurityOpts(securityOpts []string) ([]string, error) {
 	return securityOpts, nil
 }
 
+// parseSystemPaths checks if `systempaths=unconfined` security option is set,
+// and returns the `MaskedPaths` and `ReadonlyPaths` accordingly. An updated
+// list of security options is returned with this option removed, because the
+// `unconfined` option is handled client-side, and should not be sent to the
+// daemon.
+func parseSystemPaths(securityOpts []string) (filtered, maskedPaths, readonlyPaths []string) {
+	filtered = securityOpts[:0]
+	for _, opt := range securityOpts {
+		if opt == "systempaths=unconfined" {
+			maskedPaths = []string{}
+			readonlyPaths = []string{}
+		} else {
+			filtered = append(filtered, opt)
+		}
+	}
+
+	return filtered, maskedPaths, readonlyPaths
+}
+
 // parses storage options per container into a map
 func parseStorageOpts(storageOpts []string) (map[string]string, error) {
 	m := make(map[string]string)
@@ -401,13 +401,13 @@ func TestParseNetworkConfig(t *testing.T) {
 		{
 			name: "single-network-legacy",
 			flags: []string{"--network", "net1"},
-			expected: map[string]*networktypes.EndpointSettings{"net1": {}},
+			expected: map[string]*networktypes.EndpointSettings{},
 			expectedCfg: container.HostConfig{NetworkMode: "net1"},
 		},
 		{
 			name: "single-network-advanced",
 			flags: []string{"--network", "name=net1"},
-			expected: map[string]*networktypes.EndpointSettings{"net1": {}},
+			expected: map[string]*networktypes.EndpointSettings{},
 			expectedCfg: container.HostConfig{NetworkMode: "net1"},
 		},
 		{
@@ -800,3 +800,57 @@ func TestValidateDevice(t *testing.T) {
 		}
 	}
 }
+
+func TestParseSystemPaths(t *testing.T) {
+	tests := []struct {
+		doc                       string
+		in, out, masked, readonly []string
+	}{
+		{
+			doc: "not set",
+			in:  []string{},
+			out: []string{},
+		},
+		{
+			doc: "not set, preserve other options",
+			in: []string{
+				"seccomp=unconfined",
+				"apparmor=unconfined",
+				"label=user:USER",
+				"foo=bar",
+			},
+			out: []string{
+				"seccomp=unconfined",
+				"apparmor=unconfined",
+				"label=user:USER",
+				"foo=bar",
+			},
+		},
+		{
+			doc:      "unconfined",
+			in:       []string{"systempaths=unconfined"},
+			out:      []string{},
+			masked:   []string{},
+			readonly: []string{},
+		},
+		{
+			doc:      "unconfined and other options",
+			in:       []string{"foo=bar", "bar=baz", "systempaths=unconfined"},
+			out:      []string{"foo=bar", "bar=baz"},
+			masked:   []string{},
+			readonly: []string{},
+		},
+		{
+			doc: "unknown option",
+			in:  []string{"foo=bar", "systempaths=unknown", "bar=baz"},
+			out: []string{"foo=bar", "systempaths=unknown", "bar=baz"},
+		},
+	}
+
+	for _, tc := range tests {
+		securityOpts, maskedPaths, readonlyPaths := parseSystemPaths(tc.in)
+		assert.DeepEqual(t, securityOpts, tc.out)
+		assert.DeepEqual(t, maskedPaths, tc.masked)
+		assert.DeepEqual(t, readonlyPaths, tc.readonly)
+	}
+}
@@ -13,7 +13,7 @@ type DockerContext struct {
 }
 
 // GetDockerContext extracts metadata from stored context metadata
-func GetDockerContext(storeMetadata store.ContextMetadata) (DockerContext, error) {
+func GetDockerContext(storeMetadata store.Metadata) (DockerContext, error) {
 	if storeMetadata.Metadata == nil {
 		// can happen if we save endpoints before assigning a context metadata
 		// it is totally valid, and we should return a default initialized value
@@ -21,6 +21,7 @@ type CreateOptions struct {
 	DefaultStackOrchestrator string
 	Docker map[string]string
 	Kubernetes map[string]string
+	From string
 }
 
 func longCreateDescription() string {
@@ -63,6 +64,7 @@ func newCreateCommand(dockerCli command.Cli) *cobra.Command {
 		"Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)")
 	flags.StringToStringVar(&opts.Docker, "docker", nil, "set the docker endpoint")
 	flags.StringToStringVar(&opts.Kubernetes, "kubernetes", nil, "set the kubernetes endpoint")
+	flags.StringVar(&opts.From, "from", "", "create context from a named context")
 	return cmd
 }
@@ -76,17 +78,26 @@ func RunCreate(cli command.Cli, o *CreateOptions) error {
 	if err != nil {
 		return errors.Wrap(err, "unable to parse default-stack-orchestrator")
 	}
-	contextMetadata := store.ContextMetadata{
-		Endpoints: make(map[string]interface{}),
-		Metadata: command.DockerContext{
-			Description: o.Description,
-			StackOrchestrator: stackOrchestrator,
-		},
-		Name: o.Name,
-	}
+	switch {
+	case o.From == "" && o.Docker == nil && o.Kubernetes == nil:
+		err = createFromExistingContext(s, cli.CurrentContext(), stackOrchestrator, o)
+	case o.From != "":
+		err = createFromExistingContext(s, o.From, stackOrchestrator, o)
+	default:
+		err = createNewContext(o, stackOrchestrator, cli, s)
+	}
+	if err == nil {
+		fmt.Fprintln(cli.Out(), o.Name)
+		fmt.Fprintf(cli.Err(), "Successfully created context %q\n", o.Name)
+	}
+	return err
+}
+
+func createNewContext(o *CreateOptions, stackOrchestrator command.Orchestrator, cli command.Cli, s store.Writer) error {
+	if o.Docker == nil {
+		return errors.New("docker endpoint configuration is required")
+	}
+	contextMetadata := newContextMetadata(stackOrchestrator, o)
 	contextTLSData := store.ContextTLSData{
 		Endpoints: make(map[string]store.EndpointTLSData),
 	}
@@ -116,22 +127,20 @@ func RunCreate(cli command.Cli, o *CreateOptions) error {
 	if err := validateEndpointsAndOrchestrator(contextMetadata); err != nil {
 		return err
 	}
-	if err := s.CreateOrUpdateContext(contextMetadata); err != nil {
+	if err := s.CreateOrUpdate(contextMetadata); err != nil {
 		return err
 	}
-	if err := s.ResetContextTLSMaterial(o.Name, &contextTLSData); err != nil {
+	if err := s.ResetTLSMaterial(o.Name, &contextTLSData); err != nil {
 		return err
 	}
-	fmt.Fprintln(cli.Out(), o.Name)
-	fmt.Fprintf(cli.Err(), "Successfully created context %q\n", o.Name)
 	return nil
 }
 
-func checkContextNameForCreation(s store.Store, name string) error {
+func checkContextNameForCreation(s store.Reader, name string) error {
 	if err := validateContextName(name); err != nil {
 		return err
 	}
-	if _, err := s.GetContextMetadata(name); !store.IsErrContextDoesNotExist(err) {
+	if _, err := s.GetMetadata(name); !store.IsErrContextDoesNotExist(err) {
 		if err != nil {
 			return errors.Wrap(err, "error while getting existing contexts")
 		}
@@ -139,3 +148,52 @@ func checkContextNameForCreation(s store.Store, name string) error {
 	}
 	return nil
 }
+
+func createFromExistingContext(s store.ReaderWriter, fromContextName string, stackOrchestrator command.Orchestrator, o *CreateOptions) error {
+	if len(o.Docker) != 0 || len(o.Kubernetes) != 0 {
+		return errors.New("cannot use --docker or --kubernetes flags when --from is set")
+	}
+	reader := store.Export(fromContextName, &descriptionAndOrchestratorStoreDecorator{
+		Reader: s,
+		description: o.Description,
+		orchestrator: stackOrchestrator,
+	})
+	defer reader.Close()
+	return store.Import(o.Name, s, reader)
+}
+
+type descriptionAndOrchestratorStoreDecorator struct {
+	store.Reader
+	description string
+	orchestrator command.Orchestrator
+}
+
+func (d *descriptionAndOrchestratorStoreDecorator) GetMetadata(name string) (store.Metadata, error) {
+	c, err := d.Reader.GetMetadata(name)
+	if err != nil {
+		return c, err
+	}
+	typedContext, err := command.GetDockerContext(c)
+	if err != nil {
+		return c, err
+	}
+	if d.description != "" {
+		typedContext.Description = d.description
+	}
+	if d.orchestrator != command.Orchestrator("") {
+		typedContext.StackOrchestrator = d.orchestrator
+	}
+	c.Metadata = typedContext
+	return c, nil
+}
+
+func newContextMetadata(stackOrchestrator command.Orchestrator, o *CreateOptions) store.Metadata {
+	return store.Metadata{
+		Endpoints: make(map[string]interface{}),
+		Metadata: command.DockerContext{
+			Description: o.Description,
+			StackOrchestrator: stackOrchestrator,
+		},
+		Name: o.Name,
+	}
+}
@@ -1,6 +1,7 @@
 package context
 
 import (
+	"fmt"
 	"io/ioutil"
 	"os"
 	"testing"
@@ -27,7 +28,7 @@ func makeFakeCli(t *testing.T, opts ...func(*test.FakeCli)) (*test.FakeCli, func
 			Store: store.New(dir, storeConfig),
 			Resolver: func() (*command.DefaultContext, error) {
 				return &command.DefaultContext{
-					Meta: store.ContextMetadata{
+					Meta: store.Metadata{
 						Endpoints: map[string]interface{}{
 							docker.DockerEndpoint: docker.EndpointMeta{
 								Host: "unix:///var/run/docker.sock",
@@ -63,7 +64,7 @@ func withCliConfig(configFile *configfile.ConfigFile) func(*test.FakeCli) {
 func TestCreateInvalids(t *testing.T) {
 	cli, cleanup := makeFakeCli(t)
 	defer cleanup()
-	assert.NilError(t, cli.ContextStore().CreateOrUpdateContext(store.ContextMetadata{Name: "existing-context"}))
+	assert.NilError(t, cli.ContextStore().CreateOrUpdate(store.Metadata{Name: "existing-context"}))
 	tests := []struct {
 		options CreateOptions
 		expecterErr string
@@ -105,13 +106,6 @@ func TestCreateInvalids(t *testing.T) {
 			},
 			expecterErr: `specified orchestrator "invalid" is invalid, please use either kubernetes, swarm or all`,
 		},
-		{
-			options: CreateOptions{
-				Name: "orchestrator-swarm-no-endpoint",
-				DefaultStackOrchestrator: "swarm",
-			},
-			expecterErr: `docker endpoint configuration is required`,
-		},
 		{
 			options: CreateOptions{
 				Name: "orchestrator-kubernetes-no-endpoint",
@@ -138,6 +132,11 @@ func TestCreateInvalids(t *testing.T) {
 	}
 }
 
+func assertContextCreateLogging(t *testing.T, cli *test.FakeCli, n string) {
+	assert.Equal(t, n+"\n", cli.OutBuffer().String())
+	assert.Equal(t, fmt.Sprintf("Successfully created context %q\n", n), cli.ErrBuffer().String())
+}
+
 func TestCreateOrchestratorSwarm(t *testing.T) {
 	cli, cleanup := makeFakeCli(t)
 	defer cleanup()
@@ -148,8 +147,7 @@ func TestCreateOrchestratorSwarm(t *testing.T) {
 		Docker: map[string]string{},
 	})
 	assert.NilError(t, err)
-	assert.Equal(t, "test\n", cli.OutBuffer().String())
-	assert.Equal(t, "Successfully created context \"test\"\n", cli.ErrBuffer().String())
+	assertContextCreateLogging(t, cli, "test")
 }
 
 func TestCreateOrchestratorEmpty(t *testing.T) {
@@ -161,11 +159,12 @@ func TestCreateOrchestratorEmpty(t *testing.T) {
 		Docker: map[string]string{},
 	})
 	assert.NilError(t, err)
+	assertContextCreateLogging(t, cli, "test")
 }
 
-func validateTestKubeEndpoint(t *testing.T, s store.Store, name string) {
+func validateTestKubeEndpoint(t *testing.T, s store.Reader, name string) {
 	t.Helper()
-	ctxMetadata, err := s.GetContextMetadata(name)
+	ctxMetadata, err := s.GetMetadata(name)
 	assert.NilError(t, err)
 	kubeMeta := ctxMetadata.Endpoints[kubernetes.KubernetesEndpoint].(kubernetes.EndpointMeta)
 	kubeEP, err := kubeMeta.WithTLSData(s, name)
@@ -185,7 +184,7 @@ func createTestContextWithKube(t *testing.T, cli command.Cli) {
 		Name: "test",
 		DefaultStackOrchestrator: "all",
 		Kubernetes: map[string]string{
-			keyFromCurrent: "true",
+			keyFrom: "default",
 		},
 		Docker: map[string]string{},
 	})
@@ -196,5 +195,171 @@ func TestCreateOrchestratorAllKubernetesEndpointFromCurrent(t *testing.T) {
 	cli, cleanup := makeFakeCli(t)
 	defer cleanup()
 	createTestContextWithKube(t, cli)
+	assertContextCreateLogging(t, cli, "test")
 	validateTestKubeEndpoint(t, cli.ContextStore(), "test")
 }
+
+func TestCreateFromContext(t *testing.T) {
+	cases := []struct {
+		name                 string
+		description          string
+		orchestrator         string
+		expectedDescription  string
+		docker               map[string]string
+		kubernetes           map[string]string
+		expectedOrchestrator command.Orchestrator
+	}{
+		{
+			name:                 "no-override",
+			expectedDescription:  "original description",
+			expectedOrchestrator: command.OrchestratorSwarm,
+		},
+		{
+			name:                 "override-description",
+			description:          "new description",
+			expectedDescription:  "new description",
+			expectedOrchestrator: command.OrchestratorSwarm,
+		},
+		{
+			name:                 "override-orchestrator",
+			orchestrator:         "kubernetes",
+			expectedDescription:  "original description",
+			expectedOrchestrator: command.OrchestratorKubernetes,
+		},
+	}
+
+	cli, cleanup := makeFakeCli(t)
+	defer cleanup()
+	revert := env.Patch(t, "KUBECONFIG", "./testdata/test-kubeconfig")
+	defer revert()
+	cli.ResetOutputBuffers()
+	assert.NilError(t, RunCreate(cli, &CreateOptions{
+		Name:        "original",
+		Description: "original description",
+		Docker: map[string]string{
+			keyHost: "tcp://42.42.42.42:2375",
+		},
+		Kubernetes: map[string]string{
+			keyFrom: "default",
+		},
+		DefaultStackOrchestrator: "swarm",
+	}))
+	assertContextCreateLogging(t, cli, "original")
+
+	cli.ResetOutputBuffers()
+	assert.NilError(t, RunCreate(cli, &CreateOptions{
+		Name:        "dummy",
+		Description: "dummy description",
+		Docker: map[string]string{
+			keyHost: "tcp://24.24.24.24:2375",
+		},
+		Kubernetes: map[string]string{
+			keyFrom: "default",
+		},
+		DefaultStackOrchestrator: "swarm",
+	}))
+	assertContextCreateLogging(t, cli, "dummy")
+
+	cli.SetCurrentContext("dummy")
+
+	for _, c := range cases {
+		t.Run(c.name, func(t *testing.T) {
+			cli.ResetOutputBuffers()
+			err := RunCreate(cli, &CreateOptions{
+				From:                     "original",
+				Name:                     c.name,
+				Description:              c.description,
+				DefaultStackOrchestrator: c.orchestrator,
+				Docker:                   c.docker,
+				Kubernetes:               c.kubernetes,
+			})
+			assert.NilError(t, err)
+			assertContextCreateLogging(t, cli, c.name)
+			newContext, err := cli.ContextStore().GetMetadata(c.name)
+			assert.NilError(t, err)
+			newContextTyped, err := command.GetDockerContext(newContext)
+			assert.NilError(t, err)
+			dockerEndpoint, err := docker.EndpointFromContext(newContext)
+			assert.NilError(t, err)
+			kubeEndpoint := kubernetes.EndpointFromContext(newContext)
+			assert.Check(t, kubeEndpoint != nil)
+			assert.Equal(t, newContextTyped.Description, c.expectedDescription)
+			assert.Equal(t, newContextTyped.StackOrchestrator, c.expectedOrchestrator)
+			assert.Equal(t, dockerEndpoint.Host, "tcp://42.42.42.42:2375")
+			assert.Equal(t, kubeEndpoint.Host, "https://someserver")
+		})
+	}
+}
+
+func TestCreateFromCurrent(t *testing.T) {
+	cases := []struct {
+		name                 string
+		description          string
+		orchestrator         string
+		expectedDescription  string
+		expectedOrchestrator command.Orchestrator
+	}{
+		{
+			name:                 "no-override",
+			expectedDescription:  "original description",
+			expectedOrchestrator: command.OrchestratorSwarm,
+		},
+		{
+			name:                 "override-description",
+			description:          "new description",
+			expectedDescription:  "new description",
+			expectedOrchestrator: command.OrchestratorSwarm,
+		},
+		{
+			name:                 "override-orchestrator",
+			orchestrator:         "kubernetes",
+			expectedDescription:  "original description",
+			expectedOrchestrator: command.OrchestratorKubernetes,
+		},
+	}
+
+	cli, cleanup := makeFakeCli(t)
+	defer cleanup()
+	revert := env.Patch(t, "KUBECONFIG", "./testdata/test-kubeconfig")
+	defer revert()
+	cli.ResetOutputBuffers()
+	assert.NilError(t, RunCreate(cli, &CreateOptions{
+		Name:        "original",
+		Description: "original description",
+		Docker: map[string]string{
+			keyHost: "tcp://42.42.42.42:2375",
+		},
+		Kubernetes: map[string]string{
+			keyFrom: "default",
+		},
+		DefaultStackOrchestrator: "swarm",
+	}))
+	assertContextCreateLogging(t, cli, "original")
+
+	cli.SetCurrentContext("original")
+
+	for _, c := range cases {
+		t.Run(c.name, func(t *testing.T) {
+			cli.ResetOutputBuffers()
+			err := RunCreate(cli, &CreateOptions{
+				Name:                     c.name,
+				Description:              c.description,
+				DefaultStackOrchestrator: c.orchestrator,
+			})
+			assert.NilError(t, err)
+			assertContextCreateLogging(t, cli, c.name)
+			newContext, err := cli.ContextStore().GetMetadata(c.name)
+			assert.NilError(t, err)
+			newContextTyped, err := command.GetDockerContext(newContext)
+			assert.NilError(t, err)
+			dockerEndpoint, err := docker.EndpointFromContext(newContext)
+			assert.NilError(t, err)
+			kubeEndpoint := kubernetes.EndpointFromContext(newContext)
+			assert.Check(t, kubeEndpoint != nil)
+			assert.Equal(t, newContextTyped.Description, c.expectedDescription)
+			assert.Equal(t, newContextTyped.StackOrchestrator, c.expectedOrchestrator)
+			assert.Equal(t, dockerEndpoint.Host, "tcp://42.42.42.42:2375")
+			assert.Equal(t, kubeEndpoint.Host, "https://someserver")
+		})
+	}
+}
@@ -29,9 +29,9 @@ func TestExportImportWithFile(t *testing.T) {
 	cli.OutBuffer().Reset()
 	cli.ErrBuffer().Reset()
 	assert.NilError(t, RunImport(cli, "test2", contextFile))
-	context1, err := cli.ContextStore().GetContextMetadata("test")
+	context1, err := cli.ContextStore().GetMetadata("test")
 	assert.NilError(t, err)
-	context2, err := cli.ContextStore().GetContextMetadata("test2")
+	context2, err := cli.ContextStore().GetMetadata("test2")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, context1.Endpoints, context2.Endpoints)
 	assert.DeepEqual(t, context1.Metadata, context2.Metadata)
@@ -57,9 +57,9 @@ func TestExportImportPipe(t *testing.T) {
 	cli.OutBuffer().Reset()
 	cli.ErrBuffer().Reset()
 	assert.NilError(t, RunImport(cli, "test2", "-"))
-	context1, err := cli.ContextStore().GetContextMetadata("test")
+	context1, err := cli.ContextStore().GetMetadata("test")
 	assert.NilError(t, err)
-	context2, err := cli.ContextStore().GetContextMetadata("test2")
+	context2, err := cli.ContextStore().GetMetadata("test2")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, context1.Endpoints, context2.Endpoints)
 	assert.DeepEqual(t, context1.Metadata, context2.Metadata)
@@ -80,7 +80,7 @@ func RunExport(dockerCli command.Cli, opts *ExportOptions) error {
 	if err := validateContextName(opts.ContextName); err != nil && opts.ContextName != command.DefaultContextName {
 		return err
 	}
-	ctxMeta, err := dockerCli.ContextStore().GetContextMetadata(opts.ContextName)
+	ctxMeta, err := dockerCli.ContextStore().GetMetadata(opts.ContextName)
 	if err != nil {
 		return err
 	}
@@ -14,7 +14,7 @@ import (
 func newImportCommand(dockerCli command.Cli) *cobra.Command {
 	cmd := &cobra.Command{
 		Use: "import CONTEXT FILE|-",
-		Short: "Import a context from a tar file",
+		Short: "Import a context from a tar or zip file",
 		Args: cli.ExactArgs(2),
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return RunImport(dockerCli, args[0], args[1])
@@ -28,6 +28,7 @@ func RunImport(dockerCli command.Cli, name string, source string) error {
 	if err := checkContextNameForCreation(dockerCli.ContextStore(), name); err != nil {
 		return err
 	}
+
 	var reader io.Reader
 	if source == "-" {
 		reader = dockerCli.In()
@@ -43,6 +44,7 @@ func RunImport(dockerCli command.Cli, name string, source string) error {
 	if err := store.Import(name, dockerCli.ContextStore(), reader); err != nil {
 		return err
 	}
+
 	fmt.Fprintln(dockerCli.Out(), name)
 	fmt.Fprintf(dockerCli.Err(), "Successfully imported context %q\n", name)
 	return nil
@@ -40,25 +40,25 @@ func newInspectCommand(dockerCli command.Cli) *cobra.Command {
 
 func runInspect(dockerCli command.Cli, opts inspectOptions) error {
 	getRefFunc := func(ref string) (interface{}, []byte, error) {
-		c, err := dockerCli.ContextStore().GetContextMetadata(ref)
+		c, err := dockerCli.ContextStore().GetMetadata(ref)
 		if err != nil {
 			return nil, nil, err
 		}
-		tlsListing, err := dockerCli.ContextStore().ListContextTLSFiles(ref)
+		tlsListing, err := dockerCli.ContextStore().ListTLSFiles(ref)
 		if err != nil {
 			return nil, nil, err
 		}
 		return contextWithTLSListing{
-			ContextMetadata: c,
-			TLSMaterial: tlsListing,
-			Storage: dockerCli.ContextStore().GetContextStorageInfo(ref),
+			Metadata: c,
+			TLSMaterial: tlsListing,
+			Storage: dockerCli.ContextStore().GetStorageInfo(ref),
 		}, nil, nil
 	}
 	return inspect.Inspect(dockerCli.Out(), opts.refs, opts.format, getRefFunc)
 }
 
 type contextWithTLSListing struct {
-	store.ContextMetadata
+	store.Metadata
 	TLSMaterial map[string]store.EndpointFiles
-	Storage store.ContextStorageInfo
+	Storage store.StorageInfo
 }
@@ -17,7 +17,7 @@ func TestInspect(t *testing.T) {
 		refs: []string{"current"},
 	}))
 	expected := string(golden.Get(t, "inspect.golden"))
-	si := cli.ContextStore().GetContextStorageInfo("current")
+	si := cli.ContextStore().GetStorageInfo("current")
 	expected = strings.Replace(expected, "<METADATA_PATH>", strings.Replace(si.MetadataPath, `\`, `\\`, -1), 1)
 	expected = strings.Replace(expected, "<TLS_PATH>", strings.Replace(si.TLSPath, `\`, `\\`, -1), 1)
 	assert.Equal(t, cli.OutBuffer().String(), expected)
@@ -2,6 +2,7 @@ package context
 
 import (
 	"fmt"
+	"os"
 	"sort"
 
 	"github.com/docker/cli/cli"
@@ -41,7 +42,7 @@ func runList(dockerCli command.Cli, opts *listOptions) error {
 		opts.format = formatter.TableFormatKey
 	}
 	curContext := dockerCli.CurrentContext()
-	contextMap, err := dockerCli.ContextStore().ListContexts()
+	contextMap, err := dockerCli.ContextStore().List()
 	if err != nil {
 		return err
 	}
@@ -76,7 +77,14 @@ func runList(dockerCli command.Cli, opts *listOptions) error {
 	sort.Slice(contexts, func(i, j int) bool {
 		return sortorder.NaturalLess(contexts[i].Name, contexts[j].Name)
 	})
-	return format(dockerCli, opts, contexts)
+	if err := format(dockerCli, opts, contexts); err != nil {
+		return err
+	}
+	if os.Getenv("DOCKER_HOST") != "" {
+		fmt.Fprint(dockerCli.Err(), "Warning: DOCKER_HOST environment variable overrides the active context. "+
+			"To use a context, either set the global --context flag, or unset DOCKER_HOST environment variable.\n")
+	}
+	return nil
 }
 
 func format(dockerCli command.Cli, opts *listOptions, contexts []*formatter.ClientContext) error {
@@ -17,7 +17,7 @@ func createTestContextWithKubeAndSwarm(t *testing.T, cli command.Cli, name strin
 		Name: name,
 		DefaultStackOrchestrator: orchestrator,
 		Description: "description of " + name,
-		Kubernetes: map[string]string{keyFromCurrent: "true"},
+		Kubernetes: map[string]string{keyFrom: "default"},
 		Docker: map[string]string{keyHost: "https://someswarmserver"},
 	})
 	assert.NilError(t, err)
@@ -18,7 +18,7 @@ import (
 )
 
 const (
-	keyFromCurrent = "from-current"
+	keyFrom = "from"
 	keyHost = "host"
 	keyCA = "ca"
 	keyCert = "cert"
@@ -36,7 +36,7 @@ type configKeyDescription struct {
 
 var (
 	allowedDockerConfigKeys = map[string]struct{}{
-		keyFromCurrent: {},
+		keyFrom: {},
 		keyHost: {},
 		keyCA: {},
 		keyCert: {},
@@ -44,15 +44,15 @@ var (
 		keySkipTLSVerify: {},
 	}
 	allowedKubernetesConfigKeys = map[string]struct{}{
-		keyFromCurrent: {},
+		keyFrom: {},
 		keyKubeconfig: {},
 		keyKubecontext: {},
 		keyKubenamespace: {},
 	}
 	dockerConfigKeysDescriptions = []configKeyDescription{
 		{
-			name: keyFromCurrent,
-			description: "Copy current Docker endpoint configuration",
+			name: keyFrom,
+			description: "Copy named context's Docker endpoint configuration",
 		},
 		{
 			name: keyHost,
@@ -77,8 +77,8 @@ var (
 	}
 	kubernetesConfigKeysDescriptions = []configKeyDescription{
 		{
-			name: keyFromCurrent,
-			description: "Copy current Kubernetes endpoint configuration",
+			name: keyFrom,
+			description: "Copy named context's Kubernetes endpoint configuration",
 		},
 		{
 			name: keyKubeconfig,
@@ -121,12 +121,15 @@ func getDockerEndpoint(dockerCli command.Cli, config map[string]string) (docker.
 	if err := validateConfig(config, allowedDockerConfigKeys); err != nil {
 		return docker.Endpoint{}, err
 	}
-	fromCurrent, err := parseBool(config, keyFromCurrent)
-	if err != nil {
-		return docker.Endpoint{}, err
-	}
-	if fromCurrent {
-		return dockerCli.DockerEndpoint(), nil
+	if contextName, ok := config[keyFrom]; ok {
+		metadata, err := dockerCli.ContextStore().GetMetadata(contextName)
+		if err != nil {
+			return docker.Endpoint{}, err
+		}
+		if ep, ok := metadata.Endpoints[docker.DockerEndpoint].(docker.EndpointMeta); ok {
+			return docker.Endpoint{EndpointMeta: ep}, nil
+		}
+		return docker.Endpoint{}, errors.Errorf("unable to get endpoint from context %q", contextName)
 	}
 	tlsData, err := context.TLSDataFromFiles(config[keyCA], config[keyCert], config[keyKey])
 	if err != nil {
@@ -169,25 +172,20 @@ func getKubernetesEndpoint(dockerCli command.Cli, config map[string]string) (*ku
 	if len(config) == 0 {
 		return nil, nil
 	}
-	fromCurrent, err := parseBool(config, keyFromCurrent)
-	if err != nil {
-		return nil, err
-	}
-	if fromCurrent {
-		if dockerCli.CurrentContext() != "" {
-			ctxMeta, err := dockerCli.ContextStore().GetContextMetadata(dockerCli.CurrentContext())
-			if err != nil {
-				return nil, err
-			}
-			endpointMeta := kubernetes.EndpointFromContext(ctxMeta)
-			if endpointMeta != nil {
-				res, err := endpointMeta.WithTLSData(dockerCli.ContextStore(), dockerCli.CurrentContext())
-				if err != nil {
-					return nil, err
-				}
-				return &res, nil
-			}
-		}
+	if contextName, ok := config[keyFrom]; ok {
+		ctxMeta, err := dockerCli.ContextStore().GetMetadata(contextName)
+		if err != nil {
+			return nil, err
+		}
+		endpointMeta := kubernetes.EndpointFromContext(ctxMeta)
+		if endpointMeta != nil {
+			res, err := endpointMeta.WithTLSData(dockerCli.ContextStore(), dockerCli.CurrentContext())
+			if err != nil {
+				return nil, err
+			}
+			return &res, nil
+		}
 
 		// fallback to env-based kubeconfig
 		kubeconfig := os.Getenv("KUBECONFIG")
 		if kubeconfig == "" {
@@ -50,7 +50,7 @@ func RunRemove(dockerCli command.Cli, opts RemoveOptions, names []string) error
 }
 
 func doRemove(dockerCli command.Cli, name string, isCurrent, force bool) error {
-	if _, err := dockerCli.ContextStore().GetContextMetadata(name); err != nil {
+	if _, err := dockerCli.ContextStore().GetMetadata(name); err != nil {
 		return err
 	}
 	if isCurrent {
@@ -64,5 +64,5 @@ func doRemove(dockerCli command.Cli, name string, isCurrent, force bool) error {
 			return err
 		}
 	}
-	return dockerCli.ContextStore().RemoveContext(name)
+	return dockerCli.ContextStore().Remove(name)
 }
@@ -18,9 +18,9 @@ func TestRemove(t *testing.T) {
 	createTestContextWithKubeAndSwarm(t, cli, "current", "all")
 	createTestContextWithKubeAndSwarm(t, cli, "other", "all")
 	assert.NilError(t, RunRemove(cli, RemoveOptions{}, []string{"other"}))
-	_, err := cli.ContextStore().GetContextMetadata("current")
+	_, err := cli.ContextStore().GetMetadata("current")
 	assert.NilError(t, err)
-	_, err = cli.ContextStore().GetContextMetadata("other")
+	_, err = cli.ContextStore().GetMetadata("other")
 	assert.Check(t, store.IsErrContextDoesNotExist(err))
 }

@@ -72,7 +72,7 @@ func RunUpdate(cli command.Cli, o *UpdateOptions) error {
 		return err
 	}
 	s := cli.ContextStore()
-	c, err := s.GetContextMetadata(o.Name)
+	c, err := s.GetMetadata(o.Name)
 	if err != nil {
 		return err
 	}
@@ -118,11 +118,11 @@ func RunUpdate(cli command.Cli, o *UpdateOptions) error {
 	if err := validateEndpointsAndOrchestrator(c); err != nil {
 		return err
 	}
-	if err := s.CreateOrUpdateContext(c); err != nil {
+	if err := s.CreateOrUpdate(c); err != nil {
 		return err
 	}
 	for ep, tlsData := range tlsDataToReset {
-		if err := s.ResetContextEndpointTLSMaterial(o.Name, ep, tlsData); err != nil {
+		if err := s.ResetEndpointTLSMaterial(o.Name, ep, tlsData); err != nil {
 			return err
 		}
 	}
@@ -132,7 +132,7 @@ func RunUpdate(cli command.Cli, o *UpdateOptions) error {
 	return nil
 }
 
-func validateEndpointsAndOrchestrator(c store.ContextMetadata) error {
+func validateEndpointsAndOrchestrator(c store.Metadata) error {
 	dockerContext, err := command.GetDockerContext(c)
 	if err != nil {
 		return err

@@ -25,7 +25,7 @@ func TestUpdateDescriptionOnly(t *testing.T) {
 		Name:        "test",
 		Description: "description",
 	}))
-	c, err := cli.ContextStore().GetContextMetadata("test")
+	c, err := cli.ContextStore().GetMetadata("test")
 	assert.NilError(t, err)
 	dc, err := command.GetDockerContext(c)
 	assert.NilError(t, err)
@@ -46,7 +46,7 @@ func TestUpdateDockerOnly(t *testing.T) {
 			keyHost: "tcp://some-host",
 		},
 	}))
-	c, err := cli.ContextStore().GetContextMetadata("test")
+	c, err := cli.ContextStore().GetMetadata("test")
 	assert.NilError(t, err)
 	dc, err := command.GetDockerContext(c)
 	assert.NilError(t, err)

@@ -2,6 +2,7 @@ package context
 
 import (
 	"fmt"
+	"os"
 
 	"github.com/docker/cli/cli/command"
 	"github.com/spf13/cobra"
@@ -25,7 +26,7 @@ func RunUse(dockerCli command.Cli, name string) error {
 	if err := validateContextName(name); err != nil && name != "default" {
 		return err
 	}
-	if _, err := dockerCli.ContextStore().GetContextMetadata(name); err != nil && name != "default" {
+	if _, err := dockerCli.ContextStore().GetMetadata(name); err != nil && name != "default" {
 		return err
 	}
 	configValue := name
@@ -39,5 +40,9 @@ func RunUse(dockerCli command.Cli, name string) error {
 	}
 	fmt.Fprintln(dockerCli.Out(), name)
+	fmt.Fprintf(dockerCli.Err(), "Current context is now %q\n", name)
+	if os.Getenv("DOCKER_HOST") != "" {
+		fmt.Fprintf(dockerCli.Err(), "Warning: DOCKER_HOST environment variable overrides the active context. "+
+			"To use %q, either set the global --context flag, or unset DOCKER_HOST environment variable.\n", name)
+	}
 	return nil
 }

@@ -3,15 +3,11 @@ package command
 
 import (
 	"fmt"
 	"io"
-	"os"
-	"path/filepath"
 
 	"github.com/docker/cli/cli/config/configfile"
 	"github.com/docker/cli/cli/context/docker"
-	"github.com/docker/cli/cli/context/kubernetes"
 	"github.com/docker/cli/cli/context/store"
 	cliflags "github.com/docker/cli/cli/flags"
-	"github.com/docker/docker/pkg/homedir"
 	"github.com/pkg/errors"
 )
 
@@ -20,9 +16,9 @@ const (
 	DefaultContextName = "default"
 )
 
-// DefaultContext contains the default context data for all enpoints
+// DefaultContext contains the default context data for all endpoints
 type DefaultContext struct {
-	Meta store.ContextMetadata
+	Meta store.Metadata
 	TLS  store.ContextTLSData
 }
 
@@ -35,8 +31,21 @@ type ContextStoreWithDefault struct {
 	Resolver DefaultContextResolver
 }
 
-// resolveDefaultContext creates a ContextMetadata for the current CLI invocation parameters
-func resolveDefaultContext(opts *cliflags.CommonOptions, config *configfile.ConfigFile, stderr io.Writer) (*DefaultContext, error) {
+// EndpointDefaultResolver is implemented by any EndpointMeta object
+// which wants to be able to populate the store with whatever their default is.
+type EndpointDefaultResolver interface {
+	// ResolveDefault returns values suitable for storing in store.Metadata.Endpoints
+	// and store.ContextTLSData.Endpoints.
+	//
+	// An error is only returned for something fatal, not simply
+	// the lack of a default (e.g. because the config file which
+	// would contain it is missing). If there is no default then
+	// returns nil, nil, nil.
+	ResolveDefault(Orchestrator) (interface{}, *store.EndpointTLSData, error)
+}
+
+// ResolveDefaultContext creates a Metadata for the current CLI invocation parameters
+func ResolveDefaultContext(opts *cliflags.CommonOptions, config *configfile.ConfigFile, storeconfig store.Config, stderr io.Writer) (*DefaultContext, error) {
 	stackOrchestrator, err := GetStackOrchestrator("", "", config.StackOrchestrator, stderr)
 	if err != nil {
 		return nil, err
@@ -44,7 +53,7 @@ func resolveDefaultContext(opts *cliflags.CommonOptions, config *configfile.Conf
 	contextTLSData := store.ContextTLSData{
 		Endpoints: make(map[string]store.EndpointTLSData),
 	}
-	contextMetadata := store.ContextMetadata{
+	contextMetadata := store.Metadata{
 		Endpoints: make(map[string]interface{}),
 		Metadata: DockerContext{
 			Description: "",
@@ -62,28 +71,36 @@ func resolveDefaultContext(opts *cliflags.CommonOptions, config *configfile.Conf
 		contextTLSData.Endpoints[docker.DockerEndpoint] = *dockerEP.TLSData.ToStoreTLSData()
 	}
 
-	// Default context uses env-based kubeconfig for Kubernetes endpoint configuration
-	kubeconfig := os.Getenv("KUBECONFIG")
-	if kubeconfig == "" {
-		kubeconfig = filepath.Join(homedir.Get(), ".kube/config")
-	}
-	kubeEP, err := kubernetes.FromKubeConfig(kubeconfig, "", "")
-	if (stackOrchestrator == OrchestratorKubernetes || stackOrchestrator == OrchestratorAll) && err != nil {
-		return nil, errors.Wrapf(err, "default orchestrator is %s but kubernetes endpoint could not be found", stackOrchestrator)
-	}
-	if err == nil {
-		contextMetadata.Endpoints[kubernetes.KubernetesEndpoint] = kubeEP.EndpointMeta
-		if kubeEP.TLSData != nil {
-			contextTLSData.Endpoints[kubernetes.KubernetesEndpoint] = *kubeEP.TLSData.ToStoreTLSData()
+	if err := storeconfig.ForeachEndpointType(func(n string, get store.TypeGetter) error {
+		if n == docker.DockerEndpoint { // handled above
+			return nil
+		}
+		ep := get()
+		if i, ok := ep.(EndpointDefaultResolver); ok {
+			meta, tls, err := i.ResolveDefault(stackOrchestrator)
+			if err != nil {
+				return err
+			}
+			if meta == nil {
+				return nil
+			}
+			contextMetadata.Endpoints[n] = meta
+			if tls != nil {
+				contextTLSData.Endpoints[n] = *tls
+			}
 		}
+		// Nothing to be done
+		return nil
+	}); err != nil {
+		return nil, err
 	}
+
 	return &DefaultContext{Meta: contextMetadata, TLS: contextTLSData}, nil
 }
 
-// ListContexts implements store.Store's ListContexts
-func (s *ContextStoreWithDefault) ListContexts() ([]store.ContextMetadata, error) {
-	contextList, err := s.Store.ListContexts()
+// List implements store.Store's List
+func (s *ContextStoreWithDefault) List() ([]store.Metadata, error) {
+	contextList, err := s.Store.List()
 	if err != nil {
 		return nil, err
 	}
@@ -94,52 +111,52 @@ func (s *ContextStoreWithDefault) ListContexts() ([]store.ContextMetadata, error
 	return append(contextList, defaultContext.Meta), nil
 }
 
-// CreateOrUpdateContext is not allowed for the default context and fails
-func (s *ContextStoreWithDefault) CreateOrUpdateContext(meta store.ContextMetadata) error {
+// CreateOrUpdate is not allowed for the default context and fails
+func (s *ContextStoreWithDefault) CreateOrUpdate(meta store.Metadata) error {
 	if meta.Name == DefaultContextName {
 		return errors.New("default context cannot be created nor updated")
 	}
-	return s.Store.CreateOrUpdateContext(meta)
+	return s.Store.CreateOrUpdate(meta)
 }
 
-// RemoveContext is not allowed for the default context and fails
-func (s *ContextStoreWithDefault) RemoveContext(name string) error {
+// Remove is not allowed for the default context and fails
+func (s *ContextStoreWithDefault) Remove(name string) error {
 	if name == DefaultContextName {
 		return errors.New("default context cannot be removed")
 	}
-	return s.Store.RemoveContext(name)
+	return s.Store.Remove(name)
 }
 
-// GetContextMetadata implements store.Store's GetContextMetadata
-func (s *ContextStoreWithDefault) GetContextMetadata(name string) (store.ContextMetadata, error) {
+// GetMetadata implements store.Store's GetMetadata
+func (s *ContextStoreWithDefault) GetMetadata(name string) (store.Metadata, error) {
 	if name == DefaultContextName {
 		defaultContext, err := s.Resolver()
 		if err != nil {
-			return store.ContextMetadata{}, err
+			return store.Metadata{}, err
 		}
 		return defaultContext.Meta, nil
 	}
-	return s.Store.GetContextMetadata(name)
+	return s.Store.GetMetadata(name)
 }
 
-// ResetContextTLSMaterial is not implemented for default context and fails
-func (s *ContextStoreWithDefault) ResetContextTLSMaterial(name string, data *store.ContextTLSData) error {
+// ResetTLSMaterial is not implemented for default context and fails
+func (s *ContextStoreWithDefault) ResetTLSMaterial(name string, data *store.ContextTLSData) error {
 	if name == DefaultContextName {
-		return errors.New("The default context store does not support ResetContextTLSMaterial")
+		return errors.New("The default context store does not support ResetTLSMaterial")
 	}
-	return s.Store.ResetContextTLSMaterial(name, data)
+	return s.Store.ResetTLSMaterial(name, data)
 }
 
-// ResetContextEndpointTLSMaterial is not implemented for default context and fails
-func (s *ContextStoreWithDefault) ResetContextEndpointTLSMaterial(contextName string, endpointName string, data *store.EndpointTLSData) error {
+// ResetEndpointTLSMaterial is not implemented for default context and fails
+func (s *ContextStoreWithDefault) ResetEndpointTLSMaterial(contextName string, endpointName string, data *store.EndpointTLSData) error {
 	if contextName == DefaultContextName {
-		return errors.New("The default context store does not support ResetContextEndpointTLSMaterial")
+		return errors.New("The default context store does not support ResetEndpointTLSMaterial")
 	}
-	return s.Store.ResetContextEndpointTLSMaterial(contextName, endpointName, data)
+	return s.Store.ResetEndpointTLSMaterial(contextName, endpointName, data)
 }
 
-// ListContextTLSFiles implements store.Store's ListContextTLSFiles
-func (s *ContextStoreWithDefault) ListContextTLSFiles(name string) (map[string]store.EndpointFiles, error) {
+// ListTLSFiles implements store.Store's ListTLSFiles
+func (s *ContextStoreWithDefault) ListTLSFiles(name string) (map[string]store.EndpointFiles, error) {
 	if name == DefaultContextName {
 		defaultContext, err := s.Resolver()
 		if err != nil {
@@ -155,11 +172,11 @@ func (s *ContextStoreWithDefault) ListContextTLSFiles(name string) (map[string]s
 		}
 		return tlsfiles, nil
 	}
-	return s.Store.ListContextTLSFiles(name)
+	return s.Store.ListTLSFiles(name)
 }
 
-// GetContextTLSData implements store.Store's GetContextTLSData
-func (s *ContextStoreWithDefault) GetContextTLSData(contextName, endpointName, fileName string) ([]byte, error) {
+// GetTLSData implements store.Store's GetTLSData
+func (s *ContextStoreWithDefault) GetTLSData(contextName, endpointName, fileName string) ([]byte, error) {
 	if contextName == DefaultContextName {
 		defaultContext, err := s.Resolver()
 		if err != nil {
@@ -171,7 +188,7 @@ func (s *ContextStoreWithDefault) GetContextTLSData(contextName, endpointName, f
 		return defaultContext.TLS.Endpoints[endpointName].Files[fileName], nil
 
 	}
-	return s.Store.GetContextTLSData(contextName, endpointName, fileName)
+	return s.Store.GetTLSData(contextName, endpointName, fileName)
 }
 
 type noDefaultTLSDataError struct {
@@ -189,10 +206,10 @@ func (e *noDefaultTLSDataError) NotFound() {}
 // IsTLSDataDoesNotExist satisfies github.com/docker/cli/cli/context/store.tlsDataDoesNotExist
 func (e *noDefaultTLSDataError) IsTLSDataDoesNotExist() {}
 
-// GetContextStorageInfo implements store.Store's GetContextStorageInfo
-func (s *ContextStoreWithDefault) GetContextStorageInfo(contextName string) store.ContextStorageInfo {
+// GetStorageInfo implements store.Store's GetStorageInfo
+func (s *ContextStoreWithDefault) GetStorageInfo(contextName string) store.StorageInfo {
 	if contextName == DefaultContextName {
-		return store.ContextStorageInfo{MetadataPath: "<IN MEMORY>", TLSPath: "<IN MEMORY>"}
+		return store.StorageInfo{MetadataPath: "<IN MEMORY>", TLSPath: "<IN MEMORY>"}
 	}
-	return s.Store.GetContextStorageInfo(contextName)
+	return s.Store.GetStorageInfo(contextName)
 }

@@ -8,7 +8,6 @@ import (
 
 	"github.com/docker/cli/cli/config/configfile"
 	"github.com/docker/cli/cli/context/docker"
-	"github.com/docker/cli/cli/context/kubernetes"
 	"github.com/docker/cli/cli/context/store"
 	cliflags "github.com/docker/cli/cli/flags"
 	"github.com/docker/go-connections/tlsconfig"
@@ -30,8 +29,8 @@ var testCfg = store.NewConfig(func() interface{} { return &testContext{} },
 	store.EndpointTypeGetter("ep2", func() interface{} { return &endpoint{} }),
 )
 
-func testDefaultMetadata() store.ContextMetadata {
-	return store.ContextMetadata{
+func testDefaultMetadata() store.Metadata {
+	return store.Metadata{
 		Endpoints: map[string]interface{}{
 			"ep1": endpoint{Foo: "bar"},
 		},
@@ -40,7 +39,7 @@ func testDefaultMetadata() store.Metadata {
 	}
 }
 
-func testStore(t *testing.T, meta store.ContextMetadata, tls store.ContextTLSData) (store.Store, func()) {
+func testStore(t *testing.T, meta store.Metadata, tls store.ContextTLSData) (store.Store, func()) {
 	//meta := testDefaultMetadata()
 	testDir, err := ioutil.TempDir("", t.Name())
 	assert.NilError(t, err)
@@ -63,22 +62,20 @@ func TestDefaultContextInitializer(t *testing.T) {
 	cli, err := NewDockerCli()
 	assert.NilError(t, err)
 	defer env.Patch(t, "DOCKER_HOST", "ssh://someswarmserver")()
-	defer env.Patch(t, "KUBECONFIG", "./testdata/test-kubeconfig")()
 	cli.configFile = &configfile.ConfigFile{
-		StackOrchestrator: "all",
+		StackOrchestrator: "swarm",
 	}
-	ctx, err := resolveDefaultContext(&cliflags.CommonOptions{
+	ctx, err := ResolveDefaultContext(&cliflags.CommonOptions{
 		TLS: true,
 		TLSOptions: &tlsconfig.Options{
 			CAFile: "./testdata/ca.pem",
 		},
-	}, cli.ConfigFile(), cli.Err())
+	}, cli.ConfigFile(), DefaultContextStoreConfig(), cli.Err())
 	assert.NilError(t, err)
 	assert.Equal(t, "default", ctx.Meta.Name)
-	assert.Equal(t, OrchestratorAll, ctx.Meta.Metadata.(DockerContext).StackOrchestrator)
+	assert.Equal(t, OrchestratorSwarm, ctx.Meta.Metadata.(DockerContext).StackOrchestrator)
 	assert.DeepEqual(t, "ssh://someswarmserver", ctx.Meta.Endpoints[docker.DockerEndpoint].(docker.EndpointMeta).Host)
 	golden.Assert(t, string(ctx.TLS.Endpoints[docker.DockerEndpoint].Files["ca.pem"]), "ca.pem")
-	assert.DeepEqual(t, "zoinx", ctx.Meta.Endpoints[kubernetes.KubernetesEndpoint].(kubernetes.EndpointMeta).DefaultNamespace)
 }
 
 func TestExportDefaultImport(t *testing.T) {
@@ -102,33 +99,33 @@ func TestExportDefaultImport(t *testing.T) {
 	err := store.Import("dest", s, r)
 	assert.NilError(t, err)
 
-	srcMeta, err := s.GetContextMetadata("default")
+	srcMeta, err := s.GetMetadata("default")
 	assert.NilError(t, err)
-	destMeta, err := s.GetContextMetadata("dest")
+	destMeta, err := s.GetMetadata("dest")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, destMeta.Metadata, srcMeta.Metadata)
 	assert.DeepEqual(t, destMeta.Endpoints, srcMeta.Endpoints)
 
-	srcFileList, err := s.ListContextTLSFiles("default")
+	srcFileList, err := s.ListTLSFiles("default")
 	assert.NilError(t, err)
-	destFileList, err := s.ListContextTLSFiles("dest")
+	destFileList, err := s.ListTLSFiles("dest")
 	assert.NilError(t, err)
 	assert.Equal(t, 1, len(destFileList))
 	assert.Equal(t, 1, len(srcFileList))
 	assert.Equal(t, 2, len(destFileList["ep2"]))
 	assert.Equal(t, 2, len(srcFileList["ep2"]))
 
-	srcData1, err := s.GetContextTLSData("default", "ep2", "file1")
+	srcData1, err := s.GetTLSData("default", "ep2", "file1")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, file1, srcData1)
-	srcData2, err := s.GetContextTLSData("default", "ep2", "file2")
+	srcData2, err := s.GetTLSData("default", "ep2", "file2")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, file2, srcData2)
 
-	destData1, err := s.GetContextTLSData("dest", "ep2", "file1")
+	destData1, err := s.GetTLSData("dest", "ep2", "file1")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, file1, destData1)
-	destData2, err := s.GetContextTLSData("dest", "ep2", "file2")
+	destData2, err := s.GetTLSData("dest", "ep2", "file2")
 	assert.NilError(t, err)
 	assert.DeepEqual(t, file2, destData2)
 }
@@ -137,7 +134,7 @@ func TestListDefaultContext(t *testing.T) {
 	meta := testDefaultMetadata()
 	s, cleanup := testStore(t, meta, store.ContextTLSData{})
 	defer cleanup()
-	result, err := s.ListContexts()
+	result, err := s.List()
 	assert.NilError(t, err)
 	assert.Equal(t, 1, len(result))
 	assert.DeepEqual(t, meta, result[0])
@@ -146,7 +143,7 @@ func TestListDefaultContext(t *testing.T) {
 func TestGetDefaultContextStorageInfo(t *testing.T) {
 	s, cleanup := testStore(t, testDefaultMetadata(), store.ContextTLSData{})
 	defer cleanup()
-	result := s.GetContextStorageInfo(DefaultContextName)
+	result := s.GetStorageInfo(DefaultContextName)
 	assert.Equal(t, "<IN MEMORY>", result.MetadataPath)
 	assert.Equal(t, "<IN MEMORY>", result.TLSPath)
 }
@@ -155,7 +152,7 @@ func TestGetDefaultContextMetadata(t *testing.T) {
 	meta := testDefaultMetadata()
 	s, cleanup := testStore(t, meta, store.ContextTLSData{})
 	defer cleanup()
-	result, err := s.GetContextMetadata(DefaultContextName)
+	result, err := s.GetMetadata(DefaultContextName)
 	assert.NilError(t, err)
 	assert.Equal(t, DefaultContextName, result.Name)
 	assert.DeepEqual(t, meta.Metadata, result.Metadata)
@@ -166,7 +163,7 @@ func TestErrCreateDefault(t *testing.T) {
 	meta := testDefaultMetadata()
 	s, cleanup := testStore(t, meta, store.ContextTLSData{})
 	defer cleanup()
-	err := s.CreateOrUpdateContext(store.ContextMetadata{
+	err := s.CreateOrUpdate(store.Metadata{
 		Endpoints: map[string]interface{}{
 			"ep1": endpoint{Foo: "bar"},
 		},
@@ -180,7 +177,7 @@ func TestErrRemoveDefault(t *testing.T) {
 	meta := testDefaultMetadata()
 	s, cleanup := testStore(t, meta, store.ContextTLSData{})
 	defer cleanup()
-	err := s.RemoveContext("default")
+	err := s.Remove("default")
 	assert.Error(t, err, "default context cannot be removed")
 }
@@ -188,6 +185,6 @@ func TestErrTLSDataError(t *testing.T) {
 	meta := testDefaultMetadata()
 	s, cleanup := testStore(t, meta, store.ContextTLSData{})
 	defer cleanup()
-	_, err := s.GetContextTLSData("default", "noop", "noop")
+	_, err := s.GetTLSData("default", "noop", "noop")
 	assert.Check(t, store.IsErrTLSDataDoesNotExist(err))
 }

@@ -14,6 +14,7 @@ import (
 	"strings"
 
 	"github.com/containerd/console"
+	"github.com/containerd/containerd/platforms"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/command/image/build"
@@ -173,7 +174,7 @@ func runBuildBuildKit(dockerCli command.Cli, options buildOptions) error {
 		}))
 	}
 
-	s.Allow(authprovider.NewDockerAuthProvider())
+	s.Allow(authprovider.NewDockerAuthProvider(os.Stderr))
 	if len(options.secrets) > 0 {
 		sp, err := parseSecretSpecs(options.secrets)
 		if err != nil {
@@ -215,6 +216,14 @@ func runBuildBuildKit(dockerCli command.Cli, options buildOptions) error {
 		})
 	}
 
+	if v := os.Getenv("BUILDKIT_PROGRESS"); v != "" && options.progress == "auto" {
+		options.progress = v
+	}
+
+	if strings.EqualFold(options.platform, "local") {
+		options.platform = platforms.DefaultString()
+	}
+
 	eg.Go(func() error {
 		defer func() { // make sure the Status ends cleanly on build errors
 			s.Close()

@@ -143,7 +143,8 @@ func runLogin(dockerCli command.Cli, opts loginOptions) error { //nolint: gocycl
 	creds := dockerCli.ConfigFile().GetCredentialsStore(serverAddress)
 
 	store, isDefault := creds.(isFileStore)
-	if isDefault {
+	// Display a warning if we're storing the users password (not a token)
+	if isDefault && authConfig.Password != "" {
 		err = displayUnencryptedWarning(dockerCli, store.GetFilename())
 		if err != nil {
 			return err

@@ -24,6 +24,7 @@ var testAuthErrors = map[string]error{
 }
 
 var expiredPassword = "I_M_EXPIRED"
+var useToken = "I_M_TOKEN"
 
 type fakeClient struct {
 	client.Client
@@ -37,6 +38,11 @@ func (c fakeClient) RegistryLogin(ctx context.Context, auth types.AuthConfig) (r
 	if auth.Password == expiredPassword {
 		return registrytypes.AuthenticateOKBody{}, fmt.Errorf("Invalid Username or Password")
 	}
+	if auth.Password == useToken {
+		return registrytypes.AuthenticateOKBody{
+			IdentityToken: auth.Password,
+		}, nil
+	}
 	err := testAuthErrors[auth.Username]
 	return registrytypes.AuthenticateOKBody{}, err
 }
@@ -90,6 +96,11 @@ func TestRunLogin(t *testing.T) {
 		Username:      validUsername,
 		Password:      expiredPassword,
 	}
+	validIdentityToken := configtypes.AuthConfig{
+		ServerAddress: storedServerAddress,
+		Username:      validUsername,
+		IdentityToken: useToken,
+	}
 	testCases := []struct {
 		inputLoginOption  loginOptions
 		inputStoredCred   *configtypes.AuthConfig
@@ -134,6 +145,16 @@ func TestRunLogin(t *testing.T) {
 			inputStoredCred: &validAuthConfig,
 			expectedErr:     testAuthErrMsg,
 		},
+		{
+			inputLoginOption: loginOptions{
+				serverAddress: storedServerAddress,
+				user:          validUsername,
+				password:      useToken,
+			},
+			inputStoredCred:   &validIdentityToken,
+			expectedErr:       "",
+			expectedSavedCred: validIdentityToken,
+		},
 	}
 	for i, tc := range testCases {
 		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {

@@ -8,7 +8,9 @@ import (
 	"github.com/docker/cli/cli/command"
 	cliopts "github.com/docker/cli/opts"
 	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
 	"github.com/docker/docker/api/types/versions"
+	"github.com/docker/docker/client"
 	"github.com/spf13/cobra"
 	"github.com/spf13/pflag"
 )
@@ -95,14 +97,8 @@ func runCreate(dockerCli command.Cli, flags *pflag.FlagSet, opts *serviceOptions
 		service.TaskTemplate.ContainerSpec.Secrets = secrets
 	}
 
-	specifiedConfigs := opts.configs.Value()
-	if len(specifiedConfigs) > 0 {
-		// parse and validate configs
-		configs, err := ParseConfigs(apiClient, specifiedConfigs)
-		if err != nil {
-			return err
-		}
-		service.TaskTemplate.ContainerSpec.Configs = configs
+	if err := setConfigs(apiClient, &service, opts); err != nil {
+		return err
 	}
 
 	if err := resolveServiceImageDigestContentTrust(dockerCli, &service); err != nil {
@@ -141,3 +137,45 @@ func runCreate(dockerCli command.Cli, flags *pflag.FlagSet, opts *serviceOptions
 
 	return waitOnService(ctx, dockerCli, response.ID, opts.quiet)
 }
+
+// setConfigs does double duty: it both sets the ConfigReferences of the
+// service, and it sets the service CredentialSpec. This is because there is an
+// interplay between the CredentialSpec and the Config it depends on.
+func setConfigs(apiClient client.ConfigAPIClient, service *swarm.ServiceSpec, opts *serviceOptions) error {
+	specifiedConfigs := opts.configs.Value()
+	// if the user has requested to use a Config, for the CredentialSpec add it
+	// to the specifiedConfigs as a RuntimeTarget.
+	if cs := opts.credentialSpec.Value(); cs != nil && cs.Config != "" {
+		specifiedConfigs = append(specifiedConfigs, &swarm.ConfigReference{
+			ConfigName: cs.Config,
+			Runtime:    &swarm.ConfigReferenceRuntimeTarget{},
+		})
+	}
+	if len(specifiedConfigs) > 0 {
+		// parse and validate configs
+		configs, err := ParseConfigs(apiClient, specifiedConfigs)
+		if err != nil {
+			return err
+		}
+		service.TaskTemplate.ContainerSpec.Configs = configs
+		// if we have a CredentialSpec Config, find its ID and rewrite the
+		// field on the spec
+		//
+		// we check the opts instead of the service directly because there are
+		// a few layers of nullable objects in the service, which is a PITA
+		// to traverse, but the existence of the option implies that those are
+		// non-null.
+		if cs := opts.credentialSpec.Value(); cs != nil && cs.Config != "" {
+			for _, config := range configs {
+				if config.ConfigName == cs.Config {
+					service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config = config.ConfigID
+					// we've found the right config, no need to keep iterating
+					// through the rest of them.
+					break
+				}
+			}
+		}
+	}
+
+	return nil
+}

cli/command/service/create_test.go (new file, 271 lines)
@@ -0,0 +1,271 @@
package service
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/swarm"
|
||||
"gotest.tools/assert"
|
||||
is "gotest.tools/assert/cmp"
|
||||
|
||||
cliopts "github.com/docker/cli/opts"
|
||||
)
|
||||
|
||||
// fakeConfigAPIClientList is used to let us pass a closure as a
|
||||
// ConfigAPIClient, to use as ConfigList. for all the other methods in the
|
||||
// interface, it does nothing, not even return an error, so don't use them
|
||||
type fakeConfigAPIClientList func(context.Context, types.ConfigListOptions) ([]swarm.Config, error)
|
||||
|
||||
func (f fakeConfigAPIClientList) ConfigList(ctx context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
|
||||
return f(ctx, opts)
|
||||
}
|
||||
|
||||
func (f fakeConfigAPIClientList) ConfigCreate(_ context.Context, _ swarm.ConfigSpec) (types.ConfigCreateResponse, error) {
|
||||
return types.ConfigCreateResponse{}, nil
|
||||
}
|
||||
|
||||
func (f fakeConfigAPIClientList) ConfigRemove(_ context.Context, _ string) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f fakeConfigAPIClientList) ConfigInspectWithRaw(_ context.Context, _ string) (swarm.Config, []byte, error) {
|
||||
return swarm.Config{}, nil, nil
|
||||
}
|
||||
|
||||
func (f fakeConfigAPIClientList) ConfigUpdate(_ context.Context, _ string, _ swarm.Version, _ swarm.ConfigSpec) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// TestSetConfigsWithCredSpecAndConfigs tests that the setConfigs function for
|
||||
// create correctly looks up the right configs, and correctly handles the
// credentialSpec
func TestSetConfigsWithCredSpecAndConfigs(t *testing.T) {
    // we can't directly access the internal fields of the ConfigOpt struct, so
    // we need to let it do the parsing
    configOpt := &cliopts.ConfigOpt{}
    configOpt.Set("bar")
    opts := &serviceOptions{
        credentialSpec: credentialSpecOpt{
            value: &swarm.CredentialSpec{
                Config: "foo",
            },
            source: "config://foo",
        },
        configs: *configOpt,
    }

    // create a service spec. we need to be sure to fill in the nullable
    // fields, like the code expects
    service := &swarm.ServiceSpec{
        TaskTemplate: swarm.TaskSpec{
            ContainerSpec: &swarm.ContainerSpec{
                Privileges: &swarm.Privileges{
                    CredentialSpec: opts.credentialSpec.value,
                },
            },
        },
    }

    // set up a function to use as the list function
    var fakeClient fakeConfigAPIClientList = func(_ context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
        f := opts.Filters

        // we're expecting the filter to have names "foo" and "bar"
        names := f.Get("name")
        assert.Equal(t, len(names), 2)
        assert.Assert(t, is.Contains(names, "foo"))
        assert.Assert(t, is.Contains(names, "bar"))

        return []swarm.Config{
            {
                ID: "fooID",
                Spec: swarm.ConfigSpec{
                    Annotations: swarm.Annotations{
                        Name: "foo",
                    },
                },
            }, {
                ID: "barID",
                Spec: swarm.ConfigSpec{
                    Annotations: swarm.Annotations{
                        Name: "bar",
                    },
                },
            },
        }, nil
    }

    // now call setConfigs
    err := setConfigs(fakeClient, service, opts)
    // verify no error is returned
    assert.NilError(t, err)

    credSpecConfigValue := service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config
    assert.Equal(t, credSpecConfigValue, "fooID")

    configRefs := service.TaskTemplate.ContainerSpec.Configs
    assert.Assert(t, is.Contains(configRefs, &swarm.ConfigReference{
        ConfigID: "fooID",
        ConfigName: "foo",
        Runtime: &swarm.ConfigReferenceRuntimeTarget{},
    }), "expected configRefs to contain foo config")
    assert.Assert(t, is.Contains(configRefs, &swarm.ConfigReference{
        ConfigID: "barID",
        ConfigName: "bar",
        File: &swarm.ConfigReferenceFileTarget{
            Name: "bar",
            // these are the default field values
            UID: "0",
            GID: "0",
            Mode: 0444,
        },
    }), "expected configRefs to contain bar config")
}
// TestSetConfigsOnlyCredSpec tests that even if a CredentialSpec is the only
// config needed, setConfigs still works
func TestSetConfigsOnlyCredSpec(t *testing.T) {
    opts := &serviceOptions{
        credentialSpec: credentialSpecOpt{
            value: &swarm.CredentialSpec{
                Config: "foo",
            },
            source: "config://foo",
        },
    }

    service := &swarm.ServiceSpec{
        TaskTemplate: swarm.TaskSpec{
            ContainerSpec: &swarm.ContainerSpec{
                Privileges: &swarm.Privileges{
                    CredentialSpec: opts.credentialSpec.value,
                },
            },
        },
    }

    // set up a function to use as the list function
    var fakeClient fakeConfigAPIClientList = func(_ context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
        f := opts.Filters

        names := f.Get("name")
        assert.Equal(t, len(names), 1)
        assert.Assert(t, is.Contains(names, "foo"))

        return []swarm.Config{
            {
                ID: "fooID",
                Spec: swarm.ConfigSpec{
                    Annotations: swarm.Annotations{
                        Name: "foo",
                    },
                },
            },
        }, nil
    }

    // now call setConfigs
    err := setConfigs(fakeClient, service, opts)
    // verify no error is returned
    assert.NilError(t, err)

    credSpecConfigValue := service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config
    assert.Equal(t, credSpecConfigValue, "fooID")

    configRefs := service.TaskTemplate.ContainerSpec.Configs
    assert.Assert(t, is.Contains(configRefs, &swarm.ConfigReference{
        ConfigID: "fooID",
        ConfigName: "foo",
        Runtime: &swarm.ConfigReferenceRuntimeTarget{},
    }))
}
// TestSetConfigsOnlyConfigs verifies setConfigs when only configs (and not a
// CredentialSpec) are needed.
func TestSetConfigsOnlyConfigs(t *testing.T) {
    configOpt := &cliopts.ConfigOpt{}
    configOpt.Set("bar")
    opts := &serviceOptions{
        configs: *configOpt,
    }

    service := &swarm.ServiceSpec{
        TaskTemplate: swarm.TaskSpec{
            ContainerSpec: &swarm.ContainerSpec{},
        },
    }

    var fakeClient fakeConfigAPIClientList = func(_ context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
        f := opts.Filters

        names := f.Get("name")
        assert.Equal(t, len(names), 1)
        assert.Assert(t, is.Contains(names, "bar"))

        return []swarm.Config{
            {
                ID: "barID",
                Spec: swarm.ConfigSpec{
                    Annotations: swarm.Annotations{
                        Name: "bar",
                    },
                },
            },
        }, nil
    }

    // now call setConfigs
    err := setConfigs(fakeClient, service, opts)
    // verify no error is returned
    assert.NilError(t, err)

    configRefs := service.TaskTemplate.ContainerSpec.Configs
    assert.Assert(t, is.Contains(configRefs, &swarm.ConfigReference{
        ConfigID: "barID",
        ConfigName: "bar",
        File: &swarm.ConfigReferenceFileTarget{
            Name: "bar",
            // these are the default field values
            UID: "0",
            GID: "0",
            Mode: 0444,
        },
    }))
}
// TestSetConfigsNoConfigs checks that setConfigs works when there are no
// configs of any kind needed
func TestSetConfigsNoConfigs(t *testing.T) {
    // add a credentialSpec that isn't a config
    opts := &serviceOptions{
        credentialSpec: credentialSpecOpt{
            value: &swarm.CredentialSpec{
                File: "foo",
            },
            source: "file://foo",
        },
    }
    service := &swarm.ServiceSpec{
        TaskTemplate: swarm.TaskSpec{
            ContainerSpec: &swarm.ContainerSpec{
                Privileges: &swarm.Privileges{
                    CredentialSpec: opts.credentialSpec.value,
                },
            },
        },
    }

    var fakeClient fakeConfigAPIClientList = func(_ context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
        // assert false -- we should never call this function
        assert.Assert(t, false, "we should not be listing configs")
        return nil, nil
    }

    err := setConfigs(fakeClient, service, opts)
    assert.NilError(t, err)

    // ensure that the value of the credentialSpec has not changed
    assert.Equal(t, service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.File, "foo")
    assert.Equal(t, service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config, "")
}
@@ -16,8 +16,8 @@ import (
	"github.com/docker/docker/client"
	"github.com/docker/swarmkit/api"
	"github.com/docker/swarmkit/api/defaults"
	shlex "github.com/flynn-archive/go-shlex"
	gogotypes "github.com/gogo/protobuf/types"
	"github.com/google/shlex"
	"github.com/pkg/errors"
	"github.com/spf13/pflag"
)
@@ -331,12 +331,25 @@ func (c *credentialSpecOpt) Set(value string) error {
    c.source = value
    c.value = &swarm.CredentialSpec{}
    switch {
    case strings.HasPrefix(value, "config://"):
        // NOTE(dperny): we allow the user to specify the value of
        // CredentialSpec Config using the Name of the config, but the API
        // requires the ID of the config. For simplicity, we will parse
        // whatever value is provided into the "Config" field, but before
        // making API calls, we may need to swap the Config Name for the ID.
        // Therefore, this isn't the definitive location for the value of
        // Config that is passed to the API.
        c.value.Config = strings.TrimPrefix(value, "config://")
    case strings.HasPrefix(value, "file://"):
        c.value.File = strings.TrimPrefix(value, "file://")
    case strings.HasPrefix(value, "registry://"):
        c.value.Registry = strings.TrimPrefix(value, "registry://")
    case value == "":
        // if the value of the flag is an empty string, that means there is no
        // CredentialSpec needed. This is useful for removing a CredentialSpec
        // during a service update.
    default:
        return errors.New("Invalid credential spec - value must be prefixed file:// or registry:// followed by a value")
        return errors.New(`invalid credential spec: value must be prefixed with "config://", "file://", or "registry://"`)
    }

    return nil
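The prefix dispatch in `Set` above can be sketched standalone. This is a hedged illustration rather than the actual CLI code: `credSpec` and `parseCredSpec` are hypothetical stand-ins for `swarm.CredentialSpec` and `credentialSpecOpt.Set`.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// credSpec is a hypothetical stand-in for swarm.CredentialSpec with the
// three mutually exclusive source fields.
type credSpec struct {
	Config, File, Registry string
}

// parseCredSpec sketches the scheme-like prefix dispatch used by
// credentialSpecOpt.Set: the prefix selects which field gets the remainder.
func parseCredSpec(value string) (credSpec, error) {
	var cs credSpec
	switch {
	case strings.HasPrefix(value, "config://"):
		cs.Config = strings.TrimPrefix(value, "config://")
	case strings.HasPrefix(value, "file://"):
		cs.File = strings.TrimPrefix(value, "file://")
	case strings.HasPrefix(value, "registry://"):
		cs.Registry = strings.TrimPrefix(value, "registry://")
	case value == "":
		// an empty value means "remove the credential spec" on a service update
	default:
		return cs, errors.New(`invalid credential spec: value must be prefixed with "config://", "file://", or "registry://"`)
	}
	return cs, nil
}

func main() {
	cs, err := parseCredSpec("config://cred")
	fmt.Println(cs.Config, err) // prints: cred <nil>
}
```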
@@ -663,7 +676,7 @@ func (options *serviceOptions) ToService(ctx context.Context, apiClient client.N
        EndpointSpec: options.endpoint.ToEndpointSpec(),
    }

    if options.credentialSpec.Value() != nil {
    if options.credentialSpec.String() != "" && options.credentialSpec.Value() != nil {
        service.TaskTemplate.ContainerSpec.Privileges = &swarm.Privileges{
            CredentialSpec: options.credentialSpec.Value(),
        }

@@ -14,6 +14,60 @@ import (
	is "gotest.tools/assert/cmp"
)

func TestCredentialSpecOpt(t *testing.T) {
    tests := []struct {
        name string
        in string
        value swarm.CredentialSpec
        expectedErr string
    }{
        {
            name: "empty",
            in: "",
            value: swarm.CredentialSpec{},
        },
        {
            name: "no-prefix",
            in: "noprefix",
            value: swarm.CredentialSpec{},
            expectedErr: `invalid credential spec: value must be prefixed with "config://", "file://", or "registry://"`,
        },
        {
            name: "config",
            in: "config://0bt9dmxjvjiqermk6xrop3ekq",
            value: swarm.CredentialSpec{Config: "0bt9dmxjvjiqermk6xrop3ekq"},
        },
        {
            name: "file",
            in: "file://somefile.json",
            value: swarm.CredentialSpec{File: "somefile.json"},
        },
        {
            name: "registry",
            in: "registry://testing",
            value: swarm.CredentialSpec{Registry: "testing"},
        },
    }

    for _, tc := range tests {
        tc := tc
        t.Run(tc.name, func(t *testing.T) {
            var cs credentialSpecOpt

            err := cs.Set(tc.in)

            if tc.expectedErr != "" {
                assert.Error(t, err, tc.expectedErr)
            } else {
                assert.NilError(t, err)
            }

            assert.Equal(t, cs.String(), tc.in)
            assert.DeepEqual(t, cs.Value(), &tc.value)
        })
    }
}

func TestMemBytesString(t *testing.T) {
    var mem opts.MemBytes = 1048576
    assert.Check(t, is.Equal("1MiB", mem.String()))

@@ -70,16 +70,40 @@ func ParseConfigs(client client.ConfigAPIClient, requestedConfigs []*swarmtypes.
        return []*swarmtypes.ConfigReference{}, nil
    }

    // the configRefs map has two purposes: it prevents duplication of config
    // target filenames, and it is used to get all configs so we can resolve
    // their IDs. unfortunately, there are other targets for ConfigReferences,
    // besides just a File; specifically, the Runtime target, which is used for
    // CredentialSpecs. Therefore, we need to have a list of ConfigReferences
    // that are not File targets as well. at the time of writing, the only use
    // for Runtime targets is CredentialSpecs. However, to future-proof this
    // functionality, we should handle the case where multiple Runtime targets
    // are in use for the same Config, and we should deduplicate
    // such ConfigReferences, as no matter how many times the Config is used,
    // it only needs to be referenced once.
    configRefs := make(map[string]*swarmtypes.ConfigReference)
    runtimeRefs := make(map[string]*swarmtypes.ConfigReference)
    ctx := context.Background()

    for _, config := range requestedConfigs {
        // copy the config, so we don't mutate the args
        configRef := new(swarmtypes.ConfigReference)
        *configRef = *config

        if config.Runtime != nil {
            // by assigning to a map based on ConfigName, if the same Config
            // is required as a Runtime target for multiple purposes, we only
            // include it once in the final set of configs.
            runtimeRefs[config.ConfigName] = config
            // continue, so we skip the logic below for handling file-type
            // configs
            continue
        }

        if _, exists := configRefs[config.File.Name]; exists {
            return nil, errors.Errorf("duplicate config target for %s not allowed", config.ConfigName)
        }

        configRef := new(swarmtypes.ConfigReference)
        *configRef = *config
        configRefs[config.File.Name] = configRef
    }
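The two-map bookkeeping described in the comment above can be sketched in isolation. A minimal, hedged illustration under stated assumptions: `configRef` here is a hypothetical stand-in for `swarmtypes.ConfigReference`, keeping only the fields the dedup logic needs.

```go
package main

import "fmt"

// configRef is a hypothetical stand-in for swarmtypes.ConfigReference.
type configRef struct {
	ConfigName string
	FileName   string // empty for Runtime-target references
	Runtime    bool
}

// splitRefs sketches the dedup: file-target references are keyed by target
// filename (a duplicate filename is an error), while runtime-target
// references are keyed by config name, so the same config used several
// times as a Runtime target is only requested once.
func splitRefs(refs []configRef) (map[string]configRef, map[string]configRef, error) {
	fileRefs := make(map[string]configRef)
	runtimeRefs := make(map[string]configRef)
	for _, r := range refs {
		if r.Runtime {
			runtimeRefs[r.ConfigName] = r
			continue
		}
		if _, exists := fileRefs[r.FileName]; exists {
			return nil, nil, fmt.Errorf("duplicate config target for %s not allowed", r.ConfigName)
		}
		fileRefs[r.FileName] = r
	}
	return fileRefs, runtimeRefs, nil
}

func main() {
	files, runtimes, err := splitRefs([]configRef{
		{ConfigName: "foo", FileName: "foo"},
		{ConfigName: "cred", Runtime: true},
		{ConfigName: "cred", Runtime: true}, // duplicate runtime use collapses
	})
	fmt.Println(len(files), len(runtimes), err) // prints: 1 1 <nil>
}
```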

@@ -87,6 +111,9 @@ func ParseConfigs(client client.ConfigAPIClient, requestedConfigs []*swarmtypes.
    for _, s := range configRefs {
        args.Add("name", s.ConfigName)
    }
    for _, s := range runtimeRefs {
        args.Add("name", s.ConfigName)
    }

    configs, err := client.ConfigList(ctx, types.ConfigListOptions{
        Filters: args,
@@ -114,5 +141,18 @@ func ParseConfigs(client client.ConfigAPIClient, requestedConfigs []*swarmtypes.
        addedConfigs = append(addedConfigs, ref)
    }

    // unfortunately, because configRefs and runtimeRefs are keyed on
    // different values that may collide, we can't simply merge the maps;
    // we need two separate loops
    for _, ref := range runtimeRefs {
        id, ok := foundConfigs[ref.ConfigName]
        if !ok {
            return nil, errors.Errorf("config not found: %s", ref.ConfigName)
        }

        ref.ConfigID = id
        addedConfigs = append(addedConfigs, ref)
    }

    return addedConfigs, nil
}

@@ -194,13 +194,18 @@ func runUpdate(dockerCli command.Cli, flags *pflag.FlagSet, options *serviceOpti

    spec.TaskTemplate.ContainerSpec.Secrets = updatedSecrets

    updatedConfigs, err := getUpdatedConfigs(apiClient, flags, spec.TaskTemplate.ContainerSpec.Configs)
    updatedConfigs, err := getUpdatedConfigs(apiClient, flags, spec.TaskTemplate.ContainerSpec)
    if err != nil {
        return err
    }

    spec.TaskTemplate.ContainerSpec.Configs = updatedConfigs

    // set the credential spec value after getting the updated configs, because
    // we might need the updated configs to set the correct value of the
    // CredentialSpec.
    updateCredSpecConfig(flags, spec.TaskTemplate.ContainerSpec)

    // only send auth if flag was set
    sendAuth, err := flags.GetBool(flagRegistryAuth)
    if err != nil {
@@ -731,20 +736,56 @@ func getUpdatedSecrets(apiClient client.SecretAPIClient, flags *pflag.FlagSet, s
    return newSecrets, nil
}

func getUpdatedConfigs(apiClient client.ConfigAPIClient, flags *pflag.FlagSet, configs []*swarm.ConfigReference) ([]*swarm.ConfigReference, error) {
    newConfigs := []*swarm.ConfigReference{}
func getUpdatedConfigs(apiClient client.ConfigAPIClient, flags *pflag.FlagSet, spec *swarm.ContainerSpec) ([]*swarm.ConfigReference, error) {
    var (
        // credSpecConfigName stores the name of the config specified by the
        // credential-spec flag. if a Runtime target Config with this name is
        // already in the containerSpec, then this value will be set to
        // empty string in the removeConfigs stage. otherwise, a ConfigReference
        // will be created to pass to ParseConfigs to get the ConfigID.
        credSpecConfigName string
        // credSpecConfigID stores the ID of the credential spec config if that
        // config is being carried over from the old set of references
        credSpecConfigID string
    )

    toRemove := buildToRemoveSet(flags, flagConfigRemove)
    for _, config := range configs {
        if _, exists := toRemove[config.ConfigName]; !exists {
            newConfigs = append(newConfigs, config)
    if flags.Changed(flagCredentialSpec) {
        credSpec := flags.Lookup(flagCredentialSpec).Value.(*credentialSpecOpt).Value()
        credSpecConfigName = credSpec.Config
    } else {
        // if the credential spec flag has not changed, then check if there
        // already is a credentialSpec. if there is one, and it's for a Config,
        // then it's from the old object, and its value is the config ID. we
        // need this so we don't remove the config if the credential spec is
        // not being updated.
        if spec.Privileges != nil && spec.Privileges.CredentialSpec != nil {
            if config := spec.Privileges.CredentialSpec.Config; config != "" {
                credSpecConfigID = config
            }
        }
    }

    if flags.Changed(flagConfigAdd) {
        values := flags.Lookup(flagConfigAdd).Value.(*opts.ConfigOpt).Value()
    newConfigs := removeConfigs(flags, spec, credSpecConfigName, credSpecConfigID)

        addConfigs, err := ParseConfigs(apiClient, values)
    // resolveConfigs is a slice of any new configs that need to have the ID
    // resolved
    resolveConfigs := []*swarm.ConfigReference{}

    if flags.Changed(flagConfigAdd) {
        resolveConfigs = append(resolveConfigs, flags.Lookup(flagConfigAdd).Value.(*opts.ConfigOpt).Value()...)
    }

    // if credSpecConfigName is non-empty at this point, it means it's a new
    // config, and we need to resolve its ID accordingly.
    if credSpecConfigName != "" {
        resolveConfigs = append(resolveConfigs, &swarm.ConfigReference{
            ConfigName: credSpecConfigName,
            Runtime: &swarm.ConfigReferenceRuntimeTarget{},
        })
    }

    if len(resolveConfigs) > 0 {
        addConfigs, err := ParseConfigs(apiClient, resolveConfigs)
        if err != nil {
            return nil, err
        }
@@ -754,6 +795,42 @@ func getUpdatedConfigs(apiClient client.ConfigAPIClient, flags *pflag.FlagSet, c
    return newConfigs, nil
}

// removeConfigs figures out which configs in the existing spec should be kept
// after the update.
func removeConfigs(flags *pflag.FlagSet, spec *swarm.ContainerSpec, credSpecName, credSpecID string) []*swarm.ConfigReference {
    keepConfigs := []*swarm.ConfigReference{}

    toRemove := buildToRemoveSet(flags, flagConfigRemove)
    // all configs in spec.Configs should have both a Name and ID, because
    // they come from an already-accepted spec.
    for _, config := range spec.Configs {
        // if the config is a Runtime target, make sure it's still in use.
        // right now, the only use for Runtime targets is credential specs.
        // if, in the future, more uses are added, then this check will need
        // to be made more intelligent.
        if config.Runtime != nil {
            // if we're carrying over a credential spec explicitly (because the
            // user passed --credential-spec with the same config name) then we
            // should match on credSpecName. if we're carrying over a
            // credential spec implicitly (because the user did not pass any
            // --credential-spec flag) then we should match on credSpecID. in
            // either case, we're keeping the config that already exists.
            if config.ConfigName == credSpecName || config.ConfigID == credSpecID {
                keepConfigs = append(keepConfigs, config)
            }
            // continue the loop, to skip the part where we check if the config
            // is in toRemove.
            continue
        }

        if _, exists := toRemove[config.ConfigName]; !exists {
            keepConfigs = append(keepConfigs, config)
        }
    }

    return keepConfigs
}

func envKey(value string) string {
    kv := strings.SplitN(value, "=", 2)
    return kv[0]
@@ -1220,3 +1297,48 @@ func updateNetworks(ctx context.Context, apiClient client.NetworkAPIClient, flag
    spec.TaskTemplate.Networks = newNetworks
    return nil
}

// updateCredSpecConfig updates the value of the credential spec Config field
// to the config ID if the credential spec has changed. it mutates the passed
// spec. it does not handle the case where the credential spec specifies a
// config that does not exist -- that case is handled as part of
// getUpdatedConfigs
func updateCredSpecConfig(flags *pflag.FlagSet, containerSpec *swarm.ContainerSpec) {
    if flags.Changed(flagCredentialSpec) {
        credSpecOpt := flags.Lookup(flagCredentialSpec)
        // if the flag has changed, and the value is empty string, then we
        // should remove any credential spec that might be present
        if credSpecOpt.Value.String() == "" {
            if containerSpec.Privileges != nil {
                containerSpec.Privileges.CredentialSpec = nil
            }
            return
        }

        // otherwise, set the credential spec to be the parsed value
        credSpec := credSpecOpt.Value.(*credentialSpecOpt).Value()

        // if this is a Config credential spec, we still need to replace the
        // value of credSpec.Config with the config ID instead of Name.
        if credSpec.Config != "" {
            for _, config := range containerSpec.Configs {
                // if the config name matches, then set the config ID. we do
                // not need to worry about whether this is a Runtime target or
                // not. even if it is not a Runtime target, getUpdatedConfigs
                // ensures that a Runtime target for this config exists, and
                // the Name is unique so the ID is correct no matter the
                // target.
                if config.ConfigName == credSpec.Config {
                    credSpec.Config = config.ConfigID
                    break
                }
            }
        }

        if containerSpec.Privileges == nil {
            containerSpec.Privileges = &swarm.Privileges{}
        }

        containerSpec.Privileges.CredentialSpec = credSpec
    }
}
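The name-to-ID swap at the heart of `updateCredSpecConfig` can be shown on its own. A hedged sketch, not the actual CLI code: `configRef` and `resolveConfigID` are hypothetical stand-ins for `swarm.ConfigReference` and the lookup loop above.

```go
package main

import "fmt"

// configRef is a hypothetical stand-in for swarm.ConfigReference with just
// the fields needed for the name-to-ID swap.
type configRef struct {
	ConfigName, ConfigID string
}

// resolveConfigID sketches the lookup loop: the --credential-spec flag
// carries a config *name*, but the API wants the config *ID*, so we scan the
// container's config references for a matching name.
func resolveConfigID(name string, refs []configRef) string {
	for _, ref := range refs {
		if ref.ConfigName == name {
			return ref.ConfigID
		}
	}
	// no match: return the value unchanged, mirroring how the original code
	// leaves credSpec.Config alone when no reference matches
	return name
}

func main() {
	refs := []configRef{{ConfigName: "someConfigName", ConfigID: "someConfigID"}}
	fmt.Println(resolveConfigID("someConfigName", refs)) // prints: someConfigID
}
```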

@@ -925,3 +925,326 @@ func TestUpdateSysCtls(t *testing.T) {
        })
    }
}

func TestUpdateGetUpdatedConfigs(t *testing.T) {
    // cannedConfigs is a set of configs that we'll use over and over in the
    // tests. it's a map of Name to Config
    cannedConfigs := map[string]*swarm.Config{
        "bar": {
            ID: "barID",
            Spec: swarm.ConfigSpec{
                Annotations: swarm.Annotations{
                    Name: "bar",
                },
            },
        },
        "cred": {
            ID: "credID",
            Spec: swarm.ConfigSpec{
                Annotations: swarm.Annotations{
                    Name: "cred",
                },
            },
        },
        "newCred": {
            ID: "newCredID",
            Spec: swarm.ConfigSpec{
                Annotations: swarm.Annotations{
                    Name: "newCred",
                },
            },
        },
    }
    // cannedConfigRefs is the same thing, but with config references instead.
    // however, it is keyed by an arbitrary string value, so that we can have
    // multiple config refs using the same config
    cannedConfigRefs := map[string]*swarm.ConfigReference{
        "fooRef": {
            ConfigID: "fooID",
            ConfigName: "foo",
            File: &swarm.ConfigReferenceFileTarget{
                Name: "foo",
                UID: "0",
                GID: "0",
                Mode: 0444,
            },
        },
        "barRef": {
            ConfigID: "barID",
            ConfigName: "bar",
            File: &swarm.ConfigReferenceFileTarget{
                Name: "bar",
                UID: "0",
                GID: "0",
                Mode: 0444,
            },
        },
        "bazRef": {
            ConfigID: "bazID",
            ConfigName: "baz",
            File: &swarm.ConfigReferenceFileTarget{
                Name: "baz",
                UID: "0",
                GID: "0",
                Mode: 0444,
            },
        },
        "credRef": {
            ConfigID: "credID",
            ConfigName: "cred",
            Runtime: &swarm.ConfigReferenceRuntimeTarget{},
        },
        "newCredRef": {
            ConfigID: "newCredID",
            ConfigName: "newCred",
            Runtime: &swarm.ConfigReferenceRuntimeTarget{},
        },
    }

    type flagVal [2]string
    type test struct {
        // the name of the subtest
        name string
        // flags are the flags we'll be setting
        flags []flagVal
        // oldConfigs are the configs that would already be on the service
        // it is a slice of strings corresponding to the key of
        // cannedConfigRefs
        oldConfigs []string
        // oldCredSpec is the credentialSpec being carried over from the old
        // object
        oldCredSpec *swarm.CredentialSpec
        // lookupConfigs are the configs we're expecting to be listed. it is a
        // slice of strings corresponding to the key of cannedConfigs
        lookupConfigs []string
        // expected is the configs we should get as a result. it is a slice of
        // strings corresponding to the key in cannedConfigRefs
        expected []string
    }

    testCases := []test{
        {
            name: "no configs added or removed",
            oldConfigs: []string{"fooRef"},
            expected: []string{"fooRef"},
        }, {
            name: "add a config",
            flags: []flagVal{{"config-add", "bar"}},
            oldConfigs: []string{"fooRef"},
            lookupConfigs: []string{"bar"},
            expected: []string{"fooRef", "barRef"},
        }, {
            name: "remove a config",
            flags: []flagVal{{"config-rm", "bar"}},
            oldConfigs: []string{"fooRef", "barRef"},
            expected: []string{"fooRef"},
        }, {
            name: "include an old credential spec",
            oldConfigs: []string{"credRef"},
            oldCredSpec: &swarm.CredentialSpec{Config: "credID"},
            expected: []string{"credRef"},
        }, {
            name: "add a credential spec",
            oldConfigs: []string{"fooRef"},
            flags: []flagVal{{"credential-spec", "config://cred"}},
            lookupConfigs: []string{"cred"},
            expected: []string{"fooRef", "credRef"},
        }, {
            name: "change a credential spec",
            oldConfigs: []string{"fooRef", "credRef"},
            oldCredSpec: &swarm.CredentialSpec{Config: "credID"},
            flags: []flagVal{{"credential-spec", "config://newCred"}},
            lookupConfigs: []string{"newCred"},
            expected: []string{"fooRef", "newCredRef"},
        }, {
            name: "credential spec no longer config",
            oldConfigs: []string{"fooRef", "credRef"},
            oldCredSpec: &swarm.CredentialSpec{Config: "credID"},
            flags: []flagVal{{"credential-spec", "file://someFile"}},
            lookupConfigs: []string{},
            expected: []string{"fooRef"},
        }, {
            name: "credential spec becomes config",
            oldConfigs: []string{"fooRef"},
            oldCredSpec: &swarm.CredentialSpec{File: "someFile"},
            flags: []flagVal{{"credential-spec", "config://cred"}},
            lookupConfigs: []string{"cred"},
            expected: []string{"fooRef", "credRef"},
        }, {
            name: "remove credential spec",
            oldConfigs: []string{"fooRef", "credRef"},
            oldCredSpec: &swarm.CredentialSpec{Config: "credID"},
            flags: []flagVal{{"credential-spec", ""}},
            lookupConfigs: []string{},
            expected: []string{"fooRef"},
        }, {
            name: "just frick my stuff up",
            // a more complicated test. add barRef, remove bazRef, keep fooRef,
            // change credentialSpec from credRef to newCredRef
            oldConfigs: []string{"fooRef", "bazRef", "credRef"},
            oldCredSpec: &swarm.CredentialSpec{Config: "cred"},
            flags: []flagVal{
                {"config-add", "bar"},
                {"config-rm", "baz"},
                {"credential-spec", "config://newCred"},
            },
            lookupConfigs: []string{"bar", "newCred"},
            expected: []string{"fooRef", "barRef", "newCredRef"},
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            flags := newUpdateCommand(nil).Flags()
            for _, f := range tc.flags {
                flags.Set(f[0], f[1])
            }

            // fakeConfigAPIClientList is actually defined in create_test.go,
            // but we'll use it here as well
            var fakeClient fakeConfigAPIClientList = func(_ context.Context, opts types.ConfigListOptions) ([]swarm.Config, error) {
                names := opts.Filters.Get("name")
                assert.Equal(t, len(names), len(tc.lookupConfigs))

                configs := []swarm.Config{}
                for _, lookup := range tc.lookupConfigs {
                    assert.Assert(t, is.Contains(names, lookup))
                    cfg, ok := cannedConfigs[lookup]
                    assert.Assert(t, ok)
                    configs = append(configs, *cfg)
                }
                return configs, nil
            }

            // build the actual set of old configs and the container spec
            oldConfigs := []*swarm.ConfigReference{}
            for _, config := range tc.oldConfigs {
                cfg, ok := cannedConfigRefs[config]
                assert.Assert(t, ok)
                oldConfigs = append(oldConfigs, cfg)
            }

            containerSpec := &swarm.ContainerSpec{
                Configs: oldConfigs,
                Privileges: &swarm.Privileges{
                    CredentialSpec: tc.oldCredSpec,
                },
            }

            finalConfigs, err := getUpdatedConfigs(fakeClient, flags, containerSpec)
            assert.NilError(t, err)

            // ensure that the finalConfigs consists of all of the expected
            // configs
            assert.Equal(t, len(finalConfigs), len(tc.expected),
                "%v final configs, %v expected",
                len(finalConfigs), len(tc.expected),
            )
            for _, expected := range tc.expected {
                assert.Assert(t, is.Contains(finalConfigs, cannedConfigRefs[expected]))
            }
        })
    }
}

func TestUpdateCredSpec(t *testing.T) {
    type testCase struct {
        // name is the name of the subtest
        name string
        // flagVal is the value we're setting flagCredentialSpec to
        flagVal string
        // spec is the existing serviceSpec with its configs
        spec *swarm.ContainerSpec
        // expected is the expected value of the credential spec after the
        // function. it may be nil
        expected *swarm.CredentialSpec
    }

    testCases := []testCase{
        {
            name: "add file credential spec",
            flagVal: "file://somefile",
            spec: &swarm.ContainerSpec{},
            expected: &swarm.CredentialSpec{File: "somefile"},
        }, {
            name: "remove a file credential spec",
            flagVal: "",
            spec: &swarm.ContainerSpec{
                Privileges: &swarm.Privileges{
                    CredentialSpec: &swarm.CredentialSpec{
                        File: "someFile",
                    },
                },
            },
            expected: nil,
        }, {
            name: "remove when no CredentialSpec exists",
            flagVal: "",
            spec: &swarm.ContainerSpec{},
            expected: nil,
        }, {
            name: "add a config credential spec",
            flagVal: "config://someConfigName",
            spec: &swarm.ContainerSpec{
                Configs: []*swarm.ConfigReference{
                    {
                        ConfigName: "someConfigName",
                        ConfigID: "someConfigID",
                        Runtime: &swarm.ConfigReferenceRuntimeTarget{},
                    },
                },
            },
            expected: &swarm.CredentialSpec{
                Config: "someConfigID",
            },
        }, {
            name: "remove a config credential spec",
            flagVal: "",
            spec: &swarm.ContainerSpec{
                Privileges: &swarm.Privileges{
                    CredentialSpec: &swarm.CredentialSpec{
                        Config: "someConfigID",
                    },
                },
            },
            expected: nil,
        }, {
            name: "update a config credential spec",
            flagVal: "config://someConfigName",
            spec: &swarm.ContainerSpec{
                Configs: []*swarm.ConfigReference{
                    {
                        ConfigName: "someConfigName",
                        ConfigID: "someConfigID",
                        Runtime: &swarm.ConfigReferenceRuntimeTarget{},
                    },
                },
                Privileges: &swarm.Privileges{
                    CredentialSpec: &swarm.CredentialSpec{
                        Config: "someDifferentConfigID",
                    },
                },
            },
            expected: &swarm.CredentialSpec{
                Config: "someConfigID",
            },
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            flags := newUpdateCommand(nil).Flags()
            flags.Set(flagCredentialSpec, tc.flagVal)

            updateCredSpecConfig(flags, tc.spec)
            // handle the case where tc.spec.Privileges is nil
            if tc.expected == nil {
                assert.Assert(t, tc.spec.Privileges == nil || tc.spec.Privileges.CredentialSpec == nil)
                return
            }

            assert.Assert(t, tc.spec.Privileges != nil)
            assert.DeepEqual(t, tc.spec.Privileges.CredentialSpec, tc.expected)
        })
    }
}

@ -14,16 +14,17 @@ import (
|
||||
latest "github.com/docker/compose-on-kubernetes/api/compose/v1alpha3"
|
||||
"github.com/docker/compose-on-kubernetes/api/compose/v1beta1"
|
||||
"github.com/docker/compose-on-kubernetes/api/compose/v1beta2"
|
||||
"github.com/docker/go-connections/nat"
|
||||
"github.com/mitchellh/mapstructure"
|
||||
"github.com/pkg/errors"
|
||||
yaml "gopkg.in/yaml.v2"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
)
|
||||
|
||||
const (
|
||||
// pullSecretExtraField is an extra field on ServiceConfigs usable to reference a pull secret
|
||||
pullSecretExtraField = "x-pull-secret"
|
||||
// pullPolicyExtraField is an extra field on ServiceConfigs usable to specify a pull policy
|
||||
pullPolicyExtraField = "x-pull-policy"
|
||||
// kubernatesExtraField is an extra field on ServiceConfigs containing kubernetes-specific extensions to compose format
|
||||
kubernatesExtraField = "x-kubernetes"
|
||||
)

// NewStackConverter returns a converter from types.Config (compose) to the specified
@@ -245,10 +246,8 @@ func fromComposeConfigs(s map[string]composeTypes.ConfigObjConfig) map[string]la

 func fromComposeServiceConfig(s composeTypes.ServiceConfig, capabilities composeCapabilities) (latest.ServiceConfig, error) {
 	var (
-		userID     *int64
-		pullSecret string
-		pullPolicy string
-		err        error
+		userID *int64
+		err    error
 	)
 	if s.User != "" {
 		numerical, err := strconv.Atoi(s.User)
@@ -257,20 +256,22 @@ func fromComposeServiceConfig(s composeTypes.ServiceConfig, capabilities compose
 			userID = &unixUserID
 		}
 	}
-	pullSecret, err = resolveServiceExtra(s, pullSecretExtraField)
+	kubeExtra, err := resolveServiceExtra(s)
 	if err != nil {
 		return latest.ServiceConfig{}, err
 	}
-	pullPolicy, err = resolveServiceExtra(s, pullPolicyExtraField)
+	if kubeExtra.PullSecret != "" && !capabilities.hasPullSecrets {
+		return latest.ServiceConfig{}, errors.Errorf(`stack API version %s does not support pull secrets (field "x-kubernetes.pull_secret"), please use version v1alpha3 or higher`, capabilities.apiVersion)
+	}
+	if kubeExtra.PullPolicy != "" && !capabilities.hasPullPolicies {
+		return latest.ServiceConfig{}, errors.Errorf(`stack API version %s does not support pull policies (field "x-kubernetes.pull_policy"), please use version v1alpha3 or higher`, capabilities.apiVersion)
+	}
+
+	internalPorts, err := setupIntraStackNetworking(s, kubeExtra, capabilities)
+	if err != nil {
+		return latest.ServiceConfig{}, err
+	}
-	if pullSecret != "" && !capabilities.hasPullSecrets {
-		return latest.ServiceConfig{}, errors.Errorf("stack API version %s does not support pull secrets (field %q), please use version v1alpha3 or higher", capabilities.apiVersion, pullSecretExtraField)
-	}
-	if pullPolicy != "" && !capabilities.hasPullPolicies {
-		return latest.ServiceConfig{}, errors.Errorf("stack API version %s does not support pull policies (field %q), please use version v1alpha3 or higher", capabilities.apiVersion, pullPolicyExtraField)
-	}

 	return latest.ServiceConfig{
 		Name:   s.Name,
 		CapAdd: s.CapAdd,
@@ -286,40 +287,96 @@ func fromComposeServiceConfig(s composeTypes.ServiceConfig, capabilities compose
 			RestartPolicy: fromComposeRestartPolicy(s.Deploy.RestartPolicy),
 			Placement:     fromComposePlacement(s.Deploy.Placement),
 		},
-		Entrypoint:      s.Entrypoint,
-		Environment:     s.Environment,
-		ExtraHosts:      s.ExtraHosts,
-		Hostname:        s.Hostname,
-		HealthCheck:     fromComposeHealthcheck(s.HealthCheck),
-		Image:           s.Image,
-		Ipc:             s.Ipc,
-		Labels:          s.Labels,
-		Pid:             s.Pid,
-		Ports:           fromComposePorts(s.Ports),
-		Privileged:      s.Privileged,
-		ReadOnly:        s.ReadOnly,
-		Secrets:         fromComposeServiceSecrets(s.Secrets),
-		StdinOpen:       s.StdinOpen,
-		StopGracePeriod: composetypes.ConvertDurationPtr(s.StopGracePeriod),
-		Tmpfs:           s.Tmpfs,
-		Tty:             s.Tty,
-		User:            userID,
-		Volumes:         fromComposeServiceVolumeConfig(s.Volumes),
-		WorkingDir:      s.WorkingDir,
-		PullSecret:      pullSecret,
-		PullPolicy:      pullPolicy,
+		Entrypoint:          s.Entrypoint,
+		Environment:         s.Environment,
+		ExtraHosts:          s.ExtraHosts,
+		Hostname:            s.Hostname,
+		HealthCheck:         fromComposeHealthcheck(s.HealthCheck),
+		Image:               s.Image,
+		Ipc:                 s.Ipc,
+		Labels:              s.Labels,
+		Pid:                 s.Pid,
+		Ports:               fromComposePorts(s.Ports),
+		Privileged:          s.Privileged,
+		ReadOnly:            s.ReadOnly,
+		Secrets:             fromComposeServiceSecrets(s.Secrets),
+		StdinOpen:           s.StdinOpen,
+		StopGracePeriod:     composetypes.ConvertDurationPtr(s.StopGracePeriod),
+		Tmpfs:               s.Tmpfs,
+		Tty:                 s.Tty,
+		User:                userID,
+		Volumes:             fromComposeServiceVolumeConfig(s.Volumes),
+		WorkingDir:          s.WorkingDir,
+		PullSecret:          kubeExtra.PullSecret,
+		PullPolicy:          kubeExtra.PullPolicy,
+		InternalServiceType: kubeExtra.InternalServiceType,
+		InternalPorts:       internalPorts,
 	}, nil
 }

-func resolveServiceExtra(s composeTypes.ServiceConfig, field string) (string, error) {
-	if iface, ok := s.Extras[field]; ok {
-		value, ok := iface.(string)
-		if !ok {
-			return "", errors.Errorf("field %q: value %v type is %T, should be a string", field, iface, iface)
-		}
-		return value, nil
-	}
-	return "", nil
-}
+func setupIntraStackNetworking(s composeTypes.ServiceConfig, kubeExtra kubernetesExtra, capabilities composeCapabilities) ([]latest.InternalPort, error) {
+	if kubeExtra.InternalServiceType != latest.InternalServiceTypeAuto && !capabilities.hasIntraStackLoadBalancing {
+		return nil,
+			errors.Errorf(`stack API version %s does not support intra-stack load balancing (field "x-kubernetes.internal_service_type"), please use version v1alpha3 or higher`,
+				capabilities.apiVersion)
+	}
+	if !capabilities.hasIntraStackLoadBalancing {
+		return nil, nil
+	}
+	if err := validateInternalServiceType(kubeExtra.InternalServiceType); err != nil {
+		return nil, err
+	}
+	internalPorts, err := toInternalPorts(s.Expose)
+	if err != nil {
+		return nil, err
+	}
+	return internalPorts, nil
+}

func validateInternalServiceType(internalServiceType latest.InternalServiceType) error {
	switch internalServiceType {
	case latest.InternalServiceTypeAuto, latest.InternalServiceTypeClusterIP, latest.InternalServiceTypeHeadless:
	default:
		return errors.Errorf(`invalid value %q for field "x-kubernetes.internal_service_type", valid values are %q or %q`, internalServiceType,
			latest.InternalServiceTypeClusterIP,
			latest.InternalServiceTypeHeadless)
	}
	return nil
}

func toInternalPorts(expose []string) ([]latest.InternalPort, error) {
	var internalPorts []latest.InternalPort
	for _, sourcePort := range expose {
		proto, port := nat.SplitProtoPort(sourcePort)
		start, end, err := nat.ParsePortRange(port)
		if err != nil {
			return nil, errors.Errorf("invalid format for expose: %q, error: %s", sourcePort, err)
		}
		for i := start; i <= end; i++ {
			k8sProto := v1.Protocol(strings.ToUpper(proto))
			switch k8sProto {
			case v1.ProtocolSCTP, v1.ProtocolTCP, v1.ProtocolUDP:
			default:
				return nil, errors.Errorf("invalid protocol for expose: %q, supported values are %q, %q and %q", sourcePort, v1.ProtocolSCTP, v1.ProtocolTCP, v1.ProtocolUDP)
			}
			internalPorts = append(internalPorts, latest.InternalPort{
				Port:     int32(i),
				Protocol: k8sProto,
			})
		}
	}
	return internalPorts, nil
}

func resolveServiceExtra(s composeTypes.ServiceConfig) (kubernetesExtra, error) {
	if iface, ok := s.Extras[kubernatesExtraField]; ok {
		var result kubernetesExtra
		if err := mapstructure.Decode(iface, &result); err != nil {
			return kubernetesExtra{}, err
		}
		return result, nil
	}
	return kubernetesExtra{}, nil
}

func fromComposePorts(ports []composeTypes.ServicePortConfig) []latest.ServicePortConfig {
@@ -489,14 +546,22 @@ var (
 		apiVersion: "v1beta2",
 	}
 	v1alpha3Capabilities = composeCapabilities{
-		apiVersion:      "v1alpha3",
-		hasPullSecrets:  true,
-		hasPullPolicies: true,
+		apiVersion:                 "v1alpha3",
+		hasPullSecrets:             true,
+		hasPullPolicies:            true,
+		hasIntraStackLoadBalancing: true,
 	}
 )

 type composeCapabilities struct {
-	apiVersion      string
-	hasPullSecrets  bool
-	hasPullPolicies bool
+	apiVersion                 string
+	hasPullSecrets             bool
+	hasPullPolicies            bool
+	hasIntraStackLoadBalancing bool
 }

 type kubernetesExtra struct {
 	PullSecret          string                     `mapstructure:"pull_secret"`
 	PullPolicy          string                     `mapstructure:"pull_policy"`
 	InternalServiceType latest.InternalServiceType `mapstructure:"internal_service_type"`
 }

@@ -13,6 +13,7 @@ import (
 	"github.com/docker/compose-on-kubernetes/api/compose/v1beta2"
 	"gotest.tools/assert"
 	is "gotest.tools/assert/cmp"
+	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )

@@ -188,8 +189,8 @@ func TestHandlePullSecret(t *testing.T) {
 		version string
 		err     string
 	}{
-		{version: "v1beta1", err: `stack API version v1beta1 does not support pull secrets (field "x-pull-secret"), please use version v1alpha3 or higher`},
-		{version: "v1beta2", err: `stack API version v1beta2 does not support pull secrets (field "x-pull-secret"), please use version v1alpha3 or higher`},
+		{version: "v1beta1", err: `stack API version v1beta1 does not support pull secrets (field "x-kubernetes.pull_secret"), please use version v1alpha3 or higher`},
+		{version: "v1beta2", err: `stack API version v1beta2 does not support pull secrets (field "x-kubernetes.pull_secret"), please use version v1alpha3 or higher`},
 		{version: "v1alpha3"},
 	}

@@ -215,8 +216,8 @@ func TestHandlePullPolicy(t *testing.T) {
 		version string
 		err     string
 	}{
-		{version: "v1beta1", err: `stack API version v1beta1 does not support pull policies (field "x-pull-policy"), please use version v1alpha3 or higher`},
-		{version: "v1beta2", err: `stack API version v1beta2 does not support pull policies (field "x-pull-policy"), please use version v1alpha3 or higher`},
+		{version: "v1beta1", err: `stack API version v1beta1 does not support pull policies (field "x-kubernetes.pull_policy"), please use version v1alpha3 or higher`},
+		{version: "v1beta2", err: `stack API version v1beta2 does not support pull policies (field "x-kubernetes.pull_policy"), please use version v1alpha3 or higher`},
 		{version: "v1alpha3"},
 	}

@@ -235,3 +236,111 @@ func TestHandlePullPolicy(t *testing.T) {
 		})
 	}
 }

func TestHandleInternalServiceType(t *testing.T) {
	cases := []struct {
		name     string
		value    string
		caps     composeCapabilities
		err      string
		expected v1alpha3.InternalServiceType
	}{
		{
			name:  "v1beta1",
			value: "ClusterIP",
			caps:  v1beta1Capabilities,
			err:   `stack API version v1beta1 does not support intra-stack load balancing (field "x-kubernetes.internal_service_type"), please use version v1alpha3 or higher`,
		},
		{
			name:  "v1beta2",
			value: "ClusterIP",
			caps:  v1beta2Capabilities,
			err:   `stack API version v1beta2 does not support intra-stack load balancing (field "x-kubernetes.internal_service_type"), please use version v1alpha3 or higher`,
		},
		{
			name:     "v1alpha3",
			value:    "ClusterIP",
			caps:     v1alpha3Capabilities,
			expected: v1alpha3.InternalServiceTypeClusterIP,
		},
		{
			name:  "v1alpha3-invalid",
			value: "invalid",
			caps:  v1alpha3Capabilities,
			err:   `invalid value "invalid" for field "x-kubernetes.internal_service_type", valid values are "ClusterIP" or "Headless"`,
		},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			res, err := fromComposeServiceConfig(composetypes.ServiceConfig{
				Name:  "test",
				Image: "test",
				Extras: map[string]interface{}{
					"x-kubernetes": map[string]interface{}{
						"internal_service_type": c.value,
					},
				},
			}, c.caps)
			if c.err == "" {
				assert.NilError(t, err)
				assert.Equal(t, res.InternalServiceType, c.expected)
			} else {
				assert.ErrorContains(t, err, c.err)
			}
		})
	}
}

func TestIgnoreExpose(t *testing.T) {
	testData := loadTestStackWith(t, "expose")
	for _, version := range []string{"v1beta1", "v1beta2"} {
		conv, err := NewStackConverter(version)
		assert.NilError(t, err)
		s, err := conv.FromCompose(ioutil.Discard, "test", testData)
		assert.NilError(t, err)
		assert.Equal(t, len(s.Spec.Services[0].InternalPorts), 0)
	}
}

func TestParseExpose(t *testing.T) {
	testData := loadTestStackWith(t, "expose")
	conv, err := NewStackConverter("v1alpha3")
	assert.NilError(t, err)
	s, err := conv.FromCompose(ioutil.Discard, "test", testData)
	assert.NilError(t, err)
	expected := []v1alpha3.InternalPort{
		{Port: 1, Protocol: v1.ProtocolTCP},
		{Port: 2, Protocol: v1.ProtocolTCP},
		{Port: 3, Protocol: v1.ProtocolTCP},
		{Port: 4, Protocol: v1.ProtocolTCP},
		{Port: 5, Protocol: v1.ProtocolUDP},
		{Port: 6, Protocol: v1.ProtocolUDP},
		{Port: 7, Protocol: v1.ProtocolUDP},
		{Port: 8, Protocol: v1.ProtocolUDP},
	}
	assert.DeepEqual(t, s.Spec.Services[0].InternalPorts, expected)
}

9 cli/command/stack/kubernetes/testdata/compose-with-expose.yml vendored Normal file
@@ -0,0 +1,9 @@
version: "3.7"
services:
  test:
    image: "some-image"
    expose:
      - "1"       # default protocol, single port
      - "2-4"     # default protocol, port range
      - "5/udp"   # specific protocol, single port
      - "6-8/udp" # specific protocol, port range

@@ -2,4 +2,5 @@ version: "3.7"
 services:
   test:
     image: "some-image"
-    x-pull-policy: "Never"
+    x-kubernetes:
+      pull_policy: "Never"
@@ -2,4 +2,5 @@ version: "3.7"
 services:
   test:
     image: "some-private-image"
-    x-pull-secret: "some-secret"
+    x-kubernetes:
+      pull_secret: "some-secret"

@@ -15,6 +15,7 @@ import (
 	"github.com/docker/cli/cli/version"
 	"github.com/docker/cli/kubernetes"
 	"github.com/docker/cli/templates"
+	kubeapi "github.com/docker/compose-on-kubernetes/api"
 	"github.com/docker/docker/api/types"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
@@ -243,7 +244,7 @@ func getKubernetesVersion(dockerCli command.Cli, kubeConfig string) *kubernetesV
 		err          error
 	)
 	if dockerCli.CurrentContext() == "" {
-		clientConfig = kubernetes.NewKubernetesConfig(kubeConfig)
+		clientConfig = kubeapi.NewKubernetesConfig(kubeConfig)
 	} else {
 		clientConfig, err = kubecontext.ConfigFromContext(dockerCli.CurrentContext(), dockerCli.ContextStore())
 	}

@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io/ioutil"
 	"os"
+	"runtime"

 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -69,12 +70,14 @@ func loadPrivKey(streams command.Streams, keyPath string, options keyLoadOptions
 }

 func getPrivKeyBytesFromPath(keyPath string) ([]byte, error) {
-	fileInfo, err := os.Stat(keyPath)
-	if err != nil {
-		return nil, err
-	}
-	if fileInfo.Mode()&nonOwnerReadWriteMask != 0 {
-		return nil, fmt.Errorf("private key file %s must not be readable or writable by others", keyPath)
+	if runtime.GOOS != "windows" {
+		fileInfo, err := os.Stat(keyPath)
+		if err != nil {
+			return nil, err
+		}
+		if fileInfo.Mode()&nonOwnerReadWriteMask != 0 {
+			return nil, fmt.Errorf("private key file %s must not be readable or writable by others", keyPath)
+		}
 	}

 	from, err := os.OpenFile(keyPath, os.O_RDONLY, notary.PrivExecPerms)

@@ -161,3 +161,34 @@ func ValidateOutputPathFileMode(fileMode os.FileMode) error {
	}
	return nil
}

func stringSliceIndex(s, subs []string) int {
	j := 0
	if len(subs) > 0 {
		for i, x := range s {
			if j < len(subs) && subs[j] == x {
				j++
			} else {
				j = 0
			}
			if len(subs) == j {
				return i + 1 - j
			}
		}
	}
	return -1
}

// StringSliceReplaceAt replaces the sub-slice old, with the sub-slice new, in the string
// slice s, returning a new slice and a boolean indicating if the replacement happened.
// requireIndex is the index at which old needs to be found (or -1 to disregard that).
func StringSliceReplaceAt(s, old, new []string, requireIndex int) ([]string, bool) {
	idx := stringSliceIndex(s, old)
	if (requireIndex != -1 && requireIndex != idx) || idx == -1 {
		return s, false
	}
	out := append([]string{}, s[:idx]...)
	out = append(out, new...)
	out = append(out, s[idx+len(old):]...)
	return out, true
}

33 cli/command/utils_test.go Normal file
@@ -0,0 +1,33 @@
package command

import (
	"testing"

	"gotest.tools/assert"
)

func TestStringSliceReplaceAt(t *testing.T) {
	out, ok := StringSliceReplaceAt([]string{"abc", "foo", "bar", "bax"}, []string{"foo", "bar"}, []string{"baz"}, -1)
	assert.Assert(t, ok)
	assert.DeepEqual(t, []string{"abc", "baz", "bax"}, out)

	out, ok = StringSliceReplaceAt([]string{"foo"}, []string{"foo", "bar"}, []string{"baz"}, -1)
	assert.Assert(t, !ok)
	assert.DeepEqual(t, []string{"foo"}, out)

	out, ok = StringSliceReplaceAt([]string{"abc", "foo", "bar", "bax"}, []string{"foo", "bar"}, []string{"baz"}, 0)
	assert.Assert(t, !ok)
	assert.DeepEqual(t, []string{"abc", "foo", "bar", "bax"}, out)

	out, ok = StringSliceReplaceAt([]string{"foo", "bar", "bax"}, []string{"foo", "bar"}, []string{"baz"}, 0)
	assert.Assert(t, ok)
	assert.DeepEqual(t, []string{"baz", "bax"}, out)

	out, ok = StringSliceReplaceAt([]string{"abc", "foo", "bar", "baz"}, []string{"foo", "bar"}, nil, -1)
	assert.Assert(t, ok)
	assert.DeepEqual(t, []string{"abc", "baz"}, out)

	out, ok = StringSliceReplaceAt([]string{"foo"}, nil, []string{"baz"}, -1)
	assert.Assert(t, !ok)
	assert.DeepEqual(t, []string{"foo"}, out)
}

@@ -106,11 +106,23 @@ func Secrets(namespace Namespace, secrets map[string]composetypes.SecretConfig)
 			continue
 		}

-		obj, err := fileObjectConfig(namespace, name, composetypes.FileObjectConfig(secret))
+		var obj swarmFileObject
+		var err error
+		if secret.Driver != "" {
+			obj, err = driverObjectConfig(namespace, name, composetypes.FileObjectConfig(secret))
+		} else {
+			obj, err = fileObjectConfig(namespace, name, composetypes.FileObjectConfig(secret))
+		}
 		if err != nil {
 			return nil, err
 		}
 		spec := swarm.SecretSpec{Annotations: obj.Annotations, Data: obj.Data}
+		if secret.Driver != "" {
+			spec.Driver = &swarm.Driver{
+				Name:    secret.Driver,
+				Options: secret.DriverOpts,
+			}
+		}
 		if secret.TemplateDriver != "" {
 			spec.Templating = &swarm.Driver{
 				Name: secret.TemplateDriver,
@@ -149,6 +161,22 @@ type swarmFileObject struct {
 	Data []byte
 }

+func driverObjectConfig(namespace Namespace, name string, obj composetypes.FileObjectConfig) (swarmFileObject, error) {
+	if obj.Name != "" {
+		name = obj.Name
+	} else {
+		name = namespace.Scope(name)
+	}
+
+	return swarmFileObject{
+		Annotations: swarm.Annotations{
+			Name:   name,
+			Labels: AddStackLabel(namespace, obj.Labels),
+		},
+		Data: []byte{},
+	}, nil
+}
+
 func fileObjectConfig(namespace Namespace, name string, obj composetypes.FileObjectConfig) (swarmFileObject, error) {
 	data, err := ioutil.ReadFile(obj.File)
 	if err != nil {

@@ -40,7 +40,7 @@ func Services(
 	if err != nil {
 		return nil, errors.Wrapf(err, "service %s", service.Name)
 	}
-	configs, err := convertServiceConfigObjs(client, namespace, service.Configs, config.Configs)
+	configs, err := convertServiceConfigObjs(client, namespace, service, config.Configs)
 	if err != nil {
 		return nil, errors.Wrapf(err, "service %s", service.Name)
 	}
@@ -109,7 +109,9 @@ func Service(
 	}

 	var privileges swarm.Privileges
-	privileges.CredentialSpec, err = convertCredentialSpec(service.CredentialSpec)
+	privileges.CredentialSpec, err = convertCredentialSpec(
+		namespace, service.CredentialSpec, configs,
+	)
 	if err != nil {
 		return swarm.ServiceSpec{}, err
 	}

@@ -286,11 +288,17 @@ func convertServiceSecrets(
 	return secrs, err
 }

 // convertServiceConfigObjs takes an API client, a namespace, a ServiceConfig,
 // and a set of compose Config specs, and creates the swarm ConfigReferences
 // required by the service. Unlike convertServiceSecrets, this takes the whole
 // ServiceConfig, because some Configs may be needed as a result of other
 // fields (like CredentialSpecs).
 //
 // TODO: fix configs API so that ConfigsAPIClient is not required here
 func convertServiceConfigObjs(
 	client client.ConfigAPIClient,
 	namespace Namespace,
-	configs []composetypes.ServiceConfigObjConfig,
+	service composetypes.ServiceConfig,
 	configSpecs map[string]composetypes.ConfigObjConfig,
 ) ([]*swarm.ConfigReference, error) {
 	refs := []*swarm.ConfigReference{}
@@ -302,7 +310,7 @@ func convertServiceConfigObjs(
 		}
 		return composetypes.FileObjectConfig(configSpec), nil
 	}
-	for _, config := range configs {
+	for _, config := range service.Configs {
 		obj, err := convertFileObject(namespace, composetypes.FileReferenceConfig(config), lookup)
 		if err != nil {
 			return nil, err
@@ -315,6 +323,38 @@ func convertServiceConfigObjs(
 		})
 	}

+	// finally, after converting all of the file objects, create any
+	// Runtime-type configs that are needed. these are configs that are not
+	// mounted into the container, but are used in some other way by the
+	// container runtime. Currently, this only means CredentialSpecs, but in
+	// the future it may be used for other fields
+
+	// grab the CredentialSpec out of the Service
+	credSpec := service.CredentialSpec
+	// if the credSpec uses a config, then we should grab the config name, and
+	// create a config reference for it. A File or Registry-type CredentialSpec
+	// does not need this operation.
+	if credSpec.Config != "" {
+		// look up the config in the configSpecs.
+		obj, err := lookup(credSpec.Config)
+		if err != nil {
+			return nil, err
+		}
+
+		// get the actual correct name.
+		name := namespace.Scope(credSpec.Config)
+		if obj.Name != "" {
+			name = obj.Name
+		}
+
+		// now append a Runtime-type config.
+		refs = append(refs, &swarm.ConfigReference{
+			ConfigName: name,
+			Runtime:    &swarm.ConfigReferenceRuntimeTarget{},
+		})
+	}
+
 	confs, err := servicecli.ParseConfigs(client, refs)
 	if err != nil {
 		return nil, err

@@ -342,11 +382,6 @@ func convertFileObject(
 	config composetypes.FileReferenceConfig,
 	lookup func(key string) (composetypes.FileObjectConfig, error),
 ) (swarmReferenceObject, error) {
-	target := config.Target
-	if target == "" {
-		target = config.Source
-	}
-
 	obj, err := lookup(config.Source)
 	if err != nil {
 		return swarmReferenceObject{}, err
@@ -357,6 +392,11 @@ func convertFileObject(
 		source = obj.Name
 	}

+	target := config.Target
+	if target == "" {
+		target = config.Source
+	}
+
 	uid := config.UID
 	gid := config.GID
 	if uid == "" {
@@ -599,13 +639,46 @@ func convertDNSConfig(DNS []string, DNSSearch []string) (*swarm.DNSConfig, error
 	return nil, nil
 }

-func convertCredentialSpec(spec composetypes.CredentialSpecConfig) (*swarm.CredentialSpec, error) {
-	if spec.File == "" && spec.Registry == "" {
-		return nil, nil
-	}
-	if spec.File != "" && spec.Registry != "" {
-		return nil, errors.New("Invalid credential spec - must provide one of `File` or `Registry`")
-	}
+func convertCredentialSpec(namespace Namespace, spec composetypes.CredentialSpecConfig, refs []*swarm.ConfigReference) (*swarm.CredentialSpec, error) {
+	var o []string
+
+	// Config was added in API v1.40
+	if spec.Config != "" {
+		o = append(o, `"Config"`)
+	}
+	if spec.File != "" {
+		o = append(o, `"File"`)
+	}
+	if spec.Registry != "" {
+		o = append(o, `"Registry"`)
+	}
+	l := len(o)
+	switch {
+	case l == 0:
+		return nil, nil
+	case l == 2:
+		return nil, errors.Errorf("invalid credential spec: cannot specify both %s and %s", o[0], o[1])
+	case l > 2:
+		return nil, errors.Errorf("invalid credential spec: cannot specify both %s, and %s", strings.Join(o[:l-1], ", "), o[l-1])
+	}
 	swarmCredSpec := swarm.CredentialSpec(spec)
+	// if we're using a swarm Config for the credential spec, over-write it
+	// here with the config ID
+	if swarmCredSpec.Config != "" {
+		for _, config := range refs {
+			if swarmCredSpec.Config == config.ConfigName {
+				swarmCredSpec.Config = config.ConfigID
+				return &swarmCredSpec, nil
+			}
+		}
+		// if none of the configs match, try namespacing
+		for _, config := range refs {
+			if namespace.Scope(swarmCredSpec.Config) == config.ConfigName {
+				swarmCredSpec.Config = config.ConfigID
+				return &swarmCredSpec, nil
+			}
+		}
+		return nil, errors.Errorf("invalid credential spec: spec specifies config %v, but no such config can be found", swarmCredSpec.Config)
+	}
 	return &swarmCredSpec, nil
 }

@@ -314,30 +314,98 @@ func TestConvertDNSConfigSearch(t *testing.T) {
 }

 func TestConvertCredentialSpec(t *testing.T) {
-	swarmSpec, err := convertCredentialSpec(composetypes.CredentialSpecConfig{})
-	assert.NilError(t, err)
-	assert.Check(t, is.Nil(swarmSpec))
+	tests := []struct {
+		name        string
+		in          composetypes.CredentialSpecConfig
+		out         *swarm.CredentialSpec
+		configs     []*swarm.ConfigReference
+		expectedErr string
+	}{
+		{
+			name: "empty",
+		},
+		{
+			name:        "config-and-file",
+			in:          composetypes.CredentialSpecConfig{Config: "0bt9dmxjvjiqermk6xrop3ekq", File: "somefile.json"},
+			expectedErr: `invalid credential spec: cannot specify both "Config" and "File"`,
+		},
+		{
+			name:        "config-and-registry",
+			in:          composetypes.CredentialSpecConfig{Config: "0bt9dmxjvjiqermk6xrop3ekq", Registry: "testing"},
+			expectedErr: `invalid credential spec: cannot specify both "Config" and "Registry"`,
+		},
+		{
+			name:        "file-and-registry",
+			in:          composetypes.CredentialSpecConfig{File: "somefile.json", Registry: "testing"},
+			expectedErr: `invalid credential spec: cannot specify both "File" and "Registry"`,
+		},
+		{
+			name:        "config-and-file-and-registry",
+			in:          composetypes.CredentialSpecConfig{Config: "0bt9dmxjvjiqermk6xrop3ekq", File: "somefile.json", Registry: "testing"},
+			expectedErr: `invalid credential spec: cannot specify both "Config", "File", and "Registry"`,
+		},
+		{
+			name:        "missing-config-reference",
+			in:          composetypes.CredentialSpecConfig{Config: "missing"},
+			expectedErr: "invalid credential spec: spec specifies config missing, but no such config can be found",
+			configs: []*swarm.ConfigReference{
+				{
+					ConfigName: "someName",
+					ConfigID:   "missing",
+				},
+			},
+		},
+		{
+			name: "namespaced-config",
+			in:   composetypes.CredentialSpecConfig{Config: "name"},
+			configs: []*swarm.ConfigReference{
+				{
+					ConfigName: "namespaced-config_name",
+					ConfigID:   "someID",
+				},
+			},
+			out: &swarm.CredentialSpec{Config: "someID"},
+		},
+		{
+			name: "config",
+			in:   composetypes.CredentialSpecConfig{Config: "someName"},
+			configs: []*swarm.ConfigReference{
+				{
+					ConfigName: "someOtherName",
+					ConfigID:   "someOtherID",
+				}, {
+					ConfigName: "someName",
+					ConfigID:   "someID",
+				},
+			},
+			out: &swarm.CredentialSpec{Config: "someID"},
+		},
+		{
+			name: "file",
+			in:   composetypes.CredentialSpecConfig{File: "somefile.json"},
+			out:  &swarm.CredentialSpec{File: "somefile.json"},
+		},
+		{
+			name: "registry",
+			in:   composetypes.CredentialSpecConfig{Registry: "testing"},
+			out:  &swarm.CredentialSpec{Registry: "testing"},
+		},
+	}

-	swarmSpec, err = convertCredentialSpec(composetypes.CredentialSpecConfig{
-		File: "/foo",
-	})
-	assert.NilError(t, err)
-	assert.Check(t, is.Equal(swarmSpec.File, "/foo"))
-	assert.Check(t, is.Equal(swarmSpec.Registry, ""))
-
-	swarmSpec, err = convertCredentialSpec(composetypes.CredentialSpecConfig{
-		Registry: "foo",
-	})
-	assert.NilError(t, err)
-	assert.Check(t, is.Equal(swarmSpec.File, ""))
-	assert.Check(t, is.Equal(swarmSpec.Registry, "foo"))
-
-	swarmSpec, err = convertCredentialSpec(composetypes.CredentialSpecConfig{
-		File:     "/asdf",
-		Registry: "foo",
-	})
-	assert.Check(t, is.ErrorContains(err, ""))
-	assert.Check(t, is.Nil(swarmSpec))
+	for _, tc := range tests {
+		tc := tc
+		t.Run(tc.name, func(t *testing.T) {
+			namespace := NewNamespace(tc.name)
+			swarmSpec, err := convertCredentialSpec(namespace, tc.in, tc.configs)
+
+			if tc.expectedErr != "" {
+				assert.Error(t, err, tc.expectedErr)
+			} else {
+				assert.NilError(t, err)
+			}
+			assert.DeepEqual(t, swarmSpec, tc.out)
+		})
+	}
 }

 func TestConvertUpdateConfigOrder(t *testing.T) {
@@ -467,9 +535,14 @@ func TestConvertServiceSecrets(t *testing.T) {

 func TestConvertServiceConfigs(t *testing.T) {
 	namespace := Namespace{name: "foo"}
-	configs := []composetypes.ServiceConfigObjConfig{
-		{Source: "foo_config"},
-		{Source: "bar_config"},
+	service := composetypes.ServiceConfig{
+		Configs: []composetypes.ServiceConfigObjConfig{
+			{Source: "foo_config"},
+			{Source: "bar_config"},
+		},
+		CredentialSpec: composetypes.CredentialSpecConfig{
+			Config: "baz_config",
+		},
 	}
 	configSpecs := map[string]composetypes.ConfigObjConfig{
 		"foo_config": {
@@ -478,18 +551,23 @@ func TestConvertServiceConfigs(t *testing.T) {
 		"bar_config": {
 			Name: "bar_config",
 		},
+		"baz_config": {
+			Name: "baz_config",
+		},
 	}
 	client := &fakeClient{
 		configListFunc: func(opts types.ConfigListOptions) ([]swarm.Config, error) {
 			assert.Check(t, is.Contains(opts.Filters.Get("name"), "foo_config"))
 			assert.Check(t, is.Contains(opts.Filters.Get("name"), "bar_config"))
+			assert.Check(t, is.Contains(opts.Filters.Get("name"), "baz_config"))
 			return []swarm.Config{
 				{Spec: swarm.ConfigSpec{Annotations: swarm.Annotations{Name: "foo_config"}}},
 				{Spec: swarm.ConfigSpec{Annotations: swarm.Annotations{Name: "bar_config"}}},
+				{Spec: swarm.ConfigSpec{Annotations: swarm.Annotations{Name: "baz_config"}}},
 			}, nil
 		},
 	}
-	refs, err := convertServiceConfigObjs(client, namespace, configs, configSpecs)
+	refs, err := convertServiceConfigObjs(client, namespace, service, configSpecs)
 	assert.NilError(t, err)
 	expected := []*swarm.ConfigReference{
 		{
@@ -501,6 +579,10 @@ func TestConvertServiceConfigs(t *testing.T) {
 				Mode: 0444,
 			},
 		},
+		{
+			ConfigName: "baz_config",
+			Runtime:    &swarm.ConfigReferenceRuntimeTarget{},
+		},
 		{
 			ConfigName: "foo_config",
 			File: &swarm.ConfigReferenceFileTarget{
|
||||
|
||||
@@ -16,6 +16,8 @@ var interpolateTypeCastMapping = map[interp.Path]interp.Cast{
servicePath("deploy", "replicas"): toInt,
servicePath("deploy", "update_config", "parallelism"): toInt,
servicePath("deploy", "update_config", "max_failure_ratio"): toFloat,
servicePath("deploy", "rollback_config", "parallelism"): toInt,
servicePath("deploy", "rollback_config", "max_failure_ratio"): toFloat,
servicePath("deploy", "restart_policy", "max_attempts"): toInt,
servicePath("ports", interp.PathMatchList, "target"): toInt,
servicePath("ports", interp.PathMatchList, "published"): toInt,

@@ -479,12 +479,13 @@ func resolveVolumePaths(volumes []types.ServiceVolumeConfig, workingDir string,
}

filePath := expandUser(volume.Source, lookupEnv)
// Check for a Unix absolute path first, to handle a Windows client
// with a Unix daemon. This handles a Windows client connecting to a
// Unix daemon. Note that this is not required for Docker for Windows
// when specifying a local Windows path, because Docker for Windows
// translates the Windows path into a valid path within the VM.
if !path.IsAbs(filePath) {
// Check if source is an absolute path (either Unix or Windows), to
// handle a Windows client with a Unix daemon or vice-versa.
//
// Note that this is not required for Docker for Windows when specifying
// a local Windows path, because Docker for Windows translates the Windows
// path into a valid path within the VM.
if !path.IsAbs(filePath) && !isAbs(filePath) {
filePath = absPath(workingDir, filePath)
}
volume.Source = filePath
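The old and new conditions above differ in one check: Go's `path.IsAbs` only recognizes Unix-style absolute paths, so a Windows-style source such as `c:\data` would previously have been joined onto `workingDir`. A minimal sketch of the combined check; `isWindowsAbs` here is a hypothetical stand-in for the loader's `isAbs` helper and handles only the drive-letter form, not UNC paths:

```go
package main

import (
	"fmt"
	"path"
)

// isWindowsAbs is a simplified stand-in for the loader's isAbs helper:
// it accepts only the drive-letter form ("c:\..." or "c:/..."), no UNC paths.
func isWindowsAbs(p string) bool {
	if len(p) < 3 || p[1] != ':' {
		return false
	}
	c := p[0]
	if !('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
		return false
	}
	return p[2] == '\\' || p[2] == '/'
}

func main() {
	for _, p := range []string{"/data", `c:\data`, "./relative"} {
		// The source is left untouched only if either check reports absolute;
		// otherwise it would be resolved against the working directory.
		abs := path.IsAbs(p) || isWindowsAbs(p)
		fmt.Printf("%q absolute=%v\n", p, abs)
	}
}
```

With only `path.IsAbs`, the second case would be treated as relative, which is exactly the bug the diff fixes.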
@@ -634,7 +635,8 @@ func LoadConfigObjs(source map[string]interface{}, details types.ConfigDetails)

func loadFileObjectConfig(name string, objType string, obj types.FileObjectConfig, details types.ConfigDetails) (types.FileObjectConfig, error) {
// if "external: true"
if obj.External.External {
switch {
case obj.External.External:
// handle deprecated external.name
if obj.External.Name != "" {
if obj.Name != "" {
@@ -651,7 +653,11 @@ func loadFileObjectConfig(name string, objType string, obj types.FileObjectConfi
}
}
// if not "external: true"
} else {
case obj.Driver != "":
if obj.File != "" {
return obj, errors.Errorf("%[1]s %[2]s: %[1]s.driver and %[1]s.file conflict; only use %[1]s.driver", objType, name)
}
default:
obj.File = absPath(details.WorkingDir, obj.File)
}


@@ -295,6 +295,20 @@ configs:
assert.Assert(t, is.Len(actual.Configs, 1))
}

func TestLoadV38(t *testing.T) {
actual, err := loadYAML(`
version: "3.8"
services:
foo:
image: busybox
credential_spec:
config: "0bt9dmxjvjiqermk6xrop3ekq"
`)
assert.NilError(t, err)
assert.Assert(t, is.Len(actual.Services, 1))
assert.Check(t, is.Equal(actual.Services[0].CredentialSpec.Config, "0bt9dmxjvjiqermk6xrop3ekq"))
}

func TestParseAndLoad(t *testing.T) {
actual, err := loadYAML(sampleYAML)
assert.NilError(t, err)
@@ -568,7 +582,7 @@ volumes:

func TestLoadWithInterpolationCastFull(t *testing.T) {
dict, err := ParseYAML([]byte(`
version: "3.4"
version: "3.7"
services:
web:
configs:
@@ -585,6 +599,9 @@ services:
update_config:
parallelism: $theint
max_failure_ratio: $thefloat
rollback_config:
parallelism: $theint
max_failure_ratio: $thefloat
restart_policy:
max_attempts: $theint
ports:
@@ -635,7 +652,7 @@ networks:
assert.NilError(t, err)
expected := &types.Config{
Filename: "filename.yml",
Version: "3.4",
Version: "3.7",
Services: []types.ServiceConfig{
{
Name: "web",
@@ -661,6 +678,10 @@ networks:
Parallelism: uint64Ptr(555),
MaxFailureRatio: 3.14,
},
RollbackConfig: &types.UpdateConfig{
Parallelism: uint64Ptr(555),
MaxFailureRatio: 3.14,
},
RestartPolicy: &types.RestartPolicy{
MaxAttempts: uint64Ptr(555),
},

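The cast mapping the test exercises exists because variable interpolation always produces strings, so typed fields such as `deploy.rollback_config.parallelism` need an explicit cast afterwards. A sketch of what `toInt` and `toFloat` plausibly look like (the signatures are assumed for illustration, not quoted from the source):

```go
package main

import (
	"fmt"
	"strconv"
)

// Sketches of the cast helpers registered in interpolateTypeCastMapping:
// after interpolation, "$theint" has become a plain string like "555",
// and these convert it to the type the schema expects.
func toInt(value string) (interface{}, error) {
	return strconv.Atoi(value)
}

func toFloat(value string) (interface{}, error) {
	return strconv.ParseFloat(value, 64)
}

func main() {
	n, _ := toInt("555")
	f, _ := toFloat("3.14")
	fmt.Println(n, f)
}
```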
@@ -964,6 +985,84 @@ services:
assert.Error(t, err, `invalid mount config for type "bind": field Source must not be empty`)
}

func TestLoadBindMountSourceIsWindowsAbsolute(t *testing.T) {
tests := []struct {
doc string
yaml string
expected types.ServiceVolumeConfig
}{
{
doc: "Z-drive lowercase",
yaml: `
version: '3.3'

services:
windows:
image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
volumes:
- type: bind
source: z:\
target: c:\data
`,
expected: types.ServiceVolumeConfig{Type: "bind", Source: `z:\`, Target: `c:\data`},
},
{
doc: "Z-drive uppercase",
yaml: `
version: '3.3'

services:
windows:
image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
volumes:
- type: bind
source: Z:\
target: C:\data
`,
expected: types.ServiceVolumeConfig{Type: "bind", Source: `Z:\`, Target: `C:\data`},
},
{
doc: "Z-drive subdirectory",
yaml: `
version: '3.3'

services:
windows:
image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
volumes:
- type: bind
source: Z:\some-dir
target: C:\data
`,
expected: types.ServiceVolumeConfig{Type: "bind", Source: `Z:\some-dir`, Target: `C:\data`},
},
{
doc: "forward-slashes",
yaml: `
version: '3.3'

services:
app:
image: app:latest
volumes:
- type: bind
source: /z/some-dir
target: /c/data
`,
expected: types.ServiceVolumeConfig{Type: "bind", Source: `/z/some-dir`, Target: `/c/data`},
},
}

for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
config, err := loadYAML(tc.yaml)
assert.NilError(t, err)
assert.Check(t, is.Len(config.Services[0].Volumes, 1))
assert.Check(t, is.DeepEqual(tc.expected, config.Services[0].Volumes[0]))
})
}
}

func TestLoadBindMountWithSource(t *testing.T) {
config, err := loadYAML(`
version: "3.5"
@@ -1586,3 +1685,67 @@ secrets:
}
assert.DeepEqual(t, config, expected, cmpopts.EquateEmpty())
}

func TestLoadSecretDriver(t *testing.T) {
config, err := loadYAML(`
version: '3.8'
services:
hello-world:
image: redis:alpine
secrets:
- secret
configs:
- config

configs:
config:
name: config
external: true

secrets:
secret:
name: secret
driver: secret-bucket
driver_opts:
OptionA: value for driver option A
OptionB: value for driver option B
`)
assert.NilError(t, err)
expected := &types.Config{
Filename: "filename.yml",
Version: "3.8",
Services: types.Services{
{
Name: "hello-world",
Image: "redis:alpine",
Configs: []types.ServiceConfigObjConfig{
{
Source: "config",
},
},
Secrets: []types.ServiceSecretConfig{
{
Source: "secret",
},
},
},
},
Configs: map[string]types.ConfigObjConfig{
"config": {
Name: "config",
External: types.External{External: true},
},
},
Secrets: map[string]types.SecretConfig{
"secret": {
Name: "secret",
Driver: "secret-bucket",
DriverOpts: map[string]string{
"OptionA": "value for driver option A",
"OptionB": "value for driver option B",
},
},
},
}
assert.DeepEqual(t, config, expected, cmpopts.EquateEmpty())
}

66  cli/compose/loader/windows_path.go  (new file)
@@ -0,0 +1,66 @@
package loader

// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// https://github.com/golang/go/blob/master/LICENSE

// This file contains utilities to check for Windows absolute paths on Linux.
// The code in this file was largely copied from the Golang filepath package
// https://github.com/golang/go/blob/1d0e94b1e13d5e8a323a63cd1cc1ef95290c9c36/src/path/filepath/path_windows.go#L12-L65

func isSlash(c uint8) bool {
	return c == '\\' || c == '/'
}

// isAbs reports whether the path is a Windows absolute path.
func isAbs(path string) (b bool) {
	l := volumeNameLen(path)
	if l == 0 {
		return false
	}
	path = path[l:]
	if path == "" {
		return false
	}
	return isSlash(path[0])
}

// volumeNameLen returns length of the leading volume name on Windows.
// It returns 0 elsewhere.
// nolint: gocyclo
func volumeNameLen(path string) int {
	if len(path) < 2 {
		return 0
	}
	// with drive letter
	c := path[0]
	if path[1] == ':' && ('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
		return 2
	}
	// is it UNC? https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx
	if l := len(path); l >= 5 && isSlash(path[0]) && isSlash(path[1]) &&
		!isSlash(path[2]) && path[2] != '.' {
		// first, leading `\\` and next shouldn't be `\`. its server name.
		for n := 3; n < l-1; n++ {
			// second, next '\' shouldn't be repeated.
			if isSlash(path[n]) {
				n++
				// third, following something characters. its share name.
				if !isSlash(path[n]) {
					if path[n] == '.' {
						break
					}
					for ; n < l; n++ {
						if isSlash(path[n]) {
							break
						}
					}
					return n
				}
				break
			}
		}
	}
	return 0
}
61  cli/compose/loader/windows_path_test.go  (new file)
@@ -0,0 +1,61 @@
package loader

// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// https://github.com/golang/go/blob/master/LICENSE

// The code in this file was copied from the Golang filepath package with some
// small modifications to run it on non-Windows platforms.
// https://github.com/golang/go/blob/1d0e94b1e13d5e8a323a63cd1cc1ef95290c9c36/src/path/filepath/path_test.go#L711-L763

import "testing"

type IsAbsTest struct {
	path  string
	isAbs bool
}

var isabstests = []IsAbsTest{
	{"", false},
	{"/", true},
	{"/usr/bin/gcc", true},
	{"..", false},
	{"/a/../bb", true},
	{".", false},
	{"./", false},
	{"lala", false},
}

var winisabstests = []IsAbsTest{
	{`C:\`, true},
	{`c\`, false},
	{`c::`, false},
	{`c:`, false},
	{`/`, false},
	{`\`, false},
	{`\Windows`, false},
	{`c:a\b`, false},
	{`c:\a\b`, true},
	{`c:/a/b`, true},
	{`\\host\share\foo`, true},
	{`//host/share/foo/bar`, true},
}

func TestIsAbs(t *testing.T) {
	var tests []IsAbsTest
	tests = append(tests, winisabstests...)
	// All non-windows tests should fail, because they have no volume letter.
	for _, test := range isabstests {
		tests = append(tests, IsAbsTest{test.path, false})
	}
	// All non-windows test should work as intended if prefixed with volume letter.
	for _, test := range isabstests {
		tests = append(tests, IsAbsTest{"c:" + test.path, test.isAbs})
	}

	for _, test := range tests {
		if r := isAbs(test.path); r != test.isAbs {
			t.Errorf("IsAbs(%q) = %v, want %v", test.path, r, test.isAbs)
		}
	}
}
@@ -510,45 +510,45 @@ bnBpPlHfjORjkTRf1wyAwiYqMXd9/G6313QfoXs6/sbZ66r6e179PwAA//8ZL3SpvkUAAA==

"/data/config_schema_v3.8.json": {
local: "data/config_schema_v3.8.json",
size: 18006,
size: 18246,
modtime: 1518458244,
compressed: `
H4sIAAAAAAAC/+xcS4/juBG++1cI2r1tPwbIIkjmlmNOyTkNj0BTZZvbFMktUp72DvzfAz1bokiRtuXu
3qQHGEy3VHwU68GvHpofqyRJf9Z0DwVJvybp3hj19fHxNy3FffP0QeLuMUeyNfdffn1snv2U3lXjWF4N
oVJs2S5r3mSHvzz87aEa3pCYo4KKSG5+A2qaZwi/lwyhGvyUHgA1kyJd362qdwqlAjQMdPo1qTaXJD1J
92AwrTbIxC6tH5/qGZIk1YAHRgcz9Fv96fF1/see7M6edbDZ+rkixgCKf0/3Vr/+9kTu//jH/X++3P/9
Ibtf//Lz6HV1vgjbZvkctkwww6To1097ylP706lfmOR5TUz4aO0t4RrGPAsw3yU+h3juyd6J53Z9B89j
dg6Sl0VQgh3VOzHTLL+M/DRQBBNW2Ybq3TS2Wn4ZhhuvEWK4o3onhpvlr2N41THt3mP67eW++vdUzzk7
XzPLYH81EyOf5zpOl8/xn2d/oJ6TzEFxeax37j6zhqAAYdL+mJIk3ZSM5/apSwH/qqZ4GjxMkh+2ex/M
U78f/eZXiv69h5f+PZXCwIupmZpfujkCSZ8Bt4xD7AiCjaZ7jowzbTKJWc6ocY7nZAP8qhkooXvItiiL
4CzbrOFEOyfqPHgk54bgDqJPVu+LTLM/Ruf6lDJhYAeY3vVj1ydr7GSysGHaNl39Wa8cE6aUqIzk+YgJ
gkiO1Y6YgUK7+UvSUrDfS/hnS2KwBHveHKVafuIdylJlimBlhfNnn1JZFEQsZZrn8BFx8pNLYmTv7RrD
V/1qo215uEkitNLhLgLuJuxwKk2XJdJY/3GuHSVJWrI8nnh3DnEh8/G+RVlsANPThHhipKPf1yvXG0v6
hjABmAlSQFCPEXIQhhGeaQXUpzMOoc2JK4108ynCjmmDRyftyuOp4rzUkMscFIhcZ004dL4fT3PoY6NF
fU4u5u6nZprqhqr2lloDMw0E6f7C8bIgTMRoCAiDRyVZ4xM/nLMDcch6bTv7GEAcGEpRdB4/DicMxr8o
qeF6T9vf2i3jd72DWFsWs5VYkGqz3dpeK5lq3vAAhzxU+JrwjDPxvLyKw4tBku2lNpdAsXQPhJs93QN9
nhk+pBqNltrEKDkryC5MJNj4LtlIyYGIMZGiwXm05MS0uZk5wosBbLqoKAfTyt2uIvXp7yQgigwlcmQH
wFi8K9VrHOe69ENAIxj4jki/PTRx74yN1j9xPgXYrvvcfmJfibGX26tUCkIrpI2gdUij2jgkm8CRV9oJ
sY71+xeFR+eHpVGiC+YugiDXB2TjtSwO1HZi54xo0NfFmQMvdPg1UidcY/86O9Yz1DtnfFQZmGqInjl3
bmQdxtO3DHrVOCYY+4raQwwNTEk0bxKmvfqpV/jQLD6N3GxxRw26Tbg346Xigr0uB+IeoMoNZ3oP+Tlj
UBpJJY8zDGdWK94YZkK/i5CeQnZgHHYWxy4Yg0DyTAp+jKDUhmAwYaKBlsjMMZPKLI4x3RmwV63vE2Dj
DVm1g88syf9PlkQfNTWXYWttciYyqUAEbUMbqbIdEgqZAmTSeRQjB5uX2IQGk2k02wnCQ2ZmCrW9MKVg
TNjYS84K5jcaZ5ooiNcarOaGaDPwLMplz0QI8wFCRGSwJ3jG1VEb5tZzP60iMdC4C6Ce767dyNpJfxb0
srex9qIft1GVOhjE1TRCZxFXu6Oc/efw0CMZ1eTri/x4u1Kk77y1149GBOMSoWbagKDH+IU2bFJXOTfu
iou6aiqy86di3LFJtK22nQ5vwoqQVCqPaK5ko79Sbs9Fh+H8wantOWfi2IIJVpRF+jX54otY40/mxtDe
ygHNAHqf7/0u8bm62XOGc7p8mu/9GPdVnNmcYqVq5zoqhqTBLpX57o5Q5wXTZGMVo5x5W2EAD26AFUZo
CAaZVR/qsOsQYoH+mFUUwwqQpbkUnhI05wNcu4dt0CjT1WPmVGhAaWvQU69CXdolqCYxeAREXtfBosAL
guKMEh0CiFck+VFyviH0OWsbrhaq3SqChHPgTBcx6DbNgZPjRZrTFLQI4yVCRmhESaSVlWBG4uVLFuQl
65atSQJ229gp5uBbE0R9z9j4srGM+y1DbZo0hFTtb2P3v2Cpu1Q5MfCpEp8qMczQ1bGBXkodnEmAZXoK
VRlbr0gLKGS4c+TalP+kYUVXMMFXgPwoB+Cg3oEAZDQbaYPnypnS3qiKcr1mN9hDctaEmEuoN5Wi2UeM
57nS1VV+pwLihTI6yrV+ZyKX38+HWQuctuKEggXNrj1obZAwYc7uVbCPRSFsAUFQmDXLac5oJm+0XEJe
IZD8HUpGLm3rgGkF2DNhI1lXRvIStbniGweno5qLBKYDJiHlWO4Oefvl7JdvFVtSBAP9yq4eypAOzetP
+txmw4IuPj0QXkZUTy7qN/FlHSIGn5yfXIVk2pEtENrF9H9FNSC1VJlUy1dAwk1G63D+nSlSLOWbo1uy
Umeo8RG8brkRngT3jb3ucldu15vpkepTn8q6689qHS1ir2Est/86q2aXLV3pN2IMofuoTN2ZCZM3SHxO
Ev1Ol9ZSfXq0Mzzan13/P56utl+jBr94rKnCH5BeoaER34h8APkvIdZRBaBQnBjIZuzzDbRgcmc7taCl
+tSC/1EtsJqBBtowLUrNCSi6Y3k1rEH127DJHP9jhS9+827KV0K1Fm1lM8/5grfiwy8zOHnuy4IbAcwF
2jDdMrVSO6u+6dL+4N7verrxk8/vKz7FcVI0/TFuvGk+nV+Pzsciab76GcCQdVTY7/oo32776T6O93Qi
jmHjVfX3tPpvAAAA//+mJNa5VkYAAA==
3qQDBDstFR/15FfFkn+skiT9WdM9FCT9mqR7Y9TXx8fftBT3zdMHibvHHMnW3H/59bF59lN6V41jeTWE
SrFlu6x5kx3+8vC3h2p4Q2KOCioiufkNqGmeIfxeMoRq8FN6ANRMinR9t6reKZQK0DDQ6dek2lyS9CTd
g8G02iATu7R+fKpnSJJUAx4YHczQb/Wnx9f5H3uyO3vWwWbr54oYAyj+Pd1b/frbE7n/4x/3//ly//eH
7H79y8+j15V8EbbN8jlsmWCGSdGvn/aUp/Zfp35hkuc1MeGjtbeEaxjzLMB8l/gc4rkneyee2/UdPI/Z
OUheFkENdlTvxEyz/DL600ARTNhkG6p3s9hq+WUYbqJGiOGO6p0Ybpa/juFVx7R7j+m3l/vqv6d6ztn5
mlkG+6uZGMU8lzhdMccvz16gHknmoLg81jt3y6whKECYtBdTkqSbkvHclroU8K9qiqfBwyT5YYf3wTz1
+9FffqPo33t46d9TKQy8mJqp+aUbEUj6DLhlHGJHEGws3SMyzrTJJGY5o8Y5npMN8KtmoITuIduiLIKz
bLOGE+2cqIvgkZwbgjuIlqzeF5lmf4zk+pQyYWAHmN71Y9cna+xksrBj2j5d/W+9ckyYUqIykucjJggi
OVY7YgYK7eYvSUvBfi/hny2JwRLseXOUavmJdyhLlSmClRfOyz6lsiiIWMo1z+EjQvKTQ2Lk7+0aw1f9
aqNtebhJIqzSES4C4SYccCpLlyXS2Phxrh8lSVqyPJ54dw5xIfPxvkVZbADT04R44qSjv9cr1xtL+4Yw
AZgJUkDQjhFyEIYRnmkF1GczDqXNqas1wQjxpJEHQoqwY9rg0Um78sS0uHg2lEcOCkSusyZxOj/ipzn0
WdSi0SkXcydZM011llV7S62BmQaCdH/heFkQJmJsCYTBo5KsiZ4fLiyCOGS9tZ0tBhAHhlIU3dkQhygG
41+U1HB9TO7P95bxuz6UrG3PkliQarPd2l4vmVreUIBDHiokTnjGmXhe3sThxSDJ9lKbS0BbugfCzZ7u
gT7PDB9SjUZLbWKMnBVkFyYSbHzqbKTkQMSYSNHgPFpyYtoqzhzhxVA3XVSVg2nlbleR+ux3kjpFJh05
sgNgLDKW6jXjc8GDECQJpsgj0m8PTYY846P1vzifQnHXyW8/sY/E2MPtVSsFoRUmR9A6ZFFtxpJNgMsr
7YRYx8b9ixKp8xPYKNUFqxxBOOyDvPFWFgd/O7VzRjTo6zLSQRQ6/BppE66xf50d6xnqnTM+/wxMNcTZ
nDs3sg4j71umx2qcPYxjRR0hhg6mJJo3Sehe49QrfGgWn+Z4trqjBt0mMZyJUnFpYVctcQ9Q5YYzvYf8
nDEojaSSxzmGs/4V7wwzSeJFSE8hOzAOO4tjF4xBIHkmBT9GUGpDMFha0UBLZOaYSWUWx5juWtmr1fel
svGGrFuGz3rK/089RR81NZdha21yJjKpQAR9Qxupsh0SCpkCZNIpilGAzUtsUoPJNJrtBOEhNzOF2l5Y
UjAm7OwlZwXzO42zoBTEaw1Wc0O0GXgWFbJnMoT5BCEiM9gTPOPoqB1z6zmfVpEYaNwvUM93125k7aQ/
C3rZ21h70Y/bqUodTOJqGqGziKPdcfH954jQIx3V5OuL4ni7UmTsvHXUj0YE44KxZtqAoMf4hTZscgNz
bt4Vl3XVVGTnL8W4c5NoX217It6EFSGpVB7VXMlGf6TcnosOw/mTUztyzuSxBROsKIv0a/LFl7HGS+bG
0N6qAc0Ael/s/S7xuTrZc4Zztnya7xIZd2Cc2cZilWrnei+GpMF+lvk+kFCPBtNkY11GOeu2wgAe3AAr
jNAQDDLrfqjDrkOIBfpj3qIYVoAszaXwlKA5H+Da3W6DlpruPmbOhAaUtgU99SbUlV2CZhKDR0Dk9T1Y
FHhBUJxRokMA8YoiP0rON4Q+Z6/3skvc8iqChHPgTBcx6DbNgZPjRZbTXGgRxkuEjNCIK5FWV4IZiZcv
WZCXrFu2Jgn4beOnmINvTRD1OWPjy8Yz7rcMtWnKEFK1f43D/4JX3aXKiYFPk/g0iWGFrs4N9FLm4CwC
LNN9qMrY+4q0gEKGO0euLflPGlZ0BRN8F5AfRQAO6h0IQEazkTV4jpwp7Y1uUa637AZ7SM6aFHOhNqdm
HzGR58pQV8WdCogXyuio0PqdiVx+Px9mLSBtxQkFC5pdK2htkDBhzu5VsMWiELaAICjMuuW0ZjRTN1qu
IK8QSP4OV0Yua+uAaQXYM2EjWVdF8hKzueJrCGegmssEpgMmKeVY7w59+/Xs12+VW1IEA/3Krm7LkA3N
20/63FbDgiE+PRBeRtyeXNRv4qs6RAw+OT/OCum0I1sgtYvp/4pqQGqpMqmWvwEJNxmtw/V3pkixVGyO
bslKnanGR4i65UZ4Ctw3jrrLHbldb6ZHq099Keuul9U6WsVex1hu/3VVzb62dJXfiDGE7qMqdWcWTN6g
8Dkp9DtDWkv1GdHOiGh/dvv/eLbafrca/Daypgp/anqFhUZ8I/IB9L+EWv/n3LLKVzkxkM2w8wa2PEEe
TltuqT5teWlb/iBWYLU0DaxherU2p6DovuvV8Cat34ZN5viFDl8W6t2U7yLYWrTVzTznCwaRh19m0P7c
9xE3gskLNJO6dWoVqFZ966j9AwP+0NONn/zcQMWnOE6ufn+M24eanwpYj+RjkTTfLg2i9jqqeOH6EQK7
ean7MQBPP+U4w19V/z+t/hsAAP//Fd/bF0ZHAAA=
`,
},


@@ -125,6 +125,7 @@
"credential_spec": {
"type": "object",
"properties": {
"config": {"type": "string"},
"file": {"type": "string"},
"registry": {"type": "string"}
},
@@ -538,6 +539,13 @@
}
},
"labels": {"$ref": "#/definitions/list_or_dict"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"template_driver": {"type": "string"}
},
"patternProperties": {"^x-": {}},

@@ -92,7 +92,7 @@ func TestValidateCredentialSpecs(t *testing.T) {
{version: "3.5", expectedErr: "config"},
{version: "3.6", expectedErr: "config"},
{version: "3.7", expectedErr: "config"},
{version: "3.8", expectedErr: "something"},
{version: "3.8"},
}

for _, tc := range tests {
@@ -104,7 +104,7 @@ func TestValidateCredentialSpecs(t *testing.T) {
"foo": dict{
"image": "busybox",
"credential_spec": dict{
tc.expectedErr: "foobar",
"config": "foobar",
},
},
},

@@ -500,8 +500,7 @@ func (e External) MarshalJSON() ([]byte, error) {

// CredentialSpecConfig for credential spec on Windows
type CredentialSpecConfig struct {
// @TODO Config is not yet in use
Config string `yaml:"-" json:"-"` // Config was added in API v1.40
Config string `yaml:",omitempty" json:"config,omitempty"` // Config was added in API v1.40
File string `yaml:",omitempty" json:"file,omitempty"`
Registry string `yaml:",omitempty" json:"registry,omitempty"`
}
@@ -513,6 +512,8 @@ type FileObjectConfig struct {
External External `yaml:",omitempty" json:"external,omitempty"`
Labels Labels `yaml:",omitempty" json:"labels,omitempty"`
Extras map[string]interface{} `yaml:",inline" json:"-"`
Driver string `yaml:",omitempty" json:"driver,omitempty"`
DriverOpts map[string]string `mapstructure:"driver_opts" yaml:"driver_opts,omitempty" json:"driver_opts,omitempty"`
TemplateDriver string `mapstructure:"template_driver" yaml:"template_driver,omitempty" json:"template_driver,omitempty"`
}


@@ -50,6 +50,7 @@ type ConfigFile struct {
CurrentContext string `json:"currentContext,omitempty"`
CLIPluginsExtraDirs []string `json:"cliPluginsExtraDirs,omitempty"`
Plugins map[string]map[string]string `json:"plugins,omitempty"`
Aliases map[string]string `json:"aliases,omitempty"`
}

// ProxyConfig contains proxy configuration settings
@@ -72,6 +73,7 @@ func New(fn string) *ConfigFile {
HTTPHeaders: make(map[string]string),
Filename: fn,
Plugins: make(map[string]map[string]string),
Aliases: make(map[string]string),
}
}


@@ -41,7 +41,9 @@ func New(ctx context.Context, cmd string, args ...string) (net.Conn, error) {
// we assume that args never contains sensitive information
logrus.Debugf("commandconn: starting %s with %v", cmd, args)
c.cmd.Env = os.Environ()
c.cmd.SysProcAttr = &syscall.SysProcAttr{}
setPdeathsig(c.cmd)
createSession(c.cmd)
c.stdin, err = c.cmd.StdinPipe()
if err != nil {
return nil, err

@@ -6,7 +6,5 @@ import (
)

func setPdeathsig(cmd *exec.Cmd) {
cmd.SysProcAttr = &syscall.SysProcAttr{
Pdeathsig: syscall.SIGKILL,
}
cmd.SysProcAttr.Pdeathsig = syscall.SIGKILL
}
13  cli/connhelper/commandconn/session_unix.go  (new file)
@@ -0,0 +1,13 @@
// +build !windows

package commandconn

import (
	"os/exec"
)

func createSession(cmd *exec.Cmd) {
	// for supporting ssh connection helper with ProxyCommand
	// https://github.com/docker/cli/issues/1707
	cmd.SysProcAttr.Setsid = true
}
8  cli/connhelper/commandconn/session_windows.go  (new file)
@@ -0,0 +1,8 @@
package commandconn

import (
	"os/exec"
)

func createSession(cmd *exec.Cmd) {
}
@@ -31,7 +31,7 @@ type Endpoint struct {
}

// WithTLSData loads TLS materials for the endpoint
func WithTLSData(s store.Store, contextName string, m EndpointMeta) (Endpoint, error) {
func WithTLSData(s store.Reader, contextName string, m EndpointMeta) (Endpoint, error) {
tlsData, err := context.LoadTLSData(s, contextName, DockerEndpoint)
if err != nil {
return Endpoint{}, err
@@ -91,8 +91,8 @@ func (c *Endpoint) tlsConfig() (*tls.Config, error) {
}

// ClientOpts returns a slice of Client options to configure an API client with this endpoint
func (c *Endpoint) ClientOpts() ([]func(*client.Client) error, error) {
var result []func(*client.Client) error
func (c *Endpoint) ClientOpts() ([]client.Opt, error) {
var result []client.Opt
if c.Host != "" {
helper, err := connhelper.GetConnectionHelper(c.Host)
if err != nil {
@@ -104,8 +104,8 @@ func (c *Endpoint) ClientOpts() ([]func(*client.Client) error, error) {
return nil, err
}
result = append(result,
client.WithHost(c.Host),
withHTTPClient(tlsConfig),
client.WithHost(c.Host),
)

} else {
@@ -153,7 +153,7 @@ func withHTTPClient(tlsConfig *tls.Config) func(*client.Client) error {
}

// EndpointFromContext parses a context docker endpoint metadata into a typed EndpointMeta structure
func EndpointFromContext(metadata store.ContextMetadata) (EndpointMeta, error) {
func EndpointFromContext(metadata store.Metadata) (EndpointMeta, error) {
ep, ok := metadata.Endpoints[DockerEndpoint]
if !ok {
return EndpointMeta{}, errors.New("cannot find docker endpoint in context")

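The `ClientOpts` signature change above replaces the spelled-out `[]func(*client.Client) error` with the API client's named `client.Opt` alias, so the result interoperates directly with other code that consumes `[]client.Opt`. A self-contained sketch of the functional-options pattern involved; `Client`, `Opt`, `WithHost`, and `New` below are local stand-ins for illustration, not the real moby types:

```go
package main

import "fmt"

// Client is a toy stand-in for the API client being configured.
type Client struct{ host string }

// Opt mirrors client.Opt: a functional option applied during construction.
type Opt func(*Client) error

// WithHost returns an option that sets the daemon host, like client.WithHost.
func WithHost(h string) Opt {
	return func(c *Client) error {
		c.host = h
		return nil
	}
}

// New applies each option in order, aborting on the first error.
func New(ops ...Opt) (*Client, error) {
	c := &Client{}
	for _, op := range ops {
		if err := op(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, _ := New(WithHost("ssh://user@host"))
	fmt.Println(c.host) // ssh://user@host
}
```

Naming the option type lets endpoint code return `[]client.Opt` and pass it straight into the client constructor, instead of re-declaring the anonymous function type at every call site.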
@@ -85,15 +85,15 @@ func TestSaveLoadContexts(t *testing.T) {
assert.NilError(t, save(store, epDefault, "embed-default-context"))
assert.NilError(t, save(store, epContext2, "embed-context2"))

rawNoTLSMeta, err := store.GetContextMetadata("raw-notls")
rawNoTLSMeta, err := store.GetMetadata("raw-notls")
assert.NilError(t, err)
rawNoTLSSkipMeta, err := store.GetContextMetadata("raw-notls-skip")
rawNoTLSSkipMeta, err := store.GetMetadata("raw-notls-skip")
assert.NilError(t, err)
rawTLSMeta, err := store.GetContextMetadata("raw-tls")
rawTLSMeta, err := store.GetMetadata("raw-tls")
assert.NilError(t, err)
embededDefaultMeta, err := store.GetContextMetadata("embed-default-context")
embededDefaultMeta, err := store.GetMetadata("embed-default-context")
assert.NilError(t, err)
embededContext2Meta, err := store.GetContextMetadata("embed-context2")
embededContext2Meta, err := store.GetMetadata("embed-context2")
assert.NilError(t, err)

rawNoTLS := EndpointFromContext(rawNoTLSMeta)
@@ -104,22 +104,22 @@ func TestSaveLoadContexts(t *testing.T) {

rawNoTLSEP, err := rawNoTLS.WithTLSData(store, "raw-notls")
assert.NilError(t, err)
checkClientConfig(t, store, rawNoTLSEP, "https://test", "test", nil, nil, nil, false)
checkClientConfig(t, rawNoTLSEP, "https://test", "test", nil, nil, nil, false)
rawNoTLSSkipEP, err := rawNoTLSSkip.WithTLSData(store, "raw-notls-skip")
assert.NilError(t, err)
checkClientConfig(t, store, rawNoTLSSkipEP, "https://test", "test", nil, nil, nil, true)
checkClientConfig(t, rawNoTLSSkipEP, "https://test", "test", nil, nil, nil, true)
rawTLSEP, err := rawTLS.WithTLSData(store, "raw-tls")
assert.NilError(t, err)
checkClientConfig(t, store, rawTLSEP, "https://test", "test", []byte("ca"), []byte("cert"), []byte("key"), true)
checkClientConfig(t, rawTLSEP, "https://test", "test", []byte("ca"), []byte("cert"), []byte("key"), true)
embededDefaultEP, err := embededDefault.WithTLSData(store, "embed-default-context")
assert.NilError(t, err)
checkClientConfig(t, store, embededDefaultEP, "https://server1", "namespace1", nil, []byte("cert"), []byte("key"), true)
checkClientConfig(t, embededDefaultEP, "https://server1", "namespace1", nil, []byte("cert"), []byte("key"), true)
embededContext2EP, err := embededContext2.WithTLSData(store, "embed-context2")
assert.NilError(t, err)
checkClientConfig(t, store, embededContext2EP, "https://server2", "namespace-override", []byte("ca"), []byte("cert"), []byte("key"), false)
checkClientConfig(t, embededContext2EP, "https://server2", "namespace-override", []byte("ca"), []byte("cert"), []byte("key"), false)
}

func checkClientConfig(t *testing.T, s store.Store, ep Endpoint, server, namespace string, ca, cert, key []byte, skipTLSVerify bool) {
func checkClientConfig(t *testing.T, ep Endpoint, server, namespace string, ca, cert, key []byte, skipTLSVerify bool) {
config := ep.KubernetesConfig()
cfg, err := config.ClientConfig()
assert.NilError(t, err)
@@ -132,17 +132,17 @@ func checkClientConfig(t *testing.T, s store.Store, ep Endpoint, server, namespa
assert.Equal(t, skipTLSVerify, cfg.Insecure)
}

func save(s store.Store, ep Endpoint, name string) error {
meta := store.ContextMetadata{
func save(s store.Writer, ep Endpoint, name string) error {
meta := store.Metadata{
Endpoints: map[string]interface{}{
KubernetesEndpoint: ep.EndpointMeta,
},
Name: name,
}
if err := s.CreateOrUpdateContext(meta); err != nil {
if err := s.CreateOrUpdate(meta); err != nil {
return err
}
return s.ResetContextEndpointTLSMaterial(name, KubernetesEndpoint, ep.TLSData.ToStoreTLSData())
return s.ResetEndpointTLSMaterial(name, KubernetesEndpoint, ep.TLSData.ToStoreTLSData())
}

func TestSaveLoadGKEConfig(t *testing.T) {
@@ -158,7 +158,7 @@ func TestSaveLoadGKEConfig(t *testing.T) {
ep, err := FromKubeConfig("testdata/gke-kubeconfig", "", "")
assert.NilError(t, err)
assert.NilError(t, save(store, ep, "gke-context"))
persistedMetadata, err := store.GetContextMetadata("gke-context")
persistedMetadata, err := store.GetMetadata("gke-context")
assert.NilError(t, err)
persistedEPMeta := EndpointFromContext(persistedMetadata)
assert.Check(t, persistedEPMeta != nil)
@@ -183,7 +183,7 @@ func TestSaveLoadEKSConfig(t *testing.T) {
ep, err := FromKubeConfig("testdata/eks-kubeconfig", "", "")
assert.NilError(t, err)
assert.NilError(t, save(store, ep, "eks-context"))
persistedMetadata, err := store.GetContextMetadata("eks-context")
persistedMetadata, err := store.GetMetadata("eks-context")
assert.NilError(t, err)
persistedEPMeta := EndpointFromContext(persistedMetadata)
assert.Check(t, persistedEPMeta != nil)

@@ -1,9 +1,15 @@
package kubernetes

import (
	"os"
	"path/filepath"

	"github.com/docker/cli/cli/command"
	"github.com/docker/cli/cli/context"
	"github.com/docker/cli/cli/context/store"
	"github.com/docker/cli/kubernetes"
	api "github.com/docker/compose-on-kubernetes/api"
	"github.com/docker/docker/pkg/homedir"
	"github.com/pkg/errors"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)
@@ -17,6 +23,8 @@ type EndpointMeta struct {
	Exec *clientcmdapi.ExecConfig `json:",omitempty"`
}

var _ command.EndpointDefaultResolver = &EndpointMeta{}

// Endpoint is a typed wrapper around a context-store generic endpoint describing
// a Kubernetes endpoint, with TLS data
type Endpoint struct {
@@ -24,8 +32,14 @@ type Endpoint struct {
	TLSData *context.TLSData
}

func init() {
	command.RegisterDefaultStoreEndpoints(
		store.EndpointTypeGetter(KubernetesEndpoint, func() interface{} { return &EndpointMeta{} }),
	)
}

// WithTLSData loads TLS materials for the endpoint
func (c *EndpointMeta) WithTLSData(s store.Store, contextName string) (Endpoint, error) {
func (c *EndpointMeta) WithTLSData(s store.Reader, contextName string) (Endpoint, error) {
	tlsData, err := context.LoadTLSData(s, contextName, KubernetesEndpoint)
	if err != nil {
		return Endpoint{}, err
@@ -61,8 +75,34 @@ func (c *Endpoint) KubernetesConfig() clientcmd.ClientConfig {
	return clientcmd.NewDefaultClientConfig(*cfg, &clientcmd.ConfigOverrides{})
}

// ResolveDefault returns endpoint metadata for the default Kubernetes
// endpoint, which is derived from the env-based kubeconfig.
func (c *EndpointMeta) ResolveDefault(stackOrchestrator command.Orchestrator) (interface{}, *store.EndpointTLSData, error) {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		kubeconfig = filepath.Join(homedir.Get(), ".kube/config")
	}
	kubeEP, err := FromKubeConfig(kubeconfig, "", "")
	if err != nil {
		if stackOrchestrator == command.OrchestratorKubernetes || stackOrchestrator == command.OrchestratorAll {
			return nil, nil, errors.Wrapf(err, "default orchestrator is %s but unable to resolve kubernetes endpoint", stackOrchestrator)
		}

		// We deliberately quash the error here, returning nil
		// for the first argument is sufficient to indicate we weren't able to
		// provide a default
		return nil, nil, nil
	}

	var tls *store.EndpointTLSData
	if kubeEP.TLSData != nil {
		tls = kubeEP.TLSData.ToStoreTLSData()
	}
	return kubeEP.EndpointMeta, tls, nil
}

// EndpointFromContext extracts kubernetes endpoint info from current context
func EndpointFromContext(metadata store.ContextMetadata) *EndpointMeta {
func EndpointFromContext(metadata store.Metadata) *EndpointMeta {
	ep, ok := metadata.Endpoints[KubernetesEndpoint]
	if !ok {
		return nil
@@ -77,8 +117,8 @@ func EndpointFromContext(metadata store.ContextMetadata) *EndpointMeta {
// ConfigFromContext resolves a kubernetes client config for the specified context.
// If kubeconfigOverride is specified, use this config file instead of the context defaults.
// If command.ContextDockerHost is specified as the context name, falls back to the default user's kubeconfig file
func ConfigFromContext(name string, s store.Store) (clientcmd.ClientConfig, error) {
	ctxMeta, err := s.GetContextMetadata(name)
func ConfigFromContext(name string, s store.Reader) (clientcmd.ClientConfig, error) {
	ctxMeta, err := s.GetMetadata(name)
	if err != nil {
		return nil, err
	}
@@ -91,5 +131,5 @@ func ConfigFromContext(name string, s store.Store) (clientcmd.ClientConfig, erro
		return ep.KubernetesConfig(), nil
	}
	// context has no kubernetes endpoint
	return kubernetes.NewKubernetesConfig(""), nil
	return api.NewKubernetesConfig(""), nil
}

25  cli/context/kubernetes/load_test.go  Normal file
@@ -0,0 +1,25 @@
package kubernetes

import (
	"testing"

	"github.com/docker/cli/cli/command"
	"github.com/docker/cli/cli/config/configfile"
	cliflags "github.com/docker/cli/cli/flags"
	"gotest.tools/assert"
	"gotest.tools/env"
)

func TestDefaultContextInitializer(t *testing.T) {
	cli, err := command.NewDockerCli()
	assert.NilError(t, err)
	defer env.Patch(t, "KUBECONFIG", "./testdata/test-kubeconfig")()
	configFile := &configfile.ConfigFile{
		StackOrchestrator: "all",
	}
	ctx, err := command.ResolveDefaultContext(&cliflags.CommonOptions{}, configFile, command.DefaultContextStoreConfig(), cli.Err())
	assert.NilError(t, err)
	assert.Equal(t, "default", ctx.Meta.Name)
	assert.Equal(t, command.OrchestratorAll, ctx.Meta.Metadata.(command.DockerContext).StackOrchestrator)
	assert.DeepEqual(t, "zoinx", ctx.Meta.Endpoints[KubernetesEndpoint].(EndpointMeta).DefaultNamespace)
}

29  cli/context/store/io_utils.go  Normal file
@@ -0,0 +1,29 @@
package store

import (
	"errors"
	"io"
)

// LimitedReader is a fork of io.LimitedReader to override Read.
type LimitedReader struct {
	R io.Reader
	N int64 // max bytes remaining
}

// Read is a fork of io.LimitedReader.Read that returns an error when limit exceeded.
func (l *LimitedReader) Read(p []byte) (n int, err error) {
	if l.N < 0 {
		return 0, errors.New("read exceeds the defined limit")
	}
	if l.N == 0 {
		return 0, io.EOF
	}
	// have to cap N + 1 otherwise we won't hit limit err
	if int64(len(p)) > l.N+1 {
		p = p[0 : l.N+1]
	}
	n, err = l.R.Read(p)
	l.N -= int64(n)
	return n, err
}
24  cli/context/store/io_utils_test.go  Normal file
@@ -0,0 +1,24 @@
package store

import (
	"io/ioutil"
	"strings"
	"testing"

	"gotest.tools/assert"
)

func TestLimitReaderReadAll(t *testing.T) {
	r := strings.NewReader("Reader")

	_, err := ioutil.ReadAll(r)
	assert.NilError(t, err)

	r = strings.NewReader("Test")
	_, err = ioutil.ReadAll(&LimitedReader{R: r, N: 4})
	assert.NilError(t, err)

	r = strings.NewReader("Test")
	_, err = ioutil.ReadAll(&LimitedReader{R: r, N: 2})
	assert.Error(t, err, "read exceeds the defined limit")
}
@@ -10,8 +10,8 @@ import (
	"gotest.tools/assert/cmp"
)

func testMetadata(name string) ContextMetadata {
	return ContextMetadata{
func testMetadata(name string) Metadata {
	return Metadata{
		Endpoints: map[string]interface{}{
			"ep1": endpoint{Foo: "bar"},
		},
@@ -34,7 +34,7 @@ func TestMetadataCreateGetRemove(t *testing.T) {
	assert.NilError(t, err)
	defer os.RemoveAll(testDir)
	testee := metadataStore{root: testDir, config: testCfg}
	expected2 := ContextMetadata{
	expected2 := Metadata{
		Endpoints: map[string]interface{}{
			"ep1": endpoint{Foo: "baz"},
			"ep2": endpoint{Foo: "bee"},
@@ -82,7 +82,7 @@ func TestMetadataList(t *testing.T) {
	assert.NilError(t, err)
	defer os.RemoveAll(testDir)
	testee := metadataStore{root: testDir, config: testCfg}
	wholeData := []ContextMetadata{
	wholeData := []Metadata{
		testMetadata("context1"),
		testMetadata("context2"),
		testMetadata("context3"),
@@ -103,7 +103,7 @@ func TestEmptyConfig(t *testing.T) {
	assert.NilError(t, err)
	defer os.RemoveAll(testDir)
	testee := metadataStore{root: testDir}
	wholeData := []ContextMetadata{
	wholeData := []Metadata{
		testMetadata("context1"),
		testMetadata("context2"),
		testMetadata("context3"),
@@ -136,7 +136,7 @@ func TestWithEmbedding(t *testing.T) {
			Val: "Hello",
		},
	}
	assert.NilError(t, testee.createOrUpdate(ContextMetadata{Metadata: testCtxMeta, Name: "test"}))
	assert.NilError(t, testee.createOrUpdate(Metadata{Metadata: testCtxMeta, Name: "test"}))
	res, err := testee.get(contextdirOf("test"))
	assert.NilError(t, err)
	assert.Equal(t, testCtxMeta, res.Metadata)

@@ -26,7 +26,7 @@ func (s *metadataStore) contextDir(id contextdir) string {
	return filepath.Join(s.root, string(id))
}

func (s *metadataStore) createOrUpdate(meta ContextMetadata) error {
func (s *metadataStore) createOrUpdate(meta Metadata) error {
	contextDir := s.contextDir(contextdirOf(meta.Name))
	if err := os.MkdirAll(contextDir, 0755); err != nil {
		return err
@@ -56,26 +56,26 @@ func parseTypedOrMap(payload []byte, getter TypeGetter) (interface{}, error) {
	return reflect.ValueOf(typed).Elem().Interface(), nil
}

func (s *metadataStore) get(id contextdir) (ContextMetadata, error) {
func (s *metadataStore) get(id contextdir) (Metadata, error) {
	contextDir := s.contextDir(id)
	bytes, err := ioutil.ReadFile(filepath.Join(contextDir, metaFile))
	if err != nil {
		return ContextMetadata{}, convertContextDoesNotExist(err)
		return Metadata{}, convertContextDoesNotExist(err)
	}
	var untyped untypedContextMetadata
	r := ContextMetadata{
	r := Metadata{
		Endpoints: make(map[string]interface{}),
	}
	if err := json.Unmarshal(bytes, &untyped); err != nil {
		return ContextMetadata{}, err
		return Metadata{}, err
	}
	r.Name = untyped.Name
	if r.Metadata, err = parseTypedOrMap(untyped.Metadata, s.config.contextType); err != nil {
		return ContextMetadata{}, err
		return Metadata{}, err
	}
	for k, v := range untyped.Endpoints {
		if r.Endpoints[k], err = parseTypedOrMap(v, s.config.endpointTypes[k]); err != nil {
			return ContextMetadata{}, err
			return Metadata{}, err
		}
	}
	return r, err
@@ -86,7 +86,7 @@ func (s *metadataStore) remove(id contextdir) error {
	return os.RemoveAll(contextDir)
}

func (s *metadataStore) list() ([]ContextMetadata, error) {
func (s *metadataStore) list() ([]Metadata, error) {
	ctxDirs, err := listRecursivelyMetadataDirs(s.root)
	if err != nil {
		if os.IsNotExist(err) {
@@ -94,7 +94,7 @@ func (s *metadataStore) list() ([]ContextMetadata, error) {
		}
		return nil, err
	}
	var res []ContextMetadata
	var res []Metadata
	for _, dir := range ctxDirs {
		c, err := s.get(contextdir(dir))
		if err != nil {

@@ -2,12 +2,16 @@ package store

import (
	"archive/tar"
	"archive/zip"
	"bufio"
	"bytes"
	_ "crypto/sha256" // ensure ids can be computed
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"path"
	"path/filepath"
	"strings"
@@ -18,26 +22,58 @@ import (

// Store provides a context store for easily remembering endpoints configuration
type Store interface {
	ListContexts() ([]ContextMetadata, error)
	CreateOrUpdateContext(meta ContextMetadata) error
	RemoveContext(name string) error
	GetContextMetadata(name string) (ContextMetadata, error)
	ResetContextTLSMaterial(name string, data *ContextTLSData) error
	ResetContextEndpointTLSMaterial(contextName string, endpointName string, data *EndpointTLSData) error
	ListContextTLSFiles(name string) (map[string]EndpointFiles, error)
	GetContextTLSData(contextName, endpointName, fileName string) ([]byte, error)
	GetContextStorageInfo(contextName string) ContextStorageInfo
	Reader
	Lister
	Writer
	StorageInfoProvider
}

// ContextMetadata contains metadata about a context and its endpoints
type ContextMetadata struct {
// Reader provides read-only (without list) access to context data
type Reader interface {
	GetMetadata(name string) (Metadata, error)
	ListTLSFiles(name string) (map[string]EndpointFiles, error)
	GetTLSData(contextName, endpointName, fileName string) ([]byte, error)
}

// Lister provides listing of contexts
type Lister interface {
	List() ([]Metadata, error)
}

// ReaderLister combines Reader and Lister interfaces
type ReaderLister interface {
	Reader
	Lister
}

// StorageInfoProvider provides more information about storage details of contexts
type StorageInfoProvider interface {
	GetStorageInfo(contextName string) StorageInfo
}

// Writer provides write access to context data
type Writer interface {
	CreateOrUpdate(meta Metadata) error
	Remove(name string) error
	ResetTLSMaterial(name string, data *ContextTLSData) error
	ResetEndpointTLSMaterial(contextName string, endpointName string, data *EndpointTLSData) error
}

// ReaderWriter combines Reader and Writer interfaces
type ReaderWriter interface {
	Reader
	Writer
}

// Metadata contains metadata about a context and its endpoints
type Metadata struct {
	Name      string                 `json:",omitempty"`
	Metadata  interface{}            `json:",omitempty"`
	Endpoints map[string]interface{} `json:",omitempty"`
}

// ContextStorageInfo contains data about where a given context is stored
type ContextStorageInfo struct {
// StorageInfo contains data about where a given context is stored
type StorageInfo struct {
	MetadataPath string
	TLSPath      string
}
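The hunk above splits the monolithic `Store` interface into `Reader`, `Lister`, `Writer`, and `StorageInfoProvider`, so consumers can declare only the capability they need. A minimal sketch of the idea with trimmed-down stand-in types (the `memStore` and `export` names are illustrative, not part of docker/cli):

```go
package main

import "fmt"

// Metadata is a trimmed stand-in for the store's context metadata.
type Metadata struct {
	Name      string
	Endpoints map[string]interface{}
}

// Reader and Writer mirror the narrow interfaces from the diff.
type Reader interface {
	GetMetadata(name string) (Metadata, error)
}

type Writer interface {
	CreateOrUpdate(meta Metadata) error
}

// A simple in-memory store satisfies both narrow interfaces at once.
type memStore map[string]Metadata

func (m memStore) GetMetadata(name string) (Metadata, error) {
	meta, ok := m[name]
	if !ok {
		return Metadata{}, fmt.Errorf("context %q does not exist", name)
	}
	return meta, nil
}

func (m memStore) CreateOrUpdate(meta Metadata) error {
	m[meta.Name] = meta
	return nil
}

// export only needs read access, so it asks for a Reader, mirroring how the
// diff narrows Export(name string, s Reader) from the old Store parameter.
func export(name string, s Reader) (Metadata, error) {
	return s.GetMetadata(name)
}

func main() {
	s := memStore{}
	var w Writer = s // compile-time proof the store is a Writer
	_ = w.CreateOrUpdate(Metadata{Name: "dev"})
	meta, err := export("dev", s)
	fmt.Println(meta.Name, err)
}
```

Because Go interfaces are satisfied implicitly, the concrete `store` type needs no changes beyond the method renames; call sites simply tighten their parameter types.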
@@ -74,15 +110,15 @@ type store struct {
	tls *tlsStore
}

func (s *store) ListContexts() ([]ContextMetadata, error) {
func (s *store) List() ([]Metadata, error) {
	return s.meta.list()
}

func (s *store) CreateOrUpdateContext(meta ContextMetadata) error {
func (s *store) CreateOrUpdate(meta Metadata) error {
	return s.meta.createOrUpdate(meta)
}

func (s *store) RemoveContext(name string) error {
func (s *store) Remove(name string) error {
	id := contextdirOf(name)
	if err := s.meta.remove(id); err != nil {
		return patchErrContextName(err, name)
@@ -90,13 +126,13 @@ func (s *store) RemoveContext(name string) error {
	return patchErrContextName(s.tls.removeAllContextData(id), name)
}

func (s *store) GetContextMetadata(name string) (ContextMetadata, error) {
func (s *store) GetMetadata(name string) (Metadata, error) {
	res, err := s.meta.get(contextdirOf(name))
	patchErrContextName(err, name)
	return res, err
}

func (s *store) ResetContextTLSMaterial(name string, data *ContextTLSData) error {
func (s *store) ResetTLSMaterial(name string, data *ContextTLSData) error {
	id := contextdirOf(name)
	if err := s.tls.removeAllContextData(id); err != nil {
		return patchErrContextName(err, name)
@@ -114,7 +150,7 @@ func (s *store) ResetContextTLSMaterial(name string, data *ContextTLSData) error
	return nil
}

func (s *store) ResetContextEndpointTLSMaterial(contextName string, endpointName string, data *EndpointTLSData) error {
func (s *store) ResetEndpointTLSMaterial(contextName string, endpointName string, data *EndpointTLSData) error {
	id := contextdirOf(contextName)
	if err := s.tls.removeAllEndpointData(id, endpointName); err != nil {
		return patchErrContextName(err, contextName)
@@ -130,19 +166,19 @@ func (s *store) ResetContextEndpointTLSMaterial(contextName string, endpointName
	return nil
}

func (s *store) ListContextTLSFiles(name string) (map[string]EndpointFiles, error) {
func (s *store) ListTLSFiles(name string) (map[string]EndpointFiles, error) {
	res, err := s.tls.listContextData(contextdirOf(name))
	return res, patchErrContextName(err, name)
}

func (s *store) GetContextTLSData(contextName, endpointName, fileName string) ([]byte, error) {
func (s *store) GetTLSData(contextName, endpointName, fileName string) ([]byte, error) {
	res, err := s.tls.getData(contextdirOf(contextName), endpointName, fileName)
	return res, patchErrContextName(err, contextName)
}

func (s *store) GetContextStorageInfo(contextName string) ContextStorageInfo {
func (s *store) GetStorageInfo(contextName string) StorageInfo {
	dir := contextdirOf(contextName)
	return ContextStorageInfo{
	return StorageInfo{
		MetadataPath: s.meta.contextDir(dir),
		TLSPath:      s.tls.contextDir(dir),
	}
@@ -151,13 +187,13 @@ func (s *store) GetContextStorageInfo(contextName string) ContextStorageInfo {
// Export exports an existing namespace into an opaque data stream
// This stream is actually a tarball containing context metadata and TLS materials, but it does
// not map 1:1 the layout of the context store (don't try to restore it manually without calling store.Import)
func Export(name string, s Store) io.ReadCloser {
func Export(name string, s Reader) io.ReadCloser {
	reader, writer := io.Pipe()
	go func() {
		tw := tar.NewWriter(writer)
		defer tw.Close()
		defer writer.Close()
		meta, err := s.GetContextMetadata(name)
		meta, err := s.GetMetadata(name)
		if err != nil {
			writer.CloseWithError(err)
			return
@@ -179,7 +215,7 @@ func Export(name string, s Store) io.ReadCloser {
			writer.CloseWithError(err)
			return
		}
		tlsFiles, err := s.ListContextTLSFiles(name)
		tlsFiles, err := s.ListTLSFiles(name)
		if err != nil {
			writer.CloseWithError(err)
			return
@@ -204,7 +240,7 @@ func Export(name string, s Store) io.ReadCloser {
				return
			}
			for _, fileName := range endpointFiles {
				data, err := s.GetContextTLSData(name, endpointName, fileName)
				data, err := s.GetTLSData(name, endpointName, fileName)
				if err != nil {
					writer.CloseWithError(err)
					return
@@ -227,12 +263,44 @@ func Export(name string, s Store) io.ReadCloser {
	return reader
}

const (
	maxAllowedFileSizeToImport int64  = 10 << 20
	zipType                    string = "application/zip"
)

func getImportContentType(r *bufio.Reader) (string, error) {
	head, err := r.Peek(512)
	if err != nil && err != io.EOF {
		return "", err
	}

	return http.DetectContentType(head), nil
}

// Import imports an exported context into a store
func Import(name string, s Store, reader io.Reader) error {
	tr := tar.NewReader(reader)
func Import(name string, s Writer, reader io.Reader) error {
	// Buffered reader will not advance the buffer, needed to determine content type
	r := bufio.NewReader(reader)

	importContentType, err := getImportContentType(r)
	if err != nil {
		return err
	}
	switch importContentType {
	case zipType:
		return importZip(name, s, r)
	default:
		// Assume it's a TAR (TAR does not have a "magic number")
		return importTar(name, s, r)
	}
}

func importTar(name string, s Writer, reader io.Reader) error {
	tr := tar.NewReader(&LimitedReader{R: reader, N: maxAllowedFileSizeToImport})
	tlsData := ContextTLSData{
		Endpoints: map[string]EndpointTLSData{},
	}
	var importedMetaFile bool
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
@@ -250,35 +318,117 @@ func Import(name string, s Store, reader io.Reader) error {
			if err != nil {
				return err
			}
			var meta ContextMetadata
			if err := json.Unmarshal(data, &meta); err != nil {
			meta, err := parseMetadata(data, name)
			if err != nil {
				return err
			}
			meta.Name = name
			if err := s.CreateOrUpdateContext(meta); err != nil {
			if err := s.CreateOrUpdate(meta); err != nil {
				return err
			}
			importedMetaFile = true
		} else if strings.HasPrefix(hdr.Name, "tls/") {
			relative := strings.TrimPrefix(hdr.Name, "tls/")
			parts := strings.SplitN(relative, "/", 2)
			if len(parts) != 2 {
				return errors.New("archive format is invalid")
			}
			endpointName := parts[0]
			fileName := parts[1]
			data, err := ioutil.ReadAll(tr)
			if err != nil {
				return err
			}
			if _, ok := tlsData.Endpoints[endpointName]; !ok {
				tlsData.Endpoints[endpointName] = EndpointTLSData{
					Files: map[string][]byte{},
				}
			if err := importEndpointTLS(&tlsData, hdr.Name, data); err != nil {
				return err
			}
			tlsData.Endpoints[endpointName].Files[fileName] = data
		}
	}
	return s.ResetContextTLSMaterial(name, &tlsData)
	if !importedMetaFile {
		return errdefs.InvalidParameter(errors.New("invalid context: no metadata found"))
	}
	return s.ResetTLSMaterial(name, &tlsData)
}

func importZip(name string, s Writer, reader io.Reader) error {
	body, err := ioutil.ReadAll(&LimitedReader{R: reader, N: maxAllowedFileSizeToImport})
	if err != nil {
		return err
	}
	zr, err := zip.NewReader(bytes.NewReader(body), int64(len(body)))
	if err != nil {
		return err
	}
	tlsData := ContextTLSData{
		Endpoints: map[string]EndpointTLSData{},
	}

	var importedMetaFile bool
	for _, zf := range zr.File {
		fi := zf.FileInfo()
		if fi.IsDir() {
			// skip this entry, only taking files into account
			continue
		}
		if zf.Name == metaFile {
			f, err := zf.Open()
			if err != nil {
				return err
			}

			data, err := ioutil.ReadAll(&LimitedReader{R: f, N: maxAllowedFileSizeToImport})
			defer f.Close()
			if err != nil {
				return err
			}
			meta, err := parseMetadata(data, name)
			if err != nil {
				return err
			}
			if err := s.CreateOrUpdate(meta); err != nil {
				return err
			}
			importedMetaFile = true
		} else if strings.HasPrefix(zf.Name, "tls/") {
			f, err := zf.Open()
			if err != nil {
				return err
			}
			data, err := ioutil.ReadAll(f)
			defer f.Close()
			if err != nil {
				return err
			}
			err = importEndpointTLS(&tlsData, zf.Name, data)
			if err != nil {
				return err
			}
		}
	}
	if !importedMetaFile {
		return errdefs.InvalidParameter(errors.New("invalid context: no metadata found"))
	}
	return s.ResetTLSMaterial(name, &tlsData)
}

func parseMetadata(data []byte, name string) (Metadata, error) {
	var meta Metadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return meta, err
	}
	meta.Name = name
	return meta, nil
}

func importEndpointTLS(tlsData *ContextTLSData, path string, data []byte) error {
	parts := strings.SplitN(strings.TrimPrefix(path, "tls/"), "/", 2)
	if len(parts) != 2 {
		// TLS endpoints require archived file directory with 2 layers
		// i.e. tls/{endpointName}/{fileName}
		return errors.New("archive format is invalid")
	}

	epName := parts[0]
	fileName := parts[1]
	if _, ok := tlsData.Endpoints[epName]; !ok {
		tlsData.Endpoints[epName] = EndpointTLSData{
			Files: map[string][]byte{},
		}
	}
	tlsData.Endpoints[epName].Files[fileName] = data
	return nil
}

type setContextName interface {

@ -1,9 +1,16 @@
|
||||
package store
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"archive/zip"
|
||||
"bufio"
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"testing"
|
||||
|
||||
"gotest.tools/assert"
|
||||
@ -27,8 +34,8 @@ func TestExportImport(t *testing.T) {
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
s := New(testDir, testCfg)
|
||||
err = s.CreateOrUpdateContext(
|
||||
ContextMetadata{
|
||||
err = s.CreateOrUpdate(
|
||||
Metadata{
|
||||
Endpoints: map[string]interface{}{
|
||||
"ep1": endpoint{Foo: "bar"},
|
||||
},
|
||||
@ -40,7 +47,7 @@ func TestExportImport(t *testing.T) {
|
||||
rand.Read(file1)
|
||||
file2 := make([]byte, 3700)
|
||||
rand.Read(file2)
|
||||
err = s.ResetContextEndpointTLSMaterial("source", "ep1", &EndpointTLSData{
|
||||
err = s.ResetEndpointTLSMaterial("source", "ep1", &EndpointTLSData{
|
||||
Files: map[string][]byte{
|
||||
"file1": file1,
|
||||
"file2": file2,
|
||||
@ -51,30 +58,30 @@ func TestExportImport(t *testing.T) {
|
||||
defer r.Close()
|
||||
err = Import("dest", s, r)
|
||||
assert.NilError(t, err)
|
||||
srcMeta, err := s.GetContextMetadata("source")
|
||||
srcMeta, err := s.GetMetadata("source")
|
||||
assert.NilError(t, err)
|
||||
destMeta, err := s.GetContextMetadata("dest")
|
||||
destMeta, err := s.GetMetadata("dest")
|
||||
assert.NilError(t, err)
|
||||
assert.DeepEqual(t, destMeta.Metadata, srcMeta.Metadata)
|
||||
assert.DeepEqual(t, destMeta.Endpoints, srcMeta.Endpoints)
|
||||
srcFileList, err := s.ListContextTLSFiles("source")
|
||||
srcFileList, err := s.ListTLSFiles("source")
|
||||
assert.NilError(t, err)
|
||||
destFileList, err := s.ListContextTLSFiles("dest")
|
||||
destFileList, err := s.ListTLSFiles("dest")
|
||||
assert.NilError(t, err)
|
||||
assert.Equal(t, 1, len(destFileList))
|
||||
assert.Equal(t, 1, len(srcFileList))
|
||||
assert.Equal(t, 2, len(destFileList["ep1"]))
|
||||
assert.Equal(t, 2, len(srcFileList["ep1"]))
|
||||
srcData1, err := s.GetContextTLSData("source", "ep1", "file1")
|
||||
srcData1, err := s.GetTLSData("source", "ep1", "file1")
|
||||
assert.NilError(t, err)
|
||||
assert.DeepEqual(t, file1, srcData1)
|
||||
srcData2, err := s.GetContextTLSData("source", "ep1", "file2")
|
||||
srcData2, err := s.GetTLSData("source", "ep1", "file2")
|
||||
assert.NilError(t, err)
|
||||
assert.DeepEqual(t, file2, srcData2)
|
||||
destData1, err := s.GetContextTLSData("dest", "ep1", "file1")
|
||||
destData1, err := s.GetTLSData("dest", "ep1", "file1")
|
||||
assert.NilError(t, err)
|
||||
assert.DeepEqual(t, file1, destData1)
|
||||
destData2, err := s.GetContextTLSData("dest", "ep1", "file2")
|
||||
destData2, err := s.GetTLSData("dest", "ep1", "file2")
|
||||
assert.NilError(t, err)
|
||||
assert.DeepEqual(t, file2, destData2)
|
||||
}
|
||||
@ -84,8 +91,8 @@ func TestRemove(t *testing.T) {
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
s := New(testDir, testCfg)
|
||||
err = s.CreateOrUpdateContext(
|
||||
ContextMetadata{
|
||||
err = s.CreateOrUpdate(
|
||||
Metadata{
|
||||
Endpoints: map[string]interface{}{
|
||||
"ep1": endpoint{Foo: "bar"},
|
||||
},
|
||||
@ -93,15 +100,15 @@ func TestRemove(t *testing.T) {
|
||||
Name: "source",
|
||||
})
|
||||
assert.NilError(t, err)
|
||||
assert.NilError(t, s.ResetContextEndpointTLSMaterial("source", "ep1", &EndpointTLSData{
|
||||
assert.NilError(t, s.ResetEndpointTLSMaterial("source", "ep1", &EndpointTLSData{
|
||||
Files: map[string][]byte{
|
||||
"file1": []byte("test-data"),
|
||||
},
|
||||
}))
|
||||
assert.NilError(t, s.RemoveContext("source"))
|
||||
_, err = s.GetContextMetadata("source")
|
||||
assert.NilError(t, s.Remove("source"))
|
||||
_, err = s.GetMetadata("source")
|
||||
assert.Check(t, IsErrContextDoesNotExist(err))
|
||||
f, err := s.ListContextTLSFiles("source")
|
||||
f, err := s.ListTLSFiles("source")
|
||||
assert.NilError(t, err)
|
||||
assert.Equal(t, 0, len(f))
|
||||
}
|
||||
@ -111,7 +118,7 @@ func TestListEmptyStore(t *testing.T) {
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
store := New(testDir, testCfg)
|
||||
result, err := store.ListContexts()
|
||||
result, err := store.List()
|
||||
assert.NilError(t, err)
|
||||
assert.Check(t, len(result) == 0)
|
||||
}
|
||||
@ -121,7 +128,131 @@ func TestErrHasCorrectContext(t *testing.T) {
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
store := New(testDir, testCfg)
|
||||
_, err = store.GetContextMetadata("no-exists")
|
||||
_, err = store.GetMetadata("no-exists")
|
||||
assert.ErrorContains(t, err, "no-exists")
|
||||
assert.Check(t, IsErrContextDoesNotExist(err))
|
||||
}
|
||||
|
||||
func TestDetectImportContentType(t *testing.T) {
|
||||
testDir, err := ioutil.TempDir("", t.Name())
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
|
||||
buf := new(bytes.Buffer)
|
||||
r := bufio.NewReader(buf)
|
||||
ct, err := getImportContentType(r)
|
||||
assert.NilError(t, err)
|
||||
assert.Assert(t, zipType != ct)
|
||||
}
|
||||
|
||||
func TestImportTarInvalid(t *testing.T) {
|
||||
testDir, err := ioutil.TempDir("", t.Name())
|
||||
assert.NilError(t, err)
|
||||
defer os.RemoveAll(testDir)
|
||||
|
||||
tf := path.Join(testDir, "test.context")
|
||||
|
||||
f, err := os.Create(tf)
|
||||
defer f.Close()
|
||||
assert.NilError(t, err)
|
||||
|
||||
tw := tar.NewWriter(f)
|
||||
hdr := &tar.Header{
|
||||
Name: "dummy-file",
|
||||
Mode: 0600,
|
||||
Size: int64(len("hello world")),
|
||||
}
|
||||
err = tw.WriteHeader(hdr)
|
||||
assert.NilError(t, err)
|
||||
_, err = tw.Write([]byte("hello world"))
|
||||
assert.NilError(t, err)
|
||||
err = tw.Close()
|
||||
assert.NilError(t, err)
|
||||
|
||||
source, err := os.Open(tf)
|
||||
assert.NilError(t, err)
|
||||
defer source.Close()
|
||||
var r io.Reader = source
|
||||
s := New(testDir, testCfg)
|
||||
err = Import("tarInvalid", s, r)
|
||||
assert.ErrorContains(t, err, "invalid context: no metadata found")
|
||||
}
|
||||
|
||||
func TestImportZip(t *testing.T) {
	testDir, err := ioutil.TempDir("", t.Name())
	assert.NilError(t, err)
	defer os.RemoveAll(testDir)

	zf := path.Join(testDir, "test.zip")

	f, err := os.Create(zf)
	defer f.Close()
	assert.NilError(t, err)
	w := zip.NewWriter(f)

	meta, err := json.Marshal(Metadata{
		Endpoints: map[string]interface{}{
			"ep1": endpoint{Foo: "bar"},
		},
		Metadata: context{Bar: "baz"},
		Name:     "source",
	})
	assert.NilError(t, err)
	var files = []struct {
		Name, Body string
	}{
		{"meta.json", string(meta)},
		{path.Join("tls", "docker", "ca.pem"), string([]byte("ca.pem"))},
	}

	for _, file := range files {
		f, err := w.Create(file.Name)
		assert.NilError(t, err)
		_, err = f.Write([]byte(file.Body))
		assert.NilError(t, err)
	}

	err = w.Close()
	assert.NilError(t, err)

	source, err := os.Open(zf)
	assert.NilError(t, err)
	ct, err := getImportContentType(bufio.NewReader(source))
	assert.NilError(t, err)
	assert.Equal(t, zipType, ct)

	source, _ = os.Open(zf)
	defer source.Close()
	var r io.Reader = source
	s := New(testDir, testCfg)
	err = Import("zipTest", s, r)
	assert.NilError(t, err)
}

func TestImportZipInvalid(t *testing.T) {
	testDir, err := ioutil.TempDir("", t.Name())
	assert.NilError(t, err)
	defer os.RemoveAll(testDir)

	zf := path.Join(testDir, "test.zip")

	f, err := os.Create(zf)
	defer f.Close()
	assert.NilError(t, err)
	w := zip.NewWriter(f)

	df, err := w.Create("dummy-file")
	assert.NilError(t, err)
	_, err = df.Write([]byte("hello world"))
	assert.NilError(t, err)
	err = w.Close()
	assert.NilError(t, err)

	source, err := os.Open(zf)
	assert.NilError(t, err)
	defer source.Close()
	var r io.Reader = source
	s := New(testDir, testCfg)
	err = Import("zipInvalid", s, r)
	assert.ErrorContains(t, err, "invalid context: no metadata found")
}

@@ -30,6 +30,16 @@ func (c Config) SetEndpoint(name string, getter TypeGetter) {
	c.endpointTypes[name] = getter
}

// ForeachEndpointType calls cb on every endpoint type registered with the Config
func (c Config) ForeachEndpointType(cb func(string, TypeGetter) error) error {
	for n, ep := range c.endpointTypes {
		if err := cb(n, ep); err != nil {
			return err
		}
	}
	return nil
}

// NewConfig creates a config object
func NewConfig(contextType TypeGetter, endpoints ...NamedTypeGetter) Config {
	res := Config{

@@ -42,15 +42,15 @@ func (data *TLSData) ToStoreTLSData() *store.EndpointTLSData {
}

// LoadTLSData loads TLS data from the store
func LoadTLSData(s store.Store, contextName, endpointName string) (*TLSData, error) {
	tlsFiles, err := s.ListContextTLSFiles(contextName)
func LoadTLSData(s store.Reader, contextName, endpointName string) (*TLSData, error) {
	tlsFiles, err := s.ListTLSFiles(contextName)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to retrieve context tls files for context %q", contextName)
	}
	if epTLSFiles, ok := tlsFiles[endpointName]; ok {
		var tlsData TLSData
		for _, f := range epTLSFiles {
			data, err := s.GetContextTLSData(contextName, endpointName, f)
			data, err := s.GetTLSData(contextName, endpointName, f)
			if err != nil {
				return nil, errors.Wrapf(err, "failed to retrieve context tls data for file %q of context %q", f, contextName)
			}

@@ -1,7 +1,6 @@
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
@@ -16,11 +15,16 @@ import (
	"github.com/docker/cli/cli/version"
	"github.com/docker/docker/api/types/versions"
	"github.com/docker/docker/client"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

var allowedAliases = map[string]struct{}{
	"builder": {},
}

func newDockerCommand(dockerCli *command.DockerCli) *cli.TopLevelCommand {
	var (
		opts *cliflags.ClientOptions
@@ -65,16 +69,22 @@ func newDockerCommand(dockerCli *command.DockerCli) *cli.TopLevelCommand {
	return cli.NewTopLevelCommand(cmd, dockerCli, opts, flags)
}

func setFlagErrorFunc(dockerCli *command.DockerCli, cmd *cobra.Command) {
func setFlagErrorFunc(dockerCli command.Cli, cmd *cobra.Command) {
	// When invoking `docker stack --nonsense`, we need to make sure FlagErrorFunc return appropriate
	// output if the feature is not supported.
	// As above cli.SetupRootCommand(cmd) have already setup the FlagErrorFunc, we will add a pre-check before the FlagErrorFunc
	// is called.
	flagErrorFunc := cmd.FlagErrorFunc()
	cmd.SetFlagErrorFunc(func(cmd *cobra.Command, err error) error {
		if err := pluginmanager.AddPluginCommandStubs(dockerCli, cmd.Root()); err != nil {
			return err
		}
		if err := isSupported(cmd, dockerCli); err != nil {
			return err
		}
		if err := hideUnsupportedFeatures(cmd, dockerCli); err != nil {
			return err
		}
		return flagErrorFunc(cmd, err)
	})
}
@@ -204,6 +214,38 @@ func tryPluginRun(dockerCli command.Cli, cmd *cobra.Command, subcommand string)
	return nil
}

func processAliases(dockerCli command.Cli, cmd *cobra.Command, args, osArgs []string) ([]string, []string, error) {
	aliasMap := dockerCli.ConfigFile().Aliases
	aliases := make([][2][]string, 0, len(aliasMap))

	for k, v := range aliasMap {
		if _, ok := allowedAliases[k]; !ok {
			return args, osArgs, errors.Errorf("Not allowed to alias %q. Allowed aliases: %#v", k, allowedAliases)
		}
		if _, _, err := cmd.Find(strings.Split(v, " ")); err == nil {
			return args, osArgs, errors.Errorf("Not allowed to alias with builtin %q as target", v)
		}
		aliases = append(aliases, [2][]string{{k}, {v}})
	}

	if v, ok := aliasMap["builder"]; ok {
		aliases = append(aliases,
			[2][]string{{"build"}, {v, "build"}},
			[2][]string{{"image", "build"}, {v, "build"}},
		)
	}
	for _, al := range aliases {
		var didChange bool
		args, didChange = command.StringSliceReplaceAt(args, al[0], al[1], 0)
		if didChange {
			osArgs, _ = command.StringSliceReplaceAt(osArgs, al[0], al[1], -1)
			break
		}
	}

	return args, osArgs, nil
}

func runDocker(dockerCli *command.DockerCli) error {
	tcmd := newDockerCommand(dockerCli)

@@ -216,6 +258,11 @@ func runDocker(dockerCli *command.DockerCli) error {
		return err
	}

	args, os.Args, err = processAliases(dockerCli, cmd, args, os.Args)
	if err != nil {
		return err
	}

	if len(args) > 0 {
		if _, _, err := cmd.Find(args); err != nil {
			err := tryPluginRun(dockerCli, cmd, args[0])

@@ -2112,8 +2112,8 @@ _docker_container_run_and_create() {
			return
			;;
		--security-opt)
			COMPREPLY=( $( compgen -W "apparmor= label= no-new-privileges seccomp=" -- "$cur") )
			if [ "${COMPREPLY[*]}" != "no-new-privileges" ] ; then
			COMPREPLY=( $( compgen -W "apparmor= label= no-new-privileges seccomp= systempaths=unconfined" -- "$cur") )
			if [[ ${COMPREPLY[*]} = *= ]] ; then
				__docker_nospace
			fi
			return

@@ -2342,11 +2342,15 @@ _docker_context_create() {
		--description|--docker|--kubernetes)
			return
			;;
		--from)
			__docker_complete_contexts
			return
			;;
	esac

	case "$cur" in
		-*)
			COMPREPLY=( $( compgen -W "--default-stack-orchestrator --description --docker --help --kubernetes" -- "$cur" ) )
			COMPREPLY=( $( compgen -W "--default-stack-orchestrator --description --docker --from --help --kubernetes" -- "$cur" ) )
			;;
	esac
}

@@ -2617,36 +2621,15 @@ _docker_daemon() {
			return
			;;
		--storage-driver|-s)
			COMPREPLY=( $( compgen -W "aufs btrfs devicemapper overlay overlay2 vfs zfs" -- "$(echo "$cur" | tr '[:upper:]' '[:lower:]')" ) )
			COMPREPLY=( $( compgen -W "aufs btrfs overlay2 vfs zfs" -- "$(echo "$cur" | tr '[:upper:]' '[:lower:]')" ) )
			return
			;;
		--storage-opt)
			local btrfs_options="btrfs.min_space"
			local devicemapper_options="
				dm.basesize
				dm.blkdiscard
				dm.blocksize
				dm.directlvm_device
				dm.fs
				dm.libdm_log_level
				dm.loopdatasize
				dm.loopmetadatasize
				dm.min_free_space
				dm.mkfsarg
				dm.mountopt
				dm.override_udev_sync_check
				dm.thinpooldev
				dm.thinp_autoextend_percent
				dm.thinp_autoextend_threshold
				dm.thinp_metapercent
				dm.thinp_percent
				dm.use_deferred_deletion
				dm.use_deferred_removal
			"
			local overlay2_options="overlay2.size"
			local zfs_options="zfs.fsname"

			local all_options="$btrfs_options $devicemapper_options $overlay2_options $zfs_options"
			local all_options="$btrfs_options $overlay2_options $zfs_options"

			case $(__docker_value_of_option '--storage-driver|-s') in
				'')
@@ -2655,9 +2638,6 @@ _docker_daemon() {
				btrfs)
					COMPREPLY=( $( compgen -W "$btrfs_options" -S = -- "$cur" ) )
					;;
				devicemapper)
					COMPREPLY=( $( compgen -W "$devicemapper_options" -S = -- "$cur" ) )
					;;
				overlay2)
					COMPREPLY=( $( compgen -W "$overlay2_options" -S = -- "$cur" ) )
					;;

@@ -2863,7 +2843,6 @@ _docker_image_build() {
	"

	local boolean_options="
		--compress
		--disable-content-trust=false
		--force-rm
		--help
@@ -2872,16 +2851,35 @@ _docker_image_build() {
		--quiet -q
		--rm
	"

	if __docker_server_is_experimental ; then
		options_with_args+="
			--platform
		"
		boolean_options+="
			--squash
			--stream
		"
	fi

	if [ "$DOCKER_BUILDKIT" = "1" ] ; then
		options_with_args+="
			--output -o
			--platform
			--progress
			--secret
			--ssh
		"
	else
		boolean_options+="
			--compress
		"
		if __docker_server_is_experimental ; then
			boolean_options+="
				--stream
			"
		fi
	fi

	local all_options="$options_with_args $boolean_options"

	case "$prev" in
@@ -2926,6 +2924,10 @@ _docker_image_build() {
			esac
			return
			;;
		--progress)
			COMPREPLY=( $( compgen -W "auto plain tty" -- "$cur" ) )
			return
			;;
		--tag|-t)
			__docker_complete_images --repo --tag
			return

@@ -5156,12 +5158,16 @@ _docker_system_events() {
				__docker_complete_networks --cur "${cur##*=}"
				return
				;;
			node)
				__docker_complete_nodes --cur "${cur##*=}"
				return
				;;
			scope)
				COMPREPLY=( $( compgen -W "local swarm" -- "${cur##*=}" ) )
				return
				;;
			type)
				COMPREPLY=( $( compgen -W "config container daemon image network plugin secret service volume" -- "${cur##*=}" ) )
				COMPREPLY=( $( compgen -W "config container daemon image network node plugin secret service volume" -- "${cur##*=}" ) )
				return
				;;
			volume)
@@ -5172,7 +5178,7 @@ _docker_system_events() {

	case "$prev" in
		--filter|-f)
			COMPREPLY=( $( compgen -S = -W "container daemon event image label network scope type volume" -- "$cur" ) )
			COMPREPLY=( $( compgen -S = -W "container daemon event image label network node scope type volume" -- "$cur" ) )
			__docker_nospace
			return
			;;

@@ -9,6 +9,7 @@
# - Felix Riedel
# - Steve Durrheimer
# - Vincent Bernat
# - Rohan Verma
#
# license:
#
@@ -2784,7 +2785,7 @@ __docker_subcommand() {
                $opts_help \
                "($help -p --password)"{-p=,--password=}"[Password]:password: " \
                "($help)--password-stdin[Read password from stdin]" \
                "($help -u --user)"{-u=,--user=}"[Username]:username: " \
                "($help -u --username)"{-u=,--username=}"[Username]:username: " \
                "($help -)1:server: " && ret=0
            ;;
        (logout)

@@ -21,38 +21,38 @@ ifeq ($(DOCKER_CLI_GO_BUILD_CACHE),y)
DOCKER_CLI_MOUNTS += -v "$(CACHE_VOLUME_NAME):/root/.cache/go-build"
endif
VERSION = $(shell cat VERSION)
ENVVARS = -e VERSION=$(VERSION) -e GITCOMMIT -e PLATFORM -e TESTFLAGS -e TESTDIRS
ENVVARS = -e VERSION=$(VERSION) -e GITCOMMIT -e PLATFORM -e TESTFLAGS -e TESTDIRS -e GOOS -e GOARCH -e GOARM

# build docker image (dockerfiles/Dockerfile.build)
.PHONY: build_docker_image
build_docker_image:
	# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
	cat ./dockerfiles/Dockerfile.dev | docker build ${DOCKER_BUILD_ARGS} -t $(DEV_DOCKER_IMAGE_NAME) -
	cat ./dockerfiles/Dockerfile.dev | docker build ${DOCKER_BUILD_ARGS} --build-arg=GO_VERSION -t $(DEV_DOCKER_IMAGE_NAME) -

# build docker image having the linting tools (dockerfiles/Dockerfile.lint)
.PHONY: build_linter_image
build_linter_image:
	# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
	cat ./dockerfiles/Dockerfile.lint | docker build ${DOCKER_BUILD_ARGS} -t $(LINTER_IMAGE_NAME) -
	cat ./dockerfiles/Dockerfile.lint | docker build ${DOCKER_BUILD_ARGS} --build-arg=GO_VERSION -t $(LINTER_IMAGE_NAME) -

.PHONY: build_cross_image
build_cross_image:
	# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
	cat ./dockerfiles/Dockerfile.cross | docker build ${DOCKER_BUILD_ARGS} -t $(CROSS_IMAGE_NAME) -
	cat ./dockerfiles/Dockerfile.cross | docker build ${DOCKER_BUILD_ARGS} --build-arg=GO_VERSION -t $(CROSS_IMAGE_NAME) -

.PHONY: build_shell_validate_image
build_shell_validate_image:
	# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
	cat ./dockerfiles/Dockerfile.shellcheck | docker build -t $(VALIDATE_IMAGE_NAME) -
	cat ./dockerfiles/Dockerfile.shellcheck | docker build --build-arg=GO_VERSION -t $(VALIDATE_IMAGE_NAME) -

.PHONY: build_binary_native_image
build_binary_native_image:
	# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
	cat ./dockerfiles/Dockerfile.binary-native | docker build -t $(BINARY_NATIVE_IMAGE_NAME) -
	cat ./dockerfiles/Dockerfile.binary-native | docker build --build-arg=GO_VERSION -t $(BINARY_NATIVE_IMAGE_NAME) -

.PHONY: build_e2e_image
build_e2e_image:
	docker build -t $(E2E_IMAGE_NAME) --build-arg VERSION=$(VERSION) --build-arg GITCOMMIT=$(GITCOMMIT) -f ./dockerfiles/Dockerfile.e2e .
	docker build -t $(E2E_IMAGE_NAME) --build-arg=GO_VERSION --build-arg VERSION=$(VERSION) --build-arg GITCOMMIT=$(GITCOMMIT) -f ./dockerfiles/Dockerfile.e2e .

DOCKER_RUN_NAME_OPTION := $(if $(DOCKER_CLI_CONTAINER_NAME),--name $(DOCKER_CLI_CONTAINER_NAME),)
DOCKER_RUN := docker run --rm $(ENVVARS) $(DOCKER_CLI_MOUNTS) $(DOCKER_RUN_NAME_OPTION)

@@ -1,4 +1,6 @@
FROM golang:1.12.1-alpine
ARG GO_VERSION=1.12.8

FROM golang:${GO_VERSION}-alpine

RUN apk add -U git bash coreutils gcc musl-dev

@@ -1,4 +1,6 @@
FROM dockercore/golang-cross:1.12.1@sha256:8541e3aea7b2cffb7ac310af250e34551abe2ec180c77d5a81ae3d52a47ac779
ARG GO_VERSION=1.12.8

FROM dockercore/golang-cross:${GO_VERSION}
ENV DISABLE_WARN_OUTSIDE_CONTAINER=1
WORKDIR /go/src/github.com/docker/cli
COPY . .

@@ -1,4 +1,6 @@
FROM golang:1.12.1-alpine
ARG GO_VERSION=1.12.8

FROM golang:${GO_VERSION}-alpine

RUN apk add -U git make bash coreutils ca-certificates curl

@@ -16,7 +18,7 @@ RUN go get -d github.com/mjibson/esc && \
    go build -v -o /usr/bin/esc . && \
    rm -rf /go/src/* /go/pkg/* /go/bin/*

ARG GOTESTSUM_VERSION=0.3.2
ARG GOTESTSUM_VERSION=0.3.4
RUN curl -Ls https://github.com/gotestyourself/gotestsum/releases/download/v${GOTESTSUM_VERSION}/gotestsum_${GOTESTSUM_VERSION}_linux_amd64.tar.gz -o gotestsum.tar.gz && \
    tar -xf gotestsum.tar.gz gotestsum -C /usr/bin && \
    rm gotestsum.tar.gz

@@ -1,6 +1,4 @@
ARG GO_VERSION=1.12.1

FROM docker/containerd-shim-process:a4d1531 AS containerd-shim-process
ARG GO_VERSION=1.12.8

# Use Debian based image as docker-compose requires glibc.
FROM golang:${GO_VERSION}
@@ -9,10 +7,6 @@ RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    openssl \
    btrfs-tools \
    libapparmor-dev \
    libseccomp-dev \
    iptables \
    openssh-client \
    && rm -rf /var/lib/apt/lists/*

@@ -24,7 +18,7 @@ ARG NOTARY_VERSION=v0.6.1
RUN curl -Ls https://github.com/theupdateframework/notary/releases/download/${NOTARY_VERSION}/notary-Linux-amd64 -o /usr/local/bin/notary \
    && chmod +x /usr/local/bin/notary

ARG GOTESTSUM_VERSION=0.3.2
ARG GOTESTSUM_VERSION=0.3.4
RUN curl -Ls https://github.com/gotestyourself/gotestsum/releases/download/v${GOTESTSUM_VERSION}/gotestsum_${GOTESTSUM_VERSION}_linux_amd64.tar.gz -o gotestsum.tar.gz \
    && tar -xf gotestsum.tar.gz gotestsum \
    && mv gotestsum /usr/local/bin/gotestsum \

@@ -1,4 +1,6 @@
FROM golang:1.12.1-alpine
ARG GO_VERSION=1.12.8

FROM golang:${GO_VERSION}-alpine

RUN apk add -U git

@@ -19,6 +19,19 @@ The following list of features are deprecated in Engine.
To learn more about Docker Engine's deprecation policy,
see [Feature Deprecation Policy](https://docs.docker.com/engine/#feature-deprecation-policy).

### Pushing and pulling with image manifest v2 schema 1

**Deprecated in Release: v19.03.0**

**Target For Removal In Release: v19.09.0**

The image manifest
[v2 schema 1](https://github.com/docker/distribution/blob/fda42e5ef908bdba722d435ff1f330d40dfcd56c/docs/spec/manifest-v2-1.md)
format is deprecated in favor of the
[v2 schema 2](https://github.com/docker/distribution/blob/fda42e5ef908bdba722d435ff1f330d40dfcd56c/docs/spec/manifest-v2-2.md) format.

If the registry you are using still supports v2 schema 1, urge their administrators to move to v2 schema 2.

### Legacy "overlay" storage driver

**Deprecated in Release: v18.09.0**
@@ -50,6 +63,24 @@ Now that support for `overlay2` is added to all supported distros (as they are
either on kernel 4.x, or have support for multiple lowerdirs backported), there
is no reason to continue maintenance of the `devicemapper` storage driver.

### AuFS storage driver

**Deprecated in Release: v19.03.0**

The `aufs` storage driver is deprecated in favor of `overlay2`, and will
be removed in a future release. Users of the `aufs` storage driver are
recommended to migrate to a different storage driver, such as `overlay2`, which
is now the default storage driver.

The `aufs` storage driver facilitates running Docker on distros that have no
support for OverlayFS, such as Ubuntu 14.04 LTS, which originally shipped with
a 3.14 kernel.

Now that Ubuntu 14.04 is no longer a supported distro for Docker, and `overlay2`
is available to all supported distros (as they are either on kernel 4.x, or have
support for multiple lowerdirs backported), there is no reason to continue
maintenance of the `aufs` storage driver.

### Reserved namespaces in engine labels

**Deprecated in Release: v18.06.0**

@@ -23,7 +23,7 @@ Create a context
Docker endpoint config:

NAME                DESCRIPTION
from-current        Copy current Docker endpoint configuration
from                Copy Docker endpoint configuration from an existing context
host                Docker endpoint on which to connect
ca                  Trust certs signed only by this CA
cert                Path to TLS certificate file
@@ -33,14 +33,16 @@ skip-tls-verify Skip TLS certificate validation
Kubernetes endpoint config:

NAME                 DESCRIPTION
from-current         Copy current Kubernetes endpoint configuration
from                 Copy Kubernetes endpoint configuration from an existing context
config-file          Path to a Kubernetes config file
context-override     Overrides the context set in the kubernetes config file
namespace-override   Overrides the namespace set in the kubernetes config file

Example:

$ docker context create my-context --description "some description" --docker "host=tcp://myserver:2376,ca=~/ca-file,cert=~/cert-file,key=~/key-file"
$ docker context create my-context \
    --description "some description" \
    --docker "host=tcp://myserver:2376,ca=~/ca-file,cert=~/cert-file,key=~/key-file"

Options:
    --default-stack-orchestrator string   Default orchestrator for
@@ -52,24 +54,68 @@ Options:
                                          (default [])
    --kubernetes stringToString           set the kubernetes endpoint
                                          (default [])
    --from string                         Create the context from an existing context
```

## Description

Creates a new `context`. This will allow you to quickly switch the cli configuration to connect to different clusters or single nodes.
Creates a new `context`. This allows you to quickly switch the cli
configuration to connect to different clusters or single nodes.

To create a `context` out of an existing `DOCKER_HOST` based script, you can use the `from-current` config key:
To create a context from scratch provide the docker and, if required,
kubernetes options. The example below creates the context `my-context`
with a docker endpoint of `/var/run/docker.sock` and a kubernetes configuration
sourced from the file `/home/me/my-kube-config`:

```bash
$ docker context create my-context \
    --docker host=/var/run/docker.sock \
    --kubernetes config-file=/home/me/my-kube-config
```

Use the `--from=<context-name>` option to create a new context from
an existing context. The example below creates a new context named `my-context`
from the existing context `existing-context`:

```bash
$ docker context create my-context --from existing-context
```

If the `--from` option is not set, the `context` is created from the current context:

```bash
$ docker context create my-context
```

This can be used to create a context out of an existing `DOCKER_HOST` based script:

```bash
$ source my-setup-script.sh
$ docker context create my-context --docker "from-current=true"
$ docker context create my-context
```

Similarly, to reference the currently active Kubernetes configuration, you can use `--kubernetes "from-current=true"`:
To source only the `docker` endpoint configuration from an existing context
use the `--docker from=<context-name>` option. The example below creates a
new context named `my-context` using the docker endpoint configuration from
the existing context `existing-context` and a kubernetes configuration sourced
from the file `/home/me/my-kube-config`:

```bash
$ export KUBECONFIG=/path/to/my/kubeconfig
$ docker context create my-context --kubernetes "from-current=true" --docker "host=/var/run/docker.sock"
$ docker context create my-context \
    --docker from=existing-context \
    --kubernetes config-file=/home/me/my-kube-config
```

Docker and Kubernetes endpoints configurations, as well as default stack orchestrator and description can be modified with `docker context update`
To source only the `kubernetes` configuration from an existing context use the
`--kubernetes from=<context-name>` option. The example below creates a new
context named `my-context` using the kubernetes configuration from the existing
context `existing-context` and a docker endpoint of `/var/run/docker.sock`:

```bash
$ docker context create my-context \
    --docker host=/var/run/docker.sock \
    --kubernetes from=existing-context
```

Docker and Kubernetes endpoints configurations, as well as default stack
orchestrator and description can be modified with `docker context update`
@@ -23,7 +23,7 @@ Update a context
Docker endpoint config:

NAME                DESCRIPTION
from-current        Copy current Docker endpoint configuration
from                Copy Docker endpoint configuration from an existing context
host                Docker endpoint on which to connect
ca                  Trust certs signed only by this CA
cert                Path to TLS certificate file
@@ -33,7 +33,7 @@ skip-tls-verify Skip TLS certificate validation
Kubernetes endpoint config:

NAME                 DESCRIPTION
from-current         Copy current Kubernetes endpoint configuration
from                 Copy Kubernetes endpoint configuration from an existing context
config-file          Path to a Kubernetes config file
context-override     Overrides the context set in the kubernetes config file
namespace-override   Overrides the namespace set in the kubernetes config file

@@ -812,7 +812,7 @@ Defaults to 20G.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt size=40G
```

@@ -827,7 +827,7 @@ deployments).

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.globalmode=false
```

@@ -838,7 +838,7 @@ used for booting a utility VM. Defaults to `%ProgramFiles%\Linux Containers`.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.kirdpath=c:\path\to\files
```

@@ -849,7 +849,7 @@ Defaults to `bootx64.efi`.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.kernel=kernel.efi
```

@@ -860,7 +860,7 @@ Defaults to `initrd.img`.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.initrd=myinitrd.img
```

@@ -872,7 +872,7 @@ are kernel specific.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt "lcow.bootparameters='option=value'"
```

@@ -883,7 +883,7 @@ and initrd booting. Defaults to `uvm.vhdx` under `lcow.kirdpath`.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.vhdx=custom.vhdx
```

@@ -894,7 +894,7 @@ to 300.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.timeout=240
```

@@ -905,7 +905,7 @@ containers. Defaults to 20. Cannot be less than 20.

###### Example

```PowerShell
```powershell
C:\> dockerd --storage-opt lcow.sandboxsize=40
```

@@ -31,7 +31,12 @@ Options:
## Description

Use `docker events` to get real-time events from the server. These events differ
per Docker object type.
per Docker object type. Different event types have different scopes. Local
scoped events are only seen on the node they take place on, and swarm scoped
events are seen on all managers.

Only the last 1000 log events are returned. You can use filters to further limit
the number of events returned.

### Object types

@@ -160,6 +165,9 @@ that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap
seconds (aka Unix epoch or Unix time), and the optional .nanoseconds field is a
fraction of a second no more than nine digits long.

Only the last 1000 log events are returned. You can use filters to further limit
the number of events returned.

#### Filtering

The filtering flag (`-f` or `--filter`) format is of "key=value". If you would
Some files were not shown because too many files have changed in this diff.