Compare commits


84 Commits

Author SHA1 Message Date
ec2ecc1c8b Merge pull request #1978 from thaJeztah/18.09_backport_fix_rollback_config_interpolation
[18.09 backport] Fix Rollback config type interpolation
2019-07-03 23:09:20 +02:00
23c88a8311 Rollback config type interpolation was missing on the fields "parallelism" and "max_failure_ratio", which use the same types as update_config.
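
As a rough, stand-alone illustration of why the cast matters (this is not the cli's interpolation code; `$theint` and the value 555 are taken from the test further down): interpolation substitutes environment variables as strings, so numeric fields such as `deploy.rollback_config.parallelism` need an explicit conversion afterwards, just like `update_config` already gets.

```
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// After interpolation, "$theint" has become the string "555"; without an
	// explicit cast the field would remain a string and fail schema validation.
	substituted := "555"
	parallelism, err := strconv.ParseUint(substituted, 10, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(parallelism) // 555
}
```
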
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
(cherry picked from commit efdf36fa81)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-03 19:20:08 +02:00
3a749342a3 Merge pull request #1842 from thaJeztah/18.09_bump_buildkit_18.09
[18.09 backport] bump buildkit 05766c5c21a1e528eeb1c3522b2f05493fe9ac47 (docker-18.09 branch)
2019-06-18 09:49:42 -07:00
278d30bceb bump tonistiigi/fsutil 2862f6bc5ac9b97124e552a5c108230b38a1b0ca
- tonistiigi/fsutil#54 walker: allow enotdir as enoent

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-04-20 13:22:11 +02:00
65b28186fc bump buildkit 05766c5c21a1e528eeb1c3522b2f05493fe9ac47 (docker-18.09 branch)
full diff: 520201006c..05766c5c21

- moby/buildkit#952 [18.09 backport] Have parser error on dockerfiles without instructions
  - backport of moby/buildkit#771 Have parser error on dockerfiles without instructions

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-04-20 13:19:07 +02:00
c89750f836 Merge pull request #1795 from thaJeztah/18.09_backport_dialstdio_1736
[18.09 backport] dial-stdio: fix goroutine leakage
2019-04-02 10:26:42 +02:00
c805ad2964 Merge pull request #1794 from thaJeztah/18.09_backport_fix_stack_watch
[18.09 backport] Fix the stack informer's selector used to track deployment
2019-04-02 10:24:41 +02:00
d8c6c830f8 dial-stdio: fix goroutine leakage
Fix #1736

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit f8d4c443ba)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-04-02 00:25:24 +02:00
f89d05edcb Fix the stack informer's selector used to track deployment
The old selector was wrong: it watched for the label we apply to child
resources when reconciling the stack, instead of the stack itself.

This should be back-ported to older versions of the CLI.
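
A minimal stand-alone sketch of the corrected selector, using the same `fields.OneTermEqualSelector` helper as the patch further down; the stack name `"mystack"` is only an example:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Select the stack object itself by its name, instead of matching the
	// label that is applied to the stack's child resources.
	selector := fields.OneTermEqualSelector("metadata.name", "mystack")
	fmt.Println(selector.String()) // metadata.name=mystack
}
```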

Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
(cherry picked from commit 8cd74eb33a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-04-01 18:23:43 +02:00
e1fe8f3c45 Merge pull request #1788 from thaJeztah/18.09_backport_annotations
[18.09 backport] fix annotations on --template-driver
2019-03-28 16:50:17 -07:00
356eda4028 Fix annotation on docker secret create --template-driver
Signed-off-by: Sune Keller <absukl@almbrand.dk>
(cherry picked from commit 217308d96d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-29 00:33:02 +01:00
85148aa3f1 Fix annotation on docker config create --template-driver
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
(cherry picked from commit 470afe11ed)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-29 00:32:44 +01:00
19c0311d46 Merge pull request #1775 from thaJeztah/18.09_backport_ttyexecresize
[18.09 backport] fixes 1492: tty initial size error
2019-03-28 10:20:48 -07:00
207ff0831d Merge pull request #1776 from thaJeztah/18.09_backport_upgrade_shellcheck_0.6.0
[18.09 backport] use official shellcheck 0.6.0, and don't patch Dockerfiles in CI
2019-03-28 10:19:59 -07:00
57b27434ea Merge pull request #1778 from thaJeztah/18.09_bump_engine
[18.09] bump engine 200b524eff60a9c95a22bc2518042ac2ff617d07 (18.09 branch)
2019-03-27 08:28:02 -07:00
010c234a0d bump engine 200b524eff60a9c95a22bc2518042ac2ff617d07 (18.09 branch)
Relevant changes:

- moby/moby#38006 / docker/engine#114 client: use io.LimitedReader for reading HTTP error
- moby/moby#38634 / docker/engine#167 pkg/archive:CopyTo(): fix for long dest filename
  - fixes docker/for-linux#484 for 18.09
- moby/moby#38944 / docker/engine#183 gitutils: add validation for ref
- moby/moby#37780 / docker/engine#55 pkg/progress: work around closing closed channel panic
  - addresses moby/moby#37735 pkg/progress: panic due to race on shutdown

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-27 10:08:23 +01:00
9a5296c8f1 Update to shellcheck v0.6.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ff107b313a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-26 14:33:39 +01:00
b59752479b Use official shellcheck image
This patch switches the shellcheck image to use the official image
from Docker Hub.

Note that this does not yet update shellcheck to the latest version (v0.5.x);
shellcheck v0.4.7 added some new checks that currently make CI fail, so that
update will be done in a follow-up PR. Instead, this PR uses v0.4.6, which is
closest to the version that was installed in the image before this change:

```
docker run --rm docker-cli-shell-validate shellcheck --version
ShellCheck - shell script analysis tool
version: 0.4.4
license: GNU General Public License, version 3
website: http://www.shellcheck.net
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 388646eab0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-26 14:26:45 +01:00
8997667aa2 Do not patch Dockerfiles in CI
The Dockerfiles for development are mainly used to create a reproducible
build environment. The source code is bind-mounted into the image at runtime;
there is no need to create an image with the actual source code, and copying
the source code into the image would lead to a new image being created for
each code change (possibly leaving many "dangling" images behind for previous
code changes).

However, when building (and using) the development images in CI, bind-mounting
is not an option, because the daemon is running remotely.

To make this work, the circle-ci script patched the Dockerfiles when CI was run,
adding a `COPY` to the respective Dockerfiles.

Patching Dockerfiles is not really a "best practice" and, even though the source
code does not end up in the image, the source would still be _sent_ to the daemon
for each build (unless BuildKit is used).

This patch updates the makefiles, circle-ci script, and Dockerfiles:

- When building the Dockerfiles locally, pipe the Dockerfile through stdin.
  Doing so prevents the build context from being sent to the daemon. This speeds
  up the build and doesn't fill up the Docker "temp" directory with content that's
  not used.
- Now that no content is sent, add the COPY instructions to the Dockerfiles, and
  remove the code in the circle-ci script to "live patch" the Dockerfiles.

Before this patch is applied (with cache):

```
$ time make -f docker.Makefile build_shell_validate_image
docker build -t docker-cli-shell-validate -f ./dockerfiles/Dockerfile.shellcheck .
Sending build context to Docker daemon     41MB
Step 1/2 : FROM    debian:stretch-slim
...
Successfully built 81e14e8ad856
Successfully tagged docker-cli-shell-validate:latest

2.75 real         0.45 user         0.56 sys
```

After this patch is applied (with cache):

```
$ time make -f docker.Makefile build_shell_validate_image
cat ./dockerfiles/Dockerfile.shellcheck | docker build -t docker-cli-shell-validate -
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM    debian:stretch-slim
...
Successfully built 81e14e8ad856
Successfully tagged docker-cli-shell-validate:latest

0.33 real         0.07 user         0.08 sys
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 166856ab1b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-26 14:26:01 +01:00
bcae2c4408 tty initial size error
Signed-off-by: Lifubang <lifubang@acmcoder.com>
Signed-off-by: lifubang <lifubang@acmcoder.com>
(cherry picked from commit 3fbffc682b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-26 12:51:27 +01:00
079adf3f23 moved integration test TestExportContainerWithOutputAndImportImage from moby/moby to docker/cli.
The integration test TestExportContainerWithOutputAndImportImage in moby/moby is the same as TestExportContainerAndImportImage,
except for the output file option. Adding a unit test to cover the output file option of the export command here allows
the removal of the redundant integration test TestExportContainerWithOutputAndImportImage.

Signed-off-by: Arash Deshmeh <adeshmeh@ca.ibm.com>
(cherry picked from commit fc1e11d46a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-26 12:51:22 +01:00
f6693b0b25 Merge pull request #1733 from thaJeztah/18.09_backport_dial_stdio_npipe_on_windows
[18.09 backport] dial-stdio: handle connections which lack CloseRead method
2019-03-21 14:35:47 -07:00
ed16a3136b Merge pull request #1744 from thaJeztah/18.09_backport_docs_fixes
[18.09 backport] various docs fixes
2019-03-18 17:36:26 +01:00
e63ac0ea35 Merge pull request #1741 from thaJeztah/18.09_backport_fix_plugin_test
[18.09 backport] Fix: plugin-tests discarding current environment
2019-03-18 14:47:32 +01:00
c1a4358ea4 Add some spaces for cosmetic and readability reasons.
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
(cherry picked from commit 8401c81b46)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 13:52:13 +01:00
27ab7cc3d6 Add exit status to docker exec manpage
There's little way of knowing what each exit status means at present
because it's not documented. I'm assuming they are the same as docker
run.

Signed-off-by: Eric Curtin <ericcurtin17@gmail.com>
(cherry picked from commit 23670968cc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:16:13 +01:00
74bd5f143f Corrected max-file option - was incorrectly spelt as max-files
Signed-off-by: Steve Richards <steve.richards@docker.com>
(cherry picked from commit 04f88005c9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:16:11 +01:00
8dc400713f Note caveat with detaching using key sequence
This has come up a few times, e.g. https://github.com/moby/moby/issues/20864 and https://github.com/moby/moby/issues/35491

Signed-off-by: Ben Creasy <ben@bencreasy.com>
(cherry picked from commit 767b25fc52)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:16:08 +01:00
543f9b32ee Fix typos
Signed-off-by: Michael Käufl <docker@c.michael-kaeufl.de>
(cherry picked from commit 0e469c1d1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:16:05 +01:00
1d314f2227 Fix small typo
Noticed a typo in this markdown file: "instead" instead of "in stead"

Signed-off-by: Ryan Wilson-Perkin <ryanwilsonperkin@gmail.com>
(cherry picked from commit 7a9fc782c5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:59 +01:00
275ab1f063 Improve docker image rm reference docs
Copies the improved description from the man page
to the online reference docs.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89bc5fbbae)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:57 +01:00
4f6ab11ff4 Update process isolation description for older Windows 10 versions
Signed-off-by: Stefan Scherer <scherer_stefan@icloud.com>
(cherry picked from commit 7229920e2e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:55 +01:00
537309a548 Fix some typos in manifest.md
Signed-off-by: zhoulin xie <zhoulin.xie@daocloud.io>
(cherry picked from commit abe1bb9757)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:52 +01:00
08714b4579 docs: add missing ID placeholder for docker node ps
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 24018b9ffd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:50 +01:00
789a15bc73 docs(metrics-addr): Use port 9323, allocated for Docker in prometheus
Signed-off-by: Frederic Hemberger <mail@frederic-hemberger.de>
(cherry picked from commit 89aa2cf9f6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:48 +01:00
ce12ac2d14 Fixed typo.
Signed-off-by: Anne Henmi <anne.henmi@docker.com>
(cherry picked from commit 4aecd8bda1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:45 +01:00
4c94a0af75 Replace environmental with environment
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
(cherry picked from commit f1f3d3be17)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:43 +01:00
0717f261ed Improve docker image rm documentation
The `docker image rm` command can be used not only
to remove images but also to remove tags.

This update improves the documentation to make
this clear.

Signed-off-by: Filip Jareš <filipjares@gmail.com>
(cherry picked from commit 2ba9601ef1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-18 11:15:39 +01:00
fc8717799f Fix: plugin-tests discarding current environment
By default, exec uses the environment of the current process; however, if
`exec.Env` is not `nil`, that environment is discarded:
e73f489494/src/os/exec/exec.go (L57-L60)

> If Env is nil, the new process uses the current process's environment.

When adding a new environment variable, prepend the current environment
to make sure it is not discarded.
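
A minimal stand-alone sketch of that pattern (the command and the extra variable name are made up for the example):

```
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Leaving cmd.Env nil would inherit the parent environment automatically;
	// once Env is set explicitly, only the listed variables are passed on,
	// so prepend os.Environ() before adding anything new.
	cmd := exec.Command("env")
	cmd.Env = append(os.Environ(), "EXTRA_TEST_VAR=1")
	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(string(out))
}
```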

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c4fbb7738)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-17 15:47:28 +01:00
76f4876129 Merge pull request #1734 from thaJeztah/18.09_backport_fix_test_for_go_1.12
[18.09 backport] Fix test for Go 1.12.x
2019-03-14 16:55:40 +01:00
7ea48a16e3 Fix test for Go 1.12.x
After switching to Go 1.12, the format string causes an error:

```
=== Errors
cli/config/config_test.go:154:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile
cli/config/config_test.go:217:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile
cli/config/config_test.go:253:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile
cli/config/config_test.go:288:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile
cli/config/config_test.go:435:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile
cli/config/config_test.go:448:3: Fatalf format %q has arg config of wrong type *github.com/docker/cli/cli/config/configfile.ConfigFile

DONE 1115 tests, 2 skipped, 6 errors in 215.984s
make: *** [Makefile:22: test-coverage] Error 2
Exited with code 2
```
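
The failures come from the `vet` printf check that `go test` runs, which rejects `%q` for a non-string argument. A tiny illustration with a made-up type standing in for `configfile.ConfigFile`:

```
package main

import "fmt"

type configFile struct{ Filename string }

func main() {
	cfg := &configFile{Filename: "config.json"}
	// vet flags this as: Printf format %q has arg cfg of wrong type *main.configFile
	fmt.Printf("%q\n", cfg)
	// Comparing the individual fields (as the fixed tests do with assert.Equal)
	// sidesteps the formatting problem entirely.
	fmt.Println(cfg.Filename == "config.json")
}
```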

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4877fb225)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-13 22:27:00 +01:00
75e9075591 dial-stdio: Close the connection
This was leaking the fd.

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 186e7456ac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-13 11:54:11 +01:00
69e1094f5a dial-stdio: handle connections which lack CloseRead method.
This happens on Windows when dialing a named pipe (a path that is used by CLI
plugins); in that case, some debugging shows:

    DEBU[0000] conn is a *winio.win32MessageBytePipe
    DEBU[0000] conn is a halfReadCloser: false
    DEBU[0000] conn is a halfWriteCloser: true
    the raw stream connection does not implement halfCloser

In such cases we can simply wrap with a nop function, since closing for read
isn't too critical.

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 8919bbf04d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-13 11:53:58 +01:00
890e29da87 Merge pull request #1729 from thaJeztah/18.09_backport_e2e_handle_alpine_bump
[18.09 backport] Fixes for e2e testing after Alpine bump
2019-03-12 13:01:46 +01:00
78d52ec5d4 e2e: avoid usermod -p by using useradd's --password option
Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 0b0c57871a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-12 10:36:15 +01:00
c0bbca75af e2e: Expand useradd's -m option into --create-home
... for improved readability

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit e854a9cf96)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-12 10:36:03 +01:00
b666e9a090 e2e: Use useradd's --shell option
... in preference to `chsh`, since in recent alpine 3.9.2 images that can fail
with:

    Password: chsh: PAM: Authentication token manipulation error

This seems to relate to the use of `!` as the password for `root` in `/etc/shadow`.

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 5de2d9e8a9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-03-12 10:35:23 +01:00
9352be5341 Merge pull request #1694 from thaJeztah/18.09_backport_nolibtool
[18.09 backport] Update PKCS11 library
2019-02-27 08:39:46 -08:00
b4f607fb4f Update PKCS11 library
The new version no longer links to libltdl, which simplifies the build
and dependencies.

See https://github.com/theupdateframework/notary/pull/1434

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit cb3e55bf58)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-26 13:17:22 +01:00
af2647d55b Merge pull request #1634 from thaJeztah/18.09_bump_golang_1.10.8
[18.09] Bump Golang 1.10.8 (CVE-2019-6486)
2019-01-24 14:27:59 +01:00
c71aa11c0a [18.09] Bump Golang 1.10.8 (CVE-2019-6486)
See the milestone for details;
https://github.com/golang/go/issues?q=milestone%3AGo1.10.8+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-01-24 02:07:03 +01:00
336b2a5cac Merge pull request #1580 from thaJeztah/18.09_backport_e2e-invocation-nit
[18.09 backport] e2e updates
2018-12-19 14:20:03 +01:00
c462e06fcd e2e: assign a default value of 0 to DOCKERD_EXPERIMENTAL
Currently running the e2e tests produces a warning/error:

    $ make -f docker.Makefile test-e2e
    «...»
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker-cli-e2e
    ./scripts/test/e2e/run: line 20: test: : integer expression expected

This is from:

    test "${DOCKERD_EXPERIMENTAL:-}" -eq "1" && «...»

Here, `${DOCKERD_EXPERIMENTAL:-}` expands to the empty string, resulting in
`test "" -eq "1"`, which produces the warning. The error is enough to trigger
the short-circuiting behaviour of `&&`, so the result is as expected, but fix
the issue nonetheless by providing a default of `0`.

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 4f483276cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-17 17:23:04 +01:00
719508a935 connhelper: add e2e
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit 9b148db87a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-17 17:22:58 +01:00
2fa3aae9ed Merge pull request #1575 from thaJeztah/bump_golang_1.10.6
[18.09] Bump Golang 1.10.6 (CVE-2018-16875)
2018-12-14 20:56:04 +00:00
6c3a10aaed Bump Golang 1.10.6 (CVE-2018-16875)
go1.10.6 (released 2018/12/14)

- crypto/x509: CPU denial of service in chain validation golang/go#29233
- cmd/go: directory traversal in "go get" via curly braces in import paths golang/go#29231
- cmd/go: remote command execution during "go get -u" golang/go#29230

See the Go 1.10.6 milestone on the issue tracker for details:
https://github.com/golang/go/issues?q=milestone%3AGo1.10.6

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-14 01:41:33 +01:00
3ee6755815 Merge pull request #1567 from thaJeztah/18.09_backport_fix_panic_on_update
[18.09 backport] Fix panic (npe) when updating service limits/reservations
2018-12-13 10:39:37 +00:00
16349f6e33 Fix panic (npe) when updating service limits/reservations
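
A minimal sketch of the defensive initialization the fix applies in `updateService`; the types below are illustrative stand-ins for `swarm.ResourceRequirements` and `swarm.Resources`, not the real ones:

```
package main

import "fmt"

type resources struct{ NanoCPUs int64 }
type resourceRequirements struct{ Limits, Reservations *resources }

func main() {
	// A service created without limits/reservations has nil nested structs;
	// writing through them without initializing first is what panicked.
	task := &resourceRequirements{}
	if task.Limits == nil {
		task.Limits = &resources{}
	}
	if task.Reservations == nil {
		task.Reservations = &resources{}
	}
	task.Limits.NanoCPUs = 2e9 // e.g. --limit-cpu 2
	fmt.Println(task.Limits.NanoCPUs)
}
```
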
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 579bb91853)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-13 02:22:43 +01:00
2aa77af30f Merge pull request #1554 from thaJeztah/18.09_backport_completion-import--platform
[18.09 backport] Add bash completion for `import --platform`
2018-12-07 13:10:27 -08:00
456c1ce695 Merge pull request #1553 from thaJeztah/18.09_backport_completion-log-driver-local
[18.09 backport] Add bash completion for "local" log driver
2018-12-07 13:10:06 -08:00
bcadc9061c Add bash completion for import --platform
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit e0fe546c37)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 20:08:35 +01:00
e05745b4a5 Add bash completion for "local" log driver
Ref: https://github.com/moby/moby/pull/37092

Also adds log-opt `compress` to json-file log driver because this was
also added in the referenced PR.

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit c59038b15c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 20:05:08 +01:00
b6ecef353f Merge pull request #1552 from thaJeztah/18.09_backport_fix_filter_panic
[18.09 backport] Fix panic when pruning images with label-filter
2018-12-07 19:29:32 +01:00
e380ddaddf Fix panic when pruning images with label-filter
Before this change:

    docker image prune --force --filter "label=foobar"
    panic: assignment to entry in nil map

    goroutine 1 [running]:
    github.com/docker/cli/vendor/github.com/docker/docker/api/types/filters.Args.Add(...)
    /go/src/github.com/docker/cli/vendor/github.com/docker/docker/api/types/filters/parse.go:167
    github.com/docker/cli/cli/command/image.runPrune(0x1db3a20, 0xc000344cf0, 0x16e0001, 0xc00015e600, 0x4, 0x3, 0xc00024e160, 0xc000545c70, 0x5ab4b5)
    /go/src/github.com/docker/cli/cli/command/image/prune.go:79 +0xbaf
    github.com/docker/cli/cli/command/image.NewPruneCommand.func1(0xc00029ef00, 0xc0004a8180, 0x0, 0x3, 0x0, 0x0)
    /go/src/github.com/docker/cli/cli/command/image/prune.go:32 +0x64
    github.com/docker/cli/vendor/github.com/spf13/cobra.(*Command).execute(0xc00029ef00, 0xc000038210, 0x3, 0x3, 0xc00029ef00, 0xc000038210)
    /go/src/github.com/docker/cli/vendor/github.com/spf13/cobra/command.go:762 +0x473
    github.com/docker/cli/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000127180, 0xc000272770, 0x1836ce0, 0xc000272780)
    /go/src/github.com/docker/cli/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
    github.com/docker/cli/vendor/github.com/spf13/cobra.(*Command).Execute(0xc000127180, 0xc000127180, 0x1d60880)
    /go/src/github.com/docker/cli/vendor/github.com/spf13/cobra/command.go:800 +0x2b
    main.main()
    /go/src/github.com/docker/cli/cmd/docker/docker.go:180 +0xdc

With this patch applied:

    docker image prune --force --filter "label=foobar"
    Total reclaimed space: 0B
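
The panic itself is plain Go behaviour: writing through a nil map. A stripped-down reproduction (not the cli's filters code, which the patch now copies safely before mutating):

```
package main

import "fmt"

func main() {
	defer func() {
		// Recovers "assignment to entry in nil map", the same panic seen above.
		fmt.Println("recovered:", recover())
	}()
	var fields map[string]map[string]bool // zero value: nil
	fields["dangling"] = map[string]bool{"true": true}
}
```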

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e1dd5bca4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 17:58:38 +01:00
12834eeff6 Merge pull request #1542 from thaJeztah/18.09_backport_completion_cli_experimental
[18.09 backport] Add bash completion for experimental CLI commands (manifest)
2018-12-03 13:34:56 -08:00
bb46da9fba Merge pull request #1544 from thaJeztah/18.09_bump_go_to_1.10.5
[18.09] Bump Go to 1.10.5
2018-11-30 14:03:12 -08:00
871d24d3fc Bump Go to 1.10.5
go1.10.5 (released 2018/11/02) includes fixes to the go command, linker,
runtime and the database/sql package. See the milestone on the issue
tracker for details:

List of changes; https://github.com/golang/go/issues?q=milestone%3AGo1.10.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-30 21:59:54 +01:00
61a9096b8d Merge pull request #1540 from thaJeztah/18.09_backport_fix_flags_in_usage
[18.09 backport] Fix yamldocs outputting `[flags]` in usage output
2018-11-29 13:26:27 -08:00
2ac475cf97 Add bash completion for manifest command family
Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit 0fb4256a00)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-29 17:15:06 +01:00
2a36695037 Add support for experimental cli features to bash completion
This is needed for implementing bash completion for the `docker manifest`
command family.

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit a183c952c6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-29 17:15:04 +01:00
dc74fc81f2 Refactor usage of docker version in bash completion
This prepares bash completion for more context sensitivity:

- experimental cli features
- orchestrator specific features

Also renames _daemon_ to _server_ where used in the context of `docker version`
because the fields there are grouped under _Server_.

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit 564d4da06e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-29 17:15:02 +01:00
7e90635652 Fix yamldocs outputting [flags] in usage output
A similar change was made in the CLI itself, but is not
inherited by the code that generates the YAML docs.

Before this patch is applied;

```
usage: docker container exec [OPTIONS] CONTAINER COMMAND [ARG...] [flags]
```

With this patch applied:

```
usage: docker container exec [OPTIONS] CONTAINER COMMAND [ARG...]
```
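
For reference, a small Cobra sketch of the behaviour being suppressed. The exact change in the YAML docs generator may differ, but the knob involved is Cobra's `DisableFlagsInUseLine`:

```
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "exec [OPTIONS] CONTAINER COMMAND [ARG...]",
		// Without this, Cobra appends " [flags]" to the use line once the
		// command has flags, which is what leaked into the generated YAML docs.
		DisableFlagsInUseLine: true,
	}
	cmd.Flags().BoolP("interactive", "i", false, "Keep STDIN open")
	fmt.Println(cmd.UseLine()) // exec [OPTIONS] CONTAINER COMMAND [ARG...]
}
```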

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 44d96e9120)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-29 15:44:16 +01:00
3f7989903a Merge pull request #1454 from thaJeztah/18.09_backport_defaulttcpschema
[18.09 backport] fixes #1441 set default schema to tcp for docker host
2018-11-27 09:32:51 -08:00
7059d069c3 Merge pull request #1532 from tiborvass/18.09-fix-system-prune-filters
[18.09] prune: move image pruning before build cache pruning
2018-11-26 16:07:21 -08:00
4a4a1f3615 prune: move image pruning before build cache pruning
This is cleaner because running system prune twice in a row
now results in a no-op the second time.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6c10abb247)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-11-21 22:01:54 +00:00
1274f23252 Merge pull request #1531 from thaJeztah/18.09_backport_builder_docs
[18.09 backport] builder documentation updates
2018-11-21 18:10:29 +01:00
3af1848dda buildkit reference docs
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 83aeb219f0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 17:16:30 +01:00
6d91f5d55d Document that ENTRYPOINT can reset CMD to an empty value
Signed-off-by: Brandon Mitchell <git@bmitch.net>
(cherry picked from commit cc316fde55)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 17:16:24 +01:00
d56948c12c Merge pull request #1530 from thaJeztah/18.09_backport_add_logging_driver_example
[18.09 backport] Update daemon.json example to show that log-opts must be a string
2018-11-21 17:10:02 +01:00
9b3eea87ee Update daemon.json example to show that log-opts must be a string
log-opts are passed to logging drivers as-is, so the daemon is not
aware of what value type each option takes.

For this reason, all options must be provided as a string, even if
they are used as numeric values by the logging driver.

For example, to pass the "max-file" option to the default (json-file)
logging driver, this value has to be passed as a string:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

When passed as a _number_ (`"max-file": 3`), the daemon will invalidate
the configuration file and fail to start:

    unable to configure the Docker daemon with file /etc/docker/daemon.json: json: cannot unmarshal number into Go value of type string

This patch adds an example to the daemon.json documentation to show that these
values have to be passed as strings.
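
The error is plain `encoding/json` behaviour: judging by the message above, the daemon decodes `log-opts` into a `map[string]string`, so a bare number cannot be unmarshalled into a string. A small reproduction with an illustrative struct (not the daemon's actual config type):

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	type daemonConfig struct {
		LogOpts map[string]string `json:"log-opts"`
	}
	var cfg daemonConfig
	// "max-file" given as a number instead of a string:
	err := json.Unmarshal([]byte(`{"log-opts": {"max-file": 3}}`), &cfg)
	fmt.Println(err) // json: cannot unmarshal number into Go ... of type string
}
```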

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fd33e0d933)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 15:34:41 +01:00
31c092e155 Merge pull request #1526 from thaJeztah/18.09_backport_completion_fix_service__force
[18.09 backport] Fix bash completion for `service update --force`
2018-11-21 11:38:28 +01:00
046ffa4e87 Fix bash completion for service update --force
- `--force` is not available in `service create`
- `--force` is a boolean option

Signed-off-by: Harald Albers <github@albersweb.de>
(cherry picked from commit 5fa5eb1da6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-20 18:18:33 +01:00
8ae4453d46 add test case TestNewAPIClientFromFlagsForDefaultSchema
Signed-off-by: Lifubang <lifubang@acmcoder.com>
(cherry picked from commit beed8748c0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-17 17:48:07 +02:00
aeea559129 set default schema to tcp for docker host
Signed-off-by: Lifubang <lifubang@acmcoder.com>
(cherry picked from commit 2431dd1448)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-17 17:47:58 +02:00
75 changed files with 1467 additions and 562 deletions

View File

@ -4,7 +4,7 @@ clone_folder: c:\gopath\src\github.com\docker\cli
environment:
GOPATH: c:\gopath
GOVERSION: 1.10.4
GOVERSION: 1.10.8
DEPVERSION: v0.4.1
install:

View File

@ -16,9 +16,7 @@ jobs:
- run:
name: "Lint"
command: |
dockerfile=dockerfiles/Dockerfile.lint
echo "COPY . ." >> $dockerfile
docker build -f $dockerfile --tag cli-linter:$CIRCLE_BUILD_NUM .
docker build -f dockerfiles/Dockerfile.lint --tag cli-linter:$CIRCLE_BUILD_NUM .
docker run --rm cli-linter:$CIRCLE_BUILD_NUM
cross:
@ -34,9 +32,7 @@ jobs:
- run:
name: "Cross"
command: |
dockerfile=dockerfiles/Dockerfile.cross
echo "COPY . ." >> $dockerfile
docker build -f $dockerfile --tag cli-builder:$CIRCLE_BUILD_NUM .
docker build -f dockerfiles/Dockerfile.cross --tag cli-builder:$CIRCLE_BUILD_NUM .
name=cross-$CIRCLE_BUILD_NUM-$CIRCLE_NODE_INDEX
docker run \
-e CROSS_GROUP=$CIRCLE_NODE_INDEX \
@ -60,9 +56,7 @@ jobs:
- run:
name: "Unit Test with Coverage"
command: |
dockerfile=dockerfiles/Dockerfile.dev
echo "COPY . ." >> $dockerfile
docker build -f $dockerfile --tag cli-builder:$CIRCLE_BUILD_NUM .
docker build -f dockerfiles/Dockerfile.dev --tag cli-builder:$CIRCLE_BUILD_NUM .
docker run --name \
test-$CIRCLE_BUILD_NUM cli-builder:$CIRCLE_BUILD_NUM \
make test-coverage
@ -89,10 +83,8 @@ jobs:
- run:
name: "Validate Vendor, Docs, and Code Generation"
command: |
dockerfile=dockerfiles/Dockerfile.dev
echo "COPY . ." >> $dockerfile
rm -f .dockerignore # include .git
docker build -f $dockerfile --tag cli-builder-with-git:$CIRCLE_BUILD_NUM .
docker build -f dockerfiles/Dockerfile.dev --tag cli-builder-with-git:$CIRCLE_BUILD_NUM .
docker run --rm cli-builder-with-git:$CIRCLE_BUILD_NUM \
make ci-validate
shellcheck:
@ -107,9 +99,7 @@ jobs:
- run:
name: "Run shellcheck"
command: |
dockerfile=dockerfiles/Dockerfile.shellcheck
echo "COPY . ." >> $dockerfile
docker build -f $dockerfile --tag cli-validator:$CIRCLE_BUILD_NUM .
docker build -f dockerfiles/Dockerfile.shellcheck --tag cli-validator:$CIRCLE_BUILD_NUM .
docker run --rm cli-validator:$CIRCLE_BUILD_NUM \
make shellcheck
workflows:

View File

@ -274,21 +274,17 @@ func NewDockerCli(in io.ReadCloser, out, err io.Writer, isTrusted bool, containe
// NewAPIClientFromFlags creates a new APIClient from command line flags
func NewAPIClientFromFlags(opts *cliflags.CommonOptions, configFile *configfile.ConfigFile) (client.APIClient, error) {
unparsedHost, err := getUnparsedServerHost(opts.Hosts)
host, err := getServerHost(opts.Hosts, opts.TLSOptions)
if err != nil {
return &client.Client{}, err
}
var clientOpts []func(*client.Client) error
helper, err := connhelper.GetConnectionHelper(unparsedHost)
helper, err := connhelper.GetConnectionHelper(host)
if err != nil {
return &client.Client{}, err
}
if helper == nil {
clientOpts = append(clientOpts, withHTTPClient(opts.TLSOptions))
host, err := dopts.ParseHost(opts.TLSOptions != nil, unparsedHost)
if err != nil {
return &client.Client{}, err
}
clientOpts = append(clientOpts, client.WithHost(host))
} else {
clientOpts = append(clientOpts, func(c *client.Client) error {
@ -321,7 +317,7 @@ func NewAPIClientFromFlags(opts *cliflags.CommonOptions, configFile *configfile.
return client.NewClientWithOpts(clientOpts...)
}
func getUnparsedServerHost(hosts []string) (string, error) {
func getServerHost(hosts []string, tlsOptions *tlsconfig.Options) (string, error) {
var host string
switch len(hosts) {
case 0:
@ -331,7 +327,8 @@ func getUnparsedServerHost(hosts []string) (string, error) {
default:
return "", errors.New("Please specify only one -H")
}
return host, nil
return dopts.ParseHost(tlsOptions != nil, host)
}
func withHTTPClient(tlsOpts *tlsconfig.Options) func(*client.Client) error {

View File

@ -43,6 +43,26 @@ func TestNewAPIClientFromFlags(t *testing.T) {
assert.Check(t, is.Equal(api.DefaultVersion, apiclient.ClientVersion()))
}
func TestNewAPIClientFromFlagsForDefaultSchema(t *testing.T) {
host := ":2375"
opts := &flags.CommonOptions{Hosts: []string{host}}
configFile := &configfile.ConfigFile{
HTTPHeaders: map[string]string{
"My-Header": "Custom-Value",
},
}
apiclient, err := NewAPIClientFromFlags(opts, configFile)
assert.NilError(t, err)
assert.Check(t, is.Equal("tcp://localhost"+host, apiclient.DaemonHost()))
expectedHeaders := map[string]string{
"My-Header": "Custom-Value",
"User-Agent": UserAgent(),
}
assert.Check(t, is.DeepEqual(expectedHeaders, apiclient.(*client.Client).CustomHTTPHeaders()))
assert.Check(t, is.Equal(api.DefaultVersion, apiclient.ClientVersion()))
}
func TestNewAPIClientFromFlagsWithAPIVersionFromEnv(t *testing.T) {
customVersion := "v3.3.3"
defer env.Patch(t, "DOCKER_API_VERSION", customVersion)()

View File

@ -40,7 +40,7 @@ func newConfigCreateCommand(dockerCli command.Cli) *cobra.Command {
flags := cmd.Flags()
flags.VarP(&createOpts.labels, "label", "l", "Config labels")
flags.StringVar(&createOpts.templateDriver, "template-driver", "", "Template driver")
flags.SetAnnotation("driver", "version", []string{"1.37"})
flags.SetAnnotation("template-driver", "version", []string{"1.37"})
return cmd
}

View File

@ -12,19 +12,24 @@ import (
type fakeClient struct {
client.Client
inspectFunc func(string) (types.ContainerJSON, error)
execInspectFunc func(execID string) (types.ContainerExecInspect, error)
execCreateFunc func(container string, config types.ExecConfig) (types.IDResponse, error)
createContainerFunc func(config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, containerName string) (container.ContainerCreateCreatedBody, error)
containerStartFunc func(container string, options types.ContainerStartOptions) error
imageCreateFunc func(parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error)
infoFunc func() (types.Info, error)
containerStatPathFunc func(container, path string) (types.ContainerPathStat, error)
containerCopyFromFunc func(container, srcPath string) (io.ReadCloser, types.ContainerPathStat, error)
logFunc func(string, types.ContainerLogsOptions) (io.ReadCloser, error)
waitFunc func(string) (<-chan container.ContainerWaitOKBody, <-chan error)
containerListFunc func(types.ContainerListOptions) ([]types.Container, error)
Version string
inspectFunc func(string) (types.ContainerJSON, error)
execInspectFunc func(execID string) (types.ContainerExecInspect, error)
execCreateFunc func(container string, config types.ExecConfig) (types.IDResponse, error)
createContainerFunc func(config *container.Config,
hostConfig *container.HostConfig,
networkingConfig *network.NetworkingConfig,
containerName string) (container.ContainerCreateCreatedBody, error)
containerStartFunc func(container string, options types.ContainerStartOptions) error
imageCreateFunc func(parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error)
infoFunc func() (types.Info, error)
containerStatPathFunc func(container, path string) (types.ContainerPathStat, error)
containerCopyFromFunc func(container, srcPath string) (io.ReadCloser, types.ContainerPathStat, error)
logFunc func(string, types.ContainerLogsOptions) (io.ReadCloser, error)
waitFunc func(string) (<-chan container.ContainerWaitOKBody, <-chan error)
containerListFunc func(types.ContainerListOptions) ([]types.Container, error)
containerExportFunc func(string) (io.ReadCloser, error)
containerExecResizeFunc func(id string, options types.ResizeOptions) error
Version string
}
func (f *fakeClient) ContainerList(_ context.Context, options types.ContainerListOptions) ([]types.Container, error) {
@ -124,3 +129,17 @@ func (f *fakeClient) ContainerStart(_ context.Context, container string, options
}
return nil
}
func (f *fakeClient) ContainerExport(_ context.Context, container string) (io.ReadCloser, error) {
if f.containerExportFunc != nil {
return f.containerExportFunc(container)
}
return nil, nil
}
func (f *fakeClient) ContainerExecResize(_ context.Context, id string, options types.ResizeOptions) error {
if f.containerExecResizeFunc != nil {
return f.containerExecResizeFunc(id, options)
}
return nil
}

View File

@ -0,0 +1,33 @@
package container
import (
"io"
"io/ioutil"
"strings"
"testing"
"github.com/docker/cli/internal/test"
"gotest.tools/assert"
"gotest.tools/fs"
)
func TestContainerExportOutputToFile(t *testing.T) {
dir := fs.NewDir(t, "export-test")
defer dir.Remove()
cli := test.NewFakeCli(&fakeClient{
containerExportFunc: func(container string) (io.ReadCloser, error) {
return ioutil.NopCloser(strings.NewReader("bar")), nil
},
})
cmd := NewExportCommand(cli)
cmd.SetOutput(ioutil.Discard)
cmd.SetArgs([]string{"-o", dir.Join("foo"), "container"})
assert.NilError(t, cmd.Execute())
expected := fs.Expected(t,
fs.WithFile("foo", "bar", fs.MatchAnyFileMode),
)
assert.Assert(t, fs.Equal(dir.Path(), expected))
}

View File

@ -16,9 +16,9 @@ import (
)
// resizeTtyTo resizes tty to specific height and width
func resizeTtyTo(ctx context.Context, client client.ContainerAPIClient, id string, height, width uint, isExec bool) {
func resizeTtyTo(ctx context.Context, client client.ContainerAPIClient, id string, height, width uint, isExec bool) error {
if height == 0 && width == 0 {
return
return nil
}
options := types.ResizeOptions{
@ -34,19 +34,42 @@ func resizeTtyTo(ctx context.Context, client client.ContainerAPIClient, id strin
}
if err != nil {
logrus.Debugf("Error resize: %s", err)
logrus.Debugf("Error resize: %s\r", err)
}
return err
}
// resizeTty is to resize the tty with cli out's tty size
func resizeTty(ctx context.Context, cli command.Cli, id string, isExec bool) error {
height, width := cli.Out().GetTtySize()
return resizeTtyTo(ctx, cli.Client(), id, height, width, isExec)
}
// initTtySize is to init the tty's size to the same as the window, if there is an error, it will retry 5 times.
func initTtySize(ctx context.Context, cli command.Cli, id string, isExec bool, resizeTtyFunc func(ctx context.Context, cli command.Cli, id string, isExec bool) error) {
rttyFunc := resizeTtyFunc
if rttyFunc == nil {
rttyFunc = resizeTty
}
if err := rttyFunc(ctx, cli, id, isExec); err != nil {
go func() {
var err error
for retry := 0; retry < 5; retry++ {
time.Sleep(10 * time.Millisecond)
if err = rttyFunc(ctx, cli, id, isExec); err == nil {
break
}
}
if err != nil {
fmt.Fprintln(cli.Err(), "failed to resize tty, using default size")
}
}()
}
}
// MonitorTtySize updates the container tty size when the terminal tty changes size
func MonitorTtySize(ctx context.Context, cli command.Cli, id string, isExec bool) error {
resizeTty := func() {
height, width := cli.Out().GetTtySize()
resizeTtyTo(ctx, cli.Client(), id, height, width, isExec)
}
resizeTty()
initTtySize(ctx, cli, id, isExec, resizeTty)
if runtime.GOOS == "windows" {
go func() {
prevH, prevW := cli.Out().GetTtySize()
@ -55,7 +78,7 @@ func MonitorTtySize(ctx context.Context, cli command.Cli, id string, isExec bool
h, w := cli.Out().GetTtySize()
if prevW != w || prevH != h {
resizeTty()
resizeTty(ctx, cli, id, isExec)
}
prevH = h
prevW = w
@ -66,7 +89,7 @@ func MonitorTtySize(ctx context.Context, cli command.Cli, id string, isExec bool
gosignal.Notify(sigchan, signal.SIGWINCH)
go func() {
for range sigchan {
resizeTty()
resizeTty(ctx, cli, id, isExec)
}
}()
}

View File

@ -0,0 +1,30 @@
package container
import (
"context"
"testing"
"time"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/internal/test"
"github.com/docker/docker/api/types"
"github.com/pkg/errors"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestInitTtySizeErrors(t *testing.T) {
expectedError := "failed to resize tty, using default size\n"
fakeContainerExecResizeFunc := func(id string, options types.ResizeOptions) error {
return errors.Errorf("Error response from daemon: no such exec")
}
fakeResizeTtyFunc := func(ctx context.Context, cli command.Cli, id string, isExec bool) error {
height, width := uint(1024), uint(768)
return resizeTtyTo(ctx, cli.Client(), id, height, width, isExec)
}
ctx := context.Background()
cli := test.NewFakeCli(&fakeClient{containerExecResizeFunc: fakeContainerExecResizeFunc})
initTtySize(ctx, cli, "8mm8nn8tt8bb", true, fakeResizeTtyFunc)
time.Sleep(100 * time.Millisecond)
assert.Check(t, is.Equal(expectedError, cli.ErrBuffer().String()))
}

View File

@ -8,7 +8,9 @@ import (
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
"github.com/docker/docker/api/types/filters"
units "github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@ -55,8 +57,24 @@ Are you sure you want to continue?`
Are you sure you want to continue?`
)
// cloneFilter is a temporary workaround that uses existing public APIs from the filters package to clone a filter.
// TODO(tiborvass): remove this once filters.Args.Clone() is added.
func cloneFilter(args filters.Args) (newArgs filters.Args, err error) {
if args.Len() == 0 {
return filters.NewArgs(), nil
}
b, err := args.MarshalJSON()
if err != nil {
return newArgs, err
}
return filters.FromJSON(string(b))
}
func runPrune(dockerCli command.Cli, options pruneOptions) (spaceReclaimed uint64, output string, err error) {
pruneFilters := options.filter.Value()
pruneFilters, err := cloneFilter(options.filter.Value())
if err != nil {
return 0, "", errors.Wrap(err, "could not copy filter in image prune")
}
pruneFilters.Add("dangling", fmt.Sprintf("%v", !options.all))
pruneFilters = command.PruneFilters(dockerCli, pruneFilters)

View File

@ -70,6 +70,14 @@ func TestNewPruneCommandSuccess(t *testing.T) {
}, nil
},
},
{
name: "label-filter",
args: []string{"--force", "--filter", "label=foobar"},
imagesPruneFunc: func(pruneFilter filters.Args) (types.ImagesPruneReport, error) {
assert.Check(t, is.Equal("foobar", pruneFilter.Get("label")[0]))
return types.ImagesPruneReport{}, nil
},
},
{
name: "force-untagged",
args: []string{"--force"},

View File

@ -0,0 +1 @@
Total reclaimed space: 0B

View File

@ -18,6 +18,7 @@ type osArch struct {
// list of valid os/arch values (see "Optional Environment Variables" section
// of https://golang.org/doc/install/source
// Added linux/s390x as we know System z support already exists
// Keep in sync with _docker_manifest_annotate in contrib/completion/bash/docker
var validOSArches = map[osArch]bool{
{os: "darwin", arch: "386"}: true,
{os: "darwin", arch: "amd64"}: true,

View File

@ -45,7 +45,7 @@ func newSecretCreateCommand(dockerCli command.Cli) *cobra.Command {
flags.StringVarP(&options.driver, "driver", "d", "", "Secret driver")
flags.SetAnnotation("driver", "version", []string{"1.31"})
flags.StringVar(&options.templateDriver, "template-driver", "", "Template driver")
flags.SetAnnotation("driver", "version", []string{"1.37"})
flags.SetAnnotation("template-driver", "version", []string{"1.37"})
return cmd
}

View File

@ -302,6 +302,12 @@ func updateService(ctx context.Context, apiClient client.NetworkAPIClient, flags
if task.Resources == nil {
task.Resources = &swarm.ResourceRequirements{}
}
if task.Resources.Limits == nil {
task.Resources.Limits = &swarm.Resources{}
}
if task.Resources.Reservations == nil {
task.Resources.Reservations = &swarm.Resources{}
}
return task.Resources
}

View File

@ -617,6 +617,38 @@ func TestUpdateIsolationValid(t *testing.T) {
// and that values are not updated are not reset to their default value
func TestUpdateLimitsReservations(t *testing.T) {
spec := swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: &swarm.ContainerSpec{},
},
}
// test that updating works if the service did not previously
// have limits set (https://github.com/moby/moby/issues/38363)
flags := newUpdateCommand(nil).Flags()
err := flags.Set(flagLimitCPU, "2")
assert.NilError(t, err)
err = flags.Set(flagLimitMemory, "200M")
assert.NilError(t, err)
err = updateService(context.Background(), nil, flags, &spec)
assert.NilError(t, err)
spec = swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: &swarm.ContainerSpec{},
},
}
// test that updating works if the service did not previously
// have reservations set (https://github.com/moby/moby/issues/38363)
flags = newUpdateCommand(nil).Flags()
err = flags.Set(flagReserveCPU, "2")
assert.NilError(t, err)
err = flags.Set(flagReserveMemory, "200M")
assert.NilError(t, err)
err = updateService(context.Background(), nil, flags, &spec)
assert.NilError(t, err)
spec = swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: &swarm.ContainerSpec{},
Resources: &swarm.ResourceRequirements{
@ -632,8 +664,8 @@ func TestUpdateLimitsReservations(t *testing.T) {
},
}
flags := newUpdateCommand(nil).Flags()
err := flags.Set(flagLimitCPU, "2")
flags = newUpdateCommand(nil).Flags()
err = flags.Set(flagLimitCPU, "2")
assert.NilError(t, err)
err = flags.Set(flagReserveCPU, "2")
assert.NilError(t, err)

View File

@ -10,6 +10,7 @@ import (
"github.com/pkg/errors"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
runtimeutil "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/watch"
@ -240,12 +241,12 @@ func newStackInformer(stacksClient stackListWatch, stackName string) cache.Share
return cache.NewSharedInformer(
&cache.ListWatch{
ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
options.LabelSelector = labels.SelectorForStack(stackName)
options.FieldSelector = fields.OneTermEqualSelector("metadata.name", stackName).String()
return stacksClient.List(options)
},
WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
options.LabelSelector = labels.SelectorForStack(stackName)
options.FieldSelector = fields.OneTermEqualSelector("metadata.name", stackName).String()
return stacksClient.Watch(options)
},
},

View File

@ -34,12 +34,20 @@ func runDialStdio(dockerCli command.Cli) error {
if err != nil {
return errors.Wrap(err, "failed to open the raw stream connection")
}
connHalfCloser, ok := conn.(halfCloser)
if !ok {
defer conn.Close()
var connHalfCloser halfCloser
switch t := conn.(type) {
case halfCloser:
connHalfCloser = t
case halfReadWriteCloser:
connHalfCloser = &nopCloseReader{t}
default:
return errors.New("the raw stream connection does not implement halfCloser")
}
stdin2conn := make(chan error)
conn2stdout := make(chan error)
stdin2conn := make(chan error, 1)
conn2stdout := make(chan error, 1)
go func() {
stdin2conn <- copier(connHalfCloser, &halfReadCloserWrapper{os.Stdin}, "stdin to stream")
}()
@ -90,6 +98,19 @@ type halfCloser interface {
halfWriteCloser
}
type halfReadWriteCloser interface {
io.Reader
halfWriteCloser
}
type nopCloseReader struct {
halfReadWriteCloser
}
func (x *nopCloseReader) CloseRead() error {
return nil
}
type halfReadCloserWrapper struct {
io.ReadCloser
}

View File

@ -73,11 +73,10 @@ func runPrune(dockerCli command.Cli, options pruneOptions) error {
if options.pruneVolumes {
pruneFuncs = append(pruneFuncs, volume.RunPrune)
}
pruneFuncs = append(pruneFuncs, image.RunPrune)
if options.pruneBuildCache {
pruneFuncs = append(pruneFuncs, builder.CachePrune)
}
// FIXME: modify image.RunPrune to not modify options.filter, otherwise this has to be last in the list.
pruneFuncs = append(pruneFuncs, image.RunPrune)
var spaceReclaimed uint64
for _, pruneFn := range pruneFuncs {

View File

@ -16,6 +16,8 @@ var interpolateTypeCastMapping = map[interp.Path]interp.Cast{
servicePath("deploy", "replicas"): toInt,
servicePath("deploy", "update_config", "parallelism"): toInt,
servicePath("deploy", "update_config", "max_failure_ratio"): toFloat,
servicePath("deploy", "rollback_config", "parallelism"): toInt,
servicePath("deploy", "rollback_config", "max_failure_ratio"): toFloat,
servicePath("deploy", "restart_policy", "max_attempts"): toInt,
servicePath("ports", interp.PathMatchList, "target"): toInt,
servicePath("ports", interp.PathMatchList, "published"): toInt,

View File

@ -507,7 +507,7 @@ volumes:
func TestLoadWithInterpolationCastFull(t *testing.T) {
dict, err := ParseYAML([]byte(`
version: "3.4"
version: "3.7"
services:
web:
configs:
@ -524,6 +524,9 @@ services:
update_config:
parallelism: $theint
max_failure_ratio: $thefloat
rollback_config:
parallelism: $theint
max_failure_ratio: $thefloat
restart_policy:
max_attempts: $theint
ports:
@ -574,7 +577,7 @@ networks:
assert.NilError(t, err)
expected := &types.Config{
Filename: "filename.yml",
Version: "3.4",
Version: "3.7",
Services: []types.ServiceConfig{
{
Name: "web",
@ -600,6 +603,10 @@ networks:
Parallelism: uint64Ptr(555),
MaxFailureRatio: 3.14,
},
RollbackConfig: &types.UpdateConfig{
Parallelism: uint64Ptr(555),
MaxFailureRatio: 3.14,
},
RestartPolicy: &types.RestartPolicy{
MaxAttempts: uint64Ptr(555),
},

View File

@ -150,9 +150,8 @@ func TestOldValidAuth(t *testing.T) {
// defaultIndexserver is https://index.docker.io/v1/
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
// Now save it and make sure it shows up in new form
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
@ -213,9 +212,8 @@ func TestOldJSON(t *testing.T) {
assert.NilError(t, err)
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
// Now save it and make sure it shows up in new form
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
@ -249,9 +247,8 @@ func TestNewJSON(t *testing.T) {
assert.NilError(t, err)
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
// Now save it and make sure it shows up in new form
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
@ -284,9 +281,8 @@ func TestNewJSONNoEmail(t *testing.T) {
assert.NilError(t, err)
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
// Now save it and make sure it shows up in new form
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
@ -431,10 +427,8 @@ func TestJSONReaderNoFile(t *testing.T) {
assert.NilError(t, err)
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
}
func TestOldJSONReaderNoFile(t *testing.T) {
@ -444,9 +438,8 @@ func TestOldJSONReaderNoFile(t *testing.T) {
assert.NilError(t, err)
ac := config.AuthConfigs["https://index.docker.io/v1/"]
if ac.Username != "joejoe" || ac.Password != "hello" {
t.Fatalf("Missing data from parsing:\n%q", config)
}
assert.Equal(t, ac.Username, "joejoe")
assert.Equal(t, ac.Password, "hello")
}
func TestJSONWithPsFormatNoFile(t *testing.T) {

View File

@ -1,12 +1,14 @@
#!/usr/bin/env bash
# shellcheck disable=SC2016,SC2119,SC2155
# shellcheck disable=SC2016,SC2119,SC2155,SC2206,SC2207
#
# Shellcheck ignore list:
# - SC2016: Expressions don't expand in single quotes, use double quotes for that.
# - SC2119: Use foo "$@" if function's $1 should mean script's $1.
# - SC2155: Declare and assign separately to avoid masking return values.
#
# You can find more details for each warning at the following page:
# - SC2206: Quote to prevent word splitting, or split robustly with mapfile or read -a.
# - SC2207: Prefer mapfile or read -a to split command output (or quote to avoid splitting).
#
# You can find more details for each warning at the following page:
# https://github.com/koalaman/shellcheck/wiki/<SCXXXX>
#
# bash completion file for core docker commands
@ -563,23 +565,39 @@ __docker_append_to_completions() {
COMPREPLY=( ${COMPREPLY[@]/%/"$1"} )
}
# __docker_daemon_is_experimental tests whether the currently configured Docker
# daemon runs in experimental mode. If so, the function exits with 0 (true).
# Otherwise, or if the result cannot be determined, the exit value is 1 (false).
__docker_daemon_is_experimental() {
[ "$(__docker_q version -f '{{.Server.Experimental}}')" = "true" ]
# __docker_fetch_info fetches information about the configured Docker server and updates
# several variables with the results.
# The result is cached for the duration of one invocation of bash completion.
__docker_fetch_info() {
if [ -z "$info_fetched" ] ; then
read -r client_experimental server_experimental server_os < <(__docker_q version -f '{{.Client.Experimental}} {{.Server.Experimental}} {{.Server.Os}}')
info_fetched=true
fi
}
# __docker_daemon_os_is tests whether the currently configured Docker daemon runs
# __docker_client_is_experimental tests whether the Docker cli is configured to support
# experimental features. If so, the function exits with 0 (true).
# Otherwise, or if the result cannot be determined, the exit value is 1 (false).
__docker_client_is_experimental() {
__docker_fetch_info
[ "$client_experimental" = "true" ]
}
# __docker_server_is_experimental tests whether the currently configured Docker
# server runs in experimental mode. If so, the function exits with 0 (true).
# Otherwise, or if the result cannot be determined, the exit value is 1 (false).
__docker_server_is_experimental() {
__docker_fetch_info
[ "$server_experimental" = "true" ]
}
# __docker_server_os_is tests whether the currently configured Docker server runs
# on the operating system passed in as the first argument.
# It does so by querying the daemon for its OS. The result is cached for the duration
# of one invocation of bash completion so that this function can be used to test for
# several different operating systems without additional costs.
# Known operating systems: linux, windows.
__docker_daemon_os_is() {
__docker_server_os_is() {
local expected_os="$1"
local actual_os=${daemon_os=$(__docker_q version -f '{{.Server.Os}}')}
[ "$actual_os" = "$expected_os" ]
__docker_fetch_info
[ "$server_os" = "$expected_os" ]
}
# __docker_stack_orchestrator_is tests whether the client is configured to use
@ -865,6 +883,7 @@ __docker_complete_log_drivers() {
gelf
journald
json-file
local
logentries
none
splunk
@ -888,7 +907,8 @@ __docker_complete_log_options() {
local gcplogs_options="$common_options1 $common_options2 gcp-log-cmd gcp-meta-id gcp-meta-name gcp-meta-zone gcp-project"
local gelf_options="$common_options1 $common_options2 gelf-address gelf-compression-level gelf-compression-type gelf-tcp-max-reconnect gelf-tcp-reconnect-delay tag"
local journald_options="$common_options1 $common_options2 tag"
local json_file_options="$common_options1 $common_options2 max-file max-size"
local json_file_options="$common_options1 $common_options2 compress max-file max-size"
local local_options="$common_options1 compress max-file max-size"
local logentries_options="$common_options1 $common_options2 line-only logentries-token tag"
local splunk_options="$common_options1 $common_options2 splunk-caname splunk-capath splunk-format splunk-gzip splunk-gzip-level splunk-index splunk-insecureskipverify splunk-source splunk-sourcetype splunk-token splunk-url splunk-verify-connection tag"
local syslog_options="$common_options1 $common_options2 syslog-address syslog-facility syslog-format syslog-tls-ca-cert syslog-tls-cert syslog-tls-key syslog-tls-skip-verify tag"
@ -917,6 +937,9 @@ __docker_complete_log_options() {
json-file)
COMPREPLY=( $( compgen -W "$json_file_options" -S = -- "$cur" ) )
;;
local)
COMPREPLY=( $( compgen -W "$local_options" -S = -- "$cur" ) )
;;
logentries)
COMPREPLY=( $( compgen -W "$logentries_options" -S = -- "$cur" ) )
;;
@ -946,7 +969,7 @@ __docker_complete_log_driver_options() {
__docker_nospace
return
;;
fluentd-async-connect)
compress|fluentd-async-connect)
COMPREPLY=( $( compgen -W "false true" -- "${cur##*=}" ) )
return
;;
@ -1128,7 +1151,8 @@ _docker_docker() {
*)
local counter=$( __docker_pos_first_nonflag "$(__docker_to_extglob "$global_options_with_args")" )
if [ "$cword" -eq "$counter" ]; then
__docker_daemon_is_experimental && commands+=(${experimental_commands[*]})
__docker_client_is_experimental && commands+=(${experimental_client_commands[*]})
__docker_server_is_experimental && commands+=(${experimental_server_commands[*]})
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
fi
;;
@ -1837,14 +1861,14 @@ _docker_container_run_and_create() {
--volume -v
--workdir -w
"
__docker_daemon_os_is windows && options_with_args+="
__docker_server_os_is windows && options_with_args+="
--cpu-count
--cpu-percent
--io-maxbandwidth
--io-maxiops
--isolation
"
__docker_daemon_is_experimental && options_with_args+="
__docker_server_is_experimental && options_with_args+="
--platform
"
@ -1960,7 +1984,7 @@ _docker_container_run_and_create() {
return
;;
--isolation)
if __docker_daemon_os_is windows ; then
if __docker_server_os_is windows ; then
__docker_complete_isolation
return
fi
@ -2071,12 +2095,12 @@ _docker_container_start() {
__docker_complete_detach_keys && return
case "$prev" in
--checkpoint)
if __docker_daemon_is_experimental ; then
if __docker_server_is_experimental ; then
return
fi
;;
--checkpoint-dir)
if __docker_daemon_is_experimental ; then
if __docker_server_is_experimental ; then
_filedir -d
return
fi
@ -2086,7 +2110,7 @@ _docker_container_start() {
case "$cur" in
-*)
local options="--attach -a --detach-keys --help --interactive -i"
__docker_daemon_is_experimental && options+=" --checkpoint --checkpoint-dir"
__docker_server_is_experimental && options+=" --checkpoint --checkpoint-dir"
COMPREPLY=( $( compgen -W "$options" -- "$cur" ) )
;;
*)
@ -2449,7 +2473,7 @@ _docker_daemon() {
}
_docker_deploy() {
__docker_daemon_is_experimental && _docker_stack_deploy
__docker_server_is_experimental && _docker_stack_deploy
}
_docker_diff() {
@ -2535,7 +2559,7 @@ _docker_image_build() {
--target
--ulimit
"
__docker_daemon_os_is windows && options_with_args+="
__docker_server_os_is windows && options_with_args+="
--isolation
"
@ -2549,7 +2573,7 @@ _docker_image_build() {
--quiet -q
--rm
"
if __docker_daemon_is_experimental ; then
if __docker_server_is_experimental ; then
options_with_args+="
--platform
"
@ -2584,7 +2608,7 @@ _docker_image_build() {
return
;;
--isolation)
if __docker_daemon_os_is windows ; then
if __docker_server_os_is windows ; then
__docker_complete_isolation
return
fi
@ -2664,14 +2688,16 @@ _docker_image_images() {
_docker_image_import() {
case "$prev" in
--change|-c|--message|-m)
--change|-c|--message|-m|--platform)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--change -c --help --message -m" -- "$cur" ) )
local options="--change -c --help --message -m"
__docker_server_is_experimental && options+=" --platform"
COMPREPLY=( $( compgen -W "$options" -- "$cur" ) )
;;
*)
local counter=$(__docker_pos_first_nonflag '--change|-c|--message|-m')
@ -2779,7 +2805,7 @@ _docker_image_pull() {
case "$cur" in
-*)
local options="--all-tags -a --disable-content-trust=false --help"
__docker_daemon_is_experimental && options+=" --platform"
__docker_server_is_experimental && options+=" --platform"
COMPREPLY=( $( compgen -W "$options" -- "$cur" ) )
;;
@ -3395,7 +3421,6 @@ _docker_service_update_and_create() {
local options_with_args="
--endpoint-mode
--entrypoint
--force
--health-cmd
--health-interval
--health-retries
@ -3431,7 +3456,7 @@ _docker_service_update_and_create() {
--user -u
--workdir -w
"
__docker_daemon_os_is windows && options_with_args+="
__docker_server_os_is windows && options_with_args+="
--credential-spec
"
@ -3520,6 +3545,10 @@ _docker_service_update_and_create() {
--secret-rm
"
boolean_options="$boolean_options
--force
"
case "$prev" in
--env-rm)
COMPREPLY=( $( compgen -e -- "$cur" ) )
@ -3817,6 +3846,109 @@ _docker_swarm_update() {
esac
}
_docker_manifest() {
local subcommands="
annotate
create
inspect
push
"
__docker_subcommands "$subcommands" && return
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "$subcommands" -- "$cur" ) )
;;
esac
}
_docker_manifest_annotate() {
case "$prev" in
--arch)
COMPREPLY=( $( compgen -W "
386
amd64
arm
arm64
mips64
mips64le
ppc64le
s390x" -- "$cur" ) )
return
;;
--os)
COMPREPLY=( $( compgen -W "
darwin
dragonfly
freebsd
linux
netbsd
openbsd
plan9
solaris
windows" -- "$cur" ) )
return
;;
--os-features|--variant)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--arch --help --os --os-features --variant" -- "$cur" ) )
;;
*)
local counter=$( __docker_pos_first_nonflag "--arch|--os|--os-features|--variant" )
if [ "$cword" -eq "$counter" ] || [ "$cword" -eq "$((counter + 1))" ]; then
__docker_complete_images --force-tag --id
fi
;;
esac
}
_docker_manifest_create() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--amend -a --help --insecure" -- "$cur" ) )
;;
*)
__docker_complete_images --force-tag --id
;;
esac
}
_docker_manifest_inspect() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --insecure --verbose -v" -- "$cur" ) )
;;
*)
local counter=$( __docker_pos_first_nonflag )
if [ "$cword" -eq "$counter" ] || [ "$cword" -eq "$((counter + 1))" ]; then
__docker_complete_images --force-tag --id
fi
;;
esac
}
_docker_manifest_push() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --insecure --purge -p" -- "$cur" ) )
;;
*)
local counter=$( __docker_pos_first_nonflag )
if [ "$cword" -eq "$counter" ]; then
__docker_complete_images --force-tag --id
fi
;;
esac
}
_docker_node() {
local subcommands="
demote
@ -4451,7 +4583,7 @@ _docker_stack_deploy() {
case "$cur" in
-*)
local options="--compose-file -c --help --orchestrator"
__docker_daemon_is_experimental && __docker_stack_orchestrator_is swarm && options+=" --bundle-file"
__docker_server_is_experimental && __docker_stack_orchestrator_is swarm && options+=" --bundle-file"
__docker_stack_orchestrator_is kubernetes && options+=" --kubeconfig --namespace"
__docker_stack_orchestrator_is swarm && options+=" --prune --resolve-image --with-registry-auth"
COMPREPLY=( $( compgen -W "$options" -- "$cur" ) )
@ -5074,7 +5206,11 @@ _docker() {
wait
)
local experimental_commands=(
local experimental_client_commands=(
manifest
)
local experimental_server_commands=(
checkpoint
deploy
)
@ -5098,10 +5234,12 @@ _docker() {
--tlskey
"
local host config daemon_os
# variables to cache server info, populated on demand for performance reasons
local info_fetched server_experimental server_os
# variables to cache client info, populated on demand for performance reasons
local stack_orchestrator_is_kubernetes stack_orchestrator_is_swarm
local client_experimental stack_orchestrator_is_kubernetes stack_orchestrator_is_swarm
local host config
COMPREPLY=()
local cur prev words cword


@ -17,24 +17,29 @@ ENVVARS = -e VERSION=$(VERSION) -e GITCOMMIT -e PLATFORM
# build docker image (dockerfiles/Dockerfile.build)
.PHONY: build_docker_image
build_docker_image:
docker build ${DOCKER_BUILD_ARGS} -t $(DEV_DOCKER_IMAGE_NAME) -f ./dockerfiles/Dockerfile.dev .
# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
cat ./dockerfiles/Dockerfile.dev | docker build ${DOCKER_BUILD_ARGS} -t $(DEV_DOCKER_IMAGE_NAME) -
# build docker image having the linting tools (dockerfiles/Dockerfile.lint)
.PHONY: build_linter_image
build_linter_image:
docker build ${DOCKER_BUILD_ARGS} -t $(LINTER_IMAGE_NAME) -f ./dockerfiles/Dockerfile.lint .
# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
cat ./dockerfiles/Dockerfile.lint | docker build ${DOCKER_BUILD_ARGS} -t $(LINTER_IMAGE_NAME) -
.PHONY: build_cross_image
build_cross_image:
docker build ${DOCKER_BUILD_ARGS} -t $(CROSS_IMAGE_NAME) -f ./dockerfiles/Dockerfile.cross .
# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
cat ./dockerfiles/Dockerfile.cross | docker build ${DOCKER_BUILD_ARGS} -t $(CROSS_IMAGE_NAME) -
.PHONY: build_shell_validate_image
build_shell_validate_image:
docker build -t $(VALIDATE_IMAGE_NAME) -f ./dockerfiles/Dockerfile.shellcheck .
# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
cat ./dockerfiles/Dockerfile.shellcheck | docker build -t $(VALIDATE_IMAGE_NAME) -
.PHONY: build_binary_native_image
build_binary_native_image:
docker build -t $(BINARY_NATIVE_IMAGE_NAME) -f ./dockerfiles/Dockerfile.binary-native .
# build dockerfile from stdin so that we don't send the build-context; source is bind-mounted in the development environment
cat ./dockerfiles/Dockerfile.binary-native | docker build -t $(BINARY_NATIVE_IMAGE_NAME) -
.PHONY: build_e2e_image
build_e2e_image:
@ -105,7 +110,7 @@ shellcheck: build_shell_validate_image ## run shellcheck validation
docker run -ti --rm $(ENVVARS) $(MOUNTS) $(VALIDATE_IMAGE_NAME) make shellcheck
.PHONY: test-e2e ## run e2e tests
test-e2e: test-e2e-non-experimental test-e2e-experimental
test-e2e: test-e2e-non-experimental test-e2e-experimental test-e2e-connhelper-ssh
.PHONY: test-e2e-experimental
test-e2e-experimental: build_e2e_image
@ -115,6 +120,10 @@ test-e2e-experimental: build_e2e_image
test-e2e-non-experimental: build_e2e_image
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock $(E2E_IMAGE_NAME)
.PHONY: test-e2e-connhelper-ssh
test-e2e-connhelper-ssh: build_e2e_image
docker run -e TEST_CONNHELPER=ssh -e DOCKERD_EXPERIMENTAL=1 --rm -v /var/run/docker.sock:/var/run/docker.sock $(E2E_IMAGE_NAME)
.PHONY: help
help: ## print this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)


@ -1,4 +1,4 @@
FROM golang:1.10.4-alpine
FROM golang:1.10.8-alpine
RUN apk add -U git bash coreutils gcc musl-dev


@ -1,3 +1,4 @@
FROM dockercore/golang-cross:1.10.4@sha256:55c7b933ac944f4922b673b4d4340d1a0404f3c324bd0b3f13a4326c427b1f2a
FROM dockercore/golang-cross:1.10.8@sha256:a93210f55a8137b4aa4b9f033ac7a80b66ab6337e98e7afb62abe93b4ad73cad
ENV DISABLE_WARN_OUTSIDE_CONTAINER=1
WORKDIR /go/src/github.com/docker/cli
COPY . .


@ -1,5 +1,5 @@
FROM golang:1.10.4-alpine
FROM golang:1.10.8-alpine
RUN apk add -U git make bash coreutils ca-certificates curl
@ -22,3 +22,4 @@ ENV CGO_ENABLED=0 \
DISABLE_WARN_OUTSIDE_CONTAINER=1
WORKDIR /go/src/github.com/docker/cli
CMD sh
COPY . .


@ -1,4 +1,4 @@
ARG GO_VERSION=1.10.4
ARG GO_VERSION=1.10.8
FROM docker/containerd-shim-process:a4d1531 AS containerd-shim-process
@ -13,6 +13,7 @@ RUN apt-get update && apt-get install -y \
libapparmor-dev \
libseccomp-dev \
iptables \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
ARG COMPOSE_VERSION=1.21.2


@ -1,4 +1,4 @@
FROM golang:1.10.4-alpine
FROM golang:1.10.8-alpine
RUN apk add -U git
@ -15,3 +15,4 @@ ENV CGO_ENABLED=0
ENV DISABLE_WARN_OUTSIDE_CONTAINER=1
ENTRYPOINT ["/usr/local/bin/gometalinter"]
CMD ["--config=gometalinter.json", "./..."]
COPY . .


@ -1,9 +1,5 @@
FROM debian:stretch-slim
RUN apt-get update && \
apt-get -y install make shellcheck && \
apt-get clean
FROM koalaman/shellcheck-alpine:v0.6.0
RUN apk add --no-cache bash make
WORKDIR /go/src/github.com/docker/cli
ENV DISABLE_WARN_OUTSIDE_CONTAINER=1
CMD bash
COPY . .


@ -121,6 +121,28 @@ registries.
When you're done with your build, you're ready to look into [*Pushing a
repository to its registry*](https://docs.docker.com/engine/tutorials/dockerrepos/#/contributing-to-docker-hub).
## BuildKit
Starting with version 18.09, Docker supports a new backend for executing your
builds that is provided by the [moby/buildkit](https://github.com/moby/buildkit)
project. The BuildKit backend provides many benefits compared to the old
implementation. For example, BuildKit can:
* Detect and skip executing unused build stages
* Parallelize building independent build stages
* Incrementally transfer only the changed files in your build context between builds
* Detect and skip transferring unused files in your build context
* Use external Dockerfile implementations with many new features
* Avoid side effects with the rest of the API (intermediate images and containers)
* Prioritize your build cache for automatic pruning
To use the BuildKit backend, you need to set an environment variable
`DOCKER_BUILDKIT=1` on the CLI before invoking `docker build`.
To learn about the experimental Dockerfile syntax available to BuildKit-based
builds, [refer to the documentation in the BuildKit repository](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md).
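As a minimal illustration of enabling the backend for a single build (assuming a `Dockerfile` in the current directory; the image name is arbitrary):
```bash
# Enable BuildKit for this invocation only and build the current context.
DOCKER_BUILDKIT=1 docker build -t buildkit-demo .
```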
## Format
Here is the format of the `Dockerfile`:
@ -224,10 +246,64 @@ following lines are all treated identically:
# dIrEcTiVe=value
```
The following parser directive is supported:
The following parser directives are supported:
* `syntax`
* `escape`
## syntax
# syntax=[remote image reference]
For example:
# syntax=docker/dockerfile
# syntax=docker/dockerfile:1.0
# syntax=docker.io/docker/dockerfile:1
# syntax=docker/dockerfile:1.0.0-experimental
# syntax=example.com/user/repo:tag@sha256:abcdef...
This feature is only enabled if the [BuildKit](#buildkit) backend is used.
The syntax directive defines the location of the Dockerfile builder that is used for
building the current Dockerfile. The BuildKit backend lets you seamlessly use
external implementations of builders that are distributed as Docker images and
execute inside a container sandbox environment.
A custom Dockerfile implementation allows you to:
- Automatically get bugfixes without updating the daemon
- Make sure all users are using the same implementation to build your Dockerfile
- Use the latest features without updating the daemon
- Try out new experimental or third-party features
### Official releases
Docker distributes official versions of the images that can be used for building
Dockerfiles under the `docker/dockerfile` repository on Docker Hub. There are two
channels where new images are released: stable and experimental.
The stable channel follows semantic versioning. For example:
- docker/dockerfile:1.0.0 - only allow immutable version 1.0.0
- docker/dockerfile:1.0 - allow versions 1.0.*
- docker/dockerfile:1 - allow versions 1.*.*
- docker/dockerfile:latest - latest release on stable channel
The experimental channel uses incremental versioning with the major and minor
components from the stable channel at the time of the release. For example:
- docker/dockerfile:1.0.1-experimental - only allow immutable version 1.0.1-experimental
- docker/dockerfile:1.0-experimental - latest experimental releases after 1.0
- docker/dockerfile:experimental - latest release on experimental channel
You should choose a channel that best fits your needs. If you only want
bugfixes, you should use `docker/dockerfile:1.0`. If you want to benefit from
experimental features, you should use the experimental channel. If you are using
the experimental channel, newer releases may not be backwards compatible, so it
is recommended to use an immutable full version variant.
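As a sketch of pinning a stable frontend (the tag follows the stable channel described above; the trivial Dockerfile and image name are only for illustration):
```bash
# Pin the builder frontend in the Dockerfile, then build with BuildKit enabled.
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1.0
FROM alpine
RUN echo "built with a pinned stable frontend"
EOF
DOCKER_BUILDKIT=1 docker build -t pinned-frontend-demo .
```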
For master builds and nightly feature releases refer to the description in [the source repository](https://github.com/moby/buildkit/blob/master/README.md).
## escape
# escape=\ (backslash)
@ -1339,6 +1415,10 @@ The table below shows what command is executed for different `ENTRYPOINT` / `CMD
| **CMD ["p1_cmd", "p2_cmd"]** | p1_cmd p2_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd |
| **CMD exec_cmd p1_cmd** | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |
> **Note**: If `CMD` is defined from the base image, setting `ENTRYPOINT` will
> reset `CMD` to an empty value. In this scenario, `CMD` must be defined in the
> current image to have a value.
## VOLUME
VOLUME ["/data"]
@ -1623,6 +1703,38 @@ RUN echo "Hello World"
When building this Dockerfile, the `HTTP_PROXY` is preserved in the
`docker history`, and changing its value invalidates the build cache.
### Automatic platform ARGs in the global scope
This feature is only available when using the [BuildKit](#buildkit) backend.
Docker predefines a set of `ARG` variables with information on the platform of
the node performing the build (build platform) and on the platform of the
resulting image (target platform). The target platform can be specified with
the `--platform` flag on `docker build`.
The following `ARG` variables are set automatically:
* `TARGETPLATFORM` - platform of the build result. E.g. `linux/amd64`, `linux/arm/v7`, `windows/amd64`.
* `TARGETOS` - OS component of TARGETPLATFORM
* `TARGETARCH` - architecture component of TARGETPLATFORM
* `TARGETVARIANT` - variant component of TARGETPLATFORM
* `BUILDPLATFORM` - platform of the node performing the build.
* `BUILDOS` - OS component of BUILDPLATFORM
* `BUILDARCH` - architecture component of BUILDPLATFORM
* `BUILDVARIANT` - variant component of BUILDPLATFORM
These arguments are defined in the global scope, so they are not automatically
available inside build stages or for your `RUN` commands. To expose one of
these arguments inside a build stage, redefine it without a value.
For example:
```Dockerfile
FROM alpine
ARG TARGETPLATFORM
RUN echo "I'm building for $TARGETPLATFORM"
```
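The same Dockerfile could then be built for a different target platform, for example (a sketch that assumes a BuildKit-enabled daemon able to handle the requested platform, e.g. via binfmt emulation, and experimental mode on Docker 18.09; the tag is arbitrary):
```bash
# Cross-build the example above for 64-bit ARM instead of the host platform.
DOCKER_BUILDKIT=1 docker build --platform linux/arm64 -t platform-demo .
```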
### Impact on build caching
`ARG` variables are not persisted into the built image as `ENV` variables are.
@ -1931,6 +2043,14 @@ required such as `zsh`, `csh`, `tcsh` and others.
The `SHELL` feature was added in Docker 1.12.
## External implementation features
This feature is only available when using the [BuildKit](#buildkit) backend.
Docker build supports experimental features such as cache mounts, build secrets, and
SSH forwarding. These are enabled by using an external implementation of the builder
with a syntax directive. To learn about these features, [refer to the documentation in the BuildKit repository](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md).
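As a minimal sketch of one such feature, a cache mount keeps a package cache around between builds (assumes BuildKit is enabled and the experimental frontend is reachable; the cache path and package are illustrative):
```bash
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:experimental
FROM alpine
# Reuse the apk cache across builds instead of downloading packages every time.
RUN --mount=type=cache,target=/var/cache/apk apk add --update git
EOF
DOCKER_BUILDKIT=1 docker build -t cache-mount-demo .
```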
## Dockerfile examples
Below you can see some examples of Dockerfile syntax. If you're interested in


@ -44,8 +44,8 @@ from different sessions on the Docker host.
To stop a container, use `CTRL-c`. This key sequence sends `SIGKILL` to the
container. If `--sig-proxy` is true (the default), `CTRL-c` sends a `SIGINT` to
the container. You can detach from a container and leave it running using the
`CTRL-p CTRL-q` key sequence.
the container. If the container was run with `-i` and `-t`, you can detach from
a container and leave it running using the `CTRL-p CTRL-q` key sequence.
> **Note:**
> A process running as PID 1 inside a container is treated specially by


@ -504,13 +504,13 @@ stable.
Squashing layers can be beneficial if your Dockerfile produces multiple layers
modifying the same files, for example, file that are created in one step, and
modifying the same files, for example, files that are created in one step, and
removed in another step. For other use-cases, squashing images may actually have
a negative impact on performance; when pulling an image consisting of multiple
layers, the layers can be pulled in parallel and shared between
images (saving space).
For most use cases, multi-stage are a better alternative, as they give more
For most use cases, multi-stage builds are a better alternative, as they give more
fine-grained control over your build, and can take advantage of future
optimizations in the builder. Refer to the [use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)
section in the user guide for more information.
@ -531,7 +531,7 @@ The `--squash` option has a number of known limitations:
downloading a single layer cannot be parallelized.
- When attempting to squash an image that does not make changes to the
filesystem (for example, the Dockerfile only contains `ENV` instructions),
the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)
the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)).
#### Prerequisites


@ -303,7 +303,7 @@ the same file can share a single page cache entry (or entries), it makes
> **Note**: As promising as `overlay` is, the feature is still quite young and
> should not be used in production. Most notably, using `overlay` can cause
> excessive inode consumption (especially as the number of images grows), as
> well as > being incompatible with the use of RPMs.
> well as being incompatible with the use of RPMs.
The `overlay2` driver uses the same fast union filesystem but takes advantage of
[additional features](https://lkml.org/lkml/2015/2/11/106) added in Linux
@ -1231,10 +1231,14 @@ The `--metrics-addr` option takes a tcp address to serve the metrics API.
This feature is still experimental; therefore, the daemon must be running in experimental
mode for this feature to work.
To serve the metrics API on localhost:1337 you would specify `--metrics-addr 127.0.0.1:1337`
allowing you to make requests on the API at `127.0.0.1:1337/metrics` to receive metrics in the
To serve the metrics API on `localhost:9323` you would specify `--metrics-addr 127.0.0.1:9323`,
allowing you to make requests on the API at `127.0.0.1:9323/metrics` to receive metrics in the
[prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/) format.
Port `9323` is the [default port associated with Docker
metrics](https://github.com/prometheus/prometheus/wiki/Default-port-allocations)
to avoid collisions with other prometheus exporters and services.
If you are running a prometheus server, you can add this address to your scrape configs
to have prometheus collect metrics on Docker. For more information on prometheus,
see the [prometheus website](https://prometheus.io/).
@ -1243,7 +1247,7 @@ on prometheus you can view the website [here](https://prometheus.io/).
scrape_configs:
- job_name: 'docker'
static_configs:
- targets: ['127.0.0.1:1337']
- targets: ['127.0.0.1:9323']
```
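As a quick sanity check (assuming the daemon was started in experimental mode with `--metrics-addr 127.0.0.1:9323`), the endpoint can be queried directly:
```bash
# Fetch the first few metrics in the prometheus text exposition format.
curl -s http://127.0.0.1:9323/metrics | head -n 20
```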
Please note that this feature is still marked as experimental as metrics and metric
@ -1305,8 +1309,13 @@ This is a full example of the allowed configuration options on Linux:
"storage-opts": [],
"labels": [],
"live-restore": true,
"log-driver": "",
"log-opts": {},
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file":"5",
"labels": "somelabel",
"env": "os,customer"
},
"mtu": 0,
"pidfile": "",
"cluster-store": "",


@ -177,7 +177,7 @@ This is similar to tagging an image and pushing it to a foreign registry.
After you have created your local copy of the manifest list, you may optionally
`annotate` it. Annotations allowed are the architecture and operating system (overriding the image's current values),
os features, and an archictecure variant.
os features, and an architecture variant.
Finally, you need to `push` your manifest list to the desired registry. Below are descriptions of these three commands,
and an example putting them all together.
@ -270,5 +270,5 @@ $ docker manifest create --insecure myprivateregistry.mycompany.com/repo/image:1
$ docker manifest push --insecure myprivateregistry.mycompany.com/repo/image:tag
```
Note that the `--insecure` flag is not required to annotate a manifest list, since annotations are to a locally-stored copy of a manifest list. You may also skip the `--insecure` flag if you are performaing a `docker manifest inspect` on a locally-stored manifest list. Be sure to keep in mind that locally-stored manifest lists are never used by the engine on a `docker pull`.
Note that the `--insecure` flag is not required to annotate a manifest list, since annotations are to a locally-stored copy of a manifest list. You may also skip the `--insecure` flag if you are performing a `docker manifest inspect` on a locally-stored manifest list. Be sure to keep in mind that locally-stored manifest lists are never used by the engine on a `docker pull`.
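For example, reusing the hypothetical registry name from the commands above, a locally-stored manifest list can be inspected without the flag:
```bash
# Inspecting a locally stored manifest list does not require --insecure.
docker manifest inspect myprivateregistry.mycompany.com/repo/image:tag
```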


@ -116,6 +116,7 @@ Valid placeholders for the Go template are listed below:
Placeholder | Description
----------------|------------------------------------------------------------------------------------------
`.ID` | Task ID
`.Name` | Task name
`.Image` | Task image
`.Node` | Node ID


@ -26,6 +26,17 @@ Options:
--no-prune Do not delete untagged parents
```
## Description
Removes (and un-tags) one or more images from the host node. If an image has
multiple tags, using this command with the tag as a parameter only removes the
tag. If the tag is the only one for the image, both the image and the tag are
removed.
This does not remove images from a registry. You cannot remove an image of a
running container unless you use the `-f` option. To see all images on a host
use the [`docker image ls`](images.md) command.
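A minimal illustration of the tag behaviour (names are arbitrary and assume no container is using the image; fuller examples follow below):
```bash
docker tag busybox:latest busybox:extra-tag
docker rmi busybox:extra-tag   # removes only this tag
docker rmi busybox:latest      # removes the image itself once no other tag references it
```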
## Examples
You can remove an image using its short or long ID, its tag, or its digest. If


@ -717,15 +717,15 @@ $ docker run -d --isolation default busybox top
On Windows, `--isolation` can take one of these values:
| Value | Description |
|:----------|:-------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
| Value | Description |
|:----------|:------------------------------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems older than Windows 10 1809). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
The default isolation on Windows server operating systems is `process`. The default (and only supported)
The default isolation on Windows server operating systems is `process`. The default
isolation on Windows client operating systems is `hyperv`. An attempt to start a container on a client
operating system with `--isolation process` will fail.
operating system older than Windows 10 1809 with `--isolation process` will fail.
On Windows server, assuming the default configuration, these commands are equivalent
and result in `process` isolation:


@ -219,7 +219,7 @@ tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/).
### Set environment variables (-e, --env)
This sets an environmental variable for all tasks in a service. For example:
This sets an environment variable for all tasks in a service. For example:
```bash
$ docker service create \


@ -171,5 +171,5 @@ On Windows:
"table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used, in
> stead of `{{.ID}}\t{{.Name}}`.
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used,
> instead of `{{.ID}}\t{{.Name}}`.


@ -22,6 +22,7 @@ func generateCliYaml(opts *options) error {
dockerCli := command.NewDockerCli(stdin, stdout, stderr, false, nil)
cmd := &cobra.Command{Use: "docker"}
commands.AddCommands(cmd, dockerCli)
disableFlagsInUseLine(cmd)
source := filepath.Join(opts.source, descriptionSourcePath)
if err := loadLongDescription(cmd, source); err != nil {
return err
@ -31,6 +32,23 @@ func generateCliYaml(opts *options) error {
return GenYamlTree(cmd, opts.target)
}
func disableFlagsInUseLine(cmd *cobra.Command) {
visitAll(cmd, func(ccmd *cobra.Command) {
// do not add a `[flags]` to the end of the usage line.
ccmd.DisableFlagsInUseLine = true
})
}
// visitAll will traverse all commands from the root.
// This is different from the VisitAll of cobra.Command where only parents
// are checked.
func visitAll(root *cobra.Command, fn func(*cobra.Command)) {
for _, cmd := range root.Commands() {
visitAll(cmd, fn)
}
fn(root)
}
func loadLongDescription(cmd *cobra.Command, path ...string) error {
for _, cmd := range cmd.Commands() {
if cmd.Name() == "" {


@ -0,0 +1,9 @@
version: '2.1'
services:
engine:
build:
context: ./testdata
dockerfile: Dockerfile.connhelper-ssh
environment:
- TEST_CONNHELPER_SSH_ID_RSA_PUB


@ -106,7 +106,7 @@ func ensureBasicPluginBin() (string, error) {
}
installPath := filepath.Join(os.Getenv("GOPATH"), "bin", name)
cmd := exec.Command(goBin, "build", "-o", installPath, "./basic")
cmd.Env = append(cmd.Env, "CGO_ENABLED=0")
cmd.Env = append(os.Environ(), "CGO_ENABLED=0")
if out, err := cmd.CombinedOutput(); err != nil {
return "", errors.Wrapf(err, "error building basic plugin bin: %s", string(out))
}

e2e/testdata/Dockerfile.connhelper-ssh vendored Normal file

@ -0,0 +1,12 @@
FROM docker:test-dind
RUN apk --no-cache add shadow openssh-server && \
groupadd -f docker && \
useradd --create-home --shell /bin/sh --password $(head -c32 /dev/urandom | base64) penguin && \
usermod -aG docker penguin && \
ssh-keygen -A
# workaround: ssh session excludes /usr/local/bin from $PATH
RUN ln -s /usr/local/bin/docker /usr/bin/docker
COPY ./connhelper-ssh/entrypoint.sh /
EXPOSE 22
ENTRYPOINT ["/entrypoint.sh"]
# usage: docker run --privileged -e TEST_CONNHELPER_SSH_ID_RSA_PUB=$(cat ~/.ssh/id_rsa.pub) -p 22 $THIS_IMAGE

e2e/testdata/connhelper-ssh/entrypoint.sh vendored Executable file

@ -0,0 +1,8 @@
#!/bin/sh
set -ex
mkdir -m 0700 -p /home/penguin/.ssh
echo ${TEST_CONNHELPER_SSH_ID_RSA_PUB} > /home/penguin/.ssh/authorized_keys
chmod 0600 /home/penguin/.ssh/authorized_keys
chown -R penguin:penguin /home/penguin
/usr/sbin/sshd -E /var/log/sshd.log
exec dockerd-entrypoint.sh $@


@ -1,15 +1,13 @@
package environment
import (
"context"
"os"
"strings"
"testing"
"time"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"gotest.tools/assert"
"gotest.tools/icmd"
"gotest.tools/poll"
"gotest.tools/skip"
)
@ -79,21 +77,14 @@ func boolFromString(val string) bool {
}
}
func dockerClient(t *testing.T) client.APIClient {
t.Helper()
c, err := client.NewClientWithOpts(client.FromEnv, client.WithVersion("1.37"))
assert.NilError(t, err)
return c
}
// DefaultPollSettings used with gotestyourself/poll
var DefaultPollSettings = poll.WithDelay(100 * time.Millisecond)
// SkipIfNotExperimentalDaemon returns whether the test docker daemon is in experimental mode
func SkipIfNotExperimentalDaemon(t *testing.T) {
t.Helper()
c := dockerClient(t)
info, err := c.Info(context.Background())
assert.NilError(t, err)
skip.If(t, !info.ExperimentalBuild, "running against a non-experimental daemon")
result := icmd.RunCmd(icmd.Command("docker", "info", "--format", "{{.ExperimentalBuild}}"))
result.Assert(t, icmd.Expected{Err: icmd.None})
experimentalBuild := strings.TrimSpace(result.Stdout()) == "true"
skip.If(t, !experimentalBuild, "running against a non-experimental daemon")
}


@ -23,3 +23,29 @@ the same capabilities as the container, which may be limited. Set
--user [user | user:group | uid | uid:gid | user:gid | uid:group ]
Without this argument the command will be run as root in the container.
# Exit Status
The exit code from `docker exec` gives information about why the command
failed to run or why it exited. When `docker exec` exits with a non-zero code,
the exit codes follow the `chroot` standard (see below):
**_126_** if the **_contained command_** cannot be invoked
$ docker exec busybox /etc; echo $?
# exec: "/etc": permission denied
docker: Error response from daemon: Contained command could not be invoked
126
**_127_** if the **_contained command_** cannot be found
$ docker exec busybox foo; echo $?
# exec: "foo": executable file not found in $PATH
docker: Error response from daemon: Contained command not found or does not exist
127
**_Exit code_** of **_contained command_** otherwise
$ docker exec busybox /bin/sh -c 'exit 3'
# 3


@ -1,6 +1,11 @@
Removes one or more images from the host node. This does not remove images from
a registry. You cannot remove an image of a running container unless you use the
**-f** option. To see all images on a host use the **docker image ls** command.
Removes (and un-tags) one or more images from the host node. If an image has
multiple tags, using this command with the tag as a parameter only removes the
tag. If the tag is the only one for the image, both the image and the tag are
removed.
This does not remove images from a registry. You cannot remove an image of a
running container unless you use the **-f** option. To see all images on a host
use the **docker image ls** command.
# EXAMPLES


@ -77,6 +77,8 @@ func parseDockerDaemonHost(addr string) (string, error) {
return parseSimpleProtoAddr("npipe", addrParts[1], DefaultNamedPipe)
case "fd":
return addr, nil
case "ssh":
return addr, nil
default:
return "", fmt.Errorf("Invalid bind address format: %s", addr)
}


@ -7,7 +7,7 @@ set -eu -o pipefail
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# shellcheck source=/go/src/github.com/docker/cli/scripts/build/.variables
source $SCRIPTDIR/../build/.variables
source "$SCRIPTDIR"/../build/.variables
RESOURCES=$SCRIPTDIR/../winresources
@ -26,9 +26,9 @@ VERSION_QUAD=$(echo -n "$VERSION" | sed -re 's/^([0-9.]*).*$/\1/' | tr . ,)
# Pass version and commit information into the resource compiler
defs=
[ ! -z "$VERSION" ] && defs+=( "-D DOCKER_VERSION=\"$VERSION\"")
[ ! -z "$VERSION_QUAD" ] && defs+=( "-D DOCKER_VERSION_QUAD=$VERSION_QUAD")
[ ! -z "$GITCOMMIT" ] && defs+=( "-D DOCKER_COMMIT=\"$GITCOMMIT\"")
[ -n "$VERSION" ] && defs+=( "-D DOCKER_VERSION=\"$VERSION\"")
[ -n "$VERSION_QUAD" ] && defs+=( "-D DOCKER_VERSION_QUAD=$VERSION_QUAD")
[ -n "$GITCOMMIT" ] && defs+=( "-D DOCKER_COMMIT=\"$GITCOMMIT\"")
function makeres {
"$WINDRES" \


@ -17,7 +17,15 @@ function setup {
local project=$1
local file=$2
test "${DOCKERD_EXPERIMENTAL:-}" -eq "1" && file="${file}:./e2e/compose-env.experimental.yaml"
test "${DOCKERD_EXPERIMENTAL:-0}" -eq "1" && file="${file}:./e2e/compose-env.experimental.yaml"
if [[ "${TEST_CONNHELPER:-}" = "ssh" ]];then
test ! -f "${HOME}/.ssh/id_rsa" && ssh-keygen -t rsa -C docker-e2e-dummy -N "" -f "${HOME}/.ssh/id_rsa" -q
grep "^StrictHostKeyChecking no" "${HOME}/.ssh/config" > /dev/null 2>&1 || echo "StrictHostKeyChecking no" > "${HOME}/.ssh/config"
TEST_CONNHELPER_SSH_ID_RSA_PUB=$(cat "${HOME}/.ssh/id_rsa.pub")
export TEST_CONNHELPER_SSH_ID_RSA_PUB
file="${file}:./e2e/compose-env.connhelper-ssh.yaml"
fi
COMPOSE_PROJECT_NAME=$project COMPOSE_FILE=$file docker-compose up --build -d >&2
local network="${project}_default"
@ -26,6 +34,9 @@ function setup {
engine_ip="$(container_ip "${project}_engine_1" "$network")"
engine_host="tcp://$engine_ip:2375"
if [[ "${TEST_CONNHELPER:-}" = "ssh" ]];then
engine_host="ssh://penguin@${engine_ip}"
fi
(
export DOCKER_HOST="$engine_host"
timeout 200 ./scripts/test/e2e/wait-on-daemon
@ -57,8 +68,9 @@ function runtests {
TEST_REMOTE_DAEMON="${REMOTE_DAEMON-}" \
TEST_SKIP_PLUGIN_TESTS="${SKIP_PLUGIN_TESTS-}" \
GOPATH="$GOPATH" \
PATH="$PWD/build/" \
"$(which go)" test -v ./e2e/... ${TESTFLAGS-}
PATH="$PWD/build/:/usr/bin" \
HOME="$HOME" \
"$(command -v go)" test -v ./e2e/... ${TESTFLAGS-}
}
export unique_id="${E2E_UNIQUE_ID:-cliendtoendsuite}"


@ -1,9 +1,9 @@
#!/usr/bin/env bash
#!/usr/bin/env sh
set -eu
target="${1:-}"
if [[ "$target" != "help" && -z "${DISABLE_WARN_OUTSIDE_CONTAINER:-}" ]]; then
if [ "$target" != "help" ] && [ -z "${DISABLE_WARN_OUTSIDE_CONTAINER:-}" ]; then
(
echo
echo


@ -12,7 +12,7 @@ github.com/cpuguy83/go-md2man v1.0.8
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76 # v1.1.0
github.com/dgrijalva/jwt-go a2c85815a77d0f951e33ba4db5ae93629a1530af
github.com/docker/distribution 83389a148052d74ac602f5f1d62f86ff2f3c4aa5
github.com/docker/docker d2ecc7bad104139c118249ad159b45315a022754 https://github.com/docker/engine # 18.09 branch
github.com/docker/docker 200b524eff60a9c95a22bc2518042ac2ff617d07 https://github.com/docker/engine # 18.09 branch
github.com/docker/docker-credential-helpers 5241b46610f2491efdf9d1c85f1ddf5b02f6d962
# the docker/go package contains a customized version of canonical/json
# and is used by Notary. The package is periodically rebased on current Go versions.
@ -49,9 +49,9 @@ github.com/mattn/go-shellwords v1.0.3
github.com/matttproud/golang_protobuf_extensions v1.0.1
github.com/Microsoft/hcsshim 44c060121b68e8bdc40b411beba551f3b4ee9e55
github.com/Microsoft/go-winio v0.4.10
github.com/miekg/pkcs11 287d9350987cc9334667882061e202e96cdfb4d0
github.com/miekg/pkcs11 6120d95c0e9576ccf4a78ba40855809dca31a9ed
github.com/mitchellh/mapstructure f15292f7a699fcc1a38a80977f80a046874ba8ac
github.com/moby/buildkit 520201006c9dc676da9cf9655337ac711f7f127d
github.com/moby/buildkit 05766c5c21a1e528eeb1c3522b2f05493fe9ac47
github.com/modern-go/concurrent bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94 # 1.0.3
github.com/modern-go/reflect2 4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd # 1.0.1
github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b
@ -76,7 +76,7 @@ github.com/spf13/cobra v0.0.3
github.com/spf13/pflag 4cb166e4f25ac4e8016a3595bbf7ea2e9aa85a2c https://github.com/thaJeztah/pflag.git
github.com/syndtr/gocapability 2c00daeb6c3b45114c80ac44119e7b8801fdd852
github.com/theupdateframework/notary v0.6.1
github.com/tonistiigi/fsutil f567071bed2416e4d87d260d3162722651182317
github.com/tonistiigi/fsutil 2862f6bc5ac9b97124e552a5c108230b38a1b0ca
github.com/tonistiigi/units 6950e57a87eaf136bbe44ef2ec8e75b9e3569de2
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b


@ -102,6 +102,11 @@ func parseRemoteURL(remoteURL string) (gitRepo, error) {
u.Fragment = ""
repo.remote = u.String()
}
if strings.HasPrefix(repo.ref, "-") {
return gitRepo{}, errors.Errorf("invalid refspec: %s", repo.ref)
}
return repo, nil
}
@ -124,7 +129,7 @@ func fetchArgs(remoteURL string, ref string) []string {
args = append(args, "--depth", "1")
}
return append(args, "origin", ref)
return append(args, "origin", "--", ref)
}
// Check if a given git URL supports a shallow git clone,


@ -195,10 +195,18 @@ func (cli *Client) checkResponseErr(serverResp serverResponse) error {
return nil
}
body, err := ioutil.ReadAll(serverResp.body)
bodyMax := 1 * 1024 * 1024 // 1 MiB
bodyR := &io.LimitedReader{
R: serverResp.body,
N: int64(bodyMax),
}
body, err := ioutil.ReadAll(bodyR)
if err != nil {
return err
}
if bodyR.N == 0 {
return fmt.Errorf("request returned %s with a message (> %d bytes) for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), bodyMax, serverResp.reqURL)
}
if len(body) == 0 {
return fmt.Errorf("request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
}


@ -336,6 +336,14 @@ func RebaseArchiveEntries(srcContent io.Reader, oldBase, newBase string) io.Read
return
}
// srcContent tar stream, as served by TarWithOptions(), is
// definitely in PAX format, but tar.Next() mistakenly guesses it
// as USTAR, which creates a problem: if the newBase is >100
// characters long, WriteHeader() returns an error like
// "archive/tar: cannot encode header: Format specifies USTAR; and USTAR cannot encode Name=...".
//
// To fix, set the format to PAX here. See docker/for-linux issue #484.
hdr.Format = tar.FormatPAX
hdr.Name = strings.Replace(hdr.Name, oldBase, newBase, 1)
if hdr.Typeflag == tar.TypeLink {
hdr.Linkname = strings.Replace(hdr.Linkname, oldBase, newBase, 1)


@ -48,18 +48,22 @@ func MakeRUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, "runbindable")
}
func ensureMountedAs(mountPoint, options string) error {
mounted, err := Mounted(mountPoint)
// MakeMount ensures that the file or directory given is a mount point,
// bind mounting it to itself it case it is not.
func MakeMount(mnt string) error {
mounted, err := Mounted(mnt)
if err != nil {
return err
}
if !mounted {
if err := Mount(mountPoint, mountPoint, "none", "bind,rw"); err != nil {
return err
}
if mounted {
return nil
}
if _, err = Mounted(mountPoint); err != nil {
return Mount(mnt, mnt, "none", "bind")
}
func ensureMountedAs(mountPoint, options string) error {
if err := MakeMount(mountPoint); err != nil {
return err
}


@ -39,6 +39,10 @@ type Output interface {
type chanOutput chan<- Progress
func (out chanOutput) WriteProgress(p Progress) error {
// FIXME: workaround for panic in #37735
defer func() {
recover()
}()
out <- p
return nil
}


@ -145,7 +145,7 @@ func trustedLocation(req *http.Request) bool {
// addRequiredHeadersToRedirectedRequests adds the necessary redirection headers
// for redirected requests
func addRequiredHeadersToRedirectedRequests(req *http.Request, via []*http.Request) error {
if via != nil && via[0] != nil {
if len(via) != 0 && via[0] != nil {
if trustedLocation(req) && trustedLocation(via[0]) {
req.Header = via[0].Header
return nil


@ -1,7 +1,7 @@
# the following lines are in sorted order, FYI
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/Microsoft/hcsshim 44c060121b68e8bdc40b411beba551f3b4ee9e55
github.com/Microsoft/go-winio v0.4.10
github.com/Microsoft/hcsshim v0.7.12
github.com/Microsoft/go-winio v0.4.11
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
github.com/golang/gddo 9b12a26f3fbd7397dee4e20939ddca719d840d2a
@ -26,8 +26,8 @@ github.com/imdario/mergo v0.3.6
golang.org/x/sync 1d60e4601c6fd243af51cc01ddf169918a5407ca
# buildkit
github.com/moby/buildkit 6812dac65e0440bb75affce1fb2175e640edc15d
github.com/tonistiigi/fsutil b19464cd1b6a00773b4f2eb7acf9c30426f9df42
github.com/moby/buildkit d9f75920678e35090025bb89344c5370e2efc8e7
github.com/tonistiigi/fsutil 2862f6bc5ac9b97124e552a5c108230b38a1b0ca
github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746
github.com/opentracing/opentracing-go 1361b9cd60be79c4c3a7fa9841b3c132e40066a7
github.com/google/shlex 6f45313302b9c56850fc17f99e40caebce98c716
@ -37,7 +37,7 @@ github.com/mitchellh/hashstructure 2bca23e0e452137f789efbc8610126fd8b94f73b
#get libnetwork packages
# When updating, also update LIBNETWORK_COMMIT in hack/dockerfile/install/proxy accordingly
github.com/docker/libnetwork a79d3687931697244b8e03485bf7b2042f8ec6b6
github.com/docker/libnetwork 4725f2163fb214a6312f3beae5991f838ec36326 # bump_18.09 branch
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
@ -47,7 +47,7 @@ github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372
github.com/hashicorp/go-sockaddr 6d291a969b86c4b633730bfc6b8b9d64c3aafed9
github.com/hashicorp/go-multierror fcdddc395df1ddf4247c69bd436e84cfa0733f7e
github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870
github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
github.com/docker/libkv 458977154600b9f23984d9f4b82e79570b5ae12b
github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
github.com/vishvananda/netlink b2de5d10e38ecce8607e6b438b6d174f389a004e
@ -59,13 +59,13 @@ github.com/coreos/etcd v3.2.1
github.com/coreos/go-semver v0.2.0
github.com/ugorji/go f1f1a805ed361a0e078bb537e4ea78cd37dcf065
github.com/hashicorp/consul v0.5.2
github.com/boltdb/bolt fff57c100f4dea1905678da7e90d92429dff2904
github.com/miekg/dns v1.0.7
github.com/ishidawataru/sctp 07191f837fedd2f13d1ec7b5f885f0f3ec54b1cb
go.etcd.io/bbolt v1.3.1-etcd.8
# get graph and distribution packages
github.com/docker/distribution 83389a148052d74ac602f5f1d62f86ff2f3c4aa5
github.com/vbatts/tar-split v0.10.2
github.com/vbatts/tar-split v0.11.0
github.com/opencontainers/go-digest v1.0.0-rc1
# get go-zfs packages
@ -74,9 +74,13 @@ github.com/pborman/uuid v1.0
google.golang.org/grpc v1.12.0
# This does not need to match RUNC_COMMIT as it is used for helper packages but should be newer or equal
github.com/opencontainers/runc 20aff4f0488c6d4b8df4d85b4f63f1f704c11abd
github.com/opencontainers/runtime-spec d810dbc60d8c5aeeb3d054bd1132fab2121968ce # v1.0.1-43-gd810dbc
# The version of runc should match the version that is used by the containerd
# version that is used. If you need to update runc, open a pull request in
# the containerd project first, and update both after that is merged.
# This commit does not need to match RUNC_COMMIT as it is used for helper
# packages but should be newer or equal.
github.com/opencontainers/runc 96ec2177ae841256168fcf76954f7177af9446eb
github.com/opencontainers/runtime-spec 5684b8af48c1ac3b1451fa499724e30e3c20a294 # v1.0.1-49-g5684b8a
github.com/opencontainers/image-spec v1.0.1
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
@ -114,23 +118,24 @@ github.com/googleapis/gax-go v2.0.0
google.golang.org/genproto 694d95ba50e67b2e363f3483057db5d4910c18f9
# containerd
github.com/containerd/containerd v1.2.0-beta.2
github.com/containerd/containerd 9754871865f7fe2f4e74d43e2fc7ccd237edcbce # v1.2.2
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/containerd/continuity d3c23511c1bf5851696cba83143d9cbcd666869b
github.com/containerd/cgroups 5e610833b72089b37d0e615de9a92dfc043757c2
github.com/containerd/continuity 004b46473808b3e7a4a3049c20e4376c91eb966d
github.com/containerd/cgroups dbea6f2bd41658b84b00417ceefa416b979cbf10
github.com/containerd/console c12b1e7919c14469339a5d38f2f8ed9b64a9de23
github.com/containerd/go-runc edcf3de1f4971445c42d61f20d506b30612aa031
github.com/containerd/cri 0d5cabd006cb5319dc965046067b8432d9fa5ef8 # release/1.2 branch
github.com/containerd/go-runc 5a6d9f37cfa36b15efba46dc7ea349fa9b7143c3
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
github.com/containerd/ttrpc 94dde388801693c54f88a6596f713b51a8b30b2d
github.com/containerd/ttrpc 2a805f71863501300ae1976d29f0454ae003e85a
github.com/gogo/googleapis 08a7655d27152912db7aaf4f983275eaf8d128ef
# cluster
github.com/docker/swarmkit cfa742c8abe6f8e922f6e4e920153c408e7d9c3b
github.com/docker/swarmkit c66ed60822d3fc3bf6e17a505ee79014f449ef05 # bump_v18.09 branch
github.com/gogo/protobuf v1.0.0
github.com/cloudflare/cfssl 1.3.2
github.com/fernet/fernet-go 1b2437bc582b3cfbb341ee5a29f8ef5b42912ff2
github.com/google/certificate-transparency-go v1.0.20
golang.org/x/crypto a2144134853fc9a27a7b1e3eb4f19f1a76df13c9
golang.org/x/crypto 0709b304e793a5edb4a2c0145f281ecdc20838a4
golang.org/x/time fbb02b2291d28baffd63558aa44b4b56f178d650
github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad
github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git
@ -143,8 +148,8 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common ebdfc6da46522d58825777cf1f90490a5b1ef1d8
github.com/prometheus/procfs abf152e5f3e97f2fafac028d2cc06c1feb87ffa5
github.com/matttproud/golang_protobuf_extensions v1.0.0
github.com/pkg/errors 839d9e913e063e28dfd0e6c7b7512793e0a48be9
github.com/grpc-ecosystem/go-grpc-prometheus 6b7015e65d366bf3f19b2b2a000a831940f0f7e0
github.com/pkg/errors 645ef00459ed84a119197bfb8d8205042c6df63d # v0.8.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
# cli
github.com/spf13/cobra v0.0.3
@ -155,7 +160,7 @@ github.com/Nvveen/Gotty a8b993ba6abdb0e0c12b0125c603323a71c7790c https://github.
# metrics
github.com/docker/go-metrics d466d4f6fd960e01820085bd7e1a24426ee7ef18
github.com/opencontainers/selinux b29023b86e4a69d1b46b7e7b4e2b6fda03f0b9cd
github.com/opencontainers/selinux b6fa367ed7f534f9ba25391cc2d467085dbb445a
# archive/tar (for Go 1.10, see https://github.com/golang/go/issues/24787)


@ -12,13 +12,13 @@ were it makes sense. It has been tested with SoftHSM.
softhsm --init-token --slot 0 --label test --pin 1234
* Then use `libsofthsm.so` as the pkcs11 module:
```go
p := pkcs11.New("/usr/lib/softhsm/libsofthsm.so")
```
## Examples
A skeleton program would look somewhat like this (yes, pkcs#11 is verbose):
```go
p := pkcs11.New("/usr/lib/softhsm/libsofthsm.so")
err := p.Initialize()
if err != nil {
@ -55,7 +55,7 @@ A skeleton program would look somewhat like this (yes, pkcs#11 is verbose):
fmt.Printf("%x", d)
}
fmt.Println()
```
Further examples are included in the tests.
To expose PKCS#11 keys using the


@ -24,15 +24,19 @@ const (
)
const (
CKG_MGF1_SHA1 uint = 0x00000001
CKG_MGF1_SHA224 uint = 0x00000005
CKG_MGF1_SHA256 uint = 0x00000002
CKG_MGF1_SHA384 uint = 0x00000003
CKG_MGF1_SHA512 uint = 0x00000004
CKG_MGF1_SHA1 uint = 0x00000001
CKG_MGF1_SHA224 uint = 0x00000005
CKG_MGF1_SHA256 uint = 0x00000002
CKG_MGF1_SHA384 uint = 0x00000003
CKG_MGF1_SHA512 uint = 0x00000004
CKG_MGF1_SHA3_224 uint = 0x00000006
CKG_MGF1_SHA3_256 uint = 0x00000007
CKG_MGF1_SHA3_384 uint = 0x00000008
CKG_MGF1_SHA3_512 uint = 0x00000009
)
const (
CKZ_DATA_SPECIFIED uint = 0x00000001
CKZ_DATA_SPECIFIED uint = 0x00000001
)
// Generated with: awk '/#define CK[AFKMRC]/{ print $2 " = " $3 }' pkcs11t.h | sed -e 's/UL$//g' -e 's/UL)$/)/g'
@ -98,15 +102,19 @@ const (
CKK_SHA512_224_HMAC = 0x00000027
CKK_SHA512_256_HMAC = 0x00000028
CKK_SHA512_T_HMAC = 0x00000029
CKK_SHA_1_HMAC = 0x00000028
CKK_SHA224_HMAC = 0x0000002E
CKK_SHA256_HMAC = 0x0000002B
CKK_SHA384_HMAC = 0x0000002C
CKK_SHA512_HMAC = 0x0000002D
CKK_SEED = 0x00000050
CKK_GOSTR3410 = 0x00000060
CKK_GOSTR3411 = 0x00000061
CKK_GOST28147 = 0x00000062
CKK_SHA_1_HMAC = 0x00000028
CKK_SHA224_HMAC = 0x0000002E
CKK_SHA256_HMAC = 0x0000002B
CKK_SHA384_HMAC = 0x0000002C
CKK_SHA512_HMAC = 0x0000002D
CKK_SEED = 0x0000002F
CKK_GOSTR3410 = 0x00000030
CKK_GOSTR3411 = 0x00000031
CKK_GOST28147 = 0x00000032
CKK_SHA3_224_HMAC = 0x00000033
CKK_SHA3_256_HMAC = 0x00000034
CKK_SHA3_384_HMAC = 0x00000035
CKK_SHA3_512_HMAC = 0x00000036
CKK_VENDOR_DEFINED = 0x80000000
CKC_X_509 = 0x00000000
CKC_X_509_ATTR_CERT = 0x00000001
@ -182,8 +190,8 @@ const (
CKA_AUTH_PIN_FLAGS = 0x00000201
CKA_ALWAYS_AUTHENTICATE = 0x00000202
CKA_WRAP_WITH_TRUSTED = 0x00000210
CKA_WRAP_TEMPLATE = (CKF_ARRAY_ATTRIBUTE | 0x00000211)
CKA_UNWRAP_TEMPLATE = (CKF_ARRAY_ATTRIBUTE | 0x00000212)
CKA_WRAP_TEMPLATE = CKF_ARRAY_ATTRIBUTE | 0x00000211
CKA_UNWRAP_TEMPLATE = CKF_ARRAY_ATTRIBUTE | 0x00000212
CKA_OTP_FORMAT = 0x00000220
CKA_OTP_LENGTH = 0x00000221
CKA_OTP_TIME_INTERVAL = 0x00000222
@ -218,7 +226,7 @@ const (
CKA_REQUIRED_CMS_ATTRIBUTES = 0x00000501
CKA_DEFAULT_CMS_ATTRIBUTES = 0x00000502
CKA_SUPPORTED_CMS_ATTRIBUTES = 0x00000503
CKA_ALLOWED_MECHANISMS = (CKF_ARRAY_ATTRIBUTE | 0x00000600)
CKA_ALLOWED_MECHANISMS = CKF_ARRAY_ATTRIBUTE | 0x00000600
CKA_VENDOR_DEFINED = 0x80000000
CKM_RSA_PKCS_KEY_PAIR_GEN = 0x00000000
CKM_RSA_PKCS = 0x00000001
@ -243,6 +251,10 @@ const (
CKM_DSA_SHA256 = 0x00000015
CKM_DSA_SHA384 = 0x00000016
CKM_DSA_SHA512 = 0x00000017
CKM_DSA_SHA3_224 = 0x00000018
CKM_DSA_SHA3_256 = 0x00000019
CKM_DSA_SHA3_384 = 0x0000001A
CKM_DSA_SHA3_512 = 0x0000001B
CKM_DH_PKCS_KEY_PAIR_GEN = 0x00000020
CKM_DH_PKCS_DERIVE = 0x00000021
CKM_X9_42_DH_KEY_PAIR_GEN = 0x00000030
@ -269,6 +281,14 @@ const (
CKM_SHA512_T_HMAC = 0x00000051
CKM_SHA512_T_HMAC_GENERAL = 0x00000052
CKM_SHA512_T_KEY_DERIVATION = 0x00000053
CKM_SHA3_256_RSA_PKCS = 0x00000060
CKM_SHA3_384_RSA_PKCS = 0x00000061
CKM_SHA3_512_RSA_PKCS = 0x00000062
CKM_SHA3_256_RSA_PKCS_PSS = 0x00000063
CKM_SHA3_384_RSA_PKCS_PSS = 0x00000064
CKM_SHA3_512_RSA_PKCS_PSS = 0x00000065
CKM_SHA3_224_RSA_PKCS = 0x00000066
CKM_SHA3_224_RSA_PKCS_PSS = 0x00000067
CKM_RC2_KEY_GEN = 0x00000100
CKM_RC2_ECB = 0x00000101
CKM_RC2_CBC = 0x00000102
@ -335,6 +355,22 @@ const (
CKM_HOTP = 0x00000291
CKM_ACTI = 0x000002A0
CKM_ACTI_KEY_GEN = 0x000002A1
CKM_SHA3_256 = 0x000002B0
CKM_SHA3_256_HMAC = 0x000002B1
CKM_SHA3_256_HMAC_GENERAL = 0x000002B2
CKM_SHA3_256_KEY_GEN = 0x000002B3
CKM_SHA3_224 = 0x000002B5
CKM_SHA3_224_HMAC = 0x000002B6
CKM_SHA3_224_HMAC_GENERAL = 0x000002B7
CKM_SHA3_224_KEY_GEN = 0x000002B8
CKM_SHA3_384 = 0x000002C0
CKM_SHA3_384_HMAC = 0x000002C1
CKM_SHA3_384_HMAC_GENERAL = 0x000002C2
CKM_SHA3_384_KEY_GEN = 0x000002C3
CKM_SHA3_512 = 0x000002D0
CKM_SHA3_512_HMAC = 0x000002D1
CKM_SHA3_512_HMAC_GENERAL = 0x000002D2
CKM_SHA3_512_KEY_GEN = 0x000002D3
CKM_CAST_KEY_GEN = 0x00000300
CKM_CAST_ECB = 0x00000301
CKM_CAST_CBC = 0x00000302
@ -395,6 +431,12 @@ const (
CKM_SHA384_KEY_DERIVATION = 0x00000394
CKM_SHA512_KEY_DERIVATION = 0x00000395
CKM_SHA224_KEY_DERIVATION = 0x00000396
CKM_SHA3_256_KEY_DERIVE = 0x00000397
CKM_SHA3_224_KEY_DERIVE = 0x00000398
CKM_SHA3_384_KEY_DERIVE = 0x00000399
CKM_SHA3_512_KEY_DERIVE = 0x0000039A
CKM_SHAKE_128_KEY_DERIVE = 0x0000039B
CKM_SHAKE_256_KEY_DERIVE = 0x0000039C
CKM_PBE_MD2_DES_CBC = 0x000003A0
CKM_PBE_MD5_DES_CBC = 0x000003A1
CKM_PBE_MD5_CAST_CBC = 0x000003A2
@ -678,4 +720,6 @@ const (
CKF_EXCLUDE_CHALLENGE = 0x00000008
CKF_EXCLUDE_PIN = 0x00000010
CKF_USER_FRIENDLY_OTP = 0x00000020
CKD_NULL = 0x00000001
CKD_SHA1_KDF = 0x00000002
)


@ -8,6 +8,24 @@ package pkcs11
#include <stdlib.h>
#include <string.h>
#include "pkcs11go.h"
static inline void putOAEPParams(CK_RSA_PKCS_OAEP_PARAMS_PTR params, CK_VOID_PTR pSourceData, CK_ULONG ulSourceDataLen)
{
params->pSourceData = pSourceData;
params->ulSourceDataLen = ulSourceDataLen;
}
static inline void putECDH1SharedParams(CK_ECDH1_DERIVE_PARAMS_PTR params, CK_VOID_PTR pSharedData, CK_ULONG ulSharedDataLen)
{
params->pSharedData = pSharedData;
params->ulSharedDataLen = ulSharedDataLen;
}
static inline void putECDH1PublicParams(CK_ECDH1_DERIVE_PARAMS_PTR params, CK_VOID_PTR pPublicData, CK_ULONG ulPublicDataLen)
{
params->pPublicData = pPublicData;
params->ulPublicDataLen = ulPublicDataLen;
}
*/
import "C"
import "unsafe"
@ -21,9 +39,8 @@ type GCMParams struct {
tagSize int
}
// NewGCMParams returns a pointer to AES-GCM parameters.
// This is a convenience function for passing GCM parameters to
// available mechanisms.
// NewGCMParams returns a pointer to AES-GCM parameters that can be used with the CKM_AES_GCM mechanism.
// The Free() method must be called after the operation is complete.
//
// *NOTE*
// Some HSMs, like CloudHSM, will ignore the IV you pass in and write their
@ -55,17 +72,23 @@ func cGCMParams(p *GCMParams) []byte {
iv, ivLen := arena.Allocate(p.iv)
params.pIv = C.CK_BYTE_PTR(iv)
params.ulIvLen = ivLen
params.ulIvBits = ivLen * 8
}
if len(p.aad) > 0 {
aad, aadLen := arena.Allocate(p.aad)
params.pAAD = C.CK_BYTE_PTR(aad)
params.ulAADLen = aadLen
}
p.Free()
p.arena = arena
p.params = &params
return C.GoBytes(unsafe.Pointer(&params), C.int(unsafe.Sizeof(params)))
}
// IV returns a copy of the actual IV used for the operation.
//
// Some HSMs may ignore the user-specified IV and write their own at the end of
// the encryption operation; this method allows you to retrieve it.
func (p *GCMParams) IV() []byte {
if p == nil || p.params == nil {
return nil
@ -76,6 +99,10 @@ func (p *GCMParams) IV() []byte {
return iv
}
// Free deallocates the memory reserved for the HSM to write back the actual IV.
//
// This must be called after the entire operation is complete, i.e. after
// Encrypt or EncryptFinal. It is safe to call Free multiple times.
func (p *GCMParams) Free() {
if p == nil || p.arena == nil {
return
@ -84,3 +111,78 @@ func (p *GCMParams) Free() {
p.params = nil
p.arena = nil
}
// NewPSSParams creates a CK_RSA_PKCS_PSS_PARAMS structure and returns it as a byte array for use with the CKM_RSA_PKCS_PSS mechanism
func NewPSSParams(hashAlg, mgf, saltLength uint) []byte {
p := C.CK_RSA_PKCS_PSS_PARAMS{
hashAlg: C.CK_MECHANISM_TYPE(hashAlg),
mgf: C.CK_RSA_PKCS_MGF_TYPE(mgf),
sLen: C.CK_ULONG(saltLength),
}
return C.GoBytes(unsafe.Pointer(&p), C.int(unsafe.Sizeof(p)))
}
// OAEPParams can be passed to NewMechanism to implement CKM_RSA_PKCS_OAEP
type OAEPParams struct {
HashAlg uint
MGF uint
SourceType uint
SourceData []byte
}
// NewOAEPParams creates a CK_RSA_PKCS_OAEP_PARAMS structure suitable for use with the CKM_RSA_PKCS_OAEP mechanism
func NewOAEPParams(hashAlg, mgf, sourceType uint, sourceData []byte) *OAEPParams {
return &OAEPParams{
HashAlg: hashAlg,
MGF: mgf,
SourceType: sourceType,
SourceData: sourceData,
}
}
func cOAEPParams(p *OAEPParams, arena arena) ([]byte, arena) {
params := C.CK_RSA_PKCS_OAEP_PARAMS{
hashAlg: C.CK_MECHANISM_TYPE(p.HashAlg),
mgf: C.CK_RSA_PKCS_MGF_TYPE(p.MGF),
source: C.CK_RSA_PKCS_OAEP_SOURCE_TYPE(p.SourceType),
}
if len(p.SourceData) != 0 {
buf, len := arena.Allocate(p.SourceData)
// field is unaligned on windows so this has to call into C
C.putOAEPParams(&params, buf, len)
}
return C.GoBytes(unsafe.Pointer(&params), C.int(unsafe.Sizeof(params))), arena
}
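For completeness, a sketch of OAEP decryption with these parameters, under the same assumptions (`p`, `session`, and an RSA private-key handle); SHA-1 with MGF1-SHA1 and no source data is only an example configuration.
```
func decryptOAEP(p *pkcs11.Ctx, session pkcs11.SessionHandle, priv pkcs11.ObjectHandle, ciphertext []byte) ([]byte, error) {
    oaep := pkcs11.NewOAEPParams(pkcs11.CKM_SHA_1, pkcs11.CKG_MGF1_SHA1, pkcs11.CKZ_DATA_SPECIFIED, nil)
    mech := []*pkcs11.Mechanism{pkcs11.NewMechanism(pkcs11.CKM_RSA_PKCS_OAEP, oaep)}
    if err := p.DecryptInit(session, mech, priv); err != nil {
        return nil, err
    }
    return p.Decrypt(session, ciphertext)
}
```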
// ECDH1DeriveParams can be passed to NewMechanism to implement CK_ECDH1_DERIVE_PARAMS
type ECDH1DeriveParams struct {
KDF uint
SharedData []byte
PublicKeyData []byte
}
// NewECDH1DeriveParams creates a CK_ECDH1_DERIVE_PARAMS structure suitable for use with the CKM_ECDH1_DERIVE mechanism
func NewECDH1DeriveParams(kdf uint, sharedData []byte, publicKeyData []byte) *ECDH1DeriveParams {
return &ECDH1DeriveParams{
KDF: kdf,
SharedData: sharedData,
PublicKeyData: publicKeyData,
}
}
func cECDH1DeriveParams(p *ECDH1DeriveParams, arena arena) ([]byte, arena) {
params := C.CK_ECDH1_DERIVE_PARAMS{
kdf: C.CK_EC_KDF_TYPE(p.KDF),
}
// SharedData MUST be null if key derivation function (KDF) is CKD_NULL
if len(p.SharedData) != 0 {
sharedData, sharedDataLen := arena.Allocate(p.SharedData)
C.putECDH1SharedParams(&params, sharedData, sharedDataLen)
}
publicKeyData, publicKeyDataLen := arena.Allocate(p.PublicKeyData)
C.putECDH1PublicParams(&params, publicKeyData, publicKeyDataLen)
return C.GoBytes(unsafe.Pointer(&params), C.int(unsafe.Sizeof(params))), arena
}
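A sketch of deriving a symmetric key over ECDH with these parameters, assuming an EC private-key handle and the peer's public EC point in `peerPub`. CKD_NULL is used here, so SharedData stays nil as the comment above requires; the attribute template (a 32-byte AES secret key) is illustrative.
```
func deriveECDH(p *pkcs11.Ctx, session pkcs11.SessionHandle, priv pkcs11.ObjectHandle, peerPub []byte) (pkcs11.ObjectHandle, error) {
    params := pkcs11.NewECDH1DeriveParams(pkcs11.CKD_NULL, nil, peerPub)
    mech := []*pkcs11.Mechanism{pkcs11.NewMechanism(pkcs11.CKM_ECDH1_DERIVE, params)}
    template := []*pkcs11.Attribute{
        pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_SECRET_KEY),
        pkcs11.NewAttribute(pkcs11.CKA_KEY_TYPE, pkcs11.CKK_AES),
        pkcs11.NewAttribute(pkcs11.CKA_VALUE_LEN, 32),
    }
    return p.DeriveKey(session, mech, priv, template)
}
```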

View File

@ -11,43 +11,73 @@ package pkcs11
// * CK_ULONG never overflows a Go int
/*
#cgo windows CFLAGS: -DREPACK_STRUCTURES
#cgo windows LDFLAGS: -lltdl
#cgo linux LDFLAGS: -lltdl -ldl
#cgo darwin CFLAGS: -I/usr/local/share/libtool
#cgo darwin LDFLAGS: -lltdl -L/usr/local/lib/
#cgo openbsd CFLAGS: -I/usr/local/include/
#cgo openbsd LDFLAGS: -lltdl -L/usr/local/lib/
#cgo freebsd CFLAGS: -I/usr/local/include/
#cgo freebsd LDFLAGS: -lltdl -L/usr/local/lib/
#cgo LDFLAGS: -lltdl
#cgo windows CFLAGS: -DPACKED_STRUCTURES
#cgo linux LDFLAGS: -ldl
#cgo darwin LDFLAGS: -ldl
#cgo openbsd LDFLAGS: -ldl
#cgo freebsd LDFLAGS: -ldl
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <ltdl.h>
#include <unistd.h>
#include "pkcs11go.h"
#ifdef _WIN32
#include <windows.h>
struct ctx {
lt_dlhandle handle;
HMODULE handle;
CK_FUNCTION_LIST_PTR sym;
};
// New initializes a ctx and fills the symbol table.
struct ctx *New(const char *module)
{
if (lt_dlinit() != 0) {
return NULL;
}
CK_C_GetFunctionList list;
struct ctx *c = calloc(1, sizeof(struct ctx));
c->handle = lt_dlopen(module);
c->handle = LoadLibrary(module);
if (c->handle == NULL) {
free(c);
return NULL;
}
list = (CK_C_GetFunctionList) lt_dlsym(c->handle, "C_GetFunctionList");
list = (CK_C_GetFunctionList) GetProcAddress(c->handle, "C_GetFunctionList");
if (list == NULL) {
free(c);
return NULL;
}
list(&c->sym);
return c;
}
// Destroy cleans up a ctx.
void Destroy(struct ctx *c)
{
if (!c) {
return;
}
free(c);
}
#else
#include <dlfcn.h>
struct ctx {
void *handle;
CK_FUNCTION_LIST_PTR sym;
};
// New initializes a ctx and fills the symbol table.
struct ctx *New(const char *module)
{
CK_C_GetFunctionList list;
struct ctx *c = calloc(1, sizeof(struct ctx));
c->handle = dlopen(module, RTLD_LAZY);
if (c->handle == NULL) {
free(c);
return NULL;
}
list = (CK_C_GetFunctionList) dlsym(c->handle, "C_GetFunctionList");
if (list == NULL) {
free(c);
return NULL;
@ -65,12 +95,12 @@ void Destroy(struct ctx *c)
if (c->handle == NULL) {
return;
}
if (lt_dlclose(c->handle) < 0) {
if (dlclose(c->handle) < 0) {
return;
}
lt_dlexit();
free(c);
}
#endif
CK_RV Initialize(struct ctx * c)
{
@ -238,23 +268,17 @@ CK_RV Logout(struct ctx * c, CK_SESSION_HANDLE session)
}
CK_RV CreateObject(struct ctx * c, CK_SESSION_HANDLE session,
ckAttrPtr temp, CK_ULONG tempCount,
CK_ATTRIBUTE_PTR temp, CK_ULONG tempCount,
CK_OBJECT_HANDLE_PTR obj)
{
ATTR_TO_C(tempc, temp, tempCount, NULL);
CK_RV e = c->sym->C_CreateObject(session, tempc, tempCount, obj);
ATTR_FREE(tempc);
return e;
return c->sym->C_CreateObject(session, temp, tempCount, obj);
}
CK_RV CopyObject(struct ctx * c, CK_SESSION_HANDLE session, CK_OBJECT_HANDLE o,
ckAttrPtr temp, CK_ULONG tempCount,
CK_ATTRIBUTE_PTR temp, CK_ULONG tempCount,
CK_OBJECT_HANDLE_PTR obj)
{
ATTR_TO_C(tempc, temp, tempCount, NULL);
CK_RV e = c->sym->C_CopyObject(session, o, tempc, tempCount, obj);
ATTR_FREE(tempc);
return e;
return c->sym->C_CopyObject(session, o, temp, tempCount, obj);
}
CK_RV DestroyObject(struct ctx * c, CK_SESSION_HANDLE session,
@ -272,48 +296,37 @@ CK_RV GetObjectSize(struct ctx * c, CK_SESSION_HANDLE session,
}
CK_RV GetAttributeValue(struct ctx * c, CK_SESSION_HANDLE session,
CK_OBJECT_HANDLE object, ckAttrPtr temp,
CK_OBJECT_HANDLE object, CK_ATTRIBUTE_PTR temp,
CK_ULONG templen)
{
ATTR_TO_C(tempc, temp, templen, NULL);
// Call for the first time, check the returned ulValue in the attributes, then
// allocate enough space and try again.
CK_RV e = c->sym->C_GetAttributeValue(session, object, tempc, templen);
CK_RV e = c->sym->C_GetAttributeValue(session, object, temp, templen);
if (e != CKR_OK) {
ATTR_FREE(tempc);
return e;
}
CK_ULONG i;
for (i = 0; i < templen; i++) {
if ((CK_LONG) tempc[i].ulValueLen == -1) {
if ((CK_LONG) temp[i].ulValueLen == -1) {
// either access denied or no such object
continue;
}
tempc[i].pValue = calloc(tempc[i].ulValueLen, sizeof(CK_BYTE));
temp[i].pValue = calloc(temp[i].ulValueLen, sizeof(CK_BYTE));
}
e = c->sym->C_GetAttributeValue(session, object, tempc, templen);
ATTR_FROM_C(temp, tempc, templen);
ATTR_FREE(tempc);
return e;
return c->sym->C_GetAttributeValue(session, object, temp, templen);
}
CK_RV SetAttributeValue(struct ctx * c, CK_SESSION_HANDLE session,
CK_OBJECT_HANDLE object, ckAttrPtr temp,
CK_OBJECT_HANDLE object, CK_ATTRIBUTE_PTR temp,
CK_ULONG templen)
{
ATTR_TO_C(tempc, temp, templen, NULL);
CK_RV e = c->sym->C_SetAttributeValue(session, object, tempc, templen);
ATTR_FREE(tempc);
return e;
return c->sym->C_SetAttributeValue(session, object, temp, templen);
}
CK_RV FindObjectsInit(struct ctx * c, CK_SESSION_HANDLE session,
ckAttrPtr temp, CK_ULONG tempCount)
CK_ATTRIBUTE_PTR temp, CK_ULONG tempCount)
{
ATTR_TO_C(tempc, temp, tempCount, NULL);
CK_RV e = c->sym->C_FindObjectsInit(session, tempc, tempCount);
ATTR_FREE(tempc);
return e;
return c->sym->C_FindObjectsInit(session, temp, tempCount);
}
CK_RV FindObjects(struct ctx * c, CK_SESSION_HANDLE session,
@ -332,11 +345,9 @@ CK_RV FindObjectsFinal(struct ctx * c, CK_SESSION_HANDLE session)
}
CK_RV EncryptInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mechanism);
CK_RV e = c->sym->C_EncryptInit(session, m, key);
return e;
return c->sym->C_EncryptInit(session, mechanism, key);
}
CK_RV Encrypt(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR message,
@ -388,17 +399,15 @@ CK_RV EncryptFinal(struct ctx * c, CK_SESSION_HANDLE session,
}
CK_RV DecryptInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mechanism);
CK_RV e = c->sym->C_DecryptInit(session, m, key);
return e;
return c->sym->C_DecryptInit(session, mechanism, key);
}
CK_RV Decrypt(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR cypher,
CK_RV Decrypt(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR cipher,
CK_ULONG clen, CK_BYTE_PTR * plain, CK_ULONG_PTR plainlen)
{
CK_RV e = c->sym->C_Decrypt(session, cypher, clen, NULL, plainlen);
CK_RV e = c->sym->C_Decrypt(session, cipher, clen, NULL, plainlen);
if (e != CKR_OK) {
return e;
}
@ -406,7 +415,7 @@ CK_RV Decrypt(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR cypher,
if (*plain == NULL) {
return CKR_HOST_MEMORY;
}
e = c->sym->C_Decrypt(session, cypher, clen, *plain, plainlen);
e = c->sym->C_Decrypt(session, cipher, clen, *plain, plainlen);
return e;
}
@ -444,11 +453,9 @@ CK_RV DecryptFinal(struct ctx * c, CK_SESSION_HANDLE session,
}
CK_RV DigestInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism)
CK_MECHANISM_PTR mechanism)
{
MECH_TO_C(m, mechanism);
CK_RV e = c->sym->C_DigestInit(session, m);
return e;
return c->sym->C_DigestInit(session, mechanism);
}
CK_RV Digest(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR message,
@ -495,11 +502,9 @@ CK_RV DigestFinal(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR * hash,
}
CK_RV SignInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mechanism);
CK_RV e = c->sym->C_SignInit(session, m, key);
return e;
return c->sym->C_SignInit(session, mechanism, key);
}
CK_RV Sign(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR message,
@ -540,11 +545,9 @@ CK_RV SignFinal(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR * sig,
}
CK_RV SignRecoverInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mech, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mech);
CK_RV rv = c->sym->C_SignRecoverInit(session, m, key);
return rv;
return c->sym->C_SignRecoverInit(session, mechanism, key);
}
CK_RV SignRecover(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR data,
@ -563,11 +566,9 @@ CK_RV SignRecover(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR data,
}
CK_RV VerifyInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mech, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mech);
CK_RV rv = c->sym->C_VerifyInit(session, m, key);
return rv;
return c->sym->C_VerifyInit(session, mechanism, key);
}
CK_RV Verify(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR message,
@ -592,11 +593,9 @@ CK_RV VerifyFinal(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR sig,
}
CK_RV VerifyRecoverInit(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mech, CK_OBJECT_HANDLE key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE key)
{
MECH_TO_C(m, mech);
CK_RV rv = c->sym->C_VerifyRecoverInit(session, m, key);
return rv;
return c->sym->C_VerifyRecoverInit(session, mechanism, key);
}
CK_RV VerifyRecover(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR sig,
@ -688,39 +687,28 @@ CK_RV DecryptVerifyUpdate(struct ctx * c, CK_SESSION_HANDLE session,
}
CK_RV GenerateKey(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, ckAttrPtr temp,
CK_MECHANISM_PTR mechanism, CK_ATTRIBUTE_PTR temp,
CK_ULONG tempCount, CK_OBJECT_HANDLE_PTR key)
{
MECH_TO_C(m, mechanism);
ATTR_TO_C(tempc, temp, tempCount, NULL);
CK_RV e = c->sym->C_GenerateKey(session, m, tempc, tempCount, key);
ATTR_FREE(tempc);
return e;
return c->sym->C_GenerateKey(session, mechanism, temp, tempCount, key);
}
CK_RV GenerateKeyPair(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, ckAttrPtr pub,
CK_ULONG pubCount, ckAttrPtr priv,
CK_MECHANISM_PTR mechanism, CK_ATTRIBUTE_PTR pub,
CK_ULONG pubCount, CK_ATTRIBUTE_PTR priv,
CK_ULONG privCount, CK_OBJECT_HANDLE_PTR pubkey,
CK_OBJECT_HANDLE_PTR privkey)
{
MECH_TO_C(m, mechanism);
ATTR_TO_C(pubc, pub, pubCount, NULL);
ATTR_TO_C(privc, priv, privCount, pubc);
CK_RV e = c->sym->C_GenerateKeyPair(session, m, pubc, pubCount,
privc, privCount, pubkey, privkey);
ATTR_FREE(pubc);
ATTR_FREE(privc);
return e;
return c->sym->C_GenerateKeyPair(session, mechanism, pub, pubCount,
priv, privCount, pubkey, privkey);
}
CK_RV WrapKey(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mechanism, CK_OBJECT_HANDLE wrappingkey,
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE wrappingkey,
CK_OBJECT_HANDLE key, CK_BYTE_PTR * wrapped,
CK_ULONG_PTR wrappedlen)
{
MECH_TO_C(m, mechanism);
CK_RV rv = c->sym->C_WrapKey(session, m, wrappingkey, key, NULL,
CK_RV rv = c->sym->C_WrapKey(session, mechanism, wrappingkey, key, NULL,
wrappedlen);
if (rv != CKR_OK) {
return rv;
@ -729,33 +717,25 @@ CK_RV WrapKey(struct ctx * c, CK_SESSION_HANDLE session,
if (*wrapped == NULL) {
return CKR_HOST_MEMORY;
}
rv = c->sym->C_WrapKey(session, m, wrappingkey, key, *wrapped,
rv = c->sym->C_WrapKey(session, mechanism, wrappingkey, key, *wrapped,
wrappedlen);
return rv;
}
CK_RV DeriveKey(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mech, CK_OBJECT_HANDLE basekey,
ckAttrPtr a, CK_ULONG alen, CK_OBJECT_HANDLE_PTR key)
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE basekey,
CK_ATTRIBUTE_PTR a, CK_ULONG alen, CK_OBJECT_HANDLE_PTR key)
{
MECH_TO_C(m, mech);
ATTR_TO_C(tempc, a, alen, NULL);
CK_RV e = c->sym->C_DeriveKey(session, m, basekey, tempc, alen, key);
ATTR_FREE(tempc);
return e;
return c->sym->C_DeriveKey(session, mechanism, basekey, a, alen, key);
}
CK_RV UnwrapKey(struct ctx * c, CK_SESSION_HANDLE session,
ckMechPtr mech, CK_OBJECT_HANDLE unwrappingkey,
CK_MECHANISM_PTR mechanism, CK_OBJECT_HANDLE unwrappingkey,
CK_BYTE_PTR wrappedkey, CK_ULONG wrappedkeylen,
ckAttrPtr a, CK_ULONG alen, CK_OBJECT_HANDLE_PTR key)
CK_ATTRIBUTE_PTR a, CK_ULONG alen, CK_OBJECT_HANDLE_PTR key)
{
MECH_TO_C(m, mech);
ATTR_TO_C(tempc, a, alen, NULL);
CK_RV e = c->sym->C_UnwrapKey(session, m, unwrappingkey, wrappedkey,
wrappedkeylen, tempc, alen, key);
ATTR_FREE(tempc);
return e;
return c->sym->C_UnwrapKey(session, mechanism, unwrappingkey, wrappedkey,
wrappedkeylen, a, alen, key);
}
CK_RV SeedRandom(struct ctx * c, CK_SESSION_HANDLE session, CK_BYTE_PTR seed,
@ -783,37 +763,11 @@ CK_RV WaitForSlotEvent(struct ctx * c, CK_FLAGS flags, CK_ULONG_PTR slot)
return e;
}
#ifdef REPACK_STRUCTURES
CK_RV attrsToC(CK_ATTRIBUTE_PTR *attrOut, ckAttrPtr attrIn, CK_ULONG count) {
CK_ATTRIBUTE_PTR attr = calloc(count, sizeof(CK_ATTRIBUTE));
if (attr == NULL) {
return CKR_HOST_MEMORY;
}
for (int i = 0; i < count; i++) {
attr[i].type = attrIn[i].type;
attr[i].pValue = attrIn[i].pValue;
attr[i].ulValueLen = attrIn[i].ulValueLen;
}
*attrOut = attr;
return CKR_OK;
static inline CK_VOID_PTR getAttributePval(CK_ATTRIBUTE_PTR a)
{
return a->pValue;
}
void attrsFromC(ckAttrPtr attrOut, CK_ATTRIBUTE_PTR attrIn, CK_ULONG count) {
for (int i = 0; i < count; i++) {
attrOut[i].type = attrIn[i].type;
attrOut[i].pValue = attrIn[i].pValue;
attrOut[i].ulValueLen = attrIn[i].ulValueLen;
}
}
void mechToC(CK_MECHANISM_PTR mechOut, ckMechPtr mechIn) {
mechOut->mechanism = mechIn->mechanism;
mechOut->pParameter = mechIn->pParameter;
mechOut->ulParameterLen = mechIn->ulParameterLen;
}
#endif
*/
import "C"
import "strings"
@ -827,11 +781,6 @@ type Ctx struct {
// New creates a new context and initializes the module/library for use.
func New(module string) *Ctx {
// libtool-ltdl will return an assertion error if passed an empty string, so
// we check for it explicitly.
if module == "" {
return nil
}
c := new(Ctx)
mod := C.CString(module)
defer C.free(unsafe.Pointer(mod))
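As a usage sketch of the loader above: open a module by path, initialize it, and clean up on failure. The module path and helper name are placeholders; this assumes `fmt` and `github.com/miekg/pkcs11` are imported.
```
func openModule(path string) (*pkcs11.Ctx, error) {
    p := pkcs11.New(path) // returns nil if the module cannot be loaded
    if p == nil {
        return nil, fmt.Errorf("could not load PKCS#11 module %q", path)
    }
    if err := p.Initialize(); err != nil {
        p.Destroy()
        return nil, err
    }
    return p, nil
}
```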
@ -1124,21 +1073,22 @@ func (c *Ctx) GetObjectSize(sh SessionHandle, oh ObjectHandle) (uint, error) {
func (c *Ctx) GetAttributeValue(sh SessionHandle, o ObjectHandle, a []*Attribute) ([]*Attribute, error) {
// copy the attribute list and make all the values nil, so that
// the C function can (allocate) fill them in
pa := make([]C.ckAttr, len(a))
pa := make([]C.CK_ATTRIBUTE, len(a))
for i := 0; i < len(a); i++ {
pa[i]._type = C.CK_ATTRIBUTE_TYPE(a[i].Type)
}
e := C.GetAttributeValue(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_OBJECT_HANDLE(o), C.ckAttrPtr(&pa[0]), C.CK_ULONG(len(a)))
if toError(e) != nil {
return nil, toError(e)
e := C.GetAttributeValue(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_OBJECT_HANDLE(o), &pa[0], C.CK_ULONG(len(a)))
if err := toError(e); err != nil {
return nil, err
}
a1 := make([]*Attribute, len(a))
for i, c := range pa {
x := new(Attribute)
x.Type = uint(c._type)
if int(c.ulValueLen) != -1 {
x.Value = C.GoBytes(unsafe.Pointer(c.pValue), C.int(c.ulValueLen))
C.free(unsafe.Pointer(c.pValue))
buf := unsafe.Pointer(C.getAttributePval(&c))
x.Value = C.GoBytes(buf, C.int(c.ulValueLen))
C.free(buf)
}
a1[i] = x
}
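A sketch of the caller's side of this two-call pattern: pass only the attribute types (nil values) and read back the filled-in copies. It assumes `p`, `session`, and an object handle; CKA_LABEL and CKA_ID are example attributes.
```
func readLabelAndID(p *pkcs11.Ctx, session pkcs11.SessionHandle, obj pkcs11.ObjectHandle) (label, id []byte, err error) {
    // The wrapper performs the PKCS#11 two-step internally:
    // query lengths first, allocate, then query the values.
    attrs, err := p.GetAttributeValue(session, obj, []*pkcs11.Attribute{
        pkcs11.NewAttribute(pkcs11.CKA_LABEL, nil),
        pkcs11.NewAttribute(pkcs11.CKA_ID, nil),
    })
    if err != nil {
        return nil, nil, err
    }
    return attrs[0].Value, attrs[1].Value, nil
}
```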
@ -1164,8 +1114,10 @@ func (c *Ctx) FindObjectsInit(sh SessionHandle, temp []*Attribute) error {
// FindObjects continues a search for token and session
// objects that match a template, obtaining additional object
// handles. The returned boolean indicates if the list would
// have been larger than max.
// handles. Calling the function repeatedly may yield additional results until
// an empty slice is returned.
//
// The returned boolean value is deprecated and should be ignored.
func (c *Ctx) FindObjects(sh SessionHandle, max int) ([]ObjectHandle, bool, error) {
var (
objectList C.CK_OBJECT_HANDLE_PTR
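The repeated-call loop described in the FindObjects comment above, sketched end to end (init, repeated FindObjects, final). The batch size of 100 is arbitrary, and the boolean result is ignored as deprecated.
```
func findAll(p *pkcs11.Ctx, session pkcs11.SessionHandle, template []*pkcs11.Attribute) ([]pkcs11.ObjectHandle, error) {
    if err := p.FindObjectsInit(session, template); err != nil {
        return nil, err
    }
    defer p.FindObjectsFinal(session)
    var handles []pkcs11.ObjectHandle
    for {
        // Keep calling until no more handles are returned.
        objs, _, err := p.FindObjects(session, 100)
        if err != nil {
            return nil, err
        }
        if len(objs) == 0 {
            break
        }
        handles = append(handles, objs...)
    }
    return handles, nil
}
```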
@ -1193,7 +1145,7 @@ func (c *Ctx) FindObjectsFinal(sh SessionHandle) error {
// EncryptInit initializes an encryption operation.
func (c *Ctx) EncryptInit(sh SessionHandle, m []*Mechanism, o ObjectHandle) error {
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.EncryptInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(o))
return toError(e)
@ -1205,7 +1157,7 @@ func (c *Ctx) Encrypt(sh SessionHandle, message []byte) ([]byte, error) {
enc C.CK_BYTE_PTR
enclen C.CK_ULONG
)
e := C.Encrypt(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&message[0])), C.CK_ULONG(len(message)), &enc, &enclen)
e := C.Encrypt(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(message), C.CK_ULONG(len(message)), &enc, &enclen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1220,7 +1172,7 @@ func (c *Ctx) EncryptUpdate(sh SessionHandle, plain []byte) ([]byte, error) {
part C.CK_BYTE_PTR
partlen C.CK_ULONG
)
e := C.EncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&plain[0])), C.CK_ULONG(len(plain)), &part, &partlen)
e := C.EncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(plain), C.CK_ULONG(len(plain)), &part, &partlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1246,19 +1198,19 @@ func (c *Ctx) EncryptFinal(sh SessionHandle) ([]byte, error) {
// DecryptInit initializes a decryption operation.
func (c *Ctx) DecryptInit(sh SessionHandle, m []*Mechanism, o ObjectHandle) error {
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.DecryptInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(o))
return toError(e)
}
// Decrypt decrypts encrypted data in a single part.
func (c *Ctx) Decrypt(sh SessionHandle, cypher []byte) ([]byte, error) {
func (c *Ctx) Decrypt(sh SessionHandle, cipher []byte) ([]byte, error) {
var (
plain C.CK_BYTE_PTR
plainlen C.CK_ULONG
)
e := C.Decrypt(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&cypher[0])), C.CK_ULONG(len(cypher)), &plain, &plainlen)
e := C.Decrypt(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(cipher), C.CK_ULONG(len(cipher)), &plain, &plainlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1273,7 +1225,7 @@ func (c *Ctx) DecryptUpdate(sh SessionHandle, cipher []byte) ([]byte, error) {
part C.CK_BYTE_PTR
partlen C.CK_ULONG
)
e := C.DecryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&cipher[0])), C.CK_ULONG(len(cipher)), &part, &partlen)
e := C.DecryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(cipher), C.CK_ULONG(len(cipher)), &part, &partlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1299,7 +1251,7 @@ func (c *Ctx) DecryptFinal(sh SessionHandle) ([]byte, error) {
// DigestInit initializes a message-digesting operation.
func (c *Ctx) DigestInit(sh SessionHandle, m []*Mechanism) error {
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.DigestInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech)
return toError(e)
@ -1311,7 +1263,7 @@ func (c *Ctx) Digest(sh SessionHandle, message []byte) ([]byte, error) {
hash C.CK_BYTE_PTR
hashlen C.CK_ULONG
)
e := C.Digest(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&message[0])), C.CK_ULONG(len(message)), &hash, &hashlen)
e := C.Digest(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(message), C.CK_ULONG(len(message)), &hash, &hashlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1322,7 +1274,7 @@ func (c *Ctx) Digest(sh SessionHandle, message []byte) ([]byte, error) {
// DigestUpdate continues a multiple-part message-digesting operation.
func (c *Ctx) DigestUpdate(sh SessionHandle, message []byte) error {
e := C.DigestUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&message[0])), C.CK_ULONG(len(message)))
e := C.DigestUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(message), C.CK_ULONG(len(message)))
if toError(e) != nil {
return toError(e)
}
@ -1359,7 +1311,7 @@ func (c *Ctx) DigestFinal(sh SessionHandle) ([]byte, error) {
// operation, where the signature is (will be) an appendix to
// the data, and plaintext cannot be recovered from the signature.
func (c *Ctx) SignInit(sh SessionHandle, m []*Mechanism, o ObjectHandle) error {
arena, mech, _ := cMechanismList(m) // Only the first is used, but still use a list.
arena, mech := cMechanism(m)
defer arena.Free()
e := C.SignInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(o))
return toError(e)
@ -1372,7 +1324,7 @@ func (c *Ctx) Sign(sh SessionHandle, message []byte) ([]byte, error) {
sig C.CK_BYTE_PTR
siglen C.CK_ULONG
)
e := C.Sign(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&message[0])), C.CK_ULONG(len(message)), &sig, &siglen)
e := C.Sign(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(message), C.CK_ULONG(len(message)), &sig, &siglen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1385,7 +1337,7 @@ func (c *Ctx) Sign(sh SessionHandle, message []byte) ([]byte, error) {
// where the signature is (will be) an appendix to the data,
// and plaintext cannot be recovered from the signature.
func (c *Ctx) SignUpdate(sh SessionHandle, message []byte) error {
e := C.SignUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&message[0])), C.CK_ULONG(len(message)))
e := C.SignUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(message), C.CK_ULONG(len(message)))
return toError(e)
}
@ -1406,7 +1358,7 @@ func (c *Ctx) SignFinal(sh SessionHandle) ([]byte, error) {
// SignRecoverInit initializes a signature operation, where the data can be recovered from the signature.
func (c *Ctx) SignRecoverInit(sh SessionHandle, m []*Mechanism, key ObjectHandle) error {
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.SignRecoverInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(key))
return toError(e)
@ -1418,7 +1370,7 @@ func (c *Ctx) SignRecover(sh SessionHandle, data []byte) ([]byte, error) {
sig C.CK_BYTE_PTR
siglen C.CK_ULONG
)
e := C.SignRecover(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&data[0])), C.CK_ULONG(len(data)), &sig, &siglen)
e := C.SignRecover(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(data), C.CK_ULONG(len(data)), &sig, &siglen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1431,7 +1383,7 @@ func (c *Ctx) SignRecover(sh SessionHandle, data []byte) ([]byte, error) {
// signature is an appendix to the data, and plaintext cannot
// be recovered from the signature (e.g. DSA).
func (c *Ctx) VerifyInit(sh SessionHandle, m []*Mechanism, key ObjectHandle) error {
arena, mech, _ := cMechanismList(m) // only use one here
arena, mech := cMechanism(m)
defer arena.Free()
e := C.VerifyInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(key))
return toError(e)
@ -1441,7 +1393,7 @@ func (c *Ctx) VerifyInit(sh SessionHandle, m []*Mechanism, key ObjectHandle) err
// where the signature is an appendix to the data, and plaintext
// cannot be recovered from the signature.
func (c *Ctx) Verify(sh SessionHandle, data []byte, signature []byte) error {
e := C.Verify(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&data[0])), C.CK_ULONG(len(data)), C.CK_BYTE_PTR(unsafe.Pointer(&signature[0])), C.CK_ULONG(len(signature)))
e := C.Verify(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(data), C.CK_ULONG(len(data)), cMessage(signature), C.CK_ULONG(len(signature)))
return toError(e)
}
@ -1449,21 +1401,21 @@ func (c *Ctx) Verify(sh SessionHandle, data []byte, signature []byte) error {
// operation, where the signature is an appendix to the data,
// and plaintext cannot be recovered from the signature.
func (c *Ctx) VerifyUpdate(sh SessionHandle, part []byte) error {
e := C.VerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&part[0])), C.CK_ULONG(len(part)))
e := C.VerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(part), C.CK_ULONG(len(part)))
return toError(e)
}
// VerifyFinal finishes a multiple-part verification
// operation, checking the signature.
func (c *Ctx) VerifyFinal(sh SessionHandle, signature []byte) error {
e := C.VerifyFinal(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&signature[0])), C.CK_ULONG(len(signature)))
e := C.VerifyFinal(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(signature), C.CK_ULONG(len(signature)))
return toError(e)
}
// VerifyRecoverInit initializes a signature verification
// operation, where the data is recovered from the signature.
func (c *Ctx) VerifyRecoverInit(sh SessionHandle, m []*Mechanism, key ObjectHandle) error {
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.VerifyRecoverInit(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(key))
return toError(e)
@ -1476,7 +1428,7 @@ func (c *Ctx) VerifyRecover(sh SessionHandle, signature []byte) ([]byte, error)
data C.CK_BYTE_PTR
datalen C.CK_ULONG
)
e := C.DecryptVerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&signature[0])), C.CK_ULONG(len(signature)), &data, &datalen)
e := C.DecryptVerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(signature), C.CK_ULONG(len(signature)), &data, &datalen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1491,7 +1443,7 @@ func (c *Ctx) DigestEncryptUpdate(sh SessionHandle, part []byte) ([]byte, error)
enc C.CK_BYTE_PTR
enclen C.CK_ULONG
)
e := C.DigestEncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&part[0])), C.CK_ULONG(len(part)), &enc, &enclen)
e := C.DigestEncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(part), C.CK_ULONG(len(part)), &enc, &enclen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1506,7 +1458,7 @@ func (c *Ctx) DecryptDigestUpdate(sh SessionHandle, cipher []byte) ([]byte, erro
part C.CK_BYTE_PTR
partlen C.CK_ULONG
)
e := C.DecryptDigestUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&cipher[0])), C.CK_ULONG(len(cipher)), &part, &partlen)
e := C.DecryptDigestUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(cipher), C.CK_ULONG(len(cipher)), &part, &partlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1521,7 +1473,7 @@ func (c *Ctx) SignEncryptUpdate(sh SessionHandle, part []byte) ([]byte, error) {
enc C.CK_BYTE_PTR
enclen C.CK_ULONG
)
e := C.SignEncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&part[0])), C.CK_ULONG(len(part)), &enc, &enclen)
e := C.SignEncryptUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(part), C.CK_ULONG(len(part)), &enc, &enclen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1536,7 +1488,7 @@ func (c *Ctx) DecryptVerifyUpdate(sh SessionHandle, cipher []byte) ([]byte, erro
part C.CK_BYTE_PTR
partlen C.CK_ULONG
)
e := C.DecryptVerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), C.CK_BYTE_PTR(unsafe.Pointer(&cipher[0])), C.CK_ULONG(len(cipher)), &part, &partlen)
e := C.DecryptVerifyUpdate(c.ctx, C.CK_SESSION_HANDLE(sh), cMessage(cipher), C.CK_ULONG(len(cipher)), &part, &partlen)
if toError(e) != nil {
return nil, toError(e)
}
@ -1550,7 +1502,7 @@ func (c *Ctx) GenerateKey(sh SessionHandle, m []*Mechanism, temp []*Attribute) (
var key C.CK_OBJECT_HANDLE
attrarena, t, tcount := cAttributeList(temp)
defer attrarena.Free()
mecharena, mech, _ := cMechanismList(m)
mecharena, mech := cMechanism(m)
defer mecharena.Free()
e := C.GenerateKey(c.ctx, C.CK_SESSION_HANDLE(sh), mech, t, tcount, C.CK_OBJECT_HANDLE_PTR(&key))
e1 := toError(e)
@ -1570,7 +1522,7 @@ func (c *Ctx) GenerateKeyPair(sh SessionHandle, m []*Mechanism, public, private
defer pubarena.Free()
privarena, priv, privcount := cAttributeList(private)
defer privarena.Free()
mecharena, mech, _ := cMechanismList(m)
mecharena, mech := cMechanism(m)
defer mecharena.Free()
e := C.GenerateKeyPair(c.ctx, C.CK_SESSION_HANDLE(sh), mech, pub, pubcount, priv, privcount, C.CK_OBJECT_HANDLE_PTR(&pubkey), C.CK_OBJECT_HANDLE_PTR(&privkey))
e1 := toError(e)
@ -1586,7 +1538,7 @@ func (c *Ctx) WrapKey(sh SessionHandle, m []*Mechanism, wrappingkey, key ObjectH
wrappedkey C.CK_BYTE_PTR
wrappedkeylen C.CK_ULONG
)
arena, mech, _ := cMechanismList(m)
arena, mech := cMechanism(m)
defer arena.Free()
e := C.WrapKey(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(wrappingkey), C.CK_OBJECT_HANDLE(key), &wrappedkey, &wrappedkeylen)
if toError(e) != nil {
@ -1602,7 +1554,7 @@ func (c *Ctx) UnwrapKey(sh SessionHandle, m []*Mechanism, unwrappingkey ObjectHa
var key C.CK_OBJECT_HANDLE
attrarena, ac, aclen := cAttributeList(a)
defer attrarena.Free()
mecharena, mech, _ := cMechanismList(m)
mecharena, mech := cMechanism(m)
defer mecharena.Free()
e := C.UnwrapKey(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(unwrappingkey), C.CK_BYTE_PTR(unsafe.Pointer(&wrappedkey[0])), C.CK_ULONG(len(wrappedkey)), ac, aclen, &key)
return ObjectHandle(key), toError(e)
@ -1613,7 +1565,7 @@ func (c *Ctx) DeriveKey(sh SessionHandle, m []*Mechanism, basekey ObjectHandle,
var key C.CK_OBJECT_HANDLE
attrarena, ac, aclen := cAttributeList(a)
defer attrarena.Free()
mecharena, mech, _ := cMechanismList(m)
mecharena, mech := cMechanism(m)
defer mecharena.Free()
e := C.DeriveKey(c.ctx, C.CK_SESSION_HANDLE(sh), mech, C.CK_OBJECT_HANDLE(basekey), ac, aclen, &key)
return ObjectHandle(key), toError(e)

View File

@ -13,7 +13,7 @@
#define CK_CALLBACK_FUNCTION(returnType, name) returnType (* name)
#include <unistd.h>
#ifdef REPACK_STRUCTURES
#ifdef PACKED_STRUCTURES
# pragma pack(push, 1)
# include "pkcs11.h"
# pragma pack(pop)
@ -21,12 +21,9 @@
# include "pkcs11.h"
#endif
#ifdef REPACK_STRUCTURES
// Go doesn't support structures with non-default packing, but PKCS#11 requires
// pack(1) on Windows. Use structures with the same members as the CK_ ones but
// default packing, and copy data between the two.
// Copy of CK_INFO but with default alignment (not packed). Go hides unaligned
// struct fields so copying to an aligned struct is necessary to read CK_INFO
// from Go on Windows where packing is required.
typedef struct ckInfo {
CK_VERSION cryptokiVersion;
CK_UTF8CHAR manufacturerID[32];
@ -34,50 +31,3 @@ typedef struct ckInfo {
CK_UTF8CHAR libraryDescription[32];
CK_VERSION libraryVersion;
} ckInfo, *ckInfoPtr;
typedef struct ckAttr {
CK_ATTRIBUTE_TYPE type;
CK_VOID_PTR pValue;
CK_ULONG ulValueLen;
} ckAttr, *ckAttrPtr;
typedef struct ckMech {
CK_MECHANISM_TYPE mechanism;
CK_VOID_PTR pParameter;
CK_ULONG ulParameterLen;
} ckMech, *ckMechPtr;
CK_RV attrsToC(CK_ATTRIBUTE_PTR *attrOut, ckAttrPtr attrIn, CK_ULONG count);
void attrsFromC(ckAttrPtr attrOut, CK_ATTRIBUTE_PTR attrIn, CK_ULONG count);
void mechToC(CK_MECHANISM_PTR mechOut, ckMechPtr mechIn);
#define ATTR_TO_C(aout, ain, count, other) \
CK_ATTRIBUTE_PTR aout; \
{ \
CK_RV e = attrsToC(&aout, ain, count); \
if (e != CKR_OK ) { \
if (other != NULL) free(other); \
return e; \
} \
}
#define ATTR_FREE(aout) free(aout)
#define ATTR_FROM_C(aout, ain, count) attrsFromC(aout, ain, count)
#define MECH_TO_C(mout, min) \
CK_MECHANISM mval, *mout = &mval; \
if (min != NULL) { mechToC(mout, min); \
} else { mout = NULL; }
#else // REPACK_STRUCTURES
// Dummy types and macros to avoid any unnecessary copying on UNIX
typedef CK_INFO ckInfo, *ckInfoPtr;
typedef CK_ATTRIBUTE ckAttr, *ckAttrPtr;
typedef CK_MECHANISM ckMech, *ckMechPtr;
#define ATTR_TO_C(aout, ain, count, other) CK_ATTRIBUTE_PTR aout = ain
#define ATTR_FREE(aout)
#define ATTR_FROM_C(aout, ain, count)
#define MECH_TO_C(mout, min) CK_MECHANISM_PTR mout = min
#endif // REPACK_STRUCTURES

View File

@ -383,6 +383,11 @@ typedef CK_ULONG CK_KEY_TYPE;
#define CKK_GOSTR3411 0x00000031UL
#define CKK_GOST28147 0x00000032UL
#define CKK_SHA3_224_HMAC 0x00000033UL
#define CKK_SHA3_256_HMAC 0x00000034UL
#define CKK_SHA3_384_HMAC 0x00000035UL
#define CKK_SHA3_512_HMAC 0x00000036UL
#define CKK_VENDOR_DEFINED 0x80000000UL
@ -610,6 +615,10 @@ typedef CK_ULONG CK_MECHANISM_TYPE;
#define CKM_DSA_SHA256 0x00000014UL
#define CKM_DSA_SHA384 0x00000015UL
#define CKM_DSA_SHA512 0x00000016UL
#define CKM_DSA_SHA3_224 0x00000018UL
#define CKM_DSA_SHA3_256 0x00000019UL
#define CKM_DSA_SHA3_384 0x0000001AUL
#define CKM_DSA_SHA3_512 0x0000001BUL
#define CKM_DH_PKCS_KEY_PAIR_GEN 0x00000020UL
#define CKM_DH_PKCS_DERIVE 0x00000021UL
@ -643,6 +652,15 @@ typedef CK_ULONG CK_MECHANISM_TYPE;
#define CKM_SHA512_T_HMAC_GENERAL 0x00000052UL
#define CKM_SHA512_T_KEY_DERIVATION 0x00000053UL
#define CKM_SHA3_256_RSA_PKCS 0x00000060UL
#define CKM_SHA3_384_RSA_PKCS 0x00000061UL
#define CKM_SHA3_512_RSA_PKCS 0x00000062UL
#define CKM_SHA3_256_RSA_PKCS_PSS 0x00000063UL
#define CKM_SHA3_384_RSA_PKCS_PSS 0x00000064UL
#define CKM_SHA3_512_RSA_PKCS_PSS 0x00000065UL
#define CKM_SHA3_224_RSA_PKCS 0x00000066UL
#define CKM_SHA3_224_RSA_PKCS_PSS 0x00000067UL
#define CKM_RC2_KEY_GEN 0x00000100UL
#define CKM_RC2_ECB 0x00000101UL
#define CKM_RC2_CBC 0x00000102UL
@ -724,6 +742,23 @@ typedef CK_ULONG CK_MECHANISM_TYPE;
#define CKM_ACTI 0x000002A0UL
#define CKM_ACTI_KEY_GEN 0x000002A1UL
#define CKM_SHA3_256 0x000002B0UL
#define CKM_SHA3_256_HMAC 0x000002B1UL
#define CKM_SHA3_256_HMAC_GENERAL 0x000002B2UL
#define CKM_SHA3_256_KEY_GEN 0x000002B3UL
#define CKM_SHA3_224 0x000002B5UL
#define CKM_SHA3_224_HMAC 0x000002B6UL
#define CKM_SHA3_224_HMAC_GENERAL 0x000002B7UL
#define CKM_SHA3_224_KEY_GEN 0x000002B8UL
#define CKM_SHA3_384 0x000002C0UL
#define CKM_SHA3_384_HMAC 0x000002C1UL
#define CKM_SHA3_384_HMAC_GENERAL 0x000002C2UL
#define CKM_SHA3_384_KEY_GEN 0x000002C3UL
#define CKM_SHA3_512 0x000002D0UL
#define CKM_SHA3_512_HMAC 0x000002D1UL
#define CKM_SHA3_512_HMAC_GENERAL 0x000002D2UL
#define CKM_SHA3_512_KEY_GEN 0x000002D3UL
#define CKM_CAST_KEY_GEN 0x00000300UL
#define CKM_CAST_ECB 0x00000301UL
#define CKM_CAST_CBC 0x00000302UL
@ -789,6 +824,12 @@ typedef CK_ULONG CK_MECHANISM_TYPE;
#define CKM_SHA384_KEY_DERIVATION 0x00000394UL
#define CKM_SHA512_KEY_DERIVATION 0x00000395UL
#define CKM_SHA224_KEY_DERIVATION 0x00000396UL
#define CKM_SHA3_256_KEY_DERIVE 0x00000397UL
#define CKM_SHA3_224_KEY_DERIVE 0x00000398UL
#define CKM_SHA3_384_KEY_DERIVE 0x00000399UL
#define CKM_SHA3_512_KEY_DERIVE 0x0000039AUL
#define CKM_SHAKE_128_KEY_DERIVE 0x0000039BUL
#define CKM_SHAKE_256_KEY_DERIVE 0x0000039CUL
#define CKM_PBE_MD2_DES_CBC 0x000003A0UL
#define CKM_PBE_MD5_DES_CBC 0x000003A1UL
@ -1299,7 +1340,10 @@ typedef CK_ULONG CK_EC_KDF_TYPE;
#define CKD_SHA384_KDF 0x00000007UL
#define CKD_SHA512_KDF 0x00000008UL
#define CKD_CPDIVERSIFY_KDF 0x00000009UL
#define CKD_SHA3_224_KDF 0x0000000AUL
#define CKD_SHA3_256_KDF 0x0000000BUL
#define CKD_SHA3_384_KDF 0x0000000CUL
#define CKD_SHA3_512_KDF 0x0000000DUL
/* CK_ECDH1_DERIVE_PARAMS provides the parameters to the
* CKM_ECDH1_DERIVE and CKM_ECDH1_COFACTOR_DERIVE mechanisms,

View File

@ -13,6 +13,16 @@ CK_ULONG Index(CK_ULONG_PTR array, CK_ULONG i)
{
return array[i];
}
static inline void putAttributePval(CK_ATTRIBUTE_PTR a, CK_VOID_PTR pValue)
{
a->pValue = pValue;
}
static inline void putMechanismParam(CK_MECHANISM_PTR m, CK_VOID_PTR pParameter)
{
m->pParameter = pParameter;
}
*/
import "C"
@ -187,22 +197,22 @@ func NewAttribute(typ uint, x interface{}) *Attribute {
}
// cAttribute returns the start address and the length of an attribute list.
func cAttributeList(a []*Attribute) (arena, C.ckAttrPtr, C.CK_ULONG) {
func cAttributeList(a []*Attribute) (arena, C.CK_ATTRIBUTE_PTR, C.CK_ULONG) {
var arena arena
if len(a) == 0 {
return nil, nil, 0
}
pa := make([]C.ckAttr, len(a))
for i := 0; i < len(a); i++ {
pa[i]._type = C.CK_ATTRIBUTE_TYPE(a[i].Type)
//skip attribute if length is 0 to prevent panic in arena.Allocate
if a[i].Value == nil || len(a[i].Value) == 0 {
continue
pa := make([]C.CK_ATTRIBUTE, len(a))
for i, attr := range a {
pa[i]._type = C.CK_ATTRIBUTE_TYPE(attr.Type)
if len(attr.Value) != 0 {
buf, len := arena.Allocate(attr.Value)
// field is unaligned on windows so this has to call into C
C.putAttributePval(&pa[i], buf)
pa[i].ulValueLen = len
}
pa[i].pValue, pa[i].ulValueLen = arena.Allocate(a[i].Value)
}
return arena, C.ckAttrPtr(&pa[0]), C.CK_ULONG(len(a))
return arena, &pa[0], C.CK_ULONG(len(a))
}
func cDate(t time.Time) []byte {
@ -221,6 +231,7 @@ func cDate(t time.Time) []byte {
type Mechanism struct {
Mechanism uint
Parameter []byte
generator interface{}
}
// NewMechanism returns a pointer to an initialized Mechanism.
@ -231,32 +242,44 @@ func NewMechanism(mech uint, x interface{}) *Mechanism {
return m
}
switch x.(type) {
case *GCMParams:
m.Parameter = cGCMParams(x.(*GCMParams))
switch p := x.(type) {
case *GCMParams, *OAEPParams, *ECDH1DeriveParams:
// contains pointers; defer serialization until cMechanism
m.generator = p
case []byte:
m.Parameter = p
default:
m.Parameter = x.([]byte)
panic("parameter must be one of type: []byte, *GCMParams, *OAEPParams, *ECDH1DeriveParams")
}
return m
}
func cMechanismList(m []*Mechanism) (arena, C.ckMechPtr, C.CK_ULONG) {
func cMechanism(mechList []*Mechanism) (arena, *C.CK_MECHANISM) {
if len(mechList) != 1 {
panic("expected exactly one mechanism")
}
mech := mechList[0]
cmech := &C.CK_MECHANISM{mechanism: C.CK_MECHANISM_TYPE(mech.Mechanism)}
// params that contain pointers are allocated here
param := mech.Parameter
var arena arena
if len(m) == 0 {
return nil, nil, 0
switch p := mech.generator.(type) {
case *GCMParams:
// uses its own arena because it has to outlive this function call (yuck)
param = cGCMParams(p)
case *OAEPParams:
param, arena = cOAEPParams(p, arena)
case *ECDH1DeriveParams:
param, arena = cECDH1DeriveParams(p, arena)
}
pm := make([]C.ckMech, len(m))
for i := 0; i < len(m); i++ {
pm[i].mechanism = C.CK_MECHANISM_TYPE(m[i].Mechanism)
//skip parameter if length is 0 to prevent panic in arena.Allocate
if m[i].Parameter == nil || len(m[i].Parameter) == 0 {
continue
}
pm[i].pParameter, pm[i].ulParameterLen = arena.Allocate(m[i].Parameter)
if len(param) != 0 {
buf, len := arena.Allocate(param)
// field is unaligned on windows so this has to call into C
C.putMechanismParam(cmech, buf)
cmech.ulParameterLen = len
}
return arena, C.ckMechPtr(&pm[0]), C.CK_ULONG(len(m))
return arena, cmech
}
// MechanismInfo provides information about a particular mechanism.
@ -265,3 +288,16 @@ type MechanismInfo struct {
MaxKeySize uint
Flags uint
}
// stubData is a persistent nonempty byte array used by cMessage.
var stubData = []byte{0}
// cMessage returns the pointer/length pair corresponding to data.
func cMessage(data []byte) (dataPtr C.CK_BYTE_PTR) {
l := len(data)
if l == 0 {
// &data[0] is forbidden in this case, so use a nontrivial array instead.
data = stubData
}
return C.CK_BYTE_PTR(unsafe.Pointer(&data[0]))
}
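A small sketch of why the stub buffer matters at the API surface: zero-length input, such as digesting an empty message, is valid because cMessage never takes the address of an element of an empty slice. It assumes `p` and `session` as before; CKM_SHA256 is an example mechanism.
```
func digestEmpty(p *pkcs11.Ctx, session pkcs11.SessionHandle) ([]byte, error) {
    mech := []*pkcs11.Mechanism{pkcs11.NewMechanism(pkcs11.CKM_SHA256, nil)}
    if err := p.DigestInit(session, mech); err != nil {
        return nil, err
    }
    // An empty message is passed through cMessage's stub pointer with length 0.
    return p.Digest(session, []byte{})
}
```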

vendor/github.com/miekg/pkcs11/vendor.go generated vendored Normal file
View File

@ -0,0 +1,127 @@
package pkcs11
// Vendor specific range for Ncipher network HSM.
const (
NFCK_VENDOR_NCIPHER = 0xde436972
CKA_NCIPHER = NFCK_VENDOR_NCIPHER
CKM_NCIPHER = NFCK_VENDOR_NCIPHER
CKK_NCIPHER = NFCK_VENDOR_NCIPHER
)
// Vendor specific mechanisms for HMAC on Ncipher HSMs where Ncipher does not allow use of generic_secret keys.
const (
CKM_NC_SHA_1_HMAC_KEY_GEN = CKM_NCIPHER + 0x3 /* no params */
CKM_NC_MD5_HMAC_KEY_GEN = CKM_NCIPHER + 0x6 /* no params */
CKM_NC_SHA224_HMAC_KEY_GEN = CKM_NCIPHER + 0x24 /* no params */
CKM_NC_SHA256_HMAC_KEY_GEN = CKM_NCIPHER + 0x25 /* no params */
CKM_NC_SHA384_HMAC_KEY_GEN = CKM_NCIPHER + 0x26 /* no params */
CKM_NC_SHA512_HMAC_KEY_GEN = CKM_NCIPHER + 0x27 /* no params */
)
// Vendor specific range for Mozilla NSS.
const (
NSSCK_VENDOR_NSS = 0x4E534350
CKO_NSS = CKO_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKK_NSS = CKK_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKC_NSS = CKC_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKA_NSS = CKA_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKA_TRUST = CKA_NSS + 0x2000
CKM_NSS = CKM_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKR_NSS = CKM_VENDOR_DEFINED | NSSCK_VENDOR_NSS
CKT_VENDOR_DEFINED = 0x80000000
CKT_NSS = CKT_VENDOR_DEFINED | NSSCK_VENDOR_NSS
)
// Vendor specific values for Mozilla NSS.
const (
CKO_NSS_CRL = CKO_NSS + 1
CKO_NSS_SMIME = CKO_NSS + 2
CKO_NSS_TRUST = CKO_NSS + 3
CKO_NSS_BUILTIN_ROOT_LIST = CKO_NSS + 4
CKO_NSS_NEWSLOT = CKO_NSS + 5
CKO_NSS_DELSLOT = CKO_NSS + 6
CKK_NSS_PKCS8 = CKK_NSS + 1
CKK_NSS_JPAKE_ROUND1 = CKK_NSS + 2
CKK_NSS_JPAKE_ROUND2 = CKK_NSS + 3
CKK_NSS_CHACHA20 = CKK_NSS + 4
CKA_NSS_URL = CKA_NSS + 1
CKA_NSS_EMAIL = CKA_NSS + 2
CKA_NSS_SMIME_INFO = CKA_NSS + 3
CKA_NSS_SMIME_TIMESTAMP = CKA_NSS + 4
CKA_NSS_PKCS8_SALT = CKA_NSS + 5
CKA_NSS_PASSWORD_CHECK = CKA_NSS + 6
CKA_NSS_EXPIRES = CKA_NSS + 7
CKA_NSS_KRL = CKA_NSS + 8
CKA_NSS_PQG_COUNTER = CKA_NSS + 20
CKA_NSS_PQG_SEED = CKA_NSS + 21
CKA_NSS_PQG_H = CKA_NSS + 22
CKA_NSS_PQG_SEED_BITS = CKA_NSS + 23
CKA_NSS_MODULE_SPEC = CKA_NSS + 24
CKA_NSS_OVERRIDE_EXTENSIONS = CKA_NSS + 25
CKA_NSS_JPAKE_SIGNERID = CKA_NSS + 26
CKA_NSS_JPAKE_PEERID = CKA_NSS + 27
CKA_NSS_JPAKE_GX1 = CKA_NSS + 28
CKA_NSS_JPAKE_GX2 = CKA_NSS + 29
CKA_NSS_JPAKE_GX3 = CKA_NSS + 30
CKA_NSS_JPAKE_GX4 = CKA_NSS + 31
CKA_NSS_JPAKE_X2 = CKA_NSS + 32
CKA_NSS_JPAKE_X2S = CKA_NSS + 33
CKA_NSS_MOZILLA_CA_POLICY = CKA_NSS + 34
CKA_TRUST_DIGITAL_SIGNATURE = CKA_TRUST + 1
CKA_TRUST_NON_REPUDIATION = CKA_TRUST + 2
CKA_TRUST_KEY_ENCIPHERMENT = CKA_TRUST + 3
CKA_TRUST_DATA_ENCIPHERMENT = CKA_TRUST + 4
CKA_TRUST_KEY_AGREEMENT = CKA_TRUST + 5
CKA_TRUST_KEY_CERT_SIGN = CKA_TRUST + 6
CKA_TRUST_CRL_SIGN = CKA_TRUST + 7
CKA_TRUST_SERVER_AUTH = CKA_TRUST + 8
CKA_TRUST_CLIENT_AUTH = CKA_TRUST + 9
CKA_TRUST_CODE_SIGNING = CKA_TRUST + 10
CKA_TRUST_EMAIL_PROTECTION = CKA_TRUST + 11
CKA_TRUST_IPSEC_END_SYSTEM = CKA_TRUST + 12
CKA_TRUST_IPSEC_TUNNEL = CKA_TRUST + 13
CKA_TRUST_IPSEC_USER = CKA_TRUST + 14
CKA_TRUST_TIME_STAMPING = CKA_TRUST + 15
CKA_TRUST_STEP_UP_APPROVED = CKA_TRUST + 16
CKA_CERT_SHA1_HASH = CKA_TRUST + 100
CKA_CERT_MD5_HASH = CKA_TRUST + 101
CKM_NSS_AES_KEY_WRAP = CKM_NSS + 1
CKM_NSS_AES_KEY_WRAP_PAD = CKM_NSS + 2
CKM_NSS_HKDF_SHA1 = CKM_NSS + 3
CKM_NSS_HKDF_SHA256 = CKM_NSS + 4
CKM_NSS_HKDF_SHA384 = CKM_NSS + 5
CKM_NSS_HKDF_SHA512 = CKM_NSS + 6
CKM_NSS_JPAKE_ROUND1_SHA1 = CKM_NSS + 7
CKM_NSS_JPAKE_ROUND1_SHA256 = CKM_NSS + 8
CKM_NSS_JPAKE_ROUND1_SHA384 = CKM_NSS + 9
CKM_NSS_JPAKE_ROUND1_SHA512 = CKM_NSS + 10
CKM_NSS_JPAKE_ROUND2_SHA1 = CKM_NSS + 11
CKM_NSS_JPAKE_ROUND2_SHA256 = CKM_NSS + 12
CKM_NSS_JPAKE_ROUND2_SHA384 = CKM_NSS + 13
CKM_NSS_JPAKE_ROUND2_SHA512 = CKM_NSS + 14
CKM_NSS_JPAKE_FINAL_SHA1 = CKM_NSS + 15
CKM_NSS_JPAKE_FINAL_SHA256 = CKM_NSS + 16
CKM_NSS_JPAKE_FINAL_SHA384 = CKM_NSS + 17
CKM_NSS_JPAKE_FINAL_SHA512 = CKM_NSS + 18
CKM_NSS_HMAC_CONSTANT_TIME = CKM_NSS + 19
CKM_NSS_SSL3_MAC_CONSTANT_TIME = CKM_NSS + 20
CKM_NSS_TLS_PRF_GENERAL_SHA256 = CKM_NSS + 21
CKM_NSS_TLS_MASTER_KEY_DERIVE_SHA256 = CKM_NSS + 22
CKM_NSS_TLS_KEY_AND_MAC_DERIVE_SHA256 = CKM_NSS + 23
CKM_NSS_TLS_MASTER_KEY_DERIVE_DH_SHA256 = CKM_NSS + 24
CKM_NSS_TLS_EXTENDED_MASTER_KEY_DERIVE = CKM_NSS + 25
CKM_NSS_TLS_EXTENDED_MASTER_KEY_DERIVE_DH = CKM_NSS + 26
CKM_NSS_CHACHA20_KEY_GEN = CKM_NSS + 27
CKM_NSS_CHACHA20_POLY1305 = CKM_NSS + 28
CKM_NSS_PKCS12_PBE_SHA224_HMAC_KEY_GEN = CKM_NSS + 29
CKM_NSS_PKCS12_PBE_SHA256_HMAC_KEY_GEN = CKM_NSS + 30
CKM_NSS_PKCS12_PBE_SHA384_HMAC_KEY_GEN = CKM_NSS + 31
CKM_NSS_PKCS12_PBE_SHA512_HMAC_KEY_GEN = CKM_NSS + 32
CKR_NSS_CERTDB_FAILED = CKR_NSS + 1
CKR_NSS_KEYDB_FAILED = CKR_NSS + 2
CKT_NSS_TRUSTED = CKT_NSS + 1
CKT_NSS_TRUSTED_DELEGATOR = CKT_NSS + 2
CKT_NSS_MUST_VERIFY_TRUST = CKT_NSS + 3
CKT_NSS_NOT_TRUSTED = CKT_NSS + 10
CKT_NSS_TRUST_UNKNOWN = CKT_NSS + 5
)

View File

@ -27,9 +27,11 @@ Read the proposal from https://github.com/moby/moby/issues/32925
Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
:information_source: If you are visiting this repo for the usage of experimental Dockerfile features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`, please refer to [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).
### Used by
[Moby](https://github.com/moby/moby/pull/37151)
[Moby & Docker](https://github.com/moby/moby/pull/37151)
[img](https://github.com/genuinetools/img)
@ -37,6 +39,12 @@ Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056c
[container build interface](https://github.com/containerbuilding/cbi)
[Knative Build Templates](https://github.com/knative/build-templates)
[boss](https://github.com/crosbymichael/boss)
[Rio](https://github.com/rancher/rio) (on roadmap)
### Quick start
Dependencies:
@ -79,6 +87,7 @@ See [`solver/pb/ops.proto`](./solver/pb/ops.proto) for the format definition.
Currently, the following high-level languages have been implemented for LLB:
- Dockerfile (See [Exploring Dockerfiles](#exploring-dockerfiles))
- [Buildpacks](https://github.com/tonistiigi/buildkit-pack)
- (open a PR to add your own language)
For understanding the basics of LLB, `examples/buildkit*` directory contains scripts that define how to build different configurations of BuildKit itself and its dependencies using the `client` package. Running one of these scripts generates a protobuf definition of a build graph. Note that the script itself does not execute any steps of the build.
@ -136,15 +145,19 @@ build-using-dockerfile -t mybuildkit -f ./hack/dockerfiles/test.Dockerfile .
docker inspect myimage
```
##### Building a Dockerfile using [external frontend](https://hub.docker.com/r/tonistiigi/dockerfile/tags/):
##### Building a Dockerfile using [external frontend](https://hub.docker.com/r/docker/dockerfile/tags/):
During development, an external version of the Dockerfile frontend is pushed to https://hub.docker.com/r/tonistiigi/dockerfile that can be used with the gateway frontend. The source for the external frontend is currently located in `./frontend/dockerfile/cmd/dockerfile-frontend` but will move out of this repository in the future ([#163](https://github.com/moby/buildkit/issues/163)). For automatic build from master branch of this repository `tonistiigi/dockerfile:master` image can be used.
External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in `./frontend/dockerfile/cmd/dockerfile-frontend` but will move out of this repository in the future ([#163](https://github.com/moby/buildkit/issues/163)). For automatic builds from the master branch of this repository, the `docker/dockerfile-upstream:master` or `docker/dockerfile-upstream:master-experimental` image can be used.
```
buildctl build --frontend=gateway.v0 --frontend-opt=source=tonistiigi/dockerfile --local context=. --local dockerfile=.
buildctl build --frontend gateway.v0 --frontend-opt=source=tonistiigi/dockerfile --frontend-opt=context=git://github.com/moby/moby --frontend-opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
buildctl build --frontend=gateway.v0 --frontend-opt=source=docker/dockerfile --local context=. --local dockerfile=.
buildctl build --frontend gateway.v0 --frontend-opt=source=docker/dockerfile --frontend-opt=context=git://github.com/moby/moby --frontend-opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
```
##### Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`
See [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).
### Exporters
By default, the build result and intermediate cache will only remain internally in BuildKit. Exporter needs to be specified to retrieve the result.
@ -207,15 +220,22 @@ buildctl debug workers -v
BuildKit can also be used by running the `buildkitd` daemon inside a Docker container and accessing it remotely. The client tool `buildctl` is also available for Mac and Windows.
We provide `buildkitd` container images as [`moby/buildkit`](https://hub.docker.com/r/moby/buildkit/tags/):
* `moby/buildkit:latest`: built from the latest regular [release](https://github.com/moby/buildkit/releases)
* `moby/buildkit:rootless`: same as `latest` but runs as an unprivileged user, see [`docs/rootless.md`](docs/rootless.md)
* `moby/buildkit:master`: built from the master branch
* `moby/buildkit:master-rootless`: same as master but runs as an unprivileged user, see [`docs/rootless.md`](docs/rootless.md)
To run daemon in a container:
```
docker run -d --privileged -p 1234:1234 tonistiigi/buildkit --addr tcp://0.0.0.0:1234
docker run -d --privileged -p 1234:1234 moby/buildkit:latest --addr tcp://0.0.0.0:1234
export BUILDKIT_HOST=tcp://0.0.0.0:1234
buildctl build --help
```
The `tonistiigi/buildkit` image can be built locally using the Dockerfile in `./hack/dockerfiles/test.Dockerfile`.
The images can be also built locally using `./hack/dockerfiles/test.Dockerfile` (or `./hack/dockerfiles/test.buildkit.Dockerfile` if you already have BuildKit).
### Opentracing support
@ -232,7 +252,7 @@ export JAEGER_TRACE=0.0.0.0:6831
### Supported runc version
During development, BuildKit is tested with the version of runc that is being used by the containerd repository. Please refer to [runc.md](https://github.com/containerd/containerd/blob/v1.1.3/RUNC.md) for more information.
During development, BuildKit is tested with the version of runc that is being used by the containerd repository. Please refer to [runc.md](https://github.com/containerd/containerd/blob/v1.2.0-rc.1/RUNC.md) for more information.
### Running BuildKit without root privileges

View File

@ -126,30 +126,11 @@ func Image(ref string, opts ...ImageOption) State {
if err != nil {
src.err = err
} else {
var img struct {
Config struct {
Env []string `json:"Env,omitempty"`
WorkingDir string `json:"WorkingDir,omitempty"`
User string `json:"User,omitempty"`
} `json:"config,omitempty"`
}
if err := json.Unmarshal(dt, &img); err != nil {
src.err = err
} else {
st := NewState(src.Output())
for _, env := range img.Config.Env {
parts := strings.SplitN(env, "=", 2)
if len(parts[0]) > 0 {
var v string
if len(parts) > 1 {
v = parts[1]
}
st = st.AddEnv(parts[0], v)
}
}
st = st.Dir(img.Config.WorkingDir)
st, err := NewState(src.Output()).WithImageConfig(dt)
if err == nil {
return st
}
src.err = err
}
}
return NewState(src.Output())

View File

@ -2,8 +2,10 @@ package llb
import (
"context"
"encoding/json"
"fmt"
"net"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/moby/buildkit/identity"
@ -171,6 +173,31 @@ func (s State) WithOutput(o Output) State {
return s
}
func (s State) WithImageConfig(c []byte) (State, error) {
var img struct {
Config struct {
Env []string `json:"Env,omitempty"`
WorkingDir string `json:"WorkingDir,omitempty"`
User string `json:"User,omitempty"`
} `json:"config,omitempty"`
}
if err := json.Unmarshal(c, &img); err != nil {
return State{}, err
}
for _, env := range img.Config.Env {
parts := strings.SplitN(env, "=", 2)
if len(parts[0]) > 0 {
var v string
if len(parts) > 1 {
v = parts[1]
}
s = s.AddEnv(parts[0], v)
}
}
s = s.Dir(img.Config.WorkingDir)
return s, nil
}
func (s State) Run(ro ...RunOption) ExecState {
ei := &ExecInfo{State: s}
if p := s.GetPlatform(); p != nil {

View File

@ -356,6 +356,9 @@ func (r *reference) ReadFile(ctx context.Context, req client.ReadRequest) ([]byt
}
func (r *reference) ReadDir(ctx context.Context, req client.ReadDirRequest) ([]*fstypes.Stat, error) {
if err := r.c.caps.Supports(pb.CapReadDir); err != nil {
return nil, err
}
rdr := &pb.ReadDirRequest{
DirPath: req.Path,
IncludePattern: req.IncludePattern,
@ -369,6 +372,9 @@ func (r *reference) ReadDir(ctx context.Context, req client.ReadDirRequest) ([]*
}
func (r *reference) StatFile(ctx context.Context, req client.StatRequest) (*fstypes.Stat, error) {
if err := r.c.caps.Supports(pb.CapStatFile); err != nil {
return nil, err
}
rdr := &pb.StatFileRequest{
Path: req.Path,
Ref: r.id,

View File

@ -24,7 +24,7 @@ const (
// Dialer returns a connection that can be used by the session
type Dialer func(ctx context.Context, proto string, meta map[string][]string) (net.Conn, error)
// Attachable defines a feature that can be expsed on a session
// Attachable defines a feature that can be exposed on a session
type Attachable interface {
Register(*grpc.Server)
}
@ -66,7 +66,7 @@ func NewSession(ctx context.Context, name, sharedKey string) (*Session, error) {
return s, nil
}
// Allow enable a given service to be reachable through the grpc session
// Allow enables a given service to be reachable through the grpc session
func (s *Session) Allow(a Attachable) {
a.Register(s.grpcServer)
}

View File

@ -6,7 +6,7 @@ github.com/davecgh/go-spew v1.1.0
github.com/pmezard/go-difflib v1.0.0
golang.org/x/sys 1b2967e3c290b7c545b3db0deeda16e9be4f98a2
github.com/containerd/containerd d97a907f7f781c0ab8340877d8e6b53cc7f1c2f6
github.com/containerd/containerd 1a5f9a3434ac53c0e9d27093ecc588e0c281c333
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
golang.org/x/sync 450f422ab23cf9881c94e2db30cac0eb1b7cf80c
github.com/sirupsen/logrus v1.0.0
@ -16,9 +16,9 @@ golang.org/x/net 0ed95abb35c445290478a5348a7b38bb154135fd
github.com/gogo/protobuf v1.0.0
github.com/gogo/googleapis b23578765ee54ff6bceff57f397d833bf4ca6869
github.com/golang/protobuf v1.1.0
github.com/containerd/continuity f44b615e492bdfb371aae2f76ec694d9da1db537
github.com/containerd/continuity bd77b46c8352f74eb12c85bdc01f4b90f69d66b4
github.com/opencontainers/image-spec v1.0.1
github.com/opencontainers/runc 20aff4f0488c6d4b8df4d85b4f63f1f704c11abd
github.com/opencontainers/runc a00bf0190895aa465a5fbed0268888e2c8ddfe85
github.com/Microsoft/go-winio v0.4.11
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/opencontainers/runtime-spec eba862dc2470385a233c7507392675cbeadf7353 # v1.0.1-45-geba862d
@ -28,8 +28,9 @@ google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
golang.org/x/text 19e51611da83d6be54ddafce4a4af510cb3e9ea4
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/syndtr/gocapability db04d3cc01c8b54962a58ec7e491717d06cfcc16
github.com/Microsoft/hcsshim v0.7.3
github.com/Microsoft/hcsshim v0.7.9
golang.org/x/crypto 0709b304e793a5edb4a2c0145f281ecdc20838a4
github.com/containerd/cri 8506fe836677cc3bb23a16b68145128243d843b5 # release/1.2 branch
github.com/urfave/cli 7bc6a0acffa589f415f88aca16cc1de5ffd66f9c
github.com/morikuni/aec 39771216ff4c63d11f5e604076f9c45e8be1067b
@ -40,8 +41,8 @@ golang.org/x/time f51c12702a4d776e4c1fa9b0fabab841babae631
github.com/docker/docker 71cd53e4a197b303c6ba086bd584ffd67a884281
github.com/pkg/profile 5b67d428864e92711fcbd2f8629456121a56d91f
github.com/tonistiigi/fsutil f567071bed2416e4d87d260d3162722651182317
github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git
github.com/tonistiigi/fsutil 2862f6bc5ac9b97124e552a5c108230b38a1b0ca
github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 https://github.com/tonistiigi/go-immutable-radix
github.com/hashicorp/golang-lru a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4
github.com/mitchellh/hashstructure 2bca23e0e452137f789efbc8610126fd8b94f73b
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
@ -66,6 +67,3 @@ github.com/opentracing-contrib/go-stdlib b1a47cfbdd7543e70e9ef3e73d0802ad306cc1c
# used by dockerfile tests
gotest.tools v2.1.0
github.com/google/go-cmp v0.2.0
# used by rootless spec conv test
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0

View File

@ -5,6 +5,7 @@ import (
"os"
"path/filepath"
"strings"
"syscall"
"time"
"github.com/docker/docker/pkg/fileutils"
@ -71,7 +72,7 @@ func Walk(ctx context.Context, p string, opt *WalkOpt, fn filepath.WalkFunc) err
return err
}
defer func() {
if retErr != nil && os.IsNotExist(errors.Cause(retErr)) {
if retErr != nil && isNotExist(retErr) {
retErr = filepath.SkipDir
}
}()
@ -216,3 +217,14 @@ func trimUntilIndex(str, sep string, count int) string {
}
}
}
func isNotExist(err error) bool {
err = errors.Cause(err)
if os.IsNotExist(err) {
return true
}
if pe, ok := err.(*os.PathError); ok {
err = pe.Err
}
return err == syscall.ENOTDIR
}