diff --git a/components/engine/CONTRIBUTING.md b/components/engine/CONTRIBUTING.md
index c55e13bbcf..b7961e14e4 100644
--- a/components/engine/CONTRIBUTING.md
+++ b/components/engine/CONTRIBUTING.md
@@ -1,8 +1,8 @@
-# Contributing to Docker
+# Contribute to the Moby Project
-Want to hack on Docker? Awesome! We have a contributor's guide that explains
-[setting up a Docker development environment and the contribution
-process](https://docs.docker.com/opensource/project/who-written-for/).
+Want to hack on the Moby Project? Awesome! We have a contributor's guide that explains
+[setting up a development environment and the contribution
+process](docs/contributing/).
[](https://docs.docker.com/opensource/project/who-written-for/)
@@ -21,14 +21,14 @@ start participating.
## Reporting security issues
-The Docker maintainers take security seriously. If you discover a security
+The Moby maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it.
-We also like to send gifts—if you're into Docker schwag, make sure to let
+We also like to send gifts—if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
@@ -83,11 +83,7 @@ contributions, see [the advanced contribution
section](https://docs.docker.com/opensource/workflow/advanced-contributing/) in
the contributors guide.
-We try hard to keep Docker lean and focused. Docker can't do everything for
-everybody. This means that we might decide against incorporating a new feature.
-However, there might be a way to implement that feature *on top of* Docker.
-
-### Talking to other Docker users and contributors
+### Connect with other Moby Project contributors
@@ -96,52 +92,29 @@ However, there might be a way to implement that feature *on top of* Docker.
| Forums |
A public forum for users to discuss questions and explore current design patterns and
- best practices about Docker and related projects in the Docker Ecosystem. To participate,
- just log in with your Docker Hub account on https://forums.docker.com.
+ best practices about all the Moby projects. To participate, log in with your GitHub
+ account or create an account at https://forums.mobyproject.org.
|
- | Internet Relay Chat (IRC) |
+ Slack |
- IRC a direct line to our most knowledgeable Docker users; we have
- both the #docker and #docker-dev group on
- irc.freenode.net.
- IRC is a rich chat protocol but it can overwhelm new users. You can search
- our chat archives.
+ Register for the Docker Community Slack at
+ https://community.docker.com/registrations/groups/4316.
+ We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
+ Archives are available at https://dockercommunity.slackarchive.io/.
-
- Read our IRC quickstart guide
- for an easy way to get started.
-
- |
-
-
- | Google Group |
-
- The docker-dev
- group is for contributors and other people contributing to the Docker project.
- You can join them without a google account by sending an email to
- docker-dev+subscribe@googlegroups.com.
- After receiving the join-request message, you can simply reply to that to confirm the subscription.
|
| Twitter |
- You can follow Docker's Twitter feed
+ You can follow the Moby Project Twitter feed
to get updates on our products. You can also tweet us questions or just
share blogs or stories.
|
-
- | Stack Overflow |
-
- Stack Overflow has thousands of Docker questions listed. We regularly
- monitor Docker questions
- and so do many other knowledgeable Docker users.
- |
-
@@ -159,7 +132,7 @@ Submit tests for your changes. See [TESTING.md](./TESTING.md) for details.
If your changes need integration tests, write them against the API. The `cli`
integration tests are slowly either migrated to API tests or moved away as unit
-tests in `docker/cli` and end-to-end tests for docker.
+tests in `docker/cli` and end-to-end tests for Docker.
Update the documentation when creating or modifying features. Test your
documentation changes for clarity, concision, and correctness, as well as a
@@ -266,15 +239,11 @@ Please see the [Coding Style](#coding-style) for further guidelines.
### Merge approval
-Docker maintainers use LGTM (Looks Good To Me) in comments on the code review to
-indicate acceptance.
+Moby maintainers use LGTM (Looks Good To Me) in comments on the code review to
+indicate acceptance, or use the GitHub review approval feature.
-A change requires LGTMs from an absolute majority of the maintainers of each
-component affected. For example, if a change affects `docs/` and `registry/`, it
-needs an absolute majority from the maintainers of `docs/` AND, separately, an
-absolute majority of the maintainers of `registry/`.
-
-For more details, see the [MAINTAINERS](MAINTAINERS) page.
+For an explanation of the review and approval process see the
+[REVIEWING](project/REVIEWING.md) page.
### Sign your work
@@ -342,9 +311,9 @@ Don't forget: being a maintainer is a time investment. Make sure you
will have time to make yourself available. You don't have to be a
maintainer to make a difference on the project!
-## Docker community guidelines
+## Moby community guidelines
-We want to keep the Docker community awesome, growing and collaborative. We need
+We want to keep the Moby community awesome, growing and collaborative. We need
your help to keep it that way. To help with this we've come up with some general
guidelines for the community as a whole:
diff --git a/components/engine/Dockerfile b/components/engine/Dockerfile
index bf86bdb4a9..e2c8770117 100644
--- a/components/engine/Dockerfile
+++ b/components/engine/Dockerfile
@@ -86,7 +86,7 @@ RUN apt-get update && apt-get install -y \
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
@@ -133,7 +133,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
-ENV DOCKER_PY_COMMIT a962578e515185cf06506050b2200c0b81aa84ef
+ENV DOCKER_PY_COMMIT 1d6b5b203222ba5df7dedfcd1ee061a452f99c8a
# To run integration tests docker-pycreds is required.
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
@@ -189,8 +189,9 @@ RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
+# Options for hack/validate/gometalinter
+ENV GOMETALINTER_OPTS="--deadline 2m"
+
# Upload docker source
COPY . /go/src/github.com/docker/docker
-# Options for hack/validate/gometalinter
-ENV GOMETALINTER_OPTS="--deadline 2m"
diff --git a/components/engine/Dockerfile.aarch64 b/components/engine/Dockerfile.aarch64
index 4a4c635fde..58ca40d878 100644
--- a/components/engine/Dockerfile.aarch64
+++ b/components/engine/Dockerfile.aarch64
@@ -73,7 +73,7 @@ RUN apt-get update && apt-get install -y \
# Install Go
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-arm64.tar.gz" \
| tar -xzC /usr/local
@@ -105,7 +105,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
-ENV DOCKER_PY_COMMIT a962578e515185cf06506050b2200c0b81aa84ef
+ENV DOCKER_PY_COMMIT 1d6b5b203222ba5df7dedfcd1ee061a452f99c8a
# To run integration tests docker-pycreds is required.
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
@@ -158,8 +158,8 @@ ENV PATH=/usr/local/cli:$PATH
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
-# Upload docker source
-COPY . /go/src/github.com/docker/docker
-
# Options for hack/validate/gometalinter
ENV GOMETALINTER_OPTS="--deadline 4m -j2"
+
+# Upload docker source
+COPY . /go/src/github.com/docker/docker
diff --git a/components/engine/Dockerfile.armhf b/components/engine/Dockerfile.armhf
index e64bfd7d20..bc430e8c0b 100644
--- a/components/engine/Dockerfile.armhf
+++ b/components/engine/Dockerfile.armhf
@@ -63,7 +63,7 @@ RUN apt-get update && apt-get install -y \
# Install Go
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
@@ -103,7 +103,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
-ENV DOCKER_PY_COMMIT a962578e515185cf06506050b2200c0b81aa84ef
+ENV DOCKER_PY_COMMIT 1d6b5b203222ba5df7dedfcd1ee061a452f99c8a
# To run integration tests docker-pycreds is required.
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
@@ -146,8 +146,8 @@ ENV PATH=/usr/local/cli:$PATH
ENTRYPOINT ["hack/dind"]
-# Upload docker source
-COPY . /go/src/github.com/docker/docker
-
# Options for hack/validate/gometalinter
ENV GOMETALINTER_OPTS="--deadline 10m -j2"
+
+# Upload docker source
+COPY . /go/src/github.com/docker/docker
diff --git a/components/engine/Dockerfile.e2e b/components/engine/Dockerfile.e2e
index dcbecb7787..7c6bb45d91 100644
--- a/components/engine/Dockerfile.e2e
+++ b/components/engine/Dockerfile.e2e
@@ -1,8 +1,9 @@
## Step 1: Build tests
-FROM golang:1.8.5-alpine3.6 as builder
+FROM golang:1.9.2-alpine3.6 as builder
RUN apk add --update \
bash \
+ btrfs-progs-dev \
build-base \
curl \
lvm2-dev \
diff --git a/components/engine/Dockerfile.ppc64le b/components/engine/Dockerfile.ppc64le
index c95b68b225..fa7307b3be 100644
--- a/components/engine/Dockerfile.ppc64le
+++ b/components/engine/Dockerfile.ppc64le
@@ -64,7 +64,7 @@ RUN apt-get update && apt-get install -y \
# Install Go
# NOTE: official ppc64le go binaries weren't available until go 1.6.4 and 1.7.4
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" \
| tar -xzC /usr/local
@@ -101,7 +101,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
-ENV DOCKER_PY_COMMIT a962578e515185cf06506050b2200c0b81aa84ef
+ENV DOCKER_PY_COMMIT 1d6b5b203222ba5df7dedfcd1ee061a452f99c8a
# To run integration tests docker-pycreds is required.
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
diff --git a/components/engine/Dockerfile.s390x b/components/engine/Dockerfile.s390x
index b438a5d2bb..e8e78302df 100644
--- a/components/engine/Dockerfile.s390x
+++ b/components/engine/Dockerfile.s390x
@@ -58,7 +58,7 @@ RUN apt-get update && apt-get install -y \
--no-install-recommends
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" \
| tar -xzC /usr/local
@@ -95,7 +95,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
-ENV DOCKER_PY_COMMIT a962578e515185cf06506050b2200c0b81aa84ef
+ENV DOCKER_PY_COMMIT 1d6b5b203222ba5df7dedfcd1ee061a452f99c8a
# To run integration tests docker-pycreds is required.
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
diff --git a/components/engine/Dockerfile.simple b/components/engine/Dockerfile.simple
index c84025e67c..2a5d30bac0 100644
--- a/components/engine/Dockerfile.simple
+++ b/components/engine/Dockerfile.simple
@@ -40,7 +40,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
diff --git a/components/engine/Dockerfile.windows b/components/engine/Dockerfile.windows
index 2671aea8cd..519a96124c 100644
--- a/components/engine/Dockerfile.windows
+++ b/components/engine/Dockerfile.windows
@@ -161,7 +161,7 @@ SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPref
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
-ENV GO_VERSION=1.8.5 `
+ENV GO_VERSION=1.9.2 `
GIT_VERSION=2.11.1 `
GOPATH=C:\go `
FROM_DOCKERFILE=1
diff --git a/components/engine/MAINTAINERS b/components/engine/MAINTAINERS
index b3dfd4933f..d81caba80a 100644
--- a/components/engine/MAINTAINERS
+++ b/components/engine/MAINTAINERS
@@ -364,7 +364,7 @@
[people.misty]
Name = "Misty Stanley-Jones"
Email = "misty@docker.com"
- GitHub = "mstanleyjones"
+ GitHub = "mistyhacks"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
diff --git a/components/engine/Makefile b/components/engine/Makefile
index 7d50afa7d4..32988150c7 100644
--- a/components/engine/Makefile
+++ b/components/engine/Makefile
@@ -16,6 +16,13 @@ export DOCKER_GITCOMMIT
# env vars passed through directly to Docker's build scripts
# to allow things like `make KEEPBUNDLE=1 binary` easily
# `project/PACKAGERS.md` have some limited documentation of some of these
+#
+# DOCKER_LDFLAGS can be used to pass additional parameters to -ldflags
+# option of "go build". For example, a built-in graphdriver priority list
+# can be changed during build time like this:
+#
+# make DOCKER_LDFLAGS="-X github.com/docker/docker/daemon/graphdriver.priority=overlay2,devicemapper" dynbinary
+#
DOCKER_ENVS := \
-e DOCKER_CROSSPLATFORMS \
-e BUILD_APT_MIRROR \
@@ -31,6 +38,7 @@ DOCKER_ENVS := \
-e DOCKER_GITCOMMIT \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_INCREMENTAL_BINARY \
+ -e DOCKER_LDFLAGS \
-e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \
-e DOCKER_STORAGE_OPTS \
@@ -150,8 +158,8 @@ run: build ## run the docker daemon in a container
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
-test: build ## run the unit, integration and docker-py tests
- $(DOCKER_RUN_DOCKER) hack/make.sh dynbinary cross test-unit test-integration test-docker-py
+test: build test-unit ## run the unit, integration and docker-py tests
+ $(DOCKER_RUN_DOCKER) hack/make.sh dynbinary cross test-integration test-docker-py
test-docker-py: build ## run the docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py
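The DOCKER_LDFLAGS comment added to the Makefile above relies on Go's `-X` linker flag, which overrides a package-level string variable at link time. Here is a minimal, self-contained sketch of that mechanism; the package and variable names are hypothetical stand-ins, not the real graphdriver code:

```go
package main

import "fmt"

// priority stands in for a variable such as graphdriver.priority; the value
// below is only a placeholder default.
var priority = "aufs,overlay2"

func main() {
	fmt.Println("graphdriver priority:", priority)
}
```

Building this with `go build -ldflags "-X main.priority=overlay2,devicemapper"` prints the overridden list; `make DOCKER_LDFLAGS="-X ..." dynbinary` simply forwards such flags to the engine's `go build` invocation through the build scripts.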
diff --git a/components/engine/VERSION b/components/engine/VERSION
deleted file mode 100644
index 2d736aaa18..0000000000
--- a/components/engine/VERSION
+++ /dev/null
@@ -1 +0,0 @@
-17.06.0-dev
diff --git a/components/engine/api/common.go b/components/engine/api/common.go
index d0229e0389..af34d0b354 100644
--- a/components/engine/api/common.go
+++ b/components/engine/api/common.go
@@ -3,7 +3,7 @@ package api
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
- DefaultVersion string = "1.34"
+ DefaultVersion string = "1.35"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.
diff --git a/components/engine/api/errdefs/is.go b/components/engine/api/errdefs/is.go
index ce574cdd1e..b0be0b8147 100644
--- a/components/engine/api/errdefs/is.go
+++ b/components/engine/api/errdefs/is.go
@@ -25,7 +25,7 @@ func getImplementer(err error) error {
}
}
-// IsNotFound returns if the passed in error is a ErrNotFound
+// IsNotFound returns if the passed in error is an ErrNotFound
func IsNotFound(err error) bool {
_, ok := getImplementer(err).(ErrNotFound)
return ok
@@ -37,7 +37,7 @@ func IsInvalidParameter(err error) bool {
return ok
}
-// IsConflict returns if the passed in error is a ErrConflict
+// IsConflict returns if the passed in error is an ErrConflict
func IsConflict(err error) bool {
_, ok := getImplementer(err).(ErrConflict)
return ok
@@ -55,13 +55,13 @@ func IsUnavailable(err error) bool {
return ok
}
-// IsForbidden returns if the passed in error is a ErrForbidden
+// IsForbidden returns if the passed in error is an ErrForbidden
func IsForbidden(err error) bool {
_, ok := getImplementer(err).(ErrForbidden)
return ok
}
-// IsSystem returns if the passed in error is a ErrSystem
+// IsSystem returns if the passed in error is an ErrSystem
func IsSystem(err error) bool {
_, ok := getImplementer(err).(ErrSystem)
return ok
@@ -73,7 +73,7 @@ func IsNotModified(err error) bool {
return ok
}
-// IsNotImplemented returns if the passed in error is a ErrNotImplemented
+// IsNotImplemented returns if the passed in error is an ErrNotImplemented
func IsNotImplemented(err error) bool {
_, ok := getImplementer(err).(ErrNotImplemented)
return ok
diff --git a/components/engine/api/server/router/container/container_routes.go b/components/engine/api/server/router/container/container_routes.go
index 106a7087cd..d845fdd00f 100644
--- a/components/engine/api/server/router/container/container_routes.go
+++ b/components/engine/api/server/router/container/container_routes.go
@@ -96,6 +96,7 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
Follow: httputils.BoolValue(r, "follow"),
Timestamps: httputils.BoolValue(r, "timestamps"),
Since: r.Form.Get("since"),
+ Until: r.Form.Get("until"),
Tail: r.Form.Get("tail"),
ShowStdout: stdout,
ShowStderr: stderr,
diff --git a/components/engine/api/swagger.yaml b/components/engine/api/swagger.yaml
index 80319ae7f3..07ee0674cd 100644
--- a/components/engine/api/swagger.yaml
+++ b/components/engine/api/swagger.yaml
@@ -19,10 +19,10 @@ produces:
consumes:
- "application/json"
- "text/plain"
-basePath: "/v1.34"
+basePath: "/v1.35"
info:
title: "Docker Engine API"
- version: "1.34"
+ version: "1.35"
x-logo:
url: "https://docs.docker.com/images/logo-docker-main.png"
description: |
@@ -42,38 +42,26 @@ info:
# Versioning
- The API is usually changed in each release of Docker, so API calls are versioned to ensure that clients don't break.
+ The API is usually changed in each release, so API calls are versioned to
+ ensure that clients don't break. To lock to a specific version of the API,
+ you prefix the URL with its version, for example, call `/v1.30/info` to use
+ the v1.30 version of the `/info` endpoint. If the API version specified in
+ the URL is not supported by the daemon, an HTTP `400 Bad Request` error message
+ is returned.
- For Docker Engine 17.10, the API version is 1.33. To lock to this version, you prefix the URL with `/v1.33`. For example, calling `/info` is the same as calling `/v1.33/info`.
+ If you omit the version-prefix, the current version of the API (v1.35) is used.
+ For example, calling `/info` is the same as calling `/v1.35/info`. Using the
+ API without a version-prefix is deprecated and will be removed in a future release.
- Engine releases in the near future should support this version of the API, so your client will continue to work even if it is talking to a newer Engine.
+ Engine releases in the near future should support this version of the API,
+ so your client will continue to work even if it is talking to a newer Engine.
- In previous versions of Docker, it was possible to access the API without providing a version. This behaviour is now deprecated will be removed in a future version of Docker.
+ The API uses an open schema model, which means the server may add extra properties
+ to responses. Likewise, the server will ignore any extra query parameters and
+ request body properties. When you write clients, you need to ignore additional
+ properties in responses to ensure they do not break when talking to newer
+ daemons.
- If the API version specified in the URL is not supported by the daemon, a HTTP `400 Bad Request` error message is returned.
-
- The API uses an open schema model, which means server may add extra properties to responses. Likewise, the server will ignore any extra query parameters and request body properties. When you write clients, you need to ignore additional properties in responses to ensure they do not break when talking to newer Docker daemons.
-
- This documentation is for version 1.34 of the API. Use this table to find documentation for previous versions of the API:
-
- Docker version | API version | Changes
- ----------------|-------------|---------
- 17.10.x | [1.33](https://docs.docker.com/engine/api/v1.33/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-33-api-changes)
- 17.09.x | [1.32](https://docs.docker.com/engine/api/v1.32/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-32-api-changes)
- 17.07.x | [1.31](https://docs.docker.com/engine/api/v1.31/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-31-api-changes)
- 17.06.x | [1.30](https://docs.docker.com/engine/api/v1.30/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-30-api-changes)
- 17.05.x | [1.29](https://docs.docker.com/engine/api/v1.29/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-29-api-changes)
- 17.04.x | [1.28](https://docs.docker.com/engine/api/v1.28/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-28-api-changes)
- 17.03.1 | [1.27](https://docs.docker.com/engine/api/v1.27/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-27-api-changes)
- 1.13.1 & 17.03.0 | [1.26](https://docs.docker.com/engine/api/v1.26/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-26-api-changes)
- 1.13.0 | [1.25](https://docs.docker.com/engine/api/v1.25/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-25-api-changes)
- 1.12.x | [1.24](https://docs.docker.com/engine/api/v1.24/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-24-api-changes)
- 1.11.x | [1.23](https://docs.docker.com/engine/api/v1.23/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-23-api-changes)
- 1.10.x | [1.22](https://docs.docker.com/engine/api/v1.22/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-22-api-changes)
- 1.9.x | [1.21](https://docs.docker.com/engine/api/v1.21/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-21-api-changes)
- 1.8.x | [1.20](https://docs.docker.com/engine/api/v1.20/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-20-api-changes)
- 1.7.x | [1.19](https://docs.docker.com/engine/api/v1.19/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-19-api-changes)
- 1.6.x | [1.18](https://docs.docker.com/engine/api/v1.18/) | [API changes](https://docs.docker.com/engine/api/version-history/#v1-18-api-changes)
# Authentication
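To make the version-prefix behaviour described above concrete, here is a minimal sketch that pins a request to API v1.35 over the daemon's default unix socket. The socket path and the use of a raw HTTP client are assumptions for illustration only; the official clients handle version negotiation for you.

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Dial the daemon's unix socket regardless of the URL's host portion.
	httpc := http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// The "docker" host name is a placeholder; only the /v1.35/... path matters.
	resp, err := httpc.Get("http://docker/v1.35/info")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```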
@@ -344,6 +332,7 @@ definitions:
Memory:
description: "Memory limit in bytes."
type: "integer"
+ format: "int64"
default: 0
# Applicable to UNIX platforms
CgroupParent:
@@ -2688,7 +2677,13 @@ definitions:
ConfigName is the name of the config that this references, but this is just provided for
lookup/display purposes. The config in the reference will be identified by its ID.
type: "string"
-
+ Isolation:
+ type: "string"
+ description: "Isolation technology of the containers running the service. (Windows only)"
+ enum:
+ - "default"
+ - "process"
+ - "hyperv"
Resources:
description: "Resource requirements which apply to each individual container created as part of the service."
type: "object"
@@ -4963,6 +4958,11 @@ paths:
description: "Only return logs since this time, as a UNIX timestamp"
type: "integer"
default: 0
+ - name: "until"
+ in: "query"
+ description: "Only return logs before this time, as a UNIX timestamp"
+ type: "integer"
+ default: 0
- name: "timestamps"
in: "query"
description: "Add timestamps to every log line"
@@ -6996,7 +6996,7 @@ paths:
- `network=` network name or ID
- `node=` node ID
- `plugin`= plugin name or ID
- - `scope`= local or swarm
+ - `scope`= local or swarm
- `secret=` secret name or ID
- `service=` service name or ID
- `type=` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config`
@@ -7892,7 +7892,7 @@ paths:
summary: "Connect a container to a network"
operationId: "NetworkConnect"
consumes:
- - "application/octet-stream"
+ - "application/json"
responses:
200:
description: "No error"
diff --git a/components/engine/api/types/client.go b/components/engine/api/types/client.go
index db37f1fe4e..93ca428540 100644
--- a/components/engine/api/types/client.go
+++ b/components/engine/api/types/client.go
@@ -74,6 +74,7 @@ type ContainerLogsOptions struct {
ShowStdout bool
ShowStderr bool
Since string
+ Until string
Timestamps bool
Follow bool
Tail string
diff --git a/components/engine/api/types/container/host_config.go b/components/engine/api/types/container/host_config.go
index bb421b3889..568cdcca93 100644
--- a/components/engine/api/types/container/host_config.go
+++ b/components/engine/api/types/container/host_config.go
@@ -20,6 +20,27 @@ func (i Isolation) IsDefault() bool {
return strings.ToLower(string(i)) == "default" || string(i) == ""
}
+// IsHyperV indicates the use of a Hyper-V partition for isolation
+func (i Isolation) IsHyperV() bool {
+ return strings.ToLower(string(i)) == "hyperv"
+}
+
+// IsProcess indicates the use of process isolation
+func (i Isolation) IsProcess() bool {
+ return strings.ToLower(string(i)) == "process"
+}
+
+const (
+ // IsolationEmpty is unspecified (same behavior as default)
+ IsolationEmpty = Isolation("")
+ // IsolationDefault is the default isolation mode on current daemon
+ IsolationDefault = Isolation("default")
+ // IsolationProcess is process isolation mode
+ IsolationProcess = Isolation("process")
+ // IsolationHyperV is HyperV isolation mode
+ IsolationHyperV = Isolation("hyperv")
+)
+
// IpcMode represents the container ipc stack.
type IpcMode string
diff --git a/components/engine/api/types/container/hostconfig_windows.go b/components/engine/api/types/container/hostconfig_windows.go
index 469923f7e9..3374d737f1 100644
--- a/components/engine/api/types/container/hostconfig_windows.go
+++ b/components/engine/api/types/container/hostconfig_windows.go
@@ -1,9 +1,5 @@
package container
-import (
- "strings"
-)
-
// IsBridge indicates whether container uses the bridge network stack
// in windows it is given the name NAT
func (n NetworkMode) IsBridge() bool {
@@ -21,16 +17,6 @@ func (n NetworkMode) IsUserDefined() bool {
return !n.IsDefault() && !n.IsNone() && !n.IsBridge() && !n.IsContainer()
}
-// IsHyperV indicates the use of a Hyper-V partition for isolation
-func (i Isolation) IsHyperV() bool {
- return strings.ToLower(string(i)) == "hyperv"
-}
-
-// IsProcess indicates the use of process isolation
-func (i Isolation) IsProcess() bool {
- return strings.ToLower(string(i)) == "process"
-}
-
// IsValid indicates if an isolation technology is valid
func (i Isolation) IsValid() bool {
return i.IsDefault() || i.IsHyperV() || i.IsProcess()
diff --git a/components/engine/api/types/swarm/container.go b/components/engine/api/types/swarm/container.go
index 6f8b45f6bb..734236c4b0 100644
--- a/components/engine/api/types/swarm/container.go
+++ b/components/engine/api/types/swarm/container.go
@@ -65,8 +65,9 @@ type ContainerSpec struct {
// The format of extra hosts on swarmkit is specified in:
// http://man7.org/linux/man-pages/man5/hosts.5.html
// IP_address canonical_hostname [aliases...]
- Hosts []string `json:",omitempty"`
- DNSConfig *DNSConfig `json:",omitempty"`
- Secrets []*SecretReference `json:",omitempty"`
- Configs []*ConfigReference `json:",omitempty"`
+ Hosts []string `json:",omitempty"`
+ DNSConfig *DNSConfig `json:",omitempty"`
+ Secrets []*SecretReference `json:",omitempty"`
+ Configs []*ConfigReference `json:",omitempty"`
+ Isolation container.Isolation `json:",omitempty"`
}
diff --git a/components/engine/builder/dockerfile/builder.go b/components/engine/builder/dockerfile/builder.go
index 2b7b43827c..b62d6fc024 100644
--- a/components/engine/builder/dockerfile/builder.go
+++ b/components/engine/builder/dockerfile/builder.go
@@ -131,10 +131,10 @@ func (bm *BuildManager) initializeClientSession(ctx context.Context, cancel func
}
logrus.Debug("client is session enabled")
- ctx, cancelCtx := context.WithTimeout(ctx, sessionConnectTimeout)
+ connectCtx, cancelCtx := context.WithTimeout(ctx, sessionConnectTimeout)
defer cancelCtx()
- c, err := bm.sg.Get(ctx, options.SessionID)
+ c, err := bm.sg.Get(connectCtx, options.SessionID)
if err != nil {
return nil, err
}
diff --git a/components/engine/builder/dockerfile/evaluator.go b/components/engine/builder/dockerfile/evaluator.go
index da97a7ff6e..6236a194d3 100644
--- a/components/engine/builder/dockerfile/evaluator.go
+++ b/components/engine/builder/dockerfile/evaluator.go
@@ -214,7 +214,8 @@ func (s *dispatchState) beginStage(stageName string, image builder.Image) {
s.imageID = image.ImageID()
if image.RunConfig() != nil {
- s.runConfig = copyRunConfig(image.RunConfig()) // copy avoids referencing the same instance when 2 stages have the same base
+ // copy avoids referencing the same instance when 2 stages have the same base
+ s.runConfig = copyRunConfig(image.RunConfig())
} else {
s.runConfig = &container.Config{}
}
diff --git a/components/engine/builder/dockerfile/imagecontext.go b/components/engine/builder/dockerfile/imagecontext.go
index 79206dad4d..a22b60b2e5 100644
--- a/components/engine/builder/dockerfile/imagecontext.go
+++ b/components/engine/builder/dockerfile/imagecontext.go
@@ -1,6 +1,8 @@
package dockerfile
import (
+ "runtime"
+
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/remotecontext"
@@ -73,7 +75,13 @@ func (m *imageSources) Unmount() (retErr error) {
func (m *imageSources) Add(im *imageMount) {
switch im.image {
case nil:
- im.image = &dockerimage.Image{}
+ // set the OS for scratch images
+ os := runtime.GOOS
+ // Windows does not support scratch except for LCOW
+ if runtime.GOOS == "windows" {
+ os = "linux"
+ }
+ im.image = &dockerimage.Image{V1Image: dockerimage.V1Image{OS: os}}
default:
m.byImageID[im.image.ImageID()] = im
}
diff --git a/components/engine/builder/dockerfile/internals.go b/components/engine/builder/dockerfile/internals.go
index 0e08ec25f0..c38f48afc0 100644
--- a/components/engine/builder/dockerfile/internals.go
+++ b/components/engine/builder/dockerfile/internals.go
@@ -25,6 +25,7 @@ import (
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/symlink"
"github.com/docker/docker/pkg/system"
+ "github.com/docker/go-connections/nat"
lcUser "github.com/opencontainers/runc/libcontainer/user"
"github.com/pkg/errors"
)
@@ -385,14 +386,6 @@ func hashStringSlice(prefix string, slice []string) string {
type runConfigModifier func(*container.Config)
-func copyRunConfig(runConfig *container.Config, modifiers ...runConfigModifier) *container.Config {
- copy := *runConfig
- for _, modifier := range modifiers {
- modifier(&copy)
- }
- return &copy
-}
-
func withCmd(cmd []string) runConfigModifier {
return func(runConfig *container.Config) {
runConfig.Cmd = cmd
@@ -438,6 +431,48 @@ func withEntrypointOverride(cmd []string, entrypoint []string) runConfigModifier
}
}
+func copyRunConfig(runConfig *container.Config, modifiers ...runConfigModifier) *container.Config {
+ copy := *runConfig
+ copy.Cmd = copyStringSlice(runConfig.Cmd)
+ copy.Env = copyStringSlice(runConfig.Env)
+ copy.Entrypoint = copyStringSlice(runConfig.Entrypoint)
+ copy.OnBuild = copyStringSlice(runConfig.OnBuild)
+ copy.Shell = copyStringSlice(runConfig.Shell)
+
+ if copy.Volumes != nil {
+ copy.Volumes = make(map[string]struct{}, len(runConfig.Volumes))
+ for k, v := range runConfig.Volumes {
+ copy.Volumes[k] = v
+ }
+ }
+
+ if copy.ExposedPorts != nil {
+ copy.ExposedPorts = make(nat.PortSet, len(runConfig.ExposedPorts))
+ for k, v := range runConfig.ExposedPorts {
+ copy.ExposedPorts[k] = v
+ }
+ }
+
+ if copy.Labels != nil {
+ copy.Labels = make(map[string]string, len(runConfig.Labels))
+ for k, v := range runConfig.Labels {
+ copy.Labels[k] = v
+ }
+ }
+
+ for _, modifier := range modifiers {
+ modifier(&copy)
+ }
+ return &copy
+}
+
+func copyStringSlice(orig []string) []string {
+ if orig == nil {
+ return nil
+ }
+ return append([]string{}, orig...)
+}
+
// getShell is a helper function which gets the right shell for prefixing the
// shell-form of RUN, ENTRYPOINT and CMD instructions
func getShell(c *container.Config, os string) []string {
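The deep copy introduced in `copyRunConfig` above addresses a subtlety of Go value semantics: copying a struct by value still shares the backing arrays of its slice and map fields. A small, self-contained sketch of that pitfall follows; the `Config` type here is a hypothetical stand-in, not the real container.Config.

```go
package main

import "fmt"

// Config is a simplified stand-in with a single slice field.
type Config struct {
	Cmd []string
}

func main() {
	orig := Config{Cmd: []string{"echo", "hello"}}

	shallow := orig // copies the struct, but Cmd still shares its backing array
	shallow.Cmd[1] = "changed"
	fmt.Println(orig.Cmd[1]) // "changed" -- the original was mutated too

	orig.Cmd[1] = "hello"
	deep := orig
	deep.Cmd = append([]string{}, orig.Cmd...) // copy the slice contents as well
	deep.Cmd[1] = "changed"
	fmt.Println(orig.Cmd[1]) // "hello" -- the original is untouched
}
```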
diff --git a/components/engine/builder/dockerfile/internals_test.go b/components/engine/builder/dockerfile/internals_test.go
index 380d86108e..83a207c455 100644
--- a/components/engine/builder/dockerfile/internals_test.go
+++ b/components/engine/builder/dockerfile/internals_test.go
@@ -14,6 +14,7 @@ import (
"github.com/docker/docker/builder/remotecontext"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/idtools"
+ "github.com/docker/go-connections/nat"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -133,6 +134,44 @@ func TestCopyRunConfig(t *testing.T) {
}
+func fullMutableRunConfig() *container.Config {
+ return &container.Config{
+ Cmd: []string{"command", "arg1"},
+ Env: []string{"env1=foo", "env2=bar"},
+ ExposedPorts: nat.PortSet{
+ "1000/tcp": {},
+ "1001/tcp": {},
+ },
+ Volumes: map[string]struct{}{
+ "one": {},
+ "two": {},
+ },
+ Entrypoint: []string{"entry", "arg1"},
+ OnBuild: []string{"first", "next"},
+ Labels: map[string]string{
+ "label1": "value1",
+ "label2": "value2",
+ },
+ Shell: []string{"shell", "-c"},
+ }
+}
+
+func TestDeepCopyRunConfig(t *testing.T) {
+ runConfig := fullMutableRunConfig()
+ copy := copyRunConfig(runConfig)
+ assert.Equal(t, fullMutableRunConfig(), copy)
+
+ copy.Cmd[1] = "arg2"
+ copy.Env[1] = "env2=new"
+ copy.ExposedPorts["10002"] = struct{}{}
+ copy.Volumes["three"] = struct{}{}
+ copy.Entrypoint[1] = "arg2"
+ copy.OnBuild[0] = "start"
+ copy.Labels["label3"] = "value3"
+ copy.Shell[0] = "sh"
+ assert.Equal(t, fullMutableRunConfig(), runConfig)
+}
+
func TestChownFlagParsing(t *testing.T) {
testFiles := map[string]string{
"passwd": `root:x:0:0::/bin:/bin/false
diff --git a/components/engine/builder/dockerfile/parser/parser.go b/components/engine/builder/dockerfile/parser/parser.go
index 822c42b41a..a4bce1cca6 100644
--- a/components/engine/builder/dockerfile/parser/parser.go
+++ b/components/engine/builder/dockerfile/parser/parser.go
@@ -321,7 +321,7 @@ func Parse(rwc io.Reader) (*Result, error) {
Warnings: warnings,
EscapeToken: d.escapeToken,
OS: d.platformToken,
- }, nil
+ }, handleScannerError(scanner.Err())
}
func trimComments(src []byte) []byte {
@@ -358,3 +358,12 @@ func processLine(d *Directive, token []byte, stripLeftWhitespace bool) ([]byte,
}
return trimComments(token), d.possibleParserDirective(string(token))
}
+
+func handleScannerError(err error) error {
+ switch err {
+ case bufio.ErrTooLong:
+ return errors.Errorf("dockerfile line greater than max allowed size of %d", bufio.MaxScanTokenSize-1)
+ default:
+ return err
+ }
+}
diff --git a/components/engine/builder/dockerfile/parser/parser_test.go b/components/engine/builder/dockerfile/parser/parser_test.go
index 5f354bb139..7bfbee92e4 100644
--- a/components/engine/builder/dockerfile/parser/parser_test.go
+++ b/components/engine/builder/dockerfile/parser/parser_test.go
@@ -1,12 +1,14 @@
package parser
import (
+ "bufio"
"bytes"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"runtime"
+ "strings"
"testing"
"github.com/stretchr/testify/assert"
@@ -159,3 +161,14 @@ RUN indented \
assert.Contains(t, warnings[1], "RUN another thing")
assert.Contains(t, warnings[2], "will become errors in a future release")
}
+
+func TestParseReturnsScannerErrors(t *testing.T) {
+ label := strings.Repeat("a", bufio.MaxScanTokenSize)
+
+ dockerfile := strings.NewReader(fmt.Sprintf(`
+ FROM image
+ LABEL test=%s
+`, label))
+ _, err := Parse(dockerfile)
+ assert.EqualError(t, err, "dockerfile line greater than max allowed size of 65535")
+}
diff --git a/components/engine/builder/remotecontext/archive.go b/components/engine/builder/remotecontext/archive.go
index fc18c5da31..c670235b61 100644
--- a/components/engine/builder/remotecontext/archive.go
+++ b/components/engine/builder/remotecontext/archive.go
@@ -79,7 +79,6 @@ func FromArchive(tarStream io.Reader) (builder.Source, error) {
}
tsc.sums = sum.GetSums()
-
return tsc, nil
}
@@ -122,8 +121,5 @@ func normalize(path string, root containerfs.ContainerFS) (cleanPath, fullPath s
if err != nil {
return "", "", errors.Wrapf(err, "forbidden path outside the build context: %s (%s)", path, cleanPath)
}
- if _, err := root.Lstat(fullPath); err != nil {
- return "", "", errors.WithStack(convertPathError(err, path))
- }
return
}
diff --git a/components/engine/builder/remotecontext/detect.go b/components/engine/builder/remotecontext/detect.go
index 38aff67985..fe5cd967fb 100644
--- a/components/engine/builder/remotecontext/detect.go
+++ b/components/engine/builder/remotecontext/detect.go
@@ -97,26 +97,23 @@ func newGitRemote(gitURL string, dockerfilePath string) (builder.Source, *parser
}
func newURLRemote(url string, dockerfilePath string, progressReader func(in io.ReadCloser) io.ReadCloser) (builder.Source, *parser.Result, error) {
- var dockerfile io.ReadCloser
- dockerfileFoundErr := errors.New("found-dockerfile")
- c, err := MakeRemoteContext(url, map[string]func(io.ReadCloser) (io.ReadCloser, error){
- mimeTypes.TextPlain: func(rc io.ReadCloser) (io.ReadCloser, error) {
- dockerfile = rc
- return nil, dockerfileFoundErr
- },
- // fallback handler (tar context)
- "": func(rc io.ReadCloser) (io.ReadCloser, error) {
- return progressReader(rc), nil
- },
- })
- switch {
- case err == dockerfileFoundErr:
- res, err := parser.Parse(dockerfile)
- return nil, res, err
- case err != nil:
+ contentType, content, err := downloadRemote(url)
+ if err != nil {
return nil, nil, err
}
- return withDockerfileFromContext(c.(modifiableContext), dockerfilePath)
+ defer content.Close()
+
+ switch contentType {
+ case mimeTypes.TextPlain:
+ res, err := parser.Parse(progressReader(content))
+ return nil, res, err
+ default:
+ source, err := FromArchive(progressReader(content))
+ if err != nil {
+ return nil, nil, err
+ }
+ return withDockerfileFromContext(source.(modifiableContext), dockerfilePath)
+ }
}
func removeDockerfile(c modifiableContext, filesToRemove ...string) error {
diff --git a/components/engine/builder/remotecontext/git.go b/components/engine/builder/remotecontext/git.go
index 158bb5ad4d..f6fc0bc3fb 100644
--- a/components/engine/builder/remotecontext/git.go
+++ b/components/engine/builder/remotecontext/git.go
@@ -6,6 +6,7 @@ import (
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/remotecontext/git"
"github.com/docker/docker/pkg/archive"
+ "github.com/sirupsen/logrus"
)
// MakeGitContext returns a Context from gitURL that is cloned in a temporary directory.
@@ -21,9 +22,14 @@ func MakeGitContext(gitURL string) (builder.Source, error) {
}
defer func() {
- // TODO: print errors?
- c.Close()
- os.RemoveAll(root)
+ err := c.Close()
+ if err != nil {
+ logrus.WithField("action", "MakeGitContext").WithField("module", "builder").WithField("url", gitURL).WithError(err).Error("error while closing git context")
+ }
+ err = os.RemoveAll(root)
+ if err != nil {
+ logrus.WithField("action", "MakeGitContext").WithField("module", "builder").WithField("url", gitURL).WithError(err).Error("error while removing path and children of root")
+ }
}()
return FromArchive(c)
}
diff --git a/components/engine/builder/remotecontext/lazycontext.go b/components/engine/builder/remotecontext/lazycontext.go
index 66f36defd4..08b8058549 100644
--- a/components/engine/builder/remotecontext/lazycontext.go
+++ b/components/engine/builder/remotecontext/lazycontext.go
@@ -40,16 +40,18 @@ func (c *lazySource) Hash(path string) (string, error) {
return "", err
}
- fi, err := c.root.Lstat(fullPath)
- if err != nil {
- return "", errors.WithStack(err)
- }
-
relPath, err := Rel(c.root, fullPath)
if err != nil {
return "", errors.WithStack(convertPathError(err, cleanPath))
}
+ fi, err := os.Lstat(fullPath)
+ if err != nil {
+ // Backwards compatibility: a missing file returns the path as its hash.
+ // This is reached in the case of a broken symlink.
+ return relPath, nil
+ }
+
sum, ok := c.sums[relPath]
if !ok {
sum, err = c.prepareHash(relPath, fi)
diff --git a/components/engine/builder/remotecontext/remote.go b/components/engine/builder/remotecontext/remote.go
index 6c4073bd4c..ee0282f706 100644
--- a/components/engine/builder/remotecontext/remote.go
+++ b/components/engine/builder/remotecontext/remote.go
@@ -10,7 +10,7 @@ import (
"net/url"
"regexp"
- "github.com/docker/docker/builder"
+ "github.com/docker/docker/pkg/ioutils"
"github.com/pkg/errors"
)
@@ -22,50 +22,23 @@ const acceptableRemoteMIME = `(?:application/(?:(?:x\-)?tar|octet\-stream|((?:x\
var mimeRe = regexp.MustCompile(acceptableRemoteMIME)
-// MakeRemoteContext downloads a context from remoteURL and returns it.
-//
-// If contentTypeHandlers is non-nil, then the Content-Type header is read along with a maximum of
-// maxPreambleLength bytes from the body to help detecting the MIME type.
-// Look at acceptableRemoteMIME for more details.
-//
-// If a match is found, then the body is sent to the contentType handler and a (potentially compressed) tar stream is expected
-// to be returned. If no match is found, it is assumed the body is a tar stream (compressed or not).
-// In either case, an (assumed) tar stream is passed to FromArchive whose result is returned.
-func MakeRemoteContext(remoteURL string, contentTypeHandlers map[string]func(io.ReadCloser) (io.ReadCloser, error)) (builder.Source, error) {
- f, err := GetWithStatusError(remoteURL)
+// downloadRemote context from a url and returns it, along with the parsed content type
+func downloadRemote(remoteURL string) (string, io.ReadCloser, error) {
+ response, err := GetWithStatusError(remoteURL)
if err != nil {
- return nil, fmt.Errorf("error downloading remote context %s: %v", remoteURL, err)
- }
- defer f.Body.Close()
-
- var contextReader io.ReadCloser
- if contentTypeHandlers != nil {
- contentType := f.Header.Get("Content-Type")
- clen := f.ContentLength
-
- contentType, contextReader, err = inspectResponse(contentType, f.Body, clen)
- if err != nil {
- return nil, fmt.Errorf("error detecting content type for remote %s: %v", remoteURL, err)
- }
- defer contextReader.Close()
-
- // This loop tries to find a content-type handler for the detected content-type.
- // If it could not find one from the caller-supplied map, it tries the empty content-type `""`
- // which is interpreted as a fallback handler (usually used for raw tar contexts).
- for _, ct := range []string{contentType, ""} {
- if fn, ok := contentTypeHandlers[ct]; ok {
- defer contextReader.Close()
- if contextReader, err = fn(contextReader); err != nil {
- return nil, err
- }
- break
- }
- }
+ return "", nil, fmt.Errorf("error downloading remote context %s: %v", remoteURL, err)
}
- // Pass through - this is a pre-packaged context, presumably
- // with a Dockerfile with the right name inside it.
- return FromArchive(contextReader)
+ contentType, contextReader, err := inspectResponse(
+ response.Header.Get("Content-Type"),
+ response.Body,
+ response.ContentLength)
+ if err != nil {
+ response.Body.Close()
+ return "", nil, fmt.Errorf("error detecting content type for remote %s: %v", remoteURL, err)
+ }
+
+ return contentType, ioutils.NewReadCloserWrapper(contextReader, response.Body.Close), nil
}
// GetWithStatusError does an http.Get() and returns an error if the
@@ -110,7 +83,7 @@ func GetWithStatusError(address string) (resp *http.Response, err error) {
// - an io.Reader for the response body
// - an error value which will be non-nil either when something goes wrong while
// reading bytes from r or when the detected content-type is not acceptable.
-func inspectResponse(ct string, r io.Reader, clen int64) (string, io.ReadCloser, error) {
+func inspectResponse(ct string, r io.Reader, clen int64) (string, io.Reader, error) {
plen := clen
if plen <= 0 || plen > maxPreambleLength {
plen = maxPreambleLength
@@ -119,14 +92,14 @@ func inspectResponse(ct string, r io.Reader, clen int64) (string, io.ReadCloser,
preamble := make([]byte, plen)
rlen, err := r.Read(preamble)
if rlen == 0 {
- return ct, ioutil.NopCloser(r), errors.New("empty response")
+ return ct, r, errors.New("empty response")
}
if err != nil && err != io.EOF {
- return ct, ioutil.NopCloser(r), err
+ return ct, r, err
}
preambleR := bytes.NewReader(preamble[:rlen])
- bodyReader := ioutil.NopCloser(io.MultiReader(preambleR, r))
+ bodyReader := io.MultiReader(preambleR, r)
// Some web servers will use application/octet-stream as the default
// content type for files without an extension (e.g. 'Dockerfile')
// so if we receive this value we better check for text content
diff --git a/components/engine/builder/remotecontext/remote_test.go b/components/engine/builder/remotecontext/remote_test.go
index c38433b340..35b105f550 100644
--- a/components/engine/builder/remotecontext/remote_test.go
+++ b/components/engine/builder/remotecontext/remote_test.go
@@ -11,7 +11,7 @@ import (
"github.com/docker/docker/builder"
"github.com/docker/docker/internal/testutil"
- "github.com/docker/docker/pkg/archive"
+ "github.com/gotestyourself/gotestyourself/fs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -174,11 +174,10 @@ func TestUnknownContentLength(t *testing.T) {
}
}
-func TestMakeRemoteContext(t *testing.T) {
- contextDir, cleanup := createTestTempDir(t, "", "builder-tarsum-test")
- defer cleanup()
-
- createTestTempFile(t, contextDir, builder.DefaultDockerfileName, dockerfileContents, 0777)
+func TestDownloadRemote(t *testing.T) {
+ contextDir := fs.NewDir(t, "test-builder-download-remote",
+ fs.WithFile(builder.DefaultDockerfileName, dockerfileContents))
+ defer contextDir.Remove()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
@@ -187,39 +186,15 @@ func TestMakeRemoteContext(t *testing.T) {
serverURL.Path = "/" + builder.DefaultDockerfileName
remoteURL := serverURL.String()
- mux.Handle("/", http.FileServer(http.Dir(contextDir)))
+ mux.Handle("/", http.FileServer(http.Dir(contextDir.Path())))
- remoteContext, err := MakeRemoteContext(remoteURL, map[string]func(io.ReadCloser) (io.ReadCloser, error){
- mimeTypes.TextPlain: func(rc io.ReadCloser) (io.ReadCloser, error) {
- dockerfile, err := ioutil.ReadAll(rc)
- if err != nil {
- return nil, err
- }
+ contentType, content, err := downloadRemote(remoteURL)
+ require.NoError(t, err)
- r, err := archive.Generate(builder.DefaultDockerfileName, string(dockerfile))
- if err != nil {
- return nil, err
- }
- return ioutil.NopCloser(r), nil
- },
- })
-
- if err != nil {
- t.Fatalf("Error when executing DetectContextFromRemoteURL: %s", err)
- }
-
- if remoteContext == nil {
- t.Fatal("Remote context should not be nil")
- }
-
- h, err := remoteContext.Hash(builder.DefaultDockerfileName)
- if err != nil {
- t.Fatalf("failed to compute hash %s", err)
- }
-
- if expected, actual := "7b6b6b66bee9e2102fbdc2228be6c980a2a23adf371962a37286a49f7de0f7cc", h; expected != actual {
- t.Fatalf("There should be file named %s %s in fileInfoSums", expected, actual)
- }
+ assert.Equal(t, mimeTypes.TextPlain, contentType)
+ raw, err := ioutil.ReadAll(content)
+ require.NoError(t, err)
+ assert.Equal(t, dockerfileContents, string(raw))
}
func TestGetWithStatusError(t *testing.T) {
diff --git a/components/engine/builder/remotecontext/tarsum_test.go b/components/engine/builder/remotecontext/tarsum_test.go
index 6d2b41d3d4..9395460916 100644
--- a/components/engine/builder/remotecontext/tarsum_test.go
+++ b/components/engine/builder/remotecontext/tarsum_test.go
@@ -104,17 +104,6 @@ func TestHashSubdir(t *testing.T) {
}
}
-func TestStatNotExisting(t *testing.T) {
- contextDir, cleanup := createTestTempDir(t, "", "builder-tarsum-test")
- defer cleanup()
-
- src := makeTestArchiveContext(t, contextDir)
- _, err := src.Hash("not-existing")
- if !os.IsNotExist(errors.Cause(err)) {
- t.Fatalf("This file should not exist: %s", err)
- }
-}
-
func TestRemoveDirectory(t *testing.T) {
contextDir, cleanup := createTestTempDir(t, "", "builder-tarsum-test")
defer cleanup()
@@ -129,17 +118,20 @@ func TestRemoveDirectory(t *testing.T) {
src := makeTestArchiveContext(t, contextDir)
- tarSum := src.(modifiableContext)
+ _, err = src.Root().Stat(src.Root().Join(src.Root().Path(), relativePath))
+ if err != nil {
+ t.Fatalf("Statting %s shouldn't fail: %+v", relativePath, err)
+ }
+ tarSum := src.(modifiableContext)
err = tarSum.Remove(relativePath)
if err != nil {
t.Fatalf("Error when executing Remove: %s", err)
}
- _, err = src.Hash(contextSubdir)
-
+ _, err = src.Root().Stat(src.Root().Join(src.Root().Path(), relativePath))
if !os.IsNotExist(errors.Cause(err)) {
- t.Fatal("Directory should not exist at this point")
+ t.Fatalf("Directory should not exist at this point: %+v ", err)
}
}
diff --git a/components/engine/client/client_unix.go b/components/engine/client/client_unix.go
index 89de892c85..eba8d909a9 100644
--- a/components/engine/client/client_unix.go
+++ b/components/engine/client/client_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris openbsd darwin
+// +build linux freebsd openbsd darwin
package client
diff --git a/components/engine/client/container_logs.go b/components/engine/client/container_logs.go
index 0f32e9f12b..35c297c5fb 100644
--- a/components/engine/client/container_logs.go
+++ b/components/engine/client/container_logs.go
@@ -51,6 +51,14 @@ func (cli *Client) ContainerLogs(ctx context.Context, container string, options
query.Set("since", ts)
}
+ if options.Until != "" {
+ ts, err := timetypes.GetTimestamp(options.Until, time.Now())
+ if err != nil {
+ return nil, err
+ }
+ query.Set("until", ts)
+ }
+
if options.Timestamps {
query.Set("timestamps", "1")
}
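With the `until` query parameter wired through the client above, callers can bound log output on both ends. A minimal sketch of how a program might use the new option; the container name, the timestamps, and the use of `client.NewEnvClient` are illustrative assumptions:

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	// Request logs written between two points in time; Until maps onto the
	// new "until" query parameter.
	rc, err := cli.ContainerLogs(context.Background(), "some-container", types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Since:      "2017-11-01T00:00:00Z",
		Until:      "2017-11-02T00:00:00Z",
	})
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	// Note: for non-TTY containers the stream is multiplexed; stdcopy.StdCopy
	// from github.com/docker/docker/pkg/stdcopy can demultiplex it.
	io.Copy(os.Stdout, rc)
}
```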
diff --git a/components/engine/client/container_logs_test.go b/components/engine/client/container_logs_test.go
index 99e31842c9..8cb7635120 100644
--- a/components/engine/client/container_logs_test.go
+++ b/components/engine/client/container_logs_test.go
@@ -13,6 +13,7 @@ import (
"time"
"github.com/docker/docker/api/types"
+ "github.com/docker/docker/internal/testutil"
"golang.org/x/net/context"
)
@@ -28,9 +29,11 @@ func TestContainerLogsError(t *testing.T) {
_, err = client.ContainerLogs(context.Background(), "container_id", types.ContainerLogsOptions{
Since: "2006-01-02TZ",
})
- if err == nil || !strings.Contains(err.Error(), `parsing time "2006-01-02TZ"`) {
- t.Fatalf("expected a 'parsing time' error, got %v", err)
- }
+ testutil.ErrorContains(t, err, `parsing time "2006-01-02TZ"`)
+ _, err = client.ContainerLogs(context.Background(), "container_id", types.ContainerLogsOptions{
+ Until: "2006-01-02TZ",
+ })
+ testutil.ErrorContains(t, err, `parsing time "2006-01-02TZ"`)
}
func TestContainerLogs(t *testing.T) {
@@ -80,6 +83,17 @@ func TestContainerLogs(t *testing.T) {
"since": "invalid but valid",
},
},
+ {
+ options: types.ContainerLogsOptions{
+ // A completely invalid date, timestamp or go duration will be
+ // passed as is
+ Until: "invalid but valid",
+ },
+ expectedQueryParams: map[string]string{
+ "tail": "",
+ "until": "invalid but valid",
+ },
+ },
}
for _, logCase := range cases {
client := &Client{
diff --git a/components/engine/cmd/dockerd/config.go b/components/engine/cmd/dockerd/config.go
index f142b7538c..b9d586a4ba 100644
--- a/components/engine/cmd/dockerd/config.go
+++ b/components/engine/cmd/dockerd/config.go
@@ -65,7 +65,8 @@ func installCommonConfigFlags(conf *config.Config, flags *pflag.FlagSet) {
flags.StringVar(&conf.MetricsAddress, "metrics-addr", "", "Set default address and port to serve the metrics api on")
- flags.StringVar(&conf.NodeGenericResources, "node-generic-resources", "", "user defined resources (e.g. fpga=2;gpu={UUID1,UUID2,UUID3})")
+ flags.Var(opts.NewListOptsRef(&conf.NodeGenericResources, opts.ValidateSingleGenericResource), "node-generic-resource", "Advertise user-defined resource")
+
flags.IntVar(&conf.NetworkControlPlaneMTU, "network-control-plane-mtu", config.DefaultNetworkMtu, "Network Control plane MTU")
// "--deprecated-key-path" is to allow configuration of the key used
diff --git a/components/engine/cmd/dockerd/config_common_unix.go b/components/engine/cmd/dockerd/config_common_unix.go
index b29307b596..febf30ae9f 100644
--- a/components/engine/cmd/dockerd/config_common_unix.go
+++ b/components/engine/cmd/dockerd/config_common_unix.go
@@ -1,4 +1,4 @@
-// +build solaris linux freebsd
+// +build linux freebsd
package main
diff --git a/components/engine/cmd/dockerd/config_unix.go b/components/engine/cmd/dockerd/config_unix.go
index dcc7dc5e81..b3bd741c95 100644
--- a/components/engine/cmd/dockerd/config_unix.go
+++ b/components/engine/cmd/dockerd/config_unix.go
@@ -1,4 +1,4 @@
-// +build linux,!solaris freebsd,!solaris
+// +build linux freebsd
package main
diff --git a/components/engine/cmd/dockerd/config_unix_test.go b/components/engine/cmd/dockerd/config_unix_test.go
index 588ac19fbd..2705d671ba 100644
--- a/components/engine/cmd/dockerd/config_unix_test.go
+++ b/components/engine/cmd/dockerd/config_unix_test.go
@@ -1,4 +1,4 @@
-// +build linux,!solaris freebsd,!solaris
+// +build linux freebsd
package main
diff --git a/components/engine/cmd/dockerd/daemon.go b/components/engine/cmd/dockerd/daemon.go
index 44e16677e7..02a03141df 100644
--- a/components/engine/cmd/dockerd/daemon.go
+++ b/components/engine/cmd/dockerd/daemon.go
@@ -480,22 +480,12 @@ func loadDaemonCliConfig(opts *daemonOptions) (*config.Config, error) {
logrus.Warnf(`The "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
}
- // Labels of the docker engine used to allow multiple values associated with the same key.
- // This is deprecated in 1.13, and, be removed after 3 release cycles.
- // The following will check the conflict of labels, and report a warning for deprecation.
- //
- // TODO: After 3 release cycles (17.12) an error will be returned, and labels will be
- // sanitized to consolidate duplicate key-value pairs (config.Labels = newLabels):
- //
- // newLabels, err := daemon.GetConflictFreeLabels(config.Labels)
- // if err != nil {
- // return nil, err
- // }
- // config.Labels = newLabels
- //
- if _, err := config.GetConflictFreeLabels(conf.Labels); err != nil {
- logrus.Warnf("Engine labels with duplicate keys and conflicting values have been deprecated: %s", err)
+ // Check if duplicate label-keys with different values are found
+ newLabels, err := config.GetConflictFreeLabels(conf.Labels)
+ if err != nil {
+ return nil, err
}
+ conf.Labels = newLabels
// Regardless of whether the user sets it to true or false, if they
// specify TLSVerify at all then we need to turn on TLS
diff --git a/components/engine/cmd/dockerd/daemon_test.go b/components/engine/cmd/dockerd/daemon_test.go
index c559ee82a5..e5e4aa34e2 100644
--- a/components/engine/cmd/dockerd/daemon_test.go
+++ b/components/engine/cmd/dockerd/daemon_test.go
@@ -61,6 +61,28 @@ func TestLoadDaemonCliConfigWithConflicts(t *testing.T) {
testutil.ErrorContains(t, err, "as a flag and in the configuration file: labels")
}
+func TestLoadDaemonCliWithConflictingLabels(t *testing.T) {
+ opts := defaultOptions("")
+ flags := opts.flags
+
+ assert.NoError(t, flags.Set("label", "foo=bar"))
+ assert.NoError(t, flags.Set("label", "foo=baz"))
+
+ _, err := loadDaemonCliConfig(opts)
+ assert.EqualError(t, err, "conflict labels for foo=baz and foo=bar")
+}
+
+func TestLoadDaemonCliWithDuplicateLabels(t *testing.T) {
+ opts := defaultOptions("")
+ flags := opts.flags
+
+ assert.NoError(t, flags.Set("label", "foo=the-same"))
+ assert.NoError(t, flags.Set("label", "foo=the-same"))
+
+ _, err := loadDaemonCliConfig(opts)
+ assert.NoError(t, err)
+}
+
func TestLoadDaemonCliConfigWithTLSVerify(t *testing.T) {
tempFile := fs.NewFile(t, "config", fs.WithContent(`{"tlsverify": true}`))
defer tempFile.Remove()
diff --git a/components/engine/cmd/dockerd/daemon_unix.go b/components/engine/cmd/dockerd/daemon_unix.go
index 324b299e18..41e6b61ffa 100644
--- a/components/engine/cmd/dockerd/daemon_unix.go
+++ b/components/engine/cmd/dockerd/daemon_unix.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package main
diff --git a/components/engine/cmd/dockerd/daemon_unix_test.go b/components/engine/cmd/dockerd/daemon_unix_test.go
index 5d99e51053..475ff9efa7 100644
--- a/components/engine/cmd/dockerd/daemon_unix_test.go
+++ b/components/engine/cmd/dockerd/daemon_unix_test.go
@@ -1,7 +1,4 @@
-// +build !windows,!solaris
-
-// TODO: Create new file for Solaris which tests config parameters
-// as described in daemon/config_solaris.go
+// +build !windows
package main
diff --git a/components/engine/container/container_notlinux.go b/components/engine/container/container_notlinux.go
index 768c762d2f..246a146f0f 100644
--- a/components/engine/container/container_notlinux.go
+++ b/components/engine/container/container_notlinux.go
@@ -1,4 +1,4 @@
-// +build solaris freebsd
+// +build freebsd
package container
@@ -7,7 +7,7 @@ import (
)
func detachMounted(path string) error {
- //Solaris and FreeBSD do not support the lazy unmount or MNT_DETACH feature.
+	// FreeBSD does not support the lazy unmount or MNT_DETACH feature.
// Therefore there are separate definitions for this.
return unix.Unmount(path, 0)
}
diff --git a/components/engine/container/container_unix.go b/components/engine/container/container_unix.go
index 98042f1308..f030232060 100644
--- a/components/engine/container/container_unix.go
+++ b/components/engine/container/container_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package container
@@ -65,12 +65,11 @@ func (container *Container) NetworkMounts() []Mount {
if _, err := os.Stat(container.ResolvConfPath); err != nil {
logrus.Warnf("ResolvConfPath set to %q, but can't stat this filename (err = %v); skipping", container.ResolvConfPath, err)
} else {
- if !container.HasMountFor("/etc/resolv.conf") {
- label.Relabel(container.ResolvConfPath, container.MountLabel, shared)
- }
writable := !container.HostConfig.ReadonlyRootfs
if m, exists := container.MountPoints["/etc/resolv.conf"]; exists {
writable = m.RW
+ } else {
+ label.Relabel(container.ResolvConfPath, container.MountLabel, shared)
}
mounts = append(mounts, Mount{
Source: container.ResolvConfPath,
@@ -84,12 +83,11 @@ func (container *Container) NetworkMounts() []Mount {
if _, err := os.Stat(container.HostnamePath); err != nil {
logrus.Warnf("HostnamePath set to %q, but can't stat this filename (err = %v); skipping", container.HostnamePath, err)
} else {
- if !container.HasMountFor("/etc/hostname") {
- label.Relabel(container.HostnamePath, container.MountLabel, shared)
- }
writable := !container.HostConfig.ReadonlyRootfs
if m, exists := container.MountPoints["/etc/hostname"]; exists {
writable = m.RW
+ } else {
+ label.Relabel(container.HostnamePath, container.MountLabel, shared)
}
mounts = append(mounts, Mount{
Source: container.HostnamePath,
@@ -103,12 +101,11 @@ func (container *Container) NetworkMounts() []Mount {
if _, err := os.Stat(container.HostsPath); err != nil {
logrus.Warnf("HostsPath set to %q, but can't stat this filename (err = %v); skipping", container.HostsPath, err)
} else {
- if !container.HasMountFor("/etc/hosts") {
- label.Relabel(container.HostsPath, container.MountLabel, shared)
- }
writable := !container.HostConfig.ReadonlyRootfs
if m, exists := container.MountPoints["/etc/hosts"]; exists {
writable = m.RW
+ } else {
+ label.Relabel(container.HostsPath, container.MountLabel, shared)
}
mounts = append(mounts, Mount{
Source: container.HostsPath,
@@ -160,7 +157,18 @@ func (container *Container) ShmResourcePath() (string, error) {
// HasMountFor checks if path is a mountpoint
func (container *Container) HasMountFor(path string) bool {
_, exists := container.MountPoints[path]
- return exists
+ if exists {
+ return true
+ }
+
+ // Also search among the tmpfs mounts
+ for dest := range container.HostConfig.Tmpfs {
+ if dest == path {
+ return true
+ }
+ }
+
+ return false
}
// UnmountIpcMount uses the provided unmount function to unmount shm if it was mounted
@@ -324,6 +332,12 @@ func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfi
if resources.KernelMemory != 0 {
cResources.KernelMemory = resources.KernelMemory
}
+ if resources.CPURealtimePeriod != 0 {
+ cResources.CPURealtimePeriod = resources.CPURealtimePeriod
+ }
+ if resources.CPURealtimeRuntime != 0 {
+ cResources.CPURealtimeRuntime = resources.CPURealtimeRuntime
+ }
// update HostConfig of container
if hostConfig.RestartPolicy.Name != "" {
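The HasMountFor change above means a tmpfs declared in HostConfig.Tmpfs now counts as a user-supplied mount, so /etc/resolv.conf, /etc/hostname, and /etc/hosts are only relabelled when the user has not mounted something over them. A minimal, self-contained sketch of that semantics (types simplified, names hypothetical):

```go
// Simplified illustration of the new HasMountFor behaviour: a path is "mounted"
// if it is either a regular mount point or a tmpfs destination.
package main

import "fmt"

type container struct {
	MountPoints map[string]struct{}
	Tmpfs       map[string]string // destination -> mount options
}

func (c *container) hasMountFor(path string) bool {
	if _, ok := c.MountPoints[path]; ok {
		return true
	}
	// Also search among the tmpfs mounts, as the patch now does.
	for dest := range c.Tmpfs {
		if dest == path {
			return true
		}
	}
	return false
}

func main() {
	c := &container{Tmpfs: map[string]string{"/etc/resolv.conf": "size=64k"}}
	fmt.Println(c.hasMountFor("/etc/resolv.conf")) // true, thanks to the tmpfs check
}
```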
diff --git a/components/engine/container/health.go b/components/engine/container/health.go
index 5919008d27..6f75012fe1 100644
--- a/components/engine/container/health.go
+++ b/components/engine/container/health.go
@@ -1,6 +1,8 @@
package container
import (
+ "sync"
+
"github.com/docker/docker/api/types"
"github.com/sirupsen/logrus"
)
@@ -9,26 +11,53 @@ import (
type Health struct {
types.Health
stop chan struct{} // Write struct{} to stop the monitor
+ mu sync.Mutex
}
// String returns a human-readable description of the health-check state
func (s *Health) String() string {
- // This happens when the monitor has yet to be setup.
- if s.Status == "" {
- return types.Unhealthy
- }
+ status := s.Status()
- switch s.Status {
+ switch status {
case types.Starting:
return "health: starting"
default: // Healthy and Unhealthy are clear on their own
- return s.Status
+ return s.Health.Status
}
}
-// OpenMonitorChannel creates and returns a new monitor channel. If there already is one,
-// it returns nil.
+// Status returns the current health status.
+//
+// Note that this takes a lock and the value may change after being read.
+func (s *Health) Status() string {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+
+	// This happens when the monitor has yet to be set up.
+ if s.Health.Status == "" {
+ return types.Unhealthy
+ }
+
+ return s.Health.Status
+}
+
+// SetStatus writes the given status to the underlying health structure,
+// obeying the locking semantics.
+//
+// Status may be set directly if another lock is used.
+func (s *Health) SetStatus(new string) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+
+ s.Health.Status = new
+}
+
+// OpenMonitorChannel creates and returns a new monitor channel. If there
+// already is one, it returns nil.
func (s *Health) OpenMonitorChannel() chan struct{} {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+
if s.stop == nil {
logrus.Debug("OpenMonitorChannel")
s.stop = make(chan struct{})
@@ -39,12 +68,15 @@ func (s *Health) OpenMonitorChannel() chan struct{} {
// CloseMonitorChannel closes any existing monitor channel.
func (s *Health) CloseMonitorChannel() {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+
if s.stop != nil {
logrus.Debug("CloseMonitorChannel: waiting for probe to stop")
close(s.stop)
s.stop = nil
// unhealthy when the monitor has stopped for compatibility reasons
- s.Status = types.Unhealthy
+ s.Health.Status = types.Unhealthy
logrus.Debug("CloseMonitorChannel done")
}
}
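With this change the Health type serialises access to the embedded status through a mutex, so callers are expected to go through Status() and SetStatus() rather than reading the field directly. A minimal sketch of the same pattern, independent of the moby packages:

```go
// Minimal sketch of the locking pattern the patch introduces: the status string
// is only touched through accessor methods that hold the mutex.
package main

import (
	"fmt"
	"sync"
)

type health struct {
	mu     sync.Mutex
	status string
}

// Status returns the current value; an empty value is reported as "unhealthy",
// matching the behaviour before the monitor has been set up.
func (h *health) Status() string {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.status == "" {
		return "unhealthy"
	}
	return h.status
}

func (h *health) SetStatus(s string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.status = s
}

func main() {
	h := &health{}
	go h.SetStatus("healthy")  // safe concurrent write
	fmt.Println(h.Status())    // "unhealthy" or "healthy", but never a data race
}
```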
diff --git a/components/engine/contrib/builder/deb/aarch64/debian-jessie/Dockerfile b/components/engine/contrib/builder/deb/aarch64/debian-jessie/Dockerfile
index 865d6aaeae..a0a6612b7e 100644
--- a/components/engine/contrib/builder/deb/aarch64/debian-jessie/Dockerfile
+++ b/components/engine/contrib/builder/deb/aarch64/debian-jessie/Dockerfile
@@ -7,7 +7,7 @@ FROM aarch64/debian:jessie
RUN echo deb http://ftp.debian.org/debian jessie-backports main > /etc/apt/sources.list.d/backports.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev libseccomp-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-arm64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/aarch64/debian-stretch/Dockerfile b/components/engine/contrib/builder/deb/aarch64/debian-stretch/Dockerfile
index 2b561bee78..90525dbb12 100644
--- a/components/engine/contrib/builder/deb/aarch64/debian-stretch/Dockerfile
+++ b/components/engine/contrib/builder/deb/aarch64/debian-stretch/Dockerfile
@@ -6,7 +6,7 @@ FROM aarch64/debian:stretch
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-dev libseccomp-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-arm64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/aarch64/ubuntu-trusty/Dockerfile b/components/engine/contrib/builder/deb/aarch64/ubuntu-trusty/Dockerfile
index e1b85ec747..5d2d58ea12 100644
--- a/components/engine/contrib/builder/deb/aarch64/ubuntu-trusty/Dockerfile
+++ b/components/engine/contrib/builder/deb/aarch64/ubuntu-trusty/Dockerfile
@@ -6,7 +6,7 @@ FROM aarch64/ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-arm64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/aarch64/ubuntu-xenial/Dockerfile b/components/engine/contrib/builder/deb/aarch64/ubuntu-xenial/Dockerfile
index 6f8bc95b1b..cb9b681d42 100644
--- a/components/engine/contrib/builder/deb/aarch64/ubuntu-xenial/Dockerfile
+++ b/components/engine/contrib/builder/deb/aarch64/ubuntu-xenial/Dockerfile
@@ -6,7 +6,7 @@ FROM aarch64/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-dev libseccomp-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-arm64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/debian-jessie/Dockerfile b/components/engine/contrib/builder/deb/amd64/debian-jessie/Dockerfile
index 668fc3c1f4..5b8b7852f9 100644
--- a/components/engine/contrib/builder/deb/amd64/debian-jessie/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/debian-jessie/Dockerfile
@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/debian-stretch/Dockerfile b/components/engine/contrib/builder/deb/amd64/debian-stretch/Dockerfile
index a23eba39d1..aa96240b9a 100644
--- a/components/engine/contrib/builder/deb/amd64/debian-stretch/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/debian-stretch/Dockerfile
@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/debian-wheezy/Dockerfile b/components/engine/contrib/builder/deb/amd64/debian-wheezy/Dockerfile
index 2652706f4b..2f4e051af9 100644
--- a/components/engine/contrib/builder/deb/amd64/debian-wheezy/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/debian-wheezy/Dockerfile
@@ -12,7 +12,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list.d
RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/ubuntu-trusty/Dockerfile b/components/engine/contrib/builder/deb/amd64/ubuntu-trusty/Dockerfile
index 4fce6f31f1..a0105663e4 100644
--- a/components/engine/contrib/builder/deb/amd64/ubuntu-trusty/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/ubuntu-trusty/Dockerfile
@@ -6,7 +6,7 @@ FROM ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/ubuntu-xenial/Dockerfile b/components/engine/contrib/builder/deb/amd64/ubuntu-xenial/Dockerfile
index ed9c4a9f36..e2768c33d2 100644
--- a/components/engine/contrib/builder/deb/amd64/ubuntu-xenial/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/ubuntu-xenial/Dockerfile
@@ -6,7 +6,7 @@ FROM ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/ubuntu-yakkety/Dockerfile b/components/engine/contrib/builder/deb/amd64/ubuntu-yakkety/Dockerfile
index a7dd9b7228..419522c138 100644
--- a/components/engine/contrib/builder/deb/amd64/ubuntu-yakkety/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/ubuntu-yakkety/Dockerfile
@@ -6,7 +6,7 @@ FROM ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/amd64/ubuntu-zesty/Dockerfile b/components/engine/contrib/builder/deb/amd64/ubuntu-zesty/Dockerfile
index 5074efe2bb..98314f1ed7 100644
--- a/components/engine/contrib/builder/deb/amd64/ubuntu-zesty/Dockerfile
+++ b/components/engine/contrib/builder/deb/amd64/ubuntu-zesty/Dockerfile
@@ -6,7 +6,7 @@ FROM ubuntu:zesty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/armhf/debian-jessie/Dockerfile b/components/engine/contrib/builder/deb/armhf/debian-jessie/Dockerfile
index 558d353937..048e7747d8 100644
--- a/components/engine/contrib/builder/deb/armhf/debian-jessie/Dockerfile
+++ b/components/engine/contrib/builder/deb/armhf/debian-jessie/Dockerfile
@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/armhf/raspbian-jessie/Dockerfile b/components/engine/contrib/builder/deb/armhf/raspbian-jessie/Dockerfile
index 31a6688d1b..c80a3f6a44 100644
--- a/components/engine/contrib/builder/deb/armhf/raspbian-jessie/Dockerfile
+++ b/components/engine/contrib/builder/deb/armhf/raspbian-jessie/Dockerfile
@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
# GOARM is the ARM architecture version which is unrelated to the above Golang version
ENV GOARM 6
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
diff --git a/components/engine/contrib/builder/deb/armhf/ubuntu-trusty/Dockerfile b/components/engine/contrib/builder/deb/armhf/ubuntu-trusty/Dockerfile
index a9899a06db..b6fc393aae 100644
--- a/components/engine/contrib/builder/deb/armhf/ubuntu-trusty/Dockerfile
+++ b/components/engine/contrib/builder/deb/armhf/ubuntu-trusty/Dockerfile
@@ -6,7 +6,7 @@ FROM armhf/ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/armhf/ubuntu-xenial/Dockerfile b/components/engine/contrib/builder/deb/armhf/ubuntu-xenial/Dockerfile
index 2766f330bc..cc9284ffdc 100644
--- a/components/engine/contrib/builder/deb/armhf/ubuntu-xenial/Dockerfile
+++ b/components/engine/contrib/builder/deb/armhf/ubuntu-xenial/Dockerfile
@@ -6,7 +6,7 @@ FROM armhf/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/armhf/ubuntu-yakkety/Dockerfile b/components/engine/contrib/builder/deb/armhf/ubuntu-yakkety/Dockerfile
index 27edd04d30..57a77acd73 100644
--- a/components/engine/contrib/builder/deb/armhf/ubuntu-yakkety/Dockerfile
+++ b/components/engine/contrib/builder/deb/armhf/ubuntu-yakkety/Dockerfile
@@ -6,7 +6,7 @@ FROM armhf/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/ppc64le/ubuntu-trusty/Dockerfile b/components/engine/contrib/builder/deb/ppc64le/ubuntu-trusty/Dockerfile
index b85a68e4e8..d29ac5138e 100644
--- a/components/engine/contrib/builder/deb/ppc64le/ubuntu-trusty/Dockerfile
+++ b/components/engine/contrib/builder/deb/ppc64le/ubuntu-trusty/Dockerfile
@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/ppc64le/ubuntu-xenial/Dockerfile b/components/engine/contrib/builder/deb/ppc64le/ubuntu-xenial/Dockerfile
index abb5b23133..730bacb079 100644
--- a/components/engine/contrib/builder/deb/ppc64le/ubuntu-xenial/Dockerfile
+++ b/components/engine/contrib/builder/deb/ppc64le/ubuntu-xenial/Dockerfile
@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/ppc64le/ubuntu-yakkety/Dockerfile b/components/engine/contrib/builder/deb/ppc64le/ubuntu-yakkety/Dockerfile
index d72581659e..27cfd292a0 100644
--- a/components/engine/contrib/builder/deb/ppc64le/ubuntu-yakkety/Dockerfile
+++ b/components/engine/contrib/builder/deb/ppc64le/ubuntu-yakkety/Dockerfile
@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/s390x/ubuntu-xenial/Dockerfile b/components/engine/contrib/builder/deb/s390x/ubuntu-xenial/Dockerfile
index 6d61ed7f5e..2233897375 100644
--- a/components/engine/contrib/builder/deb/s390x/ubuntu-xenial/Dockerfile
+++ b/components/engine/contrib/builder/deb/s390x/ubuntu-xenial/Dockerfile
@@ -6,7 +6,7 @@ FROM s390x/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/deb/s390x/ubuntu-yakkety/Dockerfile b/components/engine/contrib/builder/deb/s390x/ubuntu-yakkety/Dockerfile
index e30e875047..b2a0cf5735 100644
--- a/components/engine/contrib/builder/deb/s390x/ubuntu-yakkety/Dockerfile
+++ b/components/engine/contrib/builder/deb/s390x/ubuntu-yakkety/Dockerfile
@@ -6,7 +6,7 @@ FROM s390x/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/amazonlinux-latest/Dockerfile b/components/engine/contrib/builder/rpm/amd64/amazonlinux-latest/Dockerfile
index 8e755cd0a6..1f0a0b7c09 100644
--- a/components/engine/contrib/builder/rpm/amd64/amazonlinux-latest/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/amazonlinux-latest/Dockerfile
@@ -5,9 +5,9 @@
FROM amazonlinux:latest
RUN yum groupinstall -y "Development Tools"
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/centos-7/Dockerfile b/components/engine/contrib/builder/rpm/amd64/centos-7/Dockerfile
index 8a220649ea..5767b8cd07 100644
--- a/components/engine/contrib/builder/rpm/amd64/centos-7/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/centos-7/Dockerfile
@@ -6,9 +6,9 @@ FROM centos:7
RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/fedora-24/Dockerfile b/components/engine/contrib/builder/rpm/amd64/fedora-24/Dockerfile
index e0b369aa4b..1e075238c2 100644
--- a/components/engine/contrib/builder/rpm/amd64/fedora-24/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/fedora-24/Dockerfile
@@ -6,9 +6,9 @@ FROM fedora:24
RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/fedora-25/Dockerfile b/components/engine/contrib/builder/rpm/amd64/fedora-25/Dockerfile
index f259a5cea3..23e24638ec 100644
--- a/components/engine/contrib/builder/rpm/amd64/fedora-25/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/fedora-25/Dockerfile
@@ -6,9 +6,9 @@ FROM fedora:25
RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/opensuse-13.2/Dockerfile b/components/engine/contrib/builder/rpm/amd64/opensuse-13.2/Dockerfile
index 7f13863992..b55bfee8ed 100644
--- a/components/engine/contrib/builder/rpm/amd64/opensuse-13.2/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/opensuse-13.2/Dockerfile
@@ -5,9 +5,9 @@
FROM opensuse:13.2
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
-RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim systemd-rpm-macros
+RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim systemd-rpm-macros
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/oraclelinux-6/Dockerfile b/components/engine/contrib/builder/rpm/amd64/oraclelinux-6/Dockerfile
index b75f2dc3e1..1b395c2c68 100644
--- a/components/engine/contrib/builder/rpm/amd64/oraclelinux-6/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/oraclelinux-6/Dockerfile
@@ -8,9 +8,9 @@ RUN yum install -y yum-utils && curl -o /etc/yum.repos.d/public-yum-ol6.repo htt
RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
RUN yum groupinstall -y "Development Tools"
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/oraclelinux-7/Dockerfile b/components/engine/contrib/builder/rpm/amd64/oraclelinux-7/Dockerfile
index f4dc894f16..ec6db986ba 100644
--- a/components/engine/contrib/builder/rpm/amd64/oraclelinux-7/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/oraclelinux-7/Dockerfile
@@ -5,9 +5,9 @@
FROM oraclelinux:7
RUN yum groupinstall -y "Development Tools"
-RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
+RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/amd64/photon-1.0/Dockerfile b/components/engine/contrib/builder/rpm/amd64/photon-1.0/Dockerfile
index 01d5fea10d..be9463c213 100644
--- a/components/engine/contrib/builder/rpm/amd64/photon-1.0/Dockerfile
+++ b/components/engine/contrib/builder/rpm/amd64/photon-1.0/Dockerfile
@@ -5,9 +5,9 @@
FROM photon:1.0
RUN tdnf install -y wget curl ca-certificates gzip make rpm-build sed gcc linux-api-headers glibc-devel binutils libseccomp elfutils
-RUN tdnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
+RUN tdnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/armhf/centos-7/Dockerfile b/components/engine/contrib/builder/rpm/armhf/centos-7/Dockerfile
index 79c2ef1626..8e77e8b098 100644
--- a/components/engine/contrib/builder/rpm/armhf/centos-7/Dockerfile
+++ b/components/engine/contrib/builder/rpm/armhf/centos-7/Dockerfile
@@ -7,9 +7,9 @@ FROM multiarch/centos:7.2.1511-armhfp-clean
RUN yum install -y yum-plugin-ovl
RUN yum groupinstall --skip-broken -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/ppc64le/centos-7/Dockerfile b/components/engine/contrib/builder/rpm/ppc64le/centos-7/Dockerfile
index ebce3c0229..8bcc4be8cc 100644
--- a/components/engine/contrib/builder/rpm/ppc64le/centos-7/Dockerfile
+++ b/components/engine/contrib/builder/rpm/ppc64le/centos-7/Dockerfile
@@ -6,10 +6,10 @@ FROM ppc64le/centos:7
RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
-RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
+ENV GO_VERSION 1.9.2
+RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
ENV AUTO_GOPATH 1
diff --git a/components/engine/contrib/builder/rpm/ppc64le/fedora-24/Dockerfile b/components/engine/contrib/builder/rpm/ppc64le/fedora-24/Dockerfile
index 18dd7d4de0..32321fe9c4 100644
--- a/components/engine/contrib/builder/rpm/ppc64le/fedora-24/Dockerfile
+++ b/components/engine/contrib/builder/rpm/ppc64le/fedora-24/Dockerfile
@@ -6,9 +6,9 @@ FROM ppc64le/fedora:24
RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/ppc64le/opensuse-42.1/Dockerfile b/components/engine/contrib/builder/rpm/ppc64le/opensuse-42.1/Dockerfile
index 3343f021c4..04f158cc41 100644
--- a/components/engine/contrib/builder/rpm/ppc64le/opensuse-42.1/Dockerfile
+++ b/components/engine/contrib/builder/rpm/ppc64le/opensuse-42.1/Dockerfile
@@ -7,10 +7,10 @@ FROM ppc64le/opensuse:42.1
RUN zypper addrepo -n ppc64le-oss -f https://download.opensuse.org/ports/ppc/distribution/leap/42.1/repo/oss/ ppc64le-oss
RUN zypper addrepo -n ppc64le-updates -f https://download.opensuse.org/ports/update/42.1/ ppc64le-updates
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
-RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim
+RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim
-ENV GO_VERSION 1.8.5
-RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
+ENV GO_VERSION 1.9.2
+RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
ENV AUTO_GOPATH 1
diff --git a/components/engine/contrib/builder/rpm/s390x/clefos-base-s390x-7/Dockerfile b/components/engine/contrib/builder/rpm/s390x/clefos-base-s390x-7/Dockerfile
index ef875b8fc4..27195d3b4a 100644
--- a/components/engine/contrib/builder/rpm/s390x/clefos-base-s390x-7/Dockerfile
+++ b/components/engine/contrib/builder/rpm/s390x/clefos-base-s390x-7/Dockerfile
@@ -6,9 +6,9 @@ FROM sinenomine/clefos-base-s390x
RUN touch /var/lib/rpm/* && yum groupinstall -y "Development Tools"
-RUN touch /var/lib/rpm/* && yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
+RUN touch /var/lib/rpm/* && yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/builder/rpm/s390x/opensuse-tumbleweed-1/Dockerfile b/components/engine/contrib/builder/rpm/s390x/opensuse-tumbleweed-1/Dockerfile
index a05bee5096..275fab5e93 100644
--- a/components/engine/contrib/builder/rpm/s390x/opensuse-tumbleweed-1/Dockerfile
+++ b/components/engine/contrib/builder/rpm/s390x/opensuse-tumbleweed-1/Dockerfile
@@ -7,9 +7,9 @@ FROM opensuse/s390x:tumbleweed
RUN zypper ar https://download.opensuse.org/ports/zsystems/tumbleweed/repo/oss/ tumbleweed
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
-RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim systemd-rpm-macros
+RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim systemd-rpm-macros
-ENV GO_VERSION 1.8.5
+ENV GO_VERSION 1.9.2
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin
diff --git a/components/engine/contrib/docker-device-tool/device_tool.go b/components/engine/contrib/docker-device-tool/device_tool.go
index 905b689581..d3ec46a8b4 100644
--- a/components/engine/contrib/docker-device-tool/device_tool.go
+++ b/components/engine/contrib/docker-device-tool/device_tool.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package main
diff --git a/components/engine/contrib/mkimage.sh b/components/engine/contrib/mkimage.sh
index 13298c8036..ae05d139c3 100755
--- a/components/engine/contrib/mkimage.sh
+++ b/components/engine/contrib/mkimage.sh
@@ -11,7 +11,6 @@ usage() {
echo >&2 " $mkimg -t someuser/centos:5 rinse --distribution centos-5"
echo >&2 " $mkimg -t someuser/mageia:4 mageia-urpmi --version=4"
echo >&2 " $mkimg -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/"
- echo >&2 " $mkimg -t someuser/solaris solaris"
exit 1
}
@@ -20,13 +19,6 @@ scriptDir="$(dirname "$(readlink -f "$BASH_SOURCE")")/mkimage"
os=
os=$(uname -o)
-# set up path to gnu tools if solaris
-[[ $os == "Solaris" ]] && export PATH=/usr/gnu/bin:$PATH
-# TODO check for gnu-tar, gnu-getopt
-
-# TODO requires root/sudo due to some pkg operations. sigh.
-[[ $os == "Solaris" && $EUID != "0" ]] && echo >&2 "image create on Solaris requires superuser privilege"
-
optTemp=$(getopt --options '+d:t:c:hC' --longoptions 'dir:,tag:,compression:,no-compression,help' --name "$mkimg" -- "$@")
eval set -- "$optTemp"
unset optTemp
diff --git a/components/engine/contrib/mkimage/solaris b/components/engine/contrib/mkimage/solaris
deleted file mode 100755
index 158970e69e..0000000000
--- a/components/engine/contrib/mkimage/solaris
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/env bash
-#
-# Solaris 12 base image build script.
-#
-set -e
-
-# TODO add optional package publisher origin
-
-rootfsDir="$1"
-shift
-
-# base install
-(
- set -x
-
- pkg image-create --full --zone \
- --facet facet.locale.*=false \
- --facet facet.locale.POSIX=true \
- --facet facet.doc=false \
- --facet facet.doc.*=false \
- "$rootfsDir"
-
- pkg -R "$rootfsDir" set-property use-system-repo true
-
- pkg -R "$rootfsDir" set-property flush-content-cache-on-success true
-
- pkg -R "$rootfsDir" install core-os
-)
-
-# Lay in stock configuration, set up milestone
-# XXX This all may become optional in a base image
-(
- # faster to build repository database on tmpfs
- REPO_DB=/system/volatile/repository.$$
- export SVCCFG_REPOSITORY=${REPO_DB}
- export SVCCFG_DOOR_PATH=$rootfsDir/system/volatile/tmp_repo_door
-
- # Import base manifests. NOTE These are a combination of basic requirement
- # and gleaned from container milestone manifest. They may change.
- for m in $rootfsDir/lib/svc/manifest/system/environment.xml \
- $rootfsDir/lib/svc/manifest/system/svc/global.xml \
- $rootfsDir/lib/svc/manifest/system/svc/restarter.xml \
- $rootfsDir/lib/svc/manifest/network/dns/client.xml \
- $rootfsDir/lib/svc/manifest/system/name-service/switch.xml \
- $rootfsDir/lib/svc/manifest/system/name-service/cache.xml \
- $rootfsDir/lib/svc/manifest/milestone/container.xml ; do
- svccfg import $m
- done
-
- # Apply system layer profile, deleting unnecessary dependencies
- svccfg apply $rootfsDir/etc/svc/profile/generic_container.xml
-
- # XXX Even if we keep a repo in the base image, this is definitely optional
- svccfg apply $rootfsDir/etc/svc/profile/sysconfig/container_sc.xml
-
- for s in svc:/system/svc/restarter \
- svc:/system/environment \
- svc:/network/dns/client \
- svc:/system/name-service/switch \
- svc:/system/name-service/cache \
- svc:/system/svc/global \
- svc:/milestone/container ;do
- svccfg -s $s refresh
- done
-
- # now copy the built up repository into the base rootfs
- mv $REPO_DB $rootfsDir/etc/svc/repository.db
-)
-
-# pkg(1) needs the zoneproxy-client running in the container.
-# use a simple wrapper to run it as needed.
-# XXX maybe we go back to running this in SMF?
-mv "$rootfsDir/usr/bin/pkg" "$rootfsDir/usr/bin/wrapped_pkg"
-cat > "$rootfsDir/usr/bin/pkg" <<-'EOF'
-#!/bin/sh
-#
-# THIS FILE CREATED DURING DOCKER BASE IMAGE CREATION
-#
-# The Solaris base image uses the sysrepo proxy mechanism. The
-# IPS client pkg(1) requires the zoneproxy-client to reach the
-# remote publisher origins through the host. This wrapper script
-# enables and disables the proxy client as needed. This is a
-# temporary solution.
-
-/usr/lib/zones/zoneproxy-client -s localhost:1008
-PKG_SYSREPO_URL=http://localhost:1008 /usr/bin/wrapped_pkg "$@"
-pkill -9 zoneproxy-client
-EOF
-chmod +x "$rootfsDir/usr/bin/pkg"
diff --git a/components/engine/daemon/build.go b/components/engine/daemon/build.go
index 6a0081484a..3674ad9d28 100644
--- a/components/engine/daemon/build.go
+++ b/components/engine/daemon/build.go
@@ -71,11 +71,7 @@ func (rl *releaseableLayer) Commit(os string) (builder.ReleaseableLayer, error)
if err != nil {
return nil, err
}
-
- if layer.IsEmpty(newLayer.DiffID()) {
- _, err := rl.layerStore.Release(newLayer)
- return &releaseableLayer{layerStore: rl.layerStore}, err
- }
+	// TODO: An optimization would be to handle empty layers before returning
return &releaseableLayer{layerStore: rl.layerStore, roLayer: newLayer}, nil
}
diff --git a/components/engine/daemon/caps/utils_unix.go b/components/engine/daemon/caps/utils_unix.go
index c99485f51d..28a8df6531 100644
--- a/components/engine/daemon/caps/utils_unix.go
+++ b/components/engine/daemon/caps/utils_unix.go
@@ -6,7 +6,6 @@ import (
"fmt"
"strings"
- "github.com/docker/docker/pkg/stringutils"
"github.com/syndtr/gocapability/capability"
)
@@ -69,6 +68,17 @@ func GetAllCapabilities() []string {
return output
}
+// inSlice tests whether a string is contained in a slice of strings or not.
+// Comparison is case insensitive
+func inSlice(slice []string, s string) bool {
+ for _, ss := range slice {
+ if strings.ToLower(s) == strings.ToLower(ss) {
+ return true
+ }
+ }
+ return false
+}
+
// TweakCapabilities can tweak capabilities by adding or dropping capabilities
// based on the basics capabilities.
func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
@@ -86,17 +96,17 @@ func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
continue
}
- if !stringutils.InSlice(allCaps, "CAP_"+cap) {
+ if !inSlice(allCaps, "CAP_"+cap) {
return nil, fmt.Errorf("Unknown capability drop: %q", cap)
}
}
// handle --cap-add=all
- if stringutils.InSlice(adds, "all") {
+ if inSlice(adds, "all") {
basics = allCaps
}
- if !stringutils.InSlice(drops, "all") {
+ if !inSlice(drops, "all") {
for _, cap := range basics {
// skip `all` already handled above
if strings.ToLower(cap) == "all" {
@@ -104,7 +114,7 @@ func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
}
// if we don't drop `all`, add back all the non-dropped caps
- if !stringutils.InSlice(drops, cap[4:]) {
+ if !inSlice(drops, cap[4:]) {
newCaps = append(newCaps, strings.ToUpper(cap))
}
}
@@ -118,12 +128,12 @@ func TweakCapabilities(basics, adds, drops []string) ([]string, error) {
cap = "CAP_" + cap
- if !stringutils.InSlice(allCaps, cap) {
+ if !inSlice(allCaps, cap) {
return nil, fmt.Errorf("Unknown capability to add: %q", cap)
}
// add cap if not already in the list
- if !stringutils.InSlice(newCaps, cap) {
+ if !inSlice(newCaps, cap) {
newCaps = append(newCaps, strings.ToUpper(cap))
}
}
diff --git a/components/engine/daemon/cluster/convert/container.go b/components/engine/daemon/cluster/convert/container.go
index 795e944ae1..3f3ea349a9 100644
--- a/components/engine/daemon/cluster/convert/container.go
+++ b/components/engine/daemon/cluster/convert/container.go
@@ -34,6 +34,7 @@ func containerSpecFromGRPC(c *swarmapi.ContainerSpec) *types.ContainerSpec {
Hosts: c.Hosts,
Secrets: secretReferencesFromGRPC(c.Secrets),
Configs: configReferencesFromGRPC(c.Configs),
+ Isolation: IsolationFromGRPC(c.Isolation),
}
if c.DNSConfig != nil {
@@ -232,6 +233,7 @@ func containerToGRPC(c *types.ContainerSpec) (*swarmapi.ContainerSpec, error) {
Hosts: c.Hosts,
Secrets: secretReferencesToGRPC(c.Secrets),
Configs: configReferencesToGRPC(c.Configs),
+ Isolation: isolationToGRPC(c.Isolation),
}
if c.DNSConfig != nil {
@@ -354,3 +356,26 @@ func healthConfigToGRPC(h *container.HealthConfig) *swarmapi.HealthConfig {
StartPeriod: gogotypes.DurationProto(h.StartPeriod),
}
}
+
+// IsolationFromGRPC converts a swarm api container isolation to a moby isolation representation
+func IsolationFromGRPC(i swarmapi.ContainerSpec_Isolation) container.Isolation {
+ switch i {
+ case swarmapi.ContainerIsolationHyperV:
+ return container.IsolationHyperV
+ case swarmapi.ContainerIsolationProcess:
+ return container.IsolationProcess
+ case swarmapi.ContainerIsolationDefault:
+ return container.IsolationDefault
+ }
+ return container.IsolationEmpty
+}
+
+func isolationToGRPC(i container.Isolation) swarmapi.ContainerSpec_Isolation {
+ if i.IsHyperV() {
+ return swarmapi.ContainerIsolationHyperV
+ }
+ if i.IsProcess() {
+ return swarmapi.ContainerIsolationProcess
+ }
+ return swarmapi.ContainerIsolationDefault
+}
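A short usage sketch of the new conversion helpers, relying only on identifiers that appear in the hunks above (convert.IsolationFromGRPC and the swarmapi isolation constants) and assuming the packages are vendored as in this tree. Note that the unexported reverse direction maps the empty engine value to the default isolation, so "" does not survive a round trip.

```go
package main

import (
	"fmt"

	containertypes "github.com/docker/docker/api/types/container"
	"github.com/docker/docker/daemon/cluster/convert"
	swarmapi "github.com/docker/swarmkit/api"
)

func main() {
	// Map a swarmkit isolation value onto the engine-side representation.
	iso := convert.IsolationFromGRPC(swarmapi.ContainerIsolationHyperV)
	fmt.Println(iso == containertypes.IsolationHyperV) // true
}
```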
diff --git a/components/engine/daemon/cluster/convert/service_test.go b/components/engine/daemon/cluster/convert/service_test.go
index 1b6598974b..a5c7cc4ddf 100644
--- a/components/engine/daemon/cluster/convert/service_test.go
+++ b/components/engine/daemon/cluster/convert/service_test.go
@@ -3,10 +3,12 @@ package convert
import (
"testing"
+ containertypes "github.com/docker/docker/api/types/container"
swarmtypes "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/swarm/runtime"
swarmapi "github.com/docker/swarmkit/api"
google_protobuf3 "github.com/gogo/protobuf/types"
+ "github.com/stretchr/testify/require"
)
func TestServiceConvertFromGRPCRuntimeContainer(t *testing.T) {
@@ -148,3 +150,85 @@ func TestServiceConvertToGRPCGenericRuntimeCustom(t *testing.T) {
t.Fatal(err)
}
}
+
+func TestServiceConvertToGRPCIsolation(t *testing.T) {
+ cases := []struct {
+ name string
+ from containertypes.Isolation
+ to swarmapi.ContainerSpec_Isolation
+ }{
+ {name: "empty", from: containertypes.IsolationEmpty, to: swarmapi.ContainerIsolationDefault},
+ {name: "default", from: containertypes.IsolationDefault, to: swarmapi.ContainerIsolationDefault},
+ {name: "process", from: containertypes.IsolationProcess, to: swarmapi.ContainerIsolationProcess},
+ {name: "hyperv", from: containertypes.IsolationHyperV, to: swarmapi.ContainerIsolationHyperV},
+ {name: "proCess", from: containertypes.Isolation("proCess"), to: swarmapi.ContainerIsolationProcess},
+ {name: "hypErv", from: containertypes.Isolation("hypErv"), to: swarmapi.ContainerIsolationHyperV},
+ }
+ for _, c := range cases {
+ t.Run(c.name, func(t *testing.T) {
+ s := swarmtypes.ServiceSpec{
+ TaskTemplate: swarmtypes.TaskSpec{
+ ContainerSpec: &swarmtypes.ContainerSpec{
+ Image: "alpine:latest",
+ Isolation: c.from,
+ },
+ },
+ Mode: swarmtypes.ServiceMode{
+ Global: &swarmtypes.GlobalService{},
+ },
+ }
+ res, err := ServiceSpecToGRPC(s)
+ require.NoError(t, err)
+ v, ok := res.Task.Runtime.(*swarmapi.TaskSpec_Container)
+ if !ok {
+ t.Fatal("expected type swarmapi.TaskSpec_Container")
+ }
+ require.Equal(t, c.to, v.Container.Isolation)
+ })
+ }
+}
+
+func TestServiceConvertFromGRPCIsolation(t *testing.T) {
+ cases := []struct {
+ name string
+ from swarmapi.ContainerSpec_Isolation
+ to containertypes.Isolation
+ }{
+ {name: "default", to: containertypes.IsolationDefault, from: swarmapi.ContainerIsolationDefault},
+ {name: "process", to: containertypes.IsolationProcess, from: swarmapi.ContainerIsolationProcess},
+ {name: "hyperv", to: containertypes.IsolationHyperV, from: swarmapi.ContainerIsolationHyperV},
+ }
+ for _, c := range cases {
+ t.Run(c.name, func(t *testing.T) {
+ gs := swarmapi.Service{
+ Meta: swarmapi.Meta{
+ Version: swarmapi.Version{
+ Index: 1,
+ },
+ CreatedAt: nil,
+ UpdatedAt: nil,
+ },
+ SpecVersion: &swarmapi.Version{
+ Index: 1,
+ },
+ Spec: swarmapi.ServiceSpec{
+ Task: swarmapi.TaskSpec{
+ Runtime: &swarmapi.TaskSpec_Container{
+ Container: &swarmapi.ContainerSpec{
+ Image: "alpine:latest",
+ Isolation: c.from,
+ },
+ },
+ },
+ },
+ }
+
+ svc, err := ServiceFromGRPC(gs)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ require.Equal(t, c.to, svc.Spec.TaskTemplate.ContainerSpec.Isolation)
+ })
+ }
+}
diff --git a/components/engine/daemon/cluster/executor/container/container.go b/components/engine/daemon/cluster/executor/container/container.go
index 59ac9bf215..4f41fb3e23 100644
--- a/components/engine/daemon/cluster/executor/container/container.go
+++ b/components/engine/daemon/cluster/executor/container/container.go
@@ -168,6 +168,10 @@ func (c *containerConfig) portBindings() nat.PortMap {
return portBindings
}
+func (c *containerConfig) isolation() enginecontainer.Isolation {
+ return convert.IsolationFromGRPC(c.spec().Isolation)
+}
+
func (c *containerConfig) exposedPorts() map[nat.Port]struct{} {
exposedPorts := make(map[nat.Port]struct{})
if c.task.Endpoint == nil {
@@ -350,6 +354,7 @@ func (c *containerConfig) hostConfig() *enginecontainer.HostConfig {
PortBindings: c.portBindings(),
Mounts: c.mounts(),
ReadonlyRootfs: c.spec().ReadOnly,
+ Isolation: c.isolation(),
}
if c.spec().DNSConfig != nil {
diff --git a/components/engine/daemon/cluster/executor/container/container_test.go b/components/engine/daemon/cluster/executor/container/container_test.go
new file mode 100644
index 0000000000..a583d14c20
--- /dev/null
+++ b/components/engine/daemon/cluster/executor/container/container_test.go
@@ -0,0 +1,37 @@
+package container
+
+import (
+ "testing"
+
+ container "github.com/docker/docker/api/types/container"
+ swarmapi "github.com/docker/swarmkit/api"
+ "github.com/stretchr/testify/require"
+)
+
+func TestIsolationConversion(t *testing.T) {
+ cases := []struct {
+ name string
+ from swarmapi.ContainerSpec_Isolation
+ to container.Isolation
+ }{
+ {name: "default", from: swarmapi.ContainerIsolationDefault, to: container.IsolationDefault},
+ {name: "process", from: swarmapi.ContainerIsolationProcess, to: container.IsolationProcess},
+ {name: "hyperv", from: swarmapi.ContainerIsolationHyperV, to: container.IsolationHyperV},
+ }
+ for _, c := range cases {
+ t.Run(c.name, func(t *testing.T) {
+ task := swarmapi.Task{
+ Spec: swarmapi.TaskSpec{
+ Runtime: &swarmapi.TaskSpec_Container{
+ Container: &swarmapi.ContainerSpec{
+ Image: "alpine:latest",
+ Isolation: c.from,
+ },
+ },
+ },
+ }
+ config := containerConfig{task: &task}
+ require.Equal(t, c.to, config.hostConfig().Isolation)
+ })
+ }
+}
diff --git a/components/engine/daemon/cluster/listen_addr_others.go b/components/engine/daemon/cluster/listen_addr_others.go
index 4e845f5c8f..ebf7daea24 100644
--- a/components/engine/daemon/cluster/listen_addr_others.go
+++ b/components/engine/daemon/cluster/listen_addr_others.go
@@ -1,4 +1,4 @@
-// +build !linux,!solaris
+// +build !linux
package cluster
diff --git a/components/engine/daemon/cluster/noderunner.go b/components/engine/daemon/cluster/noderunner.go
index aa12737cdd..6aae6c3271 100644
--- a/components/engine/daemon/cluster/noderunner.go
+++ b/components/engine/daemon/cluster/noderunner.go
@@ -17,6 +17,8 @@ import (
"github.com/sirupsen/logrus"
"golang.org/x/net/context"
"google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
)
// nodeRunner implements a manager for continuously running swarmkit node, restarting them with backoff delays if needed.
@@ -217,7 +219,10 @@ func (n *nodeRunner) watchClusterEvents(ctx context.Context, conn *grpc.ClientCo
msg, err := watch.Recv()
if err != nil {
// store watch is broken
- logrus.WithError(err).Error("failed to receive changes from store watch API")
+ errStatus, ok := status.FromError(err)
+ if !ok || errStatus.Code() != codes.Canceled {
+ logrus.WithError(err).Error("failed to receive changes from store watch API")
+ }
return
}
select {
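The watch loop now distinguishes an ordinary cancellation from a genuine stream failure using the gRPC status package, so tearing down the node no longer produces a spurious error log. A small stand-alone sketch of that classification pattern (the helper name is hypothetical):

```go
// Only log when the stream failed for a reason other than a plain cancellation.
package main

import (
	"errors"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func logIfUnexpected(err error) {
	if err == nil {
		return
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.Canceled {
		return // the watch was torn down deliberately; stay quiet
	}
	log.Printf("failed to receive changes from store watch API: %v", err)
}

func main() {
	logIfUnexpected(status.Error(codes.Canceled, "context canceled")) // suppressed
	logIfUnexpected(errors.New("connection reset by peer"))           // logged
}
```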
diff --git a/components/engine/daemon/cluster/swarm.go b/components/engine/daemon/cluster/swarm.go
index e3fffe983d..61223691bd 100644
--- a/components/engine/daemon/cluster/swarm.go
+++ b/components/engine/daemon/cluster/swarm.go
@@ -198,9 +198,9 @@ func (c *Cluster) Join(req types.JoinRequest) error {
// Inspect retrieves the configuration properties of a managed swarm cluster.
func (c *Cluster) Inspect() (types.Swarm, error) {
- var swarm *swarmapi.Cluster
+ var swarm types.Swarm
if err := c.lockedManagerAction(func(ctx context.Context, state nodeState) error {
- s, err := getSwarm(ctx, state.controlClient)
+ s, err := c.inspect(ctx, state)
if err != nil {
return err
}
@@ -209,7 +209,15 @@ func (c *Cluster) Inspect() (types.Swarm, error) {
}); err != nil {
return types.Swarm{}, err
}
- return convert.SwarmFromGRPC(*swarm), nil
+ return swarm, nil
+}
+
+func (c *Cluster) inspect(ctx context.Context, state nodeState) (types.Swarm, error) {
+ s, err := getSwarm(ctx, state.controlClient)
+ if err != nil {
+ return types.Swarm{}, err
+ }
+ return convert.SwarmFromGRPC(*s), nil
}
// Update updates configuration of a managed swarm cluster.
@@ -413,7 +421,7 @@ func (c *Cluster) Info() types.Info {
if state.IsActiveManager() {
info.ControlAvailable = true
- swarm, err := c.Inspect()
+ swarm, err := c.inspect(ctx, state)
if err != nil {
info.Error = err.Error()
}
diff --git a/components/engine/daemon/config/config.go b/components/engine/daemon/config/config.go
index a340997073..1e22a6fea7 100644
--- a/components/engine/daemon/config/config.go
+++ b/components/engine/daemon/config/config.go
@@ -171,7 +171,8 @@ type CommonConfig struct {
Experimental bool `json:"experimental"` // Experimental indicates whether experimental features should be exposed or not
// Exposed node Generic Resources
- NodeGenericResources string `json:"node-generic-resources,omitempty"`
+	// e.g.: ["orange=red", "orange=green", "orange=blue", "apple=3"]
+ NodeGenericResources []string `json:"node-generic-resources,omitempty"`
// NetworkControlPlaneMTU allows to specify the control plane MTU, this will allow to optimize the network use in some components
NetworkControlPlaneMTU int `json:"network-control-plane-mtu,omitempty"`
@@ -257,22 +258,12 @@ func Reload(configFile string, flags *pflag.FlagSet, reload func(*Config)) error
return fmt.Errorf("file configuration validation failed (%v)", err)
}
- // Labels of the docker engine used to allow multiple values associated with the same key.
- // This is deprecated in 1.13, and, be removed after 3 release cycles.
- // The following will check the conflict of labels, and report a warning for deprecation.
- //
- // TODO: After 3 release cycles (17.12) an error will be returned, and labels will be
- // sanitized to consolidate duplicate key-value pairs (config.Labels = newLabels):
- //
- // newLabels, err := GetConflictFreeLabels(newConfig.Labels)
- // if err != nil {
- // return err
- // }
- // newConfig.Labels = newLabels
- //
- if _, err := GetConflictFreeLabels(newConfig.Labels); err != nil {
- logrus.Warnf("Engine labels with duplicate keys and conflicting values have been deprecated: %s", err)
+	// Check for duplicate label keys with conflicting values
+ newLabels, err := GetConflictFreeLabels(newConfig.Labels)
+ if err != nil {
+ return err
}
+ newConfig.Labels = newLabels
reload(newConfig)
return nil
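Reload now fails outright when the configuration carries the same label key with conflicting values, instead of merely logging the old deprecation warning, and the consolidated labels are written back to the config. A small sketch of the behaviour, based only on the GetConflictFreeLabels signature and the error text exercised by the new tests below:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/daemon/config"
)

func main() {
	// Identical duplicates are consolidated without error.
	labels, err := config.GetConflictFreeLabels([]string{"foo=the-same", "foo=the-same"})
	fmt.Println(labels, err)

	// The same key with different values is rejected, which now aborts Reload
	// (the tests below expect an error like "conflict labels for foo=baz and foo=bar").
	_, err = config.GetConflictFreeLabels([]string{"foo=bar", "foo=baz"})
	fmt.Println(err)
}
```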
diff --git a/components/engine/daemon/config/config_common_unix.go b/components/engine/daemon/config/config_common_unix.go
index cea3fffdda..d2fa2e035a 100644
--- a/components/engine/daemon/config/config_common_unix.go
+++ b/components/engine/daemon/config/config_common_unix.go
@@ -1,4 +1,4 @@
-// +build solaris linux freebsd
+// +build linux freebsd
package config
diff --git a/components/engine/daemon/config/config_test.go b/components/engine/daemon/config/config_test.go
index 30a93126d3..cb7fb00a72 100644
--- a/components/engine/daemon/config/config_test.go
+++ b/components/engine/daemon/config/config_test.go
@@ -9,6 +9,7 @@ import (
"github.com/docker/docker/daemon/discovery"
"github.com/docker/docker/internal/testutil"
"github.com/docker/docker/opts"
+ "github.com/gotestyourself/gotestyourself/fs"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
)
@@ -259,6 +260,20 @@ func TestValidateConfigurationErrors(t *testing.T) {
},
},
},
+ {
+ config: &Config{
+ CommonConfig: CommonConfig{
+ NodeGenericResources: []string{"foo"},
+ },
+ },
+ },
+ {
+ config: &Config{
+ CommonConfig: CommonConfig{
+ NodeGenericResources: []string{"foo=bar", "foo=1"},
+ },
+ },
+ },
}
for _, tc := range testCases {
err := Validate(tc.config)
@@ -316,6 +331,20 @@ func TestValidateConfiguration(t *testing.T) {
},
},
},
+ {
+ config: &Config{
+ CommonConfig: CommonConfig{
+ NodeGenericResources: []string{"foo=bar", "foo=baz"},
+ },
+ },
+ },
+ {
+ config: &Config{
+ CommonConfig: CommonConfig{
+ NodeGenericResources: []string{"foo=1"},
+ },
+ },
+ },
}
for _, tc := range testCases {
err := Validate(tc.config)
@@ -431,3 +460,29 @@ func TestReloadBadDefaultConfig(t *testing.T) {
assert.Error(t, err)
testutil.ErrorContains(t, err, "unable to configure the Docker daemon with file")
}
+
+func TestReloadWithConflictingLabels(t *testing.T) {
+ tempFile := fs.NewFile(t, "config", fs.WithContent(`{"labels":["foo=bar","foo=baz"]}`))
+ defer tempFile.Remove()
+ configFile := tempFile.Path()
+
+ var lbls []string
+ flags := pflag.NewFlagSet("test", pflag.ContinueOnError)
+ flags.String("config-file", configFile, "")
+ flags.StringSlice("labels", lbls, "")
+ err := Reload(configFile, flags, func(c *Config) {})
+ testutil.ErrorContains(t, err, "conflict labels for foo=baz and foo=bar")
+}
+
+func TestReloadWithDuplicateLabels(t *testing.T) {
+ tempFile := fs.NewFile(t, "config", fs.WithContent(`{"labels":["foo=the-same","foo=the-same"]}`))
+ defer tempFile.Remove()
+ configFile := tempFile.Path()
+
+ var lbls []string
+ flags := pflag.NewFlagSet("test", pflag.ContinueOnError)
+ flags.String("config-file", configFile, "")
+ flags.StringSlice("labels", lbls, "")
+ err := Reload(configFile, flags, func(c *Config) {})
+ assert.NoError(t, err)
+}
diff --git a/components/engine/daemon/config/opts.go b/components/engine/daemon/config/opts.go
index 00f32e43bb..cdf264911f 100644
--- a/components/engine/daemon/config/opts.go
+++ b/components/engine/daemon/config/opts.go
@@ -7,8 +7,8 @@ import (
)
// ParseGenericResources parses and validates the specified string as a list of GenericResource
-func ParseGenericResources(value string) ([]swarm.GenericResource, error) {
- if value == "" {
+func ParseGenericResources(value []string) ([]swarm.GenericResource, error) {
+ if len(value) == 0 {
return nil, nil
}
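ParseGenericResources now accepts the raw slice from the configuration (matching the new []string type of NodeGenericResources) instead of a single comma-separated string. A usage sketch based on the signature shown here, assuming the usual swarmkit split between discrete (numeric) and named (string) resources:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/daemon/config"
)

func main() {
	// Each entry is a key=value pair, e.g. a discrete "apple=3" or a named
	// "orange=red" resource (assuming the usual swarmkit semantics).
	resources, err := config.ParseGenericResources([]string{"apple=3", "orange=red"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed %d generic resources\n", len(resources))

	// A nil or empty slice is a no-op rather than an error.
	none, err := config.ParseGenericResources(nil)
	fmt.Println(none, err) // [] <nil>
}
```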
diff --git a/components/engine/daemon/container.go b/components/engine/daemon/container.go
index 7338514682..26faedfdf9 100644
--- a/components/engine/daemon/container.go
+++ b/components/engine/daemon/container.go
@@ -329,6 +329,10 @@ func (daemon *Daemon) verifyContainerSettings(platform string, hostConfig *conta
return nil, errors.Errorf("invalid restart policy '%s'", p.Name)
}
+ if !hostConfig.Isolation.IsValid() {
+ return nil, errors.Errorf("invalid isolation '%s' on %s", hostConfig.Isolation, runtime.GOOS)
+ }
+
// Now do platform-specific verification
return verifyPlatformContainerSettings(daemon, hostConfig, config, update)
}
diff --git a/components/engine/daemon/create.go b/components/engine/daemon/create.go
index b2014470c4..e4d17cc2df 100644
--- a/components/engine/daemon/create.go
+++ b/components/engine/daemon/create.go
@@ -95,7 +95,14 @@ func (daemon *Daemon) create(params types.ContainerCreateConfig, managed bool) (
if err != nil {
return nil, err
}
- os = img.OS
+ if img.OS != "" {
+ os = img.OS
+ } else {
+ // default to the host OS except on Windows with LCOW
+ if runtime.GOOS == "windows" && system.LCOWSupported() {
+ os = "linux"
+ }
+ }
imgID = img.ID()
if runtime.GOOS == "windows" && img.OS == "linux" && !system.LCOWSupported() {
diff --git a/components/engine/daemon/daemon_linux_test.go b/components/engine/daemon/daemon_linux_test.go
index c7d5117195..e027b14dd0 100644
--- a/components/engine/daemon/daemon_linux_test.go
+++ b/components/engine/daemon/daemon_linux_test.go
@@ -5,6 +5,13 @@ package daemon
import (
"strings"
"testing"
+
+ containertypes "github.com/docker/docker/api/types/container"
+ "github.com/docker/docker/container"
+ "github.com/docker/docker/oci"
+ "github.com/docker/docker/pkg/idtools"
+
+ "github.com/stretchr/testify/assert"
)
const mountsFixture = `142 78 0:38 / / rw,relatime - aufs none rw,si=573b861da0b3a05b,dio
@@ -102,3 +109,58 @@ func TestNotCleanupMounts(t *testing.T) {
t.Fatal("Expected not to clean up /dev/shm")
}
}
+
+// TestTmpfsDevShmSizeOverride checks that a user-specified tmpfs mount on
+// /dev/shm is not overridden by the default ShmSize, which should only apply
+// to the default /dev/shm mount (as in the "shareable" and "private" IPC modes).
+// https://github.com/moby/moby/issues/35271
+func TestTmpfsDevShmSizeOverride(t *testing.T) {
+ size := "777m"
+ mnt := "/dev/shm"
+
+ d := Daemon{
+ idMappings: &idtools.IDMappings{},
+ }
+ c := &container.Container{
+ HostConfig: &containertypes.HostConfig{
+ ShmSize: 48 * 1024, // size we should NOT end up with
+ },
+ }
+ ms := []container.Mount{
+ {
+ Source: "tmpfs",
+ Destination: mnt,
+ Data: "size=" + size,
+ },
+ }
+
+ // convert ms to spec
+ spec := oci.DefaultSpec()
+ err := setMounts(&d, &spec, c, ms)
+ assert.NoError(t, err)
+
+ // Check the resulting spec for the correct size
+ found := false
+ for _, m := range spec.Mounts {
+ if m.Destination == mnt {
+ for _, o := range m.Options {
+ if !strings.HasPrefix(o, "size=") {
+ continue
+ }
+ t.Logf("%+v\n", m.Options)
+ assert.Equal(t, "size="+size, o)
+ found = true
+ }
+ }
+ }
+ if !found {
+ t.Fatal("/dev/shm not found in spec, or size option missing")
+ }
+}
+
+func TestValidateContainerIsolationLinux(t *testing.T) {
+ d := Daemon{}
+
+ _, err := d.verifyContainerSettings("linux", &containertypes.HostConfig{Isolation: containertypes.IsolationHyperV}, nil, false)
+ assert.EqualError(t, err, "invalid isolation 'hyperv' on linux")
+}
diff --git a/components/engine/daemon/daemon_test.go b/components/engine/daemon/daemon_test.go
index 13d1059c1c..422be1fd77 100644
--- a/components/engine/daemon/daemon_test.go
+++ b/components/engine/daemon/daemon_test.go
@@ -1,11 +1,10 @@
-// +build !solaris
-
package daemon
import (
"io/ioutil"
"os"
"path/filepath"
+ "runtime"
"testing"
containertypes "github.com/docker/docker/api/types/container"
@@ -18,6 +17,7 @@ import (
"github.com/docker/docker/volume/local"
"github.com/docker/docker/volume/store"
"github.com/docker/go-connections/nat"
+ "github.com/stretchr/testify/assert"
)
//
@@ -304,3 +304,10 @@ func TestMerge(t *testing.T) {
}
}
}
+
+func TestValidateContainerIsolation(t *testing.T) {
+ d := Daemon{}
+
+ _, err := d.verifyContainerSettings(runtime.GOOS, &containertypes.HostConfig{Isolation: containertypes.Isolation("invalid")}, nil, false)
+ assert.EqualError(t, err, "invalid isolation 'invalid' on "+runtime.GOOS)
+}
diff --git a/components/engine/daemon/daemon_unix_test.go b/components/engine/daemon/daemon_unix_test.go
index 2bdbd23290..a4db4733d4 100644
--- a/components/engine/daemon/daemon_unix_test.go
+++ b/components/engine/daemon/daemon_unix_test.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package daemon
diff --git a/components/engine/daemon/daemon_unsupported.go b/components/engine/daemon/daemon_unsupported.go
index cb1acf63d6..987528f476 100644
--- a/components/engine/daemon/daemon_unsupported.go
+++ b/components/engine/daemon/daemon_unsupported.go
@@ -1,4 +1,4 @@
-// +build !linux,!freebsd,!windows,!solaris
+// +build !linux,!freebsd,!windows
package daemon
diff --git a/components/engine/daemon/daemon_windows.go b/components/engine/daemon/daemon_windows.go
index a79ed4f071..8545a2f651 100644
--- a/components/engine/daemon/daemon_windows.go
+++ b/components/engine/daemon/daemon_windows.go
@@ -350,6 +350,9 @@ func (daemon *Daemon) initNetworkController(config *config.Config, activeSandbox
}
controller.WalkNetworks(s)
+
+ drvOptions := make(map[string]string)
+
if n != nil {
// global networks should not be deleted by local HNS
if n.Info().Scope() == datastore.GlobalScope {
@@ -358,14 +361,23 @@ func (daemon *Daemon) initNetworkController(config *config.Config, activeSandbox
v.Name = n.Name()
// This will not cause network delete from HNS as the network
// is not yet populated in the libnetwork windows driver
+
+			// restore options if they existed before
+ drvOptions = n.Info().DriverOptions()
n.Delete()
}
-
netOption := map[string]string{
winlibnetwork.NetworkName: v.Name,
winlibnetwork.HNSID: v.Id,
}
+ // add persisted driver options
+ for k, v := range drvOptions {
+ if k != winlibnetwork.NetworkName && k != winlibnetwork.HNSID {
+ netOption[k] = v
+ }
+ }
+
v4Conf := []*libnetwork.IpamConf{}
for _, subnet := range v.Subnets {
ipamV4Conf := libnetwork.IpamConf{}
diff --git a/components/engine/daemon/debugtrap_unsupported.go b/components/engine/daemon/debugtrap_unsupported.go
index f5b9170907..6ae9ebfde9 100644
--- a/components/engine/daemon/debugtrap_unsupported.go
+++ b/components/engine/daemon/debugtrap_unsupported.go
@@ -1,4 +1,4 @@
-// +build !linux,!darwin,!freebsd,!windows,!solaris
+// +build !linux,!darwin,!freebsd,!windows
package daemon
diff --git a/components/engine/daemon/getsize_unix.go b/components/engine/daemon/getsize_unix.go
index e47e646df3..fff90f2756 100644
--- a/components/engine/daemon/getsize_unix.go
+++ b/components/engine/daemon/getsize_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package daemon
diff --git a/components/engine/daemon/graphdriver/aufs/aufs.go b/components/engine/daemon/graphdriver/aufs/aufs.go
index 8313263ff1..5a1f3d1fdd 100644
--- a/components/engine/daemon/graphdriver/aufs/aufs.go
+++ b/components/engine/daemon/graphdriver/aufs/aufs.go
@@ -125,7 +125,7 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// Create the root aufs driver dir and return
// if it already exists
// If not populate the dir structure
- if err := idtools.MkdirAllAs(root, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(root, 0700, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
if os.IsExist(err) {
return a, nil
}
@@ -138,7 +138,7 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// Populate the dir structure
for _, p := range paths {
- if err := idtools.MkdirAllAs(path.Join(root, p), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(path.Join(root, p), 0700, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
return nil, err
}
}
@@ -290,7 +290,7 @@ func (a *Driver) createDirsFor(id string) error {
// The path of directories are /mnt/
// and /diff/
for _, p := range paths {
- if err := idtools.MkdirAllAs(path.Join(a.rootPath(), p, id), 0755, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(path.Join(a.rootPath(), p, id), 0755, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
return err
}
}
diff --git a/components/engine/daemon/graphdriver/btrfs/btrfs.go b/components/engine/daemon/graphdriver/btrfs/btrfs.go
index 5a61217341..0dabf711dd 100644
--- a/components/engine/daemon/graphdriver/btrfs/btrfs.go
+++ b/components/engine/daemon/graphdriver/btrfs/btrfs.go
@@ -64,7 +64,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
if err != nil {
return nil, err
}
- if err := idtools.MkdirAllAs(home, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(home, 0700, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
return nil, err
}
@@ -502,7 +502,7 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
if err != nil {
return err
}
- if err := idtools.MkdirAllAs(subvolumes, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(subvolumes, 0700, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
return err
}
if parent == "" {
@@ -537,7 +537,7 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
if err := d.setStorageSize(path.Join(subvolumes, id), driver); err != nil {
return err
}
- if err := idtools.MkdirAllAs(quotas, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(quotas, 0700, idtools.IDPair{UID: rootUID, GID: rootGID}); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(quotas, id), []byte(fmt.Sprint(driver.options.size)), 0644); err != nil {
diff --git a/components/engine/daemon/graphdriver/devmapper/deviceset.go b/components/engine/daemon/graphdriver/devmapper/deviceset.go
index deb8c87d1f..160ba46407 100644
--- a/components/engine/daemon/graphdriver/devmapper/deviceset.go
+++ b/components/engine/daemon/graphdriver/devmapper/deviceset.go
@@ -268,7 +268,7 @@ func (devices *DeviceSet) ensureImage(name string, size int64) (string, error) {
if err != nil {
return "", err
}
- if err := idtools.MkdirAllAs(dirname, 0700, uid, gid); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAllAndChown(dirname, 0700, idtools.IDPair{UID: uid, GID: gid}); err != nil && !os.IsExist(err) {
return "", err
}
@@ -1697,7 +1697,7 @@ func (devices *DeviceSet) initDevmapper(doInit bool) (retErr error) {
if err != nil {
return err
}
- if err := idtools.MkdirAs(devices.root, 0700, uid, gid); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAndChown(devices.root, 0700, idtools.IDPair{UID: uid, GID: gid}); err != nil && !os.IsExist(err) {
return err
}
if err := os.MkdirAll(devices.metadataDir(), 0700); err != nil && !os.IsExist(err) {
@@ -2428,6 +2428,18 @@ func (devices *DeviceSet) UnmountDevice(hash, mountPath string) error {
}
logrus.Debug("devmapper: Unmount done")
+ // Remove the mountpoint here. Removing the mountpoint (in newer kernels)
+ // will cause all other instances of this mount in other mount namespaces
+ // to be killed (this is an anti-DoS measure that is necessary for things
+ // like devicemapper). This is necessary to avoid cases where a libdm mount
+ // that is present in another namespace will cause subsequent RemoveDevice
+ // operations to fail. We ignore any errors here because this may fail on
+ // older kernels which don't have
+ // torvalds/linux@8ed936b5671bfb33d89bc60bdcc7cf0470ba52fe applied.
+ if err := os.Remove(mountPath); err != nil {
+ logrus.Debugf("devmapper: error doing a remove on unmounted device %s: %v", mountPath, err)
+ }
+
return devices.deactivateDevice(info)
}
diff --git a/components/engine/daemon/graphdriver/devmapper/devmapper_test.go b/components/engine/daemon/graphdriver/devmapper/devmapper_test.go
index 7501397fdc..24e535869e 100644
--- a/components/engine/daemon/graphdriver/devmapper/devmapper_test.go
+++ b/components/engine/daemon/graphdriver/devmapper/devmapper_test.go
@@ -5,12 +5,15 @@ package devmapper
import (
"fmt"
"os"
+ "os/exec"
"syscall"
"testing"
"time"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/daemon/graphdriver/graphtest"
+ "github.com/docker/docker/pkg/parsers/kernel"
+ "golang.org/x/sys/unix"
)
func init() {
@@ -150,3 +153,53 @@ func TestDevmapperLockReleasedDeviceDeletion(t *testing.T) {
case <-doneChan:
}
}
+
+// Ensure that mounts aren't leaked. It's non-trivial for us to test the full
+// reproducer of #34573 in a unit test, but we can at least make sure that a
+// simple command run in a new namespace doesn't break things horribly.
+func TestDevmapperMountLeaks(t *testing.T) {
+ if !kernel.CheckKernelVersion(3, 18, 0) {
+ t.Skipf("kernel version <3.18.0 and so is missing torvalds/linux@8ed936b5671bfb33d89bc60bdcc7cf0470ba52fe.")
+ }
+
+ driver := graphtest.GetDriver(t, "devicemapper", "dm.use_deferred_removal=false", "dm.use_deferred_deletion=false").(*graphtest.Driver).Driver.(*graphdriver.NaiveDiffDriver).ProtoDriver.(*Driver)
+ defer graphtest.PutDriver(t)
+
+ // We need to create a new (dummy) device.
+ if err := driver.Create("some-layer", "", nil); err != nil {
+ t.Fatalf("setting up some-layer: %v", err)
+ }
+
+ // Mount the device.
+ _, err := driver.Get("some-layer", "")
+ if err != nil {
+ t.Fatalf("mounting some-layer: %v", err)
+ }
+
+ // Create a new subprocess which will inherit our mountpoint, then
+ // intentionally leak it and stick around. We can't do this entirely within
+ // Go because forking and namespaces in Go are really not handled well at
+ // all.
+ cmd := exec.Cmd{
+ Path: "/bin/sh",
+ Args: []string{
+ "/bin/sh", "-c",
+ "mount --make-rprivate / && sleep 1000s",
+ },
+ SysProcAttr: &syscall.SysProcAttr{
+ Unshareflags: syscall.CLONE_NEWNS,
+ },
+ }
+ if err := cmd.Start(); err != nil {
+ t.Fatalf("starting sub-command: %v", err)
+ }
+ defer func() {
+ unix.Kill(cmd.Process.Pid, unix.SIGKILL)
+ cmd.Wait()
+ }()
+
+ // Now try to "drop" the device.
+ if err := driver.Put("some-layer"); err != nil {
+ t.Fatalf("unmounting some-layer: %v", err)
+ }
+}
diff --git a/components/engine/daemon/graphdriver/devmapper/driver.go b/components/engine/daemon/graphdriver/devmapper/driver.go
index b485096bc8..288f50126f 100644
--- a/components/engine/daemon/graphdriver/devmapper/driver.go
+++ b/components/engine/daemon/graphdriver/devmapper/driver.go
@@ -189,11 +189,11 @@ func (d *Driver) Get(id, mountLabel string) (containerfs.ContainerFS, error) {
}
// Create the target directories if they don't exist
- if err := idtools.MkdirAllAs(path.Join(d.home, "mnt"), 0755, uid, gid); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAllAndChown(path.Join(d.home, "mnt"), 0755, idtools.IDPair{UID: uid, GID: gid}); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
return nil, err
}
- if err := idtools.MkdirAs(mp, 0755, uid, gid); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAndChown(mp, 0755, idtools.IDPair{UID: uid, GID: gid}); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
return nil, err
}
@@ -204,7 +204,7 @@ func (d *Driver) Get(id, mountLabel string) (containerfs.ContainerFS, error) {
return nil, err
}
- if err := idtools.MkdirAllAs(rootFs, 0755, uid, gid); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAllAndChown(rootFs, 0755, idtools.IDPair{UID: uid, GID: gid}); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
d.DeviceSet.UnmountDevice(id, mp)
return nil, err
@@ -232,10 +232,12 @@ func (d *Driver) Put(id string) error {
if count := d.ctr.Decrement(mp); count > 0 {
return nil
}
+
err := d.DeviceSet.UnmountDevice(id, mp)
if err != nil {
- logrus.Errorf("devmapper: Error unmounting device %s: %s", id, err)
+ logrus.Errorf("devmapper: Error unmounting device %s: %v", id, err)
}
+
return err
}
diff --git a/components/engine/daemon/graphdriver/driver.go b/components/engine/daemon/graphdriver/driver.go
index 68f9022e1c..d08c6dc5b7 100644
--- a/components/engine/daemon/graphdriver/driver.go
+++ b/components/engine/daemon/graphdriver/driver.go
@@ -208,7 +208,9 @@ func New(name string, pg plugingetter.PluginGetter, config Options) (Driver, err
// Guess for prior driver
driversMap := scanPriorDrivers(config.Root)
- for _, name := range priority {
+ list := strings.Split(priority, ",")
+ logrus.Debugf("[graphdriver] priority list: %v", list)
+ for _, name := range list {
if name == "vfs" {
// don't use vfs even if there is state present.
continue
@@ -243,7 +245,7 @@ func New(name string, pg plugingetter.PluginGetter, config Options) (Driver, err
}
// Check for priority drivers first
- for _, name := range priority {
+ for _, name := range list {
driver, err := getBuiltinDriver(name, config.Root, config.DriverOptions, config.UIDMaps, config.GIDMaps)
if err != nil {
if isDriverNotSupported(err) {
@@ -281,8 +283,27 @@ func scanPriorDrivers(root string) map[string]bool {
for driver := range drivers {
p := filepath.Join(root, driver)
if _, err := os.Stat(p); err == nil && driver != "vfs" {
- driversMap[driver] = true
+ if !isEmptyDir(p) {
+ driversMap[driver] = true
+ }
}
}
return driversMap
}
+
+// isEmptyDir checks if a directory is empty. It is used when scanning for
+// prior storage-driver state to decide whether a driver directory is actually
+// in use. If an error occurs, the directory is assumed to be non-empty (which
+// preserves the behavior from before this check was added).
+func isEmptyDir(name string) bool {
+ f, err := os.Open(name)
+ if err != nil {
+ return false
+ }
+ defer f.Close()
+
+ if _, err = f.Readdirnames(1); err == io.EOF {
+ return true
+ }
+ return false
+}
diff --git a/components/engine/daemon/graphdriver/driver_freebsd.go b/components/engine/daemon/graphdriver/driver_freebsd.go
index 53394b738d..f9fded986b 100644
--- a/components/engine/daemon/graphdriver/driver_freebsd.go
+++ b/components/engine/daemon/graphdriver/driver_freebsd.go
@@ -7,10 +7,8 @@ import (
)
var (
- // Slice of drivers that should be used in an order
- priority = []string{
- "zfs",
- }
+	// List of drivers that should be used in order
+ priority = "zfs"
)
// Mounted checks if the given path is mounted as the fs type
diff --git a/components/engine/daemon/graphdriver/driver_linux.go b/components/engine/daemon/graphdriver/driver_linux.go
index a2be46b53a..aa3cfc9f79 100644
--- a/components/engine/daemon/graphdriver/driver_linux.go
+++ b/components/engine/daemon/graphdriver/driver_linux.go
@@ -51,16 +51,8 @@ const (
)
var (
- // Slice of drivers that should be used in an order
- priority = []string{
- "btrfs",
- "zfs",
- "overlay2",
- "aufs",
- "overlay",
- "devicemapper",
- "vfs",
- }
+	// List of drivers that should be used in order
+ priority = "btrfs,zfs,overlay2,aufs,overlay,devicemapper,vfs"
// FsNames maps filesystem id to name of the filesystem.
FsNames = map[FsMagic]string{
diff --git a/components/engine/daemon/graphdriver/driver_test.go b/components/engine/daemon/graphdriver/driver_test.go
new file mode 100644
index 0000000000..40084be88d
--- /dev/null
+++ b/components/engine/daemon/graphdriver/driver_test.go
@@ -0,0 +1,37 @@
+package graphdriver
+
+import (
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+)
+
+func TestIsEmptyDir(t *testing.T) {
+ tmp, err := ioutil.TempDir("", "test-is-empty-dir")
+ require.NoError(t, err)
+ defer os.RemoveAll(tmp)
+
+ d := filepath.Join(tmp, "empty-dir")
+ err = os.Mkdir(d, 0755)
+ require.NoError(t, err)
+ empty := isEmptyDir(d)
+ assert.True(t, empty)
+
+ d = filepath.Join(tmp, "dir-with-subdir")
+ err = os.MkdirAll(filepath.Join(d, "subdir"), 0755)
+ require.NoError(t, err)
+ empty = isEmptyDir(d)
+ assert.False(t, empty)
+
+ d = filepath.Join(tmp, "dir-with-empty-file")
+ err = os.Mkdir(d, 0755)
+ require.NoError(t, err)
+ _, err = ioutil.TempFile(d, "file")
+ require.NoError(t, err)
+ empty = isEmptyDir(d)
+ assert.False(t, empty)
+}
diff --git a/components/engine/daemon/graphdriver/driver_unsupported.go b/components/engine/daemon/graphdriver/driver_unsupported.go
index 4a875608b0..f951e7674d 100644
--- a/components/engine/daemon/graphdriver/driver_unsupported.go
+++ b/components/engine/daemon/graphdriver/driver_unsupported.go
@@ -1,12 +1,10 @@
-// +build !linux,!windows,!freebsd,!solaris
+// +build !linux,!windows,!freebsd
package graphdriver
var (
- // Slice of drivers that should be used in an order
- priority = []string{
- "unsupported",
- }
+	// List of drivers that should be used in order
+ priority = "unsupported"
)
// GetFSMagic returns the filesystem id given the path.
diff --git a/components/engine/daemon/graphdriver/driver_windows.go b/components/engine/daemon/graphdriver/driver_windows.go
index ffd30c2950..7411089b03 100644
--- a/components/engine/daemon/graphdriver/driver_windows.go
+++ b/components/engine/daemon/graphdriver/driver_windows.go
@@ -1,10 +1,8 @@
package graphdriver
var (
- // Slice of drivers that should be used in order
- priority = []string{
- "windowsfilter",
- }
+ // List of drivers that should be used in order
+ priority = "windowsfilter"
)
// GetFSMagic returns the filesystem id given the path.
diff --git a/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go b/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go
index 11dff48896..c25d4826fc 100644
--- a/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go
+++ b/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package graphtest
@@ -13,6 +13,7 @@ import (
"unsafe"
"github.com/docker/docker/daemon/graphdriver"
+ "github.com/docker/docker/daemon/graphdriver/quota"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/go-units"
"github.com/stretchr/testify/assert"
@@ -310,7 +311,7 @@ func writeRandomFile(path string, size uint64) error {
}
// DriverTestSetQuota Create a driver and test setting quota.
-func DriverTestSetQuota(t *testing.T, drivername string) {
+func DriverTestSetQuota(t *testing.T, drivername string, required bool) {
driver := GetDriver(t, drivername)
defer PutDriver(t)
@@ -318,19 +319,34 @@ func DriverTestSetQuota(t *testing.T, drivername string) {
createOpts := &graphdriver.CreateOpts{}
createOpts.StorageOpt = make(map[string]string, 1)
createOpts.StorageOpt["size"] = "50M"
- if err := driver.Create("zfsTest", "Base", createOpts); err != nil {
+ layerName := drivername + "Test"
+ if err := driver.CreateReadWrite(layerName, "Base", createOpts); err == quota.ErrQuotaNotSupported && !required {
+ t.Skipf("Quota not supported on underlying filesystem: %v", err)
+ } else if err != nil {
t.Fatal(err)
}
- mountPath, err := driver.Get("zfsTest", "")
+ mountPath, err := driver.Get(layerName, "")
if err != nil {
t.Fatal(err)
}
quota := uint64(50 * units.MiB)
- err = writeRandomFile(path.Join(mountPath.Path(), "file"), quota*2)
- if pathError, ok := err.(*os.PathError); ok && pathError.Err != unix.EDQUOT {
- t.Fatalf("expect write() to fail with %v, got %v", unix.EDQUOT, err)
+ // Try to write a file smaller than quota, and ensure it works
+ err = writeRandomFile(path.Join(mountPath.Path(), "smallfile"), quota/2)
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer os.Remove(path.Join(mountPath.Path(), "smallfile"))
+
+ // Try to write a file bigger than quota. We've already filled up half the quota, so hitting the limit should be easy
+ err = writeRandomFile(path.Join(mountPath.Path(), "bigfile"), quota)
+ if err == nil {
+		t.Fatalf("expected write() to fail, instead had success")
+ }
+ if pathError, ok := err.(*os.PathError); ok && pathError.Err != unix.EDQUOT && pathError.Err != unix.ENOSPC {
+ os.Remove(path.Join(mountPath.Path(), "bigfile"))
+ t.Fatalf("expect write() to fail with %v or %v, got %v", unix.EDQUOT, unix.ENOSPC, pathError.Err)
}
}
diff --git a/components/engine/daemon/graphdriver/lcow/lcow.go b/components/engine/daemon/graphdriver/lcow/lcow.go
index c999d5b145..5ec8b8baaa 100644
--- a/components/engine/daemon/graphdriver/lcow/lcow.go
+++ b/components/engine/daemon/graphdriver/lcow/lcow.go
@@ -181,17 +181,17 @@ func InitDriver(dataRoot string, options []string, _, _ []idtools.IDMap) (graphd
}
// Make sure the dataRoot directory is created
- if err := idtools.MkdirAllAs(dataRoot, 0700, 0, 0); err != nil {
+ if err := idtools.MkdirAllAndChown(dataRoot, 0700, idtools.IDPair{UID: 0, GID: 0}); err != nil {
return nil, fmt.Errorf("%s failed to create '%s': %v", title, dataRoot, err)
}
// Make sure the cache directory is created under dataRoot
- if err := idtools.MkdirAllAs(cd, 0700, 0, 0); err != nil {
+ if err := idtools.MkdirAllAndChown(cd, 0700, idtools.IDPair{UID: 0, GID: 0}); err != nil {
return nil, fmt.Errorf("%s failed to create '%s': %v", title, cd, err)
}
// Make sure the scratch directory is created under dataRoot
- if err := idtools.MkdirAllAs(sd, 0700, 0, 0); err != nil {
+ if err := idtools.MkdirAllAndChown(sd, 0700, idtools.IDPair{UID: 0, GID: 0}); err != nil {
return nil, fmt.Errorf("%s failed to create '%s': %v", title, sd, err)
}
diff --git a/components/engine/daemon/graphdriver/overlay/overlay.go b/components/engine/daemon/graphdriver/overlay/overlay.go
index 3db84bc229..853318de59 100644
--- a/components/engine/daemon/graphdriver/overlay/overlay.go
+++ b/components/engine/daemon/graphdriver/overlay/overlay.go
@@ -136,7 +136,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, err
}
// Create the driver home dir
- if err := idtools.MkdirAllAs(home, 0700, rootUID, rootGID); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAllAndChown(home, 0700, idtools.IDPair{rootUID, rootGID}); err != nil && !os.IsExist(err) {
return nil, err
}
@@ -255,10 +255,12 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr
if err != nil {
return err
}
- if err := idtools.MkdirAllAs(path.Dir(dir), 0700, rootUID, rootGID); err != nil {
+ root := idtools.IDPair{UID: rootUID, GID: rootGID}
+
+ if err := idtools.MkdirAllAndChown(path.Dir(dir), 0700, root); err != nil {
return err
}
- if err := idtools.MkdirAs(dir, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(dir, 0700, root); err != nil {
return err
}
@@ -271,7 +273,7 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr
// Toplevel images are just a "root" dir
if parent == "" {
- return idtools.MkdirAndChown(path.Join(dir, "root"), 0755, idtools.IDPair{rootUID, rootGID})
+ return idtools.MkdirAndChown(path.Join(dir, "root"), 0755, root)
}
parentDir := d.dir(parent)
@@ -285,13 +287,13 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr
parentRoot := path.Join(parentDir, "root")
if s, err := os.Lstat(parentRoot); err == nil {
- if err := idtools.MkdirAs(path.Join(dir, "upper"), s.Mode(), rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "upper"), s.Mode(), root); err != nil {
return err
}
- if err := idtools.MkdirAs(path.Join(dir, "work"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "work"), 0700, root); err != nil {
return err
}
- if err := idtools.MkdirAs(path.Join(dir, "merged"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "merged"), 0700, root); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(dir, "lower-id"), []byte(parent), 0666); err != nil {
@@ -318,13 +320,13 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr
}
upperDir := path.Join(dir, "upper")
- if err := idtools.MkdirAs(upperDir, s.Mode(), rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(upperDir, s.Mode(), root); err != nil {
return err
}
- if err := idtools.MkdirAs(path.Join(dir, "work"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "work"), 0700, root); err != nil {
return err
}
- if err := idtools.MkdirAs(path.Join(dir, "merged"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "merged"), 0700, root); err != nil {
return err
}
diff --git a/components/engine/daemon/graphdriver/overlay2/overlay.go b/components/engine/daemon/graphdriver/overlay2/overlay.go
index 0f252e6ff2..c2023c7252 100644
--- a/components/engine/daemon/graphdriver/overlay2/overlay.go
+++ b/components/engine/daemon/graphdriver/overlay2/overlay.go
@@ -171,7 +171,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, err
}
// Create the driver home dir
- if err := idtools.MkdirAllAs(path.Join(home, linkDir), 0700, rootUID, rootGID); err != nil && !os.IsExist(err) {
+ if err := idtools.MkdirAllAndChown(path.Join(home, linkDir), 0700, idtools.IDPair{rootUID, rootGID}); err != nil && !os.IsExist(err) {
return nil, err
}
@@ -362,10 +362,12 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
if err != nil {
return err
}
- if err := idtools.MkdirAllAs(path.Dir(dir), 0700, rootUID, rootGID); err != nil {
+ root := idtools.IDPair{UID: rootUID, GID: rootGID}
+
+ if err := idtools.MkdirAllAndChown(path.Dir(dir), 0700, root); err != nil {
return err
}
- if err := idtools.MkdirAs(dir, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(dir, 0700, root); err != nil {
return err
}
@@ -390,7 +392,7 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
}
}
- if err := idtools.MkdirAs(path.Join(dir, "diff"), 0755, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "diff"), 0755, root); err != nil {
return err
}
@@ -409,10 +411,10 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
return nil
}
- if err := idtools.MkdirAs(path.Join(dir, "work"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "work"), 0700, root); err != nil {
return err
}
- if err := idtools.MkdirAs(path.Join(dir, "merged"), 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAndChown(path.Join(dir, "merged"), 0700, root); err != nil {
return err
}
diff --git a/components/engine/daemon/graphdriver/quota/errors.go b/components/engine/daemon/graphdriver/quota/errors.go
new file mode 100644
index 0000000000..1741f2f5db
--- /dev/null
+++ b/components/engine/daemon/graphdriver/quota/errors.go
@@ -0,0 +1,19 @@
+package quota
+
+import "github.com/docker/docker/api/errdefs"
+
+var (
+ _ errdefs.ErrNotImplemented = (*errQuotaNotSupported)(nil)
+)
+
+// ErrQuotaNotSupported indicates that the filesystem does not support project quotas, or has them disabled
+var ErrQuotaNotSupported = errQuotaNotSupported{}
+
+type errQuotaNotSupported struct {
+}
+
+func (e errQuotaNotSupported) NotImplemented() {}
+
+func (e errQuotaNotSupported) Error() string {
+ return "Filesystem does not support, or has not enabled quotas"
+}
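Turning ErrQuotaNotSupported into a typed value (rather than a bare errors.New) lets it satisfy the errdefs not-implemented interface while still supporting the plain equality checks used elsewhere in this diff. A small sketch of both ways a caller can detect it:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/daemon/graphdriver/quota"
)

func main() {
	err := error(quota.ErrQuotaNotSupported)

	// Sentinel comparison, as used by the vfs and graphtest changes in this diff.
	fmt.Println(err == quota.ErrQuotaNotSupported) // true

	// Behavioural check through the interface the error now implements.
	_, notImplemented := err.(interface{ NotImplemented() })
	fmt.Println(notImplemented) // true
}
```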
diff --git a/components/engine/daemon/graphdriver/quota/projectquota.go b/components/engine/daemon/graphdriver/quota/projectquota.go
index 84e391aa89..e25965baf3 100644
--- a/components/engine/daemon/graphdriver/quota/projectquota.go
+++ b/components/engine/daemon/graphdriver/quota/projectquota.go
@@ -58,15 +58,11 @@ import (
"path/filepath"
"unsafe"
- "errors"
-
+ rsystem "github.com/opencontainers/runc/libcontainer/system"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
-// ErrQuotaNotSupported indicates if were found the FS does not have projects quotas available
-var ErrQuotaNotSupported = errors.New("Filesystem does not support or has not enabled quotas")
-
// Quota limit params - currently we only control blocks hard limit
type Quota struct {
Size uint64
@@ -103,6 +99,14 @@ type Control struct {
// project ids.
//
func NewControl(basePath string) (*Control, error) {
+ //
+ // If we are running in a user namespace quota won't be supported for
+ // now since makeBackingFsDev() will try to mknod().
+ //
+ if rsystem.RunningInUserNS() {
+ return nil, ErrQuotaNotSupported
+ }
+
//
// create backing filesystem device node
//
diff --git a/components/engine/daemon/graphdriver/register/register_overlay.go b/components/engine/daemon/graphdriver/register/register_overlay.go
index 9ba849cedc..3a9526420f 100644
--- a/components/engine/daemon/graphdriver/register/register_overlay.go
+++ b/components/engine/daemon/graphdriver/register/register_overlay.go
@@ -5,5 +5,4 @@ package register
import (
// register the overlay graphdriver
_ "github.com/docker/docker/daemon/graphdriver/overlay"
- _ "github.com/docker/docker/daemon/graphdriver/overlay2"
)
diff --git a/components/engine/daemon/graphdriver/register/register_overlay2.go b/components/engine/daemon/graphdriver/register/register_overlay2.go
new file mode 100644
index 0000000000..b2da0f4763
--- /dev/null
+++ b/components/engine/daemon/graphdriver/register/register_overlay2.go
@@ -0,0 +1,8 @@
+// +build !exclude_graphdriver_overlay2,linux
+
+package register
+
+import (
+ // register the overlay2 graphdriver
+ _ "github.com/docker/docker/daemon/graphdriver/overlay2"
+)
diff --git a/components/engine/daemon/graphdriver/register/register_zfs.go b/components/engine/daemon/graphdriver/register/register_zfs.go
index 8f34e35537..8c31c415f4 100644
--- a/components/engine/daemon/graphdriver/register/register_zfs.go
+++ b/components/engine/daemon/graphdriver/register/register_zfs.go
@@ -1,4 +1,4 @@
-// +build !exclude_graphdriver_zfs,linux !exclude_graphdriver_zfs,freebsd, solaris
+// +build !exclude_graphdriver_zfs,linux !exclude_graphdriver_zfs,freebsd
package register
diff --git a/components/engine/daemon/graphdriver/vfs/driver.go b/components/engine/daemon/graphdriver/vfs/driver.go
index 0482dccb87..610476fd88 100644
--- a/components/engine/daemon/graphdriver/vfs/driver.go
+++ b/components/engine/daemon/graphdriver/vfs/driver.go
@@ -6,10 +6,12 @@ import (
"path/filepath"
"github.com/docker/docker/daemon/graphdriver"
+ "github.com/docker/docker/daemon/graphdriver/quota"
"github.com/docker/docker/pkg/chrootarchive"
"github.com/docker/docker/pkg/containerfs"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/system"
+ units "github.com/docker/go-units"
"github.com/opencontainers/selinux/go-selinux/label"
)
@@ -33,6 +35,11 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
if err := idtools.MkdirAllAndChown(home, 0700, rootIDs); err != nil {
return nil, err
}
+
+ if err := setupDriverQuota(d); err != nil {
+ return nil, err
+ }
+
return graphdriver.NewNaiveDiffDriver(d, uidMaps, gidMaps), nil
}
@@ -41,6 +48,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// In order to support layering, files are copied from the parent layer into the new layer. There is no copy-on-write support.
// Driver must be wrapped in NaiveDiffDriver to be used as a graphdriver.Driver
type Driver struct {
+ driverQuota
home string
idMappings *idtools.IDMappings
}
@@ -67,15 +75,38 @@ func (d *Driver) Cleanup() error {
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
- return d.Create(id, parent, opts)
+ var err error
+ var size int64
+
+ if opts != nil {
+ for key, val := range opts.StorageOpt {
+ switch key {
+ case "size":
+ if !d.quotaSupported() {
+ return quota.ErrQuotaNotSupported
+ }
+ if size, err = units.RAMInBytes(val); err != nil {
+ return err
+ }
+ default:
+ return fmt.Errorf("Storage opt %s not supported", key)
+ }
+ }
+ }
+
+ return d.create(id, parent, uint64(size))
}
// Create prepares the filesystem for the VFS driver and copies the directory for the given id under the parent.
func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
if opts != nil && len(opts.StorageOpt) != 0 {
- return fmt.Errorf("--storage-opt is not supported for vfs")
+ return fmt.Errorf("--storage-opt is not supported for vfs on read-only layers")
}
+ return d.create(id, parent, 0)
+}
+
+func (d *Driver) create(id, parent string, size uint64) error {
dir := d.dir(id)
rootIDs := d.idMappings.RootPair()
if err := idtools.MkdirAllAndChown(filepath.Dir(dir), 0700, rootIDs); err != nil {
@@ -84,6 +115,13 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
if err := idtools.MkdirAndChown(dir, 0755, rootIDs); err != nil {
return err
}
+
+ if size != 0 {
+ if err := d.setupQuota(dir, size); err != nil {
+ return err
+ }
+ }
+
labelOpts := []string{"level:s0"}
if _, mountLabel, err := label.InitLabels(labelOpts); err == nil {
label.SetFileLabel(dir, mountLabel)
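With project quotas wired into the vfs driver, a read-write layer can now carry a "size" storage option; where the backing filesystem (or a user namespace) rules quotas out, the driver returns quota.ErrQuotaNotSupported rather than silently ignoring the option. A hedged usage sketch; the driver value and layer IDs are placeholders:

```go
// A sketch under stated assumptions: d is a vfs driver returned by vfs.Init,
// and the layer/parent IDs are placeholders for illustration.
package sketch

import (
	"log"

	"github.com/docker/docker/daemon/graphdriver"
	"github.com/docker/docker/daemon/graphdriver/quota"
)

func createLimitedLayer(d graphdriver.Driver, id, parent string) error {
	opts := &graphdriver.CreateOpts{
		StorageOpt: map[string]string{"size": "50M"},
	}
	err := d.CreateReadWrite(id, parent, opts)
	if err == quota.ErrQuotaNotSupported {
		// No usable project quotas (e.g. unsupported filesystem, or running
		// inside a user namespace), so the size limit cannot be enforced.
		log.Printf("cannot size-limit layer %s: %v", id, err)
	}
	return err
}
```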
diff --git a/components/engine/daemon/graphdriver/vfs/quota_linux.go b/components/engine/daemon/graphdriver/vfs/quota_linux.go
new file mode 100644
index 0000000000..032c15b9ef
--- /dev/null
+++ b/components/engine/daemon/graphdriver/vfs/quota_linux.go
@@ -0,0 +1,27 @@
+// +build linux
+
+package vfs
+
+import "github.com/docker/docker/daemon/graphdriver/quota"
+
+type driverQuota struct {
+ quotaCtl *quota.Control
+}
+
+func setupDriverQuota(driver *Driver) error {
+ if quotaCtl, err := quota.NewControl(driver.home); err == nil {
+ driver.quotaCtl = quotaCtl
+ } else if err != quota.ErrQuotaNotSupported {
+ return err
+ }
+
+ return nil
+}
+
+func (d *Driver) setupQuota(dir string, size uint64) error {
+ return d.quotaCtl.SetQuota(dir, quota.Quota{Size: size})
+}
+
+func (d *Driver) quotaSupported() bool {
+ return d.quotaCtl != nil
+}
diff --git a/components/engine/daemon/graphdriver/vfs/quota_unsupported.go b/components/engine/daemon/graphdriver/vfs/quota_unsupported.go
new file mode 100644
index 0000000000..9cca53d372
--- /dev/null
+++ b/components/engine/daemon/graphdriver/vfs/quota_unsupported.go
@@ -0,0 +1,20 @@
+// +build !linux
+
+package vfs
+
+import "github.com/docker/docker/daemon/graphdriver/quota"
+
+type driverQuota struct {
+}
+
+func setupDriverQuota(driver *Driver) error {
+ return nil
+}
+
+func (d *Driver) setupQuota(dir string, size uint64) error {
+ return quota.ErrQuotaNotSupported
+}
+
+func (d *Driver) quotaSupported() bool {
+ return false
+}
diff --git a/components/engine/daemon/graphdriver/vfs/vfs_test.go b/components/engine/daemon/graphdriver/vfs/vfs_test.go
index 9ecf21dbaa..16dc1357f1 100644
--- a/components/engine/daemon/graphdriver/vfs/vfs_test.go
+++ b/components/engine/daemon/graphdriver/vfs/vfs_test.go
@@ -32,6 +32,10 @@ func TestVfsCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "vfs")
}
+func TestVfsSetQuota(t *testing.T) {
+ graphtest.DriverTestSetQuota(t, "vfs", false)
+}
+
func TestVfsTeardown(t *testing.T) {
graphtest.PutDriver(t)
}
diff --git a/components/engine/daemon/graphdriver/windows/windows.go b/components/engine/daemon/graphdriver/windows/windows.go
index cc9ca529df..f3d239d76a 100644
--- a/components/engine/daemon/graphdriver/windows/windows.go
+++ b/components/engine/daemon/graphdriver/windows/windows.go
@@ -95,7 +95,7 @@ func InitFilter(home string, options []string, uidMaps, gidMaps []idtools.IDMap)
return nil, fmt.Errorf("%s is on an ReFS volume - ReFS volumes are not supported", home)
}
- if err := idtools.MkdirAllAs(home, 0700, 0, 0); err != nil {
+ if err := idtools.MkdirAllAndChown(home, 0700, idtools.IDPair{UID: 0, GID: 0}); err != nil {
return nil, fmt.Errorf("windowsfilter failed to create '%s': %v", home, err)
}
diff --git a/components/engine/daemon/graphdriver/zfs/zfs.go b/components/engine/daemon/graphdriver/zfs/zfs.go
index 4caedef0ee..51d31bcfc5 100644
--- a/components/engine/daemon/graphdriver/zfs/zfs.go
+++ b/components/engine/daemon/graphdriver/zfs/zfs.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package zfs
@@ -104,7 +104,7 @@ func Init(base string, opt []string, uidMaps, gidMaps []idtools.IDMap) (graphdri
if err != nil {
return nil, fmt.Errorf("Failed to get root uid/guid: %v", err)
}
- if err := idtools.MkdirAllAs(base, 0700, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(base, 0700, idtools.IDPair{rootUID, rootGID}); err != nil {
return nil, fmt.Errorf("Failed to create '%s': %v", base, err)
}
@@ -373,7 +373,7 @@ func (d *Driver) Get(id, mountLabel string) (containerfs.ContainerFS, error) {
return nil, err
}
// Create the target directories if they don't exist
- if err := idtools.MkdirAllAs(mountpoint, 0755, rootUID, rootGID); err != nil {
+ if err := idtools.MkdirAllAndChown(mountpoint, 0755, idtools.IDPair{rootUID, rootGID}); err != nil {
d.ctr.Decrement(mountpoint)
return nil, err
}
diff --git a/components/engine/daemon/graphdriver/zfs/zfs_test.go b/components/engine/daemon/graphdriver/zfs/zfs_test.go
index 3e22928438..2eb85c4d83 100644
--- a/components/engine/daemon/graphdriver/zfs/zfs_test.go
+++ b/components/engine/daemon/graphdriver/zfs/zfs_test.go
@@ -27,7 +27,7 @@ func TestZfsCreateSnap(t *testing.T) {
}
func TestZfsSetQuota(t *testing.T) {
- graphtest.DriverTestSetQuota(t, "zfs")
+ graphtest.DriverTestSetQuota(t, "zfs", true)
}
func TestZfsTeardown(t *testing.T) {
diff --git a/components/engine/daemon/graphdriver/zfs/zfs_unsupported.go b/components/engine/daemon/graphdriver/zfs/zfs_unsupported.go
index ce8daadaf6..643b169bc5 100644
--- a/components/engine/daemon/graphdriver/zfs/zfs_unsupported.go
+++ b/components/engine/daemon/graphdriver/zfs/zfs_unsupported.go
@@ -1,4 +1,4 @@
-// +build !linux,!freebsd,!solaris
+// +build !linux,!freebsd
package zfs
diff --git a/components/engine/daemon/health.go b/components/engine/daemon/health.go
index 7d8c84a397..26ae20f9b1 100644
--- a/components/engine/daemon/health.go
+++ b/components/engine/daemon/health.go
@@ -129,7 +129,7 @@ func handleProbeResult(d *Daemon, c *container.Container, result *types.Healthch
}
h := c.State.Health
- oldStatus := h.Status
+ oldStatus := h.Status()
if len(h.Log) >= maxLogEntries {
h.Log = append(h.Log[len(h.Log)+1-maxLogEntries:], result)
@@ -139,14 +139,14 @@ func handleProbeResult(d *Daemon, c *container.Container, result *types.Healthch
if result.ExitCode == exitStatusHealthy {
h.FailingStreak = 0
- h.Status = types.Healthy
+ h.SetStatus(types.Healthy)
} else { // Failure (including invalid exit code)
shouldIncrementStreak := true
// If the container is starting (i.e. we never had a successful health check)
// then we check if we are within the start period of the container in which
// case we do not increment the failure streak.
- if h.Status == types.Starting {
+ if h.Status() == types.Starting {
startPeriod := timeoutWithDefault(c.Config.Healthcheck.StartPeriod, defaultStartPeriod)
timeSinceStart := result.Start.Sub(c.State.StartedAt)
@@ -160,7 +160,7 @@ func handleProbeResult(d *Daemon, c *container.Container, result *types.Healthch
h.FailingStreak++
if h.FailingStreak >= retries {
- h.Status = types.Unhealthy
+ h.SetStatus(types.Unhealthy)
}
}
// Else we're starting or healthy. Stay in that state.
@@ -173,8 +173,9 @@ func handleProbeResult(d *Daemon, c *container.Container, result *types.Healthch
logrus.Errorf("Error replicating health state for container %s: %v", c.ID, err)
}
- if oldStatus != h.Status {
- d.LogContainerEvent(c, "health_status: "+h.Status)
+ current := h.Status()
+ if oldStatus != current {
+ d.LogContainerEvent(c, "health_status: "+current)
}
}
@@ -293,11 +294,11 @@ func (d *Daemon) initHealthMonitor(c *container.Container) {
d.stopHealthchecks(c)
if h := c.State.Health; h != nil {
- h.Status = types.Starting
+ h.SetStatus(types.Starting)
h.FailingStreak = 0
} else {
h := &container.Health{}
- h.Status = types.Starting
+ h.SetStatus(types.Starting)
c.State.Health = h
}
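Health status is now read and written through Status()/SetStatus() accessors rather than by touching a field directly, presumably so the monitor goroutine and API readers do not race on it. The real container.Health implementation is not part of this diff; the sketch below only illustrates what such an accessor pair typically looks like:

```go
package sketch

import "sync"

// healthState is an illustrative stand-in for container.Health's status
// handling; the real type lives in the container package and is not part of
// this diff.
type healthState struct {
	mu     sync.Mutex
	status string
}

// Status returns the current status under the lock.
func (h *healthState) Status() string {
	h.mu.Lock()
	defer h.mu.Unlock()
	return h.status
}

// SetStatus replaces the current status under the lock.
func (h *healthState) SetStatus(s string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.status = s
}
```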
diff --git a/components/engine/daemon/health_test.go b/components/engine/daemon/health_test.go
index 4fd89140d3..479ff24c52 100644
--- a/components/engine/daemon/health_test.go
+++ b/components/engine/daemon/health_test.go
@@ -14,7 +14,7 @@ import (
func reset(c *container.Container) {
c.State = &container.State{}
c.State.Health = &container.Health{}
- c.State.Health.Status = types.Starting
+ c.State.Health.SetStatus(types.Starting)
}
func TestNoneHealthcheck(t *testing.T) {
@@ -111,8 +111,8 @@ func TestHealthStates(t *testing.T) {
handleResult(c.State.StartedAt.Add(20*time.Second), 1)
handleResult(c.State.StartedAt.Add(40*time.Second), 1)
- if c.State.Health.Status != types.Starting {
- t.Errorf("Expecting starting, but got %#v\n", c.State.Health.Status)
+ if status := c.State.Health.Status(); status != types.Starting {
+ t.Errorf("Expecting starting, but got %#v\n", status)
}
if c.State.Health.FailingStreak != 2 {
t.Errorf("Expecting FailingStreak=2, but got %d\n", c.State.Health.FailingStreak)
@@ -133,15 +133,15 @@ func TestHealthStates(t *testing.T) {
c.Config.Healthcheck.StartPeriod = 30 * time.Second
handleResult(c.State.StartedAt.Add(20*time.Second), 1)
- if c.State.Health.Status != types.Starting {
- t.Errorf("Expecting starting, but got %#v\n", c.State.Health.Status)
+ if status := c.State.Health.Status(); status != types.Starting {
+ t.Errorf("Expecting starting, but got %#v\n", status)
}
if c.State.Health.FailingStreak != 0 {
t.Errorf("Expecting FailingStreak=0, but got %d\n", c.State.Health.FailingStreak)
}
handleResult(c.State.StartedAt.Add(50*time.Second), 1)
- if c.State.Health.Status != types.Starting {
- t.Errorf("Expecting starting, but got %#v\n", c.State.Health.Status)
+ if status := c.State.Health.Status(); status != types.Starting {
+ t.Errorf("Expecting starting, but got %#v\n", status)
}
if c.State.Health.FailingStreak != 1 {
t.Errorf("Expecting FailingStreak=1, but got %d\n", c.State.Health.FailingStreak)
diff --git a/components/engine/daemon/info_unix.go b/components/engine/daemon/info_unix.go
index fd2bbb45c3..9433434bcb 100644
--- a/components/engine/daemon/info_unix.go
+++ b/components/engine/daemon/info_unix.go
@@ -3,6 +3,7 @@
package daemon
import (
+ "context"
"os/exec"
"strings"
@@ -48,20 +49,10 @@ func (daemon *Daemon) FillPlatformInfo(v *types.Info, sysInfo *sysinfo.SysInfo)
}
v.ContainerdCommit.Expected = dockerversion.ContainerdCommitID
- if rv, err := exec.Command("docker-containerd", "--version").Output(); err == nil {
- parts := strings.Split(strings.TrimSpace(string(rv)), " ")
- if len(parts) == 3 {
- v.ContainerdCommit.ID = parts[2]
- }
- switch {
- case v.ContainerdCommit.ID == "":
- logrus.Warnf("failed to retrieve docker-containerd version: unknown format", string(rv))
- v.ContainerdCommit.ID = "N/A"
- case strings.HasSuffix(v.ContainerdCommit.ID, "-g"+v.ContainerdCommit.ID[len(v.ContainerdCommit.ID)-7:]):
- v.ContainerdCommit.ID = v.ContainerdCommit.Expected
- }
+ if rv, err := daemon.containerd.Version(context.Background()); err == nil {
+ v.ContainerdCommit.ID = rv.Revision
} else {
- logrus.Warnf("failed to retrieve docker-containerd version: %v", err)
+ logrus.Warnf("failed to retrieve containerd version: %v", err)
v.ContainerdCommit.ID = "N/A"
}
diff --git a/components/engine/daemon/inspect.go b/components/engine/daemon/inspect.go
index 8b0f109e32..20cfa6ce2b 100644
--- a/components/engine/daemon/inspect.go
+++ b/components/engine/daemon/inspect.go
@@ -139,7 +139,7 @@ func (daemon *Daemon) getInspectData(container *container.Container) (*types.Con
var containerHealth *types.Health
if container.State.Health != nil {
containerHealth = &types.Health{
- Status: container.State.Health.Status,
+ Status: container.State.Health.Status(),
FailingStreak: container.State.Health.FailingStreak,
Log: append([]*types.HealthcheckResult{}, container.State.Health.Log...),
}
diff --git a/components/engine/daemon/inspect_unix.go b/components/engine/daemon/inspect_unix.go
index bd28481e6a..f073695e33 100644
--- a/components/engine/daemon/inspect_unix.go
+++ b/components/engine/daemon/inspect_unix.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package daemon
diff --git a/components/engine/daemon/list_unix.go b/components/engine/daemon/list_unix.go
index ebaae4560c..7b92c7c491 100644
--- a/components/engine/daemon/list_unix.go
+++ b/components/engine/daemon/list_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package daemon
diff --git a/components/engine/daemon/listeners/listeners_unix.go b/components/engine/daemon/listeners/listeners_unix.go
index 0a4e5e4e31..3a7c0f85b0 100644
--- a/components/engine/daemon/listeners/listeners_unix.go
+++ b/components/engine/daemon/listeners/listeners_unix.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package listeners
diff --git a/components/engine/daemon/logdrivers_windows.go b/components/engine/daemon/logdrivers_windows.go
index f3002b97e2..9f99c618c6 100644
--- a/components/engine/daemon/logdrivers_windows.go
+++ b/components/engine/daemon/logdrivers_windows.go
@@ -6,6 +6,7 @@ import (
_ "github.com/docker/docker/daemon/logger/awslogs"
_ "github.com/docker/docker/daemon/logger/etwlogs"
_ "github.com/docker/docker/daemon/logger/fluentd"
+ _ "github.com/docker/docker/daemon/logger/gelf"
_ "github.com/docker/docker/daemon/logger/jsonfilelog"
_ "github.com/docker/docker/daemon/logger/logentries"
_ "github.com/docker/docker/daemon/logger/splunk"
diff --git a/components/engine/daemon/logger/adapter.go b/components/engine/daemon/logger/adapter.go
index 98852e89c1..5817913cbc 100644
--- a/components/engine/daemon/logger/adapter.go
+++ b/components/engine/daemon/logger/adapter.go
@@ -122,6 +122,9 @@ func (a *pluginAdapterWithRead) ReadLogs(config ReadConfig) *LogWatcher {
if !config.Since.IsZero() && msg.Timestamp.Before(config.Since) {
continue
}
+ if !config.Until.IsZero() && msg.Timestamp.After(config.Until) {
+ return
+ }
select {
case watcher.Msg <- msg:
diff --git a/components/engine/daemon/logger/gelf/gelf.go b/components/engine/daemon/logger/gelf/gelf.go
index d902c0c8e1..ab598c0d38 100644
--- a/components/engine/daemon/logger/gelf/gelf.go
+++ b/components/engine/daemon/logger/gelf/gelf.go
@@ -1,5 +1,3 @@
-// +build linux
-
// Package gelf provides the log driver for forwarding server logs to
// endpoints that support the Graylog Extended Log Format.
package gelf
diff --git a/components/engine/daemon/logger/journald/read.go b/components/engine/daemon/logger/journald/read.go
index 4d9b999a50..6aff21f441 100644
--- a/components/engine/daemon/logger/journald/read.go
+++ b/components/engine/daemon/logger/journald/read.go
@@ -171,13 +171,15 @@ func (s *journald) Close() error {
return nil
}
-func (s *journald) drainJournal(logWatcher *logger.LogWatcher, config logger.ReadConfig, j *C.sd_journal, oldCursor *C.char) *C.char {
+func (s *journald) drainJournal(logWatcher *logger.LogWatcher, j *C.sd_journal, oldCursor *C.char, untilUnixMicro uint64) (*C.char, bool) {
var msg, data, cursor *C.char
var length C.size_t
var stamp C.uint64_t
var priority, partial C.int
+ var done bool
- // Walk the journal from here forward until we run out of new entries.
+ // Walk the journal from here forward until we run out of new entries
+ // or we reach the until value (if provided).
drain:
for {
// Try not to send a given entry twice.
@@ -195,6 +197,12 @@ drain:
if C.sd_journal_get_realtime_usec(j, &stamp) != 0 {
break
}
+ // Break if the timestamp exceeds any provided until flag.
+ if untilUnixMicro != 0 && untilUnixMicro < uint64(stamp) {
+ done = true
+ break
+ }
+
// Set up the time and text of the entry.
timestamp := time.Unix(int64(stamp)/1000000, (int64(stamp)%1000000)*1000)
line := C.GoBytes(unsafe.Pointer(msg), C.int(length))
@@ -240,10 +248,10 @@ drain:
// ensure that we won't be freeing an address that's invalid
cursor = nil
}
- return cursor
+ return cursor, done
}
-func (s *journald) followJournal(logWatcher *logger.LogWatcher, config logger.ReadConfig, j *C.sd_journal, pfd [2]C.int, cursor *C.char) *C.char {
+func (s *journald) followJournal(logWatcher *logger.LogWatcher, j *C.sd_journal, pfd [2]C.int, cursor *C.char, untilUnixMicro uint64) *C.char {
s.mu.Lock()
s.readers.readers[logWatcher] = logWatcher
if s.closed {
@@ -270,9 +278,10 @@ func (s *journald) followJournal(logWatcher *logger.LogWatcher, config logger.Re
break
}
- cursor = s.drainJournal(logWatcher, config, j, cursor)
+ var done bool
+ cursor, done = s.drainJournal(logWatcher, j, cursor, untilUnixMicro)
- if status != 1 {
+ if status != 1 || done {
// We were notified to stop
break
}
@@ -304,6 +313,7 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
var cmatch, cursor *C.char
var stamp C.uint64_t
var sinceUnixMicro uint64
+ var untilUnixMicro uint64
var pipes [2]C.int
// Get a handle to the journal.
@@ -343,10 +353,19 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
nano := config.Since.UnixNano()
sinceUnixMicro = uint64(nano / 1000)
}
+ // If we have an until value, convert it too
+ if !config.Until.IsZero() {
+ nano := config.Until.UnixNano()
+ untilUnixMicro = uint64(nano / 1000)
+ }
if config.Tail > 0 {
lines := config.Tail
- // Start at the end of the journal.
- if C.sd_journal_seek_tail(j) < 0 {
+ // If until time provided, start from there.
+ // Otherwise start at the end of the journal.
+ if untilUnixMicro != 0 && C.sd_journal_seek_realtime_usec(j, C.uint64_t(untilUnixMicro)) < 0 {
+ logWatcher.Err <- fmt.Errorf("error seeking provided until value")
+ return
+ } else if C.sd_journal_seek_tail(j) < 0 {
logWatcher.Err <- fmt.Errorf("error seeking to end of journal")
return
}
@@ -362,8 +381,7 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
if C.sd_journal_get_realtime_usec(j, &stamp) != 0 {
break
} else {
- // Compare the timestamp on the entry
- // to our threshold value.
+ // Compare the timestamp on the entry to our threshold value.
if sinceUnixMicro != 0 && sinceUnixMicro > uint64(stamp) {
break
}
@@ -392,7 +410,7 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
return
}
}
- cursor = s.drainJournal(logWatcher, config, j, nil)
+ cursor, _ = s.drainJournal(logWatcher, j, nil, untilUnixMicro)
if config.Follow {
// Allocate a descriptor for following the journal, if we'll
// need one. Do it here so that we can report if it fails.
@@ -404,7 +422,7 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
if C.pipe(&pipes[0]) == C.int(-1) {
logWatcher.Err <- fmt.Errorf("error opening journald close notification pipe")
} else {
- cursor = s.followJournal(logWatcher, config, j, pipes, cursor)
+ cursor = s.followJournal(logWatcher, j, pipes, cursor, untilUnixMicro)
// Let followJournal handle freeing the journal context
// object and closing the channel.
following = true
diff --git a/components/engine/daemon/logger/jsonfilelog/jsonfilelog.go b/components/engine/daemon/logger/jsonfilelog/jsonfilelog.go
index 177c070394..7aa92f3d37 100644
--- a/components/engine/daemon/logger/jsonfilelog/jsonfilelog.go
+++ b/components/engine/daemon/logger/jsonfilelog/jsonfilelog.go
@@ -7,7 +7,6 @@ import (
"bytes"
"encoding/json"
"fmt"
- "io"
"strconv"
"sync"
@@ -24,12 +23,9 @@ const Name = "json-file"
// JSONFileLogger is Logger implementation for default Docker logging.
type JSONFileLogger struct {
- extra []byte // json-encoded extra attributes
-
- mu sync.RWMutex
- buf *bytes.Buffer // avoids allocating a new buffer on each call to `Log()`
+ mu sync.Mutex
closed bool
- writer *loggerutils.RotateFileWriter
+ writer *loggerutils.LogFile
readers map[*logger.LogWatcher]struct{} // stores the active log followers
}
@@ -65,11 +61,6 @@ func New(info logger.Info) (logger.Logger, error) {
}
}
- writer, err := loggerutils.NewRotateFileWriter(info.LogPath, capval, maxFiles)
- if err != nil {
- return nil, err
- }
-
var extra []byte
attrs, err := info.ExtraAttributes(nil)
if err != nil {
@@ -83,33 +74,35 @@ func New(info logger.Info) (logger.Logger, error) {
}
}
+ buf := bytes.NewBuffer(nil)
+ marshalFunc := func(msg *logger.Message) ([]byte, error) {
+ if err := marshalMessage(msg, extra, buf); err != nil {
+ return nil, err
+ }
+ b := buf.Bytes()
+ buf.Reset()
+ return b, nil
+ }
+
+ writer, err := loggerutils.NewLogFile(info.LogPath, capval, maxFiles, marshalFunc, decodeFunc)
+ if err != nil {
+ return nil, err
+ }
+
return &JSONFileLogger{
- buf: bytes.NewBuffer(nil),
writer: writer,
readers: make(map[*logger.LogWatcher]struct{}),
- extra: extra,
}, nil
}
// Log converts logger.Message to jsonlog.JSONLog and serializes it to file.
func (l *JSONFileLogger) Log(msg *logger.Message) error {
l.mu.Lock()
- err := writeMessageBuf(l.writer, msg, l.extra, l.buf)
- l.buf.Reset()
+ err := l.writer.WriteLogEntry(msg)
l.mu.Unlock()
return err
}
-func writeMessageBuf(w io.Writer, m *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
- if err := marshalMessage(m, extra, buf); err != nil {
- logger.PutMessage(m)
- return err
- }
- logger.PutMessage(m)
- _, err := w.Write(buf.Bytes())
- return errors.Wrap(err, "error writing log entry")
-}
-
func marshalMessage(msg *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
logLine := msg.Line
if !msg.Partial {
diff --git a/components/engine/daemon/logger/jsonfilelog/jsonfilelog_test.go b/components/engine/daemon/logger/jsonfilelog/jsonfilelog_test.go
index 2b2b2b5229..893c054669 100644
--- a/components/engine/daemon/logger/jsonfilelog/jsonfilelog_test.go
+++ b/components/engine/daemon/logger/jsonfilelog/jsonfilelog_test.go
@@ -82,7 +82,7 @@ func BenchmarkJSONFileLoggerLog(b *testing.B) {
}
buf := bytes.NewBuffer(nil)
- require.NoError(b, marshalMessage(msg, jsonlogger.(*JSONFileLogger).extra, buf))
+ require.NoError(b, marshalMessage(msg, nil, buf))
b.SetBytes(int64(buf.Len()))
b.ResetTimer()
diff --git a/components/engine/daemon/logger/jsonfilelog/read.go b/components/engine/daemon/logger/jsonfilelog/read.go
index 2586c7d7f7..f190e01a56 100644
--- a/components/engine/daemon/logger/jsonfilelog/read.go
+++ b/components/engine/daemon/logger/jsonfilelog/read.go
@@ -1,33 +1,45 @@
package jsonfilelog
import (
- "bytes"
"encoding/json"
- "fmt"
"io"
- "os"
- "time"
-
- "github.com/fsnotify/fsnotify"
- "golang.org/x/net/context"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/logger/jsonfilelog/jsonlog"
- "github.com/docker/docker/daemon/logger/jsonfilelog/multireader"
- "github.com/docker/docker/pkg/filenotify"
- "github.com/docker/docker/pkg/tailfile"
- "github.com/pkg/errors"
- "github.com/sirupsen/logrus"
)
const maxJSONDecodeRetry = 20000
+// ReadLogs implements the logger's LogReader interface for the logs
+// created by this driver.
+func (l *JSONFileLogger) ReadLogs(config logger.ReadConfig) *logger.LogWatcher {
+ logWatcher := logger.NewLogWatcher()
+
+ go l.readLogs(logWatcher, config)
+ return logWatcher
+}
+
+func (l *JSONFileLogger) readLogs(watcher *logger.LogWatcher, config logger.ReadConfig) {
+ defer close(watcher.Msg)
+
+ l.mu.Lock()
+ l.readers[watcher] = struct{}{}
+ l.mu.Unlock()
+
+ l.writer.ReadLogs(config, watcher)
+
+ l.mu.Lock()
+ delete(l.readers, watcher)
+ l.mu.Unlock()
+}
+
func decodeLogLine(dec *json.Decoder, l *jsonlog.JSONLog) (*logger.Message, error) {
l.Reset()
if err := dec.Decode(l); err != nil {
return nil, err
}
+
var attrs []backend.LogAttr
if len(l.Attrs) != 0 {
attrs = make([]backend.LogAttr, 0, len(l.Attrs))
@@ -44,304 +56,34 @@ func decodeLogLine(dec *json.Decoder, l *jsonlog.JSONLog) (*logger.Message, erro
return msg, nil
}
-// ReadLogs implements the logger's LogReader interface for the logs
-// created by this driver.
-func (l *JSONFileLogger) ReadLogs(config logger.ReadConfig) *logger.LogWatcher {
- logWatcher := logger.NewLogWatcher()
-
- go l.readLogs(logWatcher, config)
- return logWatcher
-}
-
-func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.ReadConfig) {
- defer close(logWatcher.Msg)
-
- // lock so the read stream doesn't get corrupted due to rotations or other log data written while we open these files
- // This will block writes!!!
- l.mu.RLock()
-
- // TODO it would be nice to move a lot of this reader implementation to the rotate logger object
- pth := l.writer.LogPath()
- var files []io.ReadSeeker
- for i := l.writer.MaxFiles(); i > 1; i-- {
- f, err := os.Open(fmt.Sprintf("%s.%d", pth, i-1))
- if err != nil {
- if !os.IsNotExist(err) {
- logWatcher.Err <- err
- l.mu.RUnlock()
- return
- }
- continue
- }
- defer f.Close()
- files = append(files, f)
- }
-
- latestFile, err := os.Open(pth)
- if err != nil {
- logWatcher.Err <- errors.Wrap(err, "error opening latest log file")
- l.mu.RUnlock()
- return
- }
- defer latestFile.Close()
-
- latestChunk, err := newSectionReader(latestFile)
-
- // Now we have the reader sectioned, all fd's opened, we can unlock.
- // New writes/rotates will not affect seeking through these files
- l.mu.RUnlock()
-
- if err != nil {
- logWatcher.Err <- err
- return
- }
-
- if config.Tail != 0 {
- tailer := multireader.MultiReadSeeker(append(files, latestChunk)...)
- tailFile(tailer, logWatcher, config.Tail, config.Since)
- }
-
- // close all the rotated files
- for _, f := range files {
- if err := f.(io.Closer).Close(); err != nil {
- logrus.WithField("logger", "json-file").Warnf("error closing tailed log file: %v", err)
- }
- }
-
- if !config.Follow || l.closed {
- return
- }
-
- notifyRotate := l.writer.NotifyRotate()
- defer l.writer.NotifyRotateEvict(notifyRotate)
-
- l.mu.Lock()
- l.readers[logWatcher] = struct{}{}
- l.mu.Unlock()
-
- followLogs(latestFile, logWatcher, notifyRotate, config.Since)
-
- l.mu.Lock()
- delete(l.readers, logWatcher)
- l.mu.Unlock()
-}
-
-func newSectionReader(f *os.File) (*io.SectionReader, error) {
- // seek to the end to get the size
- // we'll leave this at the end of the file since section reader does not advance the reader
- size, err := f.Seek(0, os.SEEK_END)
- if err != nil {
- return nil, errors.Wrap(err, "error getting current file size")
- }
- return io.NewSectionReader(f, 0, size), nil
-}
-
-func tailFile(f io.ReadSeeker, logWatcher *logger.LogWatcher, tail int, since time.Time) {
- rdr := io.Reader(f)
- if tail > 0 {
- ls, err := tailfile.TailFile(f, tail)
- if err != nil {
- logWatcher.Err <- err
- return
- }
- rdr = bytes.NewBuffer(bytes.Join(ls, []byte("\n")))
- }
- dec := json.NewDecoder(rdr)
- for {
- msg, err := decodeLogLine(dec, &jsonlog.JSONLog{})
- if err != nil {
- if err != io.EOF {
- logWatcher.Err <- err
- }
- return
- }
- if !since.IsZero() && msg.Timestamp.Before(since) {
- continue
- }
- select {
- case <-logWatcher.WatchClose():
- return
- case logWatcher.Msg <- msg:
- }
- }
-}
-
-func watchFile(name string) (filenotify.FileWatcher, error) {
- fileWatcher, err := filenotify.New()
- if err != nil {
- return nil, err
- }
-
- if err := fileWatcher.Add(name); err != nil {
- logrus.WithField("logger", "json-file").Warnf("falling back to file poller due to error: %v", err)
- fileWatcher.Close()
- fileWatcher = filenotify.NewPollingWatcher()
-
- if err := fileWatcher.Add(name); err != nil {
- fileWatcher.Close()
- logrus.Debugf("error watching log file for modifications: %v", err)
- return nil, err
- }
- }
- return fileWatcher, nil
-}
-
-func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan interface{}, since time.Time) {
- dec := json.NewDecoder(f)
+// decodeFunc is used to create a decoder for the log file reader
+func decodeFunc(rdr io.Reader) func() (*logger.Message, error) {
l := &jsonlog.JSONLog{}
-
- name := f.Name()
- fileWatcher, err := watchFile(name)
- if err != nil {
- logWatcher.Err <- err
- return
- }
- defer func() {
- f.Close()
- fileWatcher.Remove(name)
- fileWatcher.Close()
- }()
-
- ctx, cancel := context.WithCancel(context.Background())
- defer cancel()
- go func() {
- select {
- case <-logWatcher.WatchClose():
- fileWatcher.Remove(name)
- cancel()
- case <-ctx.Done():
- return
- }
- }()
-
- var retries int
- handleRotate := func() error {
- f.Close()
- fileWatcher.Remove(name)
-
- // retry when the file doesn't exist
- for retries := 0; retries <= 5; retries++ {
- f, err = os.Open(name)
- if err == nil || !os.IsNotExist(err) {
+ dec := json.NewDecoder(rdr)
+ return func() (msg *logger.Message, err error) {
+ for retries := 0; retries < maxJSONDecodeRetry; retries++ {
+ msg, err = decodeLogLine(dec, l)
+ if err == nil {
break
}
- }
- if err != nil {
- return err
- }
- if err := fileWatcher.Add(name); err != nil {
- return err
- }
- dec = json.NewDecoder(f)
- return nil
- }
- errRetry := errors.New("retry")
- errDone := errors.New("done")
- waitRead := func() error {
- select {
- case e := <-fileWatcher.Events():
- switch e.Op {
- case fsnotify.Write:
- dec = json.NewDecoder(f)
- return nil
- case fsnotify.Rename, fsnotify.Remove:
- select {
- case <-notifyRotate:
- case <-ctx.Done():
- return errDone
- }
- if err := handleRotate(); err != nil {
- return err
- }
- return nil
- }
- return errRetry
- case err := <-fileWatcher.Errors():
- logrus.Debug("logger got error watching file: %v", err)
- // Something happened, let's try and stay alive and create a new watcher
- if retries <= 5 {
- fileWatcher.Close()
- fileWatcher, err = watchFile(name)
- if err != nil {
- return err
- }
+			// try again, could be due to an incomplete json object as we read
+ if _, ok := err.(*json.SyntaxError); ok {
+ dec = json.NewDecoder(rdr)
retries++
- return errRetry
+ continue
}
- return err
- case <-ctx.Done():
- return errDone
- }
- }
- handleDecodeErr := func(err error) error {
- if err == io.EOF {
- for {
- err := waitRead()
- if err == nil {
- break
- }
- if err == errRetry {
- continue
- }
- return err
- }
- return nil
- }
- // try again because this shouldn't happen
- if _, ok := err.(*json.SyntaxError); ok && retries <= maxJSONDecodeRetry {
- dec = json.NewDecoder(f)
- retries++
- return nil
- }
- // io.ErrUnexpectedEOF is returned from json.Decoder when there is
- // remaining data in the parser's buffer while an io.EOF occurs.
- // If the json logger writes a partial json log entry to the disk
- // while at the same time the decoder tries to decode it, the race condition happens.
- if err == io.ErrUnexpectedEOF && retries <= maxJSONDecodeRetry {
- reader := io.MultiReader(dec.Buffered(), f)
- dec = json.NewDecoder(reader)
- retries++
- return nil
- }
- return err
- }
-
- // main loop
- for {
- msg, err := decodeLogLine(dec, l)
- if err != nil {
- if err := handleDecodeErr(err); err != nil {
- if err == errDone {
- return
- }
- // we got an unrecoverable error, so return
- logWatcher.Err <- err
- return
- }
- // ready to try again
- continue
- }
-
- retries = 0 // reset retries since we've succeeded
- if !since.IsZero() && msg.Timestamp.Before(since) {
- continue
- }
- select {
- case logWatcher.Msg <- msg:
- case <-ctx.Done():
- logWatcher.Msg <- msg
- for {
- msg, err := decodeLogLine(dec, l)
- if err != nil {
- return
- }
- if !since.IsZero() && msg.Timestamp.Before(since) {
- continue
- }
- logWatcher.Msg <- msg
+ // io.ErrUnexpectedEOF is returned from json.Decoder when there is
+ // remaining data in the parser's buffer while an io.EOF occurs.
+ // If the json logger writes a partial json log entry to the disk
+			// while the decoder is reading it at the same time, this race condition occurs.
+ if err == io.ErrUnexpectedEOF {
+ reader := io.MultiReader(dec.Buffered(), rdr)
+ dec = json.NewDecoder(reader)
+ retries++
}
}
+ return msg, err
}
}
diff --git a/components/engine/daemon/logger/jsonfilelog/read_test.go b/components/engine/daemon/logger/jsonfilelog/read_test.go
index 01a05c4b78..599fdf9336 100644
--- a/components/engine/daemon/logger/jsonfilelog/read_test.go
+++ b/components/engine/daemon/logger/jsonfilelog/read_test.go
@@ -35,7 +35,7 @@ func BenchmarkJSONFileLoggerReadLogs(b *testing.B) {
}
buf := bytes.NewBuffer(nil)
- require.NoError(b, marshalMessage(msg, jsonlogger.(*JSONFileLogger).extra, buf))
+ require.NoError(b, marshalMessage(msg, nil, buf))
b.SetBytes(int64(buf.Len()))
b.ResetTimer()
diff --git a/components/engine/daemon/logger/logentries/logentries.go b/components/engine/daemon/logger/logentries/logentries.go
index e28707c746..1d25843e8c 100644
--- a/components/engine/daemon/logger/logentries/logentries.go
+++ b/components/engine/daemon/logger/logentries/logentries.go
@@ -76,7 +76,7 @@ func (f *logentries) Log(msg *logger.Message) error {
logger.PutMessage(msg)
f.writer.Println(f.tag, ts, data)
} else {
- line := msg.Line
+ line := string(msg.Line)
logger.PutMessage(msg)
f.writer.Println(line)
}
diff --git a/components/engine/daemon/logger/logger.go b/components/engine/daemon/logger/logger.go
index 1108597c62..dc25bebfc7 100644
--- a/components/engine/daemon/logger/logger.go
+++ b/components/engine/daemon/logger/logger.go
@@ -88,6 +88,7 @@ type SizedLogger interface {
// ReadConfig is the configuration passed into ReadLogs.
type ReadConfig struct {
Since time.Time
+ Until time.Time
Tail int
Follow bool
}
@@ -139,3 +140,6 @@ type Capability struct {
// Determines if a log driver can read back logs
ReadLogs bool
}
+
+// MarshalFunc is a func that marshals a message into an arbitrary format
+type MarshalFunc func(*Message) ([]byte, error)
diff --git a/components/engine/daemon/logger/loggerutils/logfile.go b/components/engine/daemon/logger/loggerutils/logfile.go
new file mode 100644
index 0000000000..6a7a68909e
--- /dev/null
+++ b/components/engine/daemon/logger/loggerutils/logfile.go
@@ -0,0 +1,454 @@
+package loggerutils
+
+import (
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "os"
+ "strconv"
+ "sync"
+ "time"
+
+ "github.com/docker/docker/daemon/logger"
+ "github.com/docker/docker/daemon/logger/loggerutils/multireader"
+ "github.com/docker/docker/pkg/filenotify"
+ "github.com/docker/docker/pkg/pubsub"
+ "github.com/docker/docker/pkg/tailfile"
+ "github.com/fsnotify/fsnotify"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+)
+
+// LogFile is a Logger implementation for default Docker logging.
+type LogFile struct {
+ f *os.File // store for closing
+ closed bool
+ mu sync.RWMutex
+ capacity int64 //maximum size of each file
+ currentSize int64 // current size of the latest file
+ maxFiles int //maximum number of files
+ notifyRotate *pubsub.Publisher
+ marshal logger.MarshalFunc
+ createDecoder makeDecoderFunc
+}
+
+type makeDecoderFunc func(rdr io.Reader) func() (*logger.Message, error)
+
+// NewLogFile creates a new LogFile
+func NewLogFile(logPath string, capacity int64, maxFiles int, marshaller logger.MarshalFunc, decodeFunc makeDecoderFunc) (*LogFile, error) {
+ log, err := os.OpenFile(logPath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0640)
+ if err != nil {
+ return nil, err
+ }
+
+ size, err := log.Seek(0, os.SEEK_END)
+ if err != nil {
+ return nil, err
+ }
+
+ return &LogFile{
+ f: log,
+ capacity: capacity,
+ currentSize: size,
+ maxFiles: maxFiles,
+ notifyRotate: pubsub.NewPublisher(0, 1),
+ marshal: marshaller,
+ createDecoder: decodeFunc,
+ }, nil
+}
+
+// WriteLogEntry writes the provided log message to the current log file.
+// This may trigger a rotation event if the max file/capacity limits are hit.
+func (w *LogFile) WriteLogEntry(msg *logger.Message) error {
+ b, err := w.marshal(msg)
+ if err != nil {
+ return errors.Wrap(err, "error marshalling log message")
+ }
+
+ logger.PutMessage(msg)
+
+ w.mu.Lock()
+ if w.closed {
+ w.mu.Unlock()
+ return errors.New("cannot write because the output file was closed")
+ }
+
+ if err := w.checkCapacityAndRotate(); err != nil {
+ w.mu.Unlock()
+ return err
+ }
+
+ n, err := w.f.Write(b)
+ if err == nil {
+ w.currentSize += int64(n)
+ }
+ w.mu.Unlock()
+ return err
+}
+
+func (w *LogFile) checkCapacityAndRotate() error {
+ if w.capacity == -1 {
+ return nil
+ }
+
+ if w.currentSize >= w.capacity {
+ name := w.f.Name()
+ if err := w.f.Close(); err != nil {
+ return errors.Wrap(err, "error closing file")
+ }
+ if err := rotate(name, w.maxFiles); err != nil {
+ return err
+ }
+ file, err := os.OpenFile(name, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0640)
+ if err != nil {
+ return err
+ }
+ w.f = file
+ w.currentSize = 0
+ w.notifyRotate.Publish(struct{}{})
+ }
+
+ return nil
+}
+
+func rotate(name string, maxFiles int) error {
+ if maxFiles < 2 {
+ return nil
+ }
+ for i := maxFiles - 1; i > 1; i-- {
+ toPath := name + "." + strconv.Itoa(i)
+ fromPath := name + "." + strconv.Itoa(i-1)
+ if err := os.Rename(fromPath, toPath); err != nil && !os.IsNotExist(err) {
+ return errors.Wrap(err, "error rotating old log entries")
+ }
+ }
+
+ if err := os.Rename(name, name+".1"); err != nil && !os.IsNotExist(err) {
+ return errors.Wrap(err, "error rotating current log")
+ }
+ return nil
+}
+
+// LogPath returns the location the given writer logs to.
+func (w *LogFile) LogPath() string {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ return w.f.Name()
+}
+
+// MaxFiles returns the maximum number of files
+func (w *LogFile) MaxFiles() int {
+ return w.maxFiles
+}
+
+// Close closes underlying file and signals all readers to stop.
+func (w *LogFile) Close() error {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ if w.closed {
+ return nil
+ }
+ if err := w.f.Close(); err != nil {
+ return err
+ }
+ w.closed = true
+ return nil
+}
+
+// ReadLogs decodes entries from log files and sends them to the passed-in watcher
+func (w *LogFile) ReadLogs(config logger.ReadConfig, watcher *logger.LogWatcher) {
+ w.mu.RLock()
+ files, err := w.openRotatedFiles()
+ if err != nil {
+ w.mu.RUnlock()
+ watcher.Err <- err
+ return
+ }
+ defer func() {
+ for _, f := range files {
+ f.Close()
+ }
+ }()
+
+ currentFile, err := os.Open(w.f.Name())
+ if err != nil {
+ w.mu.RUnlock()
+ watcher.Err <- err
+ return
+ }
+ defer currentFile.Close()
+
+ currentChunk, err := newSectionReader(currentFile)
+ w.mu.RUnlock()
+
+ if err != nil {
+ watcher.Err <- err
+ return
+ }
+
+ if config.Tail != 0 {
+ seekers := make([]io.ReadSeeker, 0, len(files)+1)
+ for _, f := range files {
+ seekers = append(seekers, f)
+ }
+ seekers = append(seekers, currentChunk)
+ tailFile(multireader.MultiReadSeeker(seekers...), watcher, w.createDecoder, config)
+ }
+
+ w.mu.RLock()
+ if !config.Follow || w.closed {
+ w.mu.RUnlock()
+ return
+ }
+ w.mu.RUnlock()
+
+ notifyRotate := w.notifyRotate.Subscribe()
+ defer w.notifyRotate.Evict(notifyRotate)
+ followLogs(currentFile, watcher, notifyRotate, w.createDecoder, config.Since, config.Until)
+}
+
+func (w *LogFile) openRotatedFiles() (files []*os.File, err error) {
+ defer func() {
+ if err == nil {
+ return
+ }
+ for _, f := range files {
+ f.Close()
+ }
+ }()
+
+ for i := w.maxFiles; i > 1; i-- {
+ f, err := os.Open(fmt.Sprintf("%s.%d", w.f.Name(), i-1))
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return nil, err
+ }
+ continue
+ }
+ files = append(files, f)
+ }
+
+ return files, nil
+}
+
+func newSectionReader(f *os.File) (*io.SectionReader, error) {
+ // seek to the end to get the size
+ // we'll leave this at the end of the file since section reader does not advance the reader
+ size, err := f.Seek(0, os.SEEK_END)
+ if err != nil {
+ return nil, errors.Wrap(err, "error getting current file size")
+ }
+ return io.NewSectionReader(f, 0, size), nil
+}
+
+type decodeFunc func() (*logger.Message, error)
+
+func tailFile(f io.ReadSeeker, watcher *logger.LogWatcher, createDecoder makeDecoderFunc, config logger.ReadConfig) {
+ var rdr io.Reader = f
+ if config.Tail > 0 {
+ ls, err := tailfile.TailFile(f, config.Tail)
+ if err != nil {
+ watcher.Err <- err
+ return
+ }
+ rdr = bytes.NewBuffer(bytes.Join(ls, []byte("\n")))
+ }
+
+ decodeLogLine := createDecoder(rdr)
+ for {
+ msg, err := decodeLogLine()
+ if err != nil {
+ if err != io.EOF {
+ watcher.Err <- err
+ }
+ return
+ }
+ if !config.Since.IsZero() && msg.Timestamp.Before(config.Since) {
+ continue
+ }
+ if !config.Until.IsZero() && msg.Timestamp.After(config.Until) {
+ return
+ }
+ select {
+ case <-watcher.WatchClose():
+ return
+ case watcher.Msg <- msg:
+ }
+ }
+}
+
+func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan interface{}, createDecoder makeDecoderFunc, since, until time.Time) {
+ decodeLogLine := createDecoder(f)
+
+ name := f.Name()
+ fileWatcher, err := watchFile(name)
+ if err != nil {
+ logWatcher.Err <- err
+ return
+ }
+ defer func() {
+ f.Close()
+ fileWatcher.Remove(name)
+ fileWatcher.Close()
+ }()
+
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ go func() {
+ select {
+ case <-logWatcher.WatchClose():
+ fileWatcher.Remove(name)
+ cancel()
+ case <-ctx.Done():
+ return
+ }
+ }()
+
+ var retries int
+ handleRotate := func() error {
+ f.Close()
+ fileWatcher.Remove(name)
+
+ // retry when the file doesn't exist
+ for retries := 0; retries <= 5; retries++ {
+ f, err = os.Open(name)
+ if err == nil || !os.IsNotExist(err) {
+ break
+ }
+ }
+ if err != nil {
+ return err
+ }
+ if err := fileWatcher.Add(name); err != nil {
+ return err
+ }
+ decodeLogLine = createDecoder(f)
+ return nil
+ }
+
+ errRetry := errors.New("retry")
+ errDone := errors.New("done")
+ waitRead := func() error {
+ select {
+ case e := <-fileWatcher.Events():
+ switch e.Op {
+ case fsnotify.Write:
+ decodeLogLine = createDecoder(f)
+ return nil
+ case fsnotify.Rename, fsnotify.Remove:
+ select {
+ case <-notifyRotate:
+ case <-ctx.Done():
+ return errDone
+ }
+ if err := handleRotate(); err != nil {
+ return err
+ }
+ return nil
+ }
+ return errRetry
+ case err := <-fileWatcher.Errors():
+			logrus.Debugf("logger got error watching file: %v", err)
+ // Something happened, let's try and stay alive and create a new watcher
+ if retries <= 5 {
+ fileWatcher.Close()
+ fileWatcher, err = watchFile(name)
+ if err != nil {
+ return err
+ }
+ retries++
+ return errRetry
+ }
+ return err
+ case <-ctx.Done():
+ return errDone
+ }
+ }
+
+ handleDecodeErr := func(err error) error {
+ if err != io.EOF {
+ return err
+ }
+
+ for {
+ err := waitRead()
+ if err == nil {
+ break
+ }
+ if err == errRetry {
+ continue
+ }
+ return err
+ }
+ return nil
+ }
+
+ // main loop
+ for {
+ msg, err := decodeLogLine()
+ if err != nil {
+ if err := handleDecodeErr(err); err != nil {
+ if err == errDone {
+ return
+ }
+ // we got an unrecoverable error, so return
+ logWatcher.Err <- err
+ return
+ }
+ // ready to try again
+ continue
+ }
+
+ retries = 0 // reset retries since we've succeeded
+ if !since.IsZero() && msg.Timestamp.Before(since) {
+ continue
+ }
+ if !until.IsZero() && msg.Timestamp.After(until) {
+ return
+ }
+ select {
+ case logWatcher.Msg <- msg:
+ case <-ctx.Done():
+ logWatcher.Msg <- msg
+ for {
+ msg, err := decodeLogLine()
+ if err != nil {
+ return
+ }
+ if !since.IsZero() && msg.Timestamp.Before(since) {
+ continue
+ }
+ if !until.IsZero() && msg.Timestamp.After(until) {
+ return
+ }
+ logWatcher.Msg <- msg
+ }
+ }
+ }
+}
+
+func watchFile(name string) (filenotify.FileWatcher, error) {
+ fileWatcher, err := filenotify.New()
+ if err != nil {
+ return nil, err
+ }
+
+ logger := logrus.WithFields(logrus.Fields{
+ "module": "logger",
+		"file":   name,
+ })
+
+ if err := fileWatcher.Add(name); err != nil {
+ logger.WithError(err).Warnf("falling back to file poller")
+ fileWatcher.Close()
+ fileWatcher = filenotify.NewPollingWatcher()
+
+ if err := fileWatcher.Add(name); err != nil {
+ fileWatcher.Close()
+ logger.WithError(err).Debugf("error watching log file for modifications")
+ return nil, err
+ }
+ }
+ return fileWatcher, nil
+}
diff --git a/components/engine/daemon/logger/jsonfilelog/multireader/multireader.go b/components/engine/daemon/logger/loggerutils/multireader/multireader.go
similarity index 100%
rename from components/engine/daemon/logger/jsonfilelog/multireader/multireader.go
rename to components/engine/daemon/logger/loggerutils/multireader/multireader.go
diff --git a/components/engine/daemon/logger/jsonfilelog/multireader/multireader_test.go b/components/engine/daemon/logger/loggerutils/multireader/multireader_test.go
similarity index 100%
rename from components/engine/daemon/logger/jsonfilelog/multireader/multireader_test.go
rename to components/engine/daemon/logger/loggerutils/multireader/multireader_test.go
diff --git a/components/engine/daemon/logger/loggerutils/rotatefilewriter.go b/components/engine/daemon/logger/loggerutils/rotatefilewriter.go
deleted file mode 100644
index 457a39b5a3..0000000000
--- a/components/engine/daemon/logger/loggerutils/rotatefilewriter.go
+++ /dev/null
@@ -1,141 +0,0 @@
-package loggerutils
-
-import (
- "errors"
- "os"
- "strconv"
- "sync"
-
- "github.com/docker/docker/pkg/pubsub"
-)
-
-// RotateFileWriter is Logger implementation for default Docker logging.
-type RotateFileWriter struct {
- f *os.File // store for closing
- closed bool
- mu sync.Mutex
- capacity int64 //maximum size of each file
- currentSize int64 // current size of the latest file
- maxFiles int //maximum number of files
- notifyRotate *pubsub.Publisher
-}
-
-//NewRotateFileWriter creates new RotateFileWriter
-func NewRotateFileWriter(logPath string, capacity int64, maxFiles int) (*RotateFileWriter, error) {
- log, err := os.OpenFile(logPath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0640)
- if err != nil {
- return nil, err
- }
-
- size, err := log.Seek(0, os.SEEK_END)
- if err != nil {
- return nil, err
- }
-
- return &RotateFileWriter{
- f: log,
- capacity: capacity,
- currentSize: size,
- maxFiles: maxFiles,
- notifyRotate: pubsub.NewPublisher(0, 1),
- }, nil
-}
-
-//WriteLog write log message to File
-func (w *RotateFileWriter) Write(message []byte) (int, error) {
- w.mu.Lock()
- if w.closed {
- w.mu.Unlock()
- return -1, errors.New("cannot write because the output file was closed")
- }
- if err := w.checkCapacityAndRotate(); err != nil {
- w.mu.Unlock()
- return -1, err
- }
-
- n, err := w.f.Write(message)
- if err == nil {
- w.currentSize += int64(n)
- }
- w.mu.Unlock()
- return n, err
-}
-
-func (w *RotateFileWriter) checkCapacityAndRotate() error {
- if w.capacity == -1 {
- return nil
- }
-
- if w.currentSize >= w.capacity {
- name := w.f.Name()
- if err := w.f.Close(); err != nil {
- return err
- }
- if err := rotate(name, w.maxFiles); err != nil {
- return err
- }
- file, err := os.OpenFile(name, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0640)
- if err != nil {
- return err
- }
- w.f = file
- w.currentSize = 0
- w.notifyRotate.Publish(struct{}{})
- }
-
- return nil
-}
-
-func rotate(name string, maxFiles int) error {
- if maxFiles < 2 {
- return nil
- }
- for i := maxFiles - 1; i > 1; i-- {
- toPath := name + "." + strconv.Itoa(i)
- fromPath := name + "." + strconv.Itoa(i-1)
- if err := os.Rename(fromPath, toPath); err != nil && !os.IsNotExist(err) {
- return err
- }
- }
-
- if err := os.Rename(name, name+".1"); err != nil && !os.IsNotExist(err) {
- return err
- }
- return nil
-}
-
-// LogPath returns the location the given writer logs to.
-func (w *RotateFileWriter) LogPath() string {
- w.mu.Lock()
- defer w.mu.Unlock()
- return w.f.Name()
-}
-
-// MaxFiles return maximum number of files
-func (w *RotateFileWriter) MaxFiles() int {
- return w.maxFiles
-}
-
-//NotifyRotate returns the new subscriber
-func (w *RotateFileWriter) NotifyRotate() chan interface{} {
- return w.notifyRotate.Subscribe()
-}
-
-//NotifyRotateEvict removes the specified subscriber from receiving any more messages.
-func (w *RotateFileWriter) NotifyRotateEvict(sub chan interface{}) {
- w.notifyRotate.Evict(sub)
-}
-
-// Close closes underlying file and signals all readers to stop.
-func (w *RotateFileWriter) Close() error {
- w.mu.Lock()
- defer w.mu.Unlock()
- if w.closed {
- return nil
- }
- if err := w.f.Close(); err != nil {
- return err
- }
- w.closed = true
- return nil
-}
diff --git a/components/engine/daemon/logger/plugin_unix.go b/components/engine/daemon/logger/plugin_unix.go
index f93d7af0ee..edf11af15e 100644
--- a/components/engine/daemon/logger/plugin_unix.go
+++ b/components/engine/daemon/logger/plugin_unix.go
@@ -1,4 +1,4 @@
-// +build linux solaris freebsd
+// +build linux freebsd
package logger
diff --git a/components/engine/daemon/logger/plugin_unsupported.go b/components/engine/daemon/logger/plugin_unsupported.go
index 0a2036c838..b649b0644e 100644
--- a/components/engine/daemon/logger/plugin_unsupported.go
+++ b/components/engine/daemon/logger/plugin_unsupported.go
@@ -1,4 +1,4 @@
-// +build !linux,!solaris,!freebsd
+// +build !linux,!freebsd
package logger
diff --git a/components/engine/daemon/logs.go b/components/engine/daemon/logs.go
index 68c5e5aa47..131360b7b4 100644
--- a/components/engine/daemon/logs.go
+++ b/components/engine/daemon/logs.go
@@ -77,8 +77,18 @@ func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
since = time.Unix(s, n)
}
+ var until time.Time
+ if config.Until != "" && config.Until != "0" {
+ s, n, err := timetypes.ParseTimestamps(config.Until, 0)
+ if err != nil {
+ return nil, false, err
+ }
+ until = time.Unix(s, n)
+ }
+
readConfig := logger.ReadConfig{
Since: since,
+ Until: until,
Tail: tailLines,
Follow: follow,
}
@@ -113,7 +123,7 @@ func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
}
return
case <-ctx.Done():
- lg.Debug("logs: end stream, ctx is done: %v", ctx.Err())
+ lg.Debugf("logs: end stream, ctx is done: %v", ctx.Err())
return
case msg, ok := <-logs.Msg:
// there is some kind of pool or ring buffer in the logger that
diff --git a/components/engine/daemon/oci_linux.go b/components/engine/daemon/oci_linux.go
index b4a6bf60d2..757de85366 100644
--- a/components/engine/daemon/oci_linux.go
+++ b/components/engine/daemon/oci_linux.go
@@ -18,7 +18,6 @@ import (
"github.com/docker/docker/oci"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/mount"
- "github.com/docker/docker/pkg/stringutils"
"github.com/docker/docker/volume"
"github.com/opencontainers/runc/libcontainer/apparmor"
"github.com/opencontainers/runc/libcontainer/cgroups"
@@ -522,29 +521,52 @@ var (
}
)
+// inSlice tests whether a string is contained in a slice of strings or not.
+// Comparison is case sensitive
+func inSlice(slice []string, s string) bool {
+ for _, ss := range slice {
+ if s == ss {
+ return true
+ }
+ }
+ return false
+}
+
func setMounts(daemon *Daemon, s *specs.Spec, c *container.Container, mounts []container.Mount) error {
userMounts := make(map[string]struct{})
for _, m := range mounts {
userMounts[m.Destination] = struct{}{}
}
- // Filter out mounts from spec
- noIpc := c.HostConfig.IpcMode.IsNone()
- // Filter out mounts that are overridden by user supplied mounts
+ // Copy all mounts from spec to defaultMounts, except for
+	// - mounts overridden by a user supplied mount;
+ // - all mounts under /dev if a user supplied /dev is present;
+ // - /dev/shm, in case IpcMode is none.
+ // While at it, also
+ // - set size for /dev/shm from shmsize.
var defaultMounts []specs.Mount
_, mountDev := userMounts["/dev"]
for _, m := range s.Mounts {
- // filter out /dev/shm mount if case IpcMode is none
- if noIpc && m.Destination == "/dev/shm" {
+ if _, ok := userMounts[m.Destination]; ok {
+ // filter out mount overridden by a user supplied mount
continue
}
- // filter out mount overridden by a user supplied mount
- if _, ok := userMounts[m.Destination]; !ok {
- if mountDev && strings.HasPrefix(m.Destination, "/dev/") {
+ if mountDev && strings.HasPrefix(m.Destination, "/dev/") {
+ // filter out everything under /dev if /dev is user-mounted
+ continue
+ }
+
+ if m.Destination == "/dev/shm" {
+ if c.HostConfig.IpcMode.IsNone() {
+ // filter out /dev/shm for "none" IpcMode
continue
}
- defaultMounts = append(defaultMounts, m)
+ // set size for /dev/shm mount from spec
+ sizeOpt := "size=" + strconv.FormatInt(c.HostConfig.ShmSize, 10)
+ m.Options = append(m.Options, sizeOpt)
}
+
+ defaultMounts = append(defaultMounts, m)
}
s.Mounts = defaultMounts
@@ -628,11 +650,11 @@ func setMounts(daemon *Daemon, s *specs.Spec, c *container.Container, mounts []c
if s.Root.Readonly {
for i, m := range s.Mounts {
switch m.Destination {
- case "/proc", "/dev/pts", "/dev/mqueue": // /dev is remounted by runc
+ case "/proc", "/dev/pts", "/dev/mqueue", "/dev":
continue
}
if _, ok := userMounts[m.Destination]; !ok {
- if !stringutils.InSlice(m.Options, "ro") {
+ if !inSlice(m.Options, "ro") {
s.Mounts[i].Options = append(s.Mounts[i].Options, "ro")
}
}
@@ -652,14 +674,6 @@ func setMounts(daemon *Daemon, s *specs.Spec, c *container.Container, mounts []c
s.Linux.MaskedPaths = nil
}
- // Set size for /dev/shm mount that comes from spec (IpcMode: private only)
- for i, m := range s.Mounts {
- if m.Destination == "/dev/shm" {
- sizeOpt := "size=" + strconv.FormatInt(c.HostConfig.ShmSize, 10)
- s.Mounts[i].Options = append(s.Mounts[i].Options, sizeOpt)
- }
- }
-
// TODO: until a kernel/mount solution exists for handling remount in a user namespace,
// we must clear the readonly flag for the cgroups mount (@mrunalp concurs)
if uidMap := daemon.idMappings.UIDs(); uidMap != nil || c.HostConfig.Privileged {
@@ -755,7 +769,6 @@ func (daemon *Daemon) createSpec(c *container.Container) (*specs.Spec, error) {
if err := setResources(&s, c.HostConfig.Resources); err != nil {
return nil, fmt.Errorf("linux runtime spec resources: %v", err)
}
- s.Process.OOMScoreAdj = &c.HostConfig.OomScoreAdj
s.Linux.Sysctl = c.HostConfig.Sysctls
p := s.Linux.CgroupsPath
diff --git a/components/engine/daemon/oci_linux_test.go b/components/engine/daemon/oci_linux_test.go
new file mode 100644
index 0000000000..f2f455f9c6
--- /dev/null
+++ b/components/engine/daemon/oci_linux_test.go
@@ -0,0 +1,50 @@
+package daemon
+
+import (
+ "testing"
+
+ containertypes "github.com/docker/docker/api/types/container"
+ "github.com/docker/docker/container"
+ "github.com/docker/docker/daemon/config"
+ "github.com/docker/docker/oci"
+ "github.com/docker/docker/pkg/idtools"
+
+ "github.com/stretchr/testify/assert"
+)
+
+// TestTmpfsDevShmNoDupMount checks that a user-specified /dev/shm tmpfs
+// mount (as in "docker run --tmpfs /dev/shm:rw,size=NNN") does not result
+// in "Duplicate mount point" error from the engine.
+// https://github.com/moby/moby/issues/35455
+func TestTmpfsDevShmNoDupMount(t *testing.T) {
+ d := Daemon{
+ // some empty structs to avoid getting a panic
+ // caused by a null pointer dereference
+ idMappings: &idtools.IDMappings{},
+ configStore: &config.Config{},
+ }
+ c := &container.Container{
+ ShmPath: "foobar", // non-empty, for c.IpcMounts() to work
+ HostConfig: &containertypes.HostConfig{
+ IpcMode: containertypes.IpcMode("shareable"), // default mode
+ // --tmpfs /dev/shm:rw,exec,size=NNN
+ Tmpfs: map[string]string{
+ "/dev/shm": "rw,exec,size=1g",
+ },
+ },
+ }
+
+	// Mimic the code flow of daemon.createSpec(), enough to reproduce the issue
+ ms, err := d.setupMounts(c)
+ assert.NoError(t, err)
+
+ ms = append(ms, c.IpcMounts()...)
+
+ tmpfsMounts, err := c.TmpfsMounts()
+ assert.NoError(t, err)
+ ms = append(ms, tmpfsMounts...)
+
+ s := oci.DefaultSpec()
+ err = setMounts(&d, &s, c, ms)
+ assert.NoError(t, err)
+}
diff --git a/components/engine/daemon/reload_test.go b/components/engine/daemon/reload_test.go
index 3ff6b57735..96b1a2452d 100644
--- a/components/engine/daemon/reload_test.go
+++ b/components/engine/daemon/reload_test.go
@@ -1,5 +1,3 @@
-// +build !solaris
-
package daemon
import (
diff --git a/components/engine/daemon/stats/collector.go b/components/engine/daemon/stats/collector.go
index c930bc756c..f13a8045d4 100644
--- a/components/engine/daemon/stats/collector.go
+++ b/components/engine/daemon/stats/collector.go
@@ -1,5 +1,3 @@
-// +build !solaris
-
package stats
import (
diff --git a/components/engine/daemon/stats/collector_unix.go b/components/engine/daemon/stats/collector_unix.go
index cd522e07ce..6b1318a1bd 100644
--- a/components/engine/daemon/stats/collector_unix.go
+++ b/components/engine/daemon/stats/collector_unix.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package stats
diff --git a/components/engine/daemon/volumes_unix.go b/components/engine/daemon/volumes_unix.go
index 0a4cbf8493..bee1fb10b6 100644
--- a/components/engine/daemon/volumes_unix.go
+++ b/components/engine/daemon/volumes_unix.go
@@ -1,7 +1,5 @@
// +build !windows
-// TODO(amitkris): We need to split this file for solaris.
-
package daemon
import (
@@ -86,8 +84,13 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
// remapped root (user namespaces)
rootIDs := daemon.idMappings.RootPair()
for _, mount := range netMounts {
- if err := os.Chown(mount.Source, rootIDs.UID, rootIDs.GID); err != nil {
- return nil, err
+ // we should only modify ownership of network files within our own container
+		// metadata repository. If the user specifies an external mount path, it is
+		// up to the user to make sure the file has proper ownership for userns
+ if strings.Index(mount.Source, daemon.repository) == 0 {
+ if err := os.Chown(mount.Source, rootIDs.UID, rootIDs.GID); err != nil {
+ return nil, err
+ }
}
}
return append(mounts, netMounts...), nil
diff --git a/components/engine/docs/api/version-history.md b/components/engine/docs/api/version-history.md
index 77b8545bcc..5056c0ddba 100644
--- a/components/engine/docs/api/version-history.md
+++ b/components/engine/docs/api/version-history.md
@@ -13,6 +13,17 @@ keywords: "API, Docker, rcli, REST, documentation"
will be rejected.
-->
+## v1.35 API changes
+
+[Docker Engine API v1.35](https://docs.docker.com/engine/api/v1.35/) documentation
+
+* `POST /services/create` and `POST /services/(id)/update` now accepts an
+ `Isolation` field on container spec to set the Isolation technology of the
+ containers running the service (`default`, `process`, or `hyperv`). This
+ configuration is only used for Windows containers.
+* `GET /containers/(name)/logs` now supports an additional query parameter: `until`,
+ which returns log lines that occurred before the specified timestamp.
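+
+As a rough illustration (the service name, image, container name, and timestamp
+below are placeholders, and the socket path assumes a default Linux install),
+these changes could be exercised directly against the API like this:
+
+```bash
+# Create a service whose (Windows) tasks use Hyper-V isolation; requires swarm mode
+curl --unix-socket /var/run/docker.sock \
+  -H "Content-Type: application/json" \
+  -d '{"Name":"web","TaskTemplate":{"ContainerSpec":{"Image":"my-windows-image","Isolation":"hyperv"}}}' \
+  http://localhost/v1.35/services/create
+
+# Return only log lines written before the given Unix timestamp
+curl --unix-socket /var/run/docker.sock \
+  "http://localhost/v1.35/containers/mycontainer/logs?stdout=1&until=1512000000"
+```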
+
## v1.34 API changes
[Docker Engine API v1.34](https://docs.docker.com/engine/api/v1.34/) documentation
@@ -93,7 +104,7 @@ keywords: "API, Docker, rcli, REST, documentation"
* `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client.
* `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic
* `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic
-* `GET /events` now supports service, node and secret events which are emitted when users create, update and remove service, node and secret
+* `GET /events` now supports service, node and secret events which are emitted when users create, update and remove service, node and secret
* `GET /events` now supports network remove event which is emitted when users remove a swarm scoped network
* `GET /events` now supports a filter type `scope` in which supported value could be swarm and local
diff --git a/components/engine/docs/contributing/README.md b/components/engine/docs/contributing/README.md
new file mode 100644
index 0000000000..915c0cff1e
--- /dev/null
+++ b/components/engine/docs/contributing/README.md
@@ -0,0 +1,8 @@
+### Get set up for Moby development
+
+ * [README first](who-written-for.md)
+ * [Get the required software](software-required.md)
+ * [Set up for development on Windows](software-req-win.md)
+ * [Configure Git for contributing](set-up-git.md)
+ * [Work with a development container](set-up-dev-env.md)
+ * [Run tests and test documentation](test.md)
diff --git a/components/engine/docs/contributing/images/branch-sig.png b/components/engine/docs/contributing/images/branch-sig.png
new file mode 100644
index 0000000000..b069319eee
Binary files /dev/null and b/components/engine/docs/contributing/images/branch-sig.png differ
diff --git a/components/engine/docs/contributing/images/contributor-edit.png b/components/engine/docs/contributing/images/contributor-edit.png
new file mode 100644
index 0000000000..d847e224a5
Binary files /dev/null and b/components/engine/docs/contributing/images/contributor-edit.png differ
diff --git a/components/engine/docs/contributing/images/copy_url.png b/components/engine/docs/contributing/images/copy_url.png
new file mode 100644
index 0000000000..82df4eec50
Binary files /dev/null and b/components/engine/docs/contributing/images/copy_url.png differ
diff --git a/components/engine/docs/contributing/images/fork_docker.png b/components/engine/docs/contributing/images/fork_docker.png
new file mode 100644
index 0000000000..88c6ed8a1e
Binary files /dev/null and b/components/engine/docs/contributing/images/fork_docker.png differ
diff --git a/components/engine/docs/contributing/images/git_bash.png b/components/engine/docs/contributing/images/git_bash.png
new file mode 100644
index 0000000000..be2ec73896
Binary files /dev/null and b/components/engine/docs/contributing/images/git_bash.png differ
diff --git a/components/engine/docs/contributing/images/list_example.png b/components/engine/docs/contributing/images/list_example.png
new file mode 100644
index 0000000000..2e3b59a29e
Binary files /dev/null and b/components/engine/docs/contributing/images/list_example.png differ
diff --git a/components/engine/docs/contributing/set-up-dev-env.md b/components/engine/docs/contributing/set-up-dev-env.md
new file mode 100644
index 0000000000..acd6888cd4
--- /dev/null
+++ b/components/engine/docs/contributing/set-up-dev-env.md
@@ -0,0 +1,321 @@
+### Work with a development container
+
+In this section, you learn to develop like the Moby Engine core team.
+The `moby/moby` repository includes a `Dockerfile` at its root. This file defines
+Moby's development environment. The `Dockerfile` lists the environment's
+dependencies: system libraries and binaries, Go environment, Go dependencies,
+etc.
+
+Moby's development environment is itself, ultimately, a Docker container.
+You use the `moby/moby` repository and its `Dockerfile` to create a Docker image,
+run a Docker container, and develop code in the container.
+
+If you followed the procedures that
+set up Git for contributing, you should have a fork of the `moby/moby`
+repository. You also created a branch called `dry-run-test`. In this section,
+you continue working with your fork on this branch.
+
+## Task 1. Remove images and containers
+
+Moby developers run the latest stable release of the Docker software. They clean their local hosts of
+unnecessary Docker artifacts such as stopped containers or unused images.
+Cleaning unnecessary artifacts isn't strictly necessary, but it is good
+practice, so it is included here.
+
+To remove unnecessary artifacts:
+
+1. Verify that you have no unnecessary containers running on your host.
+
+ ```none
+ $ docker ps -a
+ ```
+
+ You should see something similar to the following:
+
+ ```none
+ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ ```
+
+   There are no running or stopped containers on this host. A fast way to
+   remove old containers is the `docker system prune` command:
+
+ ```none
+ $ docker system prune -a
+ ```
+
+   On older versions of the Docker Engine, use the following command instead:
+
+ ```none
+ $ docker rm $(docker ps -a -q)
+ ```
+
+ This command uses `docker ps` to list all containers (`-a` flag) by numeric
+ IDs (`-q` flag). Then, the `docker rm` command removes the resulting list.
+ If you have running but unused containers, stop and then remove them with
+ the `docker stop` and `docker rm` commands.
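+
+   For example (the container name here is a placeholder):
+
+   ```bash
+   $ docker stop my-old-container
+   $ docker rm my-old-container
+   ```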
+
+2. Verify that your host has no dangling images.
+
+ ```none
+ $ docker images
+ ```
+
+ You should see something similar to the following:
+
+ ```none
+ REPOSITORY TAG IMAGE ID CREATED SIZE
+ ```
+
+ This host has no images. You may have one or more _dangling_ images. A
+ dangling image is not used by a running container and is not an ancestor of
+   another image on your system. A fast way to remove dangling images is
+ the following:
+
+ ```none
+ $ docker rmi -f $(docker images -q -a -f dangling=true)
+ ```
+
+ This command uses `docker images` to list all images (`-a` flag) by numeric
+ IDs (`-q` flag) and filter them to find dangling images (`-f dangling=true`).
+ Then, the `docker rmi` command forcibly (`-f` flag) removes
+   the resulting list. If you get a "requires a minimum of 1 argument" message,
+   that means there were no dangling images. To remove just one image, use the
+ `docker rmi ID` command.
+
+## Task 2. Start a development container
+
+If you followed the last procedure, your host is clean of unnecessary images and
+containers. In this section, you build an image from the Engine development
+environment and run it in the container. Both steps are automated for you by the
+Makefile in the Engine code repository. The first time you build an image, it
+can take over 15 minutes to complete.
+
+1. Open a terminal.
+
+ For [Docker Toolbox](../../toolbox/overview.md) users, use `docker-machine status your_vm_name` to make sure your VM is running. You
+ may need to run `eval "$(docker-machine env your_vm_name)"` to initialize your
+ shell environment. If you use Docker for Mac or Docker for Windows, you do not need
+ to use Docker Machine.
+
+2. Change into the root of the `moby-fork` repository.
+
+ ```none
+ $ cd ~/repos/moby-fork
+ ```
+
+ If you are following along with this guide, you created a `dry-run-test`
+ branch when you
+ set up Git for contributing.
+
+3. Ensure you are on your `dry-run-test` branch.
+
+ ```none
+ $ git checkout dry-run-test
+ ```
+
+ If you get a message that the branch doesn't exist, add the `-b` flag (`git checkout -b dry-run-test`) so the
+ command both creates the branch and checks it out.
+
+4. Use `make` to build a development environment image and run it in a container.
+
+ ```none
+ $ make BIND_DIR=. shell
+ ```
+
+   Using the instructions in the `Dockerfile`, the build may need to download
+   and/or configure source and other images. On first build, this process may
+   take between 5 and 15 minutes to create an image. The command returns
+   informational messages as it runs. A successful build returns a final
+   message and opens a Bash shell into the container.
+
+ ```none
+ Successfully built 3d872560918e
+ docker run --rm -i --privileged -e BUILDFLAGS -e KEEPBUNDLE -e DOCKER_BUILD_GOGC -e DOCKER_BUILD_PKGS -e DOCKER_CLIENTONLY -e DOCKER_DEBUG -e DOCKER_EXPERIMENTAL -e DOCKER_GITCOMMIT -e DOCKER_GRAPHDRIVER=devicemapper -e DOCKER_INCREMENTAL_BINARY -e DOCKER_REMAP_ROOT -e DOCKER_STORAGE_OPTS -e DOCKER_USERLANDPROXY -e TESTDIRS -e TESTFLAGS -e TIMEOUT -v "home/ubuntu/repos/docker/bundles:/go/src/github.com/moby/moby/bundles" -t "docker-dev:dry-run-test" bash
+ root@f31fa223770f:/go/src/github.com/moby/moby#
+ ```
+
+ At this point, your prompt reflects the container's BASH shell.
+
+5. List the contents of the current directory (`/go/src/github.com/moby/moby`).
+
+ You should see the image's source from the `/go/src/github.com/moby/moby`
+ directory.
+
+ 
+
+6. Make a `dockerd` binary.
+
+ ```none
+ root@a8b2885ab900:/go/src/github.com/moby/moby# hack/make.sh binary
+ Removing bundles/
+
+ ---> Making bundle: binary (in bundles/binary)
+ Building: bundles/binary-daemon/dockerd-17.06.0-dev
+ Created binary: bundles/binary-daemon/dockerd-17.06.0-dev
+ Copying nested executables into bundles/binary-daemon
+
+ ```
+
+7. Run `make install`, which copies the binary to the container's
+ `/usr/local/bin/` directory.
+
+ ```none
+ root@a8b2885ab900:/go/src/github.com/moby/moby# make install
+ ```
+
+8. Start the Engine daemon running in the background.
+
+ ```none
+    root@a8b2885ab900:/go/src/github.com/moby/moby# dockerd -D &
+ ...output snipped...
+ DEBU[0001] Registering POST, /networks/{id:.*}/connect
+ DEBU[0001] Registering POST, /networks/{id:.*}/disconnect
+ DEBU[0001] Registering DELETE, /networks/{id:.*}
+ INFO[0001] API listen on /var/run/docker.sock
+ DEBU[0003] containerd connection state change: READY
+ ```
+
+ The `-D` flag starts the daemon in debug mode. The `&` starts it as a
+ background process. You'll find these options useful when debugging code
+ development. You will need to hit `return` in order to get back to your shell prompt.
+
+ > **Note**: The following command automates the `build`,
+ > `install`, and `run` steps above. Once the command below completes, hit `ctrl-z` to suspend the process, then run `bg 1` and hit `enter` to resume the daemon process in the background and get back to your shell prompt.
+
+ ```none
+ hack/make.sh binary install-binary run
+ ```
+
+9. Inside your container, check your Docker version.
+
+ ```none
+ root@5f8630b873fe:/go/src/github.com/moby/moby# docker --version
+ Docker version 1.12.0-dev, build 6e728fb
+ ```
+
+ Inside the container you are running a development version. This is the version
+ on the current branch. It reflects the value of the `VERSION` file at the
+    root of your `moby-fork` repository.
+
+10. Run the `hello-world` image.
+
+ ```none
+ root@5f8630b873fe:/go/src/github.com/moby/moby# docker run hello-world
+ ```
+
+11. List the image you just downloaded.
+
+ ```none
+ root@5f8630b873fe:/go/src/github.com/moby/moby# docker images
+ REPOSITORY TAG IMAGE ID CREATED SIZE
+ hello-world latest c54a2cc56cbb 3 months ago 1.85 kB
+ ```
+
+12. Open another terminal on your local host.
+
+13. List the running development container.
+
+ ```none
+ ubuntu@ubuntu1404:~$ docker ps
+ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ a8b2885ab900 docker-dev:dry-run-test "hack/dind bash" 43 minutes ago Up 43 minutes hungry_payne
+ ```
+
+ Notice that the tag on the container is marked with the `dry-run-test` branch name.
+
+
+## Task 3. Make a code change
+
+At this point, you have experienced the "Moby inception" technique. That is,
+you have:
+
+* forked and cloned the Moby Engine code repository
+* created a feature branch for development
+* created and started an Engine development container from your branch
+* built a binary inside of your development container
+* launched a `docker` daemon using your newly compiled binary
+* called the `docker` client to run a `hello-world` container inside
+ your development container
+
+Running the `make BIND_DIR=. shell` command mounted your local Docker repository source into
+your Docker container.
+
+ > **Note**: Inspecting the `Dockerfile` shows a `COPY . /go/src/github.com/docker/docker` instruction, suggesting that dynamic code changes will _not_ be reflected in the container. However, inspecting the `Makefile` shows that the current working directory _will_ be mounted via a `-v` volume mount (see the sketch below).
+
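+As a rough, illustrative sketch (the exact flags and image tag come from the
+`Makefile` and will differ), the bind mount that the note refers to amounts to
+something like this:
+
+```bash
+# Conceptually, `make BIND_DIR=. shell` runs the dev image with your local
+# checkout bind-mounted over the source tree inside the container:
+docker run --rm -it --privileged \
+    -v "$(pwd):/go/src/github.com/moby/moby" \
+    "docker-dev:dry-run-test" bash
+```
+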
+When you start to develop code though, you'll
+want to iterate code changes and builds inside the container. If you have
+followed this guide exactly, you have a bash shell running a development
+container.
+
+Try a simple code change and see it reflected in your container. For this
+example, you'll edit the help for the `attach` subcommand.
+
+1. If you don't have one, open a terminal in your local host.
+
+2. Make sure you are in your `moby-fork` repository.
+
+ ```none
+ $ pwd
+ /Users/mary/go/src/github.com/moxiegirl/moby-fork
+ ```
+
+   Your path will differ because, at the very least, your username is different.
+
+3. Open the `cmd/dockerd/docker.go` file.
+
+4. Edit the command's help message.
+
+ For example, you can edit this line:
+
+ ```go
+ Short: "A self-sufficient runtime for containers.",
+ ```
+
+ And change it to this:
+
+ ```go
+ Short: "A self-sufficient and really fun runtime for containers.",
+ ```
+
+5. Save and close the `cmd/dockerd/docker.go` file.
+
+6. Go to your running docker development container shell.
+
+7. Rebuild the binary by using the command `hack/make.sh binary` in the docker development container shell.
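+
+   That is, in the development container shell, run:
+
+   ```
+   hack/make.sh binary
+   ```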
+
+8. Stop Docker if it is running.
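+
+   For example, if the daemon is still running as the background job you
+   started earlier, one way to stop it (a sketch, assuming it is the most
+   recent background job in this shell) is to bring it to the foreground and
+   interrupt it:
+
+   ```
+   # bring the backgrounded dockerd job to the foreground, then press ctrl-c
+   fg
+   ```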
+
+9. Copy the binaries to **/usr/bin** by entering the following command in the docker development container shell.
+
+ ```
+ hack/make.sh binary install-binary
+ ```
+
+10. To view your change, run the `dockerd --help` command in the docker development container shell.
+
+ ```bash
+ root@b0cb4f22715d:/go/src/github.com/moby/moby# dockerd --help
+
+ Usage: dockerd COMMAND
+
+ A self-sufficient and really fun runtime for containers.
+
+ Options:
+ ...
+
+ ```
+
+You've just done the basic workflow for changing the Engine code base. You made
+your code changes in your feature branch. Then, you updated the binary in your
+development container and tried your change out. If you were making a bigger
+change, you might repeat or iterate through this flow several times.
+
+## Where to go next
+
+Congratulations, you have successfully achieved Moby inception. You've gotten a
+small taste of the development process. You've set up your development
+environment and verified almost all the essential processes you need to
+contribute. Of course, before you start contributing, [you'll need to learn one
+more piece of the development process, the test framework](test.md).
diff --git a/components/engine/docs/contributing/set-up-git.md b/components/engine/docs/contributing/set-up-git.md
new file mode 100644
index 0000000000..f320c2716c
--- /dev/null
+++ b/components/engine/docs/contributing/set-up-git.md
@@ -0,0 +1,280 @@
+### Configure Git for contributing
+
+Work through this page to configure Git and a repository you'll use throughout
+the Contributor Guide. The work you do further in the guide depends on the work
+you do here.
+
+## Task 1. Fork and clone the Moby code
+
+Before contributing, you first fork the Moby code repository. A fork copies
+a repository at a particular point in time. GitHub tracks for you where a fork
+originates.
+
+As you make contributions, you change your fork's code. When you are ready,
+you make a pull request back to the original Moby repository. If you aren't
+familiar with this workflow, don't worry; this guide walks you through all the
+steps.
+
+To fork and clone Moby:
+
+1. Open a browser and log into GitHub with your account.
+
+2. Go to the moby/moby repository.
+
+3. Click the "Fork" button in the upper right corner of the GitHub interface.
+
+ 
+
+ GitHub forks the repository to your GitHub account. The original
+ `moby/moby` repository becomes a new fork `YOUR_ACCOUNT/moby` under
+ your account.
+
+4. Copy your fork's clone URL from GitHub.
+
+ GitHub allows you to use HTTPS or SSH protocols for clones. You can use the
+ `git` command line or clients like Subversion to clone a repository.
+
+ 
+
+   This guide assumes you are using the HTTPS protocol and the `git` command
+   line. If you are comfortable with SSH and some other tool, feel free to use
+   that instead. You'll need to convert what you see in the guide to what is
+   appropriate for your tool.
+
+5. Open a terminal window on your local host and change to your home directory.
+
+ ```bash
+ $ cd ~
+ ```
+
+   On Windows, you'll work in your Docker Quickstart Terminal window instead of
+   PowerShell or a `cmd` window.
+
+6. Create a `repos` directory.
+
+ ```bash
+ $ mkdir repos
+ ```
+
+7. Change into your `repos` directory.
+
+ ```bash
+ $ cd repos
+ ```
+
+8. Clone the fork to your local host into a repository called `moby-fork`.
+
+ ```bash
+ $ git clone https://github.com/moxiegirl/moby.git moby-fork
+ ```
+
+ Naming your local repo `moby-fork` should help make these instructions
+ easier to follow; experienced coders don't typically change the name.
+
+9. Change directory into your new `moby-fork` directory.
+
+ ```bash
+ $ cd moby-fork
+ ```
+
+ Take a moment to familiarize yourself with the repository's contents. List
+ the contents.
+
+## Task 2. Set your signature and an upstream remote
+
+When you contribute to the Moby project, you must certify that you agree with the
+Developer Certificate of Origin.
+You indicate your agreement by signing off your `git` commits like this:
+
+```
+Signed-off-by: Pat Smith <pat.smith@email.com>
+```
+
+To create a signature, you configure your username and email address in Git.
+You can set these globally or locally on just your `moby-fork` repository.
+You must sign with your real name. You can sign your git commit automatically
+with `git commit -s`. Moby does not accept anonymous contributions or contributions
+through pseudonyms.
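+
+For example, to set these values globally instead of locally (a minimal
+sketch; the steps below use `--local` to scope them to just this repository):
+
+```bash
+$ git config --global user.name "FirstName LastName"
+$ git config --global user.email "emailname@mycompany.com"
+```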
+
+As you change code in your fork, you'll want to keep it in sync with the changes
+others make in the `moby/moby` repository. To make syncing easier, you'll
+also add a _remote_ called `upstream` that points to `moby/moby`. A remote
+is just another project version hosted on the internet or network.
+
+To configure your username, email, and add a remote:
+
+1. Change to the root of your `moby-fork` repository.
+
+ ```bash
+ $ cd moby-fork
+ ```
+
+2. Set your `user.name` for the repository.
+
+ ```bash
+ $ git config --local user.name "FirstName LastName"
+ ```
+
+3. Set your `user.email` for the repository.
+
+ ```bash
+ $ git config --local user.email "emailname@mycompany.com"
+ ```
+
+4. Set your local repo to track changes upstream, on the `moby/moby` repository.
+
+ ```bash
+ $ git remote add upstream https://github.com/moby/moby.git
+ ```
+
+5. Check the result in your `git` configuration.
+
+ ```bash
+ $ git config --local -l
+ core.repositoryformatversion=0
+ core.filemode=true
+ core.bare=false
+ core.logallrefupdates=true
+ remote.origin.url=https://github.com/moxiegirl/moby.git
+ remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
+ branch.master.remote=origin
+ branch.master.merge=refs/heads/master
+ user.name=Mary Anthony
+ user.email=mary@docker.com
+ remote.upstream.url=https://github.com/moby/moby.git
+ remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/*
+ ```
+
+ To list just the remotes use:
+
+ ```bash
+ $ git remote -v
+ origin https://github.com/moxiegirl/moby.git (fetch)
+ origin https://github.com/moxiegirl/moby.git (push)
+ upstream https://github.com/moby/moby.git (fetch)
+ upstream https://github.com/moby/moby.git (push)
+ ```
+
+## Task 3. Create and push a branch
+
+As you change code in your fork, make your changes on a repository branch.
+The branch name should reflect what you are working on. In this section, you
+create a branch, make a change, and push it up to your fork.
+
+This branch is just for testing your config for this guide. The changes are part
+of a dry run, so the branch name will be `dry-run-test`. To create and push
+the branch to your fork on GitHub:
+
+1. Open a terminal and go to the root of your `moby-fork`.
+
+ ```bash
+ $ cd moby-fork
+ ```
+
+2. Create a `dry-run-test` branch.
+
+ ```bash
+ $ git checkout -b dry-run-test
+ ```
+
+ This command creates the branch and switches the repository to it.
+
+3. Verify you are in your new branch.
+
+ ```bash
+ $ git branch
+ * dry-run-test
+ master
+ ```
+
+   The current branch is marked with an * (asterisk). These results show you
+   are on the right branch.
+
+4. Create a `TEST.md` file in the repository's root.
+
+ ```bash
+ $ touch TEST.md
+ ```
+
+5. Edit the file and add your email and location.
+
+ 
+
+ You can use any text editor you are comfortable with.
+
+6. Save and close the file.
+
+7. Check the status of your branch.
+
+ ```bash
+ $ git status
+ On branch dry-run-test
+ Untracked files:
+     (use "git add <file>..." to include in what will be committed)
+
+ TEST.md
+
+ nothing added to commit but untracked files present (use "git add" to track)
+ ```
+
+ You've only changed the one file. It is untracked so far by git.
+
+8. Add your file.
+
+ ```bash
+ $ git add TEST.md
+ ```
+
+   That is the only _staged_ file. Staged is a fancy word for changes that Git
+   is tracking for your next commit.
+
+9. Sign and commit your change.
+
+ ```bash
+ $ git commit -s -m "Making a dry run test."
+ [dry-run-test 6e728fb] Making a dry run test
+ 1 file changed, 1 insertion(+)
+ create mode 100644 TEST.md
+ ```
+
+ Commit messages should have a short summary sentence of no more than 50
+ characters. Optionally, you can also include a more detailed explanation
+ after the summary. Separate the summary from any explanation with an empty
+ line.
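+
+   For example, a commit message in that format might look like the following
+   (the details are made up purely for illustration):
+
+   ```
+   Fix typo in attach subcommand help
+
+   This optional body explains why the change was needed. The summary
+   line above stays under 50 characters.
+
+   Signed-off-by: Pat Smith <pat.smith@email.com>
+   ```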
+
+10. Push your changes to GitHub.
+
+ ```bash
+ $ git push --set-upstream origin dry-run-test
+ Username for 'https://github.com': moxiegirl
+ Password for 'https://moxiegirl@github.com':
+ ```
+
+ Git prompts you for your GitHub username and password. Then, the command
+ returns a result.
+
+ ```bash
+ Counting objects: 13, done.
+ Compressing objects: 100% (2/2), done.
+ Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
+ Total 3 (delta 1), reused 0 (delta 0)
+ To https://github.com/moxiegirl/moby.git
+ * [new branch] dry-run-test -> dry-run-test
+ Branch dry-run-test set up to track remote branch dry-run-test from origin.
+ ```
+
+11. Open your browser to GitHub.
+
+12. Navigate to your Moby fork.
+
+13. Make sure the `dry-run-test` branch exists, that it has your commit, and that
+    the commit is signed.
+
+ 
+
+## Where to go next
+
+Congratulations, you have finished configuring both your local host environment
+and Git for contributing. In the next section you'll [learn how to set up and
+work in a Moby development container](set-up-dev-env.md).
diff --git a/components/engine/docs/contributing/software-req-win.md b/components/engine/docs/contributing/software-req-win.md
new file mode 100644
index 0000000000..3be4327933
--- /dev/null
+++ b/components/engine/docs/contributing/software-req-win.md
@@ -0,0 +1,177 @@
+### Build and test Moby on Windows
+
+This page explains how to get the software you need to build, test, and run the
+Moby source code on Windows, and how to set up the required software and services:
+- Windows containers
+- GitHub account
+- Git
+
+## Prerequisites
+
+### 1. Windows Server 2016 or Windows 10 with all Windows updates applied
+
+The major build number must be at least 14393. You can confirm this, for example,
+by running the following from an elevated PowerShell prompt (`gin` is the built-in
+PowerShell alias for `Get-ComputerInfo`). This sample output is from a fully
+up-to-date machine as of mid-November 2016:
+
+
+ PS C:\> $(gin).WindowsBuildLabEx
+ 14393.447.amd64fre.rs1_release_inmarket.161102-0100
+
+### 2. Git for Windows (or another git client) must be installed
+
+https://git-scm.com/download/win.
+
+### 3. The machine must be configured to run containers
+
+For example, by following the quick start guidance at
+https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start or https://github.com/docker/labs/blob/master/windows/windows-containers/Setup.md
+
+### 4. If building in a Hyper-V VM
+
+For Windows Server 2016, which uses Windows Server containers as the default
+option, it is recommended that you assign at least 1GB of memory to the VM.
+For Windows 10, where Hyper-V containers are employed, you should assign at
+least 4GB of memory.
+Note also that to run Hyper-V containers in a VM, you must configure the VM
+for nested virtualization.
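+
+For example, with the VM powered off, you can expose virtualization extensions
+to it from an elevated PowerShell prompt on the Hyper-V host. This is only a
+sketch; the VM name `MobyBuildVM` is a placeholder for your own VM's name:
+
+    Set-VMProcessor -VMName MobyBuildVM -ExposeVirtualizationExtensions $true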
+
+## Usage
+
+The following steps should be run from an elevated Windows PowerShell prompt.
+
+>**Note**: In a default installation of containers on Windows following the quick-start guidance at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start,
+>the `docker.exe` client must run elevated to be able to connect to the daemon.
+
+### 1. Windows containers
+
+To test and run the Windows Moby engine, you need a system that supports Windows Containers:
+
+- Windows 10 Anniversary Edition
+- Windows Server 2016 running in a VM, on bare metal or in the cloud
+
+Check out the [getting started documentation](https://github.com/docker/labs/blob/master/windows/windows-containers/Setup.md) for details.
+
+### 2. GitHub account
+
+To contribute to the Moby project, you need a GitHub account.
+A free account is fine. All the Moby project repositories are public and visible to everyone.
+
+This guide assumes that you have basic familiarity with Git and GitHub terminology
+and usage.
+Refer to [GitHub For Beginners: Don’t Get Scared, Get Started](http://readwrite.com/2013/09/30/understanding-github-a-journey-for-beginners-part-1/)
+to get up to speed on GitHub.
+
+### 3. Git
+
+In PowerShell, run:
+
+ Invoke-Webrequest "https://github.com/git-for-windows/git/releases/download/v2.7.2.windows.1/Git-2.7.2-64-bit.exe" -OutFile git.exe -UseBasicParsing
+ Start-Process git.exe -ArgumentList '/VERYSILENT /SUPPRESSMSGBOXES /CLOSEAPPLICATIONS /DIR=c:\git\' -Wait
+ setx /M PATH "$env:Path;c:\git\cmd"
+
+You are now ready to clone and build the Moby source code.
+
+### 4. Clone Moby
+
+In a new PowerShell prompt (to pick up the path change), run:
+
+ git clone https://github.com/moby/moby
+ cd moby
+
+This clones the main Moby repository. Check out [Moby Project](https://mobyproject.org)
+to learn about the other software that powers the Moby platform.
+
+### 5. Build and run
+
+Create a builder-container with the Moby source code. You can change the source
+code on your system and rebuild any time:
+
+ docker build -t nativebuildimage -f .\Dockerfile.windows .
+ docker build -t nativebuildimage -f Dockerfile.windows -m 2GB . # (if using Hyper-V containers)
+
+To build Moby, run:
+
+ $DOCKER_GITCOMMIT=(git rev-parse --short HEAD)
+ docker run --name binaries -e DOCKER_GITCOMMIT=$DOCKER_GITCOMMIT nativebuildimage hack\make.ps1 -Binary
+ docker run --name binaries -e DOCKER_GITCOMMIT=$DOCKER_GITCOMMIT -m 2GB nativebuildimage hack\make.ps1 -Binary # (if using Hyper-V containers)
+
+Copy the resulting Windows Moby Engine binaries out of the `binaries` container
+to `docker.exe` and `dockerd.exe` in the current directory:
+
+ docker cp binaries:C:\go\src\github.com\moby\moby\bundles\docker.exe docker.exe
+ docker cp binaries:C:\go\src\github.com\moby\moby\bundles\dockerd.exe dockerd.exe
+
+To test it, stop the system Docker daemon and start the one you just built:
+
+ Stop-Service Docker
+ .\dockerd.exe -D
+
+The other make targets work too. For example, to run the unit tests, try:
+`docker run --rm docker-builder sh -c 'cd /c/go/src/github.com/moby/moby; hack/make.sh test-unit'`.
+
+### 6. Remove the interim binaries container
+
+_(Optional)_
+
+ docker rm binaries
+
+### 7. Remove the image
+
+_(Optional)_
+
+It may be useful to keep this image around if you need to build multiple times,
+so that you can take advantage of the builder cache and reuse an image that
+already has all the components required to build the binaries installed.
+
+ docker rmi nativebuildimage
+
+## Validation
+
+The validation tests can only run directly on the host.
+This is because they calculate information from the git repo, but the `.git` directory
+is not passed into the image, as it is excluded via `.dockerignore`.
+Run the following from a Windows PowerShell prompt (elevation is not required,
+but note that Go must be installed on the host to run these tests):
+
+ hack\make.ps1 -DCO -PkgImports -GoFormat
+
+## Unit tests
+
+To run unit tests, ensure you have created the `nativebuildimage` above.
+Then run one of the following from an (elevated) Windows PowerShell prompt:
+
+ docker run --rm nativebuildimage hack\make.ps1 -TestUnit
+ docker run --rm -m 2GB nativebuildimage hack\make.ps1 -TestUnit # (if using Hyper-V containers)
+
+To run the unit tests and build the binary, ensure you have created the `nativebuildimage` above.
+Then run one of the following from an (elevated) Windows PowerShell prompt:
+
+ docker run nativebuildimage hack\make.ps1 -All
+ docker run -m 2GB nativebuildimage hack\make.ps1 -All # (if using Hyper-V containers)
+
+## Windows limitations
+
+Don't attempt to use a bind mount to pass a local directory as the bundles
+target directory.
+It does not work (golang attempts to follow a mapped folder incorrectly).
+Instead, use `docker cp` as shown in the example above.
+
+`go.zip` is not removed from the image as it is used by the Windows CI servers
+to ensure the host and image are running consistent versions of go.
+
+Nanoserver support is a work in progress. Although the image will build if the
+`FROM` statement is updated, it will not work when running autogen through `hack\make.ps1`.
+It is suspected that the required GCC utilities (e.g. gcc, windres, windmc) silently
+quit due to the use of console hooks, which are not available.
+
+The Docker integration tests do not currently run in a container on Windows,
+predominantly because Windows does not support privileged mode, so anything using a volume would fail.
+They (along with the rest of the docker CI suite) can be run using
+https://github.com/jhowardmsft/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
+
+## Where to go next
+
+In the next section, you'll [learn how to set up and configure Git for
+contributing to Moby](set-up-git.md).
diff --git a/components/engine/docs/contributing/software-required.md b/components/engine/docs/contributing/software-required.md
new file mode 100644
index 0000000000..b14c6f9050
--- /dev/null
+++ b/components/engine/docs/contributing/software-required.md
@@ -0,0 +1,94 @@
+### Get the required software for Linux or macOS
+
+This page explains how to get the software you need to use a Linux or macOS
+machine for Moby development. Before you begin contributing you must have:
+
+* a GitHub account
+* `git`
+* `make`
+* `docker`
+
+You'll notice that `go`, the language that Moby is written in, is not listed.
+That's because you don't need it installed; Moby's development environment
+provides it for you. You'll learn more about the development environment later.
+
+## Task 1. Get a GitHub account
+
+To contribute to the Moby project, you will need a GitHub account. A free account is
+fine. All the Moby project repositories are public and visible to everyone.
+
+You should also have some experience using both the GitHub application and `git`
+on the command line.
+
+## Task 2. Install git
+
+Install `git` on your local system. You can check if `git` is already on your
+system and properly installed with the following command:
+
+```bash
+$ git --version
+```
+
+This documentation is written using `git` version 2.2.2. Your version may be
+different depending on your OS.
+
+## Task 3. Install make
+
+Install `make`. You can check if `make` is on your system with the following
+command:
+
+```bash
+$ make -v
+```
+
+This documentation is written using GNU Make 3.81. Your version may be different
+depending on your OS.
+
+## Task 4. Install or upgrade Docker
+
+If you haven't already, install the Docker software using the
+instructions for your operating system.
+If you have an existing installation, check your version and make sure you have
+the latest Docker.
+
+To check if `docker` is already installed on Linux:
+
+```bash
+$ docker --version
+Docker version 17.10.0-ce, build f4ffd25
+```
+
+On macOS or Windows, you should have installed Docker for Mac or
+Docker for Windows.
+
+```bash
+$ docker --version
+Docker version 17.10.0-ce, build f4ffd25
+```
+
+## Tip for Linux users
+
+This guide assumes you have added your user to the `docker` group on your system.
+To check, list the group's contents:
+
+```
+$ getent group docker
+docker:x:999:ubuntu
+```
+
+If the command returns no matches, you have two choices. You can preface this
+guide's `docker` commands with `sudo` as you work. Alternatively, you can add
+your user to the `docker` group as follows:
+
+```bash
+$ sudo usermod -aG docker ubuntu
+```
+
+You must log out and log back in for this modification to take effect.
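+
+If you prefer not to log out right away, one common workaround (not specific to
+this guide) is to start a subshell that picks up the new group membership:
+
+```bash
+$ newgrp docker
+```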
+
+
+## Where to go next
+
+In the next section, you'll [learn how to set up and configure Git for
+contributing to Moby](set-up-git.md).
diff --git a/components/engine/docs/contributing/test.md b/components/engine/docs/contributing/test.md
new file mode 100644
index 0000000000..9a63a12b86
--- /dev/null
+++ b/components/engine/docs/contributing/test.md
@@ -0,0 +1,234 @@
+### Run tests
+
+Contributing includes testing your changes. If you change the Moby code, you
+may need to add a new test or modify an existing test. Your contribution could
+even be adding tests to Moby. For this reason, you need to know a little
+about Moby's test infrastructure.
+
+This section describes tests you can run in the `dry-run-test` branch of your Moby
+fork. If you have followed along in this guide, you already have this branch.
+If you don't have this branch, you can create it (see the example below) or
+simply use another of your branches.
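+
+For example, to create the branch from your fork's `master` branch (assuming
+you are at the root of your local clone):
+
+```bash
+$ git checkout master
+$ git checkout -b dry-run-test
+```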
+
+## Understand how to test Moby
+
+Moby tests use the Go language's test framework. In this framework, files
+whose names end in `_test.go` contain test code; you'll find test files like
+this throughout the Moby repo. Use these files for inspiration when writing
+your own tests. For information on Go's test framework, see the Go
+[testing](https://golang.org/pkg/testing/) package documentation and the `go test` help.
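+
+For example, you can list a few of these test files from the root of your
+local clone:
+
+```bash
+$ find . -name '*_test.go' | head -n 5
+```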
+
+You are responsible for _unit testing_ your contribution when you add new or
+change existing Moby code. A unit test is a piece of code that invokes a
+single, small piece of code (_unit of work_) to verify the unit works as
+expected.
+
+Depending on your contribution, you may need to add _integration tests_. These
+are tests that combine two or more work units into one component. These work
+units each have unit tests and then, together, integration tests that test the
+interface between the components. The `integration` and `integration-cli`
+directories in the Moby repository contain integration test code.
+
+Testing is its own specialty. If you aren't familiar with testing techniques,
+there is a lot of information available to you on the Web. For now, you should
+understand that the Moby maintainers may ask you to write a new test or
+change an existing one.
+
+## Run tests on your local host
+
+Before submitting a pull request with a code change, you should run the entire
+Moby Engine test suite. The `Makefile` contains a target for the entire test
+suite, named `test`. Also, it contains several targets for
+testing:
+
+| Target | What this target does |
+| ---------------------- | ---------------------------------------------- |
+| `test` | Run the unit, integration, and docker-py tests |
+| `test-unit` | Run just the unit tests |
+| `test-integration-cli` | Run the integration tests for the CLI |
+| `test-docker-py` | Run the tests for the Docker API client |
+
+Running the entire test suite on your current repository can take over half an
+hour. To run the test suite, do the following:
+
+1. Open a terminal on your local host.
+
+2. Change to the root of your `moby-fork` repository.
+
+ ```bash
+ $ cd moby-fork
+ ```
+
+3. Make sure you are in your development branch.
+
+ ```bash
+ $ git checkout dry-run-test
+ ```
+
+4. Run the `make test` command.
+
+ ```bash
+ $ make test
+ ```
+
+   This command does several things. It creates a container temporarily for
+   testing. Inside that container, `make`:
+
+ * creates a new binary
+ * cross-compiles all the binaries for the various operating systems
+ * runs all the tests in the system
+
+   It can take approximately one hour to run all the tests; the time depends
+   on your host's performance. The default timeout is 60 minutes, which is
+   defined in `hack/make.sh` (`${TIMEOUT:=60m}`); you can adjust the timeout
+   value to suit your host. When the tests complete successfully, the output
+   concludes with something like this:
+
+ ```none
+ Ran 68 tests in 79.135s
+ ```
+
+## Run targets inside a development container
+
+If you are working inside a development container, you use the
+`hack/make.sh` script to run tests. The `hack/make.sh` script doesn't
+have a single target that runs all the tests. Instead, you provide a single
+command line with multiple targets that does the same thing.
+
+Try this now.
+
+1. Open a terminal and change to the `moby-fork` root.
+
+2. Start a Moby development image.
+
+ If you are following along with this guide, you should have a
+ `dry-run-test` image.
+
+ ```bash
+ $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/moby/moby dry-run-test /bin/bash
+ ```
+
+3. Run the tests using the `hack/make.sh` script.
+
+ ```bash
+ root@5f8630b873fe:/go/src/github.com/moby/moby# hack/make.sh dynbinary binary cross test-unit test-integration-cli test-docker-py
+ ```
+
+ The tests run just as they did within your local host.
+
+   Of course, you can also run a subset of these targets. For example, to run
+   just the unit tests:
+
+ ```bash
+ root@5f8630b873fe:/go/src/github.com/moby/moby# hack/make.sh dynbinary binary cross test-unit
+ ```
+
+ Most test targets require that you build these precursor targets first:
+ `dynbinary binary cross`
+
+
+## Run unit tests
+
+We use the Go standard [testing](https://golang.org/pkg/testing/)
+package or [gocheck](https://labix.org/gocheck) for our unit tests.
+
+You can use the `TESTDIRS` environment variable to run unit tests for
+a single package.
+
+```bash
+$ TESTDIRS='opts' make test-unit
+```
+
+You can also use the `TESTFLAGS` environment variable to run a single test. The
+flag's value is passed as arguments to the `go test` command. For example, from
+your local host you can run the `TestValidateIPAddress` test with this command:
+
+```bash
+$ TESTFLAGS='-test.run ^TestValidateIPAddress$' make test-unit
+```
+
+For unit tests, it's better to use `TESTFLAGS` in combination with
+`TESTDIRS` to make it quicker to run a specific test.
+
+```bash
+$ TESTDIRS='opts' TESTFLAGS='-test.run ^TestValidateIPAddress$' make test-unit
+```
+
+## Run integration tests
+
+We use [gocheck](https://labix.org/gocheck) for our integration-cli tests.
+You can use the `TESTFLAGS` environment variable to run a single test. The
+flag's value is passed as arguments to the `go test` command. For example, from
+your local host you can run the `TestBuild` test with this command:
+
+```bash
+$ TESTFLAGS='-check.f DockerSuite.TestBuild*' make test-integration-cli
+```
+
+To run the same test inside your Docker development container, you do this:
+
+```bash
+root@5f8630b873fe:/go/src/github.com/moby/moby# TESTFLAGS='-check.f TestBuild*' hack/make.sh binary test-integration-cli
+```
+
+## Test the Windows binary against a Linux daemon
+
+This explains how to test the Windows binary on a Windows machine set up as a
+development environment. The tests will be run against a daemon
+running on a remote Linux machine. You'll use **Git Bash**, which came with the
+Git for Windows installation. **Git Bash**, just as it sounds, allows you to
+run a Bash terminal on Windows.
+
+1. If you don't have one open already, start a Git Bash terminal.
+
+ 
+
+2. Change to the `moby` source directory.
+
+ ```bash
+ $ cd /c/gopath/src/github.com/moby/moby
+ ```
+
+3. Set `DOCKER_REMOTE_DAEMON` as follows:
+
+ ```bash
+ $ export DOCKER_REMOTE_DAEMON=1
+ ```
+
+4. Set `DOCKER_TEST_HOST` to the `tcp://IP_ADDRESS:2376` value; substitute your
+   Linux machine's actual IP address. For example:
+
+ ```bash
+ $ export DOCKER_TEST_HOST=tcp://213.124.23.200:2376
+ ```
+
+5. Make the binary and run the tests:
+
+ ```bash
+ $ hack/make.sh binary test-integration-cli
+ ```
+
+   Some tests are skipped on Windows for various reasons. You can see which
+   tests were skipped by re-running the command and passing in the
+   `TESTFLAGS='-test.v'` value. For example:
+
+ ```bash
+ $ TESTFLAGS='-test.v' hack/make.sh binary test-integration-cli
+ ```
+
+   Should you wish to run a single test such as one with the name
+   'TestExample', you can pass in `TESTFLAGS='-check.f TestExample'`. For
+   example:
+
+ ```bash
+ $ TESTFLAGS='-check.f TestExample' hack/make.sh binary test-integration-cli
+ ```
+
+You can now choose to make changes to the Moby source or the tests. If you
+make any changes, just run these commands again.
+
+## Where to go next
+
+Congratulations, you have successfully completed the basics you need to
+understand the Moby test framework.
diff --git a/components/engine/docs/contributing/who-written-for.md b/components/engine/docs/contributing/who-written-for.md
new file mode 100644
index 0000000000..1431f42c50
--- /dev/null
+++ b/components/engine/docs/contributing/who-written-for.md
@@ -0,0 +1,49 @@
+### README first
+
+This section of the documentation contains a guide for Moby project users who want to
+contribute code or documentation to the Moby Engine project. As a community, we
+share rules of behavior and interaction. Make sure you are familiar with the community guidelines before continuing.
+
+## Where and what you can contribute
+
+The Moby project consists of not just one but several repositories on GitHub.
+So, in addition to the `moby/moby` repository, there is the
+`containerd/containerd` repo, the `moby/buildkit` repo, and several more.
+Contribute to any of these and you contribute to the Moby project.
+
+Not all Moby repositories use the Go language. Also, each repository has its
+own focus area. So, if you are an experienced contributor, think about
+contributing to a Moby project repository that has a language or a focus area you are
+familiar with.
+
+If you are new to the open source community, to Moby, or to formal
+programming, you should start out contributing to the `moby/moby`
+repository. Why? Because this guide is written for that repository specifically.
+
+Finally, code or documentation isn't the only way to contribute. You can report
+an issue, add to discussions in our community channel, write a blog post, or
+take a usability test. You can even propose your own type of contribution.
+We don't have a lot written about this yet, but feel free to open an issue
+to discuss other contributions.
+
+## How to use this guide
+
+This is written for the distracted, the overworked, the sloppy reader with fair
+`git` skills and a failing memory for the GitHub GUI. The guide attempts to
+explain how to use the Moby Engine development environment as precisely,
+predictably, and procedurally as possible.
+
+If you are new to Engine development, start by setting up your environment,
+then try a simple code change. After that, you should find something to work on
+or propose a totally new change.
+
+If you are a programming prodigy, you still may find this documentation useful.
+Please feel free to skim past information you find obvious or boring.
+
+## How to get started
+
+Start by getting the software you require. If you are on Mac or Linux, go to
+[get the required software for Linux or macOS](software-required.md). If you are
+on Windows, see [get the required software for Windows](software-req-win.md).
diff --git a/components/engine/hack/Jenkins/W2L/postbuild.sh b/components/engine/hack/Jenkins/W2L/postbuild.sh
deleted file mode 100644
index 662e2dcc37..0000000000
--- a/components/engine/hack/Jenkins/W2L/postbuild.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-set +x
-set +e
-
-echo ""
-echo ""
-echo "---"
-echo "Now starting POST-BUILD steps"
-echo "---"
-echo ""
-
-echo INFO: Pointing to $DOCKER_HOST
-
-if [ ! $(docker ps -aq | wc -l) -eq 0 ]; then
- echo INFO: Removing containers...
- ! docker rm -vf $(docker ps -aq)
-fi
-
-# Remove all images which don't have docker or debian in the name
-if [ ! $(docker images | sed -n '1!p' | grep -v 'docker' | grep -v 'debian' | awk '{ print $3 }' | wc -l) -eq 0 ]; then
- echo INFO: Removing images...
- ! docker rmi -f $(docker images | sed -n '1!p' | grep -v 'docker' | grep -v 'debian' | awk '{ print $3 }')
-fi
-
-# Kill off any instances of git, go and docker, just in case
-! taskkill -F -IM git.exe -T >& /dev/null
-! taskkill -F -IM go.exe -T >& /dev/null
-! taskkill -F -IM docker.exe -T >& /dev/null
-
-# Remove everything
-! cd /c/jenkins/gopath/src/github.com/docker/docker
-! rm -rfd * >& /dev/null
-! rm -rfd .* >& /dev/null
-
-echo INFO: Cleanup complete
-exit 0
\ No newline at end of file
diff --git a/components/engine/hack/Jenkins/W2L/setup.sh b/components/engine/hack/Jenkins/W2L/setup.sh
deleted file mode 100644
index a3d86b857a..0000000000
--- a/components/engine/hack/Jenkins/W2L/setup.sh
+++ /dev/null
@@ -1,309 +0,0 @@
-# Jenkins CI script for Windows to Linux CI.
-# Heavily modified by John Howard (@jhowardmsft) December 2015 to try to make it more reliable.
-set +xe
-SCRIPT_VER="Wed Apr 20 18:30:19 UTC 2016"
-
-# TODO to make (even) more resilient:
-# - Wait for daemon to be running before executing docker commands
-# - Check if jq is installed
-# - Make sure bash is v4.3 or later. Can't do until all Azure nodes on the latest version
-# - Make sure we are not running as local system. Can't do until all Azure nodes are updated.
-# - Error if docker versions are not equal. Can't do until all Azure nodes are updated
-# - Error if go versions are not equal. Can't do until all Azure nodes are updated.
-# - Error if running 32-bit posix tools. Probably can take from bash --version and check contains "x86_64"
-# - Warn if the CI directory cannot be deleted afterwards. Otherwise turdlets are left behind
-# - Use %systemdrive% ($SYSTEMDRIVE) rather than hard code to c: for TEMP
-# - Consider cross building the Windows binary and copy across. That's a bit of a heavy lift. Only reason
-# for doing that is that it mirrors the actual release process for docker.exe which is cross-built.
-# However, should absolutely not be a problem if built natively, so nit-picking.
-# - Tidy up of images and containers. Either here, or in the teardown script.
-
-ec=0
-uniques=1
-echo INFO: Started at `date`. Script version $SCRIPT_VER
-
-
-# !README!
-# There are two daemons running on the remote Linux host:
-# - outer: specified by DOCKER_HOST, this is the daemon that will build and run the inner docker daemon
-# from the sources matching the PR.
-# - inner: runs on the host network, on a port number similar to that of DOCKER_HOST but the last two digits are inverted
-# (2357 if DOCKER_HOST had port 2375; and 2367 if DOCKER_HOST had port 2376).
-# The windows integration tests are run against this inner daemon.
-
-# get the ip, inner and outer ports.
-ip="${DOCKER_HOST#*://}"
-port_outer="${ip#*:}"
-# inner port is like outer port with last two digits inverted.
-port_inner=$(echo "$port_outer" | sed -E 's/(.)(.)$/\2\1/')
-ip="${ip%%:*}"
-
-echo "INFO: IP=$ip PORT_OUTER=$port_outer PORT_INNER=$port_inner"
-
-# If TLS is enabled
-if [ -n "$DOCKER_TLS_VERIFY" ]; then
- protocol=https
- if [ -z "$DOCKER_MACHINE_NAME" ]; then
- ec=1
- echo "ERROR: DOCKER_MACHINE_NAME is undefined"
- fi
- certs=$(echo ~/.docker/machine/machines/$DOCKER_MACHINE_NAME)
- curlopts="--cacert $certs/ca.pem --cert $certs/cert.pem --key $certs/key.pem"
- run_extra_args="-v tlscerts:/etc/docker"
- daemon_extra_args="--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem"
-else
- protocol=http
-fi
-
-# Save for use by make.sh and scripts it invokes
-export MAIN_DOCKER_HOST="tcp://$ip:$port_inner"
-
-# Verify we can get the remote node to respond to _ping
-if [ $ec -eq 0 ]; then
- reply=`curl -s $curlopts $protocol://$ip:$port_outer/_ping`
- if [ "$reply" != "OK" ]; then
- ec=1
- echo "ERROR: Failed to get an 'OK' response from the docker daemon on the Linux node"
- echo " at $ip:$port_outer when called with an http request for '_ping'. This implies that"
- echo " either the daemon has crashed/is not running, or the Linux node is unavailable."
- echo
- echo " A regular ping to the remote Linux node is below. It should reply. If not, the"
- echo " machine cannot be reached at all and may have crashed. If it does reply, it is"
- echo " likely a case of the Linux daemon not running or having crashed, which requires"
- echo " further investigation."
- echo
- echo " Try re-running this CI job, or ask on #docker-dev or #docker-maintainers"
- echo " for someone to perform further diagnostics, or take this node out of rotation."
- echo
- ping $ip
- else
- echo "INFO: The Linux nodes outer daemon replied to a ping. Good!"
- fi
-fi
-
-# Get the version from the remote node. Note this may fail if jq is not installed.
-# That's probably worth checking to make sure, just in case.
-if [ $ec -eq 0 ]; then
- remoteVersion=`curl -s $curlopts $protocol://$ip:$port_outer/version | jq -c '.Version'`
- echo "INFO: Remote daemon is running docker version $remoteVersion"
-fi
-
-# Compare versions. We should really fail if result is no 1. Output at end of script.
-if [ $ec -eq 0 ]; then
- uniques=`docker version | grep Version | /usr/bin/sort -u | wc -l`
-fi
-
-# Make sure we are in repo
-if [ $ec -eq 0 ]; then
- if [ ! -d hack ]; then
- echo "ERROR: Are you sure this is being launched from a the root of docker repository?"
- echo " If this is a Windows CI machine, it should be c:\jenkins\gopath\src\github.com\docker\docker."
- echo " Current directory is `pwd`"
- ec=1
- fi
-fi
-
-# Are we in split binary mode?
-if [ `grep DOCKER_CLIENTONLY Makefile | wc -l` -gt 0 ]; then
- splitBinary=0
- echo "INFO: Running in single binary mode"
-else
- splitBinary=1
- echo "INFO: Running in split binary mode"
-fi
-
-
-# Get the commit has and verify we have something
-if [ $ec -eq 0 ]; then
- export COMMITHASH=$(git rev-parse --short HEAD)
- echo INFO: Commit hash is $COMMITHASH
- if [ -z $COMMITHASH ]; then
- echo "ERROR: Failed to get commit hash. Are you sure this is a docker repository?"
- ec=1
- fi
-fi
-
-# Redirect to a temporary location. Check is here for local runs from Jenkins machines just in case not
-# in the right directory where the repo is cloned. We also redirect TEMP to not use the environment
-# TEMP as when running as a standard user (not local system), it otherwise exposes a bug in posix tar which
-# will cause CI to fail from Windows to Linux. Obviously it's not best practice to ever run as local system...
-if [ $ec -eq 0 ]; then
- export TEMP=/c/CI/CI-$COMMITHASH
- export TMP=$TEMP
- /usr/bin/mkdir -p $TEMP # Make sure Linux mkdir for -p
-fi
-
-# Tidy up time
-if [ $ec -eq 0 ]; then
- echo INFO: Deleting pre-existing containers and images...
-
- # Force remove all containers based on a previously built image with this commit
- ! docker rm -f $(docker ps -aq --filter "ancestor=docker:$COMMITHASH") &>/dev/null
-
- # Force remove any container with this commithash as a name
- ! docker rm -f $(docker ps -aq --filter "name=docker-$COMMITHASH") &>/dev/null
-
- # This SHOULD never happen, but just in case, also blow away any containers
- # that might be around.
- ! if [ ! $(docker ps -aq | wc -l) -eq 0 ]; then
- echo WARN: There were some leftover containers. Cleaning them up.
- ! docker rm -f $(docker ps -aq)
- fi
-
- # Force remove the image if it exists
- ! docker rmi -f "docker-$COMMITHASH" &>/dev/null
-fi
-
-# Provide the docker version for debugging purposes. If these fail, game over.
-# as the Linux box isn't responding for some reason.
-if [ $ec -eq 0 ]; then
- echo INFO: Docker version and info of the outer daemon on the Linux node
- echo
- docker version
- ec=$?
- if [ 0 -ne $ec ]; then
- echo "ERROR: The main linux daemon does not appear to be running. Has the Linux node crashed?"
- fi
- echo
-fi
-
-# Same as above, but docker info
-if [ $ec -eq 0 ]; then
- echo
- docker info
- ec=$?
- if [ 0 -ne $ec ]; then
- echo "ERROR: The main linux daemon does not appear to be running. Has the Linux node crashed?"
- fi
- echo
-fi
-
-# build the daemon image
-if [ $ec -eq 0 ]; then
- echo "INFO: Running docker build on Linux host at $DOCKER_HOST"
- if [ $splitBinary -eq 0 ]; then
- set -x
- docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t "docker:$COMMITHASH" .
- cat < /dev/null | sed -e 's/ /T/')
if [ "$DOCKER_GITCOMMIT" ]; then
GITCOMMIT="$DOCKER_GITCOMMIT"
diff --git a/components/engine/hack/make/.binary b/components/engine/hack/make/.binary
index ff5cbff722..b3c7a795ab 100644
--- a/components/engine/hack/make/.binary
+++ b/components/engine/hack/make/.binary
@@ -57,6 +57,7 @@ go build \
-ldflags "
$LDFLAGS
$LDFLAGS_STATIC_DOCKER
+ $DOCKER_LDFLAGS
" \
$GO_PACKAGE
)
diff --git a/components/engine/hack/make/.build-deb/compat b/components/engine/hack/make/.build-deb/compat
deleted file mode 100644
index ec635144f6..0000000000
--- a/components/engine/hack/make/.build-deb/compat
+++ /dev/null
@@ -1 +0,0 @@
-9
diff --git a/components/engine/hack/make/.build-deb/control b/components/engine/hack/make/.build-deb/control
deleted file mode 100644
index 0f5439947c..0000000000
--- a/components/engine/hack/make/.build-deb/control
+++ /dev/null
@@ -1,29 +0,0 @@
-Source: docker-engine
-Section: admin
-Priority: optional
-Maintainer: Docker
-Standards-Version: 3.9.6
-Homepage: https://dockerproject.org
-Vcs-Browser: https://github.com/docker/docker
-Vcs-Git: git://github.com/docker/docker.git
-
-Package: docker-engine
-Architecture: linux-any
-Depends: iptables, ${misc:Depends}, ${perl:Depends}, ${shlibs:Depends}
-Recommends: aufs-tools,
- ca-certificates,
- cgroupfs-mount | cgroup-lite,
- git,
- xz-utils,
- ${apparmor:Recommends}
-Conflicts: docker (<< 1.5~), docker.io, lxc-docker, lxc-docker-virtual-package, docker-engine-cs
-Description: Docker: the open-source application container engine
- Docker is an open source project to build, ship and run any application as a
- lightweight container
- .
- Docker containers are both hardware-agnostic and platform-agnostic. This means
- they can run anywhere, from your laptop to the largest EC2 compute instance and
- everything in between - and they don't require you to use a particular
- language, framework or packaging system. That makes them great building blocks
- for deploying and scaling web apps, databases, and backend services without
- depending on a particular stack or provider.
diff --git a/components/engine/hack/make/.build-deb/docker-engine.bash-completion b/components/engine/hack/make/.build-deb/docker-engine.bash-completion
deleted file mode 100644
index 6ea1119308..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.bash-completion
+++ /dev/null
@@ -1 +0,0 @@
-contrib/completion/bash/docker
diff --git a/components/engine/hack/make/.build-deb/docker-engine.docker.default b/components/engine/hack/make/.build-deb/docker-engine.docker.default
deleted file mode 120000
index 4278533d65..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.docker.default
+++ /dev/null
@@ -1 +0,0 @@
-../../../contrib/init/sysvinit-debian/docker.default
\ No newline at end of file
diff --git a/components/engine/hack/make/.build-deb/docker-engine.docker.init b/components/engine/hack/make/.build-deb/docker-engine.docker.init
deleted file mode 120000
index 8cb89d30dd..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.docker.init
+++ /dev/null
@@ -1 +0,0 @@
-../../../contrib/init/sysvinit-debian/docker
\ No newline at end of file
diff --git a/components/engine/hack/make/.build-deb/docker-engine.docker.upstart b/components/engine/hack/make/.build-deb/docker-engine.docker.upstart
deleted file mode 120000
index 7e1b64a3e6..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.docker.upstart
+++ /dev/null
@@ -1 +0,0 @@
-../../../contrib/init/upstart/docker.conf
\ No newline at end of file
diff --git a/components/engine/hack/make/.build-deb/docker-engine.install b/components/engine/hack/make/.build-deb/docker-engine.install
deleted file mode 100644
index dc6b25f04f..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.install
+++ /dev/null
@@ -1,12 +0,0 @@
-#contrib/syntax/vim/doc/* /usr/share/vim/vimfiles/doc/
-#contrib/syntax/vim/ftdetect/* /usr/share/vim/vimfiles/ftdetect/
-#contrib/syntax/vim/syntax/* /usr/share/vim/vimfiles/syntax/
-contrib/*-integration usr/share/docker-engine/contrib/
-contrib/check-config.sh usr/share/docker-engine/contrib/
-contrib/completion/fish/docker.fish usr/share/fish/vendor_completions.d/
-contrib/completion/zsh/_docker usr/share/zsh/vendor-completions/
-contrib/init/systemd/docker.service lib/systemd/system/
-contrib/init/systemd/docker.socket lib/systemd/system/
-contrib/mk* usr/share/docker-engine/contrib/
-contrib/nuke-graph-directory.sh usr/share/docker-engine/contrib/
-contrib/syntax/nano/Dockerfile.nanorc usr/share/nano/
diff --git a/components/engine/hack/make/.build-deb/docker-engine.manpages b/components/engine/hack/make/.build-deb/docker-engine.manpages
deleted file mode 100644
index 1aa62186a6..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.manpages
+++ /dev/null
@@ -1 +0,0 @@
-man/man*/*
diff --git a/components/engine/hack/make/.build-deb/docker-engine.postinst b/components/engine/hack/make/.build-deb/docker-engine.postinst
deleted file mode 100644
index eeef6ca801..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.postinst
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/sh
-set -e
-
-case "$1" in
- configure)
- if [ -z "$2" ]; then
- if ! getent group docker > /dev/null; then
- groupadd --system docker
- fi
- fi
- ;;
- abort-*)
- # How'd we get here??
- exit 1
- ;;
- *)
- ;;
-esac
-
-#DEBHELPER#
diff --git a/components/engine/hack/make/.build-deb/docker-engine.udev b/components/engine/hack/make/.build-deb/docker-engine.udev
deleted file mode 120000
index 914a361959..0000000000
--- a/components/engine/hack/make/.build-deb/docker-engine.udev
+++ /dev/null
@@ -1 +0,0 @@
-../../../contrib/udev/80-docker.rules
\ No newline at end of file
diff --git a/components/engine/hack/make/.build-deb/docs b/components/engine/hack/make/.build-deb/docs
deleted file mode 100644
index b43bf86b50..0000000000
--- a/components/engine/hack/make/.build-deb/docs
+++ /dev/null
@@ -1 +0,0 @@
-README.md
diff --git a/components/engine/hack/make/.build-deb/rules b/components/engine/hack/make/.build-deb/rules
deleted file mode 100755
index 19557ed50c..0000000000
--- a/components/engine/hack/make/.build-deb/rules
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/make -f
-
-VERSION = $(shell cat VERSION)
-SYSTEMD_VERSION := $(shell dpkg-query -W -f='$${Version}\n' systemd | cut -d- -f1)
-SYSTEMD_GT_227 := $(shell [ '$(SYSTEMD_VERSION)' ] && [ '$(SYSTEMD_VERSION)' -gt 227 ] && echo true )
-
-override_dh_gencontrol:
- # if we're on Ubuntu, we need to Recommends: apparmor
- echo 'apparmor:Recommends=$(shell dpkg-vendor --is Ubuntu && echo apparmor)' >> debian/docker-engine.substvars
- dh_gencontrol
-
-override_dh_auto_build:
- ./hack/make.sh dynbinary
- # ./man/md2man-all.sh runs outside the build container (if at all), since we don't have go-md2man here
-
-override_dh_auto_test:
- ./bundles/$(VERSION)/dynbinary-daemon/dockerd -v
-
-override_dh_strip:
- # Go has lots of problems with stripping, so just don't
-
-override_dh_auto_install:
- mkdir -p debian/docker-engine/usr/bin
- cp -aT "$$(readlink -f bundles/$(VERSION)/dynbinary-daemon/dockerd)" debian/docker-engine/usr/bin/dockerd
- cp -aT /usr/local/bin/docker-proxy debian/docker-engine/usr/bin/docker-proxy
- cp -aT /usr/local/bin/docker-containerd debian/docker-engine/usr/bin/docker-containerd
- cp -aT /usr/local/bin/docker-containerd-shim debian/docker-engine/usr/bin/docker-containerd-shim
- cp -aT /usr/local/bin/docker-containerd-ctr debian/docker-engine/usr/bin/docker-containerd-ctr
- cp -aT /usr/local/bin/docker-runc debian/docker-engine/usr/bin/docker-runc
- cp -aT /usr/local/bin/docker-init debian/docker-engine/usr/bin/docker-init
- mkdir -p debian/docker-engine/usr/lib/docker
-
-override_dh_installinit:
- # use "docker" as our service name, not "docker-engine"
- dh_installinit --name=docker
-ifeq (true, $(SYSTEMD_GT_227))
- $(warning "Setting TasksMax=infinity")
- sed -i -- 's/#TasksMax=infinity/TasksMax=infinity/' debian/docker-engine/lib/systemd/system/docker.service
-endif
-
-override_dh_installudev:
- # match our existing priority
- dh_installudev --priority=z80
-
-override_dh_install:
- dh_install
- dh_apparmor --profile-name=docker-engine -pdocker-engine
-
-override_dh_shlibdeps:
- dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info
-
-%:
- dh $@ --with=bash-completion $(shell command -v dh_systemd_enable > /dev/null 2>&1 && echo --with=systemd)
diff --git a/components/engine/hack/make/.build-rpm/docker-engine-selinux.spec b/components/engine/hack/make/.build-rpm/docker-engine-selinux.spec
deleted file mode 100644
index 6a4b6c0c3a..0000000000
--- a/components/engine/hack/make/.build-rpm/docker-engine-selinux.spec
+++ /dev/null
@@ -1,99 +0,0 @@
-# Some bits borrowed from the openstack-selinux package
-Name: docker-engine-selinux
-Version: %{_version}
-Release: %{_release}%{?dist}
-Summary: SELinux Policies for the open-source application container engine
-BuildArch: noarch
-Group: Tools/Docker
-
-License: GPLv2
-Source: %{name}.tar.gz
-
-URL: https://dockerproject.org
-Vendor: Docker
-Packager: Docker
-
-%global selinux_policyver 3.13.1-102
-%if 0%{?oraclelinux} >= 7
-%global selinux_policyver 3.13.1-102.0.3.el7_3.15
-%endif # oraclelinux 7
-%global selinuxtype targeted
-%global moduletype services
-%global modulenames docker
-
-Requires(post): selinux-policy-base >= %{selinux_policyver}, selinux-policy-targeted >= %{selinux_policyver}, policycoreutils, policycoreutils-python libselinux-utils
-BuildRequires: selinux-policy selinux-policy-devel
-
-# conflicting packages
-Conflicts: docker-selinux
-
-# Usage: _format var format
-# Expand 'modulenames' into various formats as needed
-# Format must contain '$x' somewhere to do anything useful
-%global _format() export %1=""; for x in %{modulenames}; do %1+=%2; %1+=" "; done;
-
-# Relabel files
-%global relabel_files() \
- /sbin/restorecon -R %{_bindir}/docker %{_localstatedir}/run/docker.sock %{_localstatedir}/run/docker.pid %{_sysconfdir}/docker %{_localstatedir}/log/docker %{_localstatedir}/log/lxc %{_localstatedir}/lock/lxc %{_usr}/lib/systemd/system/docker.service /root/.docker &> /dev/null || : \
-
-%description
-SELinux policy modules for use with Docker
-
-%prep
-%if 0%{?centos} <= 6
-%setup -n %{name}
-%else
-%autosetup -n %{name}
-%endif
-
-%build
-make SHARE="%{_datadir}" TARGETS="%{modulenames}"
-
-%install
-
-# Install SELinux interfaces
-%_format INTERFACES $x.if
-install -d %{buildroot}%{_datadir}/selinux/devel/include/%{moduletype}
-install -p -m 644 $INTERFACES %{buildroot}%{_datadir}/selinux/devel/include/%{moduletype}
-
-# Install policy modules
-%_format MODULES $x.pp.bz2
-install -d %{buildroot}%{_datadir}/selinux/packages
-install -m 0644 $MODULES %{buildroot}%{_datadir}/selinux/packages
-
-%post
-#
-# Install all modules in a single transaction
-#
-if [ $1 -eq 1 ]; then
- %{_sbindir}/setsebool -P -N virt_use_nfs=1 virt_sandbox_use_all_caps=1
-fi
-%_format MODULES %{_datadir}/selinux/packages/$x.pp.bz2
-%{_sbindir}/semodule -n -s %{selinuxtype} -i $MODULES
-if %{_sbindir}/selinuxenabled ; then
- %{_sbindir}/load_policy
- %relabel_files
- if [ $1 -eq 1 ]; then
- restorecon -R %{_sharedstatedir}/docker
- fi
-fi
-
-%postun
-if [ $1 -eq 0 ]; then
- %{_sbindir}/semodule -n -r %{modulenames} &> /dev/null || :
- if %{_sbindir}/selinuxenabled ; then
- %{_sbindir}/load_policy
- %relabel_files
- fi
-fi
-
-%files
-%doc LICENSE
-%defattr(-,root,root,0755)
-%attr(0644,root,root) %{_datadir}/selinux/packages/*.pp.bz2
-%attr(0644,root,root) %{_datadir}/selinux/devel/include/%{moduletype}/*.if
-
-%changelog
-* Tue Dec 1 2015 Jessica Frazelle 1.9.1-1
-- add licence to rpm
-- add selinux-policy and docker-engine-selinux rpm
diff --git a/components/engine/hack/make/.build-rpm/docker-engine.spec b/components/engine/hack/make/.build-rpm/docker-engine.spec
deleted file mode 100644
index 6225bb74f2..0000000000
--- a/components/engine/hack/make/.build-rpm/docker-engine.spec
+++ /dev/null
@@ -1,249 +0,0 @@
-Name: docker-engine
-Version: %{_version}
-Release: %{_release}%{?dist}
-Summary: The open-source application container engine
-Group: Tools/Docker
-
-License: ASL 2.0
-Source: %{name}.tar.gz
-
-URL: https://dockerproject.org
-Vendor: Docker
-Packager: Docker
-
-# is_systemd conditional
-%if 0%{?fedora} >= 21 || 0%{?centos} >= 7 || 0%{?rhel} >= 7 || 0%{?suse_version} >= 1210
-%global is_systemd 1
-%endif
-
-# required packages for build
-# most are already in the container (see contrib/builder/rpm/ARCH/generate.sh)
-# only require systemd on those systems
-%if 0%{?is_systemd}
-%if 0%{?suse_version} >= 1210
-BuildRequires: systemd-rpm-macros
-%{?systemd_requires}
-%else
-%if 0%{?fedora} >= 25
-# Systemd 230 and up no longer have libsystemd-journal (see https://bugzilla.redhat.com/show_bug.cgi?id=1350301)
-BuildRequires: pkgconfig(systemd)
-Requires: systemd-units
-%else
-BuildRequires: pkgconfig(systemd)
-Requires: systemd-units
-BuildRequires: pkgconfig(libsystemd-journal)
-%endif
-%endif
-%else
-Requires(post): chkconfig
-Requires(preun): chkconfig
-# This is for /sbin/service
-Requires(preun): initscripts
-%endif
-
-# required packages on install
-Requires: /bin/sh
-Requires: iptables
-%if !0%{?suse_version}
-Requires: libcgroup
-%else
-Requires: libcgroup1
-%endif
-Requires: tar
-Requires: xz
-%if 0%{?fedora} >= 21 || 0%{?centos} >= 7 || 0%{?rhel} >= 7 || 0%{?oraclelinux} >= 7 || 0%{?amzn} >= 1
-# Resolves: rhbz#1165615
-Requires: device-mapper-libs >= 1.02.90-1
-%endif
-%if 0%{?oraclelinux} >= 6
-# Require Oracle Unbreakable Enterprise Kernel R4 and newer device-mapper
-Requires: kernel-uek >= 4.1
-Requires: device-mapper >= 1.02.90-2
-%endif
-
-# docker-selinux conditional
-%if 0%{?fedora} >= 20 || 0%{?centos} >= 7 || 0%{?rhel} >= 7 || 0%{?oraclelinux} >= 7
-%global with_selinux 1
-%endif
-
-# DWZ problem with multiple golang binary, see bug
-# https://bugzilla.redhat.com/show_bug.cgi?id=995136#c12
-%if 0%{?fedora} >= 20 || 0%{?rhel} >= 7 || 0%{?oraclelinux} >= 7
-%global _dwz_low_mem_die_limit 0
-%endif
-
-# start if with_selinux
-%if 0%{?with_selinux}
-
-%if 0%{?centos} >= 7 || 0%{?rhel} >= 7 || 0%{?fedora} >= 25
-Requires: container-selinux >= 2.9
-%endif# centos 7, rhel 7, fedora 25
-
-%if 0%{?oraclelinux} >= 7
-%global selinux_policyver 3.13.1-102.0.3.el7_3.15
-%endif # oraclelinux 7
-%if 0%{?fedora} == 24
-%global selinux_policyver 3.13.1-191
-%endif # fedora 24 -- container-selinux on fedora24 does not properly set dockerd, for now just carry docker-engine-selinux for it
-%if 0%{?oraclelinux} >= 7 || 0%{?fedora} == 24
-Requires: selinux-policy >= %{selinux_policyver}
-Requires(pre): %{name}-selinux >= %{version}-%{release}
-%endif # selinux-policy for oraclelinux-7, fedora-24
-
-%endif # with_selinux
-
-# conflicting packages
-Conflicts: docker
-Conflicts: docker-io
-Conflicts: docker-engine-cs
-
-%description
-Docker is an open source project to build, ship and run any application as a
-lightweight container.
-
-Docker containers are both hardware-agnostic and platform-agnostic. This means
-they can run anywhere, from your laptop to the largest EC2 compute instance and
-everything in between - and they don't require you to use a particular
-language, framework or packaging system. That makes them great building blocks
-for deploying and scaling web apps, databases, and backend services without
-depending on a particular stack or provider.
-
-%prep
-%if 0%{?centos} <= 6 || 0%{?oraclelinux} <=6
-%setup -n %{name}
-%else
-%autosetup -n %{name}
-%endif
-
-%build
-export DOCKER_GITCOMMIT=%{_gitcommit}
-./hack/make.sh dynbinary
-# ./man/md2man-all.sh runs outside the build container (if at all), since we don't have go-md2man here
-
-%check
-./bundles/%{_origversion}/dynbinary-daemon/dockerd -v
-
-%install
-# install binary
-install -d $RPM_BUILD_ROOT/%{_bindir}
-install -p -m 755 bundles/%{_origversion}/dynbinary-daemon/dockerd-%{_origversion} $RPM_BUILD_ROOT/%{_bindir}/dockerd
-
-# install proxy
-install -p -m 755 /usr/local/bin/docker-proxy $RPM_BUILD_ROOT/%{_bindir}/docker-proxy
-
-# install containerd
-install -p -m 755 /usr/local/bin/docker-containerd $RPM_BUILD_ROOT/%{_bindir}/docker-containerd
-install -p -m 755 /usr/local/bin/docker-containerd-shim $RPM_BUILD_ROOT/%{_bindir}/docker-containerd-shim
-install -p -m 755 /usr/local/bin/docker-containerd-ctr $RPM_BUILD_ROOT/%{_bindir}/docker-containerd-ctr
-
-# install runc
-install -p -m 755 /usr/local/bin/docker-runc $RPM_BUILD_ROOT/%{_bindir}/docker-runc
-
-# install tini
-install -p -m 755 /usr/local/bin/docker-init $RPM_BUILD_ROOT/%{_bindir}/docker-init
-
-# install udev rules
-install -d $RPM_BUILD_ROOT/%{_sysconfdir}/udev/rules.d
-install -p -m 644 contrib/udev/80-docker.rules $RPM_BUILD_ROOT/%{_sysconfdir}/udev/rules.d/80-docker.rules
-
-# add init scripts
-install -d $RPM_BUILD_ROOT/etc/sysconfig
-install -d $RPM_BUILD_ROOT/%{_initddir}
-
-
-%if 0%{?is_systemd}
-install -d $RPM_BUILD_ROOT/%{_unitdir}
-install -p -m 644 contrib/init/systemd/docker.service.rpm $RPM_BUILD_ROOT/%{_unitdir}/docker.service
-%else
-install -p -m 644 contrib/init/sysvinit-redhat/docker.sysconfig $RPM_BUILD_ROOT/etc/sysconfig/docker
-install -p -m 755 contrib/init/sysvinit-redhat/docker $RPM_BUILD_ROOT/%{_initddir}/docker
-%endif
-# add bash, zsh, and fish completions
-install -d $RPM_BUILD_ROOT/usr/share/bash-completion/completions
-install -d $RPM_BUILD_ROOT/usr/share/zsh/vendor-completions
-install -d $RPM_BUILD_ROOT/usr/share/fish/vendor_completions.d
-install -p -m 644 contrib/completion/bash/docker $RPM_BUILD_ROOT/usr/share/bash-completion/completions/docker
-install -p -m 644 contrib/completion/zsh/_docker $RPM_BUILD_ROOT/usr/share/zsh/vendor-completions/_docker
-install -p -m 644 contrib/completion/fish/docker.fish $RPM_BUILD_ROOT/usr/share/fish/vendor_completions.d/docker.fish
-
-# install manpages
-install -d %{buildroot}%{_mandir}/man1
-install -p -m 644 man/man1/*.1 $RPM_BUILD_ROOT/%{_mandir}/man1
-install -d %{buildroot}%{_mandir}/man5
-install -p -m 644 man/man5/*.5 $RPM_BUILD_ROOT/%{_mandir}/man5
-install -d %{buildroot}%{_mandir}/man8
-install -p -m 644 man/man8/*.8 $RPM_BUILD_ROOT/%{_mandir}/man8
-
-# add vimfiles
-install -d $RPM_BUILD_ROOT/usr/share/vim/vimfiles/doc
-install -d $RPM_BUILD_ROOT/usr/share/vim/vimfiles/ftdetect
-install -d $RPM_BUILD_ROOT/usr/share/vim/vimfiles/syntax
-install -p -m 644 contrib/syntax/vim/doc/dockerfile.txt $RPM_BUILD_ROOT/usr/share/vim/vimfiles/doc/dockerfile.txt
-install -p -m 644 contrib/syntax/vim/ftdetect/dockerfile.vim $RPM_BUILD_ROOT/usr/share/vim/vimfiles/ftdetect/dockerfile.vim
-install -p -m 644 contrib/syntax/vim/syntax/dockerfile.vim $RPM_BUILD_ROOT/usr/share/vim/vimfiles/syntax/dockerfile.vim
-
-# add nano
-install -d $RPM_BUILD_ROOT/usr/share/nano
-install -p -m 644 contrib/syntax/nano/Dockerfile.nanorc $RPM_BUILD_ROOT/usr/share/nano/Dockerfile.nanorc
-
-# list files owned by the package here
-%files
-%doc AUTHORS CHANGELOG.md CONTRIBUTING.md LICENSE MAINTAINERS NOTICE README.md
-/%{_bindir}/docker
-/%{_bindir}/dockerd
-/%{_bindir}/docker-containerd
-/%{_bindir}/docker-containerd-shim
-/%{_bindir}/docker-containerd-ctr
-/%{_bindir}/docker-proxy
-/%{_bindir}/docker-runc
-/%{_bindir}/docker-init
-/%{_sysconfdir}/udev/rules.d/80-docker.rules
-%if 0%{?is_systemd}
-/%{_unitdir}/docker.service
-%else
-%config(noreplace,missingok) /etc/sysconfig/docker
-/%{_initddir}/docker
-%endif
-/usr/share/bash-completion/completions/docker
-/usr/share/zsh/vendor-completions/_docker
-/usr/share/fish/vendor_completions.d/docker.fish
-%doc
-/%{_mandir}/man1/*
-/%{_mandir}/man5/*
-/%{_mandir}/man8/*
-/usr/share/vim/vimfiles/doc/dockerfile.txt
-/usr/share/vim/vimfiles/ftdetect/dockerfile.vim
-/usr/share/vim/vimfiles/syntax/dockerfile.vim
-/usr/share/nano/Dockerfile.nanorc
-
-%post
-%if 0%{?is_systemd}
-%systemd_post docker
-%else
-# This adds the proper /etc/rc*.d links for the script
-/sbin/chkconfig --add docker
-%endif
-if ! getent group docker > /dev/null; then
- groupadd --system docker
-fi
-
-%preun
-%if 0%{?is_systemd}
-%systemd_preun docker
-%else
-if [ $1 -eq 0 ] ; then
- /sbin/service docker stop >/dev/null 2>&1
- /sbin/chkconfig --del docker
-fi
-%endif
-
-%postun
-%if 0%{?is_systemd}
-%systemd_postun_with_restart docker
-%else
-if [ "$1" -ge "1" ] ; then
- /sbin/service docker condrestart >/dev/null 2>&1 || :
-fi
-%endif
-
-%changelog
diff --git a/components/engine/hack/make/.detect-daemon-osarch b/components/engine/hack/make/.detect-daemon-osarch
index ac16055fcf..26cb30f4cf 100644
--- a/components/engine/hack/make/.detect-daemon-osarch
+++ b/components/engine/hack/make/.detect-daemon-osarch
@@ -60,9 +60,6 @@ case "$PACKAGE_ARCH" in
windows)
DOCKERFILE='Dockerfile.windows'
;;
- solaris)
- DOCKERFILE='Dockerfile.solaris'
- ;;
esac
;;
*)
diff --git a/components/engine/hack/make/.integration-test-helpers b/components/engine/hack/make/.integration-test-helpers
index 23780396e0..abd1d0f305 100644
--- a/components/engine/hack/make/.integration-test-helpers
+++ b/components/engine/hack/make/.integration-test-helpers
@@ -5,8 +5,10 @@
#
# TESTFLAGS='-check.f DockerSuite.TestBuild*' ./hack/make.sh binary test-integration
#
-
-source "$SCRIPTDIR/make/.go-autogen"
+if [ -z $MAKEDIR ]; then
+ export MAKEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+fi
+source "$MAKEDIR/.go-autogen"
# Set defaults
: ${TEST_REPEAT:=1}
diff --git a/components/engine/hack/make/build-deb b/components/engine/hack/make/build-deb
deleted file mode 100644
index a698323535..0000000000
--- a/components/engine/hack/make/build-deb
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# subshell so that we can export PATH and TZ without breaking other things
-(
- export TZ=UTC # make sure our "date" variables are UTC-based
- bundle .integration-daemon-start
- bundle .detect-daemon-osarch
-
- # TODO consider using frozen images for the dockercore/builder-deb tags
-
- tilde='~' # ouch Bash 4.2 vs 4.3, you keel me
- debVersion="${VERSION//-/$tilde}" # using \~ or '~' here works in 4.3, but not 4.2; just ~ causes $HOME to be inserted, hence the $tilde
-	# if we have a "-dev" suffix or have changes in Git, let's make this package version more complex so it works better
- if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- gitUnix="$(git log -1 --pretty='%at')"
- gitDate="$(date --date "@$gitUnix" +'%Y%m%d.%H%M%S')"
- gitCommit="$(git log -1 --pretty='%h')"
- gitVersion="git${gitDate}.0.${gitCommit}"
- # gitVersion is now something like 'git20150128.112847.0.17e840a'
- debVersion="$debVersion~$gitVersion"
-
- # $ dpkg --compare-versions 1.5.0 gt 1.5.0~rc1 && echo true || echo false
- # true
- # $ dpkg --compare-versions 1.5.0~rc1 gt 1.5.0~git20150128.112847.17e840a && echo true || echo false
- # true
- # $ dpkg --compare-versions 1.5.0~git20150128.112847.17e840a gt 1.5.0~dev~git20150128.112847.17e840a && echo true || echo false
- # true
-
- # ie, 1.5.0 > 1.5.0~rc1 > 1.5.0~git20150128.112847.17e840a > 1.5.0~dev~git20150128.112847.17e840a
- fi
-
- debSource="$(awk -F ': ' '$1 == "Source" { print $2; exit }' hack/make/.build-deb/control)"
- debMaintainer="$(awk -F ': ' '$1 == "Maintainer" { print $2; exit }' hack/make/.build-deb/control)"
- debDate="$(date --rfc-2822)"
-
- # if go-md2man is available, pre-generate the man pages
- make manpages
-
- builderDir="contrib/builder/deb/${PACKAGE_ARCH}"
- pkgs=( $(find "${builderDir}/"*/ -type d) )
- if [ ! -z "$DOCKER_BUILD_PKGS" ]; then
- pkgs=()
- for p in $DOCKER_BUILD_PKGS; do
- pkgs+=( "$builderDir/$p" )
- done
- fi
- for dir in "${pkgs[@]}"; do
- [ -d "$dir" ] || { echo >&2 "skipping nonexistent $dir"; continue; }
- version="$(basename "$dir")"
- suite="${version##*-}"
-
- image="dockercore/builder-deb:$version"
- if ! docker inspect "$image" &> /dev/null; then
- (
- # Add the APT_MIRROR args only if the consuming Dockerfile uses it
- # Otherwise this will cause the build to fail
- if [ "$(grep 'ARG APT_MIRROR=' $dir/Dockerfile)" ] && [ "$BUILD_APT_MIRROR" ]; then
- DOCKER_BUILD_ARGS="$DOCKER_BUILD_ARGS $BUILD_APT_MIRROR"
- fi
- set -x && docker build ${DOCKER_BUILD_ARGS} -t "$image" "$dir"
- )
- fi
-
- mkdir -p "$DEST/$version"
- cat > "$DEST/$version/Dockerfile.build" <<-EOF
- FROM $image
- WORKDIR /usr/src/docker
- COPY . /usr/src/docker
- ENV DOCKER_GITCOMMIT $GITCOMMIT
- RUN mkdir -p /go/src/github.com/docker && mkdir -p /go/src/github.com/opencontainers \
- && ln -snf /usr/src/docker /go/src/github.com/docker/docker
- EOF
-
- cat >> "$DEST/$version/Dockerfile.build" <<-EOF
- # Install runc, containerd, proxy and tini
- RUN ./hack/dockerfile/install-binaries.sh runc-dynamic containerd-dynamic proxy-dynamic tini
- EOF
- cat >> "$DEST/$version/Dockerfile.build" <<-EOF
- RUN cp -aL hack/make/.build-deb debian
- RUN { echo '$debSource (${debVersion}-0~${version}) $suite; urgency=low'; echo; echo ' * Version: $VERSION'; echo; echo " -- $debMaintainer $debDate"; } > debian/changelog && cat >&2 debian/changelog
- RUN dpkg-buildpackage -uc -us -I.git
- EOF
- tempImage="docker-temp/build-deb:$version"
- ( set -x && docker build ${DOCKER_BUILD_ARGS} -t "$tempImage" -f "$DEST/$version/Dockerfile.build" . )
- docker run --rm "$tempImage" bash -c 'cd .. && tar -c *_*' | tar -xvC "$DEST/$version"
- docker rmi "$tempImage"
- done
-
- bundle .integration-daemon-stop
-) 2>&1 | tee -a "$DEST/test.log"
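
The package versioning in build-deb relies on dpkg's ordering rule that a tilde segment sorts before anything, including the end of the string, which is why 1.5.0~rc1 and git snapshot versions sort below the final 1.5.0. A minimal sketch to sanity-check that ordering locally, assuming dpkg is installed (the version strings are only examples):

    #!/usr/bin/env bash
    # Sketch: confirm that pre-release version strings sort below the final
    # release under dpkg's rules ('~' sorts before everything, even end-of-string).
    set -e

    dpkg --compare-versions "1.5.0~rc1" lt "1.5.0" \
        && echo "rc sorts below final"
    dpkg --compare-versions "1.5.0~git20150128.112847.0.17e840a" lt "1.5.0~rc1" \
        && echo "git snapshot sorts below rc"
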
diff --git a/components/engine/hack/make/build-integration-test-binary b/components/engine/hack/make/build-integration-test-binary
index ad3e8c2123..bbd5a22bcc 100755
--- a/components/engine/hack/make/build-integration-test-binary
+++ b/components/engine/hack/make/build-integration-test-binary
@@ -2,7 +2,6 @@
# required by `make build-integration-cli-on-swarm`
set -e
-source "${MAKEDIR}/.go-autogen"
source hack/make/.integration-test-helpers
build_test_suite_binaries
diff --git a/components/engine/hack/make/build-rpm b/components/engine/hack/make/build-rpm
deleted file mode 100644
index 1e89a78d5f..0000000000
--- a/components/engine/hack/make/build-rpm
+++ /dev/null
@@ -1,148 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# subshell so that we can export PATH and TZ without breaking other things
-(
- export TZ=UTC # make sure our "date" variables are UTC-based
-
- source "$(dirname "$BASH_SOURCE")/.integration-daemon-start"
- source "$(dirname "$BASH_SOURCE")/.detect-daemon-osarch"
-
- # TODO consider using frozen images for the dockercore/builder-rpm tags
-
- rpmName=docker-engine
- rpmVersion="$VERSION"
- rpmRelease=1
-
- # rpmRelease versioning is as follows
- # Docker 1.7.0: version=1.7.0, release=1
- # Docker 1.7.0-rc1: version=1.7.0, release=0.1.rc1
- # Docker 1.7.0-cs1: version=1.7.0.cs1, release=1
- # Docker 1.7.0-cs1-rc1: version=1.7.0.cs1, release=0.1.rc1
- # Docker 1.7.0-dev nightly: version=1.7.0, release=0.0.YYYYMMDD.HHMMSS.gitHASH
-
- # if we have a "-rc*" suffix, set appropriate release
- if [[ "$rpmVersion" =~ .*-rc[0-9]+$ ]] ; then
- rcVersion=${rpmVersion#*-rc}
- rpmVersion=${rpmVersion%-rc*}
- rpmRelease="0.${rcVersion}.rc${rcVersion}"
- fi
-
- DOCKER_GITCOMMIT=$(git rev-parse --short HEAD)
- if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
- DOCKER_GITCOMMIT="$DOCKER_GITCOMMIT-unsupported"
- fi
-
-	# if we have a "-dev" suffix or have changes in Git, let's make this package version more complex so it works better
- if [[ "$rpmVersion" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- gitUnix="$(git log -1 --pretty='%at')"
- gitDate="$(date --date "@$gitUnix" +'%Y%m%d.%H%M%S')"
- gitCommit="$(git log -1 --pretty='%h')"
- gitVersion="${gitDate}.git${gitCommit}"
- # gitVersion is now something like '20150128.112847.17e840a'
- rpmVersion="${rpmVersion%-dev}"
- rpmRelease="0.0.$gitVersion"
- fi
-
- # Replace any other dashes with periods
- rpmVersion="${rpmVersion/-/.}"
-
- rpmPackager="$(awk -F ': ' '$1 == "Packager" { print $2; exit }' hack/make/.build-rpm/${rpmName}.spec)"
- rpmDate="$(date +'%a %b %d %Y')"
-
- # if go-md2man is available, pre-generate the man pages
- make manpages
-
- # Convert the CHANGELOG.md file into RPM changelog format
- rm -f contrib/builder/rpm/${PACKAGE_ARCH}/changelog
- VERSION_REGEX="^\W\W (.*) \((.*)\)$"
- ENTRY_REGEX="^[-+*] (.*)$"
- while read -r line || [[ -n "$line" ]]; do
- if [ -z "$line" ]; then continue; fi
- if [[ "$line" =~ $VERSION_REGEX ]]; then
- echo >> contrib/builder/rpm/${PACKAGE_ARCH}/changelog
- echo "* `date -d ${BASH_REMATCH[2]} '+%a %b %d %Y'` ${rpmPackager} - ${BASH_REMATCH[1]}" >> contrib/builder/rpm/${PACKAGE_ARCH}/changelog
- fi
- if [[ "$line" =~ $ENTRY_REGEX ]]; then
- echo "- ${BASH_REMATCH[1]//\`}" >> contrib/builder/rpm/${PACKAGE_ARCH}/changelog
- fi
- done < CHANGELOG.md
-
- builderDir="contrib/builder/rpm/${PACKAGE_ARCH}"
- pkgs=( $(find "${builderDir}/"*/ -type d) )
- if [ ! -z "$DOCKER_BUILD_PKGS" ]; then
- pkgs=()
- for p in $DOCKER_BUILD_PKGS; do
- pkgs+=( "$builderDir/$p" )
- done
- fi
- for dir in "${pkgs[@]}"; do
- [ -d "$dir" ] || { echo >&2 "skipping nonexistent $dir"; continue; }
- version="$(basename "$dir")"
- suite="${version##*-}"
-
- image="dockercore/builder-rpm:$version"
- if ! docker inspect "$image" &> /dev/null; then
- ( set -x && docker build ${DOCKER_BUILD_ARGS} -t "$image" "$dir" )
- fi
-
- mkdir -p "$DEST/$version"
- cat > "$DEST/$version/Dockerfile.build" <<-EOF
- FROM $image
- COPY . /usr/src/${rpmName}
- WORKDIR /usr/src/${rpmName}
- RUN mkdir -p /go/src/github.com/docker && mkdir -p /go/src/github.com/opencontainers
- EOF
-
- cat >> "$DEST/$version/Dockerfile.build" <<-EOF
- # Install runc, containerd, proxy and tini
- RUN TMP_GOPATH="/go" ./hack/dockerfile/install-binaries.sh runc-dynamic containerd-dynamic proxy-dynamic tini
- EOF
- if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- echo 'ENV DOCKER_EXPERIMENTAL 1' >> "$DEST/$version/Dockerfile.build"
- fi
- cat >> "$DEST/$version/Dockerfile.build" <<-EOF
- RUN mkdir -p /root/rpmbuild/SOURCES \
- && echo '%_topdir /root/rpmbuild' > /root/.rpmmacros
- WORKDIR /root/rpmbuild
- RUN ln -sfv /usr/src/${rpmName}/hack/make/.build-rpm SPECS
- WORKDIR /root/rpmbuild/SPECS
- RUN tar --exclude .git -r -C /usr/src -f /root/rpmbuild/SOURCES/${rpmName}.tar ${rpmName}
- RUN tar --exclude .git -r -C /go/src/github.com/docker -f /root/rpmbuild/SOURCES/${rpmName}.tar containerd
- RUN tar --exclude .git -r -C /go/src/github.com/docker/libnetwork/cmd -f /root/rpmbuild/SOURCES/${rpmName}.tar proxy
- RUN tar --exclude .git -r -C /go/src/github.com/opencontainers -f /root/rpmbuild/SOURCES/${rpmName}.tar runc
- RUN tar --exclude .git -r -C /go/ -f /root/rpmbuild/SOURCES/${rpmName}.tar tini
- RUN gzip /root/rpmbuild/SOURCES/${rpmName}.tar
- RUN { cat /usr/src/${rpmName}/contrib/builder/rpm/${PACKAGE_ARCH}/changelog; } >> ${rpmName}.spec && tail >&2 ${rpmName}.spec
- RUN rpmbuild -ba \
- --define '_gitcommit $DOCKER_GITCOMMIT' \
- --define '_release $rpmRelease' \
- --define '_version $rpmVersion' \
- --define '_origversion $VERSION' \
- --define '_experimental ${DOCKER_EXPERIMENTAL:-0}' \
- ${rpmName}.spec
- EOF
- # selinux policy referencing systemd things won't work on non-systemd versions
- # of centos or rhel, which we don't support anyways
- if [ "${suite%.*}" -gt 6 ] && [[ "$version" != opensuse* ]]; then
- if [ -d "./contrib/selinux-$version" ]; then
- selinuxDir="selinux-${version}"
- cat >> "$DEST/$version/Dockerfile.build" <<-EOF
- RUN tar -cz -C /usr/src/${rpmName}/contrib/${selinuxDir} -f /root/rpmbuild/SOURCES/${rpmName}-selinux.tar.gz ${rpmName}-selinux
- RUN rpmbuild -ba \
- --define '_gitcommit $DOCKER_GITCOMMIT' \
- --define '_release $rpmRelease' \
- --define '_version $rpmVersion' \
- --define '_origversion $VERSION' \
- ${rpmName}-selinux.spec
- EOF
- fi
- fi
- tempImage="docker-temp/build-rpm:$version"
- ( set -x && docker build ${DOCKER_BUILD_ARGS} -t "$tempImage" -f $DEST/$version/Dockerfile.build . )
- docker run --rm "$tempImage" bash -c 'cd /root/rpmbuild && tar -c *RPMS' | tar -xvC "$DEST/$version"
- docker rmi "$tempImage"
- done
-
- source "$(dirname "$BASH_SOURCE")/.integration-daemon-stop"
-) 2>&1 | tee -a $DEST/test.log
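
The release numbering that build-rpm documents in its comments (release=1 for final builds, 0.N.rcN for release candidates, 0.0.<timestamp>.git<hash> for nightlies) keeps pre-release RPMs sorting below the final package. A rough sketch of that mapping, assuming the same -rcN and -dev suffix conventions; the nightly branch assumes it runs inside a git checkout:

    #!/usr/bin/env bash
    # Sketch: derive an RPM Version/Release pair from a Docker-style VERSION
    # string, mirroring the scheme described in the deleted build-rpm script.
    set -e

    version="${1:-1.7.0-rc1}"   # e.g. 1.7.0, 1.7.0-rc1, 1.7.0-dev
    release=1

    if [[ "$version" =~ -rc[0-9]+$ ]]; then
        rc="${version#*-rc}"
        version="${version%-rc*}"
        release="0.${rc}.rc${rc}"
    elif [[ "$version" == *-dev ]]; then
        version="${version%-dev}"
        release="0.0.$(date -u +'%Y%m%d.%H%M%S').git$(git rev-parse --short HEAD)"
    fi

    echo "Version: $version"
    echo "Release: $release"
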
diff --git a/components/engine/hack/make/clean-apt-repo b/components/engine/hack/make/clean-apt-repo
deleted file mode 100755
index e823cb537f..0000000000
--- a/components/engine/hack/make/clean-apt-repo
+++ /dev/null
@@ -1,43 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script cleans the experimental pool for the apt repo.
-# This is useful when there are a lot of old experimental debs and you only want to keep the most recent.
-#
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-APTDIR=$DOCKER_RELEASE_DIR/apt/repo/pool/experimental
-: ${DOCKER_ARCHIVE_DIR:=$DEST/archive}
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-latest_versions=$(dpkg-scanpackages "$APTDIR" /dev/null 2>/dev/null | awk -F ': ' '$1 == "Filename" { print $2 }')
-
-# get the latest version
-latest_docker_engine_file=$(echo "$latest_versions" | grep docker-engine)
-latest_docker_engine_version=$(basename ${latest_docker_engine_file%~*})
-
-echo "latest docker-engine version: $latest_docker_engine_version"
-
-# remove all the files that are not that version in experimental
-pool_dir=$(dirname "$latest_docker_engine_file")
-old_pkgs=( $(ls "$pool_dir" | grep -v "^${latest_docker_engine_version}" | grep "${latest_docker_engine_version%%~git*}") )
-
-echo "${old_pkgs[@]}"
-
-mkdir -p "$DOCKER_ARCHIVE_DIR"
-for old_pkg in "${old_pkgs[@]}"; do
- echo "moving ${pool_dir}/${old_pkg} to $DOCKER_ARCHIVE_DIR"
- mv "${pool_dir}/${old_pkg}" "$DOCKER_ARCHIVE_DIR"
-done
-
-echo
-echo "$pool_dir now has contents:"
-ls "$pool_dir"
-
-# now regenerate release files for experimental
-export COMPONENT=experimental
-source "${DIR}/update-apt-repo"
-
-echo "You will now want to: "
-echo " - re-sign the repo with hack/make/sign-repo"
-echo " - re-generate index files with hack/make/generate-index-listing"
diff --git a/components/engine/hack/make/clean-yum-repo b/components/engine/hack/make/clean-yum-repo
deleted file mode 100755
index 012689a965..0000000000
--- a/components/engine/hack/make/clean-yum-repo
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script cleans the experimental pool for the yum repo.
-# This is useful when there are a lot of old experimental rpms and you only want to keep the most recent.
-#
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-YUMDIR=$DOCKER_RELEASE_DIR/yum/repo/experimental
-
-suites=( $(find "$YUMDIR" -mindepth 1 -maxdepth 1 -type d) )
-
-for suite in "${suites[@]}"; do
- echo "cleanup in: $suite"
- ( set -x; repomanage -k2 --old "$suite" | xargs rm -f )
-done
-
-echo "You will now want to: "
-echo " - re-sign the repo with hack/make/sign-repo"
-echo " - re-generate index files with hack/make/generate-index-listing"
diff --git a/components/engine/hack/make/cover b/components/engine/hack/make/cover
deleted file mode 100644
index 4a37995f69..0000000000
--- a/components/engine/hack/make/cover
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-bundle_cover() {
- coverprofiles=( "$DEST/../"*"/coverprofiles/"* )
- for p in "${coverprofiles[@]}"; do
- echo
- (
- set -x
- go tool cover -func="$p"
- )
- done
-}
-
-bundle_cover 2>&1 | tee "$DEST/report.log"
diff --git a/components/engine/hack/make/generate-index-listing b/components/engine/hack/make/generate-index-listing
deleted file mode 100755
index 9f1208403f..0000000000
--- a/components/engine/hack/make/generate-index-listing
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script generates index files for the directory structure
-# of the apt and yum repos
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-APTDIR=$DOCKER_RELEASE_DIR/apt
-YUMDIR=$DOCKER_RELEASE_DIR/yum
-
-if [ ! -d $APTDIR ] && [ ! -d $YUMDIR ]; then
- echo >&2 'release-rpm or release-deb must be run before generate-index-listing'
- exit 1
-fi
-
-create_index() {
- local directory=$1
- local original=$2
- local cleaned=${directory#$original}
-
- # the index file to create
- local index_file="${directory}/index"
-
- # cd into dir & touch the index file
- cd $directory
- touch $index_file
-
- # print the html header
- cat <<-EOF > "$index_file"
-	<!DOCTYPE html>
-	<html>
-	<head><title>Index of ${cleaned}/</title></head>
-	<body>
-	<h1>Index of ${cleaned}/</h1>
-	<pre><a href="../">../</a>
- EOF
-
- # start of content output
- (
- # change IFS locally within subshell so the for loop saves line correctly to L var
- IFS=$'\n';
-
- # pretty sweet, will mimic the normal apache output. skipping "index" and hidden files
- for L in $(find -L . -mount -depth -maxdepth 1 -type f ! -name 'index' ! -name '.*' -prune -printf "%f|@_@%Td-%Tb-%TY %Tk:%TM @%f@\n"|sort|column -t -s '|' | sed 's,\([\ ]\+\)@_@,\1,g');
- do
- # file
- F=$(sed -e 's,^.*@\([^@]\+\)@.*$,\1,g'<<<"$L");
-
- # file with file size
- F=$(du -bh $F | cut -f1);
-
- # output with correct format
- sed -e 's,\ @.*$, '"$F"',g'<<<"$L";
- done;
- ) >> $index_file;
-
- # now output a list of all directories in this dir (maxdepth 1) other than '.' outputting in a sorted manner exactly like apache
- find -L . -mount -depth -maxdepth 1 -type d ! -name '.' -printf "%-43f@_@%Td-%Tb-%TY %Tk:%TM -\n"|sort -d|sed 's,\([\ ]\+\)@_@,/\1,g' >> $index_file
-
- # print the footer html
-	echo "</pre></body></html>" >> $index_file
-
-}
-
-get_dirs() {
- local directory=$1
-
- for d in `find ${directory} -type d`; do
- create_index $d $directory
- done
-}
-
-get_dirs $APTDIR
-get_dirs $YUMDIR
diff --git a/components/engine/hack/make/install-binary b/components/engine/hack/make/install-binary
old mode 100755
new mode 100644
index 57aa1a28c1..f6a4361fdb
--- a/components/engine/hack/make/install-binary
+++ b/components/engine/hack/make/install-binary
@@ -3,6 +3,27 @@
set -e
rm -rf "$DEST"
+install_binary() {
+ local file="$1"
+ local target="${DOCKER_MAKE_INSTALL_PREFIX:=/usr/local}/bin/"
+ if [ "$(go env GOOS)" == "linux" ]; then
+ echo "Installing $(basename $file) to ${target}"
+ mkdir -p "$target"
+ cp -f -L "$file" "$target"
+ else
+ echo "Install is only supported on linux"
+ return 1
+ fi
+}
+
(
- source "${MAKEDIR}/install-binary-daemon"
+ DEST="$(dirname $DEST)/binary-daemon"
+ source "${MAKEDIR}/.binary-setup"
+ install_binary "${DEST}/${DOCKER_DAEMON_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_RUNC_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_CONTAINERD_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_CONTAINERD_CTR_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_CONTAINERD_SHIM_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_PROXY_BINARY_NAME}"
+ install_binary "${DEST}/${DOCKER_INIT_BINARY_NAME}"
)
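
With install-binary-daemon folded into install-binary, a single bundle now copies dockerd and its companion binaries (runc, containerd, proxy, init) into the install prefix. Assuming the usual hack/make.sh entry point, an invocation would look roughly like:

    # Build the binaries, then install them under a custom prefix
    # (DOCKER_MAKE_INSTALL_PREFIX defaults to /usr/local when unset).
    DOCKER_MAKE_INSTALL_PREFIX=/opt/docker ./hack/make.sh binary install-binary
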
diff --git a/components/engine/hack/make/install-binary-daemon b/components/engine/hack/make/install-binary-daemon
deleted file mode 100644
index f6a4361fdb..0000000000
--- a/components/engine/hack/make/install-binary-daemon
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-
-set -e
-rm -rf "$DEST"
-
-install_binary() {
- local file="$1"
- local target="${DOCKER_MAKE_INSTALL_PREFIX:=/usr/local}/bin/"
- if [ "$(go env GOOS)" == "linux" ]; then
- echo "Installing $(basename $file) to ${target}"
- mkdir -p "$target"
- cp -f -L "$file" "$target"
- else
- echo "Install is only supported on linux"
- return 1
- fi
-}
-
-(
- DEST="$(dirname $DEST)/binary-daemon"
- source "${MAKEDIR}/.binary-setup"
- install_binary "${DEST}/${DOCKER_DAEMON_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_RUNC_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_CONTAINERD_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_CONTAINERD_CTR_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_CONTAINERD_SHIM_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_PROXY_BINARY_NAME}"
- install_binary "${DEST}/${DOCKER_INIT_BINARY_NAME}"
-)
diff --git a/components/engine/hack/make/release-deb b/components/engine/hack/make/release-deb
deleted file mode 100755
index acf4901d6e..0000000000
--- a/components/engine/hack/make/release-deb
+++ /dev/null
@@ -1,163 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script creates the apt repos for the .deb files generated by hack/make/build-deb
-#
-# The following can then be used as apt sources:
-# deb http://apt.dockerproject.org/repo $distro-$release $version
-#
-# For example:
-# deb http://apt.dockerproject.org/repo ubuntu-trusty main
-# deb http://apt.dockerproject.org/repo ubuntu-trusty testing
-# deb http://apt.dockerproject.org/repo debian-wheezy experimental
-# deb http://apt.dockerproject.org/repo debian-jessie main
-#
-# ... and so on and so forth for the builds created by hack/make/build-deb
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-: ${GPG_KEYID:=releasedocker}
-APTDIR=$DOCKER_RELEASE_DIR/apt/repo
-
-# setup the apt repo (if it does not exist)
-mkdir -p "$APTDIR/conf" "$APTDIR/db" "$APTDIR/dists"
-
-# supported arches/sections
-arches=( amd64 i386 armhf ppc64le s390x )
-
-# Preserve existing components but don't add any non-existing ones
-for component in main testing experimental ; do
- exists=$(find "$APTDIR/dists" -mindepth 2 -maxdepth 2 -type d -name "$component" -print -quit)
- if [ -n "$exists" ] ; then
- components+=( $component )
- fi
-done
-
-# set the component for the version being released
-component="main"
-
-if [[ "$VERSION" == *-rc* ]]; then
- component="testing"
-fi
-
-if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- component="experimental"
-fi
-
-# Make sure our component is in the list of components
-if [[ ! "${components[*]}" =~ $component ]] ; then
- components+=( $component )
-fi
-
-# create apt-ftparchive file on every run. This is essential to avoid
-# using stale versions of the config file that could cause unnecessary
-# refreshing of bits for EOL-ed releases.
-cat <<-EOF > "$APTDIR/conf/apt-ftparchive.conf"
-Dir {
- ArchiveDir "${APTDIR}";
- CacheDir "${APTDIR}/db";
-};
-
-Default {
- Packages::Compress ". gzip bzip2";
- Sources::Compress ". gzip bzip2";
- Contents::Compress ". gzip bzip2";
-};
-
-TreeDefault {
- BinCacheDB "packages-\$(SECTION)-\$(ARCH).db";
- Directory "pool/\$(SECTION)";
- Packages "\$(DIST)/\$(SECTION)/binary-\$(ARCH)/Packages";
- SrcDirectory "pool/\$(SECTION)";
- Sources "\$(DIST)/\$(SECTION)/source/Sources";
- Contents "\$(DIST)/\$(SECTION)/Contents-\$(ARCH)";
- FileList "$APTDIR/\$(DIST)/\$(SECTION)/filelist";
-};
-EOF
-
-for dir in bundles/$VERSION/build-deb/*/; do
- version="$(basename "$dir")"
- suite="${version//debootstrap-}"
-
- cat <<-EOF
- Tree "dists/${suite}" {
- Sections "${components[*]}";
- Architectures "${arches[*]}";
- }
-
- EOF
-done >> "$APTDIR/conf/apt-ftparchive.conf"
-
-cat <<-EOF > "$APTDIR/conf/docker-engine-release.conf"
-APT::FTPArchive::Release::Origin "Docker";
-APT::FTPArchive::Release::Components "${components[*]}";
-APT::FTPArchive::Release::Label "Docker APT Repository";
-APT::FTPArchive::Release::Architectures "${arches[*]}";
-EOF
-
-# release the debs
-for dir in bundles/$VERSION/build-deb/*/; do
- version="$(basename "$dir")"
- codename="${version//debootstrap-}"
-
- tempdir="$(mktemp -d /tmp/tmp-docker-release-deb.XXXXXXXX)"
- DEBFILE=( "$dir/docker-engine"*.deb )
-
- # add the deb for each component for the distro version into the
- # pool (if it is not there already)
- mkdir -p "$APTDIR/pool/$component/d/docker-engine/"
- for deb in ${DEBFILE[@]}; do
- d=$(basename "$deb")
- # We do not want to generate a new deb if it has already been
- # copied into the APTDIR
- if [ ! -f "$APTDIR/pool/$component/d/docker-engine/$d" ]; then
- cp "$deb" "$tempdir/"
- # if we have a $GPG_PASSPHRASE we may as well
- # dpkg-sign before copying the deb into the pool
- if [ ! -z "$GPG_PASSPHRASE" ]; then
- dpkg-sig -g "--no-tty --digest-algo 'sha512' --passphrase '$GPG_PASSPHRASE'" \
- -k "$GPG_KEYID" --sign builder "$tempdir/$d"
- fi
- mv "$tempdir/$d" "$APTDIR/pool/$component/d/docker-engine/"
- fi
- done
-
- rm -rf "$tempdir"
-
- # build the right directory structure, needed for apt-ftparchive
- for arch in "${arches[@]}"; do
- for c in "${components[@]}"; do
- mkdir -p "$APTDIR/dists/$codename/$c/binary-$arch"
- done
- done
-
- # update the filelist for this codename/component
- find "$APTDIR/pool/$component" \
- -name *~${codename}*.deb -o \
- -name *~${codename#*-}*.deb > "$APTDIR/dists/$codename/$component/filelist"
-done
-
-# run the apt-ftparchive commands so we can have pinning
-apt-ftparchive generate "$APTDIR/conf/apt-ftparchive.conf"
-
-for dir in bundles/$VERSION/build-deb/*/; do
- version="$(basename "$dir")"
- codename="${version//debootstrap-}"
-
- apt-ftparchive \
- -c "$APTDIR/conf/docker-engine-release.conf" \
- -o "APT::FTPArchive::Release::Codename=$codename" \
- -o "APT::FTPArchive::Release::Suite=$codename" \
- release \
- "$APTDIR/dists/$codename" > "$APTDIR/dists/$codename/Release"
-
- for arch in "${arches[@]}"; do
- apt-ftparchive \
- -c "$APTDIR/conf/docker-engine-release.conf" \
- -o "APT::FTPArchive::Release::Codename=$codename" \
- -o "APT::FTPArchive::Release::Suite=$codename" \
- -o "APT::FTPArchive::Release::Components=$component" \
- -o "APT::FTPArchive::Release::Architecture=$arch" \
- release \
- "$APTDIR/dists/$codename/$component/binary-$arch" > "$APTDIR/dists/$codename/$component/binary-$arch/Release"
- done
-done
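
On the consuming side, the repository produced by release-deb maps onto an ordinary apt source. A sketch of the client setup, using the host name and suite from the comments above; the /gpg key path mirrors the file exported by sign-repos and should be treated as an assumption:

    # Sketch: consume the repository laid out above from a client machine.
    echo "deb http://apt.dockerproject.org/repo ubuntu-trusty main" \
        | sudo tee /etc/apt/sources.list.d/docker.list
    curl -fsSL https://apt.dockerproject.org/gpg | sudo apt-key add -
    sudo apt-get update
    apt-cache policy docker-engine
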
diff --git a/components/engine/hack/make/release-rpm b/components/engine/hack/make/release-rpm
deleted file mode 100755
index 477d15bee9..0000000000
--- a/components/engine/hack/make/release-rpm
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script creates the yum repos for the .rpm files generated by hack/make/build-rpm
-#
-# The following can then be used as a yum repo:
-# http://yum.dockerproject.org/repo/$release/$distro/$distro-version
-#
-# For example:
-# http://yum.dockerproject.org/repo/main/fedora/23
-# http://yum.dockerproject.org/repo/testing/centos/7
-# http://yum.dockerproject.org/repo/experimental/fedora/23
-# http://yum.dockerproject.org/repo/main/centos/7
-#
-# ... and so on and so forth for the builds created by hack/make/build-rpm
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-YUMDIR=$DOCKER_RELEASE_DIR/yum/repo
-: ${GPG_KEYID:=releasedocker}
-
-# get the release
-release="main"
-
-if [[ "$VERSION" == *-rc* ]]; then
- release="testing"
-fi
-
-if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- release="experimental"
-fi
-
-# Setup the yum repo
-for dir in bundles/$VERSION/build-rpm/*/; do
- version="$(basename "$dir")"
- suite="${version##*-}"
- distro="${version%-*}"
-
- REPO=$YUMDIR/$release/$distro
-
- # if the directory does not exist, initialize the yum repo
- if [[ ! -d $REPO/$suite/Packages ]]; then
- mkdir -p "$REPO/$suite/Packages"
-
- createrepo --pretty "$REPO/$suite"
- fi
-
- # path to rpms
- RPMFILE=( "bundles/$VERSION/build-rpm/$version/RPMS/"*"/docker-engine"*.rpm "bundles/$VERSION/build-rpm/$version/SRPMS/docker-engine"*.rpm )
-
- # if we have a $GPG_PASSPHRASE we may as well
- # sign the rpms before adding to repo
- if [ ! -z $GPG_PASSPHRASE ]; then
- # export our key to rpm import
- gpg --armor --export "$GPG_KEYID" > /tmp/gpg
- rpm --import /tmp/gpg
-
- # sign the rpms
- echo "yes" | setsid rpm \
- --define "_gpg_name $GPG_KEYID" \
- --define "_signature gpg" \
- --define "__gpg_check_password_cmd /bin/true" \
- --define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '$GPG_PASSPHRASE' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
- --resign "${RPMFILE[@]}"
- fi
-
- # copy the rpms to the packages folder
- cp "${RPMFILE[@]}" "$REPO/$suite/Packages"
-
- # update the repo
- createrepo --pretty --update "$REPO/$suite"
-done
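
A quick way to exercise the signed output of release-rpm is to import the repository's public key and check a package signature locally. The key URL mirrors the gpg file exported by sign-repos and is an assumption; the package glob is illustrative:

    #!/usr/bin/env bash
    # Sketch: verify that a built rpm carries the expected signature.
    set -e
    rpm --import https://yum.dockerproject.org/gpg
    rpm --checksig docker-engine-*.rpm
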
diff --git a/components/engine/hack/make/sign-repos b/components/engine/hack/make/sign-repos
deleted file mode 100755
index 61dbd7acce..0000000000
--- a/components/engine/hack/make/sign-repos
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/usr/bin/env bash
-
-# This script signs the deliverables from release-deb and release-rpm
-# with a designated GPG key.
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-: ${GPG_KEYID:=releasedocker}
-APTDIR=$DOCKER_RELEASE_DIR/apt/repo
-YUMDIR=$DOCKER_RELEASE_DIR/yum/repo
-
-if [ -z "$GPG_PASSPHRASE" ]; then
- echo >&2 'you need to set GPG_PASSPHRASE in order to sign artifacts'
- exit 1
-fi
-
-if [ ! -d $APTDIR ] && [ ! -d $YUMDIR ]; then
- echo >&2 'release-rpm or release-deb must be run before sign-repos'
- exit 1
-fi
-
-sign_packages(){
- # sign apt repo metadata
- if [ -d $APTDIR ]; then
- # create file with public key
- gpg --armor --export "$GPG_KEYID" > "$DOCKER_RELEASE_DIR/apt/gpg"
-
- # sign the repo metadata
- for F in $(find $APTDIR -name Release); do
- if test "$F" -nt "$F.gpg" ; then
- gpg -u "$GPG_KEYID" --passphrase "$GPG_PASSPHRASE" \
- --digest-algo "sha512" \
- --armor --sign --detach-sign \
- --batch --yes \
- --output "$F.gpg" "$F"
- fi
- inRelease="$(dirname "$F")/InRelease"
- if test "$F" -nt "$inRelease" ; then
- gpg -u "$GPG_KEYID" --passphrase "$GPG_PASSPHRASE" \
- --digest-algo "sha512" \
- --clearsign \
- --batch --yes \
- --output "$inRelease" "$F"
- fi
- done
- fi
-
- # sign yum repo metadata
- if [ -d $YUMDIR ]; then
- # create file with public key
- gpg --armor --export "$GPG_KEYID" > "$DOCKER_RELEASE_DIR/yum/gpg"
-
- # sign the repo metadata
- for F in $(find $YUMDIR -name repomd.xml); do
- if test "$F" -nt "$F.asc" ; then
- gpg -u "$GPG_KEYID" --passphrase "$GPG_PASSPHRASE" \
- --digest-algo "sha512" \
- --armor --sign --detach-sign \
- --batch --yes \
- --output "$F.asc" "$F"
- fi
- done
- fi
-}
-
-sign_packages
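
The detached Release.gpg and clearsigned InRelease files written above can be verified with plain gpg before anything is published. A minimal check, assuming the exported public key and the $APTDIR layout used in this script; the dist path is illustrative:

    #!/usr/bin/env bash
    # Sketch: check the signatures produced for one apt dist before publishing.
    set -e
    dist="apt/repo/dists/ubuntu-trusty"

    gpg --import apt/gpg                               # public key written by sign_packages
    gpg --verify "$dist/Release.gpg" "$dist/Release"   # detached signature
    gpg --verify "$dist/InRelease"                     # clearsigned variant
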
diff --git a/components/engine/hack/make/test-integration b/components/engine/hack/make/test-integration
index 0100ac9cc7..c807cd4978 100755
--- a/components/engine/hack/make/test-integration
+++ b/components/engine/hack/make/test-integration
@@ -1,7 +1,6 @@
#!/usr/bin/env bash
set -e -o pipefail
-source "${MAKEDIR}/.go-autogen"
source hack/make/.integration-test-helpers
(
diff --git a/components/engine/hack/make/test-old-apt-repo b/components/engine/hack/make/test-old-apt-repo
deleted file mode 100755
index e92b20ef06..0000000000
--- a/components/engine/hack/make/test-old-apt-repo
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-versions=( 1.3.3 1.4.1 1.5.0 1.6.2 )
-
-install() {
- local version=$1
- local tmpdir=$(mktemp -d /tmp/XXXXXXXXXX)
- local dockerfile="${tmpdir}/Dockerfile"
- cat <<-EOF > "$dockerfile"
- FROM debian:jessie
- ENV VERSION ${version}
- RUN apt-get update && apt-get install -y \
- apt-transport-https \
- ca-certificates \
- --no-install-recommends
- RUN echo "deb https://get.docker.com/ubuntu docker main" > /etc/apt/sources.list.d/docker.list
- RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
- --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
- RUN apt-get update && apt-get install -y \
- lxc-docker-\${VERSION}
- EOF
-
- docker build --rm --force-rm --no-cache -t docker-old-repo:${version} -f $dockerfile $tmpdir
-}
-
-for v in "${versions[@]}"; do
- install "$v"
-done
diff --git a/components/engine/hack/make/ubuntu b/components/engine/hack/make/ubuntu
deleted file mode 100644
index ad3f1d78bb..0000000000
--- a/components/engine/hack/make/ubuntu
+++ /dev/null
@@ -1,190 +0,0 @@
-#!/usr/bin/env bash
-
-PKGVERSION="${VERSION//-/'~'}"
-# if we have a "-dev" suffix or have changes in Git, let's make this package version more complex so it works better
-if [[ "$VERSION" == *-dev ]] || [ -n "$(git status --porcelain)" ]; then
- GIT_UNIX="$(git log -1 --pretty='%at')"
- GIT_DATE="$(date --date "@$GIT_UNIX" +'%Y%m%d.%H%M%S')"
- GIT_COMMIT="$(git log -1 --pretty='%h')"
- GIT_VERSION="git${GIT_DATE}.0.${GIT_COMMIT}"
- # GIT_VERSION is now something like 'git20150128.112847.0.17e840a'
- PKGVERSION="$PKGVERSION~$GIT_VERSION"
-fi
-
-# $ dpkg --compare-versions 1.5.0 gt 1.5.0~rc1 && echo true || echo false
-# true
-# $ dpkg --compare-versions 1.5.0~rc1 gt 1.5.0~git20150128.112847.17e840a && echo true || echo false
-# true
-# $ dpkg --compare-versions 1.5.0~git20150128.112847.17e840a gt 1.5.0~dev~git20150128.112847.17e840a && echo true || echo false
-# true
-
-# ie, 1.5.0 > 1.5.0~rc1 > 1.5.0~git20150128.112847.17e840a > 1.5.0~dev~git20150128.112847.17e840a
-
-PACKAGE_ARCHITECTURE="$(dpkg-architecture -qDEB_HOST_ARCH)"
-PACKAGE_URL="https://www.docker.com/"
-PACKAGE_MAINTAINER="support@docker.com"
-PACKAGE_DESCRIPTION="Linux container runtime
-Docker complements LXC with a high-level API which operates at the process
-level. It runs unix processes with strong guarantees of isolation and
-repeatability across servers.
-Docker is a great building block for automating distributed systems:
-large-scale web deployments, database clusters, continuous deployment systems,
-private PaaS, service-oriented architectures, etc."
-PACKAGE_LICENSE="Apache-2.0"
-
-# Build docker as an ubuntu package using FPM and REPREPRO (sue me).
-# bundle_binary must be called first.
-bundle_ubuntu() {
- DIR="$ABS_DEST/build"
-
- # Include our udev rules
- mkdir -p "$DIR/etc/udev/rules.d"
- cp contrib/udev/80-docker.rules "$DIR/etc/udev/rules.d/"
-
- # Include our init scripts
- mkdir -p "$DIR/etc/init"
- cp contrib/init/upstart/docker.conf "$DIR/etc/init/"
- mkdir -p "$DIR/etc/init.d"
- cp contrib/init/sysvinit-debian/docker "$DIR/etc/init.d/"
- mkdir -p "$DIR/etc/default"
- cp contrib/init/sysvinit-debian/docker.default "$DIR/etc/default/docker"
- mkdir -p "$DIR/lib/systemd/system"
- cp contrib/init/systemd/docker.{service,socket} "$DIR/lib/systemd/system/"
-
- # Include contributed completions
- mkdir -p "$DIR/etc/bash_completion.d"
- cp contrib/completion/bash/docker "$DIR/etc/bash_completion.d/"
- mkdir -p "$DIR/usr/share/zsh/vendor-completions"
- cp contrib/completion/zsh/_docker "$DIR/usr/share/zsh/vendor-completions/"
- mkdir -p "$DIR/etc/fish/completions"
- cp contrib/completion/fish/docker.fish "$DIR/etc/fish/completions/"
-
- # Include man pages
- make manpages
- manRoot="$DIR/usr/share/man"
- mkdir -p "$manRoot"
- for manDir in man/man?; do
- manBase="$(basename "$manDir")" # "man1"
- for manFile in "$manDir"/*; do
- manName="$(basename "$manFile")" # "docker-build.1"
- mkdir -p "$manRoot/$manBase"
- gzip -c "$manFile" > "$manRoot/$manBase/$manName.gz"
- done
- done
-
- # Copy the binary
- # This will fail if the binary bundle hasn't been built
- mkdir -p "$DIR/usr/bin"
- cp "$DEST/../binary/docker-$VERSION" "$DIR/usr/bin/docker"
-
- # Generate postinst/prerm/postrm scripts
- cat > "$DEST/postinst" <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-if [ "$1" = 'configure' ] && [ -z "$2" ]; then
- if ! getent group docker > /dev/null; then
- groupadd --system docker
- fi
-fi
-
-if ! { [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; }; then
- # we only need to do this if upstart isn't in charge
- update-rc.d docker defaults > /dev/null || true
-fi
-if [ -n "$2" ]; then
- _dh_action=restart
-else
- _dh_action=start
-fi
-service docker $_dh_action 2>/dev/null || true
-
-#DEBHELPER#
-EOF
- cat > "$DEST/prerm" <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-service docker stop 2>/dev/null || true
-
-#DEBHELPER#
-EOF
- cat > "$DEST/postrm" <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-if [ "$1" = "purge" ] ; then
- update-rc.d docker remove > /dev/null || true
-fi
-
-# In case this system is running systemd, we make systemd reload the unit files
-# to pick up changes.
-if [ -d /run/systemd/system ] ; then
- systemctl --system daemon-reload > /dev/null || true
-fi
-
-#DEBHELPER#
-EOF
- # TODO swaths of these were borrowed from debhelper's auto-inserted stuff, because we're still using fpm - we need to use debhelper instead, and somehow reconcile Ubuntu that way
- chmod +x "$DEST/postinst" "$DEST/prerm" "$DEST/postrm"
-
- (
- # switch directories so we create *.deb in the right folder
- cd "$DEST"
-
- # create lxc-docker-VERSION package
- fpm -s dir -C "$DIR" \
- --name "lxc-docker-$VERSION" --version "$PKGVERSION" \
- --after-install "$ABS_DEST/postinst" \
- --before-remove "$ABS_DEST/prerm" \
- --after-remove "$ABS_DEST/postrm" \
- --architecture "$PACKAGE_ARCHITECTURE" \
- --prefix / \
- --depends iptables \
- --deb-recommends aufs-tools \
- --deb-recommends ca-certificates \
- --deb-recommends git \
- --deb-recommends xz-utils \
- --deb-recommends 'cgroupfs-mount | cgroup-lite' \
- --deb-suggests apparmor \
- --description "$PACKAGE_DESCRIPTION" \
- --maintainer "$PACKAGE_MAINTAINER" \
- --conflicts docker \
- --conflicts docker.io \
- --conflicts lxc-docker-virtual-package \
- --provides lxc-docker \
- --provides lxc-docker-virtual-package \
- --replaces lxc-docker \
- --replaces lxc-docker-virtual-package \
- --url "$PACKAGE_URL" \
- --license "$PACKAGE_LICENSE" \
- --config-files /etc/udev/rules.d/80-docker.rules \
- --config-files /etc/init/docker.conf \
- --config-files /etc/init.d/docker \
- --config-files /etc/default/docker \
- --deb-compression gz \
- -t deb .
- # TODO replace "Suggests: cgroup-lite" with "Recommends: cgroupfs-mount | cgroup-lite" once cgroupfs-mount is available
-
- # create empty lxc-docker wrapper package
- fpm -s empty \
- --name lxc-docker --version "$PKGVERSION" \
- --architecture "$PACKAGE_ARCHITECTURE" \
- --depends lxc-docker-$VERSION \
- --description "$PACKAGE_DESCRIPTION" \
- --maintainer "$PACKAGE_MAINTAINER" \
- --url "$PACKAGE_URL" \
- --license "$PACKAGE_LICENSE" \
- --deb-compression gz \
- -t deb
- )
-
- # clean up after ourselves so we have a clean output directory
- rm "$DEST/postinst" "$DEST/prerm" "$DEST/postrm"
- rm -r "$DIR"
-}
-
-bundle_ubuntu
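
Since fpm assembles the .deb straight from a staged directory tree, the quickest sanity check on its output is to inspect the control metadata and file list before publishing. For example, assuming the packages were written to the current directory (the filename glob is illustrative):

    #!/usr/bin/env bash
    # Sketch: inspect an fpm-built package before it is shipped.
    set -e
    deb=$(ls lxc-docker-*_*.deb | head -n 1)

    dpkg-deb --info     "$deb"   # control fields: Depends, Conflicts, Maintainer, ...
    dpkg-deb --contents "$deb"   # payload, e.g. /usr/bin/docker and the init scripts
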
diff --git a/components/engine/hack/make/update-apt-repo b/components/engine/hack/make/update-apt-repo
deleted file mode 100755
index 3a80c94ca3..0000000000
--- a/components/engine/hack/make/update-apt-repo
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script updates the apt repo in $DOCKER_RELEASE_DIR/apt/repo.
-# This script is a "fix all" for any sort of problems that might have occurred with
-# the Release or Package files in the repo.
-# It should only be used in the rare case of extreme emergencies to regenerate
-# Release and Package files for the apt repo.
-#
-# NOTE: Always be sure to re-sign the repo with hack/make/sign-repos after running
-# this script.
-
-: ${DOCKER_RELEASE_DIR:=$DEST}
-APTDIR=$DOCKER_RELEASE_DIR/apt/repo
-
-# supported arches/sections
-arches=( amd64 i386 )
-
-# Preserve existing components but don't add any non-existing ones
-for component in main testing experimental ; do
- if ls "$APTDIR/dists/*/$component" >/dev/null 2>&1 ; then
- components+=( $component )
- fi
-done
-
-dists=( $(find "${APTDIR}/dists" -maxdepth 1 -mindepth 1 -type d) )
-
-# override component if it is set
-if [ "$COMPONENT" ]; then
- components=( $COMPONENT )
-fi
-
-# release the debs
-for version in "${dists[@]}"; do
- for component in "${components[@]}"; do
- codename="${version//debootstrap-}"
-
- # update the filelist for this codename/component
- find "$APTDIR/pool/$component" \
- -name *~${codename#*-}*.deb > "$APTDIR/dists/$codename/$component/filelist"
- done
-done
-
-# run the apt-ftparchive commands so we can have pinning
-apt-ftparchive generate "$APTDIR/conf/apt-ftparchive.conf"
-
-for dist in "${dists[@]}"; do
- version=$(basename "$dist")
- for component in "${components[@]}"; do
- codename="${version//debootstrap-}"
-
- apt-ftparchive \
- -o "APT::FTPArchive::Release::Codename=$codename" \
- -o "APT::FTPArchive::Release::Suite=$codename" \
- -c "$APTDIR/conf/docker-engine-release.conf" \
- release \
- "$APTDIR/dists/$codename" > "$APTDIR/dists/$codename/Release"
-
- for arch in "${arches[@]}"; do
- apt-ftparchive \
- -o "APT::FTPArchive::Release::Codename=$codename" \
- -o "APT::FTPArchive::Release::Suite=$codename" \
- -o "APT::FTPArchive::Release::Component=$component" \
- -o "APT::FTPArchive::Release::Architecture=$arch" \
- -c "$APTDIR/conf/docker-engine-release.conf" \
- release \
- "$APTDIR/dists/$codename/$component/binary-$arch" > "$APTDIR/dists/$codename/$component/binary-$arch/Release"
- done
- done
-done
diff --git a/components/engine/hack/make/win b/components/engine/hack/make/win
deleted file mode 100644
index bc6d5108b8..0000000000
--- a/components/engine/hack/make/win
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# explicit list of os/arch combos that support being a daemon
-declare -A daemonSupporting
-daemonSupporting=(
- [linux/amd64]=1
- [windows/amd64]=1
-)
-platform="windows/amd64"
-export DEST="$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
-mkdir -p "$DEST"
-ABS_DEST="$(cd "$DEST" && pwd -P)"
-export GOOS=${platform%/*}
-export GOARCH=${platform##*/}
-if [ -z "${daemonSupporting[$platform]}" ]; then
- export LDFLAGS_STATIC_DOCKER="" # we just need a simple client for these platforms
- export BUILDFLAGS=( "${ORIG_BUILDFLAGS[@]/ daemon/}" ) # remove the "daemon" build tag from platforms that aren't supported
-fi
-source "${MAKEDIR}/binary"
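
The win bundle is essentially a cross-compilation wrapper: it pins GOOS and GOARCH to windows/amd64, adjusts DEST, and re-sources the regular binary bundle, which applies the right build flags. Stripped to its essence, and assuming the standard hack/make.sh entry point, the equivalent by hand is roughly:

    #!/usr/bin/env bash
    # Sketch: select the target platform via GOOS/GOARCH and run the normal
    # binary bundle; all flag handling stays inside the bundle itself.
    export GOOS=windows
    export GOARCH=amd64
    ./hack/make.sh binary
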
diff --git a/components/engine/hack/release.sh b/components/engine/hack/release.sh
deleted file mode 100755
index 4a4f402f5a..0000000000
--- a/components/engine/hack/release.sh
+++ /dev/null
@@ -1,313 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script looks for bundles built by make.sh, and releases them on a
-# public S3 bucket.
-#
-# Bundles should be available for the VERSION string passed as argument.
-#
-# The correct way to call this script is inside a container built by the
-# official Dockerfile at the root of the Docker source code. The Dockerfile,
-# make.sh and release.sh should all be from the same source code revision.
-
-set -o pipefail
-
-# Print a usage message and exit.
-usage() {
- cat >&2 <<'EOF'
-To run, I need:
-- to be in a container generated by the Dockerfile at the top of the Docker
- repository;
-- to be provided with the location of an S3 bucket and path, in
- environment variables AWS_S3_BUCKET and AWS_S3_BUCKET_PATH (default: '');
-- to be provided with AWS credentials for this S3 bucket, in environment
- variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY;
-- a generous amount of good will and nice manners.
-The canonical way to run me is to run the image produced by the Dockerfile: e.g.:
-
-docker run -e AWS_S3_BUCKET=test.docker.com \
- -e AWS_ACCESS_KEY_ID \
- -e AWS_SECRET_ACCESS_KEY \
- -e AWS_DEFAULT_REGION \
- -it --privileged \
- docker ./hack/release.sh
-EOF
- exit 1
-}
-
-[ "$AWS_S3_BUCKET" ] || usage
-[ "$AWS_ACCESS_KEY_ID" ] || usage
-[ "$AWS_SECRET_ACCESS_KEY" ] || usage
-[ -d /go/src/github.com/docker/docker ] || usage
-cd /go/src/github.com/docker/docker
-[ -x hack/make.sh ] || usage
-
-export AWS_DEFAULT_REGION
-: ${AWS_DEFAULT_REGION:=us-west-1}
-
-AWS_CLI=${AWS_CLI:-'aws'}
-
-RELEASE_BUNDLES=(
- binary
- cross
- tgz
-)
-
-if [ "$1" != '--release-regardless-of-test-failure' ]; then
- RELEASE_BUNDLES=(
- test-unit
- "${RELEASE_BUNDLES[@]}"
- test-integration
- )
-fi
-
-VERSION=$(< VERSION)
-BUCKET=$AWS_S3_BUCKET
-BUCKET_PATH=$BUCKET
-[[ -n "$AWS_S3_BUCKET_PATH" ]] && BUCKET_PATH+=/$AWS_S3_BUCKET_PATH
-
-if command -v git &> /dev/null && git rev-parse &> /dev/null; then
- if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
- echo "You cannot run the release script on a repo with uncommitted changes"
- usage
- fi
-fi
-
-# These are the 2 keys we've used to sign the deb's
-# release (get.docker.com)
-# GPG_KEY="36A1D7869245C8950F966E92D8576A8BA88D21E9"
-# test (test.docker.com)
-# GPG_KEY="740B314AE3941731B942C66ADF4FD13717AAD7D6"
-
-setup_s3() {
- echo "Setting up S3"
- # Try creating the bucket. Ignore errors (it might already exist).
- $AWS_CLI s3 mb "s3://$BUCKET" 2>/dev/null || true
- # Check access to the bucket.
- $AWS_CLI s3 ls "s3://$BUCKET" >/dev/null
- # Make the bucket accessible through website endpoints.
- $AWS_CLI s3 website --index-document index --error-document error "s3://$BUCKET"
-}
-
-# write_to_s3 uploads the contents of standard input to the specified S3 url.
-write_to_s3() {
- DEST=$1
- F=`mktemp`
- cat > "$F"
- $AWS_CLI s3 cp --acl public-read --content-type 'text/plain' "$F" "$DEST"
- rm -f "$F"
-}
-
-s3_url() {
- case "$BUCKET" in
- get.docker.com|test.docker.com|experimental.docker.com)
- echo "https://$BUCKET_PATH"
- ;;
- *)
- BASE_URL="http://${BUCKET}.s3-website-${AWS_DEFAULT_REGION}.amazonaws.com"
- if [[ -n "$AWS_S3_BUCKET_PATH" ]] ; then
- echo "$BASE_URL/$AWS_S3_BUCKET_PATH"
- else
- echo "$BASE_URL"
- fi
- ;;
- esac
-}
-
-build_all() {
- echo "Building release"
- if ! ./hack/make.sh "${RELEASE_BUNDLES[@]}"; then
- echo >&2
- echo >&2 'The build or tests appear to have failed.'
- echo >&2
- echo >&2 'You, as the release maintainer, now have a couple options:'
- echo >&2 '- delay release and fix issues'
- echo >&2 '- delay release and fix issues'
- echo >&2 '- did we mention how important this is? issues need fixing :)'
- echo >&2
- echo >&2 'As a final LAST RESORT, you (because only you, the release maintainer,'
- echo >&2 ' really knows all the hairy problems at hand with the current release'
- echo >&2 ' issues) may bypass this checking by running this script again with the'
- echo >&2 ' single argument of "--release-regardless-of-test-failure", which will skip'
- echo >&2 ' running the test suite, and will only build the binaries and packages. Please'
- echo >&2 ' avoid using this if at all possible.'
- echo >&2
- echo >&2 'Regardless, we cannot stress enough the scarcity with which this bypass'
- echo >&2 ' should be used. If there are release issues, we should always err on the'
- echo >&2 ' side of caution.'
- echo >&2
- exit 1
- fi
-}
-
-upload_release_build() {
- src="$1"
- dst="$2"
- latest="$3"
-
- echo
- echo "Uploading $src"
- echo " to $dst"
- echo
- $AWS_CLI s3 cp --follow-symlinks --acl public-read "$src" "$dst"
- if [ "$latest" ]; then
- echo
- echo "Copying to $latest"
- echo
- $AWS_CLI s3 cp --acl public-read "$dst" "$latest"
- fi
-
- # get hash files too (see hash_files() in hack/make.sh)
- for hashAlgo in md5 sha256; do
- if [ -e "$src.$hashAlgo" ]; then
- echo
- echo "Uploading $src.$hashAlgo"
- echo " to $dst.$hashAlgo"
- echo
- $AWS_CLI s3 cp --follow-symlinks --acl public-read --content-type='text/plain' "$src.$hashAlgo" "$dst.$hashAlgo"
- if [ "$latest" ]; then
- echo
- echo "Copying to $latest.$hashAlgo"
- echo
- $AWS_CLI s3 cp --acl public-read "$dst.$hashAlgo" "$latest.$hashAlgo"
- fi
- fi
- done
-}
-
-release_build() {
- echo "Releasing binaries"
- GOOS=$1
- GOARCH=$2
-
- binDir=bundles/$VERSION/cross/$GOOS/$GOARCH
- tgzDir=bundles/$VERSION/tgz/$GOOS/$GOARCH
- binary=docker-$VERSION
- zipExt=".tgz"
- binaryExt=""
- tgz=$binary$zipExt
-
- latestBase=
- if [ -z "$NOLATEST" ]; then
- latestBase=docker-latest
- fi
-
- # we need to map our GOOS and GOARCH to uname values
- # see https://en.wikipedia.org/wiki/Uname
- # ie, GOOS=linux -> "uname -s"=Linux
-
- s3Os=$GOOS
- case "$s3Os" in
- darwin)
- s3Os=Darwin
- ;;
- freebsd)
- s3Os=FreeBSD
- ;;
- linux)
- s3Os=Linux
- ;;
- windows)
- # this is windows use the .zip and .exe extensions for the files.
- s3Os=Windows
- zipExt=".zip"
- binaryExt=".exe"
- tgz=$binary$zipExt
- binary+=$binaryExt
- ;;
- *)
- echo >&2 "error: can't convert $s3Os to an appropriate value for 'uname -s'"
- exit 1
- ;;
- esac
-
- s3Arch=$GOARCH
- case "$s3Arch" in
- amd64)
- s3Arch=x86_64
- ;;
- 386)
- s3Arch=i386
- ;;
- arm)
- s3Arch=armel
- # someday, we might potentially support multiple GOARM values, in which case we might get armhf here too
- ;;
- *)
- echo >&2 "error: can't convert $s3Arch to an appropriate value for 'uname -m'"
- exit 1
- ;;
- esac
-
- s3Dir="s3://$BUCKET_PATH/builds/$s3Os/$s3Arch"
- # latest=
- latestTgz=
- if [ "$latestBase" ]; then
- # commented out since we aren't uploading binaries right now.
- # latest="$s3Dir/$latestBase$binaryExt"
- # we don't include the $binaryExt because we don't want docker.exe.zip
- latestTgz="$s3Dir/$latestBase$zipExt"
- fi
-
- if [ ! -f "$tgzDir/$tgz" ]; then
- echo >&2 "error: can't find $tgzDir/$tgz - was it packaged properly?"
- exit 1
- fi
- # disable binary uploads for now. Only providing tgz downloads
- # upload_release_build "$binDir/$binary" "$s3Dir/$binary" "$latest"
- upload_release_build "$tgzDir/$tgz" "$s3Dir/$tgz" "$latestTgz"
-}
-
-# Upload binaries and tgz files to S3
-release_binaries() {
- [ "$(find bundles/$VERSION -path "bundles/$VERSION/cross/*/*/docker-$VERSION")" != "" ] || {
- echo >&2 './hack/make.sh must be run before release_binaries'
- exit 1
- }
-
- for d in bundles/$VERSION/cross/*/*; do
- GOARCH="$(basename "$d")"
- GOOS="$(basename "$(dirname "$d")")"
- release_build "$GOOS" "$GOARCH"
- done
-
- # TODO create redirect from builds/*/i686 to builds/*/i386
-
- cat <.?/\"';:` "
- res := make([]byte, n)
- for i := 0; i < n; i++ {
- res[i] = chars[rand.Intn(len(chars))]
- }
- return string(res)
-}
-
-// Ellipsis truncates a string to fit within maxlen, and appends ellipsis (...).
-// For maxlen of 3 and lower, no ellipsis is appended.
-func Ellipsis(s string, maxlen int) string {
- r := []rune(s)
- if len(r) <= maxlen {
- return s
- }
- if maxlen <= 3 {
- return string(r[:maxlen])
- }
- return string(r[:maxlen-3]) + "..."
-}
-
-// Truncate truncates a string to maxlen.
-func Truncate(s string, maxlen int) string {
- r := []rune(s)
- if len(r) <= maxlen {
- return s
- }
- return string(r[:maxlen])
-}
-
-// InSlice tests whether a string is contained in a slice of strings or not.
-// Comparison is case insensitive
-func InSlice(slice []string, s string) bool {
- for _, ss := range slice {
- if strings.ToLower(s) == strings.ToLower(ss) {
- return true
- }
- }
- return false
-}
-
-func quote(word string, buf *bytes.Buffer) {
- // Bail out early for "simple" strings
- if word != "" && !strings.ContainsAny(word, "\\'\"`${[|&;<>()~*?! \t\n") {
- buf.WriteString(word)
- return
- }
-
- buf.WriteString("'")
-
- for i := 0; i < len(word); i++ {
- b := word[i]
- if b == '\'' {
- // Replace literal ' with a close ', a \', and an open '
- buf.WriteString("'\\''")
- } else {
- buf.WriteByte(b)
- }
- }
-
- buf.WriteString("'")
-}
-
-// ShellQuoteArguments takes a list of strings and escapes them so they will be
-// handled right when passed as arguments to a program via a shell
-func ShellQuoteArguments(args []string) string {
- var buf bytes.Buffer
- for i, arg := range args {
- if i != 0 {
- buf.WriteByte(' ')
- }
- quote(arg, &buf)
- }
- return buf.String()
-}
diff --git a/components/engine/pkg/stringutils/stringutils_test.go b/components/engine/pkg/stringutils/stringutils_test.go
deleted file mode 100644
index 15b3cf8e86..0000000000
--- a/components/engine/pkg/stringutils/stringutils_test.go
+++ /dev/null
@@ -1,113 +0,0 @@
-package stringutils
-
-import "testing"
-
-func testLengthHelper(generator func(int) string, t *testing.T) {
- expectedLength := 20
- s := generator(expectedLength)
- if len(s) != expectedLength {
- t.Fatalf("Length of %s was %d but expected length %d", s, len(s), expectedLength)
- }
-}
-
-func testUniquenessHelper(generator func(int) string, t *testing.T) {
- repeats := 25
- set := make(map[string]struct{}, repeats)
- for i := 0; i < repeats; i = i + 1 {
- str := generator(64)
- if len(str) != 64 {
- t.Fatalf("Id returned is incorrect: %s", str)
- }
- if _, ok := set[str]; ok {
- t.Fatalf("Random number is repeated")
- }
- set[str] = struct{}{}
- }
-}
-
-func isASCII(s string) bool {
- for _, c := range s {
- if c > 127 {
- return false
- }
- }
- return true
-}
-
-func TestGenerateRandomAsciiStringLength(t *testing.T) {
- testLengthHelper(GenerateRandomASCIIString, t)
-}
-
-func TestGenerateRandomAsciiStringUniqueness(t *testing.T) {
- testUniquenessHelper(GenerateRandomASCIIString, t)
-}
-
-func TestGenerateRandomAsciiStringIsAscii(t *testing.T) {
- str := GenerateRandomASCIIString(64)
- if !isASCII(str) {
- t.Fatalf("%s contained non-ascii characters", str)
- }
-}
-
-func TestEllipsis(t *testing.T) {
- str := "t🐳ststring"
- newstr := Ellipsis(str, 3)
- if newstr != "t🐳s" {
- t.Fatalf("Expected t🐳s, got %s", newstr)
- }
- newstr = Ellipsis(str, 8)
- if newstr != "t🐳sts..." {
- t.Fatalf("Expected tests..., got %s", newstr)
- }
- newstr = Ellipsis(str, 20)
- if newstr != "t🐳ststring" {
- t.Fatalf("Expected t🐳ststring, got %s", newstr)
- }
-}
-
-func TestTruncate(t *testing.T) {
- str := "t🐳ststring"
- newstr := Truncate(str, 4)
- if newstr != "t🐳st" {
- t.Fatalf("Expected t🐳st, got %s", newstr)
- }
- newstr = Truncate(str, 20)
- if newstr != "t🐳ststring" {
- t.Fatalf("Expected t🐳ststring, got %s", newstr)
- }
-}
-
-func TestInSlice(t *testing.T) {
- slice := []string{"t🐳st", "in", "slice"}
-
- test := InSlice(slice, "t🐳st")
- if !test {
- t.Fatalf("Expected string t🐳st to be in slice")
- }
- test = InSlice(slice, "SLICE")
- if !test {
- t.Fatalf("Expected string SLICE to be in slice")
- }
- test = InSlice(slice, "notinslice")
- if test {
- t.Fatalf("Expected string notinslice not to be in slice")
- }
-}
-
-func TestShellQuoteArgumentsEmpty(t *testing.T) {
- actual := ShellQuoteArguments([]string{})
- expected := ""
- if actual != expected {
- t.Fatalf("Expected an empty string")
- }
-}
-
-func TestShellQuoteArguments(t *testing.T) {
- simpleString := "simpleString"
- complexString := "This is a 'more' complex $tring with some special char *"
- actual := ShellQuoteArguments([]string{simpleString, complexString})
- expected := "simpleString 'This is a '\\''more'\\'' complex $tring with some special char *'"
- if actual != expected {
- t.Fatalf("Expected \"%v\", got \"%v\"", expected, actual)
- }
-}
diff --git a/components/engine/pkg/sysinfo/sysinfo_unix.go b/components/engine/pkg/sysinfo/sysinfo_unix.go
index 45f3ef1c65..beac32840c 100644
--- a/components/engine/pkg/sysinfo/sysinfo_unix.go
+++ b/components/engine/pkg/sysinfo/sysinfo_unix.go
@@ -1,8 +1,8 @@
-// +build !linux,!solaris,!windows
+// +build !linux,!windows
package sysinfo
-// New returns an empty SysInfo for non linux nor solaris for now.
+// New returns an empty SysInfo for non-linux platforms for now.
func New(quiet bool) *SysInfo {
sysInfo := &SysInfo{}
return sysInfo
diff --git a/components/engine/pkg/system/meminfo_unsupported.go b/components/engine/pkg/system/meminfo_unsupported.go
index 3ce019dffd..82ddd30c1b 100644
--- a/components/engine/pkg/system/meminfo_unsupported.go
+++ b/components/engine/pkg/system/meminfo_unsupported.go
@@ -1,4 +1,4 @@
-// +build !linux,!windows,!solaris
+// +build !linux,!windows
package system
diff --git a/components/engine/pkg/system/process_unix.go b/components/engine/pkg/system/process_unix.go
index 26c8b42c17..02c138235a 100644
--- a/components/engine/pkg/system/process_unix.go
+++ b/components/engine/pkg/system/process_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris darwin
+// +build linux freebsd darwin
package system
diff --git a/components/engine/pkg/term/tc.go b/components/engine/pkg/term/tc.go
index 6d2dfd3a8a..19dbb1cb11 100644
--- a/components/engine/pkg/term/tc.go
+++ b/components/engine/pkg/term/tc.go
@@ -1,5 +1,4 @@
// +build !windows
-// +build !solaris !cgo
package term
diff --git a/components/engine/pkg/term/winsize.go b/components/engine/pkg/term/winsize.go
index 85c4d9d67e..1ef98d5996 100644
--- a/components/engine/pkg/term/winsize.go
+++ b/components/engine/pkg/term/winsize.go
@@ -1,4 +1,4 @@
-// +build !solaris,!windows
+// +build !windows
package term
diff --git a/components/engine/pkg/testutil/pkg.go b/components/engine/pkg/testutil/pkg.go
deleted file mode 100644
index 110b2e6a79..0000000000
--- a/components/engine/pkg/testutil/pkg.go
+++ /dev/null
@@ -1 +0,0 @@
-package testutil
diff --git a/components/engine/plugin/manager.go b/components/engine/plugin/manager.go
index e0ac6e85fb..f144e8208b 100644
--- a/components/engine/plugin/manager.go
+++ b/components/engine/plugin/manager.go
@@ -107,14 +107,10 @@ func NewManager(config ManagerConfig) (*Manager, error) {
manager := &Manager{
config: config,
}
- if err := os.MkdirAll(manager.config.Root, 0700); err != nil {
- return nil, errors.Wrapf(err, "failed to mkdir %v", manager.config.Root)
- }
- if err := os.MkdirAll(manager.config.ExecRoot, 0700); err != nil {
- return nil, errors.Wrapf(err, "failed to mkdir %v", manager.config.ExecRoot)
- }
- if err := os.MkdirAll(manager.tmpDir(), 0700); err != nil {
- return nil, errors.Wrapf(err, "failed to mkdir %v", manager.tmpDir())
+ for _, dirName := range []string{manager.config.Root, manager.config.ExecRoot, manager.tmpDir()} {
+ if err := os.MkdirAll(dirName, 0700); err != nil {
+ return nil, errors.Wrapf(err, "failed to mkdir %v", dirName)
+ }
}
if err := setupRoot(manager.config.Root); err != nil {
diff --git a/components/engine/plugin/store.go b/components/engine/plugin/store.go
index 8349f34f1c..b3398145d7 100644
--- a/components/engine/plugin/store.go
+++ b/components/engine/plugin/store.go
@@ -115,10 +115,15 @@ func (ps *Store) Get(name, capability string, mode int) (plugingetter.CompatPlug
if ps != nil {
p, err := ps.GetV2Plugin(name)
if err == nil {
- p.AddRefCount(mode)
if p.IsEnabled() {
- return p.FilterByCap(capability)
+ fp, err := p.FilterByCap(capability)
+ if err != nil {
+ return nil, err
+ }
+ p.AddRefCount(mode)
+ return fp, nil
}
+
// Plugin was found but it is disabled, so we should not fall back to legacy plugins
// but we should error out right away
return nil, errDisabled(name)
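
The reordering above matters because `Get` previously called `AddRefCount(mode)` before checking the capability, so a lookup for a capability the plugin does not provide still bumped its reference count; the new test added below exercises exactly that case. A minimal standalone sketch of the corrected flow, using hypothetical types rather than the real plugin store:

```go
// Minimal sketch (hypothetical types, not the real plugin store): acquire the
// reference only after the capability filter succeeds, so a failed Get leaves
// the reference count untouched.
package main

import (
	"errors"
	"fmt"
)

type fakePlugin struct {
	refs int
	caps []string
}

func (p *fakePlugin) filterByCap(capability string) error {
	for _, c := range p.caps {
		if c == capability {
			return nil
		}
	}
	return errors.New("plugin does not implement " + capability)
}

// get mirrors the corrected flow: filter first, then count the reference.
func get(p *fakePlugin, capability string) error {
	if err := p.filterByCap(capability); err != nil {
		return err
	}
	p.refs++
	return nil
}

func main() {
	p := &fakePlugin{caps: []string{"whatever"}}

	err := get(p, "volumedriver")
	fmt.Println(err, p.refs) // capability mismatch: error, refs stays 0

	err = get(p, "whatever")
	fmt.Println(err, p.refs) // match: <nil>, refs is now 1
}
```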
diff --git a/components/engine/plugin/store_test.go b/components/engine/plugin/store_test.go
index d3876daa34..5c61cc61c7 100644
--- a/components/engine/plugin/store_test.go
+++ b/components/engine/plugin/store_test.go
@@ -4,6 +4,7 @@ import (
"testing"
"github.com/docker/docker/api/types"
+ "github.com/docker/docker/pkg/plugingetter"
"github.com/docker/docker/plugin/v2"
)
@@ -31,3 +32,33 @@ func TestFilterByCapPos(t *testing.T) {
t.Fatalf("expected no error, got %v", err)
}
}
+
+func TestStoreGetPluginNotMatchCapRefs(t *testing.T) {
+ s := NewStore()
+ p := v2.Plugin{PluginObj: types.Plugin{Name: "test:latest"}}
+
+ iType := types.PluginInterfaceType{Capability: "whatever", Prefix: "docker", Version: "1.0"}
+ i := types.PluginConfigInterface{Socket: "plugins.sock", Types: []types.PluginInterfaceType{iType}}
+ p.PluginObj.Config.Interface = i
+
+ if err := s.Add(&p); err != nil {
+ t.Fatal(err)
+ }
+
+ if _, err := s.Get("test", "volumedriver", plugingetter.Acquire); err == nil {
+		t.Fatal("expected error when getting plugin that doesn't match the passed-in capability")
+ }
+
+ if refs := p.GetRefCount(); refs != 0 {
+ t.Fatalf("reference count should be 0, got: %d", refs)
+ }
+
+ p.PluginObj.Enabled = true
+ if _, err := s.Get("test", "volumedriver", plugingetter.Acquire); err == nil {
+		t.Fatal("expected error when getting plugin that doesn't match the passed-in capability")
+ }
+
+ if refs := p.GetRefCount(); refs != 0 {
+ t.Fatalf("reference count should be 0, got: %d", refs)
+ }
+}
diff --git a/components/engine/plugin/v2/plugin.go b/components/engine/plugin/v2/plugin.go
index b77536c986..ce3257c0cb 100644
--- a/components/engine/plugin/v2/plugin.go
+++ b/components/engine/plugin/v2/plugin.go
@@ -142,6 +142,9 @@ next:
}
// it is, so lets update the settings in memory
+ if mount.Source == nil {
+ return fmt.Errorf("Plugin config has no mount source")
+ }
*mount.Source = s.value
continue next
}
@@ -159,6 +162,9 @@ next:
}
// it is, so lets update the settings in memory
+ if device.Path == nil {
+ return fmt.Errorf("Plugin config has no device path")
+ }
*device.Path = s.value
continue next
}
diff --git a/components/engine/profiles/seccomp/seccomp.go b/components/engine/profiles/seccomp/seccomp.go
index 90a3859484..07d522aad6 100644
--- a/components/engine/profiles/seccomp/seccomp.go
+++ b/components/engine/profiles/seccomp/seccomp.go
@@ -8,7 +8,6 @@ import (
"fmt"
"github.com/docker/docker/api/types"
- "github.com/docker/docker/pkg/stringutils"
"github.com/opencontainers/runtime-spec/specs-go"
libseccomp "github.com/seccomp/libseccomp-golang"
)
@@ -39,6 +38,17 @@ var nativeToSeccomp = map[string]types.Arch{
"s390x": types.ArchS390X,
}
+// inSlice tests whether a string is contained in a slice of strings.
+// The comparison is case-sensitive.
+func inSlice(slice []string, s string) bool {
+ for _, ss := range slice {
+ if s == ss {
+ return true
+ }
+ }
+ return false
+}
+
func setupSeccomp(config *types.Seccomp, rs *specs.Spec) (*specs.LinuxSeccomp, error) {
if config == nil {
return nil, nil
@@ -89,25 +99,25 @@ Loop:
// Loop through all syscall blocks and convert them to libcontainer format after filtering them
for _, call := range config.Syscalls {
if len(call.Excludes.Arches) > 0 {
- if stringutils.InSlice(call.Excludes.Arches, arch) {
+ if inSlice(call.Excludes.Arches, arch) {
continue Loop
}
}
if len(call.Excludes.Caps) > 0 {
for _, c := range call.Excludes.Caps {
- if stringutils.InSlice(rs.Process.Capabilities.Effective, c) {
+ if inSlice(rs.Process.Capabilities.Effective, c) {
continue Loop
}
}
}
if len(call.Includes.Arches) > 0 {
- if !stringutils.InSlice(call.Includes.Arches, arch) {
+ if !inSlice(call.Includes.Arches, arch) {
continue Loop
}
}
if len(call.Includes.Caps) > 0 {
for _, c := range call.Includes.Caps {
- if !stringutils.InSlice(rs.Process.Capabilities.Effective, c) {
+ if !inSlice(rs.Process.Capabilities.Effective, c) {
continue Loop
}
}
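
The helper added above replaces `stringutils.InSlice` from the package deleted at the top of this section. Note the behavioral difference: the removed package's test expected `SLICE` to match `slice`, whereas the local copy compares exactly, which is what the seccomp arch and capability filters rely on. A small sketch of the exact-match behavior (the helper is copied here only for illustration):

```go
// Sketch of the case-sensitive membership check; the example capability
// names are illustrative.
package main

import "fmt"

func inSlice(slice []string, s string) bool {
	for _, ss := range slice {
		if s == ss {
			return true
		}
	}
	return false
}

func main() {
	caps := []string{"CAP_SYS_ADMIN", "CAP_NET_ADMIN"}
	fmt.Println(inSlice(caps, "CAP_SYS_ADMIN")) // true
	fmt.Println(inSlice(caps, "cap_sys_admin")) // false: comparison is exact
}
```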
diff --git a/components/engine/registry/auth_test.go b/components/engine/registry/auth_test.go
index 34f0c5564f..f5f213bf94 100644
--- a/components/engine/registry/auth_test.go
+++ b/components/engine/registry/auth_test.go
@@ -1,5 +1,3 @@
-// +build !solaris
-
package registry
import (
diff --git a/components/engine/registry/registry_mock_test.go b/components/engine/registry/registry_mock_test.go
index cf1cd19c1c..f814273d0f 100644
--- a/components/engine/registry/registry_mock_test.go
+++ b/components/engine/registry/registry_mock_test.go
@@ -1,5 +1,3 @@
-// +build !solaris
-
package registry
import (
diff --git a/components/engine/registry/registry_test.go b/components/engine/registry/registry_test.go
index 4cbfb110e3..56e362f8f9 100644
--- a/components/engine/registry/registry_test.go
+++ b/components/engine/registry/registry_test.go
@@ -1,5 +1,3 @@
-// +build !solaris
-
package registry
import (
diff --git a/components/engine/runconfig/hostconfig_unix.go b/components/engine/runconfig/hostconfig_unix.go
index 55df5da3ff..3527d29058 100644
--- a/components/engine/runconfig/hostconfig_unix.go
+++ b/components/engine/runconfig/hostconfig_unix.go
@@ -1,4 +1,4 @@
-// +build !windows,!solaris
+// +build !windows
package runconfig
diff --git a/components/engine/vendor.conf b/components/engine/vendor.conf
index c3c551eb00..4487925249 100644
--- a/components/engine/vendor.conf
+++ b/components/engine/vendor.conf
@@ -1,6 +1,6 @@
# the following lines are in sorted order, FYI
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
-github.com/Microsoft/hcsshim v0.6.5
+github.com/Microsoft/hcsshim v0.6.7
github.com/Microsoft/go-winio v0.4.5
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
@@ -14,7 +14,7 @@ github.com/sirupsen/logrus v1.0.3
github.com/tchap/go-patricia v2.2.6
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
-golang.org/x/sys 8dbc5d05d6edcc104950cc299a1ce6641235bc86
+golang.org/x/sys 95c6576299259db960f6c5b9b69ea52422860fce
github.com/docker/go-units 9e638d38cf6977a37a8ea0078f3ee75a7cdb2dd1
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
golang.org/x/text f72d8390a633d5dfb0cc84043294db9f6c935756
@@ -55,7 +55,7 @@ github.com/miekg/dns 75e6e86cc601825c5dbcd4e0c209eab180997cd7
# get graph and distribution packages
github.com/docker/distribution edc3ab29cdff8694dd6feb85cfeb4b5f1b38ed9c
-github.com/vbatts/tar-split v0.10.1
+github.com/vbatts/tar-split v0.10.2
github.com/opencontainers/go-digest a6d0ee40d4207ea02364bd3b9e8e77b9159ba1eb
# get go-zfs packages
@@ -65,9 +65,9 @@ github.com/pborman/uuid v1.0
google.golang.org/grpc v1.3.0
# When updating, also update RUNC_COMMIT in hack/dockerfile/binaries-commits accordingly
-github.com/opencontainers/runc 0351df1c5a66838d0c392b4ac4cf9450de844e2d
+github.com/opencontainers/runc b2567b37d7b75eb4cf325b77297b140ea686ce8f
github.com/opencontainers/runtime-spec v1.0.0
-github.com/opencontainers/image-spec 372ad780f63454fbbbbcc7cf80e5b90245c13e13
+github.com/opencontainers/image-spec v1.0.0
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
# libcontainer deps (see src/github.com/opencontainers/runc/Godeps/Godeps.json)
@@ -85,7 +85,7 @@ github.com/philhofer/fwd 98c11a7a6ec829d672b03833c3d69a7fae1ca972
github.com/tinylib/msgp 75ee40d2601edf122ef667e2a07d600d4c44490c
# fsnotify
-github.com/fsnotify/fsnotify v1.4.2
+github.com/fsnotify/fsnotify 4da3e2cfbabc9f751898f250b49f2439785783a1
# awslogs deps
github.com/aws/aws-sdk-go v1.4.22
@@ -103,8 +103,7 @@ github.com/googleapis/gax-go da06d194a00e19ce00d9011a13931c3f6f6887c7
google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
# containerd
-github.com/stevvooe/continuity cd7a8e21e2b6f84799f5dd4b65faf49c8d3ee02d
-github.com/containerd/containerd 992280e8e265f491f7a624ab82f3e238be086e49
+github.com/containerd/containerd v1.0.0-beta.3
github.com/containerd/fifo fbfb6a11ec671efbe94ad1c12c2e98773f19e1e6
github.com/containerd/continuity 35d55c5e8dd23b32037d56cf97174aff3efdfa83
github.com/containerd/cgroups f7dd103d3e4e696aa67152f6b4ddd1779a3455a9
@@ -114,7 +113,7 @@ github.com/containerd/typeurl f6943554a7e7e88b3c14aad190bf05932da84788
github.com/dmcgowan/go-tar 2e2c51242e8993c50445dab7c03c8e7febddd0cf
# cluster
-github.com/docker/swarmkit 872861d2ae46958af7ead1d5fffb092c73afbaf0
+github.com/docker/swarmkit de950a7ed842c7b7e47e9451cde9bf8f96031894
github.com/gogo/protobuf v0.4
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/google/certificate-transparency d90e65c3a07988180c5b1ece71791c0b6506826e
@@ -143,7 +142,7 @@ github.com/Nvveen/Gotty a8b993ba6abdb0e0c12b0125c603323a71c7790c https://github.
# metrics
github.com/docker/go-metrics d466d4f6fd960e01820085bd7e1a24426ee7ef18
-github.com/opencontainers/selinux v1.0.0-rc1
+github.com/opencontainers/selinux b29023b86e4a69d1b46b7e7b4e2b6fda03f0b9cd
# archive/tar
# mkdir -p ./vendor/archive
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/errors.go b/components/engine/vendor/github.com/Microsoft/hcsshim/errors.go
index d2f9cc8bd2..c0c6cac87c 100644
--- a/components/engine/vendor/github.com/Microsoft/hcsshim/errors.go
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/errors.go
@@ -72,6 +72,22 @@ var (
ErrPlatformNotSupported = errors.New("unsupported platform request")
)
+type EndpointNotFoundError struct {
+ EndpointName string
+}
+
+func (e EndpointNotFoundError) Error() string {
+ return fmt.Sprintf("Endpoint %s not found", e.EndpointName)
+}
+
+type NetworkNotFoundError struct {
+ NetworkName string
+}
+
+func (e NetworkNotFoundError) Error() string {
+ return fmt.Sprintf("Network %s not found", e.NetworkName)
+}
+
// ProcessError is an error encountered in HCS during an operation on a Process object
type ProcessError struct {
Process *process
@@ -174,6 +190,12 @@ func makeProcessError(process *process, operation string, extraInfo string, err
// will currently return true when the error is ErrElementNotFound or ErrProcNotFound.
func IsNotExist(err error) bool {
err = getInnerError(err)
+ if _, ok := err.(EndpointNotFoundError); ok {
+ return true
+ }
+ if _, ok := err.(NetworkNotFoundError); ok {
+ return true
+ }
return err == ErrComputeSystemDoesNotExist ||
err == ErrElementNotFound ||
err == ErrProcNotFound
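
With the typed `EndpointNotFoundError` and `NetworkNotFoundError` above, a missing endpoint or network is now reported through `hcsshim.IsNotExist` just like the existing sentinel errors. A hedged, Windows-only usage sketch (the endpoint name is made up):

```go
// Windows-only sketch: distinguish "endpoint does not exist" from other
// HNS failures using the updated IsNotExist helper.
package main

import (
	"fmt"

	"github.com/Microsoft/hcsshim"
)

func main() {
	_, err := hcsshim.GetHNSEndpointByName("my-endpoint")
	switch {
	case err == nil:
		fmt.Println("endpoint exists")
	case hcsshim.IsNotExist(err):
		fmt.Println("endpoint not found, safe to create it")
	default:
		fmt.Println("unexpected HNS error:", err)
	}
}
```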
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go b/components/engine/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go
index 92afc0c249..7e516f8a2e 100644
--- a/components/engine/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go
@@ -2,7 +2,6 @@ package hcsshim
import (
"encoding/json"
- "fmt"
"net"
"github.com/sirupsen/logrus"
@@ -135,7 +134,7 @@ func GetHNSEndpointByName(endpointName string) (*HNSEndpoint, error) {
return &hnsEndpoint, nil
}
}
- return nil, fmt.Errorf("Endpoint %v not found", endpointName)
+ return nil, EndpointNotFoundError{EndpointName: endpointName}
}
// Create Endpoint by sending EndpointRequest to HNS. TODO: Create a separate HNS interface to place all these methods
@@ -192,18 +191,24 @@ func (endpoint *HNSEndpoint) ContainerHotDetach(containerID string) error {
return modifyNetworkEndpoint(containerID, endpoint.Id, Remove)
}
-// ApplyACLPolicy applies Acl Policy on the Endpoint
-func (endpoint *HNSEndpoint) ApplyACLPolicy(policy *ACLPolicy) error {
+// ApplyACLPolicy applies a set of ACL Policies on the Endpoint
+func (endpoint *HNSEndpoint) ApplyACLPolicy(policies ...*ACLPolicy) error {
operation := "ApplyACLPolicy"
title := "HCSShim::HNSEndpoint::" + operation
logrus.Debugf(title+" id=%s", endpoint.Id)
- jsonString, err := json.Marshal(policy)
- if err != nil {
- return err
+ for _, policy := range policies {
+ if policy == nil {
+ continue
+ }
+ jsonString, err := json.Marshal(policy)
+ if err != nil {
+ return err
+ }
+ endpoint.Policies = append(endpoint.Policies, jsonString)
}
- endpoint.Policies[0] = jsonString
- _, err = endpoint.Update()
+
+ _, err := endpoint.Update()
return err
}
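
`ApplyACLPolicy` is now variadic and appends each marshalled policy to the endpoint's `Policies` slice instead of writing into `Policies[0]`, which assumed the slice already had an element. A hedged, Windows-only sketch of calling it with two policies (endpoint name and addresses are made up; `Type`, `Action`, and `Direction` would normally be set from the package's policy constants, left at their zero values here to keep the sketch minimal):

```go
// Windows-only sketch of the variadic ApplyACLPolicy call; values are
// illustrative, not a working firewall configuration.
package main

import (
	"fmt"

	"github.com/Microsoft/hcsshim"
)

func main() {
	endpoint, err := hcsshim.GetHNSEndpointByName("my-endpoint")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}

	in := &hcsshim.ACLPolicy{LocalAddresses: "10.0.0.4", LocalPort: 80}
	out := &hcsshim.ACLPolicy{RemoteAddresses: "10.0.0.0/24", RemotePort: 443}

	// Both policies are marshalled and appended, then the endpoint is updated once.
	if err := endpoint.ApplyACLPolicy(in, out); err != nil {
		fmt.Println("applying ACL policies failed:", err)
	}
}
```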
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go b/components/engine/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go
index 3345bfa3f2..04c1b59196 100644
--- a/components/engine/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go
@@ -2,7 +2,6 @@ package hcsshim
import (
"encoding/json"
- "fmt"
"net"
"github.com/sirupsen/logrus"
@@ -90,7 +89,7 @@ func GetHNSNetworkByName(networkName string) (*HNSNetwork, error) {
return &hnsnetwork, nil
}
}
- return nil, fmt.Errorf("Network %v not found", networkName)
+ return nil, NetworkNotFoundError{NetworkName: networkName}
}
// Create Network by sending NetworkRequest to HNS.
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/hnspolicy.go b/components/engine/vendor/github.com/Microsoft/hcsshim/hnspolicy.go
index ecfbf0eda3..65b8e93d9b 100644
--- a/components/engine/vendor/github.com/Microsoft/hcsshim/hnspolicy.go
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/hnspolicy.go
@@ -75,19 +75,18 @@ const (
)
type ACLPolicy struct {
- Type PolicyType `json:"Type"`
- Protocol uint16
- InternalPort uint16
- Action ActionType
- Direction DirectionType
- LocalAddress string
- RemoteAddress string
- LocalPort uint16
- RemotePort uint16
- RuleType RuleType `json:"RuleType,omitempty"`
-
- Priority uint16
- ServiceName string
+ Type PolicyType `json:"Type"`
+ Protocol uint16
+ InternalPort uint16
+ Action ActionType
+ Direction DirectionType
+ LocalAddresses string
+ RemoteAddresses string
+ LocalPort uint16
+ RemotePort uint16
+ RuleType RuleType `json:"RuleType,omitempty"`
+ Priority uint16
+ ServiceName string
}
type Policy struct {
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/legacy.go b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy.go
index c7f6073ac3..a0a97d7c72 100644
--- a/components/engine/vendor/github.com/Microsoft/hcsshim/legacy.go
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy.go
@@ -472,15 +472,21 @@ func cloneTree(srcPath, destPath string, mutatedFiles map[string]bool) error {
}
destFilePath := filepath.Join(destPath, relPath)
+ fileAttributes := info.Sys().(*syscall.Win32FileAttributeData).FileAttributes
// Directories, reparse points, and files that will be mutated during
// utility VM import must be copied. All other files can be hard linked.
- isReparsePoint := info.Sys().(*syscall.Win32FileAttributeData).FileAttributes&syscall.FILE_ATTRIBUTE_REPARSE_POINT != 0
- if info.IsDir() || isReparsePoint || mutatedFiles[relPath] {
- fi, err := copyFileWithMetadata(srcFilePath, destFilePath, info.IsDir())
+ isReparsePoint := fileAttributes&syscall.FILE_ATTRIBUTE_REPARSE_POINT != 0
+ // In go1.9, FileInfo.IsDir() returns false if the directory is also a symlink.
+ // See: https://github.com/golang/go/commit/1989921aef60c83e6f9127a8448fb5ede10e9acc
+ // Fixes the problem by checking syscall.FILE_ATTRIBUTE_DIRECTORY directly
+ isDir := fileAttributes&syscall.FILE_ATTRIBUTE_DIRECTORY != 0
+
+ if isDir || isReparsePoint || mutatedFiles[relPath] {
+ fi, err := copyFileWithMetadata(srcFilePath, destFilePath, isDir)
if err != nil {
return err
}
- if info.IsDir() && !isReparsePoint {
+ if isDir && !isReparsePoint {
di = append(di, dirInfo{path: destFilePath, fileInfo: *fi})
}
} else {
@@ -490,8 +496,9 @@ func cloneTree(srcPath, destPath string, mutatedFiles map[string]bool) error {
}
}
- // Don't recurse on reparse points.
- if info.IsDir() && isReparsePoint {
+	// Don't recurse on reparse points in go1.8 and older. filepath.Walk
+ // handles this in go1.9 and newer.
+ if isDir && isReparsePoint && shouldSkipDirectoryReparse {
return filepath.SkipDir
}
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/legacy18.go b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy18.go
new file mode 100644
index 0000000000..578552f913
--- /dev/null
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy18.go
@@ -0,0 +1,7 @@
+// +build !go1.9
+
+package hcsshim
+
+// Due to a bug in go1.8 and before, directory reparse points need to be skipped
+// during filepath.Walk. This is fixed in go1.9
+var shouldSkipDirectoryReparse = true
diff --git a/components/engine/vendor/github.com/Microsoft/hcsshim/legacy19.go b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy19.go
new file mode 100644
index 0000000000..6aa1dc0584
--- /dev/null
+++ b/components/engine/vendor/github.com/Microsoft/hcsshim/legacy19.go
@@ -0,0 +1,7 @@
+// +build go1.9
+
+package hcsshim
+
+// Due to a bug in go1.8 and before, directory reparse points need to be skipped
+// during filepath.Walk. This is fixed in go1.9
+var shouldSkipDirectoryReparse = false
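
The `cloneTree` change above reads the raw Win32 attributes because, as the added comment notes, `FileInfo.IsDir()` in go1.9 returns false for directories that are also reparse points, and the two build-tagged files select whether `filepath.Walk` still needs the manual `SkipDir`. A Windows-only sketch of the raw attribute check (the path is illustrative):

```go
// Windows-only sketch: read FILE_ATTRIBUTE_DIRECTORY and
// FILE_ATTRIBUTE_REPARSE_POINT directly instead of relying on
// FileInfo.IsDir(), matching the check used in cloneTree.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	info, err := os.Lstat(`C:\Windows`)
	if err != nil {
		fmt.Println(err)
		return
	}
	attrs := info.Sys().(*syscall.Win32FileAttributeData).FileAttributes
	isDir := attrs&syscall.FILE_ATTRIBUTE_DIRECTORY != 0
	isReparse := attrs&syscall.FILE_ATTRIBUTE_REPARSE_POINT != 0
	fmt.Println("directory:", isDir, "reparse point:", isReparse)
}
```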
diff --git a/components/engine/vendor/github.com/containerd/containerd/LICENSE.docs b/components/engine/vendor/github.com/containerd/containerd/LICENSE.docs
index e26cd4fc8e..2f244ac814 100644
--- a/components/engine/vendor/github.com/containerd/containerd/LICENSE.docs
+++ b/components/engine/vendor/github.com/containerd/containerd/LICENSE.docs
@@ -1,4 +1,4 @@
-Attribution-ShareAlike 4.0 International
+Attribution 4.0 International
=======================================================================
@@ -54,18 +54,16 @@ exhaustive, and do not form part of our licenses.
=======================================================================
-Creative Commons Attribution-ShareAlike 4.0 International Public
-License
+Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
-Attribution-ShareAlike 4.0 International Public License ("Public
-License"). To the extent this Public License may be interpreted as a
-contract, You are granted the Licensed Rights in consideration of Your
-acceptance of these terms and conditions, and the Licensor grants You
-such rights in consideration of benefits the Licensor receives from
-making the Licensed Material available under these terms and
-conditions.
+Attribution 4.0 International Public License ("Public License"). To the
+extent this Public License may be interpreted as a contract, You are
+granted the Licensed Rights in consideration of Your acceptance of
+these terms and conditions, and the Licensor grants You such rights in
+consideration of benefits the Licensor receives from making the
+Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
@@ -84,11 +82,7 @@ Section 1 -- Definitions.
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
- c. BY-SA Compatible License means a license listed at
- creativecommons.org/compatiblelicenses, approved by Creative
- Commons as essentially the equivalent of this Public License.
-
- d. Copyright and Similar Rights means copyright and/or similar rights
+ c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
@@ -96,33 +90,29 @@ Section 1 -- Definitions.
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
- e. Effective Technological Measures means those measures that, in the
+ d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
- f. Exceptions and Limitations means fair use, fair dealing, and/or
+ e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
- g. License Elements means the license attributes listed in the name
- of a Creative Commons Public License. The License Elements of this
- Public License are Attribution and ShareAlike.
-
- h. Licensed Material means the artistic or literary work, database,
+ f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
- i. Licensed Rights means the rights granted to You subject to the
+ g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
- j. Licensor means the individual(s) or entity(ies) granting rights
+ h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
- k. Share means to provide material to the public by any means or
+ i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
@@ -130,13 +120,13 @@ Section 1 -- Definitions.
public may access the material from a place and at a time
individually chosen by them.
- l. Sui Generis Database Rights means rights other than copyright
+ j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
- m. You means the individual or entity exercising the Licensed Rights
+ k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
@@ -182,13 +172,7 @@ Section 2 -- Scope.
Licensed Rights under the terms and conditions of this
Public License.
- b. Additional offer from the Licensor -- Adapted Material.
- Every recipient of Adapted Material from You
- automatically receives an offer from the Licensor to
- exercise the Licensed Rights in the Adapted Material
- under the conditions of the Adapter's License You apply.
-
- c. No downstream restrictions. You may not offer or impose
+ b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
@@ -270,24 +254,9 @@ following conditions.
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
- b. ShareAlike.
-
- In addition to the conditions in Section 3(a), if You Share
- Adapted Material You produce, the following conditions also apply.
-
- 1. The Adapter's License You apply must be a Creative Commons
- license with the same License Elements, this version or
- later, or a BY-SA Compatible License.
-
- 2. You must include the text of, or the URI or hyperlink to, the
- Adapter's License You apply. You may satisfy this condition
- in any reasonable manner based on the medium, means, and
- context in which You Share Adapted Material.
-
- 3. You may not offer or impose any additional or different terms
- or conditions on, or apply any Effective Technological
- Measures to, Adapted Material that restrict exercise of the
- rights granted under the Adapter's License You apply.
+ 4. If You Share Adapted Material You produce, the Adapter's
+ License You apply must not prevent recipients of the Adapted
+ Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
@@ -302,9 +271,8 @@ apply to Your use of the Licensed Material:
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
- Rights (but not its individual contents) is Adapted Material,
+ Rights (but not its individual contents) is Adapted Material; and
- including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
@@ -407,11 +375,13 @@ Section 8 -- Interpretation.
=======================================================================
-Creative Commons is not a party to its public licenses.
-Notwithstanding, Creative Commons may elect to apply one of its public
-licenses to material it publishes and in those instances will be
-considered the "Licensor." Except for the limited purpose of indicating
-that material is shared under a Creative Commons public license or as
+Creative Commons is not a party to its public
+licenses. Notwithstanding, Creative Commons may elect to apply one of
+its public licenses to material it publishes and in those instances
+will be considered the “Licensor.” The text of the Creative Commons
+public licenses is dedicated to the public domain under the CC0 Public
+Domain Dedication. Except for the limited purpose of indicating that
+material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
@@ -419,7 +389,7 @@ of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
-the avoidance of doubt, this paragraph does not form part of the public
-licenses.
+the avoidance of doubt, this paragraph does not form part of the
+public licenses.
Creative Commons may be contacted at creativecommons.org.
diff --git a/components/engine/vendor/github.com/containerd/containerd/README.md b/components/engine/vendor/github.com/containerd/containerd/README.md
index 28635dbf07..3832446b2f 100644
--- a/components/engine/vendor/github.com/containerd/containerd/README.md
+++ b/components/engine/vendor/github.com/containerd/containerd/README.md
@@ -198,11 +198,10 @@ For sync communication we have a community slack with a #containerd channel that
__If you are reporting a security issue, please reach out discreetly at security@containerd.io__.
-## Copyright and license
+## Licenses
-Copyright ©2016-2017 Docker, Inc. All rights reserved, except as follows. Code
-is released under the Apache 2.0 license. The README.md file, and files in the
-"docs" folder are licensed under the Creative Commons Attribution 4.0
-International License under the terms and conditions set forth in the file
-"LICENSE.docs". You may obtain a duplicate copy of the same license, titled
-CC-BY-SA-4.0, at http://creativecommons.org/licenses/by/4.0/.
+The containerd codebase is released under the [Apache 2.0 license](LICENSE.code).
+The README.md file, and files in the "docs" folder are licensed under the
+Creative Commons Attribution 4.0 International License under the terms and
+conditions set forth in the file "[LICENSE.docs](LICENSE.docs)". You may obtain a duplicate
+copy of the same license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.
diff --git a/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/doc.go b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/doc.go
new file mode 100644
index 0000000000..3685b64558
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/doc.go
@@ -0,0 +1 @@
+package leases
diff --git a/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.pb.go b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.pb.go
new file mode 100644
index 0000000000..db2a5f8b6f
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.pb.go
@@ -0,0 +1,1573 @@
+// Code generated by protoc-gen-gogo.
+// source: github.com/containerd/containerd/api/services/leases/v1/leases.proto
+// DO NOT EDIT!
+
+/*
+ Package leases is a generated protocol buffer package.
+
+ It is generated from these files:
+ github.com/containerd/containerd/api/services/leases/v1/leases.proto
+
+ It has these top-level messages:
+ Lease
+ CreateRequest
+ CreateResponse
+ DeleteRequest
+ ListRequest
+ ListResponse
+*/
+package leases
+
+import proto "github.com/gogo/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/gogo/protobuf/gogoproto"
+import google_protobuf1 "github.com/golang/protobuf/ptypes/empty"
+import _ "github.com/gogo/protobuf/types"
+
+import time "time"
+
+import (
+ context "golang.org/x/net/context"
+ grpc "google.golang.org/grpc"
+)
+
+import github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
+
+import strings "strings"
+import reflect "reflect"
+import github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys"
+
+import io "io"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+var _ = time.Kitchen
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
+
+// Lease is an object which retains resources while it exists.
+type Lease struct {
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+ CreatedAt time.Time `protobuf:"bytes,2,opt,name=created_at,json=createdAt,stdtime" json:"created_at"`
+ Labels map[string]string `protobuf:"bytes,3,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (m *Lease) Reset() { *m = Lease{} }
+func (*Lease) ProtoMessage() {}
+func (*Lease) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{0} }
+
+type CreateRequest struct {
+	// ID is used to identify the lease, when the id is not set the service
+ // generates a random identifier for the lease.
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+ Labels map[string]string `protobuf:"bytes,3,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (m *CreateRequest) Reset() { *m = CreateRequest{} }
+func (*CreateRequest) ProtoMessage() {}
+func (*CreateRequest) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{1} }
+
+type CreateResponse struct {
+ Lease *Lease `protobuf:"bytes,1,opt,name=lease" json:"lease,omitempty"`
+}
+
+func (m *CreateResponse) Reset() { *m = CreateResponse{} }
+func (*CreateResponse) ProtoMessage() {}
+func (*CreateResponse) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{2} }
+
+type DeleteRequest struct {
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+}
+
+func (m *DeleteRequest) Reset() { *m = DeleteRequest{} }
+func (*DeleteRequest) ProtoMessage() {}
+func (*DeleteRequest) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{3} }
+
+type ListRequest struct {
+ Filters []string `protobuf:"bytes,1,rep,name=filters" json:"filters,omitempty"`
+}
+
+func (m *ListRequest) Reset() { *m = ListRequest{} }
+func (*ListRequest) ProtoMessage() {}
+func (*ListRequest) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{4} }
+
+type ListResponse struct {
+ Leases []*Lease `protobuf:"bytes,1,rep,name=leases" json:"leases,omitempty"`
+}
+
+func (m *ListResponse) Reset() { *m = ListResponse{} }
+func (*ListResponse) ProtoMessage() {}
+func (*ListResponse) Descriptor() ([]byte, []int) { return fileDescriptorLeases, []int{5} }
+
+func init() {
+ proto.RegisterType((*Lease)(nil), "containerd.services.leases.v1.Lease")
+ proto.RegisterType((*CreateRequest)(nil), "containerd.services.leases.v1.CreateRequest")
+ proto.RegisterType((*CreateResponse)(nil), "containerd.services.leases.v1.CreateResponse")
+ proto.RegisterType((*DeleteRequest)(nil), "containerd.services.leases.v1.DeleteRequest")
+ proto.RegisterType((*ListRequest)(nil), "containerd.services.leases.v1.ListRequest")
+ proto.RegisterType((*ListResponse)(nil), "containerd.services.leases.v1.ListResponse")
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// Client API for Leases service
+
+type LeasesClient interface {
+ // Create creates a new lease for managing changes to metadata. A lease
+ // can be used to protect objects from being removed.
+ Create(ctx context.Context, in *CreateRequest, opts ...grpc.CallOption) (*CreateResponse, error)
+ // Delete deletes the lease and makes any unreferenced objects created
+ // during the lease eligible for garbage collection if not referenced
+ // or retained by other resources during the lease.
+ Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error)
+ // ListTransactions lists all active leases, returning the full list of
+ // leases and optionally including the referenced resources.
+ List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
+}
+
+type leasesClient struct {
+ cc *grpc.ClientConn
+}
+
+func NewLeasesClient(cc *grpc.ClientConn) LeasesClient {
+ return &leasesClient{cc}
+}
+
+func (c *leasesClient) Create(ctx context.Context, in *CreateRequest, opts ...grpc.CallOption) (*CreateResponse, error) {
+ out := new(CreateResponse)
+ err := grpc.Invoke(ctx, "/containerd.services.leases.v1.Leases/Create", in, out, c.cc, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *leasesClient) Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error) {
+ out := new(google_protobuf1.Empty)
+ err := grpc.Invoke(ctx, "/containerd.services.leases.v1.Leases/Delete", in, out, c.cc, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *leasesClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
+ out := new(ListResponse)
+ err := grpc.Invoke(ctx, "/containerd.services.leases.v1.Leases/List", in, out, c.cc, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// Server API for Leases service
+
+type LeasesServer interface {
+ // Create creates a new lease for managing changes to metadata. A lease
+ // can be used to protect objects from being removed.
+ Create(context.Context, *CreateRequest) (*CreateResponse, error)
+ // Delete deletes the lease and makes any unreferenced objects created
+ // during the lease eligible for garbage collection if not referenced
+ // or retained by other resources during the lease.
+ Delete(context.Context, *DeleteRequest) (*google_protobuf1.Empty, error)
+ // ListTransactions lists all active leases, returning the full list of
+ // leases and optionally including the referenced resources.
+ List(context.Context, *ListRequest) (*ListResponse, error)
+}
+
+func RegisterLeasesServer(s *grpc.Server, srv LeasesServer) {
+ s.RegisterService(&_Leases_serviceDesc, srv)
+}
+
+func _Leases_Create_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(CreateRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(LeasesServer).Create(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.services.leases.v1.Leases/Create",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(LeasesServer).Create(ctx, req.(*CreateRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Leases_Delete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(DeleteRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(LeasesServer).Delete(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.services.leases.v1.Leases/Delete",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(LeasesServer).Delete(ctx, req.(*DeleteRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Leases_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(ListRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(LeasesServer).List(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.services.leases.v1.Leases/List",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(LeasesServer).List(ctx, req.(*ListRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _Leases_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "containerd.services.leases.v1.Leases",
+ HandlerType: (*LeasesServer)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Create",
+ Handler: _Leases_Create_Handler,
+ },
+ {
+ MethodName: "Delete",
+ Handler: _Leases_Delete_Handler,
+ },
+ {
+ MethodName: "List",
+ Handler: _Leases_List_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "github.com/containerd/containerd/api/services/leases/v1/leases.proto",
+}
+
+func (m *Lease) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Lease) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.ID) > 0 {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(m.ID)))
+ i += copy(dAtA[i:], m.ID)
+ }
+ dAtA[i] = 0x12
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedAt)))
+ n1, err := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedAt, dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n1
+ if len(m.Labels) > 0 {
+ for k, _ := range m.Labels {
+ dAtA[i] = 0x1a
+ i++
+ v := m.Labels[k]
+ mapSize := 1 + len(k) + sovLeases(uint64(len(k))) + 1 + len(v) + sovLeases(uint64(len(v)))
+ i = encodeVarintLeases(dAtA, i, uint64(mapSize))
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(k)))
+ i += copy(dAtA[i:], k)
+ dAtA[i] = 0x12
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(v)))
+ i += copy(dAtA[i:], v)
+ }
+ }
+ return i, nil
+}
+
+func (m *CreateRequest) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CreateRequest) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.ID) > 0 {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(m.ID)))
+ i += copy(dAtA[i:], m.ID)
+ }
+ if len(m.Labels) > 0 {
+ for k, _ := range m.Labels {
+ dAtA[i] = 0x1a
+ i++
+ v := m.Labels[k]
+ mapSize := 1 + len(k) + sovLeases(uint64(len(k))) + 1 + len(v) + sovLeases(uint64(len(v)))
+ i = encodeVarintLeases(dAtA, i, uint64(mapSize))
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(k)))
+ i += copy(dAtA[i:], k)
+ dAtA[i] = 0x12
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(v)))
+ i += copy(dAtA[i:], v)
+ }
+ }
+ return i, nil
+}
+
+func (m *CreateResponse) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CreateResponse) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if m.Lease != nil {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(m.Lease.Size()))
+ n2, err := m.Lease.MarshalTo(dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n2
+ }
+ return i, nil
+}
+
+func (m *DeleteRequest) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *DeleteRequest) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.ID) > 0 {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(len(m.ID)))
+ i += copy(dAtA[i:], m.ID)
+ }
+ return i, nil
+}
+
+func (m *ListRequest) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ListRequest) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.Filters) > 0 {
+ for _, s := range m.Filters {
+ dAtA[i] = 0xa
+ i++
+ l = len(s)
+ for l >= 1<<7 {
+ dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
+ l >>= 7
+ i++
+ }
+ dAtA[i] = uint8(l)
+ i++
+ i += copy(dAtA[i:], s)
+ }
+ }
+ return i, nil
+}
+
+func (m *ListResponse) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ListResponse) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.Leases) > 0 {
+ for _, msg := range m.Leases {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintLeases(dAtA, i, uint64(msg.Size()))
+ n, err := msg.MarshalTo(dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n
+ }
+ }
+ return i, nil
+}
+
+func encodeFixed64Leases(dAtA []byte, offset int, v uint64) int {
+ dAtA[offset] = uint8(v)
+ dAtA[offset+1] = uint8(v >> 8)
+ dAtA[offset+2] = uint8(v >> 16)
+ dAtA[offset+3] = uint8(v >> 24)
+ dAtA[offset+4] = uint8(v >> 32)
+ dAtA[offset+5] = uint8(v >> 40)
+ dAtA[offset+6] = uint8(v >> 48)
+ dAtA[offset+7] = uint8(v >> 56)
+ return offset + 8
+}
+func encodeFixed32Leases(dAtA []byte, offset int, v uint32) int {
+ dAtA[offset] = uint8(v)
+ dAtA[offset+1] = uint8(v >> 8)
+ dAtA[offset+2] = uint8(v >> 16)
+ dAtA[offset+3] = uint8(v >> 24)
+ return offset + 4
+}
+func encodeVarintLeases(dAtA []byte, offset int, v uint64) int {
+ for v >= 1<<7 {
+ dAtA[offset] = uint8(v&0x7f | 0x80)
+ v >>= 7
+ offset++
+ }
+ dAtA[offset] = uint8(v)
+ return offset + 1
+}
+func (m *Lease) Size() (n int) {
+ var l int
+ _ = l
+ l = len(m.ID)
+ if l > 0 {
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedAt)
+ n += 1 + l + sovLeases(uint64(l))
+ if len(m.Labels) > 0 {
+ for k, v := range m.Labels {
+ _ = k
+ _ = v
+ mapEntrySize := 1 + len(k) + sovLeases(uint64(len(k))) + 1 + len(v) + sovLeases(uint64(len(v)))
+ n += mapEntrySize + 1 + sovLeases(uint64(mapEntrySize))
+ }
+ }
+ return n
+}
+
+func (m *CreateRequest) Size() (n int) {
+ var l int
+ _ = l
+ l = len(m.ID)
+ if l > 0 {
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ if len(m.Labels) > 0 {
+ for k, v := range m.Labels {
+ _ = k
+ _ = v
+ mapEntrySize := 1 + len(k) + sovLeases(uint64(len(k))) + 1 + len(v) + sovLeases(uint64(len(v)))
+ n += mapEntrySize + 1 + sovLeases(uint64(mapEntrySize))
+ }
+ }
+ return n
+}
+
+func (m *CreateResponse) Size() (n int) {
+ var l int
+ _ = l
+ if m.Lease != nil {
+ l = m.Lease.Size()
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ return n
+}
+
+func (m *DeleteRequest) Size() (n int) {
+ var l int
+ _ = l
+ l = len(m.ID)
+ if l > 0 {
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ return n
+}
+
+func (m *ListRequest) Size() (n int) {
+ var l int
+ _ = l
+ if len(m.Filters) > 0 {
+ for _, s := range m.Filters {
+ l = len(s)
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *ListResponse) Size() (n int) {
+ var l int
+ _ = l
+ if len(m.Leases) > 0 {
+ for _, e := range m.Leases {
+ l = e.Size()
+ n += 1 + l + sovLeases(uint64(l))
+ }
+ }
+ return n
+}
+
+func sovLeases(x uint64) (n int) {
+ for {
+ n++
+ x >>= 7
+ if x == 0 {
+ break
+ }
+ }
+ return n
+}
+func sozLeases(x uint64) (n int) {
+ return sovLeases(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+}
+func (this *Lease) String() string {
+ if this == nil {
+ return "nil"
+ }
+ keysForLabels := make([]string, 0, len(this.Labels))
+ for k, _ := range this.Labels {
+ keysForLabels = append(keysForLabels, k)
+ }
+ github_com_gogo_protobuf_sortkeys.Strings(keysForLabels)
+ mapStringForLabels := "map[string]string{"
+ for _, k := range keysForLabels {
+ mapStringForLabels += fmt.Sprintf("%v: %v,", k, this.Labels[k])
+ }
+ mapStringForLabels += "}"
+ s := strings.Join([]string{`&Lease{`,
+ `ID:` + fmt.Sprintf("%v", this.ID) + `,`,
+ `CreatedAt:` + strings.Replace(strings.Replace(this.CreatedAt.String(), "Timestamp", "google_protobuf2.Timestamp", 1), `&`, ``, 1) + `,`,
+ `Labels:` + mapStringForLabels + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CreateRequest) String() string {
+ if this == nil {
+ return "nil"
+ }
+ keysForLabels := make([]string, 0, len(this.Labels))
+ for k, _ := range this.Labels {
+ keysForLabels = append(keysForLabels, k)
+ }
+ github_com_gogo_protobuf_sortkeys.Strings(keysForLabels)
+ mapStringForLabels := "map[string]string{"
+ for _, k := range keysForLabels {
+ mapStringForLabels += fmt.Sprintf("%v: %v,", k, this.Labels[k])
+ }
+ mapStringForLabels += "}"
+ s := strings.Join([]string{`&CreateRequest{`,
+ `ID:` + fmt.Sprintf("%v", this.ID) + `,`,
+ `Labels:` + mapStringForLabels + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CreateResponse) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CreateResponse{`,
+ `Lease:` + strings.Replace(fmt.Sprintf("%v", this.Lease), "Lease", "Lease", 1) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *DeleteRequest) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&DeleteRequest{`,
+ `ID:` + fmt.Sprintf("%v", this.ID) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *ListRequest) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&ListRequest{`,
+ `Filters:` + fmt.Sprintf("%v", this.Filters) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *ListResponse) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&ListResponse{`,
+ `Leases:` + strings.Replace(fmt.Sprintf("%v", this.Leases), "Lease", "Lease", 1) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func valueToStringLeases(v interface{}) string {
+ rv := reflect.ValueOf(v)
+ if rv.IsNil() {
+ return "nil"
+ }
+ pv := reflect.Indirect(rv).Interface()
+ return fmt.Sprintf("*%v", pv)
+}
+func (m *Lease) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Lease: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Lease: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ID = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field CreatedAt", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.CreatedAt, dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ var keykey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ keykey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ var stringLenmapkey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLenmapkey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLenmapkey := int(stringLenmapkey)
+ if intStringLenmapkey < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postStringIndexmapkey := iNdEx + intStringLenmapkey
+ if postStringIndexmapkey > l {
+ return io.ErrUnexpectedEOF
+ }
+ mapkey := string(dAtA[iNdEx:postStringIndexmapkey])
+ iNdEx = postStringIndexmapkey
+ if m.Labels == nil {
+ m.Labels = make(map[string]string)
+ }
+ if iNdEx < postIndex {
+ var valuekey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ valuekey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ var stringLenmapvalue uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLenmapvalue |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLenmapvalue := int(stringLenmapvalue)
+ if intStringLenmapvalue < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postStringIndexmapvalue := iNdEx + intStringLenmapvalue
+ if postStringIndexmapvalue > l {
+ return io.ErrUnexpectedEOF
+ }
+ mapvalue := string(dAtA[iNdEx:postStringIndexmapvalue])
+ iNdEx = postStringIndexmapvalue
+ m.Labels[mapkey] = mapvalue
+ } else {
+ var mapvalue string
+ m.Labels[mapkey] = mapvalue
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CreateRequest) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CreateRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CreateRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ID = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ var keykey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ keykey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ var stringLenmapkey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLenmapkey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLenmapkey := int(stringLenmapkey)
+ if intStringLenmapkey < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postStringIndexmapkey := iNdEx + intStringLenmapkey
+ if postStringIndexmapkey > l {
+ return io.ErrUnexpectedEOF
+ }
+ mapkey := string(dAtA[iNdEx:postStringIndexmapkey])
+ iNdEx = postStringIndexmapkey
+ if m.Labels == nil {
+ m.Labels = make(map[string]string)
+ }
+ if iNdEx < postIndex {
+ var valuekey uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ valuekey |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ var stringLenmapvalue uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLenmapvalue |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLenmapvalue := int(stringLenmapvalue)
+ if intStringLenmapvalue < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postStringIndexmapvalue := iNdEx + intStringLenmapvalue
+ if postStringIndexmapvalue > l {
+ return io.ErrUnexpectedEOF
+ }
+ mapvalue := string(dAtA[iNdEx:postStringIndexmapvalue])
+ iNdEx = postStringIndexmapvalue
+ m.Labels[mapkey] = mapvalue
+ } else {
+ var mapvalue string
+ m.Labels[mapkey] = mapvalue
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CreateResponse) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CreateResponse: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CreateResponse: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Lease", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Lease == nil {
+ m.Lease = &Lease{}
+ }
+ if err := m.Lease.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *DeleteRequest) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: DeleteRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: DeleteRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ID = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ListRequest) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ListRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ListRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Filters", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Filters = append(m.Filters, string(dAtA[iNdEx:postIndex]))
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ListResponse) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ListResponse: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ListResponse: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Leases", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLeases
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Leases = append(m.Leases, &Lease{})
+ if err := m.Leases[len(m.Leases)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipLeases(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthLeases
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func skipLeases(dAtA []byte) (n int, err error) {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ wireType := int(wire & 0x7)
+ switch wireType {
+ case 0:
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ iNdEx++
+ if dAtA[iNdEx-1] < 0x80 {
+ break
+ }
+ }
+ return iNdEx, nil
+ case 1:
+ iNdEx += 8
+ return iNdEx, nil
+ case 2:
+ var length int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ length |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ iNdEx += length
+ if length < 0 {
+ return 0, ErrInvalidLengthLeases
+ }
+ return iNdEx, nil
+ case 3:
+ for {
+ var innerWire uint64
+ var start int = iNdEx
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowLeases
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ innerWire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ innerWireType := int(innerWire & 0x7)
+ if innerWireType == 4 {
+ break
+ }
+ next, err := skipLeases(dAtA[start:])
+ if err != nil {
+ return 0, err
+ }
+ iNdEx = start + next
+ }
+ return iNdEx, nil
+ case 4:
+ return iNdEx, nil
+ case 5:
+ iNdEx += 4
+ return iNdEx, nil
+ default:
+ return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
+ }
+ }
+ panic("unreachable")
+}
+
+var (
+ ErrInvalidLengthLeases = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowLeases = fmt.Errorf("proto: integer overflow")
+)
+
+func init() {
+ proto.RegisterFile("github.com/containerd/containerd/api/services/leases/v1/leases.proto", fileDescriptorLeases)
+}
+
+var fileDescriptorLeases = []byte{
+ // 501 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x94, 0xdf, 0x8a, 0xd3, 0x40,
+ 0x14, 0xc6, 0x3b, 0xa9, 0x8d, 0xf6, 0xd4, 0x15, 0x19, 0x96, 0x25, 0x44, 0x4c, 0x4b, 0x10, 0xb6,
+ 0xf8, 0x67, 0xe2, 0xd6, 0x9b, 0x75, 0x15, 0xc1, 0x6e, 0x17, 0x14, 0x82, 0x48, 0xf0, 0x42, 0xbc,
+ 0x59, 0xd2, 0xf6, 0x6c, 0x0c, 0xa6, 0x9d, 0x98, 0x99, 0x16, 0x7a, 0xe7, 0x23, 0xf8, 0x08, 0x3e,
+ 0x84, 0x0f, 0xd1, 0x4b, 0x2f, 0xbd, 0x5a, 0xdd, 0xdc, 0xf9, 0x16, 0x92, 0x99, 0x84, 0xfd, 0x23,
+ 0xda, 0x2a, 0xde, 0x9d, 0xc9, 0x7c, 0xdf, 0x39, 0xbf, 0xf3, 0xc1, 0x04, 0x06, 0x51, 0x2c, 0xdf,
+ 0xce, 0x86, 0x6c, 0xc4, 0x27, 0xde, 0x88, 0x4f, 0x65, 0x18, 0x4f, 0x31, 0x1b, 0x9f, 0x2d, 0xc3,
+ 0x34, 0xf6, 0x04, 0x66, 0xf3, 0x78, 0x84, 0xc2, 0x4b, 0x30, 0x14, 0x28, 0xbc, 0xf9, 0x4e, 0x59,
+ 0xb1, 0x34, 0xe3, 0x92, 0xd3, 0x9b, 0xa7, 0x7a, 0x56, 0x69, 0x59, 0xa9, 0x98, 0xef, 0xd8, 0x9b,
+ 0x11, 0x8f, 0xb8, 0x52, 0x7a, 0x45, 0xa5, 0x4d, 0xf6, 0x8d, 0x88, 0xf3, 0x28, 0x41, 0x4f, 0x9d,
+ 0x86, 0xb3, 0x23, 0x0f, 0x27, 0xa9, 0x5c, 0x94, 0x97, 0xed, 0x8b, 0x97, 0x32, 0x9e, 0xa0, 0x90,
+ 0xe1, 0x24, 0xd5, 0x02, 0xf7, 0x07, 0x81, 0x86, 0x5f, 0x4c, 0xa0, 0x5b, 0x60, 0xc4, 0x63, 0x8b,
+ 0x74, 0x48, 0xb7, 0xd9, 0x37, 0xf3, 0xe3, 0xb6, 0xf1, 0x7c, 0x10, 0x18, 0xf1, 0x98, 0xee, 0x03,
+ 0x8c, 0x32, 0x0c, 0x25, 0x8e, 0x0f, 0x43, 0x69, 0x19, 0x1d, 0xd2, 0x6d, 0xf5, 0x6c, 0xa6, 0xfb,
+ 0xb2, 0xaa, 0x2f, 0x7b, 0x55, 0xf5, 0xed, 0x5f, 0x59, 0x1e, 0xb7, 0x6b, 0x1f, 0xbf, 0xb5, 0x49,
+ 0xd0, 0x2c, 0x7d, 0x4f, 0x25, 0x7d, 0x06, 0x66, 0x12, 0x0e, 0x31, 0x11, 0x56, 0xbd, 0x53, 0xef,
+ 0xb6, 0x7a, 0xf7, 0xd9, 0x1f, 0x57, 0x65, 0x0a, 0x89, 0xf9, 0xca, 0x72, 0x30, 0x95, 0xd9, 0x22,
+ 0x28, 0xfd, 0xf6, 0x43, 0x68, 0x9d, 0xf9, 0x4c, 0xaf, 0x43, 0xfd, 0x1d, 0x2e, 0x34, 0x76, 0x50,
+ 0x94, 0x74, 0x13, 0x1a, 0xf3, 0x30, 0x99, 0xa1, 0x42, 0x6d, 0x06, 0xfa, 0xb0, 0x67, 0xec, 0x12,
+ 0xf7, 0x33, 0x81, 0x8d, 0x7d, 0x85, 0x14, 0xe0, 0xfb, 0x19, 0x0a, 0xf9, 0xdb, 0x9d, 0x5f, 0x5e,
+ 0xc0, 0xdd, 0x5d, 0x81, 0x7b, 0xae, 0xeb, 0xff, 0xc6, 0xf6, 0xe1, 0x5a, 0xd5, 0x5f, 0xa4, 0x7c,
+ 0x2a, 0x90, 0xee, 0x41, 0x43, 0xcd, 0x56, 0xfe, 0x56, 0xef, 0xd6, 0x3a, 0x61, 0x06, 0xda, 0xe2,
+ 0x6e, 0xc3, 0xc6, 0x00, 0x13, 0x5c, 0x99, 0x81, 0xbb, 0x0d, 0x2d, 0x3f, 0x16, 0xb2, 0x92, 0x59,
+ 0x70, 0xf9, 0x28, 0x4e, 0x24, 0x66, 0xc2, 0x22, 0x9d, 0x7a, 0xb7, 0x19, 0x54, 0x47, 0xd7, 0x87,
+ 0xab, 0x5a, 0x58, 0xd2, 0x3d, 0x06, 0x53, 0xcf, 0x56, 0xc2, 0x75, 0xf1, 0x4a, 0x4f, 0xef, 0x93,
+ 0x01, 0xa6, 0xfa, 0x22, 0x28, 0x82, 0xa9, 0x17, 0xa7, 0x77, 0xff, 0x26, 0x7f, 0xfb, 0xde, 0x9a,
+ 0xea, 0x92, 0xf7, 0x05, 0x98, 0x3a, 0x91, 0x95, 0x63, 0xce, 0x05, 0x67, 0x6f, 0xfd, 0xf2, 0x08,
+ 0x0e, 0x8a, 0x97, 0x47, 0x0f, 0xe1, 0x52, 0x91, 0x07, 0xbd, 0xbd, 0x6a, 0xef, 0xd3, 0x74, 0xed,
+ 0x3b, 0x6b, 0x69, 0x35, 0x70, 0xff, 0xf5, 0xf2, 0xc4, 0xa9, 0x7d, 0x3d, 0x71, 0x6a, 0x1f, 0x72,
+ 0x87, 0x2c, 0x73, 0x87, 0x7c, 0xc9, 0x1d, 0xf2, 0x3d, 0x77, 0xc8, 0x9b, 0x27, 0xff, 0xf8, 0x1b,
+ 0x7a, 0xa4, 0xab, 0xa1, 0xa9, 0x56, 0x79, 0xf0, 0x33, 0x00, 0x00, 0xff, 0xff, 0x1d, 0xb9, 0xa6,
+ 0x63, 0xcf, 0x04, 0x00, 0x00,
+}
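
The Unmarshal and skipLeases functions above all repeat the same gogo/protobuf base-128 varint loop: each byte contributes its low seven bits, and a byte below 0x80 terminates the value. The standalone sketch below is not part of the vendored code (names are invented for illustration); it only shows that loop in isolation.

package main

import (
    "errors"
    "fmt"
)

// decodeVarint mirrors the shift/mask loop emitted by gogo/protobuf in the
// generated Unmarshal methods above.
func decodeVarint(data []byte) (uint64, int, error) {
    var (
        value uint64
        n     int
    )
    for shift := uint(0); ; shift += 7 {
        if shift >= 64 {
            return 0, 0, errors.New("varint overflows a uint64")
        }
        if n >= len(data) {
            return 0, 0, errors.New("unexpected end of data")
        }
        b := data[n]
        n++
        value |= (uint64(b) & 0x7F) << shift
        if b < 0x80 {
            return value, n, nil
        }
    }
}

func main() {
    // 300 is encoded on the wire as 0xAC 0x02.
    v, n, err := decodeVarint([]byte{0xAC, 0x02})
    fmt.Println(v, n, err) // 300 2 <nil>
}

Running it decodes 0xAC 0x02 to 300, the same arithmetic the generated code performs for every tag, length and map key above.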
diff --git a/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.proto b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.proto
new file mode 100644
index 0000000000..29d58d1119
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/api/services/leases/v1/leases.proto
@@ -0,0 +1,58 @@
+syntax = "proto3";
+
+package containerd.services.leases.v1;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/empty.proto";
+import "google/protobuf/timestamp.proto";
+
+option go_package = "github.com/containerd/containerd/api/services/leases/v1;leases";
+
+// Leases service manages resource leases within the metadata store.
+service Leases {
+ // Create creates a new lease for managing changes to metadata. A lease
+ // can be used to protect objects from being removed.
+ rpc Create(CreateRequest) returns (CreateResponse);
+
+ // Delete deletes the lease and makes any unreferenced objects created
+ // during the lease eligible for garbage collection if not referenced
+ // or retained by other resources during the lease.
+ rpc Delete(DeleteRequest) returns (google.protobuf.Empty);
+
+  // List lists all active leases, returning the full list of
+ // leases and optionally including the referenced resources.
+ rpc List(ListRequest) returns (ListResponse);
+}
+
+// Lease is an object which retains resources while it exists.
+message Lease {
+ string id = 1;
+
+ google.protobuf.Timestamp created_at = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
+
+  map<string, string> labels = 3;
+}
+
+message CreateRequest {
+  // ID is used to identify the lease; when the ID is not set, the service
+ // generates a random identifier for the lease.
+ string id = 1;
+
+  map<string, string> labels = 3;
+}
+
+message CreateResponse {
+ Lease lease = 1;
+}
+
+message DeleteRequest {
+ string id = 1;
+}
+
+message ListRequest {
+ repeated string filters = 1;
+}
+
+message ListResponse {
+ repeated Lease leases = 1;
+}
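
The service comments above describe the lease lifecycle. As a rough orientation only, a caller holding an existing *grpc.ClientConn to containerd could drive it through the generated client from this package as sketched below; the helper name and callback are invented, and the vendored lease.go further down in this patch wraps the same Create/Delete calls.

package example

import (
    "context"

    leasesapi "github.com/containerd/containerd/api/services/leases/v1"
    "google.golang.org/grpc"
)

// withTemporaryLease creates a lease, runs fn while the lease is held, and
// deletes the lease afterwards so unreferenced resources become eligible for
// garbage collection again.
func withTemporaryLease(ctx context.Context, conn *grpc.ClientConn, fn func(context.Context) error) error {
    client := leasesapi.NewLeasesClient(conn)

    // An empty ID asks the service to generate one.
    created, err := client.Create(ctx, &leasesapi.CreateRequest{})
    if err != nil {
        return err
    }
    defer client.Delete(ctx, &leasesapi.DeleteRequest{ID: created.Lease.ID})

    // Requests made inside fn would normally carry the lease ID in the
    // "containerd-lease" gRPC header (see leases/grpc.go below).
    return fn(ctx)
}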
diff --git a/components/engine/vendor/github.com/containerd/containerd/client.go b/components/engine/vendor/github.com/containerd/containerd/client.go
index 6f0eb28a2c..663f2fab86 100644
--- a/components/engine/vendor/github.com/containerd/containerd/client.go
+++ b/components/engine/vendor/github.com/containerd/containerd/client.go
@@ -22,9 +22,11 @@ import (
versionservice "github.com/containerd/containerd/api/services/version/v1"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/dialer"
"github.com/containerd/containerd/diff"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/plugin"
"github.com/containerd/containerd/reference"
@@ -34,6 +36,7 @@ import (
contentservice "github.com/containerd/containerd/services/content"
diffservice "github.com/containerd/containerd/services/diff"
imagesservice "github.com/containerd/containerd/services/images"
+ namespacesservice "github.com/containerd/containerd/services/namespaces"
snapshotservice "github.com/containerd/containerd/services/snapshot"
"github.com/containerd/containerd/snapshot"
"github.com/containerd/typeurl"
@@ -70,7 +73,7 @@ func New(address string, opts ...ClientOpt) (*Client, error) {
grpc.WithTimeout(60 * time.Second),
grpc.FailOnNonTempDialError(true),
grpc.WithBackoffMaxDelay(3 * time.Second),
- grpc.WithDialer(Dialer),
+ grpc.WithDialer(dialer.Dialer),
}
if len(copts.dialOptions) > 0 {
gopts = copts.dialOptions
@@ -82,7 +85,7 @@ func New(address string, opts ...ClientOpt) (*Client, error) {
grpc.WithStreamInterceptor(stream),
)
}
- conn, err := grpc.Dial(DialAddress(address), gopts...)
+ conn, err := grpc.Dial(dialer.DialAddress(address), gopts...)
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", address)
}
@@ -135,6 +138,12 @@ func (c *Client) Containers(ctx context.Context, filters ...string) ([]Container
// NewContainer will create a new container in container with the provided id
// the id must be unique within the namespace
func (c *Client) NewContainer(ctx context.Context, id string, opts ...NewContainerOpts) (Container, error) {
+ ctx, done, err := c.withLease(ctx)
+ if err != nil {
+ return nil, err
+ }
+ defer done()
+
container := containers.Container{
ID: id,
Runtime: containers.RuntimeInfo{
@@ -210,6 +219,12 @@ func (c *Client) Pull(ctx context.Context, ref string, opts ...RemoteOpt) (Image
}
store := c.ContentStore()
+ ctx, done, err := c.withLease(ctx)
+ if err != nil {
+ return nil, err
+ }
+ defer done()
+
name, desc, err := pullCtx.Resolver.Resolve(ctx, ref)
if err != nil {
return nil, err
@@ -228,7 +243,7 @@ func (c *Client) Pull(ctx context.Context, ref string, opts ...RemoteOpt) (Image
handler = images.Handlers(append(pullCtx.BaseHandlers, schema1Converter)...)
} else {
handler = images.Handlers(append(pullCtx.BaseHandlers,
- remotes.FetchHandler(store, fetcher, desc),
+ remotes.FetchHandler(store, fetcher),
images.ChildrenHandler(store, platforms.Default()))...,
)
}
@@ -265,11 +280,6 @@ func (c *Client) Pull(ctx context.Context, ref string, opts ...RemoteOpt) (Image
imgrec = created
}
- // Remove root tag from manifest now that image refers to it
- if _, err := store.Update(ctx, content.Info{Digest: desc.Digest}, "labels.containerd.io/gc.root"); err != nil {
- return nil, errors.Wrap(err, "failed to remove manifest root tag")
- }
-
img := &image{
client: c,
i: imgrec,
@@ -414,9 +424,9 @@ func (c *Client) Close() error {
return c.conn.Close()
}
-// NamespaceService returns the underlying NamespacesClient
-func (c *Client) NamespaceService() namespacesapi.NamespacesClient {
- return namespacesapi.NewNamespacesClient(c.conn)
+// NamespaceService returns the underlying Namespaces Store
+func (c *Client) NamespaceService() namespaces.Store {
+ return namespacesservice.NewStoreFromClient(namespacesapi.NewNamespacesClient(c.conn))
}
// ContainerService returns the underlying container Store
@@ -449,6 +459,7 @@ func (c *Client) DiffService() diff.Differ {
return diffservice.NewDiffServiceFromClient(diffapi.NewDiffClient(c.conn))
}
+// IntrospectionService returns the underlying Introspection Client
func (c *Client) IntrospectionService() introspectionapi.IntrospectionClient {
return introspectionapi.NewIntrospectionClient(c.conn)
}
@@ -580,6 +591,13 @@ func (c *Client) Import(ctx context.Context, ref string, reader io.Reader, opts
if err != nil {
return nil, err
}
+
+ ctx, done, err := c.withLease(ctx)
+ if err != nil {
+ return nil, err
+ }
+ defer done()
+
switch iopts.format {
case ociImageFormat:
return c.importFromOCITar(ctx, ref, reader, iopts)
diff --git a/components/engine/vendor/github.com/containerd/containerd/container_opts.go b/components/engine/vendor/github.com/containerd/containerd/container_opts.go
index 57ffc0a368..4c534fe12d 100644
--- a/components/engine/vendor/github.com/containerd/containerd/container_opts.go
+++ b/components/engine/vendor/github.com/containerd/containerd/container_opts.go
@@ -2,12 +2,10 @@ package containerd
import (
"context"
- "time"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/platforms"
- "github.com/containerd/containerd/snapshot"
"github.com/containerd/typeurl"
"github.com/gogo/protobuf/types"
"github.com/opencontainers/image-spec/identity"
@@ -93,11 +91,8 @@ func WithNewSnapshot(id string, i Image) NewContainerOpts {
return err
}
setSnapshotterIfEmpty(c)
- labels := map[string]string{
- "containerd.io/gc.root": time.Now().String(),
- }
parent := identity.ChainID(diffIDs).String()
- if _, err := client.SnapshotService(c.Snapshotter).Prepare(ctx, id, parent, snapshot.WithLabels(labels)); err != nil {
+ if _, err := client.SnapshotService(c.Snapshotter).Prepare(ctx, id, parent); err != nil {
return err
}
c.SnapshotKey = id
@@ -126,11 +121,8 @@ func WithNewSnapshotView(id string, i Image) NewContainerOpts {
return err
}
setSnapshotterIfEmpty(c)
- labels := map[string]string{
- "containerd.io/gc.root": time.Now().String(),
- }
parent := identity.ChainID(diffIDs).String()
- if _, err := client.SnapshotService(c.Snapshotter).View(ctx, id, parent, snapshot.WithLabels(labels)); err != nil {
+ if _, err := client.SnapshotService(c.Snapshotter).View(ctx, id, parent); err != nil {
return err
}
c.SnapshotKey = id
diff --git a/components/engine/vendor/github.com/containerd/containerd/content/local/store.go b/components/engine/vendor/github.com/containerd/containerd/content/local/store.go
index c7854cf9d1..14a98881dd 100644
--- a/components/engine/vendor/github.com/containerd/containerd/content/local/store.go
+++ b/components/engine/vendor/github.com/containerd/containerd/content/local/store.go
@@ -8,6 +8,7 @@ import (
"os"
"path/filepath"
"strconv"
+ "strings"
"sync"
"time"
@@ -27,6 +28,19 @@ var (
}
)
+// LabelStore is used to store mutable labels for digests
+type LabelStore interface {
+ // Get returns all the labels for the given digest
+ Get(digest.Digest) (map[string]string, error)
+
+ // Set sets all the labels for a given digest
+ Set(digest.Digest, map[string]string) error
+
+ // Update replaces the given labels for a digest,
+ // a key with an empty value removes a label.
+ Update(digest.Digest, map[string]string) (map[string]string, error)
+}
+
// Store is digest-keyed store for content. All data written into the store is
// stored under a verifiable digest.
//
@@ -34,16 +48,27 @@ var (
// including resumable ingest.
type store struct {
root string
+ ls LabelStore
}
// NewStore returns a local content store
func NewStore(root string) (content.Store, error) {
+ return NewLabeledStore(root, nil)
+}
+
+// NewLabeledStore returns a new content store using the provided label store
+//
+// Note: content stores which are used underneath a metadata store may not
+// require labels and should use `NewStore`. `NewLabeledStore` is primarily
+// useful for tests or standalone implementations.
+func NewLabeledStore(root string, ls LabelStore) (content.Store, error) {
if err := os.MkdirAll(filepath.Join(root, "ingest"), 0777); err != nil && !os.IsExist(err) {
return nil, err
}
return &store{
root: root,
+ ls: ls,
}, nil
}
@@ -57,16 +82,23 @@ func (s *store) Info(ctx context.Context, dgst digest.Digest) (content.Info, err
return content.Info{}, err
}
-
- return s.info(dgst, fi), nil
+ var labels map[string]string
+ if s.ls != nil {
+ labels, err = s.ls.Get(dgst)
+ if err != nil {
+ return content.Info{}, err
+ }
+ }
+ return s.info(dgst, fi, labels), nil
}
-func (s *store) info(dgst digest.Digest, fi os.FileInfo) content.Info {
+func (s *store) info(dgst digest.Digest, fi os.FileInfo, labels map[string]string) content.Info {
return content.Info{
Digest: dgst,
Size: fi.Size(),
CreatedAt: fi.ModTime(),
- UpdatedAt: fi.ModTime(),
+ UpdatedAt: getATime(fi),
+ Labels: labels,
}
}
@@ -111,8 +143,66 @@ func (s *store) Delete(ctx context.Context, dgst digest.Digest) error {
}
func (s *store) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error) {
- // TODO: Support persisting and updating mutable content data
- return content.Info{}, errors.Wrapf(errdefs.ErrFailedPrecondition, "update not supported on immutable content store")
+ if s.ls == nil {
+ return content.Info{}, errors.Wrapf(errdefs.ErrFailedPrecondition, "update not supported on immutable content store")
+ }
+
+ p := s.blobPath(info.Digest)
+ fi, err := os.Stat(p)
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = errors.Wrapf(errdefs.ErrNotFound, "content %v", info.Digest)
+ }
+
+ return content.Info{}, err
+ }
+
+ var (
+ all bool
+ labels map[string]string
+ )
+ if len(fieldpaths) > 0 {
+ for _, path := range fieldpaths {
+ if strings.HasPrefix(path, "labels.") {
+ if labels == nil {
+ labels = map[string]string{}
+ }
+
+ key := strings.TrimPrefix(path, "labels.")
+ labels[key] = info.Labels[key]
+ continue
+ }
+
+ switch path {
+ case "labels":
+ all = true
+ labels = info.Labels
+ default:
+ return content.Info{}, errors.Wrapf(errdefs.ErrInvalidArgument, "cannot update %q field on content info %q", path, info.Digest)
+ }
+ }
+ } else {
+ all = true
+ labels = info.Labels
+ }
+
+ if all {
+ err = s.ls.Set(info.Digest, labels)
+ } else {
+ labels, err = s.ls.Update(info.Digest, labels)
+ }
+ if err != nil {
+ return content.Info{}, err
+ }
+
+ info = s.info(info.Digest, fi, labels)
+ info.UpdatedAt = time.Now()
+
+ if err := os.Chtimes(p, info.UpdatedAt, info.CreatedAt); err != nil {
+ log.G(ctx).WithError(err).Warnf("could not change access time for %s", info.Digest)
+ }
+
+ return info, nil
}
func (s *store) Walk(ctx context.Context, fn content.WalkFunc, filters ...string) error {
@@ -154,7 +244,14 @@ func (s *store) Walk(ctx context.Context, fn content.WalkFunc, filters ...string
// store or extra paths not expected previously.
}
- return fn(s.info(dgst, fi))
+ var labels map[string]string
+ if s.ls != nil {
+ labels, err = s.ls.Get(dgst)
+ if err != nil {
+ return err
+ }
+ }
+ return fn(s.info(dgst, fi, labels))
})
}
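
The LabelStore interface introduced above only requires Get, Set and a merge-style Update keyed by digest, so a minimal in-memory implementation is easy to sketch for the tests or standalone use that the NewLabeledStore comment calls out. This is illustrative only: the package and type names below are invented, and containerd's real label store lives in its metadata database.

package labelstore

import (
    "sync"

    "github.com/opencontainers/go-digest"
)

// memoryLabelStore keeps digest labels in a mutex-guarded map and satisfies
// the LabelStore interface above.
type memoryLabelStore struct {
    mu     sync.Mutex
    labels map[digest.Digest]map[string]string
}

func newMemoryLabelStore() *memoryLabelStore {
    return &memoryLabelStore{labels: map[digest.Digest]map[string]string{}}
}

// Get returns all the labels recorded for the given digest.
func (s *memoryLabelStore) Get(d digest.Digest) (map[string]string, error) {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.labels[d], nil
}

// Set replaces every label for the given digest.
func (s *memoryLabelStore) Set(d digest.Digest, labels map[string]string) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.labels[d] = labels
    return nil
}

// Update merges the given labels into the stored set; a key with an empty
// value removes that label, matching the interface comment above.
func (s *memoryLabelStore) Update(d digest.Digest, update map[string]string) (map[string]string, error) {
    s.mu.Lock()
    defer s.mu.Unlock()
    stored := s.labels[d]
    if stored == nil {
        stored = map[string]string{}
    }
    for k, v := range update {
        if v == "" {
            delete(stored, k)
        } else {
            stored[k] = v
        }
    }
    s.labels[d] = stored
    return stored, nil
}

Passing such a store to NewLabeledStore(root, ...) is enough to exercise the new Update, Commit and Walk label paths in the local content store.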
diff --git a/components/engine/vendor/github.com/containerd/containerd/content/local/store_unix.go b/components/engine/vendor/github.com/containerd/containerd/content/local/store_unix.go
index 46eab02c51..0d500b84d0 100644
--- a/components/engine/vendor/github.com/containerd/containerd/content/local/store_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/content/local/store_unix.go
@@ -18,3 +18,12 @@ func getStartTime(fi os.FileInfo) time.Time {
return fi.ModTime()
}
+
+func getATime(fi os.FileInfo) time.Time {
+ if st, ok := fi.Sys().(*syscall.Stat_t); ok {
+ return time.Unix(int64(sys.StatAtime(st).Sec),
+ int64(sys.StatAtime(st).Nsec))
+ }
+
+ return fi.ModTime()
+}
diff --git a/components/engine/vendor/github.com/containerd/containerd/content/local/store_windows.go b/components/engine/vendor/github.com/containerd/containerd/content/local/store_windows.go
index 7fb6ad43a5..5f12ea5c42 100644
--- a/components/engine/vendor/github.com/containerd/containerd/content/local/store_windows.go
+++ b/components/engine/vendor/github.com/containerd/containerd/content/local/store_windows.go
@@ -8,3 +8,7 @@ import (
func getStartTime(fi os.FileInfo) time.Time {
return fi.ModTime()
}
+
+func getATime(fi os.FileInfo) time.Time {
+ return fi.ModTime()
+}
diff --git a/components/engine/vendor/github.com/containerd/containerd/content/local/writer.go b/components/engine/vendor/github.com/containerd/containerd/content/local/writer.go
index c4f1a94f31..8f1e92ded6 100644
--- a/components/engine/vendor/github.com/containerd/containerd/content/local/writer.go
+++ b/components/engine/vendor/github.com/containerd/containerd/content/local/writer.go
@@ -56,6 +56,13 @@ func (w *writer) Write(p []byte) (n int, err error) {
}
func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
+ var base content.Info
+ for _, opt := range opts {
+ if err := opt(&base); err != nil {
+ return err
+ }
+ }
+
if w.fp == nil {
return errors.Wrap(errdefs.ErrFailedPrecondition, "cannot commit on closed writer")
}
@@ -123,6 +130,12 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
w.fp = nil
unlock(w.ref)
+ if w.s.ls != nil && base.Labels != nil {
+ if err := w.s.ls.Set(dgst, base.Labels); err != nil {
+ return err
+ }
+ }
+
return nil
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/dialer.go b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer.go
similarity index 97%
rename from components/engine/vendor/github.com/containerd/containerd/dialer.go
rename to components/engine/vendor/github.com/containerd/containerd/dialer/dialer.go
index c87cf12d00..65af69f9bc 100644
--- a/components/engine/vendor/github.com/containerd/containerd/dialer.go
+++ b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer.go
@@ -1,4 +1,4 @@
-package containerd
+package dialer
import (
"net"
diff --git a/components/engine/vendor/github.com/containerd/containerd/dialer_unix.go b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer_unix.go
similarity index 97%
rename from components/engine/vendor/github.com/containerd/containerd/dialer_unix.go
rename to components/engine/vendor/github.com/containerd/containerd/dialer/dialer_unix.go
index 2e97d17a45..7f8d43b031 100644
--- a/components/engine/vendor/github.com/containerd/containerd/dialer_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer_unix.go
@@ -1,6 +1,6 @@
// +build !windows
-package containerd
+package dialer
import (
"fmt"
@@ -11,6 +11,12 @@ import (
"time"
)
+// DialAddress returns the address with unix:// prepended to the
+// provided address
+func DialAddress(address string) string {
+ return fmt.Sprintf("unix://%s", address)
+}
+
func isNoent(err error) bool {
if err != nil {
if nerr, ok := err.(*net.OpError); ok {
@@ -28,9 +34,3 @@ func dialer(address string, timeout time.Duration) (net.Conn, error) {
address = strings.TrimPrefix(address, "unix://")
return net.DialTimeout("unix", address, timeout)
}
-
-// DialAddress returns the address with unix:// prepended to the
-// provided address
-func DialAddress(address string) string {
- return fmt.Sprintf("unix://%s", address)
-}
diff --git a/components/engine/vendor/github.com/containerd/containerd/dialer_windows.go b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer_windows.go
similarity index 88%
rename from components/engine/vendor/github.com/containerd/containerd/dialer_windows.go
rename to components/engine/vendor/github.com/containerd/containerd/dialer/dialer_windows.go
index c91a326174..2aac03898a 100644
--- a/components/engine/vendor/github.com/containerd/containerd/dialer_windows.go
+++ b/components/engine/vendor/github.com/containerd/containerd/dialer/dialer_windows.go
@@ -1,4 +1,4 @@
-package containerd
+package dialer
import (
"net"
@@ -24,6 +24,7 @@ func dialer(address string, timeout time.Duration) (net.Conn, error) {
return winio.DialPipe(address, &timeout)
}
+// DialAddress returns the dial address
func DialAddress(address string) string {
return address
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/events/exchange.go b/components/engine/vendor/github.com/containerd/containerd/events/exchange/exchange.go
similarity index 92%
rename from components/engine/vendor/github.com/containerd/containerd/events/exchange.go
rename to components/engine/vendor/github.com/containerd/containerd/events/exchange/exchange.go
index eeeeea3622..3fefb9c255 100644
--- a/components/engine/vendor/github.com/containerd/containerd/events/exchange.go
+++ b/components/engine/vendor/github.com/containerd/containerd/events/exchange/exchange.go
@@ -1,12 +1,13 @@
-package events
+package exchange
import (
"context"
"strings"
"time"
- events "github.com/containerd/containerd/api/services/events/v1"
+ v1 "github.com/containerd/containerd/api/services/events/v1"
"github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/events"
"github.com/containerd/containerd/filters"
"github.com/containerd/containerd/identifiers"
"github.com/containerd/containerd/log"
@@ -34,7 +35,7 @@ func NewExchange() *Exchange {
//
// This is useful when an event is forwaded on behalf of another namespace or
// when the event is propagated on behalf of another publisher.
-func (e *Exchange) Forward(ctx context.Context, envelope *events.Envelope) (err error) {
+func (e *Exchange) Forward(ctx context.Context, envelope *v1.Envelope) (err error) {
if err := validateEnvelope(envelope); err != nil {
return err
}
@@ -59,11 +60,11 @@ func (e *Exchange) Forward(ctx context.Context, envelope *events.Envelope) (err
// Publish packages and sends an event. The caller will be considered the
// initial publisher of the event. This means the timestamp will be calculated
// at this point and this method may read from the calling context.
-func (e *Exchange) Publish(ctx context.Context, topic string, event Event) (err error) {
+func (e *Exchange) Publish(ctx context.Context, topic string, event events.Event) (err error) {
var (
namespace string
encoded *types.Any
- envelope events.Envelope
+ envelope v1.Envelope
)
namespace, err = namespaces.NamespaceRequired(ctx)
@@ -108,9 +109,9 @@ func (e *Exchange) Publish(ctx context.Context, topic string, event Event) (err
// Zero or more filters may be provided as strings. Only events that match
// *any* of the provided filters will be sent on the channel. The filters use
// the standard containerd filters package syntax.
-func (e *Exchange) Subscribe(ctx context.Context, fs ...string) (ch <-chan *events.Envelope, errs <-chan error) {
+func (e *Exchange) Subscribe(ctx context.Context, fs ...string) (ch <-chan *v1.Envelope, errs <-chan error) {
var (
- evch = make(chan *events.Envelope)
+ evch = make(chan *v1.Envelope)
errq = make(chan error, 1)
channel = goevents.NewChannel(0)
queue = goevents.NewQueue(channel)
@@ -150,7 +151,7 @@ func (e *Exchange) Subscribe(ctx context.Context, fs ...string) (ch <-chan *even
for {
select {
case ev := <-channel.C:
- env, ok := ev.(*events.Envelope)
+ env, ok := ev.(*v1.Envelope)
if !ok {
// TODO(stevvooe): For the most part, we are well protected
// from this condition. Both Forward and Publish protect
@@ -204,7 +205,7 @@ func validateTopic(topic string) error {
return nil
}
-func validateEnvelope(envelope *events.Envelope) error {
+func validateEnvelope(envelope *v1.Envelope) error {
if err := namespaces.Validate(envelope.Namespace); err != nil {
return errors.Wrapf(err, "event envelope has invalid namespace")
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/gc/gc.go b/components/engine/vendor/github.com/containerd/containerd/gc/gc.go
index e892be155a..70838a7627 100644
--- a/components/engine/vendor/github.com/containerd/containerd/gc/gc.go
+++ b/components/engine/vendor/github.com/containerd/containerd/gc/gc.go
@@ -10,7 +10,7 @@ import (
"sync"
)
-// Resourcetype represents type of resource at a node
+// ResourceType represents type of resource at a node
type ResourceType uint8
// Node presents a resource which has a type and key,
@@ -145,10 +145,10 @@ func ConcurrentMark(ctx context.Context, root <-chan Node, refs func(context.Con
// Sweep removes all nodes returned through the channel which are not in
// the reachable set by calling the provided remove function.
-func Sweep(reachable map[Node]struct{}, all <-chan Node, remove func(Node) error) error {
+func Sweep(reachable map[Node]struct{}, all []Node, remove func(Node) error) error {
// All black objects are now reachable, and all white objects are
// unreachable. Free those that are white!
- for node := range all {
+ for _, node := range all {
if _, ok := reachable[node]; !ok {
if err := remove(node); err != nil {
return err
diff --git a/components/engine/vendor/github.com/containerd/containerd/image.go b/components/engine/vendor/github.com/containerd/containerd/image.go
index b41037a16d..4f978e2ea7 100644
--- a/components/engine/vendor/github.com/containerd/containerd/image.go
+++ b/components/engine/vendor/github.com/containerd/containerd/image.go
@@ -3,9 +3,9 @@ package containerd
import (
"context"
"fmt"
- "time"
"github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/rootfs"
@@ -30,6 +30,8 @@ type Image interface {
Size(ctx context.Context) (int64, error)
// Config descriptor for the image.
Config(ctx context.Context) (ocispec.Descriptor, error)
+ // IsUnpacked returns whether or not an image is unpacked.
+ IsUnpacked(context.Context, string) (bool, error)
}
var _ = (Image)(&image{})
@@ -63,6 +65,26 @@ func (i *image) Config(ctx context.Context) (ocispec.Descriptor, error) {
return i.i.Config(ctx, provider, platforms.Default())
}
+func (i *image) IsUnpacked(ctx context.Context, snapshotterName string) (bool, error) {
+ sn := i.client.SnapshotService(snapshotterName)
+ cs := i.client.ContentStore()
+
+ diffs, err := i.i.RootFS(ctx, cs, platforms.Default())
+ if err != nil {
+ return false, err
+ }
+
+ chainID := identity.ChainID(diffs)
+ _, err = sn.Stat(ctx, chainID.String())
+ if err == nil {
+ return true, nil
+ } else if !errdefs.IsNotFound(err) {
+ return false, err
+ }
+
+ return false, nil
+}
+
func (i *image) Unpack(ctx context.Context, snapshotterName string) error {
layers, err := i.getLayers(ctx, platforms.Default())
if err != nil {
@@ -79,27 +101,14 @@ func (i *image) Unpack(ctx context.Context, snapshotterName string) error {
)
for _, layer := range layers {
labels := map[string]string{
- "containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339),
"containerd.io/uncompressed": layer.Diff.Digest.String(),
}
- lastUnpacked := unpacked
unpacked, err = rootfs.ApplyLayer(ctx, layer, chain, sn, a, snapshot.WithLabels(labels))
if err != nil {
return err
}
- if lastUnpacked {
- info := snapshot.Info{
- Name: identity.ChainID(chain).String(),
- }
-
- // Remove previously created gc.root label
- if _, err := sn.Update(ctx, info, "labels.containerd.io/gc.root"); err != nil {
- return err
- }
- }
-
chain = append(chain, layer.Diff.Digest)
}
@@ -120,15 +129,6 @@ func (i *image) Unpack(ctx context.Context, snapshotterName string) error {
if _, err := cs.Update(ctx, cinfo, fmt.Sprintf("labels.containerd.io/gc.ref.snapshot.%s", snapshotterName)); err != nil {
return err
}
-
- sinfo := snapshot.Info{
- Name: rootfs,
- }
-
- // Config now referenced snapshot, release root reference
- if _, err := sn.Update(ctx, sinfo, "labels.containerd.io/gc.root"); err != nil {
- return err
- }
}
return nil
diff --git a/components/engine/vendor/github.com/containerd/containerd/io_unix.go b/components/engine/vendor/github.com/containerd/containerd/io_unix.go
index 432553fe37..08aba14bac 100644
--- a/components/engine/vendor/github.com/containerd/containerd/io_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/io_unix.go
@@ -16,7 +16,7 @@ import (
// NewFifos returns a new set of fifos for the task
func NewFifos(id string) (*FIFOSet, error) {
- root := filepath.Join(os.TempDir(), "containerd")
+ root := "/run/containerd/fifo"
if err := os.MkdirAll(root, 0700); err != nil {
return nil, err
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/lease.go b/components/engine/vendor/github.com/containerd/containerd/lease.go
new file mode 100644
index 0000000000..6ecc58dec3
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/lease.go
@@ -0,0 +1,91 @@
+package containerd
+
+import (
+ "context"
+ "time"
+
+ leasesapi "github.com/containerd/containerd/api/services/leases/v1"
+ "github.com/containerd/containerd/leases"
+)
+
+// Lease is used to hold a reference to active resources which have not been
+// referenced by a root resource. This is useful for preventing garbage
+// collection of resources while they are actively being updated.
+type Lease struct {
+ id string
+ createdAt time.Time
+
+ client *Client
+}
+
+// CreateLease creates a new lease
+func (c *Client) CreateLease(ctx context.Context) (Lease, error) {
+ lapi := leasesapi.NewLeasesClient(c.conn)
+ resp, err := lapi.Create(ctx, &leasesapi.CreateRequest{})
+ if err != nil {
+ return Lease{}, err
+ }
+
+ return Lease{
+ id: resp.Lease.ID,
+ client: c,
+ }, nil
+}
+
+// ListLeases lists active leases
+func (c *Client) ListLeases(ctx context.Context) ([]Lease, error) {
+ lapi := leasesapi.NewLeasesClient(c.conn)
+ resp, err := lapi.List(ctx, &leasesapi.ListRequest{})
+ if err != nil {
+ return nil, err
+ }
+ leases := make([]Lease, len(resp.Leases))
+ for i := range resp.Leases {
+ leases[i] = Lease{
+ id: resp.Leases[i].ID,
+ createdAt: resp.Leases[i].CreatedAt,
+ client: c,
+ }
+ }
+
+ return leases, nil
+}
+
+func (c *Client) withLease(ctx context.Context) (context.Context, func() error, error) {
+ _, ok := leases.Lease(ctx)
+ if ok {
+ return ctx, func() error {
+ return nil
+ }, nil
+ }
+
+ l, err := c.CreateLease(ctx)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ ctx = leases.WithLease(ctx, l.ID())
+ return ctx, func() error {
+ return l.Delete(ctx)
+ }, nil
+}
+
+// ID returns the lease ID
+func (l Lease) ID() string {
+ return l.id
+}
+
+// CreatedAt returns the time at which the lease was created
+func (l Lease) CreatedAt() time.Time {
+ return l.createdAt
+}
+
+// Delete deletes the lease, removing the reference to all resources created
+// during the lease.
+func (l Lease) Delete(ctx context.Context) error {
+ lapi := leasesapi.NewLeasesClient(l.client.conn)
+ _, err := lapi.Delete(ctx, &leasesapi.DeleteRequest{
+ ID: l.id,
+ })
+ return err
+}
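
Outside this package, the same pattern the unexported withLease helper applies inside NewContainer, Pull and Import can be spelled out explicitly: create a lease, attach it to the context, and delete it when the work is done. A hedged sketch follows; the function name and the pull example are illustrative.

package example

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/leases"
)

// pullUnderLease holds an explicit lease around a pull. Because withLease
// reuses a lease already present on the context, the client defers to this
// lease and the pulled content stays protected from garbage collection until
// the deferred Delete runs.
func pullUnderLease(ctx context.Context, client *containerd.Client, ref string) error {
    lease, err := client.CreateLease(ctx)
    if err != nil {
        return err
    }
    defer func() {
        if err := lease.Delete(ctx); err != nil {
            log.Printf("deleting lease %s: %v", lease.ID(), err)
        }
    }()

    ctx = leases.WithLease(ctx, lease.ID())
    _, err = client.Pull(ctx, ref)
    return err
}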
diff --git a/components/engine/vendor/github.com/containerd/containerd/leases/context.go b/components/engine/vendor/github.com/containerd/containerd/leases/context.go
new file mode 100644
index 0000000000..cfd7e4a46e
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/leases/context.go
@@ -0,0 +1,24 @@
+package leases
+
+import "context"
+
+type leaseKey struct{}
+
+// WithLease sets a given lease on the context
+func WithLease(ctx context.Context, lid string) context.Context {
+ ctx = context.WithValue(ctx, leaseKey{}, lid)
+
+ // also store on the grpc headers so it gets picked up by any clients that
+ // are using this.
+ return withGRPCLeaseHeader(ctx, lid)
+}
+
+// Lease returns the lease from the context.
+func Lease(ctx context.Context) (string, bool) {
+ lid, ok := ctx.Value(leaseKey{}).(string)
+ if !ok {
+ return fromGRPCHeader(ctx)
+ }
+
+ return lid, ok
+}
diff --git a/components/engine/vendor/github.com/containerd/containerd/leases/grpc.go b/components/engine/vendor/github.com/containerd/containerd/leases/grpc.go
new file mode 100644
index 0000000000..cea5b25feb
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/leases/grpc.go
@@ -0,0 +1,41 @@
+package leases
+
+import (
+ "golang.org/x/net/context"
+ "google.golang.org/grpc/metadata"
+)
+
+const (
+ // GRPCHeader defines the header name for specifying a containerd lease.
+ GRPCHeader = "containerd-lease"
+)
+
+func withGRPCLeaseHeader(ctx context.Context, lid string) context.Context {
+ // also store on the grpc headers so it gets picked up by any clients
+ // that are using this.
+ txheader := metadata.Pairs(GRPCHeader, lid)
+ md, ok := metadata.FromOutgoingContext(ctx) // merge with outgoing context.
+ if !ok {
+ md = txheader
+ } else {
+ // order ensures the latest is first in this list.
+ md = metadata.Join(txheader, md)
+ }
+
+ return metadata.NewOutgoingContext(ctx, md)
+}
+
+func fromGRPCHeader(ctx context.Context) (string, bool) {
+ // try to extract for use in grpc servers.
+ md, ok := metadata.FromIncomingContext(ctx)
+ if !ok {
+ return "", false
+ }
+
+ values := md[GRPCHeader]
+ if len(values) == 0 {
+ return "", false
+ }
+
+ return values[0], true
+}
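
WithLease stores the lease ID both as a context value and in outgoing gRPC metadata under the containerd-lease header, so the same Lease helper resolves it on either side of a connection. A small hedged sketch (the leaseFor helper is invented for illustration):

package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd/leases"
)

// leaseFor reports which lease, if any, governs ctx: on the client side it
// finds the value stored by WithLease, and on a server it falls back to the
// incoming "containerd-lease" gRPC header.
func leaseFor(ctx context.Context) string {
    if lid, ok := leases.Lease(ctx); ok {
        return lid
    }
    return ""
}

func main() {
    ctx := leases.WithLease(context.Background(), "example-lease")
    fmt.Println(leaseFor(ctx)) // example-lease
}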
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/bundle.go b/components/engine/vendor/github.com/containerd/containerd/linux/bundle.go
index 3dbfef98d4..72fcab9fe1 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/bundle.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/bundle.go
@@ -9,9 +9,10 @@ import (
"os"
"path/filepath"
- "github.com/containerd/containerd/events"
+ "github.com/containerd/containerd/events/exchange"
"github.com/containerd/containerd/linux/runcopts"
- client "github.com/containerd/containerd/linux/shim"
+ "github.com/containerd/containerd/linux/shim"
+ "github.com/containerd/containerd/linux/shim/client"
"github.com/pkg/errors"
)
@@ -70,32 +71,33 @@ type bundle struct {
workDir string
}
-type shimOpt func(*bundle, string, *runcopts.RuncOptions) (client.Config, client.ClientOpt)
+// ShimOpt specifies shim options for initialization and connection
+type ShimOpt func(*bundle, string, *runcopts.RuncOptions) (shim.Config, client.Opt)
-// ShimRemote is a shimOpt for connecting and starting a remote shim
-func ShimRemote(shim, daemonAddress, cgroup string, nonewns, debug bool, exitHandler func()) shimOpt {
- return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (client.Config, client.ClientOpt) {
+// ShimRemote is a ShimOpt for connecting and starting a remote shim
+func ShimRemote(shimBinary, daemonAddress, cgroup string, nonewns, debug bool, exitHandler func()) ShimOpt {
+ return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (shim.Config, client.Opt) {
return b.shimConfig(ns, ropts),
- client.WithStart(shim, b.shimAddress(ns), daemonAddress, cgroup, nonewns, debug, exitHandler)
+ client.WithStart(shimBinary, b.shimAddress(ns), daemonAddress, cgroup, nonewns, debug, exitHandler)
}
}
-// ShimLocal is a shimOpt for using an in process shim implementation
-func ShimLocal(exchange *events.Exchange) shimOpt {
- return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (client.Config, client.ClientOpt) {
+// ShimLocal is a ShimOpt for using an in process shim implementation
+func ShimLocal(exchange *exchange.Exchange) ShimOpt {
+ return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (shim.Config, client.Opt) {
return b.shimConfig(ns, ropts), client.WithLocal(exchange)
}
}
-// ShimConnect is a shimOpt for connecting to an existing remote shim
-func ShimConnect() shimOpt {
- return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (client.Config, client.ClientOpt) {
+// ShimConnect is a ShimOpt for connecting to an existing remote shim
+func ShimConnect() ShimOpt {
+ return func(b *bundle, ns string, ropts *runcopts.RuncOptions) (shim.Config, client.Opt) {
return b.shimConfig(ns, ropts), client.WithConnect(b.shimAddress(ns))
}
}
// NewShimClient connects to the shim managing the bundle and tasks creating it if needed
-func (b *bundle) NewShimClient(ctx context.Context, namespace string, getClientOpts shimOpt, runcOpts *runcopts.RuncOptions) (*client.Client, error) {
+func (b *bundle) NewShimClient(ctx context.Context, namespace string, getClientOpts ShimOpt, runcOpts *runcopts.RuncOptions) (*client.Client, error) {
cfg, opt := getClientOpts(b, namespace, runcOpts)
return client.New(ctx, cfg, opt)
}
@@ -118,7 +120,7 @@ func (b *bundle) shimAddress(namespace string) string {
return filepath.Join(string(filepath.Separator), "containerd-shim", namespace, b.id, "shim.sock")
}
-func (b *bundle) shimConfig(namespace string, runcOptions *runcopts.RuncOptions) client.Config {
+func (b *bundle) shimConfig(namespace string, runcOptions *runcopts.RuncOptions) shim.Config {
var (
criuPath string
runtimeRoot string
@@ -129,7 +131,7 @@ func (b *bundle) shimConfig(namespace string, runcOptions *runcopts.RuncOptions)
systemdCgroup = runcOptions.SystemdCgroup
runtimeRoot = runcOptions.RuntimeRoot
}
- return client.Config{
+ return shim.Config{
Path: b.path,
WorkDir: b.workDir,
Namespace: namespace,
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/runtime.go b/components/engine/vendor/github.com/containerd/containerd/linux/runtime.go
index 26d001f8a6..44219e40d0 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/runtime.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/runtime.go
@@ -15,7 +15,7 @@ import (
"github.com/containerd/containerd/api/types"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/errdefs"
- "github.com/containerd/containerd/events"
+ "github.com/containerd/containerd/events/exchange"
"github.com/containerd/containerd/identifiers"
"github.com/containerd/containerd/linux/runcopts"
client "github.com/containerd/containerd/linux/shim"
@@ -143,7 +143,7 @@ type Runtime struct {
monitor runtime.TaskMonitor
tasks *runtime.TaskList
db *metadata.DB
- events *events.Exchange
+ events *exchange.Exchange
config *Config
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client.go b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client.go
similarity index 81%
rename from components/engine/vendor/github.com/containerd/containerd/linux/shim/client.go
rename to components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client.go
index eae946bed7..1cfe766c21 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client.go
@@ -1,6 +1,6 @@
// +build !windows
-package shim
+package client
import (
"context"
@@ -20,19 +20,23 @@ import (
"github.com/sirupsen/logrus"
"github.com/containerd/containerd/events"
- shim "github.com/containerd/containerd/linux/shim/v1"
+ "github.com/containerd/containerd/linux/shim"
+ shimapi "github.com/containerd/containerd/linux/shim/v1"
"github.com/containerd/containerd/log"
"github.com/containerd/containerd/reaper"
"github.com/containerd/containerd/sys"
+ google_protobuf "github.com/golang/protobuf/ptypes/empty"
"google.golang.org/grpc"
)
-// ClientOpt is an option for a shim client configuration
-type ClientOpt func(context.Context, Config) (shim.ShimClient, io.Closer, error)
+var empty = &google_protobuf.Empty{}
+
+// Opt is an option for a shim client configuration
+type Opt func(context.Context, shim.Config) (shimapi.ShimClient, io.Closer, error)
// WithStart executes a new shim process
-func WithStart(binary, address, daemonAddress, cgroup string, nonewns, debug bool, exitHandler func()) ClientOpt {
- return func(ctx context.Context, config Config) (_ shim.ShimClient, _ io.Closer, err error) {
+func WithStart(binary, address, daemonAddress, cgroup string, nonewns, debug bool, exitHandler func()) Opt {
+ return func(ctx context.Context, config shim.Config) (_ shimapi.ShimClient, _ io.Closer, err error) {
socket, err := newSocket(address)
if err != nil {
return nil, nil, err
@@ -84,24 +88,24 @@ func WithStart(binary, address, daemonAddress, cgroup string, nonewns, debug boo
}
}
-func newCommand(binary, daemonAddress string, nonewns, debug bool, config Config, socket *os.File) *exec.Cmd {
+func newCommand(binary, daemonAddress string, nonewns, debug bool, config shim.Config, socket *os.File) *exec.Cmd {
args := []string{
- "--namespace", config.Namespace,
- "--workdir", config.WorkDir,
- "--address", daemonAddress,
+ "-namespace", config.Namespace,
+ "-workdir", config.WorkDir,
+ "-address", daemonAddress,
}
if config.Criu != "" {
- args = append(args, "--criu-path", config.Criu)
+ args = append(args, "-criu-path", config.Criu)
}
if config.RuntimeRoot != "" {
- args = append(args, "--runtime-root", config.RuntimeRoot)
+ args = append(args, "-runtime-root", config.RuntimeRoot)
}
if config.SystemdCgroup {
- args = append(args, "--systemd-cgroup")
+ args = append(args, "-systemd-cgroup")
}
if debug {
- args = append(args, "--debug")
+ args = append(args, "-debug")
}
cmd := exec.Command(binary, args...)
@@ -160,39 +164,29 @@ func dialAddress(address string) string {
}
// WithConnect connects to an existing shim
-func WithConnect(address string) ClientOpt {
- return func(ctx context.Context, config Config) (shim.ShimClient, io.Closer, error) {
+func WithConnect(address string) Opt {
+ return func(ctx context.Context, config shim.Config) (shimapi.ShimClient, io.Closer, error) {
conn, err := connect(address, annonDialer)
if err != nil {
return nil, nil, err
}
- return shim.NewShimClient(conn), conn, nil
+ return shimapi.NewShimClient(conn), conn, nil
}
}
// WithLocal uses an in process shim
-func WithLocal(publisher events.Publisher) func(context.Context, Config) (shim.ShimClient, io.Closer, error) {
- return func(ctx context.Context, config Config) (shim.ShimClient, io.Closer, error) {
- service, err := NewService(config, publisher)
+func WithLocal(publisher events.Publisher) func(context.Context, shim.Config) (shimapi.ShimClient, io.Closer, error) {
+ return func(ctx context.Context, config shim.Config) (shimapi.ShimClient, io.Closer, error) {
+ service, err := shim.NewService(config, publisher)
if err != nil {
return nil, nil, err
}
- return NewLocal(service), nil, nil
+ return shim.NewLocal(service), nil, nil
}
}
-// Config contains shim specific configuration
-type Config struct {
- Path string
- Namespace string
- WorkDir string
- Criu string
- RuntimeRoot string
- SystemdCgroup bool
-}
-
// New returns a new shim client
-func New(ctx context.Context, config Config, opt ClientOpt) (*Client, error) {
+func New(ctx context.Context, config shim.Config, opt Opt) (*Client, error) {
s, c, err := opt(ctx, config)
if err != nil {
return nil, err
@@ -206,7 +200,7 @@ func New(ctx context.Context, config Config, opt ClientOpt) (*Client, error) {
// Client is a shim client containing the connection to a shim
type Client struct {
- shim.ShimClient
+ shimapi.ShimClient
c io.Closer
exitCh chan struct{}
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client_linux.go b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_linux.go
similarity index 97%
rename from components/engine/vendor/github.com/containerd/containerd/linux/shim/client_linux.go
rename to components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_linux.go
index 515e88c476..03ebba00cf 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client_linux.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_linux.go
@@ -1,6 +1,6 @@
// +build linux
-package shim
+package client
import (
"os/exec"
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client_unix.go b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_unix.go
similarity index 94%
rename from components/engine/vendor/github.com/containerd/containerd/linux/shim/client_unix.go
rename to components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_unix.go
index d478f3dd71..b34cf4d368 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/shim/client_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/shim/client/client_unix.go
@@ -1,6 +1,6 @@
// +build !linux,!windows
-package shim
+package client
import (
"os/exec"
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/shim/init.go b/components/engine/vendor/github.com/containerd/containerd/linux/shim/init.go
index 88c39a6f7a..01c305bb67 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/shim/init.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/shim/init.go
@@ -98,12 +98,16 @@ func (s *Service) newInitProcess(context context.Context, r *shimapi.CreateTaskR
return nil, errors.Wrapf(err, "failed to mount rootfs component %v", m)
}
}
+ root := s.config.RuntimeRoot
+ if root == "" {
+ root = RuncRoot
+ }
runtime := &runc.Runc{
Command: r.Runtime,
Log: filepath.Join(s.config.Path, "log.json"),
LogFormat: runc.JSON,
PdeathSignal: syscall.SIGKILL,
- Root: filepath.Join(s.config.RuntimeRoot, s.config.Namespace),
+ Root: filepath.Join(root, s.config.Namespace),
Criu: s.config.Criu,
SystemdCgroup: s.config.SystemdCgroup,
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/shim/service.go b/components/engine/vendor/github.com/containerd/containerd/linux/shim/service.go
index 9c9f506022..7b5c5e1164 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/shim/service.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/shim/service.go
@@ -32,6 +32,16 @@ var empty = &google_protobuf.Empty{}
// RuncRoot is the path to the root runc state directory
const RuncRoot = "/run/containerd/runc"
+// Config contains shim specific configuration
+type Config struct {
+ Path string
+ Namespace string
+ WorkDir string
+ Criu string
+ RuntimeRoot string
+ SystemdCgroup bool
+}
+
// NewService returns a new shim service that can be used via GRPC
func NewService(config Config, publisher events.Publisher) (*Service, error) {
if config.Namespace == "" {
diff --git a/components/engine/vendor/github.com/containerd/containerd/linux/task.go b/components/engine/vendor/github.com/containerd/containerd/linux/task.go
index a851ee77ef..268e91a94a 100644
--- a/components/engine/vendor/github.com/containerd/containerd/linux/task.go
+++ b/components/engine/vendor/github.com/containerd/containerd/linux/task.go
@@ -11,7 +11,7 @@ import (
"github.com/containerd/cgroups"
"github.com/containerd/containerd/api/types/task"
"github.com/containerd/containerd/errdefs"
- client "github.com/containerd/containerd/linux/shim"
+ "github.com/containerd/containerd/linux/shim/client"
shim "github.com/containerd/containerd/linux/shim/v1"
"github.com/containerd/containerd/runtime"
"github.com/gogo/protobuf/types"
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/buckets.go b/components/engine/vendor/github.com/containerd/containerd/metadata/buckets.go
index 43849e0804..b6a66ba48c 100644
--- a/components/engine/vendor/github.com/containerd/containerd/metadata/buckets.go
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/buckets.go
@@ -38,6 +38,7 @@ var (
bucketKeyObjectContent = []byte("content") // stores content references
bucketKeyObjectBlob = []byte("blob") // stores content links
bucketKeyObjectIngest = []byte("ingest") // stores ingest links
+ bucketKeyObjectLeases = []byte("leases") // stores leases
bucketKeyDigest = []byte("digest")
bucketKeyMediaType = []byte("mediatype")
@@ -53,6 +54,7 @@ var (
bucketKeySnapshotter = []byte("snapshotter")
bucketKeyTarget = []byte("target")
bucketKeyExtensions = []byte("extensions")
+ bucketKeyCreatedAt = []byte("createdat")
)
func getBucket(tx *bolt.Tx, keys ...[]byte) *bolt.Bucket {
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/content.go b/components/engine/vendor/github.com/containerd/containerd/metadata/content.go
index 05064fdec5..0797345e21 100644
--- a/components/engine/vendor/github.com/containerd/containerd/metadata/content.go
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/content.go
@@ -391,27 +391,31 @@ func (nw *namespacedWriter) Commit(ctx context.Context, size int64, expected dig
return err
}
}
- return nw.commit(ctx, tx, size, expected, opts...)
+ dgst, err := nw.commit(ctx, tx, size, expected, opts...)
+ if err != nil {
+ return err
+ }
+ return addContentLease(ctx, tx, dgst)
})
}
-func (nw *namespacedWriter) commit(ctx context.Context, tx *bolt.Tx, size int64, expected digest.Digest, opts ...content.Opt) error {
+func (nw *namespacedWriter) commit(ctx context.Context, tx *bolt.Tx, size int64, expected digest.Digest, opts ...content.Opt) (digest.Digest, error) {
var base content.Info
for _, opt := range opts {
if err := opt(&base); err != nil {
- return err
+ return "", err
}
}
if err := validateInfo(&base); err != nil {
- return err
+ return "", err
}
status, err := nw.Writer.Status()
if err != nil {
- return err
+ return "", err
}
if size != 0 && size != status.Offset {
- return errors.Errorf("%q failed size validation: %v != %v", nw.ref, status.Offset, size)
+ return "", errors.Errorf("%q failed size validation: %v != %v", nw.ref, status.Offset, size)
}
size = status.Offset
@@ -419,32 +423,32 @@ func (nw *namespacedWriter) commit(ctx context.Context, tx *bolt.Tx, size int64,
if err := nw.Writer.Commit(ctx, size, expected); err != nil {
if !errdefs.IsAlreadyExists(err) {
- return err
+ return "", err
}
if getBlobBucket(tx, nw.namespace, actual) != nil {
- return errors.Wrapf(errdefs.ErrAlreadyExists, "content %v", actual)
+ return "", errors.Wrapf(errdefs.ErrAlreadyExists, "content %v", actual)
}
}
bkt, err := createBlobBucket(tx, nw.namespace, actual)
if err != nil {
- return err
+ return "", err
}
commitTime := time.Now().UTC()
sizeEncoded, err := encodeInt(size)
if err != nil {
- return err
+ return "", err
}
if err := boltutil.WriteTimestamps(bkt, commitTime, commitTime); err != nil {
- return err
+ return "", err
}
if err := boltutil.WriteLabels(bkt, base.Labels); err != nil {
- return err
+ return "", err
}
- return bkt.Put(bucketKeySize, sizeEncoded)
+ return actual, bkt.Put(bucketKeySize, sizeEncoded)
}
func (nw *namespacedWriter) Status() (content.Status, error) {
@@ -566,7 +570,7 @@ func (cs *contentStore) garbageCollect(ctx context.Context) error {
return err
}
- if err := cs.Store.Walk(ctx, func(info content.Info) error {
+ return cs.Store.Walk(ctx, func(info content.Info) error {
if _, ok := seen[info.Digest.String()]; !ok {
if err := cs.Store.Delete(ctx, info.Digest); err != nil {
return err
@@ -574,9 +578,5 @@ func (cs *contentStore) garbageCollect(ctx context.Context) error {
log.G(ctx).WithField("digest", info.Digest).Debug("removed content")
}
return nil
- }); err != nil {
- return err
- }
-
- return nil
+ })
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/db.go b/components/engine/vendor/github.com/containerd/containerd/metadata/db.go
index 510d14a2a0..7c366ebcc4 100644
--- a/components/engine/vendor/github.com/containerd/containerd/metadata/db.go
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/db.go
@@ -190,6 +190,7 @@ func (m *DB) Update(fn func(*bolt.Tx) error) error {
return m.db.Update(fn)
}
+// GarbageCollect starts garbage collection
func (m *DB) GarbageCollect(ctx context.Context) error {
lt1 := time.Now()
m.wlock.Lock()
@@ -198,39 +199,8 @@ func (m *DB) GarbageCollect(ctx context.Context) error {
log.G(ctx).WithField("d", time.Now().Sub(lt1)).Debug("metadata garbage collected")
}()
- var marked map[gc.Node]struct{}
-
- if err := m.db.View(func(tx *bolt.Tx) error {
- ctx, cancel := context.WithCancel(ctx)
- defer cancel()
-
- roots := make(chan gc.Node)
- errChan := make(chan error)
- go func() {
- defer close(errChan)
- defer close(roots)
-
- // Call roots
- if err := scanRoots(ctx, tx, roots); err != nil {
- cancel()
- errChan <- err
- }
- }()
-
- refs := func(ctx context.Context, n gc.Node, fn func(gc.Node)) error {
- return references(ctx, tx, n, fn)
- }
-
- reachable, err := gc.ConcurrentMark(ctx, roots, refs)
- if rerr := <-errChan; rerr != nil {
- return rerr
- }
- if err != nil {
- return err
- }
- marked = reachable
- return nil
- }); err != nil {
+ marked, err := m.getMarked(ctx)
+ if err != nil {
return err
}
@@ -241,15 +211,11 @@ func (m *DB) GarbageCollect(ctx context.Context) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
- nodeC := make(chan gc.Node)
- var scanErr error
+ rm := func(ctx context.Context, n gc.Node) error {
+ if _, ok := marked[n]; ok {
+ return nil
+ }
- go func() {
- defer close(nodeC)
- scanErr = scanAll(ctx, tx, nodeC)
- }()
-
- rm := func(n gc.Node) error {
if n.Type == ResourceSnapshot {
if idx := strings.IndexRune(n.Key, '/'); idx > 0 {
m.dirtySS[n.Key[:idx]] = struct{}{}
@@ -260,12 +226,8 @@ func (m *DB) GarbageCollect(ctx context.Context) error {
return remove(ctx, tx, n)
}
- if err := gc.Sweep(marked, nodeC, rm); err != nil {
- return errors.Wrap(err, "failed to sweep")
- }
-
- if scanErr != nil {
- return errors.Wrap(scanErr, "failed to scan all")
+ if err := scanAll(ctx, tx, rm); err != nil {
+ return errors.Wrap(err, "failed to scan and remove")
}
return nil
@@ -292,6 +254,54 @@ func (m *DB) GarbageCollect(ctx context.Context) error {
return nil
}
+func (m *DB) getMarked(ctx context.Context) (map[gc.Node]struct{}, error) {
+ var marked map[gc.Node]struct{}
+ if err := m.db.View(func(tx *bolt.Tx) error {
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+
+ var (
+ nodes []gc.Node
+ wg sync.WaitGroup
+ roots = make(chan gc.Node)
+ )
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for n := range roots {
+ nodes = append(nodes, n)
+ }
+ }()
+ // Call roots
+ if err := scanRoots(ctx, tx, roots); err != nil {
+ cancel()
+ return err
+ }
+ close(roots)
+ wg.Wait()
+
+ refs := func(n gc.Node) ([]gc.Node, error) {
+ var sn []gc.Node
+ if err := references(ctx, tx, n, func(nn gc.Node) {
+ sn = append(sn, nn)
+ }); err != nil {
+ return nil, err
+ }
+ return sn, nil
+ }
+
+ reachable, err := gc.Tricolor(nodes, refs)
+ if err != nil {
+ return err
+ }
+ marked = reachable
+ return nil
+ }); err != nil {
+ return nil, err
+ }
+ return marked, nil
+}
+
func (m *DB) cleanupSnapshotter(name string) {
ctx := context.Background()
sn, ok := m.ss[name]
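Previously the mark phase streamed roots over a channel into `gc.ConcurrentMark` while the sweep scanned nodes on a second channel; `getMarked` now gathers every root first and hands the slice to `gc.Tricolor` together with a synchronous `refs` callback, and `scanAll` invokes a removal callback directly. The toy program below uses stand-in types rather than containerd's `gc` package, and only illustrates the reachability computation that `getMarked` relies on.

```go
// Illustrative reachability mark over a toy graph; node and refs are
// stand-ins, not containerd's gc.Node or gc.Tricolor.
package main

import "fmt"

type node string

// mark returns the set of nodes reachable from roots via refs.
func mark(roots []node, refs func(node) []node) map[node]struct{} {
	reachable := map[node]struct{}{}
	grey := append([]node(nil), roots...) // work list of nodes still to visit
	for len(grey) > 0 {
		n := grey[len(grey)-1]
		grey = grey[:len(grey)-1]
		if _, ok := reachable[n]; ok {
			continue
		}
		reachable[n] = struct{}{} // blacken the node
		grey = append(grey, refs(n)...)
	}
	return reachable
}

func main() {
	graph := map[node][]node{
		"image/foo": {"content/sha256:aaa", "content/sha256:bbb"},
		"lease/abc": {"content/sha256:ccc"},
	}
	refs := func(n node) []node { return graph[n] }
	fmt.Println(mark([]node{"image/foo", "lease/abc"}, refs))
}
```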
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/gc.go b/components/engine/vendor/github.com/containerd/containerd/metadata/gc.go
index 8434d694ba..7fe6f7da21 100644
--- a/components/engine/vendor/github.com/containerd/containerd/metadata/gc.go
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/gc.go
@@ -12,10 +12,15 @@ import (
)
const (
+ // ResourceUnknown specifies an unknown resource
ResourceUnknown gc.ResourceType = iota
+ // ResourceContent specifies a content resource
ResourceContent
+ // ResourceSnapshot specifies a snapshot resource
ResourceSnapshot
+ // ResourceContainer specifies a container resource
ResourceContainer
+ // ResourceTask specifies a task resource
ResourceTask
)
@@ -41,6 +46,55 @@ func scanRoots(ctx context.Context, tx *bolt.Tx, nc chan<- gc.Node) error {
nbkt := v1bkt.Bucket(k)
ns := string(k)
+ lbkt := nbkt.Bucket(bucketKeyObjectLeases)
+ if lbkt != nil {
+ if err := lbkt.ForEach(func(k, v []byte) error {
+ if v != nil {
+ return nil
+ }
+ libkt := lbkt.Bucket(k)
+
+ cbkt := libkt.Bucket(bucketKeyObjectContent)
+ if cbkt != nil {
+ if err := cbkt.ForEach(func(k, v []byte) error {
+ select {
+ case nc <- gcnode(ResourceContent, ns, string(k)):
+ case <-ctx.Done():
+ return ctx.Err()
+ }
+ return nil
+ }); err != nil {
+ return err
+ }
+ }
+
+ sbkt := libkt.Bucket(bucketKeyObjectSnapshots)
+ if sbkt != nil {
+ if err := sbkt.ForEach(func(sk, sv []byte) error {
+ if sv != nil {
+ return nil
+ }
+ snbkt := sbkt.Bucket(sk)
+
+ return snbkt.ForEach(func(k, v []byte) error {
+ select {
+ case nc <- gcnode(ResourceSnapshot, ns, fmt.Sprintf("%s/%s", sk, k)):
+ case <-ctx.Done():
+ return ctx.Err()
+ }
+ return nil
+ })
+ }); err != nil {
+ return err
+ }
+ }
+
+ return nil
+ }); err != nil {
+ return err
+ }
+ }
+
ibkt := nbkt.Bucket(bucketKeyObjectImages)
if ibkt != nil {
if err := ibkt.ForEach(func(k, v []byte) error {
@@ -174,7 +228,7 @@ func references(ctx context.Context, tx *bolt.Tx, node gc.Node, fn func(gc.Node)
return nil
}
-func scanAll(ctx context.Context, tx *bolt.Tx, nc chan<- gc.Node) error {
+func scanAll(ctx context.Context, tx *bolt.Tx, fn func(ctx context.Context, n gc.Node) error) error {
v1bkt := tx.Bucket(bucketKeyVersion)
if v1bkt == nil {
return nil
@@ -201,12 +255,8 @@ func scanAll(ctx context.Context, tx *bolt.Tx, nc chan<- gc.Node) error {
if v != nil {
return nil
}
- select {
- case nc <- gcnode(ResourceSnapshot, ns, fmt.Sprintf("%s/%s", sk, k)):
- case <-ctx.Done():
- return ctx.Err()
- }
- return nil
+ node := gcnode(ResourceSnapshot, ns, fmt.Sprintf("%s/%s", sk, k))
+ return fn(ctx, node)
})
}); err != nil {
return err
@@ -222,12 +272,8 @@ func scanAll(ctx context.Context, tx *bolt.Tx, nc chan<- gc.Node) error {
if v != nil {
return nil
}
- select {
- case nc <- gcnode(ResourceContent, ns, string(k)):
- case <-ctx.Done():
- return ctx.Err()
- }
- return nil
+ node := gcnode(ResourceContent, ns, string(k))
+ return fn(ctx, node)
}); err != nil {
return err
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/leases.go b/components/engine/vendor/github.com/containerd/containerd/metadata/leases.go
new file mode 100644
index 0000000000..006123d45f
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/leases.go
@@ -0,0 +1,201 @@
+package metadata
+
+import (
+ "context"
+ "time"
+
+ "github.com/boltdb/bolt"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/leases"
+ "github.com/containerd/containerd/metadata/boltutil"
+ "github.com/containerd/containerd/namespaces"
+ digest "github.com/opencontainers/go-digest"
+ "github.com/pkg/errors"
+)
+
+// Lease retains resources to prevent garbage collection before
+// the resources can be fully referenced.
+type Lease struct {
+ ID string
+ CreatedAt time.Time
+ Labels map[string]string
+
+ Content []string
+ Snapshots map[string][]string
+}
+
+// LeaseManager manages the create/delete lifecycle of leases
+// and also returns existing leases
+type LeaseManager struct {
+ tx *bolt.Tx
+}
+
+// NewLeaseManager creates a new lease manager for managing leases using
+// the provided database transaction.
+func NewLeaseManager(tx *bolt.Tx) *LeaseManager {
+ return &LeaseManager{
+ tx: tx,
+ }
+}
+
+// Create creates a new lease with the provided ID and labels
+func (lm *LeaseManager) Create(ctx context.Context, lid string, labels map[string]string) (Lease, error) {
+ namespace, err := namespaces.NamespaceRequired(ctx)
+ if err != nil {
+ return Lease{}, err
+ }
+
+ topbkt, err := createBucketIfNotExists(lm.tx, bucketKeyVersion, []byte(namespace), bucketKeyObjectLeases)
+ if err != nil {
+ return Lease{}, err
+ }
+
+ txbkt, err := topbkt.CreateBucket([]byte(lid))
+ if err != nil {
+ if err == bolt.ErrBucketExists {
+ err = errdefs.ErrAlreadyExists
+ }
+ return Lease{}, err
+ }
+
+ t := time.Now().UTC()
+ createdAt, err := t.MarshalBinary()
+ if err != nil {
+ return Lease{}, err
+ }
+ if err := txbkt.Put(bucketKeyCreatedAt, createdAt); err != nil {
+ return Lease{}, err
+ }
+
+ if labels != nil {
+ if err := boltutil.WriteLabels(txbkt, labels); err != nil {
+ return Lease{}, err
+ }
+ }
+
+ return Lease{
+ ID: lid,
+ CreatedAt: t,
+ Labels: labels,
+ }, nil
+}
+
+// Delete deletes the lease with the provided lease ID
+func (lm *LeaseManager) Delete(ctx context.Context, lid string) error {
+ namespace, err := namespaces.NamespaceRequired(ctx)
+ if err != nil {
+ return err
+ }
+
+ topbkt := getBucket(lm.tx, bucketKeyVersion, []byte(namespace), bucketKeyObjectLeases)
+ if topbkt == nil {
+ return nil
+ }
+ if err := topbkt.DeleteBucket([]byte(lid)); err != nil && err != bolt.ErrBucketNotFound {
+ return err
+ }
+ return nil
+}
+
+// List lists all active leases
+func (lm *LeaseManager) List(ctx context.Context, includeResources bool, filter ...string) ([]Lease, error) {
+ namespace, err := namespaces.NamespaceRequired(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ var leases []Lease
+
+ topbkt := getBucket(lm.tx, bucketKeyVersion, []byte(namespace), bucketKeyObjectLeases)
+ if topbkt == nil {
+ return leases, nil
+ }
+
+ if err := topbkt.ForEach(func(k, v []byte) error {
+ if v != nil {
+ return nil
+ }
+ txbkt := topbkt.Bucket(k)
+
+ l := Lease{
+ ID: string(k),
+ }
+
+ if v := txbkt.Get(bucketKeyCreatedAt); v != nil {
+ t := &l.CreatedAt
+ if err := t.UnmarshalBinary(v); err != nil {
+ return err
+ }
+ }
+
+ labels, err := boltutil.ReadLabels(txbkt)
+ if err != nil {
+ return err
+ }
+ l.Labels = labels
+
+ // TODO: Read Snapshots
+ // TODO: Read Content
+
+ leases = append(leases, l)
+
+ return nil
+ }); err != nil {
+ return nil, err
+ }
+
+ return leases, nil
+}
+
+func addSnapshotLease(ctx context.Context, tx *bolt.Tx, snapshotter, key string) error {
+ lid, ok := leases.Lease(ctx)
+ if !ok {
+ return nil
+ }
+
+ namespace, ok := namespaces.Namespace(ctx)
+ if !ok {
+ panic("namespace must already be required")
+ }
+
+ bkt := getBucket(tx, bucketKeyVersion, []byte(namespace), bucketKeyObjectLeases, []byte(lid))
+ if bkt == nil {
+ return errors.Wrap(errdefs.ErrNotFound, "lease does not exist")
+ }
+
+ bkt, err := bkt.CreateBucketIfNotExists(bucketKeyObjectSnapshots)
+ if err != nil {
+ return err
+ }
+
+ bkt, err = bkt.CreateBucketIfNotExists([]byte(snapshotter))
+ if err != nil {
+ return err
+ }
+
+ return bkt.Put([]byte(key), nil)
+}
+
+func addContentLease(ctx context.Context, tx *bolt.Tx, dgst digest.Digest) error {
+ lid, ok := leases.Lease(ctx)
+ if !ok {
+ return nil
+ }
+
+ namespace, ok := namespaces.Namespace(ctx)
+ if !ok {
+ panic("namespace must already be required")
+ }
+
+ bkt := getBucket(tx, bucketKeyVersion, []byte(namespace), bucketKeyObjectLeases, []byte(lid))
+ if bkt == nil {
+ return errors.Wrap(errdefs.ErrNotFound, "lease does not exist")
+ }
+
+ bkt, err := bkt.CreateBucketIfNotExists(bucketKeyObjectContent)
+ if err != nil {
+ return err
+ }
+
+ return bkt.Put([]byte(dgst.String()), nil)
+}
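The new `leases.go` introduces a bolt-backed `LeaseManager` plus the `addContentLease`/`addSnapshotLease` helpers, which is why `namespacedWriter.Commit` above now returns the committed digest. A hedged usage sketch follows, assuming this vendored revision; the database path, lease ID, and labels are placeholders.

```go
// Create, list, and delete a lease inside a single bolt update transaction.
package main

import (
	"context"
	"log"

	"github.com/boltdb/bolt"
	"github.com/containerd/containerd/metadata"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	db, err := bolt.Open("/tmp/meta.db", 0644, nil) // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Lease operations require a namespace on the context.
	ctx := namespaces.WithNamespace(context.Background(), "default")

	if err := db.Update(func(tx *bolt.Tx) error {
		lm := metadata.NewLeaseManager(tx)
		if _, err := lm.Create(ctx, "example-lease", map[string]string{"purpose": "pull"}); err != nil {
			return err
		}
		active, err := lm.List(ctx, false)
		if err != nil {
			return err
		}
		log.Printf("active leases: %d", len(active))
		return lm.Delete(ctx, "example-lease")
	}); err != nil {
		log.Fatal(err)
	}
}
```

Content and snapshots recorded under an active lease are scanned as GC roots (see the `bucketKeyObjectLeases` walk added to `gc.go`), so they survive garbage collection until the lease is deleted.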
diff --git a/components/engine/vendor/github.com/containerd/containerd/metadata/snapshot.go b/components/engine/vendor/github.com/containerd/containerd/metadata/snapshot.go
index ad38e5915b..22ce3c8c03 100644
--- a/components/engine/vendor/github.com/containerd/containerd/metadata/snapshot.go
+++ b/components/engine/vendor/github.com/containerd/containerd/metadata/snapshot.go
@@ -326,6 +326,10 @@ func (s *snapshotter) createSnapshot(ctx context.Context, key, parent string, re
return err
}
+ if err := addSnapshotLease(ctx, tx, s.name, key); err != nil {
+ return err
+ }
+
// TODO: Consider doing this outside of transaction to lessen
// metadata lock time
if readonly {
diff --git a/components/engine/vendor/github.com/containerd/containerd/mount/mount_solaris.go b/components/engine/vendor/github.com/containerd/containerd/mount/mount_solaris.go
deleted file mode 100644
index 3b6c35d748..0000000000
--- a/components/engine/vendor/github.com/containerd/containerd/mount/mount_solaris.go
+++ /dev/null
@@ -1,83 +0,0 @@
-package mount
-
-// On Solaris we can't invoke the mount system call directly. First,
-// the mount system call takes more than 6 arguments, and go doesn't
-// support invoking system calls that take more than 6 arguments. Past
-// that, the mount system call is a private interfaces. For example,
-// the arguments and data structures passed to the kernel to create an
-// nfs mount are private and can change at any time. The only public
-// and stable interface for creating mounts on Solaris is the mount.8
-// command, so we'll invoke that here.
-
-import (
- "bytes"
- "errors"
- "fmt"
- "os/exec"
- "strings"
-
- "golang.org/x/sys/unix"
-)
-
-const (
- mountCmd = "/usr/sbin/mount"
-)
-
-func doMount(arg ...string) error {
- cmd := exec.Command(mountCmd, arg...)
-
- /* Setup Stdin, Stdout, and Stderr */
- stderr := new(bytes.Buffer)
- cmd.Stdin = nil
- cmd.Stdout = nil
- cmd.Stderr = stderr
-
- /*
- * Run the command. If the command fails create a new error
- * object to return that includes stderr output.
- */
- err := cmd.Start()
- if err != nil {
- return err
- }
- err = cmd.Wait()
- if err != nil {
- return errors.New(fmt.Sprintf("%v: %s", err, stderr.String()))
- }
- return nil
-}
-
-func (m *Mount) Mount(target string) error {
- var err error
-
- if len(m.Options) == 0 {
- err = doMount("-F", m.Type, m.Source, target)
- } else {
- err = doMount("-F", m.Type, "-o", strings.Join(m.Options, ","),
- m.Source, target)
- }
- return err
-}
-
-func Unmount(mount string, flags int) error {
- return unix.Unmount(mount, flags)
-}
-
-// UnmountAll repeatedly unmounts the given mount point until there
-// are no mounts remaining (EINVAL is returned by mount), which is
-// useful for undoing a stack of mounts on the same mount point.
-func UnmountAll(mount string, flags int) error {
- for {
- if err := Unmount(mount, flags); err != nil {
- // EINVAL is returned if the target is not a
- // mount point, indicating that we are
- // done. It can also indicate a few other
- // things (such as invalid flags) which we
- // unfortunately end up squelching here too.
- if err == unix.EINVAL {
- return nil
- }
- return err
- }
- }
-}
diff --git a/components/engine/vendor/github.com/containerd/containerd/mount/mount_unix.go b/components/engine/vendor/github.com/containerd/containerd/mount/mount_unix.go
index 23467a8cc7..edb0e8dd19 100644
--- a/components/engine/vendor/github.com/containerd/containerd/mount/mount_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/mount/mount_unix.go
@@ -5,17 +5,21 @@ package mount
import "github.com/pkg/errors"
var (
+ // ErrNotImplementOnUnix is returned for methods that are not implemented
ErrNotImplementOnUnix = errors.New("not implemented under unix")
)
+// Mount is not implemented on this platform
func (m *Mount) Mount(target string) error {
return ErrNotImplementOnUnix
}
+// Unmount is not implemented on this platform
func Unmount(mount string, flags int) error {
return ErrNotImplementOnUnix
}
+// UnmountAll is not implemented on this platform
func UnmountAll(mount string, flags int) error {
return ErrNotImplementOnUnix
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/mount/mount_windows.go b/components/engine/vendor/github.com/containerd/containerd/mount/mount_windows.go
index 8eeca68174..8ad7eab129 100644
--- a/components/engine/vendor/github.com/containerd/containerd/mount/mount_windows.go
+++ b/components/engine/vendor/github.com/containerd/containerd/mount/mount_windows.go
@@ -3,17 +3,21 @@ package mount
import "github.com/pkg/errors"
var (
+ // ErrNotImplementOnWindows is returned when an action is not implemented for windows
ErrNotImplementOnWindows = errors.New("not implemented under windows")
)
+// Mount to the provided target
func (m *Mount) Mount(target string) error {
return ErrNotImplementOnWindows
}
+// Unmount the mount at the provided path
func Unmount(mount string, flags int) error {
return ErrNotImplementOnWindows
}
+// UnmountAll unmounts all mounts at the provided path
func UnmountAll(mount string, flags int) error {
return ErrNotImplementOnWindows
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/mount/mountinfo_solaris.go b/components/engine/vendor/github.com/containerd/containerd/mount/mountinfo_solaris.go
deleted file mode 100644
index aaafad36a8..0000000000
--- a/components/engine/vendor/github.com/containerd/containerd/mount/mountinfo_solaris.go
+++ /dev/null
@@ -1,50 +0,0 @@
-// +build solaris,cgo
-
-package mount
-
-/*
-#include
-#include
-#include
-*/
-import "C"
-
-import (
- "fmt"
- "unsafe"
-)
-
-// Self retrieves a list of mounts for the current running process.
-func Self() ([]Info, error) {
- path := C.CString(C.MNTTAB)
- defer C.free(unsafe.Pointer(path))
- mode := C.CString("r")
- defer C.free(unsafe.Pointer(mode))
-
- mnttab := C.fopen(path, mode)
- if mnttab == nil {
- return nil, fmt.Errorf("Failed to open %s", C.MNTTAB)
- }
-
- var out []Info
- var mp C.struct_mnttab
-
- ret := C.getmntent(mnttab, &mp)
- for ret == 0 {
- var mountinfo Info
- mountinfo.Mountpoint = C.GoString(mp.mnt_mountp)
- mountinfo.Source = C.GoString(mp.mnt_special)
- mountinfo.FSType = C.GoString(mp.mnt_fstype)
- mountinfo.Options = C.GoString(mp.mnt_mntopts)
- out = append(out, mountinfo)
- ret = C.getmntent(mnttab, &mp)
- }
-
- C.fclose(mnttab)
- return out, nil
-}
-
-// PID collects the mounts for a specific process ID.
-func PID(pid int) ([]Info, error) {
- return nil, fmt.Errorf("mountinfo.PID is not implemented on solaris")
-}
diff --git a/components/engine/vendor/github.com/containerd/containerd/plugin/context.go b/components/engine/vendor/github.com/containerd/containerd/plugin/context.go
index 7fff5c6c63..87e53b84f2 100644
--- a/components/engine/vendor/github.com/containerd/containerd/plugin/context.go
+++ b/components/engine/vendor/github.com/containerd/containerd/plugin/context.go
@@ -5,7 +5,7 @@ import (
"path/filepath"
"github.com/containerd/containerd/errdefs"
- "github.com/containerd/containerd/events"
+ "github.com/containerd/containerd/events/exchange"
"github.com/containerd/containerd/log"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -18,15 +18,15 @@ type InitContext struct {
State string
Config interface{}
Address string
- Events *events.Exchange
+ Events *exchange.Exchange
Meta *Meta // plugins can fill in metadata at init.
- plugins *PluginSet
+ plugins *Set
}
// NewContext returns a new plugin InitContext
-func NewContext(ctx context.Context, r *Registration, plugins *PluginSet, root, state string) *InitContext {
+func NewContext(ctx context.Context, r *Registration, plugins *Set, root, state string) *InitContext {
return &InitContext{
Context: log.WithModule(ctx, r.URI()),
Root: filepath.Join(root, r.URI()),
@@ -61,32 +61,37 @@ type Plugin struct {
err error // will be set if there was an error initializing the plugin
}
+// Err returns the error that occurred during initialization.
+// It returns nil if no error was encountered.
func (p *Plugin) Err() error {
return p.err
}
+// Instance returns the instance and any initialization error of the plugin
func (p *Plugin) Instance() (interface{}, error) {
return p.instance, p.err
}
-// PluginSet defines a plugin collection, used with InitContext.
+// Set defines a plugin collection, used with InitContext.
//
// This maintains ordering and unique indexing over the set.
//
// After iteratively instantiating plugins, this set should represent, the
// ordered, initialization set of plugins for a containerd instance.
-type PluginSet struct {
+type Set struct {
ordered []*Plugin // order of initialization
byTypeAndID map[Type]map[string]*Plugin
}
-func NewPluginSet() *PluginSet {
- return &PluginSet{
+// NewPluginSet returns an initialized plugin set
+func NewPluginSet() *Set {
+ return &Set{
byTypeAndID: make(map[Type]map[string]*Plugin),
}
}
-func (ps *PluginSet) Add(p *Plugin) error {
+// Add a plugin to the set
+func (ps *Set) Add(p *Plugin) error {
if byID, typeok := ps.byTypeAndID[p.Registration.Type]; !typeok {
ps.byTypeAndID[p.Registration.Type] = map[string]*Plugin{
p.Registration.ID: p,
@@ -102,13 +107,14 @@ func (ps *PluginSet) Add(p *Plugin) error {
}
// Get returns the first plugin by its type
-func (ps *PluginSet) Get(t Type) (interface{}, error) {
+func (ps *Set) Get(t Type) (interface{}, error) {
for _, v := range ps.byTypeAndID[t] {
return v.Instance()
}
return nil, errors.Wrapf(errdefs.ErrNotFound, "no plugins registered for %s", t)
}
+// GetAll plugins in the set
func (i *InitContext) GetAll() []*Plugin {
return i.plugins.ordered
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/plugin/plugin.go b/components/engine/vendor/github.com/containerd/containerd/plugin/plugin.go
index d7b1c0a61a..9bda46cbfa 100644
--- a/components/engine/vendor/github.com/containerd/containerd/plugin/plugin.go
+++ b/components/engine/vendor/github.com/containerd/containerd/plugin/plugin.go
@@ -58,9 +58,13 @@ const (
// Registration contains information for registering a plugin
type Registration struct {
- Type Type
- ID string
- Config interface{}
+ // Type of the plugin
+ Type Type
+ // ID of the plugin
+ ID string
+ // Config specific to the plugin
+ Config interface{}
+ // Requires is a list of plugins that the registered plugin requires to be available
Requires []Type
// InitFn is called when initializing a plugin. The registration and
@@ -69,6 +73,7 @@ type Registration struct {
InitFn func(*InitContext) (interface{}, error)
}
+// Init the registered plugin
func (r *Registration) Init(ic *InitContext) *Plugin {
p, err := r.InitFn(ic)
return &Plugin{
diff --git a/components/engine/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go b/components/engine/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
index 99940c88ab..52f83d43fa 100644
--- a/components/engine/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
+++ b/components/engine/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
@@ -9,6 +9,7 @@ import (
"fmt"
"io"
"io/ioutil"
+ "math/rand"
"strings"
"sync"
"time"
@@ -159,7 +160,6 @@ func (c *Converter) Convert(ctx context.Context) (ocispec.Descriptor, error) {
}
labels := map[string]string{}
- labels["containerd.io/gc.root"] = time.Now().UTC().Format(time.RFC3339)
labels["containerd.io/gc.ref.content.0"] = manifest.Config.Digest.String()
for i, ch := range manifest.Layers {
labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i+1)] = ch.Digest.String()
@@ -175,12 +175,6 @@ func (c *Converter) Convert(ctx context.Context) (ocispec.Descriptor, error) {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to write config")
}
- for _, ch := range manifest.Layers {
- if _, err := c.contentStore.Update(ctx, content.Info{Digest: ch.Digest}, "labels.containerd.io/gc.root"); err != nil {
- return ocispec.Descriptor{}, errors.Wrap(err, "failed to remove blob root tag")
- }
- }
-
return desc, nil
}
@@ -215,13 +209,26 @@ func (c *Converter) fetchManifest(ctx context.Context, desc ocispec.Descriptor)
func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) error {
log.G(ctx).Debug("fetch blob")
- ref := remotes.MakeRefKey(ctx, desc)
-
- calc := newBlobStateCalculator()
+ var (
+ ref = remotes.MakeRefKey(ctx, desc)
+ calc = newBlobStateCalculator()
+ retry = 16
+ )
+tryit:
cw, err := c.contentStore.Writer(ctx, ref, desc.Size, desc.Digest)
if err != nil {
- if !errdefs.IsAlreadyExists(err) {
+ if errdefs.IsUnavailable(err) {
+ select {
+ case <-time.After(time.Millisecond * time.Duration(rand.Intn(retry))):
+ if retry < 2048 {
+ retry = retry << 1
+ }
+ goto tryit
+ case <-ctx.Done():
+ return err
+ }
+ } else if !errdefs.IsAlreadyExists(err) {
return err
}
@@ -270,10 +277,7 @@ func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) erro
eg.Go(func() error {
defer pw.Close()
- opt := content.WithLabels(map[string]string{
- "containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339),
- })
- return content.Copy(ctx, cw, io.TeeReader(rc, pw), desc.Size, desc.Digest, opt)
+ return content.Copy(ctx, cw, io.TeeReader(rc, pw), desc.Size, desc.Digest)
})
if err := eg.Wait(); err != nil {
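`fetchBlob` now retries a temporarily locked ingest ref with a randomized, exponentially growing delay capped at 2048ms instead of failing outright; `remotes/handlers.go` below gains the same `rand.Intn` jitter. The standalone sketch here shows the same pattern; `withBackoff`, `tryLock`, and the sentinel error are illustrative stand-ins, not containerd APIs.

```go
// Randomized exponential backoff around a lock-style operation.
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errUnavailable = errors.New("ref is locked")

// withBackoff retries tryLock while it reports the ref as unavailable,
// sleeping up to retry milliseconds and doubling the cap up to 2048ms.
func withBackoff(ctx context.Context, tryLock func() error) error {
	retry := 16
	for {
		err := tryLock()
		if err != errUnavailable {
			return err // nil or a non-retryable error
		}
		select {
		case <-time.After(time.Millisecond * time.Duration(rand.Intn(retry))):
			if retry < 2048 {
				retry = retry << 1
			}
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	attempts := 0
	err := withBackoff(context.Background(), func() error {
		attempts++
		if attempts < 3 {
			return errUnavailable // simulate a locked ref for two attempts
		}
		return nil
	})
	fmt.Println(attempts, err)
}
```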
diff --git a/components/engine/vendor/github.com/containerd/containerd/remotes/handlers.go b/components/engine/vendor/github.com/containerd/containerd/remotes/handlers.go
index e6d2132996..e583391d86 100644
--- a/components/engine/vendor/github.com/containerd/containerd/remotes/handlers.go
+++ b/components/engine/vendor/github.com/containerd/containerd/remotes/handlers.go
@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"io"
+ "math/rand"
"time"
"github.com/containerd/containerd/content"
@@ -44,7 +45,7 @@ func MakeRefKey(ctx context.Context, desc ocispec.Descriptor) string {
// FetchHandler returns a handler that will fetch all content into the ingester
// discovered in a call to Dispatch. Use with ChildrenHandler to do a full
// recursive fetch.
-func FetchHandler(ingester content.Ingester, fetcher Fetcher, root ocispec.Descriptor) images.HandlerFunc {
+func FetchHandler(ingester content.Ingester, fetcher Fetcher) images.HandlerFunc {
return func(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error) {
ctx = log.WithLogger(ctx, log.G(ctx).WithFields(logrus.Fields{
"digest": desc.Digest,
@@ -56,13 +57,13 @@ func FetchHandler(ingester content.Ingester, fetcher Fetcher, root ocispec.Descr
case images.MediaTypeDockerSchema1Manifest:
return nil, fmt.Errorf("%v not supported", desc.MediaType)
default:
- err := fetch(ctx, ingester, fetcher, desc, desc.Digest == root.Digest)
+ err := fetch(ctx, ingester, fetcher, desc)
return nil, err
}
}
}
-func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc ocispec.Descriptor, root bool) error {
+func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc ocispec.Descriptor) error {
log.G(ctx).Debug("fetch")
var (
@@ -84,7 +85,7 @@ func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc
// of writer and abort if not updated recently.
select {
- case <-time.After(time.Millisecond * time.Duration(retry)):
+ case <-time.After(time.Millisecond * time.Duration(rand.Intn(retry))):
if retry < 2048 {
retry = retry << 1
}
@@ -104,13 +105,13 @@ func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc
}
defer rc.Close()
- r, opts := commitOpts(desc, rc, root)
+ r, opts := commitOpts(desc, rc)
return content.Copy(ctx, cw, r, desc.Size, desc.Digest, opts...)
}
// commitOpts gets the appropriate content options to alter
// the content info on commit based on media type.
-func commitOpts(desc ocispec.Descriptor, r io.Reader, root bool) (io.Reader, []content.Opt) {
+func commitOpts(desc ocispec.Descriptor, r io.Reader) (io.Reader, []content.Opt) {
var childrenF func(r io.Reader) ([]ocispec.Descriptor, error)
switch desc.MediaType {
@@ -162,13 +163,10 @@ func commitOpts(desc ocispec.Descriptor, r io.Reader, root bool) (io.Reader, []c
return errors.Wrap(err, "unable to get commit labels")
}
- if len(children) > 0 || root {
+ if len(children) > 0 {
if info.Labels == nil {
info.Labels = map[string]string{}
}
- if root {
- info.Labels["containerd.io/gc.root"] = time.Now().UTC().Format(time.RFC3339)
- }
for i, ch := range children {
info.Labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i)] = ch.Digest.String()
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/runtime/task.go b/components/engine/vendor/github.com/containerd/containerd/runtime/task.go
index 019d4c8cfc..4c02455dcf 100644
--- a/components/engine/vendor/github.com/containerd/containerd/runtime/task.go
+++ b/components/engine/vendor/github.com/containerd/containerd/runtime/task.go
@@ -7,6 +7,7 @@ import (
"github.com/gogo/protobuf/types"
)
+// TaskInfo provides task specific information
type TaskInfo struct {
ID string
Runtime string
@@ -14,6 +15,7 @@ type TaskInfo struct {
Namespace string
}
+// Process is a runtime object for an executing process inside a container
type Process interface {
ID() string
// State returns the process state
@@ -30,6 +32,7 @@ type Process interface {
Wait(context.Context) (*Exit, error)
}
+// Task is the runtime object for an executing container
type Task interface {
Process
@@ -55,27 +58,37 @@ type Task interface {
Metrics(context.Context) (interface{}, error)
}
+// ExecOpts provides options for additional processes running in a task
type ExecOpts struct {
Spec *types.Any
IO IO
}
+// ConsoleSize of a pty or windows terminal
type ConsoleSize struct {
Width uint32
Height uint32
}
+// Status is the runtime status of a task and/or process
type Status int
const (
+ // CreatedStatus when a process has been created
CreatedStatus Status = iota + 1
+ // RunningStatus when a process is running
RunningStatus
+ // StoppedStatus when a process has stopped
StoppedStatus
+ // DeletedStatus when a process has been deleted
DeletedStatus
+ // PausedStatus when a process is paused
PausedStatus
+ // PausingStatus when a process is currently pausing
PausingStatus
)
+// State information for a process
type State struct {
// Status is the current status of the container
Status Status
@@ -93,6 +106,7 @@ type State struct {
Terminal bool
}
+// ProcessInfo holds platform specific process information
type ProcessInfo struct {
// Pid is the process ID
Pid uint32
diff --git a/components/engine/vendor/github.com/containerd/containerd/runtime/task_list.go b/components/engine/vendor/github.com/containerd/containerd/runtime/task_list.go
index 12062cef59..7c522655fc 100644
--- a/components/engine/vendor/github.com/containerd/containerd/runtime/task_list.go
+++ b/components/engine/vendor/github.com/containerd/containerd/runtime/task_list.go
@@ -9,21 +9,26 @@ import (
)
var (
- ErrTaskNotExists = errors.New("task does not exist")
+ // ErrTaskNotExists is returned when a task does not exist
+ ErrTaskNotExists = errors.New("task does not exist")
+ // ErrTaskAlreadyExists is returned when a task already exists
ErrTaskAlreadyExists = errors.New("task already exists")
)
+// NewTaskList returns a new TaskList
func NewTaskList() *TaskList {
return &TaskList{
tasks: make(map[string]map[string]Task),
}
}
+// TaskList holds and provides locking around tasks
type TaskList struct {
mu sync.Mutex
tasks map[string]map[string]Task
}
+// Get a task
func (l *TaskList) Get(ctx context.Context, id string) (Task, error) {
l.mu.Lock()
defer l.mu.Unlock()
@@ -42,6 +47,7 @@ func (l *TaskList) Get(ctx context.Context, id string) (Task, error) {
return t, nil
}
+// GetAll tasks under a namespace
func (l *TaskList) GetAll(ctx context.Context) ([]Task, error) {
namespace, err := namespaces.NamespaceRequired(ctx)
if err != nil {
@@ -58,6 +64,7 @@ func (l *TaskList) GetAll(ctx context.Context) ([]Task, error) {
return o, nil
}
+// Add a task
func (l *TaskList) Add(ctx context.Context, t Task) error {
namespace, err := namespaces.NamespaceRequired(ctx)
if err != nil {
@@ -66,6 +73,7 @@ func (l *TaskList) Add(ctx context.Context, t Task) error {
return l.AddWithNamespace(namespace, t)
}
+// AddWithNamespace adds a task with the provided namespace
func (l *TaskList) AddWithNamespace(namespace string, t Task) error {
l.mu.Lock()
defer l.mu.Unlock()
@@ -81,6 +89,7 @@ func (l *TaskList) AddWithNamespace(namespace string, t Task) error {
return nil
}
+// Delete a task
func (l *TaskList) Delete(ctx context.Context, t Task) {
l.mu.Lock()
defer l.mu.Unlock()
diff --git a/components/engine/vendor/github.com/containerd/containerd/server/config.go b/components/engine/vendor/github.com/containerd/containerd/server/config.go
index 764f6bdf26..26af539acc 100644
--- a/components/engine/vendor/github.com/containerd/containerd/server/config.go
+++ b/components/engine/vendor/github.com/containerd/containerd/server/config.go
@@ -33,23 +33,27 @@ type Config struct {
md toml.MetaData
}
+// GRPCConfig provides GRPC configuration for the socket
type GRPCConfig struct {
Address string `toml:"address"`
- Uid int `toml:"uid"`
- Gid int `toml:"gid"`
+ UID int `toml:"uid"`
+ GID int `toml:"gid"`
}
+// Debug provides debug configuration
type Debug struct {
Address string `toml:"address"`
- Uid int `toml:"uid"`
- Gid int `toml:"gid"`
+ UID int `toml:"uid"`
+ GID int `toml:"gid"`
Level string `toml:"level"`
}
+// MetricsConfig provides metrics configuration
type MetricsConfig struct {
Address string `toml:"address"`
}
+// CgroupConfig provides cgroup configuration
type CgroupConfig struct {
Path string `toml:"path"`
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/server/server.go b/components/engine/vendor/github.com/containerd/containerd/server/server.go
index d1a58e9159..f9ca044835 100644
--- a/components/engine/vendor/github.com/containerd/containerd/server/server.go
+++ b/components/engine/vendor/github.com/containerd/containerd/server/server.go
@@ -16,13 +16,14 @@ import (
eventsapi "github.com/containerd/containerd/api/services/events/v1"
images "github.com/containerd/containerd/api/services/images/v1"
introspection "github.com/containerd/containerd/api/services/introspection/v1"
+ leasesapi "github.com/containerd/containerd/api/services/leases/v1"
namespaces "github.com/containerd/containerd/api/services/namespaces/v1"
snapshotapi "github.com/containerd/containerd/api/services/snapshot/v1"
tasks "github.com/containerd/containerd/api/services/tasks/v1"
version "github.com/containerd/containerd/api/services/version/v1"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
- "github.com/containerd/containerd/events"
+ "github.com/containerd/containerd/events/exchange"
"github.com/containerd/containerd/log"
"github.com/containerd/containerd/metadata"
"github.com/containerd/containerd/plugin"
@@ -65,7 +66,7 @@ func New(ctx context.Context, config *Config) (*Server, error) {
services []plugin.Service
s = &Server{
rpc: rpc,
- events: events.NewExchange(),
+ events: exchange.NewExchange(),
}
initialized = plugin.NewPluginSet()
)
@@ -122,7 +123,7 @@ func New(ctx context.Context, config *Config) (*Server, error) {
// Server is the containerd main daemon
type Server struct {
rpc *grpc.Server
- events *events.Exchange
+ events *exchange.Exchange
}
// ServeGRPC provides the containerd grpc APIs on the provided listener
@@ -255,6 +256,8 @@ func interceptor(
ctx = log.WithModule(ctx, "events")
case introspection.IntrospectionServer:
ctx = log.WithModule(ctx, "introspection")
+ case leasesapi.LeasesServer:
+ ctx = log.WithModule(ctx, "leases")
default:
log.G(ctx).Warnf("unknown GRPC server type: %#v\n", info.Server)
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/server/server_linux.go b/components/engine/vendor/github.com/containerd/containerd/server/server_linux.go
index b2f5e8b4fe..03244e90dd 100644
--- a/components/engine/vendor/github.com/containerd/containerd/server/server_linux.go
+++ b/components/engine/vendor/github.com/containerd/containerd/server/server_linux.go
@@ -10,19 +10,6 @@ import (
specs "github.com/opencontainers/runtime-spec/specs-go"
)
-const (
- // DefaultRootDir is the default location used by containerd to store
- // persistent data
- DefaultRootDir = "/var/lib/containerd"
- // DefaultStateDir is the default location used by containerd to store
- // transient data
- DefaultStateDir = "/run/containerd"
- // DefaultAddress is the default unix socket address
- DefaultAddress = "/run/containerd/containerd.sock"
- // DefaultDebugAddress is the default unix socket address for pprof data
- DefaultDebugAddress = "/run/containerd/debug.sock"
-)
-
// apply sets config settings on the server process
func apply(ctx context.Context, config *Config) error {
if config.Subreaper {
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/content/reader.go b/components/engine/vendor/github.com/containerd/containerd/services/content/reader.go
index a8cc55430e..024251c6d6 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/content/reader.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/content/reader.go
@@ -44,6 +44,6 @@ func (ra *remoteReaderAt) ReadAt(p []byte, off int64) (n int, err error) {
return n, nil
}
-func (rr *remoteReaderAt) Close() error {
+func (ra *remoteReaderAt) Close() error {
return nil
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/content/service.go b/components/engine/vendor/github.com/containerd/containerd/services/content/service.go
index 8289e1e64b..3784579d56 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/content/service.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/content/service.go
@@ -21,7 +21,7 @@ import (
"google.golang.org/grpc/codes"
)
-type Service struct {
+type service struct {
store content.Store
publisher events.Publisher
}
@@ -32,7 +32,7 @@ var bufPool = sync.Pool{
},
}
-var _ api.ContentServer = &Service{}
+var _ api.ContentServer = &service{}
func init() {
plugin.Register(&plugin.Registration{
@@ -53,19 +53,20 @@ func init() {
})
}
-func NewService(cs content.Store, publisher events.Publisher) (*Service, error) {
- return &Service{
+// NewService returns the content GRPC server
+func NewService(cs content.Store, publisher events.Publisher) (api.ContentServer, error) {
+ return &service{
store: cs,
publisher: publisher,
}, nil
}
-func (s *Service) Register(server *grpc.Server) error {
+func (s *service) Register(server *grpc.Server) error {
api.RegisterContentServer(server, s)
return nil
}
-func (s *Service) Info(ctx context.Context, req *api.InfoRequest) (*api.InfoResponse, error) {
+func (s *service) Info(ctx context.Context, req *api.InfoRequest) (*api.InfoResponse, error) {
if err := req.Digest.Validate(); err != nil {
return nil, grpc.Errorf(codes.InvalidArgument, "%q failed validation", req.Digest)
}
@@ -80,7 +81,7 @@ func (s *Service) Info(ctx context.Context, req *api.InfoRequest) (*api.InfoResp
}, nil
}
-func (s *Service) Update(ctx context.Context, req *api.UpdateRequest) (*api.UpdateResponse, error) {
+func (s *service) Update(ctx context.Context, req *api.UpdateRequest) (*api.UpdateResponse, error) {
if err := req.Info.Digest.Validate(); err != nil {
return nil, grpc.Errorf(codes.InvalidArgument, "%q failed validation", req.Info.Digest)
}
@@ -95,7 +96,7 @@ func (s *Service) Update(ctx context.Context, req *api.UpdateRequest) (*api.Upda
}, nil
}
-func (s *Service) List(req *api.ListContentRequest, session api.Content_ListServer) error {
+func (s *service) List(req *api.ListContentRequest, session api.Content_ListServer) error {
var (
buffer []api.Info
sendBlock = func(block []api.Info) error {
@@ -137,7 +138,7 @@ func (s *Service) List(req *api.ListContentRequest, session api.Content_ListServ
return nil
}
-func (s *Service) Delete(ctx context.Context, req *api.DeleteContentRequest) (*empty.Empty, error) {
+func (s *service) Delete(ctx context.Context, req *api.DeleteContentRequest) (*empty.Empty, error) {
if err := req.Digest.Validate(); err != nil {
return nil, grpc.Errorf(codes.InvalidArgument, err.Error())
}
@@ -155,7 +156,7 @@ func (s *Service) Delete(ctx context.Context, req *api.DeleteContentRequest) (*e
return &empty.Empty{}, nil
}
-func (s *Service) Read(req *api.ReadContentRequest, session api.Content_ReadServer) error {
+func (s *service) Read(req *api.ReadContentRequest, session api.Content_ReadServer) error {
if err := req.Digest.Validate(); err != nil {
return grpc.Errorf(codes.InvalidArgument, "%v: %v", req.Digest, err)
}
@@ -223,7 +224,7 @@ func (rw *readResponseWriter) Write(p []byte) (n int, err error) {
return len(p), nil
}
-func (s *Service) Status(ctx context.Context, req *api.StatusRequest) (*api.StatusResponse, error) {
+func (s *service) Status(ctx context.Context, req *api.StatusRequest) (*api.StatusResponse, error) {
status, err := s.store.Status(ctx, req.Ref)
if err != nil {
return nil, errdefs.ToGRPCf(err, "could not get status for ref %q", req.Ref)
@@ -242,7 +243,7 @@ func (s *Service) Status(ctx context.Context, req *api.StatusRequest) (*api.Stat
return &resp, nil
}
-func (s *Service) ListStatuses(ctx context.Context, req *api.ListStatusesRequest) (*api.ListStatusesResponse, error) {
+func (s *service) ListStatuses(ctx context.Context, req *api.ListStatusesRequest) (*api.ListStatusesResponse, error) {
statuses, err := s.store.ListStatuses(ctx, req.Filters...)
if err != nil {
return nil, errdefs.ToGRPC(err)
@@ -263,7 +264,7 @@ func (s *Service) ListStatuses(ctx context.Context, req *api.ListStatusesRequest
return &resp, nil
}
-func (s *Service) Write(session api.Content_WriteServer) (err error) {
+func (s *service) Write(session api.Content_WriteServer) (err error) {
var (
ctx = session.Context()
msg api.WriteContentResponse
@@ -283,7 +284,7 @@ func (s *Service) Write(session api.Content_WriteServer) (err error) {
// identically across all GRPC methods.
//
// This is pretty noisy, so we can remove it but leave it for now.
- log.G(ctx).WithError(err).Error("(*Service).Write failed")
+ log.G(ctx).WithError(err).Error("(*service).Write failed")
}
return
@@ -319,7 +320,7 @@ func (s *Service) Write(session api.Content_WriteServer) (err error) {
ctx = log.WithLogger(ctx, log.G(ctx).WithFields(fields))
- log.G(ctx).Debug("(*Service).Write started")
+ log.G(ctx).Debug("(*service).Write started")
// this action locks the writer for the session.
wr, err := s.store.Writer(ctx, ref, total, expected)
if err != nil {
@@ -444,7 +445,7 @@ func (s *Service) Write(session api.Content_WriteServer) (err error) {
}
}
-func (s *Service) Abort(ctx context.Context, req *api.AbortRequest) (*empty.Empty, error) {
+func (s *service) Abort(ctx context.Context, req *api.AbortRequest) (*empty.Empty, error) {
if err := s.store.Abort(ctx, req.Ref); err != nil {
return nil, errdefs.ToGRPC(err)
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/content/store.go b/components/engine/vendor/github.com/containerd/containerd/services/content/store.go
index 11fcc03651..b5aaa8577c 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/content/store.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/content/store.go
@@ -15,6 +15,7 @@ type remoteStore struct {
client contentapi.ContentClient
}
+// NewStoreFromClient returns a new content store
func NewStoreFromClient(client contentapi.ContentClient) content.Store {
return &remoteStore{
client: client,
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/diff/client.go b/components/engine/vendor/github.com/containerd/containerd/services/diff/client.go
index 5267f97077..d34848be20 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/diff/client.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/diff/client.go
@@ -9,7 +9,7 @@ import (
"golang.org/x/net/context"
)
-// NewApplierFromClient returns a new Applier which communicates
+// NewDiffServiceFromClient returns a new diff service which communicates
// over a GRPC connection.
func NewDiffServiceFromClient(client diffapi.DiffClient) diff.Differ {
return &remote{
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/images/client.go b/components/engine/vendor/github.com/containerd/containerd/services/images/client.go
index eebe776fac..f746ddce8b 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/images/client.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/images/client.go
@@ -13,6 +13,7 @@ type remoteStore struct {
client imagesapi.ImagesClient
}
+// NewStoreFromClient returns a new image store client
func NewStoreFromClient(client imagesapi.ImagesClient) images.Store {
return &remoteStore{
client: client,
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/images/service.go b/components/engine/vendor/github.com/containerd/containerd/services/images/service.go
index fa8a00aaea..3843df554c 100644
--- a/components/engine/vendor/github.com/containerd/containerd/services/images/service.go
+++ b/components/engine/vendor/github.com/containerd/containerd/services/images/service.go
@@ -34,24 +34,25 @@ func init() {
})
}
-type Service struct {
+type service struct {
db *metadata.DB
publisher events.Publisher
}
+// NewService returns the GRPC image server
func NewService(db *metadata.DB, publisher events.Publisher) imagesapi.ImagesServer {
- return &Service{
+ return &service{
db: db,
publisher: publisher,
}
}
-func (s *Service) Register(server *grpc.Server) error {
+func (s *service) Register(server *grpc.Server) error {
imagesapi.RegisterImagesServer(server, s)
return nil
}
-func (s *Service) Get(ctx context.Context, req *imagesapi.GetImageRequest) (*imagesapi.GetImageResponse, error) {
+func (s *service) Get(ctx context.Context, req *imagesapi.GetImageRequest) (*imagesapi.GetImageResponse, error) {
var resp imagesapi.GetImageResponse
return &resp, errdefs.ToGRPC(s.withStoreView(ctx, func(ctx context.Context, store images.Store) error {
@@ -65,7 +66,7 @@ func (s *Service) Get(ctx context.Context, req *imagesapi.GetImageRequest) (*ima
}))
}
-func (s *Service) List(ctx context.Context, req *imagesapi.ListImagesRequest) (*imagesapi.ListImagesResponse, error) {
+func (s *service) List(ctx context.Context, req *imagesapi.ListImagesRequest) (*imagesapi.ListImagesResponse, error) {
var resp imagesapi.ListImagesResponse
return &resp, errdefs.ToGRPC(s.withStoreView(ctx, func(ctx context.Context, store images.Store) error {
@@ -79,7 +80,7 @@ func (s *Service) List(ctx context.Context, req *imagesapi.ListImagesRequest) (*
}))
}
-func (s *Service) Create(ctx context.Context, req *imagesapi.CreateImageRequest) (*imagesapi.CreateImageResponse, error) {
+func (s *service) Create(ctx context.Context, req *imagesapi.CreateImageRequest) (*imagesapi.CreateImageResponse, error) {
if req.Image.Name == "" {
return nil, status.Errorf(codes.InvalidArgument, "Image.Name required")
}
@@ -111,7 +112,7 @@ func (s *Service) Create(ctx context.Context, req *imagesapi.CreateImageRequest)
}
-func (s *Service) Update(ctx context.Context, req *imagesapi.UpdateImageRequest) (*imagesapi.UpdateImageResponse, error) {
+func (s *service) Update(ctx context.Context, req *imagesapi.UpdateImageRequest) (*imagesapi.UpdateImageResponse, error) {
if req.Image.Name == "" {
return nil, status.Errorf(codes.InvalidArgument, "Image.Name required")
}
@@ -149,7 +150,7 @@ func (s *Service) Update(ctx context.Context, req *imagesapi.UpdateImageRequest)
return &resp, nil
}
-func (s *Service) Delete(ctx context.Context, req *imagesapi.DeleteImageRequest) (*empty.Empty, error) {
+func (s *service) Delete(ctx context.Context, req *imagesapi.DeleteImageRequest) (*empty.Empty, error) {
if err := s.withStoreUpdate(ctx, func(ctx context.Context, store images.Store) error {
return errdefs.ToGRPC(store.Delete(ctx, req.Name))
}); err != nil {
@@ -169,14 +170,14 @@ func (s *Service) Delete(ctx context.Context, req *imagesapi.DeleteImageRequest)
return &empty.Empty{}, nil
}
-func (s *Service) withStore(ctx context.Context, fn func(ctx context.Context, store images.Store) error) func(tx *bolt.Tx) error {
+func (s *service) withStore(ctx context.Context, fn func(ctx context.Context, store images.Store) error) func(tx *bolt.Tx) error {
return func(tx *bolt.Tx) error { return fn(ctx, metadata.NewImageStore(tx)) }
}
-func (s *Service) withStoreView(ctx context.Context, fn func(ctx context.Context, store images.Store) error) error {
+func (s *service) withStoreView(ctx context.Context, fn func(ctx context.Context, store images.Store) error) error {
return s.db.View(s.withStore(ctx, fn))
}
-func (s *Service) withStoreUpdate(ctx context.Context, fn func(ctx context.Context, store images.Store) error) error {
+func (s *service) withStoreUpdate(ctx context.Context, fn func(ctx context.Context, store images.Store) error) error {
return s.db.Update(s.withStore(ctx, fn))
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/namespaces/client.go b/components/engine/vendor/github.com/containerd/containerd/services/namespaces/client.go
new file mode 100644
index 0000000000..fd59ec619d
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/services/namespaces/client.go
@@ -0,0 +1,97 @@
+package namespaces
+
+import (
+ "context"
+ "strings"
+
+ api "github.com/containerd/containerd/api/services/namespaces/v1"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/namespaces"
+ "github.com/gogo/protobuf/types"
+)
+
+// NewStoreFromClient returns a new namespace store
+func NewStoreFromClient(client api.NamespacesClient) namespaces.Store {
+ return &remote{client: client}
+}
+
+type remote struct {
+ client api.NamespacesClient
+}
+
+func (r *remote) Create(ctx context.Context, namespace string, labels map[string]string) error {
+ var req api.CreateNamespaceRequest
+
+ req.Namespace = api.Namespace{
+ Name: namespace,
+ Labels: labels,
+ }
+
+ _, err := r.client.Create(ctx, &req)
+ if err != nil {
+ return errdefs.FromGRPC(err)
+ }
+
+ return nil
+}
+
+func (r *remote) Labels(ctx context.Context, namespace string) (map[string]string, error) {
+ var req api.GetNamespaceRequest
+ req.Name = namespace
+
+ resp, err := r.client.Get(ctx, &req)
+ if err != nil {
+ return nil, errdefs.FromGRPC(err)
+ }
+
+ return resp.Namespace.Labels, nil
+}
+
+func (r *remote) SetLabel(ctx context.Context, namespace, key, value string) error {
+ var req api.UpdateNamespaceRequest
+
+ req.Namespace = api.Namespace{
+ Name: namespace,
+ Labels: map[string]string{key: value},
+ }
+
+ req.UpdateMask = &types.FieldMask{
+ Paths: []string{strings.Join([]string{"labels", key}, ".")},
+ }
+
+ _, err := r.client.Update(ctx, &req)
+ if err != nil {
+ return errdefs.FromGRPC(err)
+ }
+
+ return nil
+}
+
+func (r *remote) List(ctx context.Context) ([]string, error) {
+ var req api.ListNamespacesRequest
+
+ resp, err := r.client.List(ctx, &req)
+ if err != nil {
+ return nil, errdefs.FromGRPC(err)
+ }
+
+ var namespaces []string
+
+ for _, ns := range resp.Namespaces {
+ namespaces = append(namespaces, ns.Name)
+ }
+
+ return namespaces, nil
+}
+
+func (r *remote) Delete(ctx context.Context, namespace string) error {
+ var req api.DeleteNamespaceRequest
+
+ req.Name = namespace
+ _, err := r.client.Delete(ctx, &req)
+ if err != nil {
+ return errdefs.FromGRPC(err)
+ }
+
+ return nil
+}
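
The client above adapts the namespaces gRPC API to the namespaces.Store interface. A minimal usage sketch follows; the endpoint, dial options, namespace name, and labels are illustrative, and it assumes the generated NewNamespacesClient constructor from the v1 API package.

```go
package main

import (
	"context"
	"log"

	api "github.com/containerd/containerd/api/services/namespaces/v1"
	namespacesservice "github.com/containerd/containerd/services/namespaces"
	"google.golang.org/grpc"
)

func main() {
	// Hypothetical endpoint; containerd normally serves its gRPC API over a
	// unix socket, so the address and dial options here are placeholders.
	conn, err := grpc.Dial("localhost:10010", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Wrap the generated gRPC client in the namespaces.Store adapter added above.
	store := namespacesservice.NewStoreFromClient(api.NewNamespacesClient(conn))

	ctx := context.Background()
	if err := store.Create(ctx, "example", map[string]string{"team": "blue"}); err != nil {
		log.Fatal(err)
	}

	names, err := store.List(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("namespaces:", names)
}
```
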
diff --git a/components/engine/vendor/github.com/containerd/containerd/services/namespaces/service.go b/components/engine/vendor/github.com/containerd/containerd/services/namespaces/service.go
new file mode 100644
index 0000000000..b795ab52f7
--- /dev/null
+++ b/components/engine/vendor/github.com/containerd/containerd/services/namespaces/service.go
@@ -0,0 +1,212 @@
+package namespaces
+
+import (
+ "strings"
+
+ "github.com/boltdb/bolt"
+ eventsapi "github.com/containerd/containerd/api/services/events/v1"
+ api "github.com/containerd/containerd/api/services/namespaces/v1"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/events"
+ "github.com/containerd/containerd/metadata"
+ "github.com/containerd/containerd/namespaces"
+ "github.com/containerd/containerd/plugin"
+ "github.com/golang/protobuf/ptypes/empty"
+ "golang.org/x/net/context"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+)
+
+func init() {
+ plugin.Register(&plugin.Registration{
+ Type: plugin.GRPCPlugin,
+ ID: "namespaces",
+ Requires: []plugin.Type{
+ plugin.MetadataPlugin,
+ },
+ InitFn: func(ic *plugin.InitContext) (interface{}, error) {
+ m, err := ic.Get(plugin.MetadataPlugin)
+ if err != nil {
+ return nil, err
+ }
+ return NewService(m.(*metadata.DB), ic.Events), nil
+ },
+ })
+}
+
+type service struct {
+ db *metadata.DB
+ publisher events.Publisher
+}
+
+var _ api.NamespacesServer = &service{}
+
+// NewService returns the GRPC namespaces server
+func NewService(db *metadata.DB, publisher events.Publisher) api.NamespacesServer {
+ return &service{
+ db: db,
+ publisher: publisher,
+ }
+}
+
+func (s *service) Register(server *grpc.Server) error {
+ api.RegisterNamespacesServer(server, s)
+ return nil
+}
+
+func (s *service) Get(ctx context.Context, req *api.GetNamespaceRequest) (*api.GetNamespaceResponse, error) {
+ var resp api.GetNamespaceResponse
+
+ return &resp, s.withStoreView(ctx, func(ctx context.Context, store namespaces.Store) error {
+ labels, err := store.Labels(ctx, req.Name)
+ if err != nil {
+ return errdefs.ToGRPC(err)
+ }
+
+ resp.Namespace = api.Namespace{
+ Name: req.Name,
+ Labels: labels,
+ }
+
+ return nil
+ })
+}
+
+func (s *service) List(ctx context.Context, req *api.ListNamespacesRequest) (*api.ListNamespacesResponse, error) {
+ var resp api.ListNamespacesResponse
+
+ return &resp, s.withStoreView(ctx, func(ctx context.Context, store namespaces.Store) error {
+ namespaces, err := store.List(ctx)
+ if err != nil {
+ return err
+ }
+
+ for _, namespace := range namespaces {
+ labels, err := store.Labels(ctx, namespace)
+ if err != nil {
+ // In general, this should be unlikely, since we are holding a
+ // transaction to service this request.
+ return errdefs.ToGRPC(err)
+ }
+
+ resp.Namespaces = append(resp.Namespaces, api.Namespace{
+ Name: namespace,
+ Labels: labels,
+ })
+ }
+
+ return nil
+ })
+}
+
+func (s *service) Create(ctx context.Context, req *api.CreateNamespaceRequest) (*api.CreateNamespaceResponse, error) {
+ var resp api.CreateNamespaceResponse
+
+ if err := s.withStoreUpdate(ctx, func(ctx context.Context, store namespaces.Store) error {
+ if err := store.Create(ctx, req.Namespace.Name, req.Namespace.Labels); err != nil {
+ return errdefs.ToGRPC(err)
+ }
+
+ for k, v := range req.Namespace.Labels {
+ if err := store.SetLabel(ctx, req.Namespace.Name, k, v); err != nil {
+ return err
+ }
+ }
+
+ resp.Namespace = req.Namespace
+ return nil
+ }); err != nil {
+ return &resp, err
+ }
+
+ if err := s.publisher.Publish(ctx, "/namespaces/create", &eventsapi.NamespaceCreate{
+ Name: req.Namespace.Name,
+ Labels: req.Namespace.Labels,
+ }); err != nil {
+ return &resp, err
+ }
+
+ return &resp, nil
+
+}
+
+func (s *service) Update(ctx context.Context, req *api.UpdateNamespaceRequest) (*api.UpdateNamespaceResponse, error) {
+ var resp api.UpdateNamespaceResponse
+ if err := s.withStoreUpdate(ctx, func(ctx context.Context, store namespaces.Store) error {
+ if req.UpdateMask != nil && len(req.UpdateMask.Paths) > 0 {
+ for _, path := range req.UpdateMask.Paths {
+ switch {
+ case strings.HasPrefix(path, "labels."):
+ key := strings.TrimPrefix(path, "labels.")
+ if err := store.SetLabel(ctx, req.Namespace.Name, key, req.Namespace.Labels[key]); err != nil {
+ return err
+ }
+ default:
+ return grpc.Errorf(codes.InvalidArgument, "cannot update %q field", path)
+ }
+ }
+ } else {
+ // clear out the existing labels and then set them to the incoming request.
+ // get current set of labels
+ labels, err := store.Labels(ctx, req.Namespace.Name)
+ if err != nil {
+ return errdefs.ToGRPC(err)
+ }
+
+ for k := range labels {
+ if err := store.SetLabel(ctx, req.Namespace.Name, k, ""); err != nil {
+ return err
+ }
+ }
+
+ for k, v := range req.Namespace.Labels {
+ if err := store.SetLabel(ctx, req.Namespace.Name, k, v); err != nil {
+ return err
+ }
+
+ }
+ }
+
+ return nil
+ }); err != nil {
+ return &resp, err
+ }
+
+ if err := s.publisher.Publish(ctx, "/namespaces/update", &eventsapi.NamespaceUpdate{
+ Name: req.Namespace.Name,
+ Labels: req.Namespace.Labels,
+ }); err != nil {
+ return &resp, err
+ }
+
+ return &resp, nil
+}
+
+func (s *service) Delete(ctx context.Context, req *api.DeleteNamespaceRequest) (*empty.Empty, error) {
+ if err := s.withStoreUpdate(ctx, func(ctx context.Context, store namespaces.Store) error {
+ return errdefs.ToGRPC(store.Delete(ctx, req.Name))
+ }); err != nil {
+ return &empty.Empty{}, err
+ }
+ // set the namespace in the context before publishing the event
+ ctx = namespaces.WithNamespace(ctx, req.Name)
+ if err := s.publisher.Publish(ctx, "/namespaces/delete", &eventsapi.NamespaceDelete{
+ Name: req.Name,
+ }); err != nil {
+ return &empty.Empty{}, err
+ }
+
+ return &empty.Empty{}, nil
+}
+
+func (s *service) withStore(ctx context.Context, fn func(ctx context.Context, store namespaces.Store) error) func(tx *bolt.Tx) error {
+ return func(tx *bolt.Tx) error { return fn(ctx, metadata.NewNamespaceStore(tx)) }
+}
+
+func (s *service) withStoreView(ctx context.Context, fn func(ctx context.Context, store namespaces.Store) error) error {
+ return s.db.View(s.withStore(ctx, fn))
+}
+
+func (s *service) withStoreUpdate(ctx context.Context, fn func(ctx context.Context, store namespaces.Store) error) error {
+ return s.db.Update(s.withStore(ctx, fn))
+}
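
The Update handler above honors a protobuf field mask: paths of the form labels.<key> update only those labels through SetLabel, while an empty mask clears the existing labels and replaces them with the incoming set. Below is a sketch of a masked single-label update, mirroring what remote.SetLabel in client.go builds; the function name and arguments are illustrative.

```go
package example

import (
	"context"

	api "github.com/containerd/containerd/api/services/namespaces/v1"
	"github.com/gogo/protobuf/types"
)

// setLabel updates a single label on a namespace without touching the rest
// by restricting the update with a "labels.<key>" field-mask path.
func setLabel(ctx context.Context, client api.NamespacesClient, ns, key, value string) error {
	req := &api.UpdateNamespaceRequest{
		Namespace: api.Namespace{
			Name:   ns,
			Labels: map[string]string{key: value},
		},
		UpdateMask: &types.FieldMask{
			Paths: []string{"labels." + key},
		},
	}
	_, err := client.Update(ctx, req)
	return err
}
```
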
diff --git a/components/engine/vendor/github.com/containerd/containerd/snapshot/snapshotter.go b/components/engine/vendor/github.com/containerd/containerd/snapshot/snapshotter.go
index 6beafd48af..2b3fe62755 100644
--- a/components/engine/vendor/github.com/containerd/containerd/snapshot/snapshotter.go
+++ b/components/engine/vendor/github.com/containerd/containerd/snapshot/snapshotter.go
@@ -20,6 +20,9 @@ const (
KindCommitted
)
+// ParseKind parses the provided string into a Kind
+//
+// If the string cannot be parsed, KindUnknown is returned
func ParseKind(s string) Kind {
s = strings.ToLower(s)
switch s {
@@ -34,6 +37,7 @@ func ParseKind(s string) Kind {
return KindUnknown
}
+// String returns the string representation of the Kind
func (k Kind) String() string {
switch k {
case KindView:
@@ -47,10 +51,12 @@ func (k Kind) String() string {
return "Unknown"
}
+// MarshalJSON marshals the Kind into its JSON string representation
func (k Kind) MarshalJSON() ([]byte, error) {
return json.Marshal(k.String())
}
+// UnmarshalJSON unmarshals the Kind from its JSON string representation
func (k *Kind) UnmarshalJSON(b []byte) error {
var s string
if err := json.Unmarshal(b, &s); err != nil {
@@ -81,6 +87,7 @@ type Usage struct {
Size int64 // provides usage, in bytes, of snapshot
}
+// Add the provided usage to the current usage
func (u *Usage) Add(other Usage) {
u.Size += other.Size
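
The Kind helpers documented above round-trip between the enum and its string form. A small sketch, assuming the snapshot package path used in this diff; expected outputs are noted in comments.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containerd/containerd/snapshot"
)

func main() {
	// ParseKind is case-insensitive; strings it does not recognize map to KindUnknown.
	k := snapshot.ParseKind("committed")
	fmt.Println(k) // prints the string form via Kind.String()

	// MarshalJSON emits the string form; UnmarshalJSON parses it back.
	b, err := json.Marshal(k)
	if err != nil {
		panic(err)
	}
	var back snapshot.Kind
	if err := json.Unmarshal(b, &back); err != nil {
		panic(err)
	}
	fmt.Println(back == snapshot.KindCommitted) // expected: true
}
```
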
diff --git a/components/engine/vendor/github.com/containerd/containerd/spec_opts_unix.go b/components/engine/vendor/github.com/containerd/containerd/spec_opts_unix.go
index 7009522d22..01d5121d41 100644
--- a/components/engine/vendor/github.com/containerd/containerd/spec_opts_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/spec_opts_unix.go
@@ -11,17 +11,16 @@ import (
"path/filepath"
"strconv"
"strings"
- "time"
"golang.org/x/sys/unix"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/fs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/platforms"
- "github.com/containerd/containerd/snapshot"
"github.com/opencontainers/image-spec/identity"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/runc/libcontainer/user"
@@ -260,19 +259,17 @@ func withRemappedSnapshotBase(id string, i Image, uid, gid uint32, readonly bool
snapshotter = client.SnapshotService(c.Snapshotter)
parent = identity.ChainID(diffIDs).String()
usernsID = fmt.Sprintf("%s-%d-%d", parent, uid, gid)
- opt = snapshot.WithLabels(map[string]string{
- "containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339),
- })
)
if _, err := snapshotter.Stat(ctx, usernsID); err == nil {
- if _, err := snapshotter.Prepare(ctx, id, usernsID, opt); err != nil {
+ if _, err := snapshotter.Prepare(ctx, id, usernsID); err == nil {
+ c.SnapshotKey = id
+ c.Image = i.Name()
+ return nil
+ } else if !errdefs.IsNotFound(err) {
return err
}
- c.SnapshotKey = id
- c.Image = i.Name()
- return nil
}
- mounts, err := snapshotter.Prepare(ctx, usernsID+"-remap", parent, opt)
+ mounts, err := snapshotter.Prepare(ctx, usernsID+"-remap", parent)
if err != nil {
return err
}
@@ -280,13 +277,13 @@ func withRemappedSnapshotBase(id string, i Image, uid, gid uint32, readonly bool
snapshotter.Remove(ctx, usernsID)
return err
}
- if err := snapshotter.Commit(ctx, usernsID, usernsID+"-remap", opt); err != nil {
+ if err := snapshotter.Commit(ctx, usernsID, usernsID+"-remap"); err != nil {
return err
}
if readonly {
- _, err = snapshotter.View(ctx, id, usernsID, opt)
+ _, err = snapshotter.View(ctx, id, usernsID)
} else {
- _, err = snapshotter.Prepare(ctx, id, usernsID, opt)
+ _, err = snapshotter.Prepare(ctx, id, usernsID)
}
if err != nil {
return err
diff --git a/components/engine/vendor/github.com/containerd/containerd/spec_opts_windows.go b/components/engine/vendor/github.com/containerd/containerd/spec_opts_windows.go
index 5aa5c30297..1fc5d5e37d 100644
--- a/components/engine/vendor/github.com/containerd/containerd/spec_opts_windows.go
+++ b/components/engine/vendor/github.com/containerd/containerd/spec_opts_windows.go
@@ -15,6 +15,7 @@ import (
specs "github.com/opencontainers/runtime-spec/specs-go"
)
+// WithImageConfig configures the spec from the configuration of an Image
func WithImageConfig(i Image) SpecOpts {
return func(ctx context.Context, client *Client, _ *containers.Container, s *specs.Spec) error {
var (
@@ -51,6 +52,8 @@ func WithImageConfig(i Image) SpecOpts {
}
}
+// WithTTY sets the information on the spec as well as the environment variables for
+// using a TTY
func WithTTY(width, height int) SpecOpts {
return func(_ context.Context, _ *Client, _ *containers.Container, s *specs.Spec) error {
s.Process.Terminal = true
@@ -63,6 +66,7 @@ func WithTTY(width, height int) SpecOpts {
}
}
+// WithResources sets the provided resources on the spec for task updates
func WithResources(resources *specs.WindowsResources) UpdateTaskOpts {
return func(ctx context.Context, client *Client, r *UpdateTaskInfo) error {
r.Resources = resources
diff --git a/components/engine/vendor/github.com/containerd/containerd/spec_unix.go b/components/engine/vendor/github.com/containerd/containerd/spec_unix.go
index 9a0b537dc5..957f90ef91 100644
--- a/components/engine/vendor/github.com/containerd/containerd/spec_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/spec_unix.go
@@ -151,6 +151,7 @@ func createDefaultSpec(ctx context.Context, id string) (*specs.Spec, error) {
"/proc/timer_stats",
"/proc/sched_debug",
"/sys/firmware",
+ "/proc/scsi",
},
ReadonlyPaths: []string{
"/proc/asound",
diff --git a/components/engine/vendor/github.com/containerd/containerd/sys/oom_windows.go b/components/engine/vendor/github.com/containerd/containerd/sys/oom_windows.go
index a72568b279..6e42ddce8e 100644
--- a/components/engine/vendor/github.com/containerd/containerd/sys/oom_windows.go
+++ b/components/engine/vendor/github.com/containerd/containerd/sys/oom_windows.go
@@ -1,5 +1,8 @@
package sys
+// SetOOMScore sets the oom score for the process
+//
+// Not implemented on Windows
func SetOOMScore(pid, score int) error {
return nil
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/sys/prctl_solaris.go b/components/engine/vendor/github.com/containerd/containerd/sys/prctl_solaris.go
deleted file mode 100644
index 9443f14dbd..0000000000
--- a/components/engine/vendor/github.com/containerd/containerd/sys/prctl_solaris.go
+++ /dev/null
@@ -1,19 +0,0 @@
-// +build solaris
-
-package sys
-
-import (
- "errors"
-)
-
-//Solaris TODO
-
-// GetSubreaper returns the subreaper setting for the calling process
-func GetSubreaper() (int, error) {
- return 0, errors.New("osutils GetSubreaper not implemented on Solaris")
-}
-
-// SetSubreaper sets the value i as the subreaper setting for the calling process
-func SetSubreaper(i int) error {
- return errors.New("osutils SetSubreaper not implemented on Solaris")
-}
diff --git a/components/engine/vendor/github.com/containerd/containerd/sys/stat_bsd.go b/components/engine/vendor/github.com/containerd/containerd/sys/stat_bsd.go
index 13db2b32e7..e043ae52bf 100644
--- a/components/engine/vendor/github.com/containerd/containerd/sys/stat_bsd.go
+++ b/components/engine/vendor/github.com/containerd/containerd/sys/stat_bsd.go
@@ -6,14 +6,17 @@ import (
"syscall"
)
+// StatAtime returns the access time from a stat struct
func StatAtime(st *syscall.Stat_t) syscall.Timespec {
return st.Atimespec
}
+// StatCtime returns the status change time from a stat struct
func StatCtime(st *syscall.Stat_t) syscall.Timespec {
return st.Ctimespec
}
+// StatMtime returns the modified time from a stat struct
func StatMtime(st *syscall.Stat_t) syscall.Timespec {
return st.Mtimespec
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/sys/stat_unix.go b/components/engine/vendor/github.com/containerd/containerd/sys/stat_unix.go
index da13ed26e2..1f983a98db 100644
--- a/components/engine/vendor/github.com/containerd/containerd/sys/stat_unix.go
+++ b/components/engine/vendor/github.com/containerd/containerd/sys/stat_unix.go
@@ -6,14 +6,17 @@ import (
"syscall"
)
+// StatAtime returns the Atim
func StatAtime(st *syscall.Stat_t) syscall.Timespec {
return st.Atim
}
+// StatCtime returns the Ctim
func StatCtime(st *syscall.Stat_t) syscall.Timespec {
return st.Ctim
}
+// StatMtime returns the Mtim
func StatMtime(st *syscall.Stat_t) syscall.Timespec {
return st.Mtim
}
diff --git a/components/engine/vendor/github.com/containerd/containerd/task.go b/components/engine/vendor/github.com/containerd/containerd/task.go
index 6b9af1d410..7ae1bf6228 100644
--- a/components/engine/vendor/github.com/containerd/containerd/task.go
+++ b/components/engine/vendor/github.com/containerd/containerd/task.go
@@ -18,7 +18,6 @@ import (
"github.com/containerd/containerd/diff"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
- "github.com/containerd/containerd/log"
"github.com/containerd/containerd/mount"
"github.com/containerd/containerd/plugin"
"github.com/containerd/containerd/rootfs"
@@ -26,7 +25,6 @@ import (
google_protobuf "github.com/gogo/protobuf/types"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
- ocispec "github.com/opencontainers/image-spec/specs-go/v1"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
)
@@ -51,6 +49,7 @@ type Status struct {
ExitTime time.Time
}
+// ProcessInfo provides platform specific process information
type ProcessInfo struct {
// Pid is the process ID
Pid uint32
@@ -358,6 +357,12 @@ func (t *task) Resize(ctx context.Context, w, h uint32) error {
}
func (t *task) Checkpoint(ctx context.Context, opts ...CheckpointTaskOpts) (Image, error) {
+ ctx, done, err := t.client.withLease(ctx)
+ if err != nil {
+ return nil, err
+ }
+ defer done()
+
request := &tasks.CheckpointTaskRequest{
ContainerID: t.id,
}
@@ -391,15 +396,6 @@ func (t *task) Checkpoint(ctx context.Context, opts ...CheckpointTaskOpts) (Imag
index := v1.Index{
Annotations: make(map[string]string),
}
- // make sure we clear the gc root labels reguardless of success
- var clearRoots []ocispec.Descriptor
- defer func() {
- for _, r := range append(index.Manifests, clearRoots...) {
- if err := clearRootGCLabel(ctx, t.client, r); err != nil {
- log.G(ctx).WithError(err).WithField("dgst", r.Digest).Warnf("failed to remove root marker")
- }
- }
- }()
if err := t.checkpointTask(ctx, &index, request); err != nil {
return nil, err
}
@@ -418,7 +414,6 @@ func (t *task) Checkpoint(ctx context.Context, opts ...CheckpointTaskOpts) (Imag
if err != nil {
return nil, err
}
- clearRoots = append(clearRoots, desc)
im := images.Image{
Name: i.Name,
Target: desc,
@@ -534,9 +529,6 @@ func (t *task) checkpointTask(ctx context.Context, index *v1.Index, request *tas
func (t *task) checkpointRWSnapshot(ctx context.Context, index *v1.Index, snapshotterName string, id string) error {
opts := []diff.Opt{
diff.WithReference(fmt.Sprintf("checkpoint-rw-%s", id)),
- diff.WithLabels(map[string]string{
- "containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339),
- }),
}
rw, err := rootfs.Diff(ctx, id, t.client.SnapshotService(snapshotterName), t.client.DiffService(), opts...)
if err != nil {
@@ -563,9 +555,7 @@ func (t *task) checkpointImage(ctx context.Context, index *v1.Index, image strin
}
func (t *task) writeIndex(ctx context.Context, index *v1.Index) (d v1.Descriptor, err error) {
- labels := map[string]string{
- "containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339),
- }
+ labels := map[string]string{}
for i, m := range index.Manifests {
labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i)] = m.Digest.String()
}
@@ -595,9 +585,3 @@ func writeContent(ctx context.Context, store content.Store, mediaType, ref strin
Size: size,
}, nil
}
-
-func clearRootGCLabel(ctx context.Context, client *Client, desc ocispec.Descriptor) error {
- info := content.Info{Digest: desc.Digest}
- _, err := client.ContentStore().Update(ctx, info, "labels.containerd.io/gc.root")
- return err
-}
diff --git a/components/engine/vendor/github.com/containerd/containerd/vendor.conf b/components/engine/vendor/github.com/containerd/containerd/vendor.conf
index 1255a4e1a3..671346821c 100644
--- a/components/engine/vendor/github.com/containerd/containerd/vendor.conf
+++ b/components/engine/vendor/github.com/containerd/containerd/vendor.conf
@@ -16,14 +16,14 @@ github.com/docker/go-units v0.3.1
github.com/gogo/protobuf d2e1ade2d719b78fe5b061b4c18a9f7111b5bdc8
github.com/golang/protobuf 5a0f697c9ed9d68fef0116532c6e05cfeae00e55
github.com/opencontainers/runtime-spec v1.0.0
-github.com/opencontainers/runc 0351df1c5a66838d0c392b4ac4cf9450de844e2d
+github.com/opencontainers/runc 74a17296470088de3805e138d3d87c62e613dfc4
github.com/sirupsen/logrus v1.0.0
github.com/containerd/btrfs cc52c4dea2ce11a44e6639e561bb5c2af9ada9e3
github.com/stretchr/testify v1.1.4
github.com/davecgh/go-spew v1.1.0
github.com/pmezard/go-difflib v1.0.0
github.com/containerd/fifo fbfb6a11ec671efbe94ad1c12c2e98773f19e1e6
-github.com/urfave/cli 8ba6f23b6e36d03666a14bd9421f5e3efcb59aca
+github.com/urfave/cli 7bc6a0acffa589f415f88aca16cc1de5ffd66f9c
golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
google.golang.org/grpc v1.3.0
github.com/pkg/errors v0.8.0
diff --git a/components/engine/vendor/github.com/containerd/containerd/windows/hcsshimtypes/doc.go b/components/engine/vendor/github.com/containerd/containerd/windows/hcsshimtypes/doc.go
index 1712b1aedc..4b1b4b3414 100644
--- a/components/engine/vendor/github.com/containerd/containerd/windows/hcsshimtypes/doc.go
+++ b/components/engine/vendor/github.com/containerd/containerd/windows/hcsshimtypes/doc.go
@@ -1,2 +1,2 @@
-// hcsshimtypes holds the windows runtime specific types
+// Package hcsshimtypes holds the windows runtime specific types
package hcsshimtypes
diff --git a/components/engine/vendor/github.com/docker/swarmkit/agent/exec/controller.go b/components/engine/vendor/github.com/docker/swarmkit/agent/exec/controller.go
index 85110ba971..9b4fc7bca1 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/agent/exec/controller.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/agent/exec/controller.go
@@ -119,6 +119,7 @@ func Resolve(ctx context.Context, task *api.Task, executor Executor) (Controller
// we always want to proceed to accepted when we resolve the controller
status.Message = "accepted"
status.State = api.TaskStateAccepted
+ status.Err = ""
}
return ctlr, status, err
@@ -158,6 +159,7 @@ func Do(ctx context.Context, task *api.Task, ctlr Controller) (*api.TaskStatus,
current := status.State
status.State = state
status.Message = msg
+ status.Err = ""
if current > state {
panic("invalid state transition")
diff --git a/components/engine/vendor/github.com/docker/swarmkit/api/genericresource/parse.go b/components/engine/vendor/github.com/docker/swarmkit/api/genericresource/parse.go
index de30908104..f39a7077a8 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/api/genericresource/parse.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/api/genericresource/parse.go
@@ -1,6 +1,7 @@
package genericresource
import (
+ "encoding/csv"
"fmt"
"strconv"
"strings"
@@ -12,36 +13,99 @@ func newParseError(format string, args ...interface{}) error {
return fmt.Errorf("could not parse GenericResource: "+format, args...)
}
-// Parse parses the GenericResource resources given by the arguments
-func Parse(cmd string) ([]*api.GenericResource, error) {
- var rs []*api.GenericResource
+// discreteResourceVal returns an int64 if the string is a discreteResource
+// and an error if it isn't
+func discreteResourceVal(res string) (int64, error) {
+ return strconv.ParseInt(res, 10, 64)
+}
- for _, term := range strings.Split(cmd, ";") {
+// allNamedResources returns true if all the resources in the array are namedResources
+// e.g: res = [red, orange, green]
+func allNamedResources(res []string) bool {
+ for _, v := range res {
+ if _, err := discreteResourceVal(v); err == nil {
+ return false
+ }
+ }
+
+ return true
+}
+
+// ParseCmd parses the Generic Resource command line argument
+// and returns a list of *api.GenericResource
+func ParseCmd(cmd string) ([]*api.GenericResource, error) {
+ if strings.Contains(cmd, "\n") {
+ return nil, newParseError("unexpected '\\n' character")
+ }
+
+ r := csv.NewReader(strings.NewReader(cmd))
+ records, err := r.ReadAll()
+
+ if err != nil {
+ return nil, newParseError("%v", err)
+ }
+
+ if len(records) != 1 {
+ return nil, newParseError("found multiple records while parsing cmd %v", records)
+ }
+
+ return Parse(records[0])
+}
+
+// Parse parses a table of GenericResource resources
+func Parse(cmds []string) ([]*api.GenericResource, error) {
+ tokens := make(map[string][]string)
+
+ for _, term := range cmds {
kva := strings.Split(term, "=")
if len(kva) != 2 {
- return nil, newParseError("incorrect term %s, missing '=' or malformed expr", term)
+ return nil, newParseError("incorrect term %s, missing"+
+ " '=' or malformed expression", term)
}
key := strings.TrimSpace(kva[0])
val := strings.TrimSpace(kva[1])
- u, err := strconv.ParseInt(val, 10, 64)
- if err == nil {
+ tokens[key] = append(tokens[key], val)
+ }
+
+ var rs []*api.GenericResource
+ for k, v := range tokens {
+ if u, ok := isDiscreteResource(v); ok {
if u < 0 {
- return nil, newParseError("cannot ask for negative resource %s", key)
+ return nil, newParseError("cannot ask for"+
+ " negative resource %s", k)
}
- rs = append(rs, NewDiscrete(key, u))
+
+ rs = append(rs, NewDiscrete(k, u))
continue
}
- if len(val) > 2 && val[0] == '{' && val[len(val)-1] == '}' {
- val = val[1 : len(val)-1]
- rs = append(rs, NewSet(key, strings.Split(val, ",")...)...)
+ if allNamedResources(v) {
+ rs = append(rs, NewSet(k, v...)...)
continue
}
- return nil, newParseError("could not parse expression '%s'", term)
+ return nil, newParseError("mixed discrete and named resources"+
+ " in expression '%s=%s'", k, v)
}
return rs, nil
}
+
+// isDiscreteResource returns the parsed value and true if the array of
+// resources is a single Discrete Resource.
+// e.g: res = [1]
+func isDiscreteResource(values []string) (int64, bool) {
+ if len(values) != 1 {
+ return int64(0), false
+ }
+
+ u, err := discreteResourceVal(values[0])
+ if err != nil {
+ return int64(0), false
+ }
+
+ return u, true
+
+}
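
To make the parsing rules above concrete: ParseCmd splits a comma-separated argument with encoding/csv, groups values by key, then treats a single numeric value as a discrete resource, multiple non-numeric values as a named set, and a mix of the two under one key as an error. A short sketch with illustrative resource names:

```go
package main

import (
	"fmt"

	"github.com/docker/swarmkit/api/genericresource"
)

func main() {
	// "gpu" becomes a discrete resource (value 2); "fpga" becomes the named set {red, green}.
	res, err := genericresource.ParseCmd("gpu=2,fpga=red,fpga=green")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(res)) // one discrete "gpu" entry plus the named "fpga" entries

	// Mixing a numeric and a named value under the same key is rejected.
	_, err = genericresource.ParseCmd("gpu=2,gpu=blue")
	fmt.Println(err != nil) // true
}
```
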
diff --git a/components/engine/vendor/github.com/docker/swarmkit/api/specs.pb.go b/components/engine/vendor/github.com/docker/swarmkit/api/specs.pb.go
index bda30a3dfc..dfd18a6d78 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/api/specs.pb.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/api/specs.pb.go
@@ -10,6 +10,7 @@ import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import google_protobuf1 "github.com/gogo/protobuf/types"
import google_protobuf3 "github.com/gogo/protobuf/types"
+import google_protobuf4 "github.com/gogo/protobuf/types"
import github_com_docker_swarmkit_api_deepcopy "github.com/docker/swarmkit/api/deepcopy"
@@ -74,6 +75,35 @@ func (x NodeSpec_Availability) String() string {
}
func (NodeSpec_Availability) EnumDescriptor() ([]byte, []int) { return fileDescriptorSpecs, []int{0, 1} }
+type ContainerSpec_Isolation int32
+
+const (
+ // ISOLATION_DEFAULT uses whatever default value the container runtime provides
+ ContainerIsolationDefault ContainerSpec_Isolation = 0
+ // ISOLATION_PROCESS forces windows container isolation
+ ContainerIsolationProcess ContainerSpec_Isolation = 1
+ // ISOLATION_HYPERV forces Hyper-V isolation
+ ContainerIsolationHyperV ContainerSpec_Isolation = 2
+)
+
+var ContainerSpec_Isolation_name = map[int32]string{
+ 0: "ISOLATION_DEFAULT",
+ 1: "ISOLATION_PROCESS",
+ 2: "ISOLATION_HYPERV",
+}
+var ContainerSpec_Isolation_value = map[string]int32{
+ "ISOLATION_DEFAULT": 0,
+ "ISOLATION_PROCESS": 1,
+ "ISOLATION_HYPERV": 2,
+}
+
+func (x ContainerSpec_Isolation) String() string {
+ return proto.EnumName(ContainerSpec_Isolation_name, int32(x))
+}
+func (ContainerSpec_Isolation) EnumDescriptor() ([]byte, []int) {
+ return fileDescriptorSpecs, []int{8, 0}
+}
+
// ResolutionMode specifies the mode of resolution to use for
// internal loadbalancing between tasks which are all within
// the cluster. This is sometimes called the east-west data path.
@@ -542,6 +572,8 @@ type ContainerSpec struct {
Groups []string `protobuf:"bytes,11,rep,name=groups" json:"groups,omitempty"`
// Privileges specifies security configuration/permissions.
Privileges *Privileges `protobuf:"bytes,22,opt,name=privileges" json:"privileges,omitempty"`
+ // Init declares that a custom init will be running inside the container; if null, the daemon's configured settings are used
+ Init *google_protobuf4.BoolValue `protobuf:"bytes,23,opt,name=init" json:"init,omitempty"`
// TTY declares that a TTY should be attached to the standard streams,
// including stdin if it is still open.
TTY bool `protobuf:"varint,13,opt,name=tty,proto3" json:"tty,omitempty"`
@@ -585,6 +617,12 @@ type ContainerSpec struct {
// task will exit and a new task will be rescheduled elsewhere. A container
// is considered unhealthy after `Retries` number of consecutive failures.
Healthcheck *HealthConfig `protobuf:"bytes,16,opt,name=healthcheck" json:"healthcheck,omitempty"`
+ // Isolation defines the isolation level for Windows containers (default, process, hyperv).
+ // Runtimes that don't support it ignore this field.
+ Isolation ContainerSpec_Isolation `protobuf:"varint,24,opt,name=isolation,proto3,enum=docker.swarmkit.v1.ContainerSpec_Isolation" json:"isolation,omitempty"`
+ // PidsLimit protects the OS from resource exhaustion caused by applications inside
+ // the container, for example via a fork bomb attack.
+ PidsLimit int64 `protobuf:"varint,25,opt,name=pidsLimit,proto3" json:"pidsLimit,omitempty"`
}
func (m *ContainerSpec) Reset() { *m = ContainerSpec{} }
@@ -830,6 +868,7 @@ func init() {
proto.RegisterType((*ConfigSpec)(nil), "docker.swarmkit.v1.ConfigSpec")
proto.RegisterEnum("docker.swarmkit.v1.NodeSpec_Membership", NodeSpec_Membership_name, NodeSpec_Membership_value)
proto.RegisterEnum("docker.swarmkit.v1.NodeSpec_Availability", NodeSpec_Availability_name, NodeSpec_Availability_value)
+ proto.RegisterEnum("docker.swarmkit.v1.ContainerSpec_Isolation", ContainerSpec_Isolation_name, ContainerSpec_Isolation_value)
proto.RegisterEnum("docker.swarmkit.v1.EndpointSpec_ResolutionMode", EndpointSpec_ResolutionMode_name, EndpointSpec_ResolutionMode_value)
}
@@ -1090,6 +1129,10 @@ func (m *ContainerSpec) CopyFrom(src interface{}) {
m.Privileges = &Privileges{}
github_com_docker_swarmkit_api_deepcopy.Copy(m.Privileges, o.Privileges)
}
+ if o.Init != nil {
+ m.Init = &google_protobuf4.BoolValue{}
+ github_com_docker_swarmkit_api_deepcopy.Copy(m.Init, o.Init)
+ }
if o.Mounts != nil {
m.Mounts = make([]Mount, len(o.Mounts))
for i := range m.Mounts {
@@ -1996,6 +2039,32 @@ func (m *ContainerSpec) MarshalTo(dAtA []byte) (int, error) {
}
i += n23
}
+ if m.Init != nil {
+ dAtA[i] = 0xba
+ i++
+ dAtA[i] = 0x1
+ i++
+ i = encodeVarintSpecs(dAtA, i, uint64(m.Init.Size()))
+ n24, err := m.Init.MarshalTo(dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n24
+ }
+ if m.Isolation != 0 {
+ dAtA[i] = 0xc0
+ i++
+ dAtA[i] = 0x1
+ i++
+ i = encodeVarintSpecs(dAtA, i, uint64(m.Isolation))
+ }
+ if m.PidsLimit != 0 {
+ dAtA[i] = 0xc8
+ i++
+ dAtA[i] = 0x1
+ i++
+ i = encodeVarintSpecs(dAtA, i, uint64(m.PidsLimit))
+ }
return i, nil
}
@@ -2141,20 +2210,20 @@ func (m *NetworkSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0xa
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Annotations.Size()))
- n24, err := m.Annotations.MarshalTo(dAtA[i:])
+ n25, err := m.Annotations.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n24
+ i += n25
if m.DriverConfig != nil {
dAtA[i] = 0x12
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.DriverConfig.Size()))
- n25, err := m.DriverConfig.MarshalTo(dAtA[i:])
+ n26, err := m.DriverConfig.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n25
+ i += n26
}
if m.Ipv6Enabled {
dAtA[i] = 0x18
@@ -2180,11 +2249,11 @@ func (m *NetworkSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0x2a
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.IPAM.Size()))
- n26, err := m.IPAM.MarshalTo(dAtA[i:])
+ n27, err := m.IPAM.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n26
+ i += n27
}
if m.Attachable {
dAtA[i] = 0x30
@@ -2207,11 +2276,11 @@ func (m *NetworkSpec) MarshalTo(dAtA []byte) (int, error) {
i++
}
if m.ConfigFrom != nil {
- nn27, err := m.ConfigFrom.MarshalTo(dAtA[i:])
+ nn28, err := m.ConfigFrom.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += nn27
+ i += nn28
}
return i, nil
}
@@ -2242,67 +2311,67 @@ func (m *ClusterSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0xa
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Annotations.Size()))
- n28, err := m.Annotations.MarshalTo(dAtA[i:])
- if err != nil {
- return 0, err
- }
- i += n28
- dAtA[i] = 0x12
- i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.AcceptancePolicy.Size()))
- n29, err := m.AcceptancePolicy.MarshalTo(dAtA[i:])
+ n29, err := m.Annotations.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n29
- dAtA[i] = 0x1a
+ dAtA[i] = 0x12
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.Orchestration.Size()))
- n30, err := m.Orchestration.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.AcceptancePolicy.Size()))
+ n30, err := m.AcceptancePolicy.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n30
- dAtA[i] = 0x22
+ dAtA[i] = 0x1a
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.Raft.Size()))
- n31, err := m.Raft.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.Orchestration.Size()))
+ n31, err := m.Orchestration.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n31
- dAtA[i] = 0x2a
+ dAtA[i] = 0x22
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.Dispatcher.Size()))
- n32, err := m.Dispatcher.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.Raft.Size()))
+ n32, err := m.Raft.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n32
- dAtA[i] = 0x32
+ dAtA[i] = 0x2a
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.CAConfig.Size()))
- n33, err := m.CAConfig.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.Dispatcher.Size()))
+ n33, err := m.Dispatcher.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n33
- dAtA[i] = 0x3a
+ dAtA[i] = 0x32
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.TaskDefaults.Size()))
- n34, err := m.TaskDefaults.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.CAConfig.Size()))
+ n34, err := m.CAConfig.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n34
- dAtA[i] = 0x42
+ dAtA[i] = 0x3a
i++
- i = encodeVarintSpecs(dAtA, i, uint64(m.EncryptionConfig.Size()))
- n35, err := m.EncryptionConfig.MarshalTo(dAtA[i:])
+ i = encodeVarintSpecs(dAtA, i, uint64(m.TaskDefaults.Size()))
+ n35, err := m.TaskDefaults.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n35
+ dAtA[i] = 0x42
+ i++
+ i = encodeVarintSpecs(dAtA, i, uint64(m.EncryptionConfig.Size()))
+ n36, err := m.EncryptionConfig.MarshalTo(dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n36
return i, nil
}
@@ -2324,11 +2393,11 @@ func (m *SecretSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0xa
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Annotations.Size()))
- n36, err := m.Annotations.MarshalTo(dAtA[i:])
+ n37, err := m.Annotations.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n36
+ i += n37
if len(m.Data) > 0 {
dAtA[i] = 0x12
i++
@@ -2339,21 +2408,21 @@ func (m *SecretSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0x1a
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Templating.Size()))
- n37, err := m.Templating.MarshalTo(dAtA[i:])
+ n38, err := m.Templating.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n37
+ i += n38
}
if m.Driver != nil {
dAtA[i] = 0x22
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Driver.Size()))
- n38, err := m.Driver.MarshalTo(dAtA[i:])
+ n39, err := m.Driver.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n38
+ i += n39
}
return i, nil
}
@@ -2376,11 +2445,11 @@ func (m *ConfigSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0xa
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Annotations.Size()))
- n39, err := m.Annotations.MarshalTo(dAtA[i:])
+ n40, err := m.Annotations.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n39
+ i += n40
if len(m.Data) > 0 {
dAtA[i] = 0x12
i++
@@ -2391,11 +2460,11 @@ func (m *ConfigSpec) MarshalTo(dAtA []byte) (int, error) {
dAtA[i] = 0x1a
i++
i = encodeVarintSpecs(dAtA, i, uint64(m.Templating.Size()))
- n40, err := m.Templating.MarshalTo(dAtA[i:])
+ n41, err := m.Templating.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
- i += n40
+ i += n41
}
return i, nil
}
@@ -2721,6 +2790,16 @@ func (m *ContainerSpec) Size() (n int) {
l = m.Privileges.Size()
n += 2 + l + sovSpecs(uint64(l))
}
+ if m.Init != nil {
+ l = m.Init.Size()
+ n += 2 + l + sovSpecs(uint64(l))
+ }
+ if m.Isolation != 0 {
+ n += 2 + sovSpecs(uint64(m.Isolation))
+ }
+ if m.PidsLimit != 0 {
+ n += 2 + sovSpecs(uint64(m.PidsLimit))
+ }
return n
}
@@ -3066,6 +3145,9 @@ func (this *ContainerSpec) String() string {
`StopSignal:` + fmt.Sprintf("%v", this.StopSignal) + `,`,
`Configs:` + strings.Replace(fmt.Sprintf("%v", this.Configs), "ConfigReference", "ConfigReference", 1) + `,`,
`Privileges:` + strings.Replace(fmt.Sprintf("%v", this.Privileges), "Privileges", "Privileges", 1) + `,`,
+ `Init:` + strings.Replace(fmt.Sprintf("%v", this.Init), "BoolValue", "google_protobuf4.BoolValue", 1) + `,`,
+ `Isolation:` + fmt.Sprintf("%v", this.Isolation) + `,`,
+ `PidsLimit:` + fmt.Sprintf("%v", this.PidsLimit) + `,`,
`}`,
}, "")
return s
@@ -5141,6 +5223,77 @@ func (m *ContainerSpec) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 23:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Init", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowSpecs
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthSpecs
+ }
+ postIndex := iNdEx + msglen
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Init == nil {
+ m.Init = &google_protobuf4.BoolValue{}
+ }
+ if err := m.Init.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 24:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Isolation", wireType)
+ }
+ m.Isolation = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowSpecs
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Isolation |= (ContainerSpec_Isolation(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 25:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PidsLimit", wireType)
+ }
+ m.PidsLimit = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowSpecs
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.PidsLimit |= (int64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
default:
iNdEx = preIndex
skippy, err := skipSpecs(dAtA[iNdEx:])
@@ -6452,129 +6605,139 @@ var (
func init() { proto.RegisterFile("github.com/docker/swarmkit/api/specs.proto", fileDescriptorSpecs) }
var fileDescriptorSpecs = []byte{
- // 1975 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x57, 0xcf, 0x6f, 0x1b, 0xb9,
- 0x15, 0xb6, 0x6c, 0x59, 0x3f, 0xde, 0xc8, 0x89, 0xc2, 0xcd, 0xa6, 0x13, 0xa5, 0x6b, 0x2b, 0xda,
- 0x6c, 0xea, 0xdd, 0x45, 0x25, 0xd4, 0x2d, 0xb6, 0xd9, 0x4d, 0xb7, 0xad, 0x64, 0xa9, 0x8e, 0x9b,
- 0xc6, 0x11, 0x68, 0x6f, 0xda, 0x00, 0x05, 0x04, 0x6a, 0x86, 0x1e, 0x0d, 0x3c, 0x1a, 0x4e, 0x39,
- 0x1c, 0x2d, 0x74, 0xeb, 0x71, 0x91, 0x1e, 0x7b, 0x0e, 0x7a, 0x28, 0x7a, 0xef, 0x9f, 0x91, 0x63,
- 0x8f, 0xed, 0xc5, 0xe8, 0xea, 0x5f, 0xe8, 0xad, 0x97, 0x16, 0xe4, 0x70, 0x46, 0xa3, 0x64, 0x6c,
- 0x07, 0x68, 0x0e, 0xbd, 0x91, 0x8f, 0xdf, 0xf7, 0x48, 0x3e, 0x7e, 0x8f, 0x7c, 0x84, 0x4f, 0x1c,
- 0x57, 0x4c, 0xa2, 0x71, 0xdb, 0x62, 0xd3, 0x8e, 0xcd, 0xac, 0x33, 0xca, 0x3b, 0xe1, 0xd7, 0x84,
- 0x4f, 0xcf, 0x5c, 0xd1, 0x21, 0x81, 0xdb, 0x09, 0x03, 0x6a, 0x85, 0xed, 0x80, 0x33, 0xc1, 0x10,
- 0x8a, 0x01, 0xed, 0x04, 0xd0, 0x9e, 0xfd, 0xa0, 0x71, 0x15, 0x5f, 0xcc, 0x03, 0xaa, 0xf9, 0x8d,
- 0x9b, 0x0e, 0x73, 0x98, 0x6a, 0x76, 0x64, 0x4b, 0x5b, 0xb7, 0x1d, 0xc6, 0x1c, 0x8f, 0x76, 0x54,
- 0x6f, 0x1c, 0x9d, 0x76, 0xec, 0x88, 0x13, 0xe1, 0x32, 0x5f, 0x8f, 0xdf, 0x7e, 0x7d, 0x9c, 0xf8,
- 0xf3, 0x78, 0xa8, 0xf5, 0xb2, 0x08, 0x95, 0x23, 0x66, 0xd3, 0xe3, 0x80, 0x5a, 0xe8, 0x00, 0x0c,
- 0xe2, 0xfb, 0x4c, 0x28, 0x6e, 0x68, 0x16, 0x9a, 0x85, 0x5d, 0x63, 0x6f, 0xa7, 0xfd, 0xe6, 0x9a,
- 0xdb, 0xdd, 0x25, 0xac, 0x57, 0x7c, 0x75, 0xbe, 0xb3, 0x86, 0xb3, 0x4c, 0xf4, 0x33, 0xa8, 0xd9,
- 0x34, 0x74, 0x39, 0xb5, 0x47, 0x9c, 0x79, 0xd4, 0x5c, 0x6f, 0x16, 0x76, 0xaf, 0xed, 0x7d, 0x37,
- 0xcf, 0x93, 0x9c, 0x1c, 0x33, 0x8f, 0x62, 0x43, 0x33, 0x64, 0x07, 0x1d, 0x00, 0x4c, 0xe9, 0x74,
- 0x4c, 0x79, 0x38, 0x71, 0x03, 0x73, 0x43, 0xd1, 0xbf, 0x77, 0x11, 0x5d, 0xae, 0xbd, 0xfd, 0x24,
- 0x85, 0xe3, 0x0c, 0x15, 0x3d, 0x81, 0x1a, 0x99, 0x11, 0xd7, 0x23, 0x63, 0xd7, 0x73, 0xc5, 0xdc,
- 0x2c, 0x2a, 0x57, 0x1f, 0x5f, 0xea, 0xaa, 0x9b, 0x21, 0xe0, 0x15, 0x7a, 0xcb, 0x06, 0x58, 0x4e,
- 0x84, 0xee, 0x43, 0x79, 0x38, 0x38, 0xea, 0x1f, 0x1e, 0x1d, 0xd4, 0xd7, 0x1a, 0xb7, 0x5f, 0xbc,
- 0x6c, 0xbe, 0x2f, 0x7d, 0x2c, 0x01, 0x43, 0xea, 0xdb, 0xae, 0xef, 0xa0, 0x5d, 0xa8, 0x74, 0xf7,
- 0xf7, 0x07, 0xc3, 0x93, 0x41, 0xbf, 0x5e, 0x68, 0x34, 0x5e, 0xbc, 0x6c, 0xde, 0x5a, 0x05, 0x76,
- 0x2d, 0x8b, 0x06, 0x82, 0xda, 0x8d, 0xe2, 0x37, 0x7f, 0xde, 0x5e, 0x6b, 0x7d, 0x53, 0x80, 0x5a,
- 0x76, 0x11, 0xe8, 0x3e, 0x94, 0xba, 0xfb, 0x27, 0x87, 0xcf, 0x06, 0xf5, 0xb5, 0x25, 0x3d, 0x8b,
- 0xe8, 0x5a, 0xc2, 0x9d, 0x51, 0x74, 0x0f, 0x36, 0x87, 0xdd, 0xaf, 0x8e, 0x07, 0xf5, 0xc2, 0x72,
- 0x39, 0x59, 0xd8, 0x90, 0x44, 0xa1, 0x42, 0xf5, 0x71, 0xf7, 0xf0, 0xa8, 0xbe, 0x9e, 0x8f, 0xea,
- 0x73, 0xe2, 0xfa, 0x7a, 0x29, 0x7f, 0x2a, 0x82, 0x71, 0x4c, 0xf9, 0xcc, 0xb5, 0xde, 0xb1, 0x44,
- 0x3e, 0x83, 0xa2, 0x20, 0xe1, 0x99, 0x92, 0x86, 0x91, 0x2f, 0x8d, 0x13, 0x12, 0x9e, 0xc9, 0x49,
- 0x35, 0x5d, 0xe1, 0xa5, 0x32, 0x38, 0x0d, 0x3c, 0xd7, 0x22, 0x82, 0xda, 0x4a, 0x19, 0xc6, 0xde,
- 0x47, 0x79, 0x6c, 0x9c, 0xa2, 0xf4, 0xfa, 0x1f, 0xad, 0xe1, 0x0c, 0x15, 0x3d, 0x84, 0x92, 0xe3,
- 0xb1, 0x31, 0xf1, 0x94, 0x26, 0x8c, 0xbd, 0xbb, 0x79, 0x4e, 0x0e, 0x14, 0x62, 0xe9, 0x40, 0x53,
- 0xd0, 0x03, 0x28, 0x45, 0x81, 0x4d, 0x04, 0x35, 0x4b, 0x8a, 0xdc, 0xcc, 0x23, 0x7f, 0xa5, 0x10,
- 0xfb, 0xcc, 0x3f, 0x75, 0x1d, 0xac, 0xf1, 0xe8, 0x31, 0x54, 0x7c, 0x2a, 0xbe, 0x66, 0xfc, 0x2c,
- 0x34, 0xcb, 0xcd, 0x8d, 0x5d, 0x63, 0xef, 0xd3, 0x5c, 0x31, 0xc6, 0x98, 0xae, 0x10, 0xc4, 0x9a,
- 0x4c, 0xa9, 0x2f, 0x62, 0x37, 0xbd, 0x75, 0xb3, 0x80, 0x53, 0x07, 0xe8, 0x27, 0x50, 0xa1, 0xbe,
- 0x1d, 0x30, 0xd7, 0x17, 0x66, 0xe5, 0xe2, 0x85, 0x0c, 0x34, 0x46, 0x06, 0x13, 0xa7, 0x0c, 0xc9,
- 0xe6, 0xcc, 0xf3, 0xc6, 0xc4, 0x3a, 0x33, 0xab, 0x6f, 0xb9, 0x8d, 0x94, 0xd1, 0x2b, 0x41, 0x71,
- 0xca, 0x6c, 0xda, 0xea, 0xc0, 0x8d, 0x37, 0x42, 0x8d, 0x1a, 0x50, 0xd1, 0xa1, 0x8e, 0x35, 0x52,
- 0xc4, 0x69, 0xbf, 0x75, 0x1d, 0xb6, 0x56, 0xc2, 0xda, 0xfa, 0xeb, 0x26, 0x54, 0x92, 0xb3, 0x46,
- 0x5d, 0xa8, 0x5a, 0xcc, 0x17, 0xc4, 0xf5, 0x29, 0xd7, 0xf2, 0xca, 0x3d, 0x99, 0xfd, 0x04, 0x24,
- 0x59, 0x8f, 0xd6, 0xf0, 0x92, 0x85, 0x7e, 0x01, 0x55, 0x4e, 0x43, 0x16, 0x71, 0x8b, 0x86, 0x5a,
- 0x5f, 0xbb, 0xf9, 0x0a, 0x89, 0x41, 0x98, 0xfe, 0x2e, 0x72, 0x39, 0x95, 0x51, 0x0e, 0xf1, 0x92,
- 0x8a, 0x1e, 0x42, 0x99, 0xd3, 0x50, 0x10, 0x2e, 0x2e, 0x93, 0x08, 0x8e, 0x21, 0x43, 0xe6, 0xb9,
- 0xd6, 0x1c, 0x27, 0x0c, 0xf4, 0x10, 0xaa, 0x81, 0x47, 0x2c, 0xe5, 0xd5, 0xdc, 0x54, 0xf4, 0x0f,
- 0xf2, 0xe8, 0xc3, 0x04, 0x84, 0x97, 0x78, 0xf4, 0x39, 0x80, 0xc7, 0x9c, 0x91, 0xcd, 0xdd, 0x19,
- 0xe5, 0x5a, 0x62, 0x8d, 0x3c, 0x76, 0x5f, 0x21, 0x70, 0xd5, 0x63, 0x4e, 0xdc, 0x44, 0x07, 0xff,
- 0x93, 0xbe, 0x32, 0xda, 0x7a, 0x0c, 0x40, 0xd2, 0x51, 0xad, 0xae, 0x8f, 0xdf, 0xca, 0x95, 0x3e,
- 0x91, 0x0c, 0x1d, 0xdd, 0x85, 0xda, 0x29, 0xe3, 0x16, 0x1d, 0xe9, 0xac, 0xa9, 0x2a, 0x4d, 0x18,
- 0xca, 0x16, 0xeb, 0x0b, 0xf5, 0xa0, 0xec, 0x50, 0x9f, 0x72, 0xd7, 0x32, 0x41, 0x4d, 0x76, 0x3f,
- 0x37, 0x21, 0x63, 0x08, 0x8e, 0x7c, 0xe1, 0x4e, 0xa9, 0x9e, 0x29, 0x21, 0xa2, 0xdf, 0xc2, 0x7b,
- 0xc9, 0xf1, 0x8d, 0x38, 0x3d, 0xa5, 0x9c, 0xfa, 0x52, 0x03, 0x86, 0x8a, 0xc3, 0x47, 0x97, 0x6b,
- 0x40, 0xa3, 0xf5, 0x65, 0x83, 0xf8, 0xeb, 0x03, 0x61, 0xaf, 0x0a, 0x65, 0x1e, 0xcf, 0xdb, 0xfa,
- 0x43, 0x41, 0xaa, 0xfe, 0x35, 0x04, 0xea, 0x80, 0x91, 0x4e, 0xef, 0xda, 0x4a, 0xbd, 0xd5, 0xde,
- 0xb5, 0xc5, 0xf9, 0x0e, 0x24, 0xd8, 0xc3, 0xbe, 0xbc, 0x83, 0x74, 0xdb, 0x46, 0x03, 0xd8, 0x4a,
- 0x09, 0xf2, 0x99, 0xd7, 0x0f, 0x65, 0xf3, 0xb2, 0x95, 0x9e, 0xcc, 0x03, 0x8a, 0x6b, 0x3c, 0xd3,
- 0x6b, 0xfd, 0x06, 0xd0, 0x9b, 0x71, 0x41, 0x08, 0x8a, 0x67, 0xae, 0xaf, 0x97, 0x81, 0x55, 0x1b,
- 0xb5, 0xa1, 0x1c, 0x90, 0xb9, 0xc7, 0x88, 0xad, 0x13, 0xe3, 0x66, 0x3b, 0xae, 0x0d, 0xda, 0x49,
- 0x6d, 0xd0, 0xee, 0xfa, 0x73, 0x9c, 0x80, 0x5a, 0x8f, 0xe1, 0xfd, 0xdc, 0xe3, 0x45, 0x7b, 0x50,
- 0x4b, 0x13, 0x6e, 0xb9, 0xd7, 0xeb, 0x8b, 0xf3, 0x1d, 0x23, 0xcd, 0xcc, 0xc3, 0x3e, 0x36, 0x52,
- 0xd0, 0xa1, 0xdd, 0xfa, 0x63, 0x15, 0xb6, 0x56, 0xd2, 0x16, 0xdd, 0x84, 0x4d, 0x77, 0x4a, 0x1c,
- 0xaa, 0xd7, 0x18, 0x77, 0xd0, 0x00, 0x4a, 0x1e, 0x19, 0x53, 0x4f, 0x26, 0xaf, 0x3c, 0xb8, 0xef,
- 0x5f, 0x99, 0xff, 0xed, 0x5f, 0x29, 0xfc, 0xc0, 0x17, 0x7c, 0x8e, 0x35, 0x19, 0x99, 0x50, 0xb6,
- 0xd8, 0x74, 0x4a, 0x7c, 0xf9, 0x4c, 0x6c, 0xec, 0x56, 0x71, 0xd2, 0x95, 0x91, 0x21, 0xdc, 0x09,
- 0xcd, 0xa2, 0x32, 0xab, 0x36, 0xaa, 0xc3, 0x06, 0xf5, 0x67, 0xe6, 0xa6, 0x32, 0xc9, 0xa6, 0xb4,
- 0xd8, 0x6e, 0x9c, 0x7d, 0x55, 0x2c, 0x9b, 0x92, 0x17, 0x85, 0x94, 0x9b, 0xe5, 0x38, 0xa2, 0xb2,
- 0x8d, 0x7e, 0x0c, 0xa5, 0x29, 0x8b, 0x7c, 0x11, 0x9a, 0x15, 0xb5, 0xd8, 0xdb, 0x79, 0x8b, 0x7d,
- 0x22, 0x11, 0x5a, 0x59, 0x1a, 0x8e, 0x06, 0x70, 0x23, 0x14, 0x2c, 0x18, 0x39, 0x9c, 0x58, 0x74,
- 0x14, 0x50, 0xee, 0x32, 0x5b, 0x5f, 0xc3, 0xb7, 0xdf, 0x38, 0x94, 0xbe, 0x2e, 0xe8, 0xf0, 0x75,
- 0xc9, 0x39, 0x90, 0x94, 0xa1, 0x62, 0xa0, 0x21, 0xd4, 0x82, 0xc8, 0xf3, 0x46, 0x2c, 0x88, 0x5f,
- 0xe4, 0x38, 0x77, 0xde, 0x22, 0x64, 0xc3, 0xc8, 0xf3, 0x9e, 0xc6, 0x24, 0x6c, 0x04, 0xcb, 0x0e,
- 0xba, 0x05, 0x25, 0x87, 0xb3, 0x28, 0x88, 0xf3, 0xa6, 0x8a, 0x75, 0x0f, 0x7d, 0x09, 0xe5, 0x90,
- 0x5a, 0x9c, 0x8a, 0xd0, 0xac, 0xa9, 0xad, 0x7e, 0x98, 0x37, 0xc9, 0xb1, 0x82, 0xa4, 0x39, 0x81,
- 0x13, 0x0e, 0xba, 0x0d, 0x1b, 0x42, 0xcc, 0xcd, 0xad, 0x66, 0x61, 0xb7, 0xd2, 0x2b, 0x2f, 0xce,
- 0x77, 0x36, 0x4e, 0x4e, 0x9e, 0x63, 0x69, 0x93, 0xaf, 0xc5, 0x84, 0x85, 0xc2, 0x27, 0x53, 0x6a,
- 0x5e, 0x53, 0xb1, 0x4d, 0xfb, 0xe8, 0x39, 0x80, 0xed, 0x87, 0x23, 0x4b, 0x5d, 0x4f, 0xe6, 0x75,
- 0xb5, 0xbb, 0x4f, 0xaf, 0xde, 0x5d, 0xff, 0xe8, 0x58, 0xbf, 0x98, 0x5b, 0x8b, 0xf3, 0x9d, 0x6a,
- 0xda, 0xc5, 0x55, 0xdb, 0x0f, 0xe3, 0x26, 0xea, 0x81, 0x31, 0xa1, 0xc4, 0x13, 0x13, 0x6b, 0x42,
- 0xad, 0x33, 0xb3, 0x7e, 0xf1, 0x13, 0xf8, 0x48, 0xc1, 0xb4, 0x87, 0x2c, 0x49, 0x2a, 0x58, 0x2e,
- 0x35, 0x34, 0x6f, 0xa8, 0x58, 0xc5, 0x1d, 0xf4, 0x01, 0x00, 0x0b, 0xa8, 0x3f, 0x0a, 0x85, 0xed,
- 0xfa, 0x26, 0x92, 0x5b, 0xc6, 0x55, 0x69, 0x39, 0x96, 0x06, 0x74, 0x47, 0x3e, 0x50, 0xc4, 0x1e,
- 0x31, 0xdf, 0x9b, 0x9b, 0xef, 0xa9, 0xd1, 0x8a, 0x34, 0x3c, 0xf5, 0xbd, 0x39, 0xda, 0x01, 0x43,
- 0xe9, 0x22, 0x74, 0x1d, 0x9f, 0x78, 0xe6, 0x4d, 0x15, 0x0f, 0x90, 0xa6, 0x63, 0x65, 0x91, 0xe7,
- 0x10, 0x47, 0x23, 0x34, 0xdf, 0xbf, 0xf8, 0x1c, 0xf4, 0x62, 0x97, 0xe7, 0xa0, 0x39, 0xe8, 0xa7,
- 0x00, 0x01, 0x77, 0x67, 0xae, 0x47, 0x1d, 0x1a, 0x9a, 0xb7, 0xd4, 0xa6, 0xb7, 0x73, 0x5f, 0xa6,
- 0x14, 0x85, 0x33, 0x8c, 0xc6, 0xe7, 0x60, 0x64, 0xb2, 0x4d, 0x66, 0xc9, 0x19, 0x9d, 0xeb, 0x04,
- 0x96, 0x4d, 0x19, 0x92, 0x19, 0xf1, 0xa2, 0xf8, 0x32, 0xab, 0xe2, 0xb8, 0xf3, 0xc5, 0xfa, 0x83,
- 0x42, 0x63, 0x0f, 0x8c, 0x8c, 0xea, 0xd0, 0x87, 0xf2, 0xf6, 0x73, 0xdc, 0x50, 0xf0, 0xf9, 0x88,
- 0x44, 0x62, 0x62, 0xfe, 0x5c, 0x11, 0x6a, 0x89, 0xb1, 0x1b, 0x89, 0x49, 0x63, 0x04, 0xcb, 0xc3,
- 0x43, 0x4d, 0x30, 0xa4, 0x28, 0x42, 0xca, 0x67, 0x94, 0xcb, 0xca, 0x42, 0xc6, 0x3c, 0x6b, 0x92,
- 0xe2, 0x0d, 0x29, 0xe1, 0xd6, 0x44, 0xdd, 0x1d, 0x55, 0xac, 0x7b, 0xf2, 0x32, 0x48, 0x32, 0x44,
- 0x5f, 0x06, 0xba, 0xdb, 0xfa, 0x57, 0x01, 0x6a, 0xd9, 0x02, 0x09, 0xed, 0xc7, 0x85, 0x8d, 0xda,
- 0xd2, 0xb5, 0xbd, 0xce, 0x55, 0x05, 0x95, 0xba, 0x98, 0xbd, 0x48, 0x3a, 0x7b, 0x22, 0xff, 0x32,
- 0x8a, 0x8c, 0x7e, 0x04, 0x9b, 0x01, 0xe3, 0x22, 0xb9, 0xc2, 0xf2, 0x03, 0xcc, 0x78, 0xf2, 0xec,
- 0xc6, 0xe0, 0xd6, 0x04, 0xae, 0xad, 0x7a, 0x43, 0xf7, 0x60, 0xe3, 0xd9, 0xe1, 0xb0, 0xbe, 0xd6,
- 0xb8, 0xf3, 0xe2, 0x65, 0xf3, 0x3b, 0xab, 0x83, 0xcf, 0x5c, 0x2e, 0x22, 0xe2, 0x1d, 0x0e, 0xd1,
- 0x27, 0xb0, 0xd9, 0x3f, 0x3a, 0xc6, 0xb8, 0x5e, 0x68, 0xec, 0xbc, 0x78, 0xd9, 0xbc, 0xb3, 0x8a,
- 0x93, 0x43, 0x2c, 0xf2, 0x6d, 0xcc, 0xc6, 0x69, 0x5d, 0xff, 0xef, 0x75, 0x30, 0xf4, 0xcd, 0xfe,
- 0xae, 0xbf, 0x7e, 0x5b, 0x71, 0xd9, 0x92, 0xa4, 0xec, 0xfa, 0x95, 0xd5, 0x4b, 0x2d, 0x26, 0xe8,
- 0x33, 0xbe, 0x0b, 0x35, 0x37, 0x98, 0x7d, 0x36, 0xa2, 0x3e, 0x19, 0x7b, 0xba, 0xc4, 0xaf, 0x60,
- 0x43, 0xda, 0x06, 0xb1, 0x49, 0xde, 0x17, 0xae, 0x2f, 0x28, 0xf7, 0x75, 0xf1, 0x5e, 0xc1, 0x69,
- 0x1f, 0x7d, 0x09, 0x45, 0x37, 0x20, 0x53, 0x5d, 0x72, 0xe5, 0xee, 0xe0, 0x70, 0xd8, 0x7d, 0xa2,
- 0x35, 0xd8, 0xab, 0x2c, 0xce, 0x77, 0x8a, 0xd2, 0x80, 0x15, 0x0d, 0x6d, 0x27, 0x55, 0x8f, 0x9c,
- 0x49, 0xdd, 0xfd, 0x15, 0x9c, 0xb1, 0x48, 0x1d, 0xb9, 0xbe, 0xc3, 0x69, 0x18, 0xaa, 0x57, 0xa0,
- 0x82, 0x93, 0x2e, 0x6a, 0x40, 0x59, 0xd7, 0x4e, 0xaa, 0x58, 0xaa, 0xca, 0xba, 0x44, 0x1b, 0x7a,
- 0x5b, 0x60, 0xc4, 0xd1, 0x18, 0x9d, 0x72, 0x36, 0x6d, 0xfd, 0xa7, 0x08, 0xc6, 0xbe, 0x17, 0x85,
- 0x42, 0x3f, 0x83, 0xef, 0x2c, 0xf8, 0xcf, 0xe1, 0x06, 0x51, 0x5f, 0x49, 0xe2, 0xcb, 0x37, 0x45,
- 0x95, 0xa4, 0xfa, 0x00, 0xee, 0xe5, 0xba, 0x4b, 0xc1, 0x71, 0xf9, 0xda, 0x2b, 0x49, 0x9f, 0x66,
- 0x01, 0xd7, 0xc9, 0x6b, 0x23, 0xe8, 0x18, 0xb6, 0x18, 0xb7, 0x26, 0x34, 0x14, 0xf1, 0x4b, 0xa4,
- 0xbf, 0x5e, 0xb9, 0x9f, 0xf2, 0xa7, 0x59, 0xa0, 0xbe, 0x86, 0xe3, 0xd5, 0xae, 0xfa, 0x40, 0x0f,
- 0xa0, 0xc8, 0xc9, 0x69, 0x52, 0x5e, 0xe7, 0x26, 0x09, 0x26, 0xa7, 0x62, 0xc5, 0x85, 0x62, 0xa0,
- 0x5f, 0x02, 0xd8, 0x6e, 0x18, 0x10, 0x61, 0x4d, 0x28, 0xd7, 0x87, 0x9d, 0xbb, 0xc5, 0x7e, 0x8a,
- 0x5a, 0xf1, 0x92, 0x61, 0xa3, 0xc7, 0x50, 0xb5, 0x48, 0x22, 0xd7, 0xd2, 0xc5, 0xff, 0xd1, 0xfd,
- 0xae, 0x76, 0x51, 0x97, 0x2e, 0x16, 0xe7, 0x3b, 0x95, 0xc4, 0x82, 0x2b, 0x16, 0xd1, 0xf2, 0x7d,
- 0x0c, 0x5b, 0xf2, 0x9f, 0x3a, 0xb2, 0xe9, 0x29, 0x89, 0x3c, 0x11, 0xcb, 0xe4, 0x82, 0x67, 0x45,
- 0x7e, 0x7a, 0xfa, 0x1a, 0xa7, 0xd7, 0x55, 0x13, 0x19, 0x1b, 0xfa, 0x35, 0xdc, 0xa0, 0xbe, 0xc5,
- 0xe7, 0x4a, 0xac, 0xc9, 0x0a, 0x2b, 0x17, 0x6f, 0x76, 0x90, 0x82, 0x57, 0x36, 0x5b, 0xa7, 0xaf,
- 0xd9, 0x5b, 0xff, 0x28, 0x00, 0xc4, 0x2f, 0xf5, 0xbb, 0x15, 0x20, 0x82, 0xa2, 0x4d, 0x04, 0x51,
- 0x9a, 0xab, 0x61, 0xd5, 0x46, 0x5f, 0x00, 0x08, 0x3a, 0x0d, 0x3c, 0x22, 0x5c, 0xdf, 0xd1, 0xb2,
- 0xb9, 0xec, 0x3a, 0xc8, 0xa0, 0xd1, 0x1e, 0x94, 0xf4, 0x27, 0xa8, 0x78, 0x25, 0x4f, 0x23, 0x5b,
- 0x7f, 0x29, 0x00, 0xc4, 0xdb, 0xfc, 0xbf, 0xde, 0x5b, 0xcf, 0x7c, 0xf5, 0xed, 0xf6, 0xda, 0xdf,
- 0xbf, 0xdd, 0x5e, 0xfb, 0xfd, 0x62, 0xbb, 0xf0, 0x6a, 0xb1, 0x5d, 0xf8, 0xdb, 0x62, 0xbb, 0xf0,
- 0xcf, 0xc5, 0x76, 0x61, 0x5c, 0x52, 0x75, 0xdf, 0x0f, 0xff, 0x1b, 0x00, 0x00, 0xff, 0xff, 0xae,
- 0x88, 0xf9, 0x3c, 0x5a, 0x14, 0x00, 0x00,
+ // 2131 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x58, 0x4f, 0x6f, 0x1b, 0xc7,
+ 0x15, 0x17, 0x25, 0x8a, 0x22, 0xdf, 0x52, 0x36, 0x35, 0x71, 0x9c, 0x15, 0x6d, 0x4b, 0x34, 0xe3,
+ 0xb8, 0x4a, 0x82, 0x52, 0xa8, 0x1a, 0xa4, 0x4e, 0xdc, 0xb4, 0x25, 0x45, 0x46, 0x66, 0x6d, 0x4b,
+ 0xc4, 0x50, 0x56, 0x6b, 0xa0, 0x00, 0x31, 0xda, 0x1d, 0x91, 0x03, 0x2d, 0x77, 0xb6, 0xb3, 0x43,
+ 0x19, 0xbc, 0xf5, 0x18, 0xa8, 0x9f, 0x41, 0xe8, 0xa1, 0xe8, 0xbd, 0xfd, 0x16, 0x3e, 0xf6, 0xd8,
+ 0x5e, 0x84, 0x44, 0x5f, 0xa1, 0xb7, 0x5e, 0x5a, 0xcc, 0xec, 0xec, 0x92, 0x94, 0x57, 0x96, 0x81,
+ 0xfa, 0xd0, 0xdb, 0xcc, 0xdb, 0xdf, 0xef, 0xcd, 0xbf, 0xdf, 0xbc, 0xf7, 0x66, 0xe1, 0xb3, 0x3e,
+ 0x93, 0x83, 0xd1, 0x61, 0xcd, 0xe1, 0xc3, 0x4d, 0x97, 0x3b, 0xc7, 0x54, 0x6c, 0x86, 0xaf, 0x88,
+ 0x18, 0x1e, 0x33, 0xb9, 0x49, 0x02, 0xb6, 0x19, 0x06, 0xd4, 0x09, 0x6b, 0x81, 0xe0, 0x92, 0x23,
+ 0x14, 0x01, 0x6a, 0x31, 0xa0, 0x76, 0xf2, 0x93, 0xf2, 0x75, 0x7c, 0x39, 0x0e, 0xa8, 0xe1, 0x97,
+ 0x6f, 0xf5, 0x79, 0x9f, 0xeb, 0xe6, 0xa6, 0x6a, 0x19, 0xeb, 0x5a, 0x9f, 0xf3, 0xbe, 0x47, 0x37,
+ 0x75, 0xef, 0x70, 0x74, 0xb4, 0xe9, 0x8e, 0x04, 0x91, 0x8c, 0xfb, 0xe6, 0xfb, 0xea, 0xe5, 0xef,
+ 0xc4, 0x1f, 0x5f, 0x45, 0x7d, 0x25, 0x48, 0x10, 0x50, 0x61, 0x06, 0xac, 0x9e, 0x65, 0x21, 0xbf,
+ 0xcb, 0x5d, 0xda, 0x0d, 0xa8, 0x83, 0x76, 0xc0, 0x22, 0xbe, 0xcf, 0xa5, 0xf6, 0x1d, 0xda, 0x99,
+ 0x4a, 0x66, 0xc3, 0xda, 0x5a, 0xaf, 0xbd, 0xb9, 0xa6, 0x5a, 0x7d, 0x02, 0x6b, 0x64, 0x5f, 0x9f,
+ 0xaf, 0xcf, 0xe1, 0x69, 0x26, 0xfa, 0x25, 0x14, 0x5d, 0x1a, 0x32, 0x41, 0xdd, 0x9e, 0xe0, 0x1e,
+ 0xb5, 0xe7, 0x2b, 0x99, 0x8d, 0x1b, 0x5b, 0x77, 0xd3, 0x3c, 0xa9, 0xc1, 0x31, 0xf7, 0x28, 0xb6,
+ 0x0c, 0x43, 0x75, 0xd0, 0x0e, 0xc0, 0x90, 0x0e, 0x0f, 0xa9, 0x08, 0x07, 0x2c, 0xb0, 0x17, 0x34,
+ 0xfd, 0x47, 0x57, 0xd1, 0xd5, 0xdc, 0x6b, 0xcf, 0x13, 0x38, 0x9e, 0xa2, 0xa2, 0xe7, 0x50, 0x24,
+ 0x27, 0x84, 0x79, 0xe4, 0x90, 0x79, 0x4c, 0x8e, 0xed, 0xac, 0x76, 0xf5, 0xe9, 0x5b, 0x5d, 0xd5,
+ 0xa7, 0x08, 0x78, 0x86, 0x5e, 0x75, 0x01, 0x26, 0x03, 0xa1, 0x87, 0xb0, 0xd4, 0x69, 0xed, 0x36,
+ 0xdb, 0xbb, 0x3b, 0xa5, 0xb9, 0xf2, 0xea, 0xe9, 0x59, 0xe5, 0x43, 0xe5, 0x63, 0x02, 0xe8, 0x50,
+ 0xdf, 0x65, 0x7e, 0x1f, 0x6d, 0x40, 0xbe, 0xbe, 0xbd, 0xdd, 0xea, 0xec, 0xb7, 0x9a, 0xa5, 0x4c,
+ 0xb9, 0x7c, 0x7a, 0x56, 0xb9, 0x3d, 0x0b, 0xac, 0x3b, 0x0e, 0x0d, 0x24, 0x75, 0xcb, 0xd9, 0xef,
+ 0xfe, 0xbc, 0x36, 0x57, 0xfd, 0x2e, 0x03, 0xc5, 0xe9, 0x49, 0xa0, 0x87, 0x90, 0xab, 0x6f, 0xef,
+ 0xb7, 0x0f, 0x5a, 0xa5, 0xb9, 0x09, 0x7d, 0x1a, 0x51, 0x77, 0x24, 0x3b, 0xa1, 0xe8, 0x01, 0x2c,
+ 0x76, 0xea, 0x2f, 0xba, 0xad, 0x52, 0x66, 0x32, 0x9d, 0x69, 0x58, 0x87, 0x8c, 0x42, 0x8d, 0x6a,
+ 0xe2, 0x7a, 0x7b, 0xb7, 0x34, 0x9f, 0x8e, 0x6a, 0x0a, 0xc2, 0x7c, 0x33, 0x95, 0x3f, 0x65, 0xc1,
+ 0xea, 0x52, 0x71, 0xc2, 0x9c, 0xf7, 0x2c, 0x91, 0x2f, 0x21, 0x2b, 0x49, 0x78, 0xac, 0xa5, 0x61,
+ 0xa5, 0x4b, 0x63, 0x9f, 0x84, 0xc7, 0x6a, 0x50, 0x43, 0xd7, 0x78, 0xa5, 0x0c, 0x41, 0x03, 0x8f,
+ 0x39, 0x44, 0x52, 0x57, 0x2b, 0xc3, 0xda, 0xfa, 0x24, 0x8d, 0x8d, 0x13, 0x94, 0x99, 0xff, 0x93,
+ 0x39, 0x3c, 0x45, 0x45, 0x8f, 0x21, 0xd7, 0xf7, 0xf8, 0x21, 0xf1, 0xb4, 0x26, 0xac, 0xad, 0xfb,
+ 0x69, 0x4e, 0x76, 0x34, 0x62, 0xe2, 0xc0, 0x50, 0xd0, 0x23, 0xc8, 0x8d, 0x02, 0x97, 0x48, 0x6a,
+ 0xe7, 0x34, 0xb9, 0x92, 0x46, 0x7e, 0xa1, 0x11, 0xdb, 0xdc, 0x3f, 0x62, 0x7d, 0x6c, 0xf0, 0xe8,
+ 0x29, 0xe4, 0x7d, 0x2a, 0x5f, 0x71, 0x71, 0x1c, 0xda, 0x4b, 0x95, 0x85, 0x0d, 0x6b, 0xeb, 0xf3,
+ 0x54, 0x31, 0x46, 0x98, 0xba, 0x94, 0xc4, 0x19, 0x0c, 0xa9, 0x2f, 0x23, 0x37, 0x8d, 0x79, 0x3b,
+ 0x83, 0x13, 0x07, 0xe8, 0xe7, 0x90, 0xa7, 0xbe, 0x1b, 0x70, 0xe6, 0x4b, 0x3b, 0x7f, 0xf5, 0x44,
+ 0x5a, 0x06, 0xa3, 0x36, 0x13, 0x27, 0x0c, 0xc5, 0x16, 0xdc, 0xf3, 0x0e, 0x89, 0x73, 0x6c, 0x17,
+ 0xde, 0x71, 0x19, 0x09, 0xa3, 0x91, 0x83, 0xec, 0x90, 0xbb, 0xb4, 0xba, 0x09, 0x2b, 0x6f, 0x6c,
+ 0x35, 0x2a, 0x43, 0xde, 0x6c, 0x75, 0xa4, 0x91, 0x2c, 0x4e, 0xfa, 0xd5, 0x9b, 0xb0, 0x3c, 0xb3,
+ 0xad, 0xd5, 0xbf, 0x2e, 0x42, 0x3e, 0x3e, 0x6b, 0x54, 0x87, 0x82, 0xc3, 0x7d, 0x49, 0x98, 0x4f,
+ 0x85, 0x91, 0x57, 0xea, 0xc9, 0x6c, 0xc7, 0x20, 0xc5, 0x7a, 0x32, 0x87, 0x27, 0x2c, 0xf4, 0x2d,
+ 0x14, 0x04, 0x0d, 0xf9, 0x48, 0x38, 0x34, 0x34, 0xfa, 0xda, 0x48, 0x57, 0x48, 0x04, 0xc2, 0xf4,
+ 0xf7, 0x23, 0x26, 0xa8, 0xda, 0xe5, 0x10, 0x4f, 0xa8, 0xe8, 0x31, 0x2c, 0x09, 0x1a, 0x4a, 0x22,
+ 0xe4, 0xdb, 0x24, 0x82, 0x23, 0x48, 0x87, 0x7b, 0xcc, 0x19, 0xe3, 0x98, 0x81, 0x1e, 0x43, 0x21,
+ 0xf0, 0x88, 0xa3, 0xbd, 0xda, 0x8b, 0x9a, 0x7e, 0x2f, 0x8d, 0xde, 0x89, 0x41, 0x78, 0x82, 0x47,
+ 0x5f, 0x01, 0x78, 0xbc, 0xdf, 0x73, 0x05, 0x3b, 0xa1, 0xc2, 0x48, 0xac, 0x9c, 0xc6, 0x6e, 0x6a,
+ 0x04, 0x2e, 0x78, 0xbc, 0x1f, 0x35, 0xd1, 0xce, 0xff, 0xa4, 0xaf, 0x29, 0x6d, 0x3d, 0x05, 0x20,
+ 0xc9, 0x57, 0xa3, 0xae, 0x4f, 0xdf, 0xc9, 0x95, 0x39, 0x91, 0x29, 0x3a, 0xba, 0x0f, 0xc5, 0x23,
+ 0x2e, 0x1c, 0xda, 0x33, 0xb7, 0xa6, 0xa0, 0x35, 0x61, 0x69, 0x5b, 0xa4, 0x2f, 0xd4, 0x80, 0xa5,
+ 0x3e, 0xf5, 0xa9, 0x60, 0x8e, 0x0d, 0x7a, 0xb0, 0x87, 0xa9, 0x17, 0x32, 0x82, 0xe0, 0x91, 0x2f,
+ 0xd9, 0x90, 0x9a, 0x91, 0x62, 0x22, 0xfa, 0x1d, 0x7c, 0x10, 0x1f, 0x5f, 0x4f, 0xd0, 0x23, 0x2a,
+ 0xa8, 0xaf, 0x34, 0x60, 0xe9, 0x7d, 0xf8, 0xe4, 0xed, 0x1a, 0x30, 0x68, 0x13, 0x6c, 0x90, 0xb8,
+ 0xfc, 0x21, 0x6c, 0x14, 0x60, 0x49, 0x44, 0xe3, 0x56, 0xff, 0x98, 0x51, 0xaa, 0xbf, 0x84, 0x40,
+ 0x9b, 0x60, 0x25, 0xc3, 0x33, 0x57, 0xab, 0xb7, 0xd0, 0xb8, 0x71, 0x71, 0xbe, 0x0e, 0x31, 0xb6,
+ 0xdd, 0x54, 0x31, 0xc8, 0xb4, 0x5d, 0xd4, 0x82, 0xe5, 0x84, 0xa0, 0xca, 0x00, 0x93, 0x28, 0x2b,
+ 0x6f, 0x9b, 0xe9, 0xfe, 0x38, 0xa0, 0xb8, 0x28, 0xa6, 0x7a, 0xd5, 0xdf, 0x02, 0x7a, 0x73, 0x5f,
+ 0x10, 0x82, 0xec, 0x31, 0xf3, 0xcd, 0x34, 0xb0, 0x6e, 0xa3, 0x1a, 0x2c, 0x05, 0x64, 0xec, 0x71,
+ 0xe2, 0x9a, 0x8b, 0x71, 0xab, 0x16, 0x15, 0x08, 0xb5, 0xb8, 0x40, 0xa8, 0xd5, 0xfd, 0x31, 0x8e,
+ 0x41, 0xd5, 0xa7, 0xf0, 0x61, 0xea, 0xf1, 0xa2, 0x2d, 0x28, 0x26, 0x17, 0x6e, 0xb2, 0xd6, 0x9b,
+ 0x17, 0xe7, 0xeb, 0x56, 0x72, 0x33, 0xdb, 0x4d, 0x6c, 0x25, 0xa0, 0xb6, 0x5b, 0xfd, 0xde, 0x82,
+ 0xe5, 0x99, 0x6b, 0x8b, 0x6e, 0xc1, 0x22, 0x1b, 0x92, 0x3e, 0x35, 0x73, 0x8c, 0x3a, 0xa8, 0x05,
+ 0x39, 0x8f, 0x1c, 0x52, 0x4f, 0x5d, 0x5e, 0x75, 0x70, 0x3f, 0xbe, 0xf6, 0xfe, 0xd7, 0x9e, 0x69,
+ 0x7c, 0xcb, 0x97, 0x62, 0x8c, 0x0d, 0x19, 0xd9, 0xb0, 0xe4, 0xf0, 0xe1, 0x90, 0xf8, 0x2a, 0x4d,
+ 0x2c, 0x6c, 0x14, 0x70, 0xdc, 0x55, 0x3b, 0x43, 0x44, 0x3f, 0xb4, 0xb3, 0xda, 0xac, 0xdb, 0xa8,
+ 0x04, 0x0b, 0xd4, 0x3f, 0xb1, 0x17, 0xb5, 0x49, 0x35, 0x95, 0xc5, 0x65, 0xd1, 0xed, 0x2b, 0x60,
+ 0xd5, 0x54, 0xbc, 0x51, 0x48, 0x85, 0xbd, 0x14, 0xed, 0xa8, 0x6a, 0xa3, 0x9f, 0x41, 0x6e, 0xc8,
+ 0x47, 0xbe, 0x0c, 0xed, 0xbc, 0x9e, 0xec, 0x6a, 0xda, 0x64, 0x9f, 0x2b, 0x84, 0x51, 0x96, 0x81,
+ 0xa3, 0x16, 0xac, 0x84, 0x92, 0x07, 0xbd, 0xbe, 0x20, 0x0e, 0xed, 0x05, 0x54, 0x30, 0xee, 0x9a,
+ 0x30, 0xbc, 0xfa, 0xc6, 0xa1, 0x34, 0x4d, 0xc1, 0x87, 0x6f, 0x2a, 0xce, 0x8e, 0xa2, 0x74, 0x34,
+ 0x03, 0x75, 0xa0, 0x18, 0x8c, 0x3c, 0xaf, 0xc7, 0x83, 0x28, 0x23, 0x47, 0x77, 0xe7, 0x1d, 0xb6,
+ 0xac, 0x33, 0xf2, 0xbc, 0xbd, 0x88, 0x84, 0xad, 0x60, 0xd2, 0x41, 0xb7, 0x21, 0xd7, 0x17, 0x7c,
+ 0x14, 0x44, 0xf7, 0xa6, 0x80, 0x4d, 0x0f, 0x7d, 0x03, 0x4b, 0x21, 0x75, 0x04, 0x95, 0xa1, 0x5d,
+ 0xd4, 0x4b, 0xfd, 0x38, 0x6d, 0x90, 0xae, 0x86, 0x24, 0x77, 0x02, 0xc7, 0x1c, 0xb4, 0x0a, 0x0b,
+ 0x52, 0x8e, 0xed, 0xe5, 0x4a, 0x66, 0x23, 0xdf, 0x58, 0xba, 0x38, 0x5f, 0x5f, 0xd8, 0xdf, 0x7f,
+ 0x89, 0x95, 0x4d, 0x65, 0x8b, 0x01, 0x0f, 0xa5, 0x4f, 0x86, 0xd4, 0xbe, 0xa1, 0xf7, 0x36, 0xe9,
+ 0xa3, 0x97, 0x00, 0xae, 0x1f, 0xf6, 0x1c, 0x1d, 0x9e, 0xec, 0x9b, 0x7a, 0x75, 0x9f, 0x5f, 0xbf,
+ 0xba, 0xe6, 0x6e, 0xd7, 0x64, 0xcc, 0xe5, 0x8b, 0xf3, 0xf5, 0x42, 0xd2, 0xc5, 0x05, 0xd7, 0x0f,
+ 0xa3, 0x26, 0x6a, 0x80, 0x35, 0xa0, 0xc4, 0x93, 0x03, 0x67, 0x40, 0x9d, 0x63, 0xbb, 0x74, 0x75,
+ 0x0a, 0x7c, 0xa2, 0x61, 0xc6, 0xc3, 0x34, 0x49, 0x29, 0x58, 0x4d, 0x35, 0xb4, 0x57, 0xf4, 0x5e,
+ 0x45, 0x1d, 0x74, 0x0f, 0x80, 0x07, 0xd4, 0xef, 0x85, 0xd2, 0x65, 0xbe, 0x8d, 0xd4, 0x92, 0x71,
+ 0x41, 0x59, 0xba, 0xca, 0x80, 0xee, 0xa8, 0x04, 0x45, 0xdc, 0x1e, 0xf7, 0xbd, 0xb1, 0xfd, 0x81,
+ 0xfe, 0x9a, 0x57, 0x86, 0x3d, 0xdf, 0x1b, 0xa3, 0x75, 0xb0, 0xb4, 0x2e, 0x42, 0xd6, 0xf7, 0x89,
+ 0x67, 0xdf, 0xd2, 0xfb, 0x01, 0xca, 0xd4, 0xd5, 0x16, 0x75, 0x0e, 0xd1, 0x6e, 0x84, 0xf6, 0x87,
+ 0x57, 0x9f, 0x83, 0x99, 0xec, 0xe4, 0x1c, 0x0c, 0x07, 0xfd, 0x02, 0x20, 0x10, 0xec, 0x84, 0x79,
+ 0xb4, 0x4f, 0x43, 0xfb, 0xb6, 0x5e, 0xf4, 0x5a, 0x6a, 0x66, 0x4a, 0x50, 0x78, 0x8a, 0x81, 0x6a,
+ 0x90, 0x65, 0x3e, 0x93, 0xf6, 0x47, 0x26, 0x2b, 0x5d, 0x96, 0x6a, 0x83, 0x73, 0xef, 0x80, 0x78,
+ 0x23, 0x8a, 0x35, 0x0e, 0xb5, 0xa1, 0xc0, 0x42, 0xee, 0x69, 0xf9, 0xda, 0xb6, 0x8e, 0x6f, 0xef,
+ 0x70, 0x7e, 0xed, 0x98, 0x82, 0x27, 0x6c, 0x74, 0x17, 0x0a, 0x01, 0x73, 0xc3, 0x67, 0x6c, 0xc8,
+ 0xa4, 0xbd, 0x5a, 0xc9, 0x6c, 0x2c, 0xe0, 0x89, 0xa1, 0xfc, 0x15, 0x58, 0x53, 0x61, 0x40, 0x5d,
+ 0xdf, 0x63, 0x3a, 0x36, 0x91, 0x45, 0x35, 0xd5, 0x59, 0x9d, 0xa8, 0x89, 0xe9, 0xd0, 0x57, 0xc0,
+ 0x51, 0xe7, 0xeb, 0xf9, 0x47, 0x99, 0xf2, 0x16, 0x58, 0x53, 0xd7, 0x01, 0x7d, 0xac, 0xc2, 0x72,
+ 0x9f, 0x85, 0x52, 0x8c, 0x7b, 0x64, 0x24, 0x07, 0xf6, 0xaf, 0x34, 0xa1, 0x18, 0x1b, 0xeb, 0x23,
+ 0x39, 0x28, 0xf7, 0x60, 0xa2, 0x2a, 0x54, 0x01, 0x4b, 0xa9, 0x35, 0xa4, 0xe2, 0x84, 0x0a, 0x55,
+ 0xf2, 0x28, 0x31, 0x4c, 0x9b, 0xd4, 0xad, 0x0a, 0x29, 0x11, 0xce, 0x40, 0x07, 0xb5, 0x02, 0x36,
+ 0x3d, 0x15, 0xa5, 0xe2, 0xab, 0x6b, 0xa2, 0x94, 0xe9, 0x56, 0xff, 0x96, 0x81, 0x42, 0xb2, 0x0d,
+ 0xe8, 0x0b, 0x58, 0x69, 0x77, 0xf7, 0x9e, 0xd5, 0xf7, 0xdb, 0x7b, 0xbb, 0xbd, 0x66, 0xeb, 0xdb,
+ 0xfa, 0x8b, 0x67, 0xfb, 0xa5, 0xb9, 0xf2, 0xbd, 0xd3, 0xb3, 0xca, 0xea, 0x24, 0xe2, 0xc6, 0xf0,
+ 0x26, 0x3d, 0x22, 0x23, 0x4f, 0xce, 0xb2, 0x3a, 0x78, 0x6f, 0xbb, 0xd5, 0xed, 0x96, 0x32, 0x57,
+ 0xb1, 0x3a, 0x82, 0x3b, 0x34, 0x0c, 0xd1, 0x16, 0x94, 0x26, 0xac, 0x27, 0x2f, 0x3b, 0x2d, 0x7c,
+ 0x50, 0x9a, 0x2f, 0xdf, 0x3d, 0x3d, 0xab, 0xd8, 0x6f, 0x92, 0x9e, 0x8c, 0x03, 0x2a, 0x0e, 0xcc,
+ 0x73, 0xe1, 0x5f, 0x19, 0x28, 0x4e, 0x57, 0x9b, 0x68, 0x3b, 0xaa, 0x12, 0xf5, 0x31, 0xdc, 0xd8,
+ 0xda, 0xbc, 0xae, 0x3a, 0xd5, 0x59, 0xce, 0x1b, 0x29, 0xbf, 0xcf, 0xd5, 0xc3, 0x50, 0x93, 0xd1,
+ 0x17, 0xb0, 0x18, 0x70, 0x21, 0xe3, 0x7c, 0x90, 0xae, 0x56, 0x2e, 0xe2, 0x1a, 0x26, 0x02, 0x57,
+ 0x07, 0x70, 0x63, 0xd6, 0x1b, 0x7a, 0x00, 0x0b, 0x07, 0xed, 0x4e, 0x69, 0xae, 0x7c, 0xe7, 0xf4,
+ 0xac, 0xf2, 0xd1, 0xec, 0xc7, 0x03, 0x26, 0xe4, 0x88, 0x78, 0xed, 0x0e, 0xfa, 0x0c, 0x16, 0x9b,
+ 0xbb, 0x5d, 0x8c, 0x4b, 0x99, 0xf2, 0xfa, 0xe9, 0x59, 0xe5, 0xce, 0x2c, 0x4e, 0x7d, 0xe2, 0x23,
+ 0xdf, 0xc5, 0xfc, 0x30, 0x79, 0x24, 0xfd, 0x7b, 0x1e, 0x2c, 0x93, 0x26, 0xdf, 0xf7, 0x3b, 0x7a,
+ 0x39, 0xaa, 0x01, 0xe3, 0xf8, 0x37, 0x7f, 0x6d, 0x29, 0x58, 0x8c, 0x08, 0x46, 0x97, 0xf7, 0xa1,
+ 0xc8, 0x82, 0x93, 0x2f, 0x7b, 0xd4, 0x27, 0x87, 0x9e, 0x79, 0x2f, 0xe5, 0xb1, 0xa5, 0x6c, 0xad,
+ 0xc8, 0xa4, 0x82, 0x2f, 0xf3, 0x25, 0x15, 0xbe, 0x79, 0x09, 0xe5, 0x71, 0xd2, 0x47, 0xdf, 0x40,
+ 0x96, 0x05, 0x64, 0x68, 0xea, 0xd7, 0xd4, 0x15, 0xb4, 0x3b, 0xf5, 0xe7, 0xe6, 0xde, 0x34, 0xf2,
+ 0x17, 0xe7, 0xeb, 0x59, 0x65, 0xc0, 0x9a, 0x86, 0xd6, 0xe2, 0x12, 0x52, 0x8d, 0xa4, 0x13, 0x69,
+ 0x1e, 0x4f, 0x59, 0x94, 0xf6, 0x99, 0xdf, 0x17, 0x34, 0x0c, 0x75, 0x4a, 0xcd, 0xe3, 0xb8, 0x8b,
+ 0xca, 0xb0, 0x64, 0x0a, 0x51, 0x5d, 0x79, 0x16, 0x54, 0x91, 0x67, 0x0c, 0x8d, 0x65, 0xb0, 0xa2,
+ 0xdd, 0xe8, 0x1d, 0x09, 0x3e, 0xac, 0xfe, 0x27, 0x0b, 0xd6, 0xb6, 0x37, 0x0a, 0xa5, 0xa9, 0x29,
+ 0xde, 0xdb, 0xe6, 0xbf, 0x84, 0x15, 0xa2, 0xdf, 0xe5, 0xc4, 0x57, 0x09, 0x5a, 0xd7, 0xf7, 0xe6,
+ 0x00, 0x1e, 0xa4, 0xba, 0x4b, 0xc0, 0xd1, 0x5b, 0xa0, 0x91, 0x53, 0x3e, 0xed, 0x0c, 0x2e, 0x91,
+ 0x4b, 0x5f, 0x50, 0x17, 0x96, 0xb9, 0x70, 0x06, 0x34, 0x94, 0x51, 0x5a, 0x37, 0xef, 0xd8, 0xd4,
+ 0x3f, 0x1c, 0x7b, 0xd3, 0x40, 0x93, 0xd3, 0xa2, 0xd9, 0xce, 0xfa, 0x40, 0x8f, 0x20, 0x2b, 0xc8,
+ 0x51, 0xfc, 0x56, 0x49, 0xbd, 0x24, 0x98, 0x1c, 0xc9, 0x19, 0x17, 0x9a, 0x81, 0x7e, 0x0d, 0xe0,
+ 0xb2, 0x30, 0x20, 0xd2, 0x19, 0x50, 0x61, 0x0e, 0x3b, 0x75, 0x89, 0xcd, 0x04, 0x35, 0xe3, 0x65,
+ 0x8a, 0x8d, 0x9e, 0x42, 0xc1, 0x21, 0xb1, 0x5c, 0x73, 0x57, 0x3f, 0xee, 0xb7, 0xeb, 0xc6, 0x45,
+ 0x49, 0xb9, 0xb8, 0x38, 0x5f, 0xcf, 0xc7, 0x16, 0x9c, 0x77, 0x88, 0x91, 0xef, 0x53, 0x58, 0x56,
+ 0x8f, 0xfe, 0x9e, 0x1b, 0x85, 0xb3, 0x48, 0x26, 0x57, 0xe4, 0x68, 0xf5, 0x82, 0x34, 0x61, 0x2f,
+ 0x3e, 0xce, 0xa2, 0x9c, 0xb2, 0xa1, 0xdf, 0xc0, 0x0a, 0xf5, 0x1d, 0x31, 0xd6, 0x62, 0x8d, 0x67,
+ 0x98, 0xbf, 0x7a, 0xb1, 0xad, 0x04, 0x3c, 0xb3, 0xd8, 0x12, 0xbd, 0x64, 0xaf, 0xfe, 0x33, 0x03,
+ 0x10, 0x95, 0x3d, 0xef, 0x57, 0x80, 0x08, 0xb2, 0x2e, 0x91, 0x44, 0x6b, 0xae, 0x88, 0x75, 0x1b,
+ 0x7d, 0x0d, 0x20, 0xe9, 0x30, 0x50, 0xa1, 0xd7, 0xef, 0x1b, 0xd9, 0xbc, 0x2d, 0x1c, 0x4c, 0xa1,
+ 0xd1, 0x16, 0xe4, 0xcc, 0x8b, 0x32, 0x7b, 0x2d, 0xcf, 0x20, 0xab, 0x7f, 0xc9, 0x00, 0x44, 0xcb,
+ 0xfc, 0xbf, 0x5e, 0x5b, 0xc3, 0x7e, 0xfd, 0xc3, 0xda, 0xdc, 0x3f, 0x7e, 0x58, 0x9b, 0xfb, 0xc3,
+ 0xc5, 0x5a, 0xe6, 0xf5, 0xc5, 0x5a, 0xe6, 0xef, 0x17, 0x6b, 0x99, 0xef, 0x2f, 0xd6, 0x32, 0x87,
+ 0x39, 0x5d, 0x99, 0xfc, 0xf4, 0xbf, 0x01, 0x00, 0x00, 0xff, 0xff, 0xb8, 0xa3, 0x85, 0xdc, 0xc7,
+ 0x15, 0x00, 0x00,
}
diff --git a/components/engine/vendor/github.com/docker/swarmkit/api/specs.proto b/components/engine/vendor/github.com/docker/swarmkit/api/specs.proto
index 8955027b55..14448d0409 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/api/specs.proto
+++ b/components/engine/vendor/github.com/docker/swarmkit/api/specs.proto
@@ -6,6 +6,7 @@ import "github.com/docker/swarmkit/api/types.proto";
import "gogoproto/gogo.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/any.proto";
+import "google/protobuf/wrappers.proto";
// Specs are container objects for user provided input. All creations and
// updates are done through spec types. As a convention, user input from a spec
@@ -215,6 +216,9 @@ message ContainerSpec {
// Privileges specifies security configuration/permissions.
Privileges privileges = 22;
+ // Init declares that a custom init will be running inside the container. If null, the daemon's configured settings are used.
+ google.protobuf.BoolValue init = 23;
+
// TTY declares that a TTY should be attached to the standard streams,
// including stdin if it is still open.
bool tty = 13 [(gogoproto.customname) = "TTY"];
@@ -293,6 +297,27 @@ message ContainerSpec {
// task will exit and a new task will be rescheduled elsewhere. A container
// is considered unhealthy after `Retries` number of consecutive failures.
HealthConfig healthcheck = 16;
+
+ enum Isolation {
+ option (gogoproto.goproto_enum_prefix) = false;
+
+ // ISOLATION_DEFAULT uses the default isolation mode of the container runtime
+ ISOLATION_DEFAULT = 0 [(gogoproto.enumvalue_customname) = "ContainerIsolationDefault"];
+
+ // ISOLATION_PROCESS forces process isolation for Windows containers
+ ISOLATION_PROCESS = 1 [(gogoproto.enumvalue_customname) = "ContainerIsolationProcess"];
+
+ // ISOLATION_HYPERV forces Hyper-V isolation
+ ISOLATION_HYPERV = 2 [(gogoproto.enumvalue_customname) = "ContainerIsolationHyperV"];
+ }
+
+ // Isolation defines the isolation level for Windows containers (default, process, hyperv).
+ // Runtimes that don't support it ignore this field.
+ Isolation isolation = 24;
+
+ // PidsLimit caps the number of processes allowed inside the container, protecting
+ // the OS from resource exhaustion caused by, for example, a fork bomb.
+ int64 pidsLimit = 25;
}
// EndpointSpec defines the properties that can be configured to
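
The new `init`, `isolation`, and `pidsLimit` fields above surface in the generated Go API. Below is a minimal sketch of populating them, assuming the regenerated bindings follow the gogoproto annotations in this hunk (an `Init *types.BoolValue` field, the `ContainerIsolationHyperV` constant from `enumvalue_customname`, and a plain `PidsLimit int64`); it is illustrative only.

```go
package main

import (
	"fmt"

	"github.com/docker/swarmkit/api"
	gogotypes "github.com/gogo/protobuf/types"
)

func main() {
	spec := api.ContainerSpec{
		Image:     "nginx:alpine",
		Init:      &gogotypes.BoolValue{Value: true}, // nil means "use the daemon's configured default"
		Isolation: api.ContainerIsolationHyperV,      // ignored by runtimes that don't support isolation
		PidsLimit: 256,                               // caps processes in the container (fork-bomb protection)
	}
	fmt.Println(spec.Isolation, spec.PidsLimit, spec.Init.GetValue())
}
```
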
diff --git a/components/engine/vendor/github.com/docker/swarmkit/api/types.pb.go b/components/engine/vendor/github.com/docker/swarmkit/api/types.pb.go
index 9ce04eb0b1..33e2281b67 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/api/types.pb.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/api/types.pb.go
@@ -1085,12 +1085,17 @@ type TaskStatus struct {
// because the task is prepared, we would put "already prepared" in this
// field.
Message string `protobuf:"bytes,3,opt,name=message,proto3" json:"message,omitempty"`
- // Err is set if the task is in an error state.
+ // Err is set if the task is in an error state, or is unable to
+ // progress from an earlier state because a precondition is
+ // unsatisfied.
//
// The following states should report a companion error:
//
// FAILED, REJECTED
//
+ // In general, messages that should be surfaced to users belong in the
+ // Err field, and notes on routine state transitions belong in Message.
+ //
// TODO(stevvooe) Integrate this field with the error interface.
Err string `protobuf:"bytes,4,opt,name=err,proto3" json:"err,omitempty"`
// Container status contains container specific status information.
diff --git a/components/engine/vendor/github.com/docker/swarmkit/api/types.proto b/components/engine/vendor/github.com/docker/swarmkit/api/types.proto
index 890b3cfc3f..635d12b200 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/api/types.proto
+++ b/components/engine/vendor/github.com/docker/swarmkit/api/types.proto
@@ -509,12 +509,17 @@ message TaskStatus {
// field.
string message = 3;
- // Err is set if the task is in an error state.
+ // Err is set if the task is in an error state, or is unable to
+ // progress from an earlier state because a precondition is
+ // unsatisfied.
//
// The following states should report a companion error:
//
// FAILED, REJECTED
//
+ // In general, messages that should be surfaced to users belong in the
+ // Err field, and notes on routine state transitions belong in Message.
+ //
// TODO(stevvooe) Integrate this field with the error interface.
string err = 4;
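
The Message/Err convention described in the comment above is what the constraint-enforcer change further down in this diff applies. A minimal sketch of the pattern, assuming the vendored swarmkit `api` and `protobuf/ptypes` packages:

```go
package main

import (
	"fmt"
	"time"

	"github.com/docker/swarmkit/api"
	"github.com/docker/swarmkit/protobuf/ptypes"
)

// markRejected follows the convention above: the user-facing reason goes in
// Err, the routine state-transition note goes in Message.
func markRejected(t *api.Task, reason string) {
	t.Status.State = api.TaskStateRejected
	t.Status.Message = "task rejected by constraint enforcer"
	t.Status.Err = reason
	t.Status.Timestamp = ptypes.MustTimestampProto(time.Now())
}

func main() {
	t := &api.Task{}
	markRejected(t, "assigned node no longer meets constraints")
	fmt.Println(t.Status.Message, "/", t.Status.Err)
}
```
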
diff --git a/components/engine/vendor/github.com/docker/swarmkit/connectionbroker/broker.go b/components/engine/vendor/github.com/docker/swarmkit/connectionbroker/broker.go
index a0ba7cf0a8..43b384ab2a 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/connectionbroker/broker.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/connectionbroker/broker.go
@@ -4,7 +4,9 @@
package connectionbroker
import (
+ "net"
"sync"
+ "time"
"github.com/docker/swarmkit/api"
"github.com/docker/swarmkit/remotes"
@@ -60,9 +62,14 @@ func (b *Broker) SelectRemote(dialOpts ...grpc.DialOption) (*Conn, error) {
return nil, err
}
+ // The default gRPC dialer connects to any configured proxy first. Provide a custom dialer here to avoid that.
+ // TODO(anshul) Add an option to configure this.
dialOpts = append(dialOpts,
grpc.WithUnaryInterceptor(grpc_prometheus.UnaryClientInterceptor),
- grpc.WithStreamInterceptor(grpc_prometheus.StreamClientInterceptor))
+ grpc.WithStreamInterceptor(grpc_prometheus.StreamClientInterceptor),
+ grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
+ return net.DialTimeout("tcp", addr, timeout)
+ }))
cc, err := grpc.Dial(peer.Addr, dialOpts...)
if err != nil {
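
The custom dialer added above bypasses any HTTP proxy configured in the environment, which the stock gRPC dialer would otherwise honor. A standalone sketch of the same pattern (here `grpc.WithInsecure` stands in for the TLS credentials SwarmKit actually passes):

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

func dialPeer(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(), // SwarmKit supplies TLS credentials here; plain text keeps the sketch short
		// Dial the peer directly over TCP instead of going through a configured HTTP proxy.
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("tcp", addr, timeout)
		}))
}

func main() {
	cc, err := dialPeer("127.0.0.1:2377")
	if err != nil {
		log.Fatal(err)
	}
	defer cc.Close()
}
```
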
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/allocator/network.go b/components/engine/vendor/github.com/docker/swarmkit/manager/allocator/network.go
index c7715128a1..ac798c954f 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/allocator/network.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/allocator/network.go
@@ -1284,9 +1284,11 @@ func PredefinedNetworks() []networkallocator.PredefinedNetworkData {
// updateTaskStatus sets TaskStatus and updates timestamp.
func updateTaskStatus(t *api.Task, newStatus api.TaskState, message string) {
- t.Status.State = newStatus
- t.Status.Message = message
- t.Status.Timestamp = ptypes.MustTimestampProto(time.Now())
+ t.Status = api.TaskStatus{
+ State: newStatus,
+ Message: message,
+ Timestamp: ptypes.MustTimestampProto(time.Now()),
+ }
}
// IsIngressNetwork returns whether the passed network is an ingress network.
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/controlapi/node.go b/components/engine/vendor/github.com/docker/swarmkit/manager/controlapi/node.go
index f3ee9e45df..bac6b8073d 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/controlapi/node.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/controlapi/node.go
@@ -248,6 +248,29 @@ func (s *Server) UpdateNode(ctx context.Context, request *api.UpdateNodeRequest)
}, nil
}
+func removeNodeAttachments(tx store.Tx, nodeID string) error {
+ // orphan the node's attached containers. if we don't do this, the
+ // network these attachments are connected to will never be removable
+ tasks, err := store.FindTasks(tx, store.ByNodeID(nodeID))
+ if err != nil {
+ return err
+ }
+ for _, task := range tasks {
+ // if the task is an attachment, handle it here; the allocator will do
+ // the heavy lifting. GetAttachment returns the attachment spec when that
+ // is the task's runtime, and nil otherwise.
+ if task.Spec.GetAttachment() != nil {
+ // don't delete the task. instead, update it to `ORPHANED` so that
+ // the taskreaper will clean it up.
+ task.Status.State = api.TaskStateOrphaned
+ if err := store.UpdateTask(tx, task); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
// RemoveNode removes a Node referenced by NodeID with the given NodeSpec.
// - Returns NotFound if the Node is not found.
// - Returns FailedPrecondition if the Node has manager role (and is part of the memberlist) or is not shut down.
@@ -313,6 +336,10 @@ func (s *Server) RemoveNode(ctx context.Context, request *api.RemoveNodeRequest)
return err
}
+ if err := removeNodeAttachments(tx, request.NodeID); err != nil {
+ return err
+ }
+
return store.DeleteNode(tx, request.NodeID)
})
if err != nil {
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/orchestrator/constraintenforcer/constraint_enforcer.go b/components/engine/vendor/github.com/docker/swarmkit/manager/orchestrator/constraintenforcer/constraint_enforcer.go
index 2978898ccb..7aa7651db7 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/orchestrator/constraintenforcer/constraint_enforcer.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/orchestrator/constraintenforcer/constraint_enforcer.go
@@ -159,7 +159,8 @@ loop:
// restarting the task on another node
// (if applicable).
t.Status.State = api.TaskStateRejected
- t.Status.Message = "assigned node no longer meets constraints"
+ t.Status.Message = "task rejected by constraint enforcer"
+ t.Status.Err = "assigned node no longer meets constraints"
t.Status.Timestamp = ptypes.MustTimestampProto(time.Now())
return store.UpdateTask(tx, t)
})
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/filter.go b/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/filter.go
index 36b601c4b4..3b1c73fe2d 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/filter.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/filter.go
@@ -169,7 +169,7 @@ func (f *PluginFilter) Check(n *NodeInfo) bool {
}
}
- if f.t.Spec.LogDriver != nil {
+ if f.t.Spec.LogDriver != nil && f.t.Spec.LogDriver.Name != "none" {
// If there are no log driver types in the list at all, most likely this is
// an older daemon that did not report this information. In this case don't filter
if typeFound, exists := f.pluginExistsOnNode("Log", f.t.Spec.LogDriver.Name, nodePlugins); !exists && typeFound {
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/scheduler.go b/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/scheduler.go
index 99685959bc..6d5b4e551b 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/scheduler.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/scheduler/scheduler.go
@@ -446,7 +446,9 @@ func (s *Scheduler) applySchedulingDecisions(ctx context.Context, schedulingDeci
continue
}
- if t.Status.State == decision.new.Status.State && t.Status.Message == decision.new.Status.Message {
+ if t.Status.State == decision.new.Status.State &&
+ t.Status.Message == decision.new.Status.Message &&
+ t.Status.Err == decision.new.Status.Err {
// No changes, ignore
continue
}
@@ -502,7 +504,7 @@ func (s *Scheduler) taskFitNode(ctx context.Context, t *api.Task, nodeID string)
if !s.pipeline.Process(&nodeInfo) {
// this node cannot accommodate this task
newT.Status.Timestamp = ptypes.MustTimestampProto(time.Now())
- newT.Status.Message = s.pipeline.Explain()
+ newT.Status.Err = s.pipeline.Explain()
s.allTasks[t.ID] = &newT
return &newT
@@ -702,9 +704,9 @@ func (s *Scheduler) noSuitableNode(ctx context.Context, taskGroup map[string]*ap
newT := *t
newT.Status.Timestamp = ptypes.MustTimestampProto(time.Now())
if explanation != "" {
- newT.Status.Message = "no suitable node (" + explanation + ")"
+ newT.Status.Err = "no suitable node (" + explanation + ")"
} else {
- newT.Status.Message = "no suitable node"
+ newT.Status.Err = "no suitable node"
}
s.allTasks[t.ID] = &newT
schedulingDecisions[t.ID] = schedulingDecision{old: t, new: &newT}
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/raft.go b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/raft.go
index afdf2ca4eb..28c7cfa47e 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/raft.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/raft.go
@@ -180,9 +180,12 @@ type NodeOptions struct {
ClockSource clock.Clock
// SendTimeout is the timeout on the sending messages to other raft
// nodes. Leave this as 0 to get the default value.
- SendTimeout time.Duration
- TLSCredentials credentials.TransportCredentials
- KeyRotator EncryptionKeyRotator
+ SendTimeout time.Duration
+ // LargeSendTimeout is the timeout on sending snapshots to other raft
+ // nodes. Leave this as 0 to get the default value.
+ LargeSendTimeout time.Duration
+ TLSCredentials credentials.TransportCredentials
+ KeyRotator EncryptionKeyRotator
// DisableStackDump prevents Run from dumping goroutine stacks when the
// store becomes stuck.
DisableStackDump bool
@@ -204,6 +207,11 @@ func NewNode(opts NodeOptions) *Node {
if opts.SendTimeout == 0 {
opts.SendTimeout = 2 * time.Second
}
+ if opts.LargeSendTimeout == 0 {
+ // a "slow" 100Mbps connection can send over 240MB of data in 20 seconds,
+ // which is well over the gRPC message limit of 128MB allowed by SwarmKit
+ opts.LargeSendTimeout = 20 * time.Second
+ }
raftStore := raft.NewMemoryStorage()
@@ -349,6 +357,7 @@ func (n *Node) initTransport() {
transportConfig := &transport.Config{
HeartbeatInterval: time.Duration(n.Config.ElectionTick) * n.opts.TickInterval,
SendTimeout: n.opts.SendTimeout,
+ LargeSendTimeout: n.opts.LargeSendTimeout,
Credentials: n.opts.TLSCredentials,
Raft: n,
}
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/peer.go b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/peer.go
index 55639af13f..8c7ca75458 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/peer.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/peer.go
@@ -133,7 +133,14 @@ func (p *peer) resolveAddr(ctx context.Context, id uint64) (string, error) {
}
func (p *peer) sendProcessMessage(ctx context.Context, m raftpb.Message) error {
- ctx, cancel := context.WithTimeout(ctx, p.tr.config.SendTimeout)
+ timeout := p.tr.config.SendTimeout
+ // if a snapshot is being sent, set timeout to LargeSendTimeout because
+ // sending snapshots can take more time than other messages sent between peers.
+ // The same applies to AppendEntries as well, where messages can get large.
+ if m.Type == raftpb.MsgSnap || m.Type == raftpb.MsgApp {
+ timeout = p.tr.config.LargeSendTimeout
+ }
+ ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
_, err := api.NewRaftClient(p.conn()).ProcessRaftMessage(ctx, &api.ProcessRaftMessageRequest{Message: &m})
if grpc.Code(err) == codes.NotFound && grpc.ErrorDesc(err) == membership.ErrMemberRemoved.Error() {
diff --git a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/transport.go b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/transport.go
index b259013d8a..6f096ef9b2 100644
--- a/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/transport.go
+++ b/components/engine/vendor/github.com/docker/swarmkit/manager/state/raft/transport/transport.go
@@ -3,6 +3,7 @@
package transport
import (
+ "net"
"sync"
"time"
@@ -35,6 +36,7 @@ type Raft interface {
type Config struct {
HeartbeatInterval time.Duration
SendTimeout time.Duration
+ LargeSendTimeout time.Duration
Credentials credentials.TransportCredentials
RaftID string
@@ -347,6 +349,13 @@ func (t *Transport) dial(addr string) (*grpc.ClientConn, error) {
grpcOptions = append(grpcOptions, grpc.WithTimeout(t.config.SendTimeout))
}
+ // The default gRPC dialer connects to any configured proxy first. Provide a custom dialer here to avoid that.
+ // TODO(anshul) Add an option to configure this.
+ grpcOptions = append(grpcOptions,
+ grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
+ return net.DialTimeout("tcp", addr, timeout)
+ }))
+
cc, err := grpc.Dial(addr, grpcOptions...)
if err != nil {
return nil, err
diff --git a/components/engine/vendor/github.com/fsnotify/fsnotify/README.md b/components/engine/vendor/github.com/fsnotify/fsnotify/README.md
index 3c891e349b..3993207413 100644
--- a/components/engine/vendor/github.com/fsnotify/fsnotify/README.md
+++ b/components/engine/vendor/github.com/fsnotify/fsnotify/README.md
@@ -8,14 +8,14 @@ fsnotify utilizes [golang.org/x/sys](https://godoc.org/golang.org/x/sys) rather
go get -u golang.org/x/sys/...
```
-Cross platform: Windows, Linux, BSD and OS X.
+Cross platform: Windows, Linux, BSD and macOS.
|Adapter |OS |Status |
|----------|----------|----------|
|inotify |Linux 2.6.27 or later, Android\*|Supported [](https://travis-ci.org/fsnotify/fsnotify)|
-|kqueue |BSD, OS X, iOS\*|Supported [](https://travis-ci.org/fsnotify/fsnotify)|
+|kqueue |BSD, macOS, iOS\*|Supported [](https://travis-ci.org/fsnotify/fsnotify)|
|ReadDirectoryChangesW|Windows|Supported [](https://ci.appveyor.com/project/NathanYoungman/fsnotify/branch/master)|
-|FSEvents |OS X |[Planned](https://github.com/fsnotify/fsnotify/issues/11)|
+|FSEvents |macOS |[Planned](https://github.com/fsnotify/fsnotify/issues/11)|
|FEN |Solaris 11 |[In Progress](https://github.com/fsnotify/fsnotify/issues/12)|
|fanotify |Linux 2.6.37+ | |
|USN Journals |Windows |[Maybe](https://github.com/fsnotify/fsnotify/issues/53)|
@@ -23,7 +23,7 @@ Cross platform: Windows, Linux, BSD and OS X.
\* Android and iOS are untested.
-Please see [the documentation](https://godoc.org/github.com/fsnotify/fsnotify) for usage. Consult the [Wiki](https://github.com/fsnotify/fsnotify/wiki) for the FAQ and further information.
+Please see [the documentation](https://godoc.org/github.com/fsnotify/fsnotify) and consult the [FAQ](#faq) for usage information.
## API stability
@@ -41,6 +41,35 @@ Please refer to [CONTRIBUTING][] before opening an issue or pull request.
See [example_test.go](https://github.com/fsnotify/fsnotify/blob/master/example_test.go).
+## FAQ
+
+**When a file is moved to another directory is it still being watched?**
+
+No (it shouldn't be, unless you are watching where it was moved to).
+
+**When I watch a directory, are all subdirectories watched as well?**
+
+No, you must add watches for any directory you want to watch (a recursive watcher is on the roadmap [#18][]).
+
+**Do I have to watch the Error and Event channels in a separate goroutine?**
+
+As of now, yes. Looking into making this single-thread friendly (see [howeyc #7][#7]).
+
+**Why am I receiving multiple events for the same file on OS X?**
+
+Spotlight indexing on OS X can result in multiple events (see [howeyc #62][#62]). A temporary workaround is to add your folder(s) to the *Spotlight Privacy settings* until we have a native FSEvents implementation (see [#11][]).
+
+**How many files can be watched at once?**
+
+There are OS-specific limits as to how many watches can be created:
+* Linux: /proc/sys/fs/inotify/max_user_watches contains the limit; reaching this limit results in a "no space left on device" error.
+* BSD / OSX: the sysctl variables "kern.maxfiles" and "kern.maxfilesperproc" contain the limits; reaching these limits results in a "too many open files" error.
+
+[#62]: https://github.com/howeyc/fsnotify/issues/62
+[#18]: https://github.com/fsnotify/fsnotify/issues/18
+[#11]: https://github.com/fsnotify/fsnotify/issues/11
+[#7]: https://github.com/howeyc/fsnotify/issues/7
+
[contributing]: https://github.com/fsnotify/fsnotify/blob/master/CONTRIBUTING.md
## Related Projects
diff --git a/components/engine/vendor/github.com/fsnotify/fsnotify/fsnotify.go b/components/engine/vendor/github.com/fsnotify/fsnotify/fsnotify.go
index e7f55fee7a..190bf0de57 100644
--- a/components/engine/vendor/github.com/fsnotify/fsnotify/fsnotify.go
+++ b/components/engine/vendor/github.com/fsnotify/fsnotify/fsnotify.go
@@ -9,6 +9,7 @@ package fsnotify
import (
"bytes"
+ "errors"
"fmt"
)
@@ -60,3 +61,6 @@ func (op Op) String() string {
func (e Event) String() string {
return fmt.Sprintf("%q: %s", e.Name, e.Op.String())
}
+
+// Common errors that can be reported by a watcher
+var ErrEventOverflow = errors.New("fsnotify queue overflow")
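
With the exported `ErrEventOverflow` sentinel, callers draining the `Errors` channel (in its own goroutine or select loop, as the FAQ above notes) can detect that the kernel queue overflowed and events were dropped. A minimal sketch using fsnotify's public API:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/tmp"); err != nil {
		log.Fatal(err)
	}

	// Drain both channels; a full inotify queue now surfaces as ErrEventOverflow.
	for {
		select {
		case ev := <-w.Events:
			log.Println("event:", ev)
		case err := <-w.Errors:
			if err == fsnotify.ErrEventOverflow {
				log.Println("kernel event queue overflowed; some events were dropped")
				continue
			}
			log.Println("error:", err)
		}
	}
}
```
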
diff --git a/components/engine/vendor/github.com/fsnotify/fsnotify/inotify.go b/components/engine/vendor/github.com/fsnotify/fsnotify/inotify.go
index f3b74c51f0..d9fd1b88a0 100644
--- a/components/engine/vendor/github.com/fsnotify/fsnotify/inotify.go
+++ b/components/engine/vendor/github.com/fsnotify/fsnotify/inotify.go
@@ -24,7 +24,6 @@ type Watcher struct {
Events chan Event
Errors chan error
mu sync.Mutex // Map access
- cv *sync.Cond // sync removing on rm_watch with IN_IGNORE
fd int
poller *fdPoller
watches map[string]*watch // Map of inotify watches (key: path)
@@ -56,7 +55,6 @@ func NewWatcher() (*Watcher, error) {
done: make(chan struct{}),
doneResp: make(chan struct{}),
}
- w.cv = sync.NewCond(&w.mu)
go w.readEvents()
return w, nil
@@ -103,21 +101,23 @@ func (w *Watcher) Add(name string) error {
var flags uint32 = agnosticEvents
w.mu.Lock()
- watchEntry, found := w.watches[name]
- w.mu.Unlock()
- if found {
- watchEntry.flags |= flags
- flags |= unix.IN_MASK_ADD
+ defer w.mu.Unlock()
+ watchEntry := w.watches[name]
+ if watchEntry != nil {
+ flags |= watchEntry.flags | unix.IN_MASK_ADD
}
wd, errno := unix.InotifyAddWatch(w.fd, name, flags)
if wd == -1 {
return errno
}
- w.mu.Lock()
- w.watches[name] = &watch{wd: uint32(wd), flags: flags}
- w.paths[wd] = name
- w.mu.Unlock()
+ if watchEntry == nil {
+ w.watches[name] = &watch{wd: uint32(wd), flags: flags}
+ w.paths[wd] = name
+ } else {
+ watchEntry.wd = uint32(wd)
+ watchEntry.flags = flags
+ }
return nil
}
@@ -135,6 +135,13 @@ func (w *Watcher) Remove(name string) error {
if !ok {
return fmt.Errorf("can't remove non-existent inotify watch for: %s", name)
}
+
+ // We successfully removed the watch if InotifyRmWatch doesn't return an
+ // error; in that case we need to clean up our internal state to ensure it
+ // matches inotify's kernel state.
+ delete(w.paths, int(watch.wd))
+ delete(w.watches, name)
+
// inotify_rm_watch will return EINVAL if the file has been deleted;
// the inotify will already have been removed.
// watches and paths are deleted in ignoreLinux() implicitly and asynchronously
@@ -152,13 +159,6 @@ func (w *Watcher) Remove(name string) error {
return errno
}
- // wait until ignoreLinux() deleting maps
- exists := true
- for exists {
- w.cv.Wait()
- _, exists = w.watches[name]
- }
-
return nil
}
@@ -245,13 +245,31 @@ func (w *Watcher) readEvents() {
mask := uint32(raw.Mask)
nameLen := uint32(raw.Len)
+
+ if mask&unix.IN_Q_OVERFLOW != 0 {
+ select {
+ case w.Errors <- ErrEventOverflow:
+ case <-w.done:
+ return
+ }
+ }
+
// If the event happened to the watched directory or the watched file, the kernel
// doesn't append the filename to the event, but we would like to always fill
// the "Name" field with a valid filename. We retrieve the path of the watch from
// the "paths" map.
w.mu.Lock()
- name := w.paths[int(raw.Wd)]
+ name, ok := w.paths[int(raw.Wd)]
+ // IN_DELETE_SELF occurs when the file/directory being watched is removed.
+ // This is a sign to clean up the maps; otherwise we are no longer in sync
+ // with the inotify kernel state which has already deleted the watch
+ // automatically.
+ if ok && mask&unix.IN_DELETE_SELF == unix.IN_DELETE_SELF {
+ delete(w.paths, int(raw.Wd))
+ delete(w.watches, name)
+ }
w.mu.Unlock()
+
if nameLen > 0 {
// Point "bytes" at the first byte of the filename
bytes := (*[unix.PathMax]byte)(unsafe.Pointer(&buf[offset+unix.SizeofInotifyEvent]))
@@ -262,7 +280,7 @@ func (w *Watcher) readEvents() {
event := newEvent(name, mask)
// Send the events that are not ignored on the events channel
- if !event.ignoreLinux(w, raw.Wd, mask) {
+ if !event.ignoreLinux(mask) {
select {
case w.Events <- event:
case <-w.done:
@@ -279,15 +297,9 @@ func (w *Watcher) readEvents() {
// Certain types of events can be "ignored" and not sent over the Events
// channel. Such as events marked ignore by the kernel, or MODIFY events
// against files that do not exist.
-func (e *Event) ignoreLinux(w *Watcher, wd int32, mask uint32) bool {
+func (e *Event) ignoreLinux(mask uint32) bool {
// Ignore anything the inotify API says to ignore
if mask&unix.IN_IGNORED == unix.IN_IGNORED {
- w.mu.Lock()
- defer w.mu.Unlock()
- name := w.paths[int(wd)]
- delete(w.paths, int(wd))
- delete(w.watches, name)
- w.cv.Broadcast()
return true
}
diff --git a/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go b/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go
index 8475ff7419..fe799bd698 100644
--- a/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go
+++ b/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go
@@ -37,7 +37,7 @@ type ImageConfig struct {
// Cmd defines the default arguments to the entrypoint of the container.
Cmd []string `json:"Cmd,omitempty"`
- // Volumes is a set of directories which should be created as data volumes in a container running this image.
+ // Volumes is a set of directories describing where the process is likely to write data specific to a container instance.
Volumes map[string]struct{} `json:"Volumes,omitempty"`
// WorkingDir sets the current working directory of the entrypoint process in the container.
diff --git a/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/version.go b/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/version.go
index f4cda6ed8d..e3eee29b41 100644
--- a/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/version.go
+++ b/components/engine/vendor/github.com/opencontainers/image-spec/specs-go/version.go
@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
- VersionDev = "-rc6-dev"
+ VersionDev = ""
)
// Version is the specification version that the package types support.
diff --git a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/devices_linux.go b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/devices_linux.go
index 326ad3b159..3619258905 100644
--- a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/devices_linux.go
+++ b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/devices_linux.go
@@ -30,8 +30,8 @@ func DeviceFromPath(path, permissions string) (*configs.Device, error) {
}
var (
- devNumber = int(stat.Rdev)
- major = Major(devNumber)
+ devNumber = stat.Rdev
+ major = unix.Major(devNumber)
)
if major == 0 {
return nil, ErrNotADevice
@@ -50,8 +50,8 @@ func DeviceFromPath(path, permissions string) (*configs.Device, error) {
return &configs.Device{
Type: devType,
Path: path,
- Major: major,
- Minor: Minor(devNumber),
+ Major: int64(major),
+ Minor: int64(unix.Minor(devNumber)),
Permissions: permissions,
FileMode: os.FileMode(mode),
Uid: stat.Uid,
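
The change above replaces the hand-rolled Major/Minor helpers (deleted below) with `unix.Major`/`unix.Minor` from golang.org/x/sys applied to the full `Rdev` value. A small standalone example of the same decoding:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Stat_t
	if err := unix.Stat("/dev/null", &st); err != nil {
		log.Fatal(err)
	}
	// Decode the device number the same way the patched DeviceFromPath does.
	dev := uint64(st.Rdev)
	fmt.Printf("/dev/null -> major %d, minor %d\n", unix.Major(dev), unix.Minor(dev))
}
```
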
diff --git a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/number.go b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/number.go
deleted file mode 100644
index 885b6e5dd9..0000000000
--- a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/devices/number.go
+++ /dev/null
@@ -1,24 +0,0 @@
-// +build linux freebsd
-
-package devices
-
-/*
-
-This code provides support for manipulating linux device numbers. It should be replaced by normal syscall functions once http://code.google.com/p/go/issues/detail?id=8106 is solved.
-
-You can read what they are here:
-
- - http://www.makelinux.net/ldd3/chp-3-sect-2
- - http://www.linux-tutorial.info/modules.php?name=MContent&pageid=94
-
-Note! These are NOT the same as the MAJOR(dev_t device);, MINOR(dev_t device); and MKDEV(int major, int minor); functions as defined in as the representation of device numbers used by go is different than the one used internally to the kernel! - https://github.com/torvalds/linux/blob/master/include/linux/kdev_t.h#L9
-
-*/
-
-func Major(devNumber int) int64 {
- return int64((devNumber >> 8) & 0xfff)
-}
-
-func Minor(devNumber int) int64 {
- return int64((devNumber & 0xff) | ((devNumber >> 12) & 0xfff00))
-}
diff --git a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_arm.go b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go
similarity index 93%
rename from components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_arm.go
rename to components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go
index 31ff3deb13..c5ca5d8623 100644
--- a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_arm.go
+++ b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go
@@ -1,4 +1,5 @@
-// +build linux,arm
+// +build linux
+// +build 386 arm
package system
diff --git a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_386.go b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_386.go
deleted file mode 100644
index 3f7235ed15..0000000000
--- a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_386.go
+++ /dev/null
@@ -1,25 +0,0 @@
-// +build linux,386
-
-package system
-
-import (
- "golang.org/x/sys/unix"
-)
-
-// Setuid sets the uid of the calling thread to the specified uid.
-func Setuid(uid int) (err error) {
- _, _, e1 := unix.RawSyscall(unix.SYS_SETUID32, uintptr(uid), 0, 0)
- if e1 != 0 {
- err = e1
- }
- return
-}
-
-// Setgid sets the gid of the calling thread to the specified gid.
-func Setgid(gid int) (err error) {
- _, _, e1 := unix.RawSyscall(unix.SYS_SETGID32, uintptr(gid), 0, 0)
- if e1 != 0 {
- err = e1
- }
- return
-}
diff --git a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go
index d7891a2ffa..11c3faafbf 100644
--- a/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go
+++ b/components/engine/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go
@@ -1,4 +1,5 @@
-// +build linux,arm64 linux,amd64 linux,ppc linux,ppc64 linux,ppc64le linux,s390x
+// +build linux
+// +build arm64 amd64 mips mipsle mips64 mips64le ppc ppc64 ppc64le s390x
package system
diff --git a/components/engine/vendor/github.com/opencontainers/runc/vendor.conf b/components/engine/vendor/github.com/opencontainers/runc/vendor.conf
index 1266ee485f..0ab4685fd7 100644
--- a/components/engine/vendor/github.com/opencontainers/runc/vendor.conf
+++ b/components/engine/vendor/github.com/opencontainers/runc/vendor.conf
@@ -5,7 +5,7 @@ github.com/opencontainers/runtime-spec v1.0.0
# Core libcontainer functionality.
github.com/mrunalp/fileutils ed869b029674c0e9ce4c0dfa781405c2d9946d08
github.com/opencontainers/selinux v1.0.0-rc1
-github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
+github.com/seccomp/libseccomp-golang 84e90a91acea0f4e51e62bc1a75de18b1fc0790f
github.com/sirupsen/logrus a3f95b5c423586578a4e099b11a46c2479628cac
github.com/syndtr/gocapability db04d3cc01c8b54962a58ec7e491717d06cfcc16
github.com/vishvananda/netlink 1e2e08e8a2dcdacaae3f14ac44c5cfa31361f270
@@ -15,7 +15,7 @@ github.com/coreos/pkg v3
github.com/godbus/dbus v3
github.com/golang/protobuf 18c9bb3261723cd5401db4d0c9fbc5c3b6c70fe8
# Command-line interface.
-github.com/docker/docker 0f5c9d301b9b1cca66b3ea0f9dec3b5317d3686d
+github.com/cyphar/filepath-securejoin v0.2.1
github.com/docker/go-units v0.2.0
github.com/urfave/cli d53eb991652b1d438abdd34ce4bfa3ef1539108e
golang.org/x/sys 7ddbeae9ae08c6a06a59597f0c9edbc5ff2444ce https://github.com/golang/sys
diff --git a/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go b/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go
index 569dcf0841..c008a387bf 100644
--- a/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go
+++ b/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go
@@ -49,8 +49,10 @@ func InitLabels(options []string) (string, string, error) {
mcon[con[0]] = con[1]
}
}
+ _ = ReleaseLabel(processLabel)
processLabel = pcon.Get()
mountLabel = mcon.Get()
+ _ = ReserveLabel(processLabel)
}
return processLabel, mountLabel, nil
}
diff --git a/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/selinux.go b/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/selinux.go
index 4cf2c45de7..de9316c2e2 100644
--- a/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/selinux.go
+++ b/components/engine/vendor/github.com/opencontainers/selinux/go-selinux/selinux.go
@@ -213,7 +213,7 @@ func SetFileLabel(path string, label string) error {
return lsetxattr(path, xattrNameSelinux, []byte(label), 0)
}
-// Filecon returns the SELinux label for this path or returns an error.
+// FileLabel returns the SELinux label for this path or returns an error.
func FileLabel(path string) (string, error) {
label, err := lgetxattr(path, xattrNameSelinux)
if err != nil {
@@ -331,7 +331,7 @@ func EnforceMode() int {
}
/*
-SetEnforce sets the current SELinux mode Enforcing, Permissive.
+SetEnforceMode sets the current SELinux mode Enforcing, Permissive.
Disabled is not valid, since this needs to be set at boot time.
*/
func SetEnforceMode(mode int) error {
diff --git a/components/engine/vendor/github.com/vbatts/tar-split/README.md b/components/engine/vendor/github.com/vbatts/tar-split/README.md
index 4c544d823f..03e3ec4308 100644
--- a/components/engine/vendor/github.com/vbatts/tar-split/README.md
+++ b/components/engine/vendor/github.com/vbatts/tar-split/README.md
@@ -1,6 +1,7 @@
# tar-split
[](https://travis-ci.org/vbatts/tar-split)
+[](https://goreportcard.com/report/github.com/vbatts/tar-split)
Pristinely disassembling a tar archive, and stashing needed raw bytes and offsets to reassemble a validating original archive.
@@ -50,7 +51,7 @@ For example stored sparse files that have "holes" in them, will be read as a
contiguous file, though the archive contents may be recorded in sparse format.
Therefore when adding the file payload to a reassembled tar, to achieve
identical output, the file payload would need be precisely re-sparsified. This
-is not something I seek to fix imediately, but would rather have an alert that
+is not something I seek to fix immediately, but would rather have an alert that
precise reassembly is not possible.
(see more http://www.gnu.org/software/tar/manual/html_node/Sparse-Formats.html)
diff --git a/components/engine/vendor/github.com/vbatts/tar-split/tar/asm/disassemble.go b/components/engine/vendor/github.com/vbatts/tar-split/tar/asm/disassemble.go
index 54ef23aed3..009b3f5d81 100644
--- a/components/engine/vendor/github.com/vbatts/tar-split/tar/asm/disassemble.go
+++ b/components/engine/vendor/github.com/vbatts/tar-split/tar/asm/disassemble.go
@@ -2,7 +2,6 @@ package asm
import (
"io"
- "io/ioutil"
"github.com/vbatts/tar-split/archive/tar"
"github.com/vbatts/tar-split/tar/storage"
@@ -119,20 +118,34 @@ func NewInputTarStream(r io.Reader, p storage.Packer, fp storage.FilePutter) (io
}
}
- // it is allowable, and not uncommon that there is further padding on the
- // end of an archive, apart from the expected 1024 null bytes.
- remainder, err := ioutil.ReadAll(outputRdr)
- if err != nil && err != io.EOF {
- pW.CloseWithError(err)
- return
- }
- _, err = p.AddEntry(storage.Entry{
- Type: storage.SegmentType,
- Payload: remainder,
- })
- if err != nil {
- pW.CloseWithError(err)
- return
+ // It is allowable, and not uncommon, that there is further padding at
+ // the end of an archive, apart from the expected 1024 null bytes. We
+ // read that padding in chunks rather than in one go to avoid cases where
+ // a maliciously crafted tar file tries to trick us into reading many GBs
+ // into memory.
+ const paddingChunkSize = 1024 * 1024
+ var paddingChunk [paddingChunkSize]byte
+ for {
+ var isEOF bool
+ n, err := outputRdr.Read(paddingChunk[:])
+ if err != nil {
+ if err != io.EOF {
+ pW.CloseWithError(err)
+ return
+ }
+ isEOF = true
+ }
+ _, err = p.AddEntry(storage.Entry{
+ Type: storage.SegmentType,
+ Payload: paddingChunk[:n],
+ })
+ if err != nil {
+ pW.CloseWithError(err)
+ return
+ }
+ if isEOF {
+ break
+ }
}
pW.Close()
}()
diff --git a/components/engine/vendor/golang.org/x/sys/unix/env_unix.go b/components/engine/vendor/golang.org/x/sys/unix/env_unix.go
index 45e281a047..2e06b33f2e 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/env_unix.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/env_unix.go
@@ -1,4 +1,4 @@
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/env_unset.go b/components/engine/vendor/golang.org/x/sys/unix/env_unset.go
index 9222262559..c44fdc4afa 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/env_unset.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/env_unset.go
@@ -1,4 +1,4 @@
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/gccgo.go b/components/engine/vendor/golang.org/x/sys/unix/gccgo.go
index 94c8232124..40bed3fa80 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/gccgo.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/gccgo.go
@@ -1,4 +1,4 @@
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -8,7 +8,7 @@ package unix
import "syscall"
-// We can't use the gc-syntax .s files for gccgo. On the plus side
+// We can't use the gc-syntax .s files for gccgo. On the plus side
// much of the functionality can be written directly in Go.
//extern gccgoRealSyscall
diff --git a/components/engine/vendor/golang.org/x/sys/unix/gccgo_c.c b/components/engine/vendor/golang.org/x/sys/unix/gccgo_c.c
index 07f6be0392..99a774f2be 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/gccgo_c.c
+++ b/components/engine/vendor/golang.org/x/sys/unix/gccgo_c.c
@@ -1,4 +1,4 @@
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go
index bffe1a77db..251a977a81 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go
@@ -1,4 +1,4 @@
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/pagesize_unix.go b/components/engine/vendor/golang.org/x/sys/unix/pagesize_unix.go
index 45afcf72d6..83c85e0196 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/pagesize_unix.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/pagesize_unix.go
@@ -1,4 +1,4 @@
-// Copyright 2017 The Go Authors. All rights reserved.
+// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/race.go b/components/engine/vendor/golang.org/x/sys/unix/race.go
index 3c7627eb5c..61712b51c9 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/race.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/race.go
@@ -1,4 +1,4 @@
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/race0.go b/components/engine/vendor/golang.org/x/sys/unix/race0.go
index f8678e0d21..dd0820431e 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/race0.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/race0.go
@@ -1,4 +1,4 @@
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/sockcmsg_linux.go b/components/engine/vendor/golang.org/x/sys/unix/sockcmsg_linux.go
index d9ff4731a2..6079eb4ac1 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/sockcmsg_linux.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/sockcmsg_linux.go
@@ -1,4 +1,4 @@
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall.go b/components/engine/vendor/golang.org/x/sys/unix/syscall.go
index 85e35020e2..857d2a42d4 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall.go
@@ -5,10 +5,10 @@
// +build darwin dragonfly freebsd linux netbsd openbsd solaris
// Package unix contains an interface to the low-level operating system
-// primitives. OS details vary depending on the underlying system, and
+// primitives. OS details vary depending on the underlying system, and
// by default, godoc will display OS-specific documentation for the current
-// system. If you want godoc to display OS documentation for another
-// system, set $GOOS and $GOARCH to the desired system. For example, if
+// system. If you want godoc to display OS documentation for another
+// system, set $GOOS and $GOARCH to the desired system. For example, if
// you want to view documentation for freebsd/arm on linux/amd64, set $GOOS
// to freebsd and $GOARCH to arm.
// The primary use of this package is inside other packages that provide a more
@@ -49,21 +49,3 @@ func BytePtrFromString(s string) (*byte, error) {
// Single-word zero for use when we need a valid pointer to 0 bytes.
// See mkunix.pl.
var _zero uintptr
-
-func (ts *Timespec) Unix() (sec int64, nsec int64) {
- return int64(ts.Sec), int64(ts.Nsec)
-}
-
-func (tv *Timeval) Unix() (sec int64, nsec int64) {
- return int64(tv.Sec), int64(tv.Usec) * 1000
-}
-
-func (ts *Timespec) Nano() int64 {
- return int64(ts.Sec)*1e9 + int64(ts.Nsec)
-}
-
-func (tv *Timeval) Nano() int64 {
- return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000
-}
-
-func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 }
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_bsd.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_bsd.go
index c2846b32d6..4119edd730 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_bsd.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_bsd.go
@@ -34,7 +34,7 @@ func Getgroups() (gids []int, err error) {
return nil, nil
}
- // Sanity check group count. Max is 16 on BSD.
+ // Sanity check group count. Max is 16 on BSD.
if n < 0 || n > 1000 {
return nil, EINVAL
}
@@ -607,6 +607,15 @@ func Futimes(fd int, tv []Timeval) error {
//sys fcntl(fd int, cmd int, arg int) (val int, err error)
+//sys poll(fds *PollFd, nfds int, timeout int) (n int, err error)
+
+func Poll(fds []PollFd, timeout int) (n int, err error) {
+ if len(fds) == 0 {
+ return poll(nil, 0, timeout)
+ }
+ return poll(&fds[0], len(fds), timeout)
+}
+
// TODO: wrap
// Acct(name nil-string) (err error)
// Gethostuuid(uuid *byte, timeout *Timespec) (err error)
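
The `Poll` wrapper added above matches the `unix.Poll` call golang.org/x/sys already exposes on Linux. A short sketch of calling it, waiting up to one second for stdin to become readable:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Watch fd 0 (stdin) for readability with a 1000ms timeout.
	fds := []unix.PollFd{{Fd: 0, Events: unix.POLLIN}}
	n, err := unix.Poll(fds, 1000)
	if err != nil {
		log.Fatal(err)
	}
	if n == 0 {
		fmt.Println("timed out with nothing to read")
		return
	}
	fmt.Printf("stdin is readable (revents=%#x)\n", fds[0].Revents)
}
```
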
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin.go
index ad74a11fb3..f6a8fccad1 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin.go
@@ -54,7 +54,7 @@ func nametomib(name string) (mib []_C_int, err error) {
// NOTE(rsc): It seems strange to set the buffer to have
// size CTL_MAXNAME+2 but use only CTL_MAXNAME
- // as the size. I don't know why the +2 is here, but the
+ // as the size. I don't know why the +2 is here, but the
// kernel uses +2 for its own implementation of this function.
// I am scared that if we don't include the +2 here, the kernel
// will silently write 2 words farther than we specify
@@ -377,7 +377,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) {
// Searchfs
// Delete
// Copyfile
-// Poll
// Watchevent
// Waitevent
// Modwatch
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_386.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_386.go
index 76634f7ab1..b3ac109a2f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_386.go
@@ -11,25 +11,18 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int32(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = int32(sec)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go
index 7be02dab9d..75219444a8 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go
@@ -11,25 +11,18 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = sec
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go
index 26b66972f0..47ab664859 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go
@@ -9,25 +9,18 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int32(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = int32(sec)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go
index 4d67a87427..d6d9628014 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go
@@ -11,25 +11,18 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = sec
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly.go
index 3a483373dc..fee06839fd 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly.go
@@ -257,7 +257,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) {
// Searchfs
// Delete
// Copyfile
-// Poll
// Watchevent
// Waitevent
// Modwatch
@@ -403,7 +402,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) {
// Pread_nocancel
// Pwrite_nocancel
// Waitid_nocancel
-// Poll_nocancel
// Msgsnd_nocancel
// Msgrcv_nocancel
// Sem_wait_nocancel
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go
index 6d8952d5a1..9babb31ea7 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go
@@ -11,19 +11,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = nsec % 1e9 / 1e3
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd.go
index d26e52eaef..8f7ab16d1f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd.go
@@ -32,7 +32,7 @@ func nametomib(name string) (mib []_C_int, err error) {
// NOTE(rsc): It seems strange to set the buffer to have
// size CTL_MAXNAME+2 but use only CTL_MAXNAME
- // as the size. I don't know why the +2 is here, but the
+ // as the size. I don't know why the +2 is here, but the
// kernel uses +2 for its own implementation of this function.
// I am scared that if we don't include the +2 here, the kernel
// will silently write 2 words farther than we specify
@@ -550,7 +550,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) {
// Searchfs
// Delete
// Copyfile
-// Poll
// Watchevent
// Waitevent
// Modwatch
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go
index 4cf5f453f5..21e03958cd 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go
@@ -11,19 +11,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int32(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go
index b8036e7268..9c945a6579 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go
@@ -11,19 +11,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = nsec % 1e9 / 1e3
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go
index 5a3bb6a154..5cd6243f2a 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go
@@ -11,19 +11,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return ts.Sec*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = nsec / 1e9
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux.go
index 4520328abf..b98a7e1544 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux.go
@@ -255,7 +255,7 @@ func Getgroups() (gids []int, err error) {
return nil, nil
}
- // Sanity check group count. Max is 1<<16 on Linux.
+ // Sanity check group count. Max is 1<<16 on Linux.
if n < 0 || n > 1<<20 {
return nil, EINVAL
}
@@ -290,8 +290,8 @@ type WaitStatus uint32
// 0x7F (stopped), or a signal number that caused an exit.
// The 0x80 bit is whether there was a core dump.
// An extra number (exit code, signal causing a stop)
-// is in the high bits. At least that's the idea.
-// There are various irregularities. For example, the
+// is in the high bits. At least that's the idea.
+// There are various irregularities. For example, the
// "continued" status is 0xFFFF, distinguishing itself
// from stopped via the core dump bit.
@@ -926,7 +926,7 @@ func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from
msg.Namelen = uint32(SizeofSockaddrAny)
var iov Iovec
if len(p) > 0 {
- iov.Base = (*byte)(unsafe.Pointer(&p[0]))
+ iov.Base = &p[0]
iov.SetLen(len(p))
}
var dummy byte
@@ -941,7 +941,7 @@ func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from
iov.Base = &dummy
iov.SetLen(1)
}
- msg.Control = (*byte)(unsafe.Pointer(&oob[0]))
+ msg.Control = &oob[0]
msg.SetControllen(len(oob))
}
msg.Iov = &iov
@@ -974,11 +974,11 @@ func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error)
}
}
var msg Msghdr
- msg.Name = (*byte)(unsafe.Pointer(ptr))
+ msg.Name = (*byte)(ptr)
msg.Namelen = uint32(salen)
var iov Iovec
if len(p) > 0 {
- iov.Base = (*byte)(unsafe.Pointer(&p[0]))
+ iov.Base = &p[0]
iov.SetLen(len(p))
}
var dummy byte
@@ -993,7 +993,7 @@ func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error)
iov.Base = &dummy
iov.SetLen(1)
}
- msg.Control = (*byte)(unsafe.Pointer(&oob[0]))
+ msg.Control = &oob[0]
msg.SetControllen(len(oob))
}
msg.Iov = &iov
@@ -1023,7 +1023,7 @@ func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err erro
var buf [sizeofPtr]byte
- // Leading edge. PEEKTEXT/PEEKDATA don't require aligned
+ // Leading edge. PEEKTEXT/PEEKDATA don't require aligned
// access (PEEKUSER warns that it might), but if we don't
// align our reads, we might straddle an unmapped page
// boundary and not get the bytes leading up to the page
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_386.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_386.go
index f4c826a456..4774fa363e 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_386.go
@@ -14,19 +14,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = int32(nsec / 1e9)
- tv.Usec = int32(nsec % 1e9 / 1e3)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
//sysnb pipe(p *[2]_C_int) (err error)
@@ -183,9 +176,9 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) {
// On x86 Linux, all the socket calls go through an extra indirection,
// I think because the 5-register system call interface can't handle
-// the 6-argument calls like sendto and recvfrom. Instead the
+// the 6-argument calls like sendto and recvfrom. Instead the
// arguments to the underlying system call are the number below
-// and a pointer to an array of uintptr. We hide the pointer in the
+// and a pointer to an array of uintptr. We hide the pointer in the
// socketcall assembly to avoid allocation on every system call.
const (
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go
index 0715200dcf..3707f6b7c9 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go
@@ -83,19 +83,12 @@ func Time(t *Time_t) (tt Time_t, err error) {
//sys Utime(path string, buf *Utimbuf) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = nsec % 1e9 / 1e3
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
//sysnb pipe(p *[2]_C_int) (err error)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm.go
index 2b79c84a67..226be100f5 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm.go
@@ -11,19 +11,12 @@ import (
"unsafe"
)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = int32(nsec / 1e9)
- tv.Usec = int32(nsec % 1e9 / 1e3)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
func Pipe(p []int) (err error) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go
index e16a0d141e..9a8e6e4117 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go
@@ -73,19 +73,12 @@ func Lstat(path string, stat *Stat_t) (err error) {
//sysnb Gettimeofday(tv *Timeval) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = nsec % 1e9 / 1e3
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func Time(t *Time_t) (Time_t, error) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go
index 92e620ea5b..cdda11a9fa 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go
@@ -76,19 +76,12 @@ func Time(t *Time_t) (tt Time_t, err error) {
//sys Utime(path string, buf *Utimbuf) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = nsec % 1e9 / 1e3
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func Pipe(p []int) (err error) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go
index 25a5a0da5a..a114ba8cb3 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go
@@ -99,19 +99,12 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) {
return
}
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int32(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: int32(sec), Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = int32(nsec / 1e9)
- tv.Usec = int32(nsec % 1e9 / 1e3)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: int32(sec), Usec: int32(usec)}
}
//sysnb pipe2(p *[2]_C_int, flags int) (err error)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go
index a4a8e4ee1e..7cae936c45 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go
@@ -66,19 +66,12 @@ package unix
//sys Utime(path string, buf *Utimbuf) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = nsec % 1e9 / 1e3
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func (r *PtraceRegs) PC() uint64 { return r.Nip }
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go
index 3845fc9c43..e96a40cb21 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go
@@ -62,19 +62,12 @@ func Time(t *Time_t) (tt Time_t, err error) {
//sys Utime(path string, buf *Utimbuf) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = nsec % 1e9 / 1e3
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
//sysnb pipe2(p *[2]_C_int, flags int) (err error)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go
index bd9de3e9d0..012a3285ef 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go
@@ -82,19 +82,12 @@ func Time(t *Time_t) (tt Time_t, err error) {
//sys Utime(path string, buf *Utimbuf) (err error)
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Sec = nsec / 1e9
- tv.Usec = int32(nsec % 1e9 / 1e3)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func (r *PtraceRegs) PC() uint64 { return r.Tpc }
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd.go
index e129668459..1caa5b3266 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd.go
@@ -422,7 +422,6 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e
// ntp_adjtime
// pmc_control
// pmc_get_info
-// poll
// pollts
// preadv
// profil
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go
index baefa411ec..24f74e58ce 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int64(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go
index 59c2ab7eba..6878bf7ff9 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int64(nsec / 1e9)
- ts.Nsec = int64(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go
index 7208108a31..dbbfcf71db 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int64(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd.go
index 408e63081c..03a0fac61d 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd.go
@@ -243,7 +243,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) {
// nfssvc
// nnpfspioctl
// openat
-// poll
// preadv
// profil
// pwritev
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go
index d3809b426c..994964a916 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int64(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go
index 9a9dfceffd..649e67fccc 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = nsec % 1e9 / 1e3
- tv.Sec = nsec / 1e9
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go
index ba8649056f..59844f5041 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = int64(nsec / 1e9)
- ts.Nsec = int32(nsec % 1e9)
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: int32(nsec)}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = int32(nsec % 1e9 / 1e3)
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: int32(usec)}
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris.go
index 35e5d72baf..3ab9e07c8c 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris.go
@@ -166,7 +166,7 @@ func Getwd() (wd string, err error) {
func Getgroups() (gids []int, err error) {
n, err := getgroups(0, nil)
- // Check for error and sanity check group count. Newer versions of
+ // Check for error and sanity check group count. Newer versions of
// Solaris allow up to 1024 (NGROUPS_MAX).
if n < 0 || n > 1024 {
if err != nil {
@@ -350,7 +350,7 @@ func Futimesat(dirfd int, path string, tv []Timeval) error {
}
// Solaris doesn't have an futimes function because it allows NULL to be
-// specified as the path for futimesat. However, Go doesn't like
+// specified as the path for futimesat. However, Go doesn't like
// NULL-style string interfaces, so this simple wrapper is provided.
func Futimes(fd int, tv []Timeval) error {
if tv == nil {
@@ -578,6 +578,15 @@ func IoctlGetTermio(fd int, req uint) (*Termio, error) {
return &value, err
}
+//sys poll(fds *PollFd, nfds int, timeout int) (n int, err error)
+
+func Poll(fds []PollFd, timeout int) (n int, err error) {
+ if len(fds) == 0 {
+ return poll(nil, 0, timeout)
+ }
+ return poll(&fds[0], len(fds), timeout)
+}
+
/*
* Exposed directly
*/
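The Solaris port gains an exported `Poll` wrapper above: it guards against an empty `fds` slice before handing the pointer to the raw `poll` syscall. A minimal usage sketch, assuming the vendored `golang.org/x/sys/unix` package on a platform where `PollFd` and `POLLIN` are generated (the descriptor and timeout below are illustrative only, not part of this patch):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Wait up to one second for stdin (fd 0) to become readable.
	fds := []unix.PollFd{{Fd: 0, Events: unix.POLLIN}}
	n, err := unix.Poll(fds, 1000) // timeout is in milliseconds
	if err != nil {
		fmt.Println("poll failed:", err)
		return
	}
	fmt.Printf("%d descriptor(s) ready, revents=%#x\n", n, fds[0].Revents)
}
```

Passing the slice length separately is what makes the nil-pointer guard in the wrapper necessary: `&fds[0]` would panic on an empty slice.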
diff --git a/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go
index 5aff62c3bb..9d4e7a678f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go
@@ -6,19 +6,12 @@
package unix
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
-
-func NsecToTimespec(nsec int64) (ts Timespec) {
- ts.Sec = nsec / 1e9
- ts.Nsec = nsec % 1e9
- return
+func setTimespec(sec, nsec int64) Timespec {
+ return Timespec{Sec: sec, Nsec: nsec}
}
-func NsecToTimeval(nsec int64) (tv Timeval) {
- nsec += 999 // round up to microsecond
- tv.Usec = nsec % 1e9 / 1e3
- tv.Sec = int64(nsec / 1e9)
- return
+func setTimeval(sec, usec int64) Timeval {
+ return Timeval{Sec: sec, Usec: usec}
}
func (iov *Iovec) SetLen(length int) {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/timestruct.go b/components/engine/vendor/golang.org/x/sys/unix/timestruct.go
new file mode 100644
index 0000000000..139fbbebbb
--- /dev/null
+++ b/components/engine/vendor/golang.org/x/sys/unix/timestruct.go
@@ -0,0 +1,62 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build darwin dragonfly freebsd linux netbsd openbsd solaris
+
+package unix
+
+// TimespecToNsec converts a Timespec value into a number of
+// nanoseconds since the Unix epoch.
+func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
+
+// NsecToTimespec takes a number of nanoseconds since the Unix epoch
+// and returns the corresponding Timespec value.
+func NsecToTimespec(nsec int64) Timespec {
+ sec := nsec / 1e9
+ nsec = nsec % 1e9
+ if nsec < 0 {
+ nsec += 1e9
+ sec--
+ }
+ return setTimespec(sec, nsec)
+}
+
+// TimevalToNsec converts a Timeval value into a number of nanoseconds
+// since the Unix epoch.
+func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 }
+
+// NsecToTimeval takes a number of nanoseconds since the Unix epoch
+// and returns the corresponding Timeval value.
+func NsecToTimeval(nsec int64) Timeval {
+ nsec += 999 // round up to microsecond
+ usec := nsec % 1e9 / 1e3
+ sec := nsec / 1e9
+ if usec < 0 {
+ usec += 1e6
+ sec--
+ }
+ return setTimeval(sec, usec)
+}
+
+// Unix returns ts as the number of seconds and nanoseconds elapsed since the
+// Unix epoch.
+func (ts *Timespec) Unix() (sec int64, nsec int64) {
+ return int64(ts.Sec), int64(ts.Nsec)
+}
+
+// Unix returns tv as the number of seconds and nanoseconds elapsed since the
+// Unix epoch.
+func (tv *Timeval) Unix() (sec int64, nsec int64) {
+ return int64(tv.Sec), int64(tv.Usec) * 1000
+}
+
+// Nano returns ts as the number of nanoseconds elapsed since the Unix epoch.
+func (ts *Timespec) Nano() int64 {
+ return int64(ts.Sec)*1e9 + int64(ts.Nsec)
+}
+
+// Nano returns tv as the number of nanoseconds elapsed since the Unix epoch.
+func (tv *Timeval) Nano() int64 {
+ return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000
+}
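The new `timestruct.go` consolidates the `TimespecToNsec`/`NsecToTimespec`/`NsecToTimeval` helpers that were previously duplicated in every GOOS/GOARCH file; each platform file now only contributes `setTimespec`/`setTimeval` with the right field widths, while the shared code handles microsecond rounding and keeps the sub-second field non-negative for times before the epoch. A small round-trip sketch (the value is chosen only for illustration):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// 1.5 ms before the Unix epoch; the shared helpers borrow from the
	// seconds field so the sub-second part stays non-negative.
	nsec := int64(-1500000)

	ts := unix.NsecToTimespec(nsec) // {Sec: -1, Nsec: 998500000}
	tv := unix.NsecToTimeval(nsec)  // {Sec: -1, Usec: 998501}

	fmt.Println(unix.TimespecToNsec(ts)) // -1500000
	fmt.Println(unix.TimevalToNsec(tv))  // -1499000 (rounded up to whole microseconds)
}
```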
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_386.go
index 4066ad1e0f..bb8a7724bc 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_386.go
@@ -1277,7 +1277,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1842,6 +1842,8 @@ const (
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
@@ -1871,6 +1873,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x80045702
+ WDIOC_GETPRETIMEOUT = 0x80045709
+ WDIOC_GETSTATUS = 0x80045701
+ WDIOC_GETSUPPORT = 0x80285700
+ WDIOC_GETTEMP = 0x80045703
+ WDIOC_GETTIMELEFT = 0x8004570a
+ WDIOC_GETTIMEOUT = 0x80045707
+ WDIOC_KEEPALIVE = 0x80045705
+ WDIOC_SETOPTIONS = 0x80045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
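The new `WDIOC_*` values are the Linux watchdog ioctl request numbers (here for 386; the big-endian ports further down get the `0x4004…` encodings). A hedged sketch of pinging a watchdog with them; the device path and the raw `SYS_IOCTL` call are assumptions for illustration, not something this patch adds:

```go
package main

import (
	"fmt"
	"unsafe"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical device path; many systems expose /dev/watchdog0 instead.
	fd, err := unix.Open("/dev/watchdog", unix.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer unix.Close(fd)

	// WDIOC_KEEPALIVE "pings" the watchdog so it does not reset the machine.
	var dummy int32
	_, _, errno := unix.Syscall(unix.SYS_IOCTL, uintptr(fd),
		uintptr(unix.WDIOC_KEEPALIVE), uintptr(unsafe.Pointer(&dummy)))
	if errno != 0 {
		fmt.Println("ioctl:", errno)
	}
}
```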
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go
index c9f53b0b37..cf0b2249f7 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go
@@ -1187,7 +1187,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1278,7 +1278,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1843,6 +1843,8 @@ const (
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
@@ -1872,6 +1874,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x80045702
+ WDIOC_GETPRETIMEOUT = 0x80045709
+ WDIOC_GETSTATUS = 0x80045701
+ WDIOC_GETSUPPORT = 0x80285700
+ WDIOC_GETTEMP = 0x80045703
+ WDIOC_GETTIMELEFT = 0x8004570a
+ WDIOC_GETTIMEOUT = 0x80045707
+ WDIOC_KEEPALIVE = 0x80045705
+ WDIOC_SETOPTIONS = 0x80045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go
index 3e8c2c7aa6..57cfcf3fe0 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go
@@ -1282,7 +1282,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1847,6 +1847,8 @@ const (
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
@@ -1876,6 +1878,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x80045702
+ WDIOC_GETPRETIMEOUT = 0x80045709
+ WDIOC_GETSTATUS = 0x80045701
+ WDIOC_GETSUPPORT = 0x80285700
+ WDIOC_GETTEMP = 0x80045703
+ WDIOC_GETTIMELEFT = 0x8004570a
+ WDIOC_GETTIMEOUT = 0x80045707
+ WDIOC_KEEPALIVE = 0x80045705
+ WDIOC_SETOPTIONS = 0x80045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go
index 383453349f..b6e5b090ec 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go
@@ -1188,7 +1188,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1268,7 +1268,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1833,6 +1833,8 @@ const (
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
@@ -1862,6 +1864,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x80045702
+ WDIOC_GETPRETIMEOUT = 0x80045709
+ WDIOC_GETSTATUS = 0x80045701
+ WDIOC_GETSUPPORT = 0x80285700
+ WDIOC_GETTEMP = 0x80045703
+ WDIOC_GETTIMELEFT = 0x8004570a
+ WDIOC_GETTIMEOUT = 0x80045707
+ WDIOC_KEEPALIVE = 0x80045705
+ WDIOC_SETOPTIONS = 0x80045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go
index bde8f7d023..0113e1f6ab 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go
@@ -1279,7 +1279,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1846,6 +1846,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x10
VEOL = 0x11
@@ -1876,6 +1878,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go
index 42b6397d5d..6857657a50 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go
@@ -1187,7 +1187,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1279,7 +1279,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1846,6 +1846,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x10
VEOL = 0x11
@@ -1876,6 +1878,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go
index bd4ff81474..14f7e0e056 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go
@@ -1187,7 +1187,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1279,7 +1279,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1846,6 +1846,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x10
VEOL = 0x11
@@ -1876,6 +1878,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go
index 6dfc95c40f..f795862d87 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go
@@ -1279,7 +1279,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1846,6 +1846,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x10
VEOL = 0x11
@@ -1876,6 +1878,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go
index 46b09d320d..2544c4b632 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go
@@ -1189,7 +1189,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1335,7 +1335,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1904,6 +1904,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0x10
VEOF = 0x4
VEOL = 0x6
@@ -1933,6 +1935,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go
index 08adb1d8fc..133bdf5847 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go
@@ -1189,7 +1189,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1335,7 +1335,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1904,6 +1904,8 @@ const (
TUNSETVNETHDRSZ = 0x800454d8
TUNSETVNETLE = 0x800454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0x10
VEOF = 0x4
VEOL = 0x6
@@ -1933,6 +1935,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x40045702
+ WDIOC_GETPRETIMEOUT = 0x40045709
+ WDIOC_GETSTATUS = 0x40045701
+ WDIOC_GETSUPPORT = 0x40285700
+ WDIOC_GETTEMP = 0x40045703
+ WDIOC_GETTIMELEFT = 0x4004570a
+ WDIOC_GETTIMEOUT = 0x40045707
+ WDIOC_KEEPALIVE = 0x40045705
+ WDIOC_SETOPTIONS = 0x40045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go
index 70bc1a2fc5..b921fb17a4 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go
@@ -1186,7 +1186,7 @@ const (
PR_SET_NO_NEW_PRIVS = 0x26
PR_SET_PDEATHSIG = 0x1
PR_SET_PTRACER = 0x59616d61
- PR_SET_PTRACER_ANY = -0x1
+ PR_SET_PTRACER_ANY = 0xffffffffffffffff
PR_SET_SECCOMP = 0x16
PR_SET_SECUREBITS = 0x1c
PR_SET_THP_DISABLE = 0x29
@@ -1339,7 +1339,7 @@ const (
RLIMIT_RTTIME = 0xf
RLIMIT_SIGPENDING = 0xb
RLIMIT_STACK = 0x3
- RLIM_INFINITY = -0x1
+ RLIM_INFINITY = 0xffffffffffffffff
RTAX_ADVMSS = 0x8
RTAX_CC_ALGO = 0x10
RTAX_CWND = 0x7
@@ -1904,6 +1904,8 @@ const (
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UMOUNT_NOFOLLOW = 0x8
+ UTIME_NOW = 0x3fffffff
+ UTIME_OMIT = 0x3ffffffe
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
@@ -1933,6 +1935,17 @@ const (
WALL = 0x40000000
WCLONE = 0x80000000
WCONTINUED = 0x8
+ WDIOC_GETBOOTSTATUS = 0x80045702
+ WDIOC_GETPRETIMEOUT = 0x80045709
+ WDIOC_GETSTATUS = 0x80045701
+ WDIOC_GETSUPPORT = 0x80285700
+ WDIOC_GETTEMP = 0x80045703
+ WDIOC_GETTIMELEFT = 0x8004570a
+ WDIOC_GETTIMEOUT = 0x80045707
+ WDIOC_KEEPALIVE = 0x80045705
+ WDIOC_SETOPTIONS = 0x80045704
+ WDIOC_SETPRETIMEOUT = 0xc0045708
+ WDIOC_SETTIMEOUT = 0xc0045706
WEXITED = 0x4
WNOHANG = 0x1
WNOTHREAD = 0x20000000
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zptrace386_linux.go b/components/engine/vendor/golang.org/x/sys/unix/zptrace386_linux.go
new file mode 100644
index 0000000000..2d21c49e12
--- /dev/null
+++ b/components/engine/vendor/golang.org/x/sys/unix/zptrace386_linux.go
@@ -0,0 +1,80 @@
+// Code generated by linux/mkall.go generatePtracePair(386, amd64). DO NOT EDIT.
+
+// +build linux
+// +build 386 amd64
+
+package unix
+
+import "unsafe"
+
+// PtraceRegs386 is the registers used by 386 binaries.
+type PtraceRegs386 struct {
+ Ebx int32
+ Ecx int32
+ Edx int32
+ Esi int32
+ Edi int32
+ Ebp int32
+ Eax int32
+ Xds int32
+ Xes int32
+ Xfs int32
+ Xgs int32
+ Orig_eax int32
+ Eip int32
+ Xcs int32
+ Eflags int32
+ Esp int32
+ Xss int32
+}
+
+// PtraceGetRegs386 fetches the registers used by 386 binaries.
+func PtraceGetRegs386(pid int, regsout *PtraceRegs386) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegs386 sets the registers used by 386 binaries.
+func PtraceSetRegs386(pid int, regs *PtraceRegs386) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
+
+// PtraceRegsAmd64 is the registers used by amd64 binaries.
+type PtraceRegsAmd64 struct {
+ R15 uint64
+ R14 uint64
+ R13 uint64
+ R12 uint64
+ Rbp uint64
+ Rbx uint64
+ R11 uint64
+ R10 uint64
+ R9 uint64
+ R8 uint64
+ Rax uint64
+ Rcx uint64
+ Rdx uint64
+ Rsi uint64
+ Rdi uint64
+ Orig_rax uint64
+ Rip uint64
+ Cs uint64
+ Eflags uint64
+ Rsp uint64
+ Ss uint64
+ Fs_base uint64
+ Gs_base uint64
+ Ds uint64
+ Es uint64
+ Fs uint64
+ Gs uint64
+}
+
+// PtraceGetRegsAmd64 fetches the registers used by amd64 binaries.
+func PtraceGetRegsAmd64(pid int, regsout *PtraceRegsAmd64) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsAmd64 sets the registers used by amd64 binaries.
+func PtraceSetRegsAmd64(pid int, regs *PtraceRegsAmd64) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
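The generated `zptrace386_linux.go` above (and the arm/mips counterparts that follow) exposes per-ABI register structs so a tracer built for one word size can read a tracee using the other layout, for example a 32-bit child under an amd64 tracer. A sketch of the read path, assuming the tracee has already been attached and stopped elsewhere; the pid is a placeholder, and the file only builds on linux/386 and linux/amd64 per its build tags:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// dumpRegs386 reads the 386-layout registers of an already-attached,
// stopped tracee and prints a few of them.
func dumpRegs386(pid int) error {
	var regs unix.PtraceRegs386
	if err := unix.PtraceGetRegs386(pid, &regs); err != nil {
		return err
	}
	fmt.Printf("eip=%#x esp=%#x eax=%#x\n", regs.Eip, regs.Esp, regs.Eax)
	return nil
}

func main() {
	// Placeholder pid; a real tracer would have called unix.PtraceAttach
	// and waited (unix.Wait4) for the tracee to stop before reading registers.
	if err := dumpRegs386(1234); err != nil {
		fmt.Println("ptrace:", err)
	}
}
```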
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zptracearm_linux.go b/components/engine/vendor/golang.org/x/sys/unix/zptracearm_linux.go
new file mode 100644
index 0000000000..faf23bbed9
--- /dev/null
+++ b/components/engine/vendor/golang.org/x/sys/unix/zptracearm_linux.go
@@ -0,0 +1,41 @@
+// Code generated by linux/mkall.go generatePtracePair(arm, arm64). DO NOT EDIT.
+
+// +build linux
+// +build arm arm64
+
+package unix
+
+import "unsafe"
+
+// PtraceRegsArm is the registers used by arm binaries.
+type PtraceRegsArm struct {
+ Uregs [18]uint32
+}
+
+// PtraceGetRegsArm fetches the registers used by arm binaries.
+func PtraceGetRegsArm(pid int, regsout *PtraceRegsArm) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsArm sets the registers used by arm binaries.
+func PtraceSetRegsArm(pid int, regs *PtraceRegsArm) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
+
+// PtraceRegsArm64 is the registers used by arm64 binaries.
+type PtraceRegsArm64 struct {
+ Regs [31]uint64
+ Sp uint64
+ Pc uint64
+ Pstate uint64
+}
+
+// PtraceGetRegsArm64 fetches the registers used by arm64 binaries.
+func PtraceGetRegsArm64(pid int, regsout *PtraceRegsArm64) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsArm64 sets the registers used by arm64 binaries.
+func PtraceSetRegsArm64(pid int, regs *PtraceRegsArm64) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zptracemips_linux.go b/components/engine/vendor/golang.org/x/sys/unix/zptracemips_linux.go
new file mode 100644
index 0000000000..c431131e63
--- /dev/null
+++ b/components/engine/vendor/golang.org/x/sys/unix/zptracemips_linux.go
@@ -0,0 +1,50 @@
+// Code generated by linux/mkall.go generatePtracePair(mips, mips64). DO NOT EDIT.
+
+// +build linux
+// +build mips mips64
+
+package unix
+
+import "unsafe"
+
+// PtraceRegsMips is the registers used by mips binaries.
+type PtraceRegsMips struct {
+ Regs [32]uint64
+ Lo uint64
+ Hi uint64
+ Epc uint64
+ Badvaddr uint64
+ Status uint64
+ Cause uint64
+}
+
+// PtraceGetRegsMips fetches the registers used by mips binaries.
+func PtraceGetRegsMips(pid int, regsout *PtraceRegsMips) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsMips sets the registers used by mips binaries.
+func PtraceSetRegsMips(pid int, regs *PtraceRegsMips) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
+
+// PtraceRegsMips64 is the registers used by mips64 binaries.
+type PtraceRegsMips64 struct {
+ Regs [32]uint64
+ Lo uint64
+ Hi uint64
+ Epc uint64
+ Badvaddr uint64
+ Status uint64
+ Cause uint64
+}
+
+// PtraceGetRegsMips64 fetches the registers used by mips64 binaries.
+func PtraceGetRegsMips64(pid int, regsout *PtraceRegsMips64) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsMips64 sets the registers used by mips64 binaries.
+func PtraceSetRegsMips64(pid int, regs *PtraceRegsMips64) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go b/components/engine/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go
new file mode 100644
index 0000000000..dc3d6d3732
--- /dev/null
+++ b/components/engine/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go
@@ -0,0 +1,50 @@
+// Code generated by linux/mkall.go generatePtracePair(mipsle, mips64le). DO NOT EDIT.
+
+// +build linux
+// +build mipsle mips64le
+
+package unix
+
+import "unsafe"
+
+// PtraceRegsMipsle is the registers used by mipsle binaries.
+type PtraceRegsMipsle struct {
+ Regs [32]uint64
+ Lo uint64
+ Hi uint64
+ Epc uint64
+ Badvaddr uint64
+ Status uint64
+ Cause uint64
+}
+
+// PtraceGetRegsMipsle fetches the registers used by mipsle binaries.
+func PtraceGetRegsMipsle(pid int, regsout *PtraceRegsMipsle) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsMipsle sets the registers used by mipsle binaries.
+func PtraceSetRegsMipsle(pid int, regs *PtraceRegsMipsle) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
+
+// PtraceRegsMips64le is the registers used by mips64le binaries.
+type PtraceRegsMips64le struct {
+ Regs [32]uint64
+ Lo uint64
+ Hi uint64
+ Epc uint64
+ Badvaddr uint64
+ Status uint64
+ Cause uint64
+}
+
+// PtraceGetRegsMips64le fetches the registers used by mips64le binaries.
+func PtraceGetRegsMips64le(pid int, regsout *PtraceRegsMips64le) error {
+ return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
+}
+
+// PtraceSetRegsMips64le sets the registers used by mips64le binaries.
+func PtraceSetRegsMips64le(pid int, regs *PtraceRegsMips64le) error {
+ return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
+}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go
index 10491e9ed3..9fb1b31f48 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go
@@ -408,6 +408,17 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Access(path string, mode uint32) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go
index 5f1f6bfef7..1e0fb46b01 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go
@@ -408,6 +408,17 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Access(path string, mode uint32) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go
index 7a40974594..e1026a88a5 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go
@@ -408,6 +408,17 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Access(path string, mode uint32) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go
index 07c6ebc9f4..37fb210a08 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go
@@ -408,6 +408,17 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Access(path string, mode uint32) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go
index 7fa205cd03..75761477d5 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go
index 1a0bb4cb0e..8bcecfb9b6 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go
index ac1e8e0136..61c0cf99bb 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go
index 2b4e6acf04..ffd01073c1 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go
index db99fd0c99..cfdea854ff 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go
index 7b6c2c87e6..244a3c7618 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go
index 0f4cc3b528..e891adc3a8 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go
index 7baea87c7b..f48beb0917 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
index 0d69ce6b52..44a3faf77f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go
index 41572c26e4..1563752dd5 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go
@@ -266,6 +266,17 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout))
+ n = int(r0)
+ if e1 != 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Madvise(b []byte, behav int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
index 98b2665500..1d45276498 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
@@ -29,6 +29,7 @@ import (
//go:cgo_import_dynamic libc___major __major "libc.so"
//go:cgo_import_dynamic libc___minor __minor "libc.so"
//go:cgo_import_dynamic libc_ioctl ioctl "libc.so"
+//go:cgo_import_dynamic libc_poll poll "libc.so"
//go:cgo_import_dynamic libc_access access "libc.so"
//go:cgo_import_dynamic libc_adjtime adjtime "libc.so"
//go:cgo_import_dynamic libc_chdir chdir "libc.so"
@@ -153,6 +154,7 @@ import (
//go:linkname proc__major libc___major
//go:linkname proc__minor libc___minor
//go:linkname procioctl libc_ioctl
+//go:linkname procpoll libc_poll
//go:linkname procAccess libc_access
//go:linkname procAdjtime libc_adjtime
//go:linkname procChdir libc_chdir
@@ -278,6 +280,7 @@ var (
proc__major,
proc__minor,
procioctl,
+ procpoll,
procAccess,
procAdjtime,
procChdir,
@@ -557,6 +560,15 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
return
}
+func poll(fds *PollFd, nfds int, timeout int) (n int, err error) {
+ r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procpoll)), 3, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout), 0, 0, 0)
+ n = int(r0)
+ if e1 != 0 {
+ err = e1
+ }
+ return
+}
+
func Access(path string, mode uint32) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go
index e61d78a54f..4667c7b277 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go
@@ -460,3 +460,22 @@ const (
AT_SYMLINK_FOLLOW = 0x40
AT_SYMLINK_NOFOLLOW = 0x20
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go
index 2619155ff8..3f33b18fc7 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go
@@ -470,3 +470,22 @@ const (
AT_SYMLINK_FOLLOW = 0x40
AT_SYMLINK_NOFOLLOW = 0x20
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go
index 4dca0d4db2..463a28ba6f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go
@@ -461,3 +461,22 @@ const (
AT_SYMLINK_FOLLOW = 0x40
AT_SYMLINK_NOFOLLOW = 0x20
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go
index f2881fd142..1ec20a0025 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go
@@ -1,6 +1,7 @@
+// cgo -godefs types_darwin.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
+
// +build arm64,darwin
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_darwin.go
package unix
@@ -469,3 +470,22 @@ const (
AT_SYMLINK_FOLLOW = 0x40
AT_SYMLINK_NOFOLLOW = 0x20
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go
index 67c6bf883c..ab515c3e1a 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go
@@ -446,3 +446,22 @@ const (
AT_FDCWD = 0xfffafdcd
AT_SYMLINK_NOFOLLOW = 0x1
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go
index 5b28bcbbac..18f7816009 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go
@@ -516,6 +516,26 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLINIGNEOF = 0x2000
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type CapRights struct {
Rights [2]uint64
}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go
index c65d89e497..dd0db2a5ea 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go
@@ -519,6 +519,26 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLINIGNEOF = 0x2000
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type CapRights struct {
Rights [2]uint64
}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go
index 42c0a502cf..473d3dcf08 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go
@@ -519,6 +519,26 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLINIGNEOF = 0x2000
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type CapRights struct {
Rights [2]uint64
}
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_386.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_386.go
index 8b30c69975..c6de94269d 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_386.go
@@ -621,12 +621,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go
index cf03589862..4ea42dfc2e 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go
@@ -637,12 +637,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go
index 8ef7d85f17..f86d683882 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go
@@ -609,12 +609,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]uint8
- Nodename [65]uint8
- Release [65]uint8
- Version [65]uint8
- Machine [65]uint8
- Domainname [65]uint8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go
index 3110268673..45c10b7429 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go
@@ -615,12 +615,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go
index d2c1bc2c83..4cc0a1c91f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go
@@ -614,12 +614,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go
index ec7a0cd275..d9df08789f 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go
@@ -618,12 +618,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go
index bbe08d7db7..15e6b4b4b1 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go
@@ -618,12 +618,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go
index 75ee05ab47..b6c2d32dd8 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go
@@ -614,12 +614,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go
index 30a257f83c..3803e1062b 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go
@@ -625,12 +625,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]uint8
- Nodename [65]uint8
- Release [65]uint8
- Version [65]uint8
- Machine [65]uint8
- Domainname [65]uint8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go
index bebed6f11c..7ef31fe213 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go
@@ -625,12 +625,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]uint8
- Nodename [65]uint8
- Release [65]uint8
- Version [65]uint8
- Machine [65]uint8
- Domainname [65]uint8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go
index 286661b35b..cb194f4717 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go
@@ -642,12 +642,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go
index 22bdab9614..9dbbb1ce52 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go
@@ -601,12 +601,12 @@ type Sysinfo_t struct {
}
type Utsname struct {
- Sysname [65]int8
- Nodename [65]int8
- Release [65]int8
- Version [65]int8
- Machine [65]int8
- Domainname [65]int8
+ Sysname [65]byte
+ Nodename [65]byte
+ Release [65]byte
+ Version [65]byte
+ Machine [65]byte
+ Domainname [65]byte
}
type Ustat_t struct {
@@ -652,8 +652,6 @@ type Sigset_t struct {
X__val [16]uint64
}
-const _SC_PAGESIZE = 0x1e
-
type Termios struct {
Iflag uint32
Oflag uint32
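For context, the Utsname changes above switch the field arrays from [65]int8 / [65]uint8 to [65]byte, so callers can slice the fields into strings directly. A minimal sketch of that usage on Linux; the trimming approach is illustrative and not taken from this patch:

    package main

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var uts unix.Utsname
    	if err := unix.Uname(&uts); err != nil {
    		fmt.Println("uname:", err)
    		return
    	}
    	// With the fields declared as [65]byte, the NUL-terminated values can be
    	// converted to strings without an element-by-element int8 conversion.
    	sysname := string(bytes.TrimRight(uts.Sysname[:], "\x00"))
    	release := string(bytes.TrimRight(uts.Release[:], "\x00"))
    	fmt.Println(sysname, release)
    }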
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go
index 42f99c0a30..dfe446bff7 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_netbsd.go
+// cgo -godefs types_netbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build 386,netbsd
@@ -387,6 +387,25 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type Sysctlnode struct {
Flags uint32
Num int32
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go
index ff290ba069..1498c23c22 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_netbsd.go
+// cgo -godefs types_netbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build amd64,netbsd
@@ -394,6 +394,25 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type Sysctlnode struct {
Flags uint32
Num int32
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go
index 66dbd7c050..d6711ce170 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_netbsd.go
+// cgo -godefs types_netbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build arm,netbsd
@@ -392,6 +392,25 @@ const (
AT_SYMLINK_NOFOLLOW = 0x200
)
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
+
type Sysctlnode struct {
Flags uint32
Num int32
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go
index 20fc9f450c..af295c3d0c 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_openbsd.go
+// cgo -godefs types_openbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build 386,openbsd
@@ -444,3 +444,22 @@ const (
AT_FDCWD = -0x64
AT_SYMLINK_NOFOLLOW = 0x2
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go
index 46fe9490c8..ae153e70c0 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_openbsd.go
+// cgo -godefs types_openbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build amd64,openbsd
@@ -451,3 +451,22 @@ const (
AT_FDCWD = -0x64
AT_SYMLINK_NOFOLLOW = 0x2
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go
index 62e1f7c04d..35bb6195bf 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go
@@ -1,5 +1,5 @@
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs types_openbsd.go
+// cgo -godefs types_openbsd.go | go run mkpost.go
+// Code generated by the command above; see README.md. DO NOT EDIT.
// +build arm,openbsd
@@ -437,3 +437,22 @@ const (
AT_FDCWD = -0x64
AT_SYMLINK_NOFOLLOW = 0x2
)
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
diff --git a/components/engine/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go b/components/engine/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go
index a979a33d51..d445452486 100644
--- a/components/engine/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go
@@ -263,11 +263,11 @@ type FdSet struct {
}
type Utsname struct {
- Sysname [257]int8
- Nodename [257]int8
- Release [257]int8
- Version [257]int8
- Machine [257]int8
+ Sysname [257]byte
+ Nodename [257]byte
+ Release [257]byte
+ Version [257]byte
+ Machine [257]byte
}
type Ustat_t struct {
@@ -438,3 +438,22 @@ type Winsize struct {
Xpixel uint16
Ypixel uint16
}
+
+type PollFd struct {
+ Fd int32
+ Events int16
+ Revents int16
+}
+
+const (
+ POLLERR = 0x8
+ POLLHUP = 0x10
+ POLLIN = 0x1
+ POLLNVAL = 0x20
+ POLLOUT = 0x4
+ POLLPRI = 0x2
+ POLLRDBAND = 0x80
+ POLLRDNORM = 0x40
+ POLLWRBAND = 0x100
+ POLLWRNORM = 0x4
+)
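Taken together, the generated poll stubs, the PollFd type, and the POLL* constants added above back the package's exported Poll helper. A minimal sketch of how a caller might wait for a descriptor to become readable, assuming the exported Poll wrapper is present in this vendored revision:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	// Wait up to one second for stdin (fd 0) to become readable.
    	fds := []unix.PollFd{{Fd: 0, Events: unix.POLLIN}}
    	n, err := unix.Poll(fds, 1000) // timeout in milliseconds
    	if err != nil {
    		fmt.Println("poll:", err)
    		return
    	}
    	if n > 0 && fds[0].Revents&unix.POLLIN != 0 {
    		fmt.Println("stdin is readable")
    	} else {
    		fmt.Println("timed out")
    	}
    }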
diff --git a/components/engine/vendor/golang.org/x/sys/windows/dll_windows.go b/components/engine/vendor/golang.org/x/sys/windows/dll_windows.go
index e77a370550..e92c05b213 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/dll_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/dll_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -116,7 +116,7 @@ func (p *Proc) Addr() uintptr {
//go:uintptrescapes
-// Call executes procedure p with arguments a. It will panic, if more then 15 arguments
+// Call executes procedure p with arguments a. It will panic, if more than 15 arguments
// are supplied.
//
// The returned error is always non-nil, constructed from the result of GetLastError.
@@ -289,6 +289,7 @@ func (p *LazyProc) mustFind() {
// Addr returns the address of the procedure represented by p.
// The return value can be passed to Syscall to run the procedure.
+// It will panic if the procedure cannot be found.
func (p *LazyProc) Addr() uintptr {
p.mustFind()
return p.proc.Addr()
@@ -296,8 +297,8 @@ func (p *LazyProc) Addr() uintptr {
//go:uintptrescapes
-// Call executes procedure p with arguments a. It will panic, if more then 15 arguments
-// are supplied.
+// Call executes procedure p with arguments a. It will panic, if more than 15 arguments
+// are supplied. It will also panic if the procedure cannot be found.
//
// The returned error is always non-nil, constructed from the result of GetLastError.
// Callers must inspect the primary return value to decide whether an error occurred
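As the updated doc comments note, LazyProc.Addr and LazyProc.Call panic when the procedure cannot be found; callers that want to degrade gracefully can probe with Find first. A small sketch on Windows (GetTickCount64 is just an illustrative procedure, not something this patch touches):

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/windows"
    )

    func main() {
    	k32 := windows.NewLazySystemDLL("kernel32.dll")
    	proc := k32.NewProc("GetTickCount64")
    	// Find reports an error instead of panicking the way Addr/Call do when
    	// the procedure is missing.
    	if err := proc.Find(); err != nil {
    		fmt.Println("procedure not available:", err)
    		return
    	}
    	ticks, _, _ := proc.Call()
    	fmt.Println("milliseconds since boot:", ticks)
    }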
diff --git a/components/engine/vendor/golang.org/x/sys/windows/env_unset.go b/components/engine/vendor/golang.org/x/sys/windows/env_unset.go
index 4ed03aeefc..b712c6604a 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/env_unset.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/env_unset.go
@@ -1,4 +1,4 @@
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/env_windows.go b/components/engine/vendor/golang.org/x/sys/windows/env_windows.go
index a9d8ef4b7d..e8292386c0 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/env_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/env_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/memory_windows.go b/components/engine/vendor/golang.org/x/sys/windows/memory_windows.go
index f63e899acb..f80a4204f0 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/memory_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/memory_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2017 The Go Authors. All rights reserved.
+// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/mksyscall.go b/components/engine/vendor/golang.org/x/sys/windows/mksyscall.go
index e1c88c9c71..fb7db0ef8d 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/mksyscall.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/mksyscall.go
@@ -1,4 +1,4 @@
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/race.go b/components/engine/vendor/golang.org/x/sys/windows/race.go
index 343e18ab69..a74e3e24b5 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/race.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/race.go
@@ -1,4 +1,4 @@
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/race0.go b/components/engine/vendor/golang.org/x/sys/windows/race0.go
index 17af843b91..e44a3cbf67 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/race0.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/race0.go
@@ -1,4 +1,4 @@
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/security_windows.go b/components/engine/vendor/golang.org/x/sys/windows/security_windows.go
index ca09bdd701..d8e7ff2ec5 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/security_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/security_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/svc/go12.go b/components/engine/vendor/golang.org/x/sys/windows/svc/go12.go
index 6f0a924eaf..cd8b913c99 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/svc/go12.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/svc/go12.go
@@ -1,4 +1,4 @@
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/svc/go13.go b/components/engine/vendor/golang.org/x/sys/windows/svc/go13.go
index 432a9e796a..9d7f3cec54 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/svc/go13.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/svc/go13.go
@@ -1,4 +1,4 @@
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/syscall.go b/components/engine/vendor/golang.org/x/sys/windows/syscall.go
index 4e2fbe86e2..b07bc2305d 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/syscall.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/syscall.go
@@ -5,10 +5,10 @@
// +build windows
// Package windows contains an interface to the low-level operating system
-// primitives. OS details vary depending on the underlying system, and
+// primitives. OS details vary depending on the underlying system, and
// by default, godoc will display the OS-specific documentation for the current
-// system. If you want godoc to display syscall documentation for another
-// system, set $GOOS and $GOARCH to the desired system. For example, if
+// system. If you want godoc to display syscall documentation for another
+// system, set $GOOS and $GOARCH to the desired system. For example, if
// you want to view documentation for freebsd/arm on linux/amd64, set $GOOS
// to freebsd and $GOARCH to arm.
// The primary use of this package is inside other packages that provide a more
diff --git a/components/engine/vendor/golang.org/x/sys/windows/syscall_windows.go b/components/engine/vendor/golang.org/x/sys/windows/syscall_windows.go
index acd06e3693..bb778dbd2e 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/syscall_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/syscall_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/types_windows.go b/components/engine/vendor/golang.org/x/sys/windows/types_windows.go
index 401a5f2d9a..0229f79cfc 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/types_windows.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/types_windows.go
@@ -1,4 +1,4 @@
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/types_windows_386.go b/components/engine/vendor/golang.org/x/sys/windows/types_windows_386.go
index 10f33be0b7..fe0ddd0316 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/types_windows_386.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/types_windows_386.go
@@ -1,4 +1,4 @@
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/vendor/golang.org/x/sys/windows/types_windows_amd64.go b/components/engine/vendor/golang.org/x/sys/windows/types_windows_amd64.go
index 3f272c2499..7e154c2df2 100644
--- a/components/engine/vendor/golang.org/x/sys/windows/types_windows_amd64.go
+++ b/components/engine/vendor/golang.org/x/sys/windows/types_windows_amd64.go
@@ -1,4 +1,4 @@
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/components/engine/volume/drivers/extpoint.go b/components/engine/volume/drivers/extpoint.go
index ee42f2f5ed..c360b37a2b 100644
--- a/components/engine/volume/drivers/extpoint.go
+++ b/components/engine/volume/drivers/extpoint.go
@@ -11,6 +11,7 @@ import (
getter "github.com/docker/docker/pkg/plugingetter"
"github.com/docker/docker/volume"
"github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
)
// currently created by hand. generation tool would generate this like:
@@ -130,6 +131,12 @@ func lookup(name string, mode int) (volume.Driver, error) {
d := NewVolumeDriver(p.Name(), p.BasePath(), p.Client())
if err := validateDriver(d); err != nil {
+ if mode > 0 {
+ // Undo any reference count changes from the initial `Get`
+ if _, err := drivers.plugingetter.Get(name, extName, mode*-1); err != nil {
+ logrus.WithError(err).WithField("action", "validate-driver").WithField("plugin", name).Error("error releasing reference to plugin")
+ }
+ }
return nil, err
}
@@ -169,9 +176,9 @@ func CreateDriver(name string) (volume.Driver, error) {
return lookup(name, getter.Acquire)
}
-// RemoveDriver returns a volume driver by its name and decrements RefCount..
+// ReleaseDriver returns a volume driver by its name and decrements RefCount.
// If the driver is empty, it looks for the local driver.
-func RemoveDriver(name string) (volume.Driver, error) {
+func ReleaseDriver(name string) (volume.Driver, error) {
if name == "" {
name = volume.DefaultDriverName
}
diff --git a/components/engine/volume/local/local_unix.go b/components/engine/volume/local/local_unix.go
index 5bba5b7068..6226955717 100644
--- a/components/engine/volume/local/local_unix.go
+++ b/components/engine/volume/local/local_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
// Package local provides the default implementation for volumes. It
// is used to mount data volume containers and directories local to
diff --git a/components/engine/volume/store/store.go b/components/engine/volume/store/store.go
index e47ec0e7da..fd1ca616ca 100644
--- a/components/engine/volume/store/store.go
+++ b/components/engine/volume/store/store.go
@@ -145,7 +145,7 @@ func (s *VolumeStore) Purge(name string) {
s.globalLock.Lock()
v, exists := s.names[name]
if exists {
- if _, err := volumedrivers.RemoveDriver(v.DriverName()); err != nil {
+ if _, err := volumedrivers.ReleaseDriver(v.DriverName()); err != nil {
logrus.Errorf("Error dereferencing volume driver: %v", err)
}
}
@@ -385,32 +385,37 @@ func (s *VolumeStore) create(name, driverName string, opts, labels map[string]st
}
if v != nil {
- return v, nil
+ // There is an existing volume; if we already have it stored locally, return it.
+ // TODO: there could be some inconsistent details such as labels here
+ if vv, _ := s.getNamed(v.Name()); vv != nil {
+ return vv, nil
+ }
}
// Since there isn't a specified driver name, let's see if any of the existing drivers have this volume name
if driverName == "" {
- v, _ := s.getVolume(name)
+ v, _ = s.getVolume(name)
if v != nil {
return v, nil
}
}
vd, err := volumedrivers.CreateDriver(driverName)
-
if err != nil {
return nil, &OpErr{Op: "create", Name: name, Err: err}
}
logrus.Debugf("Registering new volume reference: driver %q, name %q", vd.Name(), name)
+ if v, _ = vd.Get(name); v == nil {
+ v, err = vd.Create(name, opts)
+ if err != nil {
+ if _, err := volumedrivers.ReleaseDriver(driverName); err != nil {
+ logrus.WithError(err).WithField("driver", driverName).Error("Error releasing reference to volume driver")
+ }
+ return nil, err
+ }
+ }
- if v, _ := vd.Get(name); v != nil {
- return v, nil
- }
- v, err = vd.Create(name, opts)
- if err != nil {
- return nil, err
- }
s.globalLock.Lock()
s.labels[name] = labels
s.options[name] = opts
diff --git a/components/engine/volume/store/store_test.go b/components/engine/volume/store/store_test.go
index f5f00255a1..7d5294043d 100644
--- a/components/engine/volume/store/store_test.go
+++ b/components/engine/volume/store/store_test.go
@@ -2,11 +2,14 @@ package store
import (
"errors"
+ "fmt"
"io/ioutil"
+ "net"
"os"
"strings"
"testing"
+ "github.com/docker/docker/volume"
"github.com/docker/docker/volume/drivers"
volumetestutils "github.com/docker/docker/volume/testutils"
)
@@ -232,3 +235,88 @@ func TestDerefMultipleOfSameRef(t *testing.T) {
t.Fatal(err)
}
}
+
+func TestCreateKeepOptsLabelsWhenExistsRemotely(t *testing.T) {
+ vd := volumetestutils.NewFakeDriver("fake")
+ volumedrivers.Register(vd, "fake")
+ dir, err := ioutil.TempDir("", "test-same-deref")
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer os.RemoveAll(dir)
+ s, err := New(dir)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // Create a volume in the driver directly
+ if _, err := vd.Create("foo", nil); err != nil {
+ t.Fatal(err)
+ }
+
+ v, err := s.Create("foo", "fake", nil, map[string]string{"hello": "world"})
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ switch dv := v.(type) {
+ case volume.DetailedVolume:
+ if dv.Labels()["hello"] != "world" {
+ t.Fatalf("labels don't match")
+ }
+ default:
+ t.Fatalf("got unexpected type: %T", v)
+ }
+}
+
+func TestDefererencePluginOnCreateError(t *testing.T) {
+ var (
+ l net.Listener
+ err error
+ )
+
+ for i := 32768; l == nil && i < 40000; i++ {
+ l, err = net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", i))
+ }
+ if l == nil {
+ t.Fatalf("could not create listener: %v", err)
+ }
+ defer l.Close()
+
+ d := volumetestutils.NewFakeDriver("TestDefererencePluginOnCreateError")
+ p, err := volumetestutils.MakeFakePlugin(d, l)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ pg := volumetestutils.NewFakePluginGetter(p)
+ volumedrivers.RegisterPluginGetter(pg)
+
+ dir, err := ioutil.TempDir("", "test-plugin-deref-err")
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer os.RemoveAll(dir)
+
+ s, err := New(dir)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // create a good volume so we have a plugin reference
+ _, err = s.Create("fake1", d.Name(), nil, nil)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // Now create another one expecting an error
+ _, err = s.Create("fake2", d.Name(), map[string]string{"error": "some error"}, nil)
+ if err == nil || !strings.Contains(err.Error(), "some error") {
+ t.Fatalf("expected an error on create: %v", err)
+ }
+
+ // There should be only 1 plugin reference
+ if refs := volumetestutils.FakeRefs(p); refs != 1 {
+ t.Fatalf("expected 1 plugin reference, got: %d", refs)
+ }
+}
diff --git a/components/engine/volume/store/store_unix.go b/components/engine/volume/store/store_unix.go
index c024abbf9a..065cb28eb8 100644
--- a/components/engine/volume/store/store_unix.go
+++ b/components/engine/volume/store/store_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd solaris
+// +build linux freebsd
package store
diff --git a/components/engine/volume/testutils/testutils.go b/components/engine/volume/testutils/testutils.go
index 359d923822..84ab55ff77 100644
--- a/components/engine/volume/testutils/testutils.go
+++ b/components/engine/volume/testutils/testutils.go
@@ -1,9 +1,15 @@
package testutils
import (
+ "encoding/json"
+ "errors"
"fmt"
+ "net"
+ "net/http"
"time"
+ "github.com/docker/docker/pkg/plugingetter"
+ "github.com/docker/docker/pkg/plugins"
"github.com/docker/docker/volume"
)
@@ -121,3 +127,99 @@ func (d *FakeDriver) Get(name string) (volume.Volume, error) {
func (*FakeDriver) Scope() string {
return "local"
}
+
+type fakePlugin struct {
+ client *plugins.Client
+ name string
+ refs int
+}
+
+// MakeFakePlugin creates a fake plugin from the passed-in driver.
+// Note: currently only "Create" is implemented because that's all that's needed
+// so far. If you need it to test something else, add it here, but probably you
+// shouldn't need to use this except for very specific cases with v2 plugin handling.
+func MakeFakePlugin(d volume.Driver, l net.Listener) (plugingetter.CompatPlugin, error) {
+ c, err := plugins.NewClient(l.Addr().Network()+"://"+l.Addr().String(), nil)
+ if err != nil {
+ return nil, err
+ }
+ mux := http.NewServeMux()
+
+ mux.HandleFunc("/VolumeDriver.Create", func(w http.ResponseWriter, r *http.Request) {
+ createReq := struct {
+ Name string
+ Opts map[string]string
+ }{}
+ if err := json.NewDecoder(r.Body).Decode(&createReq); err != nil {
+ fmt.Fprintf(w, `{"Err": "%s"}`, err.Error())
+ return
+ }
+ _, err := d.Create(createReq.Name, createReq.Opts)
+ if err != nil {
+ fmt.Fprintf(w, `{"Err": "%s"}`, err.Error())
+ return
+ }
+ w.Write([]byte("{}"))
+ })
+
+ go http.Serve(l, mux)
+ return &fakePlugin{client: c, name: d.Name()}, nil
+}
+
+func (p *fakePlugin) Client() *plugins.Client {
+ return p.client
+}
+
+func (p *fakePlugin) Name() string {
+ return p.name
+}
+
+func (p *fakePlugin) IsV1() bool {
+ return false
+}
+
+func (p *fakePlugin) BasePath() string {
+ return ""
+}
+
+type fakePluginGetter struct {
+ plugins map[string]plugingetter.CompatPlugin
+}
+
+// NewFakePluginGetter returns a plugin getter for fake plugins
+func NewFakePluginGetter(pls ...plugingetter.CompatPlugin) plugingetter.PluginGetter {
+ idx := make(map[string]plugingetter.CompatPlugin, len(pls))
+ for _, p := range pls {
+ idx[p.Name()] = p
+ }
+ return &fakePluginGetter{plugins: idx}
+}
+
+// Get ignores the second argument since we only care about volume drivers here;
+// there shouldn't be any other kind of plugin in here.
+func (g *fakePluginGetter) Get(name, _ string, mode int) (plugingetter.CompatPlugin, error) {
+ p, ok := g.plugins[name]
+ if !ok {
+ return nil, errors.New("not found")
+ }
+ p.(*fakePlugin).refs += mode
+ return p, nil
+}
+
+func (g *fakePluginGetter) GetAllByCap(capability string) ([]plugingetter.CompatPlugin, error) {
+ panic("GetAllByCap shouldn't be called")
+}
+
+func (g *fakePluginGetter) GetAllManagedPluginsByCap(capability string) []plugingetter.CompatPlugin {
+ panic("GetAllManagedPluginsByCap should not be called")
+}
+
+func (g *fakePluginGetter) Handle(capability string, callback func(string, *plugins.Client)) {
+ panic("Handle should not be called")
+}
+
+// FakeRefs checks ref count on a fake plugin.
+func FakeRefs(p plugingetter.CompatPlugin) int {
+ // this should panic if something other than a `*fakePlugin` is passed in
+ return p.(*fakePlugin).refs
+}
diff --git a/components/engine/volume/volume_unix.go b/components/engine/volume/volume_unix.go
index 0968fe37e1..1cb9317e7a 100644
--- a/components/engine/volume/volume_unix.go
+++ b/components/engine/volume/volume_unix.go
@@ -1,4 +1,4 @@
-// +build linux freebsd darwin solaris
+// +build linux freebsd darwin
package volume