Compare commits
216 Commits
v17.06.0-c
...
v17.06.1-c
| SHA1 | | | |
|---|---|---|---|
| 77b4dce066 | |||
| 9cb6e86615 | |||
| 4581021c2e | |||
| 70709d0056 | |||
| 54ea1fe30f | |||
| 06dcd93778 | |||
| 1d2cd8d7ab | |||
| c257617b99 | |||
| 06f434c72b | |||
| 1d1b9ad4db | |||
| ab734f59cc | |||
| b538302e2c | |||
| 021cd82205 | |||
| 3c4292da1e | |||
| 10aa812ef5 | |||
| 9795a158bd | |||
| fb319d1554 | |||
| 3d947e7a2d | |||
| 31875af469 | |||
| 136b90fbd0 | |||
| 9241ec7a1e | |||
| aaf863d88a | |||
| 2d8beb7256 | |||
| 7a50b06fea | |||
| 2e99eed6f1 | |||
| cb9b544edf | |||
| 1e1c6b20ef | |||
| dfe4a1d9f2 | |||
| abfc72adff | |||
| b5f090a4fe | |||
| 2206628703 | |||
| 068a4285c6 | |||
| f1d6145f7b | |||
| 0aba54445b | |||
| 75ef04b420 | |||
| 5c477f0e07 | |||
| b2eb1339dc | |||
| 4ff004bf0d | |||
| b0efd8dda5 | |||
| 8cf887b304 | |||
| 5b4a4cc549 | |||
| 8e17167c78 | |||
| fd95ab90ad | |||
| 8498135904 | |||
| 15c737a076 | |||
| c1383568b1 | |||
| 3d418e35c9 | |||
| 1c96c7a5ec | |||
| 224f23149e | |||
| 8f5b746fdd | |||
| 9f5039a8d0 | |||
| 482d9bdee0 | |||
| f6b5934f71 | |||
| 7fe174d3f0 | |||
| 425242f524 | |||
| 78debca270 | |||
| 38f7bacec2 | |||
| bac91bd031 | |||
| 371fed7778 | |||
| 3218b57038 | |||
| 73b8fa7bc6 | |||
| 4295f109dc | |||
| 2036e34692 | |||
| 4ff90f5cb1 | |||
| af26684564 | |||
| f7a9f9fb67 | |||
| 647b4aecce | |||
| 056540a3f1 | |||
| 9abebe3dd0 | |||
| 149c1a0435 | |||
| 2cf204719f | |||
| b237329009 | |||
| 7c374afda6 | |||
| d4fe7965d2 | |||
| ee07daead1 | |||
| 84d2a56ff3 | |||
| e3d07855d5 | |||
| 1d69626e30 | |||
| 1c48508832 | |||
| 345e74062e | |||
| 6a52ac6c49 | |||
| b0d4910c7e | |||
| 73badb5bc7 | |||
| c61737080b | |||
| 728c4a5bfe | |||
| e529da3b02 | |||
| 7bd7bdeada | |||
| f9cb0563ec | |||
| dee078ae96 | |||
| 9f64b19d32 | |||
| 8be8aabad2 | |||
| db292393e5 | |||
| 067a30c4e7 | |||
| 87a974623f | |||
| 694bae5d49 | |||
| f8f518c6b7 | |||
| 3dc403d562 | |||
| a84cf88e26 | |||
| fbc1b3070d | |||
| 5595be60a5 | |||
| d0546cf484 | |||
| 02c1d87617 | |||
| f6937cf5b5 | |||
| 132f614ef3 | |||
| b7e417335c | |||
| bb8c1249be | |||
| 19db96873d | |||
| 5303070e05 | |||
| 672ea2ab2a | |||
| eb990142ac | |||
| 55d38f9cd6 | |||
| 29e8b830bb | |||
| 2df40e7778 | |||
| ef8fe31895 | |||
| 77c3cf33ae | |||
| 35e513d55c | |||
| cb764708de | |||
| 5578ebd2a2 | |||
| bb23c93d28 | |||
| 073d80bc46 | |||
| 91fd299b2c | |||
| a0040598f2 | |||
| 7219fddea4 | |||
| 29fcd5dfae | |||
| 9600fb4e78 | |||
| 0f80f28c1a | |||
| 29e7f7b8c9 | |||
| 4c2cf22362 | |||
| afcb08cfb0 | |||
| 46a9d6b041 | |||
| b8c7116a23 | |||
| b089783b38 | |||
| 605ffe91cf | |||
| eb837fd7bc | |||
| 475c218fd7 | |||
| f2eb8b825b | |||
| 1891675559 | |||
| 7953dbc649 | |||
| 5c94f3eeda | |||
| d96b9b7edf | |||
| be1ea9771c | |||
| 5d8e13a06c | |||
| d611e35db6 | |||
| 85aaa4f261 | |||
| 56ecfabb9f | |||
| 97a704b2a7 | |||
| a16a6004a8 | |||
| 17c2a50117 | |||
| 5e671f7b53 | |||
| 986443bad2 | |||
| eaac30972d | |||
| 6f00f5603d | |||
| ad098a7a03 | |||
| 9c1f95529e | |||
| 8c2c3e21a3 | |||
| da2d17a18f | |||
| 8ef3d608d9 | |||
| 514d89f021 | |||
| 0189f9d259 | |||
| ca7df6fe42 | |||
| 47c1b00158 | |||
| f7e9f2cefa | |||
| 960ddece48 | |||
| 5a5e9e32bd | |||
| eff2539693 | |||
| e92c400a01 | |||
| 5d4843baa3 | |||
| cfb8aafe5e | |||
| 4e1370233e | |||
| 091732e350 | |||
| 1af533909d | |||
| a014274b80 | |||
| 7b65e51031 | |||
| ab5798a6f7 | |||
| 899ed1b641 | |||
| a9a1a9c7de | |||
| 5ed0b2e10c | |||
| 7950a39242 | |||
| 5bd868da90 | |||
| e142124317 | |||
| 0fa9fe7713 | |||
| 69fc734572 | |||
| 5e405a94e8 | |||
| 897b692e1c | |||
| d8fc4f1ad2 | |||
| af4ae65a0d | |||
| c8fa12c15c | |||
| 36f4ffb042 | |||
| 0cbae9b2ee | |||
| dcd1f685c8 | |||
| 05648d36b5 | |||
| e1ebcf33f6 | |||
| 4ea6f802a5 | |||
| be7a008421 | |||
| fcd821892e | |||
| 76b0c4884b | |||
| 1f928815e5 | |||
| ad8c94e585 | |||
| 058e676c7f | |||
| 11637f7d81 | |||
| 5965f0216f | |||
| 3910d5b571 | |||
| 03d8258b7d | |||
| 1c2ce3d977 | |||
| eb4ef82087 | |||
| d09575fb8f | |||
| 13934b618c | |||
| e9b6305e1d | |||
| 94f4d72c55 | |||
| 765e46f7cc | |||
| f15c4fd160 | |||
| 66d168dd55 | |||
| 59d6ed0a4d | |||
| 14ff8286f7 | |||
| f156aa746e | |||
| 5c5cc1fccd |
68
CHANGELOG.md
@@ -5,7 +5,55 @@ information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ where target removal dates can also
be found.

## 17.06.0-ce (2017-06-15)
## 17.06.1-ce (2017-07-XX)

### Builder

* Fix a regression where `ADD` from remote URLs extracted archives [#89](https://github.com/docker/docker-ce/pull/89)
* Fix handling of remote "git@" notation [#100](https://github.com/docker/docker-ce/pull/100)

### Plugins

* Make plugin removal more resilient to failure [#91](https://github.com/docker/docker-ce/pull/91)

### Logging

* Fix stderr logging for journald and syslog [#95](https://github.com/docker/docker-ce/pull/95)
* Fix an issue where log readers could block writes indefinitely [#98](https://github.com/docker/docker-ce/pull/98)

### Runtime

* Prevent a goroutine leak when a healthcheck gets stopped [#90](https://github.com/docker/docker-ce/pull/90)
* Do not error on relabel when relabel is not supported [#92](https://github.com/docker/docker-ce/pull/92)
* Limit the max backoff delay to 2 seconds for the GRPC connection [#94](https://github.com/docker/docker-ce/pull/94)
* Fix an issue preventing containers from running when a memory cgroup was specified, due to a bug in certain kernels [#102](https://github.com/docker/docker-ce/pull/102)
* Fix containers not responding to SIGKILL when paused [#102](https://github.com/docker/docker-ce/pull/102)
* Improve the error message if an image for an incompatible OS is loaded [#108](https://github.com/docker/docker-ce/pull/108)
* Fix a handle leak in go-winio [#112](https://github.com/docker/docker-ce/pull/112)

### Client

* Make pruning volumes optional when running `docker system prune`, and add a `--volumes` flag [#109](https://github.com/docker/docker-ce/pull/109)
* Show progress of replicated tasks before they are assigned [#97](https://github.com/docker/docker-ce/pull/97)
* Fix `docker wait` hanging if the container does not exist [#106](https://github.com/docker/docker-ce/pull/106)
* If `docker swarm ca` is called without the `--rotate` flag, warn if other flags are passed [#110](https://github.com/docker/docker-ce/pull/110)
* Fix API version negotiation not working if the daemon returns an error [#115](https://github.com/docker/docker-ce/pull/115)
### Security

* Redact secret data on "secret create" [#99](https://github.com/docker/docker-ce/pull/99)

### Swarm Mode

* Do not add duplicate platform information to the service spec [#107](https://github.com/docker/docker-ce/pull/107)
* Cluster update and memory issue fixes [#114](https://github.com/docker/docker-ce/pull/114)

## 17.06.0-ce (2017-06-19)

**NOTE**: Docker 17.06 by default disables communication with legacy (v1) registries. If you
require interaction with registries that have not yet migrated to the v2 protocol, set the
`--disable-legacy-registry=false` daemon option. Interaction with v1 registries will be removed
in Docker 17.12.

### Builder
@@ -27,6 +75,10 @@ be found.
+ Add support for csv format options to `--network` and `--network-add` [#docker/cli/62](https://github.com/docker/cli/pull/62) [#33130](https://github.com/moby/moby/pull/33130)
- Fix stack compose bind-mount volumes on Windows [#docker/cli/136](https://github.com/docker/cli/pull/136)
- Correctly handle a Docker daemon without registry info [#docker/cli/126](https://github.com/docker/cli/pull/126)
+ Allow `--detach` and `--quiet` flags when using `--rollback` [#docker/cli/144](https://github.com/docker/cli/pull/144)
+ Remove the deprecated `--email` flag from `docker login` [#docker/cli/143](https://github.com/docker/cli/pull/143)
* Adjusted `docker stats` memory output [#docker/cli/80](https://github.com/docker/cli/pull/80)

### Distribution

* Select digest over tag when both are provided during a pull [#33214](https://github.com/moby/moby/pull/33214)
@@ -41,6 +93,7 @@ be found.
+ Add support for swarm-mode services with node-local networks such as macvlan, ipvlan, bridge, and host [#32981](https://github.com/moby/moby/pull/32981)
+ Pass driver options to network drivers on service creation [#32981](https://github.com/moby/moby/pull/33130)
+ Isolate Swarm control-plane traffic from application data traffic using `--data-path-addr` [#32717](https://github.com/moby/moby/pull/32717)
* Several improvements to Service Discovery [#docker/libnetwork/1796](https://github.com/docker/libnetwork/pull/1796)

### Packaging
@@ -62,6 +115,12 @@ be found.
+ Add cluster events to the Docker event stream [#32421](https://github.com/moby/moby/pull/32421)
+ Add support for DNS search on Windows [#33311](https://github.com/moby/moby/pull/33311)
* Upgrade to Go 1.8.3 [#33387](https://github.com/moby/moby/pull/33387)
- Prevent a containerd crash when journald is restarted [#containerd/930](https://github.com/containerd/containerd/pull/930)
- Fix healthcheck failures due to invalid environment variables [#33249](https://github.com/moby/moby/pull/33249)
- Prevent a directory from being created in lieu of the daemon socket when a container mounting it is restarted during a shutdown [#30348](https://github.com/moby/moby/pull/33330)
- Prevent a container from being restarted upon stop if its stop signal is set to `SIGKILL` [#33335](https://github.com/moby/moby/pull/33335)
- Ensure log drivers get passed the same filename to both StartLogging and StopLogging endpoints [#33583](https://github.com/moby/moby/pull/33583)
- Remove the daemon data structure dump on `SIGUSR1` to avoid a panic [#33598](https://github.com/moby/moby/pull/33598)

### Security
@@ -77,9 +136,14 @@ be found.
+ Add an API to rotate the swarm CA certificate [#32993](https://github.com/moby/moby/pull/32993)
* Service digest pinning is now handled client side [#32388](https://github.com/moby/moby/pull/32388), [#33239](https://github.com/moby/moby/pull/33239)
+ Placement now also takes the platform into account [#33144](https://github.com/moby/moby/pull/33144)
- Fix missing ipam driver option usage in config-only networks in swarm mode [#docker-ce/21](https://github.com/docker/docker-ce/pull/21)
- Fix a possible hang when joining fails [#docker-ce/19](https://github.com/docker/docker-ce/pull/19)
- Fix an issue preventing an external CA from being accepted [#33341](https://github.com/moby/moby/pull/33341)
- Fix a possible orchestration panic in mixed-version clusters [#swarmkit/2233](https://github.com/docker/swarmkit/pull/2233)
- Avoid assigning duplicate IPs during initialization [#swarmkit/2237](https://github.com/docker/swarmkit/pull/2237)

### Deprecation

* Disable the legacy (v1) registry by default [#33629](https://github.com/moby/moby/pull/33629)

## 17.05.0-ce (2017-05-04)
@@ -1 +1 @@
17.06.0-ce-rc2
17.06.1-ce-rc1
@@ -41,10 +41,6 @@ func NewLoginCommand(dockerCli command.Cli) *cobra.Command {
    flags.StringVarP(&opts.user, "username", "u", "", "Username")
    flags.StringVarP(&opts.password, "password", "p", "", "Password")

    // Deprecated in 1.11: Should be removed in docker 17.06
    flags.StringVarP(&opts.email, "email", "e", "", "Email")
    flags.MarkDeprecated("email", "will be removed in 17.06.")

    return cmd
}
@@ -12,6 +12,10 @@ import (
// ParseSecrets retrieves the secrets with the requested names and fills
// secret IDs into the secret references.
func ParseSecrets(client client.SecretAPIClient, requestedSecrets []*swarmtypes.SecretReference) ([]*swarmtypes.SecretReference, error) {
    if len(requestedSecrets) == 0 {
        return []*swarmtypes.SecretReference{}, nil
    }

    secretRefs := make(map[string]*swarmtypes.SecretReference)
    ctx := context.Background()

@@ -61,6 +65,10 @@ func ParseSecrets(client client.SecretAPIClient, requestedSecrets []*swarmtypes.
// ParseConfigs retrieves the configs from the requested names and converts
// them to config references to use with the spec
func ParseConfigs(client client.ConfigAPIClient, requestedConfigs []*swarmtypes.ConfigReference) ([]*swarmtypes.ConfigReference, error) {
    if len(requestedConfigs) == 0 {
        return []*swarmtypes.ConfigReference{}, nil
    }

    configRefs := make(map[string]*swarmtypes.ConfigReference)
    ctx := context.Background()
@@ -275,7 +275,11 @@ func (u *replicatedProgressUpdater) update(service swarm.Service, tasks []swarm.
                continue
            }
        }
        if _, nodeActive := activeNodes[task.NodeID]; nodeActive {
        if task.NodeID != "" {
            if _, nodeActive := activeNodes[task.NodeID]; nodeActive {
                tasksBySlot[task.Slot] = task
            }
        } else {
            tasksBySlot[task.Slot] = task
        }
    }
@@ -131,7 +131,7 @@ func runUpdate(dockerCli *command.DockerCli, flags *pflag.FlagSet, options *serv
    // Rollback can't be combined with other flags.
    otherFlagsPassed := false
    flags.VisitAll(func(f *pflag.Flag) {
        if f.Name == "rollback" {
        if f.Name == "rollback" || f.Name == "detach" || f.Name == "quiet" {
            return
        }
        if flags.Changed(f.Name) {
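A hedged usage sketch of the relaxation above: `--detach` and `--quiet` may now accompany `--rollback` without tripping the "can't be combined with other flags" check (the service name `web` is hypothetical):

```bash
# Both invocations are accepted after this change; any other flag
# combined with --rollback is still rejected.
docker service update --rollback --detach=false web
docker service update --rollback --quiet web
```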
@@ -4,6 +4,7 @@ import (
    "strings"

    "github.com/docker/cli/cli/compose/convert"
    "github.com/docker/docker/api"
    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/api/types/swarm"
@@ -34,6 +35,13 @@ type fakeClient struct {
    configRemoveFunc func(configID string) error
}

func (cli *fakeClient) ServerVersion(ctx context.Context) (types.Version, error) {
    return types.Version{
        Version:    "docker-dev",
        APIVersion: api.DefaultVersion,
    }, nil
}

func (cli *fakeClient) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {
    if cli.serviceListFunc != nil {
        return cli.serviceListFunc(options)
@@ -7,6 +7,7 @@ import (
    "github.com/docker/cli/cli/command"
    "github.com/docker/cli/cli/compose/convert"
    "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/api/types/versions"
    "github.com/pkg/errors"
    "github.com/spf13/cobra"
    "golang.org/x/net/context"
@@ -14,12 +15,16 @@ import (

const (
    defaultNetworkDriver = "overlay"
    resolveImageAlways   = "always"
    resolveImageChanged  = "changed"
    resolveImageNever    = "never"
)

type deployOptions struct {
    bundlefile       string
    composefile      string
    namespace        string
    resolveImage     string
    sendRegistryAuth bool
    prune            bool
}
@@ -44,12 +49,19 @@ func newDeployCommand(dockerCli command.Cli) *cobra.Command {
    addRegistryAuthFlag(&opts.sendRegistryAuth, flags)
    flags.BoolVar(&opts.prune, "prune", false, "Prune services that are no longer referenced")
    flags.SetAnnotation("prune", "version", []string{"1.27"})
    flags.StringVar(&opts.resolveImage, "resolve-image", resolveImageAlways,
        `Query the registry to resolve image digest and supported platforms ("`+resolveImageAlways+`"|"`+resolveImageChanged+`"|"`+resolveImageNever+`")`)
    flags.SetAnnotation("resolve-image", "version", []string{"1.30"})
    return cmd
}

func runDeploy(dockerCli command.Cli, opts deployOptions) error {
    ctx := context.Background()

    if err := validateResolveImageFlag(dockerCli, &opts); err != nil {
        return err
    }

    switch {
    case opts.bundlefile == "" && opts.composefile == "":
        return errors.Errorf("Please specify either a bundle file (with --bundle-file) or a Compose file (with --compose-file).")
@@ -62,6 +74,20 @@ func runDeploy(dockerCli command.Cli, opts deployOptions) error {
    }
}

// validateResolveImageFlag validates the opts.resolveImage command line option
// and also turns image resolution off if the version is older than 1.30
func validateResolveImageFlag(dockerCli command.Cli, opts *deployOptions) error {
    if opts.resolveImage != resolveImageAlways && opts.resolveImage != resolveImageChanged && opts.resolveImage != resolveImageNever {
        return errors.Errorf("Invalid option %s for flag --resolve-image", opts.resolveImage)
    }
    // client-side image resolution should not be done when the supported
    // server version is older than 1.30
    if versions.LessThan(dockerCli.Client().ClientVersion(), "1.30") {
        opts.resolveImage = resolveImageNever
    }
    return nil
}

// checkDaemonIsSwarmManager does an Info API call to verify that the daemon is
// a swarm manager. This is necessary because we must create networks before we
// create services, but the API call for creating a network does not return a
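The new `--resolve-image` flag added above is easiest to see from the command line. A minimal, hedged sketch (the stack name `mystack` and the compose file name are hypothetical, not taken from this diff):

```bash
# Default behaviour ("always"): query the registry to resolve the image
# digest and supported platforms before creating or updating services.
docker stack deploy --compose-file docker-compose.yml mystack

# Only query the registry when the image reference changed since the last
# deployment, or skip the registry lookup entirely.
docker stack deploy --resolve-image=changed --compose-file docker-compose.yml mystack
docker stack deploy --resolve-image=never --compose-file docker-compose.yml mystack
```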
@@ -87,5 +87,5 @@ func deployBundle(ctx context.Context, dockerCli command.Cli, opts deployOptions
    if err := createNetworks(ctx, dockerCli, namespace, networks); err != nil {
        return err
    }
    return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth)
    return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth, opts.resolveImage)
}
@@ -92,7 +92,7 @@ func deployCompose(ctx context.Context, dockerCli command.Cli, opts deployOption
    if err != nil {
        return err
    }
    return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth)
    return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth, opts.resolveImage)
}

func getServicesDeclaredNetworks(serviceConfigs []composetypes.ServiceConfig) map[string]struct{} {
@@ -282,6 +282,7 @@ func deployServices(
    services map[string]swarm.ServiceSpec,
    namespace convert.Namespace,
    sendAuth bool,
    resolveImage string,
) error {
    apiClient := dockerCli.Client()
    out := dockerCli.Out()
@@ -300,9 +301,9 @@ func deployServices(
        name := namespace.Scope(internalName)

        encodedAuth := ""
        image := serviceSpec.TaskTemplate.ContainerSpec.Image
        if sendAuth {
            // Retrieve encoded auth token from the image reference
            image := serviceSpec.TaskTemplate.ContainerSpec.Image
            encodedAuth, err = command.RetrieveAuthTokenFromImage(ctx, dockerCli, image)
            if err != nil {
                return err
@@ -312,10 +313,12 @@ func deployServices(
        if service, exists := existingServiceMap[name]; exists {
            fmt.Fprintf(out, "Updating service %s (id: %s)\n", name, service.ID)

            updateOpts := types.ServiceUpdateOptions{}
            if sendAuth {
                updateOpts.EncodedRegistryAuth = encodedAuth
            updateOpts := types.ServiceUpdateOptions{EncodedRegistryAuth: encodedAuth}

            if resolveImage == resolveImageAlways || (resolveImage == resolveImageChanged && image != service.Spec.Labels[convert.LabelImage]) {
                updateOpts.QueryRegistry = true
            }

            response, err := apiClient.ServiceUpdate(
                ctx,
                service.ID,
@@ -333,10 +336,13 @@ func deployServices(
        } else {
            fmt.Fprintf(out, "Creating service %s\n", name)

            createOpts := types.ServiceCreateOptions{}
            if sendAuth {
                createOpts.EncodedRegistryAuth = encodedAuth
            createOpts := types.ServiceCreateOptions{EncodedRegistryAuth: encodedAuth}

            // query registry if flag disabling it was not set
            if resolveImage == resolveImageAlways || resolveImage == resolveImageChanged {
                createOpts.QueryRegistry = true
            }

            if _, err := apiClient.ServiceCreate(ctx, serviceSpec, createOpts); err != nil {
                return err
            }
@@ -8,6 +8,7 @@ import (
    "github.com/docker/cli/cli/command"
    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/api/types/versions"
    "github.com/pkg/errors"
    "github.com/spf13/cobra"
    "golang.org/x/net/context"
@@ -55,10 +56,20 @@ func runRemove(dockerCli command.Cli, opts removeOptions) error {
        return err
    }

    configs, err := getStackConfigs(ctx, client, namespace)
    var configs []swarm.Config

    version, err := client.ServerVersion(ctx)
    if err != nil {
        return err
    }
    if versions.LessThan(version.APIVersion, "1.30") {
        fmt.Fprintf(dockerCli.Err(), `WARNING: ignoring "configs" (requires API version 1.30, but the Docker daemon API version is %s)`, version.APIVersion)
    } else {
        configs, err = getStackConfigs(ctx, client, namespace)
        if err != nil {
            return err
        }
    }

    if len(services)+len(networks)+len(secrets)+len(configs) == 0 {
        fmt.Fprintf(dockerCli.Out(), "Nothing found in stack: %s\n", namespace)
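A hedged sketch of the compatibility behaviour introduced above; the stack name and the daemon API version in the warning are hypothetical:

```bash
# Against a daemon older than API 1.30, config removal is skipped with a
# warning instead of failing the whole `stack rm`.
docker stack rm mystack
# WARNING: ignoring "configs" (requires API version 1.30, but the Docker daemon API version is 1.29)
```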
@@ -61,6 +61,11 @@ func runRotateCA(dockerCli command.Cli, flags *pflag.FlagSet, opts caOptions) er
    }

    if !opts.rotate {
        for _, f := range []string{flagCACert, flagCAKey, flagCertExpiry, flagExternalCA} {
            if flags.Changed(f) {
                return fmt.Errorf("`--%s` flag requires the `--rotate` flag to update the CA", f)
            }
        }
        if swarmInspect.ClusterInfo.TLSInfo.TrustRoot == "" {
            fmt.Fprintln(dockerCli.Out(), "No CA information available")
        } else {
@@ -71,7 +76,7 @@ func runRotateCA(dockerCli command.Cli, flags *pflag.FlagSet, opts caOptions) er

    genRootCA := true
    spec := &swarmInspect.Spec
    opts.mergeSwarmSpec(spec, flags)
    opts.mergeSwarmSpec(spec, flags) // updates the spec given the cert expiry or external CA flag
    if flags.Changed(flagCACert) {
        spec.CAConfig.SigningCACert = opts.rootCACert.Contents()
        genRootCA = false
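A hedged usage sketch of the validation added above: CA-modifying flags now require `--rotate` (the certificate and key file paths are hypothetical):

```bash
# Rejected by the new check: --ca-cert without --rotate
docker swarm ca --ca-cert ./new-root-ca.pem
# Error: `--ca-cert` flag requires the `--rotate` flag to update the CA

# Accepted: rotate the swarm CA using the supplied root certificate and key
docker swarm ca --rotate --ca-cert ./new-root-ca.pem --ca-key ./new-root-ca.key
```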
@@ -1,7 +1,9 @@
package system

import (
    "bytes"
    "fmt"
    "text/template"

    "github.com/docker/cli/cli"
    "github.com/docker/cli/cli/command"
@@ -12,9 +14,10 @@ import (
)

type pruneOptions struct {
    force  bool
    all    bool
    filter opts.FilterOpt
    force        bool
    all          bool
    pruneVolumes bool
    filter       opts.FilterOpt
}

// NewPruneCommand creates a new cobra.Command for `docker prune`
@@ -34,6 +37,7 @@ func NewPruneCommand(dockerCli command.Cli) *cobra.Command {
    flags := cmd.Flags()
    flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
    flags.BoolVarP(&options.all, "all", "a", false, "Remove all unused images not just dangling ones")
    flags.BoolVar(&options.pruneVolumes, "volumes", false, "Prune volumes")
    flags.Var(&options.filter, "filter", "Provide filter values (e.g. 'label=<key>=<value>')")
    // "filter" flag is available in 1.28 (docker 17.04) and up
    flags.SetAnnotation("filter", "version", []string{"1.28"})
@@ -41,38 +45,27 @@ func NewPruneCommand(dockerCli command.Cli) *cobra.Command {
    return cmd
}

const (
    warning = `WARNING! This will remove:
        - all stopped containers
        - all volumes not used by at least one container
        - all networks not used by at least one container
        %s
const confirmationTemplate = `WARNING! This will remove:
{{- range $_, $warning := . }}
        - {{ $warning }}
{{- end }}
Are you sure you want to continue?`

    danglingImageDesc = "- all dangling images"
    allImageDesc      = `- all images without at least one container associated to them`
)

func runPrune(dockerCli command.Cli, options pruneOptions) error {
    var message string

    if options.all {
        message = fmt.Sprintf(warning, allImageDesc)
    } else {
        message = fmt.Sprintf(warning, danglingImageDesc)
    }

    if !options.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), message) {
    if !options.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), confirmationMessage(options)) {
        return nil
    }

    var spaceReclaimed uint64

    for _, pruneFn := range []func(dockerCli command.Cli, filter opts.FilterOpt) (uint64, string, error){
    pruneFuncs := []func(dockerCli command.Cli, filter opts.FilterOpt) (uint64, string, error){
        prune.RunContainerPrune,
        prune.RunVolumePrune,
        prune.RunNetworkPrune,
    } {
    }
    if options.pruneVolumes {
        pruneFuncs = append(pruneFuncs, prune.RunVolumePrune)
    }

    for _, pruneFn := range pruneFuncs {
        spc, output, err := pruneFn(dockerCli, options.filter)
        if err != nil {
            return err
@@ -96,3 +89,26 @@ func runPrune(dockerCli command.Cli, options pruneOptions) error {

    return nil
}

// confirmationMessage constructs a confirmation message that depends on the cli options.
func confirmationMessage(options pruneOptions) string {
    t := template.Must(template.New("confirmation message").Parse(confirmationTemplate))

    warnings := []string{
        "all stopped containers",
        "all networks not used by at least one container",
    }
    if options.pruneVolumes {
        warnings = append(warnings, "all volumes not used by at least one container")
    }
    if options.all {
        warnings = append(warnings, "all images without at least one container associated to them")
    } else {
        warnings = append(warnings, "all dangling images")
    }
    warnings = append(warnings, "all build cache")

    var buffer bytes.Buffer
    t.Execute(&buffer, &warnings)
    return buffer.String()
}
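A hedged sketch of how the reworked confirmation prompt behaves; the prompt text below is reconstructed from the template in the code above, not captured from a real run:

```bash
# Without --volumes, volumes are no longer pruned and are not listed
# in the confirmation message.
docker system prune
# WARNING! This will remove:
#         - all stopped containers
#         - all networks not used by at least one container
#         - all dangling images
#         - all build cache
# Are you sure you want to continue? [y/N]

# Volume pruning must now be requested explicitly.
docker system prune --volumes
```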
@@ -18,7 +18,11 @@ import (
    "github.com/pkg/errors"
)

const defaultNetwork = "default"
const (
    defaultNetwork = "default"
    // LabelImage is the label used to store image name provided in the compose file
    LabelImage = "com.docker.stack.image"
)

// Services from compose-file types to engine API types
func Services(
@@ -158,6 +162,9 @@ func convertService(
        UpdateConfig: convertUpdateConfig(service.Deploy.UpdateConfig),
    }

    // add an image label to serviceSpec
    serviceSpec.Labels[LabelImage] = service.Image

    // ServiceSpec.Networks is deprecated and should not have been used by
    // this package. It is possible to update TaskTemplate.Networks, but it
    // is not possible to update ServiceSpec.Networks. Unfortunately, we
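A hedged way to observe the new label from the CLI (the service name `mystack_web` is hypothetical):

```bash
# The convert package now stamps each stack service with the image it was
# deployed from; the label can be read back from the service spec.
docker service inspect --format '{{ index .Spec.Labels "com.docker.stack.image" }}' mystack_web
```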
@ -1413,7 +1413,7 @@ _docker_container_port() {
|
||||
_docker_container_prune() {
|
||||
case "$prev" in
|
||||
--filter)
|
||||
COMPREPLY=( $( compgen -W "until" -S = -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "label label! until" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
return
|
||||
;;
|
||||
@ -2428,7 +2428,7 @@ _docker_image_ls() {
|
||||
_docker_image_prune() {
|
||||
case "$prev" in
|
||||
--filter)
|
||||
COMPREPLY=( $( compgen -W "until" -S = -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "label label! until" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
return
|
||||
;;
|
||||
@ -2704,11 +2704,11 @@ _docker_network_connect() {
|
||||
|
||||
_docker_network_create() {
|
||||
case "$prev" in
|
||||
--aux-address|--gateway|--internal|--ip-range|--ipam-opt|--ipv6|--opt|-o|--subnet)
|
||||
--aux-address|--gateway|--ip-range|--ipam-opt|--ipv6|--opt|-o|--subnet)
|
||||
return
|
||||
;;
|
||||
--ipam-driver)
|
||||
COMPREPLY=( $( compgen -W "default" -- "$cur" ) )
|
||||
--config-from)
|
||||
__docker_complete_networks
|
||||
return
|
||||
;;
|
||||
--driver|-d)
|
||||
@ -2716,14 +2716,22 @@ _docker_network_create() {
|
||||
__docker_complete_plugins_bundled --type Network --remove host --remove null --add macvlan
|
||||
return
|
||||
;;
|
||||
--ipam-driver)
|
||||
COMPREPLY=( $( compgen -W "default" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
--label)
|
||||
return
|
||||
;;
|
||||
--scope)
|
||||
COMPREPLY=( $( compgen -W "local swarm" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
COMPREPLY=( $( compgen -W "--attachable --aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --ipv6 --label --opt -o --subnet" -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "--attachable --aux-address --config-from --config-only --driver -d --gateway --help --ingress --internal --ip-range --ipam-driver --ipam-opt --ipv6 --label --opt -o --scope --subnet" -- "$cur" ) )
|
||||
;;
|
||||
esac
|
||||
}
|
||||
@ -2806,7 +2814,7 @@ _docker_network_ls() {
|
||||
_docker_network_prune() {
|
||||
case "$prev" in
|
||||
--filter)
|
||||
COMPREPLY=( $( compgen -W "until" -S = -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "label label! until" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
return
|
||||
;;
|
||||
@ -3039,6 +3047,7 @@ _docker_service_update() {
|
||||
_docker_service_update_and_create() {
|
||||
local options_with_args="
|
||||
--endpoint-mode
|
||||
--entrypoint
|
||||
--env -e
|
||||
--force
|
||||
--health-cmd
|
||||
@ -3053,7 +3062,6 @@ _docker_service_update_and_create() {
|
||||
--log-driver
|
||||
--log-opt
|
||||
--mount
|
||||
--network
|
||||
--replicas
|
||||
--reserve-cpu
|
||||
--reserve-memory
|
||||
@ -3065,6 +3073,7 @@ _docker_service_update_and_create() {
|
||||
--rollback-failure-action
|
||||
--rollback-max-failure-ratio
|
||||
--rollback-monitor
|
||||
--rollback-order
|
||||
--rollback-parallelism
|
||||
--stop-grace-period
|
||||
--stop-signal
|
||||
@ -3072,12 +3081,14 @@ _docker_service_update_and_create() {
|
||||
--update-failure-action
|
||||
--update-max-failure-ratio
|
||||
--update-monitor
|
||||
--update-order
|
||||
--update-parallelism
|
||||
--user -u
|
||||
--workdir -w
|
||||
"
|
||||
|
||||
local boolean_options="
|
||||
--detach -d
|
||||
--help
|
||||
--no-healthcheck
|
||||
--read-only
|
||||
@ -3099,6 +3110,7 @@ _docker_service_update_and_create() {
|
||||
--host
|
||||
--mode
|
||||
--name
|
||||
--network
|
||||
--placement-pref
|
||||
--publish -p
|
||||
--secret
|
||||
@ -3138,7 +3150,7 @@ _docker_service_update_and_create() {
|
||||
fi
|
||||
if [ "$subcommand" = "update" ] ; then
|
||||
options_with_args="$options_with_args
|
||||
--arg
|
||||
--args
|
||||
--constraint-add
|
||||
--constraint-rm
|
||||
--container-label-add
|
||||
@ -3154,6 +3166,8 @@ _docker_service_update_and_create() {
|
||||
--host-add
|
||||
--host-rm
|
||||
--image
|
||||
--network-add
|
||||
--network-rm
|
||||
--placement-pref-add
|
||||
--placement-pref-rm
|
||||
--publish-add
|
||||
@ -3180,6 +3194,10 @@ _docker_service_update_and_create() {
|
||||
__docker_complete_image_repos_and_tags
|
||||
return
|
||||
;;
|
||||
--network-add|--network-rm)
|
||||
__docker_complete_networks
|
||||
return
|
||||
;;
|
||||
--placement-pref-add|--placement-pref-rm)
|
||||
COMPREPLY=( $( compgen -W "spread" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
@ -3240,6 +3258,10 @@ _docker_service_update_and_create() {
|
||||
COMPREPLY=( $( compgen -W "continue pause rollback" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
--update-order|--rollback-order)
|
||||
COMPREPLY=( $( compgen -W "start-first stop-first" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
--user|-u)
|
||||
__docker_complete_user_group
|
||||
return
|
||||
@ -3270,6 +3292,7 @@ _docker_service_update_and_create() {
|
||||
|
||||
_docker_swarm() {
|
||||
local subcommands="
|
||||
ca
|
||||
init
|
||||
join
|
||||
join-token
|
||||
@ -3290,6 +3313,24 @@ _docker_swarm() {
|
||||
esac
|
||||
}
|
||||
|
||||
_docker_swarm_ca() {
|
||||
case "$prev" in
|
||||
--ca-cert|--ca-key)
|
||||
_filedir
|
||||
return
|
||||
;;
|
||||
--cert-expiry|--external-ca)
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
COMPREPLY=( $( compgen -W "--ca-cert --ca-key --cert-expiry --detach -d --external-ca --help --quiet -q --rotate" -- "$cur" ) )
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
_docker_swarm_init() {
|
||||
case "$prev" in
|
||||
--advertise-addr)
|
||||
@ -3308,6 +3349,10 @@ _docker_swarm_init() {
|
||||
--cert-expiry|--dispatcher-heartbeat|--external-ca|--max-snapshots|--snapshot-interval|--task-history-limit)
|
||||
return
|
||||
;;
|
||||
--data-path-addr)
|
||||
__docker_complete_local_interfaces
|
||||
return
|
||||
;;
|
||||
--listen-addr)
|
||||
if [[ $cur == *: ]] ; then
|
||||
COMPREPLY=( $( compgen -W "2377" -- "${cur##*:}" ) )
|
||||
@ -3321,7 +3366,7 @@ _docker_swarm_init() {
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
COMPREPLY=( $( compgen -W "--advertise-addr --data-path-addr --autolock --availability --cert-expiry --dispatcher-heartbeat --external-ca --force-new-cluster --help --listen-addr --max-snapshots --snapshot-interval --task-history-limit" -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "--advertise-addr --autolock --availability --cert-expiry --data-path-addr --dispatcher-heartbeat --external-ca --force-new-cluster --help --listen-addr --max-snapshots --snapshot-interval --task-history-limit" -- "$cur" ) )
|
||||
;;
|
||||
esac
|
||||
}
|
||||
@ -3337,6 +3382,14 @@ _docker_swarm_join() {
|
||||
fi
|
||||
return
|
||||
;;
|
||||
--availability)
|
||||
COMPREPLY=( $( compgen -W "active drain pause" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
--data-path-addr)
|
||||
__docker_complete_local_interfaces
|
||||
return
|
||||
;;
|
||||
--listen-addr)
|
||||
if [[ $cur == *: ]] ; then
|
||||
COMPREPLY=( $( compgen -W "2377" -- "${cur##*:}" ) )
|
||||
@ -3346,10 +3399,6 @@ _docker_swarm_join() {
|
||||
fi
|
||||
return
|
||||
;;
|
||||
--availability)
|
||||
COMPREPLY=( $( compgen -W "active drain pause" -- "$cur" ) )
|
||||
return
|
||||
;;
|
||||
--token)
|
||||
return
|
||||
;;
|
||||
@ -3357,7 +3406,7 @@ _docker_swarm_join() {
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
COMPREPLY=( $( compgen -W "--advertise-addr --data-path-addr --availability --help --listen-addr --token" -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "--advertise-addr --availability --data-path-addr --help --listen-addr --token" -- "$cur" ) )
|
||||
;;
|
||||
*:)
|
||||
COMPREPLY=( $( compgen -W "2377" -- "${cur##*:}" ) )
|
||||
@ -4230,13 +4279,16 @@ _docker_system_events() {
|
||||
destroy
|
||||
detach
|
||||
die
|
||||
disable
|
||||
disconnect
|
||||
enable
|
||||
exec_create
|
||||
exec_detach
|
||||
exec_start
|
||||
export
|
||||
health_status
|
||||
import
|
||||
install
|
||||
kill
|
||||
load
|
||||
mount
|
||||
@ -4245,6 +4297,7 @@ _docker_system_events() {
|
||||
pull
|
||||
push
|
||||
reload
|
||||
remove
|
||||
rename
|
||||
resize
|
||||
restart
|
||||
@ -4270,7 +4323,7 @@ _docker_system_events() {
|
||||
return
|
||||
;;
|
||||
type)
|
||||
COMPREPLY=( $( compgen -W "container daemon image network volume" -- "${cur##*=}" ) )
|
||||
COMPREPLY=( $( compgen -W "container daemon image network plugin volume" -- "${cur##*=}" ) )
|
||||
return
|
||||
;;
|
||||
volume)
|
||||
@ -4314,7 +4367,7 @@ _docker_system_info() {
|
||||
_docker_system_prune() {
|
||||
case "$prev" in
|
||||
--filter)
|
||||
COMPREPLY=( $( compgen -W "until" -S = -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "label label! until" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
return
|
||||
;;
|
||||
@ -4433,9 +4486,17 @@ _docker_volume_ls() {
|
||||
}
|
||||
|
||||
_docker_volume_prune() {
|
||||
case "$prev" in
|
||||
--filter)
|
||||
COMPREPLY=( $( compgen -W "label label!" -S = -- "$cur" ) )
|
||||
__docker_nospace
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
COMPREPLY=( $( compgen -W "--force -f --help" -- "$cur" ) )
|
||||
COMPREPLY=( $( compgen -W "--filter --force -f --help" -- "$cur" ) )
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
@ -2620,7 +2620,7 @@ __docker_subcommand() {
|
||||
"($help)--default-gateway-v6[Container default gateway IPv6 address]:IPv6 address: " \
|
||||
"($help)--default-shm-size=[Default shm size for containers]:size:" \
|
||||
"($help)*--default-ulimit=[Default ulimits for containers]:ulimit: " \
|
||||
"($help)--disable-legacy-registry[Disable contacting legacy registries]" \
|
||||
"($help)--disable-legacy-registry[Disable contacting legacy registries (default true)]" \
|
||||
"($help)*--dns=[DNS server to use]:DNS: " \
|
||||
"($help)*--dns-opt=[DNS options to use]:DNS option: " \
|
||||
"($help)*--dns-search=[DNS search domains to use]:DNS search: " \
|
||||
|
||||
@@ -1,5 +1,5 @@

FROM golang:1.8.1
FROM golang:1.8.3

# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org

@@ -1,5 +1,5 @@

FROM golang:1.8-alpine
FROM golang:1.8.3-alpine

RUN apk add -U git make bash coreutils

@@ -1,4 +1,4 @@
FROM golang:1.8-alpine
FROM golang:1.8.3-alpine

RUN apk add -U git
@@ -138,7 +138,7 @@ on all subcommands (due to it conflicting with, e.g. `-h` / `--hostname` on
### `-e` and `--email` flags on `docker login`
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**

**Target For Removal In Release: v17.06**
**Removed In Release: [v17.06](https://github.com/docker/docker-ce/releases/tag/v17.06.0-ce)**

The docker login command is removing the ability to automatically register for an account with the target registry if the given username doesn't exist. Due to this change, the email flag is no longer required, and will be deprecated.

@@ -292,7 +292,7 @@ of the `--changes` flag that allows to pass `Dockerfile` commands.

**Target For Removal In Release: v17.12**

Version 1.9 adds a flag (`--disable-legacy-registry=false`) which prevents the
Version 1.8.3 added a flag (`--disable-legacy-registry=false`) which prevents the
docker daemon from `pull`, `push`, and `login` operations against v1
registries. Though enabled by default, this signals the intent to deprecate
the v1 protocol.
@@ -87,8 +87,9 @@ Plugin

Plugin | Description
------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[Twistlock AuthZ Broker](https://github.com/twistlock/authz) | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if Docker daemon is started with the --tlsverify flag (username is extracted from the certificate common name).
[HBM plugin](https://github.com/kassisol/hbm) | An authorization plugin that prevents executing commands with certain parameters.
[Casbin AuthZ Plugin](https://github.com/casbin/casbin-authz-plugin) | An authorization plugin based on [Casbin](https://github.com/casbin/casbin), which supports access control models like ACL, RBAC, ABAC. The access control model can be customized. The policy can be persisted into file or DB.
[HBM plugin](https://github.com/kassisol/hbm) | An authorization plugin that prevents executing commands with certain parameters.
[Twistlock AuthZ Broker](https://github.com/twistlock/authz) | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if Docker daemon is started with the --tlsverify flag (username is extracted from the certificate common name).

## Troubleshooting a plugin
@@ -27,7 +27,7 @@ same is true for callers using Docker's Engine API to contact the daemon. If you
require greater access control, you can create authorization plugins and add
them to your Docker daemon configuration. Using an authorization plugin, a
Docker administrator can configure granular access policies for managing access
to Docker daemon.
to the Docker daemon.

Anyone with the appropriate skills can develop an authorization plugin. These
skills, at their most basic, are knowledge of Docker, understanding of REST, and
@@ -30,13 +30,13 @@ Practices](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-pr
## Usage

The [`docker build`](commandline/build.md) command builds an image from
a `Dockerfile` and a *context*. The build's context is the files at a specified
location `PATH` or `URL`. The `PATH` is a directory on your local filesystem.
The `URL` is a Git repository location.
a `Dockerfile` and a *context*. The build's context is the set of files at a
specified location `PATH` or `URL`. The `PATH` is a directory on your local
filesystem. The `URL` is a Git repository location.

A context is processed recursively. So, a `PATH` includes any subdirectories and
the `URL` includes the repository and its submodules. A simple build command
that uses the current directory as context:
the `URL` includes the repository and its submodules. This example shows a
build command that uses the current directory as context:

    $ docker build .
    Sending build context to Docker daemon 6.51 MB
@@ -94,8 +94,8 @@ instructions.
Whenever possible, Docker will re-use the intermediate images (cache),
to accelerate the `docker build` process significantly. This is indicated by
the `Using cache` message in the console output.
(For more information, see the [Build cache section](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#/build-cache)) in the
`Dockerfile` best practices guide:
(For more information, see the [Build cache section](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in the
`Dockerfile` best practices guide):

    $ docker build -t svendowideit/ambassador .
    Sending build context to Docker daemon 15.36 kB
@@ -1281,26 +1281,43 @@ This Dockerfile results in an image that causes `docker run`, to
create a new mount point at `/myvol` and copy the `greeting` file
into the newly created volume.

> **Note**:
> When using Windows-based containers, the destination of a volume inside the
> container must be one of: a non-existing or empty directory; or a drive other
> than C:.
### Notes about specifying volumes

> **Note**:
> If any build steps change the data within the volume after it has been
> declared, those changes will be discarded.
Keep the following things in mind about volumes in the `Dockerfile`.

> **Note**:
> The list is parsed as a JSON array, which means that
> you must use double-quotes (") around words not single-quotes (').
- **Volumes on Windows-based containers**: When using Windows-based containers,
  the destination of a volume inside the container must be one of:

  - a non-existing or empty directory
  - a drive other than `C:`

- **Changing the volume from within the Dockerfile**: If any build steps change the
  data within the volume after it has been declared, those changes will be discarded.

- **JSON formatting**: The list is parsed as a JSON array.
  You must enclose words with double quotes (`"`) rather than single quotes (`'`).

- **The host directory is declared at container run-time**: The host directory
  (the mountpoint) is, by its nature, host-dependent. This is to preserve image
  portability, since a given host directory can't be guaranteed to be available
  on all hosts. For this reason, you can't mount a host directory from
  within the Dockerfile. The `VOLUME` instruction does not support specifying a `host-dir`
  parameter. You must specify the mountpoint when you create or run the container.

## USER

    USER daemon
    USER <user>[:<group>]
or
    USER <UID>[:<GID>]

The `USER` instruction sets the user name (or UID) and optionally the user
group (or GID) to use when running the image and for any `RUN`, `CMD` and
`ENTRYPOINT` instructions that follow it in the `Dockerfile`.

> **Warning**:
> When the user doesn't have a primary group then the image (or the next
> instructions) will be run with the `root` group.

The `USER` instruction sets the user name or UID to use when running the image
and for any `RUN`, `CMD` and `ENTRYPOINT` instructions that follow it in the
`Dockerfile`.

## WORKDIR
@@ -1311,9 +1328,9 @@ The `WORKDIR` instruction sets the working directory for any `RUN`, `CMD`,
If the `WORKDIR` doesn't exist, it will be created even if it's not used in any
subsequent `Dockerfile` instruction.

It can be used multiple times in the one `Dockerfile`. If a relative path
is provided, it will be relative to the path of the previous `WORKDIR`
instruction. For example:
The `WORKDIR` instruction can be used multiple times in a `Dockerfile`. If a
relative path is provided, it will be relative to the path of the previous
`WORKDIR` instruction. For example:

    WORKDIR /a
    WORKDIR b
@@ -40,7 +40,7 @@ interactively, as though the commands were running directly in your terminal.
> not be interacting with the terminal at that time.

You can attach to the same contained process multiple times simultaneously,
even as a different user with the appropriate permissions.
from different sessions on the Docker host.

To stop a container, use `CTRL-c`. This key sequence sends `SIGKILL` to the
container. If `--sig-proxy` is true (the default), `CTRL-c` sends a `SIGINT` to
@@ -63,11 +63,11 @@ Options:

## Description

Builds Docker images from a Dockerfile and a "context". A build's context is
the files located in the specified `PATH` or `URL`. The build process can refer
to any of the files in the context. For example, your build can use an
[*ADD*](../builder.md#add) instruction to reference a file in the
context.
The `docker build` command builds Docker images from a Dockerfile and a
"context". A build's context is the set of files located in the specified
`PATH` or `URL`. The build process can refer to any of the files in the
context. For example, your build can use a [*COPY*](../builder.md#copy)
instruction to reference a file in the context.

The `URL` parameter can refer to three kinds of resources: Git repositories,
pre-packaged tarball contexts and plain text files.
@@ -88,7 +88,7 @@ user credentials, VPN's, and so forth.

Git URLs accept context configuration in their fragment section, separated by a
colon `:`. The first part represents the reference that Git will check out,
this can be either a branch, a tag, or a remote reference. The second part
and can be either a branch, a tag, or a remote reference. The second part
represents a subdirectory inside the repository that will be used as a build
context.
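A hedged example of the fragment syntax described above (the repository, branch, and folder names are hypothetical):

```bash
# Check out the "mybranch" branch and use the "myfolder" subdirectory
# of the repository as the build context.
docker build https://github.com/example/myrepo.git#mybranch:myfolder
```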
@@ -216,6 +216,7 @@ attach`, `docker exec`, `docker run` or `docker start` command.
Following is a sample `config.json` file:

```json
{% raw %}
{
  "HttpHeaders": {
    "MyHeader": "MyValue"
@@ -236,6 +237,7 @@ Following is a sample `config.json` file:
    "unicorn.example.com": "vcbait"
  }
}
{% endraw %}
```

### Notary
@@ -42,7 +42,7 @@ Options:
      --default-gateway-v6 ip                Container default gateway IPv6 address
      --default-runtime string               Default OCI runtime for containers (default "runc")
      --default-ulimit ulimit                Default ulimits for containers (default [])
      --disable-legacy-registry              Disable contacting legacy registries
      --disable-legacy-registry              Disable contacting legacy registries (default true)
      --dns list                             DNS server to use (default [])
      --dns-opt list                         DNS options to use (default [])
      --dns-search list                      DNS search domains to use (default [])
@@ -901,7 +901,18 @@ system's list of trusted CAs instead of enabling `--insecure-registry`.

##### Legacy Registries

Enabling `--disable-legacy-registry` forces a docker daemon to only interact with registries which support the V2 protocol. Specifically, the daemon will not attempt `push`, `pull` and `login` to v1 registries. The exception to this is `search` which can still be performed on v1 registries.
Operations against registries supporting only the legacy v1 protocol are
disabled by default. Specifically, the daemon will not attempt `push`,
`pull` and `login` to v1 registries. The exception to this is `search`
which can still be performed on v1 registries.

Add `"disable-legacy-registry":false` to the [daemon configuration
file](#daemon-configuration-file), or set the
`--disable-legacy-registry=false` flag, if you need to interact with
registries that have not yet migrated to the v2 protocol.

Interaction with v1 registries will no longer be supported in Docker v17.12,
and the `disable-legacy-registry` configuration option will be removed.
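A hedged sketch of the opt-in step described above; the configuration file path and the systemd restart command assume a typical Linux install and are not taken from this diff:

```bash
# Re-enable v1 registry interaction via the daemon configuration file
# (this overwrites any existing daemon.json), then restart the daemon.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "disable-legacy-registry": false
}
EOF
sudo systemctl restart docker
```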
#### Running a Docker daemon behind an HTTPS_PROXY
@@ -958,14 +969,14 @@ $ sudo dockerd \

The currently supported cluster store options are:

| Option | Description |
|:---|:---|
| `discovery.heartbeat` | Specifies the heartbeat timer in seconds which is used by the daemon as a `keepalive` mechanism to make sure discovery module treats the node as alive in the cluster. If not configured, the default value is 20 seconds. |
| `discovery.ttl` | Specifies the TTL (time-to-live) in seconds which is used by the discovery module to timeout a node if a valid heartbeat is not received within the configured ttl value. If not configured, the default value is 60 seconds. |
| `kv.cacertfile` | Specifies the path to a local file with PEM encoded CA certificates to trust. |
| `kv.certfile` | Specifies the path to a local file with a PEM encoded certificate. This certificate is used as the client cert for communication with the Key/Value store. |
| `kv.keyfile` | Specifies the path to a local file with a PEM encoded private key. This private key is used as the client key for communication with the Key/Value store. |
| `kv.path` | Specifies the path in the Key/Value store. If not configured, the default value is 'docker/nodes'. |

#### Access authorization
@ -994,152 +1005,18 @@ plugin](../../extend/plugins_authorization.md) section in the Docker extend sect
|
||||
|
||||
#### Daemon user namespace options
|
||||
|
||||
The Linux kernel [user namespace support](http://man7.org/linux/man-pages/man7/user_namespaces.7.html) provides additional security by enabling
|
||||
a process, and therefore a container, to have a unique range of user and
|
||||
group IDs which are outside the traditional user and group range utilized by
|
||||
the host system. Potentially the most important security improvement is that,
|
||||
by default, container processes running as the `root` user will have expected
|
||||
administrative privilege (with some restrictions) inside the container but will
|
||||
effectively be mapped to an unprivileged `uid` on the host.
|
||||
The Linux kernel
|
||||
[user namespace support](http://man7.org/linux/man-pages/man7/user_namespaces.7.html)
|
||||
provides additional security by enabling a process, and therefore a container,
|
||||
to have a unique range of user and group IDs which are outside the traditional
|
||||
user and group range utilized by the host system. Potentially the most important
|
||||
security improvement is that, by default, container processes running as the
|
||||
`root` user will have expected administrative privilege (with some restrictions)
|
||||
inside the container but will effectively be mapped to an unprivileged `uid` on
|
||||
the host.
|
||||
|
||||
When user namespace support is enabled, Docker creates a single daemon-wide mapping
|
||||
for all containers running on the same engine instance. The mappings will
|
||||
utilize the existing subordinate user and group ID feature available on all modern
|
||||
Linux distributions.
|
||||
The [`/etc/subuid`](http://man7.org/linux/man-pages/man5/subuid.5.html) and
|
||||
[`/etc/subgid`](http://man7.org/linux/man-pages/man5/subgid.5.html) files will be
|
||||
read for the user, and optional group, specified to the `--userns-remap`
|
||||
parameter. If you do not wish to specify your own user and/or group, you can
|
||||
provide `default` as the value to this flag, and a user will be created on your behalf
|
||||
and provided subordinate uid and gid ranges. This default user will be named
|
||||
`dockremap`, and entries will be created for it in `/etc/passwd` and
|
||||
`/etc/group` using your distro's standard user and group creation tools.
|
||||
|
||||
> **Note**: The single mapping per-daemon restriction is in place for now
|
||||
> because Docker shares image layers from its local cache across all
|
||||
> containers running on the engine instance. Since file ownership must be
|
||||
> the same for all containers sharing the same layer content, the decision
|
||||
> was made to map the file ownership on `docker pull` to the daemon's user and
|
||||
> group mappings so that there is no delay for running containers once the
|
||||
> content is downloaded. This design preserves the same performance for `docker
|
||||
> pull`, `docker push`, and container startup as users expect with
|
||||
> user namespaces disabled.
|
||||
|
||||
##### Start the daemon with user namespaces enabled
|
||||
|
||||
To enable user namespace support, start the daemon with the
|
||||
`--userns-remap` flag, which accepts values in the following formats:
|
||||
|
||||
- uid
|
||||
- uid:gid
|
||||
- username
|
||||
- username:groupname
|
||||
|
||||
If numeric IDs are provided, translation back to valid user or group names
|
||||
will occur so that the subordinate uid and gid information can be read, given
|
||||
these resources are name-based, not id-based. If the numeric ID information
|
||||
provided does not exist as entries in `/etc/passwd` or `/etc/group`, daemon
|
||||
startup will fail with an error message.
|
||||
|
||||
**Example: starting with default Docker user management:**
|
||||
|
||||
```bash
|
||||
$ sudo dockerd --userns-remap=default
|
||||
```
|
||||
|
||||
When `default` is provided, Docker will create - or find the existing - user and group
|
||||
named `dockremap`. If the user is created, and the Linux distribution has
|
||||
appropriate support, the `/etc/subuid` and `/etc/subgid` files will be populated
|
||||
with a contiguous 65536 length range of subordinate user and group IDs, starting
|
||||
at an offset based on prior entries in those files. For example, Ubuntu will
|
||||
create the following range, based on an existing user named `user1` already owning
|
||||
the first 65536 range:
|
||||
|
||||
```bash
|
||||
$ cat /etc/subuid
|
||||
user1:100000:65536
|
||||
dockremap:165536:65536
|
||||
```
|
||||
|
||||
If you have a preferred/self-managed user with subordinate ID mappings already
|
||||
configured, you can provide that username or uid to the `--userns-remap` flag.
|
||||
If you have a group that doesn't match the username, you may provide the `gid`
|
||||
or group name as well; otherwise the username will be used as the group name
|
||||
when querying the system for the subordinate group ID range.
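For example, assuming an existing user named `myuser` (a hypothetical name) that already has entries in `/etc/subuid` and `/etc/subgid`, the daemon could be started with either of the following; this is only a sketch, so substitute the names used on your system:

```bash
# Use myuser for both the subordinate uid and gid ranges
$ sudo dockerd --userns-remap=myuser

# Use a separate group for the subordinate gid range
$ sudo dockerd --userns-remap=myuser:mygroup
```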
|
||||
|
||||
The output of `docker info` can be used to determine if the daemon is running
|
||||
with user namespaces enabled or not. If the daemon is configured with user
|
||||
namespaces, the Security Options entry in the response will list "userns" as
|
||||
one of the enabled security features.
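For example, a quick check might look like the following; the output shown here is illustrative and will vary by installation:

```bash
$ docker info | grep -A 2 'Security Options'
Security Options:
 seccomp
 userns
```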
|
||||
|
||||
##### Behavior differences when user namespaces are enabled
|
||||
|
||||
When you start the Docker daemon with `--userns-remap`, Docker segregates the graph directory
|
||||
where the images are stored by adding an extra directory with a name corresponding to the
|
||||
remapped UID and GID. For example, if the remapped UID and GID begin with `165536`, all
|
||||
images and containers running with that remap setting are located in `/var/lib/docker/165536.165536`
|
||||
instead of `/var/lib/docker/`.
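For example, on a daemon remapped to a range starting at `165536`, a listing might look like this; the exact offset and output depend on your `/etc/subuid` and `/etc/subgid` entries:

```bash
$ sudo ls -ld /var/lib/docker/165536.165536
drwx------ 11 165536 165536 4096 Jun 28 12:00 /var/lib/docker/165536.165536
```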
|
||||
|
||||
In addition, the files and directories within the new directory, which correspond to
|
||||
images and container layers, are also owned by the new UID and GID. To set the ownership
|
||||
correctly, you need to re-pull the images and restart the containers after starting the
|
||||
daemon with `--userns-remap`.
|
||||
|
||||
##### Detailed information on `subuid`/`subgid` ranges
|
||||
|
||||
Given potential advanced use of the subordinate ID ranges by power users, the
|
||||
following paragraphs define how the Docker daemon currently uses the range entries
|
||||
found within the subordinate range files.
|
||||
|
||||
The simplest case is that only one contiguous range is defined for the
|
||||
provided user or group. In this case, Docker will use that entire contiguous
|
||||
range for the mapping of host uids and gids to the container process. This
|
||||
means that the first ID in the range will be the remapped root user, and the
|
||||
IDs above that initial ID will map host ID 1 through the end of the range.
|
||||
|
||||
From the example `/etc/subuid` content shown above, the remapped root
|
||||
user would be uid 165536.
|
||||
|
||||
If the system administrator has set up multiple ranges for a single user or
|
||||
group, the Docker daemon will read all the available ranges and use the
|
||||
following algorithm to create the mapping ranges:
|
||||
|
||||
1. The range segments found for the particular user will be sorted by *start ID* ascending.
|
||||
2. Map segments will be created from each range in increasing value with a length matching the length of each segment. Therefore the range segment with the lowest numeric starting value will be equal to the remapped root, and continue up through host uid/gid equal to the range segment length. As an example, if the lowest segment starts at ID 1000 and has a length of 100, then a map of 1000 -> 0 (the remapped root) up through 1100 -> 100 will be created from this segment. If the next segment starts at ID 10000, then the next map will start with mapping 10000 -> 101 up to the length of this second segment. This will continue until no more segments are found in the subordinate files for this user.
|
||||
3. If more than five range segments exist for a single user, only the first five will be utilized, matching the kernel's limitation of only five entries in `/proc/self/uid_map` and `/proc/self/gid_map`.
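As a concrete sketch of the algorithm above, assume a user with two hypothetical subordinate ranges. The resulting kernel mapping would be:

```bash
$ cat /etc/subuid
dockremap:1000:100
dockremap:10000:200

# Resulting /proc/<pid>/uid_map entries (container ID, host ID, length):
#   0     1000    100    <- container root maps to host uid 1000
#   100   10000   200    <- container uids 100-299 map to host uids 10000-10199
```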
|
||||
|
||||
##### Disable user namespace for a container
|
||||
|
||||
If you enable user namespaces on the daemon, all containers are started
|
||||
with user namespaces enabled. In some situations you might want to disable
|
||||
this feature for a container, for example, to start a privileged container (see
|
||||
[user namespace known restrictions](#user-namespace-known-restrictions)).
|
||||
To enable those advanced features for a specific container, use `--userns=host`
with the `run`, `exec`, or `create` command.
|
||||
This option will completely disable user namespace mapping for the container's user.
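For example, to start a privileged container outside the daemon-wide remapping (a sketch; `nginx` is only an illustrative image):

```bash
$ docker run -d --userns=host --privileged nginx
```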
|
||||
|
||||
##### User namespace known restrictions
|
||||
|
||||
The following standard Docker features are currently incompatible when
|
||||
running a Docker daemon with user namespaces enabled:
|
||||
|
||||
- sharing PID or NET namespaces with the host (`--pid=host` or `--net=host`)
|
||||
- Using `--privileged` mode flag on `docker run` (unless also specifying `--userns=host`)
|
||||
|
||||
In general, user namespaces are an advanced feature and will require
|
||||
coordination with other capabilities. For example, if volumes are mounted from
|
||||
the host, file ownership will have to be pre-arranged if the user or
|
||||
administrator wishes the containers to have expected access to the volume
|
||||
contents. Note that when using external volume or graph driver plugins, those
|
||||
external software programs must be made aware of user and group mapping ranges
|
||||
if they are to work seamlessly with user namespace support.
|
||||
|
||||
Finally, while the `root` user inside a user namespaced container process has
|
||||
many of the expected admin privileges that go along with being the superuser, the
|
||||
Linux kernel has restrictions based on internal knowledge that this is a user namespaced
|
||||
process. The most notable restriction that we are aware of at this time is the
|
||||
inability to use `mknod`. Permission will be denied for device creation even as
|
||||
container `root` inside a user namespace.
|
||||
For details about how to use this feature, as well as limitations, see
|
||||
[Isolate containers with a user namespace](https://docs.docker.com/engine/security/userns-remap/).
|
||||
|
||||
### Miscellaneous options
|
||||
|
||||
|
||||
@ -76,6 +76,17 @@ $ docker exec -it ubuntu_bash bash
|
||||
|
||||
This will create a new Bash session in the container `ubuntu_bash`.
|
||||
|
||||
Next, create a new Bash session and set an environment variable in it.
|
||||
|
||||
```bash
|
||||
$ docker exec -it -e VAR=1 ubuntu_bash bash
|
||||
```
|
||||
|
||||
This will create a new Bash session in the container `ubuntu_bash`, with the
environment variable `$VAR` set to "1". Note that this environment variable is
only valid in the current Bash session.
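For example, the variable is visible in the session created by `exec`, but not in a later session; the prompt and container ID below are illustrative:

```bash
root@1f08e96eb135:/# echo $VAR
1
root@1f08e96eb135:/# exit

$ docker exec ubuntu_bash printenv VAR   # prints nothing; VAR is not set in the new session
```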
|
||||
|
||||
|
||||
### Try to run `docker exec` on a paused container
|
||||
|
||||
If the container is paused, then the `docker exec` command will fail with an error:
|
||||
|
||||
@ -89,7 +89,7 @@ ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
|
||||
1bcef6utixb0l0ca7gxuivsj0 swarm-worker2 Ready Active
|
||||
```
|
||||
|
||||
#### membership
|
||||
|
||||
The `membership` filter matches nodes based on their membership value, either
`accepted` or `pending`.
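For example, to list only nodes whose membership is `accepted`:

```bash
$ docker node ls --filter membership=accepted
```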
|
||||
|
||||
@ -82,20 +82,21 @@ than one filter, then pass multiple flags (e.g. `--filter "foo=bar" --filter "bi
|
||||
|
||||
The currently supported filters are:
|
||||
|
||||
| Filter | Description |
|
||||
|:----------------------|:-------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `id` | container's ID |
|
||||
| `name` | container's name |
|
||||
| `label` | An arbitrary string representing either a key or a key-value pair |
|
||||
| `exited` | An integer representing the container's exit code. Only useful with `--all`. |
|
||||
| `status` | One of `created|restarting|running|removing|paused|exited|dead` |
|
||||
| `ancestor` | Filters containers which share a given image as an ancestor. Expressed as `<image-name>[:<tag>]`, `<image id>`, or `<image@digest>` |
|
||||
| `before` or `since` | Filters containers created before or after a given container ID or name |
|
||||
| `volume` | Filters running containers which have mounted a given volume or bind mount. |
|
||||
| `network` | Filters running containers connected to a given network. |
|
||||
| `publish` or `expose` | Filters containers which publish or expose a given port. |
|
||||
| `health` | One of `starting|healthy|unhealthy|none`. Filters containers based on their healthcheck status. |
|
||||
| `isolation` | Windows daemon only. One of `default|process|hyperv`. |
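For example, combining two of the filters above to list running containers that publish port 80:

```bash
$ docker ps --filter status=running --filter publish=80
```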
|
||||
|
||||
|
||||
#### label
|
||||
|
||||
@ -387,21 +388,21 @@ template.
|
||||
|
||||
Valid placeholders for the Go template are listed below:
|
||||
|
||||
| Placeholder | Description |
|
||||
|:--------------|:------------------------------------------------------------------------------------------------|
|
||||
| `.ID` | Container ID |
|
||||
| `.Image` | Image ID |
|
||||
| `.Command` | Quoted command |
|
||||
| `.CreatedAt` | Time when the container was created. |
|
||||
| `.RunningFor` | Elapsed time since the container was started. |
|
||||
| `.Ports` | Exposed ports. |
|
||||
| `.Status` | Container status. |
|
||||
| `.Size` | Container disk size. |
|
||||
| `.Names` | Container names. |
|
||||
| `.Labels` | All labels assigned to the container. |
|
||||
| `.Label` | Value of a specific label for this container. For example `'{{.Label "com.docker.swarm.cpu"}}'` |
|
||||
| `.Mounts` | Names of the volumes mounted in this container. |
|
||||
| `.Networks` | Names of the networks attached to this container. |
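For example, combining the `table` directive with a few of the placeholders above:

```bash
$ docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"
```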
|
||||
|
||||
When using the `--format` option, the `ps` command will either output the data
|
||||
exactly as the template declares or, when using the `table` directive, includes
|
||||
@ -429,4 +430,4 @@ a87ecb4f327c com.docker.swarm.node=ubuntu,com.docker.swarm.storage=ssd
|
||||
01946d9d34d8
|
||||
c1d3b0166030 com.docker.swarm.node=debian,com.docker.swarm.cpu=6
|
||||
41d50ecd2f57 com.docker.swarm.node=fedora,com.docker.swarm.cpu=3,com.docker.swarm.storage=ssd
|
||||
```
|
||||
@ -745,6 +745,41 @@ PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo h
|
||||
PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv
|
||||
```
|
||||
|
||||
### Specify hard limits on memory available to containers (-m, --memory)
|
||||
|
||||
These parameters always set an upper limit on the memory available to the container. On Linux, this
|
||||
is set on the cgroup and applications in a container can query it at `/sys/fs/cgroup/memory/memory.limit_in_bytes`.
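For example, the configured limit can be read from inside a Linux container; `ubuntu:16.04` is only an illustrative image:

```bash
$ docker run --rm -m 512m ubuntu:16.04 cat /sys/fs/cgroup/memory/memory.limit_in_bytes
536870912
```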
|
||||
|
||||
On Windows, this will affect containers differently depending on what type of isolation is used.
|
||||
|
||||
- With `process` isolation, Windows reports the full memory of the host system to applications running inside the container, not the configured limit.
|
||||
```powershell
|
||||
docker run -it -m 2GB --isolation=process microsoft/nanoserver powershell Get-ComputerInfo *memory*
|
||||
|
||||
CsTotalPhysicalMemory : 17064509440
|
||||
CsPhyicallyInstalledMemory : 16777216
|
||||
OsTotalVisibleMemorySize : 16664560
|
||||
OsFreePhysicalMemory : 14646720
|
||||
OsTotalVirtualMemorySize : 19154928
|
||||
OsFreeVirtualMemory : 17197440
|
||||
OsInUseVirtualMemory : 1957488
|
||||
OsMaxProcessMemorySize : 137438953344
|
||||
```
|
||||
- With `hyperv` isolation, Windows will create a utility VM that is big enough to hold the memory limit, plus the minimal OS needed to host the container. That size is reported as "Total Physical Memory."
|
||||
```powershell
|
||||
docker run -it -m 2GB --isolation=hyperv microsoft/nanoserver powershell Get-ComputerInfo *memory*
|
||||
|
||||
CsTotalPhysicalMemory : 2683355136
|
||||
CsPhyicallyInstalledMemory :
|
||||
OsTotalVisibleMemorySize : 2620464
|
||||
OsFreePhysicalMemory : 2306552
|
||||
OsTotalVirtualMemorySize : 2620464
|
||||
OsFreeVirtualMemory : 2356692
|
||||
OsInUseVirtualMemory : 263772
|
||||
OsMaxProcessMemorySize : 137438953344
|
||||
```
|
||||
|
||||
|
||||
### Configure namespaced kernel parameters (sysctls) at runtime
|
||||
|
||||
The `--sysctl` sets namespaced kernel parameters (sysctls) in the
|
||||
|
||||
@ -167,6 +167,8 @@ $ docker service create --name redis \
|
||||
4cdgfyky7ozwh3htjfw0d12qv
|
||||
```
|
||||
|
||||
To grant a service access to multiple secrets, use multiple `--secret` flags.
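For example, assuming two secrets named `redis_password` and `redis_tls_cert` already exist (hypothetical names), both can be granted in a single command:

```bash
$ docker service create \
    --name redis \
    --secret redis_password \
    --secret redis_tls_cert \
    redis:3.0.6
```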
|
||||
|
||||
Secrets are located in `/run/secrets` in the container. If no target is
|
||||
specified, the name of the secret will be used as the in memory file in the
|
||||
container. If a target is specified, that will be the filename. In the
|
||||
@ -191,10 +193,26 @@ tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/).
|
||||
|
||||
### Set environment variables (-e, --env)
|
||||
|
||||
This sets an environment variable for all tasks in a service. For example:
|
||||
|
||||
```bash
|
||||
$ docker service create \
|
||||
--name redis_2 \
|
||||
--replicas 5 \
|
||||
--env MYVAR=foo \
|
||||
redis:3.0.6
|
||||
```
|
||||
|
||||
To specify multiple environment variables, specify multiple `--env` flags, each
|
||||
with a separate key-value pair.
|
||||
|
||||
```bash
|
||||
$ docker service create \
|
||||
--name redis_2 \
|
||||
--replicas 5 \
|
||||
--env MYVAR=foo \
|
||||
--env MYVAR2=bar \
|
||||
redis:3.0.6
|
||||
```
|
||||
|
||||
### Create a service with specific hostname (--hostname)
|
||||
|
||||
@ -40,7 +40,7 @@ has to be run targeting a manager node.
|
||||
|
||||
### Compose file
|
||||
|
||||
The `deploy` command supports compose file version `3.0` and above.
|
||||
|
||||
```bash
|
||||
$ docker stack deploy --compose-file docker-compose.yml vossibility
|
||||
@ -57,7 +57,28 @@ Creating service vossibility_ghollector
|
||||
Creating service vossibility_lookupd
|
||||
```
|
||||
|
||||
Only a single Compose file is accepted. If your configuration is split between
|
||||
multiple Compose files, e.g. a base configuration and environment-specific overrides,
|
||||
you can combine these by passing them to `docker-compose config` with the `-f` option
|
||||
and redirecting the merged output into a new file.
|
||||
|
||||
```bash
|
||||
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > docker-stack.yml
|
||||
$ docker stack deploy --compose-file docker-stack.yml vossibility
|
||||
|
||||
Ignoring unsupported options: links
|
||||
|
||||
Creating network vossibility_vossibility
|
||||
Creating network vossibility_default
|
||||
Creating service vossibility_nsqd
|
||||
Creating service vossibility_logstash
|
||||
Creating service vossibility_elasticsearch
|
||||
Creating service vossibility_kibana
|
||||
Creating service vossibility_ghollector
|
||||
Creating service vossibility_lookupd
|
||||
```
|
||||
|
||||
You can verify that the services were correctly created:
|
||||
|
||||
```bash
|
||||
$ docker service ls
|
||||
|
||||
@ -87,8 +87,9 @@ default foreground mode:
|
||||
|
||||
To start a container in detached mode, you use `-d=true` or just `-d` option. By
|
||||
design, containers started in detached mode exit when the root process used to
|
||||
run the container exits, unless you also specify the `--rm` option. If you use
|
||||
`-d` with `--rm`, the container is removed when it exits **or** when the daemon
|
||||
exits, whichever happens first.
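For example, the following container is removed automatically as soon as its root process exits; `busybox` is only an illustrative image:

```bash
$ docker run -d --rm --name short_lived busybox sleep 10
```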
|
||||
|
||||
Do not pass a `service x start` command to a detached container. For example, this
|
||||
command attempts to start the `nginx` service.
|
||||
@ -149,7 +150,7 @@ is receiving its standard input from a pipe, as in:
|
||||
The operator can identify a container in three ways:
|
||||
|
||||
| Identifier type | Example value |
|
||||
|:----------------------|:-------------------------------------------------------------------|
|
||||
| UUID long identifier | "f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778" |
|
||||
| UUID short identifier | "f78375b1c487" |
|
||||
| Name | "evil_ptolemy" |
|
||||
@ -686,29 +687,29 @@ parent group.
|
||||
The operator can also adjust the performance parameters of the
|
||||
container:
|
||||
|
||||
| Option | Description |
|
||||
|:---------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `-m`, `--memory=""` | Memory limit (format: `<number>[<unit>]`). Number is a positive integer. Unit can be one of `b`, `k`, `m`, or `g`. Minimum is 4M. |
|
||||
| `--memory-swap=""` | Total memory limit (memory + swap, format: `<number>[<unit>]`). Number is a positive integer. Unit can be one of `b`, `k`, `m`, or `g`. |
|
||||
| `--memory-reservation=""` | Memory soft limit (format: `<number>[<unit>]`). Number is a positive integer. Unit can be one of `b`, `k`, `m`, or `g`. |
|
||||
| `--kernel-memory=""` | Kernel memory limit (format: `<number>[<unit>]`). Number is a positive integer. Unit can be one of `b`, `k`, `m`, or `g`. Minimum is 4M. |
|
||||
| `-c`, `--cpu-shares=0` | CPU shares (relative weight) |
|
||||
| `--cpus=0.000` | Number of CPUs. Number is a fractional number. 0.000 means no limit. |
|
||||
| `--cpu-period=0` | Limit the CPU CFS (Completely Fair Scheduler) period |
|
||||
| `--cpuset-cpus=""` | CPUs in which to allow execution (0-3, 0,1) |
|
||||
| `--cpuset-mems=""` | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. |
|
||||
| `--cpu-quota=0` | Limit the CPU CFS (Completely Fair Scheduler) quota |
|
||||
| `--cpu-rt-period=0` | Limit the CPU real-time period. In microseconds. Requires parent cgroups be set and cannot be higher than parent. Also check rtprio ulimits. |
|
||||
| `--cpu-rt-runtime=0` | Limit the CPU real-time runtime. In microseconds. Requires parent cgroups be set and cannot be higher than parent. Also check rtprio ulimits. |
|
||||
| `--blkio-weight=0` | Block IO weight (relative weight) accepts a weight value between 10 and 1000. |
|
||||
| `--blkio-weight-device=""` | Block IO weight (relative device weight, format: `DEVICE_NAME:WEIGHT`) |
|
||||
| `--device-read-bps=""` | Limit read rate from a device (format: `<device-path>:<number>[<unit>]`). Number is a positive integer. Unit can be one of `kb`, `mb`, or `gb`. |
|
||||
| `--device-write-bps=""` | Limit write rate to a device (format: `<device-path>:<number>[<unit>]`). Number is a positive integer. Unit can be one of `kb`, `mb`, or `gb`. |
|
||||
| `--device-read-iops="" ` | Limit read rate (IO per second) from a device (format: `<device-path>:<number>`). Number is a positive integer. |
|
||||
| `--device-write-iops="" ` | Limit write rate (IO per second) to a device (format: `<device-path>:<number>`). Number is a positive integer. |
|
||||
| `--oom-kill-disable=false` | Whether to disable OOM Killer for the container or not. |
|
||||
| `--oom-score-adj=0` | Tune container's OOM preferences (-1000 to 1000) |
|
||||
| `--memory-swappiness=""` | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. |
|
||||
| `--shm-size=""` | Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`. Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes). If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`. |
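For example, several of the options above can be combined in a single `docker run` invocation; the values and image are illustrative only:

```bash
$ docker run -it \
    -m 512m --memory-swap 1g \
    --cpus 1.5 --cpuset-cpus 0,1 \
    --blkio-weight 300 \
    ubuntu:16.04 /bin/bash
```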
|
||||
|
||||
### User memory constraints
|
||||
@ -1123,7 +1124,7 @@ by default a container is not allowed to access any devices, but a
|
||||
the documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroup-v1/devices.txt)).
|
||||
|
||||
When the operator executes `docker run --privileged`, Docker will enable
|
||||
access to all devices on the host as well as set some configuration
|
||||
in AppArmor or SELinux to allow the container nearly all the same access to the
|
||||
host as processes running outside containers on the host. Additional
|
||||
information about running with `--privileged` is available on the
|
||||
@ -1158,7 +1159,7 @@ list of capabilities that are kept. The following table lists the Linux capabili
|
||||
options which are allowed by default and can be dropped.
|
||||
|
||||
| Capability Key | Capability Description |
|
||||
|:-----------------|:------------------------------------------------------------------------------------------------------------------------------|
|
||||
| SETPCAP | Modify process capabilities. |
|
||||
| MKNOD | Create special files using mknod(2). |
|
||||
| AUDIT_WRITE | Write records to kernel auditing log. |
|
||||
@ -1176,31 +1177,31 @@ options which are allowed by default and can be dropped.
|
||||
|
||||
The next table shows the capabilities which are not granted by default and may be added.
|
||||
|
||||
| Capability Key | Capability Description |
|
||||
|:----------------|:----------------------------------------------------------------------------------------------------------------|
|
||||
| SYS_MODULE | Load and unload kernel modules. |
|
||||
| SYS_RAWIO | Perform I/O port operations (iopl(2) and ioperm(2)). |
|
||||
| SYS_PACCT | Use acct(2), switch process accounting on or off. |
|
||||
| SYS_ADMIN | Perform a range of system administration operations. |
|
||||
| SYS_NICE | Raise process nice value (nice(2), setpriority(2)) and change the nice value for arbitrary processes. |
|
||||
| SYS_RESOURCE | Override resource Limits. |
|
||||
| SYS_TIME | Set system clock (settimeofday(2), stime(2), adjtimex(2)); set real-time (hardware) clock. |
|
||||
| SYS_TTY_CONFIG | Use vhangup(2); employ various privileged ioctl(2) operations on virtual terminals. |
|
||||
| AUDIT_CONTROL | Enable and disable kernel auditing; change auditing filter rules; retrieve auditing status and filtering rules. |
|
||||
| MAC_OVERRIDE | Allow MAC configuration or state changes. Implemented for the Smack LSM. |
|
||||
| MAC_ADMIN | Override Mandatory Access Control (MAC). Implemented for the Smack Linux Security Module (LSM). |
|
||||
| NET_ADMIN | Perform various network-related operations. |
|
||||
| SYSLOG | Perform privileged syslog(2) operations. |
|
||||
| DAC_READ_SEARCH | Bypass file read permission checks and directory read and execute permission checks. |
|
||||
| LINUX_IMMUTABLE | Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags. |
|
||||
| NET_BROADCAST | Make socket broadcasts, and listen to multicasts. |
|
||||
| IPC_LOCK | Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)). |
|
||||
| IPC_OWNER | Bypass permission checks for operations on System V IPC objects. |
|
||||
| SYS_PTRACE | Trace arbitrary processes using ptrace(2). |
|
||||
| SYS_BOOT | Use reboot(2) and kexec_load(2), reboot and load a new kernel for later execution. |
|
||||
| LEASE | Establish leases on arbitrary files (see fcntl(2)). |
|
||||
| WAKE_ALARM | Trigger something that will wake up the system. |
|
||||
| BLOCK_SUSPEND | Employ features that can block system suspend. |
|
||||
|
||||
Further reference information is available on the [capabilities(7) - Linux man page](http://man7.org/linux/man-pages/man7/capabilities.7.html)
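For example, to drop all default capabilities and add back only `NET_ADMIN` from the table above (a sketch; `ubuntu:16.04` is only an illustrative image):

```bash
$ docker run -it --cap-drop ALL --cap-add NET_ADMIN ubuntu:16.04 /bin/bash
```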
|
||||
|
||||
@ -1252,7 +1253,7 @@ the `--log-driver=VALUE` with the `docker run` command to configure the
|
||||
container's logging driver. The following options are supported:
|
||||
|
||||
| Driver | Description |
|
||||
|:------------|:------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `none` | Disables any logging for the container. `docker logs` won't be available with this driver. |
|
||||
| `json-file` | Default logging driver for Docker. Writes JSON messages to file. No logging options are supported for this driver. |
|
||||
| `syslog` | Syslog logging driver for Docker. Writes log messages to syslog. |
|
||||
@ -1398,12 +1399,12 @@ container.
|
||||
|
||||
The following environment variables are set for Linux containers:
|
||||
|
||||
| Variable | Value |
|
||||
|:-----------|:-----------------------------------------------------------------------------------------------------|
|
||||
| `HOME` | Set based on the value of `USER` |
|
||||
| `HOSTNAME` | The hostname associated with the container |
|
||||
| `PATH` | Includes popular directories, such as `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin` |
|
||||
| `TERM` | `xterm` if the container is allocated a pseudo-TTY |
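For example, the defaults above can be inspected alongside an operator-set variable; `alpine` is only an illustrative image:

```bash
$ docker run --rm -e DEPLOY_ENV=staging alpine env
```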
|
||||
|
||||
|
||||
Additionally, the operator can **set any environment variable** in the
|
||||
|
||||
@ -161,21 +161,18 @@ A service has the following fields:
|
||||
only specify the container port to be exposed. These ports can be
|
||||
mapped on runtime hosts at the operator's discretion.
|
||||
</dd>
|
||||
|
||||
<dt>
|
||||
WorkingDir <code>string</code>
|
||||
</dt>
|
||||
<dd>
|
||||
Working directory inside the service containers.
|
||||
</dd>
|
||||
|
||||
<dt>
|
||||
User <code>string</code>
|
||||
</dt>
|
||||
<dd>
|
||||
Username or UID (format: <code><name|uid>[:<group|gid>]</code>).
|
||||
</dd>
|
||||
|
||||
<dt>
|
||||
Networks <code>[]string</code>
|
||||
</dt>
|
||||
|
||||
@ -192,7 +192,7 @@ $ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-ru
|
||||
Default ulimits for containers.
|
||||
|
||||
**--disable-legacy-registry**=*true*|*false*
|
||||
Disable contacting legacy registries. Default is `true`.
|
||||
|
||||
**--dns**=""
|
||||
Force Docker to use specific DNS servers
|
||||
|
||||
@ -7,7 +7,6 @@ github.com/coreos/etcd 824277cb3a577a0e8c829ca9ec557b973fe06d20
|
||||
github.com/cpuguy83/go-md2man a65d4d2de4d5f7c74868dfa9b202a3c8be315aaa
|
||||
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
|
||||
github.com/docker/distribution b38e5838b7b2f2ad48e06ec4b500011976080621
|
||||
# commented out, since we needed to advance just github.com/docker/docker/pkg/term/termios_bsd.go
|
||||
# github.com/docker/docker 45c6f4262a865a03adaac291a9ce33c0f2190d77
|
||||
github.com/docker/docker-credential-helpers v0.5.0
|
||||
github.com/docker/go d30aec9fd63c35133f8f79c3412ad91a3b08be06
|
||||
@ -17,7 +16,7 @@ github.com/docker/go-units 9e638d38cf6977a37a8ea0078f3ee75a7cdb2dd1
|
||||
github.com/docker/libnetwork b13e0604016a4944025aaff521d9c125850b0d04
|
||||
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
|
||||
github.com/docker/notary v0.4.2
|
||||
github.com/docker/swarmkit 1a3e510517be82d18ac04380b5f71eddf06c2fc0
|
||||
github.com/docker/swarmkit 1a3e510517be82d18ac04380b5f71eddf06c2fc0
|
||||
github.com/flynn-archive/go-shlex 3f9db97f856818214da2e1057f8ad84803971cff
|
||||
github.com/gogo/protobuf 7efa791bd276fd4db00867cbd982b552627c24cb
|
||||
github.com/golang/protobuf 8ee79997227bf9b34611aee7946ae64735e6fd93
|
||||
|
||||
2
components/cli/vendor/github.com/docker/docker/client/container_wait.go
generated
vendored
2
components/cli/vendor/github.com/docker/docker/client/container_wait.go
generated
vendored
@ -31,7 +31,7 @@ func (cli *Client) ContainerWait(ctx context.Context, containerID string, condit
|
||||
}
|
||||
|
||||
resultC := make(chan container.ContainerWaitOKBody)
|
||||
errC := make(chan error, 1)
|
||||
|
||||
query := url.Values{}
|
||||
query.Set("condition", string(condition))
|
||||
|
||||
14
components/cli/vendor/github.com/docker/docker/client/ping.go
generated
vendored
14
components/cli/vendor/github.com/docker/docker/client/ping.go
generated
vendored
@ -20,13 +20,15 @@ func (cli *Client) Ping(ctx context.Context) (types.Ping, error) {
|
||||
}
|
||||
defer ensureReaderClosed(serverResp)
|
||||
|
||||
	if serverResp.header != nil {
		ping.APIVersion = serverResp.header.Get("API-Version")

		if serverResp.header.Get("Docker-Experimental") == "true" {
			ping.Experimental = true
		}
		ping.OSType = serverResp.header.Get("OSType")
	}

	err = cli.checkResponseErr(serverResp)
	return ping, err
|
||||
}
|
||||
|
||||
77
components/cli/vendor/github.com/docker/docker/client/request.go
generated
vendored
77
components/cli/vendor/github.com/docker/docker/client/request.go
generated
vendored
@ -24,6 +24,7 @@ type serverResponse struct {
|
||||
body io.ReadCloser
|
||||
header http.Header
|
||||
statusCode int
|
||||
reqURL *url.URL
|
||||
}
|
||||
|
||||
// head sends an http request to the docker API using the method HEAD.
|
||||
@ -118,11 +119,18 @@ func (cli *Client) sendRequest(ctx context.Context, method, path string, query u
|
||||
if err != nil {
|
||||
return serverResponse{}, err
|
||||
}
|
||||
resp, err := cli.doRequest(ctx, req)
|
||||
if err != nil {
|
||||
return resp, err
|
||||
}
|
||||
if err := cli.checkResponseErr(resp); err != nil {
|
||||
return resp, err
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResponse, error) {
|
||||
serverResp := serverResponse{statusCode: -1, reqURL: req.URL}
|
||||
|
||||
resp, err := ctxhttp.Do(ctx, cli.client, req)
|
||||
if err != nil {
|
||||
@ -179,37 +187,44 @@ func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResp
|
||||
|
||||
if resp != nil {
|
||||
serverResp.statusCode = resp.StatusCode
|
||||
serverResp.body = resp.Body
|
||||
serverResp.header = resp.Header
|
||||
}
|
||||
|
||||
if serverResp.statusCode < 200 || serverResp.statusCode >= 400 {
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return serverResp, err
|
||||
}
|
||||
if len(body) == 0 {
|
||||
return serverResp, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), req.URL)
|
||||
}
|
||||
|
||||
var errorMessage string
|
||||
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) &&
|
||||
resp.Header.Get("Content-Type") == "application/json" {
|
||||
var errorResponse types.ErrorResponse
|
||||
if err := json.Unmarshal(body, &errorResponse); err != nil {
|
||||
return serverResp, fmt.Errorf("Error reading JSON: %v", err)
|
||||
}
|
||||
errorMessage = errorResponse.Message
|
||||
} else {
|
||||
errorMessage = string(body)
|
||||
}
|
||||
|
||||
return serverResp, fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
|
||||
}
|
||||
|
||||
serverResp.body = resp.Body
|
||||
serverResp.header = resp.Header
|
||||
return serverResp, nil
|
||||
}
|
||||
|
||||
func (cli *Client) checkResponseErr(serverResp serverResponse) error {
|
||||
if serverResp.statusCode >= 200 && serverResp.statusCode < 400 {
|
||||
return nil
|
||||
}
|
||||
|
||||
body, err := ioutil.ReadAll(serverResp.body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(body) == 0 {
|
||||
return fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
|
||||
}
|
||||
|
||||
var ct string
|
||||
if serverResp.header != nil {
|
||||
ct = serverResp.header.Get("Content-Type")
|
||||
}
|
||||
|
||||
var errorMessage string
|
||||
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) && ct == "application/json" {
|
||||
var errorResponse types.ErrorResponse
|
||||
if err := json.Unmarshal(body, &errorResponse); err != nil {
|
||||
return fmt.Errorf("Error reading JSON: %v", err)
|
||||
}
|
||||
errorMessage = errorResponse.Message
|
||||
} else {
|
||||
errorMessage = string(body)
|
||||
}
|
||||
|
||||
return fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
|
||||
}
|
||||
|
||||
func (cli *Client) addHeaders(req *http.Request, headers headers) *http.Request {
|
||||
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
|
||||
// then the user can't change OUR headers
|
||||
@ -239,9 +254,9 @@ func encodeData(data interface{}) (*bytes.Buffer, error) {
|
||||
}
|
||||
|
||||
func ensureReaderClosed(response serverResponse) {
|
||||
if response.body != nil {
|
||||
// Drain up to 512 bytes and close the body to let the Transport reuse the connection
|
||||
io.CopyN(ioutil.Discard, response.body, 512)
|
||||
response.body.Close()
|
||||
}
|
||||
}
|
||||
|
||||
46
components/cli/vendor/github.com/docker/docker/client/service_create.go
generated
vendored
46
components/cli/vendor/github.com/docker/docker/client/service_create.go
generated
vendored
@ -24,18 +24,22 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
|
||||
}
|
||||
|
||||
// ensure that the image is tagged
|
||||
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = taggedImg
|
||||
}
|
||||
|
||||
// Contact the registry to retrieve digest and platform information
|
||||
if options.QueryRegistry {
|
||||
distributionInspect, err := cli.DistributionInspect(ctx, service.TaskTemplate.ContainerSpec.Image, options.EncodedRegistryAuth)
|
||||
distErr = err
|
||||
if err == nil {
|
||||
// now pin by digest if the image doesn't already contain a digest
|
||||
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
|
||||
if img != "" {
|
||||
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = img
|
||||
}
|
||||
// add platforms that are compatible with the service
|
||||
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
}
|
||||
}
|
||||
var response types.ServiceCreateResponse
|
||||
@ -55,32 +59,42 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
}
|
||||
|
||||
// imageWithDigestString takes an image string and a digest, and updates
|
||||
// the image string if it didn't originally contain a digest. It returns
|
||||
// an empty string if there are no updates.
|
||||
func imageWithDigestString(image string, dgst digest.Digest) string {
|
||||
isCanonical := false
|
||||
ref, err := reference.ParseAnyReference(image)
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
_, isCanonical = ref.(reference.Canonical)
|
||||
|
||||
if !isCanonical {
|
||||
namedRef, _ := ref.(reference.Named)
|
||||
if _, isCanonical := namedRef.(reference.Canonical); !isCanonical {
|
||||
// ensure that image gets a default tag if none is provided
|
||||
img, err := reference.WithDigest(namedRef, dgst)
|
||||
if err == nil {
|
||||
return img.String()
|
||||
return reference.FamiliarString(img)
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// updateServicePlatforms updates the Platforms in swarm.Placement to list
|
||||
// all compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct
|
||||
func updateServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
// imageWithTagString takes an image string, and returns a tagged image
|
||||
// string, adding a 'latest' tag if one was not provided. It returns an
|
||||
// empty string if a canonical reference was provided
|
||||
func imageWithTagString(image string) string {
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
return reference.FamiliarString(reference.TagNameOnly(namedRef))
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// setServicePlatforms sets Platforms in swarm.Placement to list all
|
||||
// compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct.
|
||||
func setServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
if placement == nil {
|
||||
placement = &swarm.Placement{}
|
||||
}
|
||||
// reset any existing listed platforms
|
||||
placement.Platforms = []swarm.Platform{}
|
||||
for _, p := range distributionInspect.Platforms {
|
||||
placement.Platforms = append(placement.Platforms, swarm.Platform{
|
||||
Architecture: p.Architecture,
|
||||
|
||||
10
components/cli/vendor/github.com/docker/docker/client/service_update.go
generated
vendored
10
components/cli/vendor/github.com/docker/docker/client/service_update.go
generated
vendored
@ -35,6 +35,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
|
||||
|
||||
query.Set("version", strconv.FormatUint(version.Index, 10))
|
||||
|
||||
// ensure that the image is tagged
|
||||
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = taggedImg
|
||||
}
|
||||
|
||||
// Contact the registry to retrieve digest and platform information
|
||||
// This happens only when the image has changed
|
||||
if options.QueryRegistry {
|
||||
@ -42,12 +47,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
|
||||
distErr = err
|
||||
if err == nil {
|
||||
// now pin by digest if the image doesn't already contain a digest
|
||||
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
|
||||
if img != "" {
|
||||
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = img
|
||||
}
|
||||
// add platforms that are compatible with the service
|
||||
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
2
components/cli/vendor/github.com/docker/docker/pkg/term/termios_bsd.go
generated
vendored
2
components/cli/vendor/github.com/docker/docker/pkg/term/termios_bsd.go
generated
vendored
@ -27,7 +27,7 @@ func MakeRaw(fd uintptr) (*State, error) {
|
||||
|
||||
newState := oldState.termios
|
||||
newState.Iflag &^= (unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON)
|
||||
newState.Oflag &^= unix.OPOST
|
||||
newState.Lflag &^= (unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN)
|
||||
newState.Cflag &^= (unix.CSIZE | unix.PARENB)
|
||||
newState.Cflag |= unix.CS8
|
||||
|
||||
2
components/cli/vendor/github.com/docker/docker/pkg/term/termios_linux.go
generated
vendored
2
components/cli/vendor/github.com/docker/docker/pkg/term/termios_linux.go
generated
vendored
@ -26,7 +26,7 @@ func MakeRaw(fd uintptr) (*State, error) {
|
||||
newState := oldState.termios
|
||||
|
||||
newState.Iflag &^= (unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON)
|
||||
newState.Oflag &^= unix.OPOST
|
||||
newState.Lflag &^= (unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN)
|
||||
newState.Cflag &^= (unix.CSIZE | unix.PARENB)
|
||||
newState.Cflag |= unix.CS8
|
||||
|
||||
2
components/cli/vendor/github.com/docker/docker/vendor.conf
generated
vendored
2
components/cli/vendor/github.com/docker/docker/vendor.conf
generated
vendored
@ -61,7 +61,7 @@ google.golang.org/grpc v1.0.4
|
||||
github.com/miekg/pkcs11 df8ae6ca730422dba20c768ff38ef7d79077a59f
|
||||
|
||||
# When updating, also update RUNC_COMMIT in hack/dockerfile/binaries-commits accordingly
|
||||
github.com/opencontainers/runc 2d41c047c83e09a6d61d464906feb2a2f3c52aa4 https://github.com/docker/runc
|
||||
github.com/opencontainers/runtime-spec v1.0.0-rc5 # specs
|
||||
github.com/opencontainers/image-spec f03dbe35d449c54915d235f1a3cf8f585a24babe
|
||||
|
||||
|
||||
4
components/cli/vendor/github.com/docker/docker/volume/volume.go
generated
vendored
4
components/cli/vendor/github.com/docker/docker/volume/volume.go
generated
vendored
@ -151,6 +151,10 @@ func (m *MountPoint) Setup(mountLabel string, rootUID, rootGID int) (path string
|
||||
if err == nil {
|
||||
if label.RelabelNeeded(m.Mode) {
|
||||
if err = label.Relabel(m.Source, mountLabel, label.IsShared(m.Mode)); err != nil {
|
||||
if err == syscall.ENOTSUP {
|
||||
err = nil
|
||||
return
|
||||
}
|
||||
path = ""
|
||||
err = errors.Wrapf(err, "error setting label on mount source '%s'", m.Source)
|
||||
return
|
||||
|
||||
@ -5,6 +5,92 @@ information on the list of deprecated flags and APIs please have a look at
|
||||
https://docs.docker.com/engine/deprecated/ where target removal dates can also
|
||||
be found.
|
||||
|
||||
## 17.05.0-ce (2017-05-04)
|
||||
|
||||
### Builder
|
||||
|
||||
+ Add multi-stage build support [#31257](https://github.com/docker/docker/pull/31257) [#32063](https://github.com/docker/docker/pull/32063)
|
||||
+ Allow using build-time args (`ARG`) in `FROM` [#31352](https://github.com/docker/docker/pull/31352)
|
||||
+ Add an option for specifying build target [#32496](https://github.com/docker/docker/pull/32496)
|
||||
* Accept `-f -` to read Dockerfile from `stdin`, but use local context for building [#31236](https://github.com/docker/docker/pull/31236)
|
||||
* The values of default build time arguments (e.g `HTTP_PROXY`) are no longer displayed in docker image history unless a corresponding `ARG` instruction is written in the Dockerfile. [#31584](https://github.com/docker/docker/pull/31584)
|
||||
- Fix setting command if a custom shell is used in a parent image [#32236](https://github.com/docker/docker/pull/32236)
|
||||
- Fix `docker build --label` when the label includes single quotes and a space [#31750](https://github.com/docker/docker/pull/31750)
|
||||
|
||||
### Client
|
||||
|
||||
* Add `--mount` flag to `docker run` and `docker create` [#32251](https://github.com/docker/docker/pull/32251)
|
||||
* Add `--type=secret` to `docker inspect` [#32124](https://github.com/docker/docker/pull/32124)
|
||||
* Add `--format` option to `docker secret ls` [#31552](https://github.com/docker/docker/pull/31552)
|
||||
* Add `--filter` option to `docker secret ls` [#30810](https://github.com/docker/docker/pull/30810)
|
||||
* Add `--filter scope=<swarm|local>` to `docker network ls` [#31529](https://github.com/docker/docker/pull/31529)
|
||||
* Add `--cpus` support to `docker update` [#31148](https://github.com/docker/docker/pull/31148)
|
||||
* Add label filter to `docker system prune` and other `prune` commands [#30740](https://github.com/docker/docker/pull/30740)
|
||||
* `docker stack rm` now accepts multiple stacks as input [#32110](https://github.com/docker/docker/pull/32110)
|
||||
* Improve `docker version --format` option when the client has downgraded the API version [#31022](https://github.com/docker/docker/pull/31022)
|
||||
* Prompt when using an encrypted client certificate to connect to a docker daemon [#31364](https://github.com/docker/docker/pull/31364)
|
||||
* Display created tags on successful `docker build` [#32077](https://github.com/docker/docker/pull/32077)
|
||||
* Cleanup compose convert error messages [#32087](https://github.com/moby/moby/pull/32087)
|
||||
|
||||
### Contrib
|
||||
|
||||
+ Add support for building docker debs for Ubuntu 17.04 Zesty on amd64 [#32435](https://github.com/docker/docker/pull/32435)
|
||||
|
||||
### Daemon
|
||||
|
||||
- Fix `--api-cors-header` being ignored if `--api-enable-cors` is not set [#32174](https://github.com/docker/docker/pull/32174)
|
||||
- Cleanup docker tmp dir on start [#31741](https://github.com/docker/docker/pull/31741)
|
||||
- Deprecate `--graph` flag in favor of `--data-root` [#28696](https://github.com/docker/docker/pull/28696)
|
||||
|
||||
### Logging
|
||||
|
||||
+ Add support for logging driver plugins [#28403](https://github.com/docker/docker/pull/28403)
|
||||
* Add support for showing logs of individual tasks to `docker service logs`, and add `/task/{id}/logs` REST endpoint [#32015](https://github.com/docker/docker/pull/32015)
|
||||
* Add `--log-opt env-regex` option to match environment variables using a regular expression [#27565](https://github.com/docker/docker/pull/27565)
|
||||
|
||||
### Networking
|
||||
|
||||
+ Allow users to replace and customize the ingress network [#31714](https://github.com/docker/docker/pull/31714)
|
||||
- Fix UDP traffic in containers not working after the container is restarted [#32505](https://github.com/docker/docker/pull/32505)
|
||||
- Fix files being written to `/var/lib/docker` if a different data-root is set [#32505](https://github.com/docker/docker/pull/32505)
|
||||
|
||||
### Runtime
|
||||
|
||||
- Ensure health probe is stopped when a container exits [#32274](https://github.com/docker/docker/pull/32274)
|
||||
|
||||
### Swarm Mode
|
||||
|
||||
+ Add update/rollback order for services (`--update-order` / `--rollback-order`) [#30261](https://github.com/docker/docker/pull/30261)
|
||||
+ Add support for synchronous `service create` and `service update` [#31144](https://github.com/docker/docker/pull/31144)
|
||||
+ Add support for "grace periods" on healthchecks through the `HEALTHCHECK --start-period` and `--health-start-period` flag to
|
||||
`docker service create`, `docker service update`, `docker create`, and `docker run` to support containers with an initial startup
|
||||
time [#28938](https://github.com/docker/docker/pull/28938)
|
||||
* `docker service create` now omits fields that are not specified by the user, when possible. This will allow defaults to be applied inside the manager [#32284](https://github.com/docker/docker/pull/32284)
|
||||
* `docker service inspect` now shows default values for fields that are not specified by the user [#32284](https://github.com/docker/docker/pull/32284)
|
||||
* Move `docker service logs` out of experimental [#32462](https://github.com/docker/docker/pull/32462)
|
||||
* Add support for Credential Spec and SELinux to services to the API [#32339](https://github.com/docker/docker/pull/32339)
|
||||
* Add `--entrypoint` flag to `docker service create` and `docker service update` [#29228](https://github.com/docker/docker/pull/29228)
|
||||
* Add `--network-add` and `--network-rm` to `docker service update` [#32062](https://github.com/docker/docker/pull/32062)
|
||||
* Add `--credential-spec` flag to `docker service create` and `docker service update` [#32339](https://github.com/docker/docker/pull/32339)
|
||||
* Add `--filter mode=<global|replicated>` to `docker service ls` [#31538](https://github.com/docker/docker/pull/31538)
|
||||
* Resolve network IDs on the client side, instead of in the daemon when creating services [#32062](https://github.com/docker/docker/pull/32062)
|
||||
* Add `--format` option to `docker node ls` [#30424](https://github.com/docker/docker/pull/30424)
|
||||
* Add `--prune` option to `docker stack deploy` to remove services that are no longer defined in the docker-compose file [#31302](https://github.com/docker/docker/pull/31302)
|
||||
* Add `PORTS` column for `docker service ls` when using `ingress` mode [#30813](https://github.com/docker/docker/pull/30813)
|
||||
- Fix unnescessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
|
||||
- Fix `docker stack deploy` not supporting `endpoint_mode` when deploying from a docker compose file [#32333](https://github.com/docker/docker/pull/32333)
|
||||
- Proceed with startup if cluster component cannot be created to allow recovering from a broken swarm setup [#31631](https://github.com/docker/docker/pull/31631)
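
As referenced in the grace-period entry above, here is a minimal sketch of the equivalent setting at the API-type level, assuming the Go types in this tree (the `StartPeriod` field requires API 1.29 or later); the probe command and durations are illustrative.

```go
package main

import (
	"fmt"
	"time"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// Equivalent of `--health-start-period=30s`: probe failures inside the
	// start period are not counted against Retries while the service boots.
	hc := container.HealthConfig{
		Test:        []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
		Interval:    10 * time.Second,
		Timeout:     2 * time.Second,
		StartPeriod: 30 * time.Second,
		Retries:     3,
	}
	fmt.Printf("%+v\n", hc)
}
```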

### Security

* Allow setting SELinux type or MCS labels when using `--ipc=container:` or `--ipc=host` [#30652](https://github.com/docker/docker/pull/30652)

### Deprecation

- Deprecate the `--api-enable-cors` daemon flag. This flag was marked deprecated in Docker 1.6.0 but was not listed in deprecated features [#32352](https://github.com/docker/docker/pull/32352)
- Remove Ubuntu 12.04 (Precise Pangolin) as a supported platform. Ubuntu 12.04 is EOL and no longer receives updates [#32520](https://github.com/docker/docker/pull/32520)

## 17.04.0-ce (2017-04-05)

### Builder

@ -90,17 +90,6 @@ RUN cd /usr/local/lvm2 \
|
||||
&& make install_device-mapper
|
||||
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
|
||||
|
||||
# Configure the container for OSX cross compilation
|
||||
ENV OSX_SDK MacOSX10.11.sdk
|
||||
ENV OSX_CROSS_COMMIT a9317c18a3a457ca0a657f08cc4d0d43c6cf8953
|
||||
RUN set -x \
|
||||
&& export OSXCROSS_PATH="/osxcross" \
|
||||
&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
|
||||
&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
|
||||
&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
|
||||
&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
|
||||
ENV PATH /osxcross/target/bin:$PATH
|
||||
|
||||
# Install seccomp: the version shipped upstream is too old
|
||||
ENV SECCOMP_VERSION 2.3.2
|
||||
RUN set -x \
|
||||
|
||||
@ -1 +1 @@
17.06.0-ce-rc2
17.06.1-ce-rc1
|
||||
|
||||
@ -41,7 +41,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
|
||||
|
||||
var postForm map[string]interface{}
|
||||
if err := json.Unmarshal(b, &postForm); err == nil {
|
||||
maskSecretKeys(postForm)
|
||||
maskSecretKeys(postForm, r.RequestURI)
|
||||
formStr, errMarshal := json.Marshal(postForm)
|
||||
if errMarshal == nil {
|
||||
logrus.Debugf("form data: %s", string(formStr))
|
||||
@ -54,23 +54,41 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
|
||||
}
|
||||
}
|
||||
|
||||
func maskSecretKeys(inp interface{}) {
|
||||
func maskSecretKeys(inp interface{}, path string) {
|
||||
// Remove any query string from the path
|
||||
idx := strings.Index(path, "?")
|
||||
if idx != -1 {
|
||||
path = path[:idx]
|
||||
}
|
||||
// Remove trailing / characters
|
||||
path = strings.TrimRight(path, "/")
|
||||
|
||||
if arr, ok := inp.([]interface{}); ok {
|
||||
for _, f := range arr {
|
||||
maskSecretKeys(f)
|
||||
maskSecretKeys(f, path)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
if form, ok := inp.(map[string]interface{}); ok {
|
||||
loop0:
|
||||
for k, v := range form {
|
||||
for _, m := range []string{"password", "secret", "jointoken", "unlockkey"} {
|
||||
for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
|
||||
if strings.EqualFold(m, k) {
|
||||
form[k] = "*****"
|
||||
continue loop0
|
||||
}
|
||||
}
|
||||
maskSecretKeys(v)
|
||||
maskSecretKeys(v, path)
|
||||
}
|
||||
|
||||
// Route-specific redactions
|
||||
if strings.HasSuffix(path, "/secrets/create") {
|
||||
for k := range form {
|
||||
if k == "Data" {
|
||||
form[k] = "*****"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
components/engine/api/server/middleware/debug_test.go (new file, 58 lines)
@ -0,0 +1,58 @@
|
||||
package middleware
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestMaskSecretKeys(t *testing.T) {
|
||||
tests := []struct {
|
||||
path string
|
||||
input map[string]interface{}
|
||||
expected map[string]interface{}
|
||||
}{
|
||||
{
|
||||
path: "/v1.30/secrets/create",
|
||||
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
},
|
||||
{
|
||||
path: "/v1.30/secrets/create//",
|
||||
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
},
|
||||
|
||||
{
|
||||
path: "/secrets/create?key=val",
|
||||
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
|
||||
},
|
||||
{
|
||||
path: "/v1.30/some/other/path",
|
||||
input: map[string]interface{}{
|
||||
"password": "pass",
|
||||
"other": map[string]interface{}{
|
||||
"secret": "secret",
|
||||
"jointoken": "jointoken",
|
||||
"unlockkey": "unlockkey",
|
||||
"signingcakey": "signingcakey",
|
||||
},
|
||||
},
|
||||
expected: map[string]interface{}{
|
||||
"password": "*****",
|
||||
"other": map[string]interface{}{
|
||||
"secret": "*****",
|
||||
"jointoken": "*****",
|
||||
"unlockkey": "*****",
|
||||
"signingcakey": "*****",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, testcase := range tests {
|
||||
maskSecretKeys(testcase.input, testcase.path)
|
||||
assert.Equal(t, testcase.expected, testcase.input)
|
||||
}
|
||||
}
|
||||
@ -410,6 +410,9 @@ func buildIpamResources(r *types.NetworkResource, nwInfo libnetwork.NetworkInfo)
|
||||
|
||||
if !hasIpv6Conf {
|
||||
for _, ip6Info := range ipv6Info {
|
||||
if ip6Info.IPAMData.Pool == nil {
|
||||
continue
|
||||
}
|
||||
iData := network.IPAMConfig{}
|
||||
iData.Subnet = ip6Info.IPAMData.Pool.String()
|
||||
iData.Gateway = ip6Info.IPAMData.Gateway.String()
|
||||
|
||||
@ -7352,6 +7352,16 @@ paths:
|
||||
AdvertiseAddr:
|
||||
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
|
||||
type: "string"
|
||||
DataPathAddr:
|
||||
description: |
|
||||
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
|
||||
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
|
||||
is used.
|
||||
|
||||
The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
|
||||
nodes in order to reach the containers running on this node. Using this parameter it is possible to
|
||||
separate the container data traffic from the management traffic of the cluster.
|
||||
type: "string"
|
||||
ForceNewCluster:
|
||||
description: "Force creation of a new swarm."
|
||||
type: "boolean"
|
||||
@ -7400,6 +7410,17 @@ paths:
|
||||
type: "string"
|
||||
AdvertiseAddr:
|
||||
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
|
||||
type: "string"
|
||||
DataPathAddr:
|
||||
description: |
|
||||
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
|
||||
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
|
||||
is used.
|
||||
|
||||
The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
|
||||
nodes in order to reach the containers running on this node. Using this parameter it is possible to
|
||||
separate the container data traffic from the management traffic of the cluster.
|
||||
|
||||
type: "string"
|
||||
RemoteAddrs:
|
||||
description: "Addresses of manager nodes already participating in the swarm."
|
||||
|
||||
@ -30,9 +30,10 @@ type pathCache interface {
|
||||
// copyInfo is a data object which stores the metadata about each source file in
|
||||
// a copyInstruction
|
||||
type copyInfo struct {
|
||||
root string
|
||||
path string
|
||||
hash string
|
||||
root string
|
||||
path string
|
||||
hash string
|
||||
noDecompress bool
|
||||
}
|
||||
|
||||
func newCopyInfoFromSource(source builder.Source, path string, hash string) copyInfo {
|
||||
@ -118,7 +119,9 @@ func (o *copier) getCopyInfoForSourcePath(orig string) ([]copyInfo, error) {
|
||||
o.tmpPaths = append(o.tmpPaths, remote.Root())
|
||||
|
||||
hash, err := remote.Hash(path)
|
||||
return newCopyInfos(newCopyInfoFromSource(remote, path, hash)), err
|
||||
ci := newCopyInfoFromSource(remote, path, hash)
|
||||
ci.noDecompress = true // data from http shouldn't be extracted even on ADD
|
||||
return newCopyInfos(ci), err
|
||||
}
|
||||
|
||||
// Cleanup removes any temporary directories created as part of downloading
|
||||
|
||||
@ -156,6 +156,11 @@ func add(req dispatchRequest) error {
|
||||
return err
|
||||
}
|
||||
copyInstruction.allowLocalDecompression = true
|
||||
for _, ci := range copyInstruction.infos {
|
||||
if ci.noDecompress {
|
||||
copyInstruction.allowLocalDecompression = false
|
||||
}
|
||||
}
|
||||
|
||||
return req.builder.performCopy(req.state, copyInstruction)
|
||||
}
|
||||
|
||||
@ -171,11 +171,9 @@ func (b *Builder) dispatch(options dispatchOptions) (*dispatchState, error) {
|
||||
buildsFailed.WithValues(metricsUnknownInstructionError).Inc()
|
||||
return nil, fmt.Errorf("unknown instruction: %s", upperCasedCmd)
|
||||
}
|
||||
if err := f(newDispatchRequestFromOptions(options, b, args)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
options.state.updateRunConfig()
|
||||
return options.state, nil
|
||||
err = f(newDispatchRequestFromOptions(options, b, args))
|
||||
return options.state, err
|
||||
}
|
||||
|
||||
type dispatchOptions struct {
|
||||
|
||||
@ -41,6 +41,7 @@ func Detect(config backend.BuildConfig) (remote builder.Source, dockerfile *pars
|
||||
}
|
||||
|
||||
func newArchiveRemote(rc io.ReadCloser, dockerfilePath string) (builder.Source, *parser.Result, error) {
|
||||
defer rc.Close()
|
||||
c, err := MakeTarSumContext(rc)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
|
||||
@ -20,13 +20,15 @@ func (cli *Client) Ping(ctx context.Context) (types.Ping, error) {
|
||||
}
|
||||
defer ensureReaderClosed(serverResp)
|
||||
|
||||
ping.APIVersion = serverResp.header.Get("API-Version")
|
||||
if serverResp.header != nil {
|
||||
ping.APIVersion = serverResp.header.Get("API-Version")
|
||||
|
||||
if serverResp.header.Get("Docker-Experimental") == "true" {
|
||||
ping.Experimental = true
|
||||
if serverResp.header.Get("Docker-Experimental") == "true" {
|
||||
ping.Experimental = true
|
||||
}
|
||||
ping.OSType = serverResp.header.Get("OSType")
|
||||
}
|
||||
|
||||
ping.OSType = serverResp.header.Get("OSType")
|
||||
|
||||
return ping, nil
|
||||
err = cli.checkResponseErr(serverResp)
|
||||
return ping, err
|
||||
}
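
A minimal sketch of what this change buys a caller, assuming the Go client from this tree: header-derived fields are now filled in even when the daemon replies with a non-2xx status, and the error is returned alongside them (this mirrors the `TestPingFail` case below).

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}
	// Even if err is non-nil (e.g. a 500 from the daemon), APIVersion, OSType
	// and Experimental are populated from the response headers when present.
	ping, err := cli.Ping(context.Background())
	if err != nil {
		fmt.Println("ping returned an error:", err)
	}
	fmt.Printf("API version: %q, OS type: %q, experimental: %v\n",
		ping.APIVersion, ping.OSType, ping.Experimental)
}
```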
|
||||
|
||||
components/engine/client/ping_test.go (new file, 82 lines)
@ -0,0 +1,82 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// TestPingFail tests that when a server sends a non-successful response that we
|
||||
// can still grab API details, when set.
|
||||
// Some of this is just exercising the code paths to make sure there are no
|
||||
// panics.
|
||||
func TestPingFail(t *testing.T) {
|
||||
var withHeader bool
|
||||
client := &Client{
|
||||
client: newMockClient(func(req *http.Request) (*http.Response, error) {
|
||||
resp := &http.Response{StatusCode: http.StatusInternalServerError}
|
||||
if withHeader {
|
||||
resp.Header = http.Header{}
|
||||
resp.Header.Set("API-Version", "awesome")
|
||||
resp.Header.Set("Docker-Experimental", "true")
|
||||
}
|
||||
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
|
||||
return resp, nil
|
||||
}),
|
||||
}
|
||||
|
||||
ping, err := client.Ping(context.Background())
|
||||
assert.Error(t, err)
|
||||
assert.Equal(t, false, ping.Experimental)
|
||||
assert.Equal(t, "", ping.APIVersion)
|
||||
|
||||
withHeader = true
|
||||
ping2, err := client.Ping(context.Background())
|
||||
assert.Error(t, err)
|
||||
assert.Equal(t, true, ping2.Experimental)
|
||||
assert.Equal(t, "awesome", ping2.APIVersion)
|
||||
}
|
||||
|
||||
// TestPingWithError tests the case where there is a protocol error in the ping.
|
||||
// This test is mostly just testing that there are no panics in this code path.
|
||||
func TestPingWithError(t *testing.T) {
|
||||
client := &Client{
|
||||
client: newMockClient(func(req *http.Request) (*http.Response, error) {
|
||||
resp := &http.Response{StatusCode: http.StatusInternalServerError}
|
||||
resp.Header = http.Header{}
|
||||
resp.Header.Set("API-Version", "awesome")
|
||||
resp.Header.Set("Docker-Experimental", "true")
|
||||
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
|
||||
return resp, errors.New("some error")
|
||||
}),
|
||||
}
|
||||
|
||||
ping, err := client.Ping(context.Background())
|
||||
assert.Error(t, err)
|
||||
assert.Equal(t, false, ping.Experimental)
|
||||
assert.Equal(t, "", ping.APIVersion)
|
||||
}
|
||||
|
||||
// TestPingSuccess tests that we are able to get the expected API headers/ping
|
||||
// details on success.
|
||||
func TestPingSuccess(t *testing.T) {
|
||||
client := &Client{
|
||||
client: newMockClient(func(req *http.Request) (*http.Response, error) {
|
||||
resp := &http.Response{StatusCode: http.StatusInternalServerError}
|
||||
resp.Header = http.Header{}
|
||||
resp.Header.Set("API-Version", "awesome")
|
||||
resp.Header.Set("Docker-Experimental", "true")
|
||||
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
|
||||
return resp, nil
|
||||
}),
|
||||
}
|
||||
ping, err := client.Ping(context.Background())
|
||||
assert.Error(t, err)
|
||||
assert.Equal(t, true, ping.Experimental)
|
||||
assert.Equal(t, "awesome", ping.APIVersion)
|
||||
}
|
||||
@ -24,6 +24,7 @@ type serverResponse struct {
|
||||
body io.ReadCloser
|
||||
header http.Header
|
||||
statusCode int
|
||||
reqURL *url.URL
|
||||
}
|
||||
|
||||
// head sends an http request to the docker API using the method HEAD.
|
||||
@ -118,11 +119,18 @@ func (cli *Client) sendRequest(ctx context.Context, method, path string, query u
|
||||
if err != nil {
|
||||
return serverResponse{}, err
|
||||
}
|
||||
return cli.doRequest(ctx, req)
|
||||
resp, err := cli.doRequest(ctx, req)
|
||||
if err != nil {
|
||||
return resp, err
|
||||
}
|
||||
if err := cli.checkResponseErr(resp); err != nil {
|
||||
return resp, err
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResponse, error) {
|
||||
serverResp := serverResponse{statusCode: -1}
|
||||
serverResp := serverResponse{statusCode: -1, reqURL: req.URL}
|
||||
|
||||
resp, err := ctxhttp.Do(ctx, cli.client, req)
|
||||
if err != nil {
|
||||
@ -179,37 +187,44 @@ func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResp
|
||||
|
||||
if resp != nil {
|
||||
serverResp.statusCode = resp.StatusCode
|
||||
serverResp.body = resp.Body
|
||||
serverResp.header = resp.Header
|
||||
}
|
||||
|
||||
if serverResp.statusCode < 200 || serverResp.statusCode >= 400 {
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return serverResp, err
|
||||
}
|
||||
if len(body) == 0 {
|
||||
return serverResp, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), req.URL)
|
||||
}
|
||||
|
||||
var errorMessage string
|
||||
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) &&
|
||||
resp.Header.Get("Content-Type") == "application/json" {
|
||||
var errorResponse types.ErrorResponse
|
||||
if err := json.Unmarshal(body, &errorResponse); err != nil {
|
||||
return serverResp, fmt.Errorf("Error reading JSON: %v", err)
|
||||
}
|
||||
errorMessage = errorResponse.Message
|
||||
} else {
|
||||
errorMessage = string(body)
|
||||
}
|
||||
|
||||
return serverResp, fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
|
||||
}
|
||||
|
||||
serverResp.body = resp.Body
|
||||
serverResp.header = resp.Header
|
||||
return serverResp, nil
|
||||
}
|
||||
|
||||
func (cli *Client) checkResponseErr(serverResp serverResponse) error {
|
||||
if serverResp.statusCode >= 200 && serverResp.statusCode < 400 {
|
||||
return nil
|
||||
}
|
||||
|
||||
body, err := ioutil.ReadAll(serverResp.body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(body) == 0 {
|
||||
return fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
|
||||
}
|
||||
|
||||
var ct string
|
||||
if serverResp.header != nil {
|
||||
ct = serverResp.header.Get("Content-Type")
|
||||
}
|
||||
|
||||
var errorMessage string
|
||||
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) && ct == "application/json" {
|
||||
var errorResponse types.ErrorResponse
|
||||
if err := json.Unmarshal(body, &errorResponse); err != nil {
|
||||
return fmt.Errorf("Error reading JSON: %v", err)
|
||||
}
|
||||
errorMessage = errorResponse.Message
|
||||
} else {
|
||||
errorMessage = string(body)
|
||||
}
|
||||
|
||||
return fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
|
||||
}
|
||||
|
||||
func (cli *Client) addHeaders(req *http.Request, headers headers) *http.Request {
|
||||
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
|
||||
// then the user can't change OUR headers
|
||||
@ -239,9 +254,9 @@ func encodeData(data interface{}) (*bytes.Buffer, error) {
|
||||
}
|
||||
|
||||
func ensureReaderClosed(response serverResponse) {
|
||||
if body := response.body; body != nil {
|
||||
if response.body != nil {
|
||||
// Drain up to 512 bytes and close the body to let the Transport reuse the connection
|
||||
io.CopyN(ioutil.Discard, body, 512)
|
||||
io.CopyN(ioutil.Discard, response.body, 512)
|
||||
response.body.Close()
|
||||
}
|
||||
}
|
||||
|
||||
@ -24,18 +24,22 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
|
||||
}
|
||||
|
||||
// ensure that the image is tagged
|
||||
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = taggedImg
|
||||
}
|
||||
|
||||
// Contact the registry to retrieve digest and platform information
|
||||
if options.QueryRegistry {
|
||||
distributionInspect, err := cli.DistributionInspect(ctx, service.TaskTemplate.ContainerSpec.Image, options.EncodedRegistryAuth)
|
||||
distErr = err
|
||||
if err == nil {
|
||||
// now pin by digest if the image doesn't already contain a digest
|
||||
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
|
||||
if img != "" {
|
||||
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = img
|
||||
}
|
||||
// add platforms that are compatible with the service
|
||||
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
}
|
||||
}
|
||||
var response types.ServiceCreateResponse
|
||||
@ -55,29 +59,42 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
}
|
||||
|
||||
// imageWithDigestString takes an image string and a digest, and updates
|
||||
// the image string if it didn't originally contain a digest. It assumes
|
||||
// that the image string is not an image ID
|
||||
// the image string if it didn't originally contain a digest. It returns
|
||||
// an empty string if there are no updates.
|
||||
func imageWithDigestString(image string, dgst digest.Digest) string {
|
||||
ref, err := reference.ParseAnyReference(image)
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
if _, isCanonical := ref.(reference.Canonical); !isCanonical {
|
||||
namedRef, _ := ref.(reference.Named)
|
||||
if _, isCanonical := namedRef.(reference.Canonical); !isCanonical {
|
||||
// ensure that image gets a default tag if none is provided
|
||||
img, err := reference.WithDigest(namedRef, dgst)
|
||||
if err == nil {
|
||||
return img.String()
|
||||
return reference.FamiliarString(img)
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// updateServicePlatforms updates the Platforms in swarm.Placement to list
|
||||
// all compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct
|
||||
func updateServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
// imageWithTagString takes an image string, and returns a tagged image
|
||||
// string, adding a 'latest' tag if one was not provided. It returns an
|
||||
// empty string if a canonical reference was provided
|
||||
func imageWithTagString(image string) string {
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
return reference.FamiliarString(reference.TagNameOnly(namedRef))
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// setServicePlatforms sets Platforms in swarm.Placement to list all
|
||||
// compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct.
|
||||
func setServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
if placement == nil {
|
||||
placement = &swarm.Placement{}
|
||||
}
|
||||
// reset any existing listed platforms
|
||||
placement.Platforms = []swarm.Platform{}
|
||||
for _, p := range distributionInspect.Platforms {
|
||||
placement.Platforms = append(placement.Platforms, swarm.Platform{
|
||||
Architecture: p.Architecture,
|
||||
|
||||
@ -13,6 +13,7 @@ import (
|
||||
"github.com/docker/docker/api/types"
|
||||
registrytypes "github.com/docker/docker/api/types/registry"
|
||||
"github.com/docker/docker/api/types/swarm"
|
||||
"github.com/opencontainers/go-digest"
|
||||
"github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
@ -121,3 +122,92 @@ func TestServiceCreateCompatiblePlatforms(t *testing.T) {
|
||||
t.Fatalf("expected `service_amd64`, got %s", r.ID)
|
||||
}
|
||||
}
|
||||
|
||||
func TestServiceCreateDigestPinning(t *testing.T) {
|
||||
dgst := "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
|
||||
dgstAlt := "sha256:37ffbf3f7497c07584dc9637ffbf3f7497c0758c0537ffbf3f7497c0c88e2bb7"
|
||||
serviceCreateImage := ""
|
||||
pinByDigestTests := []struct {
|
||||
img string // input image provided by the user
|
||||
expected string // expected image after digest pinning
|
||||
}{
|
||||
// default registry returns familiar string
|
||||
{"docker.io/library/alpine", "alpine:latest@" + dgst},
|
||||
// provided tag is preserved and digest added
|
||||
{"alpine:edge", "alpine:edge@" + dgst},
|
||||
// image with provided alternative digest remains unchanged
|
||||
{"alpine@" + dgstAlt, "alpine@" + dgstAlt},
|
||||
// image with provided tag and alternative digest remains unchanged
|
||||
{"alpine:edge@" + dgstAlt, "alpine:edge@" + dgstAlt},
|
||||
// image on alternative registry does not result in familiar string
|
||||
{"alternate.registry/library/alpine", "alternate.registry/library/alpine:latest@" + dgst},
|
||||
// unresolvable image does not get a digest
|
||||
{"cannotresolve", "cannotresolve:latest"},
|
||||
}
|
||||
|
||||
client := &Client{
|
||||
client: newMockClient(func(req *http.Request) (*http.Response, error) {
|
||||
if strings.HasPrefix(req.URL.Path, "/services/create") {
|
||||
// reset and set image received by the service create endpoint
|
||||
serviceCreateImage = ""
|
||||
var service swarm.ServiceSpec
|
||||
if err := json.NewDecoder(req.Body).Decode(&service); err != nil {
|
||||
return nil, fmt.Errorf("could not parse service create request")
|
||||
}
|
||||
serviceCreateImage = service.TaskTemplate.ContainerSpec.Image
|
||||
|
||||
b, err := json.Marshal(types.ServiceCreateResponse{
|
||||
ID: "service_id",
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &http.Response{
|
||||
StatusCode: http.StatusOK,
|
||||
Body: ioutil.NopCloser(bytes.NewReader(b)),
|
||||
}, nil
|
||||
} else if strings.HasPrefix(req.URL.Path, "/distribution/cannotresolve") {
|
||||
// unresolvable image
|
||||
return nil, fmt.Errorf("cannot resolve image")
|
||||
} else if strings.HasPrefix(req.URL.Path, "/distribution/") {
|
||||
// resolvable images
|
||||
b, err := json.Marshal(registrytypes.DistributionInspect{
|
||||
Descriptor: v1.Descriptor{
|
||||
Digest: digest.Digest(dgst),
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &http.Response{
|
||||
StatusCode: http.StatusOK,
|
||||
Body: ioutil.NopCloser(bytes.NewReader(b)),
|
||||
}, nil
|
||||
}
|
||||
return nil, fmt.Errorf("unexpected URL '%s'", req.URL.Path)
|
||||
}),
|
||||
}
|
||||
|
||||
// run pin by digest tests
|
||||
for _, p := range pinByDigestTests {
|
||||
r, err := client.ServiceCreate(context.Background(), swarm.ServiceSpec{
|
||||
TaskTemplate: swarm.TaskSpec{
|
||||
ContainerSpec: swarm.ContainerSpec{
|
||||
Image: p.img,
|
||||
},
|
||||
},
|
||||
}, types.ServiceCreateOptions{QueryRegistry: true})
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if r.ID != "service_id" {
|
||||
t.Fatalf("expected `service_id`, got %s", r.ID)
|
||||
}
|
||||
|
||||
if p.expected != serviceCreateImage {
|
||||
t.Fatalf("expected image %s, got %s", p.expected, serviceCreateImage)
|
||||
}
|
||||
}
|
||||
}
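
The pinning behaviour exercised above comes down to a couple of calls into the `reference` package; below is a small illustrative sketch of the same tag-defaulting and digest-pinning steps (the digest literal is the one used by the test and has no special meaning).

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
	"github.com/opencontainers/go-digest"
)

func main() {
	dgst := digest.Digest("sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96")

	// Normalize the user-supplied image and default the tag to :latest.
	named, err := reference.ParseNormalizedNamed("alpine")
	if err != nil {
		panic(err)
	}
	tagged := reference.TagNameOnly(named)
	fmt.Println(reference.FamiliarString(tagged)) // alpine:latest

	// Pin by digest only if the reference is not already canonical.
	if _, isCanonical := named.(reference.Canonical); !isCanonical {
		canonical, err := reference.WithDigest(tagged, dgst)
		if err != nil {
			panic(err)
		}
		fmt.Println(reference.FamiliarString(canonical)) // alpine:latest@sha256:c0537ff6...
	}
}
```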
|
||||
|
||||
@ -35,6 +35,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
|
||||
|
||||
query.Set("version", strconv.FormatUint(version.Index, 10))
|
||||
|
||||
// ensure that the image is tagged
|
||||
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = taggedImg
|
||||
}
|
||||
|
||||
// Contact the registry to retrieve digest and platform information
|
||||
// This happens only when the image has changed
|
||||
if options.QueryRegistry {
|
||||
@ -42,12 +47,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
|
||||
distErr = err
|
||||
if err == nil {
|
||||
// now pin by digest if the image doesn't already contain a digest
|
||||
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
|
||||
if img != "" {
|
||||
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = img
|
||||
}
|
||||
// add platforms that are compatible with the service
|
||||
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@ -155,6 +155,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
|
||||
api := apiserver.New(serverConfig)
|
||||
cli.api = api
|
||||
|
||||
var hosts []string
|
||||
|
||||
for i := 0; i < len(cli.Config.Hosts); i++ {
|
||||
var err error
|
||||
if cli.Config.Hosts[i], err = dopts.ParseHost(cli.Config.TLS, cli.Config.Hosts[i]); err != nil {
|
||||
@ -186,6 +188,7 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
|
||||
}
|
||||
}
|
||||
logrus.Debugf("Listener created for HTTP on %s (%s)", proto, addr)
|
||||
hosts = append(hosts, protoAddrParts[1])
|
||||
api.Accept(addr, ls...)
|
||||
}
|
||||
|
||||
@ -213,6 +216,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
|
||||
return fmt.Errorf("Error starting daemon: %v", err)
|
||||
}
|
||||
|
||||
d.StoreHosts(hosts)
|
||||
|
||||
// validate after NewDaemon has restored enabled plugins. Don't change the order.
|
||||
if err := validateAuthzPlugins(cli.Config.AuthorizationPlugins, pluginStore); err != nil {
|
||||
return fmt.Errorf("Error validating authorization plugin: %v", err)
|
||||
@ -402,8 +407,12 @@ func loadDaemonCliConfig(opts daemonOptions) (*config.Config, error) {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if conf.V2Only == false {
|
||||
logrus.Warnf(`The "disable-legacy-registry" option is deprecated and wil be removed in Docker v17.12. Interacting with legacy (v1) registries will no longer be supported in Docker v17.12"`)
|
||||
}
|
||||
|
||||
if flags.Changed("graph") {
|
||||
logrus.Warnf(`the "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
|
||||
logrus.Warnf(`The "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
|
||||
}
|
||||
|
||||
// Labels of the docker engine used to allow multiple values associated with the same key.
|
||||
|
||||
@ -102,7 +102,7 @@ func TestLoadDaemonConfigWithTrueDefaultValuesLeaveDefaults(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
|
||||
content := `{"disable-legacy-registry": true}`
|
||||
content := `{"disable-legacy-registry": false}`
|
||||
tempFile := tempfile.NewTempFile(t, "config", content)
|
||||
defer tempFile.Remove()
|
||||
|
||||
@ -110,5 +110,5 @@ func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
|
||||
loadedConfig, err := loadDaemonCliConfig(opts)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, loadedConfig)
|
||||
assert.True(t, loadedConfig.V2Only)
|
||||
assert.False(t, loadedConfig.V2Only)
|
||||
}
|
||||
|
||||
@ -7,6 +7,7 @@ import (
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/pkg/pools"
|
||||
"github.com/docker/docker/pkg/promise"
|
||||
"github.com/docker/docker/pkg/term"
|
||||
)
|
||||
@ -86,7 +87,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
|
||||
if cfg.TTY {
|
||||
_, err = copyEscapable(cfg.CStdin, cfg.Stdin, cfg.DetachKeys)
|
||||
} else {
|
||||
_, err = io.Copy(cfg.CStdin, cfg.Stdin)
|
||||
_, err = pools.Copy(cfg.CStdin, cfg.Stdin)
|
||||
}
|
||||
if err == io.ErrClosedPipe {
|
||||
err = nil
|
||||
@ -116,7 +117,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
|
||||
}
|
||||
|
||||
logrus.Debugf("attach: %s: begin", name)
|
||||
_, err := io.Copy(stream, streamPipe)
|
||||
_, err := pools.Copy(stream, streamPipe)
|
||||
if err == io.ErrClosedPipe {
|
||||
err = nil
|
||||
}
|
||||
@ -174,5 +175,5 @@ func copyEscapable(dst io.Writer, src io.ReadCloser, keys []byte) (written int64
|
||||
pr := term.NewEscapeProxy(src, keys)
|
||||
defer src.Close()
|
||||
|
||||
return io.Copy(dst, pr)
|
||||
return pools.Copy(dst, pr)
|
||||
}
|
||||
|
||||
@ -1,2 +0,0 @@
|
||||
Tianon Gravi <admwiggin@gmail.com> (@tianon)
|
||||
Jessie Frazelle <jess@docker.com> (@jfrazelle)
|
||||
File diff suppressed because it is too large
@ -1,409 +0,0 @@
|
||||
# docker.fish - docker completions for fish shell
|
||||
#
|
||||
# This file is generated by gen_docker_fish_completions.py from:
|
||||
# https://github.com/barnybug/docker-fish-completion
|
||||
#
|
||||
# To install the completions:
|
||||
# mkdir -p ~/.config/fish/completions
|
||||
# cp docker.fish ~/.config/fish/completions
|
||||
#
|
||||
# Completion supported:
|
||||
# - parameters
|
||||
# - commands
|
||||
# - containers
|
||||
# - images
|
||||
# - repositories
|
||||
|
||||
function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand'
|
||||
for i in (commandline -opc)
|
||||
if contains -- $i attach build commit cp create diff events exec export history images import info inspect kill load login logout logs pause port ps pull push rename restart rm rmi run save search start stop tag top unpause version wait stats
|
||||
return 1
|
||||
end
|
||||
end
|
||||
return 0
|
||||
end
|
||||
|
||||
function __fish_print_docker_containers --description 'Print a list of docker containers' -a select
|
||||
switch $select
|
||||
case running
|
||||
docker ps -a --no-trunc --filter status=running --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
case stopped
|
||||
docker ps -a --no-trunc --filter status=exited --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
case all
|
||||
docker ps -a --no-trunc --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
end
|
||||
end
|
||||
|
||||
function __fish_print_docker_images --description 'Print a list of docker images'
|
||||
docker images --format "{{.Repository}}:{{.Tag}}" | command grep -v '<none>'
|
||||
end
|
||||
|
||||
function __fish_print_docker_repositories --description 'Print a list of docker repositories'
|
||||
docker images --format "{{.Repository}}" | command grep -v '<none>' | command sort | command uniq
|
||||
end
|
||||
|
||||
# common options
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l api-cors-header -d "Set CORS headers in the Engine API. Default is cors disabled"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d 'Attach containers to a pre-existing network bridge'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s d -l daemon -d 'Enable daemon mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns -d 'Force Docker to use specific DNS servers'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-opt -d 'Force Docker to use specific DNS options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-search -d 'Force Docker to use specific DNS search domains'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l exec-opt -d 'Set runtime execution options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr -d 'IPv4 subnet for fixed IPs (e.g. 10.20.0.0/16)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr-v6 -d 'IPv6 subnet for fixed IPs (e.g.: 2001:a02b/48)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s G -l group -d 'Group to assign the unix socket specified by -H when running in daemon mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s g -l graph -d 'Path to use as the root of the Docker runtime'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s H -l host -d 'The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s h -l help -d 'Print usage'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l icc -d 'Allow unrestricted inter-container and Docker daemon host communication'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l insecure-registry -d 'Enable insecure communication with specified registries (no certificate verification for HTTPS and enable HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip -d 'Default IP address to use when binding container ports'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-forward -d 'Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement.'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-masq -d "Enable IP masquerading for bridge's IP range"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l iptables -d "Enable Docker's addition of iptables rules"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ipv6 -d 'Enable IPv6 networking'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s l -l log-level -d 'Set the logging level ("debug", "info", "warn", "error", "fatal")'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l label -d 'Set key=value labels to the daemon (displayed in `docker info`)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l mtu -d 'Set the containers network MTU'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s p -l pidfile -d 'Path to use for daemon PID file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l registry-mirror -d 'Specify a preferred Docker registry mirror'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s s -l storage-driver -d 'Force the Docker runtime to use a specific storage driver'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l selinux-enabled -d 'Enable selinux support. SELinux does not presently support the BTRFS storage driver'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l storage-opt -d 'Set storage driver options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tls -d 'Use TLS; implied by --tlsverify'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscacert -d 'Trust only remotes providing a certificate signed by the CA given here'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscert -d 'Path to TLS certificate file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlskey -d 'Path to TLS key file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlsverify -d 'Use TLS and verify the remote (daemon: verify client, client: verify daemon)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s v -l version -d 'Print version information and quit'
|
||||
|
||||
# subcommands
|
||||
# attach
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a attach -d 'Attach to a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l no-stdin -d 'Do not attach STDIN'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l sig-proxy -d 'Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# build
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a build -d 'Build an image from a Dockerfile'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s f -l file -d "Name of the Dockerfile(Default is 'Dockerfile' at context root)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l force-rm -d 'Always remove intermediate containers, even after unsuccessful builds'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l no-cache -d 'Do not use cache when building the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l pull -d 'Always attempt to pull a newer version of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s q -l quiet -d 'Suppress the build output and print image ID on success'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l rm -d 'Remove intermediate containers after a successful build'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s t -l tag -d 'Repository name (and optionally a tag) to be applied to the resulting image in case of success'
|
||||
|
||||
# commit
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a commit -d "Create a new image from a container's changes"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s a -l author -d 'Author (e.g., "John Hannibal Smith <hannibal@a-team.com>")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s m -l message -d 'Commit message'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s p -l pause -d 'Pause container during commit'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# cp
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a cp -d "Copy files/folders between a container and the local filesystem"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from cp' -l help -d 'Print usage'
|
||||
|
||||
# create
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a create -d 'Create a new container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpu-shares -d 'CPU shares (relative weight)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-add -d 'Add Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-drop -d 'Drop Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cidfile -d 'Write the container ID to the file'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns -d 'Set custom DNS servers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s e -l env -d 'Set environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l env-file -d 'Read in a line delimited file of environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l group-add -d 'Add additional groups to run as'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s h -l hostname -d 'Container host name'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l link -d 'Add link to another container in the form of <name|id>:alias'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mount -d 'Attach a filesystem mount to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l name -d 'Assign a name to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l net -d 'Set the Network mode for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s p -l publish -d "Publish a container's port to the host"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l pid -d 'Default is to create a private PID namespace for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l privileged -d 'Give extended privileges to this container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l read-only -d "Mount the container's root filesystem as read only"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l security-opt -d 'Security Options'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s u -l user -d 'Username or UID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l volumes-from -d 'Mount volumes from the specified container(s)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s w -l workdir -d 'Working directory inside the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# diff
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a diff -d "Inspect changes on a container's filesystem"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# events
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a events -d 'Get real time events from the server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -s f -l filter -d "Provide filter values (i.e., 'event=stop')"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l since -d 'Show all events created since timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l until -d 'Stream events until this timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l format -d 'Format the output using the given go template'
|
||||
|
||||
# exec
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a exec -d 'Run a command in a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s d -l detach -d 'Detached mode: run command in the background'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# export
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a export -d 'Stream the contents of a container as a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# history
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a history -d 'Show the history of an image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -s q -l quiet -d 'Only show numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# images
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a images -d 'List images'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s a -l all -d 'Show all images (by default filter out the intermediate image layers)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s f -l filter -d "Provide filter values (i.e., 'dangling=true')"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s q -l quiet -d 'Only show numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# import
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a import -d 'Create a new filesystem image from the contents of a tarball'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from import' -l help -d 'Print usage'
|
||||
|
||||
# info
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a info -d 'Display system-wide information'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -s f -l format -d 'Format the output using the given go template'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -l help -d 'Print usage'
|
||||
|
||||
# inspect
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a inspect -d 'Return low-level information on a container or image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s f -l format -d 'Format the output using the given go template.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s s -l size -d 'Display total file sizes if the type is container.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# kill
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a kill -d 'Kill a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -s s -l signal -d 'Signal to send to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# load
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a load -d 'Load an image from a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -s i -l input -d 'Read from a tar archive file, instead of STDIN'
|
||||
|
||||
# login
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Log in to a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s p -l password -d 'Password'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s u -l username -d 'Username'
|
||||
|
||||
# logout
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a logout -d 'Log out from a Docker registry server'
|
||||
|
||||
# logs
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a logs -d 'Fetch the logs of a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s f -l follow -d 'Follow log output'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s t -l timestamps -d 'Show timestamps'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l since -d 'Show logs since timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l tail -d 'Output the specified number of lines at the end of logs (defaults to all logs)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# port
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a port -d 'Look up the public-facing port that is NAT-ed to PRIVATE_PORT'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# pause
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a pause -d 'Pause all processes within a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pause' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# ps
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a ps -d 'List containers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s a -l all -d 'Show all containers. Only running containers are shown by default.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l before -d 'Show only containers created before Id or Name, including non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s f -l filter -d 'Provide filter values. Valid filters:'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s l -l latest -d 'Show only the latest created container, including non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s n -d 'Show the n last created containers, including non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s q -l quiet -d 'Only display numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s s -l size -d 'Display total file sizes'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l since -d 'Show only containers created since Id or Name, including non-running ones.'
|
||||
|
||||
# pull
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a pull -d 'Pull an image or a repository from a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -s a -l all-tags -d 'Download all tagged images in the repository'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# push
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a push -d 'Push an image or a repository to a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# rename
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rename -d 'Rename an existing container'
|
||||
|
||||
# restart
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a restart -d 'Restart a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it. Once killed it will be restarted. Default is 10 seconds.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# rm
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rm -d 'Remove one or more containers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -d 'Force the removal of a running container (uses SIGKILL)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s l -l link -d 'Remove the specified link and not the underlying container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s v -l volumes -d 'Remove the volumes associated with the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -a '(__fish_print_docker_containers stopped)' -d "Container"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# rmi
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rmi -d 'Remove one or more images'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -s f -l force -d 'Force removal of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l no-prune -d 'Do not delete untagged parents'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# run
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a run -d 'Run a command in a new container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s c -l cpu-shares -d 'CPU shares (relative weight)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-add -d 'Add Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-drop -d 'Drop Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cidfile -d 'Write the container ID to the file'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s d -l detach -d 'Detached mode: run the container in the background and print the new container ID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns -d 'Set custom DNS servers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s e -l env -d 'Set environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l env-file -d 'Read in a line delimited file of environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l group-add -d 'Add additional groups to run as'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s h -l hostname -d 'Container host name'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l link -d 'Add link to another container in the form of <name|id>:alias'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mount -d 'Attach a filesystem mount to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l name -d 'Assign a name to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l net -d 'Set the Network mode for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s p -l publish -d "Publish a container's port to the host"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l pid -d 'Default is to create a private PID namespace for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l privileged -d 'Give extended privileges to this container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l read-only -d "Mount the container's root filesystem as read only"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l rm -d 'Automatically remove the container when it exits (incompatible with -d)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l security-opt -d 'Security Options'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l sig-proxy -d 'Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l stop-signal -d 'Signal to kill a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s u -l user -d 'Username or UID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l tmpfs -d 'Mount tmpfs on a directory'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l volumes-from -d 'Mount volumes from the specified container(s)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s w -l workdir -d 'Working directory inside the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# save
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a save -d 'Save an image to a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -s o -l output -d 'Write to a file, instead of STDOUT'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# search
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a search -d 'Search for an image on the registry (defaults to the Docker Hub)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l automated -d 'Only show automated builds'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -s s -l stars -d 'Only display images with at least x stars'
|
||||
|
||||
# start
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a start -d 'Start a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s a -l attach -d "Attach container's STDOUT and STDERR and forward all signals to the process"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s i -l interactive -d "Attach container's STDIN"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -a '(__fish_print_docker_containers stopped)' -d "Container"
|
||||
|
||||
# stats
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a stats -d "Display a live stream of one or more containers' resource usage statistics"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l no-stream -d 'Disable streaming stats and only pull the first result'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# stop
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a stop -d 'Stop a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it. Default is 10 seconds.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# tag
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a tag -d 'Tag an image into a repository'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -s f -l force -d 'Force'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -l help -d 'Print usage'
|
||||
|
||||
# top
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a top -d 'Look up the running processes of a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# unpause
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a unpause -d 'Unpause a paused container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from unpause' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# version
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a version -d 'Show the Docker version information'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -s f -l format -d 'Format the output using the given go template'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -l help -d 'Print usage'
|
||||
|
||||
# wait
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a wait -d 'Block until a container stops, then print its exit code'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
@ -1 +0,0 @@
|
||||
See https://github.com/samneirinck/posh-docker
|
||||
@ -1,2 +0,0 @@
|
||||
Tianon Gravi <admwiggin@gmail.com> (@tianon)
|
||||
Jessie Frazelle <jess@docker.com> (@jfrazelle)
|
||||
File diff suppressed because it is too large
@ -31,9 +31,10 @@ func SwarmFromGRPC(c swarmapi.Cluster) types.Swarm {
|
||||
AutoLockManagers: c.Spec.EncryptionConfig.AutoLockManagers,
|
||||
},
|
||||
CAConfig: types.CAConfig{
|
||||
// do not include the signing CA key (it should already be redacted via the swarm APIs)
|
||||
SigningCACert: string(c.Spec.CAConfig.SigningCACert),
|
||||
ForceRotate: c.Spec.CAConfig.ForceRotate,
|
||||
// do not include the signing CA cert or key (it should already be redacted via the swarm APIs) -
|
||||
// the key because it's secret, and the cert because otherwise doing a get + update on the spec
|
||||
// can cause issues because the key would be missing and the cert wouldn't
|
||||
ForceRotate: c.Spec.CAConfig.ForceRotate,
|
||||
},
|
||||
},
|
||||
TLSInfo: types.TLSInfo{
|
||||
|
||||
@ -495,7 +495,6 @@ func getEndpointConfig(na *api.NetworkAttachment, b executorpkg.Backend) *networ
|
||||
IPv4Address: ipv4,
|
||||
IPv6Address: ipv6,
|
||||
},
|
||||
Aliases: na.Aliases,
|
||||
DriverOpts: na.DriverAttachmentOpts,
|
||||
}
|
||||
if v, ok := na.Network.Spec.Annotations.Labels["com.docker.swarm.predefined"]; ok && v == "true" {
|
||||
|
||||
@ -116,6 +116,17 @@ type Daemon struct {
|
||||
|
||||
diskUsageRunning int32
|
||||
pruneRunning int32
|
||||
hosts map[string]bool // hosts stores the addresses the daemon is listening on
|
||||
}
|
||||
|
||||
// StoreHosts stores the addresses the daemon is listening on
|
||||
func (daemon *Daemon) StoreHosts(hosts []string) {
|
||||
if daemon.hosts == nil {
|
||||
daemon.hosts = make(map[string]bool)
|
||||
}
|
||||
for _, h := range hosts {
|
||||
daemon.hosts[h] = true
|
||||
}
|
||||
}
|
||||
|
||||
// HasExperimental returns whether the experimental features of the daemon are enabled or not
|
||||
|
||||
@ -68,17 +68,17 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
|
||||
memory := specs.LinuxMemory{}
|
||||
|
||||
if config.Memory > 0 {
|
||||
limit := uint64(config.Memory)
|
||||
limit := config.Memory
|
||||
memory.Limit = &limit
|
||||
}
|
||||
|
||||
if config.MemoryReservation > 0 {
|
||||
reservation := uint64(config.MemoryReservation)
|
||||
reservation := config.MemoryReservation
|
||||
memory.Reservation = &reservation
|
||||
}
|
||||
|
||||
if config.MemorySwap > 0 {
|
||||
swap := uint64(config.MemorySwap)
|
||||
swap := config.MemorySwap
|
||||
memory.Swap = &swap
|
||||
}
|
||||
|
||||
@ -88,7 +88,7 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
|
||||
}
|
||||
|
||||
if config.KernelMemory != 0 {
|
||||
kernelMemory := uint64(config.KernelMemory)
|
||||
kernelMemory := config.KernelMemory
|
||||
memory.Kernel = &kernelMemory
|
||||
}
|
||||
|
||||
|
||||
@ -1,62 +0,0 @@
|
||||
package daemon
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const dataStructuresLogNameTemplate = "daemon-data-%s.log"
|
||||
|
||||
// dumpDaemon appends the daemon datastructures into file in dir and returns full path
|
||||
// to that file.
|
||||
func (d *Daemon) dumpDaemon(dir string) (string, error) {
|
||||
// Ensure we recover from a panic as we are doing this without any locking
|
||||
defer func() {
|
||||
recover()
|
||||
}()
|
||||
|
||||
path := filepath.Join(dir, fmt.Sprintf(dataStructuresLogNameTemplate, strings.Replace(time.Now().Format(time.RFC3339), ":", "", -1)))
|
||||
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0666)
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "failed to open file to write the daemon datastructure dump")
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
dump := struct {
|
||||
containers interface{}
|
||||
names interface{}
|
||||
links interface{}
|
||||
execs interface{}
|
||||
volumes interface{}
|
||||
images interface{}
|
||||
layers interface{}
|
||||
imageReferences interface{}
|
||||
downloads interface{}
|
||||
uploads interface{}
|
||||
registry interface{}
|
||||
plugins interface{}
|
||||
}{
|
||||
containers: d.containers,
|
||||
execs: d.execCommands,
|
||||
volumes: d.volumes,
|
||||
images: d.imageStore,
|
||||
layers: d.layerStore,
|
||||
imageReferences: d.referenceStore,
|
||||
downloads: d.downloadManager,
|
||||
uploads: d.uploadManager,
|
||||
registry: d.RegistryService,
|
||||
plugins: d.PluginStore,
|
||||
names: d.nameIndex,
|
||||
links: d.linkIndex,
|
||||
}
|
||||
|
||||
spew.Fdump(f, dump) // Does not return an error
|
||||
f.Sync()
|
||||
return path, nil
|
||||
}
|
||||
@ -22,12 +22,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
|
||||
} else {
|
||||
logrus.Infof("goroutine stacks written to %s", path)
|
||||
}
|
||||
path, err = d.dumpDaemon(root)
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error("failed to write daemon datastructure dump")
|
||||
} else {
|
||||
logrus.Infof("daemon datastructure dump written to %s", path)
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
@ -41,12 +41,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
|
||||
} else {
|
||||
logrus.Infof("goroutine stacks written to %s", path)
|
||||
}
|
||||
path, err = d.dumpDaemon(root)
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error("failed to write daemon datastructure dump")
|
||||
} else {
|
||||
logrus.Infof("daemon datastructure dump written to %s", path)
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
@ -64,31 +64,35 @@ type cmdProbe struct {
|
||||
|
||||
// exec the healthcheck command in the container.
|
||||
// Returns the exit code and probe output (if any)
|
||||
func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Container) (*types.HealthcheckResult, error) {
|
||||
|
||||
cmdSlice := strslice.StrSlice(container.Config.Healthcheck.Test)[1:]
|
||||
func (p *cmdProbe) run(ctx context.Context, d *Daemon, cntr *container.Container) (*types.HealthcheckResult, error) {
|
||||
cmdSlice := strslice.StrSlice(cntr.Config.Healthcheck.Test)[1:]
|
||||
if p.shell {
|
||||
cmdSlice = append(getShell(container.Config), cmdSlice...)
|
||||
cmdSlice = append(getShell(cntr.Config), cmdSlice...)
|
||||
}
|
||||
entrypoint, args := d.getEntrypointAndArgs(strslice.StrSlice{}, cmdSlice)
|
||||
execConfig := exec.NewConfig()
|
||||
execConfig.OpenStdin = false
|
||||
execConfig.OpenStdout = true
|
||||
execConfig.OpenStderr = true
|
||||
execConfig.ContainerID = container.ID
|
||||
execConfig.ContainerID = cntr.ID
|
||||
execConfig.DetachKeys = []byte{}
|
||||
execConfig.Entrypoint = entrypoint
|
||||
execConfig.Args = args
|
||||
execConfig.Tty = false
|
||||
execConfig.Privileged = false
|
||||
execConfig.User = container.Config.User
|
||||
execConfig.Env = container.Config.Env
|
||||
execConfig.User = cntr.Config.User
|
||||
|
||||
d.registerExecCommand(container, execConfig)
|
||||
d.LogContainerEvent(container, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
|
||||
linkedEnv, err := d.setupLinkedContainers(cntr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
execConfig.Env = container.ReplaceOrAppendEnvValues(cntr.CreateDaemonEnvironment(execConfig.Tty, linkedEnv), execConfig.Env)
|
||||
|
||||
d.registerExecCommand(cntr, execConfig)
|
||||
d.LogContainerEvent(cntr, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
|
||||
|
||||
output := &limitedBuffer{}
|
||||
err := d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
|
||||
err = d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -97,7 +101,7 @@ func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Cont
|
||||
return nil, err
|
||||
}
|
||||
if info.ExitCode == nil {
|
||||
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", container.ID)
|
||||
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", cntr.ID)
|
||||
}
|
||||
// Note: Go's json package will handle invalid UTF-8 for us
|
||||
out := output.String()
|
||||
@ -182,7 +186,7 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
|
||||
logrus.Debugf("Running health check for container %s ...", c.ID)
|
||||
startTime := time.Now()
|
||||
ctx, cancelProbe := context.WithTimeout(context.Background(), probeTimeout)
|
||||
results := make(chan *types.HealthcheckResult)
|
||||
results := make(chan *types.HealthcheckResult, 1)
|
||||
go func() {
|
||||
healthChecksCounter.Inc()
|
||||
result, err := probe.run(ctx, d, c)
|
||||
@ -205,8 +209,10 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
|
||||
select {
|
||||
case <-stop:
|
||||
logrus.Debugf("Stop healthcheck monitoring for container %s (received while probing)", c.ID)
|
||||
// Stop timeout and kill probe, but don't wait for probe to exit.
|
||||
cancelProbe()
|
||||
// Wait for probe to exit (it might take a while to respond to the TERM
|
||||
// signal and we don't want dying probes to pile up).
|
||||
<-results
|
||||
return
|
||||
case result := <-results:
|
||||
handleProbeResult(d, c, result, stop)
|
||||
|
||||
@ -69,7 +69,7 @@ func (daemon *Daemon) killWithSignal(container *containerpkg.Container, sig int)
|
||||
return errNotRunning{container.ID}
|
||||
}
|
||||
|
||||
if container.Config.StopSignal != "" {
|
||||
if container.Config.StopSignal != "" && syscall.Signal(sig) != syscall.SIGKILL {
|
||||
containerStopSignal, err := signal.ParseSignal(container.Config.StopSignal)
|
||||
if err != nil {
|
||||
return err
|
||||
|
||||
@ -3,6 +3,7 @@ package logger
|
||||
import (
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
@ -18,6 +19,7 @@ type pluginAdapter struct {
|
||||
driverName string
|
||||
id string
|
||||
plugin logPlugin
|
||||
basePath string
|
||||
fifoPath string
|
||||
capabilities Capability
|
||||
logInfo Info
|
||||
@ -56,7 +58,7 @@ func (a *pluginAdapter) Close() error {
|
||||
a.mu.Lock()
|
||||
defer a.mu.Unlock()
|
||||
|
||||
if err := a.plugin.StopLogging(a.fifoPath); err != nil {
|
||||
if err := a.plugin.StopLogging(strings.TrimPrefix(a.fifoPath, a.basePath)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
@ -112,9 +112,10 @@ func (s *journald) Log(msg *logger.Message) error {
|
||||
}
|
||||
|
||||
line := string(msg.Line)
|
||||
source := msg.Source
|
||||
logger.PutMessage(msg)
|
||||
|
||||
if msg.Source == "stderr" {
|
||||
if source == "stderr" {
|
||||
return journal.Send(line, journal.PriErr, vars)
|
||||
}
|
||||
return journal.Send(line, journal.PriInfo, vars)
|
||||
|
||||
@ -7,6 +7,7 @@ import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"sync"
|
||||
|
||||
@ -15,6 +16,7 @@ import (
|
||||
"github.com/docker/docker/daemon/logger/loggerutils"
|
||||
"github.com/docker/docker/pkg/jsonlog"
|
||||
"github.com/docker/go-units"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// Name is the name of the file that the jsonlogger logs to.
|
||||
@ -22,12 +24,13 @@ const Name = "json-file"
|
||||
|
||||
// JSONFileLogger is Logger implementation for default Docker logging.
|
||||
type JSONFileLogger struct {
|
||||
buf *bytes.Buffer
|
||||
writer *loggerutils.RotateFileWriter
|
||||
mu sync.Mutex
|
||||
readers map[*logger.LogWatcher]struct{} // stores the active log followers
|
||||
extra []byte // json-encoded extra attributes
|
||||
extra []byte // json-encoded extra attributes
|
||||
|
||||
mu sync.RWMutex
|
||||
buf *bytes.Buffer // avoids allocating a new buffer on each call to `Log()`
|
||||
closed bool
|
||||
writer *loggerutils.RotateFileWriter
|
||||
readers map[*logger.LogWatcher]struct{} // stores the active log followers
|
||||
}
|
||||
|
||||
func init() {
|
||||
@ -90,33 +93,45 @@ func New(info logger.Info) (logger.Logger, error) {
|
||||
|
||||
// Log converts logger.Message to jsonlog.JSONLog and serializes it to file.
|
||||
func (l *JSONFileLogger) Log(msg *logger.Message) error {
|
||||
l.mu.Lock()
|
||||
err := writeMessageBuf(l.writer, msg, l.extra, l.buf)
|
||||
l.buf.Reset()
|
||||
l.mu.Unlock()
|
||||
return err
|
||||
}
|
||||
|
||||
func writeMessageBuf(w io.Writer, m *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
|
||||
if err := marshalMessage(m, extra, buf); err != nil {
|
||||
logger.PutMessage(m)
|
||||
return err
|
||||
}
|
||||
logger.PutMessage(m)
|
||||
if _, err := w.Write(buf.Bytes()); err != nil {
|
||||
return errors.Wrap(err, "error writing log entry")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func marshalMessage(msg *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
|
||||
timestamp, err := jsonlog.FastTimeMarshalJSON(msg.Timestamp)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.mu.Lock()
|
||||
logline := msg.Line
|
||||
logLine := msg.Line
|
||||
if !msg.Partial {
|
||||
logline = append(msg.Line, '\n')
|
||||
logLine = append(msg.Line, '\n')
|
||||
}
|
||||
err = (&jsonlog.JSONLogs{
|
||||
Log: logline,
|
||||
Log: logLine,
|
||||
Stream: msg.Source,
|
||||
Created: timestamp,
|
||||
RawAttrs: l.extra,
|
||||
}).MarshalJSONBuf(l.buf)
|
||||
logger.PutMessage(msg)
|
||||
RawAttrs: extra,
|
||||
}).MarshalJSONBuf(buf)
|
||||
if err != nil {
|
||||
l.mu.Unlock()
|
||||
return err
|
||||
return errors.Wrap(err, "error writing log message to buffer")
|
||||
}
|
||||
|
||||
l.buf.WriteByte('\n')
|
||||
_, err = l.writer.Write(l.buf.Bytes())
|
||||
l.buf.Reset()
|
||||
l.mu.Unlock()
|
||||
|
||||
return err
|
||||
err = buf.WriteByte('\n')
|
||||
return errors.Wrap(err, "error finalizing log buffer")
|
||||
}
|
||||
|
||||
// ValidateLogOpt looks for json specific log options max-file & max-size.
|
||||
|
||||
@ -3,7 +3,6 @@ package jsonfilelog
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
@ -18,6 +17,7 @@ import (
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/jsonlog"
|
||||
"github.com/docker/docker/pkg/tailfile"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const maxJSONDecodeRetry = 20000
|
||||
@ -48,10 +48,11 @@ func (l *JSONFileLogger) ReadLogs(config logger.ReadConfig) *logger.LogWatcher {
|
||||
func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.ReadConfig) {
|
||||
defer close(logWatcher.Msg)
|
||||
|
||||
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we read
|
||||
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we open these files
|
||||
// This will block writes!!!
|
||||
l.mu.Lock()
|
||||
l.mu.RLock()
|
||||
|
||||
// TODO it would be nice to move a lot of this reader implementation to the rotate logger object
|
||||
pth := l.writer.LogPath()
|
||||
var files []io.ReadSeeker
|
||||
for i := l.writer.MaxFiles(); i > 1; i-- {
|
||||
@ -59,25 +60,36 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
logWatcher.Err <- err
|
||||
break
|
||||
l.mu.RUnlock()
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
files = append(files, f)
|
||||
}
|
||||
|
||||
latestFile, err := os.Open(pth)
|
||||
if err != nil {
|
||||
logWatcher.Err <- err
|
||||
l.mu.Unlock()
|
||||
logWatcher.Err <- errors.Wrap(err, "error opening latest log file")
|
||||
l.mu.RUnlock()
|
||||
return
|
||||
}
|
||||
defer latestFile.Close()
|
||||
|
||||
latestChunk, err := newSectionReader(latestFile)
|
||||
|
||||
// Now we have the reader sectioned, all fd's opened, we can unlock.
|
||||
// New writes/rotates will not affect seeking through these files
|
||||
l.mu.RUnlock()
|
||||
|
||||
if err != nil {
|
||||
logWatcher.Err <- err
|
||||
return
|
||||
}
|
||||
|
||||
if config.Tail != 0 {
|
||||
tailer := ioutils.MultiReadSeeker(append(files, latestFile)...)
|
||||
tailer := ioutils.MultiReadSeeker(append(files, latestChunk)...)
|
||||
tailFile(tailer, logWatcher, config.Tail, config.Since)
|
||||
}
|
||||
|
||||
@ -89,19 +101,14 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
}
|
||||
|
||||
if !config.Follow || l.closed {
|
||||
l.mu.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
if config.Tail >= 0 {
|
||||
latestFile.Seek(0, os.SEEK_END)
|
||||
}
|
||||
|
||||
notifyRotate := l.writer.NotifyRotate()
|
||||
defer l.writer.NotifyRotateEvict(notifyRotate)
|
||||
|
||||
l.mu.Lock()
|
||||
l.readers[logWatcher] = struct{}{}
|
||||
|
||||
l.mu.Unlock()
|
||||
|
||||
followLogs(latestFile, logWatcher, notifyRotate, config.Since)
|
||||
@ -111,6 +118,16 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
l.mu.Unlock()
|
||||
}
|
||||
|
||||
func newSectionReader(f *os.File) (*io.SectionReader, error) {
|
||||
// seek to the end to get the size
|
||||
// we'll leave this at the end of the file since section reader does not advance the reader
|
||||
size, err := f.Seek(0, os.SEEK_END)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "error getting current file size")
|
||||
}
|
||||
return io.NewSectionReader(f, 0, size), nil
|
||||
}
|
||||
|
||||
func tailFile(f io.ReadSeeker, logWatcher *logger.LogWatcher, tail int, since time.Time) {
|
||||
var rdr io.Reader
|
||||
rdr = f
|
||||
|
||||
@ -59,6 +59,7 @@ func makePluginCreator(name string, l *logPluginProxy, basePath string) Creator
|
||||
driverName: name,
|
||||
id: id,
|
||||
plugin: l,
|
||||
basePath: basePath,
|
||||
fifoPath: filepath.Join(root, id),
|
||||
logInfo: logCtx,
|
||||
}
|
||||
|
||||
@ -133,8 +133,9 @@ func New(info logger.Info) (logger.Logger, error) {
|
||||
|
||||
func (s *syslogger) Log(msg *logger.Message) error {
|
||||
line := string(msg.Line)
|
||||
source := msg.Source
|
||||
logger.PutMessage(msg)
|
||||
if msg.Source == "stderr" {
|
||||
if source == "stderr" {
|
||||
return s.writer.Err(line)
|
||||
}
|
||||
return s.writer.Info(line)
|
||||
|
||||
@ -46,7 +46,8 @@ func (daemon *Daemon) StateChanged(id string, e libcontainerd.StateInfo) error {
|
||||
c.StreamConfig.Wait()
|
||||
c.Reset(false)
|
||||
|
||||
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, c.HasBeenManuallyStopped, time.Since(c.StartedAt))
|
||||
// If daemon is being shutdown, don't let the container restart
|
||||
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, daemon.IsShuttingDown() || c.HasBeenManuallyStopped, time.Since(c.StartedAt))
|
||||
if err == nil && restart {
|
||||
c.RestartCount++
|
||||
c.SetRestarting(platformConstructExitStatus(e))
|
||||
|
||||
@ -6,6 +6,7 @@ package daemon
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
@ -42,8 +43,19 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
|
||||
if err := daemon.lazyInitializeVolume(c.ID, m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// If the daemon is being shut down, we should not let a container start if it is trying to
|
||||
// mount the socket the daemon is listening on. During daemon shutdown, the socket
|
||||
// (/var/run/docker.sock by default) no longer exists, causing the call to m.Setup to
|
||||
// create a directory instead. This in turn will prevent the daemon from restarting.
|
||||
checkfunc := func(m *volume.MountPoint) error {
|
||||
if _, exist := daemon.hosts[m.Source]; exist && daemon.IsShuttingDown() {
|
||||
return fmt.Errorf("Could not mount %q to container while the daemon is shutting down", m.Source)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
rootUID, rootGID := daemon.GetRemappedUIDGID()
|
||||
path, err := m.Setup(c.MountLabel, rootUID, rootGID)
|
||||
path, err := m.Setup(c.MountLabel, rootUID, rootGID, checkfunc)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@ -24,7 +24,7 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
|
||||
if err := daemon.lazyInitializeVolume(c.ID, mount); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s, err := mount.Setup(c.MountLabel, 0, 0)
|
||||
s, err := mount.Setup(c.MountLabel, 0, 0, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@ -1,30 +0,0 @@
|
||||
# The non-reference docs have been moved!
|
||||
|
||||
<!-- This file is maintained within the docker/docker Github
|
||||
repository at https://github.com/docker/docker/. Make all
|
||||
pull requests against that repo. If you see this file in
|
||||
another repository, consider it read-only there, as it will
|
||||
periodically be overwritten by the definitive file. Pull
|
||||
requests which include edits to this file in other repositories
|
||||
will be rejected.
|
||||
-->
|
||||
|
||||
The documentation for Docker Engine has been merged into
|
||||
[the general documentation repo](https://github.com/docker/docker.github.io).
|
||||
|
||||
See the [README](https://github.com/docker/docker.github.io/blob/master/README.md)
|
||||
for instructions on contributing to and building the documentation.
|
||||
|
||||
If you'd like to edit the current published version of the Engine docs,
|
||||
do it in the master branch here:
|
||||
https://github.com/docker/docker.github.io/tree/master/engine
|
||||
|
||||
If you need to document the functionality of an upcoming Engine release,
|
||||
use the `vnext-engine` branch:
|
||||
https://github.com/docker/docker.github.io/tree/vnext-engine/engine
|
||||
|
||||
The reference docs have been left in docker/docker (this repo), which remains
|
||||
the place to edit them.
|
||||
|
||||
The docs in the general repo are open-source and we appreciate
|
||||
your feedback and pull requests!
|
||||
@ -29,6 +29,8 @@ keywords: "API, Docker, rcli, REST, documentation"
|
||||
generate and rotate to a new CA certificate/key pair.
|
||||
* `POST /service/create` and `POST /services/(id or name)/update` now take the field `Platforms` as part of the service `Placement`, allowing the platforms supported by the service to be specified.
|
||||
* `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client.
|
||||
* `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP address or network interface to use for data traffic
|
||||
* `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP address or network interface to use for data traffic (see the example after this list)
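As a rough sketch of how two of these additions can be exercised (the container name, the addresses, and the `--data-path-addr` flag spelling for the `DataPathAddr` property are assumptions for illustration only):

```bash
# Wait until a container is removed, using the new `condition` query parameter:
curl --unix-socket /var/run/docker.sock -X POST \
  "http://localhost/v1.30/containers/myapp/wait?condition=removed"

# Initialize a swarm with a dedicated address for data traffic:
docker swarm init --advertise-addr 10.0.0.10 --data-path-addr 192.168.1.10
```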
|
||||
|
||||
## v1.29 API changes
|
||||
|
||||
|
||||
@ -1,321 +0,0 @@
|
||||
---
|
||||
aliases: ["/engine/misc/deprecated/"]
|
||||
title: "Deprecated Engine Features"
|
||||
description: "Deprecated Features."
|
||||
keywords: "docker, documentation, about, technology, deprecate"
|
||||
---
|
||||
|
||||
<!-- This file is maintained within the docker/docker Github
|
||||
repository at https://github.com/docker/docker/. Make all
|
||||
pull requests against that repo. If you see this file in
|
||||
another repository, consider it read-only there, as it will
|
||||
periodically be overwritten by the definitive file. Pull
|
||||
requests which include edits to this file in other repositories
|
||||
will be rejected.
|
||||
-->
|
||||
|
||||
# Deprecated Engine Features
|
||||
|
||||
The following features are deprecated in Engine.
|
||||
To learn more about Docker Engine's deprecation policy,
|
||||
see [Feature Deprecation Policy](https://docs.docker.com/engine/#feature-deprecation-policy).
|
||||
|
||||
### Asynchronous `service create` and `service update`
|
||||
|
||||
**Deprecated In Release: v17.05.0**
|
||||
|
||||
**Disabled by default in release: v17.09**
|
||||
|
||||
Docker 17.05.0 added an optional `--detach=false` option to make the
|
||||
`docker service create` and `docker service update` commands work synchronously. This
|
||||
option will be enabled by default in Docker 17.09, at which point the `--detach`
|
||||
flag can be used to restore the previous (asynchronous) behavior.
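For example, a minimal invocation that opts into the synchronous behavior today (the service name and image are placeholders):

```bash
# Returns only once the service update has converged.
docker service update --detach=false --image nginx:1.13 my-service
```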
|
||||
|
||||
### `-g` and `--graph` flags on `dockerd`
|
||||
|
||||
**Deprecated In Release: v17.05.0**
|
||||
|
||||
The `-g` or `--graph` flag for the `dockerd` or `docker daemon` command was
|
||||
used to indicate the directory in which to store persistent data and resource
|
||||
configuration and has been replaced with the more descriptive `--data-root`
|
||||
flag.
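For instance, assuming a data directory of `/mnt/docker-data`, the deprecated and replacement spellings are:

```bash
# Deprecated spelling
dockerd -g /mnt/docker-data

# Preferred replacement
dockerd --data-root /mnt/docker-data
```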
|
||||
|
||||
These flags were added before Docker 1.0, so will not be _removed_, only
|
||||
_hidden_, to discourage their use.
|
||||
|
||||
### Top-level network properties in NetworkSettings
|
||||
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
When inspecting a container, `NetworkSettings` contains top-level information
|
||||
about the default ("bridge") network;
|
||||
|
||||
`EndpointID`, `Gateway`, `GlobalIPv6Address`, `GlobalIPv6PrefixLen`, `IPAddress`,
|
||||
`IPPrefixLen`, `IPv6Gateway`, and `MacAddress`.
|
||||
|
||||
These properties are deprecated in favor of per-network properties in
|
||||
`NetworkSettings.Networks`. These properties were already "deprecated" in
|
||||
docker 1.9, but kept around for backward compatibility.
|
||||
|
||||
Refer to [#17538](https://github.com/docker/docker/pull/17538) for further
|
||||
information.
|
||||
|
||||
### `filter` param for `/images/json` endpoint
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The `filter` param to filter the list of images by reference (name or name:tag) is now implemented as a regular filter, named `reference`.
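A sketch of the replacement, assuming a daemon listening on the default local socket and `nginx:latest` as an example reference:

```bash
# List images matching a reference via the `reference` filter
# (instead of the removed `filter` query parameter).
curl -G --unix-socket /var/run/docker.sock \
  "http://localhost/v1.30/images/json" \
  --data-urlencode 'filters={"reference":["nginx:latest"]}'
```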
|
||||
|
||||
### `repository:shortid` image references
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The `repository:shortid` syntax for referencing images is rarely used, collides with tag references, and can be confused with digest references.
|
||||
|
||||
### `docker daemon` subcommand
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The daemon is moved to a separate binary (`dockerd`), and should be used instead.
|
||||
|
||||
### Duplicate keys with conflicting values in engine labels
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
Duplicate keys with conflicting values have been deprecated. A warning is displayed
|
||||
in the output, and an error will be returned in the future.
|
||||
|
||||
### `MAINTAINER` in Dockerfile
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
`MAINTAINER` was an early, very limited form of `LABEL`; use `LABEL` instead.
|
||||
|
||||
### API calls without a version
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
API versions should be supplied to all API calls to ensure compatibility with
|
||||
future Engine versions. Instead of just requesting, for example, the URL
|
||||
`/containers/json`, you must now request `/v1.25/containers/json`.
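For example, against a daemon listening on the default local socket:

```bash
# Unversioned request (deprecated)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Versioned request
curl --unix-socket /var/run/docker.sock http://localhost/v1.25/containers/json
```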
|
||||
|
||||
### Backing filesystem without `d_type` support for overlay/overlay2
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The overlay and overlay2 storage drivers do not work as expected if the backing
|
||||
filesystem does not support `d_type`. For example, XFS does not support `d_type`
|
||||
if it is formatted with the `ftype=0` option.
|
||||
|
||||
Please also refer to [#27358](https://github.com/docker/docker/issues/27358) for
|
||||
further information.
|
||||
|
||||
### Three arguments form in `docker import`
|
||||
**Deprecated In Release: [v0.6.7](https://github.com/docker/docker/releases/tag/v0.6.7)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The `docker import` command format `file|URL|- [REPOSITORY [TAG]]` has been deprecated since November 2013 and is no longer supported.
|
||||
|
||||
### `-h` shorthand for `--help`
|
||||
|
||||
**Deprecated In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
**Target For Removal In Release: v17.09**
|
||||
|
||||
The shorthand (`-h`) is less common than `--help` on Linux and cannot be used
|
||||
on all subcommands (due to it conflicting with, e.g. `-h` / `--hostname` on
|
||||
`docker create`). For this reason, the `-h` shorthand was not printed in the
|
||||
"usage" output of subcommands, nor documented, and is now marked "deprecated".
|
||||
|
||||
### `-e` and `--email` flags on `docker login`
|
||||
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
|
||||
|
||||
**Target For Removal In Release: v17.06**
|
||||
|
||||
The `docker login` command is removing the ability to automatically register an account with the target registry if the given username doesn't exist. Due to this change, the email flag is no longer required and is deprecated.
|
||||
|
||||
### Separator (`:`) of `--security-opt` flag on `docker run`
|
||||
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
|
||||
|
||||
**Target For Removal In Release: v17.06**
|
||||
|
||||
The `--security-opt` flag no longer uses the colon separator (`:`) to divide keys and values; it uses the equals sign (`=`) for consistency with other similar flags, such as `--storage-opt`.
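For example, with the seccomp option (the profile value is chosen only for illustration):

```bash
# Deprecated separator
docker run --security-opt seccomp:unconfined alpine true

# Current separator
docker run --security-opt seccomp=unconfined alpine true
```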
|
||||
|
||||
### `/containers/(id or name)/copy` endpoint
|
||||
|
||||
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The endpoint `/containers/(id or name)/copy` is deprecated in favor of `/containers/(id or name)/archive`.
|
||||
|
||||
### Ambiguous event fields in API
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
The fields `ID`, `Status` and `From` in the events API have been deprecated in favor of a richer structure.
|
||||
See the events API documentation for the new format.
|
||||
|
||||
### `-f` flag on `docker tag`
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
To make tagging consistent across the various `docker` commands, the `-f` flag on the `docker tag` command is deprecated. It is no longer necessary to specify `-f` to move a tag from one image to another, nor will `docker` generate an error if the `-f` flag is missing and the specified tag is already in use.
|
||||
|
||||
### HostConfig at API container start
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
Passing a `HostConfig` to `POST /containers/{name}/start` is deprecated in favor of
|
||||
defining it at container creation (`POST /containers/create`).
|
||||
|
||||
### `--before` and `--since` flags on `docker ps`
|
||||
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The `docker ps --before` and `docker ps --since` options are deprecated.
|
||||
Use `docker ps --filter=before=...` and `docker ps --filter=since=...` instead.
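For example, with a placeholder container named `web-1`:

```bash
docker ps --filter "before=web-1"
docker ps --filter "since=web-1"
```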
|
||||
|
||||
### `--automated` and `--stars` flags on `docker search`
|
||||
|
||||
**Deprecated in Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
**Target For Removal In Release: v17.09**
|
||||
|
||||
The `docker search --automated` and `docker search --stars` options are deprecated.
|
||||
Use `docker search --filter=is-automated=...` and `docker search --filter=stars=...` instead.
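For example (the search term and thresholds are placeholders):

```bash
docker search --filter is-automated=true --filter stars=3 nginx
```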
|
||||
|
||||
### Driver Specific Log Tags
|
||||
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
Log tags are now generated in a standard way across different logging drivers.
|
||||
As a result, the driver-specific log tag options `syslog-tag`, `gelf-tag` and
|
||||
`fluentd-tag` have been deprecated in favor of the generic `tag` option.
|
||||
|
||||
```bash
|
||||
{% raw %}
|
||||
docker run --log-driver=syslog --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### LXC built-in exec driver
|
||||
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
|
||||
|
||||
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
The built-in LXC execution driver, the lxc-conf flag, and API fields have been removed.
|
||||
|
||||
### Old Command Line Options
|
||||
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
|
||||
|
||||
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
The flags `-d` and `--daemon` are deprecated in favor of the `daemon` subcommand:
|
||||
|
||||
docker daemon -H ...
|
||||
|
||||
The following single-dash (`-opt`) variant of certain command line options
|
||||
are deprecated and replaced with double-dash options (`--opt`):
|
||||
|
||||
docker attach -nostdin
|
||||
docker attach -sig-proxy
|
||||
docker build -no-cache
|
||||
docker build -rm
|
||||
docker commit -author
|
||||
docker commit -run
|
||||
docker events -since
|
||||
docker history -notrunc
|
||||
docker images -notrunc
|
||||
docker inspect -format
|
||||
docker ps -beforeId
|
||||
docker ps -notrunc
|
||||
docker ps -sinceId
|
||||
docker rm -link
|
||||
docker run -cidfile
|
||||
docker run -dns
|
||||
docker run -entrypoint
|
||||
docker run -expose
|
||||
docker run -link
|
||||
docker run -lxc-conf
|
||||
docker run -n
|
||||
docker run -privileged
|
||||
docker run -volumes-from
|
||||
docker search -notrunc
|
||||
docker search -stars
|
||||
docker search -t
|
||||
docker search -trusted
|
||||
docker tag -force
|
||||
|
||||
The following double-dash options are deprecated and have no replacement:
|
||||
|
||||
docker run --cpuset
|
||||
docker run --networking
|
||||
docker ps --since-id
|
||||
docker ps --before-id
|
||||
docker search --trusted
|
||||
|
||||
**Deprecated In Release: [v1.5.0](https://github.com/docker/docker/releases/tag/v1.5.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The single-dash form (`-help`) was removed in favor of the double-dash `--help`:
|
||||
|
||||
docker -help
|
||||
docker [COMMAND] -help
|
||||
|
||||
### `--run` flag on docker commit
|
||||
|
||||
**Deprecated In Release: [v0.10.0](https://github.com/docker/docker/releases/tag/v0.10.0)**
|
||||
|
||||
**Removed In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
The `--run` flag of `docker commit` (and its short version `-run`) was deprecated in favor
|
||||
of the `--changes` flag, which allows `Dockerfile` instructions to be passed.
|
||||
|
||||
|
||||
### Interacting with V1 registries
|
||||
|
||||
**Disabled By Default In Release: v17.06**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
Version 1.9 adds a flag (`--disable-legacy-registry=false`) which prevents the
|
||||
docker daemon from performing `pull`, `push`, and `login` operations against v1
|
||||
registries. Though enabled by default, this signals the intent to deprecate
|
||||
the v1 protocol.
|
||||
|
||||
Support for the v1 protocol to the public registry was removed in 1.13. Any
|
||||
mirror configurations using v1 should be updated to use a
|
||||
[v2 registry mirror](https://docs.docker.com/registry/recipes/mirror/).
|
||||
|
||||
### Docker Content Trust ENV passphrase variables name change
|
||||
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
Since 1.9, the Docker Content Trust Offline key has been renamed to the Root key, and the Tagging key has been renamed to the Repository key. Due to this renaming, the corresponding environment variables have also changed, as shown in the example after this list:
|
||||
|
||||
- DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE is now named DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE
|
||||
- DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE is now named DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE
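A minimal sketch of pushing a signed image with the new variable names (the passphrases and image name are placeholders):

```bash
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE='example-root-passphrase'
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE='example-repository-passphrase'
docker push example.com/myorg/myimage:signed
```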
|
||||
|
||||
### `--api-enable-cors` flag on dockerd
|
||||
|
||||
**Deprecated In Release: [v1.6.0](https://github.com/docker/docker/releases/tag/v1.6.0)**
|
||||
|
||||
**Target For Removal In Release: v17.09**
|
||||
|
||||
The `--api-enable-cors` flag has been deprecated since v1.6.0. Use the flag
|
||||
`--api-cors-header` instead.
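For example, on the daemon command line (the origin is a placeholder):

```bash
# Deprecated
dockerd --api-enable-cors

# Replacement: specify the allowed origin explicitly
dockerd --api-cors-header="https://example.com"
```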
|
||||
@ -1,164 +0,0 @@
|
||||
---
|
||||
description: Volume plugin for Amazon EBS
|
||||
keywords: "API, Usage, plugins, documentation, developer, amazon, ebs, rexray, volume"
|
||||
title: Volume plugin for Amazon EBS
|
||||
---
|
||||
|
||||
<!-- This file is maintained within the docker/docker Github
|
||||
repository at https://github.com/docker/docker/. Make all
|
||||
pull requests against that repo. If you see this file in
|
||||
another repository, consider it read-only there, as it will
|
||||
periodically be overwritten by the definitive file. Pull
|
||||
requests which include edits to this file in other repositories
|
||||
will be rejected.
|
||||
-->
|
||||
|
||||
# A proof-of-concept Rexray plugin
|
||||
|
||||
In this example, a simple Rexray plugin will be created for the purposes of using
|
||||
it on an Amazon EC2 instance with EBS. It is not meant to be a complete Rexray plugin.
|
||||
|
||||
The example source is available at [https://github.com/tiborvass/rexray-plugin](https://github.com/tiborvass/rexray-plugin).
|
||||
|
||||
To learn more about Rexray: [https://github.com/codedellemc/rexray](https://github.com/codedellemc/rexray)
|
||||
|
||||
## 1. Make a Docker image
|
||||
|
||||
The following is the Dockerfile used to containerize rexray.
|
||||
|
||||
```Dockerfile
|
||||
FROM debian:jessie
|
||||
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
|
||||
RUN wget https://dl.bintray.com/emccode/rexray/stable/0.6.4/rexray-Linux-x86_64-0.6.4.tar.gz -O rexray.tar.gz && tar -xvzf rexray.tar.gz -C /usr/bin && rm rexray.tar.gz
|
||||
RUN mkdir -p /run/docker/plugins /var/lib/libstorage/volumes
|
||||
ENTRYPOINT ["rexray"]
|
||||
CMD ["--help"]
|
||||
```
|
||||
|
||||
To build it you can run `image=$(cat Dockerfile | docker build -q -)` and `$image`
|
||||
will reference the containerized rexray image.
|
||||
|
||||
## 2. Extract rootfs
|
||||
|
||||
```sh
|
||||
$ TMPDIR=/tmp/rexray # for the purpose of this example
|
||||
$ # create container without running it, to extract the rootfs from image
|
||||
$ docker create --name rexray "$image"
|
||||
$ # save the rootfs to a tar archive
|
||||
$ docker export -o $TMPDIR/rexray.tar rexray
|
||||
$ # extract rootfs from tar archive to a rootfs folder
|
||||
$ ( mkdir -p $TMPDIR/rootfs; cd $TMPDIR/rootfs; tar xf ../rexray.tar )
|
||||
```
|
||||
|
||||

## 3. Add plugin configuration

Save the following JSON to `$TMPDIR/config.json`:

```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A proof-of-concept EBS plugin (using rexray) for Docker",
  "Documentation": "https://github.com/tiborvass/rexray-plugin",
  "Entrypoint": [
    "/usr/bin/rexray", "service", "start", "-f"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "REXRAY_SERVICE",
      "Settable": [
        "value"
      ],
      "Value": "ebs"
    },
    {
      "Description": "",
      "Name": "EBS_ACCESSKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    },
    {
      "Description": "",
      "Name": "EBS_SECRETKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    }
  ],
  "Interface": {
    "Socket": "rexray.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "AllowAllDevices": true,
    "Capabilities": ["CAP_SYS_ADMIN"],
    "Devices": null
  },
  "Mounts": [
    {
      "Source": "/dev",
      "Destination": "/dev",
      "Type": "bind",
      "Options": ["rbind"]
    }
  ],
  "Network": {
    "Type": "host"
  },
  "PropagatedMount": "/var/lib/libstorage/volumes",
  "User": {},
  "WorkDir": ""
}
```

Please note a few points:
- `PropagatedMount` is needed so that the docker daemon can see mounts made by the
  rexray plugin from within the container; without it, the docker daemon is unable
  to mount a docker volume.
- The rexray plugin needs dynamic access to host devices. For that reason, it is
  given access to all devices under `/dev`, and `AllowAllDevices` is set to `true`.
- The user of this simple plugin can change only three settings: `REXRAY_SERVICE`,
  `EBS_ACCESSKEY` and `EBS_SECRETKEY`. This is because of the reduced scope of this
  plugin; ideally, other rexray parameters would also be settable.
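
Before creating the plugin, it can help to confirm that the file is well-formed JSON.
Assuming `jq` is installed (any JSON validator works):

```sh
$ jq . "$TMPDIR/config.json" > /dev/null && echo "config.json is valid JSON"
config.json is valid JSON
```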

## 4. Create plugin

`docker plugin create tiborvass/rexray-plugin "$TMPDIR"` will create the plugin.

```sh
$ docker plugin ls
ID                  NAME                             DESCRIPTION                         ENABLED
2475a4bd0ca5        tiborvass/rexray-plugin:latest   A rexray volume plugin for Docker   false
```

## 5. Test plugin

```sh
$ docker plugin set tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
$ docker plugin enable tiborvass/rexray-plugin
$ docker volume create -d tiborvass/rexray-plugin my-ebs-volume
$ docker volume ls
DRIVER                           VOLUME NAME
tiborvass/rexray-plugin:latest   my-ebs-volume
$ docker run --rm -v my-ebs-volume:/volume busybox sh -c 'echo bye > /volume/hi'
$ docker run --rm -v my-ebs-volume:/volume busybox cat /volume/hi
bye
```
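
To clean up after testing, a minimal sequence (using the volume and plugin names from
above) is:

```sh
$ docker volume rm my-ebs-volume
$ docker plugin disable tiborvass/rexray-plugin
```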

## 6. Push plugin

First, ensure you are logged in with `docker login`. Then run
`docker plugin push tiborvass/rexray-plugin` to push the plugin to a registry,
like a regular docker image, making it available for others to install via
`docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY`.
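
Condensed into commands (the second host is where another user installs the plugin; the
names and credentials are the ones used throughout this example):

```sh
$ docker login
$ docker plugin push tiborvass/rexray-plugin

$ # on another host:
$ docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
```
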
@ -1,238 +0,0 @@

---
title: "Plugin config"
description: "How to develop and use a plugin with the managed plugin system"
keywords: "API, Usage, plugins, documentation, developer"
---

<!-- This file is maintained within the docker/docker Github
     repository at https://github.com/docker/docker/. Make all
     pull requests against that repo. If you see this file in
     another repository, consider it read-only there, as it will
     periodically be overwritten by the definitive file. Pull
     requests which include edits to this file in other repositories
     will be rejected.
-->

# Plugin Config Version 1 of Plugin V2

This document outlines the format of the V0 plugin configuration. The plugin
config described herein was introduced in the Docker daemon in the [v1.12.0
release](https://github.com/docker/docker/commit/f37117045c5398fd3dca8016ea8ca0cb47e7312b).

Plugin configs describe the various constituents of a docker plugin. Plugin
configs can be serialized to JSON format with the following media types:

Config Type    | Media Type
-------------- | -------------
config         | "application/vnd.docker.plugin.v1+json"

## *Config* Field Descriptions

Config provides the base accessible fields for working with V0 plugin format
in the registry.

- **`description`** *string*

    description of the plugin

- **`documentation`** *string*

    link to the documentation about the plugin

- **`interface`** *PluginInterface*

    interface implemented by the plugins, struct consisting of the following fields

    - **`types`** *string array*

        types indicate what interface(s) the plugin currently implements.

        currently supported:

        - **docker.volumedriver/1.0**

        - **docker.networkdriver/1.0**

        - **docker.ipamdriver/1.0**

        - **docker.authz/1.0**

        - **docker.logdriver/1.0**

        - **docker.metricscollector/1.0**

    - **`socket`** *string*

        socket is the name of the socket the engine should use to communicate with the plugins.
        the socket will be created in `/run/docker/plugins`.

- **`entrypoint`** *string array*

    entrypoint of the plugin, see [`ENTRYPOINT`](../reference/builder.md#entrypoint)

- **`workdir`** *string*

    workdir of the plugin, see [`WORKDIR`](../reference/builder.md#workdir)

- **`network`** *PluginNetwork*

    network of the plugin, struct consisting of the following fields

    - **`type`** *string*

        network type.

        currently supported:

        - **bridge**
        - **host**
        - **none**

- **`mounts`** *PluginMount array*

    mount of the plugin, struct consisting of the following fields, see [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts)

    - **`name`** *string*

        name of the mount.

    - **`description`** *string*

        description of the mount.

    - **`source`** *string*

        source of the mount.

    - **`destination`** *string*

        destination of the mount.

    - **`type`** *string*

        mount type.

    - **`options`** *string array*

        options of the mount.

- **`ipchost`** *boolean*

    Access to host ipc namespace.

- **`pidhost`** *boolean*

    Access to host pid namespace.

- **`propagatedMount`** *string*

    path to be mounted as rshared, so that mounts under that path are visible to docker. This is useful for volume plugins.
    This path will be bind-mounted outside of the plugin rootfs so its contents
    are preserved on upgrade.

- **`env`** *PluginEnv array*

    env of the plugin, struct consisting of the following fields

    - **`name`** *string*

        name of the env.

    - **`description`** *string*

        description of the env.

    - **`value`** *string*

        value of the env.

- **`args`** *PluginArgs*

    args of the plugin, struct consisting of the following fields

    - **`name`** *string*

        name of the args.

    - **`description`** *string*

        description of the args.

    - **`value`** *string array*

        values of the args.

- **`linux`** *PluginLinux*

    - **`capabilities`** *string array*

        capabilities of the plugin (*Linux only*), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)

    - **`allowAllDevices`** *boolean*

        If `/dev` is bind mounted from the host, and allowAllDevices is set to true, the plugin will have `rwm` access to all devices on the host.

    - **`devices`** *PluginDevice array*

        device of the plugin, (*Linux only*), struct consisting of the following fields, see [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices)

        - **`name`** *string*

            name of the device.

        - **`description`** *string*

            description of the device.

        - **`path`** *string*

            path of the device.

## Example Config

*Example showing the 'tiborvass/sample-volume-plugin' plugin config.*

```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A sample volume plugin for Docker",
  "Documentation": "https://docs.docker.com/engine/extend/plugins/",
  "Entrypoint": [
    "/usr/bin/sample-volume-plugin",
    "/data"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "DEBUG",
      "Settable": [
        "value"
      ],
      "Value": "0"
    }
  ],
  "Interface": {
    "Socket": "plugin.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "Capabilities": null,
    "AllowAllDevices": false,
    "Devices": null
  },
  "Mounts": null,
  "Network": {
    "Type": ""
  },
  "PropagatedMount": "/data",
  "User": {},
  "Workdir": ""
}
```
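
To view a config like the one above for a plugin that is already installed on an engine,
`docker plugin inspect` prints the plugin's config and current settings (the plugin name
below is the sample used in this example):

```sh
$ docker plugin inspect tiborvass/sample-volume-plugin
```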