Containers with autostart enabled that are connected to a swarm-scope
network depend on the cluster being initialized before they can be
autostarted. With the current container restart code running before
cluster init, these containers were not getting autostarted properly.
Added a fix to delay the start of any container which has at least one
swarm-scope endpoint until after the cluster is initialized.
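A minimal sketch of the scenario this affects (names are placeholders; assumes an attachable swarm-scope overlay network):
```
# "mynet" is a swarm-scope network; "web" has a restart policy
$ docker network create -d overlay --attachable mynet
$ docker run -d --restart=always --name web --network mynet nginx
# on daemon restart, "web" is now started only after cluster init completes
```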
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Upstream-commit: c9fb551d60584ac4ad01561e2f56b7b7cc9483b9
Component: engine
`Mounts` allows users to specify the volumes they want to use in the
container in a much safer way.
This replaces `Binds` and `Volumes`, which both still exist, but
`Mounts` and `Binds`/`Volumes` are mutually exclusive.
The CLI will continue to use `Binds` and `Volumes` due to concerns with
parsing the volume specs on the client side and cross-platform support
(for now).
The new API exactly follows the services mount API.
Example usage of `Mounts`:
```
$ curl -XPOST localhost:2375/containers/create -d '{
  "Image": "alpine:latest",
  "HostConfig": {
    "Mounts": [{
      "Type": "volume",
      "Target": "/foo"
    }, {
      "Type": "bind",
      "Source": "/var/run/docker.sock",
      "Target": "/var/run/docker.sock"
    }, {
      "Type": "volume",
      "Name": "important_data",
      "Target": "/var/data",
      "ReadOnly": true,
      "VolumeOptions": {
        "DriverConfig": {
          "Name": "awesomeStorage",
          "Options": {"size": "10m"},
          "Labels": {"some": "label"}
        }
      }
    }]
  }
}'
```
There are currently 2 types of mounts:
- **bind**: Paths on the host that get mounted into the
container. Paths must exist prior to creating the container.
- **volume**: Volumes that persist after the
container is removed.
Not all fields are available in each type, and validation is done to
ensure these fields aren't mixed up between types.
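For instance (a hypothetical request, not part of the original change), mixing type-specific fields, such as putting `VolumeOptions` on a `bind` mount, would be expected to fail this validation:
```
$ curl -XPOST localhost:2375/containers/create -d '{
  "Image": "alpine:latest",
  "HostConfig": {
    "Mounts": [{
      "Type": "bind",
      "Source": "/tmp",
      "Target": "/tmp",
      "VolumeOptions": {}
    }]
  }
}'
# expected: a validation error, since VolumeOptions applies only to volume mounts
```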
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: fc7b904dced4d18d49c8a6c47ae3f415d16d0c43
Component: engine
When the `-t` flag is passed on exec, make sure to add the TERM env var
to mirror the expected configuration from run.
Fixes #9299
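For example (container name is a placeholder), an interactive exec should now see TERM, matching `docker run -t`:
```
$ docker exec -t mycontainer env | grep TERM
# expected to print a TERM value (e.g. TERM=xterm), as an interactive run would
```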
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Upstream-commit: 4633f15f13d51530de2438c298a1084c55e4fedf
Component: engine
New driver options:
- `splunk-gzip` - gzip compress all requests to Splunk HEC
(enabled by default)
- `splunk-gzip-level` - change compression level.
Messages are sent in batches of 1,000, with a frequency of 5 seconds.
The maximum buffer is 10,000 events. If HEC is not available, the Splunk
logging driver will keep retrying as long as it can hold messages in the buffer.
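A sketch of passing the new options (the splunk-url/splunk-token values are placeholders, and the option values shown are assumed formats):
```
$ docker run -d \
    --log-driver=splunk \
    --log-opt splunk-url=https://splunkhost:8088 \
    --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
    --log-opt splunk-gzip=true \
    --log-opt splunk-gzip-level=6 \
    alpine echo hello
```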
Added unit tests for driver.
Signed-off-by: Denis Gladkikh <denis@gladkikh.email>
Upstream-commit: 4907cc7793cf469fc2d6fc0f842d08bd045da569
Component: engine
- So that swarm init will still work w/o specifying the advertise
address when the daemon is running inside a container
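A minimal illustration of the scenario (assuming the daemon runs inside a container with a single usable address):
```
# from inside the container running the daemon
$ docker swarm init
# no --advertise-addr required; a usable address is detected automatically
```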
Signed-off-by: Alessandro Boch <aboch@docker.com>
Upstream-commit: c0b24c600e30656144522f85b053f015525022da
Component: engine
The restriction on `--read-only` containers under user namespaces is no
longer necessary given changes at the runc layer related to mount options
of the rootfs. Also cleaned up the docs on restrictions left for
userns-enabled mode. Re-enabled tests related to
--read-only when testing a userns-enabled daemon in integration-cli.
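A hedged sketch of the now-permitted combination (flag values are examples):
```
# daemon started with user namespaces enabled, e.g.:
#   dockerd --userns-remap=default
# --read-only is no longer rejected under userns; rootfs writes still fail
$ docker run --rm --read-only alpine touch /file
# fails with a read-only filesystem error, as expected
```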
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Upstream-commit: 6062ae5742e49ec1a79073c327f3d1343c218a12
Component: engine
This fix tries to fix 26326 where `docker inspect` does not show
ulimits even when a daemon default ulimit has been set.
This fix merges the HostConfig's ulimit with the daemon default in
`docker inspect`, so that when the daemon is started with `default-ulimit`
and the HostConfig's ulimit is not set, `docker inspect` will output
the daemon default.
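A minimal sketch of the resulting behavior, assuming the daemon was started with a default ulimit (values are examples):
```
# daemon started with e.g.: dockerd --default-ulimit nofile=1024:2048
$ docker run -d --name test alpine top
$ docker inspect --format '{{ .HostConfig.Ulimits }}' test
# previously empty; now reflects the daemon default (nofile=1024:2048)
```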
An integration test has been added to cover the changes.
This fix fixes 26326.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Upstream-commit: 7d705a7355d650feffc966e08efc0f92297145a8
Component: engine
This PR adds support for connecting regular containers to a swarm-mode
multi-host network so that:
- containers connected to the same network across the cluster can
discover and connect to each other.
- containers get access to services (and their associated load balancers)
connected to the same network.
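A sketch of how this is used (names are placeholders), assuming a swarm-mode cluster and an attachable overlay network:
```
$ docker network create -d overlay --attachable mynet
$ docker service create --name web --network mynet nginx
# a regular container on the same network can discover and reach the service
$ docker run --rm --network mynet alpine ping -c1 web
```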
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Upstream-commit: 99a98ccc14a9427be47c8006e130750710db0a16
Component: engine
devmapper: Fail to start container if xfs_nospace_max_retries can't be enforced
Upstream-commit: ce5eb34e68ec84505ede64efa9cfc9b8d177f086
Component: engine
This moves the engine-api client package to `/docker/docker/client`.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Upstream-commit: 7c36a1af031b510cd990cf488ee5998a3efb450f
Component: engine
This moves the types from the `engine-api` repo to the existing types
package.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Upstream-commit: 91e197d614547f0202e6ae9b8a24d88ee131d950
Component: engine
We just introduced a new tunable, dm.xfs_nospace_max_retries. But this tunable
only works on newer kernels where xfs supports this feature. On older
kernels xfs does not allow tuning this behavior.
There are two issues. The first is that if xfsSetNospaceRetries() fails,
it returns an error but leaves the device activated and mounted. We should
unmount the device and deactivate it before returning.
The second is that if docker is started on an older kernel with
dm.xfs_nospace_max_retries specified, docker silently ignores the fact
that the /sys file to tweak this behavior is not present and continues.
But I think it might be better to fail container creation/start if the
kernel does not support this feature.
This patch fixes both. After this patch, the user will get an error like the
following when a container is run:
# docker run -ti fedora bash
docker: Error response from daemon: devmapper: user specified daemon option dm.xfs_nospace_max_retries but it does not seem to be supported on this system :open /sys/fs/xfs/dm-5/error/metadata/ENOSPC/max_retries: no such file or directory.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Upstream-commit: 6cc55dd65b363fe520c2ab29a9303f79afd4cadb
Component: engine
Legacy plugin (aka pluginv1) calls in libnetwork are replaced with
calls using the new plugin model (aka pluginv2). pkg/plugins is still
used for managing the HTTP client connections to the plugin.
This commit makes the necessary changes in docker/docker. Part 2 will
take care of the libnetwork changes.
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Upstream-commit: 17b8aba1d924e505563af400d758b89c8406961d
Component: engine
This was removed in a clean-up
(060f4ae6179b10aeafa883670826159fdae8204a) but should not have been.
Fixes issues with volumes when upgrading from pre-1.7.0 daemons.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: dc712b92495d12d789f45c84d45c3de3292089a8
Component: engine
Host-mounted directories are correctly chowned to the remapped root if the directory does not already exist.
Upstream-commit: 078964177f3e964774a150d688e5ff2b75220028
Component: engine
This makes sure that:
1. Already existing directories are left untouched
2. Newly created directories are chowned to the correct root UID/GID in case of user namespaces
3. All parent directories still get created with host root UID/GID
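A hedged illustration (paths are examples), with a userns-enabled daemon:
```
# daemon started with: dockerd --userns-remap=default
# assume /data/new does not exist on the host yet
$ docker run --rm -v /data/new/dir:/dir alpine true
# /data/new (the parent) is created with host root ownership,
# /data/new/dir is chowned to the remapped root UID/GID,
# and any directory that already existed is left untouched
```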
Fix #21738
Signed-off-by: Antonis Kalipetis <akalipetis@gmail.com>
Upstream-commit: 72d8a77d522896ec73e07f49a1c1bcb44bbf3bbd
Component: engine
This fix tries to address the issue raised in 26220 where
disconnecting a container from a network does not work if
the network ID (instead of the network name) has been specified.
The issue was that internally, when trying to disconnect
a container from the network, the originally passed network
name or ID was used.
This fix uses the resolved network name (e.g., `bridge`) instead.
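As an illustration (container name is a placeholder), disconnecting by network ID now behaves the same as by name:
```
$ NETWORK_ID=$(docker network inspect --format '{{.Id}}' bridge)
$ docker network disconnect "$NETWORK_ID" mycontainer
# previously this only worked when the network name ("bridge") was given
```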
An integration test has been added to cover the changes.
This fix fixes 26220.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Upstream-commit: 83d79f13aa2e94085e83e0f5bc5d51305dd2c192
Component: engine
When an xfs filesystem is used on top of a thin pool, xfs can get ENOSPC
errors from the thin pool when the pool is full. As of now xfs retries the
IO and keeps on retrying without giving up. This can result in the container
application being stuck for a very long time. In fact I have seen instances
of unkillable processes. That means once the thin pool is full and a process
gets stuck, the container can't be stopped/killed either, and the only option
left seems to be a power cycle of the box.
In another instance, the writer did not block but failed after a while. But
when I tried to exit/stop the container, unmounting xfs hung and the only
thing I could do was power cycle the machine.
The upstream kernel has now committed patches that allow user space to
customize xfs behavior in case of errors. One of the knobs is
max_retries, which specifies how many times an IO should be retried
when ENOSPC is encountered.
This patch provides a tunable knob (dm.xfs_nospace_max_retries) so that
the user can specify a value for max_retries and tune xfs behavior. If
one sets this value to 0, xfs will not retry IO when an ENOSPC error is
encountered. It will instead give up and shut down the filesystem.
This knob can be useful if one is running into unkillable
process/container issues on top of xfs.
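For instance, to make xfs give up immediately on ENOSPC instead of retrying, the knob can be passed as a devicemapper storage option (a sketch; 0 is just the example value discussed above):
```
$ dockerd --storage-driver=devicemapper \
    --storage-opt dm.xfs_nospace_max_retries=0
```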
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Upstream-commit: 4f0017b9ad7dfa2e9dcdee69d000b98595893e60
Component: engine