This should test that
- all the messages produced are delivered (i.e. not lost)
- followLogs() exits
Loosely based on the test of the same name by Brian Goff; see
https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f845d76d047760c91dc0c7076aea840291fbdbc5)
Upstream-commit: 2a82480df9ad91593d59be4b5283917dbea2da39
Component: engine
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for inotify watch) are never released.
If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is daemon restart.
Apparently, what happens is:
1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).
2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.
3. Upon receiving ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps sending
messages _synchronously_ (which is OK for now).
4. Logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was already called before,
and due to once.Do(), nothing happens (which is good, as otherwise
it would panic on closing a closed channel).
5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.
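To make item 5 concrete, here is a minimal, self-contained Go sketch
(illustrative only, not the daemon code) of a sender goroutine blocking
forever on a channel once its reader is gone:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	msgs := make(chan string) // stands in for logWatcher.Msg

	// Sender: stands in for the goroutine in followLogs().
	go func() {
		defer fmt.Println("cleanup ran") // never printed
		for i := 0; ; i++ {
			msgs <- fmt.Sprintf("log line %d", i) // blocks forever once the reader is gone
		}
	}()

	fmt.Println(<-msgs) // the consumer reads one message, then goes away

	time.Sleep(100 * time.Millisecond)
	fmt.Println("reader gone; sender is stuck, its deferred cleanup never runs")
}
```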
Alas, the fix is somewhat complicated:
1. Distinguish between close from logs producer and logs consumer.
To that effect,
- yet another channel is added to LogWatcher;
- {Watch,}Close() are renamed to {Watch,}ProducerGone();
- {Watch,}ConsumerGone() are added;
*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().
2. Code that was doing (*LogWatcher).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.
3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.
4. followLogs() is modified accordingly:
- context cancellation is happening on WatchProducerGone(),
and once it's received the FileWatcher is closed and waitRead()
returns errDone on EOF (i.e. log rotation handling logic is disabled);
- due to this, the code that was writing synchronously to logWatcher.Msg
can be (and is) removed, as the code above it handles this case;
- function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself.
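For illustration, a rough sketch of the API shape described in item 1
(field and method names follow the description above; everything else,
including the Message type, is simplified):
```go
package logger

import "sync"

// Message is a single log entry (simplified).
type Message struct{ Line []byte }

// LogWatcher delivers log messages and lets each side signal that it is gone.
type LogWatcher struct {
	Msg          chan *Message
	producerOnce sync.Once
	producerGone chan struct{}
	consumerOnce sync.Once
	consumerGone chan struct{}
}

func NewLogWatcher() *LogWatcher {
	return &LogWatcher{
		Msg:          make(chan *Message, 1),
		producerGone: make(chan struct{}),
		consumerGone: make(chan struct{}),
	}
}

// ProducerGone signals that the logs producer (the container) is gone.
// sync.Once makes repeated calls safe, mirroring the old Close() behavior.
func (w *LogWatcher) ProducerGone() {
	w.producerOnce.Do(func() { close(w.producerGone) })
}

// WatchProducerGone returns a channel that is closed once the producer is gone.
func (w *LogWatcher) WatchProducerGone() <-chan struct{} { return w.producerGone }

// ConsumerGone signals that the logs consumer (e.g. "docker logs") is gone.
func (w *LogWatcher) ConsumerGone() {
	w.consumerOnce.Do(func() { close(w.consumerGone) })
}

// WatchConsumerGone returns a channel that is closed once the consumer is gone.
func (w *LogWatcher) WatchConsumerGone() <-chan struct{} { return w.consumerGone }
```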
While at it,
1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7e3d ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490
2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.
Should fix https://github.com/moby/moby/issues/37391
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 916eabd459fe707b5c4a86377d12e2ad1871b353)
Upstream-commit: 84a5b528aede5579861201e869870d10fc98c07c
Component: engine
This test case checks that followLogs() exits once the reader is gone.
Currently it does not (i.e. this test is supposed to fail) due to #37391.
[kolyshkin@: test case is by Brian Goff; changelog and all bugs are by me]
Source: https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d37a11bfbab83ab42b1160f116e863daac046192)
Upstream-commit: 511741735e0aa2fe68a66d99384c00d187d1a157
Component: engine
This code has many return statements, and for some of them the
"end logs" or "end stream" message was not printed, giving
the impression that this "for" loop never ended.
Make sure that "begin logs" is always followed by "end logs".
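One idiomatic way to guarantee the pairing on every return path is a
deferred call right after the opening message; a minimal sketch
(illustrative names, not the actual daemon code):
```go
package main

import "log"

func followLogs() {
	log.Print("begin logs")
	defer log.Print("end logs") // runs on every return path, early or not

	for i := 0; ; i++ {
		if i == 2 {
			return // "end logs" is still printed
		}
	}
}

func main() { followLogs() }
```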
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 2e4c2a6bf9cb47fd07e42f9c043024ed3dbcd04d)
Upstream-commit: 2b8bc86679b7153bb4ace063a858637df0f16a2e
Component: engine
The code in Close() that removes the watches was not working,
because it first sets `w.closed = true` and then calls w.remove(),
which starts with
```
if w.closed {
	return errPollerClosed
}
```
Fix by setting w.closed only after calling w.remove() for all the
files being watched.
While at it, remove the duplicated `delete(w.watches, name)` code.
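A simplified sketch of the corrected ordering (the real filePoller
carries more state; errPollerClosed and the field names follow the text
above, the rest is illustrative):
```go
package poller

import (
	"errors"
	"sync"
)

var errPollerClosed = errors.New("poller is closed")

type filePoller struct {
	mu      sync.Mutex
	closed  bool
	watches map[string]chan struct{}
}

// Close stops watching all files. Crucially, w.closed is set only AFTER
// the watches are removed; setting it first would make remove() bail out
// with errPollerClosed and leak every watch.
func (w *filePoller) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()

	if w.closed {
		return nil
	}
	for name := range w.watches {
		w.remove(name) // remove() also deletes the entry from w.watches
	}
	w.closed = true // the fix: only set once everything is removed
	return nil
}

// remove stops a single watch; callers must hold w.mu.
func (w *filePoller) remove(name string) error {
	if w.closed {
		return errPollerClosed
	}
	if ch, ok := w.watches[name]; ok {
		close(ch)
		delete(w.watches, name)
	}
	return nil
}
```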
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit fffa8958d00860b4e3563327a2cc6836a12d4ba9)
Upstream-commit: 4e2dbfa1af48191126b0910b9463bf94d8371886
Component: engine
There is no need to wait for up to 200ms in order to close
the file descriptor once chClose is signalled.
This commit might reduce the chances for occasional "The process
cannot access the file because it is being used by another process"
error on Windows, where an opened file can't be removed.
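Sketch of the idea (names approximate): instead of sleeping through the
full poll interval, the timer races with chClose in a select, so the
descriptor is closed as soon as the watch is dropped:
```go
package poller

import (
	"os"
	"time"
)

// watch polls f until chClose is closed. Instead of an unconditional
// time.Sleep(200 * time.Millisecond) between polls, the timer races with
// chClose, so the file descriptor is released immediately on removal.
func watch(f *os.File, chClose <-chan struct{}) {
	defer f.Close()

	timer := time.NewTimer(200 * time.Millisecond)
	defer timer.Stop()

	for {
		select {
		case <-timer.C:
			// poll f for changes here (stat, compare, notify), then re-arm
			timer.Reset(200 * time.Millisecond)
		case <-chClose:
			return // no need to wait out the remaining interval
		}
	}
}
```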
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit dfbb64ea7d042d5b2bb0c1c2b88e3682b7069b10)
Upstream-commit: 3a3bfcbf47e98212abfc9cfed860d9e99fc41cdc
Component: engine
In case of errors, the file descriptor is never closed. Fix it.
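The shape of such a fix, sketched with hypothetical names (addWatch
stands in for whatever setup can fail after the file is opened):
```go
package main

import (
	"fmt"
	"os"
)

// addWatch stands in for setup that can fail after the file is opened.
func addWatch(f *os.File) error { return fmt.Errorf("watch setup failed") }

// add opens a file and registers a watch. Before the fix, an error from
// addWatch() returned without closing f, leaking the file descriptor.
func add(name string) error {
	f, err := os.Open(name)
	if err != nil {
		return err
	}
	if err := addWatch(f); err != nil {
		f.Close() // the fix: release the descriptor on the error path
		return err
	}
	// on success the watcher owns f and closes it later
	return nil
}

func main() {
	if err := add("/etc/hosts"); err != nil {
		fmt.Println("error:", err)
	}
}
```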
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 88bcf1573ca2eaffc15da346a1651a3749567554)
Upstream-commit: 7be43586af6824c1e55cb502d9d2bab45c9b4505
Component: engine
To include https://github.com/vbatts/tar-split/pull/48, which
fixes the issue of creating an image with a >8GB file in it.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 92e75439037d205c218208945c70cfb3633e87aa)
Upstream-commit: d7085abec2e445630bedd3e79782c5ec33f62682
Component: engine
Add a test case for creating an 8GB file inside a container.
Due to a bug in tar-split this was failing in Docker 18.06.
The file being created is sparse, so there's not much I/O
happening or disk space being used -- meaning the test is
fast and does not require a lot of disk space.
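For reference, a sparse file of that size can be created without writing
the data; a minimal Go illustration of the trick (the actual test drives
a container, this only demonstrates the sparseness):
```go
package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Create("a8gb_file")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Truncate extends the file to 8 GiB without allocating blocks,
	// so this is fast and uses almost no disk space.
	if err := f.Truncate(8 << 30); err != nil {
		log.Fatal(err)
	}
}
```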
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b3165f5b2d7c1063c2a76a7ed5c2dd18f868276b)
Upstream-commit: fc1d808c44f77e3929c4eeba7066501890aecd4e
Component: engine
Fix a default case that caused the throttling to not be used.
Ensure that the nil client condition is handled.
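The commit message does not show the code, but a default case in a
select statement is a common way throttling gets silently bypassed,
since it makes the select non-blocking; a generic illustration (not the
actual patch):
```go
package main

import "time"

func process(events <-chan string, handle func(string)) {
	throttle := time.NewTicker(100 * time.Millisecond)
	defer throttle.Stop()

	for ev := range events {
		// Buggy version:
		//
		//   select {
		//   case <-throttle.C:
		//   default: // makes the select non-blocking: the tick is
		//   }        // almost never consumed, so nothing is throttled
		//
		// Fixed version: block until the next tick.
		<-throttle.C
		handle(ev)
	}
}

func main() {
	events := make(chan string, 3)
	events <- "a"
	events <- "b"
	events <- "c"
	close(events)
	process(events, func(s string) { println(s) })
}
```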
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit c3e32938430e03a316311f9e4fbdb743e492a07e)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Upstream-commit: f121eccf29576ce5d4b8256a71a9d32ee688ff7d
Component: engine
[18.09] backport: fix regression when filtering container names using a leading slash
Upstream-commit: b8a4fe5f8f58bccfa3763ca75c6465447c8ccd06
Component: engine
In case a volume is specified via Mounts API, and SELinux is enabled,
the following error happens on container start:
> $ docker volume create testvol
> $ docker run --rm --mount source=testvol,target=/tmp busybox true
> docker: Error response from daemon: error setting label on mount
> source '': no such file or directory.
The functionality to relabel the source of a local mount specified via
the Mounts API was introduced in commit 5bbf5cc and later broken by commit
e4b6adc, which removed setting the mp.Source field.
With the current data structures, the host dir is already available in
v.Mountpoint, so let's just use it.
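A loose sketch of the idea (label.Relabel is the real helper from
github.com/opencontainers/selinux/go-selinux/label; the surrounding
types and the shared flag value are illustrative assumptions):
```go
package daemon

import (
	"fmt"

	"github.com/opencontainers/selinux/go-selinux/label"
)

// localVolume stands in for the volume object; in the daemon the host
// directory of a local volume is available as v.Mountpoint.
type localVolume struct{ Mountpoint string }

// relabelSource sketches the fix: relabel the volume's own Mountpoint
// instead of mp.Source, which is empty for volumes created via the
// Mounts API.
func relabelSource(v *localVolume, mountLabel string) error {
	if err := label.Relabel(v.Mountpoint, mountLabel, false); err != nil {
		return fmt.Errorf("error setting label on mount source '%s': %v", v.Mountpoint, err)
	}
	return nil
}
```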
Fixes: e4b6adc
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: 4032b6778df39f53fda0e6e54f0256c9a3b1d618
Component: engine
Commit 5c8da2e96738addd8a175e5b9a23b8c37d4a4dfe updated the filtering behavior
to match container-names without having to specify the leading slash.
This change caused a regression in situations where a regex was provided as
filter, using an explicit leading slash (`--filter name=^/mycontainername`).
This fix changes the filters to match containers both with and without the
leading slash, effectively making the leading slash optional when filtering.
With this fix, filters with and without a leading slash produce the same result:
$ docker ps --filter name=^a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
56e53770e316        busybox             "sh"                2 minutes ago       Up 2 minutes                            a1

$ docker ps --filter name=^/a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
56e53770e316        busybox             "sh"                3 minutes ago       Up 3 minutes                            a1

$ docker ps --filter name=^b
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b69003b6a6fe        busybox             "sh"                About a minute ago  Up About a minute                       b1

$ docker ps --filter name=^/b
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b69003b6a6fe        busybox             "sh"                56 seconds ago      Up 54 seconds                           b1

$ docker ps --filter name=/a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1

$ docker ps --filter name=a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1
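Conceptually, the fix makes the match succeed for either form of the
name; a simplified, self-contained sketch (not the actual daemon code):
```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchName reports whether the container name matches the filter
// pattern, with the leading slash treated as optional on both sides.
func matchName(pattern, name string) bool {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return false
	}
	// Try the name as stored ("/a1") and without the leading slash ("a1"),
	// so both "^/a" and "^a" style filters keep working.
	return re.MatchString(name) || re.MatchString(strings.TrimPrefix(name, "/"))
}

func main() {
	for _, p := range []string{"^a", "^/a", "^b", "^/b"} {
		fmt.Println(p, matchName(p, "/a1")) // true, true, false, false
	}
}
```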
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f9b5ba810e9314ab9335490cd41316940a32b5e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: 5fa80da2d385520f2c3c91e8ce23d181e4f93097
Component: engine
The remote API allows full privilege escalation and is equivalent to
having root access on the host. Because of this, the API should never
be accessible through an insecure connection (TCP without TLS, or TCP
without TLS verification).
Although a warning is already logged on startup if the daemon uses an
insecure configuration, this warning is not very visible (unless someone
decides to read the logs).
This patch attempts to make insecure configuration more visible by sending
back warnings through the API (which will be printed when using `docker info`).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: 547b993e07330f3e74cba935975fce05e8661381
Component: engine
When requesting information about the daemon's configuration through the `/info`
endpoint, missing features (or non-recommended settings) may have to be presented
to the user.
Detecting these situations and printing warnings is currently handled by the
cli, which results in some complications:
- duplicated effort: each client has to re-implement detection and warnings.
- it's not possible to generate warnings for reasons outside of the information
returned in the `/info` response.
- cli-side detection has to be updated for new conditions. This means that an
older cli connecting to a new daemon may not print all warnings (due to
it not detecting the new conditions).
- some warnings (in particular, warnings about storage-drivers) depend on
driver-status (`DriverStatus`) information. The format of the information
returned in this field is not part of the API specification and can change
over time, resulting in cli-side detection no longer being functional.
This patch adds a new `Warnings` field to the `/info` response. This field is
used to return warnings to be presented to the user.
Existing warnings that are currently handled by the CLI are copied to the daemon
as part of this patch. This change is backward-compatible with existing
clients; old clients can continue to use the client-side warnings, whereas new
clients can skip client-side detection and print the warnings that are returned
by the daemon.
Example response with this patch applied:
```bash
curl --unix-socket /var/run/docker.sock http://localhost/info | jq .Warnings
```
```json
[
"WARNING: bridge-nf-call-iptables is disabled",
"WARNING: bridge-nf-call-ip6tables is disabled"
]
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: a3d4238b9ce653d2863fbc93057ed4162a83221e
Component: engine
Blocks the execution of tasks during the Prepare phase until there
exists an IP address for every overlay network in use by the task. This
prevents a task from starting before the NetworkAttachment containing
the IP address has been sent down to the node.
Includes a basic test for the correct use case.
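In outline, the Prepare phase now waits on a per-network readiness
signal; a simplified, self-contained sketch (all names are illustrative):
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// network stands in for a task's attachment slot on one overlay network.
type network struct {
	name            string
	attachmentReady chan struct{} // closed once the NetworkAttachment (IP) arrives
}

// waitForAttachments blocks, as Prepare now does, until every overlay
// network used by the task has an address or the context is cancelled.
func waitForAttachments(ctx context.Context, networks []*network) error {
	for _, n := range networks {
		select {
		case <-n.attachmentReady:
		case <-ctx.Done():
			return fmt.Errorf("waiting for %s: %w", n.name, ctx.Err())
		}
	}
	return nil
}

func main() {
	n := &network{name: "overlay0", attachmentReady: make(chan struct{})}
	go func() { // simulate the manager sending the attachment down to the node
		time.Sleep(50 * time.Millisecond)
		close(n.attachmentReady)
	}()
	fmt.Println(waitForAttachments(context.Background(), []*network{n}))
}
```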
Signed-off-by: Drew Erny <drew.erny@docker.com>
Upstream-commit: 3c81dc3103d9c88cb333c80e0810f80ab80c374e
Component: engine
This feature allows users to specify a list of subnets for the global
default address pool. Users can configure the subnet list using the
'swarm init' command. The daemon passes the information to swarmkit.
We validate the information in swarmkit, then store it in the cluster
object. When IPAM init is called, we pass the subnet list to the IPAM
driver; see the usage example below.
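For illustration (the subnet values are made up), the pools can then be
supplied at init time:
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool 10.30.0.0/16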
Signed-off-by: selansen <elango.siva@docker.com>
Upstream-commit: f7ad95cab9cc7ba8925673a933028d53284c13f5
Component: engine