… mainly by skipping if the daemon is remote.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Upstream-commit: 6016e79d2552b21643f4bfd093ce76d8ef956d79
Component: engine
Commit fd0e24b7189374e0fe7c55b6d26ee916d3ee1655 changed
the stats collection loop to use a `sleep()` instead
of `time.Tick()` in the for-loop.
This change caused a regression in situations where
no stats are being collected, or an error is hit in the loop:
in those cases the loop would `continue` and skip the `sleep()`,
spinning without any delay.
This patch puts the sleep at the start of the loop
to guarantee it's always hit.
This will delay the sampling, which is similar to the
behavior before fd0e24b7189374e0fe7c55b6d26ee916d3ee1655.
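Schematically, the loop now looks like this (a sketch only; `interval`,
`collect`, and `publish` are placeholders, not the actual collector code):

    for {
        // Sleeping at the top of the loop guarantees the delay is
        // applied even when the body bails out early with `continue`.
        time.Sleep(interval)

        stats, err := collect()
        if err != nil {
            // With the sleep at the bottom of the loop, this
            // `continue` used to skip it, spinning back-to-back.
            continue
        }
        publish(stats)
    }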
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: 481b8e54b45955e40075f49a9af321afce439320
Component: engine
This PR adds support for compression of log files.
I added a new option, compression, for the json-file log driver;
it allows the user to specify the compression algorithm used to
compress the log files. By default, log files are not compressed.
At present, only 'gzip' is supported.
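For illustration, the option would be passed through the container's log
configuration; the option name below follows the description above and is
hypothetical, as the final name may differ:

    hostConfig := &container.HostConfig{
        LogConfig: container.LogConfig{
            Type: "json-file",
            Config: map[string]string{
                // "compression" is the option name as described above;
                // treat it as hypothetical.
                "compression": "gzip",
            },
        },
    }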
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
'docker logs' can read from compressed files
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
Add metadata to the gzip header, optimize 'readlog'
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
Upstream-commit: f69f09f44ce9fedbc9d70f11980c1fc8d7f77cec
Component: engine
This test case checks that a container created before the currently
running dockerd was started can be exported (as reported in #36561).
To satisfy this condition, either a pre-existing container is required,
or a daemon restart after container creation.
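In outline, the scenario looks like this when expressed against the Engine
API client (a sketch, not the actual test; the daemon restart step itself
is elided and would go through the test daemon helper):

    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        t.Fatal(err)
    }
    ctx := context.Background()

    // Create (but do not start) a container before the daemon restart.
    created, err := cli.ContainerCreate(ctx,
        &container.Config{Image: "busybox", Cmd: []string{"true"}},
        nil, nil, "pre-daemon-restart")
    if err != nil {
        t.Fatal(err)
    }

    // ... restart dockerd here (e.g. via the test daemon helper) ...

    // Exporting the pre-existing container must succeed (#36561).
    rc, err := cli.ContainerExport(ctx, created.ID)
    if err != nil {
        t.Fatal(err)
    }
    defer rc.Close()
    // rc is a tar stream of the container's filesystem.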
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: 6e7141c7a2c0de6fa3d6c9dcc56978a81f9d835e
Component: engine
Update libnetwork to 1b91bc94094ecfdae41daa465cc0c8df37dfb3dd to bring in a fix
for stale HNS endpoints on Windows:
When Windows Server 2016 is restarted with the Docker service running, it is
possible for endpoints to be deleted from the libnetwork store without being
deleted from HNS. This does not occur if the Docker service is stopped cleanly
first, or forcibly terminated (since the endpoints still exist in both). This
change works around the issue by removing any stale HNS endpoints for a network
when creating it.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: fb364f07468e94226250a1e77579ee6117c64be2
Component: engine
This updates libnetwork to 8892d7537c67232591f1f3af60587e3e77e61d41 to bring in
IPAM fixes for duplicate IP addresses.
- IPAM tests (libnetwork PR 2104) (no changes in vendored files)
- Fix for Duplicate IP issues (libnetwork PR 2105)
Also bump golang/x/sync to match libnetwork (no code changes, other
than the README being updated)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Upstream-commit: 55e0fe24db68b16edccb2fa49c3b1b9d3a9ce58c
Component: engine
connection does. If this isn't done, a container listening on stdin won't
receive an EOF when the client closes the stream at its end.
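From the client's point of view the expected behavior is a half-close:
after the write side of the attach stream is closed, the process reading
stdin in the container should see EOF (a sketch using the attach API, not
the code changed here; `cli`, `ctx`, and `id` are assumed to be set up):

    resp, err := cli.ContainerAttach(ctx, id, types.ContainerAttachOptions{
        Stream: true,
        Stdin:  true,
        Stdout: true,
    })
    if err != nil {
        return err
    }
    defer resp.Close()

    io.WriteString(resp.Conn, "some input\n")
    // Half-close our side of the stream; a `cat` running in the
    // container should now see EOF on its stdin and exit.
    resp.CloseWrite()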
Signed-off-by: Jim Minter <jminter@redhat.com>
Upstream-commit: 37983921c90b468cafd3ba2ca2574fb81cafe5a7
Component: engine
Commit 7a7357dae1bccc ("LCOW: Implemented support for docker cp + build")
changed `container.BaseFS` from a string (which could be empty, but could
not lead to a nil pointer dereference) to containerfs.ContainerFS,
which can be `nil`, so a nil dereference is at least theoretically
possible and results in a panic (i.e. an engine crash).
Such a panic can be avoided by carefully analysing the source code in all
the places that dereference the variable, to make sure it can't be nil.
In practice, such analysis is infeasible as the code is constantly
evolving.
Still, we need to avoid panics and crashes. A good way to do so is to
explicitly check that a variable is non-nil, returning an error
otherwise. Even where such a check looks absolutely redundant,
further changes to the code might make it useful, and an extra check
is not a big price to pay to avoid a panic.
This commit adds such checks in all the places where it is not obvious
that container.BaseFS is non-nil (that is, where daemon.Mount() is not
called a few lines earlier).
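The added checks follow a simple pattern (a sketch; the actual error
message differs per call site):

    if container.BaseFS == nil {
        return errors.New("container base filesystem is not mounted")
    }
    // container.BaseFS can be dereferenced safely below this point.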
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: d6ea46cedaca0098c15843c5254a337d087f5cd6
Component: engine
If ContainerExport() is called for an unmounted container, it leads
to a daemon panic, as container.BaseFS, which is dereferenced here, is
nil.
To fix, do not rely on container.BaseFS; use the one returned from
rwlayer.Mount().
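In outline, the export path now uses the filesystem handle returned by the
mount call rather than the possibly-nil field (a sketch with simplified
error handling, not the exact daemon code):

    basefs, err := rwlayer.Mount(container.MountLabel)
    if err != nil {
        return err
    }
    // Export from basefs; container.BaseFS may be nil when this daemon
    // never mounted the container (e.g. it was created before a restart).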
Fixes: 7a7357dae1bccc ("LCOW: Implemented support for docker cp + build")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: 81f6307eda44ab3a91de6e29304810a976161d74
Component: engine
With the ticker this could end up just doing back-to-back checks, which
isn't really what we want here.
Instead use a sleep to ensure we actually sleep for the desired
interval.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 04a0d6b863ed50cfffa79936cf9cdab7a3a9e7df
Component: engine
In info, we only need the number of images, but `CountImages` was
getting the whole map of images and then grabbing the length from that.
This causes a lot of unnecessary CPU usage and memory allocations,
which grow O(n) with the number of images.
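Conceptually, the change is to stop materializing the whole image map just
to take its length (names below are illustrative, not the exact image
store API):

    // Before (sketch): build a map with an entry per image,
    // only to take its length.
    count := len(daemon.imageStore.Map())

    // After (sketch): have the store report its size directly.
    count = daemon.imageStore.Len()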
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: f6a7763b6f3256bed9a7352021745189d0ca8dc9
Component: engine
I noticed this test failed on Windows:
> 17:46:24 docker_cli_run_test.go:4361:
> 17:46:24 c.Fatal("running container timed out") // cleanup in teardown
I also noticed that in general tests are running slower on Windows,
for example TestStartAttachSilent (which runs a container with
`busybox echo test` and then starts it again) took 29.763s.
This means a simple container start can easily take 15s, which
explains the above failure.
Double the timeout from 15s to 30s.
Fixes: 4e262f6387 ("Fix race on sending stdin close event")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: 5043639645123f2728c81c9a55fea525475ec324
Component: engine
I am not quite sure why but this test is sometimes failing like this:
> 15:21:41 --- FAIL: TestLinksEtcHostsContentMatch (0.53s)
> 15:21:41 assertions.go:226:
>
> Error Trace: links_linux_test.go:46
> 15:21:41
> Error: Not equal:
> 15:21:41
> expected: "127.0.0.1\tlocalhost\n::1\tlocalhost
> ip6-localhost
> ip6-loopback\nfe00::0\tip6-localnet\nff00::0\tip6-mcastprefix\nff02::1\tip6-allnodes\nff02::2\tip6-allrouters\n172.17.0.2\tf53feb6df161\n"
> 15:21:41
> received: ""
To eliminate some possible causes of failure (such as ignoring stderr from
`cat` or its exit code), let's use container.Exec() to read the file from
the container.
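With the helper, reading the file is a single exec whose stdout, stderr,
and exit code all come back in one result (a sketch; the exact helper
signature is per the integration/internal/container package, and `client`,
`ctx`, and `cID` are assumed to be set up):

    res, err := container.Exec(ctx, client, cID, []string{"cat", "/etc/hosts"})
    if err != nil {
        t.Fatal(err)
    }
    // res carries stdout, stderr and the exit code, so a non-zero exit
    // or stray stderr output no longer silently yields an empty string.
    hosts := res.Stdout()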
Fixes: e6bd20edcbf ("Migrate some integration-cli test to api tests")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: ad2f88d8ccbd9dd0a8d9c4f96ece3956f60489df
Component: engine
As mentioned in commit 9e31938, test cases that use t.Parallel()
and start a docker daemon might step on each other's toes as they
try to configure iptables during startup, resulting in flaky tests.
To avoid this, --iptables=false should be used when starting the daemon.
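In the test this amounts to passing the flag when starting the throwaway
daemon (a sketch using the test daemon helper):

    d := daemon.New(t)
    d.StartWithBusybox(t, "--iptables=false")
    defer d.Stop(t)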
Fixes: eaa5192856c1 ("Make container resource mounts unbindable")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Upstream-commit: c125e10a0486623ba3badebf974ea6e582373151
Component: engine
Ingress networks no longer automatically remove their load-balancing
endpoint (and sandbox) when the network is otherwise unpopulated. This
prevents automatic removal of ingress networks when all the containers
leave them. As a consequence, explicit removal of an ingress network
now also requires explicit removal of its load-balancing endpoint.
Signed-off-by: Chris Telfer <ctelfer@docker.com>
Upstream-commit: 3da4ebf355d3494d1403b2878a1ae6958b2724e9
Component: engine
This PR prevents automatic removal of the load balancing sandbox
endpoint when the endpoint is the last one in the network but
the network is marked as ingress.
Signed-off-by: Chris Telfer <ctelfer@docker.com>
Upstream-commit: bebad150c9c3bc6eb63758c10ef24b9298ecf6e2
Component: engine
The change in https://github.com/moby/moby/pull/35422 accidentally
caused the removal of the ingress network when the last member of a
service left the network. This did not show up in swarm-mode clusters
because the swarm manager would still maintain and return cluster state
about the network, even though its sandbox and endpoint had been
removed. This test verifies that, after a service is added and then
removed, the ingress sandbox remains in a functional state.
Signed-off-by: Chris Telfer <ctelfer@docker.com>
Upstream-commit: 805b6a7f749a6c7cbb237e21ee7260d536621808
Component: engine