This adds support for the passthrough on build, push, login, and search.
Revamp the integration test to cover these cases and make it more
robust.
Use backticks instead of quoted strings for backslash-heavy string
constants.
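For illustration only, a raw string literal keeps the backslashes readable (the constant below is just an example value, not the one from the test):
```go
package main

import "fmt"

func main() {
	// Interpreted string: every backslash must be escaped.
	quoted := "\\x15\\x03\\x01\\x00\\x02\\x02"

	// Raw string literal: backslashes are taken literally, so a
	// backslash-heavy constant is easier to read and maintain.
	raw := `\x15\x03\x01\x00\x02\x02`

	fmt.Println(quoted == raw) // true
}
```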
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: c44e7a3e632c3ea961cb8c12ba45371f54e6699c
Component: engine
Changes how the Engine interacts with Registry servers on image pull.
Previously, Engine sent a User-Agent string to the Registry server
that included only the Engine's version information. This commit
appends to that string the fields from the User-Agent sent by the
client (e.g., Compose) of the Engine. This allows Registry server
operators to understand what tools are actually generating pulls on
their registries.
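A minimal sketch of the idea, with hypothetical names and an assumed output format (the real implementation lives in the engine's registry code):
```go
package useragent

import "strings"

// appendUpstreamUA is a hypothetical helper: it takes the Engine's own
// User-Agent and the User-Agent received from the Engine's client
// (e.g. Compose), and builds the value that is sent to the Registry.
func appendUpstreamUA(engineUA, clientUA string) string {
	clientUA = strings.TrimSpace(clientUA)
	if clientUA == "" {
		return engineUA
	}
	// Append the upstream fields so registry operators can see which
	// tool ultimately triggered the pull, e.g.
	// "docker/1.11.0 ... UpstreamClient(docker-compose/1.6.2)".
	return engineUA + " UpstreamClient(" + clientUA + ")"
}
```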
Signed-off-by: Mike Goelzer <mgoelzer@docker.com>
Upstream-commit: d1502afb63a10df0bfce20ae2957774cfb3e58d8
Component: engine
- cherry-pick from 1.10.3 branch: 0186f4d4223a094a050d06f456355da3ae431468
- add token service test suite
- add integration test (missing in 1.10.3 branch)
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Upstream-commit: 1b5c2e1d722757a55364fb45cf3fcec7f2c75fb4
Component: engine
This test was checking that it received every progress update that was
produced. But delivery of these intermediate progress updates is not
guaranteed. A new update can overwrite the previous one if the previous
one hasn't been sent to the channel yet.
The call to t.Fatalf exited the goroutine that was consuming the
channel, which caused a deadlock and an eventual test timeout rather
than a proper failure message.
Failure seen here:
https://jenkins.dockerproject.org/job/Docker-PRs-experimental/16400/console
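The following sketch illustrates the "latest value wins" delivery that the test must tolerate; the names and the single-producer, capacity-1 setup are assumptions, not the engine's actual code:
```go
package progresssketch

type Progress struct {
	ID      string
	Current int64
	Total   int64
}

// send delivers p with "latest value wins" semantics, assuming a single
// producer and a channel of capacity 1: if the consumer has not drained
// the previous update yet, that update is dropped and replaced, so a
// test must not expect to observe every intermediate update.
func send(ch chan Progress, p Progress) {
	for {
		select {
		case ch <- p:
			return // delivered
		default:
		}
		select {
		case <-ch: // drop the stale, undelivered update and retry
		default:
		}
	}
}
```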
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 2f4aa9658408ac72a598363c6e22eadf93dbb8a7
Component: engine
The download manager assumed there was at least one layer involved in
all images. This can be false if the image is essentially a copy of
`scratch`.
Fix a nil pointer dereference that happened in this case. Add
integration tests that involve schema1 and schema2 manifests.
Fixes #21213
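A minimal sketch of the guard, using illustrative stand-in types rather than the engine's own:
```go
package downloadsketch

// Layer and RootFS are illustrative stand-ins for the engine's types.
type Layer struct{ DiffID string }
type RootFS struct{ DiffIDs []string }

// download sketches the guard: an image that is essentially a copy of
// `scratch` has no layers, so there is no "last download" to wait on
// and dereference.
func download(layers []Layer) RootFS {
	var rootFS RootFS
	if len(layers) == 0 {
		return rootFS // nothing to download; avoid the nil dereference
	}
	for _, l := range layers {
		// ... schedule the layer download (elided) ...
		rootFS.DiffIDs = append(rootFS.DiffIDs, l.DiffID)
	}
	return rootFS
}
```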
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 7cf894ce1013e5843d5c151f24520b51d34515d0
Component: engine
Fix and add test for case c) in #21054
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Upstream-commit: 497d545093bce4f01455bf8d2e1658435dbb040b
Component: engine
Use token handler options for initialization.
Update auth endpoint to set identity token in response.
Update credential store to match distribution interface changes.
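A rough sketch of the credential-store shape implied here; the interface below is reconstructed from memory of distribution's auth.CredentialStore and should be treated as an assumption, not a definitive copy:
```go
package credsketch

import "net/url"

// credentialStore holds basic-auth credentials plus identity (refresh)
// tokens keyed by service, matching the shape described above.
type credentialStore interface {
	Basic(*url.URL) (string, string)
	RefreshToken(realm *url.URL, service string) string
	SetRefreshToken(realm *url.URL, service, token string)
}

type store struct {
	username, password string
	refreshTokens      map[string]string // keyed by service
}

func (s *store) Basic(*url.URL) (string, string) { return s.username, s.password }

func (s *store) RefreshToken(_ *url.URL, service string) string {
	return s.refreshTokens[service]
}

func (s *store) SetRefreshToken(_ *url.URL, service, token string) {
	if s.refreshTokens == nil {
		s.refreshTokens = map[string]string{}
	}
	s.refreshTokens[service] = token
}

var _ credentialStore = (*store)(nil)
```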
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
Upstream-commit: e896d1d7c4459c4b357efdd780e9fb9dd9bc90e0
Component: engine
Further differentiate the APIEndpoint used with V2 from the endpoint type which is only used for v1 registry interactions
Rename Endpoint to V1Endpoint and remove version ambiguity
Use distribution token handler for login
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: f2d481a299f7404f5cabbe0f8e6a4ae3c3211c1e
Component: engine
Concurrent uploads which share layers worked correctly as of #18353,
but unfortunately #18785 caused a regression. That PR removed the logic
that shares digests between different push sessions. This overlooked the
case where one session was waiting for another session to upload a
layer.
This commit adds back the ability to propagate this digest information,
using the distribution.Descriptor type because this is what is received
from stats and uploads, and also what is ultimately needed for building
the manifest.
Surprisingly, there was no test covering this case. This commit adds
one. It fails without the fix.
See recent comments on #9132.
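An illustrative sketch of propagating the descriptor between sessions; the types and names are stand-ins, with only the field names mirroring distribution.Descriptor:
```go
package pushsketch

import "sync"

// descriptor mirrors the distribution.Descriptor fields that matter
// for building the manifest.
type descriptor struct {
	MediaType string
	Size      int64
	Digest    string
}

// layerResult lets the session that actually stats or uploads a layer
// publish the resulting descriptor, so sessions that were merely
// waiting on that upload can still build their manifests.
type layerResult struct {
	desc descriptor
	done chan struct{}
	once sync.Once
}

func newLayerResult() *layerResult { return &layerResult{done: make(chan struct{})} }

// setRemoteDescriptor is called once the blob is known to exist in the
// remote repository.
func (r *layerResult) setRemoteDescriptor(d descriptor) {
	r.once.Do(func() {
		r.desc = d
		close(r.done)
	})
}

// result blocks until some session has published the descriptor.
func (r *layerResult) result() descriptor {
	<-r.done
	return r.desc
}
```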
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 5c99eebe81958a227dfaed1145840374ce50bbbb
Component: engine
Attempt layer mounts from up to 3 source repositories, possibly
falling back to a standard blob upload for cross repository pushes.
Addresses compatibility issues with token servers which do not grant
multiple repository scopes, resulting in an authentication failure for
layer mounts, which would otherwise cause the push to terminate with an
error.
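A hedged sketch of the mount-then-fallback strategy; the mount/upload callbacks and the constant are hypothetical:
```go
package mountsketch

const maxMountAttempts = 3

// pushLayer attempts a cross-repository blob mount from a bounded
// number of candidate source repositories, and if every mount attempt
// fails (for example because the token server only granted a single
// repository scope), falls back to a regular blob upload instead of
// failing the push.
func pushLayer(candidates []string, mount func(sourceRepo string) error, upload func() error) error {
	attempts := 0
	for _, repo := range candidates {
		if attempts == maxMountAttempts {
			break
		}
		attempts++
		if err := mount(repo); err == nil {
			return nil // mounted; no blob data transferred
		}
		// Mount failed (for example, the token server refused the
		// additional repository scope); try the next candidate.
	}
	// Every mount attempt failed: fall back to a standard upload so the
	// push can still succeed.
	return upload()
}
```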
Signed-off-by: Brian Bland <brian.bland@docker.com>
Upstream-commit: 1d3480f9ba3525309030497d5c8a3dd5725ed15a
Component: engine
This allows easier URL handling in code that uses APIEndpoint.
If we continued to store the URL unparsed, it would require redundant
parsing whenever we want to extract information from it. Also, parsing
the URL earlier should improve validation.
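A minimal sketch of what storing the parsed URL looks like (illustrative names, not the actual APIEndpoint definition):
```go
package endpointsketch

import "net/url"

// apiEndpoint stores the registry URL already parsed, so later code can
// read the scheme and host directly instead of re-parsing a string, and
// bad URLs are rejected up front.
type apiEndpoint struct {
	URL *url.URL
	// other fields elided
}

func newAPIEndpoint(raw string) (*apiEndpoint, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return nil, err // validation happens once, at construction
	}
	return &apiEndpoint{URL: u}, nil
}
```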
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 79db131a358f15d4bdef37e251daf27429d116b3
Component: engine
With the --insecure-registry daemon option (or talking to a registry on
a local IP), the daemon will first try TLS, and then try plaintext if
something goes wrong with the push or pull. It doesn't make sense to try
plaintext if an HTTP request went through while using TLS. This commit
changes the logic to keep track of host/port combinations where a TLS
attempt managed to complete at least one HTTP request (whether the response
code indicated success or not). If the host/port responded to an HTTP
request over TLS, we won't try to make plaintext HTTP requests to it.
This also results in better error messages; previously, the error
sometimes ended up showing the result of the plaintext attempt, like this:
Error response from daemon: Get
http://myregistrydomain.com:5000/v2/: malformed HTTP response
"\x15\x03\x01\x00\x02\x02"
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 5e8af46fda3f4e17e06726237fc6b9ab6957e3ea
Component: engine
Several improvements to error handling:
- Introduce ImageConfigPullError type, wrapping errors related to
downloading the image configuration blob in schema2. This allows for a
more descriptive error message to be seen by the end user (see the
sketch after this list).
- Change some logrus.Debugf calls that display errors to logrus.Errorf.
Add log lines in the push/pull fallback cases to make sure the errors
leading to the fallback are shown.
- Move error-related types and functions which are only used by the
distribution package out of the registry package.
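A minimal sketch of such a wrapper type; the exact field and message are illustrative, not copied from the engine:
```go
package errsketch

// imageConfigPullError wraps the underlying error from fetching the
// schema2 image configuration blob, so the user sees which stage of the
// pull failed.
type imageConfigPullError struct {
	Err error
}

func (e imageConfigPullError) Error() string {
	return "error pulling image configuration: " + e.Err.Error()
}
```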
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 8f26fe4f59ce515c68440da1443ace4c96e89d4a
Component: engine
This makes the behavior consistent with having incorrect credentials.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 7b81bc147cf75cb32697e8fdf88e05ae879cb879
Component: engine
This will allow it to be reused between download attempts in a
subsequent commit.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: f425529e7e0a6b15c8cc43f0c1dbb7a42572e30d
Component: engine
Add test to make sure the new registry pass-thru token is only sent to the intended hosts
Upstream-commit: 66a4e557f955713adea5c97e42202fdbc0f5c06c
Component: engine
Revert "Set idle timeouts for HTTP reads and writes in communications with the registry"
Upstream-commit: 3fa0d09e74131f4a4dca43a1a28eb014028be62d
Component: engine
This reverts commit 84b2162c1a15256ac09396ad0d75686ea468f40c.
The intent of this commit was to set an idle timeout on an HTTP
connection. If a read took more than 60 seconds to complete, or a write
took more than 60 seconds to complete, the connection would be
considered dead.
This doesn't work properly, because the HTTP internals apparently read
from the connection concurrently while writing. An upload that doesn't
complete in 60 seconds leads to a timeout.
Fixes #19967
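For reference, the reverted approach amounted to deadline-setting along these lines (a hedged reconstruction, not the exact code that was removed):
```go
package timeoutsketch

import (
	"net"
	"time"
)

// idleTimeoutConn gives every Read and Write a fixed deadline. The flaw
// described above is that Go's HTTP transport keeps a Read pending
// (watching for an early response or a closed connection) while a long
// request body is being written, so an upload that takes longer than
// the timeout trips the read deadline even though the connection is
// healthy.
type idleTimeoutConn struct {
	net.Conn
	timeout time.Duration
}

func (c idleTimeoutConn) Read(b []byte) (int, error) {
	c.Conn.SetReadDeadline(time.Now().Add(c.timeout))
	return c.Conn.Read(b)
}

func (c idleTimeoutConn) Write(b []byte) (int, error) {
	c.Conn.SetWriteDeadline(time.Now().Add(c.timeout))
	return c.Conn.Write(b)
}
```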
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: cbda80aaff026329a13bb2d0943a4c428251e207
Component: engine
`Upload` already closes the reader returned by `compress` and the
progressreader passed into it, before returning. But even so, the
io.Copy inside compress' goroutine needs to attempt a read from the
progressreader to notice that it's closed, and this read has a side
effect of outputting a progress message. If this happens after `Upload`
returns, it can result in a write to a closed channel. Change `compress`
to return a channel that allows the caller to wait for its goroutine to
finish before freeing any resources connected to the reader that was
passed to it.
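A sketch of the resulting pattern, assuming an io.Pipe-based compress helper (illustrative, not the exact engine code):
```go
package compresssketch

import (
	"compress/gzip"
	"io"
)

// compress pipes gzipped data from `in` and also returns a channel that
// is closed only after the copying goroutine has finished, so the
// caller can release the reader (and its progress output) only once no
// more reads will happen.
func compress(in io.Reader) (io.ReadCloser, chan struct{}) {
	pipeReader, pipeWriter := io.Pipe()
	done := make(chan struct{})

	go func() {
		gzWriter := gzip.NewWriter(pipeWriter)
		_, err := io.Copy(gzWriter, in)
		if cerr := gzWriter.Close(); err == nil {
			err = cerr
		}
		pipeWriter.CloseWithError(err)
		close(done) // signal: no further reads from `in` will occur
	}()

	return pipeReader, done
}
```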
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: e273445dd407df6803d7b80863b644a6cfa2c1f5
Component: engine
Otherwise, some operations can get stuck indefinitely when the remote
side is unresponsive.
Fixes #12823
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 84b2162c1a15256ac09396ad0d75686ea468f40c
Component: engine
A watcher would output the current progress item when it was detached,
in case it had missed that item earlier; otherwise the user could be
left seeing some intermediate step of the operation. This commit changes
the watcher to only output the item on detach if it didn't already
output the same item.
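A minimal sketch of the detach behavior with illustrative types:
```go
package watchsketch

type progress struct {
	ID, Action     string
	Current, Total int64
}

type watcher struct {
	out         chan<- progress
	lastWritten progress
	hasWritten  bool
}

func (w *watcher) write(p progress) {
	w.out <- p
	w.lastWritten, w.hasWritten = p, true
}

// release is called when the watcher detaches: re-emit the transfer's
// current progress item only if this watcher never wrote it, so the
// user is not left looking at a stale intermediate step, and nothing is
// printed twice.
func (w *watcher) release(current progress, hasCurrent bool) {
	if hasCurrent && (!w.hasWritten || w.lastWritten != current) {
		w.write(current)
	}
}
```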
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: fde2329eaa1fab2327ae2e775af5aa04e2726ed5
Component: engine
Currently, the temporary file storing downloaded layer data is only
removed after a successful download or a digest verification error. A
transport-level error does not cause it to be removed. This is a
regression from 1.9 that could cause disk usage to grow until the Docker
daemon is restarted.
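A sketch of the cleanup pattern the fix implies; the helper names are hypothetical:
```go
package cleanupsketch

import "os"

// downloadToTemp removes the temporary layer file on *any* error path
// (including transport errors), not only after a successful download or
// a digest mismatch.
func downloadToTemp(doDownload func(f *os.File) error) (name string, err error) {
	tmp, err := os.CreateTemp("", "layer-download-")
	if err != nil {
		return "", err
	}
	defer func() {
		tmp.Close()
		if err != nil {
			os.Remove(tmp.Name()) // don't leak disk space on failure
		}
	}()

	if err = doDownload(tmp); err != nil {
		return "", err
	}
	return tmp.Name(), nil
}
```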
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 5a363ce60bee3dc26a433c7e2cee6dc76939849e
Component: engine
Things could go wrong if Watch was called after the last existing
watcher was released. The call to Watch would succeed even though it was
not really adding a watcher, and the corresponding call to Release would
close hasWatchers a second time.
The fix for this is twofold:
1. We allow transfers to gain new watchers after the watcher count has
touched zero. This means that the channel returned by Released should
not be closed until all watchers have been released AND the transfer is
no longer tracked by the transfer manager, meaning it won't be possible
for additional calls to Watch to race with closing the channel returned
by Released.
The Transfer interface has a new method called Close so the transfer can
know when the transfer manager no longer references it.
Remove the Cancel method. It's not used and should not be exported.
2. Even though (1) makes it possible to add watchers after all the
previous watchers have been released, we want to avoid doing this in
practice. A transfer that has had all its watchers released is in the
process of being cancelled, and attaching to one of these will never be
the correct behavior. Add a check for whether a watcher is attaching to a
cancelled transfer. In this case, wait for the transfer to be removed
from the map and try again. This will ensure correct behavior when a
watcher tries to attach during the race window.
Either (1) or (2) should be sufficient to fix the race involved here,
but the combination is the most correct approach. (1) fixes the
low-level plumbing to be resilient to the race condition, and (2) avoids
using it in a racy way.
Fixes #19606
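A sketch of point (2), the retry when attaching to a cancelled transfer; all types and fields here are illustrative:
```go
package transfersketch

import "sync"

type transfer struct {
	mu       sync.Mutex
	watchers int
	released chan struct{} // closed only after the manager has removed the transfer from its map
}

type manager struct {
	mu        sync.Mutex
	transfers map[string]*transfer
}

// watch attaches a watcher to the transfer for key, creating one if
// needed. If the existing transfer has already lost all of its watchers
// (i.e. it is being cancelled), wait for the manager to remove it and
// retry rather than attaching to a doomed transfer.
func (m *manager) watch(key string, create func() *transfer) *transfer {
	for {
		m.mu.Lock()
		t, ok := m.transfers[key]
		if !ok {
			t = create()
			t.watchers = 1
			m.transfers[key] = t
			m.mu.Unlock()
			return t
		}

		t.mu.Lock()
		if t.watchers > 0 {
			t.watchers++ // still live: safe to attach
			t.mu.Unlock()
			m.mu.Unlock()
			return t
		}
		t.mu.Unlock()
		m.mu.Unlock()

		// All watchers are gone, so this transfer is being torn down.
		// Its cleanup path (the new Close) deletes it from the map and
		// then closes `released`; wait for that, then try again.
		<-t.released
	}
}
```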
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: 3e2b50ccaadb5ecbd70bf27adc287973f0417573
Component: engine