All archives that are created from somewhere generally have to be closed, because
at some point there is a file or a pipe or something backing them. So, we
make archive.Archive a ReadCloser. However, code consuming archives does not
typically close them, so we add an archive.ArchiveReader and use that when we're
only reading.
We then change all the Tar/Archive places to create ReadClosers, and to properly
close them everywhere.
As an added bonus we can use ReadCloserWrapper rather than EofReader in several places,
which is good because EofReader doesn't always work right. For instance, many compression
schemes such as gzip know they have reached EOF before having read the EOF from the stream, so the
EofReader never sees an EOF.
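For reference, a minimal sketch of the wrapper idea (the names and placement below are illustrative, not the exact code in this change): an io.Reader is paired with an explicit close function, so callers get a real Close() that reaches the backing file or pipe even when a decompressor like gzip reports EOF early.

    package archive // illustrative placement

    import "io"

    // readCloserWrapper pairs a reader with the close function of whatever
    // backs it (file, pipe, gzip stream), so consumers can treat the
    // archive as an io.ReadCloser.
    type readCloserWrapper struct {
        io.Reader
        closer func() error
    }

    func (r *readCloserWrapper) Close() error {
        return r.closer()
    }

    // NewReadCloserWrapper is an illustrative constructor for the pattern.
    func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser {
        return &readCloserWrapper{Reader: r, closer: closer}
    }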
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Upstream-commit: f198ee525ad6862dce3940e08c72e0a092380a7b
Component: engine
* Config is now runconfig.Config
* HostConfig is now runconfig.HostConfig
* MergeConfig is now runconfig.Merge
* CompareConfig is now runconfig.Compare
* ParseRun is now runconfig.Parse
* ContainerConfigFromJob is now runconfig.ContainerConfigFromJob
* ContainerHostConfigFromJob is now runconfig.ContainerHostConfigFromJob
This facilitates refactoring commands.go and shrinks the core.
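For example, a call site that used MergeConfig now goes through the runconfig package. A rough sketch (the import path, helper name, and exact signature here are assumptions, not verified against the tree):

    package server // illustrative

    import "github.com/dotcloud/docker/runconfig"

    // applyImageDefaults is a hypothetical helper showing the new call shape.
    func applyImageDefaults(userConf, imageConf *runconfig.Config) error {
        // was: MergeConfig(userConf, imageConf)
        return runconfig.Merge(userConf, imageConf)
    }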
Docker-DCO-1.1-Signed-off-by: Solomon Hykes <solomon@docker.com> (github: shykes)
Upstream-commit: 6393c38339e11b4a099a460ecf46bb5cafc4283b
Component: engine
Also, use it in all the places. :)
Docker-DCO-1.1-Signed-off-by: Andrew Page <admwiggin@gmail.com> (github: tianon)
Upstream-commit: da04f49b383c02ee28c32f948048b9e9a402bb4f
Component: engine
This makes all users of Put() have a corresponding call
to Get(), which means we will be able to track whether
any particular ID is in use and, if not, unmount it.
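A minimal sketch of what the pairing enables (the counting code below is hypothetical, not part of this change): once every Get() is matched by a Put(), a driver can keep a per-ID reference count and unmount only when it drops to zero.

    package graphdriver // illustrative placement, not the real driver code

    import "sync"

    // refCounter tracks how many callers currently hold each ID.
    type refCounter struct {
        mu     sync.Mutex
        counts map[string]int
    }

    // Get records that id is in use; a real driver would mount on first use.
    func (r *refCounter) Get(id string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.counts[id]++
    }

    // Put releases one use of id; when the count reaches zero the driver
    // knows it is safe to unmount.
    func (r *refCounter) Put(id string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.counts[id]--
        if r.counts[id] == 0 {
            delete(r.counts, id)
            // safe to unmount id here
        }
    }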
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Upstream-commit: bcaf6c2359d83acd5da54f499e21f4a148f491c5
Component: engine
Do not display both size and virtual size on the cli; only display virtual size.
Upstream-commit: 697707e4afe6f1e7e5e33c24ada2f1f2af279142
Component: engine
Minor refactor of Graph; replace uses of Graph.All (slice) with Graph.Map (map)
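The practical difference for callers, sketched with stand-in types (not the graph's real ones): lookups index a map keyed by image ID instead of scanning a slice.

    package main

    import "fmt"

    // Image is a stand-in for the graph's image type.
    type Image struct{ ID string }

    func main() {
        // With a map (as from Graph.Map), a lookup by ID is direct,
        // where a slice (as from Graph.All) would have to be scanned.
        byID := map[string]*Image{"abc123": {ID: "abc123"}}
        if img, ok := byID["abc123"]; ok {
            fmt.Println("found", img.ID)
        }
    }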
Upstream-commit: ad152efbed6ddb74a352c39147bae9b0e8c87435
Component: engine
Use goroutines to pull in parallel.
If multiple images are pulled at the same time, each image's progress is displayed on its own line.
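A minimal sketch of the approach, assuming a per-image pull function (the names below are hypothetical): one goroutine per image, joined with a WaitGroup, each reporting its progress on its own line.

    package main

    import (
        "fmt"
        "sync"
    )

    // pullImage is a hypothetical stand-in for the per-image pull logic.
    func pullImage(name string) error {
        fmt.Printf("Pulling %s\n", name) // each image reports on its own line
        return nil
    }

    func pullAll(names []string) {
        var wg sync.WaitGroup
        for _, name := range names {
            wg.Add(1)
            go func(n string) {
                defer wg.Done()
                if err := pullImage(n); err != nil {
                    fmt.Printf("Error pulling %s: %v\n", n, err)
                }
            }(name)
        }
        wg.Wait()
    }

    func main() {
        pullAll([]string{"ubuntu", "busybox", "debian"})
    }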
Upstream-commit: 0e71e368a8a781f593b25fdd1318d3882e6d28e5
Component: engine
Fix a problem when writing out streamed status.
This is caused by a "Buffering to disk" message that is not in the correct JSON format:
[...]
{"status"
:"Pushing 6bba11a28f1ca247de9a47071355ce5923a45b8fea3182389f992f4
24b93edae"}Buffering to disk 244/? (n/a)..
{"status":"Pushing",[...]
The "Buffering to disk" message is originated in
srv.runtime.graph.TempLayerArchive
I am now using the StreamFormatter provided by the context from which the
method is called.
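The shape of the fix, sketched without reproducing the real StreamFormatter API: every message written to the stream, including the "Buffering to disk" progress, goes out as a JSON status object, so bare text is never spliced between JSON objects.

    package main

    import (
        "encoding/json"
        "os"
    )

    // status mirrors the {"status": ...} objects on the wire (illustrative).
    type status struct {
        Status string `json:"status"`
    }

    // writeStatus emits one JSON-encoded status line, the way a JSON
    // stream formatter would, instead of writing raw progress text.
    func writeStatus(msg string) {
        json.NewEncoder(os.Stdout).Encode(status{Status: msg})
    }

    func main() {
        writeStatus("Buffering to disk 244/? (n/a)")
        writeStatus("Pushing")
    }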
Upstream-commit: 1e2ef274cdaa76e79435df52cdc196739ba8b3b1
Component: engine