Allow setting their values in the daemon configuration file.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 59586d02b1cc004f14cd7ff6b454211f562da326
Component: engine
Moving all strings to the errors package wasn't a good idea after all.
Our custom implementation of Go errors predates everything that's nice
and good about working with errors in Go. Take as an example what we
have to do to get an error message:
```go
func GetErrorMessage(err error) string {
	switch err.(type) {
	case errcode.Error:
		e, _ := err.(errcode.Error)
		return e.Message
	case errcode.ErrorCode:
		ec, _ := err.(errcode.ErrorCode)
		return ec.Message()
	default:
		return err.Error()
	}
}
```
This goes against every good practice for Go development. The language already provides a simple, intuitive and standard way to get error messages, which is calling the `Error()` method on an error. Reinventing the error interface is a mistake.
Our custom implementation also makes it very hard to reason about errors, which is another nice thing about Go. I found several (>10) error declarations that we don't use anywhere. This is a clear sign of how little we know about the errors we return. I also found several error usages where the number of arguments was different from the number of parameters declared in the error, another clear example of how difficult it is to reason about errors.
Moreover, our custom implementation didn't really make it easier for people to return custom HTTP status codes depending on the error. Again, it's hard to reason about when to set custom codes and how. Take as an example what we have to do to extract the message and status code from an error before returning a response from the API:
```go
switch err.(type) {
case errcode.ErrorCode:
	daError, _ := err.(errcode.ErrorCode)
	statusCode = daError.Descriptor().HTTPStatusCode
	errMsg = daError.Message()
case errcode.Error:
	// For reference, if you're looking for a particular error
	// then you can do something like:
	//   import ( derr "github.com/docker/docker/errors" )
	//   if daError.ErrorCode() == derr.ErrorCodeNoSuchContainer { ... }
	daError, _ := err.(errcode.Error)
	statusCode = daError.ErrorCode().Descriptor().HTTPStatusCode
	errMsg = daError.Message
default:
	// This part will be removed once we've
	// converted everything over to use the errcode package.
	// FIXME: this is brittle and should not be necessary.
	// If we need to differentiate between different possible error types,
	// we should create appropriate error types with clearly defined meaning.
	errStr := strings.ToLower(err.Error())
	for keyword, status := range map[string]int{
		"not found":             http.StatusNotFound,
		"no such":               http.StatusNotFound,
		"bad parameter":         http.StatusBadRequest,
		"conflict":              http.StatusConflict,
		"impossible":            http.StatusNotAcceptable,
		"wrong login/password":  http.StatusUnauthorized,
		"hasn't been activated": http.StatusForbidden,
	} {
		if strings.Contains(errStr, keyword) {
			statusCode = status
			break
		}
	}
}
```
You can notice two things in that code:
1. We have to explain how errors work, because our implementation goes against how easy Go errors are to use.
2. At no point did we manage to remove that `switch` statement, which was the original reason for using our custom implementation.
This change removes all our status errors from the errors package and puts them back in their specific contexts.
It puts the messages back in their contexts. That way, we know right away where errors are used and how to generate their messages.
It uses custom interfaces to reason about errors. Errors that need to respond with a custom status code MUST implement this simple interface:
```go
type errorWithStatus interface {
	HTTPErrorStatusCode() int
}
```
This interface is very straightforward to implement. It also preserves the real behavior of Go errors: getting the message is as simple as calling the `Error()` method.
I included helper functions to generate errors that use custom status codes in `errors/errors.go`.
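A minimal sketch of how this fits together, with hypothetical names (the real helpers live in `errors/errors.go` and may differ):
```go
package errors

import (
	"fmt"
	"net/http"
)

// statusError is a hypothetical error type that carries an HTTP status code
// alongside a plain message.
type statusError struct {
	status int
	msg    string
}

// Error implements the standard error interface; callers just use err.Error().
func (e statusError) Error() string { return e.msg }

// HTTPErrorStatusCode satisfies the errorWithStatus interface shown above.
func (e statusError) HTTPErrorStatusCode() int { return e.status }

// NewRequestNotFoundError is an example helper for a common case.
func NewRequestNotFoundError(format string, args ...interface{}) error {
	return statusError{status: http.StatusNotFound, msg: fmt.Sprintf(format, args...)}
}

// httpStatusFromError shows how the API layer can pick a status code with a
// single interface assertion instead of switching on concrete error types.
func httpStatusFromError(err error) int {
	if e, ok := err.(interface {
		HTTPErrorStatusCode() int
	}); ok {
		return e.HTTPErrorStatusCode()
	}
	return http.StatusInternalServerError
}
```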
By doing this, we remove the hard dependency we have everywhere on our custom errors package. Yes, you can use it as a helper to generate errors, but it's still very easy to generate errors without it.
Please, read this fantastic blog post about errors in Go: http://dave.cheney.net/2014/12/24/inspecting-errors
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: a793564b2591035aec5412fbcbcccf220c773a4c
Component: engine
There are five options, 'debug', 'labels', 'cluster-store', 'cluster-store-opts'
and 'cluster-advertise', that can be reconfigured. Reconfiguring any of these
options should not affect other options that may have been configured with flags.
But this is not the case today. For example, I start a daemon with -D to enable
debugging, and after a while I want to reconfigure 'labels', so I add a file
'/etc/docker/daemon.json' with content '"labels":["test"]' and send SIGHUP to the daemon
to reconfigure it. It works, but debugging of the daemon is also disabled.
I don't think this is expected behaviour.
This patch also includes a minor refactor of the reconfiguration of cluster-advertise.
It enables the user to reconfigure cluster-advertise without cluster-store in the config file,
since cluster-store could already be set with a flag and we only want to reconfigure
cluster-advertise.
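A minimal sketch of the reload idea, with a hypothetical trimmed-down Config type (not the actual daemon code): decode the file into a map of raw keys first, so only the options that actually appear in the file override the running configuration, and values set with flags survive the reload.
```go
package config

import (
	"encoding/json"
	"io/ioutil"
)

// Config is a hypothetical, trimmed-down daemon configuration.
type Config struct {
	Debug            bool              `json:"debug"`
	Labels           []string          `json:"labels"`
	ClusterStore     string            `json:"cluster-store"`
	ClusterOpts      map[string]string `json:"cluster-store-opts"`
	ClusterAdvertise string            `json:"cluster-advertise"`
}

// Reload decodes the file into a map of raw values first, so only keys that
// are actually present in the file override the running configuration.
func Reload(path string, current *Config) error {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return err
	}
	var fileKeys map[string]json.RawMessage
	if err := json.Unmarshal(data, &fileKeys); err != nil {
		return err
	}
	if raw, ok := fileKeys["debug"]; ok {
		if err := json.Unmarshal(raw, &current.Debug); err != nil {
			return err
		}
	}
	if raw, ok := fileKeys["labels"]; ok {
		if err := json.Unmarshal(raw, &current.Labels); err != nil {
			return err
		}
	}
	// ... same pattern for cluster-store, cluster-store-opts and cluster-advertise.
	return nil
}
```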
Signed-off-by: Lei Jitang <leijitang@huawei.com>
Upstream-commit: b9366c9609166d41e987608041b5a2079726aa5f
Component: engine
It should be explicitly told whether to enable the profiler or not.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: e8f569b3246b3ce4e765b0aafe53b6d70d12a2d6
Component: engine
This adds an npipe protocol option for Windows hosts, akin to unix
sockets for Linux hosts. This should become the default transport
for Windows, but this change does not yet do that.
It also does not add support for the client side yet since that
code is in engine-api, which will have to be revendored separately.
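A rough sketch of the listener dispatch this enables (the `listenNpipe` helper below is a placeholder; the real Windows implementation would wrap a named-pipe listener, e.g. from go-winio):
```go
package listeners

import (
	"fmt"
	"net"
	"strings"
)

// listenNpipe is a placeholder: on Windows it would return a net.Listener
// backed by a named pipe (for example via the go-winio package).
func listenNpipe(path string) (net.Listener, error) {
	return nil, fmt.Errorf("npipe listener not implemented in this sketch: %s", path)
}

// New picks a transport based on the protocol part of the host address,
// e.g. "tcp://0.0.0.0:2376", "unix:///var/run/docker.sock" or
// "npipe:////./pipe/docker_engine".
func New(addr string) (net.Listener, error) {
	parts := strings.SplitN(addr, "://", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("invalid host address: %s", addr)
	}
	proto, location := parts[0], parts[1]
	switch proto {
	case "tcp":
		return net.Listen("tcp", location)
	case "unix":
		return net.Listen("unix", location)
	case "npipe":
		return listenNpipe(location)
	default:
		return nil, fmt.Errorf("unsupported protocol: %s", proto)
	}
}
```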
Signed-off-by: John Starks <jostarks@microsoft.com>
Upstream-commit: 0906195fbbd6f379c163b80f23e4c5a60bcfc5f0
Component: engine
That way the configuration file keys become flags, without extra keys.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 5e80ac0dd183874ab7cd320a8bd0f0378dbd1321
Component: engine
- Return an error if any of the keys don't match valid flags (see the sketch after this list).
- Fix an issue ignoring merged values as named values.
- Fix the tlsverify configuration key.
- Fix a bug in mflag to avoid panics when one of the flag sets doesn't have any flags.
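A minimal sketch of that key validation, assuming a standard flag.FlagSet (the real daemon uses its own mflag package, so names differ):
```go
package config

import (
	"flag"
	"fmt"
)

// ValidateKeys returns an error if any configuration file key does not
// correspond to a defined flag, assuming config keys are named like flags.
func ValidateKeys(fileConfig map[string]interface{}, flags *flag.FlagSet) error {
	valid := make(map[string]bool)
	flags.VisitAll(func(f *flag.Flag) {
		valid[f.Name] = true
	})
	for key := range fileConfig {
		if !valid[key] {
			return fmt.Errorf("unrecognized configuration key: %s", key)
		}
	}
	return nil
}
```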
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: ed4038676f09d124180d634ec2cb341745f5fc79
Component: engine
- Set the daemon log level to what's set in the configuration.
- Enable TLS when TLSVerify is enabled.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: cd3446972e968639684f2b65bfc11c099a25f1b0
Component: engine
Read the configuration after the flags, making this the priority:
1- Apply configuration from file.
2- Apply configuration from flags.
Reload the configuration when a signal is received, USR2 on Linux (see the sketch after this list):
- Reload router if the debug configuration changes.
- Reload daemon labels.
- Reload cluster discovery.
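A minimal sketch of the signal-driven reload, with a hypothetical reload callback (the actual daemon wiring differs):
```go
package daemon

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// watchConfigReload re-applies the configuration file whenever the reload
// signal arrives; reload is a hypothetical callback that re-reads the file
// and applies the debug, label and cluster discovery changes.
func watchConfigReload(configFile string, reload func(path string) error) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGUSR2)
	go func() {
		for range c {
			if err := reload(configFile); err != nil {
				log.Printf("error reloading configuration: %v", err)
			}
		}
	}()
}
```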
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 677a6b3506107468ed8c00331991afd9176fa0b9
Component: engine
- Use the ones provided by docker/go-connections, they are a drop-in replacement.
- Remove pkg/sockets from docker.
- Keep pkg/tlsconfig because libnetwork still needs it and there is a
circular dependency issue.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 8e034802b7ad92a29f08785e553415adcd1348a3
Component: engine
Support restoreCustomImage for windows with a new interface to extract
the graph driver from the LayerStore.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: f5916b10ae02c7db83052a97205ac345a3d96300
Component: engine
- Move time json marshaling to the jsonlog package: this is a docker
internal hack that we should not promote as a library.
- Move Timestamp encoding/decoding functions to the API types: this is
only used there. It could be a standalone library, but I don't think
it's worth having a separate repo for this. It could introduce more
complexity than it solves.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 27220ecc6b1eedf650ca9cf94965cb0dc2054efd
Component: engine
Each plug-in operates as a separate service and registers with Docker
through the general plug-ins API
(https://blog.docker.com/2015/06/extending-docker-with-plugins/). No
Docker daemon recompilation is required in order to add or remove an
authentication plug-in. Each plug-in is notified twice for each
operation: 1) before the operation is performed and 2) before the
response is returned to the client. The plug-ins can modify the response
that is returned to the client.
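As a rough sketch of those two notification points (the interface and type names here are illustrative, not the actual plug-in protocol):
```go
package authorization

// Request is a hypothetical, trimmed-down view of the API call that is
// passed to authorization plug-ins at both notification points.
type Request struct {
	User       string
	Method     string
	URI        string
	Body       []byte
	StatusCode int // only meaningful for the response hook
}

// Response carries the plug-in's verdict and may carry a modified body.
type Response struct {
	Allow bool
	Msg   string
	Body  []byte
}

// Plugin is notified twice per operation: once before the daemon performs
// the operation, and once before the response is returned to the client.
type Plugin interface {
	AuthZRequest(req *Request) (*Response, error)
	AuthZResponse(req *Request) (*Response, error)
}
```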
The authorization depends on the authorization effort that takes place
in parallel [https://github.com/docker/docker/issues/13697].
This is the official issue of the authorization effort:
https://github.com/docker/docker/issues/14674
[Here](https://github.com/rhatdan/docker-rbac) you can find an open
document that discusses a default RBAC plug-in for Docker.
Signed-off-by: Liron Levin <liron@twistlock.com>
Added container create flow test and extended the verification for ps
Upstream-commit: 75c353f0ad73bd83ed18e92857dd99a103bb47e3
Component: engine
It actually adds nothing to queuing requests.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: ca5795cef810c85f101eb0aa3efe3ec8d756490b
Component: engine
- Add a *version* file placeholder.
- Update autogen and builds to use it and an autogen build flag
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Upstream-commit: 8054a303870b81eebe05e38261c1b68197b68558
Component: engine
This reverts commit d5cd032a86617249eadd7142227c5355ba9164b4.
The commit caused issues on systems with case-insensitive filesystems.
Reverting for now.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: b78ca243d9fc25d81c1b50008ee69f3e71e940f6
Component: engine
- Move autogen/dockerversion to version
- Update autogen and "builds" to use this package and a build flag
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Upstream-commit: d5cd032a86617249eadd7142227c5355ba9164b4
Component: engine
The first param on opts.ParseHost() wasn't being used for anything.
Once we get rid of that param we can then also clean up some code
that calls ParseHost(), because the param that was passed in wasn't
being used for anything else.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Upstream-commit: ba973f2d74c150154390aed1a5aed8fb5d0673b8
Component: engine
Refactor so that the Host flag validation doesn't destroy the user's input,
and then post-process the flags once we know the TLS options.
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
Upstream-commit: 50f0906007bdec83dd23b6ae5216769ab65bbf59
Component: engine
This leverages recent additions to libkv enabling client
authentication via TLS so the discovery back-end can be locked
down with mutual TLS. Example usage:
```
docker daemon [other args] \
    --cluster-advertise 192.168.122.168:2376 \
    --cluster-store etcd://192.168.122.168:2379 \
    --cluster-store-opt kv.cacertfile=/path/to/ca.pem \
    --cluster-store-opt kv.certfile=/path/to/cert.pem \
    --cluster-store-opt kv.keyfile=/path/to/key.pem
```
Signed-off-by: Daniel Hiltgen <daniel.hiltgen@docker.com>
Upstream-commit: 124792a8714425283226c599ee69cbeac2e4d650
Component: engine
Now we start to serve early, but all Accept calls are intercepted by
the listenbuffer or the systemd socket.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: 281a48d092fa84500c63b984ad45c59a06f301c4
Component: engine
It prevents containers from occupying those resources (ports, unix
sockets).
Also fixed a false-positive test for that case.
Fixes #15912
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: 5eda566f937dddef9d4267dd8b8b1d8c3e47b290
Component: engine
Implement basic interfaces to write custom routers that can be plugged
into the server. Remove the server's coupling with the daemon.
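A minimal sketch of what such a pluggable router contract could look like (illustrative names, not necessarily the exact interfaces introduced here):
```go
package router

import "net/http"

// Route describes a single API endpoint exposed by a router.
type Route interface {
	Method() string
	Path() string
	Handler() http.HandlerFunc
}

// Router is the minimal contract the server needs: anything that can list
// its routes can be plugged into the server, without the server knowing
// anything about the daemon.
type Router interface {
	Routes() []Route
}
```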
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: da982cf5511814b6897244ecaa9c016f8800340a
Component: engine
Although having a request ID available throughout the codebase is very
valuable, the impact of requiring a Context as an argument to every
function in the codepath of an API request is too significant and was
not properly understood at the time of the review.
Furthermore, mixing API-layer code with non-API-layer code makes the
latter usable only by API-layer code (one that has a notion of Context).
This reverts commit de4164043546d2b9ee3bf323dbc41f4979c84480, reversing
changes made to 7daeecd42d7bb112bfe01532c8c9a962bb0c7967.
Signed-off-by: Tibor Vass <tibor@docker.com>
Conflicts:
api/server/container.go
builder/internals.go
daemon/container_unix.go
daemon/create.go
Upstream-commit: b08f071e18043abe8ce15f56826d38dd26bedb78
Component: engine
This reverts commit ff92f45be49146cd7ac7716c36d89de989cb262e, reversing
changes made to 80e31df3b6fdf6c1fbd6a5d0aceb0a148066508c.
Reverting to make the next revert easier.
Signed-off-by: Tibor Vass <tibor@docker.com>
Upstream-commit: 79c31f4b13d331d4011b2975a96dcdeab2036865
Component: engine
Avoid creating a global context object that will be used while the daemon is running.
Not only will this object never be garbage collected, but it won't ever be used for anything other than creating other contexts in each request. I think it's bad practice to have something like this sprawling around the code.
This change removes that global object and initializes a context in the cases where we don't already have one, like shutting down the server.
This also removes a bunch of context arguments from functions that did nothing with them.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 27c76522dea91ec585f0b5f0ae1fec8c255b7b22
Component: engine
This PR adds a "request ID" to each event generated; the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part; that's new.
Each HTTP request will generate its own unique ID. So, if you do a
`docker build` you'll see a series of events all with the same reqID.
This allows log processing tools to determine which events are related
to the same HTTP request.
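A minimal sketch of the underlying mechanism, using the standard context package with illustrative names (the actual daemon plumbing differs):
```go
package reqid

import (
	"context"
	"crypto/rand"
	"encoding/hex"
)

type ctxKey struct{}

// NewID generates a short random identifier for a single HTTP request.
func NewID() string {
	b := make([]byte, 6)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

// WithRequestID stores the request ID in the context so every function on the
// request path (and every event it emits) can retrieve the same ID.
func WithRequestID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, ctxKey{}, id)
}

// RequestID returns the ID previously stored with WithRequestID, if any.
func RequestID(ctx context.Context) string {
	id, _ := ctx.Value(ctxKey{}).(string)
	return id
}
```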
I didn't propagate the context to all possible funcs in the daemon,
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context - IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs - this will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>
Upstream-commit: 26b1064967d9fcefd4c35f60e96bf6d7c9a3b5f8
Component: engine