Give Docker more time to kill containers before upstart kills Docker.
The default kill timeout is 5 seconds.
This helps decrease, but does not eliminate, the chance of
orphaned container processes.
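A sketch of the relevant stanza in the upstart job file; the commit message does not state the new value, so the number below is illustrative:
    # Give containers more than the 5-second default to shut down
    # before upstart SIGKILLs the docker daemon itself.
    kill timeout 20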
Signed-off-by: David Xia <dxia@spotify.com>
Upstream-commit: 2f9e7a067a7273a0f344c1c9a6397e4bb61d7554
Component: engine
Once the job has failed and been respawned, subsequent failures report
the status as `docker respawn/post-start` (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.
I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.
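A sketch of the adjusted check inside the post-start wait loop; the exact grep pattern is an assumption:
    # A respawned job reports "respawn/post-start" rather than
    # "stop/post-start", so match both failure states:
    initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1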
Signed-off-by: Lewis Marshall <lewis@lmars.net>
Upstream-commit: 302e3834a0bfa860f9d06b42a2955b0cbd135c38
Component: engine
Fixes #6647: Other upstart jobs that depend on docker by specifying
"start on started docker" would often start before the docker daemon was
ready, so they'd fail with "Cannot connect to the Docker daemon" or
"dial unix /var/run/docker.sock: no such file or directory".
This is because "docker -d" doesn't daemonize, it runs in the
foreground, so upstart can't know when the daemon is ready to receive
incoming connections. (Traditionally, a daemon will create all necessary
sockets and then fork to signal that it's ready; according to @tianon
this "isn't possible in Go"[1]. See also [2].)
Presumably this isn't a problem under systemd, with its socket
activation. The SysV init scripts may or may not suffer from this
problem, but I have no motivation to fix them.
This commit adds a "post-start" stanza to the upstart configuration
that waits for the socket to be available. Upstart won't emit the
"started" event until the "post-start" script completes.[3]
Note that the system administrator might have specified a different path
for the socket, or a tcp socket instead, by customising
/etc/default/docker. In that case we don't try to figure out what the
new socket is, but at least we don't wait in vain for
/var/run/docker.sock to appear.
If the main script (`docker -d`) fails to start, the `initctl status
$UPSTART_JOB | grep -q "stop/"` line ensures that we don't loop forever.
I stole this idea from Steve Langasek.[4]
If for some reason we *still* end up in an infinite loop (I guess
`docker -d` must have hung), then at least we'll be able to see the
"Waiting for /var/run/docker.sock" debug output in
/var/log/upstart/docker.log.
I considered using inotifywait instead of sleep, but it isn't worth
the complexity & the extra dependency.
[1] https://github.com/docker/docker/issues/6647#issuecomment-47001613
[2] https://code.google.com/p/go/issues/detail?id=227
[3] http://upstart.ubuntu.com/cookbook/#post-start
[4] https://lists.ubuntu.com/archives/upstart-devel/2013-April/002492.html
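A sketch of such a post-start stanza, pieced together from the description above; the DOCKER_OPTS handling and the 0.1s sleep interval are assumptions, not necessarily exactly what the commit ships:
    post-start script
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
            . /etc/default/$UPSTART_JOB
        fi
        # If the admin pointed the daemon at a different socket (or a tcp
        # address) via -H/--host, skip the wait rather than block forever
        # on a socket that will never appear.
        if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
            while ! [ -e /var/run/docker.sock ]; do
                # Stop waiting if the main process has already failed.
                initctl status $UPSTART_JOB | grep -q "stop/" && exit 1
                echo "Waiting for /var/run/docker.sock"
                sleep 0.1
            done
            echo "/var/run/docker.sock is up"
        fi
    end script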
Signed-off-by: David Röthlisberger <david@rothlis.net>
Upstream-commit: f42c0a53a38a2a141bec8768d0836a3726de4a83
Component: engine
This resolves a problem that I have been having where docker starts before networking is up. See issue #5944 for more details.
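Presumably the fix adds a networking condition to the start stanza; a sketch of what that might look like (the exact condition is an assumption):
    # Don't start the daemon until local filesystems are mounted and a
    # non-loopback network interface is up.
    start on (local-filesystems and net-device-up IFACE!=lo)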
Docker-DCO-1.1-Signed-off-by: Jeffrey Bolle <jeffreybolle@gmail.com> (github: jeffreybolle)
Upstream-commit: c52889db27a2af09ed7f6c92f2d6c6fd9737bf63
Component: engine
This changes the upstart init script to start on `local-filesystems`.
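The resulting start condition in the upstart job file:
    # Start once local filesystems are mounted.
    start on local-filesystems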
Docker-DCO-1.1-Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com> (github: unclejack)
Upstream-commit: ba0c8292917560b45f840f187c2a8f452550705d
Component: engine