- Move time JSON marshaling to the jsonlog package: this is a Docker-internal
hack that we should not promote as a library.
- Move Timestamp encoding/decoding functions to the API types: they are only
used there. They could be a standalone library, but I don't think it's worth
having a separate repo for this; it could introduce more complexity than it
solves.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 27220ecc6b1eedf650ca9cf94965cb0dc2054efd
Component: engine
This reverts commit 6f6f10a75f8b447637e8a89d685452871899e9c0, so that we
can apply a different workaround.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
Upstream-commit: 0ca6d77e6e17ea378ff04b59f971dc1338e0ddc2
Component: engine
Add support for `tag`, `env` and `labels` to the Splunk logging driver.
Removed `containerId` from the message, as it is the same as `tag`.
Signed-off-by: Denis Gladkikh <denis@gladkikh.email>
Upstream-commit: 26855c780184c528446957bd77821c6f4c74b343
Component: engine
- Add a *version* file placeholder.
- Update autogen and the build scripts to use it, along with an autogen build
flag (see the sketch below).
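A minimal sketch of the placeholder pattern, assuming the values are injected
at build time with `-ldflags -X` (the real file, package, and variable names
used by autogen may differ):

```go
// version.go holds compiled-in defaults; the build scripts can override them
// at build time, e.g.
//   go build -ldflags "-X path/to/dockerversion.Version=1.9.0 -X path/to/dockerversion.GitCommit=abcdef0"
// (the exact -X syntax depends on the Go toolchain version and package path).
package dockerversion

var (
	Version   = "dev"     // replaced via -ldflags -X
	GitCommit = "unknown" // replaced via -ldflags -X
)
```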
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Upstream-commit: 8054a303870b81eebe05e38261c1b68197b68558
Component: engine
The JSON decoder starts decoding as soon as an inotify event is received.
But at the time the inotify event is triggered, the JSON log entry might not
have been fully written to disk.
In this case the decoder returns an `io.ErrUnexpectedEOF` error, but there is
still data remaining in the decoder's buffer, and that data should be passed
back to the decoder when the next inotify event is triggered.
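A minimal sketch of that retry pattern, assuming an inotify notification
channel (names and types are illustrative, not the daemon's actual follow
loop):

```go
package logfollow

import (
	"encoding/json"
	"io"
	"os"
)

// logEntry mirrors the shape of a json-file log line (illustrative only).
type logEntry struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

// followLogs decodes entries as they are appended to f; events signals that
// inotify saw a write to the file.
func followLogs(f *os.File, events <-chan struct{}, out chan<- logEntry) {
	dec := json.NewDecoder(f)
	for {
		var msg logEntry
		switch err := dec.Decode(&msg); err {
		case nil:
			out <- msg
		case io.EOF, io.ErrUnexpectedEOF:
			// The writer may not have flushed the whole entry yet. Keep the
			// bytes the decoder already buffered and retry after the next
			// inotify event, prepending them to whatever is read next.
			<-events
			dec = json.NewDecoder(io.MultiReader(dec.Buffered(), f))
		default:
			return // unrecoverable decode error
		}
	}
}
```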
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
Upstream-commit: e41eae8b42052d07432405cca113ffcf8cf49c06
Component: engine
This allows the jsonfile logger to collect extra metadata from containers with
`--log-opt labels=label1,label2 --log-opt env=env1,env2`.
The extra attributes are saved in an `attrs` field on each log entry.
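For illustration only (the exact struct used by the driver may differ), a
resulting log line with the new field could look like this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// jsonFileEntry sketches one json-file log line carrying the extra attrs.
type jsonFileEntry struct {
	Log    string            `json:"log"`
	Stream string            `json:"stream"`
	Attrs  map[string]string `json:"attrs,omitempty"`
	Time   time.Time         `json:"time"`
}

func main() {
	entry := jsonFileEntry{
		Log:    "hello\n",
		Stream: "stdout",
		// values gathered from --log-opt labels=... and --log-opt env=...
		Attrs: map[string]string{"label1": "web", "env1": "production"},
		Time:  time.Now().UTC(),
	}
	b, _ := json.Marshal(entry)
	fmt.Println(string(b))
}
```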
Signed-off-by: Daniel Dao <dqminh@cloudflare.com>
Upstream-commit: 0083f6e984894b4d3697c1ae63547c07eea697af
Component: engine
This allows the journald logger to collect extra metadata from containers with
`--log-opt labels=label1,label2 --log-opt env=env1,env2`.
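As an illustrative sketch (not the driver's actual plumbing), the extra values
can ride along as additional journal fields, assuming the go-systemd journal
bindings:

```go
package journaldsketch

import (
	"strings"

	"github.com/coreos/go-systemd/journal"
)

// sendWithAttrs forwards a log line to journald with extra label/env values
// attached as journal fields. Keys are uppercased here to match journald's
// field-name conventions.
func sendWithAttrs(msg, containerID string, extra map[string]string) error {
	vars := map[string]string{"CONTAINER_ID": containerID}
	for k, v := range extra {
		vars[strings.ToUpper(k)] = v // e.g. LABEL1=web, ENV1=production
	}
	return journal.Send(msg, journal.PriInfo, vars)
}
```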
Signed-off-by: Daniel Dao <dqminh@cloudflare.com>
Upstream-commit: 11a24f19c2da88b6c3b50114863f24c06c5ce2fd
Component: engine
This allows the fluentd logger to collect extra metadata from containers with
`--log-opt labels=label1,label2 --log-opt env=env1,env2`.
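A rough sketch of how the extra attributes could be merged into the record
posted to fluentd (field names are illustrative, not necessarily the
driver's):

```go
package fluentdsketch

// buildRecord assembles the map sent to fluentd for one log line, folding in
// the extra label/env attributes alongside the standard fields.
func buildRecord(containerID, source, line string, extra map[string]string) map[string]string {
	record := map[string]string{
		"container_id": containerID,
		"source":       source, // "stdout" or "stderr"
		"log":          line,
	}
	for k, v := range extra {
		record[k] = v // e.g. label1=web, env1=production
	}
	return record
}
```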
Signed-off-by: Daniel Dao <dqminh@cloudflare.com>
Upstream-commit: 4cc8490283fdc8315f595be3e32951929e65bb40
Component: engine
This allows the gelf logger to collect extra metadata from containers with
`--log-opt labels=label1,label2 --log-opt env=env1,env2`.
Additional log fields are prefixed with `_`, as per the GELF protocol:
https://www.graylog.org/resources/gelf/
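An illustrative sketch of that prefixing (the driver's real message type
differs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Required GELF fields plus the extra label/env attributes; the GELF
	// spec requires additional field names to be prefixed with "_".
	msg := map[string]interface{}{
		"version":       "1.1",
		"host":          "myhost",
		"short_message": "hello\n",
	}
	extra := map[string]string{"label1": "web", "env1": "production"}
	for k, v := range extra {
		msg["_"+k] = v
	}
	b, _ := json.Marshal(msg)
	fmt.Println(string(b))
}
```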
Signed-off-by: Daniel Dao <dqminh@cloudflare.com>
Upstream-commit: 5794a0190d7505e998116f1321ae39b001d0e710
Component: engine
If an invalid logger address is provided on daemon start, it fails silently.
As the syslog driver already does, this check should be performed on daemon
start so that an invalid address prevents the daemon from starting for the
other drivers as well.
This patch also adds integration tests for this behavior.
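A minimal sketch of the kind of up-front check this implies (not the daemon's
actual validator):

```go
package addrsketch

import (
	"fmt"
	"net/url"
)

// validateLogAddress parses the configured address eagerly so that a bad
// value fails at daemon start instead of silently at log time.
func validateLogAddress(address string) error {
	if address == "" {
		return nil // no address configured
	}
	u, err := url.Parse(address)
	if err != nil {
		return err
	}
	switch u.Scheme {
	case "tcp", "udp", "tcp+tls", "unix":
		return nil
	default:
		return fmt.Errorf("unsupported scheme in log address: %s", u.Scheme)
	}
}
```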
Signed-off-by: Antonio Murdaca <runcom@linux.com>
Upstream-commit: e3c472426ff1dccf083e41dbbfdf5935921bf330
Component: engine
@noxiouz points out that we don't need to check for a nil result from
C.CString(), since an out-of-memory condition causes a runtime panic
instead.
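For context, the usual cgo pattern looks like this; the nil check on the
return value is unnecessary because allocation failure panics inside cgo:

```go
package main

/*
#include <stdlib.h>
*/
import "C"

import "unsafe"

// withCString copies a Go string into C memory, runs fn, and frees the copy.
// No nil check is needed on C.CString's result: an out-of-memory condition
// causes a runtime panic rather than a nil return.
func withCString(s string, fn func(*C.char)) {
	cs := C.CString(s)
	defer C.free(unsafe.Pointer(cs))
	fn(cs)
}

func main() {
	withCString("hello", func(*C.char) {})
}
```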
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
Upstream-commit: 11fda783f85e1f6027c997c6b044c65288c89099
Component: engine
If a logdriver doesn't register a callback function to validate log
options, it won't be usable. Fix the journald driver by adding a dummy
validator.
Teach the client and the daemon's "logs" logic that the server can also
supply "logs" data via the "journald" driver. Update documentation and
tests that depend on error messages.
Add support for reading log data from the systemd journal to the
journald log driver. The internal logic uses a goroutine to scan the
journal for matching entries after any specified cutoff time, formats
the messages from those entries as JSONLog messages, and stuffs the
results down a pipe whose reading end we hand back to the caller.
If we are missing any of the 'linux', 'cgo', or 'journald' build tags,
however, we don't implement a reader, so the 'logs' endpoint will still
return an error.
Make the necessary changes to the build setup to ensure that support for
reading container logs from the systemd journal is built.
Rename the Jmap member of the journald logdriver's struct to "vars" to
make it non-public, and to make it easier to tell that it's just there
to hold additional variable values that we want journald to record along
with log data that we're sending to it.
In the client, don't assume that we know which logdrivers the server
implements, and remove the check that looks at the server. It's
redundant because the server already knows, and the check also makes
using older clients with newer servers (which may have new logdrivers in
them) unnecessarily hard.
When we try to "logs" and have to report that the container's logdriver
doesn't support reading, send the error message through the
might-be-a-multiplexer so that clients which are expecting multiplexed
data will be able to properly display the error, instead of tripping
over the data and printing a less helpful "Unrecognized input header"
error.
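A condensed sketch of the goroutine-plus-pipe pattern described earlier (scan
the journal after the cutoff, format entries as JSONLog-style messages, stream
them through a pipe), with a hypothetical nextEntry standing in for the cgo
journal iteration:

```go
package journalread

import (
	"encoding/json"
	"io"
	"time"
)

// entry stands in for a decoded systemd journal record.
type entry struct {
	Message string
	Time    time.Time
}

// readLogs streams matching entries after the cutoff as JSON-encoded log
// messages down a pipe whose read end is handed back to the caller.
func readLogs(cutoff time.Time, nextEntry func() (*entry, bool)) io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		enc := json.NewEncoder(pw)
		for {
			e, ok := nextEntry()
			if !ok {
				pw.Close()
				return
			}
			if e.Time.Before(cutoff) {
				continue
			}
			if err := enc.Encode(map[string]string{
				"log":  e.Message + "\n",
				"time": e.Time.Format(time.RFC3339Nano),
			}); err != nil {
				pw.CloseWithError(err)
				return
			}
		}
	}()
	return pr
}
```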
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
Upstream-commit: e611a189cb3147cd79ccabfe8ba61ae3e3e28459
Component: engine
After tailing a file, if the number of lines requested is greater than the
number of lines in the file, a json unmarshalling error would occur when we
later try to follow the file.
So brute-force the read position to the end of the file whenever any tailing
occurred.
Some log messages could potentially be missed if logs are being written very
quickly; however, I was not able to make this happen even with
`while true; do echo hello; done`, so this is probably acceptable.
While testing this I also found a panic: LogWatcher.Close can be called twice
due to a race. Fix the channel close to only happen when there has been no
signal to the channel.
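An illustrative sketch of that close-guard (a cut-down stand-in for the real
type; if Close can race with itself from several goroutines, a sync.Once or
mutex is still needed on top):

```go
package main

// LogWatcher is a cut-down stand-in: closing closeNotifier tells readers to
// stop following the log.
type LogWatcher struct {
	closeNotifier chan struct{}
}

// Close only closes the channel if it has not already been closed, avoiding
// the double-close panic described above.
func (w *LogWatcher) Close() {
	select {
	case <-w.closeNotifier:
		// already closed; nothing to do
	default:
		close(w.closeNotifier)
	}
}

func main() {
	w := &LogWatcher{closeNotifier: make(chan struct{})}
	w.Close()
	w.Close() // second call is now a no-op instead of a panic
}
```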
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: c57faa91e2dab72a0a0905dc10e5cbdf55b545f5
Component: engine
- downcase and privatize exported variables that were unused
- make an error message more accurate
- add package comments
- remove the unused var ReadLogsNotSupported
- enable the linter
- fix some spelling errors
Signed-off-by: Morgan Bauer <mbauer@us.ibm.com>
Upstream-commit: ccbe539e86dfbb8749c09763ddfd73bf10ac57cc
Component: engine
I think this was the original intention, because even half of the comment was
about MaxInt32.
Fixes #15038
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: eb45602d2fa0fac8a694d2afb1c59ef60b0e1f77
Component: engine
There is no option validation for the "journald" log driver, so it makes no
sense to fail in that case.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: d68c55bc72625bce226971ef6e760530e9a15ce3
Component: engine