Commit Graph

61 Commits

Author SHA1 Message Date
be83c11fb0 Add canonical import comment
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: 4f0d95fa6ee7f865597c03b9e63702cdcb0f7067
Component: engine
2018-02-05 16:51:57 -05:00
3cf8a0c442 Carry 34248 Added tag log option to json-logger and use RawAttrs
This fix carries PR 34248: Added tag log option to json-logger

This fix changes the code to use RawAttrs, based on review feedback.

This fix fixes #19803 and closes #34248.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Upstream-commit: e77267c5a682e2c5aaa32469f2c83c2479d57566
Component: engine
2018-01-19 17:51:20 +00:00
9dd65d097b Added tag log option to json-logger
Fixes #19803
Updated the json-logger to utilize the common log option
'tag' that can define container/image information to include
as part of logging.

When the 'tag' log option is not included, there is no change
to the log content via the json-logger. When the 'tag' log option
is included, the tag will be parsed as a template and the result
will be stored within each log entry as the attribute 'tag'.

Update: Removing test added to integration_cli as those have been deprecated.
Update: Using proper test calls (require and assert) in jsonfilelog_test.go based on review.
Update: Added new unit test configs for logs with tag. Updated unit test error checking.
Update: Cleanup check in jsonlogbytes_test.go to match pending changes in PR #34946.
Update: Merging to correct conflicts from PR #34946.

Signed-off-by: bonczj <josh.bonczkowski@gmail.com>
Upstream-commit: 5f50f4f511cd84e79bf005817af346b1764df27f
Component: engine
2018-01-19 17:41:19 +00:00
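
A minimal sketch of the template mechanism this commit describes, using Go's standard text/template; the struct and field names are illustrative stand-ins, not the daemon's actual log context type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// logContext stands in for the container details the logger would have
// on hand; the field names are illustrative, not the daemon's types.
type logContext struct {
	ID   string
	Name string
}

func main() {
	// A tag option such as "{{.Name}}/{{.ID}}" is parsed once as a template...
	tmpl, err := template.New("tag").Parse("{{.Name}}/{{.ID}}")
	if err != nil {
		panic(err)
	}
	// ...then executed against the container's details; the result is
	// what would be stored in each log entry as the "tag" attribute.
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, logContext{ID: "9dd65d097b", Name: "web"}); err != nil {
		panic(err)
	}
	fmt.Println(buf.String()) // web/9dd65d097b
}
```
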
dff2ac1e17 Move json log reading into log file object
This allows much of the read logic to be shared for other things,
especially for the new log driver proposed in
https://github.com/moby/moby/issues/33475

The only logic for reads in the json logger is around decoding log
messages, which gets passed into the log file object.

This also helps with implementing compression as it allows us to
simplify locking strategies.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 16f7cd674902b69b97692de2a83915a1a6be2cdb
Component: engine
2017-11-04 08:31:58 -04:00
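
A rough sketch of the shape this refactor describes: the shared log file object drives reads, and the json driver contributes only a decode function. All names below are assumptions for illustration, not the actual moby API:

```go
package logfile

import (
	"encoding/json"
	"io"
	"os"
	"time"
)

// Message is an illustrative stand-in for the daemon's log message type.
type Message struct {
	Line      []byte    `json:"log"`
	Timestamp time.Time `json:"time"`
}

// decodeFunc is the only piece the json driver keeps: it turns the next
// chunk of the file into a Message. Locking, rotation, and tailing can
// then live in the shared log file object.
type decodeFunc func(dec *json.Decoder) (*Message, error)

// LogFile owns the file handle; ReadLogs drives whatever decoder the
// driver hands it.
type LogFile struct{ f *os.File }

func (lf *LogFile) ReadLogs(decode decodeFunc, msgs chan<- *Message) error {
	defer close(msgs)
	dec := json.NewDecoder(lf.f)
	for {
		msg, err := decode(dec)
		if err == io.EOF {
			return nil // caught up with everything written so far
		}
		if err != nil {
			return err
		}
		msgs <- msg
	}
}
```
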
70ca1da8d3 Refactor log file writer
Make the `*RotateFileWriter` specifically about writing
`logger.Message`'s, which is what it's used for.

This allows for future changes where the log writer can cache details
about log entries, such as the timestamps included in a particular
log file, which can be used to optimize reads.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 52d82b4fbc9f0fe00f63e2df9a3d2a49d4095bda
Component: engine
2017-11-04 08:15:20 -04:00
fb29de3583 Add --until flag for docker logs; closes #32807
Signed-off-by: Jamie Hannaford <jamie.hannaford@rackspace.com>
Upstream-commit: e8d9a61f4c9e1f3cfdf1c889c541c9dc72a4bb40
Component: engine
2017-11-01 10:08:49 +01:00
f367f50630 Move jsonlog to a subpackage of jsonfilelog
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: 035604cca6d6bd9a432268caf7515a35023908ed
Component: engine
2017-09-25 16:07:25 -04:00
7d644fb209 Fix benchmarks and remove more unnecessary code.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: a06ad2792ab92d4f246e4b4cc4c3529eb060651e
Component: engine
2017-09-25 16:07:25 -04:00
4701b66889 Remove unused JSONLog marshaling
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: 231c5cbd50e1870f31eb6a525b2df2ed7a716565
Component: engine
2017-09-25 16:07:25 -04:00
0c6f170345 Unexport FastTimeMarshalJSON
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: 7de92de636ef307d66b7b20b24f166a47f40f72b
Component: engine
2017-09-25 16:07:25 -04:00
18c29c5c5f Add gosimple linter
Update gometalinter

Signed-off-by: Daniel Nephin <dnephin@docker.com>
Upstream-commit: f7f101d57ef8cbf2d8723a18b7d723c5c5dd04b6
Component: engine
2017-09-12 12:09:59 -04:00
d659edcaf5 Update logrus to v1.0.1
Fixes case sensitivity issue

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Upstream-commit: 1009e6a40b295187e038b67e184e9c0384d95538
Component: engine
2017-07-31 13:16:46 -07:00
b337882c5b Avoid using a map for log attributes
Having a map per log entry seemed heavier than necessary. These
attributes end up being sorted and serialized, so storing them in a map
doesn't add anything (there's no random access element). In SwarmKit,
they originate as a slice, so there's an unnecessary conversion to a map
and back.

This also fixes the sort comparator, which used to inefficiently split
the string on each comparison.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Upstream-commit: b642b3f21f17cf50c79e464d3aedc93b2dbf0fb0
Component: engine
2017-07-18 19:01:20 -07:00
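
A hedged sketch of the slice-of-pairs approach the commit describes, with a comparator that compares keys directly instead of splitting a combined string on every comparison; the types are illustrative:

```go
package attrs

import "sort"

// Attr is one key/value pair; a slice of these replaces the per-entry
// map, since attributes are only ever sorted and serialized, never
// randomly accessed.
type Attr struct{ Key, Value string }

type Attrs []Attr

// Sort orders attributes by key. Comparing the Key field directly
// avoids re-splitting a "key=value" string on each comparison.
func (a Attrs) Sort() {
	sort.Slice(a, func(i, j int) bool { return a[i].Key < a[j].Key })
}
```
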
abfe031053 Merge pull request #33713 from x1022as/error_fd
fd leak and error handling
Upstream-commit: 3be2273d03e4b06dff4006b1a600c578c7aa09fc
Component: engine
2017-07-06 16:47:34 -07:00
2aedd8b160 Fix log readers can block writes indefinitely
Before this patch, a log reader is able to block all log writes
indefinitely (and other operations) by simply opening the log stream and
not consuming all the messages.

The reason for this is we protect the read stream from corruption by
ensuring there are no new writes while the log stream is consumed (and
caught up with the live entries).

We can get around this issue because log files are append only, so we
can limit reads to only the section of the file that was written to when
the log stream was first requested.

Now log writes are only blocked until all files are opened, rather
than until the logs have been streamed to the client.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: e2209185ed1c959131d4068ec7fc93e194dc0802
Component: engine
2017-07-05 14:04:52 -04:00
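
A minimal illustration of the append-only trick described above: capture the file size when the stream is requested and serve reads from a section reader bounded by it. Function names are assumptions:

```go
package tailer

import (
	"io"
	"os"
)

// openSection captures the file's size at the moment the log stream is
// requested and bounds all reads to that range. Because the log file is
// append-only, everything below that offset is immutable, so readers no
// longer need to hold off writers to stay consistent.
func openSection(path string) (*io.SectionReader, *os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	fi, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, nil, err
	}
	// the caller closes f once it has streamed the section
	return io.NewSectionReader(f, 0, fi.Size()), f, nil
}
```
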
c41db1dd55 fd leak and error handling
Signed-off-by: Deng Guangxing <dengguangxing@huawei.com>
Upstream-commit: 0042f992d80b59b97380bbbbeb271fd3030edd90
Component: engine
2017-06-23 15:38:35 +08:00
a1a5b0a48d move multireader out of /pkg
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
Upstream-commit: 2445e6b99d4beecb25d556d9a099bdf47703e174
Component: engine
2017-06-01 16:22:00 -07:00
740b35daa9 Prevent ContainerLogs from hanging if container doesn't run for long
Signed-off-by: Jim Minter <jminter@redhat.com>
Upstream-commit: 4fdb17c77765fb44ae263d5012bb79bb8bd15456
Component: engine
2017-04-20 12:27:51 +01:00
3f185ca5af Merge pull request #27565 from rothrock/env-by-regex
[splunk] Use a regex to match environment variables
Upstream-commit: 1325f667eeb42b717c2f9d369f2ee6d701a280e3
Component: engine
2017-03-31 10:46:48 -04:00
ee71506629 Fix cpu spin waiting for log write events
This loop is not ever going to return since it's never actually setting
the `err` var except on the first iteration.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 7a179972ff963706404f91671960b144dec98d65
Component: engine
2017-02-15 21:41:32 -05:00
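
One common Go shape that produces exactly this symptom is `:=` shadowing inside the loop body; a self-contained illustration of that shape (not the daemon's actual code):

```go
package main

import "errors"

type watcher struct{ n int }

func (w *watcher) next() (int, error) {
	w.n++
	if w.n > 3 {
		return 0, errors.New("done")
	}
	return w.n, nil
}

func main() {
	w := &watcher{}
	v, err := w.next()
	for err == nil {
		_ = v
		// The buggy shape: writing `v, err := w.next()` here would
		// declare fresh variables that shadow the outer v and err, so
		// the loop condition's err stays nil forever and the loop spins.
		v, err = w.next() // fix: plain assignment updates the outer err
	}
}
```
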
c238856aa4 Use sync.Pool for logger Messages
This reduces allocs and bytes used per log entry significantly as well
as some improvement to time per log operation.

Each log driver, however, must put messages back in the pool once they
are finished with the message.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 3f4fccb65f0ef286c9c4e0f01c4ae7bb09a6ad89
Component: engine
2017-02-01 13:52:37 -05:00
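
A sketch of the pooling pattern the commit describes, assuming a simplified Message type; the contract it mentions is that each driver returns messages to the pool once finished:

```go
package logger

import (
	"sync"
	"time"
)

// Message is a simplified stand-in for the logger's message type.
type Message struct {
	Line      []byte
	Timestamp time.Time
}

var messagePool = sync.Pool{
	New: func() interface{} { return &Message{Line: make([]byte, 0, 256)} },
}

// NewMessage hands out a pooled message instead of allocating one.
func NewMessage() *Message {
	return messagePool.Get().(*Message)
}

// PutMessage is what each driver must call when done with a message;
// resetting first keeps stale data out of the pool.
func PutMessage(msg *Message) {
	msg.Line = msg.Line[:0]
	msg.Timestamp = time.Time{}
	messagePool.Put(msg)
}
```
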
310ff23136 Use a regex to match environment variables #27565
Signed-off-by: Joseph Rothrock <rothrock@rothrock.org>
Upstream-commit: 9758a2a72445eb767ea81b44a73aabb5a8005bab
Component: engine
2017-01-19 15:30:11 -08:00
f5282be2f9 Close logwatcher on context cancellation
This commit addresses 2 issues:

  1. in `tailfile()`, if somehow the `logWatcher.Msg` channel were to become full and the watcher closed before space was made in it, we were getting stuck there forever, since we were not checking for the watcher getting closed
  2. when servicing `docker logs`, if the command was cancelled we were not closing the watcher (and hence not notifying it to stop copying data)

Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
Upstream-commit: fb2bb3653e2755d971f21debfecbd7c878a3c23f
Component: engine
2017-01-17 14:36:13 -08:00
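
An illustrative reduction of fix (1): every send into the watcher's message channel also selects on the close notification. Type and method names are stand-ins:

```go
package logwatch

// Message and LogWatcher are illustrative stand-ins.
type Message struct{ Line []byte }

type LogWatcher struct {
	Msg    chan *Message
	closed chan struct{}
}

func (w *LogWatcher) WatchClose() <-chan struct{} { return w.closed }

// tailfile's sends now also select on the close notification, so a full
// Msg channel plus a departed reader can no longer wedge it forever.
func tailfile(msgs []*Message, w *LogWatcher) {
	for _, m := range msgs {
		select {
		case w.Msg <- m:
		case <-w.WatchClose():
			return
		}
	}
}
```
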
cabd6a6108 Rename 'context' to 'loginfo' in the logger module
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

gofmt

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

remove 'api/types/container/config.go' from this PR

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

change 'LogInfo' to 'Info'

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
Upstream-commit: 17ec911da7d4b77b069d9f6c8c010865e19acc6c
Component: engine
2016-12-29 19:13:44 +08:00
61646d77af project: use vndr for vendoring
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: f2614f2107c838d014d31b806e3b8a9f1395cb2b
Component: engine
2016-11-03 15:31:46 -07:00
5355683d26 Decouple removing the fileWatcher from reading
Fixes #27779

Currently `followLogs` can get into a deadlock if we receive an inotify
IN_MODIFY event while we are trying to close the `fileWatcher`. This is
due to the fact that closing the `fileWatcher` happens in the same block
as consumes events from the `fileWatcher`. We are trying to run
`fileWatcher.Close`, which is waiting for an IN_IGNORE event to come in
over inotify to confirm the watch has been removed. But, because an
IN_MODIFY event has appeared after `Close` was entered but before the
IN_IGNORE, the broadcast never comes. The IN_MODIFY cannot be consumed
as the events channel is unbuffered and the only `select` that reads
from it is busy waiting for the IN_IGNORE event.

In order to try and fix this race condition I've moved the removal of
the `fileWatcher` out to a separate go block that waits for a signal to
close, removes the watcher and then signals to the previous selects on
the close signal.

This has introduced a `fileWatcher.Remove` in the final case, but if we
try and remove a watcher that does not exist it will just return an
error saying so. We are not doing any checking on the return of `Remove`
so this shouldn't cause any side-effects.

Signed-off-by: Tom Booth <tombooth@gmail.com>
Upstream-commit: a69a59ffc7e3d028a72d1195c2c1535f447eaa84
Component: engine
2016-10-28 10:53:38 +01:00
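
A sketch of the decoupling described above, written against the standalone fsnotify package (the daemon wraps it in its own filenotify layer); the structure, not the exact code, is the point:

```go
package follow

import "github.com/fsnotify/fsnotify"

// Teardown now runs in its own goroutine, so the event-consuming select
// keeps draining the unbuffered Events channel while Close waits for
// inotify's IN_IGNORED confirmation.
func follow(w *fsnotify.Watcher, name string, done <-chan struct{}) {
	closed := make(chan struct{})
	go func() {
		<-done
		w.Remove(name) // removing a missing watch only returns an error, which we ignore
		w.Close()
		close(closed)
	}()
	for {
		select {
		case _, ok := <-w.Events:
			if !ok {
				return // events channel closed during teardown
			}
			// handle the event; crucially, this case stays live while
			// the goroutine above is closing the watcher
		case <-closed:
			return
		}
	}
}
```
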
3b33d2d580 Do not directly cast io.ReadSeeker to io.Reader
`golint` had the following issue when linting this file:

```
daemon/logger/jsonfilelog/read.go:116:10: should omit type io.Reader from declaration of var rdr; it will be inferred from the right-hand side
```

In order to keep it happy, changing this to an indirect assignment
still maintains the same functionality.

Signed-off-by: Tom Booth <tombooth@gmail.com>
Upstream-commit: 6314bec641b473571c0160d85bdc70250342185a
Component: engine
2016-10-26 17:37:48 +01:00
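
For reference, the two forms side by side in a self-contained sketch (the file and function names here are made up):

```go
package read

import (
	"io"
	"os"
)

func open(name string) (io.Reader, error) {
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	// golint flags the explicit annotation:
	//     var rdr io.Reader = f // "should omit type io.Reader ..."
	// The indirect assignment keeps rdr's static type as io.Reader
	// (rather than the concrete *os.File) while satisfying the linter:
	var rdr io.Reader
	rdr = f
	return rdr, nil
}
```
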
54cfd027ab Merge pull request #24514 from cpuguy83/fix_log_follow_rotate
Fix issues with tailing rotated jsonlog file
Upstream-commit: cbd4e4d48ae2edda79be962f6136c6ae9edad7b9
Component: engine
2016-08-26 09:47:40 -07:00
ab16b6c61a Fix issues with tailing rotated jsonlog file
Fixes a race where the log reader would get events for an actual
rotation as well as from fsnotify (`fsnotify.Rename`).
This issue becomes extremely apparent when rotations are fast, for
example:

```
$ docker run -d --name test --log-opt max-size=1 --log-opt max-file=2 \
    busybox sh -c 'while true; do echo hello; usleep 100000; done'
```

With this change the log reader for jsonlogs can handle rotations that
happen as above.

Instead of listening for both fs events AND rotation events
simultaneously, potentially meaning we see 2 rotations for only a single
rotation due to channel buffering, only listen for fs events (like
`Rename`) and then wait to be notified about rotation by the logger.
This makes sure that we don't see 2 rotations for 1, and that we don't
start trying to read until the logger is actually ready for us to.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 84e60a7e10278e3acd2b783d0e6955dc5198b57c
Component: engine
2016-08-15 13:57:28 -04:00
1c502571ef add defer file.Close to avoid potential fd leak
Signed-off-by: allencloud <allen.sun@daocloud.io>
Upstream-commit: 0ead624473b6bddc232b46bc7c76ab4f9c743ff5
Component: engine
2016-08-10 08:36:09 +08:00
f3d95a7a8f Merge pull request #22982 from nalind/log-newlines
Improve logging of long log lines
Upstream-commit: e43033bc238ce61466c0abe3702878eeeba5e917
Component: engine
2016-07-28 16:41:01 -07:00
675aa5766a Fixing file handle leak for "docker logs"
If "docker logs" was used on an offline container, the logger is leaked, leaving it up to the finalizer to close the file handle, which could block removal of the container.  Further, the json file logger could leak an open handle if the logs are read without follow due to an early return without a close.  This change addresses both cases.

Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
Upstream-commit: 54f11b84d218c1a7ff4ab4e2148819265da213bc
Component: engine
2016-06-21 18:40:30 -07:00
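
The standard Go remedy for both leaks described above is deferring the close immediately after a successful open; a minimal sketch with assumed names:

```go
package logs

import "os"

// readLogs sketches the fix: deferring Close right after a successful
// open covers every return path, so neither an early return (reading
// logs without follow) nor an abandoned reader leaves the handle to the
// finalizer, which could block removal of the container.
func readLogs(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// ... decode entries and stream them to the client ...
	return nil
}
```
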
fb7430591e Improve logging of long log lines
This change updates how we handle long lines of output from the
container.  The previous logic used a bufio reader to read entire lines
of output from the container through an intermediate BytesPipe, and that
allowed the container to cause dockerd to consume an unconstrained
amount of memory as it attempted to collect a whole line of output, by
outputting data without newlines.

To avoid that, we replace the bufio reader with our own buffering scheme
that handles log lines up to 16k in length, breaking up anything longer
than that into multiple chunks.  If we can dispense with noting this
detail properly at the end of output, we can switch from using
ReadBytes() to using ReadLine() instead.  We add a field ("Partial") to
the log message structure to flag when we pass data to the log driver
that did not end with a newline.

The Line member of Message structures that we pass to log drivers is now
a slice into data which can be overwritten between calls to the log
driver's Log() method, so drivers which batch up Messages before
processing them need to take additional care: we add a function
(logger.CopyMessage()) that can be used to create a deep copy of a
Message structure, and modify the awslogs driver to use it.

We update the jsonfile log driver to append a "\n" to the data that it
logs to disk only when the Partial flag is false (it previously did so
unconditionally), to make its "logs" output correctly reproduce the data
as we received it.

Likewise, we modify the journald log driver to add a data field with
value CONTAINER_PARTIAL_MESSAGE=true to entries when the Partial flag is
true, and update its "logs" reader to refrain from appending a "\n" to
the data that it retrieves if it does not see this field/value pair (it
also previously did this unconditionally).

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
Upstream-commit: 513ec73831269947d38a644c278ce3cac36783b2
Component: engine
2016-06-14 14:11:47 -04:00
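
A reduced sketch of the buffering scheme described above, built on bufio.ReadLine, whose isPrefix result maps naturally onto the new Partial flag; the Message type is an illustrative stand-in:

```go
package chunker

import (
	"bufio"
	"io"
)

const maxLineSize = 16 * 1024

// Message mirrors the structure the commit describes: Partial marks
// data handed to the driver that did not end with a newline.
type Message struct {
	Line    []byte
	Partial bool
}

// newReader caps the buffer at 16k, so ReadLine reports isPrefix for
// anything longer, which then arrives as multiple chunks.
func newReader(r io.Reader) *bufio.Reader {
	return bufio.NewReaderSize(r, maxLineSize)
}

func readMessage(r *bufio.Reader) (*Message, error) {
	line, isPrefix, err := r.ReadLine()
	if err != nil {
		return nil, err
	}
	// Copy before returning: ReadLine's slice points into the reader's
	// buffer and is only valid until the next read -- the same aliasing
	// caveat the commit notes for Message.Line and logger.CopyMessage().
	buf := make([]byte, len(line))
	copy(buf, line)
	return &Message{Line: buf, Partial: isPrefix}, nil
}
```
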
dc1c41cd47 Remove the logger.Message ContainerID field
Log drivers are instantiated on a per-container basis, and passed the
container ID (along with other information) when they're initialized.
Drivers that care about that value are caching the value that they're
passed when they're initialized and using it in favor of the value
contained in Message structures that are passed to them, so the field in
Messages is unused, so we remove it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Upstream-commit: 7772d270c06cc6c26359b556c95563bae31c1038
Component: engine
2016-05-31 16:41:29 -04:00
697f71e4f2 Add support for reading logs extra attrs
The jsonlog logger currently allows specifying envs and labels that
should be propagated to the log message, however there has been no way
to read that back.

This adds a new API option to enable inserting these attrs back to the
log reader.

With timestamps, this looks like so:
```
2016-04-08T15:28:09.835913720Z foo=bar,hello=world hello
```

The extra attrs are comma separated before the log message but after
timestamps.

Without timestamps it looks like so:
```
foo=bar,hello=world hello
```

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: bd9d14a07b9f1c82625dc8483245caf3fa7fe9e6
Component: engine
2016-05-06 20:42:20 -04:00
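
A small sketch of how the attrs might be rendered into that layout; this is an assumption about the formatting only, not the daemon's actual reader code:

```go
package attrsfmt

import (
	"sort"
	"strings"
)

// formatEntry reproduces the layout shown above: optional timestamp,
// comma-separated key=value attrs, then the message.
func formatEntry(timestamp string, attrs map[string]string, line string) string {
	pairs := make([]string, 0, len(attrs))
	for k, v := range attrs {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // deterministic order: foo=bar,hello=world
	var parts []string
	if timestamp != "" {
		parts = append(parts, timestamp)
	}
	if len(pairs) > 0 {
		parts = append(parts, strings.Join(pairs, ","))
	}
	parts = append(parts, line)
	return strings.Join(parts, " ")
}
```
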
d7a627364b Fixes some issues with jsonfile write/read
This cleans up some of the use of the filepoller, which makes reading
significantly more robust and gives fewer chances of falling back to
the polling-based watcher.
In a lot of cases, if the file was being rotated while we were adding it
to the watcher, it would return an error that the file doesn't exist and
would fallback.
In some cases this fallback could be triggered multiple times even if we
were already on the fallback/poll-based watcher.

It also fixes an open file leak caused by not closing files properly on
rotate, as well as not closing files that were read via the `tail`
function until after the log reader is completed.

Prior to the above changes, it was relatively simple to cause the log
reader to error out by having quick rotations, for example:
```
$ docker run --name test --log-opt max-size=10b --log-opt max-file=10 \
    -d busybox sh -c 'while true; do usleep 500000; echo hello; done'
$ docker logs -f test
```
After these changes I can run this forever without error.

Another fix removes 2 `os.Stat` calls when rotating files. The stat
calls are not needed since we are just calling `os.Rename` anyway, which
will in turn also just produce the same error that `Stat` would.
These `Stat` calls were also quite expensive.
Removing these stat calls also seemed to resolve an issue causing slow
memory growth on the daemon.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 1ae9dcf97dc6ccaf7a7977fe8ae1812b691ccba2
Component: engine
2016-03-21 11:05:58 -04:00
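
The Stat-before-Rename removal in miniature; illustrative only, relying on the fact the commit notes, that os.Rename reports a missing source on its own:

```go
package rotate

import "os"

// rotate sketches the optimization: os.Rename already fails with a
// meaningful error when the source is missing, so the preceding
// os.Stat calls were pure overhead -- and, per the commit, expensive.
func rotate(name string) error {
	// Before: _, err := os.Stat(name) followed by the same Rename --
	// two syscalls where one reports the identical failure.
	return os.Rename(name, name+".1")
}
```
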
cf6760a787 Remove some unused structs and fields
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Upstream-commit: 0a352e1a906fbf7592aa95d6327776236d13392a
Component: engine
2016-03-01 09:59:29 -08:00
6324c7793e Merge pull request #20605 from mountkin/optimize-json-log-writer
make the json log writer much faster
Upstream-commit: ba5a282a83a603c3bb61cd40642af54d2088f747
Component: engine
2016-02-29 10:47:46 -08:00
828abfa863 add file poller panic fix from 1.10.2
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: f9524a4d24d6135ae1777738390e1f6a590a7a1c
Component: engine
2016-02-24 10:17:29 -05:00
0a8b8e629c Revert "use pubsub instead of filenotify to follow json logs"
This reverts commit b1594c59f5e0d1ac898eacde8d91b1ba33c2b626.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: 91fdfdd53722179b21b80b9dfbe8bc2e09d0a7b1
Component: engine
2016-02-23 21:43:52 -05:00
1805b24705 make the json log writer much faster
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
Upstream-commit: d7af03111495cadd71abd3a8b5805066ea688e9b
Component: engine
2016-02-23 19:28:06 +08:00
6ebcbbfdd8 use pubsub instead of filenotify to follow json logs
An inotify event is triggered as soon as data is written to disk.
But at the time the inotify event is received, the json line might
not be fully saved to disk yet. If the json decoder tries to decode
in that case, an io.UnexpectedEOF will be triggered.
We used to retry several times to mitigate the io.UnexpectedEOF error,
but there are still flaky tests caused by the partial log entries.

The daemon knows exactly when there are new log entries emitted. We can
use the pubsub package to notify all the log readers instead of inotify.

Signed-off-by: Shijiang Wei <mountkin@gmail.com>

try to fix broken test. will squash once tests pass

Signed-off-by: Shijiang Wei <mountkin@gmail.com>
Upstream-commit: b1594c59f5e0d1ac898eacde8d91b1ba33c2b626
Component: engine
2016-02-15 19:25:16 +08:00
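
A minimal stand-in for the pubsub mechanism described above (the daemon has its own pubsub package; this sketch only shows the wake-up-on-complete-write idea):

```go
package pubsub

import "sync"

// Publisher models the idea: the writer publishes after each complete
// entry is on disk, so readers wake only when a full json line exists --
// no inotify, no partial-line decode errors.
type Publisher struct {
	mu   sync.Mutex
	subs map[chan struct{}]struct{}
}

func New() *Publisher {
	return &Publisher{subs: make(map[chan struct{}]struct{})}
}

// Subscribe registers a reader; the 1-slot buffer coalesces bursts of
// notifications into a single pending wakeup.
func (p *Publisher) Subscribe() chan struct{} {
	ch := make(chan struct{}, 1)
	p.mu.Lock()
	p.subs[ch] = struct{}{}
	p.mu.Unlock()
	return ch
}

// Publish nudges every subscribed reader without ever blocking the writer.
func (p *Publisher) Publish() {
	p.mu.Lock()
	for ch := range p.subs {
		select {
		case ch <- struct{}{}:
		default: // reader already has a wakeup pending
		}
	}
	p.mu.Unlock()
}
```
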
be7836d63c Fix race condition in JSONFileLogger.Log
Signed-off-by: Ingo Gottwald <in.gottwald@gmail.com>
Upstream-commit: f8c54200805a9b9088c90f0b46bfa848d7bae67c
Component: engine
2016-01-03 22:45:56 +01:00
3d517a11e5 Replace pkg/units with docker/go-units.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 4fef42ba206ac90346e6e0fe25bead3f77dc4b0f
Component: engine
2015-12-16 12:26:49 -05:00
7f43cd332d Move timeutils functions to the only places where they are used.
- Move time json marshaling to the jsonlog package: this is a docker
  internal hack that we should not promote as a library.
- Move Timestamp encoding/decoding functions to the API types: this is
  only used there. It could be a standalone library, but I don't think
  it's worth having a separate repo for this. It could introduce more
  complexity than it solves.

Signed-off-by: David Calavera <david.calavera@gmail.com>
Upstream-commit: 27220ecc6b1eedf650ca9cf94965cb0dc2054efd
Component: engine
2015-12-15 14:56:14 -05:00
3c4fcf6b7a Fix typos found across repository
Signed-off-by: Justas Brazauskas <brazauskasjustas@gmail.com>
Upstream-commit: 927b334ebfc786276a039e45ec097e71bf9a104c
Component: engine
2015-12-13 18:04:12 +02:00
c0f1567bbc Extract rotate file writer from json log driver
Signed-off-by: Daehyeok Mun <daehyeok@gmail.com>
Upstream-commit: 086c0b4a6616e8db8009a5d72411bcc6774adaf1
Component: engine
2015-11-14 16:15:44 -07:00
66c6bac35d Fallback to file polling for jsonlog reader on err
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Upstream-commit: c136a33c5bf233f4ce108a8a8481d78aeaec93cc
Component: engine
2015-11-05 10:05:05 -05:00
74b08273a6 increase the maxJSONDecodeRetry in json log reader
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
Upstream-commit: a82f9ac81956c6b63772c9c720878d377dd9ee02
Component: engine
2015-11-02 17:51:22 +08:00
3e20fd363b fix a race in json logger reader
The json decoder starts to decode as soon as an inotify event is
received.
But at the time the inotify event is triggered, the json log
entry might not have been fully written to disk.
In this case the decoder will return an "io.UnexpectedEOF" error, but
there is still data remaining in the decoder's buffer, and that data
should be passed to the decoder when the next inotify event is
triggered.

Signed-off-by: Shijiang Wei <mountkin@gmail.com>
Upstream-commit: e41eae8b42052d07432405cca113ffcf8cf49c06
Component: engine
2015-10-31 02:28:23 +08:00
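
A hedged sketch of the fix's shape: on io.ErrUnexpectedEOF, the decoder's buffered remainder is stitched in front of the file so the partial entry is completed by later writes rather than dropped. Names are illustrative:

```go
package reader

import (
	"encoding/json"
	"io"
)

type entry struct {
	Log  string `json:"log"`
	Time string `json:"time"`
}

// decodeNext tries to decode the next entry. On io.ErrUnexpectedEOF it
// returns a replacement decoder whose input starts with the bytes the
// old decoder had already buffered, followed by the rest of the file.
func decodeNext(dec *json.Decoder, f io.Reader) (*entry, *json.Decoder, error) {
	var e entry
	err := dec.Decode(&e)
	if err == io.ErrUnexpectedEOF {
		// keep the partial bytes; the caller retries with the new
		// decoder once the next event signals more data
		dec = json.NewDecoder(io.MultiReader(dec.Buffered(), f))
		return nil, dec, err
	}
	return &e, dec, err
}
```
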