Error while reading from Writer: bufio.Scanner: token too long #1370
The way to do this would be to modify func (entry *Entry) writerScanner(reader *io.PipeReader, printFunc func(args ...interface{})) in writer.go with a custom split function that splits the input into chunks of up to 64KB:

// writerScanner scans the input from the given reader and writes it to the
// logger using the specified print function. It splits the input into chunks
// of up to 64KB so a single long line cannot overflow the scanner's buffer.
func (entry *Entry) writerScanner(reader *io.PipeReader, printFunc func(args ...interface{})) {
	scanner := bufio.NewScanner(reader)

	// Set the buffer size to the maximum token size to avoid buffer overflows.
	scanner.Buffer(make([]byte, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)

	// Define a split function that emits chunks of up to 64KB. The comparison
	// must be >= rather than >: the buffer is capped at chunkSize, so
	// len(data) can never exceed it.
	chunkSize := bufio.MaxScanTokenSize // 64KB
	splitFunc := func(data []byte, atEOF bool) (int, []byte, error) {
		if len(data) >= chunkSize {
			return chunkSize, data[:chunkSize], nil
		}
		// Fall back to line splitting for shorter input so the final
		// partial chunk at EOF is still emitted instead of being dropped.
		return bufio.ScanLines(data, atEOF)
	}

	// Use the custom split function to split the input.
	scanner.Split(splitFunc)

	// Scan the input and write it to the logger using the specified print function.
	for scanner.Scan() {
		printFunc(strings.TrimRight(scanner.Text(), "\r\n"))
	}

	// If there was an error while scanning the input, log it.
	if err := scanner.Err(); err != nil {
		entry.Errorf("Error while reading from Writer: %s", err)
	}

	// Close the reader when we are done.
	reader.Close()
}
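For anyone who wants to sanity-check the chunking split in isolation, here is a minimal self-contained sketch (the surrounding program is mine, not from the patch) that runs the same split function over an in-memory reader longer than 64KB:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// 150,000 bytes with no newline: longer than bufio.MaxScanTokenSize (64KB).
	input := strings.Repeat("a", 150000)

	scanner := bufio.NewScanner(strings.NewReader(input))
	scanner.Buffer(make([]byte, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)

	chunkSize := bufio.MaxScanTokenSize
	scanner.Split(func(data []byte, atEOF bool) (int, []byte, error) {
		if len(data) >= chunkSize {
			return chunkSize, data[:chunkSize], nil
		}
		return bufio.ScanLines(data, atEOF)
	})

	for scanner.Scan() {
		fmt.Println(len(scanner.Bytes())) // prints 65536, 65536, 18928
	}
	if err := scanner.Err(); err != nil {
		fmt.Println("scan error:", err) // not reached with the chunking split
	}
}

With the stock bufio.ScanLines split, the same input instead fails with "bufio.Scanner: token too long".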
This issue is stale because it has been open for 30 days with no activity.
Still relevant.
I see that there were a couple of suggestions for this issue. Do we have a timeline for when this might be fixed in this package?
Fixed in v1.9.3 via #1384 |
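For reference, consumers hitting this can pick up the fix with a standard dependency bump (plain Go module tooling, nothing specific to this thread):

go get github.com/sirupsen/logrus@v1.9.3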
A bug in the log lib we use, github.com/sirupsen/logrus v1.9.0: Error while reading from Writer: bufio.Scanner: token too long #1370 sirupsen/logrus#1370. It was unexpectedly closing the stderr log socket that containerpilot provides to managed services. Good catch by Ema Musella, who found the exact error message that led us to solving the problem. The bug is fixed by updating logrus to release 1.9.3.
It looks like Twistlock still reports this as not fixed. I do not have a Twistlock license, but one of our users does; see here: https://jira.percona.com/browse/PT-2229?focusedCommentId=309281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-309281
I notice that while there are tags for v1.9.1, v1.9.2 and v1.9.3, no GitHub releases were created for those; I know some tools use GitHub's releases API (and ignore the actual tags), which could be related. Let me have a look at back-filling the missing releases from the existing tags (not sure whether the dates will match the original tag dates when doing so).
Done! I created the missing releases for the latest 1.8.x and 1.9.x tags (and it looks like GitHub uses the date of the tag); https://github.com/sirupsen/logrus/releases
Oh, LOL, actually I notice it doesn't (it shows the tag date in the draft 🎉, but not when publishing 😞). But at least they're all there.
To fix: PRISMA-2023-0056 Reference: sirupsen/logrus#1370 Signed-off-by: Dhi Aurrahman <dio@rockybars.com>
sirupsen/logrus#1370 Co-authored-by: John Starich <johnstarich@johnstarich.com>
This happens if you try to log something longer than 64KB without newlines through something like cmd.Stdout = logrus.Writer(). This can cause a denial of service in some cases.
The culprit is here:
https://github.com/sirupsen/logrus/blob/master/writer.go#L58-L59
Maybe it needs a custom scanner.Split(...) function or some other invention.
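A minimal sketch of the trigger described above (the child command and sizes are illustrative, not from the issue): a child process writes more than 64KB with no newline into logrus.Writer(), and logrus v1.9.0 reports the scanner error and closes the pipe.

package main

import (
	"os/exec"
	"time"

	"github.com/sirupsen/logrus"
)

func main() {
	w := logrus.StandardLogger().Writer()
	defer w.Close()

	// Emit 128KB of 'a' with no trailing newline (illustrative command).
	cmd := exec.Command("sh", "-c", "head -c 131072 /dev/zero | tr '\\0' a")
	cmd.Stdout = w
	cmd.Stderr = w

	// With logrus v1.9.0 this logs "Error while reading from Writer:
	// bufio.Scanner: token too long"; with v1.9.3 the output is logged
	// in chunks of up to 64KB.
	if err := cmd.Run(); err != nil {
		logrus.Errorf("command failed: %v", err)
	}

	// Give logrus's background scanner goroutine a moment to drain the pipe.
	time.Sleep(100 * time.Millisecond)
}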