JSON benchmarks log more information than Text benchmarks. #2

Open
ChrisHines opened this issue May 13, 2016 · 2 comments

Comments

@ChrisHines

The JSON benchmarks log several fields of information, while the Text benchmarks log only a static string message. This discrepancy makes comparisons between the Text and JSON benchmarks misleading.

@peterbourgon
Contributor

Chris also explained in chat that

Each argument to Log adds another alloc. Each argument causes Go to create an interface{} to wrap the value, and because log.Logger is an interface, escape analysis must assume the interface{} value escapes, so it is allocated on the heap.

which mostly explains the 9-10 allocs per call in the JSONNegative tests. And thus your note in the README

When it comes to negative tests for JSON, all loggers made too many allocations for my taste. This is especially bad if you carpet-bomb your program with debug-level log lines but want to disable them in production: it puts too much unnecessary pressure on the GC and leads to all sorts of problems, including memory fragmentation. This is something I would like to see improved in future versions!

has to do with how you are invoking the loggers, and isn't something the logger internals can affect. Alas.
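For concreteness, here is a minimal sketch of the mechanism Chris describes, assuming a go-kit-style variadic interface; this is illustrative code, not from this repo:

```go
package main

// Logger mirrors the shape of go-kit's log.Logger (assumed here
// for illustration).
type Logger interface {
	Log(keyvals ...interface{}) error
}

// discard is a no-op implementation standing in for a real logger.
type discard struct{}

func (discard) Log(keyvals ...interface{}) error { return nil }

func main() {
	var l Logger = discard{}
	low, high, rate := 1016, 123.2, 15.5

	// Each argument is converted to interface{}; because Log is
	// called through an interface, escape analysis must assume the
	// boxed values escape, so they are typically heap-allocated:
	// one alloc per value, plus one for the variadic slice.
	// Inspect with: go build -gcflags=-m
	l.Log("low", low, "high", high, "rate", rate)
}
```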

@imkira
Owner

imkira commented Sep 4, 2017

We could certainly change the Text/JSON contents to be more "comparable", but my original purpose was never to help others choose between Text and JSON based on performance metrics; rather, it was to compare the different packages' performance separately for the Text format and for the JSON format.
Having said this, I believe all the fields (or at least their values), such as log level, rate, low, and high, should also be added to the Text format. Would that help?
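For illustration, a hedged sketch of what that could look like, with made-up field values and logrus standing in for any of the benchmarked packages:

```go
package main

import log "github.com/sirupsen/logrus"

func main() {
	// Illustrative values only: the point is that the Text benchmark
	// would carry the same structured fields as the JSON one, so the
	// two suites do comparable work per call.
	log.SetFormatter(&log.TextFormatter{})
	log.WithFields(log.Fields{
		"rate": 15.0,
		"low":  16,
		"high": 123.2,
	}).Info("The quick brown fox jumps over the lazy dog")
}
```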

I was also thinking it would be interesting to benchmark the performance of "custom" objects (values that are not primitives like ints, bools, or strings) across the various packages.
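A hypothetical sketch of such a benchmark; the user type, the no-op logger, and the benchmark name are all illustrative, not from this repo:

```go
package bench

import "testing"

// user is an illustrative custom (non-primitive) type.
type user struct {
	Name string
	Age  int
}

// Logger mirrors a go-kit-style variadic logging interface.
type Logger interface {
	Log(keyvals ...interface{}) error
}

// nopLogger stands in for whichever package is under test.
type nopLogger struct{}

func (nopLogger) Log(keyvals ...interface{}) error { return nil }

func BenchmarkCustomObject(b *testing.B) {
	var l Logger = nopLogger{}
	u := user{Name: "alice", Age: 30}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// The struct is boxed into an interface{} on every call,
		// which is exactly the cost this benchmark would measure.
		l.Log("user", u)
	}
}
```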
