Optimize redis storage adapter and make default #100
Conversation
Current coverage is 41.12%

```
@@            master   #100    diff @@
======================================
  Files           10     10
  Lines          269    321     +52
  Methods          0      0
  Messages         0      0
  Branches         0      0
======================================
+ Hits           130    132      +2
- Misses         119    169     +50
  Partials        20     20
```
```go
	a.queuedApps = map[string]bool{}
}

// Write adds a log message to an app-specific list in redis using ring-buffer-like semantics
```
can you add docs to indicate that this function may block?
Are you talking about Write()? Won't that not block as written?
yes. for example, if the goroutine running start() is in the middle of an execPipeline(), sends on a.messageChannel will block, right?
I don't think so... what would block those sends?
As stated in slack, one improvement I feel I can make here is by encapsulating more of the "message queue" state and logic in its own type. Consider that my next improvement. So I need feedback on the strategy more so than the structure of the code. (Although feedback on both is welcome, of course.)
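For what it's worth, whether that send blocks comes down to buffering. Below is a minimal sketch that assumes an unbuffered channel (the adapter's actual channel setup isn't visible in this diff); in that case a send does stall until the receiving goroutine gets back to its receive.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Unbuffered channel: every send must be matched by a receive.
	messageChannel := make(chan string)

	// The receiver simulates being busy (e.g. inside a pipeline flush)
	// before it returns to reading from the channel.
	go func() {
		time.Sleep(2 * time.Second) // busy with other work
		<-messageChannel            // finally ready to receive
	}()

	start := time.Now()
	messageChannel <- "log line" // blocks until the receiver is ready
	fmt.Printf("send completed after %v\n", time.Since(start))
}
```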
```go
return 0,
	a.redisClient.Pipeline(),
	time.NewTicker(time.Second),
	&map[string]bool{}
```
no need to return a pointer to a map - it's already a reference
Maps are implicitly references? I was not aware. Thanks! This will also probably get fixed naturally when I encapsulate more of the message queuing mechanism in its own type.
yep, check out https://blog.golang.org/go-maps-in-action for details if interested
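To illustrate the point from the linked post: a map value already acts like a reference to shared underlying data, so a function can mutate it without taking a pointer. (The names below are just for illustration.)

```go
package main

import "fmt"

// markQueued receives the map by value, but the map header still points
// at the same underlying data, so the caller sees the mutation.
func markQueued(queuedApps map[string]bool, app string) {
	queuedApps[app] = true
}

func main() {
	queuedApps := map[string]bool{}
	markQueued(queuedApps, "my-app")
	fmt.Println(queuedApps["my-app"]) // true
}
```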
```go
		mp.queuedApps[message.app] = true
		mp.messageCount++
	} else {
		log.Printf("Error adding rpush to %s to the pipeline: %s", message.app, err)
```
maybe return an error instead of printing here?
same with the below Printfs
Ordinarily, I would, but there can be many handlers (each its own goroutine) calling the storage adapter's Write() function, and in turn calling this function. The error can only be returned so far up the call stack before there isn't anyone waiting for a possible error. Logging it right as it occurs seemed the easiest way to mitigate that. The harder way would be introducing an error channel, I suppose. Thoughts?
alrighty, seems good enough, and I won't nitpick it anymore 😄
I'm using an errCh for this now.
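A rough sketch of that errCh idea, with illustrative names rather than the adapter's actual fields: writers report errors on a channel, and a single goroutine drains and logs them, so nothing has to propagate errors up a call stack nobody is waiting on.

```go
package main

import (
	"fmt"
	"log"
	"sync"
)

func main() {
	errCh := make(chan error, 10) // buffered so writers don't block on logging
	done := make(chan struct{})

	// One goroutine drains and logs errors from all writers.
	go func() {
		for err := range errCh {
			log.Printf("storage adapter error: %s", err)
		}
		close(done)
	}()

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Report the failure on the channel instead of returning it.
			errCh <- fmt.Errorf("error adding rpush for app-%d to the pipeline", i)
		}(i)
	}

	wg.Wait()
	close(errCh)
	<-done // let the logging goroutine finish draining
}
```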
```go
// Stop the storage adapter
func (a *redisAdapter) Stop() {
	a.stopCh <- true
```
it'd be better to just close this channel rather than send on it. this line makes only 1 goroutine receive true, while the close makes all listening goroutines receive false (the zero value of bool). the latter is better because it broadcasts the stop event to all listeners (even if you don't have multiple listeners now, it's a helpful pattern for scaling up this codebase again)
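A small sketch of that suggestion: closing the channel wakes every goroutine blocked on a receive (each gets the zero value, false for a chan bool), whereas a send wakes exactly one.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	stopCh := make(chan bool)
	var wg sync.WaitGroup

	// Several listeners all block on the same stop channel.
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			v := <-stopCh // receives false (the zero value) once the channel is closed
			fmt.Printf("listener %d stopping (received %v)\n", i, v)
		}(i)
	}

	// close broadcasts the stop event; `stopCh <- true` would release only one listener.
	close(stopCh)
	wg.Wait()
}
```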
```go
	select {
	case message := <-a.messageChannel:
		mp.addMessage(message)
		if mp.messageCount == 50 {
```
should this flush value be configurable?
It could be, and we could also make the timeout on queuing up messages configurable.

Made 2 comments but you can submit those in another PR.
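As a sketch of what that could look like (the field and function names here are hypothetical, not from this codebase), both the flush count and the flush interval could live on a small config struct:

```go
package storage

import "time"

// pipelineConfig collects the two tunables discussed above.
// These names are illustrative only.
type pipelineConfig struct {
	FlushCount    int           // flush once this many messages are queued...
	FlushInterval time.Duration // ...or after this much time, whichever comes first
}

func defaultPipelineConfig() pipelineConfig {
	return pipelineConfig{
		FlushCount:    50,
		FlushInterval: time.Second,
	}
}
```

In the run loop, the literal 50 and time.Second would then be replaced by cfg.FlushCount and time.NewTicker(cfg.FlushInterval).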
This is my first pass at optimizing the logger's use of Redis. It reduces the ops required by using larger pipelines capped with whatever ltrims are needed (one per app) instead of smaller pipelines that contain only one rpush and one ltrim. This is mostly just for @arschles to review and propose improvements. This PR includes the following:
The workflow-dev chart was installed in advance of any logger hacking.
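For reference, a rough sketch of the batching strategy described above, assuming a go-redis v6-style client (no context arguments); the real adapter's types and field names differ. The idea is many RPUSHes queued on one pipeline, capped by exactly one LTRIM per app before Exec:

```go
package storage

import "github.com/go-redis/redis"

// flushPipeline drains queued messages into a single redis pipeline:
// a batch of RPUSHes followed by one LTRIM per app, which keeps each
// app's list capped (the ring-buffer-like semantics).
func flushPipeline(client *redis.Client, queued map[string][]string, maxPerApp int64) error {
	pipe := client.Pipeline()
	for app, messages := range queued {
		for _, msg := range messages {
			pipe.RPush(app, msg)
		}
		// Keep only the newest maxPerApp entries for this app.
		pipe.LTrim(app, -maxPerApp, -1)
	}
	_, err := pipe.Exec()
	return err
}
```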