
Optimize redis storage adapter and make default #100

Merged
4 commits merged on Jul 18, 2016
Conversation

@krancour (Contributor) commented Jul 13, 2016

This is my first pass at optimizing the logger's use of Redis. It reduces the number of operations required by building larger pipelines capped with whatever ltrims are needed (one per app), instead of many small pipelines that each contain only one rpush and one ltrim.

This is mostly just for @arschles to review and propose improvements.

This PR includes the following:

  • Optimize the Redis storage adapter to build and execute bigger pipelines and (in total) submit fewer operations.
  • Make the Redis storage adapter pass some style checks it previously failed.
  • Update the logger's manifest to include the password for Redis. This references a secret that is assumed to exist from when the workflow-dev chart was installed in advance of any logger hacking.
  • Make the Redis storage adapter the default storage adapter.
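The batching strategy described above can be sketched without a live Redis connection. This is a hypothetical illustration of the bookkeeping only (names like `pipelineBatch` are invented for this sketch, not taken from the PR): rpush commands accumulate per message, touched apps are tracked in a set, and a flush appends a single ltrim per app.

```go
package main

import "fmt"

// message pairs an app name with a log line (hypothetical types,
// mirroring the adapter's internals only loosely).
type message struct {
	app  string
	text string
}

// pipelineBatch accumulates RPUSH commands and tracks which apps
// appeared, so a flush needs only one LTRIM per app rather than
// one LTRIM per message.
type pipelineBatch struct {
	commands   []string        // queued commands (stand-in for a real redis pipeline)
	queuedApps map[string]bool // apps seen since the last flush
	maxLen     int64           // ring-buffer cap enforced by LTRIM
}

func newPipelineBatch(maxLen int64) *pipelineBatch {
	return &pipelineBatch{queuedApps: map[string]bool{}, maxLen: maxLen}
}

func (b *pipelineBatch) add(m message) {
	b.commands = append(b.commands, fmt.Sprintf("RPUSH %s %q", m.app, m.text))
	b.queuedApps[m.app] = true
}

// flush caps each touched list with a single LTRIM (keeping the last
// maxLen entries), returns the whole batch, and resets the accumulator.
func (b *pipelineBatch) flush() []string {
	for app := range b.queuedApps {
		b.commands = append(b.commands, fmt.Sprintf("LTRIM %s -%d -1", app, b.maxLen))
	}
	out := b.commands
	b.commands = nil
	b.queuedApps = map[string]bool{}
	return out
}

func main() {
	b := newPipelineBatch(100)
	b.add(message{"app-a", "line 1"})
	b.add(message{"app-a", "line 2"})
	b.add(message{"app-b", "line 1"})
	// 3 RPUSHes plus 2 LTRIMs (one per app) in a single batch.
	fmt.Println(len(b.flush()))
}
```

The old scheme would have issued 3 rpushes and 3 ltrims across 3 round trips; here the same work is 5 commands in one pipeline.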

@krancour krancour self-assigned this Jul 13, 2016
@deis-bot

@arschles is a potential reviewer of this pull request based on my analysis of git blame information. Thanks @krancour!

@codecov-io commented Jul 13, 2016

Current coverage is 41.12%

Merging #100 into master will decrease coverage by 7.20%

@@             master       #100   diff @@
==========================================
  Files            10         10          
  Lines           269        321    +52   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits            130        132     +2   
- Misses          119        169    +50   
  Partials         20         20          

Powered by Codecov. Last updated by dcd4077...89a1585

a.queuedApps = map[string]bool{}
}

// Write adds a log message to an app-specific list in redis using ring-buffer-like semantics
Member:

can you add docs to indicate that this function may block?

Contributor (Author):

Are you talking about Write()? Won't that avoid blocking as written?

@arschles (Member) commented Jul 13, 2016:

yes. for example, if the goroutine running start() is in the middle of an execPipeline(), sends on a.messageChannel will block, right?

Contributor (Author):

I don't think so... what would block those sends?

@krancour (Contributor, Author) commented:

As stated in Slack, one improvement I can make here is encapsulating more of the "message queue" state and logic in its own type. Consider that my next improvement. So I need feedback on the strategy more than on the structure of the code. (Although feedback on both is welcome, of course.)

return 0,
a.redisClient.Pipeline(),
time.NewTicker(time.Second),
&map[string]bool{}
Member:

no need to return a pointer to a map - it's already a reference

Contributor (Author):

Maps are implicitly references? I was not aware. Thanks! This will also probably get fixed naturally when I encapsulate more of the message queuing mechanism in its own type.

@arschles (Member) commented Jul 13, 2016:

yep, check out https://blog.golang.org/go-maps-in-action for details if interested
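A minimal demonstration of the point above: a Go map value is a small header referring to shared underlying storage, so a function receiving the map (not a pointer to it) can still mutate it and the caller sees the change. The function name here is illustrative.

```go
package main

import "fmt"

// addEntry mutates the map it receives; no pointer is needed because
// the map value passed in refers to the same underlying storage as
// the caller's map.
func addEntry(m map[string]bool, key string) {
	m[key] = true
}

func main() {
	queuedApps := map[string]bool{}
	addEntry(queuedApps, "app-a")
	fmt.Println(queuedApps["app-a"]) // the caller observes the mutation
}
```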

mp.queuedApps[message.app] = true
mp.messageCount++
} else {
log.Printf("Error adding rpush to %s to the pipeline: %s", message.app, err)
Member:

maybe return an error instead of printing here?

Member:

same with the below Printfs

Contributor (Author):

Ordinarily, I would, but there can be many handlers (each its own goroutine) calling the storage adapter's Write() function, and in turn calling this function. The error can only be returned so far up the call stack before there isn't anyone waiting for a possible error. Logging it right as it occurs seemed the easiest way to mitigate that. The harder way would be introducing an error channel, I suppose. Thoughts?

Member:

alrighty, seems good enough, and I won't nitpick it anymore 😄

Contributor (Author):

I'm using an errCh for this now.
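One shape such an errCh approach can take (a hypothetical sketch with invented names, not the PR's actual code): workers send failures on a shared buffered channel instead of returning them up a call stack that has no one left to receive them, and a single goroutine drains the channel and logs.

```go
package main

import (
	"fmt"
	"log"
)

// worker sends failures to errCh instead of returning them, since
// errors from deep inside the write path have no caller waiting for
// them. The validation rule here is purely illustrative.
func worker(inputs []string, errCh chan<- error) {
	for _, in := range inputs {
		if in == "" {
			errCh <- fmt.Errorf("empty message")
			continue
		}
		// ... handle the message ...
	}
}

func main() {
	// A buffer keeps workers from blocking on error reporting.
	errCh := make(chan error, 10)
	go worker([]string{"a", "", "b"}, errCh)

	// One goroutine (here, main) drains errCh, so errors surface in a
	// single place instead of being printed at each call site.
	log.Println(<-errCh)
}
```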

@krancour krancour changed the title feat(redis): Optimize with more aggresive pipelining Optimize redis storage adapter and make default Jul 15, 2016

// Stop the storage adapter
func (a *redisAdapter) Stop() {
a.stopCh <- true
@arschles (Member) commented Jul 15, 2016:

it'd be better to just close this channel rather than send on it. this line makes only 1 goroutine receive true, while the close makes all listening goroutines receive false (the zero value of bool). the latter is better because it broadcasts the stop event to all listeners (even if you don't have multiple listeners now, it's a helpful pattern for scaling up this codebase again)
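The broadcast behavior of close can be sketched as follows: every goroutine blocked receiving on a channel is released when the channel is closed, whereas a single send would wake only one of them. The helper name is invented for this sketch.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// stopAll starts n listeners on one stop channel, closes it, and
// reports how many listeners observed the stop event.
func stopAll(n int) int {
	stopCh := make(chan struct{})
	var wg sync.WaitGroup
	var stopped int64

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-stopCh // a receive on a closed channel returns immediately
			atomic.AddInt64(&stopped, 1)
		}()
	}

	// close broadcasts to every listener; a single send (stopCh <- ...)
	// would have satisfied only one of the n receives.
	close(stopCh)
	wg.Wait()
	return int(stopped)
}

func main() {
	fmt.Println(stopAll(3)) // all three listeners observe the stop
}
```

Using `chan struct{}` also makes the intent explicit: the channel carries no data, only the stop signal itself.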

@arschles arschles added the LGTM1 label Jul 15, 2016
select {
case message := <-a.messageChannel:
mp.addMessage(message)
if mp.messageCount == 50 {
Member:

should this flush value be configurable?

Contributor (Author):

It could be, and we could also make the timeout on queuing up messages configurable.

@jchauncey (Member) commented:

Made 2 comments, but you can address those in another PR.

@krancour krancour merged commit 25fb250 into deis:master Jul 18, 2016
@krancour krancour deleted the aggresive-pipelining branch July 18, 2016 15:11
5 participants