Dont let pruning hold the index too long #1065
Conversation
Force-pushed from d215b83 to 7a76648
Adds a new config setting "max-prune-lock-time": the maximum amount of time per second that the prune job may hold the index lock. The default is 100ms, meaning the prune job can keep the index locked for at most 10% of the time.
Force-pushed from 7a76648 to 9a69513
idx/memory/memory.go (Outdated)
@@ -59,6 +61,7 @@ func ConfigSetup() {
 	memoryIdx.BoolVar(&TagSupport, "tag-support", false, "enables/disables querying based on tags")
 	memoryIdx.IntVar(&TagQueryWorkers, "tag-query-workers", 50, "number of workers to spin up to evaluate tag queries")
 	memoryIdx.IntVar(&matchCacheSize, "match-cache-size", 1000, "size of regular expression cache in tag query evaluation")
+	memoryIdx.DurationVar(&maxPruneLockTime, "max-prune-lock-time", time.Millisecond*100, "Maximum duration each second a prune job can lock the index.")
We have no proper way to validate duration flags (see #944), thus my recommendation is:
- use a StringVar
- create a ConfigProcess() function where we can explicitly parse and fail if needed
- call ConfigProcess() from the "Validate remaining settings" section in metrictank.go (see the sketch after this list)
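A minimal sketch of what that pattern could look like, assuming the flag set and variable names for illustration; the specific validation rule and error messages are not from the PR:

```go
package memory

import (
	"flag"
	"log"
	"time"
)

var (
	maxPruneLockTimeStr string        // raw flag value, parsed in ConfigProcess
	maxPruneLockTime    time.Duration // validated value used by the prune job
)

func ConfigSetup() {
	memoryIdx := flag.NewFlagSet("memory-idx", flag.ExitOnError)
	memoryIdx.StringVar(&maxPruneLockTimeStr, "max-prune-lock-time", "100ms",
		"Maximum duration each second a prune job can lock the index.")
	// ... register the other memory-idx flags ...
}

// ConfigProcess parses and validates settings that the flag package cannot
// validate for us. It should run during the "Validate remaining settings"
// phase, i.e. before any component is initialized.
func ConfigProcess() {
	var err error
	maxPruneLockTime, err = time.ParseDuration(maxPruneLockTimeStr)
	if err != nil {
		log.Fatalf("could not parse max-prune-lock-time %q: %s", maxPruneLockTimeStr, err)
	}
	// hypothetical sanity check: the per-second budget cannot exceed one second
	if maxPruneLockTime > time.Second {
		log.Fatal("max-prune-lock-time must be <= 1s")
	}
}
```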
I really don't like this model of config mgmt. MemoryIdx already has an Init() error function, can we not just validate the config settings there?
We don't initialize the index until late in startup (i.e. after initializing the store, inputs, etc.), and configuration should be validated before we start any of that.
I agree our config system is not pretty, and we will refactor it, but for now this is the best way to do it.
// provided context is done. When the amount of time spent on the task (the time is determined
// by calls to "Add()") per "window" duration exceeds "limit", then calls to
// Wait() will block until the start of the next window period.
func NewTimeLimiter(ctx context.Context, window, limit time.Duration) *TimeLimiter {
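To make the intended usage concrete, here is a hedged sketch of how a prune loop might drive the limiter; the function, the candidates slice, and the pruning step are illustrative assumptions, not the PR's actual prune code:

```go
// pruneStale is a hypothetical example, not the PR's prune loop. It assumes
// NewTimeLimiter (shown above) and the maxPruneLockTime setting.
func pruneStale(ctx context.Context, lock *sync.RWMutex, candidates []string) {
	// only hold the index lock for maxPruneLockTime out of every second
	tl := NewTimeLimiter(ctx, time.Second, maxPruneLockTime)
	for _, name := range candidates {
		tl.Wait() // blocks until the next window if this window's budget is spent

		pre := time.Now()
		lock.Lock()
		// ... delete the stale index entry for name ...
		_ = name
		lock.Unlock()

		tl.Add(time.Since(pre)) // account for the time the lock was held
	}
}
```

Calling Wait() before taking the lock keeps query latency bounded: once the prune job has spent its per-second budget, it parks until the next window instead of starving readers.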
I think a more common approach for the API design here would be to have a Stop() method instead of using a context merely for signaling shutdown.
That said, if we implemented Stop(), we'd have to make it use a channel anyway to communicate to run(), so it seems like a minor difference.
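For illustration, a minimal sketch of what such a Stop()-based variant could look like; the type and field names are assumptions, not the PR's code. As noted above, Stop() still needs a channel to tell run() to exit:

```go
package util

import "time"

// stoppableLimiter is a hypothetical variant of TimeLimiter that replaces the
// shutdown context with an explicit Stop method.
type stoppableLimiter struct {
	window time.Duration
	stop   chan struct{}
}

// Stop tells run() to exit. It still relies on a channel under the hood,
// which is why dropping the context is only a minor difference.
func (l *stoppableLimiter) Stop() {
	close(l.stop)
}

func (l *stoppableLimiter) run() {
	ticker := time.NewTicker(l.window)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// reset the per-window accounting here
		case <-l.stop:
			return
		}
	}
}
```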
Force-pushed from 60c5cbc to 99d4560
For the record, I've run the benchmarks with CPU profiling enabled, to see if a significant portion of the time is spent in chansend, chanreceive, chan select, or any of the time operations, but it looks like this is not the case.
lgtm