Fix worker leak in eager dispatcher #1723
Conversation
func (e *eagerWorkflowDispatcher) deregisterWorker(worker *workflowWorker) {
    e.lock.Lock()
    defer e.lock.Unlock()
    delete(e.workersByTaskQueue[worker.executionParameters.TaskQueue], worker.worker)
}
IIRC it is unfortunate but known that Go does not reclaim the memory for map entries on delete. At least that was the case in older Go versions, but maybe it has changed (see https://github.com/golang/go/issues/20135).
That issue is about the memory used by the map itself; what we care about here is making sure nothing is holding a reference to worker.worker so it can be GC'd.
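To make that distinction concrete, here is a small, self-contained Go sketch (dummyWorker is a made-up type, not anything from the SDK): deleting the key removes the map's reference to the worker, so the worker's memory becomes collectible even if Go never shrinks the map's internal buckets.

package main

import (
    "fmt"
    "runtime"
)

// dummyWorker stands in for a worker that retains a non-trivial amount of memory.
type dummyWorker struct {
    buf [1 << 20]byte
}

func main() {
    workers := map[*dummyWorker]struct{}{}
    w := &dummyWorker{}
    workers[w] = struct{}{}

    // While the map holds the key, the worker cannot be collected.
    delete(workers, w)
    w = nil // drop the last reference so the ~1 MiB buffer can be reclaimed

    runtime.GC()
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("heap in use after delete+GC: %d bytes\n", m.HeapInuse)
}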
randWorkers := make([]eagerWorker, 0, len(workers))
// Copy the workers so we can release the lock.
for worker := range workers {
    randWorkers = append(randWorkers, worker)
}
Since deregister is rare, I figure it could be better to put the heavier work there rather than in this call, since I suspect a copy is cheaper than building the slice from the map keys. But this probably doesn't matter, since I can't imagine a good use case where a user would ever have more than one workflow worker for a task queue (versioning notwithstanding, since this isn't built for versioning). So nothing needed, just thinking out loud; a sketch of the pattern under discussion follows.
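For readers following along, this is a rough, self-contained sketch of the copy-then-shuffle pattern being discussed; the names (dispatcher, eagerWorker, tryReserveSlot, tryGetEagerWorker) are assumptions for illustration, not the SDK's exact API.

package eagersketch

import (
    "math/rand"
    "sync"
)

// eagerWorker and dispatcher are illustrative stand-ins, not the SDK's types.
type eagerWorker interface {
    tryReserveSlot() bool
}

type dispatcher struct {
    lock               sync.Mutex
    workersByTaskQueue map[string]map[eagerWorker]struct{}
}

// tryGetEagerWorker copies the worker set for a task queue while holding the
// lock, releases the lock, then probes the copies in random order.
func (d *dispatcher) tryGetEagerWorker(taskQueue string) eagerWorker {
    d.lock.Lock()
    workers := d.workersByTaskQueue[taskQueue]
    // Copy the keys so the lock is not held while reserving a slot.
    randWorkers := make([]eagerWorker, 0, len(workers))
    for w := range workers {
        randWorkers = append(randWorkers, w)
    }
    d.lock.Unlock()

    // Shuffle so dispatch is spread across workers instead of following
    // whatever order map iteration happens to yield.
    rand.Shuffle(len(randWorkers), func(i, j int) {
        randWorkers[i], randWorkers[j] = randWorkers[j], randWorkers[i]
    })
    for _, w := range randWorkers {
        if w.tryReserveSlot() {
            return w
        }
    }
    return nil
}

Copying under the lock keeps the critical section short on the hot dispatch path; rebuilding the slice on deregister instead would shift that cost to the rare path, which is the trade-off weighed above.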
Branch updated from 9477fcd to d21bef0.
* Fix worker leak in eager dispatcher
* Refactor freeing
Fix worker leak in the eager dispatcher. Previously, workers were never removed from the eagerWorkflowDispatcher's per-task-queue map, causing a subtle memory leak if workers were started and stopped in a loop.
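As a rough illustration of the lifecycle the fix establishes (the types below are simplified stand-ins, not the SDK's actual structs): a worker registers with the dispatcher when it starts and now deregisters when it stops, so starting and stopping workers in a loop no longer grows the dispatcher's map.

package lifecyclesketch

import "sync"

type eagerWorkflowDispatcher struct {
    lock               sync.Mutex
    workersByTaskQueue map[string]map[*workflowWorker]struct{}
}

// workflowWorker is simplified for illustration.
type workflowWorker struct {
    taskQueue  string
    dispatcher *eagerWorkflowDispatcher
}

func (e *eagerWorkflowDispatcher) registerWorker(w *workflowWorker) {
    e.lock.Lock()
    defer e.lock.Unlock()
    if e.workersByTaskQueue[w.taskQueue] == nil {
        e.workersByTaskQueue[w.taskQueue] = make(map[*workflowWorker]struct{})
    }
    e.workersByTaskQueue[w.taskQueue][w] = struct{}{}
}

func (e *eagerWorkflowDispatcher) deregisterWorker(w *workflowWorker) {
    e.lock.Lock()
    defer e.lock.Unlock()
    delete(e.workersByTaskQueue[w.taskQueue], w)
}

// Start registers the worker for eager dispatch on its task queue.
func (w *workflowWorker) Start() { w.dispatcher.registerWorker(w) }

// Stop now also deregisters the worker, releasing the dispatcher's
// reference; before the fix, the map entry outlived the worker's shutdown.
func (w *workflowWorker) Stop() { w.dispatcher.deregisterWorker(w) }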