Handling Concurrency #12
I'm not sure about the concurrency, but the waiting task count seems to have an edge condition when it is already 0. The number "wraps around" with an integer underflow that matches my output, from this code:

```go
// Decrement waiting task count
atomic.AddUint64(&p.waitingTaskCount, ^uint64(0))
```

A standalone program that reproduces the wrap-around:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var a = ^uint64(0)
	var b uint64 = 1
	fmt.Println(a)
	fmt.Println(b)
	atomic.AddUint64(&b, a)
	fmt.Println(b)
	atomic.AddUint64(&b, a)
	fmt.Println(b)
}
```

yields:

```
18446744073709551615
1
0
18446744073709551615
```
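One general way to guard a counter like this against underflow (this is just a sketch of the pattern, not pond's code; `decrementIfPositive` is a hypothetical helper) is a `CompareAndSwap` loop that refuses to decrement past zero:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// decrementIfPositive decrements *counter only when it is non-zero,
// retrying with CompareAndSwap if another goroutine races with us.
// It returns false when the counter was already 0, so the value can
// never wrap around to 2^64-1.
func decrementIfPositive(counter *uint64) bool {
	for {
		old := atomic.LoadUint64(counter)
		if old == 0 {
			return false // nothing to decrement; avoid underflow
		}
		if atomic.CompareAndSwapUint64(counter, old, old-1) {
			return true
		}
	}
}

func main() {
	var waiting uint64 = 1
	fmt.Println(decrementIfPositive(&waiting), waiting) // true 0
	fmt.Println(decrementIfPositive(&waiting), waiting) // false 0
}
```

The trade-off versus a plain `AddUint64` is an extra load and a retry loop under contention, but the counter stays meaningful even if a decrement races past zero.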
Hey @git-blame,

Yes, pools can be shared across goroutines without using a mutex. Regarding the …

Thanks for reaching out 🙂
I'm just testing with processing a large number of concurrent POST requests. I'm trying to find the point in my system when the pool queue becomes "too large" (e.g., would take X seconds to process) and switch to writing the requests to disk for later processing. That's why I'm checking …

I guess my in-memory processing is sufficiently fast that there are rarely any tasks in the queue, yet there are enough concurrent calls to … |
I see. Yes, for that use case you need a reliable counter. I just submitted a fix for this particular scenario (see https://github.com/alitto/pond/pull/14/files) which ensures the waitingTasks counter never wraps around. What I'm doing there is essentially changing the order in which I increment and decrement the counter, to guarantee the increment always executes before the decrement. The version that includes the fix is v1.5.1. |
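A minimal sketch of that ordering guarantee (illustrative only, not the actual PR #14 code): the submitter increments the waiting counter *before* handing the task to a worker, and the worker decrements only *after* receiving it, so every decrement is preceded by a matching increment and the counter cannot wrap below zero.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runWithOrderedCounter submits n no-op tasks to a single worker.
// The submitter increments the waiting counter before each hand-off,
// and the worker decrements it only after receipt, so every decrement
// is preceded by a matching increment. Returns the final counter.
func runWithOrderedCounter(n int) uint64 {
	var waiting uint64
	tasks := make(chan struct{}, n)
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done()
		for range tasks {
			// Decrement after receipt: the matching increment has
			// already happened on the submitting side.
			atomic.AddUint64(&waiting, ^uint64(0))
		}
	}()

	for i := 0; i < n; i++ {
		atomic.AddUint64(&waiting, 1) // increment first...
		tasks <- struct{}{}           // ...then hand off the task
	}
	close(tasks)
	wg.Wait()
	return atomic.LoadUint64(&waiting)
}

func main() {
	fmt.Println(runWithOrderedCounter(5)) // 0
}
```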
Great. No more underflow counts. Thanks.
Dumb question: can tasks be submitted concurrently to a pool, or do I need to use a mutex to control access to it?

I'm adding tasks concurrently. I'm also debugging by printing out the total tasks as well as the waiting tasks (which I call load), and the numbers are all over the place.