Query the maximum available parallelism #31
Some kind of API like JS's `navigator.hardwareConcurrency` would be useful.
Sure. And, yeah, it sounds like you are leaning toward b), exposing a function that returns the "number of threads that will run concurrently." But do you also see a need for a), a separate function to return the "maximum number of threads spawn-able"?
I don't think "maximum number of threads spawn-able" is very useful, or practical to implement. It doesn't map to any underlying OS primitive that I know of. I guess it would fall into a general category of resource limits, along with things like "max open file descriptors", which WASI also doesn't currently support. Actually, I was wrong: on Linux you can query such a limit.
Some apps are using such an API. OTOH, I agree the max number of threads is not that useful.
For some algorithms, in order to partition work on several threads, we need to know how many threads are available. WebAssembly hosts may decide to limit the amount of parallelism available to a wasi-threads module; in other words, the host could stop spawning threads at some threshold as a way to limit a module's resource consumption. In this scenario, I can think of two options for discovering how many threads are possible:

1. spawn threads, each immediately entering a `wait` state, until the host returns an error code indicating that no more threads can be created
2. expose a function that returns this number directly

This second option seems to me to be more flexible (it doesn't require `spawn`-ing and the associated counting machinery). Now, it could be nice to differentiate between a) the "maximum number of threads `spawn`-able" and b) the "number of threads that will run concurrently." One could imagine a host allowing many spawned threads but only allocating a limited number of cores on which to run them. To accommodate both a) and b) we could expose two functions instead of one; I'm not sure how useful a) is, though, since b) is likely what one wants to know to properly partition work.

A function implementing a) could look like:
Any thoughts on adding such a function? Or should we query limits via "spawn until error"?