Elastic Dispatchers.IO unbounded for limitedParallelism #2943
qwwdfsad added a commit that referenced this issue on Sep 21, 2021:
* Extract Ktor-obsolete API to a separate file for backwards compatibility
* Make Dispatchers.IO being a slice of unlimited blocking scheduler
* Make Dispatchers.IO.limitParallelism take slices from the same internal scheduler

Fixes #2943
yorickhenning pushed a commit to yorickhenning/kotlinx.coroutines that referenced this issue on Oct 14, 2021:
….IO unbounded for limited parallelism (Kotlin#2918)

* Introduce CoroutineDispatcher.limitedParallelism for granular concurrency control
* Elastic Dispatchers.IO:
  * Extract Ktor-obsolete API to a separate file for backwards compatibility
  * Make Dispatchers.IO being a slice of unlimited blocking scheduler
  * Make Dispatchers.IO.limitParallelism take slices from the same internal scheduler

Fixes Kotlin#2943
Fixes Kotlin#2919
pablobaxter pushed a commit to pablobaxter/kotlinx.coroutines that referenced this issue on Sep 14, 2022 (same commit message as above).
Dispatchers.IO legacy
Prior to #2919 and the upcoming 1.6.0 release, kotlinx.coroutines had no API to limit parallelism without creating excessive threads. `Dispatchers.IO` was meant to be a dispatcher for "any IO", similar to Rx's `io()` scheduler and `newCachedThreadPool`. But an unlimited number of threads is a brittle and dangerous pitfall: the performance of an overloaded application degrades slowly but without bound, eventually ending in out-of-memory errors.
Because of that, it was decided to impose a "good enough" upper limit (64 seemed to be a reasonable constant at the time, especially for Android).
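The legacy cap can be pictured with the following analogy (this is not the actual scheduler implementation, just an illustration; the default of 64 can be raised via the `kotlinx.coroutines.io.parallelism` system property):

```kotlin
import kotlinx.coroutines.asCoroutineDispatcher
import java.util.concurrent.Executors

// Analogy only: legacy Dispatchers.IO behaves roughly like one shared
// 64-thread pool, and every blocking IO task in the process competes
// for these same 64 threads.
val legacyIoAnalogy =
    Executors.newFixedThreadPool(64).asCoroutineDispatcher()
```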
After #2919, along with `newFixedThreadPoolContext` becoming `@DelicateCoroutinesApi`, we would advocate using e.g. `Dispatchers.IO.limitedParallelism(myDbConnectionPoolSize)` instead of `newFixedThreadPoolContext(myDbConnectionPoolSize)`.
Such a recommendation has a fatal flaw: nowadays the number of threads allocated for DB connections, blocking IO, and networking is measured in hundreds, so as soon as a user has enough "limited" slices of `Dispatchers.IO`, the system starts starving, but only during peak loads. It's almost impossible to catch such behavior during code review, (regular) testing, or regular operation. In short, this behavior is a timebomb that will explode at the least expected moment.

Proposed change
To avoid degradation during peak loads, while still providing a fluent and hard-to-misuse API, the following changes will be implemented:

* `Dispatchers.IO` will still be limited to 64 threads, with the limit acting as a fail-safe mechanism
* `Dispatchers.IO.limitedParallelism(n)` won't be bounded by the hard limit of the original `Dispatchers.IO`, but will instead be limited only by the given parallelism
* `Dispatchers.IO` will still conceptually be backed by the same thread pool, leveraging both a reduced number of threads and a reduced number of context switches thanks to its built-in integration with `Dispatchers.Default`
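Under the proposed semantics, a dedicated slice for, say, a database connection pool could look like the sketch below (`DB_POOL_SIZE` is a hypothetical application constant, not part of the library):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

const val DB_POOL_SIZE = 100 // hypothetical: matches the JDBC pool size

// Up to 100 threads for DB calls, no longer capped by Dispatchers.IO's
// 64-thread limit, while plain Dispatchers.IO users keep their fail-safe cap.
val dbDispatcher = Dispatchers.IO.limitedParallelism(DB_POOL_SIZE)

suspend fun <T> dbCall(block: () -> T): T =
    withContext(dbDispatcher) { block() }
```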
Basically, the mental model (but not the implementation!) is the following:
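Based on the referenced commits ("Make Dispatchers.IO being a slice of unlimited blocking scheduler"), the mental model can be sketched roughly as follows (Kotlin-like pseudocode; `UnlimitedIoScheduler` is a name used here purely for illustration):

```kotlin
// Pseudocode, not the real implementation:
// an unbounded internal blocking scheduler backs everything...
internal val UnlimitedIoScheduler: CoroutineDispatcher = TODO("internal")

// ...Dispatchers.IO is just a 64-parallelism slice of it...
val IO = UnlimitedIoScheduler.limitedParallelism(64)

// ...and IO.limitedParallelism(n) slices the *unlimited* scheduler,
// not the 64-thread slice, so n is the only effective limit.
fun ioWithParallelism(n: Int) = UnlimitedIoScheduler.limitedParallelism(n)
```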