[rfc] new scheduler #32
Thanks for the proposal! Here are some comments on your points.
We still need configuration for the coroutine stack size, so we can't avoid the config entirely. And it's not a bad thing that users can adjust and tune it for their own application.
Yes, there is a contention point here and it does not scale well. I used to have a queue for each worker thread, but the result was not as good as I expected. Each worker needed to steal from all the other worker threads' queues, which is a little time consuming, but maybe my implementation was not good enough. We can definitely improve here 😄
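For reference, here is a minimal sketch of the per-worker-queue pattern being discussed, using the crossbeam-deque crate. This is only an illustration of the stealing order (local queue, then a shared injector, then peers), not may's actual implementation; `Task`, `find_task`, and the queue layout are assumptions.

```rust
// Cargo.toml (assumed): crossbeam-deque = "0.8"
use crossbeam_deque::{Injector, Steal, Stealer, Worker};

// Placeholder task type; in may this would be a coroutine handle.
type Task = Box<dyn FnOnce() + Send>;

// Local queue first (no contention), then the shared injector, then peers.
fn find_task(
    local: &Worker<Task>,
    global: &Injector<Task>,
    stealers: &[Stealer<Task>],
) -> Option<Task> {
    if let Some(task) = local.pop() {
        return Some(task);
    }
    loop {
        let mut retry = false;
        // Refill our local queue from the global injector in one batch.
        match global.steal_batch_and_pop(local) {
            Steal::Success(task) => return Some(task),
            Steal::Retry => retry = true,
            Steal::Empty => {}
        }
        // Fall back to stealing from the other workers' queues.
        for stealer in stealers {
            match stealer.steal() {
                Steal::Success(task) => return Some(task),
                Steal::Retry => retry = true,
                Steal::Empty => {}
            }
        }
        if !retry {
            return None;
        }
    }
}
```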
The global timer thread just throws the timed-out coroutines into the ready list; it doesn't run any coroutines itself. The delay is not significant here since the coroutine has already timed out.
The I/O worker thread has its own timeout checking, so it doesn't contend with any other threads, and it runs coroutines directly on its own instead of pushing them to the ready list, so there is no contention here either. It would be good to have a more advanced scheduler. I'd like to see that happen; let's test and compare whether it's better.
The current scheduler has the following problems:
Both 3) and 4) cause unnecessary OS context switches whenever there is a timer expiry or an I/O poll.
I propose the following design for a new scheduler, which I plan to implement. This is a request for comments. My understanding of may is not that deep, so it is possible that some things won't work :-)
The scheduling loop will look like this:
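(The code block from the original post is not reproduced here, so what follows is only a rough, hypothetical sketch of the shape such a loop could take: plain Mutex/Condvar queues, a placeholder `Coroutine` type, and made-up names like `worker_loop` and `try_steal`. None of this is may's actual API.)

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};

// Placeholder for a runnable coroutine handle; in may this would be the real type.
struct Coroutine;
impl Coroutine {
    fn run(self) { /* resume the coroutine until it yields or completes */ }
}

// One worker's ready queue plus a condvar used to wake it when new work arrives.
struct Shared {
    ready: Mutex<VecDeque<Coroutine>>,
    wakeup: Condvar,
}

fn worker_loop(shared: Arc<Shared>, peers: Vec<Arc<Shared>>) {
    loop {
        // 1. Drain our own ready queue, releasing the lock before running each coroutine.
        loop {
            let next = shared.ready.lock().unwrap().pop_front();
            match next {
                Some(co) => co.run(),
                None => break,
            }
        }
        // 2. Nothing local: try to steal from a peer (see the spin strategy below).
        if let Some(co) = try_steal(&peers) {
            co.run();
            continue;
        }
        // 3. Still nothing: sleep until the timer/I/O thread or another worker
        //    pushes work into our queue and notifies the condvar.
        let guard = shared.ready.lock().unwrap();
        let _guard = shared.wakeup.wait_while(guard, |q| q.is_empty()).unwrap();
    }
}

// One pass over the other workers' queues; a real implementation would
// randomize the starting point to spread contention.
fn try_steal(peers: &[Arc<Shared>]) -> Option<Coroutine> {
    for peer in peers {
        if let Some(co) = peer.ready.lock().unwrap().pop_back() {
            return Some(co);
        }
    }
    None
}
```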
To make stealing fast and avoid spurious park()/unpark(), we spin for a while trying to steal and then give up.
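A minimal sketch of that spin-then-park strategy (the `steal_once` helper and `max_spins` knob are hypothetical, and real code would also have to handle the race where work arrives just before parking):

```rust
use std::hint::spin_loop;
use std::thread;

// Hypothetical helper: one steal attempt across all peer queues.
fn steal_once() -> bool {
    // ... scan the other workers' queues for a coroutine ...
    false
}

/// Spin for a bounded number of steal attempts before parking, so that short
/// idle gaps don't pay for an OS-level park()/unpark() round trip.
fn steal_or_park(max_spins: u32) {
    for _ in 0..max_spins {
        if steal_once() {
            return; // found work, go back to the run loop
        }
        // Hint to the CPU that we're busy-waiting before the next attempt.
        spin_loop();
    }
    // Give up: block until whoever enqueues new work calls Thread::unpark() on us.
    thread::park();
}
```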