docs: Task scheduling fairness and acquire-lock yield #6049
Comments
The fairness of the mutex refers to the case where many tasks are waiting to lock it. In that case, it's guaranteed to give the lock to the task that has been waiting the longest. It's not talking about tasks that are not using the mutex.

That's not the intended meaning of the sentence.

Actually, this cannot happen. If there is another task waiting for the mutex, then your task will fail to acquire the lock.
What I think can be improved in the current docs:

If the scheduler deeply favors my task, then other tasks don't even get a chance to wait for the mutex. Only after you mentioned the no-starvation guarantees does this become impossible.
For the simple case of two tasks that want to acquire a mutex, this is not the case: the scheduler will never poll a task that hasn't notified its waker, so the "deeply favored task" must be skipped by the runtime after polling it once. So what you're saying is only really true in this kind of scenario:

```rust
tokio::join!(
    mutex.lock(),
    async { loop { yield_now().await; } }
);
```

Granted, we do guarantee fairness even in the above case.
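To make the fairness guarantee under discussion concrete, here is a minimal, self-contained sketch of FIFO lock hand-off: waiters are queued in arrival order, and releasing always hands the lock to the longest waiter, so a task that releases and immediately re-requests cannot jump the queue. The names (`FairLock`, `try_lock`, `unlock`) and the design are hypothetical illustrations, not tokio's actual implementation or API.

```rust
use std::collections::VecDeque;

// A toy model of a fair (FIFO) lock. Waiters queue in arrival order;
// the lock is always handed to the front of the queue on release.
// Illustrative sketch only, not tokio's Mutex internals.
struct FairLock {
    held_by: Option<usize>,       // id of the task holding the lock
    waiters: VecDeque<usize>,     // waiting task ids, front = longest waiter
}

impl FairLock {
    fn new() -> Self {
        FairLock { held_by: None, waiters: VecDeque::new() }
    }

    // A task either acquires the lock immediately (only when it is free
    // AND nobody is queued) or joins the back of the wait queue.
    fn try_lock(&mut self, task: usize) -> bool {
        if self.held_by.is_none() && self.waiters.is_empty() {
            self.held_by = Some(task);
            true
        } else {
            self.waiters.push_back(task);
            false
        }
    }

    // Releasing hands the lock directly to the longest waiter, so the
    // releasing task cannot re-acquire ahead of tasks already queued.
    fn unlock(&mut self) -> Option<usize> {
        self.held_by = self.waiters.pop_front();
        self.held_by
    }
}

fn main() {
    let mut lock = FairLock::new();
    assert!(lock.try_lock(1));              // task 1 acquires, uncontended
    assert!(!lock.try_lock(2));             // task 2 must wait
    assert_eq!(lock.unlock(), Some(2));     // hand-off goes to task 2
    assert!(!lock.try_lock(1));             // task 1 re-requests, queued behind holder
    assert_eq!(lock.unlock(), Some(1));     // now task 1 gets its turn
    println!("FIFO hand-off order verified");
}
```

Under this model, the "acquire-release the mutex infinitely" worry from the thread cannot starve anyone: the moment a second task is queued, the first task's re-acquisition lands behind it.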
The current docs on task scheduling have nothing about fairness on the module page. I found some explanation on the `yield_now` page, but it doesn't tell me whether this applies only to the `yield_now` function or to yields in general.

When coupled with another topic, `Mutex`, this becomes very confusing. The `Mutex` docs mention that it is fair, and the `lock` method docs literally say that a task acquiring the `Mutex` yields every time, even if the lock can be acquired right now. Since the runtime doesn't offer a strong guarantee about which task is scheduled next, it could happen that the same task is always scheduled and acquire-releases the mutex infinitely. Then saying the mutex is fair is pointless, because other tasks never even get a chance to be scheduled.