
Implement a fairer locking strategy for a new mutex type #130

Merged · 7 commits merged into mvdnes:master from fairer on Oct 30, 2022
Conversation

@notgull (Contributor) commented on Oct 17, 2022

This PR implements a locking strategy for the SpinMutex type that uses eventual fairness, inspired by the one used in the async-lock crate. If a lock operation spins for a long period of time, it becomes a "starving" operation. When the mutex is eventually unlocked, starving locks are given precedence over non-starving locks. This is similar to a less strict version of TicketMutex, in that it reduces contention in the worst cases.

The only real downside of this strategy is that it uses an AtomicUsize instead of an AtomicBool, in order to track the number of starving operations in addition to the lock status.
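To make the mechanism concrete, here is a minimal sketch of the idea (not the PR's actual code): the low bit of the AtomicUsize is the lock flag and the remaining bits count starving waiters. The constant names, spin threshold, and memory orderings below are illustrative assumptions.

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

// Illustrative layout: the low bit is the lock flag and the remaining
// bits count starving waiters. These names and the spin threshold are
// placeholders, not values taken from the PR.
const LOCKED: usize = 1;
const STARVED_UNIT: usize = 2; // one unit per starving waiter
const SPIN_LIMIT: u32 = 1_000; // spins before a waiter declares itself starving

pub struct FairSpinMutex {
    state: AtomicUsize,
}

impl FairSpinMutex {
    pub const fn new() -> Self {
        Self { state: AtomicUsize::new(0) }
    }

    pub fn lock(&self) {
        let mut spins = 0u32;
        let mut starving = false;
        loop {
            let state = self.state.load(Ordering::Relaxed);
            let unlocked = state & LOCKED == 0;
            let no_starvers = state & !LOCKED == 0;
            // Non-starving waiters stand aside whenever any starving
            // waiter exists, giving starvers first claim on the lock.
            if unlocked && (starving || no_starvers) {
                // A starving waiter removes its unit from the starving
                // count in the same CAS that takes the lock.
                let drop_unit = if starving { STARVED_UNIT } else { 0 };
                let locked_state = (state - drop_unit) | LOCKED;
                if self
                    .state
                    .compare_exchange_weak(state, locked_state, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
                {
                    return;
                }
            }
            if !starving {
                spins += 1;
                if spins > SPIN_LIMIT {
                    // Spun too long: register as a starving waiter.
                    self.state.fetch_add(STARVED_UNIT, Ordering::Relaxed);
                    starving = true;
                }
            }
            core::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        self.state.fetch_and(!LOCKED, Ordering::Release);
    }
}
```

The point of folding both counters into one word is that a starving waiter can take the lock and decrement the starving count in a single compare-and-swap, while the uncontended fast path remains a single CAS on an otherwise-zero word.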

@zesterer (Collaborator) commented on Oct 17, 2022

Thanks for implementing this! I wonder whether this should be a separate implementation with the top-level Mutex type alias pointing to it?

@notgull (Contributor, Author) commented on Oct 17, 2022

Alright, I merged it out into its own file. Do you want me to add a feature to make the main Mutex type use it as well? I'd be fine without.
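For readers following along, the feature-flag idea amounts to repointing a crate-level type alias so downstream code is unchanged. A hypothetical sketch, with an invented feature name and stand-in types rather than the crate's actual layout:

```rust
use core::cell::UnsafeCell;

// Stand-ins for the two implementations; in the real crate each lives
// in its own module under src/mutex/.
pub struct SpinMutex<T>(UnsafeCell<T>);
pub struct FairMutex<T>(UnsafeCell<T>);

// The alias downstream users name. A feature flag (name invented here)
// selects which implementation backs it, so user code is unchanged.
#[cfg(feature = "fair_mutex")]
pub type Mutex<T> = FairMutex<T>;

#[cfg(not(feature = "fair_mutex"))]
pub type Mutex<T> = SpinMutex<T>;
```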

@notgull changed the title from "Implement a fairer locking strategy for SpinMutex" to "Implement a fairer locking strategy for a new mutex type" on Oct 18, 2022
@zesterer (Collaborator) commented

Thanks, this is great. I'm hoping I'll get time to review this this coming weekend. Regarding feature flags: I might do some rearranging of things after this PR has been merged, so don't worry about that.

@notgull (Contributor, Author) commented on Oct 18, 2022

Thank you!

> I might do some rearranging of things after this PR has been merged

I'd like to contribute to this crate more, so let me know if you want help with this "rearranging".

@notgull (Contributor, Author) commented on Oct 29, 2022

@zesterer Have you had a chance to review this PR yet?

Three review threads on src/mutex/fair.rs (all resolved; one marked outdated).
@zesterer (Collaborator) left a review comment


After spending some time reading through the implementation, I'm fairly confident in the code. Just a few more minor questions (I hope you understand: this is quite a large change, if not in terms of lines of code, then in terms of the number of users across the ecosystem).

@notgull (Contributor, Author) commented on Oct 30, 2022

(You probably want to squash this, by the way.)

@zesterer (Collaborator) commented

Thanks!

@zesterer merged commit ef6e68f into mvdnes:master on Oct 30, 2022
@notgull deleted the fairer branch on October 30, 2022
zesterer pushed a commit to zesterer/spin-rs referencing this pull request on Mar 27, 2023:
* Implement a fairer locking strategy.

* Merge out `FairMutex` into its own file.

* Add feature documentation

* Move increment after the check.

* Fix MSRV build

* Missed a couple