
Improve the rate of thread injection for blocking due to sync-over-async #53471

Merged
kouvel merged 12 commits into dotnet:main from kouvel:TpTaskWait on Jun 9, 2021

Conversation

kouvel
Member

@kouvel kouvel commented May 30, 2021

  • Fixes #52558 (Improve the rate of thread injection for blocking due to sync-over-async)
  • Some miscellaneous changes:
    • _minThreads and _maxThreads were being modified inside their own lock and used inside the hill climbing lock, so it made sense to merge the two locks
    • Separated NumThreadsGoal from ThreadCounts into its own field to simplify some code. The goal is an estimated target and doesn't need to be in perfect sync with the other values in ThreadCounts, and it was already only modified inside a lock. (A rough sketch of the resulting layout follows this list.)
    • Removed some unnecessary volatile accesses to simplify
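
For illustration only, here is a rough sketch of the layout change described in the list above. The field names, bit layout, and surrounding type are assumptions made for this sketch, not the actual runtime code:

```csharp
// Rough sketch only: names, bit layout, and types are assumptions, not the runtime code.
internal struct ThreadCounts
{
    // With NumThreadsGoal moved out, the two remaining counts fit in a single uint
    // and can still be read and updated together atomically.
    private uint _data;

    public short NumProcessingWork => (short)(_data & 0xFFFF);
    public short NumExistingThreads => (short)(_data >> 16);
}

internal sealed class ThreadPoolSketch
{
    private ThreadCounts _counts;

    // The goal is only an estimated target and is only written while holding the
    // thread pool lock, so it does not need to live inside the packed ThreadCounts.
    private short _numThreadsGoal;
}
```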

@kouvel kouvel added this to the 6.0.0 milestone May 30, 2021
@kouvel kouvel self-assigned this May 30, 2021
@ghost

ghost commented May 30, 2021

Tagging subscribers to this area: @mangod9
See info in area-owners.md if you want to be subscribed.

Issue Details

Fixes #52558

Author: kouvel
Assignees: kouvel
Labels: area-System.Threading

Milestone: 6.0.0

@kouvel
Member Author

kouvel commented May 30, 2021

  • Checked perf on thread pool overhead tests on x64 and arm64, no significant difference. Checked perf on ASP.NET platform benchmarks on x64, no significant difference.
  • Verified config vars are working as expected
  • Verified throttling rate of thread injection in low-memory situations with Windows job objects and Linux docker containers
  • Checked some cases involving interaction with starvation and hill climbing heuristics, verified behavior is appropriate. Solution is not quite ideal until we also fix the starvation heuristic, but I tried to make sure that the likelihood of a new issue is low.
  • Checked the relevant cases from https://github.com/davidfowl/AspNetCoreDiagnosticScenarios and verified that the thread pool is more responsive to compensate for the sync-over-async blocking work
  • The defaults for the config vars come from a reasonable guess based on brief prior discussions on the topic. There are good reasons for the limits, but we are also not trying to make every really bad real-world scenario involving sync-over-async work really well by default; it's a realistic expectation that the really bad cases would involve some configuration. In cases where sync-over-async is the only type of blocking happening on thread pool worker threads, the expectation is that the new config vars would work better than setting a high min worker thread count, because this solution uses cooperative blocking and can adjust the active thread count up and down appropriately. Setting a high min thread count, on the other hand, is not a great workaround because it keeps that many threads active at all times, which is not ideal (see the sketch below).
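
For context, here is a minimal sketch of the kind of sync-over-async blocking being discussed and of the old high-min-thread-count workaround that the new config vars are meant to replace. It is illustrative only; the types, URL, and numbers are not taken from the scenarios repository or from this PR:

```csharp
// Illustrative only: a minimal sync-over-async pattern and the old MinThreads workaround.
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class SyncOverAsyncExample
{
    static readonly HttpClient s_client = new HttpClient();

    // Each of these work items blocks a thread pool worker thread until the async
    // operation completes; with many of them queued at once, the pool has to inject
    // threads quickly or throughput collapses.
    static string GetPageBlocking(string url) =>
        s_client.GetStringAsync(url).GetAwaiter().GetResult();

    static void Main()
    {
        // The old workaround: keep a large number of worker threads available up front.
        // This keeps that many threads active at all times, which is what the
        // cooperative-blocking config vars are intended to avoid.
        ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);

        // Many work items blocking synchronously on async I/O at the same time.
        Parallel.For(0, 100, _ => GetPageBlocking("https://example.com/"));
    }
}
```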

@kouvel kouvel requested a review from marek-safar as a code owner May 30, 2021 02:14
@davidfowl
Member

davidfowl commented May 30, 2021

What do you think about doing this in Monitor.Wait as well?

@kouvel
Member Author

kouvel commented May 30, 2021

What do you think about doing this in Monitor.Wait as well?

In my opinion, a monitor is too basic a synchronization primitive to assume that waiting on one would always warrant compensation. For instance, it's often not beneficial or preferable to add threads to compensate for threads that are blocked waiting to acquire a lock.
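
As an illustration of that point (this example is not from the PR), consider many work items contending on a single lock: the lock serializes the work, so injecting additional worker threads would mostly add more waiters rather than more throughput:

```csharp
// Illustrative only: blocking while waiting to acquire a contended lock is a case
// where injecting more thread pool threads typically would not help much, because
// the critical section serializes the work regardless of how many threads wait.
using System;
using System.Threading.Tasks;

class ContendedLockExample
{
    static readonly object s_gate = new object();
    static long s_counter;

    static void Main()
    {
        Task[] tasks = new Task[64];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int j = 0; j < 100_000; j++)
                {
                    lock (s_gate) // most threads simply wait here
                    {
                        s_counter++;
                    }
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine(s_counter); // 64 * 100_000
    }
}
```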

kouvel added a commit to kouvel/dotnet-diagnostics that referenced this pull request May 31, 2021
- Depends on dotnet/runtime#53471
- In the above change `ThreadCounts._data` was changed from `ulong` to `uint`. Its usage in SOS was reading 8 bytes and only using the lower 4 bytes. Updated to read 4 bytes instead. No functional change, just updated to match.
kouvel added a commit to kouvel/perfview that referenced this pull request May 31, 2021
- Depends on dotnet/runtime#53471
- The change above added a new reason for thread adjustment (`CooperativeBlocking`), added here too
```csharp
(uint)AppContextConfigHelper.GetInt32Config(
    "System.Threading.ThreadPool.Blocking.MaxDelayUntilFallbackMs",
    250,
    false);
```
Member


There's a lot of policy in here, and a lot of knobs to go with it. Do you have a sense for how all of this is going to behave in the real world, and if/how someone would use these knobs effectively? How did you arrive at this specific set, and at the defaults employed?

Member Author

@kouvel kouvel Jun 2, 2021


Some of the criteria used:

  • Have a good replacement for setting the MinThreads as a workaround
    • The equivalent would now be to set ThreadsToAddWithoutDelay to the same value and set MaxThreadsToAddBeforeFallback to something higher, to give some buffer for spikes that may need more threads
    • MaxThreadsToAddBeforeFallback could also be set to a large value to effectively remove the heuristic's limit
  • Use progressive delays to avoid creating too many threads too quickly (a rough sketch of this idea follows this list)
    • Without that, it would be conceivable that MaxThreadsToAddBeforeFallback threads would be created in short order to respond to even a short sync-over-async IO before the IO even completes (if there are that many work items that would block on the async work)
    • The delay also helps to create a high watermark of how many threads were necessary last time to unblock, so that when there's a limit to how many work items would block, it would quickly release existing waiting threads to let other work be done meanwhile
    • The larger the number of threads that get unblocked all at once, the higher the latency of their processing would be after unblocking. There's probably not a good solution to this.
    • Ideally, it would not require as many threads for an async IO completion to unblock waiting threads, somewhat like on Windows where a separate pool of threads handles IO completions; this needs some experimentation
    • It's not always clear that adding more threads would help, especially in starvation-type cases
  • No one set of defaults will work well for all cases, so start with conservative defaults
    • The current defaults are much more aggressive than before
    • The MaxThreadsToAddBeforeFallback_ProcCountFactor of 10 came from a prior discussion where we felt that adding 10x the proc count relatively quickly may not be too bad
    • The defaults can easily be made more aggressive later, but it would be difficult to make them more conservative, since apps that work well with the defaults might then stop working well without configuration
    • The really bad cases, where many hundreds or even thousands of threads need to be created, will likely require configuring for the app's situation based on the expected workload and how bad it can get
  • Make things sufficiently configurable
    • It would have been nice to also make the delay threshold for detecting starvation, and the delay used to add threads during continuous starvation, configurable. For now, for sync-over-async, the delay and the rate at which delays progress can be adjusted.
    • As with the hill climbing config values, these don't have to be used, but it can be helpful to have the freedom to configure them
    • For most users running into bad blocking due to sync-over-async, I expect I would suggest configuring ThreadsToAddWithoutDelay and MaxThreadsToAddBeforeFallback, and perhaps MaxDelayUntilFallbackMs to control the thread injection latency for spikes
  • I intend to use the same config values (maybe with a couple of others) for improving the starvation heuristic similarly in the future
    • Starvation is a bit different and may need a few quirks, but hopefully we can use something similar
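
To make the progressive-delay idea above more concrete, here is a rough sketch of one possible shape for such a delay calculation. The parameter names and the formula are assumptions made for illustration, not the PR's actual implementation:

```csharp
// Rough sketch only: names and formula are assumptions, not the PR's implementation.
using System;

static class ProgressiveDelaySketch
{
    // Returns how long to wait before injecting the next worker thread while
    // cooperative blocking is in progress.
    public static int GetDelayBeforeNextThreadMs(
        int threadsAddedDuringBlocking, // threads already added in response to the blocking
        int threadsToAddWithoutDelay,   // e.g. a small multiple of the processor count
        int threadsPerDelayStep,        // how many threads share a delay before it grows
        int delayStepMs,                // delay increment per step
        int maxDelayMs)                 // cap on the per-thread delay
    {
        if (threadsAddedDuringBlocking < threadsToAddWithoutDelay)
        {
            return 0; // the first batch of threads is added immediately
        }

        // Each further "step" of threads waits a bit longer before another thread is
        // injected, so a short blocking spike does not create a large burst of threads,
        // while longer-lasting blocking still eventually gets more threads.
        int steps = (threadsAddedDuringBlocking - threadsToAddWithoutDelay) / threadsPerDelayStep + 1;
        long delayMs = (long)steps * delayStepMs;
        return (int)Math.Min(delayMs, maxDelayMs);
    }
}
```

In a sketch like this, the first batch of threads is injected with no delay, and each subsequent group waits one more delay increment, up to the configured maximum.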

Member Author


Decided to remove MaxThreadsToAddBeforeFallback and rename MaxDelayUntilFallbackMs to MaxDelayMs in the latest commit. The max threads limit before falling back to starvation seems unnecessary; it would be unlimited for starvation anyway. Now the only time it would fall back to starvation is in low-memory situations.

@kouvel
Member Author

kouvel commented Jun 2, 2021

Rebased to fix conflicts

@kouvel
Member Author

kouvel commented Jun 7, 2021

I believe I have addressed the feedback shared so far, any other feedback?

Member

@mangod9 mangod9 left a comment


LGTM, assuming that some simple scenarios which were exhibiting deadlocks are now more responsive, and there don't seem to be any other regressions?

Also would be good to create a doc issue for the new configs.

@kouvel
Member Author

kouvel commented Jun 8, 2021

LGTM, assuming that some simple scenarios which were exhibiting deadlocks are now more responsive, and there don't seem to be any other regressions?

Thanks! Yes, some simple scenarios involving sync-over-async are much more responsive by default, and they can be configured sufficiently well to work around high levels of blocking if necessary. I haven't seen any regressions in what I tested above.

Filed dotnet/docs#24566 for updating docs.

@kouvel kouvel merged commit a7a2fd6 into dotnet:main Jun 9, 2021
@kouvel kouvel deleted the TpTaskWait branch June 9, 2021 18:13
@davidfowl
Member

Looking forward to this!

kouvel added a commit to dotnet/diagnostics that referenced this pull request Jun 10, 2021
- Depends on dotnet/runtime#53471
- In the above change `ThreadCounts._data` was changed from `ulong` to `uint`. Its usage in SOS was reading 8 bytes and only using the lower 4 bytes. Updated to read 4 bytes instead. No functional change, just updated to match.
@ghost ghost locked as resolved and limited conversation to collaborators Jul 10, 2021