32 bit memory getters on Process should not wrap #46901
Tagging subscribers to this area: @eiriktsarpalis

Issue Details

These property getters on the Process type were obsoleted (somewhere between .NET Framework 2.0 and .NET Framework 4.0 inclusive) because their return type is `int`, which is frequently too small. They are implemented to do an unchecked cast of the actual `long` value to `int`. That is frequently meaningless -- for example, the values we see from a dotnet process on an Ubuntu machine our tests run on. The negative values have no conceivable use, and at first I assumed we had a bug in how we read `/proc/PID/stat`.

These properties exist so that code written a long time ago can either log the value or make a decision. I wonder if it might be better to instead return Int32.MaxValue when the value exceeds the range of an `int`. That is suggestive that the range has been exceeded, and any code making a decision based on the value may be less likely to misbehave.
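A minimal sketch of the two behaviors under discussion. The method names here are hypothetical stand-ins for the real getters on `System.Diagnostics.Process`; the point is only to contrast the current unchecked wrap with the proposed saturation at `Int32.MaxValue`:

```csharp
using System;

class MemoryGetterSketch
{
    // Current behavior (sketch): the 64-bit value is wrapped into an int,
    // producing a meaningless negative number once it exceeds Int32.MaxValue.
    public static int WrappingGetter(long actual) => unchecked((int)actual);

    // Proposed behavior (sketch): saturate at Int32.MaxValue so legacy
    // callers see a value that at least suggests "range exceeded".
    public static int SaturatingGetter(long actual) =>
        actual > int.MaxValue ? int.MaxValue : (int)actual;

    static void Main()
    {
        long workingSet = 3L * 1024 * 1024 * 1024; // 3 GB, exceeds int range
        Console.WriteLine(WrappingGetter(workingSet));   // -1073741824
        Console.WriteLine(SaturatingGetter(workingSet)); // 2147483647
    }
}
```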
cc @marklio in case he has an opinion. My guess is there is zero existing code that will handle a negative number here; if authors were aware of that possibility, they would have changed to use the newer properties. Using my imagination, I could picture this scenario: someone is using very old code for logging, and it logs the negative number. Nobody wants to change that code. The actual value for some reason is never larger than 2^32, so they reverse engineer the actual value from the negative number. That seems a bit absurd, though.
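For what it's worth, the "reverse engineer the actual value" scenario above is only possible while the true value stays under 2^32: the wrapped negative `int` then still encodes the value exactly, and reinterpreting it as unsigned recovers it. A sketch (hypothetical helper, not part of any real API):

```csharp
using System;

class WrapRecovery
{
    // Valid only while the true value fits in 32 unsigned bits (< 4 GB):
    // the negative int is just the same bit pattern read as signed.
    public static long Recover(int wrapped) => unchecked((uint)wrapped);

    static void Main()
    {
        long actual = 3_000_000_000;          // < 2^32, so recoverable
        int wrapped = unchecked((int)actual); // negative after the cast
        Console.WriteLine(wrapped);           // -1294967296
        Console.WriteLine(Recover(wrapped));  // 3000000000
    }
}
```

Once the true value passes 4 GB, the high bits are gone and no recovery is possible, which is exactly why the negative values are useless in general.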
I think that would be a reasonable improvement. cc @adamsitnik
I notice that the .NET 5 docs don't list the obsoletion (while the 4.8 docs do). We should make sure that gets fixed, or we'll accrue more risk of problems here over time. It looks like there are only a handful of callers of the non-64 versions of these properties on NuGet. Interestingly, there are a few hits in core-targeting assemblies. However, the numbers are vanishingly small compared to most compat investigations, particularly for the properties likely to exceed 32 bits. My opinion is that, given the presence of the 64-bit replacements, it isn't worth changing. If the bulk of the risk comes from existing .NET Framework libraries, then these are libraries already exposed to this problem. Is code more likely to hit such a condition on Core due to accounting differences on Linux or some other external factor? If we're worried about new code being written, we should ensure that obsoletion is working properly in the dev experience, hide the properties from IntelliSense, or even remove them from the ref assemblies.
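The "hide them from IntelliSense" option mentioned above is conventionally done with `[EditorBrowsable(EditorBrowsableState.Never)]` alongside `[Obsolete]`. A sketch of the combination on a stand-in class (the attribute message and property bodies here are illustrative, not the actual text in the runtime repo):

```csharp
using System;
using System.ComponentModel;

class ProcessLike
{
    // Obsolete keeps existing callers compiling (with a CS0618 warning),
    // while EditorBrowsable(Never) hides the member from IntelliSense
    // completion lists so new code is less likely to pick it up.
    [Obsolete("Use WorkingSet64 instead.")]
    [EditorBrowsable(EditorBrowsableState.Never)]
    public int WorkingSet => unchecked((int)WorkingSet64);

    // Placeholder 64-bit value standing in for the real measurement.
    public long WorkingSet64 => 3L * 1024 * 1024 * 1024;
}
```

Note that `EditorBrowsableState.Never` only affects the editor experience; unlike removing the member from the ref assemblies, it breaks no one.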
@marklio In my mind, the reason to change it is so that it no longer appears to be broken (as even I, who work on this code, at first assumed it was). With respect to usage, it is of course fine to use these properties if you're measuring a process that never exceeds 2 GB in whatever measure.
In .NET Core, I think the default should always be to fix/improve things, however minor, unless there are good compat reasons not to (as opposed to .NET Framework, where compat is the overriding consideration). Is that a reasonable way of thinking about it, do you think?
If they have been obsoleted for more than a few years, I believe we should not be making any investment in them. Users have had enough time to stop using them.
Such users would most likely not update to .NET Core, to say nothing of updating to .NET 6.0 when it gets released.
Do we warn about them at build time? I know we improved our obsoletion story recently, but the test library (S.R.IS.RI) that calls them builds without warnings.
They are all annotated (see runtime/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs, lines 319 to 320 at ef2a187).
I've checked that, and the obsoletion warnings are disabled on purpose in the tests' source code: runtime/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs, lines 1768 to 1770 at 5808716.
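The suppression pattern being referred to is the standard `#pragma warning disable 0618` around calls to obsolete members. A sketch of what such a call site looks like (not the exact lines from the repo; `WorkingSet`/`WorkingSet64` are the real `Process` members):

```csharp
using System;
using System.Diagnostics;

class PragmaDemo
{
    static void Main()
    {
        using Process p = Process.GetCurrentProcess();

#pragma warning disable 0618 // CS0618: 'Process.WorkingSet' is obsolete
        int old32 = p.WorkingSet;    // obsoleted 32-bit getter, may wrap
#pragma warning restore 0618

        long new64 = p.WorkingSet64; // preferred 64-bit replacement
        Console.WriteLine($"{old32} vs {new64}");
    }
}
```

Removing the 32-bit calls from the test, as discussed below, also lets the pragmas go away.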
That's my bad for commenting from my phone without looking at the code. I meant the S.R.IS.RI test that we use to log the Helix configurations. It seems @stephentoub added these; I'm not sure why. I will take those 32-bit ones out of there and remove the pragmas. With respect to what to do about the properties themselves, it's your call -- if you want to make them return MaxValue, I'm happy to make the PR, but I don't have a strong opinion either way.
It's always good to ask questions and this one led to an interesting discussion!
I hardly ever have a strong opinion besides the performance aspects, but here I would really prefer to avoid making any improvements in code that has been marked as obsolete.
Sounds good, feel free to close. I will fix up that test later.
I do wonder whether instead of
This is definitely a good idea! |