Improve build error on systems that have a 32-bit time_t #104368
Conversation
Previously, this was overflowing arithmetic between an unsigned and signed type, which is undefined behavior in C. Change the expression to use unsigned types for both operands, which also satisfies the compiler and no longer emits a warning.
Tagging subscribers to this area: @dotnet/area-system-security, @bartonjs, @vcsjones
/azp run runtime-community
Azure Pipelines successfully started running 1 pipeline(s).
You can see the original warning here: https://godbolt.org/z/Maf748MKv The resulting logic is the same between the two, but the latter has happier compiler output.
Ha, well, now the compiler is upset for a different reason. Edit: oh right,
/azp run runtime-community
Azure Pipelines successfully started running 1 pipeline(s).
```diff
@@ -235,7 +235,7 @@ void InitializeOpenSSLShim(void)
 #if defined(TARGET_ARM) && defined(TARGET_LINUX)
     // This value will represent a time in year 2038 if 64-bit time is used,
     // or 1901 if the lower 32 bits are interpreted as a 32-bit time_t value.
-    time_t timeVal = (time_t)INT_MAX + 1;
+    time_t timeVal = (time_t)((unsigned long)INT_MAX + 1u);
```
Why `unsigned long` instead of, e.g., `uint32_t`?
Now that I think about it, both are kind of wrong. This should just be `unsigned`.
Even more thinking about it, I don't know why we were doing `INT_MAX + 1`. We could just use `0x80000000` and convert it to a `time_t`. If `time_t` is 64-bit, it will be 2038. If it is 32-bit, it will be -2147483648, which is 1901. The conversion is enough to detect overflows.
/azp run runtime-community
Azure Pipelines successfully started running 1 pipeline(s).
```diff
-    time_t timeVal = (time_t)INT_MAX + 1;
+    // This value will represent a time in year 2038 if 64-bit time_t is used,
+    // or 1901 if the conversion overflowed.
+    time_t timeVal = (time_t)0x80000000U;
```
The logic was assuming we use 64-bit `time_t`. If this is 32-bit `time_t`, it can't be used to detect whether openssl was built with 64-bit `time_t` below - to do the detection, we must pass the value 0x80000000U to openssl and see how it gets interpreted.

Could we instead fix this by updating the armv6 build to use 64-bit `time_t`?
Okay. Now I understand the intention behind this check.
> The logic was assuming we use 64-bit time_t.

Is that a reasonable assumption? Should we just set `g_libSslUses32BitTime` to `1` if `sizeof(time_t) == sizeof(int32_t)` and not bother asking OpenSSL?
> Could we instead fix this by updating the armv6 build to use 64-bit time_t?
That would perhaps work for our CI, but what about other community builds where time_t is 32-bit?
> Should we just set g_libSslUses32BitTime to 1 if sizeof(time_t) == sizeof(int32_t) and not bother asking OpenSSL?

I think it's possible we built with 32-bit `time_t`, but are running on a system where openssl uses 64-bit `time_t`.

I am not sure how we decide the OS compatibility for armv6, but for the arm32 builds we updated our minimum supported glibc to one that supports 64-bit `time_t`. Could we do the same for armv6, or do the community builds need to continue targeting 32-bit `time_t`?
> I think it's possible we built with 32-bit time_t, but are running on a system where openssl uses 64-bit time_t.
But that does not work right now, either, since you said "The logic was assuming we use 64-bit time_t." If we require a 64-bit time_t, then we should static-assert it so folks get a better compilation message:

`static_assert(sizeof(time_t) == 8, "64-bit time_t is required.");`

If we do that, then we can still switch to using `(time_t)0x80000000` so that people using a 32-bit time_t do not see error messages around overflows.
That doesn't help us with what to do about the armv6 build. Are we basically saying that .NET requires a 64-bit time_t now?
> or do the community builds need to continue targeting 32-bit time_t?
I don't know much about those builds or who owns them. @ViktorHofer or @akoeplinger may know?
After jogging my memory I am remembering that this came up in #102775 (comment).
In response I added a new armv6 image that supports 64-bit time_t: dotnet/dotnet-buildtools-prereqs-docker#1077. It should let us build the product, but tests will fail if they run on older raspbian.
Unfortunately I haven't had time to figure out how to build a helix image for armv6 using newer raspbian. I had a start in dotnet/dotnet-buildtools-prereqs-docker#1083, but eventually ran into issues building the helix client similar to dotnet/dnceng#2808.
> I think it's possible we built with 32-bit time_t, but are running on a system where openssl uses 64-bit time_t.
>
> But that does not work right now, either, since you said "The logic was assuming we use 64-bit time_t."
I meant if we fix this to build with 32-bit time_t, it's still possible that openssl is using 64-bit time_t, so just setting g_libSslUses32BitTime to 1 isn't the right fix. A static assert sounds like a good idea to me.
> Are we basically saying that .NET requires a 64-bit time_t now?
Yes, that's what we did for the officially supported arm32 builds. I am not sure who in practice owns the community-supported raspbian builds, but I would suggest doing the same.
Okay. I added a static assert and undid some comment changes. I think replacing `INT_MAX + 1` is still helpful because it avoids UB on systems where time_t is 32-bit; that way, the only error those users see is the static assert when building.
This puts us back into a broken build, but at least now the build is broken with a better error message.
LGTM, thanks!
We now require a 64-bit time_t, even on ARM32. This adds a static_assert to ensure time_t is 64-bit, producing a compile-time error if it is not.
This also uses a constant instead of `(time_t)INT_MAX + 1`, since, if that overflows, it is UB because time_t is a signed type.

Contributes to #104333