Test case fixes for big-endian systems #48395
Conversation
I couldn't figure out the best area label to add to this PR. If you have write permissions please help me learn by adding exactly one area label.
Tagging subscribers to this area: @GrabYourPitchforks

Issue Details:
* Enable tests that were disabled on big-endian systems
* Fix endian assumptions across various test cases
* Update access to binary test data like UTF16 characters
* Update reflection test cases accessing little-endian PE images
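The third item covers a common pitfall in binary test data: raw UTF-16 bytes only match a decoded string if the byte order of the data matches the decoder. A minimal sketch of the difference (not code from this PR; the class and string are illustrative only):

```csharp
using System;
using System.Text;

class Utf16EndianDemo
{
    static void Main()
    {
        // 'A' is U+0041; its two UTF-16 code-unit bytes come out in
        // opposite orders depending on the encoding's byte order.
        byte[] le = Encoding.Unicode.GetBytes("A");          // little-endian UTF-16
        byte[] be = Encoding.BigEndianUnicode.GetBytes("A"); // big-endian UTF-16

        Console.WriteLine(BitConverter.ToString(le)); // 41-00
        Console.WriteLine(BitConverter.ToString(be)); // 00-41
    }
}
```

A test that hard-codes the little-endian byte sequence while decoding with the machine's native order will fail on big-endian hardware even though the implementation under test is correct.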
Adding some reviewers based on areas touched. Please divide as appropriate.
}
else
{
    expected = (b4.B0 << 24) + (b4.B1 << 16) + (b4.B2 << 8) + (b4.B3);
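For context, the branch shown is the big-endian side of an endian-dependent expected value: byte B0 sits at offset 0 of the aliased integer, so it is the most significant byte on big-endian machines and the least significant on little-endian ones. A self-contained sketch of both branches (the `Byte4` struct is a hypothetical stand-in for the test's overlay type):

```csharp
using System;

// Hypothetical stand-in for the test's four-byte overlay struct.
public struct Byte4 { public byte B0, B1, B2, B3; }

public static class ExpectedValueDemo
{
    public static int Expected(Byte4 b4)
    {
        // B0 is at offset 0 of the aliased int: least significant byte on
        // little-endian machines, most significant byte on big-endian ones.
        if (BitConverter.IsLittleEndian)
            return (b4.B3 << 24) + (b4.B2 << 16) + (b4.B1 << 8) + b4.B0;
        else
            return (b4.B0 << 24) + (b4.B1 << 16) + (b4.B2 << 8) + b4.B3;
    }

    static void Main()
    {
        var b = new Byte4 { B0 = 0x01, B1 = 0x02, B2 = 0x03, B3 = 0x04 };
        Console.WriteLine(Expected(b).ToString("X8"));
    }
}
```

On a little-endian machine this prints 04030201; on a big-endian one, 01020304.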
I wonder if we should instead have a test helper method that converts an int32 or int64 from little-endian to big-endian, so that we don't have to duplicate the shifting or checks and the expected value is only ever written once in our tests. Similar to what you are doing in the BitConverterArray.cs tests, where you just save to an array and then reverse it.
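The suggested helper might look something like the following sketch (names are illustrative, not from the PR); `BinaryPrimitives.ReverseEndianness` already does the byte swap, so the helper only needs to decide whether to apply it:

```csharp
using System;
using System.Buffers.Binary;

public static class EndianTestHelper
{
    // Tests write the little-endian expected value once; the helper
    // adapts it to the byte order of the machine running the test.
    public static int ToMachineOrder(int littleEndianValue) =>
        BitConverter.IsLittleEndian
            ? littleEndianValue
            : BinaryPrimitives.ReverseEndianness(littleEndianValue);

    public static long ToMachineOrder(long littleEndianValue) =>
        BitConverter.IsLittleEndian
            ? littleEndianValue
            : BinaryPrimitives.ReverseEndianness(littleEndianValue);
}
```

With this, a test could assert against `EndianTestHelper.ToMachineOrder(0x04030201)` instead of duplicating the shift expressions in both branches.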
Well, in BitConverterArray.cs it is more obvious how to do those swaps, given that we're really operating on single scalar values (or arrays of the same type).
Here, the tests deliberately overlay multiple data types (modifying single bytes in a long, reading a pair of bytes as a short etc.), which seem inherently endian-sensitive. Since the intent of this test is exactly to verify that those aliased manipulations work as expected (where the "expected" behavior is explicitly endian-sensitive), it seemed to make more sense to me to verify the little- and big-endian results separately here.
If you have any specific suggestion of how to structure the tests differently, I'd be happy to implement and test ...
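To illustrate the kind of aliased manipulation being discussed, here is a hedged sketch (the overlay type is hypothetical, not the struct from the actual tests) of how which byte a field aliases depends on the machine's byte order:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical overlay similar in spirit to the structs in these tests:
// a long and one of its bytes share the same storage.
[StructLayout(LayoutKind.Explicit)]
public struct LongOverlay
{
    [FieldOffset(0)] public long Value;
    [FieldOffset(0)] public byte Byte0;
}

public static class OverlayDemo
{
    static void Main()
    {
        var o = new LongOverlay { Value = 0x0102030405060708 };
        // Byte0 aliases the least significant byte (0x08) on little-endian
        // machines but the most significant byte (0x01) on big-endian ones.
        byte expected = BitConverter.IsLittleEndian ? (byte)0x08 : (byte)0x01;
        Console.WriteLine(o.Byte0 == expected); // True
    }
}
```

Because the observable result of the overlay is itself endian-dependent, a single endian-neutral "expected" value cannot exist here; the test has to spell out both cases.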
@ViktorHofer @ericstj should we consider adding a test queue that runs on big-endian machines so that all of these new test branches get coverage?
I was sharing with the infra crew that usually the order we go about these things is:
@uweigand have you discussed running these tests with anyone? cc @directhex
@ericstj we have a high pass rate for the changes in runtimelab - dotnet/runtimelab#679 The wrinkle here is:
I see this is for https://github.com/dotnet/runtimelab/tree/feature/s390x So the work to enable the runtime to build is happening in that branch. What's the reason for upstreaming the test changes?
These changes apply to any BE architecture (e.g. POWER); they aren't specific to s390x. There are already hundreds of cases in the libraries/ code that attempt to deal with potentially big-endian runtimes -- these are bug fixes against that existing code.
I think we'd say the same thing about all those tests and unexercised code -- ideally we'd be testing it in CI as well. Mainly I wanted to understand what the expectation is of reviewers. It'd be nice to have a doc that describes the plan here as it looks like this same question has come up on multiple PRs that are part of this project: #44805 (comment)
My reason for submitting these changes to runtime and not runtimelab (feature/s390x) is as @directhex mentioned: the existing runtime code attempts to handle big-endian platforms, and that code, while mostly correct, does have a few bugs -- some in the actual implementation and some in the test cases. All changes in this PR are about tests that failed on big-endian systems, specifically s390x, and pass with the PR applied. I've verified this by running the tests manually, and there is now an automated CI on the runtimelab branch. Once s390x support is complete and stable, my hope is to get approval to merge all of it into runtime proper, at which point the CI would of course move there.

As to expectations on reviewers, speaking just for myself here, I'd much appreciate comments on whether these changes are "correct" in the sense that the reason for the failing test was indeed an endian issue in the test case, as opposed to the underlying implementation being tested. I believe so -- that is why I submitted the patch as it is -- but of course a review is always helpful.
@uweigand, just a general comment from an observer (as I noticed that the review process is kind of stuck): This PR is huge, and thus difficult to review. From my experience, splitting one super-commit into multiple (easily understandable) commits increases the chances of a quick and positive review. Sometimes it is even better to provide multiple PRs (starting with those with the highest chances of acceptance, e.g. clear bug fixes). This way you avoid complex changes blocking the trivial ones from getting merged. All this is of course more work for the patch provider, but less work for the patch reviewer.
@abebeos, thanks for the suggestion. I'll go ahead and split this into smaller PRs. |
For reference, I've split out the following separate PRs: |
Enable tests that were disabled on big-endian systems
Fix endian assumptions across various test cases
Update access to binary test data like UTF16 characters
Update reflection test cases accessing little-endian PE images