Enable the simd_i16x8_q15mulr_sat_s test on AArch64 #3035
Conversation
Implementation LGTM, thanks! Very convenient that aarch64 has an instruction just for this :-)
One thought below on a cleanup you've done -- want to see what others think on this too.
As discussed in #3035, most backends have explicit `unimplemented!(...)` match-arms for opcode lowering cases that are not yet implemented; this allows the backend maintainer to easily see what is not yet implemented, and avoiding a catch-all wildcard arm is less error-prone as opcodes are added in the future. However, the x64 backend was the exception: as @akirilov-arm pointed out, it had a wildcard match arm.

This fixes the issue by explicitly listing all opcodes the x64 backend does not yet implement. As per our tests, these opcodes are not used or needed by Wasm lowering; but it is good to know that they exist, so that we can eventually either support or remove them. This was a good exercise for me as I wasn't aware of a few of these in particular: e.g., aarch64 supports `bmask` while x64 does not, and there isn't a good reason why x64 shouldn't, especially if others hope to use Cranelift as a SIMD-capable general codegen in the future.

The `unimplemented!()` cases are separate from `panic!()` ones: my convention here was to split out those that are logically just *missing* from those that should be *impossible*, mostly due to expected removal by legalization before we reach the lowering step.
Copyright (c) 2021, Arm Limited.
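To make the match-arm convention from the commit message concrete, here is a minimal, self-contained Rust sketch. The `Opcode` enum and variant names below are hypothetical stand-ins, not Cranelift's real types; they only echo the examples in the discussion. Missing lowerings use `unimplemented!`, cases that legalization should have removed before lowering use `panic!`, and there is deliberately no `_ =>` wildcard arm, so adding a new opcode variant is a compile error until it is handled explicitly.

```rust
// Hypothetical stand-in for a backend's opcode-lowering match;
// the variants only echo the discussion above, not Cranelift's real enum.
#[allow(dead_code)]
enum Opcode {
    Iadd,    // implemented
    Bmask,   // logically missing: not yet implemented in this backend
    IfcmpSp, // logically impossible: expected to be removed by legalization
}

fn lower(op: Opcode) {
    match op {
        Opcode::Iadd => println!("emit integer-add lowering"),
        // Missing lowering: a maintainer can grep for these to see what is left.
        Opcode::Bmask => unimplemented!("bmask is not yet implemented in this backend"),
        // Should never reach lowering: hitting this arm indicates a pipeline bug.
        Opcode::IfcmpSp => panic!("ifcmp_sp should have been legalized away"),
        // No `_ =>` wildcard: a new Opcode variant fails to compile until handled.
    }
}

fn main() {
    lower(Opcode::Iadd);
}
```

The payoff in a real backend is the same as in this sketch: a grep for `unimplemented!` lists exactly what remains to be done, and a newly added opcode cannot silently fall through a wildcard arm.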
I would like to mention one point as a follow-up to the discussion in PR #2982 - the new IR operation I am introducing is technically expressible as a combination of existing ones. This comment from the WebAssembly SIMD operation proposal gives an idea how, but note that pattern-matching would be further complicated by the fact that addition is commutative. Overall, I think that in this case the balance is in favour of introducing a new IR operation.
I agree, the in-terms-of-existing-ops form is 8 Wasm ops, so this is reasonable to create a new opcode for, I think.
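For context on what the new opcode computes, below is a hedged sketch of the scalar semantics of `i16x8.q15mulr_sat_s` (saturating, rounding Q15 multiplication), written as plain Rust for illustration rather than as Cranelift IR or the eight-op Wasm expansion mentioned above: widen both lanes to 32 bits, multiply, add the rounding constant `0x4000`, arithmetic-shift right by 15, then saturate to `i16`. The only input pair that actually saturates is `i16::MIN * i16::MIN`, whose rounded result `0x8000` does not fit in an `i16`; that corner case is part of why a single dedicated instruction (as AArch64 has) and a dedicated IR operation are attractive.

```rust
// Scalar model of one lane of i16x8.q15mulr_sat_s; the real operation applies
// this to all eight lanes at once.
fn q15mulr_sat_s(a: i16, b: i16) -> i16 {
    let product = i32::from(a) * i32::from(b);
    let rounded = (product + 0x4000) >> 15; // round-to-nearest in Q15
    rounded.clamp(i32::from(i16::MIN), i32::from(i16::MAX)) as i16
}

fn main() {
    assert_eq!(q15mulr_sat_s(0x4000, 0x4000), 0x2000); // 0.5 * 0.5 = 0.25 in Q15
    assert_eq!(q15mulr_sat_s(i16::MIN, i16::MIN), i16::MAX); // the saturating case
}
```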
LGTM, thanks!