release-1.8: Revert "codegen: explicitly handle Float16 intrinsics (#45249)" #45627

Merged: 1 commit into backports-release-1.8 on Jun 13, 2022

Conversation

KristofferC (Member)

This reverts commit eb82f18.

The reverted PR caused some problems: for example, it broke PowerPC (#45556 (comment)) and introduced performance regressions on M1 (#45542). Since it was not fixing a regression (1.7.4 behaves the same way), it seems better to me to keep the status quo until it can be properly fixed without the adverse effects, possibly in 1.8.1.

@KristofferC added the backport 1.8 and triage labels on Jun 9, 2022
@oscardssmith requested a review from vtjnash on June 9, 2022 18:06
KristofferC (Member, Author)

This was discussed during triage, and it was concluded that the original PR should be conditioned so that it is not applied on hardware with native F16 support. However, it is not a release blocker since it only affects tier 2 and tier 3 systems. So whenever someone implements the check, it can go into 1.8.x (even 1.8.0 if that happens quickly).
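For illustration only, such a check could consult the LLVM target's feature string before scheduling the Float16 demotion pass. This is a rough sketch under assumptions: the helper name, the feature strings, and the pass-creation call are not the actual Julia implementation.

```cpp
// Sketch: only run Float16 demotion on targets without native half-precision support.
// Feature names and the pass-creation call below are illustrative assumptions.
#include <llvm/ADT/StringRef.h>
#include <llvm/Target/TargetMachine.h>

static bool hasNativeFP16(const llvm::TargetMachine &TM) {
    llvm::StringRef Features = TM.getTargetFeatureString();
    // AArch64 advertises full FP16 arithmetic as "fullfp16"; recent x86 as "avx512fp16".
    return Features.contains("+fullfp16") || Features.contains("+avx512fp16");
}

// When assembling the pass pipeline (hypothetical pass-creation call):
// if (!hasNativeFP16(TM))
//     PM.add(createDemoteFloat16Pass());
```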

@KristofferC closed this on Jun 9, 2022
@pchintalapudi deleted the kc/revert_f16_1.8 branch on June 9, 2022 19:38
ctkelley

I'm a bit confused. Is the bottom line that the poor Float16 performance will persist and that I should stick with 1.7.2? I'm the one who raised #45542.

vchuravy (Member) commented Jun 11, 2022

Doesn't this also destroy Float16 performance on x86? x86 has native support for converting between Float16 and Float32, which we need to use to get a semblance of performance.
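For context, without native Float16 arithmetic the usual lowering is to widen operands to Float32, compute in single precision, and truncate the result back to Float16. A minimal sketch using LLVM's IRBuilder follows; the function name and structure are illustrative, not Julia's actual codegen.

```cpp
// Sketch: emit a Float16 addition by widening to Float32 and truncating back.
// On x86 the fpext/fptrunc typically lower to vcvtph2ps/vcvtps2ph when F16C is available.
#include <llvm/IR/IRBuilder.h>

llvm::Value *emitHalfAdd(llvm::IRBuilder<> &B, llvm::Value *LHS, llvm::Value *RHS) {
    llvm::Type *F32 = B.getFloatTy();
    llvm::Type *F16 = B.getHalfTy();
    llvm::Value *L = B.CreateFPExt(LHS, F32, "lhs.ext");   // half -> float
    llvm::Value *R = B.CreateFPExt(RHS, F32, "rhs.ext");   // half -> float
    llvm::Value *Sum = B.CreateFAdd(L, R, "sum");          // compute in Float32
    return B.CreateFPTrunc(Sum, F16, "sum.trunc");         // float -> half
}
```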

@maleadt did you manage to check whether CUDA.jl's Float16 support still works?

maleadt (Member) commented Jun 11, 2022

Yeah this kills performance on x86 too. CUDA isn't affected as we don't run the float16demote pass.

KristofferC (Member, Author)

Okay, can someone help fix this then? I doubt @vtjnash is interested in prioritising this since he doesn't consider it blocking, so someone else needs to roll up their sleeves.

@vchuravy restored the kc/revert_f16_1.8 branch on June 11, 2022 10:55
@vchuravy reopened this on Jun 11, 2022
vchuravy (Member)

x-ref #45249 and #45644

vchuravy (Member)

> Okay, can someone help fix this then?

I proposed an alternative in #45649, but I will need to do the work to prove that it suffices.

KristofferC (Member, Author)

Since this is not being reverted on master, it has to be reverted directly on the backport branch.

@KristofferC merged commit 3a2eb39 into backports-release-1.8 on Jun 13, 2022
@KristofferC deleted the kc/revert_f16_1.8 branch on June 13, 2022 07:44
ViralBShah (Member)

Will this be applied to master too?

vchuravy (Member)

See #45644 and #45736 (comment)

@KristofferC removed the backport 1.8 label on Jul 6, 2022
@LilithHafner removed the triage label on Aug 18, 2022