[SPARK-40903][SQL][FOLLOWUP] Cast canonicalized Add as its original data type if necessary #38513
gengliangwang wants to merge 1 commit into apache:master
Conversation
+ if (resolved && reorderResult.resolved && reorderResult.dataType != dataType) {
+   // SPARK-40903: Append cast for the canonicalization of decimal Add if the result data type is
+   // changed. Otherwise, it may cause data checking error within ComplexTypeMergingExpression.
+   Cast(reorderResult, dataType)
This seems to be the same idea that came up previously: #38379 (comment), but my concerns there (#38379 (comment)) were wrong, so LGTM.
Yes, the cast is better on second thought.
  orderCommutative({ case Add(l, r, _) => Seq(l, r) }).reduce(Add(_, _, evalMode))
- if (resolved && reorderResult.resolved && reorderResult.dataType == dataType) {
-   reorderResult
+ if (resolved && reorderResult.resolved && reorderResult.dataType != dataType) {
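Assembled from the two hunks above, the proposed override reads roughly as follows (a sketch reconstructed from the diff, with the else branch inferred; it may not match the exact source):

```scala
override lazy val canonicalized: Expression = {
  // Sort the flattened operands of the commutative Add into a stable order,
  // then rebuild a left-deep Add chain.
  val reorderResult =
    orderCommutative({ case Add(l, r, _) => Seq(l, r) }).reduce(Add(_, _, evalMode))
  if (resolved && reorderResult.resolved && reorderResult.dataType != dataType) {
    // SPARK-40903: reordering decimal Adds can change the result data type;
    // cast back so the canonical form keeps the original type.
    Cast(reorderResult, dataType)
  } else {
    reorderResult
  }
}
```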
To be safe, can we do this for the decimal type only? e.g.
left.dataType match {
  case _: DecimalType if resolved && reorderResult.resolved && reorderResult.dataType != dataType =>
    Cast(reorderResult, dataType)
  case _ => reorderResult
}
@ulysses-you the current code seems fine. If there is a new data type in the future, we can still avoid the issue.
I'm not worried about new data types; I just want to avoid unnecessary logic for unrelated data types like integer and float.
I think canonicalization should not change the data type in the first place. Adding a cast only hides the bug. What's worse, due to the extra cast it doesn't help with the goal of canonicalization: matching plans/expressions that are semantically equal. Can we be stricter about when we can reorder? e.g. add an allowlist and only reorder in certain cases, e.g. integral Add with ANSI off.
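A rough sketch of that allowlist idea (hypothetical, not code from this PR; the `canReorder` predicate and the `withCanonicalizedChildren` fallback are assumptions):

```scala
// Hypothetical allowlist: only reorder when the result type provably
// cannot change, e.g. integral Add with ANSI mode off.
private def canReorder: Boolean = dataType match {
  case _: IntegralType => evalMode != EvalMode.ANSI
  case _ => false
}

override lazy val canonicalized: Expression =
  if (canReorder) {
    orderCommutative({ case Add(l, r, _) => Seq(l, r) }).reduce(Add(_, _, evalMode))
  } else {
    // Assumed fallback: canonicalize the children without reordering.
    withCanonicalizedChildren
  }
```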
The issue in SPARK-40903 is due to Spark calculating the data type of decimal Add conservatively. What do you mean by "bug"?
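For concreteness: Spark types a decimal add as precision = max(p1 - s1, p2 - s2) + max(s1, s2) + 1 and scale = max(s1, s2), so the result type depends on association order. A standalone sketch of the arithmetic (not Spark code, and ignoring the cap at precision 38):

```scala
// Toy model of Spark's decimal Add result-type rule.
case class Dec(precision: Int, scale: Int)

def addType(l: Dec, r: Dec): Dec = {
  val scale = math.max(l.scale, r.scale)
  val intDigits = math.max(l.precision - l.scale, r.precision - r.scale)
  Dec(intDigits + scale + 1, scale)
}

val a = Dec(3, 2)
val b = Dec(10, 0)
val c = Dec(5, 5)

// The same three operands, added in different orders:
println(addType(addType(a, b), c)) // Dec(17,5)
println(addType(addType(a, c), b)) // Dec(16,5), a different result type
```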
We should not change query semantics after reordering, as this is canonicalization. It's hard to convince people that different result types still guarantee the same query semantics.
It's true that having a cast in the canonicalized expression is hacky. I am closing this one and keeping the solution as it is in #38379.
What changes were proposed in this pull request?
This is a follow-up of #38379. On second thought, if the canonicalized Add has a different data type, casting it back to the original data type can still match more semantically equivalent Adds.
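As an illustration of that claim, here is a toy model (made-up types, not Spark code) of the "reorder, then cast back to the original type" canonicalization:

```scala
sealed trait Expr
case class Col(name: String) extends Expr
case class AddE(l: Expr, r: Expr) extends Expr
case class CastE(child: Expr, toType: String) extends Expr

// Collect the leaf operands of a nested Add chain.
def leaves(e: Expr): Seq[Col] = e match {
  case col: Col    => Seq(col)
  case AddE(l, r)  => leaves(l) ++ leaves(r)
  case CastE(c, _) => leaves(c)
}

// Canonicalize: sort the operands, rebuild a left-deep chain, cast back.
def canonicalize(e: Expr, originalType: String): Expr =
  CastE(leaves(e).sortBy(_.name).reduceLeft[Expr](AddE(_, _)), originalType)

val e1 = AddE(AddE(Col("a"), Col("b")), Col("c")) // (a + b) + c
val e2 = AddE(Col("c"), AddE(Col("b"), Col("a"))) // c + (b + a)
// Both become CastE(AddE(AddE(a, b), c), "decimal(17,5)") and now match,
// even when the rebuilt Add alone would resolve to a different decimal type.
assert(canonicalize(e1, "decimal(17,5)") == canonicalize(e2, "decimal(17,5)"))
```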
Why are the changes needed?
A better solution for https://issues.apache.org/jira/browse/SPARK-40903. We can avoid regressions from marking certain semantically equivalent Adds as not equivalent.
Does this PR introduce any user-facing change?
No
How was this patch tested?
New UT
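A hypothetical shape for such a test (attribute names and decimal types are made up; this assumes Add computes decimal result types directly and defaults its evalMode):

```scala
test("SPARK-40903: reordered decimal Adds stay semantically equal") {
  val a = AttributeReference("a", DecimalType(3, 2))()
  val b = AttributeReference("b", DecimalType(10, 0))()
  val c = AttributeReference("c", DecimalType(5, 5))()
  val e1 = Add(Add(a, b), c) // resolves to decimal(17,5)
  val e2 = Add(c, Add(b, a)) // also resolves to decimal(17,5)
  // The canonical form keeps the original type, so the two still match.
  assert(e1.canonicalized.dataType == e1.dataType)
  assert(e1.semanticEquals(e2))
}
```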