Timestamp overflows for extreme low/high values #8336
Comments
The overflow happens because DataFusion treats the underlying timestamp value as nanoseconds. Some evident options are:
@alamb @waitingkuo @viirya @tustvold it would be nice to hear from you.
I'm confused by this statement. DataFusion follows the Arrow data model, which has several different timestamp precisions. There is no hard-coded mapping to nanoseconds; anything from seconds to nanoseconds is supported.
Right, but I can see there are a bunch of places hardcoded to nanoseconds: casts from string to timestamp come out as nanoseconds, reads from Parquet, coercions, etc.
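The overflow discussed here follows directly from Arrow's storage model: every timestamp, regardless of unit, is stored as an `i64` offset from the Unix epoch. A minimal standalone Rust sketch (no Arrow dependency, just the arithmetic) shows how narrow the representable range becomes at nanosecond precision:

```rust
fn main() {
    // Arrow timestamps are an i64 count of units since the Unix epoch.
    // At nanosecond precision, i64 can only represent roughly
    // 1677-09-21 through 2262-04-11.
    let max_seconds = i64::MAX / 1_000_000_000; // ~9.2e9 seconds each way
    let approx_years = max_seconds / (365 * 24 * 3600);
    println!("nanosecond timestamps reach about ±{approx_years} years from 1970");
    // At seconds precision the same i64 covers ~±292 billion years,
    // which is why extreme dates fit as seconds but overflow as nanoseconds.
    assert_eq!(approx_years, 292);
}
```

So any literal outside roughly 1677–2262 cannot survive a forced conversion to nanoseconds, which is where the hardcoded paths above break down.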
Another option is "do nothing". You can get the expected answer by using
I think the core issue is that the SQL type
There is no type in Arrow that can represent nanosecond precision over the same range of values as an i64 second timestamp. So if we switched to something like
That would result in lower-precision timestamps (it can't represent nanosecond precision). So basically I suggest we do nothing about this particular issue: there is a workaround, and I think it is a fairly uncommon corner case.
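The precision/range trade-off described above can be demonstrated without any Arrow machinery. A hedged sketch (the year-9999 value is an illustrative choice, not from the issue): a date far in the future fits comfortably as seconds in an `i64`, but the unit conversion to nanoseconds overflows, which is exactly the failure mode reported here.

```rust
fn main() {
    // 9999-12-31T23:59:59Z expressed as seconds since the Unix epoch.
    // This fits easily in an i64 at seconds precision...
    let secs_year_9999: i64 = 253_402_300_799;

    // ...but multiplying by 1e9 to re-express it at nanosecond
    // precision exceeds i64::MAX, so checked_mul reports overflow.
    let as_nanos = secs_year_9999.checked_mul(1_000_000_000);
    assert!(as_nanos.is_none());
    println!("year-9999 seconds overflow when converted to nanoseconds");
}
```

This is why a coarser timestamp unit is a workable workaround for extreme values: the magnitude is unchanged, only the unit (and therefore the representable range) differs.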
Describe the bug
Timestamp literal conversion fails for extreme values.
To Reproduce
Expected behavior
The cast should succeed.
PG returns
respectively
Additional context
Part of #8282