Why does spark convert to_timestamp to cast syntax #4102
Comments
Hey @hellozepp, thanks for the report. Regarding the conversion: both expressions produce the same result in Spark:

```sql
spark-sql (default)> SELECT CAST(1415792726123 / 1000 AS TIMESTAMP);
2014-11-12 13:45:26.123

spark-sql (default)> SELECT to_timestamp(1415792726123 / 1000);
2014-11-12 13:45:26.123
```
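The equivalence above can be sanity-checked outside Spark. Here is a minimal Python sketch using only the standard library; it uses UTC rather than Spark's session-local time zone, so the clock time differs from the spark-sql output above by the session's UTC offset:

```python
from datetime import datetime, timedelta, timezone

# Epoch milliseconds from the example above.
millis = 1415792726123

# Dividing by 1000 yields fractional seconds, so the millisecond
# component survives the conversion -- this is why the CAST form
# and to_timestamp(...) agree on the same value.
ts = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=millis)
print(ts.isoformat())  # 2014-11-12T11:45:26.123000+00:00
```

The spark-sql shell printed `13:45:26` for the same instant because it renders timestamps in the session time zone rather than UTC.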
@VaggelisD yes, just for consistency purposes.
Thanks again for the concern. I'll go ahead and close this, then, since the semantics of the round trip are preserved. If you come across a use case where this cast (or, in general, any such transformation) breaks the round trip, we can reconsider the refactor; otherwise it's fine for now.
Before you file an issue
```python
sqlglot.transpile(sql, read="postgres", write="spark")
```
Fully reproducible code snippet
Presto SQL as follows:
My expected spark output:
Actual output:
Spark's `to_timestamp` has been supported since version 2.2.0, and my execution results are as follows:
So, can the Spark output use `to_timestamp` directly?