[SPARK-28003][PYTHON] Allow NaT values when creating Spark dataframe from pandas with Arrow #24844
Conversation
```diff
@@ -296,7 +296,7 @@ def create_array(s, t):
     mask = s.isnull()
     # Ensure timestamp series are in expected form for Spark internal representation
     if t is not None and pa.types.is_timestamp(t):
-        s = _check_series_convert_timestamps_internal(s.fillna(0), self._timezone)
+        s = _check_series_convert_timestamps_internal(s, self._timezone)
```
Removing this doesn't fail existing tests. @BryanCutler do you remember why we are doing `fillna(0)` here?
I believe it was due to a Pandas error, most likely because we were testing with 0.19.2 at the time. Can you manually run some tests with different Pandas versions? It would be best to test with older versions, but it might be hard to get 0.19.2 working with pyarrow 0.12.1, though.
Yeah, pandas 0.19.2 doesn't work with pyarrow 0.12; I cannot run the Arrow tests with pandas 0.19.2 anymore.
Since we require a minimum Arrow version of 0.12, pandas 0.19.2 is effectively unsupported for users who want to use Arrow.
Test build #106395 has finished for PR 24844 at commit
I think this is probably safe to remove for newer Pandas versions, but we will want to check some older versions as well
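The dtype problem being discussed can be seen in pandas alone, without Spark. This is a minimal sketch; filling with `pd.Timestamp(0)` is my illustration of a dtype-preserving alternative, not code from the PR:

```python
import pandas as pd

# A timestamp series with missing values keeps its datetime64 dtype;
# NaT is the native missing-value marker for datetimes.
s = pd.Series([pd.NaT, pd.Timestamp('2015-01-01')])
print(s.dtype)  # datetime64[ns]

# fillna(0) injects an integer into a datetime series, which (depending
# on the pandas version) upcasts the series to object dtype -- exactly
# the form pyarrow rejects. Filling with an actual Timestamp preserves
# the dtype instead:
filled = s.fillna(pd.Timestamp(0))
print(filled.dtype)  # datetime64[ns]
```

With the `fillna(0)` call removed, NaT simply flows through to Arrow as a null value.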
```diff
@@ -296,7 +296,7 @@ def create_array(s, t):
     mask = s.isnull()
     # Ensure timestamp series are in expected form for Spark internal representation
     if t is not None and pa.types.is_timestamp(t):
-        s = _check_series_convert_timestamps_internal(s.fillna(0), self._timezone)
+        s = _check_series_convert_timestamps_internal(s, self._timezone)
```
I believe it was due to a Pandas error, most likely because we were testing with 0.19.2 at the time. Can you manually run some tests with different Pandas versions? It will be best to test with older versions, but it might be kind of hard to get 0.19.2 working with pyarrow 0.12.1 though..
I think we are testing with Pandas 0.23.2 in the Jenkins env. Maybe it is a good time to bump up the minimum supported Pandas version, since 0.19.2 is pretty ancient. Thoughts @HyukjinKwon @felixcheung @shaneknapp @ueshin?
```diff
@@ -383,6 +383,19 @@ def test_timestamp_dst(self):
         assert_frame_equal(pdf, df_from_python.toPandas())
         assert_frame_equal(pdf, df_from_pandas.toPandas())

+    def test_timestamp_nat(self):
+        import pandas as pd
```
This import is already at the top
```python
pdf2 = pd.DataFrame({'time': dt2})

df1 = self.spark.createDataFrame(pdf1)
df2 = self.spark.createDataFrame(pdf2)
```
I think you can just combine these into one DataFrame, but it would be good to also check against toPandas without Arrow.
Combined
Also checked the non-Arrow code path.
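For reference, the combined test data the thread converges on can be sketched with pandas alone; the Spark round-trip itself needs a live session, so it appears here only as comments:

```python
import pandas as pd

# NaT and None both normalize to NaT in a datetime64 column, so a single
# DataFrame covers both spellings of a missing timestamp.
dt = [pd.NaT, pd.Timestamp('2019-06-11'), None] * 100
pdf = pd.DataFrame({'time': dt})
print(pdf['time'].dtype)              # datetime64[ns]
print(int(pdf['time'].isna().sum()))  # 200 of the 300 rows are missing

# In the actual test, this one frame is round-tripped through Spark both
# with and without Arrow (sketch; `self.spark` comes from the test base):
#   df = self.spark.createDataFrame(pdf)
#   assert_frame_equal(pdf, df.toPandas())
```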
Test build #106446 has finished for PR 24844 at commit
Yea, +1 for bumping up Pandas. 0.19.2 is pretty old and I think it's a good time to bump up in Spark 3.
```python
dt = [pd.NaT, pd.Timestamp('2019-06-11'), None] * 100
pdf = pd.DataFrame({'time': dt})

with self.sql_conf({'spark.sql.execution.arrow.pyspark.enabled': "false"}):
```
Not a big deal, but I think we can do this with a for loop.
Good point, refactored to a for loop.
+1 for bumping up pandas. I could open a JIRA for bumping that. Are there other things we'd like to address in this PR?
python/pyspark/serializers.py (outdated)
```diff
@@ -296,7 +296,7 @@ def create_array(s, t):
     mask = s.isnull()
     # Ensure timestamp series are in expected form for Spark internal representation
     if t is not None and pa.types.is_timestamp(t):
-        s = _check_series_convert_timestamps_internal(s.fillna(0), self._timezone)
+        s = _check_series_convert_timestamps_internal(s, self._timezone)
         # TODO: need cast after Arrow conversion, ns values cause error with pandas 0.19.2
```
This TODO looks related to pandas 0.19.2 too. Should we deal with it as well, if we want to bump up pandas?
Let's fix this one up in another PR.
I fixed this here as well since it's pretty small. I am happy to revert and make it a separate PR if we prefer that. Please let me know :)
Test build #106475 has finished for PR 24844 at commit
Let's change the minimum Pandas version in another PR. I made https://issues.apache.org/jira/browse/SPARK-28041. I think we should merge this one after the version is increased.
```python
dt = [pd.NaT, pd.Timestamp('2019-06-11'), None] * 100
pdf = pd.DataFrame({'time': dt})

for arrow_enabled in [False, True]:
```
You can just use `_toPandas_arrow_toggle`, which returns two DataFrames, with and without Arrow.
Ah, thanks for the tip! (I ended up using `_createDataFrame_toggle` because that's the path I want to test.)
Test build #106480 has finished for PR 24844 at commit
Test build #106481 has finished for PR 24844 at commit
Test build #106482 has finished for PR 24844 at commit
As mentioned on dev@ and in other GitHub issues, we're testing against 0.24.2 and will continue to do so.
Ping. Anything else we want for this PR?
OK, the minimum version is updated now as of #24867.
@icexelloss could you resolve conflicts and manually test with Pandas 0.23.2?
Test build #106777 has finished for PR 24844 at commit
Manually ran all Arrow-related tests with 0.23.2.
LGTM
Merged to master, thanks @icexelloss!
Thanks!
LGTM too!
…from pandas with Arrow

## What changes were proposed in this pull request?

This patch removes `fillna(0)` when creating ArrowBatch from a pandas Series. With `fillna(0)`, the original code would turn a timestamp type into object type, which pyarrow will complain about later:

```
>>> s = pd.Series([pd.NaT, pd.Timestamp('2015-01-01')])
>>> s.dtypes
dtype('<M8[ns]')
>>> s.fillna(0)
0                      0
1    2015-01-01 00:00:00
dtype: object
```

## How was this patch tested?

Added `test_timestamp_nat`.

Closes apache#24844 from icexelloss/SPARK-28003-arrow-nat.

Authored-by: Li Jin <ice.xelloss@gmail.com>
Signed-off-by: Bryan Cutler <cutlerb@gmail.com>
## What changes were proposed in this pull request?

This patch removes `fillna(0)` when creating ArrowBatch from a pandas Series. With `fillna(0)`, the original code would turn a timestamp type into object type, which pyarrow will complain about later.

## How was this patch tested?

Added `test_timestamp_nat`.