
[SPARK-28003][PYTHON] Allow NaT values when creating Spark dataframe from pandas with Arrow #24844

Closed
wants to merge 5 commits

Conversation

icexelloss
Contributor

What changes were proposed in this pull request?

This patch removes `fillna(0)` when creating an ArrowBatch from a pandas Series.

With `fillna(0)`, the original code would turn a timestamp type into object type, which pyarrow will complain about later:

```
>>> s = pd.Series([pd.NaT, pd.Timestamp('2015-01-01')])
>>> s.dtypes
dtype('<M8[ns]')
>>> s.fillna(0)
0                      0
1    2015-01-01 00:00:00
dtype: object
```
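As a minimal pandas-only sketch of the idea behind the fix (illustrative, not the exact Spark code): leaving `NaT` in place preserves the `datetime64[ns]` dtype, and the nulls can be tracked separately with a boolean mask instead of being filled with an incompatible value.

```python
import pandas as pd

# Leaving NaT in place keeps the series dtype as datetime64[ns] ...
s = pd.Series([pd.NaT, pd.Timestamp('2015-01-01')])
assert str(s.dtype) == 'datetime64[ns]'

# ... and the nulls can be carried in a boolean mask, which the
# conversion code already computes via isnull().
mask = s.isnull()
assert mask.tolist() == [True, False]
```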

How was this patch tested?

Added `test_timestamp_nat`

@icexelloss
Contributor Author

cc @BryanCutler @HyukjinKwon

```diff
@@ -296,7 +296,7 @@ def create_array(s, t):
     mask = s.isnull()
     # Ensure timestamp series are in expected form for Spark internal representation
     if t is not None and pa.types.is_timestamp(t):
-        s = _check_series_convert_timestamps_internal(s.fillna(0), self._timezone)
+        s = _check_series_convert_timestamps_internal(s, self._timezone)
```
Contributor Author

Removing this doesn't fail existing tests. @BryanCutler do you remember why we are doing `fillna(0)` here?

Member

I believe it was due to a Pandas error, most likely because we were testing with 0.19.2 at the time. Can you manually run some tests with different Pandas versions? It would be best to test with older versions, but it might be kind of hard to get 0.19.2 working with pyarrow 0.12.1 though.

Contributor Author

@icexelloss icexelloss Jun 12, 2019

Yeah, pandas 0.19.2 doesn't work with pyarrow 0.12. I cannot run the Arrow tests with pandas 0.19.2 anymore.

Since we require a minimum Arrow version of 0.12, that means pandas 0.19.2 is not supported if the user wants to use Arrow.

@SparkQA

SparkQA commented Jun 11, 2019

Test build #106395 has finished for PR 24844 at commit 8bfefed.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

@BryanCutler BryanCutler left a comment

I think this is probably safe to remove for newer Pandas versions, but we will want to check some older versions as well.

@BryanCutler
Member

BryanCutler commented Jun 12, 2019

I think we are testing with Pandas 0.23.2 in the Jenkins env. Maybe it is a good time to bump up the minimum supported Pandas, since 0.19.2 is pretty ancient. Thoughts @HyukjinKwon @felixcheung @shaneknapp @ueshin ?

```diff
@@ -383,6 +383,19 @@ def test_timestamp_dst(self):
         assert_frame_equal(pdf, df_from_python.toPandas())
         assert_frame_equal(pdf, df_from_pandas.toPandas())
 
+    def test_timestamp_nat(self):
+        import pandas as pd
```
Member

This import is already at the top

```python
pdf2 = pd.DataFrame({'time': dt2})

df1 = self.spark.createDataFrame(pdf1)
df2 = self.spark.createDataFrame(pdf2)
```
Member

I think you can just combine these into one DataFrame, but it would be good to also check against toPandas without Arrow.

Contributor Author

Combined.

Contributor Author

Also checked the non-Arrow codepath.

@SparkQA

SparkQA commented Jun 12, 2019

Test build #106446 has finished for PR 24844 at commit 4af2aed.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@HyukjinKwon
Member

Yea, +1 for bumping up Pandas. 0.19.2 is pretty old and I think it's a good time to bump up in Spark 3.

```python
dt = [pd.NaT, pd.Timestamp('2019-06-11'), None] * 100
pdf = pd.DataFrame({'time': dt})

with self.sql_conf({'spark.sql.execution.arrow.pyspark.enabled': "false"}):
```
Member

Not a big deal, but I think we can do this with a for loop.

Contributor Author

Good point, refactored to a for loop.

@icexelloss
Contributor Author

+1 for bumping up pandas. I could open a Jira for bumping that. Are there other things we'd like to address in this PR?

```diff
@@ -296,7 +296,7 @@ def create_array(s, t):
     mask = s.isnull()
     # Ensure timestamp series are in expected form for Spark internal representation
     if t is not None and pa.types.is_timestamp(t):
-        s = _check_series_convert_timestamps_internal(s.fillna(0), self._timezone)
+        s = _check_series_convert_timestamps_internal(s, self._timezone)
     # TODO: need cast after Arrow conversion, ns values cause error with pandas 0.19.2
```
Member

@viirya viirya Jun 13, 2019

This looks related to pandas 0.19.2 too. Should we deal with it as well, if we are bumping up the minimum pandas version?

Member

Let's fix this one up in another PR.

Contributor Author

I fixed this here as well since it's pretty small. I am happy to revert and make it a separate PR if we prefer that. Please let me know :)

@SparkQA

SparkQA commented Jun 13, 2019

Test build #106475 has finished for PR 24844 at commit 7b1d964.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@BryanCutler
Member

Let's change the minimum Pandas version in another PR. I made https://issues.apache.org/jira/browse/SPARK-28041. I think we should merge this one after the version is increased.

```python
dt = [pd.NaT, pd.Timestamp('2019-06-11'), None] * 100
pdf = pd.DataFrame({'time': dt})

for arrow_enabled in [False, True]:
```
Member

You can just use `_toPandas_arrow_toggle`, which returns 2 DataFrames with and without Arrow.

Contributor Author

Ah, thanks for the tip! (I ended up using `_createDataFrame_toggle` because that's the path I want to test.)

@icexelloss icexelloss force-pushed the SPARK-28003-arrow-nat branch from 6be6d77 to 594e40b Compare June 13, 2019 18:50
@SparkQA

SparkQA commented Jun 13, 2019

Test build #106480 has finished for PR 24844 at commit d479c41.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jun 13, 2019

Test build #106481 has finished for PR 24844 at commit 6be6d77.

  • This patch fails PySpark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jun 13, 2019

Test build #106482 has finished for PR 24844 at commit 594e40b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@shaneknapp
Contributor

shaneknapp commented Jun 14, 2019 via email

@icexelloss
Contributor Author

Ping. Anything else we want for this PR?

@HyukjinKwon
Member

K, the minimal version is updated now as of #24867.

@BryanCutler
Member

@icexelloss could you resolve conflicts and manually test with Pandas 0.23.2?

@icexelloss icexelloss force-pushed the SPARK-28003-arrow-nat branch from 594e40b to bda31b5 Compare June 21, 2019 18:47
@SparkQA

SparkQA commented Jun 21, 2019

Test build #106777 has finished for PR 24844 at commit bda31b5.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@icexelloss
Contributor Author

Manually tested all Arrow-related tests with 0.23.2.

Member

@BryanCutler BryanCutler left a comment

LGTM

@BryanCutler
Member

merged to master, thanks @icexelloss !

@icexelloss
Contributor Author

Thanks!

Member

@HyukjinKwon HyukjinKwon left a comment

LGTM too!

kiku-jw pushed a commit to kiku-jw/spark that referenced this pull request Jun 26, 2019
Closes apache#24844 from icexelloss/SPARK-28003-arrow-nat.

Authored-by: Li Jin <ice.xelloss@gmail.com>
Signed-off-by: Bryan Cutler <cutlerb@gmail.com>
rshkv pushed a commit to palantir/spark that referenced this pull request May 23, 2020
@rshkv rshkv mentioned this pull request May 23, 2020
rshkv pushed a commit to palantir/spark that referenced this pull request Jun 5, 2020
7 participants