[SPARK-53614] Add Iterator[pandas.DataFrame] support to applyInPandas
#52716
Conversation
```python
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.pandas.typehints import infer_group_pandas_eval_type_from_func
from pyspark.sql.pandas.functions import PythonEvalType
import warnings
```
No need to re-import `PythonEvalType` and `warnings`.
```python
if dataframes_in_group == 1:
    # Read all Arrow batches for this group first (must read from stream synchronously)
    batches = list(ArrowStreamSerializer.load_stream(self, stream))
```
I think we cannot load all the batches here; the iterator API is designed to avoid loading all the batches within a group, so that it can mitigate OOM. You can refer to spark/python/pyspark/sql/pandas/serializers.py, lines 1136 to 1140 in 7bd18e3:
```python
batch_iter = process_group(ArrowStreamSerializer.load_stream(self, stream))
yield batch_iter
# Make sure the batches are fully iterated before getting the next group
for _ in batch_iter:
    pass
```
I think a better way is to update `GroupPandasUDFSerializer` to output the iterator, and adjust the function wrappers of
`PythonEvalType.SQL_GROUPED_MAP_PANDAS_UDF`,
`PythonEvalType.SQL_GROUPED_AGG_PANDAS_UDF`, and
`PythonEvalType.SQL_WINDOW_AGG_PANDAS_UDF`;
but of course, we can start with a new serializer and deduplicate the code later.
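For concreteness, a rough sketch of that direction (`wrap_legacy_grouped_map` is a hypothetical name, not code from this PR): once the serializer yields each group as an iterator of pandas DataFrame batches, the wrappers for the existing DataFrame-based eval types can keep their current semantics by concatenating the batches before invoking the user function.

```python
import pandas as pd

def wrap_legacy_grouped_map(user_func):
    # Hypothetical adapter: the serializer now hands each group to the
    # worker as an iterator of pandas DataFrame batches; legacy grouped-map
    # UDFs still expect one materialized DataFrame, so concatenate first.
    def wrapped(key, batch_iter):
        pdf = pd.concat(batch_iter, ignore_index=True)
        return user_func(key, pdf)

    return wrapped
```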
```python
    timezone, safecheck, _assign_cols_by_name, int_to_decimal_coercion_enabled
)
elif eval_type == PythonEvalType.SQL_GROUPED_MAP_PANDAS_ITER_UDF:
    from pyspark.sql.pandas.serializers import GroupPandasIterUDFSerializer
```
Let's put the import here: spark/python/pyspark/worker.py, lines 54 to 73 in 57b4cd2:
```python
from pyspark.sql.pandas.serializers import (
    ArrowStreamPandasUDFSerializer,
    ArrowStreamPandasUDTFSerializer,
    GroupPandasUDFSerializer,
    GroupArrowUDFSerializer,
    CogroupArrowUDFSerializer,
    CogroupPandasUDFSerializer,
    ArrowStreamUDFSerializer,
    ApplyInPandasWithStateSerializer,
    TransformWithStateInPandasSerializer,
    TransformWithStateInPandasInitStateSerializer,
    TransformWithStateInPySparkRowSerializer,
    TransformWithStateInPySparkRowInitStateSerializer,
    ArrowStreamArrowUDFSerializer,
    ArrowStreamAggArrowUDFSerializer,
    ArrowBatchUDFSerializer,
    ArrowStreamUDTFSerializer,
    ArrowStreamArrowUDTFSerializer,
)
```
What changes were proposed in this pull request?

This PR adds support for the `Iterator[pandas.DataFrame]` API in `groupBy().applyInPandas()`, enabling batch-by-batch processing of grouped data for improved memory efficiency and scalability.

Key Changes:

- New PythonEvalType: Added `SQL_GROUPED_MAP_PANDAS_ITER_UDF` (216) to distinguish iterator-based UDFs from standard grouped map UDFs.
- Type Inference: Implemented automatic detection of iterator signatures (see the sketch after this list):
  - `Iterator[pd.DataFrame] -> Iterator[pd.DataFrame]`
  - `Tuple[Any, ...], Iterator[pd.DataFrame] -> Iterator[pd.DataFrame]`
- Streaming Serialization: Created `GroupPandasIterUDFSerializer`, which streams results without materializing all DataFrames in memory.
- Configuration Change: Updated `FlatMapGroupsInPandasExec`, which was hardcoding `pythonEvalType = 201` instead of extracting it from the UDF expression (mirrored fix from `FlatMapGroupsInArrowExec`).
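For illustration, a minimal sketch of how such type-hint detection can work (`uses_iterator_hints` is a hypothetical name; the PR's actual logic lives in `infer_group_pandas_eval_type_from_func`):

```python
from typing import Iterator, get_type_hints
import pandas as pd

def uses_iterator_hints(func) -> bool:
    # True when the function is annotated in the iterator-based form:
    # Iterator[pd.DataFrame] -> Iterator[pd.DataFrame], optionally with a
    # leading grouping-key parameter before the iterator.
    hints = get_type_hints(func)
    ret = hints.pop("return", None)
    params = list(hints.values())
    iter_df = Iterator[pd.DataFrame]
    return ret == iter_df and len(params) >= 1 and params[-1] == iter_df
```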
Why are the changes needed?

The existing `applyInPandas()` API loads each entire group into memory as a single DataFrame, which can cause OOM errors for large groups. The iterator API allows a group to be processed batch by batch, and it follows the `applyInArrow()` iterator API design.

Does this PR introduce any user-facing changes?
Yes, this PR adds a new API variant for `applyInPandas()`.

Before (existing API, still supported):
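A minimal sketch of the existing DataFrame-in, DataFrame-out form (the sample data and the `normalize` function are illustrative, assuming an active `spark` session):

```python
import pandas as pd

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)], ("id", "v")
)

def normalize(pdf: pd.DataFrame) -> pd.DataFrame:
    # The whole group arrives as one materialized DataFrame.
    v = pdf.v
    return pdf.assign(v=(v - v.mean()) / v.std())

df.groupBy("id").applyInPandas(normalize, schema="id long, v double").show()
```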
After (new iterator API):
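A sketch of the new iterator form (the `scale` function is illustrative; each DataFrame batch of a group is processed independently, so the whole group never needs to be materialized at once):

```python
from typing import Iterator
import pandas as pd

def scale(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    # The group arrives as an iterator of DataFrame batches.
    for pdf in batches:
        yield pdf.assign(v=pdf.v * 2)

df.groupBy("id").applyInPandas(scale, schema="id long, v double").show()
```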
With Grouping Keys:
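With grouping keys, the function takes the key tuple as its first argument (again illustrative; this sketch emits one row per batch):

```python
from typing import Any, Iterator, Tuple
import pandas as pd

def count_per_key(
    key: Tuple[Any, ...], batches: Iterator[pd.DataFrame]
) -> Iterator[pd.DataFrame]:
    # `key` holds the grouping values; batches still stream in one at a time.
    for pdf in batches:
        yield pd.DataFrame({"id": [key[0]], "cnt": [len(pdf)]})

df.groupBy("id").applyInPandas(count_per_key, schema="id long, cnt long").show()
```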
Backward Compatibility: The existing DataFrame-to-DataFrame API is fully preserved and continues to work without changes.
How was this patch tested?
- `test_apply_in_pandas_iterator_basic` - basic functionality test
- `test_apply_in_pandas_iterator_with_keys` - test with grouping keys
- `test_apply_in_pandas_iterator_batch_slicing` - pressure test with 10M rows, 20 columns
- `test_apply_in_pandas_iterator_with_keys_batch_slicing` - pressure test with keys

Was this patch authored or co-authored using generative AI tooling?
Yes, tests generated by Cursor.