4 changes: 2 additions & 2 deletions dev/infra/Dockerfile
@@ -91,10 +91,10 @@
RUN mkdir -p /usr/local/pypy/pypy3.8 && \
ln -sf /usr/local/pypy/pypy3.8/bin/pypy /usr/local/bin/pypy3.8 && \
ln -sf /usr/local/pypy/pypy3.8/bin/pypy /usr/local/bin/pypy3
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | pypy3
-RUN pypy3 -m pip install numpy 'six==1.16.0' 'pandas<=2.2.0' scipy coverage matplotlib lxml
+RUN pypy3 -m pip install numpy 'six==1.16.0' 'pandas<=2.2.1' scipy coverage matplotlib lxml


-ARG BASIC_PIP_PKGS="numpy pyarrow>=15.0.0 six==1.16.0 pandas<=2.2.0 scipy plotly>=4.8 mlflow>=2.8.1 coverage matplotlib openpyxl memory-profiler>=0.61.0 scikit-learn>=1.3.2"
+ARG BASIC_PIP_PKGS="numpy pyarrow>=15.0.0 six==1.16.0 pandas<=2.2.1 scipy plotly>=4.8 mlflow>=2.8.1 coverage matplotlib openpyxl memory-profiler>=0.61.0 scikit-learn>=1.3.2"
# Python deps for Spark Connect
ARG CONNECT_PIP_PKGS="grpcio==1.59.3 grpcio-status==1.59.3 protobuf==4.25.1 googleapis-common-protos==1.56.4"

2 changes: 1 addition & 1 deletion python/pyspark/pandas/supported_api_gen.py
@@ -38,7 +38,7 @@
MAX_MISSING_PARAMS_SIZE = 5
COMMON_PARAMETER_SET = {"kwargs", "args", "cls"}
MODULE_GROUP_MATCH = [(pd, ps), (pdw, psw), (pdg, psg)]
-PANDAS_LATEST_VERSION = "2.2.0"
+PANDAS_LATEST_VERSION = "2.2.1"
Contributor:
This issue predates this PR, but I note that this version requirement is stricter than our dev requirement:

pandas>=1.4.4

One consequence of this is that trying to build the Python API docs locally can fail as follows:

.../spark/python/pyspark/pandas/supported_api_gen.py:115:
  UserWarning: Warning: pandas 2.2.1 is required; your version is 2.1.4

We should perhaps align the dev requirement with this Pandas version requirement here.
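
(For context, a minimal sketch of the kind of check that emits this warning, assuming a plain string comparison against PANDAS_LATEST_VERSION; the actual logic in supported_api_gen.py may differ:)

    import warnings

    import pandas as pd

    PANDAS_LATEST_VERSION = "2.2.1"

    # Hypothetical sketch: warn when the installed pandas does not match
    # the version the support-matrix generator was written against.
    if pd.__version__ != PANDAS_LATEST_VERSION:
        warnings.warn(
            "Warning: pandas %s is required; your version is %s"
            % (PANDAS_LATEST_VERSION, pd.__version__)
        )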

Contributor Author:
hmm, there was some discussion on this in
https://github.com/apache/spark/pull/44881/files/9ae857a1b9c47dc12153cbf868e79ca2d0299a1d#diff-95a965e9b4d0ca83ab61f7af36659422910868431d05d68dc21dc8284e1c4b13
but why do you get 2.1.4? is there something else that downgrades it from 2.2.1?

Contributor:
There are multiple ways to end up with an older version. Since the requirement only asks for pandas>=1.4.4, a virtual environment set up months ago will contain a version that remains valid even if you rerun pip install -r ... today. Another way is, yes, a different requirement specifically preventing 2.2.1 from being installed.
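
(For illustration: pip's default upgrade strategy is "only-if-needed", so an installed pandas that still satisfies the range is left alone. A sketch of the two behaviors, with a hypothetical requirements file:)

    # requirements.txt contains only: pandas>=1.4.4
    $ pip install -r requirements.txt             # pandas 2.1.4 already satisfies this; pip leaves it untouched
    $ pip install --upgrade -r requirements.txt   # explicitly moves the listed packages to the newest matching releases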

The way we manage Python dependencies leaves a lot to be desired. I tried to fix it in the past (e.g. #27928) but failed to get enough committer support. I am inclined to try again, because I see a constant stream of commits just trying to keep the Python build working, and I think there should be a way to make this easier for everyone.

Contributor Author:
yes, but what about dev containers then?
Dev containers were open-sourced some months ago: Development Containers

Contributor:
I'm not familiar with Development Containers, but yes, there are probably many ways we can improve the situation.

What I advocated in #27928, and what I still believe is the best option for us today (with some tweaks to my original proposal), is to adopt pip-tools. That's because it's a very conservative approach that builds on our existing use of pip, and lets us focus on the technology-agnostic problem of separating Spark's direct dependencies from our build environment dependencies.
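
(For reference, a minimal pip-tools workflow; the requirements.in content below is hypothetical, not Spark's actual dependency split:)

    # requirements.in — direct dependencies only (hypothetical)
    pandas>=1.4.4
    pyarrow>=15.0.0

    $ pip install pip-tools
    $ pip-compile requirements.in   # resolves and writes requirements.txt with exact pins
    $ pip-sync requirements.txt     # installs/uninstalls so the venv matches the lock file exactly

(Because pip-compile regenerates the lock file deterministically, upgrading pandas becomes an explicit, reviewable diff rather than whatever pip happens to resolve on a given day.)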


RST_HEADER = """
=====================