[SPARK-41933][CONNECT] Provide local mode that automatically starts the server #39441
Conversation
Force-pushed from 5943010 to b1de5cc.
Got this logic from my own (unmerged) PR from a long time ago: #19643.
This code path will only be used in development and cannot be used in production. I will clean it up with some docs later.
Question though:
Does this util help if we want to support the `application_jar` in spark-submit once we enable Connect there?
Regardless, this logic would have to stay here because we automatically search for the jars on the server side (dev mode only). The jars are passed to the server side (since the remote side doesn't need this jar).
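A rough Python rendering of that jar search, for illustration only (the actual helper is a Scala utility on the server side, and the exact paths and names here are assumptions):

```python
import glob
import os


def find_connect_jar(spark_home: str) -> str:
    """Dev-mode sketch: locate a locally built Spark Connect assembly jar
    so the server can be started without the user passing --jars/--packages."""
    pattern = os.path.join(
        spark_home, "connector", "connect", "server", "target", "**", "spark-connect*.jar"
    )
    jars = glob.glob(pattern, recursive=True)
    if not jars:
        raise RuntimeError(
            "Spark Connect jar not found; build it first or pass --jars/--packages explicitly."
        )
    return jars[0]
```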
Ack. This class is private to the sql package, so defining this function seems safe for internal use only (`private[sql] object PythonSQLUtils`).
This wasn't actually testing Spark Connect (because `self.df` was created from the regular Spark session).
Force-pushed from 889d31a to 2f31685.
Force-pushed from a295e33 to 7241127.
I piggybacked this fix so that garbage-collected instances close the connection, avoiding a resource leak.
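A minimal sketch of that best-effort cleanup (class and attribute names here are illustrative, not the actual client implementation):

```python
import grpc


class RemoteSparkSession:
    """Illustrative stand-in for the Connect client session."""

    def __init__(self, target: str = "localhost:15002"):
        self._channel = grpc.insecure_channel(target)

    def close(self):
        self._channel.close()

    def __del__(self):
        # Best effort: when the instance is garbage-collected, close the
        # underlying channel so the connection is not leaked. An exception
        # elsewhere can still prevent this from ever running.
        try:
            self.close()
        except Exception:
            pass
```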
Question: in the case of an exception, might it still leak the resource?
Yup, it's just best-effort for now.
I see, thanks for the clarification.
Force-pushed from 0bcee0c to e655299.
Force-pushed from e655299 to 3c2852a.
Force-pushed from 5cd90a6 to db58506.
Force-pushed from db58506 to 4ae53eb.
Force-pushed from 4ae53eb to 5adc731.
```python
# Remove "--remote" option specified, and use plain arguments.
# NOTE that this is not used in regular PySpark application
# submission because JVM at this point is already running.
os.environ["PYSPARK_SUBMIT_ARGS"] = '"--name" "PySparkShell" "pyspark-shell"'
```
I think I should take another look to make sure all extra arguments are passed through here (although Spark Connect doesn't support all of them ...). Let me revisit this after merging this PR.
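For context, a hypothetical sketch of what "passing all extra arguments through" could look like here; the helper and argument names are assumptions, not the actual implementation (the real change is the config-forwarding follow-up described below):

```python
import os


def build_pyspark_submit_args(extra_confs: dict) -> str:
    """Rebuild PYSPARK_SUBMIT_ARGS so user-specified --conf values reach the
    locally started Connect server instead of being dropped."""
    parts = ["--name", "PySparkShell"]
    for key, value in extra_confs.items():
        parts += ["--conf", f"{key}={value}"]
    parts.append("pyspark-shell")
    return " ".join(f'"{p}"' for p in parts)


# e.g., the user ran: ./bin/pyspark --remote local --conf aaa=bbb
os.environ["PYSPARK_SUBMIT_ARGS"] = build_pyspark_submit_args({"aaa": "bbb"})
```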
All related tests passed. Merged to master.
### What changes were proposed in this pull request?

This PR mainly proposes to pass the user-specified configurations to local remote mode. Previously, all user-specified configurations were ignored in the PySpark shell case, such as `./bin/pyspark` or a plain Python interpreter (the PySpark application submission case was fine). Now, configurations are properly passed to the server side, e.g., `./bin/pyspark --remote local --conf aaa=bbb` properly passes `aaa=bbb` to the server side.

For `spark.master` and `spark.plugins`, user-specified configurations are respected. If they are unset, they are automatically set, e.g., `org.apache.spark.sql.connect.SparkConnectPlugin`. If they are set, users have to provide the proper values to overwrite them, meaning either:

```bash
./bin/pyspark --remote local --conf spark.plugins="other.Plugin,org.apache.spark.sql.connect.SparkConnectPlugin"
```

or

```bash
./bin/pyspark --remote local
```

In addition, this PR fixes the related code as below:

- Adds a `spark.local.connect` internal configuration to be used in Spark Submit (so we don't have to parse and manipulate user-specified arguments in Python in order to remove `--remote` or the `spark.remote` configuration).
- Adds some more validation on arguments in `SparkSubmitCommandBuilder` so invalid combinations fail fast (e.g., setting both remote and master like `--master ...` and `--conf spark.remote=...`).
- In dev mode, does not set `spark.jars` anymore, since the jars are added to the class path of the JVM through `addJarToCurrentClassLoader`.

### Why are the changes needed?

To correctly pass the configurations specified by users.

### Does this PR introduce _any_ user-facing change?

No, Spark Connect has not been released yet. This is kind of a follow-up of #39441 to complete its support.

### How was this patch tested?

Manually tested all combinations such as:

```bash
./bin/pyspark --conf spark.remote=local
./bin/pyspark --conf spark.remote=local --conf spark.jars=a
./bin/pyspark --conf spark.remote=local --jars /.../spark/connector/connect/server/target/scala-2.12/spark-connect-assembly-3.4.0-SNAPSHOT.jar
./bin/spark-submit --conf spark.remote=local --jars /.../spark/connector/connect/server/target/scala-2.12/spark-connect-assembly-3.4.0-SNAPSHOT.jar app.py
./bin/pyspark --conf spark.remote=local --conf spark.jars=/.../spark/connector/connect/server/target/scala-2.12/spark-connect-assembly-3.4.0-SNAPSHOT.jar
./bin/pyspark --master "local[*]" --remote "local"
./bin/spark-submit --conf spark.remote=local app.py
./bin/spark-submit --master="local[*]" --conf spark.remote=local app.py
./bin/spark-submit --master="local[*]" --remote=local app.py
./bin/pyspark --master "local[*]" --remote "local"
./bin/pyspark --master "local[*]" --remote "local"
./bin/pyspark --master "local[*]" --conf spark.remote="local"
./bin/spark-submit --master="local[*]" --remote=local app.py
./bin/pyspark --remote local
```

Closes #39463 from HyukjinKwon/SPARK-41933-conf.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?

This PR is a follow-up of #39441 to fix a wrong error message.

### Why are the changes needed?

Error message correction.

### Does this PR introduce _any_ user-facing change?

No, it is only about an error message.

### How was this patch tested?

The existing CI should pass.

Closes #40112 from itholic/SPARK-41933-followup.

Lead-authored-by: itholic <haejoon.lee@databricks.com>
Co-authored-by: Haejoon Lee <44108233+itholic@users.noreply.github.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(The same follow-up was cherry-picked to the release branch from commit 58efc4b.)
### What changes were proposed in this pull request?

This PR proposes a local mode for Spark Connect. It automatically starts a Spark session (bypassing the `local*` master string) that launches the Spark Connect server, which introduces the two user-facing changes below. Notice that local mode exactly follows the regular PySpark session's stop behavior by terminating the server (whereas non-local mode would not close the server and other sessions). See also the newly added comments for `pyspark.sql.connect.SparkSession.stop`.

**Local build of Apache Spark (for developers)**
Automatically finds the jars for Spark Connect (because the jars for Spark Connect are not bundled in the regular Apache Spark release).
PySpark shell:

```bash
pyspark --remote local
```

PySpark application submission:

```bash
spark-submit --remote "local[4]" app.py
```

Use it as a Python library:
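A sketch of the library usage, assuming the standard PySpark Connect builder API; the local-mode behavior is as described above:

```python
from pyspark.sql import SparkSession

# "local" starts the Spark Connect server in the background and connects to it
# (in a dev build, the Connect jars are found automatically as described above).
spark = SparkSession.builder.remote("local").getOrCreate()

spark.range(5).show()

# In local mode, stop() also terminates the automatically started server;
# see the notes on pyspark.sql.connect.SparkSession.stop above.
spark.stop()
```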
**Official release of Apache Spark (for end-users)**

Users must specify jars or packages; jars are not automatically searched.

PySpark shell:

```bash
pyspark --packages org.apache.spark:spark-connect_2.12:3.4.0 --remote local
```

PySpark application submission:

```bash
spark-submit --packages org.apache.spark:spark-connect_2.12:3.4.0 --remote "local[4]" app.py
```

Use it as a Python library:
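For the release case, a hedged library sketch follows; since the Connect jars are not bundled, the published package has to be supplied explicitly, and whether a builder-level config is forwarded to the locally started server depends on the config-forwarding follow-up described above:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .remote("local")
    # The Connect server jars are not bundled in the release, so point the
    # locally started server at the published package (the same version as in
    # the shell examples above).
    .config("spark.jars.packages", "org.apache.spark:spark-connect_2.12:3.4.0")
    .getOrCreate()
)

spark.range(5).show()
spark.stop()
```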
### Why are the changes needed?

To provide an easier mode to try Spark Connect for both developers and end users.

### Does this PR introduce any user-facing change?

No for end users, because Spark Connect has not been released yet. For developers, yes; see the examples above.

### How was this patch tested?

Unit tests were refactored to use and test this feature (which also deduplicated the code).