[BUG] significant slow down with ParquetCachedBatchSerializer and pyspark CrossValidator #5975
Comments
I guess it is probably an issue in …
It is not specific to the xgboost pyspark code. I just happened to encounter the issue when trying that.
But I'm happy to see you tried the xgboost pyspark code.
Describe the bug
First observed when attempting to run pyspark's CrossValidator + VectorAssembler + the pyspark version of XGBoost under review in this PR: dmlc/xgboost#8020. Parts of this should fall back to the CPU due to the VectorUDT column injected by VectorAssembler. However, the running time of certain steps jumps from a few minutes to over an hour with ParquetCachedBatchSerializer enabled vs. disabled, with the spark-rapids plugin enabled in both cases. I attempted to reproduce this in a more self-contained manner with the code snippet below, which incorporates some of the relevant logic from CrossValidator and XGBoost.
Steps/Code to reproduce bug
In my environment, this bit of code takes a few seconds to run in `spark-shell` with ParquetCachedBatchSerializer disabled, but almost 2 minutes when enabled.

Another issue with this example: if the line `val df3 = ...` is replaced with `val df3 = df2.withColumn("filter", rand()).filter($"filter" < 0.5)` (i.e., no VectorUDT column added), an ArrayIndexOutOfBoundsException is thrown with ParquetCachedBatchSerializer enabled, while no error occurs with it disabled.

A pyspark version of the above example shows similar behavior.
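The snippet referenced above is not included in this excerpt. As a rough sketch only, a minimal spark-shell repro along the lines described (VectorAssembler injecting a VectorUDT column, a cached DataFrame, and a random filter mimicking CrossValidator's fold split) might look like this; the schema, sizes, and names `df`/`df2`/`df3` are assumptions, not the issue's actual code:

```scala
// Hypothetical reconstruction of the described repro; the actual snippet
// from the issue is not included in this excerpt.
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.rand

// A few numeric columns to feed VectorAssembler (sizes are arbitrary).
val df = spark.range(0L, 1000000L).selectExpr("id", "id * 2 as a", "id * 3 as b")

// VectorAssembler produces a VectorUDT column, which forces parts of the
// plan to fall back to the CPU under spark-rapids.
val assembler = new VectorAssembler()
  .setInputCols(Array("a", "b"))
  .setOutputCol("features")
val df2 = assembler.transform(df).cache()

// Random row filtering, similar to what CrossValidator does for fold splits.
val df3 = df2.withColumn("filter", rand()).filter($"filter" < 0.5)

// Force evaluation so the cache serializer is exercised.
df3.count()
```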
Expected behavior
Much smaller performance penalty with ParquetCachedBatchSerializer enabled in this example, which should resolve the main issue encountered with pyspark CrossValidator.
Environment details (please complete the following information)
I then remove
--conf spark.sql.cache.serializer=com.nvidia.spark.ParquetCachedBatchSerializer
to disable ParquetCachedBatchSerializer.
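For context, a typical spark-shell launch configuration for toggling the serializer is sketched below; the jar path and its version are placeholders, not taken from the issue:

```shell
# Sketch: spark-shell with the spark-rapids plugin and
# ParquetCachedBatchSerializer both enabled. The jar path is a placeholder.
spark-shell \
  --jars /path/to/rapids-4-spark.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.sql.cache.serializer=com.nvidia.spark.ParquetCachedBatchSerializer

# Removing the spark.sql.cache.serializer line disables
# ParquetCachedBatchSerializer while keeping the plugin itself enabled.
```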