Conversation

@LuciferYang
Contributor

@LuciferYang LuciferYang commented Aug 4, 2021

What changes were proposed in this pull request?

We always perform a file truncate operation before deleting a write-failed file held by `DiskBlockObjectWriter`; a typical flow is as follows:

if (!success) {
  // This code path only happens if an exception was thrown above before we set success;
  // close our stuff and let the exception be thrown further
  writer.revertPartialWritesAndClose()
  if (file.exists()) {
    if (!file.delete()) {
      logWarning(s"Error deleting ${file}")
    }
  }
}

The `revertPartialWritesAndClose` method reverts writes that haven't been committed yet, but that work isn't necessary in this scenario since the file is deleted immediately afterwards.

So this PR adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`; the new method just reverts the write metrics and deletes the write-failed file.
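For reference, here is a minimal sketch of what the failure path above looks like once it uses the new method. It is based on the PR description only; the exact call sites in the merged change may differ slightly.

```scala
if (!success) {
  // This code path only happens if an exception was thrown above before we set success;
  // closeAndDelete() reverts this writer's write metrics and deletes the write-failed
  // file, without first truncating uncommitted data.
  writer.closeAndDelete()
}
```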

Why are the changes needed?

Avoid unnecessary file operations.

Does this PR introduce any user-facing change?

Adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`.

How was this patch tested?

Pass the Jenkins or GitHub Actions checks.

@github-actions github-actions bot added the CORE label Aug 4, 2021
@LuciferYang LuciferYang changed the title [SPARK-36406][CORE] Avoid unnecessary file operations before delete a write failed file held by DiskBlockObjectWriter [WIP][SPARK-36406][CORE] Avoid unnecessary file operations before delete a write failed file held by DiskBlockObjectWriter Aug 4, 2021
@LuciferYang LuciferYang marked this pull request as draft August 4, 2021 03:45
* by current `DiskBlockObjectWriter`. Callers should invoke this function when there
* are runtime exceptions in file writing process and the file is no longer needed.
*/
def deleteHeldFile(): Unit = {
Contributor Author

@Ngone51 I compromised and encapsulated the process of deleting the file held by DiskBlockObjectWriter into this method, haha

Contributor

rename to closeAndDelete ?

Contributor Author

ok

@SparkQA

SparkQA commented Aug 4, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/46527/

@SparkQA

SparkQA commented Aug 4, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/46527/

@SparkQA

SparkQA commented Aug 4, 2021

Test build #142017 has finished for PR 33628 at commit 9f5afd4.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@LuciferYang LuciferYang marked this pull request as ready for review August 4, 2021 07:22
@LuciferYang LuciferYang changed the title [WIP][SPARK-36406][CORE] Avoid unnecessary file operations before delete a write failed file held by DiskBlockObjectWriter [SPARK-36406][CORE] Avoid unnecessary file operations before delete a write failed file held by DiskBlockObjectWriter Aug 4, 2021
}
} {
if (file.exists()) {
if (!file.delete()) {
Contributor

Use java.nio.file.Files.deleteIfExists ?
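For illustration, a small hedged sketch of the suggested pattern; `println` stands in for Spark's `logWarning`, which would be available via the `Logging` trait in the real class.

```scala
import java.io.File
import java.nio.file.Files

// Files.deleteIfExists collapses the exists()/delete() pair into a single call:
// it returns whether a file was actually removed and throws an IOException on
// failure instead of silently returning false.
def safeDelete(file: File): Unit = {
  try {
    Files.deleteIfExists(file.toPath)
  } catch {
    case e: Exception =>
      println(s"Error deleting $file: ${e.getMessage}") // stand-in for logWarning
  }
}
```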

Contributor Author

25828f2 fixes these comments.

@mridulm
Contributor

mridulm commented Aug 4, 2021

+CC @Ngone51

@LuciferYang
Contributor Author

For the new method, should we add some tests to check `ShuffleWriteMetrics`? I'll add them if necessary.

@mridulm
Contributor

mridulm commented Aug 4, 2021

That is a good question about the metrics update, and I was unsure if we were doing the right thing to begin with.
Given we are deleting the file, should we remove the entire file size? (It is a bit late for me, so I might be misreading the code as well; if yes, apologies!)

Thoughts @Ngone51 ?
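For context, a sketch of the kind of test being discussed, assuming a hypothetical `createWriter()` setup helper that returns the writer, its file, and its `ShuffleWriteMetrics` (the real suite has its own setup code):

```scala
test("closeAndDelete reverts write metrics and removes the file") {
  // createWriter() is a hypothetical helper for this sketch
  val (writer, file, writeMetrics) = createWriter()
  writer.write(Long.box(1), Long.box(1))
  writer.closeAndDelete()
  assert(!file.exists())
  // nothing should remain attributed to the deleted file
  assert(writeMetrics.bytesWritten === 0)
  assert(writeMetrics.recordsWritten === 0)
}
```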

@SparkQA

SparkQA commented Aug 4, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/46543/

@SparkQA

SparkQA commented Aug 4, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/46543/

@SparkQA

SparkQA commented Aug 4, 2021

Test build #142031 has finished for PR 33628 at commit 25828f2.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@LuciferYang
Contributor Author

cc @Ngone51

@LuciferYang
Contributor Author

dcfcc47 merges with master.

@SparkQA

SparkQA commented Aug 23, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47189/

@SparkQA

SparkQA commented Aug 23, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47189/

@SparkQA

SparkQA commented Aug 23, 2021

Test build #142687 has finished for PR 33628 at commit dcfcc47.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Sep 3, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47459/

@SparkQA

SparkQA commented Sep 3, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47459/

@SparkQA

SparkQA commented Sep 3, 2021

Test build #142961 has finished for PR 33628 at commit 07f475b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Sep 14, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47761/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50802/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50802/

@SparkQA

SparkQA commented Dec 17, 2021

Test build #146328 has finished for PR 33628 at commit 4b3b72b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50804/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50804/

@SparkQA

SparkQA commented Dec 17, 2021

Test build #146330 has finished for PR 33628 at commit 9db7115.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50809/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50809/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50811/

@SparkQA

SparkQA commented Dec 17, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50811/

@SparkQA

SparkQA commented Dec 17, 2021

Test build #146335 has finished for PR 33628 at commit a1e94b5.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Dec 17, 2021

Test build #146337 has finished for PR 33628 at commit 8fefa03.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Contributor

@mridulm mridulm left a comment

Change looks good to me.
Will let @attilapiros review/commit though.

Contributor

@attilapiros attilapiros left a comment

Just a comment needs to be updated; otherwise LGTM.

@SparkQA

SparkQA commented Dec 20, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50860/

@SparkQA

SparkQA commented Dec 20, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/50860/

@SparkQA

SparkQA commented Dec 20, 2021

Test build #146385 has finished for PR 33628 at commit c98ff1d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@attilapiros
Contributor

[info] *** 4 TESTS FAILED ***
[error] Failed tests:
[error] 	org.apache.spark.deploy.yarn.YarnClusterSuite
[error] (yarn / Test / test) sbt.TestsFailedException: Tests unsuccessful

But these failures seem to be unrelated, as locally they run successfully on commit c98ff1d:

[info] YarnClusterSuite:
[info] - run Spark in yarn-client mode (16 seconds, 260 milliseconds)
[info] - run Spark in yarn-cluster mode (15 seconds, 117 milliseconds)
[info] - run Spark in yarn-client mode with unmanaged am (12 seconds, 83 milliseconds)
[info] - run Spark in yarn-client mode with different configurations, ensuring redaction (14 seconds, 90 milliseconds)
[info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction (14 seconds, 95 milliseconds)
[info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (14 seconds, 99 milliseconds)
[info] - SPARK-35672: run Spark in yarn-client mode with additional jar using URI scheme 'local' (13 seconds, 120 milliseconds)
[info] - SPARK-35672: run Spark in yarn-cluster mode with additional jar using URI scheme 'local' (13 seconds, 104 milliseconds)
[info] - SPARK-35672: run Spark in yarn-client mode with additional jar using URI scheme 'local' and gateway-replacement path (14 seconds, 90 milliseconds)
[info] - SPARK-35672: run Spark in yarn-cluster mode with additional jar using URI scheme 'local' and gateway-replacement path (14 seconds, 89 milliseconds)
[info] - SPARK-35672: run Spark in yarn-cluster mode with additional jar using URI scheme 'local' and gateway-replacement path containing an environment variable (15 seconds, 105 milliseconds)
[info] - SPARK-35672: run Spark in yarn-client mode with additional jar using URI scheme 'file' (14 seconds, 105 milliseconds)
[info] - SPARK-35672: run Spark in yarn-cluster mode with additional jar using URI scheme 'file' (16 seconds, 77 milliseconds)
[info] - run Spark in yarn-cluster mode unsuccessfully (12 seconds, 112 milliseconds)
[info] - run Spark in yarn-cluster mode failure after sc initialized (22 seconds, 112 milliseconds)
[info] - run Python application in yarn-client mode (23 seconds, 117 milliseconds)
[info] - run Python application in yarn-cluster mode (17 seconds, 86 milliseconds)
[info] - run Python application in yarn-cluster mode using spark.yarn.appMasterEnv to override local envvar (18 seconds, 127 milliseconds)
[info] - user class path first in client mode (13 seconds, 129 milliseconds)
[info] - user class path first in cluster mode (13 seconds, 89 milliseconds)
[info] - monitor app using launcher library (7 seconds, 711 milliseconds)
[info] - running Spark in yarn-cluster mode displays driver log links (44 seconds, 168 milliseconds)
[info] - timeout to get SparkContext in cluster mode triggers failure (14 seconds, 81 milliseconds)
[info] - executor env overwrite AM env in client mode (13 seconds, 95 milliseconds)
[info] - executor env overwrite AM env in cluster mode (12 seconds, 92 milliseconds)
[info] - SPARK-34472: ivySettings file with no scheme or file:// scheme should be localized on driver in cluster mode (26 seconds, 186 milliseconds)
[info] - SPARK-34472: ivySettings file with no scheme or file:// scheme should retain user provided path in client mode (24 seconds, 154 milliseconds)
[info] - SPARK-34472: ivySettings file with non-file:// schemes should throw an error (4 seconds, 105 milliseconds)
[info] Run completed in 7 minutes, 57 seconds.
[info] Total number of tests run: 28
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 28, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.

@attilapiros
Contributor

Merged to master.

@LuciferYang
Contributor Author

thanks @attilapiros @mridulm

wangyum pushed a commit that referenced this pull request May 26, 2023
* [SPARK-36992][SQL] Improve byte array sort perf by unify getPrefix function of UTF8String and ByteArray

### What changes were proposed in this pull request?

Unify the getPrefix function of `UTF8String` and `ByteArray`.

### Why are the changes needed?

When executing a sort operator, we first compare the prefixes. However, the getPrefix function of byte array is slow: we use the first 8 bytes as the prefix, so at most we call `Platform.getByte` 8 times, which is slower than calling `Platform.getInt` or `Platform.getLong` once.
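For illustration, a self-contained sketch of the prefix idea; the actual Spark code reads words via `Platform` rather than `ByteBuffer`, but the effect is the same: one 8-byte read instead of eight 1-byte reads.

```scala
import java.nio.{ByteBuffer, ByteOrder}

// Read the first 8 bytes as one big-endian long so that unsigned comparison of
// prefixes agrees with lexicographic comparison of the underlying bytes.
def bytePrefix(bytes: Array[Byte]): Long = {
  if (bytes.length >= 8) {
    ByteBuffer.wrap(bytes, 0, 8).order(ByteOrder.BIG_ENDIAN).getLong
  } else {
    // shorter arrays: pack what is there and zero-pad the low-order bytes
    var p = 0L
    var i = 0
    while (i < bytes.length) {
      p = (p << 8) | (bytes(i) & 0xffL)
      i += 1
    }
    p << (8 * (8 - bytes.length))
  }
}
```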

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

pass `org.apache.spark.util.collection.unsafe.sort.PrefixComparatorsSuite`

Closes #34267 from ulysses-you/binary-prefix.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>

* [SPARK-37037][SQL] Improve byte array sort by unify compareTo function of UTF8String and ByteArray

### What changes were proposed in this pull request?

Unify the compare function of `UTF8String` and `ByteArray`.

### Why are the changes needed?

`BinaryType` uses `TypeUtils.compareBinary` to compare two byte arrays; however, it's slow since it compares the arrays byte by byte using unsigned int comparison.

We can compare them 8 bytes at a time using `Platform.getLong` with unsigned long comparison while at least 8 bytes remain. Here is some history about this `TODO`: https://github.com/apache/spark/pull/6755/files#r32197461
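A rough, self-contained sketch of the word-at-a-time comparison; again, the real `UTF8String`/`ByteArray` code reads words via `Platform`, and the `ByteBuffer` reads here are only for illustration.

```scala
import java.nio.{ByteBuffer, ByteOrder}

def compareBinary(a: Array[Byte], b: Array[Byte]): Int = {
  val minLen = math.min(a.length, b.length)
  val wordLen = minLen - (minLen % 8)
  var i = 0
  // compare 8 bytes at a time as unsigned big-endian longs
  while (i < wordLen) {
    val x = ByteBuffer.wrap(a, i, 8).order(ByteOrder.BIG_ENDIAN).getLong
    val y = ByteBuffer.wrap(b, i, 8).order(ByteOrder.BIG_ENDIAN).getLong
    if (x != y) return java.lang.Long.compareUnsigned(x, y)
    i += 8
  }
  // fall back to byte-at-a-time comparison for the remaining tail
  while (i < minLen) {
    val cmp = java.lang.Integer.compare(a(i) & 0xff, b(i) & 0xff)
    if (cmp != 0) return cmp
    i += 1
  }
  java.lang.Integer.compare(a.length, b.length)
}
```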

The benchmark result should be the same as for `UTF8String`; it can be found in #19180 (#19180 (comment)).

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Move test from `TypeUtilsSuite` to `ByteArraySuite`

Closes #34310 from ulysses-you/SPARK-37037.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>

* [SPARK-37341][SQL] Avoid unnecessary buffer and copy in full outer sort merge join

### What changes were proposed in this pull request?

FULL OUTER sort merge join (non-code-gen path) [copies join keys and buffers input rows, even when rows from both sides do not have matched keys](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala#L1637-L1641). This is unnecessary, as we can just output the row with the smaller join keys, and only buffer when both sides have matched keys. This saves us from unnecessary copying and buffering when both join sides have many rows that do not match each other.
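A toy, heavily simplified sketch of this buffering policy (not Spark's `SortMergeJoinExec`; real rows, join keys, null handling, and spilling are omitted): rows whose key has no match on the other side are emitted immediately, and buffering only happens when the keys of both sides are equal.

```scala
import scala.collection.mutable.ArrayBuffer

def fullOuterMerge[K, V](left: Iterator[(K, V)], right: Iterator[(K, V)])(
    implicit ord: Ordering[K]): Iterator[(Option[V], Option[V])] = {
  val l = left.buffered
  val r = right.buffered
  var pending: Iterator[(Option[V], Option[V])] = Iterator.empty

  new Iterator[(Option[V], Option[V])] {
    def hasNext: Boolean = pending.hasNext || l.hasNext || r.hasNext
    def next(): (Option[V], Option[V]) = {
      if (pending.hasNext) pending.next()
      else if (!l.hasNext) (None, Some(r.next()._2))       // right side only: emit directly
      else if (!r.hasNext) (Some(l.next()._2), None)       // left side only: emit directly
      else {
        val cmp = ord.compare(l.head._1, r.head._1)
        if (cmp < 0) (Some(l.next()._2), None)             // unmatched left key: no buffering
        else if (cmp > 0) (None, Some(r.next()._2))        // unmatched right key: no buffering
        else {
          // matched keys: only now buffer both groups and emit their cross product
          val k = l.head._1
          val lv = ArrayBuffer.empty[V]
          while (l.hasNext && ord.equiv(l.head._1, k)) lv += l.next()._2
          val rv = ArrayBuffer.empty[V]
          while (r.hasNext && ord.equiv(r.head._1, k)) rv += r.next()._2
          pending = (for (a <- lv; b <- rv) yield (Option(a), Option(b))).iterator
          pending.next()
        }
      }
    }
  }
}
```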

### Why are the changes needed?

Improve query performance for FULL OUTER sort merge join when code-gen is disabled.
This benefits queries where both sides have many unmatched rows and the join key is large (e.g. string type).

Example micro benchmark:

```
  def sortMergeJoin(): Unit = {
    val N = 2 << 20
    codegenBenchmark("sort merge join", N) {
      val df1 = spark.range(N).selectExpr(s"cast(id * 15485863 as string) as k1")
      val df2 = spark.range(N).selectExpr(s"cast(id * 15485867 as string) as k2")
      val df = df1.join(df2, col("k1") === col("k2"), "full_outer")
      assert(df.queryExecution.sparkPlan.find(_.isInstanceOf[SortMergeJoinExec]).isDefined)
      df.noop()
    }
  }
```

Seeing run-time improvement over 60%:

```
Running benchmark: sort merge join
  Running case: sort merge join without optimization
  Stopped after 5 iterations, 10026 ms
  Running case: sort merge join with optimization
  Stopped after 5 iterations, 5954 ms

Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13 on Mac OS X 10.16
Intel(R) Core(TM) i9-9980HK CPU  2.40GHz
sort merge join:                          Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
sort merge join without optimization               1807           2005         157          1.2         861.4       1.0X
sort merge join with optimization                  1135           1191          62          1.8         541.1       1.6X
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing unit tests e.g. `OuterJoinSuite.scala`.

Closes #34612 from c21/smj-fix.

Authored-by: Cheng Su <chengsu@fb.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-37447][SQL] Cache LogicalPlan.isStreaming() result in a lazy val

### What changes were proposed in this pull request?

This PR adds caching to `LogicalPlan.isStreaming()`: the default implementation's result will now be cached in a `private lazy val`.

### Why are the changes needed?

This improves the performance of the `DeduplicateRelations` analyzer rule.

The default implementation of `isStreaming` recursively visits every node in the tree. `DeduplicateRelations.renewDuplicatedRelations` is recursively invoked on every node in the tree and each invocation calls `isStreaming`. This leads to `O(n^2)` invocations of `isStreaming` on leaf nodes.

Caching `isStreaming` avoids this performance problem.
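A minimal sketch of the caching pattern on a toy tree (not the actual `LogicalPlan` class): the recursive result is computed at most once per node and memoized in a lazy val, so repeated calls from an analyzer rule stay cheap.

```scala
abstract class Node {
  def children: Seq[Node]
  // computed once per node instead of on every call
  lazy val isStreaming: Boolean = children.exists(_.isStreaming)
}

case class Leaf(streaming: Boolean) extends Node {
  def children: Seq[Node] = Nil
  override lazy val isStreaming: Boolean = streaming
}

case class Branch(children: Seq[Node]) extends Node
```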

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Correctness should be covered by existing tests.

This significantly improved `DeduplicateRelations` performance in local microbenchmarking with large query plans (~20% reduction in that rule's runtime in one of my tests).

Closes #34691 from JoshRosen/cache-LogicalPlan.isStreaming.

Authored-by: Josh Rosen <joshrosen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-37530][CORE] Spark reads many paths very slow though newAPIHadoopFile

### What changes were proposed in this pull request?

As in #18441, we parallelize `FileInputFormat.listStatus` for `newAPIHadoopFile`.

### Why are the changes needed?

![image](https://user-images.githubusercontent.com/8326978/144562490-d8005bf2-2052-4b50-9a5d-8b253ee598cc.png)

Spark can be slow when accessing external storage from the driver side; we improve performance by parallelizing the path listing.
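A hedged sketch of the general idea, listing many paths concurrently with an ordinary thread pool; the pool size here is an arbitrary illustration, not the configuration used by the PR.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}

// List many input paths concurrently instead of one by one on the driver.
def parallelListStatus(paths: Seq[Path], conf: Configuration): Seq[FileStatus] = {
  val pool = Executors.newFixedThreadPool(8) // pool size chosen arbitrarily for this sketch
  implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
  try {
    val futures = paths.map { p =>
      Future {
        val fs = p.getFileSystem(conf)
        fs.listStatus(p).toSeq
      }
    }
    Await.result(Future.sequence(futures), Duration.Inf).flatten
  } finally {
    pool.shutdown()
  }
}
```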

### Does this PR introduce _any_ user-facing change?

no
### How was this patch tested?

passing GA

Closes #34792 from yaooqinn/SPARK-37530.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>

* [SPARK-37592][SQL] Improve performance of `JoinSelection`

While reading the AQE implementation, I found that the process of selecting a join with hints contains a lot of cumbersome code.

Join hints have a relatively high learning curve for users, so in most cases SQL queries do not contain join hints.

Improve performance of `JoinSelection`

No.
This just changes the internal implementation.

Jenkins test.

Closes #34844 from beliefer/SPARK-37592-new.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-37646][SQL] Avoid touching Scala reflection APIs in the lit function

### What changes were proposed in this pull request?

This PR proposes to avoid touching Scala reflection APIs in the lit function.

### Why are the changes needed?

Currently `lit` calls `typedlit[Any]` and touches Scala reflection APIs unnecessarily. Scala reflection APIs take multiple global locks, so they become quite slow when parallelism is high.

This PR inlines `typedlit` to `lit` and replaces `Literal.create` with `Literal.apply` to avoid touching Scala reflection APIs. There is no behavior change.
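A small sketch of the distinction; the `Column` constructor and `Literal` factory signatures are assumed from the Catalyst API and may differ slightly across versions.

```scala
import org.apache.spark.sql.Column
import org.apache.spark.sql.catalyst.expressions.Literal

// Literal.apply pattern-matches on the runtime value, so no Scala reflection is involved.
val direct = new Column(Literal(0L))

// Literal.create goes through a TypeTag and ScalaReflection, which take global
// locks and become a bottleneck when many threads build literals concurrently.
val reflective = new Column(Literal.create(0L))
```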

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

- New unit tests.
- Manually ran the test in https://issues.apache.org/jira/browse/SPARK-37646 and saw no difference between `new Column(Literal(0L))` and `lit(0L)`.

Closes #34901 from zsxwing/SPARK-37646.

Lead-authored-by: Shixiong Zhu <zsxwing@gmail.com>
Co-authored-by: Shixiong Zhu <shixiong@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>

* [SPARK-37689][SQL] Expand should be supported in PropagateEmptyRelation

We hit a case where, even though there is an empty relation, `HashAggregateExec` is still triggered to execute and returns an empty result. This is not necessary.
![image](https://user-images.githubusercontent.com/46485123/146725110-27496536-f1f7-4fac-ae2c-0f6f81159bba.png)
It's caused by an `Expand(EmptyLocalRelation())` that is not propagated; this PR supports propagating `Expand` over an empty `LocalRelation`.
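A toy, self-contained sketch of the propagation idea (not the actual `PropagateEmptyRelation` rule): when `Expand` sits on top of an empty relation, the whole subtree can be replaced by an empty relation instead of being executed.

```scala
// Toy plan ADT, not Catalyst.
sealed trait Plan
case class LocalRelation(rows: Seq[Any]) extends Plan
case class Expand(child: Plan) extends Plan
case class Aggregate(child: Plan) extends Plan

def isEmpty(plan: Plan): Boolean = plan match {
  case LocalRelation(rows) => rows.isEmpty
  case _ => false
}

def propagateEmpty(plan: Plan): Plan = plan match {
  case Expand(child) if isEmpty(propagateEmpty(child)) => LocalRelation(Nil)
  case Expand(child) => Expand(propagateEmpty(child))
  case Aggregate(child) => Aggregate(propagateEmpty(child))
  case leaf => leaf
}
```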

Avoid unnecessary execution.

No

Added UT

Closes #34954 from AngersZhuuuu/SPARK-37689.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-36406][CORE] Avoid unnecessary file operations before delete a write failed file held by DiskBlockObjectWriter

We always perform a file truncate operation before deleting a write-failed file held by `DiskBlockObjectWriter`; a typical flow is as follows:

```
if (!success) {
  // This code path only happens if an exception was thrown above before we set success;
  // close our stuff and let the exception be thrown further
  writer.revertPartialWritesAndClose()
  if (file.exists()) {
    if (!file.delete()) {
      logWarning(s"Error deleting ${file}")
    }
  }
}
```
The `revertPartialWritesAndClose` method reverts writes that haven't been committed yet, but that work isn't necessary in this scenario since the file is deleted immediately afterwards.

So this PR adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`; the new method just reverts the write metrics and deletes the write-failed file.

Avoid unnecessary file operations.

Adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`.

Pass the Jenkins or GitHub Actions checks.

Closes #33628 from LuciferYang/SPARK-36406.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: attilapiros <piros.attila.zsolt@gmail.com>

* [SPARK-37462][CORE] Avoid unnecessary calculating the number of outstanding fetch requests and RPCS

Avoid unnecessarily calculating the number of outstanding fetch requests and RPCs.

It is unnecessary to calculate the number of outstanding fetch requests and RPCs when the IdleStateEvent is not IDLE or the last request has not timed out.

No.
Existing unit tests.

Closes #34711 from weixiuli/SPARK-37462.

Authored-by: weixiuli <weixiuli@jd.com>
Signed-off-by: Sean Owen <srowen@gmail.com>

Co-authored-by: ulysses-you <ulyssesyou18@gmail.com>
Co-authored-by: Cheng Su <chengsu@fb.com>
Co-authored-by: Josh Rosen <joshrosen@databricks.com>
Co-authored-by: Kent Yao <yao@apache.org>
Co-authored-by: Jiaan Geng <beliefer@163.com>
Co-authored-by: Shixiong Zhu <zsxwing@gmail.com>
Co-authored-by: Shixiong Zhu <shixiong@databricks.com>
Co-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: yangjie01 <yangjie01@baidu.com>
Co-authored-by: weixiuli <weixiuli@jd.com>
@LuciferYang LuciferYang deleted the SPARK-36406 branch October 22, 2023 07:43