[WIP] Move ClientE2ETestSuite into a separate module #39788

Conversation
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-connect_${scala.binary.version}</artifactId>
should be <scope>test</scope>
      <scope>test</scope>
      <version>${project.version}</version>
    </dependency>
    <dependency>
should be <scope>test</scope>
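
For illustration, a sketch of what the suggested fix might look like (the artifact coordinates are assumed from the surrounding pom.xml, not confirmed by this diff):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-connect_${scala.binary.version}</artifactId>
      <version>${project.version}</version>
      <!-- test scope keeps this jar off the module's compile classpath
           and out of its published (transitive) dependencies -->
      <scope>test</scope>
    </dependency>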
object SparkConnectClientTests {
  lazy val settings = Seq(
    excludeDependencies ++= {
excludeDependencies does not work here. There is a chance that sbt will still fail to compile because spark-sql remains on the classpath, which causes class conflicts like the following:
[error] /${basedir}/spark-mine/connector/connect/client/integration-tests/src/test/scala/org/apache/spark/sql/ClientE2ETestSuite.scala:27:21: value collectResult is not a member of org.apache.spark.sql.DataFrame
[error] val schema = df.collectResult().schema
[error] ^
[error] /${basedir}/spark-mine/connector/connect/client/integration-tests/src/test/scala/org/apache/spark/sql/ClientE2ETestSuite.scala:33:21: value collectResult is not a member of org.apache.spark.sql.DataFrame
[error] val result = df.collectResult()
[error] ^
[error] /${basedir}/spark-mine/connector/connect/client/integration-tests/src/test/scala/org/apache/spark/sql/ClientE2ETestSuite.scala:43:21: value collectResult is not a member of org.apache.spark.sql.Dataset[Long]
[error] val result = df.collectResult()
[error] ^
[error] /${basedir}/spark-mine/connector/connect/client/integration-tests/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:152:36: value client is not a member of org.apache.spark.sql.SparkSession.Builder
[error] spark = SparkSession.builder().client(SparkConnectClient.builder().port(port).build()).build()
[error] ^
[error] /${basedir}/spark-mine/connector/connect/client/integration-tests/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:165:12: value collectResult is not a member of org.apache.spark.sql.DataFrame
[error] possible cause: maybe a semicolon is missing before `value collectResult'?
[error] .collectResult()
[error] ^
[error] 5 errors found
[error] (connect-client-integration-tests / Test / compileIncremental) Compilation failed
[error] Total time: 47 s, completed 2023-1-29 12:41:23
I haven't been able to solve this yet.
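
For context, a minimal sketch of how excludeDependencies is normally used in sbt (the coordinates below are illustrative, not the actual SparkBuild.scala change). The limitation is that excludeDependencies only filters resolved library dependencies; classes that arrive through an inter-project dependsOn(...) edge, as spark-sql does here, are unaffected, so the server-side org.apache.spark.sql classes can still shadow the client's:

    // Hypothetical build.sbt settings block, not the actual Spark build:
    // this removes the named module from resolved library dependencies only;
    // a project dependency on spark-sql still puts its classes on the classpath.
    excludeDependencies ++= Seq(
      ExclusionRule(organization = "org.apache.spark", name = "spark-sql_2.12")
    )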
Just experimenting for now; I haven't fully figured out a better way to do this yet.

Closing this one in favor of #39807.
What changes were proposed in this pull request?
Why are the changes needed?
Does this PR introduce any user-facing change?
How was this patch tested?