[SPARK-31819][K8S][DOCS][TESTS] Add a workaround for Java 8u251+ and update integration test cases
dongjoon-hyun committed May 26, 2020
1 parent 1d1a207 commit 1b1d869
Showing 4 changed files with 9 additions and 1 deletion.
2 changes: 2 additions & 0 deletions docs/index.md
@@ -39,6 +39,8 @@ Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 we
Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1
and will be removed in Spark 3.0.

For Java 8u251+, `HTTP2_DISABLE=true` and `spark.kubernetes.driverEnv.HTTP2_DISABLE=true` are additionally required for the fabric8 `kubernetes-client` library to talk to Kubernetes clusters. This prevents `KubernetesClientException` when the `kubernetes-client` library uses the `okhttp` library internally.

# Running the Examples and Shell

Spark comes with several sample programs. Scala, Java, Python and R examples are in the
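As a rough sketch of how the two settings above fit together when configuring an application for Kubernetes (the master URL and container image below are illustrative placeholders, not part of this commit):

```scala
import org.apache.spark.SparkConf

// Illustrative configuration for an application submitted to Kubernetes.
val conf = new SparkConf()
  .setMaster("k8s://https://kubernetes.default.svc")        // placeholder master URL
  .set("spark.kubernetes.container.image", "spark:latest")  // placeholder image
  // Propagates HTTP2_DISABLE=true into the driver pod's environment.
  .set("spark.kubernetes.driverEnv.HTTP2_DISABLE", "true")

// HTTP2_DISABLE=true must additionally be exported in the environment of the JVM
// that creates the fabric8 kubernetes-client, e.g. the shell running spark-submit.
```

The test changes below apply the same pair of settings: the client-mode suite puts the variable directly on its driver container, and the shared suite sets the `spark.kubernetes.driverEnv` property.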
2 changes: 2 additions & 0 deletions resource-managers/kubernetes/integration-tests/README.md
@@ -12,6 +12,8 @@ The simplest way to run the integration tests is to install and run Minikube, th

dev/dev-run-integration-tests.sh

For Java 8u251+, `HTTP2_DISABLE=true` and `spark.kubernetes.driverEnv.HTTP2_DISABLE=true` are additionally required for the fabric8 `kubernetes-client` library to talk to Kubernetes clusters. This prevents `KubernetesClientException` when the `kubernetes-client` library uses the `okhttp` library internally.

The minimum tested version of Minikube is 0.23.0. The kube-dns addon must be enabled. Minikube should
run with a minimum of 3 CPUs and 4G of memory:

5 changes: 4 additions & 1 deletion resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ClientModeTestsSuite.scala
@@ -16,9 +16,11 @@
*/
package org.apache.spark.deploy.k8s.integrationtest

import org.scalatest.concurrent.Eventually
import scala.collection.JavaConverters._

import io.fabric8.kubernetes.api.model.EnvVar
import org.scalatest.concurrent.Eventually

import org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.{k8sTestTag, INTERVAL, TIMEOUT}

private[spark] trait ClientModeTestsSuite { k8sSuite: KubernetesSuite =>
@@ -66,6 +68,7 @@ private[spark] trait ClientModeTestsSuite { k8sSuite: KubernetesSuite =>
.withName("spark-example")
.withImage(image)
.withImagePullPolicy("IfNotPresent")
.withEnv(new EnvVar("HTTP2_DISABLE", "true", null))
.withCommand("/opt/spark/bin/run-example")
.addToArgs("--master", s"k8s://https://kubernetes.default.svc")
.addToArgs("--deploy-mode", "client")
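For reference, a self-contained sketch of the fabric8 builder call added above, assuming only the fabric8 model classes on the classpath; `EnvVar`'s three-argument constructor takes a name, a value, and a `valueFrom` source (null here because the value is given literally). The container name and image are placeholders:

```scala
import io.fabric8.kubernetes.api.model.{ContainerBuilder, EnvVar}

// Minimal container spec carrying the workaround variable.
val container = new ContainerBuilder()
  .withName("spark-example")                           // placeholder name
  .withImage("spark:latest")                           // placeholder image
  .withImagePullPolicy("IfNotPresent")
  .withEnv(new EnvVar("HTTP2_DISABLE", "true", null))  // name, value, valueFrom
  .build()
```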
1 change: 1 addition & 0 deletions resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
@@ -108,6 +108,7 @@ private[spark] class KubernetesSuite extends SparkFunSuite
.set("spark.kubernetes.container.image", image)
.set("spark.kubernetes.driver.pod.name", driverPodName)
.set("spark.kubernetes.driver.label.spark-app-locator", appLocator)
.set("spark.kubernetes.driverEnv.HTTP2_DISABLE", "true")
.set("spark.kubernetes.executor.label.spark-app-locator", appLocator)
if (!kubernetesTestComponents.hasUserSpecifiedNamespace) {
kubernetesTestComponents.createNamespace()
