Author: Sandy Ryza <sandy@cloudera.com>
Closes #120 from sryza/sandy-spark-1183 and squashes the following commits:
5066a4a [Sandy Ryza] Remove "worker" in a couple comments
0bd1e46 [Sandy Ryza] Remove --am-class from usage
bfc8fe0 [Sandy Ryza] Remove am-class from doc and fix yarn-alpha
607539f [Sandy Ryza] Address review comments
74d087a [Sandy Ryza] SPARK-1183. Don't use "worker" to mean executor
docs/running-on-yarn.md: 14 additions & 15 deletions
```diff
@@ -41,7 +41,7 @@ System Properties:
 * `spark.yarn.submit.file.replication`, the HDFS replication level for the files uploaded into HDFS for the application. These include things like the spark jar, the app jar, and any distributed cache files/archives.
 * `spark.yarn.preserve.staging.files`, set to true to preserve the staged files (spark jar, app jar, distributed cache files) at the end of the job rather than delete them.
 * `spark.yarn.scheduler.heartbeat.interval-ms`, the interval in ms in which the Spark application master heartbeats into the YARN ResourceManager. Default is 5 seconds.
-* `spark.yarn.max.worker.failures`, the maximum number of executor failures before failing the application. Default is the number of executors requested times 2, with a minimum of 3.
+* `spark.yarn.max.executor.failures`, the maximum number of executor failures before failing the application. Default is the number of executors requested times 2, with a minimum of 3.

 # Launching Spark on YARN
```
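In this era of Spark, `spark.yarn.*` settings like the one renamed above were plain Java system properties. A minimal sketch, assuming they are passed through `SPARK_JAVA_OPTS` before launching; the property names come from the list above, while the values here are purely illustrative:

```shell
# Illustrative values; property names are from the list above.
# Tolerate up to 6 executor failures before failing the application,
# and heartbeat to the YARN ResourceManager every 3 seconds.
export SPARK_JAVA_OPTS="-Dspark.yarn.max.executor.failures=6 \
  -Dspark.yarn.scheduler.heartbeat.interval-ms=3000"

# Show what will be passed to the JVM.
echo "$SPARK_JAVA_OPTS"
```

Note that with the old property name `spark.yarn.max.worker.failures`, the same `-D` flag would silently be ignored after this change, which is one reason renames like this are called out in the docs.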
```diff
@@ -60,11 +60,10 @@ The command to launch the Spark application on the cluster is as follows:
 The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of the Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Viewing Logs" section below for how to see driver and executor logs.
```
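The launch command itself is elided in this excerpt of the diff. A hypothetical invocation for the SparkPi example of this era might look like the following; the jar paths and the exact set of `Client` flags vary by Spark version and build, so treat this as a sketch rather than the documented command:

```shell
# Hypothetical paths and flags; adjust to your Spark build and version.
# The client uploads the jars to HDFS, starts the Application Master,
# and then polls it for status as described above.
SPARK_JAR=./assembly/target/spark-assembly.jar \
  ./bin/spark-class org.apache.spark.deploy.yarn.Client \
    --jar ./examples/target/spark-examples.jar \
    --class org.apache.spark.examples.SparkPi \
    --num-executors 3 \
    --executor-memory 2g \
    --executor-cores 1
```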
```diff
@@ -100,12 +99,12 @@ With yarn-client mode, the application will be launched locally, just like running
 Configuration in yarn-client mode:

-In order to tune worker cores/number/memory etc., you need to export environment variables or add them to the spark configuration file (./conf/spark_env.sh). The following are the list of options.
+In order to tune executor cores/number/memory etc., you need to export environment variables or add them to the spark configuration file (./conf/spark_env.sh). The following are the list of options.

-* `SPARK_WORKER_INSTANCES`, Number of executors to start (Default: 2)
-* `SPARK_WORKER_CORES`, Number of cores per executor (Default: 1).
-* `SPARK_WORKER_MEMORY`, Memory per executor (e.g. 1000M, 2G) (Default: 1G)
-* `SPARK_MASTER_MEMORY`, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
+* `SPARK_EXECUTOR_INSTANCES`, Number of executors to start (Default: 2)
+* `SPARK_EXECUTOR_CORES`, Number of cores per executor (Default: 1).
+* `SPARK_EXECUTOR_MEMORY`, Memory per executor (e.g. 1000M, 2G) (Default: 1G)
+* `SPARK_DRIVER_MEMORY`, Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)
 * `SPARK_YARN_APP_NAME`, The name of your application (Default: Spark)
 * `SPARK_YARN_QUEUE`, The YARN queue to use for allocation requests (Default: 'default')
 * `SPARK_YARN_DIST_FILES`, Comma separated list of files to be distributed with the job.
```
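Using the post-rename variable names from the list above, yarn-client tuning can be sketched as a handful of exports in the Spark configuration file. The variable names come from the diff; the values below are illustrative:

```shell
# Illustrative values for the variables documented above; typically
# exported or placed in the Spark configuration file before launching
# in yarn-client mode.
export SPARK_EXECUTOR_INSTANCES=4        # number of executors to start
export SPARK_EXECUTOR_CORES=2            # cores per executor
export SPARK_EXECUTOR_MEMORY=2G          # memory per executor
export SPARK_DRIVER_MEMORY=1G            # memory for the driver
export SPARK_YARN_APP_NAME="pi-estimation"
export SPARK_YARN_QUEUE=default

echo "$SPARK_EXECUTOR_INSTANCES executors x $SPARK_EXECUTOR_CORES cores"
# prints: 4 executors x 2 cores
```

The point of the PR is visible here: before this change the first three variables were spelled `SPARK_WORKER_*` even though they configured executors, and `SPARK_MASTER_MEMORY` actually sized the driver.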