diff --git a/docs/source/40_deploying_ADAM.md b/docs/source/40_deploying_ADAM.md
index f927963781..16a0552582 100644
--- a/docs/source/40_deploying_ADAM.md
+++ b/docs/source/40_deploying_ADAM.md
@@ -526,12 +526,12 @@ to be a workable solution for testing or even production at scale, especially fo
 which perform multiple in-memory transformations and thus benefit from Spark's in-memory processing model.
 
 Follow the primary [instructions](https://github.com/bigdatagenomics/adam/blob/master/docs/source/02_installation.md)
-for installing ADAM into `$ADAM_HOME`
-consistant location on each machine.
+for installing ADAM into `$ADAM_HOME`.
+consistent location on each machine.
 This will most likely be at a location on a shared disk accessible to all nodes, but could be at a
 
 ### Start Spark cluster
-A Spark cluster can be started as a muti-node job in Slurm by creating a job file `run.cmd` such as below:
-```
+A Spark cluster can be started as a multi-node job in Slurm by creating a job file `run.cmd` such as below:
+```bash
 #!/bin/bash
 #SBATCH --partition=multinode
@@ -551,7 +551,8 @@ A Spark cluster can be started as a muti-node job in Slurm by creating a job fil
 # If your sys admin has installed spark as a module
 module load spark
 
-# If spark is not installed as a module, you will need to specifiy absolute path to $SPARK_HOME/bin/spark-start
+# If Spark is not installed as a module, you will need to specify the absolute path to
+# $SPARK_HOME/bin/spark-start where $SPARK_HOME is on shared disk or at a consistent location
 start-spark
 echo $MASTER
@@ -563,12 +564,12 @@ sbatch run.cmd
 ```
 This will start a Spark cluster containing 2 nodes that persists for 5 hours, unless you kill it sooner.
-The `slurm.out` file created in the current directory will contain a line produced by `echo $MASTER`
-above which willindicate the address of the Spark master to which your application or ADAM-shell
-should connect such as `spark://somehostname:7077`
+The file `slurm.out` created in the current directory will contain a line produced by `echo $MASTER`
+above which will indicate the address of the Spark master to which your application or `adam-shell`
+should connect, such as `spark://somehostname:7077`.
 
-### Start ADAM Shell
-Your sys admin will probably prefer that you launch your ADAM-shell or start an application from a
-cluster node rather than the head node you log in to so you may want to do so with:
+### Start adam-shell
+Your sys admin will probably prefer that you launch your `adam-shell` or start an application from a
+cluster node rather than the head node you log in to, so you may want to do so with:
 ```
 sinteractive
@@ -584,7 +585,7 @@ $ADAM_HOME/bin/adam-shell --master spark://hostnamefromslurmdotout:7077
 $ADAM_HOME/bin/adam-submit --master spark://hostnamefromslurmdotout:7077
 ```
 
-You should be able to connect to the Spark Web UI at `spark://hostnamefromslurmdotout:4040`, however
-you may need to ask your local sys admin to open the requried ports.
+You should be able to connect to the Spark Web UI at `http://hostnamefromslurmdotout:4040`; however,
+you may need to ask your local sys admin to open the required ports.
 
 ### Feedback
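The workflow the patched section describes — read the `echo $MASTER` line out of `slurm.out`, then pass that address to `adam-submit` — can also be scripted rather than copied by hand. A minimal sketch, assuming the master URL in `slurm.out` has the `spark://host:port` form shown in the docs and is the only `spark://` URL in the file (the `grep` pattern below is that assumption, not part of the patch):

```shell
# Simulate the line the Slurm job's `echo $MASTER` writes to slurm.out
# (in real use, slurm.out is produced by the sbatch job itself):
echo "spark://somehostname:7077" > slurm.out

# Pull the first spark:// URL out of the log file:
MASTER=$(grep -o 'spark://[^ ]*' slurm.out | head -n 1)
echo "$MASTER"

# The extracted address can then be used non-interactively, e.g.:
# $ADAM_HOME/bin/adam-submit --master "$MASTER" ...
```

This avoids retyping the hostname from `slurm.out` each time a new cluster job is started, since the master address changes with every allocation.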