Commit 8572fb7: addressed review comments

jpdna authored and heuermh committed Jul 5, 2017
Showing 1 changed file with 10 additions and 9 deletions: docs/source/40_deploying_ADAM.md
to be a workable solution for testing or even production at scale, especially for workloads
which perform multiple in-memory transformations and thus benefit from Spark's in-memory processing model.

Follow the primary [instructions](https://github.com/bigdatagenomics/adam/blob/master/docs/source/02_installation.md)
for installing ADAM into `$ADAM_HOME`. This will most likely be at a location on a shared disk accessible to all nodes, but could be at a consistent location on each machine.

### Start Spark cluster

A Spark cluster can be started as a multi-node job in Slurm by creating a job file `run.cmd` such as the one below:
```bash
#!/bin/bash

#SBATCH --partition=multinode
# ... (additional #SBATCH directives collapsed in this diff)
# If your sys admin has installed spark as a module
module load spark

# If Spark is not installed as a module, you will need to specify the absolute path to
# $SPARK_HOME/bin/spark-start, where $SPARK_HOME is on shared disk or at a consistent location
start-spark

echo $MASTER
```

Submit the job file to Slurm:
```bash
sbatch run.cmd
```
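As a rough sketch, a complete `run.cmd` consistent with the 2-node, 5-hour cluster this section describes might look like the following. The `#SBATCH` values and the trailing `sleep` are assumptions to adapt for your site; the `bash -n` call is just a syntax check before submitting.

```shell
# Write a hypothetical complete run.cmd (adjust partition, node count,
# and time limit for your cluster; these values are assumptions).
cat > run.cmd <<'EOF'
#!/bin/bash
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --time=05:00:00
#SBATCH --output=slurm.out

# If your sys admin has installed spark as a module
module load spark

start-spark

echo $MASTER

# Keep the allocation (and thus the Spark cluster) alive until the
# time limit is reached or the job is cancelled.
sleep infinity
EOF

# Sanity-check the script's shell syntax before submitting it.
bash -n run.cmd && echo "run.cmd OK"
```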

This will start a Spark cluster containing 2 nodes that persists for 5 hours, unless you kill it sooner.
The file `slurm.out` created in the current directory will contain a line produced by `echo $MASTER`
above, which will indicate the address of the Spark master to which your application or `adam-shell`
should connect, such as `spark://somehostname:7077`.
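The master address can also be pulled out of `slurm.out` programmatically, which is handy in wrapper scripts. A minimal sketch, where the `slurm.out` written here is a stand-in for what the job actually produces:

```shell
# Write a stand-in slurm.out resembling the job's output (hypothetical content).
printf 'starting spark master\nspark://somehostname:7077\n' > slurm.out

# Extract the first spark:// URL from slurm.out.
MASTER_URL=$(grep -o 'spark://[^[:space:]]*' slurm.out | head -n 1)
echo "$MASTER_URL"   # → spark://somehostname:7077
```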

### Start adam-shell

Your sys admin will probably prefer that you launch your `adam-shell` or start an application from a
cluster node rather than the head node you log in to, so you may want to do so with:
```bash
sinteractive
```

Then start `adam-shell`, or submit an application, using the Spark master address from `slurm.out`:
```bash
$ADAM_HOME/bin/adam-shell --master spark://hostnamefromslurmdotout:7077
$ADAM_HOME/bin/adam-submit --master spark://hostnamefromslurmdotout:7077
```

You should be able to connect to the Spark Web UI at `http://hostnamefromslurmdotout:4040`; however,
you may need to ask your local sys admin to open the required ports.
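If those ports cannot be opened, one common workaround is an SSH tunnel from your workstation through the cluster's login node. A sketch, with placeholder hostnames and username:

```shell
# Forward local port 4040 to the Spark driver's Web UI port on the cluster
# node (hostnames and username are placeholders; adapt to your site).
ssh -N -L 4040:hostnamefromslurmdotout:4040 user@login.cluster.example.org
# Then browse to http://localhost:4040 on your workstation.
```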

### Feedback