ADAM on Slurm/LSF #1229

Closed · jpdna opened this issue Oct 27, 2016 · 6 comments

jpdna (Member) commented Oct 27, 2016

I understand some people run Spark on their local Slurm (or LSF?) cluster like:
https://www.princeton.edu/researchcomputing/faq/spark-via-slurm/

It would be useful to provide instructions for this in our user guide, as Slurm/LSF is the cluster infrastructure that most bioinformatics users have access to.
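
For concreteness, the pattern looks roughly like this — an untested sketch, where the paths, ports, resource sizes, and the availability of `adam-submit` on the PATH are all assumptions:

```bash
#!/bin/bash
# Sketch: run Spark itself inside a Slurm allocation, then run ADAM on it.
#SBATCH --job-name=spark-on-slurm
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --time=02:00:00

export SPARK_HOME=/opt/spark   # assumed install location

# The batch script runs on the first allocated node; start the master there.
"$SPARK_HOME"/sbin/start-master.sh
MASTER_URL="spark://$(hostname -f):7077"

# One worker per node, kept in the foreground under srun so that Slurm
# tracks the processes and reaps them when the allocation ends.
srun --ntasks="$SLURM_JOB_NUM_NODES" --ntasks-per-node=1 \
  "$SPARK_HOME"/bin/spark-class org.apache.spark.deploy.worker.Worker \
  "$MASTER_URL" &

sleep 30   # crude wait for the workers to register with the master

# Run an ADAM job against the ephemeral cluster; adam-submit wraps spark-submit.
adam-submit --master "$MASTER_URL" -- transformAlignments in.sam out.adam
```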

Is there a slurm/LSF cluster at Berkeley I could try this on?

fnothaft (Member):

> Is there a slurm/LSF cluster at Berkeley I could try this on?

Let me ask around...

heuermh (Member) commented Oct 27, 2016

I'm not sure what the infrastructure at that link actually is. The examples show Slurm being used to submit jobs to a Spark cluster, not Spark actually running on the Slurm cluster.

There's another link on that page describing the "Spark framework and the submission guidelines using YARN", but it doesn't say whether Spark via YARN is installed on the Slurm cluster itself or runs somewhere separate.
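
Under that "submit only" reading, the Slurm job would just be a thin client shipping work to a Spark cluster that is already running elsewhere — a sketch, where the master hostname and jar name are assumptions:

```bash
#!/bin/bash
# Sketch: Slurm job acting as a client to an external, long-running Spark cluster.
#SBATCH --job-name=adam-client
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

spark-submit \
  --master spark://spark-master.example.org:7077 \
  --class org.bdgenomics.adam.cli.ADAMMain \
  adam-assembly.jar \
  transformAlignments in.sam out.adam
```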

jpdna (Member, Author) commented Oct 27, 2016

> The examples show submitting jobs to a Spark cluster using Slurm

Good point, @heuermh. This may be more relevant, and closer to what I was thinking of: actually running the executors as jobs on LSF/Slurm:

https://github.com/LLNL/magpie

In general, my intuition is that when running Spark on HPC in this way, all you would really lose is data locality; otherwise an application like ADAM would run the same as it does on an HDFS cluster.
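
To make the locality point concrete: roughly the only user-visible change is where the data lives. A sketch, with hypothetical paths:

```bash
# On an HDFS cluster, executors can read blocks that are local to them:
adam-submit -- transformAlignments hdfs:///data/sample.bam hdfs:///data/sample.alignments.adam

# On a Slurm/LSF cluster without HDFS, the same job reads from a shared
# parallel filesystem (Lustre, GPFS, NFS); locality hints are lost, but
# the ADAM command itself is unchanged:
adam-submit -- transformAlignments file:///lustre/data/sample.bam file:///lustre/data/sample.alignments.adam
```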

devin-petersohn (Member):

I have some experience running Spark on Slurm from the University of Missouri. They have a large Slurm-managed cluster that runs Spark.

In that case, we dynamically created Spark clusters using Slurm, so the entire environment was torn down at the end of the allocation; HDFS worked the same way. For ADAM on Slurm, I don't think there would be many extra steps, aside from perhaps changing SPARK_HOME (which we set dynamically). Since we are starting a collaboration, there may be an opportunity to use their cluster as a test case for this.
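
A sketch of that ephemeral pattern — the Spark path and version are assumptions, and cluster startup is elided (it would run as in the earlier sketch):

```bash
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --time=04:00:00

# SPARK_HOME is chosen per job rather than pointing at a fixed cluster-wide install.
export SPARK_HOME="/software/spark-2.0.2"

# ...start the master and one worker per node, as in the earlier sketch...
MASTER_URL="spark://$(hostname -f):7077"

adam-submit --master "$MASTER_URL" -- transformAlignments in.sam out.adam

# Explicit teardown is optional: when the allocation ends, Slurm reaps the
# master and workers, so the whole environment disappears with the job.
"$SPARK_HOME"/sbin/stop-master.sh
```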

fnothaft (Member) commented Dec 6, 2016

> Since we are starting a collaboration, there may be an opportunity to use their cluster as a test case for this.

+1!

heuermh (Member) commented Aug 29, 2017

Fixed by #1571

heuermh closed this as completed Aug 29, 2017
heuermh modified the milestone: 0.23.0 Aug 30, 2017