= HPC Overview
by The HPC Support Staff
v1.1, Sept 22, 2015
//Harry Mangalam mailto:harry.mangalam@uci.edu[harry.mangalam@uci.edu]
// Convert this file to HTML & move it to its final destination with the command:
// for HPC - need to execute the asciidoc script on login-1-2
// l12 'asciidoc -a icons -a toc2 -b html5 -a numbered /data/hpc/www/HPC_Overview.txt;'
// for Harry on 'stunted'
// export fileroot="/home/hjm/nacs/HPC_Overview"; asciidoc -a icons -a toc2 -b html5 -a numbered ${fileroot}.txt; scp ${fileroot}.[htp]* moo.nac.uci.edu:~/public_html; scp ${fileroot}.[htp]* hmangala@hpc-s.oit.uci.edu:~

== What HPC is
The HPC facility is a 6560 core/1PB 'Condo Cluster' available to all
researchers at UCI (no School or Dept affiliation required). 'Condo' means
that most of the compute nodes are individually owned by contributing
Schools, Depts, or individual PIs, who have preferential access to them;
when those nodes are otherwise idle, they are available to the rest of the
campus through the http://hpc.oit.uci.edu/free-queue[Free Queue] system.
This lets all users get more computational power than they paid for:
usage among owners is quite spiky, and the 'Free Queue' allows us to
harvest otherwise unused cycles.
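
In practice, jobs reach these nodes through our GridEngine scheduler
(mentioned below). As a minimal sketch, assuming a hypothetical free queue
named 'free64' (see the Free Queue page above for the actual queue names and
policies), submitting to it looks like:

----
# submit an existing batch script to a free queue
# ('free64' is an illustrative name, not necessarily the real one)
qsub -q free64 my_job.sh

# check on your queued and running jobs
qstat -u $USER
----
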
While the HPC facility is not http://goo.gl/Qqcez[HIPAA-compliant], we have
been told that appropriately anonymized human data are being reduced and
analyzed through various pipelines on HPC.

In summary, HPC offers all UCI researchers very large computing resources
for storing and analyzing data at a scale that would not otherwise be
possible locally, along with personal assistance in resolving specific
computational problems. Should UCI researchers require even larger compute
power, Harry Mangalam is an
https://www.xsede.org/campus-champions[XSEDE Campus Champion] who can help
move your problems to the larger XSEDE resources.

== Hardware
The HPC cluster is now the largest on campus, with about 6560 64-bit CPU
cores, \~38TB of aggregate RAM, 8
http://www.nvidia.com/object/tesla-servers.html[Nvidia Tesla GPUs], and
Quad Data Rate (40Gb/s) InfiniBand interconnects. There is \~1PB of storage,
with 650TB in 2 http://www.beegfs.com/cms/[BeeGFS (aka Fraunhofer)]
distributed filesystems and the rest available from \~34 public and private
NFS automounts. We have \~1600 users, of whom \~100 are logged in at any one
time on 2 login nodes, and over 250 use the cluster in an average month. The
cluster generally runs at 65%-90% utilization, with several thousand jobs
running and queued at any one time (>6000 as I write this; 20,000 is not
unusual).

== Software
The HPC staff can usually add requested Open Source Software within a day;
our 'module' system provides >700 external applications (in various
versions), libraries, and compilers/interpreters. These include image
processing (FreeSurfer/FSL), statistics (R, SAS, STATA), numerical
processing (MATLAB, Mathematica, Pylab/NumPy/SciPy, Scilab, GSL, Boost,
etc.), molecular dynamics (Gromacs, NAMD, AMBER, VMD), as well as a number
of Engineering applications (OpenSees, OpenFOAM).
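
As a minimal sketch of how this works for users (the specific module names
and default versions below are illustrative assumptions, not a statement of
what is installed), software is selected with the usual 'module' commands:

----
module avail       # list all available applications and versions
module load R      # add an application (e.g. R) to your environment
module load gcc    # compilers and libraries are handled the same way
module list        # show what is currently loaded
module unload R    # remove it again
----
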
However, since our largest user group is biologists, we have an
exceptionally large group of bioinformatics tools including the
bam/sam/bedtools, tophat, bowtie, bwa, wgs, trimmomatic, repeatmasker, fastx
toolkit, emboss, tacg, Bioconductor, BLAST, Trinity, GATK, Annovar, velvet,
vcftools, cufflinks, smrtanalysis, the SOAP tools, picard, the
Bio(Perl|Python|Java) toolkits, etc.

We support parallel programming approaches including OpenMP, OpenMPI,
GNU Parallel, R's SNOW, and CUDA, and we support debugging and profiling
with TotalView, PerfSuite, HPCToolkit, Oprofile, and PAPI. We also maintain
a local installation of Galaxy, integrated into our GridEngine scheduler.
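
As a minimal sketch of how these pieces fit together (the queue name,
parallel environment name, and module names below are illustrative
assumptions, not our actual configuration), an OpenMPI batch job submitted
to GridEngine with 'qsub' might look like:

----
#!/bin/bash
#$ -N mpi_example        # job name
#$ -q free64             # target queue (illustrative name)
#$ -pe openmpi 64        # parallel environment and slot count (site-specific)
#$ -cwd                  # run from the submission directory
#$ -o mpi_example.out    # stdout
#$ -e mpi_example.err    # stderr

# load the toolchain via the module system (names are examples)
module load gcc
module load openmpi

# GridEngine sets $NSLOTS to the number of slots actually granted
mpirun -np $NSLOTS ./my_mpi_app
----
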
Our module library currently (Sept 22, 2015) consists of
http://hpc.oit.uci.edu/all.modules[this list].

== Classes
We run classes and tutorials on using Linux and the HPC cluster, as well as
in Bioinformatics (in conjunction with the
http://ghtf.biochem.uci.edu/[Genomic High Throughput Facility]), and we
also offer classes in 'BigData' processing (with Padhraic Smyth of
Information & Computer Sciences).

== Staff
HPC is supported by 2.5 FTEs, including 1.5 PhDs (in Molecular Biology &
Bioinformatics and in Signal Processing), as well as 1 student assistant.
These support staff regularly assist with debugging workflows and
performance problems in all domains (\~700 domain-specific emails per month).

We very much welcome new computational and research challenges and our
interactions with scientists. Please feel free to mail all of us at
<hpc-support@uci.edu>.

'Harry Mangalam', PhD, for the Research Computing Support Staff +
'Joseph Farran', HPC Architect & Senior Sysadmin, 30+ years IT experience +
'Garr Updegraff', PhD +
'Edward Xia' (student)

== Resource description for Grants
The http://www.methods.co.nz/asciidoc/[asciidoc] (plaintext) source of this document is http://moo.nac.uci.edu/~hjm/HPC_Overview.txt[available here].
A stripped, plaintext version of this document with References instead of hyperlinks is http://moo.nac.uci.edu/~hjm/HPC_Overview.plaintext[available here].