From 1c8e49529394096a6bab0b63cbe697f8fb2f4412 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 8 Oct 2025 11:31:41 +1300 Subject: [PATCH 1/2] rename --- .../Training/Intro_HPC/035-filedir-cont.md | 5 + .../Intro_HPC/095-writing-good-code.md | 269 ++++++ .../Intro_HPC/14-environment-variables.md | 258 +++++ .../Training/Intro_HPC/bash_shell.md | 901 ++++++++++++++++++ .../Training/Intro_HPC/filesystem_basics.md | 214 +++++ .../Training/Intro_HPC/modules.md | 258 +++++ .../Training/Intro_HPC/parallel.md | 202 ++++ .../Training/Intro_HPC/resources.md | 376 ++++++++ .../Training/Intro_HPC/scaling.md | 60 ++ .../Training/Intro_HPC/scheduler.md | 338 +++++++ .../Training/Intro_HPC/what_is_a_cluster.md | 93 ++ ...ting_on_the_NeSI_HPC_YouTube_Recordings.md | 14 - 12 files changed, 2974 insertions(+), 14 deletions(-) create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/modules.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/parallel.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/resources.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/scaling.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/scheduler.md create mode 100644 docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md delete mode 100644 docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md b/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md new file mode 100644 index 000000000..ff8933246 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md @@ -0,0 +1,5 @@ +--- +title: "Navigating Files and Directories (Continued)" +layout: break +break: 50 +--- \ No newline at end of file diff --git a/docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md b/docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md new file mode 100644 index 000000000..089b16355 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md @@ -0,0 +1,269 @@ +--- +title: "Writing good code" +teaching: 20 +exercises: 10 +questions: +- "How do we write a good job script." +objectives: +- "Write a script that can be run serial or parallel." +- "Write a script that using SLURM environment variables." +- "Understand the limitations of random number generation." +keypoints: +- "Write your script in a way that is independent of data or environment. (elaborate)" +--- + +When talking about 'a script' we could be referring to multiple things. + +* Slurm/Bash script - Almost everyone will be using one of these to submit their Slurm jobs. +* Work script - If your work involves running another script (usually in a language other than Bash like Python, R or MATLAB) that will have to be invoked in your bash script. + +This section will cover best practice for both types of script. + + + +## Use environment variables + +In this lesson we will take a look at a few of the things to watch out for when writing scripts for use on the cluster. 
+This will be most relevant to people writing their own code, but covers general practices applicable to everyone. + +There is a lot of useful information contained within environment variable. + +> ## Slurm Environment +> +> For a small demo of the sort of useful info contained within env variables, run the command. +> +> ``` +> sbatch --output "slurm_env.out" --wrap "env | grep" +> ``` +> {: .language-bash} +> +> once the job has finished check the results with, +> +> ``` +> cat slurm_env.out +> ``` +> {: .language-bash} +> +> ``` +> SLURM_JOB_START_TIME=1695513911 +> SLURM_NODELIST=wbn098 +> SLURM_JOB_NAME=wrap +> SLURMD_NODENAME=wbn098 +> SLURM_TOPOLOGY_ADDR=top.s13.s7.wbn098 +> SLURM_PRIO_PROCESS=0 +> SLURM_NODE_ALIASES=(null) +> SLURM_JOB_QOS=staff +> SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.switch.node +> SLURM_JOB_END_TIME=1695514811 +> SLURM_MEM_PER_CPU=512 +> SLURM_NNODES=1 +> SLURM_JOBID=39572365 +> SLURM_TASKS_PER_NODE=2 +> SLURM_WORKING_CLUSTER=mahuika:hpcwslurmctrl01:6817:9984:109 +> SLURM_CONF=/etc/opt/slurm/slurm.conf +> SLURM_JOB_ID=39572365 +> SLURM_JOB_USER=cwal219 +> __LMOD_STACK_SLURM_I_MPI_PMI_LIBRARY=L29wdC9zbHVybS9saWI2NC9saWJwbWkyLnNv +> SLURM_JOB_UID=201333 +> SLURM_NODEID=0 +> SLURM_SUBMIT_DIR=/scale_wlg_persistent/filesets/home/cwal219 +> SLURM_TASK_PID=8747 +> SLURM_CPUS_ON_NODE=2 +> SLURM_PROCID=0 +> SLURM_JOB_NODELIST=wbn098 +> SLURM_LOCALID=0 +> SLURM_JOB_GID=201333 +> SLURM_JOB_CPUS_PER_NODE=2 +> SLURM_CLUSTER_NAME=mahuika +> SLURM_GTIDS=0 +> SLURM_SUBMIT_HOST=wbn003 +> SLURM_JOB_PARTITION=large +> SLURM_JOB_ACCOUNT=nesi99999 +> SLURM_JOB_NUM_NODES=1 +> SLURM_SCRIPT_CONTEXT=prolog_task +> ``` +> {: .output} +> +> Can you think of some examples as to how these variables could be used in your script? + +> > ## Solution +> > +> > * `SLURM_JOB_CPUS_PER_NODE` could be used to pass CPU numbers directly to any programs being used. +> > * Some other things. +> {: .solution} +{: .challenge} + +> ## Variables in Slurm Header +> +> Environment variables set by Slurm cannot be referenced in the Slurm header. +{: .callout} + +## Default values + +It is good practice to set default values when using environment variables when there is a chance they will be run in an environment where they may not be present. + +``` +FOO="${VARIABLE:-default}" +``` +{: .language-bash} + +`FOO` will be to to the value of `VARIABLE` if is set, otherwise it will be set to `default`. + +As a slight variation on the above example. (`:=` as opposed to `:-`). + +``` +FOO="${VARIABLE:=default}" +``` +{: .language-bash} + +`FOO` will be to to the value of `VARIABLE` if is set, otherwise it will be set to `default`, `VARIABLE` will also be set to `default`. + + + + +``` +num_cpus <- 2 +``` +{: .language-r} + +The number of CPU's being used is fixed in the script. We can save time and reduce chances for making mistakes by replacing this static value with an environment variable. +We can use the environment variable `SLURM_CPUS_PER_TASK`. + +``` +num_cpus <- strtoi(Sys.getenv('SLURM_CPUS_PER_TASK')) +``` +{: .language-r} + +Slurm sets many environment variables when starting a job, see [Slurm Documentation for the full list](https://slurm.schedmd.com/sbatch.html). + +The problem with this approach however, is our code will throw an error if we run it on the login node, or on our local machine or anywhere else that `SLURM_CPUS_PER_TASK` is not set. 
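+
+If the same situation comes up in the bash script itself, the `:-` default syntax from above handles it. A minimal sketch (the fallback of 2 CPUs is just an illustrative choice, not a NeSI setting):
+
+```
+# Fall back to 2 CPUs when run outside of a Slurm job,
+# e.g. on the login node or on your own machine.
+NUM_CPUS="${SLURM_CPUS_PER_TASK:-2}"
+echo "Running with ${NUM_CPUS} CPU(s)"
+```
+{: .language-bash}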
+ +Generally it is best not to diverge your codebase especially if you don't have it under version control, so lets add some compatibility for those use cases. + +``` +num_cpus <- strtoi(Sys.getenv('SLURM_CPUS_PER_TASK', unset = "1")) +``` +{: .language-r} + +Now if `SLURM_CPUS_PER_TASK` variable is not set, 1 CPU will be used. You could also use some other method of detecting CPUs, like `detectCores()`. + +## Interoperability + +windows + mac + linux +headless + interactive + +## Verbose + +Having a printout of job progress is fine for an interactive terminal, but when you aren't seeing the updates in real time anyway, it's just bloat for your output files. + +Let's add an option to mute the updates. + +``` +print_progress <- FALSE +``` +{: .language-r} + + +``` +if (print_progress && percent_complete%%1==0){ + +``` +{: .language-r} + +## Reproduceability + +As this script uses [Pseudorandom number generation](https://en.wikipedia.org/wiki/Pseudorandom_number_generator) there are a few additional factors to consider. +It is desirable that our output be reproducible so we can confirm that changes to the code have not affected it. + +We can do this by setting the seed of the PRNG. That way we will get the same progression of 'random' numbers. + +We are using the environment variable `SLURM_ARRAY_TASK_ID` for reasons we will get to later. We also need to make sure a default seed is set for the occasions when `SLURM_ARRAY_TASK_ID` is not set. + +``` +seed <- strtoi(Sys.getenv('SLURM_ARRAY_TASK_ID', unset = "0")) +set.seed(seed) +``` +{: .language-r} + + +Now your script should look something like this; + +``` +{% include example_scripts/sum_matrix.r %} +``` +{: .language-r} + +## Readability + +Comments! + +## Debugging + +``` +#!/bin/bash -e +``` +{: .language-bash} + +Exit bash script on error + +``` +#!/bin/bash -x +``` +{: .language-bash} + +Print environment. + +``` +env +``` +{: .language-bash} + +Print environment, if someone else has problems replicating the problem, it will likely come down to differences in your environment. + +``` +cat $0 +``` +{: .language-bash} + +Will print your input Slurm script to you output, this can help identify when changes in your submission script leads to errors. + +## Version control + +Version control is when changes to a document are tracked over time. + +In many cases you may be using the same piece of code across multiple environments, in these situations it can be difficult to keep track of changes made and your code can begin to diverge. Setting up version control like Git can save a lot of time. + +### Portability + + + +## Testing + +More often than not, problems come in the form of typos, or other small errors that become apparent within the first few seconds/minutes of script. + +Running on login node? + +Control + c to kill. + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md b/docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md new file mode 100644 index 000000000..b13cc5ca6 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md @@ -0,0 +1,258 @@ +--- +title: Environment Variables +teaching: 10 +exercises: 5 +questions: +- "How are variables set and accessed in the Unix shell?" +- "How can I use variables to change how a program runs?" 
+objectives: +- "Understand how variables are implemented in the shell" +- "Read the value of an existing variable" +- "Create new variables and change their values" +- "Change the behaviour of a program using an environment variable" +- "Explain how the shell uses the `PATH` variable to search for executables" +keypoints: +- "Shell variables are by default treated as strings" +- "Variables are assigned using \"`=`\" and recalled using the variable's name prefixed by \"`$`\"" +- "Use \"`export`\" to make an variable available to other programs" +- "The `PATH` variable defines the shell's search path" +--- + +> ## Episode provenance +> +> This episode has been remixed from the +> [Shell Extras episode on Shell Variables](https://github.com/carpentries-incubator/shell-extras/blob/gh-pages/_episodes/08-environment-variables.md) +> and the [HPC Shell episode on scripts](https://github.com/hpc-carpentry/hpc-shell/blob/gh-pages/_episodes/05-scripts.md) +{: .callout} + +The shell is just a program, and like other programs, it has variables. +Those variables control its execution, +so by changing their values +you can change how the shell behaves (and with a little more effort how other +programs behave). + +Variables +are a great way of saving information under a name you can access later. In +programming languages like Python and R, variables can store pretty much +anything you can think of. In the shell, they usually just store text. The best +way to understand how they work is to see them in action. + +Let's start by running the command `set` and looking at some of the variables +in a typical shell session: + +~~~ +$ set +~~~ +{: .language-bash} + +~~~ +COMPUTERNAME=TURING +HOME=/home/vlad +HOSTNAME=TURING +HOSTTYPE=i686 +NUMBER_OF_PROCESSORS=4 +PATH=/Users/vlad/bin:/usr/local/git/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin +PWD=/home/vlad +UID=1000 +USERNAME=vlad +... +~~~ +{: .output} + +As you can see, there are quite a few — in fact, +four or five times more than what's shown here. +And yes, using `set` to *show* things might seem a little strange, +even for Unix, but if you don't give it any arguments, +it might as well show you things you *could* set. + +Every variable has a name. +All shell variables' values are strings, +even those (like `UID`) that look like numbers. +It's up to programs to convert these strings to other types when necessary. +For example, if a program wanted to find out how many processors the computer +had, it would convert the value of the `NUMBER_OF_PROCESSORS` variable from a +string to an integer. + +## Showing the Value of a Variable + +Let's show the value of the variable `HOME`: + +~~~ +$ echo HOME +~~~ +{: .language-bash} + +~~~ +HOME +~~~ +{: .output} + +That just prints "HOME", which isn't what we wanted +(though it is what we actually asked for). +Let's try this instead: + +~~~ +$ echo $HOME +~~~ +{: .language-bash} + +~~~ +/home/vlad +~~~ +{: .output} + +The dollar sign tells the shell that we want the *value* of the variable +rather than its name. +This works just like wildcards: +the shell does the replacement *before* running the program we've asked for. +Thanks to this expansion, what we actually run is `echo /home/vlad`, +which displays the right thing. 
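+
+The same expansion happens inside double-quoted strings, which is handy for building messages or paths. A small sketch (the wording of the message is just an example):
+
+~~~
+$ echo "My home directory is $HOME"
+~~~
+{: .language-bash}
+
+~~~
+My home directory is /home/vlad
+~~~
+{: .output}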
+ +## Creating and Changing Variables + +Creating a variable is easy — we just assign a value to a name using "=" +(we just have to remember that the syntax requires that there are _no_ spaces +around the `=`!): + +~~~ +$ SECRET_IDENTITY=Dracula +$ echo $SECRET_IDENTITY +~~~ +{: .language-bash} + +~~~ +Dracula +~~~ +{: .output} + +To change the value, just assign a new one: + +~~~ +$ SECRET_IDENTITY=Camilla +$ echo $SECRET_IDENTITY +~~~ +{: .language-bash} + +~~~ +Camilla +~~~ +{: .output} + +## Environment variables + +When we ran the `set` command we saw there were a lot of variables whose names +were in upper case. That's because, by convention, variables that are also +available to use by _other_ programs are given upper-case names. Such variables +are called _environment variables_ as they are shell variables that are defined +for the current shell and are inherited by any child shells or processes. + +To create an environment variable you need to `export` a shell variable. For +example, to make our `SECRET_IDENTITY` available to other programs that we call +from our shell we can do: + +~~~ +$ SECRET_IDENTITY=Camilla +$ export SECRET_IDENTITY +~~~ +{: .language-bash} + +You can also create and export the variable in a single step: + +~~~ +$ export SECRET_IDENTITY=Camilla +~~~ +{: .language-bash} + +> ## Using environment variables to change program behaviour +> +> Set a shell variable `TIME_STYLE` to have a value of `iso` and check this +> value using the `echo` command. +> +> Now, run the command `ls` with the option `-l` (which gives a long format). +> +> `export` the variable and rerun the `ls -l` command. Do you notice any +> difference? +> +> > ## Solution +> > +> > The `TIME_STYLE` variable is not _seen_ by `ls` until is exported, at which +> > point it is used by `ls` to decide what date format to use when presenting +> > the timestamp of files. +> > +> {: .solution} +{: .challenge} + +You can see the complete set of environment variables in your current shell +session with the command `env` (which returns a subset of what the command +`set` gave us). **The complete set of environment variables is called +your _runtime environment_ and can affect the behaviour of the programs you +run**. + +{% include {{ site.snippets }}/scheduler/print-sched-variables.snip %} + +To remove a variable or environment variable you can use the `unset` command, +for example: + +~~~ +$ unset SECRET_IDENTITY +~~~ +{: .language-bash} + +## The `PATH` Environment Variable + +Similarly, some environment variables (like `PATH`) store lists of values. +In this case, the convention is to use a colon ':' as a separator. +If a program wants the individual elements of such a list, +it's the program's responsibility to split the variable's string value into +pieces. + +Let's have a closer look at that `PATH` variable. +Its value defines the shell's search path for executables, +i.e., the list of directories that the shell looks in for runnable programs +when you type in a program name without specifying what directory it is in. + +For example, when we type a command like `analyze`, +the shell needs to decide whether to run `./analyze` or `/bin/analyze`. +The rule it uses is simple: +the shell checks each directory in the `PATH` variable in turn, +looking for a program with the requested name in that directory. +As soon as it finds a match, it stops searching and runs the program. 
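+
+You can inspect your own search path the same way. One quick sketch is to split the value at each colon (`tr` simply swaps every `:` for a newline):
+
+~~~
+$ echo $PATH | tr ':' '\n'
+~~~
+{: .language-bash}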
+ +To show how this works, +here are the components of `PATH` listed one per line: + +~~~ +/Users/vlad/bin +/usr/local/git/bin +/usr/bin +/bin +/usr/sbin +/sbin +/usr/local/bin +~~~ +{: .output} + +On our computer, +there are actually three programs called `analyze` +in three different directories: +`/bin/analyze`, +`/usr/local/bin/analyze`, +and `/users/vlad/analyze`. +Since the shell searches the directories in the order they're listed in `PATH`, +it finds `/bin/analyze` first and runs that. +Notice that it will *never* find the program `/users/vlad/analyze` +unless we type in the full path to the program, +since the directory `/users/vlad` isn't in `PATH`. + +This means that I can have executables in lots of different places as long as +I remember that I need to to update my `PATH` so that my shell can find them. + +What if I want to run two different versions of the same program? +Since they share the same name, if I add them both to my `PATH` the first one +found will always win. +In the next episode we'll learn how to use helper tools to help us manage our +runtime environment to make that possible without us needing to do a lot of +bookkeeping on what the value of `PATH` (and other important environment +variables) is or should be. + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md b/docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md new file mode 100644 index 000000000..501e218be --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md @@ -0,0 +1,901 @@ +--- +title: "Navigating Files and Directories" +teaching: 30 +exercises: 10 +questions: +- "How can I move around the cluster filesystem" +- "How can I see what files and directories I have?" +- "How can I make new files and directories." +objectives: +- "Create, edit, manipulate and remove files from command line" +- "Translate an absolute path into a relative path and vice versa." +- "Use options and arguments to change the behaviour of a shell command." +- "Demonstrate the use of tab completion and explain its advantages." +keypoints: +- "The file system is responsible for managing information on the disk." +- "Information is stored in files, which are stored in directories (folders)." +- "Directories can also store other directories, which then form a directory tree." +- "`cd [path]` changes the current working directory." +- "`ls [path]` prints a listing of a specific file or directory; `ls` on its own lists the current working directory." +- "`pwd` prints the user's current working directory." +- "`cp [file] [path]` copies [file] to [path]" +- "`mv [file] [path]` moves [file] to [path]" +- "`rm [file]` deletes [file]" +- "`/` on its own is the root directory of the whole file system." +- "Most commands take options (flags) that begin with a `-`." +- "A relative path specifies a location starting from the current location." +- "An absolute path specifies a location from the root of the file system." +- "Directory names in a path are separated with `/` on Unix, but `\\` on Windows." +- "`..` means 'the directory above the current one'; `.` on its own means 'the current directory'." +--- +> ## The Unix Shell +> +> This episode will be a quick introduction to the Unix shell, only the bare minimum required to use the cluster. +> +> The Software Carpentry '[Unix Shell](https://swcarpentry.github.io/shell-novice/)' lesson covers the subject in more depth, we recommend you check it out. 
+> +{: .callout} + +The part of the operating system responsible for managing files and directories +is called the **file system**. +It organizes our data into files, +which hold information, +and directories (also called 'folders'), +which hold files or other directories. + +Understanding how to navigate the file system using command line is essential for using an HPC. + +The NeSI filesystem looks something like this: + +![The file system is made up of a root directory that contains sub-directories +titled home, nesi, and system files](../fig/NesiFiletree.svg) + +The directories that are relevant to us are. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Purpose | Location | Default Storage | Default Files | Backup | Access Speed |
+| ------- | -------- | --------------- | ------------- | ------ | ------------ |
+| Home is for user-specific files such as configuration files, environment setup, source code, etc. | `/home/<username>` | 20GB | 1,000,000 | Daily | Normal |
+| Project is for persistent project-related data, project-related software, etc. | `/nesi/project/<projectcode>` | 100GB | 100,000 | Daily | Normal |
+| Nobackup is a 'scratch space', for data you don't need to keep long term. Old data is periodically deleted from nobackup. | `/nesi/nobackup/<projectcode>` | 10TB | 1,000,000 | None | Fast |
+ +### Managing your data and storage (backups and quotas) + +NeSI performs backups of the `/home` and `/nesi/project` (persistent) filesystems. However, backups are only captured once per day. So, if you edit or change code or data and then immediately delete it, it likely cannot be recovered. Note, as the name suggests, NeSI does **not** backup the `/nesi/nobackup` filesystem. + +Protecting critical data from corruption or deletion is primarily your +responsibility. Ensure you have a data management plan and stick to the plan to reduce the chance of data loss. Also important is managing your storage quota. To check your quotas, use the `nn_storage_quota` command, eg + +{% include {{ site.snippets }}/filedir/sinfo.snip %} + +As well as disk space, 'inodes' are also tracked, this is the *number* of files. + +Notice that the project space for this user is over quota and has been locked, meaning no more data can be added. When your space is locked you will need to move or remove data. Also note that none of the nobackup space is being used. Likely data from project can be moved to nobackup. `nn_storage_quota` uses cached data, and so will no immediately show changes to storage use. + +For more details on our persistent and nobackup storage systems, including data retention and the nobackup autodelete schedule, +please see our [Filesystem and Quota](https://docs.nesi.org.nz/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas/) documentation. + + +Directories are like *places* — at any time +while we are using the shell, we are in exactly one place called +our **current working directory**. +Commands mostly read and write files in the +current working directory, i.e. 'here', so knowing where you are before running +a command is important. + +First, let's find out where we are by running the command `pwd` for '**p**rint **w**orking **d**irectory'. + +``` +{{ site.remote.prompt }} pwd +``` + +{: .language-bash} + +``` +/home/ +``` + +{: .output} + +The output we see is what is known as a 'path'. +The path can be thought of as a series of directions given to navigate the file system. + +At the top is the **root directory** +that holds all the files in a filesystem. + +We refer to it using a slash character, `/`, on its own. +This is what the leading slash in `/home/` is referring to, it is telling us our path starts at the root directory. + +Next is `home`, as it is the next part of the path we know it is inside the root directory, +we also know that home is another directory as the path continues. +Finally, stored inside `home` is the directory with your username. + +> ## Slashes +> +> Notice that there are two meanings for the `/` character. +> When it appears at the front of a file or directory name, +> it refers to the root directory. When it appears *inside* a path, +> it's just a separator. +{: .callout} + +As you may now see, using a bash shell is strongly dependent on the idea that +your files are organized in a hierarchical file system. +Organizing things hierarchically in this way helps us keep track of our work: +it's possible to put hundreds of files in our home directory, +just as it's possible to pile hundreds of printed papers on our desk, +but it's a self-defeating strategy. + +## Listing the contents of directories + +To **l**i**s**t the contents of a directory, we use the command `ls` followed by the path to the directory whose contents we want listed. + +We will now list the contents of the directory we we will be working from. 
We can +use the following command to do this: + +``` +{{ site.remote.prompt }} ls {{ site.working_dir[0] }} +``` + +{: .language-bash} + +``` +{{ site.working_dir[1] }} +``` + +{: .output} + +You should see a directory called `{{ site.working_dir[1] }}`, and possibly several other directories. For the purposes of this workshop you will be working within `{{ site.working_dir | join: '/' }}` + +> ## Command History +> +> You can cycle through your previous commands with the and keys. +> A convenient way to repeat your last command is to type then enter. +> +{: .callout} + +> ## `ls` Reading Comprehension +> +> What command would you type to get the following output +> +> ``` +> original pnas_final pnas_sub +> ``` +> +> {: .output} +> +> ![A directory tree below the Users directory where "/Users" contains the +directories "backup" and "thing"; "/Users/backup" contains "original", +"pnas_final" and "pnas_sub"; "/Users/thing" contains "backup"; and +"/Users/thing/backup" contains "2012-12-01", "2013-01-08" and +"2013-01-27"](../fig/filesystem-challenge.svg) +> +> 1. `ls pwd` +> 2. `ls backup` +> 3. `ls /Users/backup` +> 4. `ls /backup` +> +> > ## Solution +> > +> > 1. No: `pwd` is not the name of a directory. +> > 2. Possibly: It depends on your current directory (we will explore this more shortly). +> > 3. Yes: uses the absolute path explicitly. +> > 4. No: There is no such directory. +> {: .solution} +{: .challenge} + +## Moving about + +Currently we are still in our home directory, we want to move into the`project` directory from the previous command. + +The command to **c**hange **d**irectory is `cd` followed by the path to the directory we want to move to. + +The `cd` command is akin to double clicking a folder in a graphical interface. + +We will use the following command: + +``` +{{ site.remote.prompt }} cd {{ site.working_dir | join: '/' }} +``` + +{: .language-bash} + +``` +``` + +{: .output} +You will notice that `cd` doesn't print anything. This is normal. Many shell commands will not output anything to the screen when successfully executed. +We can check we are in the right place by running `pwd`. + +``` +{{ site.remote.prompt }} pwd +``` + +{: .language-bash} + +``` +{{ site.working_dir | join: '/' }} +``` + +{: .output} + +## Creating directories + + + +As previously mentioned, it is general useful to organise your work in a hierarchical file structure to make managing and finding files easier. It is also is especially important when working within a shared directory with colleagues, such as a project, to minimise the chance of accidentally affecting your colleagues work. So for this workshop you will each make a directory using the `mkdir` command within the workshops directory for you to personally work from. + +``` +{{ site.remote.prompt }} mkdir +``` + +{: .language-bash} + +You should then be able to see your new directory is there using `ls`. + +``` +{{ site.remote.prompt }} ls {{ site.working_dir | join: '/' }} +``` + +{: .language-bash} + +{% include {{ site.snippets }}/filedir/dir-contents1.snip %} + +## General Syntax of a Shell Command + +We are now going to use `ls` again but with a twist, this time we will also use what are known as **options**, **flags** or **switches**. +These options modify the way that the command works, for this example we will add the flag `-l` for "long listing format". 
+ +``` +{{ site.remote.prompt }} ls -l {{ site.working_dir | join: '/' }} +``` + +{: .language-bash} + +{% include {{ site.snippets }}/filedir/dir-contents2.snip %} + +We can see that the `-l` option has modified the command and now our output has listed all the files in alphanumeric order, which can make finding a specific file easier. +It also includes information about the file size, time of its last modification, and permission and ownership information. + +Most unix commands follow this basic structure. +![Structure of a Unix command](../fig/Unix_Command_Struc.svg) + +The **prompt** tells us that the terminal is accepting inputs, prompts can be customised to show all sorts of info. + +The **command**, what are we trying to do. + +**Options** will modify the behavior of the command, multiple options can be specified. +Options will either start with a single dash (`-`) or two dashes (`--`).. +Often options will have a short and long format e.g. `-a` and `--all`. + +**Arguments** tell the command what to operate on (usually files and directories). + +Each part is separated by spaces: if you omit the space +between `ls` and `-l` the shell will look for a command called `ls-l`, which +doesn't exist. Also, capitalization can be important. +For example, `ls -s` will display the size of files and directories alongside the names, +while `ls -S` will sort the files and directories by size. + +Another useful option for `ls` is the `-a` option, lets try using this option together with the `-l` option: + +``` +{{ site.remote.prompt }} ls -la +``` + +{: .language-bash} + +{% include {{ site.snippets }}/filedir/dir-contents3.snip %} + +Single letter options don't usually need to be separate. In this case `ls -la` is performing the same function as if we had typed `ls -l -a`. + +You might notice that we now have two extra lines for directories `.` and `..`. These are hidden directories which the `-a` option has been used to reveal, you can make any file or directory hidden by beginning their filenames with a `.`. + +These two specific hidden directories are special as they will exist hidden inside every directory, with the `.` hidden directory representing your current directory and the `..` hidden directory representing the **parent** directory above your current directory. + +> ## Exploring More `ls` Flags +> +> You can also use two options at the same time. What does the command `ls` do when used +> with the `-l` option? What about if you use both the `-l` and the `-h` option? +> +> Some of its output is about properties that we do not cover in this lesson (such +> as file permissions and ownership), but the rest should be useful +> nevertheless. +> +> > ## Solution +> > +> > The `-l` option makes `ls` use a **l**ong listing format, showing not only +> > the file/directory names but also additional information, such as the file size +> > and the time of its last modification. If you use both the `-h` option and the `-l` option, +> > this makes the file size '**h**uman readable', i.e. displaying something like `5.3K` +> > instead of `5369`. +> {: .solution} +{: .challenge} + +## Relative paths + +You may have noticed in the last command we did not specify an argument for the directory path. +Until now, when specifying directory names, or even a directory path (as above), +we have been using what are known as **absolute paths**, which work no matter where you are currently located on the machine +since it specifies the full path from the top level root directory. 
+ +An **absolute path** always starts at the root directory, which is indicated by a +leading slash. The leading `/` tells the computer to follow the path from +the root of the file system, so it always refers to exactly one directory, +no matter where we are when we run the command. + +Any path without a leading `/` is a **relative path**. + +When you use a relative path with a command +like `ls` or `cd`, it tries to find that location starting from where we are, +rather than from the root of the file system. + +In the previous command, since we did not specify an **absolute path** it ran the command on the relative path from our current directory +(implicitly using the `.` hidden directory), and so listed the contents of our current directory. + +We will now navigate to the parent directory, the simplest way do this is to use the relative path `..`. + +``` +{{ site.remote.prompt }} cd .. +``` + +{: .language-bash} + +We should now be back in `{{ site.working_dir[0] }}`. + +``` +{{ site.remote.prompt }} pwd +``` + +{: .language-bash} + +``` +{{ site.working_dir[0] }} +``` + +{: .output} + +## Tab completion + + Sometimes file paths and file names can be very long, making typing out the path tedious. + One trick you can use to save yourself time is to use something called **tab completion**. + If you start typing the path in a command and there is only one possible match, + if you hit tab the path will autocomplete (until there are more than one possible matches). + +For example, if you type: + +``` +{{ site.remote.prompt }} cd {{ site.working_dir | last | slice: 0,3 }} +``` +{: .language-bash} + +and then press Tab (the tab key on your keyboard), +the shell automatically completes the directory name for you (since there is only one possible match): + +``` +{{ site.remote.prompt }} cd {{ site.working_dir | last }}/ +``` +{: .language-bash} + + However, you want to move to your personal working directory. If you hit Tab once you will + likely see nothing change, as there are more than one possible options. Hitting Tab + a second time will print all possible autocomplete options. + +``` +cwal219/ riom/ harrellw/ +``` +{: .output} + +Now entering in the first few characters of the path (just enough that the possible options are no longer ambiguous) and pressing Tab again, should complete the path. + + Now press Enter to execute the command. + +``` +{{ site.remote.prompt }} cd {{ site.working_dir | last }}/ +``` +{: .language-bash} + +Check that we've moved to the right place by running `pwd`. + +``` +{{ site.working_dir | join: '/' }}/ +``` + +> ## Two More Shortcuts +> +> The shell interprets a tilde (`~`) character at the start of a path to +> mean "the current user's home directory". For example, if Nelle's home +> directory is `/home/nelle`, then `~/data` is equivalent to +> `/home/nelle/data`. This only works if it is the first character in the +> path: `here/there/~/elsewhere` is *not* `here/there//home/nelle/elsewhere`. +> +> Another shortcut is the `-` (dash) character. `cd` will translate `-` into +> *the previous directory I was in*, which is faster than having to remember, +> then type, the full path. This is a *very* efficient way of moving +> *back and forth between two directories* -- i.e. if you execute `cd -` twice, +> you end up back in the starting directory. +> +> The difference between `cd ..` and `cd -` is +> that the former brings you *up*, while the latter brings you *back*. 
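+>
+> As a quick sketch (assuming you start in your home directory), `cd` into the
+> workshop directory and then jump straight back again with `cd -`:
+>
+>```
+>{{ site.remote.prompt }} cd {{ site.working_dir | join: '/' }}
+>{{ site.remote.prompt }} cd -
+>```
+> {: .language-bash}
+>
+> The second command prints the directory it has returned you to, which here is
+> your home directory.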
+> +{: .callout} + +> ## Absolute vs Relative Paths +> +> Starting from `/home/amanda/data`, +> which of the following commands could Amanda use to navigate to her home directory, +> which is `/home/amanda`? +> +> 1. `cd .` +> 2. `cd /` +> 3. `cd home/amanda` +> 4. `cd ../..` +> 5. `cd ~` +> 6. `cd home` +> 7. `cd ~/data/..` +> 8. `cd` +> 9. `cd ..` +> +> > ## Solution +> > +> > 1. No: `.` stands for the current directory. +> > 2. No: `/` stands for the root directory. +> > 3. No: Amanda's home directory is `/home/amanda`. +> > 4. No: this command goes up two levels, i.e. ends in `/home`. +> > 5. Yes: `~` stands for the user's home directory, in this case `/home/amanda`. +> > 6. No: this command would navigate into a directory `home` in the current directory if it exists. +> > 7. Yes: unnecessarily complicated, but correct. +> > 8. Yes: shortcut to go back to the user's home directory. +> > 9. Yes: goes up one level. +> {: .solution} +{: .challenge} + +> ## Relative Path Resolution +> +> Using the filesystem diagram below, if `pwd` displays `/Users/thing`, +> what will `ls ../backup` display? +> +> 1. `../backup: No such file or directory` +> 2. `2012-12-01 2013-01-08 2013-01-27` +> 3. `original pnas_final pnas_sub` +> +> ![A directory tree below the Users directory where "/Users" contains the +directories "backup" and "thing"; "/Users/backup" contains "original", +"pnas_final" and "pnas_sub"; "/Users/thing" contains "backup"; and +"/Users/thing/backup" contains "2012-12-01", "2013-01-08" and +"2013-01-27"](../fig/filesystem-challenge.svg) +> +> > ## Solution +> > +> > 1. No: there *is* a directory `backup` in `/Users`. +> > 2. No: this is the content of `Users/thing/backup`, +> > but with `..`, we asked for one level further up. +> > 3. Yes: `../backup/` refers to `/Users/backup/`. +> > +> {: .solution} +{: .challenge} + +> ## Clearing your terminal +> +> If your screen gets too cluttered, you can clear your terminal using the +> `clear` command. You can still access previous commands using +> and to move line-by-line, or by scrolling in your terminal. +{: .callout} + +> ## Listing in Reverse Chronological Order +> +> By default, `ls` lists the contents of a directory in alphabetical +> order by name. The command `ls -t` lists items by time of last +> change instead of alphabetically. The command `ls -r` lists the +> contents of a directory in reverse order. +> Which file is displayed last when you combine the `-t` and `-r` flags? +> Hint: You may need to use the `-l` flag to see the +> last changed dates. +> +> > ## Solution +> > +> > The most recently changed file is listed last when using `-rt`. This +> > can be very useful for finding your most recent edits or checking to +> > see if a new output file was written. +> {: .solution} +{: .challenge} + +> ## Globbing +> +> One of the most powerful features of bash is *filename expansion*, otherwise known as *globbing*. +> This allows you to use *patterns* to match a file name (or multiple files), +> which will then be operated on as if you had typed out all of the matches. +> +> `*` is a **wildcard**, which matches zero or more characters. 
+> +> Inside the `{{ site.working_dir | join: '/' }}` directory there is a directory called `birds` +> +>``` +>{{ site.remote.prompt }} cd {{ site.working_dir | join: '/' }}/birds +>{{ site.remote.prompt }} ls +>``` +> {: .language-bash} +> +> ``` +> kaka.txt kakapo.jpeg kea.txt kiwi.jpeg pukeko.jpeg +> ``` +> {: .output} +> +> In this example there aren't many files, but it is easy to imagine a situation where you have hundreds or thousads of files you need to filter through, and globbing is the perfect tool for this. Using the wildcard character the command +> +>``` +>{{ site.remote.prompt }} ls ka* +>``` +> {: .language-bash} +> +> Will return: +> +>``` +>kaka.txt kakapo.jpeg +>``` +> {: .output} +> +> Since the pattern `ka*` will match `kaka.txt`and `kakapo.jpeg` as these both start with "ka". While the command: +> +>``` +>{{ site.remote.prompt }} ls *.jpeg +>``` +> {: .language-bash} +> +> Will return: +> +>``` +>kakapo.jpeg kiwi.jpeg pukeko.jpeg +>``` +> {: .output} +> +> As `*.jpeg` will match `kakapo.jpeg`, `kiwi.jpeg` and `pukeko.jpeg` as they all end in `.jpeg` +> You can use multiple wildcards as well with the command: +> +>``` +>{{ site.remote.prompt }} ls k*a.* +>``` +> {: .language-bash} +> +> Returning: +> +>``` +>kaka.txt kea.txt +>``` +> {: .output} +> +> As `k*a.*` will match just `kaka.txt` and `kea.txt` +> +> `?` is also a wildcard, but it matches exactly one character. So the command: +> +>``` +>{{ site.remote.prompt }} ls ????.* +>``` +> {: .language-bash} +> +> Would return: +>``` +>kaka.txt kiwi.jpeg +>``` +> {: .output} +> +> As `kaka.txt` and `kiwi.jpeg` the only files which have four characters, followed by a `.` then any number and combination of characters. +> +> When the shell sees a wildcard, it expands the wildcard to create a +> list of matching filenames *before* running the command that was +> asked for. As an exception, if a wildcard expression does not match +> any file, Bash will pass the expression as an argument to the command +> as it is. +> However, generally commands like `wc` and `ls` see the lists of +> file names matching these expressions, but not the wildcards +> themselves. It is the shell, not the other programs, that deals with +> expanding wildcards. +{: .callout} + +> ## List filenames matching a pattern +> +> Running `ls` in a directory gives the output +> `cubane.pdb ethane.pdb methane.pdb octane.pdb pentane.pdb propane.pdb` +> +> Which `ls` command(s) will +> produce this output? +> +> `ethane.pdb methane.pdb` +> +> 1. `ls *t*ane.pdb` +> 2. `ls *t?ne.*` +> 3. `ls *t??ne.pdb` +> 4. `ls ethane.*` +> +>> ## Solution +>> +>> The solution is `3.` +>> +>> `1.` shows all files whose names contain zero or more characters (`*`) +>> followed by the letter `t`, +>> then zero or more characters (`*`) followed by `ane.pdb`. +>> This gives `ethane.pdb methane.pdb octane.pdb pentane.pdb`. +>> +>> `2.` shows all files whose names start with zero or more characters (`*`) followed by +>> the letter `t`, +>> then a single character (`?`), then `ne.` followed by zero or more characters (`*`). +>> This will give us `octane.pdb` and `pentane.pdb` but doesn't match anything +>> which ends in `thane.pdb`. +>> +>> `3.` fixes the problems of option 2 by matching two characters (`??`) between `t` and `ne`. +>> This is the solution. +>> +>> `4.` only shows files starting with `ethane.`. +> {: .solution} +{: .challenge} + +include in terminal excersise (delete slurm files later on maybe?) + +## Create a text file + +Now let's create a file. 
To do this we will use a text editor called Nano to create a file called `draft.txt`: + +``` +{{ site.remote.prompt }} nano draft.txt +``` +{: .language-bash} + +> ## Which Editor? +> +> When we say, '`nano` is a text editor' we really do mean 'text': it can +> only work with plain character data, not tables, images, or any other +> human-friendly media. We use it in examples because it is one of the +> least complex text editors. However, because of this trait, it may +> not be powerful enough or flexible enough for the work you need to do +> after this workshop. On Unix systems (such as Linux and macOS), +> many programmers use [Emacs](http://www.gnu.org/software/emacs/) or +> [Vim](http://www.vim.org/) (both of which require more time to learn), +> or a graphical editor such as +> [Gedit](http://projects.gnome.org/gedit/). On Windows, you may wish to +> use [Notepad++](http://notepad-plus-plus.org/). Windows also has a built-in +> editor called `notepad` that can be run from the command line in the same +> way as `nano` for the purposes of this lesson. +> +> No matter what editor you use, you will need to know where it searches +> for and saves files. If you start it from the shell, it will (probably) +> use your current working directory as its default location. If you use +> your computer's start menu, it may want to save files in your desktop or +> documents directory instead. You can change this by navigating to +> another directory the first time you 'Save As...' +{: .callout} + +Let's type in a few lines of text. +Once we're happy with our text, we can press Ctrl+O +(press the Ctrl or Control key and, while +holding it down, press the O key) to write our data to disk +(we'll be asked what file we want to save this to: +press Return to accept the suggested default of `draft.txt`). + +
+*Screenshot of the `nano` text editor in action.*
+ +Once our file is saved, we can use Ctrl+X to quit the editor and +return to the shell. + +> ## Control, Ctrl, or ^ Key +> +> The Control key is also called the 'Ctrl' key. There are various ways +> in which using the Control key may be described. For example, you may +> see an instruction to press the Control key and, while holding it down, +> press the X key, described as any of: +> +> * `Control-X` +> * `Control+X` +> * `Ctrl-X` +> * `Ctrl+X` +> * `^X` +> * `C-x` +> +> In nano, along the bottom of the screen you'll see `^G Get Help ^O WriteOut`. +> This means that you can use `Control-G` to get help and `Control-O` to save your +> file. +{: .callout} + +`nano` doesn't leave any output on the screen after it exits, +but `ls` now shows that we have created a file called `draft.txt`: + +``` +{{ site.remote.prompt }} ls +``` +{: .language-bash} + +``` +draft.txt +``` +{: .output} + +## Copying files and directories + +In a future lesson, we will be running the R script ```{{ site.working_dir | join: '/' }}/{{ site.example.script }}```, but as we can't all work on the same file at once you will need to take your own copy. This can be done with the **c**o**p**y command `cp`, at least two arguments are needed the file (or directory) you want to copy, and the directory (or file) where you want the copy to be created. We will be copying the file into the directory we made previously, as this should be your current directory the second argument can be a simple `.`. + +``` +{{ site.remote.prompt }} cp {{ site.working_dir | join: '/' }}/{{ site.example.script }} . +``` +{: .output} + +We can check that it did the right thing using `ls` + +``` +{{ site.remote.prompt }} ls +``` +{: .language-bash} + +``` +draft.txt {{ site.example.script }} +``` +{: .output} + +## Other File operations + +`cat` stands for concatenate, meaning to link or merge things together. It is primarily used for printing the contents of one or more files to the standard output. +`head` and `tail` will print the first or last lines (head or tail) of the specified file(s). By default it will print 10 lines, but a specific number of lines can be specified with the `-n` option. +`mv` to **m**o**v**e move a file, is used similarly to `cp` taking a source argument(s) and a destination argument. +`rm` will **r**e**m**ove move a file and only needs one argument. + +The `mv` command is also used to rename a file, for example `mv my_fiel my_file`. This is because as far as the computer is concerned *moving and renaming a file are the same operation*. + +In order to `cp` a directory (and all its contents) the `-r` for [recursive](https://en.wikipedia.org/wiki/Recursion) option must be used. +The same is true when deleting directories with `rm` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| command | name | usage |
+| ------- | ---- | ----- |
+| `cp` | copy | `cp file1 file2` <br> `cp -r directory1/ directory2/` |
+| `mv` | move | `mv file1 file2` <br> `mv directory1/ directory2/` |
+| `rm` | remove | `rm file1 file2` <br> `rm -r directory1/ directory2/` |
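+
+As a short worked sketch using the `draft.txt` file from earlier (the names `draft_copy.txt` and `notes.txt` are just arbitrary examples):
+
+```
+{{ site.remote.prompt }} cp draft.txt draft_copy.txt
+{{ site.remote.prompt }} mv draft_copy.txt notes.txt
+{{ site.remote.prompt }} rm notes.txt
+```
+{: .language-bash}
+
+This copies the file, renames the copy, and then removes the copy again, leaving the original `draft.txt` untouched.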
+ +For `mv` and `cp` if the destination path (final argument) is an existing directory the file will be placed inside that directory with the same name as the source. + +> ## Moving vs Copying +> +> When using the `cp` or `rm` commands on a directory the 'recursive' flag `-r` must be used, but `mv` *does not* require it? +> +>> ## Solution +>> +>> We mentioned previously that as far the computer is concerned, *renaming* is the same operation as *moving*. +>> Contrary to what the commands name implies, *all moving is actually renaming*. +>> The data on the hard drive stays in the same place, +>> only the label applied to that block of memory is changed. +>> To copy a directory, each *individual file* inside that directory must be read, and then written to the copy destination. +>> To delete a directory, each *individual file* in the directory must be marked for deletion, +>> however when moving a directory the files inside are the data inside the directory is not interacted with, +>> only the parent directory is "renamed" to a different place. +>> +>> This is also why `mv` is faster than `cp` as no reading of the files is required. +> {: .solution} +{: .challenge} + +> ## Unsupported command-line options +> +> If you try to use an option (flag) that is not supported, `ls` and other commands +> will usually print an error message similar to: +> +> ``` +> $ ls -j +> ``` +> {: .language-bash} +> +> ``` +> ls: invalid option -- 'j' +> Try 'ls --help' for more information. +> ``` +> {: .error} +{: .callout} + +## Getting help + +Commands will often have many **options**. Most commands have a `--help` flag, as can be seen in the error above. You can also use the manual pages (aka manpages) by using the `man` command. The manual page provides you with all the available options and their use in more detail. For example, for thr `ls` command: + +``` +{{ site.remote.prompt }} man ls +``` +{: .language-bash} + +``` +Usage: ls [OPTION]... [FILE]... +List information about the FILEs (the current directory by default). +Sort entries alphabetically if neither -cftuvSUX nor --sort is specified. + +Mandatory arguments to long options are mandatory for short options, too. + -a, --all do not ignore entries starting with . + -A, --almost-all do not list implied . and .. + --author with -l, print the author of each file + -b, --escape print C-style escapes for nongraphic characters + --block-size=SIZE scale sizes by SIZE before printing them; e.g., + '--block-size=M' prints sizes in units of + 1,048,576 bytes; see SIZE format below + -B, --ignore-backups do not list implied entries ending with ~ + -c with -lt: sort by, and show, ctime (time of last + modification of file status information); + with -l: show ctime and sort by name; + otherwise: sort by ctime, newest first + -C list entries by columns + --color[=WHEN] colorize the output; WHEN can be 'always' (default + if omitted), 'auto', or 'never'; more info below + -d, --directory list directories themselves, not their contents + -D, --dired generate output designed for Emacs' dired mode + -f do not sort, enable -aU, disable -ls --color + -F, --classify append indicator (one of */=>@|) to entries +...       ...       ... +``` +{: .output} + +To navigate through the `man` pages, +you may use and to move line-by-line, +or try B and Spacebar to skip up and down by a full page. +To search for a character or word in the `man` pages, +use / followed by the character or word you are searching for. +Sometimes a search will result in multiple hits. 
If so, you can move between hits using N (for moving forward) and Shift+N (for moving backward). + +To **quit** the `man` pages, press Q. + +> ## Manual pages on the web +> +> Of course, there is a third way to access help for commands: +> searching the internet via your web browser. +> When using internet search, including the phrase `unix man page` in your search +> query will help to find relevant results. +> +> GNU provides links to its +> [manuals](http://www.gnu.org/manual/manual.html) including the +> [core GNU utilities](http://www.gnu.org/software/coreutils/manual/coreutils.html), +> which covers many commands introduced within this lesson. +{: .callout} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md b/docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md new file mode 100644 index 000000000..5dbef2d76 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md @@ -0,0 +1,214 @@ +--- +title: "NeSI Filesystem" +teaching: 15 +exercises: 5 +questions: +- "Where is the best place to store my data?" +- "How do I recover deleted files?" +- "How do I find out how much disk space I have?" +objectives: +- "Learn about the NeSI filesystems, and when to use each one." +keypoints: +- "" + +--- + +The NeSI filesystem looks something like this: + +![The file system is made up of a root directory that contains sub-directories +titled home, nesi, and system files](../fig/NesiFiletree.svg) + +The directories that are relevant to us are. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Purpose | Location | Default Storage | Default Files | Backup | Access Speed |
+| ------- | -------- | --------------- | ------------- | ------ | ------------ |
+| Home is for user-specific files such as configuration files, environment setup, source code, etc. | `/home/<username>` | 20GB | 1,000,000 | Daily | Normal |
+| Project is for persistent project-related data, project-related software, etc. | `/nesi/project/<projectcode>` | 100GB | 100,000 | Daily | Normal |
+| Nobackup is a 'scratch space', for data you don't need to keep long term. Old data is periodically deleted from nobackup. | `/nesi/nobackup/<projectcode>` | 10TB | 1,000,000 | None | Fast |
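+
+To make these locations concrete, here is a quick sketch of the paths as they would look for a user `cwal219` on project `nesi99999` (both are placeholders; substitute your own username and project code):
+
+```
+{{ site.remote.prompt }} ls /home/cwal219
+{{ site.remote.prompt }} ls /nesi/project/nesi99999
+{{ site.remote.prompt }} ls /nesi/nobackup/nesi99999
+```
+{: .language-bash}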
+ +### Managing your data and storage (backups and quotas) + +NeSI performs backups of the `/home` and `/nesi/project` (persistent) filesystems. However, backups are only captured once per day. So, if you edit or change code or data and then immediately delete it, it likely cannot be recovered. Note, as the name suggests, NeSI does **not** backup the `/nesi/nobackup` filesystem. + +Protecting critical data from corruption or deletion is primarily your +responsibility. Ensure you have a data management plan and stick to the plan to reduce the chance of data loss. Also important is managing your storage quota. To check your quotas, use the `nn_storage_quota` command, eg + +{% include {{ site.snippets }}/filedir/sinfo.snip %} + +As well as disk space, 'inodes' are also tracked, this is the *number* of files. + +Notice that the project space for this user is over quota and has been locked, meaning no more data can be added. When your space is locked you will need to move or remove data. Also note that none of the nobackup space is being used. Likely data from project can be moved to nobackup. `nn_storage_quota` uses cached data, and so will no immediately show changes to storage use. + +For more details on our persistent and nobackup storage systems, including data retention and the nobackup autodelete schedule, +please see our [Filesystem and Quota](https://docs.nesi.org.nz/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas/) documentation. + +### Working Directory + +We will be working from the directory `{{ site.working_dir[-1] }}`. + +``` +{{ site.remote.prompt }} cd {{ site.working_dir | join: '/' }} +``` + +{: .language-bash} + +### Creating directories + + + +As previously mentioned, it is general useful to organise your work in a hierarchical file structure to make managing and finding files easier. It is also is especially important when working within a shared directory with colleagues, such as a project, to minimise the chance of accidentally affecting your colleagues work. So for this workshop you will each make a directory using the `mkdir` command within the workshops directory for you to personally work from. + +``` +{{ site.remote.prompt }} mkdir +``` + +{: .language-bash} + +You should then be able to see your new directory is there using `ls`. + +``` +{{ site.remote.prompt }} ls {{ site.working_dir | join: '/' }} +``` + +{: .language-bash} + +{% include {{ site.snippets }}/filedir/dir-contents1.snip %} + +## Create a text file + +Now let's create a file. To do this we will use a text editor called Nano to create a file called `draft.txt`: + +We will want to do this from inside the directory we just created. + +``` +{{ site.remote.prompt }} cd +{{ site.remote.prompt }} nano draft.txt +``` + +{: .language-bash} + +> ## Which Editor? +> +> When we say, '`nano` is a text editor' we really do mean 'text': it can +> only work with plain character data, not tables, images, or any other +> human-friendly media. We use it in examples because it is one of the +> least complex text editors. However, because of this trait, it may +> not be powerful enough or flexible enough for the work you need to do +> after this workshop. On Unix systems (such as Linux and macOS), +> many programmers use [Emacs](http://www.gnu.org/software/emacs/) or +> [Vim](http://www.vim.org/) (both of which require more time to learn), +> or a graphical editor such as +> [Gedit](http://projects.gnome.org/gedit/). On Windows, you may wish to +> use [Notepad++](http://notepad-plus-plus.org/). 
Windows also has a built-in +> editor called `notepad` that can be run from the command line in the same +> way as `nano` for the purposes of this lesson. +> +> No matter what editor you use, you will need to know where it searches +> for and saves files. If you start it from the shell, it will (probably) +> use your current working directory as its default location. If you use +> your computer's start menu, it may want to save files in your desktop or +> documents directory instead. You can change this by navigating to +> another directory the first time you 'Save As...' +{: .callout} + +Let's type in a few lines of text. +Once we're happy with our text, we can press Ctrl+O +(press the Ctrl or Control key and, while +holding it down, press the O key) to write our data to disk +(we'll be asked what file we want to save this to: +press Return to accept the suggested default of `draft.txt`). + +
*[Figure: screenshot of nano text editor in action]*
+ +Once our file is saved, we can use Ctrl+X to quit the editor and +return to the shell. + +> ## Control, Ctrl, or ^ Key +> +> The Control key is also called the 'Ctrl' key. There are various ways +> in which using the Control key may be described. For example, you may +> see an instruction to press the Control key and, while holding it down, +> press the X key, described as any of: +> +> * `Control-X` +> * `Control+X` +> * `Ctrl-X` +> * `Ctrl+X` +> * `^X` +> * `C-x` +> +> In nano, along the bottom of the screen you'll see `^G Get Help ^O WriteOut`. +> This means that you can use `Control-G` to get help and `Control-O` to save your +> file. +{: .callout} + +`nano` doesn't leave any output on the screen after it exits, +but `ls` now shows that we have created a file called `draft.txt`: + +``` +{{ site.remote.prompt }} ls +``` + +{: .language-bash} + +``` +draft.txt +``` + +{: .output} + +## Copying files and directories + +In a future lesson, we will be running the R script ```{{ site.working_dir | join: '/' }}/{{ site.example.script }} ```, but as we can't all work on the same file at once you will need to take your own copy. This can be done with the **c**o**p**y command `cp`, at least two arguments are needed the file (or directory) you want to copy, and the directory (or file) where you want the copy to be created. We will be copying the file into the directory we made previously, as this should be your current directory the second argument can be a simple `.`. + +``` +{{ site.remote.prompt }} cp {{ site.working_dir | join: '/' }}/{{ site.example.script }} . +``` + +{: .output} + +We can check that it did the right thing using `ls` + +``` +{{ site.remote.prompt }} ls +``` + +{: .language-bash} + +``` +draft.txt {{ site.example.script }} +``` + +{: .output} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/modules.md b/docs/Scientific_Computing/Training/Intro_HPC/modules.md new file mode 100644 index 000000000..232bcae18 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/modules.md @@ -0,0 +1,258 @@ +--- +title: "Accessing software via Modules" +teaching: 15 +exercises: 5 +questions: +- "How do we load and unload software packages?" +objectives: +- "Load and use a software package." +- "Explain how the shell environment changes when the module mechanism loads or unloads packages." +keypoints: +- "Load software with `module load softwareName`." +- "Unload software with `module unload`" +- "The module system handles software versioning and package conflicts for you + automatically." +--- + +On a high-performance computing system, it is seldom the case that the software +we want to use is available when we log in. It is installed, but we will need +to "load" it before it can run. + +Before we start using individual software packages, however, we should +understand the reasoning behind this approach. The three biggest factors are: + +- software incompatibilities +- versioning +- dependencies + +Software incompatibility is a major headache for programmers. Sometimes the +presence (or absence) of a software package will break others that depend on +it. Two of the most famous examples are Python 2 and 3 and C compiler versions. +Python 3 famously provides a `python` command that conflicts with that provided +by Python 2. Software compiled against a newer version of the C libraries and +then used when they are not present will result in a nasty `'GLIBCXX_3.4.20' +not found` error, for instance. + + + +Software versioning is another common issue. 
A team might depend on a certain +package version for their research project - if the software version was to +change (for instance, if a package was updated), it might affect their results. +Having access to multiple software versions allows a set of researchers to +prevent software versioning issues from affecting their results. + +Dependencies are where a particular software package (or even a particular +version) depends on having access to another software package (or even a +particular version of another software package). For example, the VASP +materials science software may depend on having a particular version of the +FFTW (Fastest Fourier Transform in the West) software library available for it +to work. + +## Environment + +Before understanding environment modules we first need to understand what is meant by _environment_. + +The environment is defined by it's _environment variables_. + +_Environment Variables_ are writable named-variables. + +We can assign a variable named "FOO" with the value "bar" using the syntax. + +``` +{{ site.remote.prompt }} FOO="bar" +``` +{: .language-bash} + +Convention is to name fixed variables in all caps. + +Our new variable can be referenced using `$FOO`, you could also use `${FOO}`, +enclosing a variable in curly brackets is good practice as it avoids possible ambiguity. + +``` +{{ site.remote.prompt }} $FOO +``` +{: .language-bash} + +``` +-bash: bar: command not found +``` +{: .output} + +We got an error here because the variable is evalued _in the terminal_ then executed. +If we just want to print the variable we can use the command, + +``` +{{ site.remote.prompt }} echo $FOO +``` +{: .language-bash} +``` +bar +``` +{: .output} + +We can get a full list of environment variables using the command, + +``` +{{ site.remote.prompt }} env +``` +{: .language-bash} +{% include {{ site.snippets }}/modules/env-output.snip %} + +These variables control many aspects of how your terminal, and any software launched from your terminal works. + +## Environment Modules + +Environment modules are the solution to these problems. A _module_ is a +self-contained description of a software package -- it contains the +settings required to run a software package and, usually, encodes required +dependencies on other software packages. + +There are a number of different environment module implementations commonly +used on HPC systems: the two most common are _TCL modules_ and _Lmod_. Both of +these use similar syntax and the concepts are the same so learning to use one +will allow you to use whichever is installed on the system you are using. In +both implementations the `module` command is used to interact with environment +modules. An additional subcommand is usually added to the command to specify +what you want to do. For a list of subcommands you can use `module -h` or +`module help`. As for all commands, you can access the full help on the _man_ +pages with `man module`. + +### Purging Modules + +Depending on how you are accessing the HPC the modules you have loaded by default will be different. So before we start listing our modules we will first use the `module purge` command to clear all but the minimum default modules so that we are all starting with the same modules. + +``` +{{ site.remote.prompt }} module purge +``` +{: .language-bash} + +``` + +The following modules were not unloaded: + (Use "module --force purge" to unload all): + + 1) XALT/minimal 2) slurm 3) NeSI +``` +{: .output} + +Note that `module purge` is informative. 
It lets us know that all but a minimal default +set of packages have been unloaded (and how to actually unload these if we +truly so desired). + +We are able to unload individual modules, unfortunately within the NeSI system it does not always unload it's dependencies, therefore we recommend `module purge` to bring you back to a state where only those modules needed to perform your normal work on the cluster. + +`module purge` is a useful tool for ensuring repeatable research by guaranteeing that the environment that you build your software stack from is always the same. This is important since some modules have the potential to silently effect your results if they are loaded (or not loaded). + +### Listing Available Modules + +To see available software modules, use `module avail`: + +``` +{{ site.remote.prompt }} module avail +``` +{: .language-bash} + +{% include {{ site.snippets }}/modules/available-modules.snip %} + +### Listing Currently Loaded Modules + +You can use the `module list` command to see which modules you currently have +loaded in your environment. On {{ site.remote.name }} you will have a few default modules loaded when you login. + +``` +{{ site.remote.prompt }} module list +``` +{: .language-bash} + +{% include {{ site.snippets }}/modules/module-list-default.snip %} + +## Loading and Unloading Software + +You can load software using the `module load` command. In this example we will be using the programming language _R_. + +Initially, R is not loaded. We can test this by using the `which` +command. `which` looks for programs the same way that Bash does, so we can use +it to tell us where a particular piece of software is stored. + +``` +{{ site.remote.prompt }} which R +``` +{: .language-bash} + +{% include {{ site.snippets }}/modules/missing-r.snip %} + +The important bit here being: + +``` +/usr/bin/which: no R in (...) +``` + +Now lets try loading the R environment module, and try again. + +{% include {{ site.snippets }}/modules/module-load-r.snip %} + +> ## Tab Completion +> +> The module command also supports tab completion. You may find this the easiest way to find the right software. +{: .callout} + +So, what just happened? + +To understand the output, first we need to understand the nature of the `$PATH` +environment variable. `$PATH` is a special environment variable that controls +where a UNIX system looks for software. Specifically `$PATH` is a list of +directories (separated by `:`) that the OS searches through for a command +before giving up and telling us it can't find it. As with all environment +variables we can print it out using `echo`. + +{% include {{ site.snippets }}/modules/r-module-path.snip %} + +You'll notice a similarity to the output of the `which` command. However, in this case, +there are a lot more directories at the beginning. When we +ran the `module load` command, it added many directories to the beginning of our +`$PATH`. + +The path to NeSI XALT utility will normally show up first. This helps us track software usage, but the more important directory is the second one: `/opt/nesi/CS400_centos7_bdw/R/4.2.1-gimkl-2022a/bin` Let's examine what's there: + +{% include {{ site.snippets }}/modules/r-ls-dir-command.snip %} + +`module load` "loads" not only the specified software, but it also loads software dependencies. That is, the software that the application you load requires to run. 
+ +{% include {{ site.snippets }}/modules/software-dependencies.snip %} + +Before moving onto the next session lets use `module purge` again to return to the minimal environment. + +``` +{{ site.remote.prompt }} module purge +``` +{: .language-bash} + +``` +The following modules were not unloaded: + (Use "module --force purge" to unload all): + + 1) XALT/minimal 2) slurm 3) NeSI +``` +{: .output} + +## Software Versioning + +So far, we've learned how to load and unload software packages. However, we have not yet addressed the issue of software versioning. At +some point or other, you will run into issues where only one particular version +of some software will be suitable. Perhaps a key bugfix only happened in a +certain version, or version _X_ broke compatibility with a file format you use. +In either of these example cases, it helps to be very specific about what +software is loaded. + +Let's examine the output of `module avail` more closely. + +``` +{{ site.remote.prompt }} module avail +``` +{: .language-bash} + +{% include {{ site.snippets }}/modules/available-modules.snip %} + +{% include {{ site.snippets }}/modules/wrong-python-version.snip %} + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/parallel.md b/docs/Scientific_Computing/Training/Intro_HPC/parallel.md new file mode 100644 index 000000000..7771d2d82 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/parallel.md @@ -0,0 +1,202 @@ +--- +title: "What is Parallel Computing" +teaching: 20 +exercises: 10 +questions: +- "How do we execute a task in parallel?" +- "What benefits arise from parallel execution?" +- "What are the limits of gains from execution in parallel?" +- "What is the difference between implicit and explicit parallelisation." +objectives: +- "Prepare a job submission script for the parallel executable." +keypoints: +- "Parallel programming allows applications to take advantage of + parallel hardware; serial code will not 'just work.'" +- "There are multiple ways you can run " +--- + +## Methods of Parallel Computing + +To understand the different types of Parallel Computing we first need to clarify some terms. + +{% include figure.html url="" max-width="40%" + file="/fig/clusterDiagram.png" + alt="Node anatomy" caption="" %} + +**CPU**: Unit that does the computations. + +**Task**: One or more CPUs that share memory. + +**Node**: The physical hardware. The upper limit on how many CPUs can be in a task. + +**Shared Memory**: When multiple CPUs are used within a single task. + +**Distributed Memory**: When multiple tasks are used. + +Which methods are available to you is largely dependent on the nature of the problem and software being used. + +### Shared-Memory (SMP) + +Shared-memory multiproccessing divides work among _CPUs_ or _threads_, all of these threads require access to the same memory. + +Often called *Multithreading*. + +This means that all CPUs must be on the same node, most Mahuika nodes have 72 CPUs. + +Shared memory parallelism is used in our example script `{{ site.example.script }}`. + +Number of threads to use is specified by the Slurm option `--cpus-per-task`. + +### Distributed-Memory (MPI) + +Distributed-memory multiproccessing divides work among _tasks_, a task may contain multiple CPUs (provided they all share memory, as discussed previously). + +Message Passing Interface (MPI) is a communication standard for distributed-memory multiproccessing. While there are other standards, often 'MPI' is used synonymously with Distributed parallelism. 
+ +Each task has it's own exclusive memory, tasks can be spread across multiple nodes, communicating via and _interconnect_. This allows MPI jobs to be much larger than shared memory jobs. It also means that memory requirements are more likely to increase proportionally with CPUs. + +Distributed-Memory multiproccessing predates shared-memory multiproccessing, and is more common with classical high performance applications (older computers had one CPU per node). + +Number of tasks to use is specified by the Slurm option `--ntasks`, because the number of tasks ending up on one node is variable you should use `--mem-per-cpu` rather than `--mem` to ensure each task has enough. + +Tasks cannot share cores, this means in most circumstances leaving `--cpus-per-task` unspecified will get you `2`. + +Using a combination of Shared and Distributed memory is called _Hybrid Parallel_. + +### GPGPU's + +GPUs compute large number of simple operations in parallel, making them well suited for Graphics Processing (hence the name), or any other large matrix operations. + +On NeSI, GPU's are specialised pieces of hardware that you request in addition to your CPUs and memory. + +You can find an up-to-date(ish) list of GPUs available on NeSI in our [Support Documentation](https://docs.nesi.org.nz/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI/) + +GPUs can be requested using `--gpus-per-node=:` + +Depending on the GPU type, we *may* also need to specify a partition using `--partition`. + +> ## GPU Job Example +> +> Create a new script called `gpu-job.sl` +> +> ``` +> #!/bin/bash -e +> +> #SBATCH --job-name gpu-job +> #SBATCH --account {{site.sched.projectcode}} +> #SBATCH --output %x.out +> #SBATCH --mem-per-cpu 2G +> #SBATCH --gpu-per-node P100:1 +> +> module load CUDA +> nvidia-smi +> ``` +> {: .language-bash} +> +> then submit with +> +> ``` +> {{ site.remote.prompt }} sbatch gpu-job.sl +> ``` +> {: .language-bash} +> +> > ## Solution +> > +> > ``` +> > {{ site.remote.prompt }} cat gpu-job.out +> > +> > ``` +> > {: .language-bash} +> > +> > ``` +> > Tue Mar 12 19:40:51 2024 +> > +-----------------------------------------------------------------------------+ +> > | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | +> > |-------------------------------+----------------------+----------------------+ +> > | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | +> > | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | +> > | | | MIG M. | +> > |===============================+======================+======================| +> > | 0 Tesla P100-PCIE... On | 00000000:05:00.0 Off | 0 | +> > | N/A 28C P0 24W / 250W | 0MiB / 12288MiB | 0% Default | +> > | | | N/A | +> > +-------------------------------+----------------------+----------------------+ +> > +> > +-----------------------------------------------------------------------------+ +> > | Processes: | +> > | GPU GI CI PID Type Process name GPU Memory | +> > | ID ID Usage | +> > |=============================================================================| +> > | No running processes found | +> > +-----------------------------------------------------------------------------+ +> > ``` +> > {: .output} +> {: .solution} +{: .challenge} + +### Job Array + +Job arrays are not "multiproccessing" in the same way as the previous two methods. +Ideal for _embarrassingly parallel_ problems, where there are little to no dependencies between the different jobs. 
+ +Can be thought of less as running a single job in parallel and more about running multiple serial-jobs simultaneously. +Often this will involve running the same process on multiple inputs. + +Embarrassingly parallel jobs should be able to scale without any loss of efficiency. If this type of parallelisation is an option, it will almost certainly be the best choice. + +A job array can be specified using `--array` + +If you are writing your own code, then this is something you will probably have to specify yourself. + +## How to Utilise Multiple CPUs + +Requesting extra resources through Slurm only means that more resources will be available, it does not guarantee your program will be able to make use of them. + +Generally speaking, Parallelism is either _implicit_ where the software figures out everything behind the scenes, or _explicit_ where the software requires extra direction from the user. + +### Scientific Software + +The first step when looking to run particular software should always be to read the documentation. +On one end of the scale, some software may claim to make use of multiple cores implicitly, but this should be verified as the methods used to determine available resources are not guaranteed to work. + +Some software will require you to specify number of cores (e.g. `-n 8` or `-np 16`), or even type of paralellisation (e.g. `-dis` or `-mpi=intelmpi`). + +Occasionally your input files may require rewriting/regenerating for every new CPU combintation (e.g. domain based parallelism without automatic partitioning). + +### Writing Code + +Occasionally requesting more CPUs in your Slurm job is all that is required and whatever program you are running will automagically take advantage of the additional resources. +However, it's more likely to require some amount of effort on your behalf. + +It is important to determine this before you start requesting more resources through Slurm + +If you are writing your own code, some programming languages will have functions that can make use of multiple CPUs without requiring you to changes your code. +However, unless that function is where the majority of time is spent, this is unlikely to give you the performance you are looking for. + +*Python: [Multiproccessing](https://docs.python.org/3/library/multiprocessing.html)* (not to be confused with `threading` which is not really parallel.) + +*MATLAB: [Parpool](https://au.mathworks.com/help/parallel-computing/parpool.html)* + +## Summary + +| Name | Other Names | Slurm Option | Pros/cons | +| - | - | - | - | +| Shared Memory Parallelism | Multithreading, Multiproccessing | `--cpus-per-task` | | +| Distrubuted Memory Parallelism | MPI, OpenMPI | `--ntasks` and add `srun` before command | | +| Hybrid | | `--ntasks` and `--cpus-per-task` and add `srun` before command | | +| Job Array | | `--array` | | +| General Purpose GPU | | `--gpus-per-node` | | + +> ## Running a Parallel Job. +> +> Pick one of the method of Paralellism mentioned above, and modify your `example.sl` script to use this method. +> +> +> > ## Solution +> > +> > What does the printout say at the start of your job about number and location of node. 
+> > {: .output} +> {: .solution} +{: .challenge} + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/resources.md b/docs/Scientific_Computing/Training/Intro_HPC/resources.md new file mode 100644 index 000000000..e9fdc9465 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/resources.md @@ -0,0 +1,376 @@ +--- +title: "Using resources effectively" +teaching: 20 +exercises: 10 +questions: +- "How can I review past jobs?" +- "How can I use this knowledge to create a more accurate submission script?" +objectives: +- "Understand how to look up job statistics and profile code." +- "Understand job size implications." +- "Understand problems and limitations involved in using multiple CPUs." +keypoints: +- "As your task gets larger, so does the potential for inefficiencies." +- "The smaller your job (time, CPUs, memory, etc), the faster it will schedule." +math: True +--- + + +## What Resources? + +Last time we submitted a job, we did not specify a number of CPUs, and therefore +we were provided the default of `2` (1 _core_). + +As a reminder, our slurm script `example_job.sl` currently looks like this. + +``` +{% include example_scripts/example_job.sl.1 %} +``` + +{: .language-bash} + +We will now submit the same job again with more CPUs. +We ask for more CPUs using by adding `#SBATCH --cpus-per-task 4` to our script. +Your script should now look like this: + +``` +{% include example_scripts/example_job.sl.2 %} +``` + +{: .language-bash} + +And then submit using `sbatch` as we did before. + +``` +{{ site.remote.prompt }} sbatch example_job.sl +``` + +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/basic-job-script.snip %} + +> ## Watch +> +> We can prepend any command with `watch` in order to periodically (default 2 seconds) run a command. e.g. `watch +> squeue --me` will give us up to date information on our running jobs. +> Care should be used when using `watch` as repeatedly running a command can have adverse effects. +> Exit `watch` with ctrl + c. +{: .callout} + +Note in squeue, the number under cpus, should be '4'. + +Checking on our job with `sacct`. +Oh no! + +{% include {{ site.snippets }}/scaling/OOM.snip %} +{: .language-bash} + +To understand why our job failed, we need to talk about the resources involved. + +Understanding the resources you have available and how to use them most efficiently is a vital skill in high performance computing. + +Below is a table of common resources and issues you may face if you do not request the correct amount. + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Resource | Not enough | Too Much |
| - | - | - |
| CPU | The job will run more slowly than expected, and so may run out of time and get killed for exceeding its time limit. | The job will wait in the queue for longer. You will be charged for CPUs regardless of whether they are used or not. Your fair share score will fall more. |
| Memory | Your job will fail, probably with an 'OUT OF MEMORY' error, segmentation fault or bus error (may not happen immediately). | The job will wait in the queue for longer. You will be charged for memory regardless of whether it is used or not. Your fair share score will fall more. |
| Walltime | The job will run out of time and be terminated by the scheduler. | The job will wait in the queue for longer. |
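
Each of these resources maps onto a `#SBATCH` option in your job script. As a rough sketch only (the values below are illustrative placeholders, not recommendations for any particular job), a header that sets all three explicitly might look like this:

```
#!/bin/bash -e

#SBATCH --job-name      example_job
#SBATCH --account       {{site.sched.projectcode}}
#SBATCH --time          00:15:00       # walltime
#SBATCH --cpus-per-task 4              # CPUs
#SBATCH --mem           1G             # memory

# commands to run go here
```
{: .language-bash}

If any of these requests turns out to be badly wrong, the table above describes what you can expect to happen.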
+ +## Measuring Resource Usage of a Finished Job + +Since we have already run a job (successful or otherwise), this is the best source of info we currently have. +If we check the status of our finished job using the `sacct` command we learned earlier. + +``` +{{ site.remote.prompt }} sacct +``` + +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/basic-job-status-sacct.snip %} + +With this information, we may determine a couple of things. + +Memory efficiency can be determined by comparing ReqMem (requested memory) with MaxRSS (maximum used memory), MaxRSS is given in KB, so a unit conversion is usually required. + + + +
+ +$$ {Efficiency_{mem} = { MaxRSS \over ReqMem}} $$ + +
So for the above example we see that 0.1GB (102048K) of our requested 1GB was used, meaning the memory efficiency was about 10%.

CPU efficiency can be determined by comparing TotalCPU (CPU time) with the maximum possible CPU time. The maximum possible CPU time is equal to Alloc (number of allocated CPUs) multiplied by Elapsed (Walltime, actual time passed).

$$ {Efficiency_{cpu} = { TotalCPU \over {Elapsed \times Alloc}}} $$

For the above example, 33 seconds of computation was done where the maximum possible computation time was **96 seconds** (2 CPUs multiplied by 48 seconds), meaning the CPU efficiency was about 35%.

Time Efficiency is simply the Elapsed Time divided by Time Requested.

+ +$$ {Efficiency_{time} = { Elapsed \over Requested}} $$ + + + +
+ +48 seconcds out of 15 minutes requested give a time efficiency of about 5% + +> ## Efficiency Exercise +> +> Calculate for the job shown below, +> +> ``` +> JobID JobName Alloc Elapsed TotalCPU ReqMem MaxRSS State +> --------------- ---------------- ----- ----------- ------------ ------- -------- ---------- +> 37171050 Example-job 8 00:06:03 00:23:04 32G FAILED +> 37171050.batch batch 8 00:06:03 23:03.999 14082672k FAILED +> 37171050.extern extern 8 00:06:03 00:00.001 0 COMPLETED +> ``` +> +> a. CPU efficiency. +> +> b. Memory efficiency. +> +> > ## Solution +> > +> > a. CPU efficiency is `( 23 / ( 8 * 6 ) ) x 100` or around **48%**. +> > +> > b. Memory efficiency is `( 14 / 32 ) x 100` or around **43%**. +> {: .solution} +{: .challenge} + +For convenience, NeSI has provided the command `nn_seff ` to calculate **S**lurm **Eff**iciency (all NeSI commands start with `nn_`, for **N**eSI **N**IWA). + +``` +{{ site.remote.prompt }} nn_seff +``` + +{: .language-bash} + +{% include {{ site.snippets }}/resources/seff.snip %} + +Knowing what we do now about job efficiency, lets submit the previous job again but with more appropriate resources. + +``` +{% include example_scripts/example_job.sl.2 %} +``` +{: .language-bash} + + +``` +{{ site.remote.prompt }} sbatch example_job.sl +``` +{: .language-bash} + +Hopefully we will have better luck with this one! + +### A quick description of Simultaneous Multithreading - SMT (aka Hyperthreading) + +Modern CPU cores have 2 threads of operation that can execute independently of one +another. SMT is the technology that allows the 2 threads within one physical core to present +as multiple logical cores, sometimes referred to as virtual CPUS (vCPUS). + +Note: _Hyperthreading_ is Intel's marketing name for SMT. Both Intel and AMD +CPUs have SMT technology. + +Some types of processes can take advantage of multiple threads, and can gain a +performance boost. Some software is +specifically written as multi-threaded. You will need to check or test if your +code can take advantage of threads (we can help with this). + +However, because each thread shares resources on the physical core, +there can be conflicts for resources such as onboard cache. +This is why not all processes get a performance boost from SMT and in fact can +run slower. These types of jobs should be run without multithreading. There +is a Slurm parameter for this: `--hint=nomultithread` + +SMT is why you are provided 2 CPUs instead of 1 as we do not allow +2 different jobs to share a core. This also explains why you will sometimes +see CPU efficiency above 100%, since CPU efficiency is based on core and not thread. + +For more details please see our [documentation on Hyperthreading +](https://docs.nesi.org.nz/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading/) + +## Measuring the System Load From Currently Running Tasks + +On Mahuika, we allow users to connect directly to compute nodes from the +login node. This is useful to check on a running job and see how it's doing, however, we +only allow you to connect to nodes on which you have running jobs. + +The most reliable way to check current system stats is with `htop`. +`htop` is an interactive process viewer that can be launched from command line. + +### Finding job node + +Before we can check on our job, we need to find out where it is running. +We can do this with the command `squeue --me`, and looking under the 'NODELIST' column. 
+ +``` +{{ site.remote.prompt }} squeue --me +``` + +{: .language-bash} + +{% include {{ site.snippets }}/resources/get-job-node.snip %} + +Now that we know the location of the job (wbn189) we can use `ssh` to run `htop` _on that node_. + +``` +{{ site.remote.prompt }} ssh wbn189 -t htop -u $USER +``` + +{: .language-bash} + +You may get a message: + +``` +ECDSA key fingerprint is SHA256:############################################ +ECDSA key fingerprint is MD5:9d:############################################ +Are you sure you want to continue connecting (yes/no)? +``` + +{: .language-bash} + +If so, type `yes` and Enter + +You may also need to enter your cluster password. + +If you cannot connect, it may be that the job has finished and you have lost permission to `ssh` to that node. + +### Reading Htop + +You may see something like this, + +{% include {{ site.snippets }}/resources/monitor-processes-top.snip %} + +Overview of the most important fields: + +* `PID`: What is the numerical id of each process? +* `USER`: Who started the process? +* `RES`: What is the amount of memory currently being used by a process (in + bytes)? +* `%CPU`: How much of a CPU is each process using? Values higher than 100 + percent indicate that a process is running in parallel. +* `%MEM`: What percent of system memory is a process using? +* `TIME+`: How much CPU time has a process used so far? Processes using 2 CPUs + accumulate time at twice the normal rate. +* `COMMAND`: What command was used to launch a process? + +To exit press q. + +Running this command as is will show us information on tasks running on the login node (where we should not be running resource intensive jobs anyway). + +## Running Test Jobs + +As you may have to run several iterations before you get it right, you should choose your test job carefully. +A test job should not run for more than 15 mins. This could involve using a smaller input, coarser parameters or using a subset of the calculations. +As well as being quick to run, you want your test job to be quick to start (e.g. get through queue quickly), the best way to ensure this is keep the resources requested (memory, CPUs, time) small. +Similar as possible to actual jobs e.g. same functions etc. +Use same workflow. (most issues are caused by small issues, typos, missing files etc, your test job is a jood chance to sort out these issues.). +Make sure outputs are going somewhere you can see them. + +> ## Serial Test +> +> Often a good first test to run, is to execute your job _serially_ e.g. using only 1 CPU. +> This not only saves you time by being fast to start, but serial jobs can often be easier to debug. +> If you confirm your job works in its most simple state you can identify problems caused by +> paralellistaion much more easily. +{: .callout} + +You generally should ask for 20% to 30% more time and memory than you think the job will use. +Testing allows you to become more more precise with your resource requests. We will cover a bit more on running tests in the last lesson. + +> ## Efficient way to run tests jobs using debug QOS (Quality of Service) +> +> Before submitting a large job, first submit one as a test to make +> sure everything works as expected. Often, users discover typos in their submit +> scripts, incorrect module names or possibly an incorrect pathname after their job +> has queued for many hours. Be aware that your job is not fully scanned for +> correctness when you submit the job. 
While you may get an immediate error if your +> SBATCH directives are malformed, it is not until the job starts to run that the +> interpreter starts to process the batch script. +> +> NeSI has an easy way for you to test your job submission. One can employ the debug +> QOS to get a short, high priority test job. Debug jobs have to run within 15 +> minutes and cannot use more that 2 nodes. To use debug QOS, add or change the +> following in your batch submit script +> +>``` +>#SBATCH --qos=debug +>#SBATCH --time=15:00 +> ``` +> +>{: .language-bash} +> +> Adding these SBATCH directives will provide your job with the highest priority +> possible, meaning it should start to run within a few minutes, provided +> your resource request is not too large. +{: .callout} + +## Initial Resource Requirements + +As we have just discussed, the best and most reliable method of determining resource requirements is from testing, +but before we run our first test there are a couple of things you can do to start yourself off in the right area. + +### Read the Documentation + +NeSI maintains documentation that does have some guidance on using resources for some software +However, as you noticed in the Modules lessons, we have a lot of software. So it is also advised to search +the web for others that may have written up guidance for getting the most out of your specific software. + +### Ask Other Users + +If you know someone who has used the software before, they may be able to give you a ballpark figure. + + + +> ## Next Steps +> +> You can use this knowledge to set up the +> next job with a closer estimate of its load on the system. +> A good general rule +> is to ask the scheduler for **30%** more time and memory than you expect the +> job to need. +{: .callout} + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/scaling.md b/docs/Scientific_Computing/Training/Intro_HPC/scaling.md new file mode 100644 index 000000000..a76c6b5ff --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/scaling.md @@ -0,0 +1,60 @@ +--- +title: "Scaling" +teaching: 10 +exercises: 35 +questions: +- "How do we go from running a job on a small number of CPUs to a larger one." +objectives: +- "Understand scaling procedure." +keypoints: +- Start small. +- Test one thing at a time (unit tests). +- Record everything. +--- + +The aim of these tests will be to establish how a jobs requirements change with size (CPUs, inputs) and ultimately figure out the best way to run your jobs. +Unfortunately we cannot assume speedup will be linear (e.g. double CPUs won't usually half runtime, doubling the size of your input data won't necessarily double runtime) therefore more testing is required. This is called *scaling testing*. + +In order to establish an understanding of the scaling properties we may have to repeat this test several times, giving more resources each iteration. + +## Scaling Behavior + +### Amdahl's Law + +Most computational tasks will have a certain amount of work that must be computed serially. + +![Larger fractions of parallel code will have closer to linear scaling performance.](../fig/AmdahlsLaw2.svg) + +Eventually your performance gains will plateau. + +The fraction of the task that can be run in parallel determines the point of this plateau. +Code that has no serial components is said to be "embarrassingly parallel". 
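
For reference, Amdahl's law is usually written as follows, where `p` is the fraction of the work that can be run in parallel and `N` is the number of CPUs (the symbols here are just the conventional notation, not values defined elsewhere in this lesson):

$$ {Speedup = { 1 \over {(1 - p) + {p \over N}}}} $$

So even if 90% of the work can be parallelised (`p = 0.9`), the speedup can never exceed 10x, no matter how many CPUs are added.
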
+ +It is worth noting that Amdahl's law assumes all other elements of scaling are happening with 100% efficient, in reality there are additional computational and communication overheads. + +> ## Scaling Exercise +> +> 1. Find your name in the [spreadsheet]({{ site.exercise }}) and modify your `example_job.sl` to request +> "x" `--cpus-per-task`. +> For example `#SBATCH --cpus-per-task 10`. +> 2. Estimate memory requirement based on our previous runs and the cpus requested, memory +> is specified with the `--mem ` flag, it does not accept decimal values, however you may +> specify a unit (`K`|`M`|`G`), if no unit is specified it is assumed to be `M`. +> For example `#SBATCH --mem 1200`. +> 3. Now submit your job, we will include an extra argument `--acctg-freq 1`. +> By default SLURM records job data every 30 seconds. +> This means any job running for less than 30 +> seconds will not have it's memory use recorded. +> Submit the job with `sbatch --acctg-freq 1 example_job.sl`. +> 4. Watch the job with `squeue --me` or `watch squeue --me`. +> 5. On completion of job, use `nn_seff `. +> 6. Record the jobs "Elapsed", "TotalCPU", and "Memory" values in the spreadsheet. (Hint: They are the first +> numbers after the percentage efficiency in output of `nn_seff`). Make sure you have entered the values in the correct format and there is a tick next to each entry. ![Correctly entered data in spreadsheet.](../fig/correct-spreadsheet-entry.png) +> +> > ## Solution +> > +> > [spreadsheet]({{ site.exercise }}) +> {: .solution} +{: .challenge} + +{% include links.md %} diff --git a/docs/Scientific_Computing/Training/Intro_HPC/scheduler.md b/docs/Scientific_Computing/Training/Intro_HPC/scheduler.md new file mode 100644 index 000000000..ba3980ef4 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/scheduler.md @@ -0,0 +1,338 @@ +--- +title: "Scheduler Fundamentals" +teaching: 15 +exercises: 10 +questions: +- "What is a scheduler and why does a cluster need one?" +- "How do I launch a program to run on a compute node in the cluster?" +- "How do I capture the output of a program that is run on a node in the + cluster?" +objectives: +- "Run a simple script on the login node, and through the scheduler." +- "Use the batch system command line tools to monitor the execution of your + job." +- "Inspect the output and error files of your jobs." +- "Find the right place to put large datasets on the cluster." +keypoints: +- "The scheduler handles how compute resources are shared between users." +- "A job is just a shell script." +- "Request _slightly_ more resources than you will need." +--- + +## Job Scheduler + +An HPC system might have thousands of nodes and thousands of users. How do we +decide who gets what and when? How do we ensure that a task is run with the +resources it needs? This job is handled by a special piece of software called +the _scheduler_. On an HPC system, the scheduler manages which jobs run where +and when. + +The following illustration compares these tasks of a job scheduler to a waiter +in a restaurant. If you can relate to an instance where you had to wait for a +while in a queue to get in to a popular restaurant, then you may now understand +why sometimes your job do not start instantly as in your laptop. + +{% include figure.html max-width="75%" caption="" + file="/fig/restaurant_queue_manager.svg" + alt="Compare a job scheduler to a waiter in a restaurant" %} + +The scheduler used in this lesson is {{ site.sched.name }}. 
Although +{{ site.sched.name }} is not used everywhere, running jobs is quite similar +regardless of what software is being used. The exact syntax might change, but +the concepts remain the same. + +## Interactive vs Batch + +So far, whenever we have entered a command into our terminals, we have received the response immediately in the same terminal, this is said to be an _interactive session_. + +[//]: # TODO ??Diagram?? + +This is all well for doing small tasks, but what if we want to do several things one after another without without waiting in-between? Or what if we want to repeat a series of command again later? + +This is where _batch processing_ becomes useful, this is where instead of entering commands directly to the terminal we write them down in a text file or _script_. Then, the script can be _executed_ by calling it with `bash`. + +[//]: # TODO ??Diagram?? + +Lets try this now, create and open a new file in your current directory called `example_job.sh`. +(If you prefer another text editor than nano, feel free to use that), we will put to use some things we have learnt so far. + +``` +{{ site.remote.prompt }} nano example_job.sh +``` +{: .language-bash} + + +``` +{% include example_scripts/example_job.sh %} +``` +{: .language-bash} + +> ## shebang +> +> _shebang_ or _shabang_, also referred to as _hashbang_ is the character sequence consisting of the number sign (aka: hash) and exclamation mark (aka: bang): `#!` at the beginning of a script. It is used to describe the _interpreter_ that will be used to run the script. In this case we will be using the Bash Shell, which can be found at the path `/bin/bash`. The job scheduler will give you an error if your script does not start with a shebang. +> +{: .callout} + +We can now run this script using +``` +{{ site.remote.prompt }} bash example_job.sh +``` +{: .language-bash} + +``` +Loading required package: foreach +Loading required package: iterators +Loading required package: parallel +[1] "Using 1 cpus to sum [ 2.000000e+04 x 2.000000e+04 ] matrix." +[1] "0% done..." +... +[1] "99% done..." +[1] "100% done..." +[1] "Sum is '10403.632886'." +Done! +``` +{: .output} + +You will get the output printed to your terminal as if you had just run those commands one after another. + +> ## Cancelling Commands +> +> You can kill a currently running task by pressing the keys ctrl + c. +> If you just want your terminal back, but want the task to continue running you can 'background' it by pressing ctrl + v. +> Note, a backgrounded task is still attached to your terminal session, and will be killed when you close the terminal (if you need to keep running a task after you log out, have a look at [tmux](https://docs.nesi.org.nz/Getting_Started/Cheat_Sheets/tmux-Reference_sheet/)). +{: .callout} + +## Scheduled Batch Job + +Up until now the scheduler has not been involved, our scripts were run directly on the login node (or Jupyter node). + +First lets rename our batch script script to clarify that we intend to run it though the scheduler. + +``` +mv example_job.sh example_job.sl +``` +{: .output} + +> ## File Extensions +> +> A files extension in this case does not in any way affect how a script is read, +> it is just another part of the name used to remind users what type of file it is. +> Some common conventions: +> `.sh`: **Sh**ell Script. +> `.sl`: **Sl**urm Script, a script that includes a *slurm header* and is intended to be submitted to the cluster. +> `.out`: Commonly used to indicate the file contains the std**out** of some process. 
+> `.err`: Same as `.out` but for std**err**. +{: .callout} + +In order for the job scheduler to do it's job we need to provide a bit more information about our script. +This is done by specifying _slurm parameters_ in our batch script. Each of these parameters must be preceded by the special token `#SBATCH` and placed _after_ the _shebang_, but before the content of the rest of your script. + +{% include figure.html max-width="100%" caption="" + file="/fig/parts_slurm_script.svg" + alt="slurm script is a regular bash script with a slurm header after the shebang" %} + +These parameters tell SLURM things around how the script should be run, like memory, cores and time required. + +All the parameters available can be found by checking `man sbatch` or on the online [slurm documentation](https://slurm.schedmd.com/sbatch.html). + +[//]: # TODO ??Vet table + +{% include {{ site.snippets }}/scheduler/option-flags-list.snip %} +> ## Comments +> +> Comments in UNIX shell scripts (denoted by `#`) are ignored by the bash interpreter. +> Why is it that we start our slurm parameters with `#` if it is going to be ignored? +> > ## Solution +> > Commented lines are ignored by the bash interpreter, but they are _not_ ignored by slurm. +> > The `{{ site.sched.comment }}` parameters are read by slurm when we _submit_ the job. When the job starts, +> > the bash interpreter will ignore all lines starting with `#`. +> > +> > This is similar to the _shebang_ mentioned earlier, +> > when you run your script, the system looks at the `#!`, then uses the program at the subsequent +> > path to interpret the script, in our case `/bin/bash` (the program 'bash' found in the 'bin' directory). +> {: .solution} +{: .challenge} + +Note that just *requesting* these resources does not make your job run faster, +nor does it necessarily mean that you will consume all of these resources. It +only means that these are made available to you. Your job may end up using less +memory, or less time, or fewer tasks or nodes, than you have requested, and it +will still run. + +It's best if your requests accurately reflect your job's requirements. We'll +talk more about how to make sure that you're using resources effectively in a +later episode of this lesson. + +Now, rather than running our script with `bash` we _submit_ it to the scheduler using the command `sbatch` (**s**lurm **batch**). + +``` +{{ site.remote.prompt }} {{ site.sched.submit.name }} {% if site.sched.submit.options != '' %}{{ site.sched.submit.options }} {% endif %}example_job.sl +``` +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/basic-job-script.snip %} + +And that's all we need to do to submit a job. Our work is done -- now the +scheduler takes over and tries to run the job for us. + +## Checking on Running/Pending Jobs + +While the job is waiting +to run, it goes into a list of jobs called the *queue*. To check on our job's +status, we check the queue using the command +`{{ site.sched.status }}` (**s**lurm **queue**). We will need to filter to see only our jobs, by including either the flag `--user ` or `--me`. + +``` +{{ site.remote.prompt }} {{ site.sched.status }} {{ site.sched.flag.me }} +``` +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/basic-job-status.snip %} + +We can see many details about our job, most importantly is it's _STATE_, the most common states you might see are.. + +- `PENDING`: The job is waiting in the queue, likely waiting for resources to free up or higher prioroty jobs to run. 
+because other jobs have priority. +- `RUNNING`: The job has been sent to a compute node and it is processing our commands. +- `COMPLETED`: Your commands completed successfully as far as Slurm can tell (e.g. exit 0). +- `FAILED`: (e.g. exit not 0). +- `CANCELLED`: +- `TIMEOUT`: Your job has running for longer than your `--time` and was killed. +- `OUT_OF_MEMORY`: Your job tried to use more memory that it is allocated (`--mem`) and was killed. + +## Cancelling Jobs + +Sometimes we'll make a mistake and need to cancel a job. This can be done with +the `{{ site.sched.del }}` command. + + + + + +In order to cancel the job, we will first need its 'JobId', this can be found in the output of '{{ site.sched.status }} {{ site.sched.flag.me }}'. + +``` +{{ site.remote.prompt }} {{site.sched.del }} 231964 +``` +{: .language-bash} + +A clean return of your command prompt indicates that the request to cancel the job was +successful. + +Now checking `{{ site.sched.status }}` again, the job should be gone. + +``` +{{ site.remote.prompt }} {{ site.sched.status }} {{ site.sched.flag.me }} +``` +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/terminate-job-cancel.snip %} + +(If it isn't wait a few seconds and try again). + +{% include {{ site.snippets }}/scheduler/terminate-multiple-jobs.snip %} + +## Checking Finished Jobs + +There is another command `{{ site.sched.hist }}` (**s**lurm **acc**oun**t**) that includes jobs that have finished. +By default `{{ site.sched.hist }}` only includes jobs submitted by you, so no need to include additional commands at this point. + +``` +{{ site.remote.prompt }} {{ site.sched.hist }} +``` +{: .language-bash} + +{% include {{ site.snippets }}/scheduler/basic-job-status-sacct.snip %} + +Note that despite the fact that we have only run one job, there are three lines shown, this because each _job step_ is also shown. +This can be suppressed using the flag `-X`. + +> ## Where's the Output? +> +> On the login node, when we ran the bash script, the output was printed to the terminal. +> Slurm batch job output is typically redirected to a file, by default this will be a file named `slurm-.out` in the directory where the job was submitted, this can be changed with the slurm parameter `--output`. +{: .discussion} +> +> > ## Hint +> > +> > You can use the _manual pages_ for {{ site.sched.name }} utilities to find +> > more about their capabilities. On the command line, these are accessed +> > through the `man` utility: run `man `. You can find the same +> > information online by searching > "man ". +> > +> > ``` +> > {{ site.remote.prompt }} man {{ site.sched.submit.name }} +> > ``` +> > {: .language-bash} +> {: .solution} +{: .challenge} + +{% include {{ site.snippets }}/scheduler/print-sched-variables.snip %} + +[//]: # TODO ??Sacct more info on checking jobs. Checking log files during run. + + + + +{% include links.md %} + +[fshs]: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard +[hisat]: https://ccb.jhu.edu/software/hisat2/index.shtml diff --git a/docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md b/docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md new file mode 100644 index 000000000..d21addf19 --- /dev/null +++ b/docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md @@ -0,0 +1,93 @@ +--- +title: "Working on a remote HPC system" +# teaching: 10 +teaching: 20 +exercises: 0 +questions: +- "What is an HPC system?" +- "How does an HPC system work?" +- "How do I log in to a remote HPC system?" 
+objectives: +- "Connect to a remote HPC system." +- "Understand the general HPC system architecture." +keypoints: +- "An HPC system is a set of networked machines." +- "HPC systems typically provide login nodes and a set of compute nodes." +- "The resources found on independent (compute) nodes can vary in volume and + type (amount of RAM, processor architecture, availability of network mounted + filesystems, etc.)." +- "Files saved on shared storage are available on all nodes." +- "The login node is a shared machine: be considerate of other users." +--- + +## What Is an HPC System? + +The words "cloud", "cluster", and the phrase "high-performance computing" or +"HPC" are used a lot in different contexts and with various related meanings. +So what do they mean? And more importantly, how do we use them in our work? + +A *Remote* computer is one you have no access to physically and must connect via a network (as opposed to *Local*) + +*Cloud* refers to remote computing resources +that are provisioned to users on demand or as needed. + +*HPC*, *High Performance Computer*, *High Performance Computing* or *Supercomputer* are all general terms for a large or powerful computing resource. + +*Cluster* is a more specific term describing a type of supercomputer comprised of multiple smaller computers (nodes) working together. Almost all supercomputers are clusters. + +![NeSI-HPC-Facility](../fig/NeSI-HPC-Facility.jpg) + +## Access + +You will connect to a cluster over the internet either with a web client (Jupyter) or with SSH (**S**ecure **Sh**ell). Your main interface with the cluster will be using command line. + +## Nodes + +Individual computers that compose a cluster are typically called *nodes*. +On a cluster, there are different types of nodes for different +types of tasks. The node where you are now will be different depending on +how you accessed the cluster. + +Most of you (using JupyterHub) will be on an interactive *compute node*. +This is because Jupyter sessions are launched as a job. If you are using SSH to connect to the cluster, you will be on a +*login node*. Both JupyterHub and SSH login nodes serve as an access point to the cluster. + + + +The real work on a cluster gets done by the *compute nodes*. +Compute nodes come in many shapes and sizes, but generally are dedicated to long +or hard tasks that require a lot of computational resources. + +## What's in a Node? + +A node is similar in makeup to a regular desktop or laptop, composed of *CPUs* (sometimes also called *processors* or *cores*), *memory* +(or *RAM*), and *disk* space. Although, where your laptop might have 8 CPUs and 16GB of memory, a compute node will have hundreds of cores and GB of memory. + +* **CPUs** are a computer's tool for running programs and calculations. + +* **Memory** is for short term storage, containing the information currently being operated on by the CPUs. + +* **Disk** is for long term storage, data stored here is permanent, i.e. still there even if the computer has been restarted. +It is common for nodes to connect to a shared, remote disk. + +{% include figure.html url="" max-width="40%" + file="/fig/clusterDiagram.png" + alt="Node anatomy" caption="" %} + +> ## Differences Between Nodes +> +> Many HPC clusters have a variety of nodes optimized for particular workloads. +> Some nodes may have larger amount of memory, or specialized resources such as +> Graphical Processing Units (GPUs). 
+{: .callout} + +> ## Dedicated Transfer Nodes +> +> If you want to transfer larger amounts of data to or from the cluster, NeSI +> offers dedicated transfer nodes using the Globus service. More information on using Globus for large data transfer to and from +> the cluster can be found here: [Globus Transfer Service](https://docs.nesi.org.nz/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide/) +{: .callout} + +{% include links.md %} +[fshs]: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard diff --git a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md b/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md deleted file mode 100644 index 4f4a69389..000000000 --- a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -created_at: '2022-07-26T21:43:02Z' -tags: [] -title: Introduction to computing on the NeSI HPC (YouTube Recordings) -vote_count: 0 -vote_sum: 0 -zendesk_article_id: 5209502688655 -zendesk_section_id: 5203123172239 ---- - -- [Introduction to computing on the NeSI HPC (Part 1)](https://www.youtube.com/watch?v=RrFAb8Atsc0&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw) -- [Introduction to computing on the NeSI HPC platform (Part 2)](https://www.youtube.com/watch?v=8TNcFZvXSao&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=2) -- [Introduction to computing on the NeSI HPC (Part 3)](https://www.youtube.com/watch?v=0Vw4b7yY8o8&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=3) -- [Introduction to computing on the NeSI HPC (Part 4)](https://www.youtube.com/watch?v=kXf6RkRQ6tU&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=4) From 525836c1c7d2232424ee293b12e8c370d8303bca Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Thu, 4 Dec 2025 19:34:55 +1300 Subject: [PATCH 2/2] moving to fin inside new struct --- .../Intro_HPC/14-environment-variables.md | 0 .../Training/Intro_HPC/Bash_shell.md} | 1 + .../Training/Intro_HPC/Filesystem_basics.md} | 0 .../Training/Intro_HPC/Modules.md} | 0 .../Training/Intro_HPC/Parallel.md} | 0 .../Training/Intro_HPC/Resources.md} | 0 .../Training/Intro_HPC/Scaling.md} | 0 .../Training/Intro_HPC/Scheduler.md} | 0 .../Intro_HPC/What_Is_a_HPC_cluster.md} | 22 +++---------------- .../Training/Intro_HPC/writing_good_code.md} | 0 .../Training/Intro_HPC/035-filedir-cont.md | 5 ----- 11 files changed, 4 insertions(+), 24 deletions(-) rename docs/{Scientific_Computing => Getting_Started/Getting_Help}/Training/Intro_HPC/14-environment-variables.md (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/bash_shell.md => Getting_Started/Getting_Help/Training/Intro_HPC/Bash_shell.md} (99%) rename docs/{Scientific_Computing/Training/Intro_HPC/filesystem_basics.md => Getting_Started/Getting_Help/Training/Intro_HPC/Filesystem_basics.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/modules.md => Getting_Started/Getting_Help/Training/Intro_HPC/Modules.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/parallel.md => Getting_Started/Getting_Help/Training/Intro_HPC/Parallel.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/resources.md => Getting_Started/Getting_Help/Training/Intro_HPC/Resources.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/scaling.md => Getting_Started/Getting_Help/Training/Intro_HPC/Scaling.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/scheduler.md => 
Getting_Started/Getting_Help/Training/Intro_HPC/Scheduler.md} (100%) rename docs/{Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md => Getting_Started/Getting_Help/Training/Intro_HPC/What_Is_a_HPC_cluster.md} (83%) rename docs/{Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md => Getting_Started/Getting_Help/Training/Intro_HPC/writing_good_code.md} (100%) delete mode 100644 docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/14-environment-variables.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/14-environment-variables.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/14-environment-variables.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Bash_shell.md similarity index 99% rename from docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Bash_shell.md index 501e218be..431641f7a 100644 --- a/docs/Scientific_Computing/Training/Intro_HPC/bash_shell.md +++ b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Bash_shell.md @@ -28,6 +28,7 @@ keypoints: - "Directory names in a path are separated with `/` on Unix, but `\\` on Windows." - "`..` means 'the directory above the current one'; `.` on its own means 'the current directory'." --- + > ## The Unix Shell > > This episode will be a quick introduction to the Unix shell, only the bare minimum required to use the cluster. diff --git a/docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Filesystem_basics.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/filesystem_basics.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Filesystem_basics.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/modules.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Modules.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/modules.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Modules.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/parallel.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Parallel.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/parallel.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Parallel.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/resources.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Resources.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/resources.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Resources.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/scaling.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Scaling.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/scaling.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Scaling.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/scheduler.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/Scheduler.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/scheduler.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/Scheduler.md diff --git 
a/docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/What_Is_a_HPC_cluster.md similarity index 83% rename from docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/What_Is_a_HPC_cluster.md index d21addf19..442f1af71 100644 --- a/docs/Scientific_Computing/Training/Intro_HPC/what_is_a_cluster.md +++ b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/What_Is_a_HPC_cluster.md @@ -1,23 +1,7 @@ --- -title: "Working on a remote HPC system" -# teaching: 10 -teaching: 20 -exercises: 0 -questions: -- "What is an HPC system?" -- "How does an HPC system work?" -- "How do I log in to a remote HPC system?" -objectives: -- "Connect to a remote HPC system." -- "Understand the general HPC system architecture." -keypoints: -- "An HPC system is a set of networked machines." -- "HPC systems typically provide login nodes and a set of compute nodes." -- "The resources found on independent (compute) nodes can vary in volume and - type (amount of RAM, processor architecture, availability of network mounted - filesystems, etc.)." -- "Files saved on shared storage are available on all nodes." -- "The login node is a shared machine: be considerate of other users." +description: Introduction to basic terminology and principles of High Performance Computing +tags: + - training --- ## What Is an HPC System? diff --git a/docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md b/docs/Getting_Started/Getting_Help/Training/Intro_HPC/writing_good_code.md similarity index 100% rename from docs/Scientific_Computing/Training/Intro_HPC/095-writing-good-code.md rename to docs/Getting_Started/Getting_Help/Training/Intro_HPC/writing_good_code.md diff --git a/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md b/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md deleted file mode 100644 index ff8933246..000000000 --- a/docs/Scientific_Computing/Training/Intro_HPC/035-filedir-cont.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Navigating Files and Directories (Continued)" -layout: break -break: 50 ---- \ No newline at end of file