diff --git a/.readthedocs.yaml b/.readthedocs.yaml index f530bed16..7c79af07e 100644 --- a/.readthedocs.yaml +++ b/.readthedocs.yaml @@ -7,9 +7,9 @@ version: 2 # Set the OS, Python version and other tools you might need build: - os: ubuntu-22.04 + os: ubuntu-24.04 tools: - python: "3.11" + python: "3.12" # Build documentation in the "source/" directory with Sphinx sphinx: diff --git a/HtmlDump/file_0001.html b/HtmlDump/file_0001.html deleted file mode 100644 index df9eb386a..000000000 --- a/HtmlDump/file_0001.html +++ /dev/null @@ -1 +0,0 @@ -

© FWO

diff --git a/HtmlDump/file_0002.html b/HtmlDump/file_0002.html deleted file mode 100644 index 40fa745bb..000000000 --- a/HtmlDump/file_0002.html +++ /dev/null @@ -1,2 +0,0 @@ -

The VSC infrastructure consists of two layers. The central Tier-1 infrastructure is designed to run large parallel jobs. It also contains a small accelerator testbed to experiment with upcoming technologies. The Tier-2 layer runs the smaller jobs; it is spread over a number of sites, is closer to the users and is more strongly embedded in the campus networks. The Tier-2 clusters are also interconnected and integrated with each other.

" - diff --git a/HtmlDump/file_0003.html b/HtmlDump/file_0003.html deleted file mode 100644 index b5fee0610..000000000 --- a/HtmlDump/file_0003.html +++ /dev/null @@ -1,2 +0,0 @@ -

This infrastructure is accessible to all scientific research taking place in Flemish universities and public research institutes. In some cases a small financial contribution is required. Industry can use the infrastructure for a fee to cover the costs associated with this.

" - diff --git a/HtmlDump/file_0004.html b/HtmlDump/file_0004.html deleted file mode 100644 index 1a676dceb..000000000 --- a/HtmlDump/file_0004.html +++ /dev/null @@ -1,2 +0,0 @@ -

What is a supercomputer?

-

A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone. But there are also important differences.

diff --git a/HtmlDump/file_0005.html b/HtmlDump/file_0005.html deleted file mode 100644 index 72da0afb9..000000000 --- a/HtmlDump/file_0005.html +++ /dev/null @@ -1,3 +0,0 @@ -

The VSC in Flanders

-

The VSC is a partnership of five Flemish university associations. The Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain. There is also a local support office in Hasselt.

" - diff --git a/HtmlDump/file_0006.html b/HtmlDump/file_0006.html deleted file mode 100644 index 9a7dd607f..000000000 --- a/HtmlDump/file_0006.html +++ /dev/null @@ -1,2 +0,0 @@ -

Tier-1 infrastructure

-

Central infrastructure for large parallel compute jobs and an experimental accelerator system.

diff --git a/HtmlDump/file_0007.html b/HtmlDump/file_0007.html deleted file mode 100644 index cee00dbd0..000000000 --- a/HtmlDump/file_0007.html +++ /dev/null @@ -1,2 +0,0 @@ -

Tier-2 infrastructure

-

An integrated distributed infrastructure for smaller supercomputing jobs with varying hardware needs.

diff --git a/HtmlDump/file_0008.html b/HtmlDump/file_0008.html deleted file mode 100644 index b121ac6ae..000000000 --- a/HtmlDump/file_0008.html +++ /dev/null @@ -1,2 +0,0 @@ -

Getting access

-

Who can access, and how do I get my account?

diff --git a/HtmlDump/file_0009.html b/HtmlDump/file_0009.html deleted file mode 100644 index dc6e2ae8c..000000000 --- a/HtmlDump/file_0009.html +++ /dev/null @@ -1,2 +0,0 @@ -

Tier-1 starting grant

-

A programme to get a free allocation on the Tier-1 supercomputer to perform the necessary tests to prepare a regular Tier-1 project application.

diff --git a/HtmlDump/file_0010.html b/HtmlDump/file_0010.html deleted file mode 100644 index 3e7a1bbcd..000000000 --- a/HtmlDump/file_0010.html +++ /dev/null @@ -1,2 +0,0 @@ -

Project access Tier-1

-

A programme to get a compute time allocation on the Tier-1 supercomputers based on a scientific project that is evaluated.

diff --git a/HtmlDump/file_0011.html b/HtmlDump/file_0011.html deleted file mode 100644 index a93525ecf..000000000 --- a/HtmlDump/file_0011.html +++ /dev/null @@ -1,2 +0,0 @@ -

Buying compute time

-

Without an awarded scientific project, it is possible to buy compute time. We also offer a free try-out so you can test if our infrastructure is suitable for your needs.

diff --git a/HtmlDump/file_0012.html b/HtmlDump/file_0012.html deleted file mode 100644 index 2baeead54..000000000 --- a/HtmlDump/file_0012.html +++ /dev/null @@ -1 +0,0 @@ -

Need help? Have more questions?

diff --git a/HtmlDump/file_0013.html b/HtmlDump/file_0013.html deleted file mode 100644 index 9637e5fdd..000000000 --- a/HtmlDump/file_0013.html +++ /dev/null @@ -1,2 +0,0 @@ -

User portal

-

On these pages, you will find everything that is useful for users of our infrastructure: the user documentation, server status, upcoming training programs and links to other useful information on the web.

diff --git a/HtmlDump/file_0015.html b/HtmlDump/file_0015.html deleted file mode 100644 index e141db5f3..000000000 --- a/HtmlDump/file_0015.html +++ /dev/null @@ -1 +0,0 @@ -

Below we give information about current downtime (if applicable) and planned maintenance of the various VSC clusters.

diff --git a/HtmlDump/file_0023.html b/HtmlDump/file_0023.html deleted file mode 100644 index 28e51fb72..000000000 --- a/HtmlDump/file_0023.html +++ /dev/null @@ -1,13 +0,0 @@ -

There is no clear agreement on the exact definition of the term ‘supercomputer’. Some say a supercomputer is a computer with at least 1% of the computing power of the fastest computer in the world. But according to this definition, there are currently only a few hundred supercomputers in the world. The TOP500 list is a list of the supposedly 500 fastest computers in the world, updated twice a year.

One could take 1‰ of the performance of the fastest computer as the criterion, but that is equally arbitrary. Stating that a supercomputer should perform at least X trillion computations per second is not a useful definition either: because of the fast evolution of the technology, such a definition would be outdated in a matter of years. The first smartphone of a well-known manufacturer launched in 2007 had about the same computing power as, and more memory than, the computer used to predict the weather in Europe 30 years earlier. -

So what is considered a ‘supercomputer’ is very time-bound, at least in terms of absolute compute power. So let us just agree that a supercomputer is a computer that is hundreds or thousands of times faster than your smartphone or laptop. -

But is a supercomputer so different from your laptop or smartphone? Yes and no. Since roughly 1975 the key word in supercomputing has been parallelism. But this also applies to your PC or smartphone. PC processor manufacturers started to experiment with simple forms of parallelism at the end of the nineties. A few years later the first processors appeared with multiple cores that could perform calculations independently from each other. A laptop mostly has 2 or 4 cores, and modern smartphones have 2, 4 or in some rare cases 8 cores, although it must be added that these are a little slower than the ones in a typical laptop. -

Around 1975 manufacturers started to experiment with vector processors. These processors perform the same operation on a set of numbers simultaneously. Shortly thereafter, supercomputers with multiple processors working independently from each other appeared on the market. Similar technologies are nowadays used in the processor chips of laptops and smartphones. In the eighties, supercomputer designers started to experiment with another kind of parallelism. Several rather simple processors - sometimes just standard PC processors like the venerable Intel 80386 - were linked together with fast networks and collaborated to solve large problems. These computers were cheaper to develop and much simpler to build, but required frequent changes to the software. -

In modern supercomputers, parallelism is pushed to extremes. In most supercomputers, all forms of parallelism mentioned above are combined at an unprecedented scale and can take on extreme forms. All modern supercomputers rely on some form of vector computing or related technologies and consist of building blocks - nodes - that each unite tens of cores and are interconnected through a fast network into a larger whole. Hence the term ‘compute cluster’ is often used. -

Supercomputers must also be able to read and write data at a very high speed. Here too the key word is parallelism. Many supercomputers have several network connections to the outside world. Their permanent storage system consists of hundreds or even thousands of hard disks or SSDs linked together into one extremely large and extremely fast storage system. This type of technology has probably not significantly influenced the development of laptops, as it would not be very practical to carry around a laptop with 4 hard drives. Yet this technology does appear to some extent in the modern, fast SSD drives in some laptops and smartphones. The faster ones use several memory chips in parallel to increase their performance, and it is a standard technology in almost any server storing data. -

As we have already indicated to some extent in the text above, a supercomputer is more than just hardware. It also needs properly written software. The Java program you wrote during your student years will not run 10,000 times faster just because you run it on a supercomputer. On the contrary, there is a fair chance that it won't run at all, or will run slower than on your PC. Most supercomputers - and all supercomputers at the VSC - use a variant of the Linux operating system enriched with additional software to combine all compute nodes into one powerful supercomputer. Due to the high price of such a computer, you're rarely the only user but will rather share the infrastructure with others. -

So you may have to wait a little before your program runs. Furthermore, your monitor is not directly connected to the supercomputer, so proper software is also required to work on it remotely. Moreover, your application software has to be adapted to run well on a supercomputer. Without these changes, your program will not run much faster than on a regular PC. You may of course still run hundreds or thousands of copies simultaneously, for example when you wish to explore a parameter space. This is called ‘capacity computing’. -

If you wish to solve truly large problems within a reasonable timeframe, you will have to adapt your application software to exploit every form of parallelism within a modern supercomputer and use several hundreds, or even thousands, of compute cores simultaneously to solve one large problem. This is called ‘capability computing’. Of course, the problem you wish to solve has to be large enough for this approach to make sense. Every problem has an intrinsic limit to the speedup you can achieve on a supercomputer. The larger the problem, the higher the speedup you can achieve. -

This also implies that a software package that was cutting edge in your research area 20 years ago is unlikely to still be so, because it is not properly adapted to modern supercomputers, while newer applications exploit supercomputers much more efficiently and consequently generate faster, more accurate results. -

To some extent this also applies to your PC. Here again you are dealing with software that exploits the parallelism of a modern PC quite well, or with software that doesn't. As a ‘computational scientist’ or supercomputer user you constantly have to be open to new developments in this area. Fortunately, in most application domains a lot of efficient software already exists which succeeds in using all the parallelism that can be found in modern supercomputers. -

" - diff --git a/HtmlDump/file_0025.html b/HtmlDump/file_0025.html deleted file mode 100644 index 854bafad1..000000000 --- a/HtmlDump/file_0025.html +++ /dev/null @@ -1 +0,0 @@ -

The successor of Muk is expected to be installed in the spring of 2016.

There is also a small test cluster for experiments with accelerators (GPU and Intel Xeon Phi) with a view to using this technology in future VSC clusters.

The Tier-1 cluster Muk

The Tier-1 cluster Muk has 528 computing nodes, each with two 8-core Intel Xeon processors from the Sandy Bridge generation (E5-2670, 2.6 GHz). Each node features 64 GiB RAM, for a total memory capacity of more than 33 TiB. The computing nodes are connected by an FDR InfiniBand interconnect with a fat tree topology. This network has a high bandwidth (more than 6.5 GB/s per direction per link) and a low latency. The storage is provided by a disk system with a total disk capacity of 400 TB and a peak bandwidth of 9.5 GB/s.

The cluster achieves a peak performance of more than 175 Tflops and a Linpack performance of 152.3 Tflops. With this result, the cluster appeared in 5 consecutive editions of the Top500 list of the fastest supercomputers in the world:

List       06/2012   11/2012   06/2013   11/2013   06/2014
Position   118       163       239       306       430

In November 2014 the cluster fell just outside the list, but still delivered 99% of the performance of the system in place 500.

Accelerator testbed

In addition to the Tier-1 cluster Muk, the VSC has an experimental GPU / Xeon Phi cluster. Eight nodes in this cluster have two NVIDIA K20X GPUs with the accompanying software stack, and eight nodes are equipped with two Intel Xeon Phi 5110P ("Knights Corner" generation) boards. The nodes are interconnected by means of a QDR InfiniBand network. For practical reasons, these nodes were integrated into the KU Leuven / Hasselt University Tier-2 infrastructure.

Software

Like on all other VSC clusters, the operating system of Muk is a variant of Linux, in this case Scientific Linux, which is in turn based on Red Hat Linux. The system also features a comprehensive stack of software development tools, which includes the GNU and Intel compilers, debuggers and profilers for parallel applications, and different versions of OpenMPI and Intel MPI.

There is also an extensive set of freely available applications installed on the system. More software can be installed at the request of the user. Users, however, have to take care of the software license when the software is not freely available, and therefore also of the financing of that license.

Detailed overview of the installed software

Access to the Tier-1 system

Academic users can access the Tier-1 cluster Muk through a project application. There are two types of project applications.

To use the GPU / Xeon Phi cluster it is sufficient to contact the HPC coordinator of your institution.

Industrial users and non-Flemish research institutions and not-for-profit organizations can also purchase computing time on the Tier-1 Infrastructure. For this you can contact the Hercules Foundation.

diff --git a/HtmlDump/file_0027.html b/HtmlDump/file_0027.html deleted file mode 100644 index 46e0662e1..000000000 --- a/HtmlDump/file_0027.html +++ /dev/null @@ -1,9 +0,0 @@ -

The VSC does not only rely on the Tier-1 supercomputer to respond to the need for computing capacity. The HPC clusters of the University of Antwerp, VUB, Ghent University and KU Leuven constitute the VSC Tier-2 infrastructure, with a total computing capacity of 416.2 TFlops. Hasselt University invests in the HPC cluster of Leuven. Each cluster has its own specific characteristics and is managed by the university’s dedicated HPC/ICT team. The clusters are interconnected with a 10 Gbps BELNET network, ensuring maximal cross-site access to the different cluster architectures. For instance, a VSC user from Antwerp can easily log in to the infrastructure at Leuven.

Infrastructure

More information

A more detailed description of the complete infrastructure is available in the "Available hardware" section of the user portal.

" - diff --git a/HtmlDump/file_0037.html b/HtmlDump/file_0037.html deleted file mode 100644 index 7438b87eb..000000000 --- a/HtmlDump/file_0037.html +++ /dev/null @@ -1,23 +0,0 @@ -

Computational science has - alongside experiments and theory - become the fully fledged third pillar of science. Supercomputers offer unprecedented opportunities to simulate complex models and as such to test theoretical models against reality. They also make it possible to extract valuable knowledge from massive amounts of data.

-

For many calculations, a laptop or workstation is no longer sufficient. Sometimes dozens or hundreds of CPU cores and hundreds of gigabytes or even terabytes of RAM are necessary to produce an acceptable solution within a reasonable amount of time. -

-

Our offer

-

An overview of our services: -

- -

More information?

-

More information can be found in our training section and user portal. -

" - diff --git a/HtmlDump/file_0041.html b/HtmlDump/file_0041.html deleted file mode 100644 index fba5b8032..000000000 --- a/HtmlDump/file_0041.html +++ /dev/null @@ -1,2 +0,0 @@ -

Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives.

Not only have supercomputers changed scientific research in a fundamental way ...

Supercomputers are indispensable for scientific research and for a modern R&D environment. ‘Computational Science’ is - alongside theory and experiment - the third fully fledged pillar of science. For centuries, scientists used pen and paper to develop new theories based on scientific experiments. They also set up new experiments to verify the predictions derived from these theories (a process often carried out with pen and paper). It goes without saying that this method was slow and cumbersome.

As an astronomer you cannot simply make Jupiter a little bigger to see what effect this larger size would have on our solar system. As a nuclear scientist it would be difficult to deliberately lose control over a nuclear reaction to ascertain the consequences of such a move. (Super)computers can do this and are indeed revolutionizing science.

Complex theoretical models - too advanced for ‘pen and paper’ results - are simulated on computers. The results they deliver are then compared with reality and used for prediction purposes. Supercomputers have the ability to handle huge amounts of data, thus enabling experiments that would not be achievable in any other way. Large radio telescopes or the LHC particle accelerator at CERN could not function without supercomputers processing mountains of data.

… but also industry and our society

But supercomputers are not just an expensive toy for researchers at universities. Numerical simulation also opens up new possibilities in industrial R&D. For example in the search for new medicinal drugs, new materials or even the development of a new car model. Biotechnology also requires the large data processing capacity of a supercomputer. The quest for clean energy, a better understanding of the weather and climate evolution, or new technologies in health care all require a powerful supercomputer.

Supercomputers have a huge impact on our everyday lives. Have you ever wondered why the showroom of your favourite car brand contains many more car types than 20 years ago? Or how each year a new and faster smartphone model is launched on the market? We owe all of this to supercomputers.

" - diff --git a/HtmlDump/file_0045.html b/HtmlDump/file_0045.html deleted file mode 100644 index ebfdce9ae..000000000 --- a/HtmlDump/file_0045.html +++ /dev/null @@ -1,45 +0,0 @@ -

In the past few decades supercomputers have not only revolutionized scientific research but have also been used increasingly by businesses all over the world to accelerate design, production processes and the development of innovative services.

Situation

Modern microelectronics has created many new opportunities. Today powerful supercomputers enable us to collect and process huge amounts of data. Complex systems can be studied through numerical simulation without having to build a prototype or set up a scaled experiment beforehand. All this leads to a quicker and cheaper design of new products, cost-efficient processes and innovative services. To support this development in Flanders, the Flemish Government founded the Flemish Supercomputer Center (VSC) in late 2007 as a partnership between the government and the Flemish university associations. The accumulated expertise and infrastructure are assets we want to make available to industry. -

Technology Offer

A collaboration with the VSC offers your company many benefits. -

About the VSC

The VSC was launched in late 2007 as a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. Our team also collaborates with many research groups at various universities and helps them and their industrial partners with all aspects of infrastructure usage. -

Besides a competitive infrastructure, the VSC team also offers full assistance with the introduction of High Performance Computing within your company. -

" - diff --git a/HtmlDump/file_0049.html b/HtmlDump/file_0049.html deleted file mode 100644 index 7935f5093..000000000 --- a/HtmlDump/file_0049.html +++ /dev/null @@ -1 +0,0 @@ -

The Flemish Supercomputer Centre (VSC) is a virtual centre making supercomputer infrastructure available for both the academic and industrial world. This centre is managed by the Research Foundation - Flanders (FWO) in partnership with the five Flemish university associations.

diff --git a/HtmlDump/file_0051.html b/HtmlDump/file_0051.html deleted file mode 100644 index 9a9b0a5af..000000000 --- a/HtmlDump/file_0051.html +++ /dev/null @@ -1,3 +0,0 @@ -

HPC for academics

-

With HPC-technology you can refine your research and gain new insights to take your research to new heights.


" - diff --git a/HtmlDump/file_0057.html b/HtmlDump/file_0057.html deleted file mode 100644 index 9ce3ede44..000000000 --- a/HtmlDump/file_0057.html +++ /dev/null @@ -1 +0,0 @@ -

You can fix this yourself in a few easy steps via the account management web site.

There are two ways in which you may have messed up your keys:

  1. The keys that were stored in the .ssh subdirectory of your home directory on the cluster were accidentally deleted, or the authorized_keys file was accidentally deleted:
    1. Go to account.vscentrum.be
    2. Choose your institute and log in.
    3. At the top of the page, click 'Edit Account'.
    4. Press the 'Update' button on that web page.
    5. Exercise some patience, within 30 minutes, your account should be accessible again.
  2. You deleted your (private) keys on your own computer, or don't know the passphrase anymore
    1. Generate a new public/private key pair. Follow the procedure outlined in the client sections for Linux, Windows and macOS (formerly OS X), or see the command-line sketch below.
    2. Go to account.vscentrum.be
    3. Choose your institute and log in.
    4. At the top of the page, click 'Edit Account'.
    5. Upload your new public key by adding it in the 'Add Public Key' section of the page. Use 'Browse...' to find your public key, then press 'Add' to upload it.
    6. You may now delete the entry for the "lost" key if you know which one that is, but this is not crucial.
    7. Exercise some patience, within 30 minutes, your account should be accessible again.
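
For case 2, on Linux or macOS the new key pair can also be generated on the command line. This is only a minimal sketch; the file name and key size are examples, not a VSC requirement:

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc
# you will be prompted for a new passphrase; the public key to upload is ~/.ssh/id_rsa_vsc.pub
$ cat ~/.ssh/id_rsa_vsc.pub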
diff --git a/HtmlDump/file_0059.html b/HtmlDump/file_0059.html deleted file mode 100644 index d932e55a1..000000000 --- a/HtmlDump/file_0059.html +++ /dev/null @@ -1,30 +0,0 @@ -

Before you can really start using one of the clusters, there are several things you need to do or know:

  1. You need to log on to the cluster via an SSH client to one of the login nodes. This will give you a command line. The software you'll need to use on your client system depends on its operating system.
  2. Your account also comes with a certain amount of data storage capacity in at least three subdirectories on each cluster. You'll need to familiarise yourself with them.
  3. Before you can do some work, you'll have to transfer the files that you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to do that is by using an sftp client (see the example below). It again requires some software on your client system which depends on its operating system.
  4. Optionally, if you wish to use programs with a graphical user interface, you'll need an X server on your client system. Again, this depends on the latter's operating system.
  5. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you'll need to select and load the modules that you need.

Logging in to the login nodes of your institute's cluster may not work if your computer is not on your institute's network (e.g., when you work from home). In those cases you will have to set up a VPN (Virtual Private Network) connection if your institute provides this service.
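
As a quick illustration, logging in and transferring a file from a Linux or macOS client could look as follows. This is only a sketch: replace <login-node> with the actual login node name of your institute's cluster and vsc40000 with your own VSC account.

$ ssh vsc40000@<login-node>
$ sftp vsc40000@<login-node>
sftp> put results.tar.gz
sftp> exit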

" - diff --git a/HtmlDump/file_0061.html b/HtmlDump/file_0061.html deleted file mode 100644 index 7e21e1570..000000000 --- a/HtmlDump/file_0061.html +++ /dev/null @@ -1,52 +0,0 @@ -

What is a group?

The concept of group as it is used here is that of a POSIX group and is a user management concept from the Linux OS (and many other OSes, not just UNIX-like systems). Groups are a useful concept to control access to data or programs for groups of users at once, using so-called group permissions. Three important use cases are:

  1. Controlling access to licensed software, e.g., when one or only some research groups pay for the license
  2. Creating a shared subdirectory to collaborate with several VSC-users on a single project
  3. Controlling access to a project allocation on clusters implementing a credit system (basically all clusters at KU Leuven)

VSC groups are managed without any interaction from the system administrators. This provides a highly flexible way for users to organise themselves. Each VSC group has members and moderators:

Warning: you should not go overboard in creating new groups. Mounting file systems over NFS doesn't work properly if you belong to more than 32 different groups, and so far we have not found a solution for this. This problem occurs when you log on to a VSC cluster at a different site.

Managing groups

Viewing the groups you belong to

You will in fact see that you always belong to at least two groups depending on the institution from which you have your VSC account. -
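
A quick way to check this on the cluster itself is with the standard Linux commands below (shown only as an illustration):

$ groups
$ id -Gn
# both list the names of the groups your VSC account belongs to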

Join an existing group

Create new group

Working with file and directory permissions
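
As a brief sketch of how group permissions are typically applied to a shared directory, using standard Linux commands (the group name gexample and the directory name are hypothetical):

$ chgrp -R gexample $VSC_DATA/shared_project   # give the group ownership of the directory tree
$ chmod -R g+rwX $VSC_DATA/shared_project      # grant the group read/write access; X makes directories searchable
$ chmod g+s $VSC_DATA/shared_project           # new files created here inherit the directory's group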

" - diff --git a/HtmlDump/file_0063.html b/HtmlDump/file_0063.html deleted file mode 100644 index 4dc7fcb56..000000000 --- a/HtmlDump/file_0063.html +++ /dev/null @@ -1,63 +0,0 @@ - - - - - - -
-

Total disk space used on filesystems with quota

-

On filesystems with 'quota enabled', you can check the amount of disk space that is available for you, and the amount of disk space that is in use by you. Unfortunately, there is not a single command that will give you that information for all file systems in the VSC. -

  • quota is the standard command to request your disk quota. Its output is in 'blocks', but can also be given in MB/GB if you use the '-s' option.
  • It does not work on GPFS file systems, however. On those you have to use mmlsquota. This is the case for the scratch space at KU Leuven and on the Tier-1.
  • On some clusters, these commands are currently disabled.
  • Also, using these commands on a cluster other than the one at your home institution will fail to return information about the quota on your VSC_HOME and VSC_DATA directories, and will only show you the quota for your VSC_SCRATCH directory on that system.
quota -s
-Disk quotas for user vsc31234 (uid 123456):
-  Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
-nas2-ib1:/mnt/home
-                648M   2919M   3072M            3685       0       0
-nas2-ib1:/mnt/data
-              20691M  24320M  25600M            134k       0       0
-nas1-ib1:/mnt/site_scratch
-                   0  24320M  25600M               1       0       0
-
-

Each line represents a file system you have access to: $VSC_HOME, $VSC_DATA, and, for this particular example, $VSC_SCRATCH_SITE. The blocks column shows your current usage, quota is the usage above which you will be warned, and limit is "hard", i.e., when your usage reaches this limit, no more information can be written to the file system, and programs that try will fail.

Some file systems have limits on the number of files that can be stored, and those are represented by the last four columns. The number of files you currently have is listed in the column files, quota and limit represent the soft and hard limits for the number of files.

- -

Diskspace used by individual directories

-

The command to check the size of all subdirectories in the current directory is "du": -

-
$ du -h
-4.0k      ./.ssh
-0       ./somedata/somesubdir
-52.0k   ./somedata
-56.0k   .
-		
-

This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory "." (this includes files stored in the current directory). The -h option ensures that sizes are displayed in human-readable form; omitting it will show sizes in bytes.

-

If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory: -

-
du -s 
-54864 .
-
-

If you want to see the size of any file or top level subdirectory in the current directory, you could use the following command: -

-
du -s *
-12      a.out
-3564    core
-4       mpd.hosts
-51200   somedata
-4       start.sh
-4       test
-		
-

Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. If you also want this size to be "human readable" (and not always the total number of kilobytes), you add the parameter "-h": -

-
du -h -s $VSC_DATA/*
-50M     /data/leuven/300/vsc30001/somedata
-		
-
-
-
-
" - diff --git a/HtmlDump/file_0065.html b/HtmlDump/file_0065.html deleted file mode 100644 index 4eb1878c6..000000000 --- a/HtmlDump/file_0065.html +++ /dev/null @@ -1,5 +0,0 @@ -" - diff --git a/HtmlDump/file_0067.html b/HtmlDump/file_0067.html deleted file mode 100644 index 1f145da32..000000000 --- a/HtmlDump/file_0067.html +++ /dev/null @@ -1,24 +0,0 @@ -" - diff --git a/HtmlDump/file_0069.html b/HtmlDump/file_0069.html deleted file mode 100644 index 98026879f..000000000 --- a/HtmlDump/file_0069.html +++ /dev/null @@ -1,44 +0,0 @@ -

This is a very incomplete list, permanently under construction, of books about parallel computing.

General

Grid computing

MPI

OpenMP

GPU computing

Xeon Phi computing

Case studies and examples of programming paradigms

Please mail further suggestions to Kurt.Lust@uantwerpen.be. -

" - diff --git a/HtmlDump/file_0071.html b/HtmlDump/file_0071.html deleted file mode 100644 index b320cf161..000000000 --- a/HtmlDump/file_0071.html +++ /dev/null @@ -1,9 +0,0 @@ -

PRACE

The PRACE Training Portal has a number of training videos online from their courses.

LLNL - Lawrence Livermore National Laboratory (USA)

LLNL provides several tutorials. Not all are applicable to the VSC clusters, but some are. E.g., -

There are also some tutorials on Python. -

NCSA - National Center for Supercomputing Applications (USA)

NCSA runs the CI-Tutor (Cyberinfrastructure Tutor) service that also contains a number of interesting tutorials. At the moment of writing, there is no fee and everybody can subscribe. -

" - diff --git a/HtmlDump/file_0073.html b/HtmlDump/file_0073.html deleted file mode 100644 index 54e2c765a..000000000 --- a/HtmlDump/file_0073.html +++ /dev/null @@ -1 +0,0 @@ -

Getting ready to request an account

Connecting to the cluster

Programming tools

diff --git a/HtmlDump/file_0075.html b/HtmlDump/file_0075.html deleted file mode 100644 index 98c599a7b..000000000 --- a/HtmlDump/file_0075.html +++ /dev/null @@ -1,25 +0,0 @@ -

Prerequisite: PuTTY

By default, there is no SSH client software available on Windows, so you will typically have to install one yourself. We recommend using PuTTY, which is freely available. You do not even need to install it; just download the executable and run it! Alternatively, an installation package (MSI) is also available from the download site that will also install the other tools that you might need.

You can copy the PuTTY executables together with your private key on a USB stick to connect easily from other Windows computers. -

Generating a public/private key pair

To generate a public/private key pair, you can use the PuTTYgen key generator. Start it and follow the steps below. Alternatively, you can follow a short video explaining step by step how to generate a new key pair and save it in the format required by the various VSC login nodes. -

  1. In 'Parameters' (at the bottom of the window), choose 'SSH-2 RSA' and set the number of bits in the key to 2048. (screenshot: PuTTYgen parameters)
  2. Click on 'Generate'. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field 'Public key for pasting into OpenSSH authorized_keys file'.
  3. Next, you should specify a passphrase in the 'Key passphrase' field and retype it in the 'Confirm passphrase' field. Remember, the passphrase protects the private key against unauthorized use, so it is best to choose one that is not too easy to guess. Additionally, it is advised to fill in the 'Key comment' field to make the key easier to identify afterwards. (screenshot: PuTTYgen passphrase)
  4. Finally, save both the public and private keys in a secure place (i.e., a folder on your personal computer, or on your personal USB stick, ...) with the buttons 'Save public key' and 'Save private key'. We recommend using the name "id_rsa.pub" for the public key and "id_rsa.ppk" for the private key.

If you use another program to generate a key pair, please remember that the keys need to be in the OpenSSH format to access the VSC clusters. -

Converting PuTTY keys to OpenSSH format

OpenSSH is a very popular command-line SSH client originating from the Linux world but now available on many operating systems. Therefore its file format is a very popular one. Some applications, such as Eclipse's SSH components, cannot handle private keys generated with PuTTY, only OpenSSH-compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. -

  1. Start PuTTYgen.
  2. From the 'Conversions' menu, select 'Import key' and choose the file containing your PuTTY private key that is used to authenticate on the VSC cluster.
  3. When prompted, enter the appropriate passphrase.
  4. From the 'Conversions' menu, select 'Export OpenSSH key' and save it as 'id_rsa' (or any other name if the former already exists). Remember the file name and its location; it will have to be specified in the configuration process of, e.g., Eclipse.
  5. Exit PuTTYgen.
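
On Linux, the same conversion can also be done on the command line with the puttygen tool (available, e.g., in the putty-tools package on Debian/Ubuntu); the file names below are just examples:

$ puttygen id_rsa.ppk -O private-openssh -o ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa   # the private key must only be readable by you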
" - diff --git a/HtmlDump/file_0079.html b/HtmlDump/file_0079.html deleted file mode 100644 index 86f13639c..000000000 --- a/HtmlDump/file_0079.html +++ /dev/null @@ -1 +0,0 @@ -

2) Click on 'Generate'. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field 'Public key for pasting into OpenSSH authorized_keys file'.

3) Next, you should specify a passphrase in the 'Key passphrase' field and retype it in the 'Confirm passphrase' field. Remember, the passphrase protects the private key against unauthorized use, so it is best to choose one that is not too easy to guess. Additionally, it is advised to fill in the 'Key comment' field to make the key easier to identify afterwards.

diff --git a/HtmlDump/file_0081.html b/HtmlDump/file_0081.html deleted file mode 100644 index 24cad5c8a..000000000 --- a/HtmlDump/file_0081.html +++ /dev/null @@ -1 +0,0 @@ -


4) Finally, save both the public and private keys in a secure place (i.e., a folder on your personal computer, or on your personal USB stick, ...) with the buttons 'Save public key' and 'Save private key'. We recommend using the name "id_rsa.pub" for the public key and "id_rsa.ppk" for the private key.

If you use another program to generate a key pair, please remember that the keys need to be in the OpenSSH format to access the VSC clusters.

Converting PuTTY keys to OpenSSH format

OpenSSH is a very popular command-line SSH client originating from the Linux world but now available on many operating systems. Therefore its file format is a very popular one. Some applications, such as Eclipse's SSH components, cannot handle private keys generated with PuTTY, only OpenSSH-compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse.

  1. Start PuTTYgen.
  2. From the 'Conversions' menu, select 'Import key' and choose the file containing your PuTTY private key that is used to authenticate on the VSC cluster.
  3. When prompted, enter the appropriate passphrase.
  4. From the 'Conversions' menu, select 'Export OpenSSH key' and save it as 'id_rsa' (or any other name if the former already exists). Remember the file name and its location, it will have to be specified in the configuration process of, e.g., Eclipse.
  5. Exit PuTTYgen.
diff --git a/HtmlDump/file_0083.html b/HtmlDump/file_0083.html deleted file mode 100644 index 17cc23263..000000000 --- a/HtmlDump/file_0083.html +++ /dev/null @@ -1,37 +0,0 @@ -

Each of the major VSC-institutions has its own user support:

What information should I provide when contacting user support?

When you submit a support request, it helps if you always provide: -

  1. your VSC user ID (or VUB netID),
  2. contact information - it helps to specify your preferred mail address and phone number for contact,
  3. an informative subject line for your request,
  4. the time the problem occurred,
  5. the steps you took to resolve the problem.

Below, you will find more useful information you can provide for various categories of problems you may encounter. Although it may seem like more work to you, it will often save a few iterations and get your problem solved faster. -

If you have problems logging in to the system

then provide the following information: -

  1. your operating system (e.g., Linux, Windows, MacOS X, ...),
  2. your client software (e.g., PuTTY, OpenSSH, ...),
  3. your location (e.g., on campus, at home, abroad),
  4. whether the problem is systematic (how many times did you try, over which period) or intermittent,
  5. any error messages shown by the client software, or an error log if it is available.

If installed software malfunctions/crashes

then provide the following information: -

  1. the name of the application (e.g., Ansys, Matlab, R, ...),
  2. the module(s) you load to use the software (e.g., R/3.1.2-intel-2015a),
  3. the error message the application produces,
  4. whether the error is reproducible,
  5. if possible, a procedure and data to reproduce the problem,
  6. if the application was run as a job, the jobID(s) of (un)successful runs.

If your own software malfunctions/crashes

then provide the following information: -

  1. the location of the source code,
  2. the error message produced at build time or runtime,
  3. the toolchain and other module(s) you load to build the software (e.g., intel/2015a with HDF5/1.8.4-intel-2015a),
  4. if possible and applicable, a procedure and data to reproduce the problem,
  5. if the software was run as a job, the jobID(s) of (un)successful runs.
" - diff --git a/HtmlDump/file_0085.html b/HtmlDump/file_0085.html deleted file mode 100644 index 895b74c62..000000000 --- a/HtmlDump/file_0085.html +++ /dev/null @@ -1,16 +0,0 @@ -

A complete list of all available software on a particular cluster can be obtained by typing:

$ module av

In order to use those software packages, the user should work with the module system. On the newer systems, we use the same naming conventions for packages on all systems. Due to the ever expanding list of packages, we've also made some adjustments and don't always show all packages, so be sure to check out the page on the module system again to learn how you can see more packages.

Note: Since August 2016, a different implementation of the module system, called Lmod, has been deployed on the UGent and VUB Tier-2 systems. Though highly compatible with the system used on the other clusters, it offers a lot of new commands and has some key differences.

Packages with additional documentation

" - diff --git a/HtmlDump/file_0087.html b/HtmlDump/file_0087.html deleted file mode 100644 index 3f94685c4..000000000 --- a/HtmlDump/file_0087.html +++ /dev/null @@ -1,85 +0,0 @@ -

Software stack

Software installation and maintenance on HPC infrastructure such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. For many libraries and programs, multiple versions have to be installed and maintained as some users require specific versions of those. And those libraries or executables sometimes rely on specific versions of other libraries, further complicating the matter.

The way Linux finds the right executable for a command, and a program loads the right version of a library or a plug-in, is through so-called environment variables. These can, e.g., be set in your shell configuration files (e.g., .bashrc), but this requires a certain level of expertise. Moreover, getting those variables right is tricky and requires knowledge of where all files are on the cluster. Having to manage all this by hand is clearly not an option. -

We deal with this on the VSC clusters in the following way. First, we've defined the concept of a toolchain on most of the newer clusters. Toolchains consist of a set of compilers, an MPI library and basic libraries that work well together, and then a number of applications and other libraries compiled with that set of tools and thus often dependent on those. We use toolchains based on the Intel and GNU compilers, and refresh them twice a year, leading to version numbers like 2014a, 2014b or 2015a for the first and second refresh of a given year. Some tools are installed outside a toolchain, e.g., additional versions requested by a small group of users for specific experiments, or tools that only depend on basic system libraries. Second, we use the module system to manage the environment variables and all dependencies and possible conflicts between various programs and libraries, and that is what this page focuses on. -

Note: Since August 2016, a different implementation of the module system, called Lmod, has been deployed on the UGent and VUB Tier-2 systems. Though highly compatible with the system used on the other clusters, it offers a lot of new commands and has some key differences. Most of the commands below will still work though.
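
On those Lmod-based systems you can, for instance, search across all installed packages, including the ones not shown by default, with the spider subcommand. A brief illustration (the package name and version are just examples):

$ module spider Python
$ module spider Python/2.7.11-intel-2016a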

Basic use of the module system

Many software packages are installed as modules. These packages include compilers, interpreters, mathematical software such as Matlab and SAS, as well as other applications and libraries. This is managed with the module command. -

To view a list of available software packages, use the command module av. The output will look similar to this: -

$ module av
------ /apps/leuven/thinking/2014a/modules/all ------
-Autoconf/2.69-GCC-4.8.2
-Autoconf/2.69-intel-2014a
-Automake/1.14-GCC-4.8.2
-Automake/1.14-intel-2014a
-BEAST/2.1.2
-...
-pyTables/2.4.0-intel-2014a-Python-2.7.6
-timedrun/1.0.1
-worker/1.4.2-foss-2014a
-zlib/1.2.8-foss-2014a
-zlib/1.2.8-intel-2014a
-

This gives a list of software packages that can be loaded. Some packages in this list include intel-2014a or foss-2014a in their name. These are packages installed with the 2014a versions of the toolchains based on the Intel and GNU compilers respectively. The other packages do not belong to a particular toolchain. The name of the packages also includes a version number (right after the /) and sometimes other packages they need. -

Often, when looking for some specific software, you will want to filter the list of available modules, since it tends to be rather large. The module command writes its output to standard error, rather than standard output, which is somewhat confusing when using pipes to filter. The following command would show only the modules that have the string 'python' in their name, regardless of the case.

$ module av |& grep -i python
-

A module is loaded using the command module load with the name of the package. E.g., with the above list of modules, -

$ module load BEAST
-

will load the BEAST/2.1.2 package. -

For some packages, e.g., zlib in the above list, multiple versions are installed; the module load command will automatically choose the lexicographically last, which is typically, but not always, the most recent version. In the above example, -

 $ module load zlib
-

will load the module zlib/1.2.8-intel-2014a. This may not be the module that you want if you're using the GNU compilers. In that case, the user should specify a particular version, e.g., -

$ module load zlib/1.2.8-foss-2014a
-

Obviously, the user needs to keep track of the modules that are currently loaded. After executing the above two load commands, the list of loaded modules will be very similar to: -

$ module list
-Currently Loaded Modulefiles:
-  1) /thinking/2014a
-  2) Java/1.7.0_51
-  3) icc/2013.5.192
-  4) ifort/2013.5.192
-  5) impi/4.1.3.045
-  6) imkl/11.1.1.106
-  7) intel/2014a
-  8) beagle-lib/20140304-intel-2014a
-  9) BEAST/2.1.2
- 10) GCC/4.8.2
- 11) OpenMPI/1.6.5-GCC-4.8.2
- 12) gompi/2014a
- 13) OpenBLAS/0.2.8-gompi-2014a-LAPACK-3.5.0
- 14) FFTW/3.3.3-gompi-2014a
- 15) ScaLAPACK/2.0.2-gompi-2014a-OpenBLAS-0.2.8-LAPACK-3.5.0
- 16) foss/2014a
- 17) zlib/1.2.8-foss-2014a
-

It is important to note at this point that, e.g., icc/2013.5.192 is also listed, although it was not loaded explicitly by the user. This is because BEAST/2.1.2 depends on it, and the system administrator specified that the intel toolchain module that contains this compiler should be loaded whenever the BEAST module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it! -

To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. One can however unload automatically loaded modules manually, to debug some problem. -

$ module unload BEAST
-

Notice that the version was not specified: the module system is sufficiently clever to figure out what the user intends. However, checking the list of currently loaded modules is always a good idea, just to make sure... -

In order to unload all modules at once, and hence be sure to start with a clean slate, use: -

$ module purge
-

It is a good habit to use this command in PBS scripts, prior to loading the modules specifically needed by the applications in that job script. This ensures that no version conflicts occur if the user loads modules in his .bashrc file. -
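As a minimal sketch of what that looks like in practice (the resource requests, application command and input file are made-up examples, not recommendations):

#!/bin/bash -l
#PBS -l nodes=1:ppn=20
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
module purge                 # start from a clean environment
module load BEAST/2.1.2      # load exactly the versions the job needs
beast my_analysis.xml        # (hypothetical) run the application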

Finally, modules need not be loaded one by one; the two 'load' commands can be combined as follows: -

$ module load  BEAST/2.1.2  zlib/1.2.8-foss-2014a
-

This will load the two modules and, automatically, the respective toolchains with just one command. -

To get a list of all available module commands, type: -

$ module help
-

Getting even more software

The list of software available on a particular cluster can be unwieldy and the information that module av produces overwhelming. Therefore the administrators may have chosen to show only the most relevant packages by default, and not show, e.g., packages that aim at a different cluster, a particular node type or a less complete toolchain. Those additional packages can then be enabled by loading another module first. E.g., on hopper, the most recent UAntwerpen cluster when we wrote this text, the most complete and most used toolchains were the 2014a versions. Hence only the packages in those releases of the intel and foss (GNU) toolchains were shown at the time. Yet -

$ module av
-

returns at the end of the list: -

...
-ifort/2015.0.090                   M4/1.4.16-GCC-4.8.2
-iimpi/7.1.2                        VTune/2013_update10
------------------------ /apps/antwerpen/modules/calcua ------------------------
-hopper/2014a hopper/2014b hopper/2015a hopper/2015b hopper/2016a hopper/2016b 
-hopper/all   hopper/sl6   perfexpert   turing
-

Modules such as hopper/2014b enable additional packages when loaded. -

Similarly, on ThinKing, the KU Leuven cluster: -

$ module av
-...
--------------------------- /apps/leuven/etc/modules/ --------------------------
-cerebro/2014a   K20Xm/2014a     K40c/2014a      M2070/2014a     thinking/2014a
-ictstest/2014a  K20Xm/2015a     K40c/2015a      phi/2014a       thinking2/2014a
-

shows modules specifically for the thin node cluster ThinKing, the SGI shared memory system Cerebro, three types of NVIDIA GPU nodes and the Xeon Phi nodes. Loading one of these will show the appropriate packages in the list obtained with module av. E.g., -

module load cerebro/2014a
-

will make some additional modules available for Cerebro, including two additional toolchains with the SGI MPI libraries to take full advantage of the interconnect of that machine. -

Explicit version numbers

As a rule, once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behavior. -

Consider the following example: the user decides to use the GSL library for numerical computations, and at that point in time, just a single version 1.15, compiled with the foss toolchain is installed on the cluster. The user loads the library using: -

$ module load GSL
-

rather than -

$ module load GSL/1.15-foss-2014a
-

Everything works fine, up to the point where a new version of GSL is installed, e.g., 1.16 compiled with both the intel and the foss toolchain. From then on, the user's load command will load the latter version, rather than the one he intended, which may lead to unexpected problems. -

" - diff --git a/HtmlDump/file_0097.html b/HtmlDump/file_0097.html deleted file mode 100644 index ff16b4170..000000000 --- a/HtmlDump/file_0097.html +++ /dev/null @@ -1,3 +0,0 @@ -

HPC for industry

-

The collective expertise, training programs and infrastructure of the VSC and the participating university associations have the potential to create significant added value for your business.

" - diff --git a/HtmlDump/file_0099.html b/HtmlDump/file_0099.html deleted file mode 100644 index 8064572a8..000000000 --- a/HtmlDump/file_0099.html +++ /dev/null @@ -1,3 +0,0 @@ -

What is supercomputing?

-

Supercomputers have an immense impact on our daily lives. Their scope extends far beyond the weather forecast after the news.

" - diff --git a/HtmlDump/file_0109.html b/HtmlDump/file_0109.html deleted file mode 100644 index df6c77983..000000000 --- a/HtmlDump/file_0109.html +++ /dev/null @@ -1,2 +0,0 @@ -

Projects and cases

-

The VSC infrastructure is used by many academic and industrial users. Here are just a few case studies of work involving the VSC infrastructure and an overview of actual projects run on the Tier-1 infrastructure.

diff --git a/HtmlDump/file_0113.html b/HtmlDump/file_0113.html deleted file mode 100644 index d23639d48..000000000 --- a/HtmlDump/file_0113.html +++ /dev/null @@ -1,8 +0,0 @@ -

Technical support

Please also take a look at our web page about technical support. It contains a lot of tips about the information that you can pass to us with your support question so that we can provide a helpful answer faster. -

General enquiries

For non-technical questions about the VSC, you can contact the FWO or one of the coordinators from the participating universities. This may range from questions about admission requirements to questions about setting up a course, or other questions that are not directly related to technical problems.
-

" - diff --git a/HtmlDump/file_0115.html b/HtmlDump/file_0115.html deleted file mode 100644 index d7cb9515b..000000000 --- a/HtmlDump/file_0115.html +++ /dev/null @@ -1,4 +0,0 @@ -

FWO

-

Research Foundation - Flanders (FWO)
Egmontstraat 5
1000 Brussel

Tel. +32 (2) 512 91 10
E-mail: post@fwo.be
Web page of the FWO -

" - diff --git a/HtmlDump/file_0117.html b/HtmlDump/file_0117.html deleted file mode 100644 index 228a46d8f..000000000 --- a/HtmlDump/file_0117.html +++ /dev/null @@ -1,5 +0,0 @@ -

Antwerp University Association

-

Stefan Becuwe
Antwerp University
- Department of Mathematics and Computer Science
Middelheimcampus M.G 310
Middelheimlaan 1
2020 Antwerpen -

Tel.: +32 (3) 265 3860
E-mail: Stefan.Becuwe@uantwerpen.be
Contact page on the UAntwerp site

" - diff --git a/HtmlDump/file_0119.html b/HtmlDump/file_0119.html deleted file mode 100644 index cdc6526ed..000000000 --- a/HtmlDump/file_0119.html +++ /dev/null @@ -1,2 +0,0 @@ -

KU Leuven Association

-

Leen Van Rentergem
KU Leuven, Directie ICTS
Willem de Croylaan 52c - bus 5580
3001 Heverlee

Tel.:+32 (16) 32 21 55 or +32 (16) 32 29 99
E-mail: leen.vanrentergem@kuleuven.be
Contact page on the KU Leuven site

diff --git a/HtmlDump/file_0121.html b/HtmlDump/file_0121.html deleted file mode 100644 index b69b19cbb..000000000 --- a/HtmlDump/file_0121.html +++ /dev/null @@ -1,2 +0,0 @@ -

Universitaire Associatie Brussel

-

Stefan Weckx
VUB, Research Group of Industrial Microbiology and Food Biotechnology
Pleinlaan 2
1050 Brussel

Tel.: +32 (2) 629 38 63
E-mail: Stefan.Weckx@vub.ac.be
Contact page on the VUB site

diff --git a/HtmlDump/file_0123.html b/HtmlDump/file_0123.html deleted file mode 100644 index e4461e902..000000000 --- a/HtmlDump/file_0123.html +++ /dev/null @@ -1,4 +0,0 @@ -

Ghent University Association

-

Ewald Pauwels
Ghent University, ICT Department
Krijgslaan 281 S89
9000 Gent

Tel: +32 (9) 264 4716
E-mail: Ewald.Pauwels@ugent.be
Contact page on the UGent site -

" - diff --git a/HtmlDump/file_0125.html b/HtmlDump/file_0125.html deleted file mode 100644 index 8a45be298..000000000 --- a/HtmlDump/file_0125.html +++ /dev/null @@ -1,2 +0,0 @@ -

Associatie Universiteit-Hogescholen Limburg

-

Geert Jan Bex
VSC course coordinator

UHasselt, Dienst Onderzoekscoördinatie
Campus Diepenbeek
Agoralaan Gebouw D
3590 Diepenbeek

Tel.: +32 (11) 268231 or +32 (16) 322241
E-mail: GeertJan.Bex@uhasselt.be
Contact page on the UHasselt site and personal web page

diff --git a/HtmlDump/file_0127.html b/HtmlDump/file_0127.html deleted file mode 100644 index 07a9be445..000000000 --- a/HtmlDump/file_0127.html +++ /dev/null @@ -1,2 +0,0 @@ -

Contact us

-

You can also contact the coordinators by filling in the form below.

diff --git a/HtmlDump/file_0129.html b/HtmlDump/file_0129.html deleted file mode 100644 index 5e0288e24..000000000 --- a/HtmlDump/file_0129.html +++ /dev/null @@ -1,2 +0,0 @@ -

Technical problems?

-

Don't use this form, but contact your support team directly using the contact information in the user portal.

diff --git a/HtmlDump/file_0131.html b/HtmlDump/file_0131.html deleted file mode 100644 index 75ecae935..000000000 --- a/HtmlDump/file_0131.html +++ /dev/null @@ -1 +0,0 @@ -

Need help? Have more questions? Contact us!

diff --git a/HtmlDump/file_0133.html b/HtmlDump/file_0133.html deleted file mode 100644 index 8364f21bc..000000000 --- a/HtmlDump/file_0133.html +++ /dev/null @@ -1,2 +0,0 @@ -

The VSC is a partnership of five Flemish university associations. The Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain. There is also a local support office in Hasselt.

" - diff --git a/HtmlDump/file_0135.html b/HtmlDump/file_0135.html deleted file mode 100644 index a6795a77b..000000000 --- a/HtmlDump/file_0135.html +++ /dev/null @@ -1 +0,0 @@ -

Ghent

The recent data center of Ghent University (2011) on Campus Sterre features a room specially equipped to accommodate the VSC infrastructure. This room currently houses the majority of the Tier-2 infrastructure of Ghent University and the first VSC Tier-1 capability system. The adjacent building of the ICT Department hosts the Ghent University VSC employees, including support staff for the Ghent University Association (AUGent).

Louvain

KU Leuven equipped its new data center (2012) in Heverlee with a separate room for the VSC infrastructure. This room currently houses the joint Tier-2 infrastructure of KU Leuven and Hasselt University and an experimental GPU/Xeon Phi cluster. This space will also house the next VSC Tier-1 computer. The nearby ICTS building houses the KU Leuven VSC employees, including the support team for the KU Leuven Association.

Hasselt

The VSC does not feature a computer room in Hasselt, but there is a local user support office for the Association University-Colleges Limburg (AU-HL) at Campus Diepenbeek.

Brussels

The VUB shares a data center with the ULB on the Solbosch Campus; it also houses the VUB Tier-2 cluster and a large part of the BEgrid infrastructure. The VSC also has a local team responsible for managing this infrastructure and for user support within the Brussels University Association (UAB) and for BEgrid.

Antwerp

The University of Antwerp has a computer room equipped for HPC infrastructure on Campus Groenenborger. A little further, on Campus Middelheim, the UAntwerpen VSC members have their offices in the Mathematics and Computer Science building. This team also handles user support for the Antwerp University Association (AUHA).

diff --git a/HtmlDump/file_0137.html b/HtmlDump/file_0137.html deleted file mode 100644 index 5383f8373..000000000 --- a/HtmlDump/file_0137.html +++ /dev/null @@ -1,44 +0,0 @@ -

The VSC is a consortium of five Flemish universities. This consortium has no legal personality. Its objective is to build a Tier-1 and Tier-2 infrastructure in accordance with the European pyramid model. Staff appointed at five Flemish universities form an integrated team dedicated to training and user support.

For specialised support, each institution can call on an expert regardless of where he or she is employed. The universities also invest in HPC infrastructure, and the VSC can call on the central services of these institutions. In addition, the embedding in an academic environment creates opportunities for cooperation with industrial partners. -

The VSC project is managed by the Research Foundation - Flanders (FWO), which receives the necessary financial resources for this task from the Flemish Government. -

Operationally, the VSC is controlled by the HPC workgroup, consisting of FWO employees and the HPC coordinators of the various universities. The workgroup meets monthly; during these meetings, operational issues are discussed and agreed upon, and strategic advice is offered to the Board of Directors of the FWO.
-

In addition, four committees are involved in the operation of the VSC: the Tier-1 user committee, the Tier-1 evaluation committee, the Industrial Board and the Scientific Advisory Board. -

VSC users' committee

The VSC users' committee was established to provide advice on the needs of users and on ways to improve the services, including the training of users. The users' committee also plays a role in maintaining contact with users by spreading information about the VSC, making (potential) users aware of the possibilities offered by HPC, and organising the annual user day. -

The members of the committee are listed below in alphabetical order, grouped by the university they are associated with: -

The members representing the strategic research institutes are -

The representation of the Industrial Board: -

Tier-1 evaluation committee

This committee evaluates applications for computing time on the Tier-1. Based upon admissibility and other evaluation criteria the committee grants the appropriate computing time. -

This committee is composed as follows: -

The FWO provides the secretariat of the committee. -

Industrial Board

The Industrial Board serves as a communication channel between the VSC and the industry in Flanders. The VSC offers a scientific/technical computing infrastructure to the whole Flemish research community and industry. The Industrial Board can facilitate the exchange of ideas and expertise between the knowledge institutions and industry. -

The Industrial Board also develops initiatives to inform companies and non-profit institutions about the added value that HPC delivers in the development and optimisation of services and products and promotes the services that the VSC delivers to companies, such as consultancy, research collaboration, training and compute power. -

The members are: -

" - diff --git a/HtmlDump/file_0141.html b/HtmlDump/file_0141.html deleted file mode 100644 index dfd5aa193..000000000 --- a/HtmlDump/file_0141.html +++ /dev/null @@ -1 +0,0 @@ -

A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone, but there are important differences.

diff --git a/HtmlDump/file_0145.html b/HtmlDump/file_0145.html deleted file mode 100644 index b83b2520d..000000000 --- a/HtmlDump/file_0145.html +++ /dev/null @@ -1,15 +0,0 @@ -

The VSC account

In order to use the infrastructure of the VSC, you need a VSC-userid, also called a VSC account. The only exception is users of the VUB who only want to use the VUB Tier-2 infrastructure; for them, their VUB userid is sufficient. You can use the same userid on all VSC infrastructure to which you have access.

Your account also includes two “blocks” of disk space: your home directory and data directory. Both are accessible from all VSC clusters. When you log in to a particular cluster, you will also be assigned one or more blocks of temporary disk space, called scratch directories. Which directory should be used for which type of data is explained in the user documentation. -

You do not automatically have access to all VSC clusters with your VSC account. For the main Tier-1 compute cluster you need to submit a project application (or you should be covered by a project application within your research group). For some more specialised hardware you have to request access separately, typically from the coordinator of your institution, because we want to be sure that this (usually rather expensive) hardware is used efficiently for the type of applications for which it was purchased. Also, you do not simply get automatic access to all available software. You can use all free software and a number of compilers and other development tools, but for most commercial software, you must first prove that you have a valid license (or the person who has paid for the license on the cluster must allow you to use it). For this you can contact your local support team. -

Before you can apply for your account, you will usually have to install an extra piece of software on your computer, called an SSH client. How the actual account application should be made and where you can find the software is explained in the user documentation on the user portal. -

Who can get access?

Additional information

Before you apply for a VSC account, it is useful to first check whether the infrastructure is suitable for your application. Windows or macOS programs, for instance, cannot run on our infrastructure, as we use the Linux operating system on the clusters. The infrastructure should also not be used to run applications for which the compute power of a good laptop is sufficient. The pages on the Tier-1 and Tier-2 infrastructure in this part of the website give a high-level description of our infrastructure. You can find more detailed information in the user documentation on the user portal. When in doubt, you can also contact your local support team. This does not require a VSC account. -

Furthermore, you should first check the page "Account request" in the user documentation and install the necessary software on your PC. You can also find links to information about that software on the "Account request" page. -

Furthermore, it can also be useful to take one of the introductory courses that we organise periodically at all universities. However, it is best to apply for your VSC account before the course, since you can then also do the exercises during the course. We strongly urge people who are not familiar with the use of a Linux supercomputer to take such a course, as we do not have enough staff to help everyone individually with all those generic issues. -

" - diff --git a/HtmlDump/file_0149.html b/HtmlDump/file_0149.html deleted file mode 100644 index ac8641493..000000000 --- a/HtmlDump/file_0149.html +++ /dev/null @@ -1,14 +0,0 @@ -

We offer you the opportunity of a free trial of the Tier-1 to prepare a future regular Tier-1 project application. You can test if your software runs well on the Tier-1 and do the scalability tests that are required for a project application.

If you want to check whether buying compute time on our infrastructure is an option for you, we offer a very similar free trial programme.

Characteristics of a Starting Grant

Procedure to apply and grant the request

  1. Download the application form for a starting grant version 2018 (docx, 31 kB).
  2. Send the completed application by e-mail to the Tier-1 contact address (hpcinfo@icts.kuleuven.be), with your local VSC coordinator in cc.
  3. The request will be judged for its validity by the Tier-1 coordinator.
  4. After approval, the Tier-1 coordinator will give you access and compute time. If not approved, you will get an answer with a motivation for the decision.
  5. The granted requests are published on the VSC website; therefore you need to provide a short abstract in the application.
" - diff --git a/HtmlDump/file_0153.html b/HtmlDump/file_0153.html deleted file mode 100644 index f9a379075..000000000 --- a/HtmlDump/file_0153.html +++ /dev/null @@ -1,49 +0,0 @@ -

The application

The designated way to get access to the Tier-1 for research purposes is through a project application.

You have to submit a proposal to get compute time on the Tier-1 cluster BrENIAC. -

Your application should include a realistic estimate of the compute time needed in the project. These estimates are best supported by benchmarks on the Tier-1 system. To be able to perform such tests for new codes, you can request a starting grant through a short and quick procedure. -

You can submit proposals continuously, but they will be gathered, evaluated and resources allocated at a number of cut-off dates. There are three cut-off dates in 2018: -

Proposals submitted since the last cut-off and before each of these dates are reviewed together. -

The FWO appoints an evaluation commission to do this. -

Because of the international composition of the evaluation commission, the preferred language for the proposals is English. If a proposal is in Dutch, you must also send an English translation. Please have a look at the documentation of standard terms such as CPU, core, node-hour, memory and storage, and use these consistently in the proposal. -

You can submit your application via EasyChair using the application forms below.
-

Relevant documents - 2018

As was already the case for applications for computing time on the Tier-1 granted in 2016 and 2017 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants do not have to contribute to the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications. For industry, the price for compute time is 13 euro per node day including VAT, and for storage 15 euro per TB per month including VAT. -

The adjusted Regulations for 2018 can be found in the links below. -

If you need help to fill out the application, please consult your local support team. -

Relevant documents - 2017

As was already the case for applications for computing time on the Tier-1 granted in 2016 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants do not have to contribute to the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications. For industry, the price for compute time is 13 euro per node day including VAT, and for storage 15 euro per TB per month including VAT. -

The adjusted Regulations for 2017 can be found in the links below. -

EasyChair procedure

You have to submit your proposal on EasyChair for the conference Tier12018. This requires the following steps:
-

  1. If you do not yet have an EasyChair account, you first have to create one:
     1. Complete the CAPTCHA.
     2. Provide your first name, name and e-mail address.
     3. A confirmation e-mail will be sent; follow the instructions in this e-mail (click the link).
     4. Complete the required details.
     5. When the account has been created, a link will appear to log in on the TIER1 submission page.
  2. Log in to the EasyChair system.
  3. Select 'New submission'.
  4. If asked, accept the EasyChair terms of service.
  5. Add one or more authors; if they have an EasyChair account, they can follow up on and/or adjust the present application.
  6. Complete the title and abstract.
  7. Specify at least three keywords: include the institution of the promoter of the present project and the field of research.
  8. As the paper, submit a PDF version of the completed application form. You must submit the complete proposal, including the enclosures, as one single PDF file.
  9. Click "Submit".
  10. EasyChair will send a confirmation e-mail to all listed authors.
" - diff --git a/HtmlDump/file_0155.html b/HtmlDump/file_0155.html deleted file mode 100644 index e63916f14..000000000 --- a/HtmlDump/file_0155.html +++ /dev/null @@ -1,53 +0,0 @@ -

The VSC infrastructure can also be used by industry and by non-Flemish research institutes. Here we describe the modalities.

Tier-1

It is possible to get paid access to the Tier-1 infrastructure of the VSC. In a first phase, you can get up to 100 free node-days of compute time to verify that the infrastructure is suitable for your applications. You can also get basic support for software installation and the use of the infrastructure. When your software requires a license, you should take care of that yourself. -

For further use, a three-party legal agreement is required with KU Leuven, as the operator of the system, and the Research Foundation - Flanders (FWO). You will be billed only for the computing time used and the reserved disk space, according to the following rates: -

Summary of Rates (VAT included):

                                                               Compute (euro/node day)    Storage (euro/TB/month)
Non-Flemish public research institutes
and not-for-profit organisations                               € 13                       € 15
Industry                                                       € 13                       € 15
These prices include the university overhead and basic support from the Tier-1 support staff, but no advanced level support by specialised staff. -

For more information you can contact our industry account manager (FWO). -

Tier-2

It is also possible to gain access to the Tier-2 infrastructure within the VSC. Within the Tier-2 infrastructure, there are also clusters tailored to special applications such as small clusters with GPU or Xeon Phi boards, a large shared memory machine or a cluster for Hadoop applications. See the high-level overview or detailed pages about the available infrastructure for more information. -

For more information and specific arrangements please contact the coordinator of the institution which operates the infrastructure. In this case you only need an agreement with this institution without involvement of the FWO. -

" - diff --git a/HtmlDump/file_0177.html b/HtmlDump/file_0177.html deleted file mode 100644 index bca17d1dc..000000000 --- a/HtmlDump/file_0177.html +++ /dev/null @@ -1,7 +0,0 @@ -

The VSC is responsible for the development and management of high-performance computing infrastructure used for research and innovation. The quality level of the infrastructure is comparable to that of computational infrastructures in comparable European regions. In addition, the VSC is internationally connected through European projects such as PRACE(1) (traditional supercomputing) and EGI(2) (grid computing). Since October 2012, Belgium has been a member of PRACE and participates in EGI via BEgrid.

The VSC infrastructure consists of two layers in the European multi-layer model for an integrated HPC infrastructure. Local clusters (Tier-2) at the Flemish universities are responsible for processing the mass of smaller computational tasks and provide a solid base for the HPC ecosystem. A larger central supercomputer (Tier-1) is necessary for more complicated calculations while simultaneously serving as a bridge to infrastructures at a European level. -

The VSC assists researchers at academic institutions, as well as industry, in using HPC through training programmes and targeted advice. This offers the advantage that academics and industrial users come into contact with each other. -

In addition, the VSC also works on raising awareness of the added value HPC can offer both in academic research and in industrial applications. -

(1) PRACE: Partnership for Advanced Computing in Europe
- (2) EGI: European Grid Infrastructure -

" - diff --git a/HtmlDump/file_0179.html b/HtmlDump/file_0179.html deleted file mode 100644 index 4f12ee93b..000000000 --- a/HtmlDump/file_0179.html +++ /dev/null @@ -1,65 +0,0 @@ -

On 20 July 2006 the Flemish Government decided on the action plan 'Flanders i2010, time for a digital momentum in the innovation chain'. A study made by the steering committee e-Research, published in November 2007, indicated the need for more expertise, support and infrastructure for grid and High Performance Computing.

Around the same time, the Royal Flemish Academy of Belgium for Science and the Arts (KVAB) published an advisory illustrating the need for a dynamic High Performance Computing strategy for Flanders. This recommendation focused on a Flemish Supercomputer Center with the ability to compete with existing infrastructures at regional or national level in comparable countries. -

Based on these recommendations, the Flemish Government decided on 14 December 2007 to fund the Flemish Supercomputer Center, an initiative of five Flemish universities. They joined forces to coordinate and to integrate their High Performance Computing infrastructures and to make their knowledge available to the public and for privately funded research. -

The grants were used to fund both capital expenditures and staff. As a result, the existing university infrastructure was integrated through fast network connections and additional software. Thus the pyramid model recommended by PRACE is applied: a central Tier-1 cluster is responsible for running large parallel computing jobs, while Tier-2 focuses on local use at the various universities but is also open to other users. Hasselt University decided to collaborate with KU Leuven to build a shared infrastructure, while the other universities opted to go it alone. -

Some milestones

- (1) FFEU: Financieringsfonds voor Schuldafbouw en Eenmalige investeringsuitgaven (Financing fund for debt reduction and one-time investment)
- (2) ESFRI: European Strategy Forum on Research Infrastructures -

" - diff --git a/HtmlDump/file_0183.html b/HtmlDump/file_0183.html deleted file mode 100644 index a462bdf11..000000000 --- a/HtmlDump/file_0183.html +++ /dev/null @@ -1,26 +0,0 @@ -

Strategic plans and annual reports

Newsletter: VSC Echo

Our newsletter, VSC Echo, is distributed three times a year by e-mail. The latest edition, number 10, is dedicated to : -

Subscribe or unsubscribe

If you would like to receive this newsletter by e-mail, just send an e-mail to listserv@ls.kuleuven.be with the text 'subscribe VSCECHO' in the message body (and not in the subject line; the quotes themselves should not be typed). Alternatively (if your e-mail client is correctly configured in your browser), you can also send the e-mail from your browser. -

You will receive a reply from LISTSERV@listserv.cc.kuleuven.ac.be asking you to confirm your subscription. Follow this link in the e-mail and you will be automatically subscribed to future issues of the newsletter. -

If you no longer wish to receive the newsletter, please send an e-mail to listserv@ls.kuleuven.be with the text unsubscribe VSCECHO in the message body (and not in the subject line). Alternatively (if your e-mail is correctly configured in your browser), you can also send an e-mail from your browser. -

Archive

" - diff --git a/HtmlDump/file_0185.html b/HtmlDump/file_0185.html deleted file mode 100644 index 5f50de06c..000000000 --- a/HtmlDump/file_0185.html +++ /dev/null @@ -1 +0,0 @@ -

Press contacts should be channeled through the Research Foundation - Flanders (FWO).

Available material

  • Zip file with the VSC logo in a number of formats.
  • diff --git a/HtmlDump/file_0191.html b/HtmlDump/file_0191.html deleted file mode 100644 index f154bf397..000000000 --- a/HtmlDump/file_0191.html +++ /dev/null @@ -1,135 +0,0 @@ -

    Getting compute time in other centres

    Training programs in other centres

    EU initiatives

    Some grid efforts

    Some HPC centres in Europe

    " - diff --git a/HtmlDump/file_0193.html b/HtmlDump/file_0193.html deleted file mode 100644 index c330c77b7..000000000 --- a/HtmlDump/file_0193.html +++ /dev/null @@ -1 +0,0 @@ -

    The Flemish Supercomputer Centre (VSC) is a virtual supercomputer center for academics and industry. It is managed by the Hercules Foundation in partnership with the five Flemish university associations.

    diff --git a/HtmlDump/file_0203.html b/HtmlDump/file_0203.html deleted file mode 100644 index 25b274c99..000000000 --- a/HtmlDump/file_0203.html +++ /dev/null @@ -1,14 +0,0 @@ -

    Account management at the VSC is mostly done through the web site account.vscentrum.be using your institute account rather than your VSC account.

    Managing user credentials

    Group and Virtual Organisation management

    Once your VSC account is active and you can log on to your home cluster, you can also manage groups through the account management web interface. Groups (a Linux/UNIX concept) are used to control access to licensed software (e.g., software licenses paid for by one or more research groups), to create subdirectories where researchers working on the same project can collaborate and control access to those files, and to control access to project credits on clusters that use these (all clusters at KU Leuven).
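
    As an illustration of how such a group can be used once it exists (the group itself is created and managed through the account management web interface described above), a shared project directory could be set up roughly as follows. The group name 'myproject' and the directory name are hypothetical, and these are standard Linux commands rather than a VSC-specific tool:

    $ mkdir -p $VSC_DATA/myproject_shared         # create the shared directory
    $ chgrp myproject $VSC_DATA/myproject_shared  # hand it over to the group
    $ chmod 2770 $VSC_DATA/myproject_shared       # group read/write/execute; setgid so new files inherit the group
    $ ls -ld $VSC_DATA/myproject_shared           # check owner, group and permissions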

    Managing disk space

    The amount of disk space that a user can use on the various file systems is limited by quota on both the amount of disk space and the number of files. UGent users can view their quota and request upgrades on the account management site (users need to be in a VO (Virtual Organisation) to request additional quota; creating and joining a VO is also done through the account management website). At the other sites, checking your disk space use is still mostly done from the command line, and requesting more quota is done via e-mail.
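
    As a sketch of what such a command-line check could look like (the exact quota tooling differs per site, so treat these as generic Linux commands rather than the official VSC procedure):

    $ du -sh $VSC_HOME $VSC_DATA   # summarise how much space each directory currently uses
    $ quota -s                     # show usage and limits where classic Linux quota are in use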

    " - diff --git a/HtmlDump/file_0211.html b/HtmlDump/file_0211.html deleted file mode 100644 index b5934de01..000000000 --- a/HtmlDump/file_0211.html +++ /dev/null @@ -1,50 +0,0 @@ -

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of the data. The following locations are available:

    Since these directories are not necessarily mounted at the same locations on all sites, you should always (try to) use the environment variables that have been created for them.
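
    A minimal sketch of how these variables can be used, e.g., inside a job script; the scratch variable name $VSC_SCRATCH, the input file and the executable are assumptions for the sake of the example:

    cd $VSC_SCRATCH                          # work on fast temporary storage
    cp $VSC_DATA/myproject/input.dat .       # stage input data from the data directory
    ./my_simulation input.dat > output.log   # run the (hypothetical) program
    cp output.log $VSC_DATA/myproject/       # copy the results back before the job ends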

    Quota are enabled on the three directories, which means that the amount of data you can store there is limited by the operating system and not by the physical capacity of the disks. You can see your current usage and the current limits with the appropriate quota command, as explained on the page "How do I know how much disk space I am using?". The actual disk capacity, shared by all users, can be found on the "Available hardware" page.

    You will only receive a warning when you reach the soft limit of either quota. You only start losing data when you reach the hard limit: saving new files will then fail because you have no space left, and those new files are lost. You will, however, not be warned when this happens, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

    This directory is where you arrive by default when you log in to the cluster. Your shell refers to it as "~" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although larger files are allowed) and frequently used. All kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

    .ssh/           This directory contains some files necessary for you to log in to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
    .profile        This script defines some general settings for your sessions.
    .bashrc         This script is executed every time you start a session on the cluster: when you log in to the cluster and when a job starts. You could edit this file and, e.g., add "module load XYZ" if you want to load module XYZ automatically whenever you log in, although we do not recommend loading modules in your .bashrc.
    .bash_history   This file contains the commands you typed at your shell prompt, in case you need them again.

    Data directory

    In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume.

    Scratch space

    To enable fast writing from your jobs, a few extra file systems are available on the worker nodes. These extra file systems are called scratch directories, and can be used to store temporary and/or transient data (temporary results, or anything you just need during your job or your batch of jobs).

    You should remove any data from these systems once your processing has finished. There are no guarantees about how long your data will be kept on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch and can be anywhere between a day and a few weeks. We don't guarantee that these policies will remain unchanged, and we may adapt them if this seems necessary for the healthy operation of the cluster.

    Each type of scratch has his own use:

    " - diff --git a/HtmlDump/file_0213.html b/HtmlDump/file_0213.html deleted file mode 100644 index b5934de01..000000000 --- a/HtmlDump/file_0213.html +++ /dev/null @@ -1,50 +0,0 @@ -

    " - diff --git a/HtmlDump/file_0215.html b/HtmlDump/file_0215.html deleted file mode 100644 index 661545a35..000000000 --- a/HtmlDump/file_0215.html +++ /dev/null @@ -1,82 +0,0 @@ -
    " - diff --git a/HtmlDump/file_0217.html b/HtmlDump/file_0217.html deleted file mode 100644 index a2cf94ef4..000000000 --- a/HtmlDump/file_0217.html +++ /dev/null @@ -1,9 +0,0 @@ -

    To access certain cluster login nodes from outside your institute's network (e.g., from home), you need to set up a so-called VPN (Virtual Private Network). By setting up a VPN to your institute, your computer effectively becomes a computer on your institute's network and will appear as such to other services that you access. Your network traffic will be routed through your institute's network. If you want more information: there's an introductory page on HowStuffWorks and a page that is more for techies on Wikipedia.

    The VPN service is not provided by the VSC but by your institute's ICT centre, and they are your first contact for help. However, for your convenience, we present some pointers to that information: -

    " - diff --git a/HtmlDump/file_0219.html b/HtmlDump/file_0219.html deleted file mode 100644 index a300104db..000000000 --- a/HtmlDump/file_0219.html +++ /dev/null @@ -1,4 +0,0 @@ -

    Linux is the operating system on all of the VSC-clusters.

    " - diff --git a/HtmlDump/file_0221.html b/HtmlDump/file_0221.html deleted file mode 100644 index 09034098a..000000000 --- a/HtmlDump/file_0221.html +++ /dev/null @@ -1,57 +0,0 @@ -

    All the VSC clusters run the Linux operating system:

    This means that, when you connect to one of them, you get a command line interface, which looks something like this: -

    vsc30001@login1:~>
    -

    When you see this, we also say you are inside a "shell". The shell will accept your commands, and execute them. -

    Some of the most often used commands include: -

    ls - Shows you a list of files in the current directory -
    cd - Change current working directory -
    rm - Remove file or directory -
    joe - Text editor -
    echo - Prints its parameters to the screen -

    Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the 'echo' command: -

    $ echo This is a test
    -This is a test
    -

    Important here is the "$" sign in front of the first line. It should not be typed; it is a convention meaning "the rest of this line should be typed at your shell prompt". The lines not starting with the "$" sign are usually the feedback or output from the command. -

    More commands will be used in the rest of this text, and will be explained there if necessary. If not, you can usually get more information about a command, say 'ls', by trying one of the following: -

    $ ls --help
    -$ man ls
    -$ info ls
    -

    (You can exit the last two "manuals" by using the 'q' key.) -

    Tutorials

    For more exhaustive tutorials about Linux usage, please refer to the following sites: -

    " - diff --git a/HtmlDump/file_0223.html b/HtmlDump/file_0223.html deleted file mode 100644 index 64dafc020..000000000 --- a/HtmlDump/file_0223.html +++ /dev/null @@ -1,42 +0,0 @@ -

    Shell scripts

    Scripts are basically uncompiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a "parser" or an "interpreter". This is another program that understands the commands in the script and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

    Another very common scripting language is shell scripting. In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script. -

    Typically, the scripts in the following examples have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be: -

    echo \"Hello! This is my hostname:\"
    -hostname
    -

    You can type both lines at your shell prompt, and the result will be the following: -

    $ echo \"Hello! This is my hostname:\"
    -Hello! This is my hostname:
    -$ hostname
    -login1
    -

    Suppose we want to call this script "myhostname". You open a new file for editing and name it "myhostname": -

    $ nano myhostname
    -

    You get a "New File", where you can type the content of this new file. Help is available by pressing the 'Ctrl+G' key combination. You may want to familiarise yourself with the other options at some point; for now we will just type the content of the file, save it and exit the editor. -

    You can type the content of the script: -

    echo \"Hello! This is my hostname:\"
    -hostname
    -

    You save the file and exit the editor by pressing the 'Ctrl+X' key combination. Nano will ask you whether you want to save the file, after which you should be back at the prompt. -

    The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter may be either 'sh' or 'bash' (which are the same on the cluster). So start the script: -

    $ bash myhostname
    -Hello! This is my hostname:
    -login1
    -

    Congratulations, you just created and started your first shell script! -

    A more advanced way of executing your shell scripts is to make them executable on their own, so that you do not have to invoke the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to specify it in some way. The easiest way is to use the so-called "shebang" notation, created explicitly for this purpose: you put the line "#!/path/to/your/interpreter" at the top of your shell script. -

    You can find this path with the "which" command. In our case, since we use bash as an interpreter, we get the following path: -

    $ which bash
    -/bin/bash
    -

    We edit our script and change it with this information: -

    #!/bin/bash
    -echo \"Hello! This is my hostname:\"
    -hostname
    -

    Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the script. -

    Finally, we tell the operating system that this script is now executable. For this we change its file attributes: -

    $ chmod +x myhostname
    -

    Now you can start your script by simply executing it: -

    $ ./myhostname
    -Hello! This is my hostname:
    -login1
    -

    The same technique can be used for all other scripting languages, like Perl and Python. -

    Most scripting languages understand that lines beginning with "#" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results... -

    Links

    " - diff --git a/HtmlDump/file_0225.html b/HtmlDump/file_0225.html deleted file mode 100644 index ab8f5c27d..000000000 --- a/HtmlDump/file_0225.html +++ /dev/null @@ -1,39 +0,0 @@ -

    What is a VSC account?

    To log on to and use the VSC infrastructure, you need a so-called VSC account. There is only one exception: Users of the Brussels University Association who only need access to the VUB/ULB cluster Hydra can use their institute account.

    All VSC-accounts start with the letters "vsc" followed by a five-digit number. The first digit gives information about your home institution. There is no relationship with your name, nor is the information about the link between VSC-accounts and your name publicly accessible. -

    Unlike your institute account, VSC accounts don't use regular fixed passwords but a key pair consisting of a public and a private key, because that is a more secure technique for authentication. -

    Your VSC account is currently managed through your institute account. -

    Public/private key pairs

    A key pair consists of a private and a public key. The private key is stored on the computer(s) from which you want to access the VSC and always stays there. The public key is stored on the systems you want to access, granting access to anyone who can prove to have access to the corresponding private key. Therefore it is very important to protect your private key, as anybody who has access to it can access your VSC account. For extra security, the private key itself should be encrypted with a 'passphrase', to prevent anyone from using it even if they manage to copy it. You have to 'unlock' the private key by typing the passphrase when you use it. -

    How to generate such a key pair depends on your operating system. We describe the generation of key pairs in the client sections for Linux, Windows and macOS (formerly OS X). -
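
    As a minimal sketch for Linux or macOS with OpenSSH installed (Windows users should follow the client documentation mentioned above), a key pair can be generated as follows; the file name is just an example:

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc

    You will be asked for a passphrase, which protects the private key (~/.ssh/id_rsa_vsc); the matching public key is written to ~/.ssh/id_rsa_vsc.pub and is the file you upload when applying for your account.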

    Without your key pair, you won't be able to apply for a VSC account. -

    It is clear from the above that it is very important to protect your private key well. Therefore: -

    Applying for the account

    Depending on restrictions imposed by the institution, not all users might get a VSC account. We describe who can apply for an account in the sections of the local VSC clusters. -

    Generic procedure for academic researchers

    For most researchers from the Flemish universities, the procedure has been fully automated and works by using your institute account to request a VSC account. Check below for exceptions or if the generic procedure does not work. -

    Open the VSC account management web site and select your "home" institution. After you log in using your institution login and password, you will be asked to upload your public key. You will get an e-mail to confirm your application. After the account has been approved by the VSC, your account will be created and you will get a confirmation e-mail.

    Users from the KU Leuven and UHasselt association

    UHasselt has an agreement with KU Leuven to run a shared infrastructure. Therefore the procedure is the same for both institutions. -

    Who? -

    How? -

    How to start?

    Users of Ghent University Association

    All information about the access policy is available in English at the UGent HPC web pages. -

    Users of the Antwerp University Association (AUHA)

    Who? -

    How? -

    Users of Brussels University Association

    Troubleshooting

    " - diff --git a/HtmlDump/file_0227.html b/HtmlDump/file_0227.html deleted file mode 100644 index ca9ff98c9..000000000 --- a/HtmlDump/file_0227.html +++ /dev/null @@ -1,184 +0,0 @@ -

    MATLAB has to be loaded using the module utility prior to running it. This ensures that the environment is correctly set. Get the list of available versions of MATLAB using

    module avail matlab
    -

    (KU Leuven clusters) or

    module avail MATLAB

    (UAntwerpen and VUB clusters).

    Load a specific version by specifying the MATLAB version in the command -

    module load matlab/R2014a
    -

    or

    module load MATLAB/2014a
    -

    depending on the site you're at.

    Interactive use

    Batch use

    For any non-trivial calculation, it is strongly suggested that you use the PBS batch system. -

    Running a MATLAB script

    You first have to write a MATLAB m-file that executes the required calculation. Make sure the last command of this m-file is 'quit' or 'exit', otherwise MATLAB might wait forever for more commands ... -

    Example (to be saved, e.g., in testmatlabscript.m) : -

    ndim = 600;
    -a = rand(ndim,1)*10;
    -b = rand(1,ndim)*100;
    -c = a * b;
    -d = max(c);
    -e = min(d);
    -save('testmatlab', 'd', 'e');
    -exit;
    -

    You can now run this program (as a test, still on the login node, from the directory were you saved the file testmatlabscript.m): -

    matlab  -nodisplay -r testmatlabscript
    -

    The next thing is to write a small shell script, to be sent to the PBS Job System, so that the program can be executed on a compute node, rather than on the login node. -

    A simple example follows (to be saved, e.g., in testmatlabscript.sh): -

    #!/bin/bash -l
    -# The maximum duration of the program,
    -#   in the format [days:]hours:minutes:seconds
    -#PBS -l walltime=01:00:00
    -# the requested amount of RAM
    -#PBS -l pmem=950mb
    -# The name of your job (used in mail, outputfile, showq,...)
    -#PBS -N matlab_test_job
    -# Set the correct environment for matlab
    -module load matlab
    -# Go into the directory from where 'qsub' was run
    -cd $PBS_O_WORKDIR
    -# Start matlab, specify the correct command-file ...
    -matlab -nojvm -nodisplay -r testmatlabscript
    -

    Now you submit your job with -

    $ qsub testmatlabscript.sh
    -

    and you get the jobid that was assigned to your job. With -

    qstat
    -

    you get an overview of the status of your jobs. When the job has run, output will be available in the file <jobname>.o<jobid> in the directory where you submitted the job from. In the case of the file testmatlabscript.m above, a file testmatlab.mat will have been created with the calculated data d and e; you can load this file into MATLAB for further processing. -
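
    As a quick, illustrative check of those results from the command line (still with the matlab module loaded, and run from the directory that contains testmatlab.mat):

    matlab -nodisplay -r "load('testmatlab'); disp(e); exit"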

    More commands and options of the Job System are described in the general documentation on running jobs and in particular on the page "Submitting and managing jobs". -

    Running a MATLAB function

    If instead of a script, a MATLAB function is used, parameters can be passed into the function. -

    Example (to be saved, e.g., in testmatlabfunction.m) : -

    function testmatlabfunction(input1,input2)
    -% source: https://wiki.inf.ed.ac.uk/ANC/MatlabComputing
    -% change arguments to numerics if necessary - only when compiling code
    -if ~isnumeric(input1)
    -   input1n = str2num(input1);
    -   input2n = str2num(input2);
    -else
    -   input1n = input1;
    -   input2n = input2;
    -end
    -sumofinputs = input1n + input2n;
    -outputfilename = ['testfunction_' num2str(input1n) '_' num2str(input2n)];
    -save(outputfilename, 'input1n', 'input2n', 'sumofinputs');
    -exit;
    -

    You can now run this program (as a test, still on the login node, from the directory were you saved the file testmatlabfunction.m): -

    matlab -nodisplay -r "testmatlabfunction 3 6"
    -

    Note the quotes around the function name and the parameters. Note also that the function name does not include the *.m extension. -

    MATLAB compiler

    Each job requires a MATLAB license while running. If you start lots of jobs, you'll use lots of licenses. When all licenses are in use, your further jobs will fail, and you'll block access to MATLAB for other people at your site. -

    However, when you compile your MATLAB program, no licenses are needed at runtime. -

    Compilation of MATLAB files is relatively easy with the MATLAB 'mcc' compiler. It works for 'function m-files' and for 'script m-files'. 'function m-files' are however preferred. -

    To deploy a MATLAB program as a standalone application, load the module for MATLAB as a first step and compile the code in a second step with the mcc command. -

    If we want to compile a MATLAB program 'main.m', the corresponding command line should be: -

    mcc  -v  -R -singleCompThread  -m  main.m
    -

    Where the options are: -

    The deployed executable is compiled to run using a single thread via the option -singleCompThread. This is important when a number of processes are to run concurrently on the same node (e.g. worker framework). -

    Notes

    Example 1: Simple matlab script file

    function a = fibonacci(n)
    -% FIBONACCI Calculate the fibonacci value of n.
    -% When complied as standalone function,
    -% arguments are always passed as strings, not nums ...
    -if (isstr(n))
    -  n = str2num(n);
    -end;
    -if (length(n)~=1) || (fix(n) ~= n) || (n < 0)
    -  error(['MATLAB:factorial:NNotPositiveInteger', ...
    -        'N must be a positive integer.']);
    -end
    -first = 0;second = 1;
    -for i=1:n-1
    -    next = first+second;
    -    first=second;
    -    second=next;
    -end
    -% When called from a compiled application, display result
    -if (isdeployed)
    -  disp(sprintf('Fibonacci %d -> %d' , n,first))
    -end
    -% Also return the result, so that the function remains usable
    -% from other Matlab scripts.
    -a=first;
    -
     mcc -m fibonacci
    -
    ./fibonacci 6
    -Fibonacci 6 -> 5
    -$ ./fibonacci 8
    -Fibonacci 8 -> 13
    -$ ./fibonacci 45
    -Fibonacci 45 -> 701408733
    -
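
    To run such a compiled executable on a compute node, a job script in the same style as the one shown earlier can be used. This is only a sketch: the resource values are placeholders, and it assumes that loading the matlab module (as was done when compiling) also makes the required runtime libraries available on the compute node:

    #!/bin/bash -l
    #PBS -l walltime=00:10:00
    #PBS -l pmem=1gb
    #PBS -N fibonacci_compiled
    # Make the MATLAB runtime libraries available (assumption, see above)
    module load matlab
    # Go to the directory from where 'qsub' was run
    cd $PBS_O_WORKDIR
    # Run the compiled program; no MATLAB license is consumed at runtime
    ./fibonacci 45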

    Example 2 : Function that uses other Matlab files

    function multi_fibo()
    -%MULTIFIBO Calls FIBONACCI multiple times in a loop
    -% Function calculates Fibonacci number for a matrix by calling the
    -% fibonacci function in a loop. Compiling this file would automatically
    -% compile the fibonacci function also because dependencies are
    -% automatically checked.
    -n=10:20
    -if max(n)<0
    -    f = NaN;
    -else
    -    [r c] = size(n);
    -    for i = 1:r %#ok
    -        for j = 1:c %#ok
    -            try
    -                f(i,j) = fibonacci(n(i,j));
    -            catch
    -                f(i,j) = NaN;
    -            end
    -        end
    -    end
    -end
    -
    mcc -m multi_fibo
    -
    ./multi_fibo
    -n =
    -    10    11    12    13    14    15    16    17    18    19    20
    -Fibonacci 10 -> 34
    -Fibonacci 11 -> 55
    -Fibonacci 12 -> 89
    -Fibonacci 13 -> 144
    -Fibonacci 14 -> 233
    -Fibonacci 15 -> 377
    -Fibonacci 16 -> 610
    -Fibonacci 17 -> 987
    -Fibonacci 18 -> 1597
    -Fibonacci 19 -> 2584
    -Fibonacci 20 -> 4181
    -f =
    -          34          55          89         144         233         
    -377         610         987        1597        2584        4181
    -

    Example 3 : Function that used other Matlab files in other directories

    mcc -m -I /path/to/MyMatlabScripts1/ -I /path/to/MyMatlabScripts2 .... 
    --I /path/to/MyMatlabScriptsN multi_fibo
    -

    (on a single line). -

    More info on the MATLAB Compiler

    Matlab compiler documentation on the Mathworks website. -

    " - diff --git a/HtmlDump/file_0229.html b/HtmlDump/file_0229.html deleted file mode 100644 index 86129d1ee..000000000 --- a/HtmlDump/file_0229.html +++ /dev/null @@ -1,5 +0,0 @@ -

    Matlab has several products to facilitate parallel computing, e.g.

    " - diff --git a/HtmlDump/file_0231.html b/HtmlDump/file_0231.html deleted file mode 100644 index 123d48144..000000000 --- a/HtmlDump/file_0231.html +++ /dev/null @@ -1,18 +0,0 @@ -

    Purpose

    Here it is shown how to use Rscript and pass arguments to an R script.

    Prerequisites

    It is assumed that the reader is familiar with the use of R as well as R scripting, and is familiar with the Linux bash shell.

    Using Rscript and command line arguments

    When performing computation on the cluster using R, it is necessary to run those scripts from the command line, rather than interactively using R's graphical user interface. Consider the following R function that is defined in, e.g., 'logistic.R':

    logistic <- function(r, x) {
    -    r*x*(1.0 - x)
    -}

    From R's GUI interface, you typically use this from the console as follows:

    > source("logistic.R")
    -> logistic(3.2, 0.5)

    It is trivial to write an R script 'logistic-wrapper.R' that can be run from the command line, and that takes two arguments, the first being 'r', the second 'x'.

    args <- commandArgs(TRUE)
    -r <- as.double(args[1])
    -x <- as.double(args[2])
    -
    -source(\"logistic.R\")
    -
    -logistic(r, x)

    The first line of this script stores all arguments passed to the script in the array 'args'. The second (third) line converts the first (second) element of that array from a string to a double precision number using the function 'as.double', and stores it in 'r' ('x').

    Now from the linux command line, one can run the script above for r = 3.2 and x = 0.5 as follows:

    $ Rscript logistic-wrapper.R 3.2 0.5

    Note that you should have loaded the appropriate R module, e.g.,

    $ module load R

    Suppose now that the script needs to be extended to iterate the logistic map 'n' times, where the latter value is passed as the third argument to the script.

    args <- commandArgs(TRUE)
    -r <- as.double(args[1])
    -x <- as.double(args[2])
    -n <- as.integer(args[3])
    -
    -source(\"logistic.R\")
    -
    -for (i in 1:n) x <- logistic(r, x)
    -print(x)

    Note that since the third argument represents the number of iterations, it should be interpreted as an integer value, and hence be converted appropriately using the function 'as.integer'.

    The script is now invoked from the linux command line with three parameters as follows:

$ Rscript logistic-wrapper.R 3.2 0.5 100

    Note that if you pass an argument that is to be interpreted as a string in your R program, no conversion is needed, e.g.,

    name <- args[4]

    Here it is assumed that the 'name' is passed as the fourth command line argument.
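For instance, if the extended script stores this fourth argument in 'name', the invocation could look as follows (a minimal sketch; the value 'experiment1' is only illustrative):

$ Rscript logistic-wrapper.R 3.2 0.5 100 experiment1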

    " - diff --git a/HtmlDump/file_0233.html b/HtmlDump/file_0233.html deleted file mode 100644 index 704487adb..000000000 --- a/HtmlDump/file_0233.html +++ /dev/null @@ -1,63 +0,0 @@ -

    Purpose

Although R is a nice and fairly complete software package for statistical analysis, there are nevertheless situations where it is desirable to extend R. This may be either to add functionality that is implemented in some C library, or to eliminate performance bottlenecks in R code. In this how-to it is assumed that the user wants to call their own C functions from R.

    Prerequisites

    It is assumed that the reader is familiar with the use of R as well as R scripting, and is a reasonably proficient C programmer. Specifically the reader should be familiar with the use of pointers in C.

    Integration step by step

    Before all else, first load the appropriate R module to prepare your environment, e.g.,

    $ module load R

    If you want a specific version of R, you can first check which versions are available using

    $ module av R

    and then load the appropriate version of the module, e.g.,

    $ module load R/3.1.1-intel-2014b

    A first example

    No tutorial is complete without the mandatory 'hello world' example. The C code in file 'myRLib.c' is shown below:

#include <R.h>

void sayHello(int *n) {
    int i;
    for (i = 0; i < *n; i++)
        Rprintf("hello world!\n");
}

Three things should be noted at this point:

1. the 'R.h' header file has to be included; this file is part of the R distribution, and R knows where to find it;
2. function parameters are always pointers; and
3. to print to the R console, 'Rprintf' rather than 'printf' should be used.

From this 'myRLib.c' file a shared library can be built in one convenient step:

$ R CMD SHLIB myRLib.c

    If all goes well, i.e., if the source code has no syntax errors and all functions have been defined, this command will produce a shared library called 'myRLib.so'.

    To use this function from within R in a convenient way, a simple R wrapper can be defined in 'myRLib.R':

    dyn.load(\"myRLib.so\");
    -sayHello <- function(n) {
    -    .C(\"sayHello\", as.integer(n))
    -}

In this script, the first line loads the shared library containing the 'sayHello' function. The second line defines a convenient wrapper to simplify calling the C function from R. The C function is called using the '.C' function. The latter's first parameter is the name of the C function to be called, i.e., 'sayHello'; all other parameters will be passed to the C function, in this case the number of times that 'sayHello' will say hello, as an integer.

    Now, R can be started to be used interactively as usual, i.e.,

    $ R

    In R, we first source the library's definitions in 'myRLib.R', so that the wrapper functions can be used:

    > source(\"myRLib.R\")
    -> sayHello(2)
    -hello world!
    -hello world!
    -[[1]]
    -[1] 2

    Note that the 'sayHello' function is not particularly interesting since it does not return any value. The next example will illustrate how to accomplish this.

    A second, more engaging example

    Given R's pervasive use of vectors, a simple example of a function that takes a vector of real numbers as input, and returns its components' sum as output is shown next.

#include <R.h>

/* sayHello part not shown */

void mySum(double *a, int *n, double *s) {
    int i;
    *s = 0.0;
    for (i = 0; i < *n; i++)
        *s += a[i];
}

Note that both 'a' and 's' are declared as pointers, the former being used as the address of the first array element, the latter as an address to store a double value, i.e., the sum of the array's components.

To produce the shared library, it is built using the appropriate R command as before:

    $ R CMD SHLIB myRLib.c

The wrapper code for this function is slightly more interesting since it will be programmed to provide a convenient "function-feel".

    dyn.load(\"myRLib.so\");
    -
    -# sayHello wrapper not shown
    -
    -mySum <- function(a) {
    -    n <- length(a);
    -    result <- .C(\"mySum\", as.double(a), as.integer(n), s = double(1));
    -    result$s
    -}

Note that the wrapper function is now used to do some more work:

1. it preprocesses the input by calculating the length of the input vector;
2. it initializes 's', the parameter that will be used in the C function to store the result in; and
3. it captures the result from the call to the C function, which contains all parameters passed to the function, extracting the actual result of the computation in the last statement.

    From R, 'mySum' can now easily be called:

    > source(\"myRLib.R\")
    -> mySum(c(1, 3, 8))
    -[1] 12

    Note that 'mySum' will probably not be faster than R's own 'sum' function.

    A last example

Functions can return vectors as well, so this last example illustrates how to accomplish this. The library is extended to:

#include <R.h>

/* sayHello and mySum not shown */

void myMult(double *a, int *n, double *lambda, double *b) {
    int i;
    for (i = 0; i < *n; i++)
        b[i] = (*lambda)*a[i];
}

    The semantics of the function is simply to take a vector and a real number as input, and return a vector of which each component is the product of the corresponding component in the original vector with that real number.

After building the shared library as before, we can extend the wrapper script for this new function as follows:

    dyn.load(\"myRLib.so\");
    -
    -# sayHello and mySum wrapper not shown
    -
    -myMult <- function(a, lambda) {
    -    n <- length(a);
    -    result <- .C(\"myMult\", as.double(a), as.integer(n),
    -                 as.double(lambda), m = double(n));
    -    result$m
    -}

    From within R, 'myMult' can be used as expected.

    > source(\"myRLib.R\")
    -> myMult(c(1, 3, 8), 9)
    -[1]  9 27 72
    -> mySum(myMult(c(1, 3, 8), 9))
    -[1] 108

    Further reading

    Obviously, this text is just for the impatient. More in-depth documentation can be found on the nearest CRAN site.

    " - diff --git a/HtmlDump/file_0235.html b/HtmlDump/file_0235.html deleted file mode 100644 index 52c02f725..000000000 --- a/HtmlDump/file_0235.html +++ /dev/null @@ -1,21 +0,0 @@ -

    Programming paradigms and models

    Development tools

    Libraries

    Integrating code with software packages

    " - diff --git a/HtmlDump/file_0237.html b/HtmlDump/file_0237.html deleted file mode 100644 index 1dfecb4f9..000000000 --- a/HtmlDump/file_0237.html +++ /dev/null @@ -1,83 +0,0 @@ -

    Purpose

MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation." MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today.

    -

The current version of the MPI standard is 3.1, but only the newest implementations implement the full standard. The previous specifications are the MPI 2.0 specification with minor updates in the MPI-2.1 and MPI-2.2 specifications. The standardisation body for MPI is the MPI Forum.

    -

    Some background information

    -

MPI-1.0 (1994) and its updates MPI-1.1 (1995), MPI-1.2 (1997) and MPI-1.3 (1998) concentrate on point-to-point communication (send/receive) and global operations in a static process topology. Major additions in MPI-2.0 (1997) and its updates MPI-2.1 (2008) and MPI-2.2 (2009) are one-sided communication (get/put), dynamic process management and a model for parallel I/O. MPI-3.0 (2012) adds non-blocking collectives, a major update of the one-sided communication model and neighbourhood collectives on graph topologies. The first update, MPI-3.1, was released in 2015, and work is ongoing on the next major update, MPI-4.0.

    -

The two dominant Open Source implementations are Open MPI and MPICH. The latter has been through a couple of name changes: it was originally conceived in the early '90s as MPICH, then the complete rewrite was renamed to MPICH2, but as this name caused confusion once the MPI standard evolved into MPI 3.x, the name was changed back to MPICH and the version number bumped to 3.0. MVAPICH, developed at Ohio State University, is an offspring of MPICH further optimised for InfiniBand and some other high-performance interconnect technologies. Most other MPI implementations are derived from one of these implementations.

    -

    At the VSC we offer both implementations: Open MPI is offered with the GNU compilers in the FOSS toolchain, while the Intel MPI used in the Intel toolchain is derived from the MPICH code base. -

    -

    Prerequisites

    -

    You have a program that uses an MPI library, either developed by you, or by others. In the latter case, the program's documentation should mention the MPI library it was developed with. -

    -

    Implementations

    -

    On VSC clusters, several MPI implementations are installed. We provide two MPI implementations on all newer machines that can support those implementations: -

    -
1. Intel MPI in the intel toolchain:
   - Intel MPI 4.1 (intel/2014a and intel/2014b toolchains) implements the MPI-2.2 specification
   - Intel MPI 5.0 (intel/2015a and intel/2015b toolchains) and Intel MPI 5.1 (intel/2016a and intel/2016b toolchains) implement the MPI-3.0 specification
2. Open MPI in the foss toolchain:
   - Open MPI 1.6 (foss/2014a toolchain) only implements the MPI-2.1 specification
   - Open MPI 1.8 (foss/2014b, foss/2015a and foss/2015b toolchains) and Open MPI 1.10 (foss/2016a and foss/2016b) implement the MPI-3.0 specification

When developing your own software, this is the preferred order to select an implementation. The performance should be very similar; however, more development tools are available for Intel MPI (e.g., ITAC for performance monitoring).

    -

    Specialised hardware sometimes requires specialised MPI-libraries. -

    - -

    Several other implementations may be installed, e.g., MVAPICH, but we assume you know what you're doing if you choose to use them. -

    -

    We also assume you are already familiar with the job submission procedure. If not, check the \"Running jobs\" section first. -

    -

    Compiling and running

    -

See the documentation about the toolchains.

    -
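As a minimal sketch, compiling and running a C MPI program with the compiler wrappers of both toolchains could look as follows (the toolchain versions and file names are only illustrative; check which modules are available on your cluster):

$ module load intel/2016b
$ mpiicc -O2 mympiapp.c -o mympiapp
$ mpirun -np 4 ./mympiapp

or, with the foss toolchain,

$ module load foss/2016b
$ mpicc -O2 mympiapp.c -o mympiapp
$ mpirun -np 4 ./mympiapp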

    Debugging

    -

    For debugging, we recommend the ARM DDT debugger (formerly Allinea DDT, module allinea-ddt). Video tutorials are available on the Arm web site. (KU Leuven-only). -

    -
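A minimal sketch of starting a debugging session (the module version and executable name are illustrative; the ddt command starts the graphical interface):

$ module load allinea-ddt
$ ddt ./mympiapp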

    When using the intel toolchain, Intel's Trace Analyser & Collector (ITAC) may also prove useful. -

    -

    Profiling

    -

    To profile MPI applications, one may use Arm MAP (formerly Allinea MAP), or Scalasca. (KU Leuven-only) -

    -

    Further information

    -" - diff --git a/HtmlDump/file_0239.html b/HtmlDump/file_0239.html deleted file mode 100644 index b23415ada..000000000 --- a/HtmlDump/file_0239.html +++ /dev/null @@ -1,61 +0,0 @@ -

    Purpose

    OpenMP (Open Multi-Processing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior. -

    -

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer. The current version of the OpenMP specification is 4.0. It was released in July 2013 and is probably the biggest update of the specification so far. However, not all compilers fully support this standard yet. The previous specifications were the OpenMP 3.1 specification (July 2011) and the OpenMP 3.0 specification (May 2008). Versions prior to 4.0 concentrated on exploiting thread-level parallelism on multicore machines in a portable way, while version 4.0 of the specifications adds support for vectorisation for the SIMD instruction sets on modern CPUs and offload of computations to accelerators (GPU, Xeon Phi, ...). The latter feature is an alternative to the use of OpenACC directives.

    -

    Prerequisites

    -

    You should have a program that uses the OpenMP API. -

    -

    Implementations

    -

    On the VSC clusters, the following compilers support OpenMP: -

    -
1. Intel compilers in the intel toolchain:
   - The Intel compiler version 13.1 (intel/2014a and intel/2014b toolchains) implements the OpenMP 3.1 specification.
   - The Intel compiler version 14.0 (installed on some systems outside the toolchains, sometimes in a package with icc/2013_sp1 in its name) implements the OpenMP 3.1 specification and some elements of the OpenMP 4.0 specification (which was only just approved when the compiler was released).
   - The Intel compiler version 15.0 (intel/2015a and intel/2015b toolchains) supports all of the OpenMP 4.0 specification except user-defined reductions. It supports offload to a Xeon Phi system (and to some Intel processor-integrated graphics, but that is not relevant on the VSC clusters).
   - The Intel compiler version 16.0 (intel/2016a and intel/2016b toolchains) offers almost complete OpenMP 4.0 support. User-defined reductions are now also supported.
2. GCC in the foss toolchain:
   - GCC versions 4.8.2 (foss/2014a toolchain) and 4.8.3 (foss/2014b toolchain) support the OpenMP 3.1 specification.
   - GCC versions 4.9.2 (foss/2015a toolchain) and 4.9.3 (foss/2015b and foss/2016a toolchains) support the full OpenMP 4.0 specification. However, "offloaded" code is run on the CPU and not on the GPU or any other accelerator. (In fact, OpenMP 4.0 is supported for C/C++ starting in GCC 4.9.0 and for Fortran in GCC 4.9.1.)
   - GCC 5.4 (foss/2016b toolchain) offers full OpenMP 4.0 support and has the basics built in to support offloading.
   - GCC 6.x (not yet part of a toolchain) offers full OpenMP 4.5 support in C and C++, including offloading to some variants of the Xeon Phi and to AMD HSAIL, and some support for OpenACC on NVIDIA GPUs.

    When developing your own software, this is the preferred order to select the toolchain. The GCC OpenMP runtime is for most applications inferior to the Intel implementation. -

    -

    We also assume you are already familiar with the job submission procedure. If not, check the \"Running jobs\" section first. -

    -

    Compiling OpenMP code

    -

    See the instructions on the page about toolchains for compiling OpenMP code with the Intel and GNU compilers. -

    -
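As a minimal sketch (the flags shown are the usual OpenMP flags for these compiler generations; toolchain versions and file names are only illustrative):

$ module load intel/2016b
$ icc -qopenmp myomp.c -o myomp     (older Intel compilers use -openmp)

$ module load foss/2016b
$ gcc -fopenmp myomp.c -o myomp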

Note that it is in fact possible to link OpenMP object code compiled with gcc and the Intel compiler, on the condition that the Intel OpenMP libraries and runtime are used (e.g., by linking using icc with the -openmp option), but the Intel manual is not clear which versions of gcc and icc work together well. This is only for specialists but may be useful if you only have access to object files and not to the full source code.

    -

    Running OpenMP programs

    -

    Since OpenMP is intended for use in a shared memory context, when submitting a job to the queue system, remember to request a single node (i.e., -l nodes=1) and as many processors as you need parallel threads (e.g., -l ppn=4). The latter should not exceed the number of cores on the machine the job runs on. For relevant hardware information, please consult the list of available hardware. -

    -

You may have to set the number of cores that the program should use by hand, e.g., when you don't use all cores on a node, because the OpenMP runtime typically detects the total number of cores in the machine rather than the number of cores assigned to your job. Depending on the program, this may be through a command line option to the executable, a value in the input file, or the environment variable OMP_NUM_THREADS. Failing to set this value may result in threads competing with each other for resources such as cache and access to the CPU, and thus lower performance.

    -
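A minimal job script sketch (the executable name is illustrative; adjust the requested resources to your needs):

#!/bin/bash -l
#PBS -l nodes=1:ppn=4
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=4
./myomp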

    Further information

    -" - diff --git a/HtmlDump/file_0241.html b/HtmlDump/file_0241.html deleted file mode 100644 index af168d490..000000000 --- a/HtmlDump/file_0241.html +++ /dev/null @@ -1,24 +0,0 @@ -

    What are toolchains?

A toolchain is a collection of tools to build (HPC) software consistently. It consists of a set of compilers with accompanying libraries, such as an MPI library for distributed-memory programs.

Toolchains at the VSC are versioned, and refreshed twice a year. All software available on the cluster is rebuilt when a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of their definition, followed by either a or b, e.g., 2014a. Note that the software components are not necessarily the most recent releases; rather, they are selected for stability and reliability.

    Available toolchains at the VSC

Two toolchain flavors are standard across the VSC on all machines that can support them: the intel toolchain (built around the Intel compilers and Intel MPI) and the foss toolchain (built around the GNU compilers and Open MPI).
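As a minimal sketch, checking which versions are installed and loading one works just like for any other module (the version shown is only illustrative):

$ module av intel
$ module load intel/2016b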

    It may be of interest to note that the Intel C/C++ compilers are more strict with respect to the standards than the GCC C/C++ compilers, while for Fortran, the GCC Fortran compiler tracks the standard more closely, while Intel's Fortran allows for many extensions added during Fortran's long history. When developing code, one should always build with both compiler suites, and eliminate all warnings. -

    On average, the Intel compiler suite produces executables that are 5 to 10 % faster than those generated using the GCC compiler suite. However, for individual applications the differences may be more significant with sometimes significantly faster code produced by the Intel compilers while on other applications the GNU compiler may produce much faster code. -

    Additional toolchains may be defined on specialised hardware to extract the maximum performance from that hardware. -

    For detailed documentation on each of these toolchains, we refer to the pages linked above in this document.

    " - diff --git a/HtmlDump/file_0243.html b/HtmlDump/file_0243.html deleted file mode 100644 index b0f506cdd..000000000 --- a/HtmlDump/file_0243.html +++ /dev/null @@ -1,124 +0,0 @@ -

    Why use a version control system?

A version control system (VCS) helps you manage the changes to the source files of your project, and most systems also support team development. Since it remembers the history of your files, you can always return to an earlier version if you've screwed up making changes. By adding comments when you store a new version in the VCS it also becomes much easier to track which change was made for what purpose at what time. And if you develop in a team, it helps to organise making coordinated changes to the code base, and it supports co-development even across file system borders (e.g., when working with a remote partner).

    Most Integrated Development Environments (IDE) offer support for one or more version control systems. E.g., Eclipse, the IDE which we recommend for the development of C/C++ or Fortran codes on clusters, supports all of the systems mentioned on this page, some out-of-the-box and others by adding an additional package. The systems mentioned on this page are all available on Linux, OS X and Windows (through the UNIX emulation layer cygwin and all except RCS also in at least one native implementation). -

    Types of version control systems

    An excellent introduction to the various types of version control systems can be found in the book Pro GIT by Scott Chacon and Ben Straub. -

    Local systems

    These first generation systems use a local database that stores previous versions of files. One of the most popular examples of this type is the venerable RCS (Revision Control System) system, distributed with many UNIX-like systems. It works by keeping patch sets (differences between various versions of a file) in a special format on disk. It can then return to a previous version of a file by adding up all the patches. -

RCS and other "local systems" are very outdated. Hence we advise you to use one of the systems from the next two categories.

    Links: -

    Centralised systems

    Centralised version control systems were developed to enable people to collaborate on code and documents with people on different systems that may not share a common file system. The version files are now maintained by a server to which multiple clients can connect and check out files, and the systems help to manage concurrent changes to a file by several users (through a copy-modify-merge procedure). Popular examples of this type are CVS (Concurrent Versions System) and SVN (Subversion). Of those two, SVN is the more recent system while CVS is no longer further developed and less and less used. -

    Links: -

    Distributed systems

The weak point of the centralised systems is that they require you to be online to checkout a file or to commit a revision. In a distributed system, the clients mirror the complete repository and not just the latest version of each file. When online, the user can then synchronise the local repository with the copy on a server. In a single-user scenario you can still keep all your files in the local repository without using a server, and hence it doesn't make sense anymore to still use one of the old local-only version control systems. The disadvantage of a distributed system is that you are not forced to synchronise after every commit, so that the local repositories of various users on a project can be very much out-of-sync with each other, making the job harder when those versions have to be merged again.

    Popular examples of systems of this type are Git (originally developed to manage the Linux kernel project) and Mercurial (sometimes abbreviated as Hg, chemists will understand why). -

    Links: -

    Cloud services

    Many companies offer hosting services for SVN, Git or Mercurial repositories in the cloud. Google, e.g., for subversion hosting service, git hosting service or mercurial hosting service. Several offer free public hosting for Open Source projects or have free access for academic accounts. Some noteworthy ones that are popular for academic projects are: -

    However, we urge you to always carefully check the terms-of-use of these services to assure that, e.g., the way they deal with intellectual property is in line with your institute's requirements. -

    Which one should I use?

    It is not up to us to make this choice for you, but here are a number of elements that you should take into account: -

    " - diff --git a/HtmlDump/file_0245.html b/HtmlDump/file_0245.html deleted file mode 100644 index 63018372b..000000000 --- a/HtmlDump/file_0245.html +++ /dev/null @@ -1,226 +0,0 @@ -

    This tutorial explains some of the basic use of the git command line client. It does not aim to be a complete tutorial on git but rather a brief introduction explaining some of the issues and showing you how to house your git repository at the VSC. At the end of this text, we provide some links to further and more complete documentation.

    -

    Preparing your local machine for using git

    -

    It is best to first configure git on your local machine using git config. -

    -
git config --global user.name "Kurt Lust"
git config --global user.email kurt.lust@uantwerpen.be
git config --global core.editor vi
    -

    These settings are stored in the file .gitconfig in your home directory (OS X, Linux, Cygwin). The file is a simple user-editable text file. -

    -

    Some remarks on accessing a remote repository using command line tools

    -

    Many cloud git hosting services offer a choice between ssh and https access to a repository through the git command line tools. If you want to use one of the VSC clusters for a remote repository, you'll have to use the ssh protocol. -

    -

    Https access

    -

    Https access uses your account and password of the cloud service. Every time you access the remote repository, the git command-line client will ask for the password. This can be solved by using a credential manager in recent versions of the git client (1.7.9 and newer). -

    - -

    Ssh access

    -

    The git command line client uses the standard ssh mechanism to manage ssh keys. It is sufficient to use an ssh agent (as you are probably using already when you log on to the VSC clusters) and load the key for the service in the agent (using ssh-add). -

    -
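A minimal sketch (the key file name is illustrative; use the key you registered with the service or cluster):

$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa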

    Setting up a new repository

    -

    Getting an existing code base into a local repository

    -

    Git stores its repository with your code in a hidden directory. -

    -
1. Go to the top directory of what has to become your repository (most likely the top directory of the files that you want to version control) and run
   git init
   This will create a hidden .git subdirectory with the local repository database.
2. Now you can add the files to your new repository, e.g., if you want to add all files:
   git add .
   (don't forget the dot at the end, it means add the current directory!)
3. Next you can do your first commit:
   git commit -m "Project brought under Git control"
4. And you're done! The current version of your files is now stored in your local repository. Try, e.g.,
   git show
   git status
   to get some info about the repository.

    Bringing an existing local repository into a cloud service

    -

    Here we assume that you have a local repository and now want to put it into a cloud service to collaborate with others on a project. -

    -

    You may want to make a backup of your local repository at this point in case things go wrong. -

    -
1. Create an empty project on your favorite cloud service. Follow the instructions provided by the service.
2. Now you'll need to tell your local repository about the remote one. Most cloud services have a button to show you the URL to the remote repository that you have just set up, either using the http or the ssh-based protocol. E.g.,
   git remote add origin ssh://git@bitbucket.org/username/myproject.git
   connects to the repository myproject on Bitbucket. It will be known on your computer by the short name origin. The short name saves you from having to use the full repository URL each time you want to refer to it.
3. Push the code from your local repository into the remote repository:
   git push -u --mirror origin
   will create a mirror of your local repository on the remote site. Use the --mirror option with care, as it may destroy part of your remote repository if that one is not empty and contains information that is not contained in your local repository!

You can also use the procedure to create a so-called bare remote repository in your account on the VSC clusters. A bare repository is a repository that does not also contain its own source file tree, so you cannot edit directly in that directory and also use it as a local repository on the cluster. However, you can push to and pull from that repository, so it will work just like a repository on one of the hosting services. The access to the repository will be through ssh. The first two steps have to be modified:

    -
1. To create an empty repository, log in to your home cluster and go to the directory where you want to store the repository. Now create the repository (assuming its name is repository-name):
   git init --bare repository-name
   This will create the directory repository-name that stores a lot of files which together are your git repository.
2. The URL to the repository will be of the form vscXXXXX@vsc.login.node:<full path to the repository>, e.g., if you're vsc20XYZ (a UAntwerpen account) and the repository is in the subdirectory testrepository of your data directory, the URL is vsc20XYZ@login.hp.uantwerpen.be:/data/antwerpen/20X/vsc20XYZ/testrepository. So use this URL in the git remote add command. You don't need to specify ssh:// in the URL if you use the scp syntax as we did in this example.

    The access to this repository will be regulated through the file access permissions for that subdirectory. Everybody who has read and write access to that directory, can also use the repository (but using his/her own login name in the URL of course as VSC accounts should not be shared by multiple users). -

    -

NOTE: Technically speaking, git can also be used in full peer-to-peer mode where all repos also have a source directory in which files can be edited. It does require a good organisation of the work flow. E.g., different people in the team should not be working in the same branch, as one cannot push changes to a repo for the branch that is active (i.e., mirrored in the source files) since this may create an inconsistent state. So our advice is that if you want to use the cluster as a git server and also edit files on the cluster, you simply use two repositories: one that you use as a local repository in which you also work, and one that is only used as a central repository to which the various users push changes and from which they pull changes.

    -

    As a clone from an existing local or remote repository

    -

    Another way to create a new repository is from an existing repository on your local machine or on a remote service. The latter is useful, e.g., if you want to join an existing project and create a local copy of the remote repository on your machine to do your own work. This can be accomplished through cloning of a repository, a very easy operation in git as there is a command that combines all necessary steps in a single command: -

    -
1. Go to the directory where you want to store the repository and corresponding source tree (in a subdirectory of that directory called directoryname).
2. You have to know the URL of the repository that you want to clone. Once you know the URL, all you need to do is
   git clone URL directoryname
   where you replace URL with the URL of the repository that you want to clone.

Note: If you start from scratch and want to use a remote repository in one of the cloud services, it might be easiest to first create a repository over there using the instructions of the server system or cloud service, and then clone that (even if it is still empty) to a local repository on which you actually work.

    -

    Working with your local repository

    -

    If you are only using a local repository, the basic workflow to add the modifications to the git database is fairly simple: -

    -
1. Edit the files.
2. Add the modified files to the index using:
   git add filename
   This process is called staging.
3. You can continue to further edit files if you want and also stage them.
4. Commit all staged files to the repository:
   git commit
   Git will ask you to enter a message describing the commit, or you can specify a message with the -m option.

This is not very exciting though. Version control becomes really useful once you want to return to a previous version, or create a branch of the code to try something out or fix a bug without immediately changing the main branch of the code (that you might be using for production use). You can then merge the modifications back into your main code. Branching and merging branches are essential in all this, and if you use git to collaborate with others you'll be confronted with branches sooner rather than later. In fact, every git repository has at least one branch, the main branch, as

    -

    git status -

    -

    shows. -

    -

    Assume you want to start a new branch to try something without affecting your main code, e.g., because you also want to further evolve your main code branch while you're working. You can create a branch (let's assume we name it branch2) with -

    -

    git branch branch2 -

    -

    And then switch to it with -

    -

    git checkout branch2 -

    -

    Or combine both steps with -

    -

    git checkout -b branch2. -

    -

    You can then switch between this branch and the master branch with -

    -

    git checkout master -

    -

    and -

    -

    git checkout branch2 -

    -

    at will and make updates to the active branch using the regular git add and git commit cycle. -

    -

    The second important operation with branches, is merging them back together. One way to do this is with git merge. Assume you want to merge the branch branch2 back in the master branch. You'd do this by first switching to the master branch using -

    -

    git checkout master -

    -

    and then ask git to merge both branches: -

    -

    git merge branch2 -

    -

Git will make a good effort to merge both sets of modifications since their common ancestor, but this may not always work, especially if you've made changes to the same area of a file on both branches. Git will then warn you that there is a conflict for certain files, after which you can edit those files (the conflict zones will be clearly marked in the files), add them to the index and commit the modifications again.

    -

    When learning to work with this mechanism, it is very instructive to use a GUI that depicts all commits and branches in a graphical form, e.g., the program SourceTree mentioned before. -

    -

    Synchronising with a remote repository

    -

If you want to collaborate with other people on a project, you need multiple repositories. Each person has his or her own local repository on his or her computer. The workflow is the simplest if you also have a repository that is used to collect all contributions. The collaboration mechanism through synchronisation of repositories relies very much on the branching mechanism to resolve conflicts if several contributors have made modifications to the repository.

    - -
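A minimal sketch of such a synchronisation cycle, assuming the central repository is known as origin and you work on the master branch:

git pull origin master
git push origin master

The pull fetches and merges the changes of others into your local repository; the push publishes your own commits to the central repository.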

    Further information

    -

We have only covered the bare essentials of git (and even less than that). Due to its power, it is also a fairly complicated system to use. If you want to know more about git or need a more complete tutorial, we suggest you check out the following links:

    -" - diff --git a/HtmlDump/file_0247.html b/HtmlDump/file_0247.html deleted file mode 100644 index 4a80c5fe4..000000000 --- a/HtmlDump/file_0247.html +++ /dev/null @@ -1,114 +0,0 @@ -

    Preparation

The Subversion software is installed on the cluster. On most systems it is default software and does not need a module (try which svn and which svnadmin to check if the system can find the subversion commands). On some systems you may have to load the appropriate module, i.e.,

    $ module load subversion
    -

    When you are frequently using Subversion, it may be convenient to load this module from your '.bashrc' file. (Note that in general we strongly caution against loading modules from '.bashrc', so this is an exception.) -

    Since some Subversion operations require editing, it may be convenient to define a default editor in your '.bashrc' file. This can be done by setting the 'EDITOR' variable to the path of your favorite editor, e.g., emacs. When this line is added to your '.bashrc' file, Subversion will automatically launch this editor whenever it is required. -

    export EDITOR=/usr/bin/emacs
    -

    Of course, any editor you are familiar with will do. -

    Creating a repository

    To create a Subversion repository on a VSC cluster, the user first has to decide on its location. We suggest to use the data directory since -

1. its default quota are quite sufficient;
2. if the repository is to be shared, the permissions on the user's home directory need not be modified, hence decreasing potential security risks; and
3. only for users of the KU Leuven cluster, the data directory is backed up (so is the user's home directory, incidentally).

    Actually creating a repository is very simple: -

1. Log in on the login node.
2. Change to the data directory using
   $ cd $VSC_DATA
3. Create the repository using
   $ svnadmin create svn-repo

    Note that a directory with the name 'svn-repo' will be created in your '$VSC_DATA' directory. You can choose any name you want for this directory. Do not modify the contents of this directory since this will corrupt your repository unless you know quite well what you are doing. -

    At this point, it may be a good idea to read the section in the Subversion book on the repository layout. In this How-To, we will assume that each project has its own directory at the root level of the repository, and that each project will have a 'trunk', 'branches' and 'tags' directory. This is recommended practice, but you may wish to take a different approach. -

    To make life easier, it is convenient to define an environment variable that contains the URI to the repository you just created. If you work with a single repository, you may consider adding this to your '.bashrc' file. -

    export SVN=\"svn+ssh://vsc98765@vsc.login.node DATA/svn-repo\"
    -

    Here you would replace 'vsc98765' by your own VSC user ID, 'vsc.login.node' by the login node of your VSC cluster, and finally, 'DATA' by the value of your '$VSC_DATA' variable. -

    Putting a project under Subversion control

    Here, we assume that you already have a directory that contains an initial version of the source code for your project. If not, create one, and populate it with some relevant files. For the purpose of this How-To, the directory currently containing the source code will be called '$VSC_DATA/simulation', and it will contain two source files, 'simulation.c' and 'simulation.h', as well as a make file 'Makefile'. -

    Preparing the repository

    Since we follow the Subversion community's recommended practice, we start by creating the appropriate directories in the repository to house our project. -

$ svn mkdir -m 'simulation: creating dirs' --parents \
            $SVN/simulation/trunk    \
            $SVN/simulation/branches \
            $SVN/simulation/tags

    The repository is now prepared so that the actual code can be imported. -

    Importing your source code

    As mentioned, the source code for your project exists in the directory '$VSC_DATA/simulation'. Since the semantics of the 'trunk' directory of a project is that this is the location where the bulk of the development work is done, we will import the project into the trunk. -

1. First, prepare the source directory '$VSC_DATA/simulation' by deleting all files that you don't want to place under version control. Remove artefacts such as, e.g., object files or executables, as well as text files not to be imported into the repository.
2. Now the directory can be imported by simply typing:
   $ svn import -m 'simulation: import' \
                $VSC_DATA/simulation   \
                $SVN/simulation/trunk

    The project is now almost ready for development under version control. -

    Checking out

    Although the source directory has been imported into the subversion repository, this directory is not under version control. We first have to check out a working copy of the directory. -

    Since you are not yet familiar with subversion and may have made a mistake along the way, it may be a good idea at this point to make a backup of the original directory first, by, e.g., -

    $ tar czf $VSC_DATA/simulation.tar.gz $VSC_DATA/simulation
    -

    Now it is safe to checkout the project from the repository using: -

    $ svn checkout $SVN/simulation/trunk $VSC_DATA/simulation
    -

Note that the existing files in the '$VSC_DATA/simulation' directory have been replaced by those downloaded from the repository, and that a new directory '$VSC_DATA/simulation/.svn' has been created. It is the latter that contains the information needed for version control operations.

    Subversion work cycle

    The basic work cycle for development on your project is fairly straightforward. -

1. Change to the directory containing your project's working copy, e.g.,
   $ cd $VSC_DATA/simulation
2. Update your working copy to the latest version (see the section on updating below for a brief introduction to the topic):
   $ svn update
3. Edit the project's files to your heart's content, or add new files to the repository after you created them, e.g., 'utils.c' and 'utils.h'. Note that the new files will only be stored in the repository upon the next commit operation, see below:
   $ svn add utils.c utils.h
4. Examine your changes (this will be elaborated upon in the next section):
   $ svn status
5. Commit your changes, i.e., all changes you made to the working copy are now transferred to the repository as a new revision:
   $ svn commit -m 'simulation: implemented a very interesting feature'
6. Repeat steps 2 to 5 until you are done.

    If you are the sole developer working on this project and exclusively on the VSC cluster, you need not update since your working copy will be the latest anyway. However, an update is vital when others can commit changes, or when you work in various locations such as your desktop or laptop. -

    Other subversion features

    It would be beyond the scope of this How-To to attempt to stray too far from the mechanics of the basic work cycle. However, a few features will be highlighted since they may prove useful. -

    A central concept to almost all version control systems is that of a version number. In Subversion, all operations that modify the current version in the repository will result in an automatic increment of the revision number. In the example above, the 'mkdir' would result in revision 1, the 'import' in revision 2, and each consecutive 'commit' will further increment the version number. -

    Reverting to a previous version

    The most important point of any version control system is that it is possible to revert to some revision if necessary. Suppose you want to revert to the state of the original import, than this can be accomplished as follows: -

    $ svn checkout -r 2 $SVN/simulation/trunk simulation-old
    -

    Finding changes between revisions

    Finding changes between revisions, or between a certain revision and the current state of the working copy is also fairly easy: -

    $ svn diff -r HEAD simulation.c
    -

    Examining history

To many Subversion operations, e.g., 'mkdir' and 'commit', a message can be added (the '-m <string>' in the commands of the previous section), and it will be associated with the resulting revision number. When used consistently, these comments can be very useful since they can be reviewed later whenever one has to examine changes made to the project. If a repository hosts multiple projects, it is wise to have some sort of convention, e.g., to start the comments on a project by its name as a tag. Note that this convention was followed in the examples above. One can for instance show all messages associated with changes to the file 'simulation.c' using:

    $ svn log simulation.c
    -

    Deleting and renaming

    When a file is no longer needed, it can be removed from the current version in the repository, as well as from the working copy. -

    $ svn rm Makefile
    -

    The previous command would remove the file 'Makefile' from the working directory, and tag it for deletion from the current revision upon the next commit operation. Note that the file is not removed from the repository, it is still part of older revisions. -

    Similarly, a file may have to be renamed, an operation that is also directly supported by Subversion. -

    $ svn mv utils.c util.c
    -

    Again, the change will only be propagated to the repository upon the next commit operation. -

    Examining status

    While development progresses, the working copy differs more and more from the latest revision in the repository, i.e., HEAD. To get an overview of files that were modified, added, deleted, etc., one can examine the status. -

    $ svn status
    -

This results in a list of files and directories, each preceded by a character:

    When nothing has been modified since the last commit, this command shows no output. -

    Updating the working copy

    When the latest revision in the repository has changed with respect to the working copy, an update of the latter should be done before continuing the development. -

    $ svn update
    -

This may be painless, or require some work. Subversion will try to reconcile the revision in the repository with your working copy. When changes can safely be applied, Subversion does so automatically. The output of the 'update' command is a list of files, preceded by characters denoting status information:

    In case of conflict, e.g., the same line of a file was changed in both the repository and the working copy, Subversion will offer a number of options to resolve the conflict. -

Conflict discovered in 'simulation.c'.
Select: (p) postpone, (df) diff-full, (e) edit,
        (mc) mine-conflict, (tc) theirs-conflict,
        (s) show all options:

    The safest option is to choose to edit the file, i.e., type 'e'. The file will be opened in an editor with the conflicts clearly marked. An example is shown below: -

<<<<<<< .mine
    printf("bye world simulation!\n");
=======
    printf("hello nice world simulation\n");
>>>>>>> .r7

Here '.mine' indicates the state in your working copy, '.r7' that of revision 7 (i.e., HEAD) in the repository. You can now resolve the conflicts manually by editing the file. Upon saving the changes and quitting the editor, the option 'resolved' will be added to the list above. Enter 'r' to indicate that the conflict has indeed been resolved successfully.

    Tagging

    Some revisions are more important than others. For example, the version that was used to generate the data you used in the article that was submitted to Nature is fairly important. You will probably continue to work on the code, adding several revisions while the referees do their job. In their report, they may require some additional data, and you will have to run the program as it was at the time of submission, so you want to retrieve that version from the repository. Unfortunately, revision numbers have no semantics, so it will be fairly hard to find exactly the right version. -

    Important revisions may be tagged explicitly in Subversion, so choosing an appropriate tag name adds semantics to a revision. Tagging is essentially copying to the tags directory that was created upon setting up the repository for the project. -

$ svn copy --parents -m 'simulation: tagging Nature submission' \
           $SVN/simulation/trunk           \
           $SVN/simulation/tags/nature-submission

    It is now trivial to check out the version that was used to compute the relevant data: -

$ svn checkout $SVN/simulation/tags/nature-submission \
               simulation-nature

    Desktop access

It is also possible to access VSC subversion repositories from your desktop. See the pages in the Windows client, OS X client and Linux client sections.

    Further information on Subversion

Subversion is a rather sophisticated version control system, and in this mini-tutorial for the impatient we have barely scratched the surface. Further information is available in an online book on Subversion, a must-read for everyone involved in a non-trivial software development project that uses Subversion.

    Subversion can also provide help on commands: -

$ svn help
$ svn help commit

    The former lists all available subversion commands, the latter form displays help specific to the command, 'commit' in this example. -

    " - diff --git a/HtmlDump/file_0249.html b/HtmlDump/file_0249.html deleted file mode 100644 index 7bfa365c4..000000000 --- a/HtmlDump/file_0249.html +++ /dev/null @@ -1,43 +0,0 @@ -

    Purpose

Debugging MPI applications is notoriously hard. The Intel Trace Analyzer & Collector (ITAC) can be used to generate a trace while running an application, and to visualize it later for analysis.

Prerequisites

    You will need an MPI program (C/C++ or Fortran) to instrument and run. -

    Step by step

    The following steps are the easiest way to use the Intel Trace Analyzer, however, more sophisticated options are available. -

1. Load the relevant modules. The exact modules may differ from system to system, but will typically include the itac module and a compatible Intel toolchain, e.g.,
   $ module load intel/2015a
   $ module load itac/9.0.2.045
2. Compile your application so that it can generate a trace:
   $ mpiicc -trace myapp.c -o myapp
   where myapp.c is your C/C++ source code. For a Fortran program, this would be:
   $ mpiifort -trace myapp.f -o myapp
3. Run your application using a PBS script such as this one:
   #!/bin/bash -l
   #PBS -N myapp-job
   #PBS -l walltime=00:05:00
   #PBS -l nodes=4

   module load intel/2015a
   module load itac/9.0.2.045
   # Set environment variables for ITAC.
   # Unfortunately, the name of the script differs between versions of ITAC
   source $EBROOTITAC/bin/itacvars.sh

   cd $PBS_O_WORKDIR

   mpirun -trace myapp
4. When the job is finished, check whether files with names myapp.stf.* have been generated; if so, start the visual analyzer using:
   $ traceanalyzer myapp.stf

    Further information

    Intel provides product documentation for ITAC. -

    " - diff --git a/HtmlDump/file_0251.html b/HtmlDump/file_0251.html deleted file mode 100644 index 252660de8..000000000 --- a/HtmlDump/file_0251.html +++ /dev/null @@ -1,352 +0,0 @@ -

    Introduction & motivation

    When working on the command line such as in the Bash shell, applications support command line flags and parameters. Many programming languages offer support to conveniently deal with command line arguments out of the box, e.g., Python. However, quite a number of languages used in a scientific context, e.g., C/C++, Fortran, R, Matlab do not. Although those languages offer the necessary facilities, it is at best somewhat cumbersome to use them, and often the process is rather error prone.

    Quite a number of libraries have been developed over the years that can be used to conveniently handle command line arguments. However, this complicates the deployment of the application since it will have to rely on the presence of these libraries. -

    ParameterWeaver has a different approach: it generates the necessary code to deal with the command line arguments of the application in the target language, so that these source files can be distributed along with those of the application. This implies that systems that don't have ParameterWeaver installed still can run that application. -

Using ParameterWeaver is as simple as writing a definition file for the command line arguments, and executing the code generator via the command line. This can be conveniently integrated into a standard build process such as make.

    ParameterWeaver currently supports the following target languages: -

    High-level overview & concepts

    Parameter definition files

A parameter definition file is a CSV text file where each line defines a parameter. A parameter has a type, a name, a default value, and optionally, a description. To add documentation, comments can be added to the definition file. The types are specific to the target language, e.g., an integer would be denoted by int for C/C++, and by integer for Fortran 90. The supported types are documented for each implemented target language.

    By way of illustration, a parameter definition file is given below for C as a target language, additional examples are shown in the target language specific sections: -

int,numParticles,1000,number of particles in the system
double,temperature,273,system temperature in Kelvin
char*,intMethod,'newton',integration method to use

    Note that this parameter definition file should be viewed as an integral part of the source code. -

    Code generation

    ParameterWeaver will generate code to -

      -
    1. initialize the parameter variables to the default values as specified in the parameter definition file;
    2. -
    3. parse the actual command line arguments at runtime to determine the user specified values, and
    4. -
    5. print the values of the parameters to an output stream.
    6. -

The implementation and features of the resulting code fragments are specific to the target language, and try to be as close as possible to the idioms of that language. Again, this is documented for each target language specifically. The nature and number of these code fragments varies from one target language to the other, again trying to match the language's idioms as closely as possible. For C/C++, a declaration file (.h) and a definition file (.c) are generated, while for Fortran 90 a single file (.f90) is generated that contains both declarations and definitions.

    Language specific documentation

    C/C++ documentation

    Data types

    For C/C++, ParameterWeaver supports the following data types: -

1. int
2. long
3. float
4. double
5. bool
6. char *

    Example C program

    Suppose we want to pass command line parameters to the following C program: -

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

int main(int argc, char *argv[]) {
    /* parameters, hard-coded for now; these are the values we would
       like to be able to change at runtime */
    int n = 10;
    double alpha = 0.19;
    char *out = "output.txt";
    bool verbose = false;
    FILE *fp;
    int i;
    if (strlen(out) > 0) {
        fp = fopen(out, "w");
    } else {
        fp = stdout;
    }
    if (verbose) {
        fprintf(fp, "# n = %d\n", n);
        fprintf(fp, "# alpha = %.16f\n", alpha);
        fprintf(fp, "# out = '%s'\n", out);
        fprintf(fp, "# verbose = %s\n", verbose ? "true" : "false");
    }
    for (i = 0; i < n; i++) {
        fprintf(fp, "%d\t%f\n", i, i*alpha);
    }
    if (fp != stdout) {
        fclose(fp);
    }
    return EXIT_SUCCESS;
}

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this program. -

    Moreover, the code to print the values of the variables is error prone, if we later add or remove a parameter, this part of the code has to be updated as well. -

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as int, double, char pointer and bool respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    int n   10
    double  alpha   0.19
    char *  out 'output.txt'
    bool    verbose false

    This parameter definition file can be created in a text editor such as the one used to write the C program, or from a Microsoft Excel worksheet by saving the latter as a CSV file.

    As mentioned above, boolean values are also supported; however, the semantics are slightly different from those of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true. Also note that using a parameter of type bool implies that the program will have to be compiled as C99, rather than C89. All modern compilers fully support C99, so that should not be an issue. However, if your program needs to adhere strictly to the C89 standard, simply use a parameter of type int instead, with 0 interpreted as false and all other values as true, as illustrated below. In that case, the option takes a value on the command line.
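    As a minimal sketch of that C89-friendly alternative (plain C, not generated by ParameterWeaver; the variable name verbose is simply reused from the example):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        /* C89-friendly alternative to a bool flag: an int parameter that takes a
         * value; 0 is interpreted as false, any other value as true. */
        int verbose = 0;
        if (argc > 1) {
            verbose = atoi(argv[1]);
        }
        if (verbose != 0) {
            printf("# running in verbose mode\n");
        }
        return 0;
    }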

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l C -d param_defs.txt
    -

    A number of type declarations and functions are generated: the declarations in the header file cl_params.h, the definitions in the source file cl_params.c.

    1. Data structure: a type Params is defined as a typedef of a struct with the parameters as fields, e.g.,

       typedef struct {
           int n;
           double alpha;
           char *out;
           bool verbose;
       } Params;

    2. Initialization function: the default values of the command line parameters are assigned to the fields of the Params variable, the address of which is passed to the function.
    3. Parsing function: the options passed to the program via the command line are assigned to the appropriate fields of the Params variable. Moreover, the argv array is updated to contain the remaining command line arguments and the argc variable is set appropriately.
    4. Dumper: a function that takes three arguments: a file pointer, a prefix and the address of a Params variable. It writes the values of the command line parameters to the file pointer, each on a separate line, preceded by the specified prefix.
    5. Finalizer: a function that deallocates memory allocated in the initialization or the parsing functions to avoid memory leaks. A sketch of what the corresponding prototypes may look like is given below.
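    For reference, the following is a sketch of what the generated cl_params.h may declare; the prototypes are inferred from the calls used in the examples below and from the description above, not copied from actual ParameterWeaver output, so the exact signatures may differ.

    /* Illustrative sketch only: prototypes inferred from the example calls below,
     * not taken verbatim from a generated cl_params.h. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        int n;
        double alpha;
        char *out;
        bool verbose;
    } Params;

    void initCL(Params *params);                            /* assign default values      */
    void parseCL(Params *params, int *argc, char ***argv);  /* parse command line options */
    void dumpCL(FILE *fp, char *prefix, Params *params);    /* print parameter values     */
    void finalizeCL(Params *params);                        /* free allocated memory      */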

    Using the code fragments

    The declarations are simply included using preprocessor directives: -

      #include "cl_params.h"

    A variable to hold the parameters has to be defined and its values initialized: -

      Params params;
      initCL(&params);

    Next, the command line parameters are parsed and their values assigned: -

      parseCL(&params, &argc, &argv);

    The dumper can be called whenever the user likes, e.g., -

      dumpCL(stdout, "", &params);

    The code for the program is thus modified as follows: -

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "cl_params.h"

    int main(int argc, char *argv[]) {
        FILE *fp;
        int i;
        Params params;
        initCL(&params);
        parseCL(&params, &argc, &argv);
        if (strlen(params.out) > 0) {
            fp = fopen(params.out, "w");
        } else {
            fp = stdout;
        }
        if (params.verbose) {
            dumpCL(fp, "# ", &params);
        }
        for (i = 0; i < params.n; i++) {
            fprintf(fp, "%d\t%f\n", i, i*params.alpha);
        }
        if (fp != stdout) {
            fclose(fp);
        }
        finalizeCL(&params);
        return EXIT_SUCCESS;
    }

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available in the array argv: argv[0] holds the program's name, and the subsequent elements up to index argc - 1 contain the remaining command line parameters.
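    If those remaining arguments do need to be handled, a loop such as the following sketch could be added after the call to parseCL in the modified program above (it reuses the variables i, argc and argv from that example):

    /* Sketch: after parseCL() has removed the recognized options, argv[0] is the
     * program name and argv[1] .. argv[argc - 1] are the remaining arguments. */
    for (i = 1; i < argc; i++) {
        fprintf(stderr, "unhandled argument: %s\n", argv[i]);
    }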

    Fortran 90 documentation

    Data types

    For Fortran 90, ParameterWeaver supports the following data types:

    1. integer
    2. real
    3. double precision
    4. logical
    5. character(len=1024)

    Example Fortran 90 program

    Suppose we want to pass command line parameters to the following Fortran program: -

    program main
    use iso_fortran_env
    implicit none
    integer :: unit_nr = 8, i, istat
    if (len(trim(out)) > 0) then
        open(unit=unit_nr, file=trim(out), action="write")
    else
        unit_nr = output_unit
    end if
    if (verbose) then
        write (unit_nr, "(A, I20)") "# n = ", n
        write (unit_nr, "(A, F24.15)") "# alpha = ", alpha
        write (unit_nr, "(A, '''', A, '''')") "# out = ", out
        write (unit_nr, "(A, L)") "# verbose = ", verbose
    end if
    do i = 1, n
        write (unit_nr, "(I3, F5.2)") i, i*alpha
    end do
    if (unit_nr /= output_unit) then
        close(unit=unit_nr)
    end if
    stop
    end program main

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this program. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well.

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as integer, double precision, character(len=1024) and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt:

    integer n   10
    double precision    alpha   0.19
    character(len=1024) out 'output.txt'
    logical verbose false

    This parameter definition file can be created in a text editor such as the one used to write the Fortran program, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported; however, the semantics are slightly different from those of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true.

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l Fortran -d param_defs.txt
    -

    A number of type declarations and functions are generated in the module file cl_params.f90. -

    1. Data structure: a type params_type is defined as a structure with the parameters as fields, e.g.,

           type :: params_type
               integer :: n
               double precision :: alpha
               character(len=1024) :: out
               logical :: verbose
           end type params_type

    2. Initialization subroutine: the default values of the command line parameters are assigned to the fields of the params_type variable.
    3. Parsing subroutine: the options passed to the program via the command line are assigned to the appropriate fields of the params_type variable. Moreover, the next variable of type integer will hold the index of the next command line parameter, i.e., the first of the remaining command line parameters that was not handled by the parsing subroutine.
    4. Dumper: a subroutine that takes three arguments: a unit number for output, a prefix and the params_type variable. It writes the values of the command line parameters to the output stream associated with the unit number, each on a separate line, preceded by the specified prefix.

    Using the code fragments

    The module file is used via a use statement:

      use cl_params

    A variable to hold the parameters has to be defined and its values initialized: -

      type(params_type) :: params
      call init_cl(params)

    Next, the command line parameters are parsed and their values assigned: -

        integer :: next
        call parse_cl(params, next)

    The dumper can be called whenever the user likes, e.g., -

      call dump_cl(output_unit, "", params)

    The code for the program is thus modified as follows: -

    program main
    use cl_params
    use iso_fortran_env
    implicit none
    type(params_type) :: params
    integer :: unit_nr = 8, i, istat, next
    call init_cl(params)
    call parse_cl(params, next)
    if (len(trim(params % out)) > 0) then
        open(unit=unit_nr, file=trim(params % out), action="write")
    else
        unit_nr = output_unit
    end if
    if (params % verbose) then
        call dump_cl(unit_nr, "# ", params)
    end if
    do i = 1, params % n
        write (unit_nr, "(I3, F5.2)") i, i*params % alpha
    end do
    if (unit_nr /= output_unit) then
        close(unit=unit_nr)
    end if
    stop
    end program main

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available using the standard get_command_argument function, starting from the value of the variable next set by the call to parse_cl. -

    R documentation

    Data types

    For R, ParameterWeaver supports the following data types:

    1. integer
    2. double
    3. logical
    4. string

    Example R script

    Suppose we want to pass command line parameters to the following R script: -

    if (nchar(out) > 0) {
        conn <- file(out, 'w')
    } else {
        conn = stdout()
    }
    if (verbose) {
        write(sprintf("# n = %d\n", n), conn)
        write(sprintf("# alpha = %.16f\n", alpha), conn)
        write(sprintf("# out = '%s'\n", out), conn)
        write(sprintf("# verbose = %s\n", verbose), conn)
    }
    for (i in 1:n) {
        write(sprintf("%d\t%f\n", i, i*alpha), conn)
    }
    if (conn != stdout()) {
        close(conn)
    }

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this script. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well.

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as integer, double, string and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted, just as it would be in R code. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    integer n   10
    -double  alpha   0.19
    -string  out 'output.txt'
    -logical verbose F
    -

    This parameter definition file can be created in a text editor such as the one used to write R scripts, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported; however, the semantics are slightly different from those of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true.

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l R -d param_defs.txt
    -

    Three code fragments are generated, all grouped in a single R file cl_params.r. -

    1. Initialization: the default values of the command line parameters are assigned to global variables with the names as specified in the parameter definition file.
    2. Parsing: the options passed to the program via the command line are assigned to the appropriate variables. Moreover, a vector containing the remaining command line arguments is created as cl_params.
    3. Dumper: a function is defined that takes two arguments: a file connection and a prefix. This function writes the values of the command line parameters to the file connection, each on a separate line, preceded by the specified prefix.

    Using the code fragments

    The code fragments can be included in the R script by sourcing the generated file:

      source("cl_params.r")

    The parameter initialization and parsing are executed at this point, the dumper can be called whenever the user likes, e.g., -

      dump_cl(stdout(), "")

    The code for the script is thus modified as follows: -

    source('cl_params.r')
    if (nchar(out) > 0) {
        conn <- file(out, 'w')
    } else {
        conn = stdout()
    }
    if (verbose) {
        dump_cl(conn, "# ")
    }
    for (i in 1:n) {
        cat(paste(i, "\t", i*alpha), file = conn, sep = "\n")
    }
    if (conn != stdout()) {
        close(conn)
    }

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available in the vector cl_params if needed. -

    Octave documentation

    Data types

    For Octave, ParameterWeaver supports the following data types:

    1. double
    2. logical
    3. string

    Example Octave script

    Suppose we want to pass command line parameters to the following Octave script: -

    if (size(out) > 0)
        fid = fopen(out, "w");
    else
        fid = stdout;
    end
    if (verbose)
        fprintf(fid, "# n = %d\n", n);
        fprintf(fid, "# alpha = %.16f\n", alpha);
        fprintf(fid, "# out = '%s'\n", out);
        fprintf(fid, "# verbose = %1d\n", verbose);
    end
    for i = 1:n
        fprintf(fid, "%d\t%f\n", i, i*alpha);
    end
    if (fid != stdout)
        fclose(fid);
    end

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this script. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well.

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as double, double, string and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted, just as it would be in Octave code. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    double  n   10
    double  alpha   0.19
    string  out 'output.txt'
    logical verbose F

    This parameter definition file can be created in a text editor such as the one used to write Octave scripts, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported; however, the semantics are slightly different from those of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true.

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l octave -d param_defs.txt
    -

    Three code fragments are generated, each in its own file, i.e., init_cl.m, parse_cl.m, and dump_cl.m.

    1. Initialization: the default values of the command line parameters are assigned to the fields of a parameter structure returned by init_cl.
    2. Parsing: the options passed to the program via the command line are assigned to the appropriate fields of that structure. Moreover, an array containing the remaining command line arguments is returned as the second value from parse_cl.
    3. Dumper: a function is defined that takes three arguments: a file id, a prefix and the parameter structure. This function writes the values of the command line parameters to the file, each on a separate line, preceded by the specified prefix.

    Using the code fragments

    The generated functions can be used by simply calling them from the main script. The code for the script is thus modified as follows: -

    params = init_cl();
    params = parse_cl(params);
    if (size(params.out) > 0)
        fid = fopen(params.out, "w");
    else
        fid = stdout;
    end
    if (params.verbose)
        dump_cl(fid, "# ", params);
    end
    for i = 1:params.n
        fprintf(fid, "%d\t%f\n", i, i*params.alpha);
    end
    if (fid != stdout)
        fclose(fid);
    end

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they can be obtained as the second return value from the call to parse_cl.

    Future work

    The following features are planned in future releases:

    Contact & support

    Bug reports and feature requests can be sent to Geert Jan Bex.

    " - diff --git a/HtmlDump/file_0253.html b/HtmlDump/file_0253.html deleted file mode 100644 index 6fb5631c9..000000000 --- a/HtmlDump/file_0253.html +++ /dev/null @@ -1,43 +0,0 @@ -

    Scope

    On modern CPUs the actual performance of a program depends very much on making optimal use of the caches. -

    -

    Many standard mathematical algorithms have been coded in standard libraries, and several vendors and research groups build optimised versions of those libraries for certain computers. They are key to extracting optimal performance from modern processors. Don't think you can write a better dense matrix-matrix multiplication routine or dense matrix solver than the specialists (unless you're a real specialist yourself)! -

    -

    Many codes use dense linear algebra routines. Hence it is no surprise that in this field collaboration led to the definition of a lot of standard functions, and that many groups worked hard to build optimal implementations of the resulting libraries, BLAS and LAPACK in particular.


    Standard Fortran implementations do exist, so you can always recompile code using these libraries on systems on which the libraries are not available. -
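    As a concrete illustration of what a call to such a library looks like, the sketch below multiplies two small matrices with the BLAS routine dgemm through the CBLAS interface; the header cblas.h and the link flags are assumptions that depend on the BLAS implementation provided by the toolchain you load (e.g. -lopenblas, or the MKL link line).

    /* Minimal sketch: C = A * B for 2x2 matrices using BLAS dgemm via CBLAS.
     * Assumes a BLAS implementation that ships cblas.h; link flags depend on
     * the toolchain (e.g. -lopenblas or the MKL equivalents). */
    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        double A[4] = {1.0, 2.0, 3.0, 4.0};   /* row-major 2x2 */
        double B[4] = {5.0, 6.0, 7.0, 8.0};
        double C[4] = {0.0, 0.0, 0.0, 0.0};
        /* C = 1.0 * A * B + 0.0 * C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);
        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
        return 0;
    }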

    -

    Blas and Lapack at the VSC

    -

    We provide BLAS and LAPACK routines through the toolchains. Hence the instructions for linking with the libraries are given on the toolchains page. -


    Links

    -" - diff --git a/HtmlDump/file_0255.html b/HtmlDump/file_0255.html deleted file mode 100644 index 1d0a1f5bb..000000000 --- a/HtmlDump/file_0255.html +++ /dev/null @@ -1,34 +0,0 @@ -

    Introduction

    (Note: the Perl community uses the term 'modules' rather than 'packages', however, in the documentation, we use the term 'packages' to try and avoid confusion with the module system for loading software.)

    Perl comes with an extensive standard library, and you are strongly encouraged to use those packages as much as possible, since this will ensure that your code can be run on any platform that supports Perl. -

    However, many useful extensions to and libraries for Perl come in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure.

    Given the astounding number of packages, it is not sustainable to install each and every one of them system-wide. Since it is very easy for users to install packages just for themselves or for their research group, that is not a problem though. Do not hesitate to contact support whenever you encounter trouble doing so.

    Checking for installed packages

    To check which Perl packages are installed, the cpan utility is useful. It will list all packages that are installed for the Perl distribution you are using, including those installed by you, i.e., those in your PERL5LIB environment variable.

    1. Load the module for the Perl version you wish to use, e.g.,
       $ module load Perl/5.18.2-foss-2014a-bare
    2. Run cpan:
       $ cpan -l

    Installing your own packages

    Setting up your own package repository for Perl is straightforward. For this purpose, the cpan utility first needs to be configured. Replace the path /user/leuven/301/vsc30140 by the one to your own home directory. -

    1. Load the appropriate Perl module, e.g.,
       $ module load Perl/5.18.2-foss-2014a-bare
    2. Create a directory to install in, i.e.,
       $ mkdir /user/leuven/301/vsc30140/perl5
    3. Run cpan:
       $ cpan
    4. Configure internet access and mirror sites:
       cpan[1]> o conf init connect_to_internet_ok urllist
    5. Set the install base, i.e., the directory created above:
       cpan[2]> o conf makepl_arg INSTALL_BASE=/user/leuven/301/vsc30140/perl5
    6. Fix the preference directory path:
       cpan[3]> o conf prefs_dir /user/leuven/301/vsc30140/.cpan/prefs
    7. Commit the changes so that they are stored in ~/.cpan/CPAN/MyConfig.pm, i.e.,
       cpan[4]> o conf commit
    8. Quit cpan:
       cpan[5]> q

    Now Perl packages can be installed easily, e.g.,

    $ cpan IO::Scalar
    -

    Note that this will install all dependencies as needed, though you may be prompted. -

    To effortlessly use locally installed packages, install the local::lib package first, and use the following code fragment in Perl scripts that depend on locally installed packages. -

    use local::lib;
    -
    " - diff --git a/HtmlDump/file_0257.html b/HtmlDump/file_0257.html deleted file mode 100644 index c9e1c0476..000000000 --- a/HtmlDump/file_0257.html +++ /dev/null @@ -1,79 +0,0 @@ -

    Introduction

    Python comes with an extensive standard library, and you are strongly encouraged to use those packages as much as possible, since this will ensure that your code can be run on any platform that supports Python.

    However, many useful extensions to and libraries for Python come in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure, others have been made available through the module system and must be loaded explicitly.

    Given the astounding number of packages, it is not sustainable to install each and every one of them system-wide. Since it is very easy for users to install packages just for themselves or for their research group, that is not a problem though. Do not hesitate to contact support whenever you encounter trouble doing so.

    Checking for installed packages

    To check which Python packages are installed, the pip utility is useful. It will list all packages that are installed for the Python distribution you are using, including those installed by you, i.e., those in your PYTHONPATH environment variable.

    1. Load the module for the Python version you wish to use, e.g.,
       $ module load Python/2.7.6-foss-2014a
    2. Run pip:
       $ pip freeze

    Note that some packages, e.g., mpi4py, h5py, pytables, ..., are available through the module system, and have to be loaded separately. These packages will not be listed by pip unless you have loaded the corresponding module.

    Installing your own packages using conda

    The easiest way to install and manage your own Python environment is conda. -

    Installing Miniconda

    If you have Miniconda already installed, you can skip ahead to the next section; if Miniconda is not installed, we start with that. Download the Bash script that will install it from conda.io using, e.g., wget:

    $ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    -

    Once downloaded, run the installation script: -

    $ bash Miniconda3-latest-Linux-x86_64.sh -b -p $VSC_DATA/miniconda3
    -

    Optionally, you can add the path to the Miniconda installation to the PATH environment variable in your .bashrc file. This is convenient, but may lead to conflicts when working with the module system, so make sure that you know what you are doing in either case. The line to add to your .bashrc file would be: -

    export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"

    Creating an environment

    First, ensure that the Miniconda installation is in your PATH environment variable. The following command should return the full path to the conda command: -

    $ which conda
    -

    If the result is blank, or reports that conda can not be found, modify the PATH environment variable appropriately by adding Miniconda's bin directory to PATH.

    At this point, you may wish to load a module for a recent compiler (GCC is likely giving the least problems). Note that this module should also be loaded when using the environment you are about to create. -

    Creating a new conda environment is straightforward: -

    $ conda create  -n science  numpy scipy matplotlib
    -

    This command creates a new conda environment called science, and installs a number of Python packages that you will probably want to have handy in any case to preprocess, visualize, or postprocess your data. You can of course install more, depending on your requirements and personal taste. -

    This will default to the latest Python 3 version, if you need a specific version, e.g., Python 2.7.x, this can be specified as follows: -

    $ conda create -n science  python=2.7  numpy scipy matplotlib
    -

    Working with the environment

    To work with an environment, you have to activate it. This is done with, e.g., -

    $ source activate science
    -

    Here, science is the name of the environment you want to work in. -

    Install an additional package

    To install an additional package, e.g., tensorflow-gpu, first ensure that the environment you want to work in is activated.

    $ source activate science
    -

    Next, install the package: -

    $ conda install tensorflow-gpu

    Note that conda will take care of all dependencies, including non-Python libraries (e.g., cuDNN and CUDA for the example above). This ensures that you work in a consistent environment.

    Updating/removing

    Using conda, it is easy to keep your packages up-to-date. Updating a single package (and its dependencies) can be done using: -

    $ conda update pandas
    -

    Updating all packages in the environment is trivial:

    $ conda update --all
    -

    Removing an installed package: -

    $ conda remove tensorflow-gpu
    -

    Deactivating an environment

    To deactivate a conda environment, i.e., return the shell to its original state, use the following command -

    $ source deactivate
    -

    More information

    Additional information about conda can be found on its documentation site. -

    Alternatives to conda -

    Setting up your own package repository for Python is straightforward. -

    1. Load the appropriate Python module, i.e., the one you want the Python package to be available for:
       $ module load Python/2.7.6-foss-2014a
    2. Create a directory to hold the packages you install; the last three directory names are mandatory:
       $ mkdir -p "${VSC_HOME}/python_lib/lib/python2.7/site-packages/"
    3. Add that directory to the PYTHONPATH environment variable for the current shell to do the installation:
       $ export PYTHONPATH="${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}"
    4. Add the following to your .bashrc so that Python knows where to look next time you use it:
       export PYTHONPATH="${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}"
    5. Install the package, using the prefix option to specify the install path (this would install the sphinx package):
       $ easy_install --prefix="${VSC_HOME}/python_lib" sphinx

    If you prefer using pip, you can perform an install in your own directories as well by providing an install option -

    1. Load the appropriate Python module, i.e., the one you want the Python package to be available for:
       $ module load Python/2.7.6-foss-2014a
    2. Create a directory to hold the packages you install; the last three directory names are mandatory:
       $ mkdir -p "${VSC_HOME}/python_lib/lib/python2.7/site-packages/"
    3. Add that directory to the PYTHONPATH environment variable for the current shell to do the installation:
       $ export PYTHONPATH="${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}"
    4. Add the following to your .bashrc so that Python knows where to look next time you use it:
       export PYTHONPATH="${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}"
    5. Install the package, using the prefix install option to specify the install path (this would install the sphinx package):
       $ pip install --install-option="--prefix=${VSC_HOME}/python_lib" sphinx

    Installing Anaconda on an NX node (KU Leuven ThinKing)

    1. Before installing, make sure that you do not have a .local/lib directory in your $VSC_HOME. In case it exists, please move it to some other location or a temporary archive, as it creates conflicts with Anaconda.
    2. Download the appropriate (64-bit (x86) installer) version of Anaconda from https://www.anaconda.com/download/#linux
    3. Change the permissions of the file (if necessary): chmod u+x Anaconda3-5.0.1-Linux-x86_64.sh
    4. Execute the installer: ./Anaconda3-5.0.1-Linux-x86_64.sh
    5. Go to the directory where Anaconda is installed, e.g., cd anaconda3/bin/, and check for updates: conda update anaconda-navigator
    6. You can start the navigator from that directory with ./anaconda-navigator
    " - diff --git a/HtmlDump/file_0259.html b/HtmlDump/file_0259.html deleted file mode 100644 index d1ee4f96d..000000000 --- a/HtmlDump/file_0259.html +++ /dev/null @@ -1,18 +0,0 @@ -

    The basics of the job system

    Common problems

    Advanced topics

    " - diff --git a/HtmlDump/file_0261.html b/HtmlDump/file_0261.html deleted file mode 100644 index d15ab58b3..000000000 --- a/HtmlDump/file_0261.html +++ /dev/null @@ -1,254 +0,0 @@ -

    This page is outdated. Please check our updated "Running jobs" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search), you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be.

    Purpose

    When you connect to a cluster of the VSC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations however, are not performed on this login node. The actual work is done on the cluster's compute nodes. These compute nodes are managed by the job scheduling software, which decides when and on which compute nodes the jobs are run. This how-to explains how to make use of the job system. -

    Defining and submitting your job

    Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program can be started without user intervention, i.e., without you having to enter any information or press any buttons. All necessary input or options have to be specified on the command line, or in input/config files. For the purpose of this how-to, we will assume you want to run a Matlab calculation that you have programmed in a file 'my_calc.m'. On the command line, you would run this using:

    $ matlab -r my_calc
    -

    Next, you create a PBS script — a description of the job — and save it as, e.g., 'my_calc.pbs', it contains: -

    #!/bin/bash -l
    module load matlab
    cd $PBS_O_WORKDIR
    matlab -r my_calc

    Important note: this PBS file has to be in UNIX format; if it is not, your job will fail and generate rather weird error messages. If necessary, you can convert it using

    $ dos2unix my_calc.pbs
    -

    It is this PBS script that can now be submitted to the cluster's job system for execution, using the qsub command: -

    $ qsub my_calc.pbs
    20030021.icts-p-svcs-1

    The qsub command returns a job ID, i.e., a line similar to the one above, that can be used to further manage your job, if needed. The important part is the number, i.e., '20030021'. It is a unique identifier for the job, and it can be used to monitor and manage your job.

    Note: if you want to use project credits to run a job, you should specify the project's name (e.g., 'lp_fluid_dynamics') using the following option: -

    $ qsub -A lp_fluid_dynamics calc.pbs
    -

    For more information on working with credits, see How to work with job credits. -

    Monitoring and managing your job(s)

    Using the job ID qsub returned, there are various ways to monitor the status of you job, e.g., -

    $ qstat <jobid>
    -

    get the status information on your job -

    $ showstart <jobid>
    -

    show an estimated start time for your job (note that this may be very inaccurate)

    $ checkjob <jobid>
    -

    shows the status, but also the resources required by the job, and any error messages that may explain why your job is not starting

    $ qstat -n <jobid>
    -

    show on which compute nodes your job is running, at least when it is running

    $ qdel <jobid>
    -

    removes a job from the queue so that it will not run, or stops a job that is already running. -

    When you have multiple jobs submitted (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and have not finished yet using -

    $ qstat -u <uid>
    -

    lists the status information of all your jobs, including their job IDs; here, uid is your VSC user name on the system. -

    Specifying job requirements

    If you do not give more information about your job when submitting it with qsub, default values will be assumed, and these are almost never appropriate for real jobs.

    It is important to estimate the resources you need to successfully run your program, e.g., the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly. -

    For the simplest cases, only the amount of time is really important, and it does not harm too much if you slightly overestimate it. -

    The qsub command takes several options to specify the requirements: -

    -l walltime=2:30:00
    -

    the job will require 2 hours, 30 minutes to complete -

    -l mem=4gb
    -

    the job requires 4 Gb of memory -

    -l nodes=5:ppn=2
    -

    the job requires 5 compute nodes, and two CPUs (actually cores) on each (ppn stands for processors per node) -

    -l nodes=1:ivybridge
    -

    The job requires just one node, but it should have an Ivy Bridge processor. A list with site-specific properties can be found in the next section. -

    These options can either be specified on the command line, e.g., -

    $ qsub -l nodes=1:ivybridge,mem=16gb my_calc.pbs
    -

    or in the PBS script itself, so 'my_calc.pbs' would be modified to: -

    #!/bin/bash -l
    #PBS -l nodes=1:ivybridge
    #PBS -l mem=4gb
    module load matlab
    cd $PBS_O_WORKDIR
    matlab -r my_calc

    Note that the resources requested on the command line will override those specified in the PBS file. -

    Available queues

    Apart from specifying the walltime, you can also explicitly define the queue you're submitting your job to. Queue names and/or properties might be different on different sites. To specify the queue, add: -

    -q queuename
    -

    where queuename is one of the possible queues shown below. A maximum walltime is associated with each queue. Jobs specifying a walltime which is larger than the maximal walltime of the requested queue, will not start. The number of jobs currently running in the queue is shown in the Run column, whereas the number of jobs waiting to get started, is shown in the Que column. -

    We strongly advise against the explicit use of queue names. In almost all cases it is much better to specify the resources you need with walltime etc. The system will then determine the optimal queue for your application. -

    KU Leuven

    $ qstat -q
    server: icts-p-svcs-1
    Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ---------------- ------ -------- -------- ----  --- --- --  -----
    q24h               --      --    24:00:00   --   36  17 --   E R
    qreg               --      --    30:00:00   --    0   0 --   D R
    qlong              --      --    168:00:0   --    0   0 --   E S
    q21d               --      --    504:00:0     5   6   5 --   E R
    qicts              --      --       --      --    0   0 --   E R
    q1h                --      --    01:00:00   --    0  22 --   E R
    qdef               --      --       --      --    0  50 --   E R
    q72h               --      --    72:00:00   --   12   1 --   E R
    q7d                --      --    168:00:0    25  38   1 --   E R
                                                   ----- -----
                                                      92    96

    The queues q1h, q24h, q72h, q7d and q21d use the new queue naming scheme, while the other ones are still provided for compatibility with older job scripts. -

    Submit to a gpu-node:

    qsub  -l partition=gpu,nodes=1:M2070 <jobscript>
    -

    or -

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>
    -

    depending on which GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this:

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -

    Submit to a debug node:

    For very short/small jobs (max 30 minutes, max 2 nodes) you can request (a) debug node(s). This can be useful to avoid long queue times for a debug job when the cluster is very busy. There is a limit on the number of jobs that a user can concurrently submit with this quality of service.

    You can submit to a debug node like this (remember to request a walltime of 30 minutes or less):

    qsub -lqos=debugging,walltime=30:00 <jobscript>
    -

    UAntwerpen

    On hopper: -

    $ qstat -q
    server: mn.hopper.antwerpen.vsc
    Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ---------------- ------ -------- -------- ----  --- --- --  -----
    q1h                --      --    01:00:00   --    0  24 --   E R
    batch              --      --       --      --    0   0 --   E R
    q72h               --      --    72:00:00   --   64   0 --   E R
    q7d                --      --    168:00:0   --    9   0 --   E R
    q24h               --      --    24:00:00   --   17   0 --   E R
                                                   ----- -----
                                                      90    24

    The maximum job (wall)time on hopper is 7 days (168 hours). -

    On turing: -

    $ qstat -q
    server: master1.turing.antwerpen.vsc
    Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ---------------- ------ -------- -------- ----  --- --- --  -----
    qreg               --      --       --      --    0   0 --   E R
    batch              --      --       --      --    0   0 --   E R
    qshort             --      --       --      --    0   0 --   E R
    qxlong             --      --       --      --    0   0 --   E R
    qxxlong            --      --       --      --    0   0 --   E R
    q21d               --      --    504:00:0   --    4   0 --   E R
    q7d                --      --    168:00:0   --   20   0 --   E R
    qlong              --      --       --      --    0   0 --   E R
    q24h               --      --    24:00:00   --   22   2 --   E R
    q72h               --      --    72:00:00   --   46   0 --   E R
    q1h                --      --    01:00:00   --    0   0 --   E R
                                                   ----- -----
                                                      92     2

    The essential queues are q1h, q24h, q72h, q7d and q21d. The other queues route jobs to one of these queues and exist for compatibility with older job scripts. The maximum job execution (wall)time on turing is 21 days or 504 hours. -

    To obtain more detailed information on the queues, e.g., qxlong, the following command can be used: -

    $ qstat -f -Q qxlong
    -

    This will list additional restrictions such as the maximum number of jobs that a user can have in that queue. -

    Site-specific properties

    The following table contains the most common site-specific properties. -

    site - property - explanation
    UAntwerpen - harpertown - only use Intel processors from the Harpertown family (54xx)
    UAntwerpen - westmere - only use Intel processors from the Westmere family (56xx)
    KU Leuven, UAntwerpen - ivybridge - only use Intel processors from the Ivy Bridge family (E5-XXXXv2)
    KU Leuven - haswell - only use Intel processors from the Haswell family (E5-XXXXv3)
    UAntwerpen - fat - only use large-memory nodes
    KU Leuven - M2070 - only use nodes with NVIDIA Tesla M2070 cards (combine with partition=gpu at KU Leuven)
    KU Leuven - K20Xm - only use nodes with NVIDIA Tesla K20Xm cards (combine with partition=gpu at KU Leuven)
    KU Leuven - K40c - only use nodes with NVIDIA Tesla K40c cards (combine with partition=gpu at KU Leuven)
    KU Leuven - phi - only use nodes with Intel Xeon Phi cards (combine with partition=phi at KU Leuven)
    UAntwerpen - ib - use InfiniBand interconnect (only needed on turing)
    UAntwerpen - gbe - use Gigabit Ethernet interconnect (only on turing)

    To get a list of all properties defined for all nodes, enter -

    $ pbsnodes | grep properties
    -

    This list will also contain properties referring to, e.g., network components, rack number, ... -

    You can check the pages on available hardware to find out how many nodes of each type a cluster has. -

    Job output and error files

    At some point your job finishes, so you will no longer see the job ID in the list of jobs when you run qstat. You will find the standard output and error of your job by default in the directory where you issued the qsub command. When you navigate to that directory and list its contents, you should see them: -

    $ ls
    my_calc.e20030021 my_calc.m my_calc.pbs my_calc.o20030021

    The standard output and error files have the name of the PBS script, i.e., 'my_calc', as base name, followed by the extension '.o' and '.e' respectively, and the job number, '20030021' for this example. The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation.

    At KU Leuven, it contains extra information about your job as well. -

    $ cat my_calc.o20030021
    ... lots of interesting Matlab results ...
    ===========================================================
    Epilogue args:
    Date: Tue Mar 17 16:40:36 CET 2009
    Allocated nodes: r2i2n12
    Job ID: 20030021.icts-p-svcs-1
    User ID: vsc98765 Group ID: vsc98765
    Job Name: my_calc Session ID: 2659
    Resource List: neednodes=1:ppn=1:nehalem,nodes=1:ppn=1,walltime=02:30:00
    Resources Used: cput=01:52:17,mem=4160kb,vmem=28112kb,walltime=01:54:31
    Queue Name: qreg
    Account String:

    As mentioned, there are two parts, separated by the horizontal line composed of equality signs. The part above the horizontal line is the output from our script, the part below is some extra information generated by the scheduling software. -

    Finally, 'Resources used' shows our wall time is 1 hour, 54 minutes, and 31 seconds. Note that this is the time the job will be charged for, not the walltime you requested in the resource list. -

    Regular interactive jobs, without X support

    The most basic way to start an interactive job is the following: -

    vsc30001@login1:~> qsub -I
    qsub: waiting for job 20030021.icts-p-svcs-1 to start
    qsub: job 20030021.icts-p-svcs-1 ready

    vsc30001@r2i2n15:~>

    Interactive jobs with X support

    Before starting an interactive job with X support, you have to make sure that you have logged in to the cluster with X support enabled. If that is not the case, you won't be able to use the X support inside the cluster either! -

    The easiest way to start a job with X support is: -

    vsc30001@login1:~> qsub -X -I
    qsub: waiting for job 20030021.icts-p-svcs-1 to start
    qsub: job 20030021.icts-p-svcs-1 ready
    vsc30001@r2i2n15:~>
    " - diff --git a/HtmlDump/file_0263.html b/HtmlDump/file_0263.html deleted file mode 100644 index 6f06eece6..000000000 --- a/HtmlDump/file_0263.html +++ /dev/null @@ -1,109 +0,0 @@ -

    Introduction

    The accounting system on ThinKing is very similar to a regular bank. Individual users have accounts that will be charged for the jobs they run. However, the number of credits on such accounts is fairly small, so research projects will typically have one or more project accounts associated with them. Users that are project members can have their project-related jobs charged to such a project account. In this how-to, the technical aspects of accounting are explained.

    How to request credits on the KU Leuven Tier-2 systems

    You can request 2 types of job credits: introduction credits and project credits. Introduction credits are a limited amount of free credits for test and development purposes. Project credits are job credits used for research. -

    How to request introduction credits

    You can find all relevant information in the HPC section of the Service Catalog (login required). -

    How to request project credits

    You can find all relevant information in the HPC section of the Service Catalog (login required). -

    Prices

    You can find all details about prices in the HPC section of the Service Catalog (login required).

    Checking an account balance

    Since no calculations can be done without credits, it is quite useful to determine the amount of credits at your disposal. This can be done quite easily: -

    $ module load accounting
    -$ mam-balance
    -

    This will provide an overview of the balance on the user's personal account, as well as on all project accounts the user has access to. -

    Obtaining a job quote

    In order to determine the cost of a job, the user can request a quote. The gquote command takes the same options as the qsub command that are relevant for resource specification (-l, -q, -C), and/or the PBS script that will be used to run the job. The command will calculate the maximum cost based on the resources that are requested, taking into account walltime, number of compute nodes and node type.

    $ module load accounting
    -$ gquote -q qlong -l nodes=3:ppn=20:ivybridge
    -

    Details of how to tailor job requirements can be found on the page on "Specifying resources, output files and notifications".

    Note that when a queue is specified and no explicit walltime, the walltime used to produce the quote is the longest walltime allowed by that queue. Also note that unless specified by the user, gquote will assume the most expensive node type. This implies that the cost calculated by gquote will always be larger than the effective cost that is charged when the job finishes. -

    Running jobs: accounting workflow

    When a job is submitted using qsub, and it has to be charged against a project account, the name of the project has to be specified as an option. -

    $ qsub -A l_astrophysics_014 run-job.pbs
    -

    If the account to be charged, i.e., l_astrophysics_014, has insufficient credits for the job, the user receives a warning at this point. -

    Just prior to job execution, a reservation will be made on the specified project's account, or the user's personal account if no project was specified. When the user checks her balance at this point, she will notice that it has been decreased by an amount equal to, or less than, that provided by gquote. The latter may occur because the node type is only determined when the reservation is made, and that node type may be less expensive than the one assumed by gquote. If the relevant account has insufficient credits at this point, the job will be deleted from the queue.

    When the job finishes, the account will effectively be charged. After charging, the balance of that account will be equal to or larger than it was right after the reservation was made. The latter can occur when the job has taken less walltime than the reservation was made for. This implies that although quotes and reservations may be overestimations, users will only be charged for the resources their jobs actually consumed.

    Obtaining an overview of transactions

    A bank provides an overview of the financial transactions on your accounts under the form of statements. Similarly, the job accounting system provides statements that give the user an overview of the cost of each individual job. The following command will provide an overview of all transactions on all accounts the user has access to: -

    $ module load accounting
    -$ mam-statement
    -

    However, it is more convenient to filter this information so that only specific projects are displayed and/or information for a specific period of time, e.g., -

    $ mam-statement -a l_astrophysics_014 -s 2010-09-01 -e 2010-09-30
    -

    This will show the transactions on the account for the l_astrophysics_014 project for the month September 2010. -

    Note that it takes quite a while to compute such statements, so please be patient. -

    It can be very useful to add the '--summarize' option to the 'mam-statement' command:

    vsc30002@login1:~> mam-statement -a lp_prodproject --summarize -s 2010-09-01 -e 2010-09-30
    ################################################################################
    #
    # Statement for project lp_prodproject
    # Statement for user vsc30002
    # Includes account 536 (lp_prodproject)
    # Generated on Thu Nov 17 11:49:55 2010.
    # Reporting account activity from 2010-09-01 to 2010-09-30.
    #
    ################################################################################
    Beginning Balance:                 0.00
    ------------------ --------------------
    Total Credits:                 10000.00
    Total Debits:                     -4.48
    ------------------ --------------------
    Ending Balance:                 9995.52
    ############################### Credit Summary #################################
    Object     Action   Amount
    ---------- -------- --------
    Allocation Activate 10000.00
    ############################### Debit Summary ##################################
    Object Action Project             User     Machine Amount Count
    ------ ------ ------------------- -------- ------- ------ -----
    Job    Charge lp_prodproject      vsc30002 SVCS1    -4.26 13
    Job    Charge lp_prodproject      vsc30140 SVCS1    -0.22 1
    ############################### End of Report ##################################

    As you can see it will give you a summary of used credits (Amount) and number of jobs (Count) per user in a given timeframe for a specified project. -

    Reviewing job details

    A statement is an overview of transactions, but provides no details on the resources the jobs consumed. However, the user may want to examine the details of a specific job. This can be done using the following command: -

    $ module load accounting
    $ mam-list-transactions -J 20030021

    Note that the job ID does not have to be complete.

    Job cost calculation

    The cost of a job depends on the resources it consumes. Generally speaking, one credit buys the user one hour of walltime on one reference node. The resources that are taken into account to charge for a job are the walltime it consumed, and the number and type of compute nodes it ran on. The following formula is used: -

    (0.000278*nodes*walltime)*nodetype -

    Here, nodes is the number of compute nodes the job uses, walltime is the wall time consumed by the job in seconds (the factor 0.000278 is approximately 1/3600 and converts seconds to hours), and nodetype is the number of credits charged per node-hour for the type of node used.

    Since the Tier-2 cluster has several types of compute nodes, none of which is actually a reference node, the following values for nodetype apply:

    node type - credit/hour
    Ivy Bridge - 4.76
    Haswell - 6.68
    GPU - 2.86
    Cerebro - 3.45
    The difference in cost between different machines/processors reflects the performance difference between those types of nodes. The total cost of a job will typically be about the same on any type of compute node, but the walltime will differ from one node type to another. It is considerably more expensive to work on Cerebro since it has a large amount of memory, as well as local disk, and hence required a larger investment.

    An example of a job running on multiple nodes and cores is given below: -

    $ qsub -A l_astrophysics_014 -lnodes=2:ppn=20:ivybridge simulation_3415.pbs
    -

    If this job finished in 2.5 hours (i.e., the walltime is 9000 seconds), the user will be charged:

    (0.000278*2*9000)*4.76 = 23.8 credits -

    For a single node, single core job that also took 2.5 hours and was submitted as: -

    $ qsub -A l_astrophysics_014 -lnodes=1:ppn=1:ivybridge simulation_147.pbs
    -

    In this case, the user will be charged: -

    (0.000278*1*9000)*4.76 = 11.9 credits -
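    The arithmetic above can be checked with a small sketch of the formula (the rates are the ones from the table; this is only an illustration, not an official accounting tool):

    /* Sketch: job cost in credits, following the formula
     * (0.000278 * nodes * walltime) * nodetype, with walltime in seconds. */
    #include <stdio.h>

    static double job_cost(int nodes, double walltime_seconds, double credits_per_hour) {
        return 0.000278 * nodes * walltime_seconds * credits_per_hour;
    }

    int main(void) {
        /* the two Ivy Bridge examples above: 2.5 hours is 9000 seconds */
        printf("2 nodes: %.1f credits\n", job_cost(2, 9000.0, 4.76));  /* ~23.8 */
        printf("1 node:  %.1f credits\n", job_cost(1, 9000.0, 4.76));  /* ~11.9 */
        return 0;
    }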

    Note that charging is done for the number of compute nodes used by the job, not the number of cores. This implies that a single core job on a single node is as expensive as a 20 core job on the same node. The rationale is that the scheduler instates a single-user-per-node policy; hence using a single core on a node blocks all other cores for other users' jobs. If a user needs to run many single core jobs concurrently, she is advised to use the Worker framework.

    " - diff --git a/HtmlDump/file_0265.html b/HtmlDump/file_0265.html deleted file mode 100644 index d20d810b3..000000000 --- a/HtmlDump/file_0265.html +++ /dev/null @@ -1,9 +0,0 @@ -

    Jobs are submitted to a queue system, which is monitored by a scheduler that determines when a job can be executed.

    The latter depends on two factors:

    1. the priority assigned to the job by the scheduler, and the priorities of the other jobs already in the queue, and
    2. the availability of the resources required to run the job.

    The priority of a job is calculated using a formula that takes into account a number of factors:

    1. the user's credentials (at the moment, all users are equal);
    2. fair share: this takes into account the amount of walltime that the user has used over the last seven days; the more used, the lower the resulting priority;
    3. time queued: the longer a job spends in the queue, the larger its priority becomes, so that it will run eventually;
    4. requested resources: larger jobs get a higher priority.

    These factors are used to compute a weighted sum at each iteration of the scheduler to determine a job's priority. Due to the time queued and fair share, this is not static, but evolves over time while the job is in the queue.

    Different clusters use different policies as some clusters are optimised for a particular type of job.

    To get an idea of when your job might start, you could try Moab's 'showstart' command as described in the page on \"Submitting and managing jobs with Torque and Moab\".

    Also, don't try to outsmart the scheduler by explicitly specifying nodes that seem empty when you launch your job. The scheduler may be saving these nodes for a job for which it needs multiple nodes, and the result will be that you will have to wait even longer before your job starts as the scheduler will not launch your job on another node which may be available sooner.

    Remember that the cluster is not intended as a replacement for a decent desktop PC. Short, sequential jobs may spend quite some time in the queue, but this type of calculation is atypical from an HPC perspective. If you have large batches of (even relatively short) sequential jobs, you can still pack them as longer sequential or even parallel jobs and get to run them sooner. User support can help you with that.

    " - diff --git a/HtmlDump/file_0267.html b/HtmlDump/file_0267.html deleted file mode 100644 index a22270813..000000000 --- a/HtmlDump/file_0267.html +++ /dev/null @@ -1,5 +0,0 @@ -

    My jobs seem to run, but I don't see any output or errors?

    Most probably, you exceeded the disk quota for your home directory, i.e., the total file size for your home directory is just too large. When a job runs, it needs to store temporary output and error files in your home directory. When it fails to do so, the program will crash, and you won't get feedback, since that feedback would be in the error file that can't be written.

    See the FAQs listed below to check the amount of disk space you are currently using, and for a few hints on what data to store where.

    However, your home directory may unexpectedly fill up in two ways:

    1. a running program produces large amounts of output or errors;
    2. a program crashes and produces a core dump.

    Note that one job that produces output or a core that is too large for the file system quota will most probably cause all your jobs that are queued to fail.

    Large amounts of output or errors

    To deal with the first issue, simply redirect the standard output of the command to a file that is in your data or scratch directory, or, if you don't need that output anyway, redirect it to /dev/null. A few examples that can be used in your PBS scripts that execute, e.g., my-prog, are given below.

    To send standard output to a file, you can use:

    my-prog > $VSC_DATA/my-large-output.txt

    If you want to redirect both standard output and standard error, use:

    my-prog  > $VSC_DATA/my-large-output.txt \\
    -2> $VSC_DATA/my-large-error.txt

    To redirect both standard output and standard error to the same file, use:

    my-prog &> $VSC_DATA/my-large-output-error.txt

    If you don't care for the standard output, simply write:

    my-prog >/dev/null

    Core dump

    When a program crashes, a core file is generated. This can be used to try and analyse the cause of the crash. However, if you don't need cores for post-mortem analysis, simply add:

    ulimit -c 0

    to your .bashrc file. This can be done more selectively by adding this line to your PBS script prior to invoking your program.
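
    As a minimal sketch of the selective approach (the program name my-prog and the resource requests are placeholders, as above), the line is added right before the program is invoked:

    #!/bin/bash -l
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # disable core dumps for this job only, leaving .bashrc untouched
    ulimit -c 0
    my-prog > $VSC_DATA/my-large-output.txt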

    " - diff --git a/HtmlDump/file_0269.html b/HtmlDump/file_0269.html deleted file mode 100644 index 4010965ec..000000000 --- a/HtmlDump/file_0269.html +++ /dev/null @@ -1,176 +0,0 @@ -

    This page is outdated. Please check our updated \"Running jobs\" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search) you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be.

    Resource management: PBS/Torque

    The resource manager has to be aware of available resources so that it can start the users' jobs on the appropriate compute nodes. These resources include, but are not limited to, the number of compute nodes, the number of cores in each node, as well as their type, and the amount of memory in each node. In addition to the hardware configuration, the resource manager has to be aware of resources that are currently in use (configured, but occupied by or reserved for running jobs) and of currently available resources. -

    The software we use for this is called PBS/Torque (Portable Batch System): -

    TORQUE Resource Manager provides control over batch jobs and distributed computing resources. It is an advanced open-source product based on the original PBS project* and incorporates the best of both community and professional development. It incorporates significant advances in the areas of scalability, reliability, and functionality and is currently in use at tens of thousands of leading government, academic, and commercial sites throughout the world. TORQUE may be freely used, modified, and distributed under the constraints of the included license. -

    TORQUE can integrate with Moab Workload Manager to improve overall utilization, scheduling and administration on a cluster. Customers who purchase Moab family products also receive free support for TORQUE. -

    (http://www.adaptivecomputing.com/products/open-source/torque/) -

    To make sure that the user's job obtains the appropriate resources to run, the user has to specify these requirements using PBS directives. PBS directives can either be specified on the command line when using 'qsub', or in a PBS script. -

    PBS directives for resource management

    Walltime

    By default, the scheduler assumes a run time of one hour for a job. This can be seen in the \"Resource List\" line in the standard output file; the wall time is set to one hour, unless specified otherwise by the user: -

    Resource List: neednodes=1:ppn=1,nodes=1:ppn=1,walltime=01:00:00
    -

    For many jobs, the default wall time will not be sufficient: some will need multiple hours or even days to complete. However, when a job exceeds the specified wall time, it will be automatically killed by the scheduler, and, unless the job saves intermediate results, all computations will be lost. On the other hand, a shorter wall time may move your job forward in the queue: the scheduler may notice that there is a gap of 30 minutes between two bigger jobs on a node, and decide to insert a shorter job (this process is called backfilling). -

    To specify a wall time of ten minutes, you can use the following parameter (or directive) for 'qsub': -

    $ qsub -l walltime=00:10:00 job.pbs
    -

    The walltime is specified as (H)HH:MM:SS, so a job that is expected to run for two days can be described using -

    $ qsub -l walltime=48:00:00 job.pbs
    -

    Characteristics of the compute nodes

    site        architecture    np      installed mem   avail mem
    KU Leuven   Ivy Bridge      20      64 GB           60 GB
    KU Leuven   Ivy Bridge      20      128 GB          120 GB
    KU Leuven   harpertown      8       8 GB            7 GB
    KU Leuven   nehalem         8       24 GB           23 GB
    KU Leuven   nehalem (fat)   16(*)   74 GB           73 GB
    KU Leuven   westmere        12      24 GB           23 GB
    UA          harpertown      8       16 GB           15 GB
    UA          westmere        24(*)   24 GB           23 GB

    (*): These nodes have hyperthreading enabled. They have only 8 (nehalem) or 12 (westmere) physical cores, but create the illusion of 16 or 24 \"virtual\" cores effectively running together (i.e., 16 or 24 simultaneous threads). Some programs benefit from using two threads per physical core, some do not. -

    There is more information on the specific characteristics of the compute nodes in the various VSC clusters on the hardware description page for each cluster in the \"Available hardware\" section. -

    Number of processors

    By default, only one core (or CPU, or processor) will be assigned to a job. However, parallel jobs need more than one core, e.g., MPI or OpenMP applications. After deciding on the number of cores, the \"layout\" has to be chosen: can all cores of a node be used simultaneously, or do memory requirements dictate that only some of the cores of a node can be used? The layout can be specified using the 'nodes' and 'ppn' attributes. -

    The following example assumes that 16 cores will be used for the job, and that all cores on a compute node can be used simultaneously: -

    $ qsub -l nodes=2:ppn=8 job.pbs
    -

    There's no point in requesting more cores per node than are available. The maximum available ppn is processor dependent and is shown in the table above. On the other hand, due to memory consumption or memory access patterns, it may be necessary to restrict the number of cores per node, e.g., -

    $ qsub -l nodes=4:ppn=4 job.pbs
    -

    As in the previous example, this job requires 16 cores, but now only 4 out of the 8 available cores per compute node will be used. -

    It is very important to note that the resource manager may put any multiple of the requested 'ppn' on one node (this is called \"packing\") as long as the total does not exceed the number of cores in a node (8 in this example). E.g., when the job description specifies 'nodes=4:ppn=2', the system may actually assign it 4 times the same node: 2 x 4 = 8 cores. This behavior can be circumvented by setting the memory requirements appropriately. -

    Note that requesting multiple cores does not run your script on each of these cores! The system will start your script on one core only (the \"mother superior\") and provide it with a list of nodes that have cores available for you to use. This list is stored in a file '$PBS_NODEFILE'. You now have to \"manually\" start your program on these nodes. Some of this will be done automatically for you when you use MPI (see the section about Message Passing Interfaces). -
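
    As a small illustration (not needed when an MPI launcher handles this for you), the assigned resources can be inspected from within the job script:

    # number of cores assigned to this job
    n_proc=$(wc -l < $PBS_NODEFILE)
    echo "got $n_proc cores"
    # list of distinct nodes assigned to this job
    sort -u $PBS_NODEFILE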

    Processor type

    As seen in the table above, we have different architectures and different amounts of memory in different kinds of nodes. In some situations, it is convenient or even necessary to request a specific architecture for a job to run on. This is easily accomplished by adding a feature to the resource description, e.g., -

    $ qsub -l nodes=1:nehalem job.pbs
    -

    Here, a single node is requested, but it should be equipped with a Nehalem Intel processor. The following example specifies a job running on 2 x 4 cores of type 'harpertown'. -

    $ qsub -l nodes=2:ppn=4:harpertown job.pbs
    -

    Memory

    Besides the number of processors, the required amount of memory for a job is an important resource. This can be specified in two ways, either for the job in its entirety, or by individual process, i.e., per core. The following directive requests 2 GB of RAM for each core involved in the computation: -

    $ qsub -l nodes=2:ppn=4,pmem=2gb job.pbs
    -

    Note that requests for multiple resources, e.g., nodes and memory, are comma-separated. -

    As indicated in the table above, not all of the installed memory is available to the end user for running jobs: also the operating system, the cluster management software and, depending on the site also the file system, require memory. This implies that the memory specification for a single compute node should not exceed the figures shown in the table. If the memory requested exceeds the amount of memory available in a single compute node, the job can not be executed, and will remain in the queue indefinitely. The user is informed of this when he runs 'checkjob'. -

    Note that specifying 'pmem' judiciously will prevent unwanted packing, mentioned in the previous section. -

    Similar to the required memory per core, it is also possible to specify the total memory required by the job using the 'mem' directive. -
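
    A few illustrative requests, assuming the UA harpertown nodes from the table above (8 cores, 15 GB available); the exact figures are placeholders, not recommendations:

    # total memory for the job as a whole: 16 GB shared by all processes
    $ qsub -l nodes=2:ppn=4,mem=16gb job.pbs
    # per-core memory chosen so that only one chunk of 2 cores fits per node,
    # which also prevents the packing mentioned above
    $ qsub -l nodes=4:ppn=2,pmem=7gb job.pbs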

    Non-resource related PBS directives

    PBS/Torque has a number of convenient features that are not related to resource management as such. -

    Notification

    Some users like to be notified when their jobs are done, and this can be accomplished using the appropriate PBS directives. -

    $ qsub -m ae -M albert.einstein@princeton.edu job.pbs
    -

    Here, the user indicates that he wants to be notified either when his job is aborted ('a') by PBS/Torque (when, e.g., the requested walltime was exceeded), or when his job ends ('e'). The notification will be sent to the email address specified using the '-M' flag. -

    Apart from the abort ('a') and end ('e') events, a notification can also be sent when the job begins ('b') execution. -

    Job name

    By default, the name of a job is that of the PBS script that defines it. However, it may be easier to keep track of multiple runs of the same job script by assigning a specific name to each. A name can be specified explicitly by the '-N' directive, e.g., -

    $ qsub -N 'spaceweather' job.pbs
    -

    Note that this will result in the standard output and error files to be named 'spaceweather.o<nnn>' and 'spaceweather.e<nnn>'. -

    In-script PBS directives

    Given all these options, specifying them for each individual job submission on the command line soon gets a trifle unwieldy. As an alternative to passing PBS directives as command line arguments to 'qsub', they can be specified in the script that is being submitted. So instead of typing: -

    qsub -l nodes=8:ppn=2 job.pbs
    -

    the 'job.pbs' script can be altered to contain the following: -

    #!/bin/bash -l
    -#PBS -l nodes=8:ppn=2
    -...
    -

    The \"#PBS\" prefix indicates that a line contains a PBS directive. Note that PBS directives should preceed all commands in your script, i.e., they have to be listed immediately after the '#!/bin/bash -l' line! -

    If this PBS script were submitted as follows, the command line resource description would override that in the 'job.pbs' script: -

    $ qsub -l nodes=5:ppn=2 job.pbs
    -

    The job would run on 5 nodes, 2 cores each, rather than on 8 nodes, 2 cores each as specified in 'job.pbs'. -

    Any number of PBS directives can be listed in a script, e.g., -

    #!/bin/bash -l
    -# Request 8 nodes, with 2 cores each
    -#PBS -l nodes=8:ppn=2
    -# Request 2 Gb per core
    -#PBS -l pmem=2gb
    -# Request a walltime of 10 minutes
    -#PBS -l walltime=00:10:00
    -# Merge standard error into standard output
    -#PBS -j oe
    -#
    -...
    -
    " - diff --git a/HtmlDump/file_0271.html b/HtmlDump/file_0271.html deleted file mode 100644 index ea62dde30..000000000 --- a/HtmlDump/file_0271.html +++ /dev/null @@ -1,37 +0,0 @@ -

    This page is outdated. Please check our updated \"Running jobs\" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search) you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be.

    Job scheduling: Moab

    To map jobs to available resources, and to make sure the necessary resources are available when a job is started, the cluster is equipped with a job scheduler. The scheduler will accept new jobs from the users, and will schedule them according to walltime, number of processors needed, number of jobs the user already has scheduled, the number of jobs the user executed recently, etc. -

    For this task we currently use Moab: -

    Moab Cluster Suite is a policy-based intelligence engine that integrates scheduling, managing, monitoring and reporting of cluster workloads. It guarantees service levels are met while maximizing job throughput. Moab integrates with existing middleware for consolidated administrative control and holistic cluster reporting. Its graphical management interfaces and flexible policy capabilities result in decreased costs and increased ROI. (Adaptive Computing/Cluster Resources) -

    Most commands used so far were PBS/Torque commands. Moab also provides a few interesting commands, which are more related to the scheduling aspect of the system. For a full overview of all commands, please refer to the Moab user manual on their site. -

    Moab commands

    checkjob

    This is arguably the most useful Moab command since it provides detailed information on your job from the scheduler's point of view. It can give you important information about why your job fails to start. If a scheduling error occurs or your job is delayed, the reason will be shown here: -

    $ checkjob 20030021
    -checking job 20030021
    -State: Idle
    -Creds:  user:vsc30001  group:vsc30001  account:vsc30001  class:qreg  qos:basic
    -WallTime: 00:00:00 of 1:00:00
    -SubmitTime: Wed Mar 18 10:37:11
    -  (Time Queued  Total: 00:00:01  Eligible: 00:00:01)
    -Total Tasks: 896
    -Req[0]  TaskCount: 896  Partition: ALL
    -Network: [NONE]  Memory >= 0  Disk >= 0  Swap >= 0
    -Opsys: [NONE]  Arch: [NONE]  Features: [NONE]
    -IWD: [NONE]  Executable:  [NONE]
    -Bypass: 0  StartCount: 0
    -PartitionMask: [ALL]
    -Flags:       RESTARTABLE PREEMPTOR
    -PE:  896.00  StartPriority:  5000
    -job cannot run in partition DEFAULT (insufficient idle procs available: 752 < 896)
    -

    In this particular case, the job is delayed because the user asked for a total of 896 processors, and only 752 are available. The user will have to wait, or adapt his program to run on fewer processors. -

    showq

    This command will show you a list of running jobs, like qstat, but with somewhat different information per job. -

    showbf

    When the scheduler performs its scheduling task, there are bound to be some gaps between jobs on a node. These gaps can be backfilled with small jobs. To get an overview of these gaps, you can execute the command \"showbf\": -

    $ showbf
    -backfill window (user: 'vsc30001' group: 'vsc30001' partition: ALL) Wed Mar 18 10:31:02
    -323 procs available for      21:04:59
    -136 procs available for   13:19:28:58
    -

    showstart

    This is a very simple tool that will tell you, based on the current status of the cluster, when your job is scheduled to start. Note however that this is merely an estimate, and should not be relied upon: jobs can start sooner if other jobs finish early, get removed, etc., but jobs can also be delayed when other jobs with higher priority are submitted. -

    $ showstart 20030021
    -job 20030021 requires 896 procs for 1:00:00
    -Earliest start in       5:20:52:52 on Tue Mar 24 07:36:36
    -Earliest completion in  5:21:52:52 on Tue Mar 24 08:36:36
    -Best Partition: DEFAULT
    -
    " - diff --git a/HtmlDump/file_0273.html b/HtmlDump/file_0273.html deleted file mode 100644 index a031bd74f..000000000 --- a/HtmlDump/file_0273.html +++ /dev/null @@ -1,179 +0,0 @@ -

    Purpose

    The Worker framework has been developed to meet two specific use cases:

    1. parameter exploration: running the same (sequential) program for a large number of parameter instances; and
    2. job arrays: running the same job script for a large number of input files.

    Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values. However, Worker's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach. -

    This how-to shows you how to use the Worker framework. -

    Prerequisites

    A (sequential) job you have to run many times for various parameter values. We will use a non-existent program cfd-test by way of a running example. -

    Step by step

    We will consider the two use cases already mentioned above: parameter variations and job arrays. -

    Parameter variations

    Suppose the program the user wishes to run is 'cfd-test' (this program does not exist, it is just an example) that takes three parameters, a temperature, a pressure and a volume. A typical call of the program looks like: -

    cfd-test -t 20 -p 1.05 -v 4.3
    -

    The program will write its results to standard output. A PBS script (say run.pbs) that would run this as a job would then look like: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=1
    -#PBS -l walltime=00:15:00
    -cd $PBS_O_WORKDIR
    -cfd-test -t 20  -p 1.05  -v 4.3
    -

    When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3. To submit the job, the user would use: -

    $ qsub run.pbs
    -

    However, the user wants to run this program for many parameter instances, e.g., he wants to run the program on 100 instances of temperature, pressure and volume. To this end, the PBS file can be modified as follows: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=8
    -#PBS -l walltime=04:00:00
    -cd $PBS_O_WORKDIR
    -cfd-test -t $temperature  -p $pressure  -v $volume
    -

    Note that -

    1. the parameter values 20, 1.05, 4.3 have been replaced by variables $temperature, $pressure and $volume respectively;
    2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8); and
    3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

    The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1,500 minutes on one CPU. However, this job will use 7 CPUs (1 is reserved for delegating work), so the 100 calculations will be done in 1,500/7 = 215 minutes, i.e., 4 hours to be on the safe side. Note that starting from version 1.3, a dedicated core is no longer required for delegating work when using the -master flag. This is however not the default behavior since it is implemented using features that are not standard. This implies that in the previous example, the 100 calculations would be completed in 1,500/8 = 188 minutes. -

    The 100 parameter instances can be stored in a comma separated value file (CSV) that can be generated using a spreadsheet program such as Microsoft Excel, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file data.txt would look like: -

    temperature,pressure,volume
    -20,1.05,4.3
    -21,1.05,4.3
    -20,1.15,4.3
    -21,1.25,4.3
    -...
    -

    It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example. Items on a line are separated by commas. -
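
    For instance, a small shell loop can generate such a file; the parameter ranges below are made up purely for illustration (10 x 5 x 2 = 100 instances):

    echo "temperature,pressure,volume" > data.txt
    for t in $(seq 20 29); do
        for p in 1.05 1.15 1.25 1.35 1.45; do
            for v in 4.3 4.4; do
                echo "$t,$p,$v" >> data.txt
            done
        done
    done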

    The job can now be submitted as follows: -

    $ module load worker/1.5.0-intel-2014a
    -$ wsub -batch run.pbs -data data.txt
    -

    Note that the PBS file is the value of the -batch option. The cfd-test program will now be run for all 100 parameter instances—7 concurrently—until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance. -

    Job arrays

    Worker also supports a job array-like usage pattern, since this offers a convenient workflow. -

    A typical PBS script run.pbs for use with job arrays would look like this: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=1
    -#PBS -l walltime=00:15:00
    -cd $PBS_O_WORKDIR
    -INPUT_FILE=\"input_${PBS_ARRAYID}.dat\"
    -OUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"
    -word-count -input ${INPUT_FILE}  -output ${OUTPUT_FILE}
    -

    As in the previous section, the word-count program does not exist. Input for this fictitious program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat that the user produced by whatever means, and the corresponding output computed by word-count is written to output_1.dat, output_2.dat, ..., output_100.dat. (Here we assume that the non-existent word-count program takes options -input and -output.) -

    The job would be submitted using: -

    $ qsub -t 1-100 run.pbs
    -

    The effect is that rather than submitting a single job, the user actually submits 100 jobs to the queue system. Since this puts quite a burden on the scheduler, this is precisely why the scheduler doesn't support job arrays. -

    Using worker, a feature akin to job arrays can be used with minimal modifications to the PBS script: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=8
    -#PBS -l walltime=04:00:00
    -cd $PBS_O_WORKDIR
    -INPUT_FILE=\"input_${PBS_ARRAYID}.dat\"
    -OUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"
    -word-count -input ${INPUT_FILE}  -output ${OUTPUT_FILE}
    -

    Note that -

    1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and
    2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

    The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1,500 minutes on one CPU. However, this job will use 7 CPUs (1 is reserved for delegating work), so the 100 calculations will be done in 1,500/7 = 215 minutes, i.e., 4 hours to be on the safe side. Note that starting from version 1.3 when using the -master flag, a dedicated core for delegating work is no longer required. This is however not the default behavior since it is implemented using features that are not standard. So in the previous example, the 100 calculations would be done in 1,500/8 = 188 minutes. -

    The job is now submitted as follows: -

    $ module load worker/1.5.0-intel-2014a
    -$ wsub -t 1-100  -batch run.pbs
    -

    The word-count program will now be run for all 100 input files—7 concurrently—until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak. Note that in contrast to Torque job arrays, a Worker job array submits a single job. -

    MapReduce: prologues and epilogue

    Often, an embarrassingly parallel computation can be abstracted to three simple steps: -

    1. a preparation phase in which the data is split up into smaller, more manageable chunks;
    2. on these chunks, the same algorithm is applied independently (these are the work items); and
    3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

    The Worker framework directly supports this scenario by using a prologue and an epilogue. The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the prologue and epilogue are executed by the master, i.e., the process that is responsible for dispatching work and logging progress. -

    Suppose that 'split-data.sh' is a script that prepares the data by splitting it into 100 chuncks, and 'distr.sh' aggregates the data, then one can submit a MapReduce style job as follows: -

    $ wsub -prolog split-data.sh  -batch run.pbs  -epilog distr.sh -t 1-100
    -

    Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime. -

    Some notes on using Worker efficiently

    1. Worker is implemented using MPI, so it is not restricted to a single compute node; it scales well to many nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.
    2. Worker will be effective when -

    Monitoring a worker job

    Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be 'run.pbs.log445948', assuming the job's ID is 445948. To keep an eye on the progress, one can use: -

    $ tail -f run.pbs.log445948
    -

    Alternatively, a Worker command that summarizes a log file can be used: -

    $ watch -n 60 wsummarize run.pbs.log445948
    -

    This will summarize the log file every 60 seconds. -

    Time limits for work items

    Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could complete successfully are not even started. Again, a simple and yet versatile solution is offered by the Worker framework. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example. -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=8
    -#PBS -l walltime=04:00:00
    -module load timedrun/1.0.1
    -cd $PBS_O_WORKDIR
    -timedrun -t 00:20:00 cfd-test -t $temperature  -p $pressure  -v $volume
    -

    Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume. -
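
    For example, a hypothetical extra column maxtime could be added to the data file (the column name is an assumption; any valid shell variable name will do):

    temperature,pressure,volume,maxtime
    20,1.05,4.3,00:20:00
    21,1.05,4.3,00:30:00
    ...

    and the timedrun line in run.pbs then becomes:

    timedrun -t $maxtime cfd-test -t $temperature  -p $pressure  -v $volume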

    Also note that 'timedrun' is in fact offered in a module of its own, so it can be used outside the Worker framework as well. -

    Resuming a Worker job

    Unfortunately, it is not always easy to estimate the walltime for a job, and consequently, sometimes the latter is underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID '445948'. -

    $ wresume -jobid 445948
    -

    This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime. -

    $ wresume -l walltime=1:30:00 -jobid 445948
    -

    Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or by reporting a failure. It is also possible to retry work items that failed (preferably after the glitch that caused them to fail has been fixed). -

    $ wresume -jobid 445948 -retry
    -

    By default, a job's prologue is not executed when it is resumed, while its epilogue is. 'wresume' has options to modify this default behavior. -

    Aggregating result data

    In some settings, each work item produces a file as output, but the final result should be an aggregation of those files. Although this is not necessarily hard, it is tedious. Worker can help you achieve this easily since, typically, the file name produced by a work item is based on the parameters of that work item. -

    Consider the following data file data.csv: -

    a,   b
    -1.3, 5.7
    -2.7, 1.4
    -3.4, 2.1
    -4.1, 3.8
    -

    Processing it would produce 4 files, i.e., output-1.3-5.7.txt, output-2.7-1.4.txt, output-3.4-2.1.txt, output-4.1-3.8.txt. -To obtain the final data, these files should be concatenated into a single file - output.txt. This can be done easily using wcat: -

    $ wcat  -data data.csv  -pattern output-[%a%]-[%b%].txt  -output output.txt
    -

    The pattern describes the file names as generated by each work item in terms of the parameter names and values defined in the data file data.csv. -

    wcat optionally skips the first lines (e.g., headers) of all but the first file when the -skip_first n option is used (n is the number of lines to skip). By default, blank lines are omitted, but by using the -keep_blank option, they will be written to the output file. Help is available using the -help flag. -

    Multithreaded work items

    When a cluster is configured to use CPU sets, using Worker to execute multithreaded work items doesn't work by default. Suppose a node has 20 cores, and each work item runs most efficiently on 4 cores, then one would expect that the following resource specification would work: -

    $ wsub  -l nodes=10:ppn=5 -W x=nmatchpolicy=exactnode  -batch run.pbs  \\
    -        -data my_data.csv
    -

    This would run 5 work items per node, so that each work item would have 4 cores at its disposal. However, this will not work when CPU sets are active since the four work item threads would all run on a single core, which is detrimental for application performance, and leaves 15 out of the 20 cores idle. Simply adding the -threaded option will ensure that the behavior and performance is as expected: -

     $ wsub -l nodes=10:ppn=5 -batch run.pbs -data my_data.csv -threaded 4
    -

    Note however that using multithreaded work items may actually be less efficient than single-threaded execution in this setting of many work items, since the thread management overhead accumulates. -

    Also note that this feature is new since Worker version 1.5.x. -

    Further information

    For the information about the most recent version and new features please check the official worker documentation webpage.

    For information on how to run MPI programs as work items, please contact your friendly system administrator.

    This how-to introduces only Worker's basic features. The wsub command and all other Worker commands have some usage information that is printed when the -help option is specified: -

    ### error: batch file template should be specified
    -### usage: wsub  -batch <batch-file>          \\
    -#                [-data <data-files>]         \\
    -#                [-prolog <prolog-file>]      \\
    -#                [-epilog <epilog-file>]      \\
    -#                [-log <log-file>]            \\
    -#                [-mpiverbose]                \\
    -#                [-master]                    \\
    -#                [-threaded]                  \\
    -#                [-dryrun] [-verbose]         \\
    -#                [-quiet] [-help]             \\
    -#                [-t <array-req>]             \\
    -#                [<pbs-qsub-options>]
    -#
    -#   -batch <batch-file>   : batch file template, containing variables to be
    -#                           replaced with data from the data file(s) or the
    -#                           PBS array request option
    -#   -data <data-files>    : comma-separated list of data files (default CSV
    -#                           files) used to provide the data for the work
    -#                           items
    -#   -prolog <prolog-file> : prolog script to be executed before any of the
    -#                           work items are executed
    -#   -epilog <epilog-file> : epilog script to be executed after all the work
    -#                           items are executed
    -#   -mpiverbose           : pass verbose flag to the underlying MPI program
    -#   -verbose              : feedback information is written to standard error
    -#   -dryrun               : run without actually submitting the job, useful
    -#   -quiet                : don't show information
    -#   -help                 : print this help message
    -#   -master               : start an extra master process, i.e.,
    -#                           the number of slaves will be nodes*ppn
    -#   -threaded             : indicates that work items are multithreaded,
    -#                           ensures that CPU sets will have all cores,
    -#                           regardless of ppn, hence each work item will
    -#                           have <total node cores>/ppn cores for its
    -#                           threads
    -#   -t <array-req>        : qsub's PBS array request options, e.g., 1-10
    -#   <pbs-qsub-options>    : options passed on to the queue submission
    -#                           command
    -

    Troubleshooting

    The most common problem with the Worker framework is that it doesn't seem to work at all, showing messages in the error file about the module command failing to work. The cause is trivial, and easy to remedy. -

    Like any PBS script, a worker PBS file has to be in UNIX format! -

    If you edited a PBS script on your desktop, or something went wrong during sftp/scp, the PBS file may end up in DOS/Windows format, i.e., it has the wrong line endings. The PBS/torque queue system can not deal with that, so you will have to convert the file, e.g., for file 'run.pbs' -

    $ dos2unix run.pbs
    " - diff --git a/HtmlDump/file_0275.html b/HtmlDump/file_0275.html deleted file mode 100644 index 9ee071478..000000000 --- a/HtmlDump/file_0275.html +++ /dev/null @@ -1,38 +0,0 @@ -

    Purpose

    Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software. However, this information is valuable, since it helps to determine the characteristics of the compute nodes a job using this application should run on.

    Although the tool presented here can also be used to support the software development process, better tools are almost certainly available.

    Note that currently only single node jobs are supported, MPI support may be added in a future release.

    Prerequisites

    The user should be familiar with the Linux bash shell.

    Monitoring a program

    To start using monitor, first load the appropriate module:

    $ module load monitor

    Starting a program, e.g., simulation, to monitor is very straightforward

    $ monitor simulation

    monitor will write the CPU usage and memory consumption of simulation to standard error. Values will be displayed every 5 seconds. This is the rate at which monitor samples the program's metrics.

    Log file

    Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows:

    $ monitor -l simulation.log simulation

    For long running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

    $ monitor -l simulation.log -n 12 simulation

    Note that this option is only available when monitor writes its metrics to a log file, not when standard error is used.

    Modifying the sample resolution

    The interval at which monitor will show the metrics can be modified by specifying delta, the sample rate:

    $ monitor -d 60 simulation

    monitor will now print the program's metrics every 60 seconds. Note that the minimum delta value is 1 second.

    File sizes

    Some programs use temporary files, the size of which may also be a useful metric. monitor provides an option to display the size of one or more files:

    $ monitor -f tmp/simulation.tmp,cache simulation

    Here, the size of the file simulation.tmp in directory tmp, as well as the size of the file cache will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by ','.

    Programs with command line options

    Many programs, e.g., matlab, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

    $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m

    The use of '--' will ensure that monitor does not get confused by matlab's '-nojvm' and '-nodisplay' options.

    Subprocesses and multicore programs

    Some processes spawn one or more subprocesses. In that case, the metrics shown by monitor are aggregated over the process and all of its subprocesses (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100 %.

    Some (well, since this is a HPC cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100 %.

    When programs of this type are running on a computer with n cores, the CPU usage can go up to n x 100 %.

    Exit codes

    monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

    When monitor has to terminate in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
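
    For example, assuming the value 50 does not clash with the exit codes of your own program:

    $ export MONITOR_EXIT_ERROR=50
    $ monitor -l simulation.log simulation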

    Monitoring a running process

    It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

    $ monitor -p 18749

    Note that this feature can be (ab)used to monitor specific subprocesses.

    More information

    Help is available for monitor by issuing:

    $ monitor -h
    -### usage: monitor [-d <delta>] [-l <logfile>] [-f <files>]
    -#                  [-h] [-v] <cmd> | -p <pid>
    -# Monitor can be used to sample resource utilization of a process
    -# over time.  Monitor can sample a running process if the latter's PID
    -# is specified using the -p option, or it can start a command with
    -# parameters passed as arguments.  When one has to specify flags for
    -# the command to run, '--' can be used to delimit monitor's options, e.g.,
    -#    monitor -delta 5 -- matlab -nojvm -nodisplay calc.m
    -# Resources that can be monitored are memory and CPU utilization, as
    -# well as file sizes.
    -# The sampling resolution is determined by delta, i.e., monitor samples
    -# every <delta> seconds.
    -# -d <delta>   : sampling interval, specified in
    -#                seconds, or as [[dd:]hh:]mm:ss
    -# -l <logfile> : file to store sampling information; if omitted,
    -#                monitor information is printed on stderr
    -# -n <lines>   : retain only the last <lines> lines in the log file,
    -#                note that this option only makes sense when combined
    -#                with -l, and that the log file lines will not be sorted
    -#                according to time
    -# -f <files>   : comma-separated list of file names that are monitored
    -#                for size; if a file doesn't exist at a given time, the
    -#                entry will be 'N/A'
    -# -v           : give verbose feedback
    -# -h           : print this help message and exit
    -# <cmd>        : actual command to run, followed by whatever
    -#                parameters needed
    -# -p <pid>     : process ID to monitor
    -#
    -# Exit status: * 65 for any montor related error
    -#              * exit status of <cmd> otherwise
    -# Note: if the exit code 65 conflicts with those of the
    -#       command to run, it can be customized by setting the
    -#       environment variables 'MONITOR_EXIT_ERROR' to any value
    -#       between 1 and 255 (0 is not prohibited, but this is probably.
    -#       not what you want).
    " - diff --git a/HtmlDump/file_0277.html b/HtmlDump/file_0277.html deleted file mode 100644 index cf311a947..000000000 --- a/HtmlDump/file_0277.html +++ /dev/null @@ -1,58 +0,0 @@ -

    What is checkpointing

    Checkpointing allows for running jobs that run for weeks or months. Each time a subjob is running out of requested wall time, a snapshot of the application memory (and much more) is taken and stored, after which a subsequent subjob will pick up the checkpoint and continue.

    If the compute nodes have support for BLCR, checkpointing can be used.

    How to use it

    Using checkpointing is very simple: just use csub instead of qsub to submit a job.

    The csub command creates a wrapper around your job script, to take care of all the checkpointing stuff. In practice, you (usually) don't need to adjust anything, except for the command used to submit your job. Checkpointing does not require any changes to the application you are running, and should support most software. There are a few corner cases however (see the BLCR Frequently Asked Questions).

    The csub command

    Typically, a job script is submitted with checkpointing support enabled by running:

    $ csub -s job_script.sh

    One important caveat is that the job script (or the applications run in the script) should not create its own local temporary directories.

    Also note that adding PBS directives (#PBS) in the job script is useless, as they will be ignored by csub. Controlling job parameters should be done via the csub command line.

    Help on the various command line parameters supported by csub can be obtained using -h:

     $ csub -h
    -    csub [opts] [-s jobscript]
    -    
    -    Options:
    -        -h or --help               Display this message
    -        
    -        -s                         Name of jobscript used for job.
    -                                   Warning: The jobscript should not create it's own local temporary directories.
    -        
    -        -q                         Queue to submit job in [default: scheduler default queue]
    -        
    -        -t                         Array job specification (see -t in man qsub) [default: none]
    -        
    -        --pre                      Run prestage script (Current: copy local files) [default: no prestage]
    -
    -        --post                     Run poststage script (Current: copy results to localdir/result.) [default: no poststage]
    -
    -        --shared                   Run in shared directory (no pro/epilogue, shared checkpoint) [default: run in local dir]
    -
    -        --no_mimic_pro_epi         Do not mimic prologue/epilogue scripts [default: mimic pro/epi (bug workaround)]
    -        
    -        --job_time=<string>        Specify wall time for job (format: <hours>:<minutes>:<seconds>s, e.g. 3:12:47) [default: 10h]
    -
    -        --chkpt_time=<string>      Specify time for checkpointing a job (format: see --job_time) [default: 15m]
    -        
    -        --cleanup_after_restart    Specify whether checkpoint file and tarball should be cleaned up after a successful restart
    -                                   (NOT RECOMMENDED!) [default: no cleanup]
    -        
    -        --no_cleanup_chkpt         Don't clean up checkpoint stuff in $VSC_SCRATCH/chkpt after job completion [default: do cleanup]
    -        
    -        --resume=<string>          Try to resume a checkpointed job; argument should be unique name of job to resume [default: none]
    -        
    -        --chkpt_save_opt=<string>  Save option to use for cr_checkpoint (all|exe|none) [default: exe]
    -        
    -        --term_kill_mode           Kill checkpointed process with SIGTERM instead of SIGKILL after checkpointing [defailt: SIGKILL]
    -        
    -        --vmem=<string>            Specify amount of virtual memory required [default: none specified]\"
    -
    -

    Below we discuss various command line parameters.

    -
    --pre and --post
    -
    The --pre and --post parameters steer whether local files are copied or not. The job submitted using csub (by default) runs on the local storage provided by a particular compute node. Thus, no changes will be made to the files on the shared storage (e.g. $VSC_SCRATCH).
    - If the job script needs (local) access to the files of the directory where csub is executed, --pre should be specified. This will copy all the files in the job script directory to the location where the job script will execute.
    - If the output of the job that was run, or additional output files created by the job in its working directory are required, --post should be used. This will copy the entire job working directory to the location where csub was executed, in a directory named result.<jobname>. An alternative is to copy the interesting files to the shared storage at the end of the job script.
    -
    --shared
    -
    If the job needs to be run on the shared storage and not on the local storage of the workernode (for whatever reason), --shared should be specified. In this case, the job will be run in a subdirectory of $VSC_SCRATCH/chkpt. This will also disable the execution of the prologue and epilogue scripts, which prepare the job directory on the local storage.
    -
    --job_time and --chkpt_time
    -
    To specify the requested wall time per subjob, use the --job_time parameter. The default setting is 10 hours per subjob. Lowering this will result in more frequent checkpointing, and thus more subjobs.
    - To specify the time that is reserved for checkpointing the job, use --chkpt_time. By default, this is set to 15 minutes, which should be enough for most applications/jobs. Don't change this unless you really need to.
    - The total requested wall time per subjob is the sum of both job_time and chkpt_time. This should be taken into account when submitting to a specific job queue (e.g., queues which only support jobs of up to 1 hour). An example invocation is given after this list.
    -
    --no_mimic_pro_epi
    -
    The option --no_mimic_pro_epi disables the workaround currently implemented for a permissions problem when using actual Torque prologue/epilogue scripts. Don't use this option unless you really know what you're doing!
    -
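
    As an example of combining these options (the values are arbitrary), the following submits subjobs of 3 hours each, with 15 minutes reserved for checkpointing, i.e., a total of 3 hours and 15 minutes of wall time per subjob:

    $ csub -s job_script.sh --job_time=3:00:00 --chkpt_time=0:15:00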

    Support for csub

    Notes

    If you would like to time how long the complete job executes, just prepend the main command in your job script with time, e.g.: time <command>. The real time will not make sense as it will also include the time that passes between two checkpointed subjobs. However, the user time should give a good indication of the actual time it took to run your command, even if multiple checkpoints were performed.

    " - diff --git a/HtmlDump/file_0279.html b/HtmlDump/file_0279.html deleted file mode 100644 index 7850332a0..000000000 --- a/HtmlDump/file_0279.html +++ /dev/null @@ -1,3 +0,0 @@ -" - diff --git a/HtmlDump/file_0281.html b/HtmlDump/file_0281.html deleted file mode 100644 index d415c7899..000000000 --- a/HtmlDump/file_0281.html +++ /dev/null @@ -1,10 +0,0 @@ -

    This section is still rather empty. It will be expanded over time.

    Visualization software

    " - diff --git a/HtmlDump/file_0283.html b/HtmlDump/file_0283.html deleted file mode 100644 index 8f1b3fdd0..000000000 --- a/HtmlDump/file_0283.html +++ /dev/null @@ -1,36 +0,0 @@ -

    Prerequisites

    You should have ParaView installed on your desktop, and know how to use it (the latter is outside the scope of this page). Note: the client and server version should match to avoid problems!

    Overview

    Working with ParaView to remotely visualize data requires the following steps which will be explained in turn in the subsections below: -

    1. start ParaView on the cluster;
    2. establish an SSH tunnel;
    3. connect to the remote server using ParaView on your desktop; and
    4. terminate the server session on the compute node.

    Start ParaView on the cluster

    First, start an interactive job on the cluster, e.g., -

    $ qsub  -I  -l nodes=1,ppn=20
    -

    Given that remote visualization makes sense most for large data sets, 64 GB of RAM is probably the minimum you will need. To use a node with more memory, add a memory specification, e.g., -l mem=120gb. If this is not sufficient, you should consider using Cerebro. -

    Once this interactive session is active, you can optionally navigate to the directory containing the data to visualize (not shown below), load the appropriate module, and start the server: -

    $ module load Paraview/4.1.0-foss-2014a
    -$ n_proc=$(cat $PBS_NODEFILE  |  wc  -l)
    -$ mpirun  -np $n_proc pvserver  --use-offscreen-rendering \\
    -                                --server-port=11111
    -

    Note the name of the compute node your job is running on; you will need it in the next step to establish the required SSH tunnel. -

    Establish an SSH tunnel

    To connect the ParaView client on your desktop with the ParaView server on the compute node, an SSH tunnel has to be established between your desktop and that compute node. Details for Windows using PuTTY and Linux using ssh are given in the appropriate client software sections. -
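
    On Linux or macOS, such a tunnel could look like the sketch below; the login host and the compute node name (noted in the previous step) are placeholders that depend on your site and your job. Local port 11111 matches the --server-port value used when starting pvserver above.

    $ ssh -L 11111:<compute-node-name>:11111 <vsc-account>@<login-node-of-your-site>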

    Connect to the remote server using ParaView on your desktop

    Since ParaView's user interface is identical on all platforms, connecting from the client side is documented on this page. Note that this configuration step has to be performed only once if you always use the same local port. -

    \""Choose -

    \""Configure -

    \""Configure

    \""Choose -

    You can now work with ParaView as you would when visualizing local files. -

    Terminating the server session on the compute node

    Once you've quit ParaView on the desktop the server process will terminate automatically. However, don't forget to close your session on the compute node since leaving it open will consume credits. -

    $ logout
    -

    Further information

    More information on ParaView can be found on its website. A decent tutorial on using Paraview is also available from the VTK public wiki. -

    " - diff --git a/HtmlDump/file_0285.html b/HtmlDump/file_0285.html deleted file mode 100644 index 1f9bcf521..000000000 --- a/HtmlDump/file_0285.html +++ /dev/null @@ -1,5 +0,0 @@ -

    BEgrid is currently documented by BELNET. Some useful links are:

    " - diff --git a/HtmlDump/file_0287.html b/HtmlDump/file_0287.html deleted file mode 100644 index b5934de01..000000000 --- a/HtmlDump/file_0287.html +++ /dev/null @@ -1,50 +0,0 @@ -

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. The following locations are available: the home directory ($VSC_HOME), the data directory ($VSC_DATA) and one or more scratch directories ($VSC_SCRATCH and variants); each is discussed below.

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.

    Quota is enabled on the three directories, which means the amount of data you can store here is limited by the operating system, and not by \"the boundaries of the hard disk\". You can see your current usage and the current limits with the appropriate quota command as explained on How do I know how much disk space I am using?. The actual disk capacity, shared by all users, can be found on the Available hardware page.

    You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus you will lose these new files. You will however not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

    This directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and usually used frequently. Also all kinds of configuration files are stored here, e.g., by Matlab, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

.ssh/ | This directory contains some files necessary for you to log in to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
.profile | This script defines some general settings about your sessions.
.bashrc | This script is executed every time you start a session on the cluster: when you log in to the cluster and when a job starts. You could edit this file and, e.g., add "module load XYZ" if you want to automatically load module XYZ whenever you log in to the cluster, although we do not recommend loading modules in your .bashrc (see the sketch below).
.bash_history | This file contains the commands you typed at your shell prompt, in case you need them again.
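Purely as an illustration of the remark about .bashrc above (the alias is made up and XYZ is a placeholder module name), a personal addition could look like:

# example additions to ~/.bashrc (illustrative only)
alias ll='ls -lh'                # a harmless personal alias
# module load XYZ                # possible, but loading modules here is not recommended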

    Data directory

In this directory you can store all other data that you need for a longer term. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you will achieve on this volume.

    Scratch space

To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job or your batch of jobs).

You should remove any data from these file systems after your processing has finished. There are no guarantees about how long data will be kept on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on the scratch file systems depends on the type of scratch and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain unchanged forever, and we may adapt them if this seems necessary for the healthy operation of the cluster.

Each type of scratch has its own use:

    " - diff --git a/HtmlDump/file_0289.html b/HtmlDump/file_0289.html deleted file mode 100644 index b5934de01..000000000 --- a/HtmlDump/file_0289.html +++ /dev/null @@ -1,50 +0,0 @@ -


    " - diff --git a/HtmlDump/file_0303.html b/HtmlDump/file_0303.html deleted file mode 100644 index 4940721b6..000000000 --- a/HtmlDump/file_0303.html +++ /dev/null @@ -1,153 +0,0 @@ -

    Hardware details

    The VUB cluster contains a mix of nodes with AMD and Intel processors and different interconnects in different sections of the cluster. The cluster also contains a number of nodes with NVIDIA GPUs.

    Login nodes:

    Compute nodes:

nodes | processor | memory | disk | network | others
40 | 2x 8-core AMD 6134 (Magny-Cours) | 64 GB | 900 GB | QDR-IB | soon to be phased out
11 | 2x 10-core Intel E5-2680v2 (Ivy Bridge) | 128 GB | 900 GB | QDR-IB |
20 | 2x 10-core Intel E5-2680v2 (Ivy Bridge) | 256 GB | 900 GB | QDR-IB |
6 | 2x 10-core Intel E5-2680v2 (Ivy Bridge) | 128 GB | 900 GB | QDR-IB | 2x NVIDIA Tesla K20X GPGPUs with 6 GB memory in each node
27 | 2x 14-core Intel E5-2680v4 (Broadwell) | 256 GB | 1 TB | 10 Gbps |
1 | 4x 10-core Intel E7-8891v4 (Broadwell) | 1.5 TB | 4 TB | 10 Gbps |
4 | 2x 12-core Intel E5-2650v4 (Broadwell) | 256 GB | 2 TB | 10 Gbps | 2x NVIDIA Tesla P100 GPGPUs with 16 GB memory in each node
1 | 2x 16-core Intel E5-2683v4 (Broadwell) | 512 GB | 8 TB | 10 Gbps | 4x NVIDIA GeForce GTX 1080 Ti GPUs with 12 GB memory in each node
21 | 2x 20-core Intel Xeon Gold 6148 (Skylake) | 192 GB | 1 TB | 10 Gbps |

    Network Storage:

    " - diff --git a/HtmlDump/file_0305.html b/HtmlDump/file_0305.html deleted file mode 100644 index 8c1a2d511..000000000 --- a/HtmlDump/file_0305.html +++ /dev/null @@ -1,418 +0,0 @@ -

UAntwerpen has two clusters, Leibniz and Hopper. Turing, an older cluster, was retired in early 2017.

    Local documentation

    Leibniz

Leibniz was installed in the spring of 2017. It is a NEC system consisting of 152 nodes with two 14-core Intel E5-2680v4 (Broadwell generation) CPUs connected through an EDR InfiniBand network. 144 of these nodes have 128 GB RAM, the other 8 have 256 GB RAM. The nodes do not have a sizeable local disk. The cluster also contains a node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal generation) and one node with an Intel Xeon Phi expansion board.

    Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC users, though we appreciate it if you contact the UAntwerpen support team so that we know why you want to use the cluster.

    Jobs can have a maximal execution wall time of 3 days (72 hours). -

    Hardware details

    Login infrastructure

Direct login is possible to both login nodes and to the visualization node.

| External interface | Internal interface
Login generic | login-leibniz.uantwerpen.be |
Login | login1-leibniz.uantwerpen.be, login2-leibniz.uantwerpen.be | ln1.leibniz.antwerpen.vsc, ln2.leibniz.antwerpen.vsc
Visualisation node | viz1-leibniz.uantwerpen.be | viz1.leibniz.antwerpen.vsc

    Storage organization

See the section on the storage organization of Hopper.

    Characteristics of the compute nodes

Since Leibniz is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify the corresponding properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Leibniz (or other VSC clusters) and to prepare for future extensions of the cluster, the following features are defined:

property | explanation
broadwell | only use Intel processors from the Broadwell family (E5-XXXv4) (not needed at the moment as this is the only CPU type)
ib | use the InfiniBand interconnect (not needed at the moment as all nodes are connected to the InfiniBand interconnect)
mem128 | use nodes with 128 GB RAM (roughly 112 GB available); this is the majority of the nodes on Leibniz
mem256 | use nodes with 256 GB RAM (roughly 240 GB available); this property is useful if you submit a batch of jobs that require more than 4 GB of RAM per processor but do not use all cores, and you do not want to bundle jobs yourself with a tool such as Worker, as it helps the scheduler to put those jobs on nodes that can be further filled with your jobs
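For illustration only (job.pbs is a placeholder script name and the exact resource string may differ between sites), requesting a full 256 GB Broadwell node with these properties could look like:

$ qsub -l nodes=1:ppn=28:broadwell:mem256 job.pbs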

These characteristics map to the following nodes on Leibniz:

Type of node | CPU type | Interconnect | # nodes | # physical cores (per node) | # logical cores (per node) | installed mem (per node) | avail mem (per node) | local disc
broadwell:ib:mem128 | Xeon E5-2680v4 | IB-EDR | 144 | 28 | 28 | 128 GB | 112 GB | ~25 GB
broadwell:ib:mem256 | Xeon E5-2680v4 | IB-EDR | 8 | 28 | 28 | 256 GB | 240 GB | ~25 GB


    Hopper

Hopper is the older of the two UAntwerpen clusters. It is an HP system consisting of 168 nodes with two 10-core Intel E5-2680v2 (Ivy Bridge generation) CPUs connected through an FDR10 InfiniBand network. 144 nodes have a memory capacity of 64 GB, while 24 nodes have 256 GB of RAM. The system has been reconfigured to have a software setup that is essentially the same as on Leibniz.

    Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC users, though we appreciate it if you contact the UAntwerpen support team so that we know why you want to use the cluster.

    Jobs can have a maximal execution wall time of 3 days (72 hours). -

    Hardware details

    Login infrastructure

Direct login is possible to the login nodes.

| External interface | Internal interface
Login generic | login.hpc.uantwerpen.be, login-hopper.uantwerpen.be |
Login nodes | login1-hopper.uantwerpen.be, login2-hopper.uantwerpen.be, login3-hopper.uantwerpen.be, login4-hopper.uantwerpen.be | ln01.hopper.antwerpen.vsc, ln02.hopper.antwerpen.vsc, ln03.hopper.antwerpen.vsc, ln04.hopper.antwerpen.vsc

    Storage organisation

The storage is organised according to the VSC storage guidelines.

Name | Variable | Type | Access | Backup | Default quota
/user/antwerpen/20X/vsc20XYZ | $VSC_HOME | GPFS | VSC | NO | 3 GB
/data/antwerpen/20X/vsc20XYZ | $VSC_DATA | GPFS | VSC | NO | 25 GB
/scratch/antwerpen/20X/vsc20XYZ | $VSC_SCRATCH, $VSC_SCRATCH_SITE | GPFS | Hopper, Leibniz | NO | 25 GB
/small/antwerpen/20X/vsc20XYZ (*) | | GPFS | Hopper, Leibniz | NO | 0 GB
/tmp | $VSC_SCRATCH_NODE | ext4 | Node | NO | 250 GB (Hopper)

(*) /small is a file system optimised for the storage of small files of types that do not belong in $VSC_HOME. The file systems pointed to by $VSC_DATA and $VSC_SCRATCH have a large fragment size (128 kB) for optimal performance on larger files, and since each file occupies at least one fragment, small files waste a lot of space on those file systems. The /small file system is available on request.

    For users from other universities, the quota on $VSC_HOME and $VSC_DATA will be determined by the local policy of your home institution as these file systems are mounted from there. The pathnames will be similar with trivial modifications based on your home institution and VSC account number. -

    Characteristics of the compute nodes

Since Hopper is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify these properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Turing (or other VSC clusters) and to prepare for future extensions of the cluster, the following features are defined:

property | explanation
ivybridge | only use Intel processors from the Ivy Bridge family (E5-XXXv2) (not needed at the moment as this is the only CPU type)
ib | use the InfiniBand interconnect (only for compatibility with Turing job scripts, as all nodes have InfiniBand)
mem64 | use nodes with 64 GB RAM (58 GB available)
mem256 | use nodes with 256 GB RAM (250 GB available)

    These characteristics map to the following nodes on Hopper: -

Type of node | CPU type | Interconnect | # nodes | # physical cores (per node) | # logical cores (per node) | installed mem (per node) | avail mem (per node) | local disc
ivybridge:ib:mem64 | Xeon E5-2680v2 | IB-FDR10 | 144 | 20 | 20 | 64 GB | 56 GB | ~360 GB
ivybridge:ib:mem256 | Xeon E5-2680v2 | IB-FDR10 | 24 | 20 | 20 | 256 GB | 248 GB | ~360 GB

    Turing

In July 2009, UAntwerpen bought a 768-core cluster (L5420 CPUs, 16 GB RAM/node) from HP, which was installed and configured in December 2009. In December 2010, the cluster was extended with 768 cores (L5640 CPUs, 24 GB RAM/node). In September 2011, another 96 cores (L5640 CPUs, 24 GB RAM/node) were added. Turing was retired in January 2017.

    " - diff --git a/HtmlDump/file_0307.html b/HtmlDump/file_0307.html deleted file mode 100644 index 59a0550ff..000000000 --- a/HtmlDump/file_0307.html +++ /dev/null @@ -1,281 +0,0 @@ -

    Hardware details


    Characteristics of the compute nodes

    -

    The following properties allow you to select the appropriate node type for your job (see also the page on specifying resources, output files and notifications): -

Cluster | Type of node | CPU type | Interconnect | # cores | installed mem | avail mem | local discs | # nodes
ThinKing | ivybridge | Xeon E5-2680v2 | IB-QDR | 20 | 64 GB | 60 GB | 250 GB | 176
ThinKing | ivybridge | Xeon E5-2680v2 | IB-QDR | 20 | 128 GB | 124 GB | 250 GB | 32
ThinKing | haswell | Xeon E5-2680v3 | IB-FDR | 24 | 64 GB | 60 GB | 150 GB | 48
ThinKing | haswell | Xeon E5-2680v3 | IB-FDR | 24 | 128 GB | 124 GB | 150 GB | 96
Genius | skylake | Xeon 6140 | IB-EDR | 36 | 192 GB | 188 GB | 800 GB | 86
Genius | skylake large memory | Xeon 6140 | IB-EDR | 36 | 768 GB | 764 GB | 800 GB | 10
Genius | skylake GPU | Xeon 6140, 4x P100 SXM2 | IB-EDR | 36 | 192 GB | 188 GB | 800 GB | 20

    For using Cerebro, the shared memory section, we refer to the Cerebro Quick Start Guide. -

    -

    Implementation of the VSC directory structure

In the transition phase between Vic3 and ThinKing, the storage is mounted on both systems. When switching from Vic3 to ThinKing you will not need to migrate your data.

The cluster uses the directory structure that is implemented on most VSC clusters. This implies that each user has two personal directories: the home directory ($VSC_HOME) and the data directory ($VSC_DATA).

There are three further environment variables that point to other directories that can be used: $VSC_SCRATCH, $VSC_SCRATCH_SITE and $VSC_SCRATCH_NODE.

    Access restrictions

Access is available for faculty, students (under faculty supervision), and researchers of the KU Leuven, UHasselt and their associations. This cluster is being integrated in the VSC network and as such becomes available to all VSC users.

    History

In September 2013 a new thin node cluster (HP) and a shared memory system (SGI) were bought. The thin node cluster was installed and configured in January/February 2014 and extended in September 2014. Installation and configuration of the SMP was done in April 2014. Financing of these systems was obtained from the Hercules foundation and the Flemish government.

Do you want to see it? Have a look at the movie.

    " - diff --git a/HtmlDump/file_0309.html b/HtmlDump/file_0309.html deleted file mode 100644 index ddf78c2be..000000000 --- a/HtmlDump/file_0309.html +++ /dev/null @@ -1,70 +0,0 @@ -

    Overview

    The tier-1 cluster muk is primarily aimed at large parallel computing jobs that require a high-bandwidth low-latency interconnect, but jobs that require a multitude of small independent tasks are also accepted.

    The main architectural features are: -

    The cluster appeared for several years in the Top500 list of supercomputer sites: -

| June 2012 | Nov 2012 | June 2013 | Nov 2013 | June 2014
Ranking | 118 | 163 | 239 | 306 | 430

    Compute time on muk is only available upon approval of a project. Information on requesting projects is available in Dutch and in English -

    Access restriction

Once your project has been approved, your login on the tier-1 cluster will be enabled. You use the same VSC account (vscXXXXX) as at your home institution, and you use the same $VSC_HOME and $VSC_DATA directories, though the tier-1 does have its own scratch directories.

    A direct login from your own computer through the public network to muk is not possible for security reasons. You have to enter via the VSC network, which is reachable from all Flemish university networks. -

ssh login.hpc.uantwerpen.be
ssh login.hpc.ugent.be
ssh login.hpc.kuleuven.be   (or login2.hpc.kuleuven.be)

Make sure that you have connected at least once to the login nodes of your institution before attempting to access the tier-1.

    Once on the VSC network, you can -

    There are two options to log on to these systems over the VSC network: -

1. You log on to your home cluster. At the command line, you start an ssh session to login.muk.gent.vsc:
   ssh login.muk.gent.vsc
2. You set up a so-called ssh proxy through your usual VSC login node vsc.login.node (the proxy server in this process) to login.muk.gent.vsc or gligar01.ugent.be; a minimal OpenSSH sketch is given below.
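As an illustration of the second option only (vscXXXXX and vsc.login.node are placeholders for your own account and your usual login node, and this assumes an OpenSSH client that supports the -W option):

$ ssh -o ProxyCommand="ssh -q -W %h:%p vscXXXXX@vsc.login.node" vscXXXXX@login.muk.gent.vsc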

    Resource limits

    Disk quota

    Memory

    " - diff --git a/HtmlDump/file_0311.html b/HtmlDump/file_0311.html deleted file mode 100644 index 332c26d05..000000000 --- a/HtmlDump/file_0311.html +++ /dev/null @@ -1,7 +0,0 @@ -

    Access

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>

    or -

    qsub  -l partition=gpu,nodes=1:K40c <jobscript>
    -

depending on which GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this:

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -
    " - diff --git a/HtmlDump/file_0313.html b/HtmlDump/file_0313.html deleted file mode 100644 index 88365fff2..000000000 --- a/HtmlDump/file_0313.html +++ /dev/null @@ -1,16 +0,0 @@ -

    Tier-1

    Experimental setup

    Tier-2

    Four university-level cluster groups are also embedded in the VSC and partly funded from VSC budgets: -

    " - diff --git a/HtmlDump/file_0315.html b/HtmlDump/file_0315.html deleted file mode 100644 index 146f80a85..000000000 --- a/HtmlDump/file_0315.html +++ /dev/null @@ -1,62 +0,0 @@ -

    The icons

Windows | Works on Windows, but may need additional pure Windows packages (free or commercial)
Windows+ | Works on Windows with a UNIX compatibility layer added, e.g., Cygwin or the "Windows Subsystem for Linux" in Windows 10 build 1607 (Anniversary Update) or later

    Getting ready to request an account

    Connecting to the cluster

    Programming tools

    " - diff --git a/HtmlDump/file_0317.html b/HtmlDump/file_0317.html deleted file mode 100644 index 2d9cc3458..000000000 --- a/HtmlDump/file_0317.html +++ /dev/null @@ -1,24 +0,0 @@ -

    Prerequisite: PuTTY and WinSCP

    You've generated a public/private key pair with PuTTY and have an approved account on the VSC clusters.

    Connecting to the VSC clusters

When you start the PuTTY executable 'putty.exe', a configuration screen pops up. Follow the steps below to set up the connection to (one of) the VSC clusters.

In the screenshots, we show the setup for user vsc98765 connecting to the ThinKing cluster at KU Leuven via the login node login.hpc.kuleuven.be.

You can find the names and IP addresses of the login nodes in the sections on the local VSC clusters.

Alternatively, you can follow a short video explaining step by step how to connect to the VSC login nodes (the example is based on the KU Leuven cluster).

1. Within the category Session, in the field 'Host Name', type in <vsc-loginnode>, which is the name of the login node of the VSC cluster you want to connect to. (screenshot: PuTTY Session configuration)
2. In the category Connection > Data, in the field 'Auto-login username', fill in <vsc-account>, which is the VSC username that you received by mail after your request was approved.
3. In the category Connection > SSH > Auth, click on 'Browse' and select the private key that you generated and saved above. (screenshot: PuTTY Auth configuration) Here, the private key was previously saved in the folder C:\Documents and Settings\Me\Keys. In newer versions of Windows, "C:\Users" is used instead of "C:\Documents and Settings".
4. In the category Connection > SSH > X11, tick the 'Enable X11 forwarding' checkbox. (screenshot: PuTTY X11 configuration)
5. Now go back to Session, fill in a name in the 'Saved Sessions' field and press 'Save' to store the session information.
6. Pressing 'Open' should now ask for your passphrase and connect you to <vsc-loginnode>.

The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node. (screenshot: PuTTY Security Alert)

    For future sessions, just select your saved session from the list and press 'Open'. -

    " - diff --git a/HtmlDump/file_0319.html b/HtmlDump/file_0319.html deleted file mode 100644 index 45774a6ba..000000000 --- a/HtmlDump/file_0319.html +++ /dev/null @@ -1,11 +0,0 @@ -

    Getting started with Pageant

Pageant is an SSH authentication agent that you can use with PuTTY and FileZilla. Before you run Pageant, you need to have a private key in PPK format (the filename ends in .ppk). See our page on generating keys with PuTTY to find out how to generate and use one. When you run Pageant, it will put an icon of a computer wearing a hat into the System tray. It will then sit and do nothing until you load a private key into it. If you click the Pageant icon with the right mouse button, you will see a menu. Select 'View Keys' from this menu. The Pageant main window will appear. (You can also bring this window up by double-clicking on the Pageant icon.) The Pageant window contains a list box showing the private keys Pageant is holding. When you start Pageant, it has no keys, so the list box will be empty. After you add one or more keys, they will show up in the list box.

    - To add a key to Pageant, press the ‘Add Key’ button. Pageant will bring up a file dialog, labelled ‘Select Private Key File’. Find your private key file in this dialog, and press ‘Open’. Pageant will now load the private key. If the key is protected by a passphrase, Pageant will ask you to type the passphrase. When the key has been loaded, it will appear in the list in the Pageant window. -

    - Now start PuTTY (or Filezilla) and open an SSH session to a site that accepts your key. PuTTY (or Filezilla) will notice that Pageant is running, retrieve the key automatically from Pageant, and use it to authenticate. You can now open as many PuTTY sessions as you like without having to type your passphrase again. -

    - When you want to shut down Pageant, click the right button on the Pageant icon in the System tray, and select ‘Exit’ from the menu. Closing the Pageant main window does not shut down Pageant. -

    - You can find more info in the on-line manual. -

SSH authentication agents are very handy, as you no longer need to type your passphrase every time you log in to the cluster. It also implies that when someone gains access to your computer, they automatically gain access to your account on the cluster as well. So be very careful and lock your screen when you're not at your computer! It is your responsibility to keep your computer safe and prevent easy intrusion into your VSC account via an obviously unprotected PC.

    " - diff --git a/HtmlDump/file_0321.html b/HtmlDump/file_0321.html deleted file mode 100644 index 57f011f0f..000000000 --- a/HtmlDump/file_0321.html +++ /dev/null @@ -1,99 +0,0 @@ -

    Rationale

    - ssh provides a safe way of connecting to a computer, encrypting traffic and avoiding passing passwords across public networks where your traffic might be intercepted by someone else. Yet making a server accessible from all over the world makes that server very vulnerable. Therefore servers are often put behind a firewall, another computer or device that filters traffic coming from the internet. -

    - In the VSC, all clusters are behind a firewall, but for the tier-1 cluster muk this firewall is a bit more restrictive than for other clusters. Muk can only be approached from certain other computers in the VSC network, and only via the internal VSC network and not from the public network. To avoid having to log on twice, first to another login node in the VSC network and then from there on to Muk, one can set up a so-called ssh proxy. You then connect through another computer (the proxy server) to the computer that you really want to connect to. -

    - This all sounds quite complicated, but once things are configured properly it is really simple to log on to the host. -

Setting up a proxy in PuTTY

    - Setting up the connection in PuTTY is a bit more complicated than for a simple direct connection to a login node. -

1. First you need to start up Pageant and load your private key into it. See the instructions on our "Using Pageant" page.
2. In PuTTY, go first to the "Proxy" category (under "Connection"). In the Proxy tab sheet, fill in the following information (see screenshot):
   a. Select the proxy type "Local".
   b. Give the name of the "proxy server". This is vsc.login.node, your usual VSC login node, and not the computer on which you want to log on and work.
   c. Make sure that the "Port" number is 22.
   d. Enter your VSC-id in the "Username" field.
   e. In the "Telnet command, or local proxy command" field, enter the string
      plink -agent -l %user %proxyhost -nc %host:%port
      (the easiest is to just copy-and-paste this text). "plink" (PuTTY Link) is a Windows program that comes with the full PuTTY suite of applications; it is the command line version of PuTTY. In case you've only installed the executables putty.exe and pageant.exe, you'll need to download plink.exe from the PuTTY web site as well. We strongly advise you to simply install the whole PuTTY suite of applications using the installer provided on that site.
3. Now go to the "Data" category in PuTTY, again under "Connection" (see screenshot):
   a. Fill in your VSC-id in the "Auto-login username" field.
   b. Leave the other values untouched (likely the values shown in the screenshot).
4. Now go to the "Session" category (see screenshot):
   a. Set the field "Host Name (or IP address)" to the computer you want to log on to. If you are setting up a proxy connection to access a computer on the VSC network, you will have to use its name on the internal VSC network. E.g., for the login nodes of the tier-1 cluster muk at UGent, this is login.muk.gent.vsc, and for the cluster on which you can test applications for muk, this is gligar.gligar.gent.vsc.
   b. Make sure that the "Port" number is 22.
   c. Finally, give the configuration a name in the field "Saved Sessions" and press "Save". Then you won't have to enter all the above information again.
   d. And now you're all set up to go. Press the "Open" button on the "Session" tab to open a terminal window.

For advanced users

If you have an X server on your Windows PC, you can also use X11 forwarding and run X11 applications on the host. All you need to do is tick the box next to "Enable X11 forwarding" in the category "Connection" -> "SSH" -> "X11".

What happens behind the scenes:

    " - diff --git a/HtmlDump/file_0323.html b/HtmlDump/file_0323.html deleted file mode 100644 index 22c888c06..000000000 --- a/HtmlDump/file_0323.html +++ /dev/null @@ -1,41 +0,0 @@ -

Prerequisites

    PuTTY must be installed on your computer, and you should be able to connect via SSH to the cluster's login node.

Background

    - Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunneling. -

There are several cases where this is useful:

Procedure: a tunnel from a local client to a server on the cluster

1. Log in on the login node.
2. Start the server job; note the name of the compute node the job is running on (e.g., 'r1i3n5'), as well as the port the server is listening on (e.g., '44444').
3. Set up the tunnel (see the PuTTY screenshot):
   a. Right-click in PuTTY's title bar, and select 'Change Settings...'.
   b. In the 'Category' pane, expand 'Connection' -> 'SSH', and select 'Tunnels'.
   c. In the 'Source port' field, enter the local port to use (e.g., 11111).
   d. In the 'Destination' field, enter <hostname>:<server-port> (e.g., r1i3n5:44444 as in the example above).
   e. Click the 'Add' button.
   f. Click the 'Apply' button.

The tunnel is now ready to use.

    " - diff --git a/HtmlDump/file_0325.html b/HtmlDump/file_0325.html deleted file mode 100644 index e6c3cbaad..000000000 --- a/HtmlDump/file_0325.html +++ /dev/null @@ -1,22 +0,0 @@ -

    FileZilla is an easy-to-use freely available ftp-style program to transfer files to and from your account on the clusters.

    You can also put FileZilla with your private key on a USB stick to access your files from any internet-connected PC. -

    You can download Filezilla from the FileZilla project web page. -

    Configuration of FileZilla to connect to a login node

Note: Pageant should be running and your private key should be loaded first (more info on our "Using Pageant" page).

1. Start FileZilla.
2. Open the Site Manager using the 'File' menu.
3. Create a new site by clicking the New Site button.
4. In the tab marked General, enter the following values (all other fields remain blank):
5. Optionally, rename this setting to your liking by pressing the 'Rename' button.
6. Press 'Connect' and enter your passphrase when requested.

(screenshot: the FileZilla Site Manager)

Note that recent versions of FileZilla have a screen in the settings to manage private keys. The path to the private key must be provided in the options (Edit -> Options -> Connection -> SFTP): (screenshot: FileZilla SFTP settings)

After that you should be able to connect after being asked for your passphrase. As an alternative, you can choose to use PuTTY's Pageant.

    " - diff --git a/HtmlDump/file_0327.html b/HtmlDump/file_0327.html deleted file mode 100644 index ad0cee3f9..000000000 --- a/HtmlDump/file_0327.html +++ /dev/null @@ -1,72 +0,0 @@ -

    Prerequisite: WinSCP

To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical FTP-style program (though one that uses the SSH way of communicating with the cluster rather than the less secure FTP) that is also freely available. WinSCP can be downloaded both as an installation package and as a standalone portable executable. When using the portable version, you can copy WinSCP together with your private key on a USB stick to have access to your files from any internet-connected Windows PC.

WinSCP also works well together with the PuTTY suite of applications: it uses the keys generated with the PuTTY key generation program, can launch terminal sessions in PuTTY, and can use ssh keys managed by Pageant.

    Transferring your files to and from the VSC clusters

    The first time you make the connection, you will be asked to 'Continue connecting and add host key to the cache'; select 'Yes'. -

1. Start WinSCP and go to the "Session" category. Fill in the following information (see the screenshot):
   a. Fill in the hostname of the VSC login node of your home institution. You can find this information in the overview of available hardware on this site.
   b. Fill in your VSC username.
   c. If you are not using Pageant to manage your ssh keys, you have to point WinSCP to the private key file (in PuTTY .ppk format) that should be used. When using Pageant, you can leave this field blank.
   d. Double-check that the port number is 22.
2. If you want to store this data for later use, click the "Save" button at the bottom and enter a name for the session. The next time you start WinSCP, you'll get a screen with stored sessions that you can open by selecting them and clicking the "Login" button.
3. Click the "Login" button to start the session that you just created. You'll be asked for your passphrase if Pageant is not running with a valid key loaded. The first time you make the connection, you will be asked to "Continue connecting and add host key to the cache"; select "Yes".

    Some remarks

    Two interfaces

    \"\"WinSCP has two modes for the graphical user interface: -

During the installation of WinSCP, you'll be prompted for a mode, but you can always change your mind afterwards and select the interface mode in the "Preferences" category after starting WinSCP.

    Enable logging

    When you experience trouble transferring files using WinSCP, the support team may ask you to enable logging and mail the results. -

1. To enable logging (see the screenshot):
   a. Check "Advanced options".
   b. Select the "Logging" category.
   c. Check the box next to "Enable session logging on level" and select the logging level requested by the user support team. Often normal logging will be sufficient.
   d. Enter a name and directory for the log file. The default is "%TEMP%\!S.log", which will expand to a name that is system-dependent and depends on the name of your WinSCP session. %TEMP% is a Windows environment variable pointing to a directory for temporary files, which on most systems is well hidden; "!S" will expand to the name of your session (for a stored session, the name you used there). You can always change this to another directory and/or file name that is easier for you to work with.
2. Now just run WinSCP as you would do without logging.
3. To mail the result if you used the default log file name %TEMP%\!S.log:
   a. Start a new mail in your favourite mail program (it could even be a web mail service).
   b. Click whatever button or menu choice you need to add an attachment.
   c. Many mail programs will now show you a standard Windows dialog window to select the file. In many mail programs, the top left of that window looks as in the screenshot (a screen capture from a Windows 7 computer). Click to the right of the text in the URL bar in the upper left of the window; the contents will change to a regular Windows path name and will be selected. Just type %TEMP% and press enter, and you will see that %TEMP% expands to the name of the directory with the temporary files. This trick may not work with all mail programs!
   d. Finish the mail text and send the mail to user support.
    " - diff --git a/HtmlDump/file_0329.html b/HtmlDump/file_0329.html deleted file mode 100644 index e626744bc..000000000 --- a/HtmlDump/file_0329.html +++ /dev/null @@ -1,20 +0,0 @@ -

To display graphical applications from a Linux computer (such as the VSC clusters) on your Windows desktop, you need to install an X Window server. Here we describe the installation of Xming, one such server that is freely available.

    Installing Xming

1. Download the Xming installer from the Xming web site.
2. Either install Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.
3. Run the Xming setup program on your Windows desktop. Make sure to select 'XLaunch wizard' and 'Normal PuTTY Link SSH client'. (screenshot: Xming-Setup.png)


Running Xming

1. To run Xming, select XLaunch from the Start Menu.
2. Select 'Multiple Windows'. This will open each application in a separate window. (screenshot: Xming-Display.png)
3. Select 'Start no client' to make XLaunch wait for other programs (such as PuTTY). (screenshot: Xming-Start.png)
4. Select 'Clipboard' to share the clipboard. (screenshot: Xming-Clipboard.png)
5. Finally, save the configuration. (screenshot: Xming-Finish.png)
6. Now Xming is running and you can launch a graphical application in your PuTTY terminal. Do not forget to enable X11 forwarding as explained on our PuTTY page. To test the connection, you can try to start a simple X program on the login nodes, e.g., xterm or xeyes. The latter will open a new window with a pair of eyes; the pupils should follow your mouse pointer around. Close the program by typing "ctrl+c": the window should disappear. If you get the error 'DISPLAY is not set', you did not correctly enable the X forwarding.
    " - diff --git a/HtmlDump/file_0331.html b/HtmlDump/file_0331.html deleted file mode 100644 index 01a3555be..000000000 --- a/HtmlDump/file_0331.html +++ /dev/null @@ -1,54 +0,0 @@ -

    Prerequisites

    - It is assumed that Microsoft Visual Studio Professional (at least the Microsoft Visual C++ component) is installed. Although Microsoft Visual C++ 2008 should be sufficient, this how-to assumes that Microsoft Visual C++ 2010 is used. Furthermore, one should be familiar with the basics of Visual Studio, i.e., how to create a new project, how to edit source code, how to compile and build an application. -

    - Note for KU Leuven and UHasselt users: Microsoft Visual Studio is covered by the campus license for Microsoft products of both KU Leuven and Hasselt University. Hence staff and students can download and use the software. -

    - Also note that although Microsoft offers a free evaluation version of its development tools, i.e., Visual Studio Express, this version does not support parallel programming. -

OpenMP

Microsoft Visual C++ offers support for developing OpenMP C/C++ programs out of the box. However, as of this writing, support is still limited to the ancient OpenMP 2.0 standard. The project type best suited is a Windows Console Application. It is best to switch 'Precompiled headers' off.

    - Once the project is created, simply write the code, and enable the openMP compiler option in the project's properties as shown below. -

    - \"OpenMP -

    - Compiling, building and running your program can now be done in the familiar way. -

MPI

    - In order to develop C/C++ programs that use MPI, a few extra things have to be installed, so this will be covered first. -

Installation

1. The MPI libraries and infrastructure are part of Microsoft's HPC Pack SDK. Download either the 32- or 64-bit version, whichever is appropriate for your desktop system (most probably the 32-bit version, denoted by 'x86'). Installing is merely a matter of double-clicking the downloaded installer.
2. Although not strictly required, it is strongly recommended to install the MPI Project Template as well. Again, one simply downloads and double-clicks the installer.

Development

    - To develop an MPI-based application, create an MPI project. -

    - \"New -

    - It is advisable not to use precompiled headers, so switch this setting off. -

    - Next, write your code. Once you are ready to debug or run your code, make the following adjustments to the project's properties in the 'Debugging' section. -

    - \"MPI -

    - A few settings should be verified, and if necessary, modified: -

1. Make sure that the 'Debugger to launch' is indeed the 'MPI Cluster Debugger'.
2. The 'Run environment' is 'localhost/1' by default. Since this implies that only one MPI process will be started, it is not very exciting, so change it to, e.g., 'localhost/4' in order to have some parallel processes (4 in this example). Do not make this number too large, since the code will execute on your desktop machine.
3. The 'MPIExec Command' should point to the 'mpiexec' that is found in the 'Bin' directory of the HPC Pack 2008 SDK installation directory.

Debugging now proceeds as usual. One can switch between processes by selecting the appropriate main thread in the Threads view.

    - \"Switching -

Useful links

    " - diff --git a/HtmlDump/file_0333.html b/HtmlDump/file_0333.html deleted file mode 100644 index caede8f5c..000000000 --- a/HtmlDump/file_0333.html +++ /dev/null @@ -1,117 +0,0 @@ -

    Installation & setup

1. Download the appropriate version for your system (32- or 64-bit) and install it. You may need to reboot to complete the installation; do so if required.
2. Optionally, but highly recommended: download and install WinMerge, a convenient GUI tool to compare and merge files.
3. Start Pageant (the SSH agent that comes with PuTTY) and load your private key for authentication on the VSC cluster.

Checking out a project from a VSC cluster repository

1. Open Windows Explorer (e.g., using the Windows-E shortcut, or from the Start Menu) and navigate to the directory where you would like to check out your project that is in the VSC cluster repository.
2. Right-click in this directory; you will notice 'SVN Checkout...' in the context menu. Select it to open the 'Checkout' dialog. (screenshot: the TortoiseSVN Checkout dialog)
3. In the 'URL of repository' field, type the following line, replacing userid by your VSC user ID, and '300' with '301', '302', ... as required (e.g., for user ID 'vsc30257', replace '300' by '302'). For svn.login.node, substitute the appropriate login node for the cluster the repository is on.
   svn+ssh://userid@svn.login.node/data/leuven/300/vsc30000/svn-repo/simulation/trunk
4. Check whether the suggested default location for the project, i.e., the 'Checkout directory' field, suits you; if not, modify it.
5. Click 'OK' to proceed with the check out.

    - You now have a working copy of your project on your desktop and can continue to develop locally. -

    -

Work cycle

    -

Suppose the file 'simulation.c' is changed and 'readme.txt' is added. The 'simulation' directory will now look as follows: (screenshot: TortoiseSVN status icons in Windows Explorer)

    -

Files that were changed are marked with a red exclamation mark, while those marked in green are unchanged. Files without a mark, such as 'readme.txt', have not been placed under version control yet. The latter can be added to the repository by right-clicking on them and choosing 'TortoiseSVN' and then 'Add...' from the context menu. Such files will be marked with a blue '+' sign until the project is committed.

    -

By right-clicking in the project's directory, you will see the context menu items 'SVN Update' and 'SVN Commit...'. These have exactly the same semantics as their command line counterparts introduced above. The 'TortoiseSVN' menu item expands into even more commands that are familiar, with the notable exception of 'Check for modifications', which is in fact equivalent to 'svn status'. (screenshot: the TortoiseSVN context menu)

    -

Right-clicking in the directory and choosing 'SVN Commit...' will bring up a dialog to enter a comment and, if necessary, include or exclude files from the operation. (screenshot: the TortoiseSVN commit dialog)

    -

Merging

    -

When a conflict that cannot be resolved automatically is detected during an update, TortoiseSVN behaves slightly differently from the command line client. Rather than requiring you to resolve the conflict immediately, it creates a number of extra files. Suppose the repository was at revision 12 and a conflict was detected in 'simulation.c'; it will then create, among others, 'simulation.c.mine' (your local version) and 'simulation.c.r12' (the latest repository version), next to a 'simulation.c' containing conflict markers.

    - You have now two options to resolve the conflict. -

1. Edit 'simulation.c', keeping those modifications of either version that you need.
2. Use WinMerge to compare 'simulation.c.mine' and 'simulation.c.r12' and resolve the conflicts in the GUI, saving the result as 'simulation.c'. When two files are selected in Windows Explorer, they can be compared using WinMerge by right-clicking on either and choosing 'WinMerge' from the context menu. (screenshot: comparing the two versions in WinMerge)

    - Once all conflicts have been resolved, commit your changes. -

    -

Tagging

    -

Tagging can be done conveniently by right-clicking in Windows Explorer and selecting 'TortoiseSVN' and then 'Branch/tag...' from the context menu. After supplying the appropriate URL for the tag, e.g.,

svn+ssh://<user-id>@<login-node>/data/leuven/300/vsc30000/svn-repo/simulation/tag/nature-submission

    - you click 'OK'. -

    -

Browsing the repository

    -

Sometimes it is convenient to browse a subversion repository. TortoiseSVN makes this easy: right-click in a directory in Windows Explorer, and select 'TortoiseSVN' and then 'Repo-browser' from the context menu.

    -

    -
    - \"TortoiseSVN -

    -

Importing a local project into the VSC cluster repository

    -

As with the command line client, it is possible to import a local directory on your desktop system into your subversion repository on the VSC cluster. Let us assume that this directory is called 'calculation'. Right-click on it in Windows Explorer, and choose 'Subversion' and then 'Import...' from the context menu. This will open the 'Import' dialog. (screenshot: the Import dialog)

    -

    - The repository's URL would be (modify the user ID and directory appropriately): -

svn+ssh://<user-id>@<login-node>/data/leuven/300/vsc30000/svn-repo/calculation/trunk

TortoiseSVN will automatically create the 'calculation' and 'trunk' directories for you (it uses the '--parents' option).

    -

    - Creating directories such as 'branches' or 'tags' can be done using the repository browser. To invoke it, right-click in a directory in Windows Explorer and select 'TortoiseSVN' and then 'Repo-browser'. Navigate to the appropriate project directory and create a new directory by right-clicking in the parent directory's content view (right pane) and selecting 'Create folder...' from the context menu. -

    " - diff --git a/HtmlDump/file_0335.html b/HtmlDump/file_0335.html deleted file mode 100644 index 3e1798d5d..000000000 --- a/HtmlDump/file_0335.html +++ /dev/null @@ -1,39 +0,0 @@ -

    Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal. To open a terminal in Linux when using KDE, choose Applications > System > Terminal > Konsole. When using GNOME, choose Applications > Accessories > Terminal.

    If you don't have any experience with using the command-line interface in Linux, we suggest you to read the basic Linux usage section first. -

    Getting ready to request an account

    Connecting to the cluster

    Software development

    " - diff --git a/HtmlDump/file_0337.html b/HtmlDump/file_0337.html deleted file mode 100644 index bc354a447..000000000 --- a/HtmlDump/file_0337.html +++ /dev/null @@ -1,52 +0,0 @@ -

    Prerequisite: OpenSSH

Linux

    - On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing: -

$ ssh -V
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

    - To access the clusters and transfer your files, you will use the following commands: -

Windows

You can use OpenSSH on Windows as well if you install the free UNIX emulation layer Cygwin with the package "openssh".

    macOS/OS X

    macOS/OS X comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! -

Generating a public/private key pair

    - Usually you already have the software needed and a key pair might already be present in the default location inside your home directory: -

$ ls ~/.ssh
authorized_keys2    id_rsa            id_rsa.pub         known_hosts

You can recognize a public/private key pair when a pair of files has the same name except for the extension ".pub" added to one of them. In this particular case, the private key is "id_rsa" and the public key is "id_rsa.pub". You may have multiple keys (not necessarily in the directory "~/.ssh") if you or your operating system requires this. A popular alternative key type, instead of rsa, is dsa; however, we recommend using rsa keys.

    - You will need to generate a new key pair, when: -

    - To generate a new public/private pair, use the following command: -

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.

    - This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasized that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is every time you want to access the cluster or transfer your files. -

    - Keys are required in the OpenSSH format. -

If you have a public key "id_rsa_2048_ssh.pub" in the SSH2 format, you can use OpenSSH's ssh-keygen to convert it to the OpenSSH format in the following way:

$ ssh-keygen -i -f ~/.ssh/id_rsa_2048_ssh.pub > ~/.ssh/id_rsa_2048_openssh.pub
    " - diff --git a/HtmlDump/file_0339.html b/HtmlDump/file_0339.html deleted file mode 100644 index 3970577bb..000000000 --- a/HtmlDump/file_0339.html +++ /dev/null @@ -1,22 +0,0 @@ -

    Prerequisite: OpenSSH

    See the page on generating keys.

    Connecting to the VSC clusters

    Text mode

    In many cases, a text mode connection to one of the VSC clusters is sufficient. To make such a connection, the ssh command is used: -

    $ ssh <vsc-account>@<vsc-loginnode>
    -

    Here, -

You can find the names and IP addresses of the login nodes in the sections on the available hardware.

    The first time you make a connection to the loginnode, you will be asked to verify the authenticity of the loginnode, e.g., -

$ ssh vsc98765@login.hpc.kuleuven.be
The authenticity of host 'login.hpc.kuleuven.be (134.58.8.192)' can't be established.
RSA key fingerprint is b7:66:42:23:5c:d9:43:e8:b8:48:6f:2c:70:de:02:eb.
Are you sure you want to continue connecting (yes/no)?

    Here, user vsc98765 wants to make a connection to the ThinKing cluster at KU Leuven via the loginnode login.hpc.kuleuven.be. -

If your private key is not stored in the default file (~/.ssh/id_rsa), you need to provide the path to it when making the connection:

    $ ssh -i <path-to-your-private-key-file> <vsc-account>@<vsc-loginnode>
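To avoid typing the full command every time, the same information can be put in your ~/.ssh/config file. A minimal sketch (the host alias, user ID and key file name below are examples, not prescribed values):

Host vsc
    HostName login.hpc.kuleuven.be
    User vsc98765
    IdentityFile ~/.ssh/id_rsa_vsc

With this entry, "ssh vsc" is equivalent to the longer command above.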

    Connection with support for graphics

On most clusters, we support a number of programs that have a GUI mode or otherwise display graphics through the X system. To be able to display the output of such a program on the screen of your Linux machine, you need to tell ssh to forward X traffic from the cluster to your Linux desktop/laptop by specifying the -X option. There is also an option -x to disable such traffic, depending on the default options on your system as specified in /etc/ssh/ssh_config or ~/.ssh/config.
Example:

    ssh -X vsc123456@login.hpc.kuleuven.be
    -

    To test the connection, you can try to start a simple X program on the login nodes, e.g., xterm or xeyes. The latter will open a new window with a pair of eyes. The pupils of these eyes should follow your mouse pointer around. Close the program by typing \"ctrl+c\": the window should disappear. -

    If you get the error 'DISPLAY is not set', you did not correctly enable the X-Forwarding. -

    Links

    " - diff --git a/HtmlDump/file_0341.html b/HtmlDump/file_0341.html deleted file mode 100644 index 42bb10b7a..000000000 --- a/HtmlDump/file_0341.html +++ /dev/null @@ -1,186 +0,0 @@ -

    The OpenSSH program ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA). The idea is that you store your private key in the ssh authentication agent and can then log in or use sftp as often as you need without having to enter your passphrase again. This is particularly useful when setting up a ssh proxy connection (e.g., for the tier-1 system muk) as these connections are more difficult to set up when your key is not loaded into an ssh-agent. -

    - This all sounds very easy. The reality is more difficult though. The problem is that subsequent commands, e.g., the command to add a key to the agent or the ssh or sftp commands, must be able to find the ssh authentication agent. Therefore some information needs to be passed from ssh-agent to subsequent commands, and this is done through two environment variables: SSH_AUTH_SOCK and SSH_AGENT_PID. The problem is to make sure that these variables are defined with the correct values in the shell where you start the other ssh commands. -
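    You can easily check in a shell whether these variables are set; the values shown below are illustrative:

    $ echo $SSH_AUTH_SOCK
    /tmp/ssh-XXXXXXXXXX/agent.12345
    $ echo $SSH_AGENT_PID
    12346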

    -Starting ssh-agent: Basic scenarios

    - There are a number of basic scenarios -

      -
    1. - You're lucky and your system manager has set up everything so that ssh-agent is started automatically when the GUI starts after logging in and the environment variables are hence correctly defined in all subsequent shells. - You can check for that easily: type -
      $ ssh-add -l
      - If the command returns with the message -
      Could not open a connection to your authentication agent.
      - then ssh-agent is not running or not configured properly, and you'll need to follow one of the following scenarios. -
    2. -
    3. - Start an xterm (or whatever your favourite terminal client is) and continue to work in that xterm window or other terminal windows started from that one: -
      $ ssh-agent xterm &
      -	
      - The shell in that xterm is then configured correctly, and when that xterm is killed, the ssh-agent will also be killed. -
    4. -
    5. - ssh-agent can also output the commands that are needed to configure the shell. These can then be used to configure the current shell or any further shell. E.g., if you're a bash user, an easy way to start a ssh-agent and configure it in the current shell, is to type -
      $ eval `ssh-agent -s`
      -	
      - at the command prompt. If you start a new shell (e.g., by starting an xterm) from that shell, it should also be correctly configured to contact the ssh authentication agent. - A better idea though is to store the commands in a file and execute them in any shell where you need access to the authentication agent. E.g., for bash users: -
      $ ssh-agent -s >~/.ssh-agent-environment
      -. ~/.ssh-agent-environment
      -	
      - and you can then configure any shell that needs access to the authentication agent by executing -
      $ . ~/.ssh-agent-environment
      -
      - Note that this will not necessarily shut down the ssh-agent when you log out of the system. It is not a bad idea to explicitly kill the ssh-agent before you log out: -
      $ ssh-agent -k
      -	
      -
    6. -

    -Managing keys

    - Once you have an ssh-agent up and running, it is very easy to add your key to it. If your key has the default name (id_rsa), all you need to do is type -

    $ ssh-add
    -

    - at the command prompt. You will then be asked to enter your passphrase. If your key has a different name, e.g., id_rsa_cluster, you can specify that name as an additional argument to ssh-add: -

    $ ssh-add ~/.ssh/id_rsa_cluster
    -

    - To list the keys that ssh-agent is managing, type -

    $ ssh-add -l
    -

    - You can now use the OpenSSH commands ssh, sftp and scp without having to enter your passphrase again. -

    -Starting ssh-agent: Advanced options

    - In case ssh-agent is not started by default when you log in to your computer, there are a number of things you can do to automate the startup of ssh-agent and to configure subsequent shells. -

    -Ask your local system administrator

    - If you're not managing your system yourself, you can always ask your system manager to make sure that ssh-agent is started when you log on and in such a way that subsequent shells opened from the desktop have the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID set (with the first one being the most important one). -

    - And if you're managing your own system, you can dig into the manuals to figure out if there is a way to do so. Since there are so many desktop environments available for Linux systems (GNOME, KDE, Ubuntu Unity, ...) we cannot offer help here. -

    -A semi-automatic solution in bash

    - This solution requires some modifications to .bash_profile and .bashrc. Be careful when making these modifications as errors may make it difficult to log on to your machine. Test your changes by executing these files with source ~/.bash_profile and source ~/.bashrc. -

    - This simple solution is based on option 3 given above to start ssh-agent. -

      -
    1. - You can define a new shell command by using the bash alias mechanism. Add the following line to the file .bashrc in your home directory: -
      alias start-ssh-agent='/usr/bin/ssh-agent -s >~/.ssh-agent-environment; . ~/.ssh-agent-environment'
      -	
      - The new command start-ssh-agent will now start a new ssh-agent, store the commands to set the environment variables in the file .ssh-agent-environment in your home directory and then \"source\" that file to execute the commands in the current shell (which then sets SSH_AUTH_SOCK and SSH_AGENT_PID to appropriate values). -
    2. -
    3. - Also put the line -
      [[ -s ~/.ssh-agent-environment ]] && . ~/.ssh-agent-environment &>/dev/null
      -	
      - in your .bashrc file. This line will check if the file .ssh-agent-environment exists in your home directory and \"source\" it to set the appropriate environment variables. -
    4. -
    5. - As explained in the GNU bash manual, .bashrc is only read when starting so-called interactive non-login shells. Interactive login shells will not read this file by default. Therefore it is advised in the GNU bash manual to add the line -
      [[ -s ~/.bashrc ]] && . ~/.bashrc
      -	
      - to your .bash_profile. This will execute .bashrc if it exists whenever .bash_profile is called. -
    6. -

    - You can now start a SSH authentication agent by issuing the command start-ssh-agent and add your key as indicated above with ssh-add. -

    -An automatic and safer solution in bash

    - One disadvantage of the previous solution is that a new ssh-agent will be started every time you execute the command start-ssh-agent, and all subsequent shells will then connect to that one. -

    - The following solution is more complex, but a lot safer, as it will first try to find an ssh-agent that is already running and can be contacted: -

      -
    1. - It will first check if the environment variable SSH_AUTH_SOCK is defined, and try to contact that agent. This makes sure that no new agent will be started if you log on to a system that automatically starts an ssh-agent.
    2. -
    3. - Then it will check for a file .ssh-agent-environment, source that file and try to connect to the ssh-agent. This will make sure that no new agent is started if another agent can be found through that file.
    4. -
    5. - And only if those two tests fail will a new ssh-agent be started.
    6. -

    - This solution uses a Bash function. -

    1. Add the following block of text to your .bashrc file:
      start-ssh-agent() {
      -#
      -# Start an ssh agent if none is running already.
      -# * First we try to connect to one via SSH_AUTH_SOCK
      -# * If that doesn't work out, we try via the file ssh-agent-environment
      -# * And if that doesn't work out either, we just start a fresh one and write
      -#   the information about it to ssh-agent-environment for future use.
      -#
      -# We don't really test for a correct value of SSH_AGENT_PID as the only 
      -# consequence of not having it set seems to be that one cannot kill
      -# the ssh-agent with ssh-agent -k. But starting another one wouldn't 
      -# help to clean up the old one anyway.
      -#
      -# Note: ssh-add return codes: 
      -#   0 = success,
      -#   1 = specified command fails (e.g., no keys with ssh-add -l)
      -#   2 = unable to contact the authentication agent
      -#
      -sshfile=~/.ssh-agent-environment
      -#
      -# First effort: Via SSH_AUTH_SOCK/SSH_AGENT_PID
      -#
      -if [ -n \"$SSH_AUTH_SOCK\" ]; then
      -  # SSH_AUTH_SOCK is defined, so try to connect to the authentication agent
      -  # it should point to. If it succeeds, reset newsshagent.
      -  ssh-add -l &>/dev/null 
      -  if [[ $? != 2 ]]; then 
      -    echo \"SSH agent already running.\"
      -    unset sshfile
      -    return 0
      -  else
      -    echo \"Could not contact the ssh-agent pointed at by SSH_AUTH_SOCK, trying more...\"
      -  fi
      -fi
      -#
      -# Second effort: If we're still looking for an ssh-agent, try via $sshfile
      -#
      -if [ -e \"$sshfile\" ]; then
      -  # Load the environment given in $sshfile
      -  . $sshfile &>/dev/null
      -  # Try to contact the ssh-agent
      -  ssh-add -l &>/dev/null 
      -  if [[ $? != 2 ]]; then 
      -    echo \"SSH agent already running; reconfigured the environment.\"
      -    unset sshfile
      -    return 0
      -  else
      -    echo \"Could not contact the ssh-agent pointed at by $sshfile.\"
      -  fi
      -fi
      -#
      -# And if we haven't found a working one, start a new one...
      -#
      -#Create a new ssh-agent
      -echo \"Creating new SSH agent.\"
      -ssh-agent -s > $sshfile && . $sshfile    
      -unset sshfile
      -}
      -	
      A shorter version, without the comments and without generating output, is
      start-ssh-agent() {
      -sshfile=~/.ssh-agent-environment
      -#
      -if [ -n \"$SSH_AUTH_SOCK\" ]; then
      -  ssh-add -l &>/dev/null 
      -  [[ $? != 2 ]] && unset sshfile && return 0
      -fi
      -#
      -if [ -e \"$sshfile\" ]; then
      -  . $sshfile &>/dev/null
      -  ssh-add -l &>/dev/null 
      -  [[ $? != 2 ]] && unset sshfile && return 0
      -fi
      -#
      -ssh-agent -s > $sshfile && . $sshfile &>/dev/null
      -unset sshfile
      -}
      -	
      This defines the command start-ssh-agent.
    2. Since start-ssh-agent will now first check for a usable running agent, it doesn't harm to simply execute this command in your .bashrc file to start a SSH authentication agent. So add the line
      start-ssh-agent &>/dev/null
      -	
      after the above function definition. All output is sent to /dev/null (and hence not shown) as a precaution, since scp or sftp sessions fail when output is generated in .bashrc on many systems (typically with error messages such as \"Received message too long\" or \"Received too large sftp packet\"). You can also use the newly defined command start-ssh-agent at the command prompt. It will then check your environment, reset the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID or start a new ssh-agent.
    3. As explained in the GNU bash manual, .bashrc is only read when starting so-called interactive non-login shells. Interactive login shells will not read this file by default. Therefore it is advised in the GNU bash manual to add the line
      [[ -s ~/.bashrc ]] && . ~/.bashrc
      -	
      to your .bash_profile. This will execute .bashrc if it exists whenever .bash_profile is called.

    - You can now simply add your key as indicated above with ssh-add and it will become available in all shells. -

    - The only remaining problem is that the ssh-agent process that you started may not get killed when you log out, and if your next login fails to contact that ssh-agent, the result may be a build-up of ssh-agent processes. You can always kill it by hand before logging out with ssh-agent -k. -
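    For bash users, one possible way to automate this, sketched here under the assumption that your login shell is bash (which reads ~/.bash_logout when a login shell exits), is to add a line like the following to ~/.bash_logout:

    # Sketch: kill the ssh-agent known to this shell, if any
    [[ -n \"$SSH_AGENT_PID\" ]] && ssh-agent -k &>/dev/null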

    -Links

    " - diff --git a/HtmlDump/file_0343.html b/HtmlDump/file_0343.html deleted file mode 100644 index fc2ec54bd..000000000 --- a/HtmlDump/file_0343.html +++ /dev/null @@ -1,50 +0,0 @@ -

    Rationale

    - ssh provides a safe way of connecting to a computer, encrypting traffic and avoiding passing passwords across public networks where your traffic might be intercepted by someone else. Yet making a server accessible from all over the world makes that server very vulnerable. Therefore servers are often put behind a firewall, another computer or device that filters traffic coming from the internet.

    - In the VSC, all clusters are behind a firewall, but for the tier-1 cluster muk this firewall is a bit more restrictive than for other clusters. Muk can only be approached from certain other computers in the VSC network, and only via the internal VSC network and not from the public network. To avoid having to log on twice, first to another login node in the VSC network and then from there on to Muk, one can set up a so-called ssh proxy. You then connect through another computer (the proxy server) to the computer that you really want to connect to.

    - This all sounds quite complicated, but once things are configured properly it is really simple to log on to the host.

    - Setting up a proxy in OpenSSH

    - Setting up a proxy is done by adding a few lines to the file $HOME/.ssh/config on the machine from which you want to log on to another machine.

    - The basic structure is as follows:

    Host <my_connectionname>
    -    ProxyCommand ssh -q %r@<proxy server> 'exec nc <target host> %p'
    -    User <userid>

    - where:

    - Caveat: Access via the proxy will only work if you have logged in to the proxy server itself at least once from the client you're using.

    - Some examples

    - A regular proxy without X forwarding

    - In Linux or macOS, SSH proxies are configured as follows:

    - In your $HOME/.ssh/config file, add the following lines:

    Host tier1
    -    ProxyCommand ssh -q %r@vsc.login.node 'exec nc login.muk.gent.vsc %p'
    -    User vscXXXXX
    -

    - where you replace vsc.login.node with the name of the login node of your home tier-2 cluster (see also the overview of available hardware).

    - Replace vscXXXXX with your own VSC account name (e.g., vsc40000).

    - The name 'tier1' in the 'Host' field is arbitrary. Any name will do, and this is the name you need to use when logging in:

    $ ssh tier1
    -

    - A proxy with X forwarding

    - This requires a minor modification to the lines above that need to be added to $HOME/.ssh/config:

    Host tier1X
    -    ProxyCommand ssh -X -q %r@vsc.login.node 'exec nc login.muk.gent.vsc %p'
    -    ForwardX11 yes
    -    User vscXXXXX
    -

    - I.e., you need to add the -X option to the ssh command to enable X forwarding and need to add the line 'ForwardX11 yes'.

    $ ssh tier1X

    - will then log you on to login.muk.gent.vsc with X forwarding enabled provided that the $DISPLAY variable was correctly set on the client on which you executed the ssh command. Note that simply executing

    $ ssh -X tier1

    - has the same effect. It is not necessary to specify X forwarding in the config file, as it can also be enabled when running ssh.

    - The proxy for testing/debugging on muk

    - For testing/debugging, you can login to the UGent login node gengar1.gengar.gent.vsc over the VSC network. The following $HOME/.ssh/config can be used:

    Host tier1debuglogin
    -    ProxyCommand ssh -q %r@vsc.login.node 'exec nc gengar1.gengar.gent.vsc %p'
    -    User vscXXXXX
    -

    - Change vscXXXXX to your VSC username and connect with

    $ ssh tier1debuglogin

    - For advanced users

    - You can define many more properties for a ssh connection in the config file, e.g., setting up ssh tunneling. On most Linux machines, you can get more information about all the possibilities by issuing

    $ man 5 ssh_config

    - Alternatively, you can search for this manual page online and find copies of it on the internet.
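    As an illustration (the host alias and the port numbers below are hypothetical), a config entry that combines the proxy described above with port forwarding could look roughly like this:

    Host tier1tunnel
        ProxyCommand ssh -q %r@vsc.login.node 'exec nc login.muk.gent.vsc %p'
        LocalForward 11111 localhost:44444
        User vscXXXXX

    Running ssh tier1tunnel would then, in addition to logging you in, forward local port 11111 to port 44444 on login.muk.gent.vsc.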

    " - diff --git a/HtmlDump/file_0345.html b/HtmlDump/file_0345.html deleted file mode 100644 index 319a3aa1a..000000000 --- a/HtmlDump/file_0345.html +++ /dev/null @@ -1,26 +0,0 @@ -

    Prerequisites

    -Background

    - Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunneling. There are several cases where this is useful: -

    -Procedure

    - In a terminal window on your client machine, issue the following command: -

    ssh  -L11111:r1i3n5:44444  -N  <vsc-account>@<vsc-loginnode>
    -

    - where <vsc-account> is your VSC-number and <vsc-loginnode> is the hostname of the cluster's login node you are using. The local port is given first (e.g., 11111), followed by the remote host (e.g., 'r1i3n5') and the server port (e.g., 44444). A worked example is given after the procedure below. -

      -
    1. - Log in on the login node
    2. -
    3. - Start the server job, note the compute node's name the job is running on (e.g., 'r1i3n5'), as well as the port the server is listening on (e.g., '44444').
    4. -
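    As a worked example (the account, host names and port numbers are hypothetical): suppose your server job runs on compute node r1i3n5 and listens on port 44444, and you log in via login.hpc.kuleuven.be. On your own machine you would then run:

    ssh -L11111:r1i3n5:44444 -N vsc40000@login.hpc.kuleuven.be

    While this command is running, a client program on your desktop can connect to localhost port 11111 and will effectively be talking to port 44444 on r1i3n5.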
    " - diff --git a/HtmlDump/file_0347.html b/HtmlDump/file_0347.html deleted file mode 100644 index 063ca8cad..000000000 --- a/HtmlDump/file_0347.html +++ /dev/null @@ -1,28 +0,0 @@ -

    Prerequisite: OpenSSH

    - See the page on generating keys. -

    -Using scp

    - Files can be transferred with scp, which is more or less a cp equivalent, but to or from a remote machine. -

    - For example, to copy the (local) file localfile.txt to your home directory on the cluster (where <vsc-loginnode> is a loginnode), use: -

    scp localfile.txt <vsc-account>@<vsc-loginnode>:
    -

    - Likewise, to copy the remote file remotefile.txt from your home directory on the cluster to your local computer, use: -

    scp <vsc-account>@<vsc-loginnode>:remotefile.txt .
    -

    - The colon is required! -
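    A few more examples (the directory and key file names are purely hypothetical): to copy a whole directory you can add the -r option, and if your private key is not in the default location you can point scp to it with -i:

    scp -r results/ <vsc-account>@<vsc-loginnode>:data/
    scp -i ~/.ssh/id_rsa_vsc localfile.txt <vsc-account>@<vsc-loginnode>: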

    -Using sftp

    - sftp is the equivalent of the ftp command, but it uses the secure ssh protocol to connect to the clusters. -

    - One easy way of starting a sftp session is -

    sftp <vsc-account>@<vsc-loginnode>
    -
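    Inside the sftp session you can then use commands such as ls, put and get; a short illustrative session:

    sftp> ls
    sftp> put localfile.txt
    sftp> get remotefile.txt
    sftp> quit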

    -Links

    " - diff --git a/HtmlDump/file_0349.html b/HtmlDump/file_0349.html deleted file mode 100644 index 685cc1b54..000000000 --- a/HtmlDump/file_0349.html +++ /dev/null @@ -1,32 +0,0 @@ -

    Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the Terminal. To open a Terminal window in macOS (formerly OS X), choose Applications > Utilities > Terminal in the Finder.

    If you don't have any experience with using the Terminal, we suggest you read the basic Linux usage section first (which also applies to macOS). -

    Getting ready to request an account

    Connecting to the machine

    Advanced topics

    " - diff --git a/HtmlDump/file_0351.html b/HtmlDump/file_0351.html deleted file mode 100644 index 772b27a14..000000000 --- a/HtmlDump/file_0351.html +++ /dev/null @@ -1,7 +0,0 @@ -

    Prerequisite: OpenSSH

    - Every macOS install comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! Because of this, you can use the same commands as specified in the Linux client section to access the cluster and transfer files. -

    -Generating a public/private key pair

    - Generating a public/private key pair is identical to what is described in the Linux client section, that is, by using the ssh-keygen command in a Terminal window. -

    " - diff --git a/HtmlDump/file_0353.html b/HtmlDump/file_0353.html deleted file mode 100644 index cab8a9d2b..000000000 --- a/HtmlDump/file_0353.html +++ /dev/null @@ -1,25 +0,0 @@ -

    Prerequisites

    Connecting using OpenSSH

    Like in the Linux client section, the ssh command is used to make a connection to (one of) the VSC clusters. In a Terminal window, execute: -

    $ ssh <vsc-account>@<vsc-loginnode>
    -

    where -

    You can find the names and IP addresses of the login nodes in the sections on the local VSC clusters. -

    SSH will ask you to enter your passphrase. -

    On sufficiently recent macOS/OS X versions (Leopard and newer) you can use the Keychain Access service to automatically provide your passphrase to ssh. All you need to do is to add the key using -

    $ ssh-add ~/.ssh/id_rsa
    -

    (assuming that the private key you generated before is called id_rsa). -
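    On many macOS versions, Apple's bundled ssh-add can also store the passphrase in the Keychain itself; depending on the macOS version the option is -K or --apple-use-keychain (this is an Apple-specific extension, so check man ssh-add on your system first):

    $ ssh-add -K ~/.ssh/id_rsa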

    Using JellyfiSSH for bookmarking ssh connection settings

    You can use JellyfiSSH to create a user-friendly bookmark for your ssh connection settings. To do this, follow these steps: -

      -
    1. Start JellyfiSSH and select 'New'. This will open a window where you can specify the connection settings.
    2. -
    3. - In the 'Host or IP' field, type in <vsc-loginnode>. In the 'Login name' field, type in your <vsc-account>.
      - In the screenshot below we have filled in the fields for a connection to ThinKing cluster at KU Leuven as user vsc98765.
      - \"JellyfiSSH
    4. -
    5. You might also want to change the Terminal window settings, which can be done by clicking on the icon in the lower left corner of the JellyfiSSH window.
    6. -
    7. When done, provide a name for the bookmark in the 'Bookmark Title' field and press 'Add' to create the bookmark.
    8. -
    9. To make a connection, select the bookmark in the 'Bookmark' field and click on 'Connect'. Optionally, you can make the bookmark the default by selecting it as the 'Startup Bookmark' in the JellyfiSSH > Preferences menu entry.
    10. -
    " - diff --git a/HtmlDump/file_0355.html b/HtmlDump/file_0355.html deleted file mode 100644 index 63057b139..000000000 --- a/HtmlDump/file_0355.html +++ /dev/null @@ -1,42 +0,0 @@ -

    Prerequisite: OpenSSH, Cyberduck or FileZilla

    Transferring files with Cyberduck

    Files can be easily transferred with Cyberduck. Setup is easy: -

      -
    1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the '+' sign on the bottom left of the window. A new window will open.
    2. -
    3. In the 'Server' field, type in <vsc-loginnode>. In the 'Username' field, type in your <vsc-account>.
    4. -
    5. Click on 'More Options', select 'Use Public Key Authentication' and point it to your private key (the filename will be shown underneath). Please keep in mind that Cyberduck works only with passphrase-protected private keys.
    6. -
    7. - Finally, type in a name for the bookmark in the 'Nickname' field and close the window by pressing on the red circle in the top left corner of the window.

      - \"Cyberduck
    8. -
    9. To open the scp connection, click on the 'Bookmarks' icon (which resembles an open book) and double click on the bookmark you just created.
    10. -

    Transferring files with FileZilla

    To install FileZilla, follow these steps: -

      -
    1. Download the appropriate file from the FileZilla download page.
    2. -
    3. The file you just downloaded is a compressed UNIX-style archive (with a name ending in .tar.bz2). Double-click on this file in Finder (most likely in the Downloads folder) and drag the FileZilla icon that appears to the Applications folder.
    4. -
    5. Depending on the settings of your machine, you may get a notification that FileZilla.app cannot be opened because it is from an unidentified developer when you try to start it. Check out the macOS Gatekeeper information on this Apple support page.
    6. -

    FileZilla for macOS works in pretty much the same way as FileZilla for Windows: -

      -
    1. start FileZilla;
    2. -
    3. open the 'Site Manager' using the 'File' menu;
    4. -
    5. create a new site by clicking the New Site button;
    6. -
    7. in the tab marked General, enter the following values (all other fields remain blank): - -
    8. -
    9. optionally, rename this setting to your liking by pressing the 'Rename' button;
    10. -
    11. press 'Connect'. Enter your passphrase when requested. FileZilla will try to use the information in your macOS Keychain. See the page on 'Text-mode access using OpenSSH' to find out how to add your key to the keychain using ssh-add.
    12. -

    \"FileZilla -

    Note that recent versions of FileZilla have a screen in the settings to manage private keys. The path to the private key must be provided in the options (Edit tab -> Options -> Connection -> SFTP): -

    \"FileZilla -

    After that you should be able to connect after being asked for the passphrase. As an alternative, you can choose to use the built-in macOS keychain system. -

    " - diff --git a/HtmlDump/file_0357.html b/HtmlDump/file_0357.html deleted file mode 100644 index 1c39ff9ec..000000000 --- a/HtmlDump/file_0357.html +++ /dev/null @@ -1,31 +0,0 @@ -

    Installation

    Eclipse doesn't come with its own compilers. By default, it relies on the Apple gcc toolchain. You can install this toolchain by installing the Xcode package from the App Store. This package is free, but since it takes quite some disk space and few users need it, it is not installed by default on OS X (though it used to be). After installing Xcode, you can install Eclipse according to the instructions on the Eclipse web site. Eclipse will then use the gcc command from the Xcode distribution. The Apple version of gcc is really just the gcc front-end layered on top of a different compiler, LLVM, and might behave differently from gcc on the cluster.

    If you want a regular gcc or need Fortran or MPI or mathematical libraries equivalent to those in the foss toolchain on the cluster, you'll need to install additional software. We recommend using MacPorts for this as it contains ports to macOS of most tools that we include in our toolchains. Using MacPorts requires some familiarity with the bash shell, so you may have a look at our \"Using Linux\" section or search the web for a good bash tutorial (one in a Linux tutorial will mostly do). E.g., you'll have to add the directory where MacPorts installs the applications to your PATH environment variable. For a typical MacPorts installation, this directory is /opt/local/bin. -
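    For example, for the bash shell you could add a line like the following to your ~/.bash_profile (assuming the default MacPorts prefix /opt/local):

    export PATH=/opt/local/bin:/opt/local/sbin:$PATH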

    After installing MacPorts, the following commands will install libraries and tools that are very close to those of the foss2016b toolchain (tested September 2016):

    sudo port install gcc5
    -sudo port select --set gcc mp-gcc5
    -sudo port install openmpi-gcc5 +threads
    -sudo port select --set mpi openmpi-gcc5-fortran
    -sudo port install OpenBLAS +gcc5 +lapack
    -sudo port install scalapack +gcc5 +openmpi
    -sudo port install fftw-3 +gcc5 +openmpi

    Some components may be slightly newer versions than provided in the foss2015a toolchain, while the MPI library is an older version (at least when tested in September 2016).

    If you also want a newer version of subversion that can integrate with the \"Native JavaHL connector\" in Eclipse, the following commands will install the appropriate packages: -

    sudo port install subversion
    -sudo port install subversion-javahlbindings
    -

    At the time of writing, this installed version 1.9.4 of subversion, which has a compatible \"Native JavaHL connector\" in Eclipse. -

    Configuring Eclipse for other compilers

    Eclipse uses the PATH environment variable to find other software it uses, such as compilers but also some commands that give information on where certain libraries are stored or how they are configured. In a regular UNIX/Linux system, you'd set the variable in your shell configuration files (e.g., .bash_profile if you use the bash shell). This mechanism also works on OS X, but not for applications that are not started from the shell but from the Dock or by clicking on their icon in the Finder. -

    Because of security concerns, Apple has made it increasingly difficult to define the path for GUI applications that are not started through a shell script. -

    Both tricks are explained in the Photran installation instructions on the Eclipse wiki. However, in OS X 10.10 (Yosemite) neither mechanism works for setting the path. -

    Our advice is to: -

    " - diff --git a/HtmlDump/file_0359.html b/HtmlDump/file_0359.html deleted file mode 100644 index 5c50f681c..000000000 --- a/HtmlDump/file_0359.html +++ /dev/null @@ -1,21 +0,0 @@ -" - diff --git a/HtmlDump/file_0361.html b/HtmlDump/file_0361.html deleted file mode 100644 index 18d99201d..000000000 --- a/HtmlDump/file_0361.html +++ /dev/null @@ -1,44 +0,0 @@ -

    Software development on clusters

    Eclipse is an extensible IDE for program development. The basic IDE is written in Java for the development of Java programs, but can be extended through packages. The IDE was originally developed by IBM, but open-sourced and has become very popular. There are some interesting history tidbits on the Wikipedia entry for Eclipse.

    Some attractive features

    Caveat

    The documentation of the Parallel Tools Platform also tells you how to launch and debug programs on the cluster from the Eclipse IDE. However, this is for very specific cluster configurations and we cannot support this on our clusters at the moment. You can use features such as synchronised projects (where Eclipse puts a copy of the project files from your desktop on the cluster, and even synchronises back if you change them on the cluster) or open an SSH shell from the IDE to directly enter commands on the cluster. -

    Release policy

    The Eclipse project works with a \"synchronised release policy\". Major new versions of the IDE and a wide range of packages (including the C/C++ development package (CDT), the Parallel Tools Platform (PTP) and the Fortran development package (Photran), which is now integrated in the PTP) appear simultaneously in June of each year, which guarantees that there are no compatibility problems between packages if you upgrade your whole installation at once. Bug fixes are of course released in between version updates. Each version has its own code name, and the code name has become more popular than the actual version number (as version numbers for the packages differ). E.g., the whole June 2013 release (base IDE and packages) is known as the \"Kepler\" release (version number 4.3), the June 2014 release as the \"Luna\" release (version number 4.4), the June 2015 release as the \"Mars\" release (version number 4.5) and the June 2016 release as \"Neon\". -

    Getting eclipse

    The best place to get Eclipse is the official Eclipse download page. That site contains various pre-packaged versions with a number of extension packages already installed. The most interesting one for C/C++ or Fortran development on clusters is \"Eclipse for Parallel Application Developers\". The installation instructions depend on the machine you're installing on, but typically it is not more than unpacking some archive in the right location. You'll need a sufficiently recent Java runtime on your machine though. Instructions are available on the Eclipse Wiki. -

    The CDT, Photran and PTP plugins integrate with compilers and libraries on your system. For Linux, it uses the gcc compiler on your system. On OS X it integrates with gcc and on Windows, you need to install Cygwin and its gcc toolchain (it may also work with the MinGW and Mingw-64 gcc versions but we haven't verified this). -

    The Eclipse documentation is also available on-line. -

    Basic concepts

    Interesting bits in the documentation

    " - diff --git a/HtmlDump/file_0363.html b/HtmlDump/file_0363.html deleted file mode 100644 index 5b3c73b21..000000000 --- a/HtmlDump/file_0363.html +++ /dev/null @@ -1,61 +0,0 @@ -

    Prerequisites

    Installing additional components

    In order to use Eclipse as a remote editor, you may have to install two extra components: the \"Remote System Explorer End-User Runtime\" and the \"Remote System Explorer User Actions\". Here is how to do this: -

      -
    1. From Eclipse's 'Help' menu, select 'Install New Software...', the following dialog will appear:\"Eclipse
    2. -
    3. From the 'Work with:' drop down menu, select 'Neon - http://download.eclipse.org/releases/neon' (or replace \"Neon\" with the name of the release that you are using). The list of available components is now automatically populated.
    4. -
    5. From the category 'General Purpose Tools', select 'Remote System Explorer End-User Runtime' and 'Remote System Explorer User Actions'.
    6. -
    7. Click the 'Next >' button to get the installation details.
    8. -
    9. Click the 'Next >' button again to review the licenses.
    10. -
    11. Select the 'I accept the terms of the license agreement' radio button.
    12. -
    13. Click the 'Finish' button to start the download and installation process.
    14. -
    15. As soon as the installation is complete, you will be prompted to restart Eclipse, do so by clicking the 'Restart Now' button.
    16. -

    After restarting, the installation process of the necessary extra components is finished, and they are ready to be configured. -

    Configuration

    Before the new components can be used, some configuration needs to be done. -

    Microsoft Windows users who use the PuTTY SSH client software should first prepare a private key for use with Eclipse's authentication system. Users using the OpenSSH client on Microsoft Windows, Linux or MacOS X can skip this preparatory step. -

    Microsoft Windows PuTTY users only

    Eclipse's SSH components can not handle private keys generated with PuTTY, only OpenSSH compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. See 'How to convert a PuTTY key to OpenSSH format?' -

    Microsoft Windows PuTTY users should now proceed with the instructions for all users, below. -

    All users

      -
    1. From the 'Window' menu ('Eclipse' menu on OS X), select 'Preferences'.
    2. -
    3. In the category 'General', expand the subcategory 'Network Connections' and select 'SSH2'.
    4. -
    5. Point Eclipse to the directory where the OpenSSH private key is stored that is used for authentication on the VSC cluster. If that key is not called 'id_rsa', select it by clicking the 'Add Private Key...' button.
    6. -
    7. Close the 'Preferences' dialog by clicking 'OK'.
    8. -

    Creating a remote connection

    In order to work on a remote system, a connection should be created first. -

      -
    1. From the 'Window' menu, select 'Open Perspective' and then 'Other...', a dialog like the one below will open (the exact contents depends on the components installed in Eclipse).
      - \"Eclipse
    2. -
    3. Select 'Remote System Explorer' from the list, and press 'OK', now the 'Remote Systems' view appears (at the left by default).
    4. -
    5. In that view, right-click and select 'New' and then 'Connection' from the context menu, the 'New Connection' dialog should now appear.
    6. -
    7. From the 'System type' list, select 'SSH Only' and press 'Next >'.
    8. -
    9. In the 'Host name' field, enter vsc.login.node, in the 'Connection Name' field, the same host name will appear automatically. The latter can be changed if desired. Optionally, a description can be added as well. Click 'Next >' to continue.
    10. -
    11. In the dialog 'Sftp Files' nothing needs to be changed, so just click 'Next >'.
    12. -
    13. In the dialog 'Ssh Shells' nothing needs to be changed either, so again just click 'Next >'.
    14. -
    15. In the dialog 'Ssh Terminals' (newer versions of Eclipse) nothing needs to be changed either, click 'Finish'.
    16. -

    The new connection has now been created successfully. It can now be used. -

    Browsing the remote file system

    One of the features of Eclipse 'Remote systems' component is browsing a remote file system. -

      -
    1. In the 'Remote Systems' view, expand the 'Sftp Files' item under the newly created connection, 'My Home' and 'Root' will appear.
    2. -
    3. Expand 'My Home', a dialog to enter your password will appear.
    4. -
    5. First enter your user ID in the 'User ID' field, by default this will be your user name on your local desktop or laptop. Change it to your VSC user ID.
    6. -
    7. Mark the 'Save user ID' checkbox so that Eclipse will remember your user ID for this connection.
    8. -
    9. Click 'OK' to proceed, leaving the 'Password' field blank.
    10. -
    11. If the login node is not in your known_hosts file, you will be prompted about the authenticity of vsc.login.node, confirm that you want to continue connecting by clicking 'Yes'.
    12. -
    13. If no known_hosts file exists, Eclipse will prompt you to create one; confirm this by clicking 'Yes'.
    14. -
    15. You will now be prompted to enter the passphrase for your private key, do so and click 'OK'. 'My Home' will now expand and show the contents of your home directory on the VSC cluster.
    16. -

    Any file on the remote file system can now be viewed or edited using Eclipse as if it were a local file. -

    It may be convenient to also display the content of your data directory (i.e., '$VSC_DATA'). This can be accomplished easily by creating a new filter. -

      -
    1. Right-click on the 'Sftp Files' item in your VSC connection ('Remote Systems' view), and select 'New' and then 'Filter...' from the context menu.
    2. -
    3. In the 'Folder' field, type the path to your data directory (or use 'Browse...'). If you don't know where your data directory is located, type 'echo $VSC_DATA' on the login node's command line to see its value. Leave all other fields and checkboxes at their default values and press 'Next >'.
    4. -
    5. In the field 'Filter name', type any name you find convenient, e.g., 'My Data'. Leave the checkbox at its default value and click 'Finish'.
    6. -

    A new item called 'My Data' has now appeared under VSC's 'Sftp Files' and can be expanded to see the files in '$VSC_DATA'. Obviously, the same can be done for your scratch directory. -

    Using an Eclipse terminal

    The 'Remote Systems' view also allows to open a terminal to the remote connection. This can be used as an alternative to the PuTTY or OpenSSH client and may be convenient for software development (compiling, building and running programs) without leaving the Eclipse IDE. -

    A new terminal can be launched from the 'Remote Systems' view by right-clicking the VSC connection's 'Ssh Shells' item and selecting 'Launch Terminal' or 'Launch...' (depending on the version of Eclipse). The 'Terminals' view will open (bottom of the screen by default). -

    Connecting/Disconnecting

    Once a connection has been created, it is trivial to connect to it again. To connect to a remote host, right-click on the VSC cluster connection in the 'Remote Systems' view, and select 'Connect' from the context menu. You may be prompted to provide your private key's passphrase. -

    For security reasons, it may be useful to disconnect from the VSC cluster when Eclipse is no longer used to browse or edit files. Although this happens automatically when you exit the Eclipse IDE, you may want to disconnect without leaving the application. -

    To disconnect from a remote host, right-click on the VSC cluster connection in the 'Remote Systems' view, and select 'Disconnect' from the context menu. -

    Further information

    More information on Eclipse's capabilities to interact with remote systems can be found in the Eclipse help files that were automatically installed with the respective components. The information can be accessed by selecting 'Help Contents' from the 'Help' menu, and is available under 'RSE User Guide' heading. -

    " - diff --git a/HtmlDump/file_0365.html b/HtmlDump/file_0365.html deleted file mode 100644 index 6303864b3..000000000 --- a/HtmlDump/file_0365.html +++ /dev/null @@ -1,53 +0,0 @@ -

    Prerequisites

    It is assumed that a recent version of the Eclipse IDE is installed on the desktop, and that the user is familiar with Eclipse as a development environment. The installation instructions were tested with the Helios (2010), 4.4/Luna (2014) and the 4.6/Neon (2016) release of Eclipse but may be slightly different for other versions.

    Installation & setup

    In order to interact with subversion repositories, some extra plugins have to be installed in Eclipse. -

      -
    1. When you start Eclipse, note the code name of the version in the startup screen.
    2. -
    3. From the 'Help' menu, select 'Install New Software...'.
    4. -
    5. From the 'Work with' drop down menu, select 'Neon - http://download.eclipse.org/releases/neon' (where Neon is the name of the release, see the first step). This will populate the components list.
    6. -
    7. Expand 'Collaboration' and check the box for 'Subversive SVN Team Provider' and click the 'Next >' button.
    8. -
    9. Click 'Next >' in the 'Install Details' dialog.
    10. -
    11. Indicate that you accept the license agreement by selecting the appropriate radio button and click 'Finish'.
    12. -
    13. When Eclipse prompts you to restart it, do so by clicking 'Restart Now'
    14. -
    15. An additional component is needed (an SVN Team Provider). To trigger the install, open the Eclipse \"Preferences\" menu (under the \"File\" menu, or under \"Eclipse\" on OS X) and go to \"Team\" and then \"SVN\". -
    16. -
    17. Select the tab \"SVN connector\". -
    18. -
    19. Then click on \"Get Connectors\" to open the 'Subversive Connectors Discovery' dialog. -
      You will not see this button if there is already a connector installed. If you need a different one, you can still install one via \"Install new software\" in the \"Help\" menu. Search for \"SVNKit\" for connectors that don't need any additional software on the system (our preference), or \"JavaHL\" for another family that connects to the original implementation. Proceed in a similar way as below (step 13). -
    20. -
    21. The easiest choice is to use one of the \"SVN Kit\" connectors as they do not require the installation of other software on your computer, but you have to choose the appropriate version. The subversion project tries to maintain compatibility between server and client from different versions as much as possible, so the version shouldn't matter too much. However, if you'd like to mix using svn through Eclipse with another tool on your desktop/laptop, you have to be careful that the SVN connector is compatible with the other SVN tools on your system. SVN Kit 1.8.12 should work with other SVN tools that support version 1.7-1.9 according to the documentation (we cannot test all combinations ourselves).
      1. In case you prefer to use the \"Native JavaHL\" connector instead, make sure that you have subversion binaries including the Java bindings installed on your system, and pick the matching version of the connector. Also see the JavaHL subclipse Wiki page of the tigris.org community.
    22. -
    23. - Mark the checkbox next to the appropriate version of 'SVN Kit' and click 'Next >'.
    24. -
    25. The 'Install' dialog opens, offering to install two components, click 'Next >'.
    26. -
    27. The 'Install Details' dialog opens, click 'Next >'.
    28. -
    29. Accept the license agreement terms by checking the appropriate radio button in the 'Review Licenses' dialog and click 'Finish'.
    30. -
    31. You may receive a warning that unsigned code is about to be installed, click 'OK' to continue the installation.
    32. -
    33. Eclipse prompts you to restart to finish the installation, do so by clicking 'Restart Now'.
    34. -

    Eclipse is now ready to interact with subversion repositories. -

    Microsoft Windows PuTTY users only

    Eclipse's SSH components cannot handle private keys generated with PuTTY, only OpenSSH compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. See the section converting PuTTY keys to OpenSSH format in the page on generating keys with PuTTY for details if necessary. -

    Checking out a project from a VSC cluster repository

    To check out a project from a VSC cluster repository, one uses the Eclipse 'Import' feature (don't ask...). -

    svn+ssh://userid@vsc.login.node/data/leuven/300/vsc30000/svn-repo
    -

    \"Eclipse
    - In the 'User' field, enter your VSC user ID. -

    Note that Eclipse remembers repository URLs, hence checking out another project from the same repository will skip quite a number of the steps outlined above. -

    Work cycle

    The development cycle from the point of view of version control is exactly the same as that for a command line subversion client. Once a project has been checked out or placed under version control, all actions can be performed by right clicking on the project or specific files in the 'Project Explorer' view and choosing the appropriate action from the 'Team' entry in the context menu. The menu items are fairly self-explanatory, but you may want to read the section on TortoiseSVN, since Eclipse's version control interface is very similar to it. -

    Note that files and directories displayed in the 'Project Explorer' view are now decorated to indicate version control status. A '>' preceding a file or directory's name indicates that it has been modified since the last update. A new file not yet under version control has a '?' embedded in its icon. -

    When a project is committed, subversive opens a dialog to enter an appropriate comment, and offers to automatically add new files to the repository. Note that Eclipse also offers to commit its project settings, e.g., the '.project' file. Whether or not you wish to store these settings in the repository depends on your setup, but probably you don't. -

    " - diff --git a/HtmlDump/file_0367.html b/HtmlDump/file_0367.html deleted file mode 100644 index db2f82d51..000000000 --- a/HtmlDump/file_0367.html +++ /dev/null @@ -1,8 +0,0 @@ -

    If you're not familiar with Eclipse, read our introduction page first.

    Eclipse also supports several version control systems out of the box or through optional plug-ins.

    The PTP (Parallel Tools Platform) strongly encourages a model where you run eclipse locally on your workstation and let Eclipse synchronise the project files with your cluster account. If you want to use version control in this scenario, the PTP manual advises to put your local files under version control (which can be done through Eclipse also) and synchronise that with some remote repository (e.g., one of the hosting providers), and to not put the automatically synchronised version of the code that you use for compiling and running on the cluster also under version control. In other words,

    If you still want to use the cluster file space as a remote repository, we strongly recommend that you do this in a different directory from where you let Eclipse synchronise the files, and don't touch the files in that repository directly.

    For experts

    The synchronised projects feature in Eclipse internally uses the Git version control system to take care of the synchronisation. That's also the reason why the Parallel Software Development bundle of Eclipse comes with the EGit plug-in included. It does this however in a way that does not interfere with regular git operations. In both your local and remote project directory, you'll find a hidden .ptp-sync directory which in fact is a regular git repository, but stored in a different subdirectory rather than the standard .git subdirectory. So you can still have a standard Git repository besides it and they will not interfere if you follow the guidelines on this page.

    " - diff --git a/HtmlDump/file_0369.html b/HtmlDump/file_0369.html deleted file mode 100644 index 678bfe320..000000000 --- a/HtmlDump/file_0369.html +++ /dev/null @@ -1,56 +0,0 @@ -

    Prerequisites

    - -

    Environment & general use

    -

    - All operations introduced in the documentation page on using subversion repositories on the VSC clusters work as illustrated therein. The repository's URI can be conveniently assigned to an environment variable -

    -
    $ export SVN=\"svn+ssh://userid@vsc.login.node/data/leuven/300/vsc30000/svn-repo\"
    -
    -

    - where userid should be replaced by your own VSC user ID, and vsc.login.node by the appropriate login node for the cluster the repository is on. In the above, it is assumed that the SVN repository you are going to use is in your VSC data directory (here shown for user vsc30000) and is called svn-repo. This should be changed appropriately. -

    -

    -Checking out a project from a VSC cluster repository

    -

    - To check out the simulation project to a directory 'simulation' on your desktop, simply type: -

    -
    $ svn checkout  ${SVN}/simulation/trunk  simulation
    -
    -

    - The passphrase for your private key used to authenticate on the VSC cluster will be requested. -

    -

    - Once the project is checked out, you can start editing or adding files and directories, committing your changes when done. -

    -

    -Importing a local project into the VSC cluster repository

    -

    - Importing a project directory that is currently on your desktop and not on the VSC cluster is also possible, again by simply modifying the URLs in the previous section appropriately. Suppose the directory on your desktop is 'calculation', the steps to take are the following: -

    -
    $ svn mkdir -m 'calculation: creating dirs' --parents   \\
    -            $SVN/calculation/trunk    \\
    -            $SVN/calculation/branches \\
    -            $SVN/calculation/tags
    -$ svn import -m 'calculation: import' \\
    -             calculation              \\
    -             $SVN/calculation/trunk
    -
    -

    - Note that each time you access the repository, you need to authenticate, which gets tedious pretty soon. Consider using ssh-agent to simplify this; see, e.g., a short tutorial on a possible setup. -

    -

    -Links

    -" - diff --git a/HtmlDump/file_0371.html b/HtmlDump/file_0371.html deleted file mode 100644 index daefe65b7..000000000 --- a/HtmlDump/file_0371.html +++ /dev/null @@ -1,60 +0,0 @@ -

    Installing NX NoMachine client

    -

    NoMachine NX Client Configuration Guide

    -
      -
    1. NoMachine NX requires keys in OpenSSH format; therefore, the existing key needs to be converted into OpenSSH format if you're working on Windows and using PuTTY.
    2. -
    3. Start the NoMachine client and press twice continue to see the screen with connection. Press New to create a new connection.
    4. -
    5. Change the Protocol to SSH.
    6. -
    7. Choose the hostname:
    8. - -
    9. Choose the authentication Use the system login.
    10. -
    11. Choose the authentication method Private key.
    12. -
    13. Browse your private key. This should be in OpenSSH format (not .ppk). -
    14. -
    15. Choose the option Don’t use proxy for the network connection.
    16. -
    17. Give a name to your connection, e.g. Connection to nx.hpc.kuleuven.be. You can optionally create a link to that connection on your desktop. Click the \"Done\" button to finish the configuration.
    18. -
    19. Choose the connection you just created and press \"Connect\".
    20. -
    21. Enter your username (vsc-account) and passphrase for your private key and press \"ok\".
    22. -
    23. If you are connecting for the first time, choose New desktop. Otherwise, please go to step 16 for instructions on how to reconnect to your session.
    24. -
    25. Choose Create a new virtual desktop and continue. Each user is allowed to have a maximum of 5 desktops open.
    26. -
    27. Read the useful information regarding your session displayed on several screens. This step is very important on mobile devices: once you miss the instructions, it is not easy to figure out how to operate NoMachine on your device. You can optionally choose not to show these messages again.
    28. -
    29. Once connected you will see the virtual Linux desktop.
    30. -
    31. When reconnecting, choose your desktop from the list. If there are too many, you can use the option to find a user or a desktop and type your username (vsc-account). Once you have found your desktop, press connect.
    32. -
    33. You will be prompted about the screen resolution ('Change the server resolution to match the client when I connect'). This is a recommended setting, as your session will then correspond to the actual resolution of your device. When reconnecting from a different device (e.g. a mobile device), it is highly recommended to change the resolution.
    34. -
    -

    For more detailed information about the configuration process please refer to the short video (ThinKing configuration) showing the installation and configuration procedure step-by-step or to the document containing graphical instructions. -

    -

    How to start using NX on ThinKing?

    -
      -
    1. Once your desktop is open, you can use all the available GUI software listed in the Applications menu. Software is divided into several groups: -
    2. -
    3. Running applications in text mode requires having a terminal open. To launch the terminal, please go to Applications -> System tools -> Terminal. From the Terminal, all the commands available on a regular login node can be used.
    4. -
    5. Some more information can be found in the slides from our lunchbox session. In the slides you can find information on how to connect the local HDD to the NX session for easier transfer of data between the cluster and your local computer.
    6. -
    -

    Attached documents

    -" - diff --git a/HtmlDump/file_0373.html b/HtmlDump/file_0373.html deleted file mode 100644 index e8a68ccdc..000000000 --- a/HtmlDump/file_0373.html +++ /dev/null @@ -1,16 +0,0 @@ -

    There are two possibilities

      -
    1. - You can copy your private key from the machine where you generated the key to the other computers you want to use to access the VSC clusters. - If you want to use both PuTTY on Windows and the traditional OpenSSH client on OS X or Linux (or Windows with Cygwin) and choose this scenario, you should generate the key using PuTTY and then export it in OpenSSH format as explained on the PuTTY pages. -
    2. -
    3. Alternatively, you can generate another keypair for the second machine following the instructions for your respective client (Windows, macOS/OS X, Linux) and then upload the new public key to your account: -
        -
      1. Go to the account management web site account.vscentrum.be
      2. -
      3. Choose \"Edit account\"
      4. -
      5. And then add the public key via that page. It can take half an hour before you can use the key.
      6. -
      -
    4. -

    We prefer the second scenario, in particular if you want to access the clusters from a laptop or tablet, as these are easily stolen. In this way, if your computer is stolen or your key may be compromised in some other way, all you need to do is delete that key on the account website (via \"Edit account\"). You can continue to work on your other devices. -

    " - diff --git a/HtmlDump/file_0375.html b/HtmlDump/file_0375.html deleted file mode 100644 index 3424ec5a8..000000000 --- a/HtmlDump/file_0375.html +++ /dev/null @@ -1,64 +0,0 @@ -

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of the data. The following locations are available:

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created. -
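    For example, refer to these locations through the environment variables rather than through hard-coded paths (the file and directory names below are hypothetical):

    cd $VSC_DATA
    cp $VSC_HOME/input.dat $VSC_DATA/myproject/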

    Quota is enabled on the three directories, which means the amount of data you can store here is limited by the operating system, and not just by the capacity of the disk system, to prevent the disk system from filling up accidentally. You can see your current usage and the current limits with the appropriate quota command as explained on the page on managing disk space. The actual disk capacity, shared by all users, can be found on the Available hardware page. -

You will receive a warning when you reach the soft limit of either quota, but you only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus these new files are lost. You will, however, not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

This directory is where you arrive by default when you log in to the cluster. Your shell refers to it as "~" (tilde), and it is also available via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is not imposed automatically), and usually used frequently. The typical use is storing configuration files, e.g., by Matlab, Eclipse, ... -

    The operating system also creates a few files and folders here to manage your account. Examples are: -

.ssh/ - This directory contains files necessary for you to log in to the cluster and to submit jobs. Do not remove them, and do not alter anything if you don't know what you're doing!
.profile, .bash_profile - These scripts define some general settings for your sessions.
.bashrc - This script is executed every time you start a session on the cluster: when you log in and when a job starts. You can edit this file to define variables and aliases; however, note that loading modules here is strongly discouraged.
.bash_history - This file contains the commands you typed at your shell prompt, in case you need them again.
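As an illustration, a harmless .bashrc customisation could look like the sketch below (the alias and variable are only examples); note that it deliberately does not load any modules:

# ~/.bashrc - executed for every session, including when a job starts

# A convenience alias (example only)
alias ll='ls -l'

# A personal environment variable (example only)
export EDITOR=vim

# Do NOT load modules here; load them in your job scripts instead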

    Data directory

In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume. I/O-intensive programs should not run directly from this volume (and if you're not sure whether your program is I/O-intensive, don't run it from this volume).

    This directory is also a good location to share subdirectories with other users working on the same research projects.

    Scratch space

To enable quick writing from your jobs, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders and can be used for the storage of temporary and/or transient data (temporary results, anything you just need during your job or your batch of jobs).

You should remove any data from these file systems once your processing has finished. There are no guarantees about how long data will be kept on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch and can be anywhere between a day and a few weeks. These policies are not guaranteed to remain unchanged; we may adjust them if that seems necessary for the healthy operation of the cluster.
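As a sketch of typical scratch usage in a job script, assuming a scratch file system reachable through an environment variable such as $VSC_SCRATCH (the variable name, resources and file names are illustrative; check which scratch variables exist at your site):

#!/bin/bash -l
#PBS -l nodes=1
#PBS -l walltime=1:00:00

# Work in a job-specific subdirectory of the scratch file system
WORKDIR=$VSC_SCRATCH/$PBS_JOBID
mkdir -p $WORKDIR
cd $WORKDIR

# Copy input from the data directory, run, and copy the results back
cp $VSC_DATA/input.dat .
./my_program input.dat > output.dat
cp output.dat $VSC_DATA/

# Clean up the scratch space when the job is done
cd && rm -rf $WORKDIR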

Each type of scratch has its own use:

    " - diff --git a/HtmlDump/file_0377.html b/HtmlDump/file_0377.html deleted file mode 100644 index 8ceac1b70..000000000 --- a/HtmlDump/file_0377.html +++ /dev/null @@ -1,6 +0,0 @@ -

    BEgrid has its own documentation web site as it is a project at the federal level. Some useful links are:

    " - diff --git a/HtmlDump/file_0381.html b/HtmlDump/file_0381.html deleted file mode 100644 index 9bab5c207..000000000 --- a/HtmlDump/file_0381.html +++ /dev/null @@ -1,2 +0,0 @@ -

    This is just some random text. Don't be worried if the remainder of this paragraph sounds like Latin to you cause it is Latin. Cras mattis consectetur purus sit amet fermentum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed posuere consectetur est at lobortis. Morbi leo risus, porta ac consectetur ac, vestibulum at eros. Cras mattis consectetur purus sit amet fermentum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed posuere consectetur est at lobortis. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.

    " - diff --git a/HtmlDump/file_0385.html b/HtmlDump/file_0385.html deleted file mode 100644 index 2a1c394ac..000000000 --- a/HtmlDump/file_0385.html +++ /dev/null @@ -1 +0,0 @@ -

    What I tried to do with the \"Asset\" box in the right column:


    diff --git a/HtmlDump/file_0387.html b/HtmlDump/file_0387.html deleted file mode 100644 index fd2096223..000000000 --- a/HtmlDump/file_0387.html +++ /dev/null @@ -1,37 +0,0 @@ -

    Inline code with <code>...</code>

    We used inline code on the old vscentrum.be to clearly mark system commands etc. in text.

Example: At UAntwerpen you'll have to use module avail MATLAB and module load MATLAB/2014a respectively.

However, if you enter both <code> blocks on the same line in an HTML file, the editor doesn't process them well: module avail MATLAB and <code>module load MATLAB.

And this is inline code as a test.

And this becomes a new pre-block:

#!/bin/bash
echo "Hello, world!"

    Code in <pre>...</pre>

This was used a lot on the old vscentrum.be site to display fragments of code or output in a console window.

#!/bin/bash -l
#PBS -l nodes=1:nehalem
#PBS -l mem=4gb
module load matlab
cd $PBS_O_WORKDIR
...

    The <code> style in the editor

In fact, the Code style of the editor works on a paragraph basis and all it does is put the paragraph between <pre> and </pre> tags, so the problem mentioned above remains. The next text was edited in WYSIWYG mode:

#!/bin/bash -l
#PBS -l nodes=4:ivybridge
...

    Another editor bug is that it isn't possible to switch back to regular text mode at the end of a code fragment if that is at the end of the text widget: The whole block is converted back to regular text instead and the formatting is no longer shown. -

Could a workaround be to use multiple <pre> blocks?

#!/bin/bash -l

#PBS -l nodes=4:ivybridge

...

No, because then you get multiple grey boxes...

And with <br> and the <code> tag?

#! /bin/bash -l
#PBS -l nodes=4:ivybridge
...

This is not ideal either, because everything is not together in one box, but it is better than nothing...

    " - diff --git a/HtmlDump/file_0395.html b/HtmlDump/file_0395.html deleted file mode 100644 index eabde5d32..000000000 --- a/HtmlDump/file_0395.html +++ /dev/null @@ -1,3 +0,0 @@ -

    Tier-1 infrastructure

    -

Our first Tier-1 cluster, Muk, was installed in the spring of 2012 and became operational a few months later. This system is primarily optimised for large parallel computing tasks that need a high-speed interconnect.

    " - diff --git a/HtmlDump/file_0399.html b/HtmlDump/file_0399.html deleted file mode 100644 index dde5d4156..000000000 --- a/HtmlDump/file_0399.html +++ /dev/null @@ -1,373 +0,0 @@ -

The list below gives an indication of which (scientific) software, libraries and compilers were available on Tier-1 on 1 December 2014. For each package, the available version(s) are shown, as well as (if applicable) the compilers/libraries/options with which the software was compiled. Note that some software packages are only available when the end user can demonstrate that they hold valid licenses to use this software on the Tier-1 infrastructure of Ghent University.

    " - diff --git a/HtmlDump/file_0403.html b/HtmlDump/file_0403.html deleted file mode 100644 index f699d59bb..000000000 --- a/HtmlDump/file_0403.html +++ /dev/null @@ -1,2 +0,0 @@ -

    VSC Echo newsletter

    -

    VSC Echo is e-mailed three times a year to all subscribers. The newsletter contains updates about our infrastructure, training programs and other events and highlights some of the results obtained by users of our clusters.

    diff --git a/HtmlDump/file_0407.html b/HtmlDump/file_0407.html deleted file mode 100644 index c700ac842..000000000 --- a/HtmlDump/file_0407.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Mission & vision

    -

    Upon the establishment of the VSC, the Flemish government assigned us a number of tasks.

    diff --git a/HtmlDump/file_0409.html b/HtmlDump/file_0409.html deleted file mode 100644 index da0d9f6af..000000000 --- a/HtmlDump/file_0409.html +++ /dev/null @@ -1,2 +0,0 @@ -

    The VSC in Flanders

    -

    The VSC is a partnership of five Flemish university associations. The infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain.

    diff --git a/HtmlDump/file_0411.html b/HtmlDump/file_0411.html deleted file mode 100644 index 2cf171980..000000000 --- a/HtmlDump/file_0411.html +++ /dev/null @@ -1,3 +0,0 @@ -

    Our history

    -

    Since its establishment in 2007, the VSC has evolved and grown considerably.

    " - diff --git a/HtmlDump/file_0413.html b/HtmlDump/file_0413.html deleted file mode 100644 index 6a5441ab2..000000000 --- a/HtmlDump/file_0413.html +++ /dev/null @@ -1,3 +0,0 @@ -

    Publications

    -

    In this section you’ll find all previous editions of our newsletter and various other publications issued by the VSC.

    " - diff --git a/HtmlDump/file_0415.html b/HtmlDump/file_0415.html deleted file mode 100644 index bef7e3262..000000000 --- a/HtmlDump/file_0415.html +++ /dev/null @@ -1,3 +0,0 @@ -

    Organisation structure

    -

    In this section you can find more information about the structure of our organisation and the various advisory committees.

    " - diff --git a/HtmlDump/file_0417.html b/HtmlDump/file_0417.html deleted file mode 100644 index c2a6d6408..000000000 --- a/HtmlDump/file_0417.html +++ /dev/null @@ -1,3 +0,0 @@ -

    Press material

    -

    Would you like to write about our services? On this page you will find useful material such as our logo or recent press releases.

    " - diff --git a/HtmlDump/file_0451.html b/HtmlDump/file_0451.html deleted file mode 100644 index 833f3dd11..000000000 --- a/HtmlDump/file_0451.html +++ /dev/null @@ -1 +0,0 @@ -

    Op 25 oktober 2012 organiseerde het VSC de plechtige ingebruikname van de eerste Vlaamse tier 1 cluster aan de Universiteit Gent, waar de cluster ook geplaatst werd.

    diff --git a/HtmlDump/file_0455.html b/HtmlDump/file_0455.html deleted file mode 100644 index 6c3accfde..000000000 --- a/HtmlDump/file_0455.html +++ /dev/null @@ -1 +0,0 @@ -

    On 25 October 2012 the VSC inaugurated the first Flemish tier 1 compute cluster. The cluster is housed in the data centre of Ghent University.

    diff --git a/HtmlDump/file_0459.html b/HtmlDump/file_0459.html deleted file mode 100644 index b264dbcf6..000000000 --- a/HtmlDump/file_0459.html +++ /dev/null @@ -1,15 +0,0 @@ -

    Programma / Programme

The programme was followed by the official inauguration of the cluster in the data centre and a reception.

    " - diff --git a/HtmlDump/file_0461.html b/HtmlDump/file_0461.html deleted file mode 100644 index 69a70ed01..000000000 --- a/HtmlDump/file_0461.html +++ /dev/null @@ -1,6 +0,0 @@ -

    Links

    -" - diff --git a/HtmlDump/file_0465.html b/HtmlDump/file_0465.html deleted file mode 100644 index ef6c32c15..000000000 --- a/HtmlDump/file_0465.html +++ /dev/null @@ -1 +0,0 @@ -

We organise regular training sessions on many HPC-related topics, ranging from introductory to advanced level. We also actively promote some courses organised elsewhere. The courses are open to participants from the university associations, and many are also open to external users (limitations are often caused by the software licenses of the packages used during the hands-on sessions). For further info, you can contact the course coordinator Geert Jan Bex.

    diff --git a/HtmlDump/file_0467.html b/HtmlDump/file_0467.html deleted file mode 100644 index 3c9d7d568..000000000 --- a/HtmlDump/file_0467.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Previous events and training sessions

    -

    We keep links to our previous events and training sessions. Materials used during the course can also be found on those pages.

    diff --git a/HtmlDump/file_0469.html b/HtmlDump/file_0469.html deleted file mode 100644 index 771acd9e2..000000000 --- a/HtmlDump/file_0469.html +++ /dev/null @@ -1 +0,0 @@ -

    More questions? Contact the course coordinator or one of the other coordinators.

    diff --git a/HtmlDump/file_0471.html b/HtmlDump/file_0471.html deleted file mode 100644 index 48c01af05..000000000 --- a/HtmlDump/file_0471.html +++ /dev/null @@ -1,349 +0,0 @@ -

On your application form, you will be asked to indicate the scientific domain of your application according to the NWO classification. Below we present the list of domains and subdomains. You only need to give the domain in your application, but the subdomains may make it easier to determine the most suitable domain for your application.

    " - diff --git a/HtmlDump/file_0475.html b/HtmlDump/file_0475.html deleted file mode 100644 index 879ad3396..000000000 --- a/HtmlDump/file_0475.html +++ /dev/null @@ -1,6 +0,0 @@ -" - diff --git a/HtmlDump/file_0477.html b/HtmlDump/file_0477.html deleted file mode 100644 index c36ef88f9..000000000 --- a/HtmlDump/file_0477.html +++ /dev/null @@ -1,32 +0,0 @@ -

    \"\" -

PRESS RELEASE FROM DEPUTY MINISTER-PRESIDENT INGRID LIETEN
FLEMISH MINISTER FOR INNOVATION, PUBLIC INVESTMENT, MEDIA AND POVERTY REDUCTION

Thursday 25 October 2012

First Tier 1 supercomputer inaugurated at UGent.

Today the first Tier 1 supercomputer of the Flemish Supercomputer Centre (VSC) is being officially inaugurated at UGent. The supercomputer is an initiative of the Flemish government to give researchers in Flanders access to an exceptionally powerful computing infrastructure, so that they can better address the societal challenges we face today. "The VSC must make high performance computing accessible to knowledge institutions and companies. This will enable breakthroughs in domains such as healthcare, chemistry and the environment," says Ingrid Lieten.

Supercomputers have become indispensable in the international research community. These large computing infrastructures were recently an essential link in the discovery of the Higgs particle. Their computing capacity makes it possible to simulate reality ever more accurately. This has created a new way of doing research, with important applications for our economy and our society.

"Thanks to supercomputers, weather forecasts over longer periods become ever more reliable, and climate change and natural disasters can be predicted better. Cars become safer because manufacturers can simulate the course of collisions and their impact on passengers in detail. The supercomputer can also contribute fundamentally to the evolution towards medicine tailored to the individual patient, since the development of drugs relies to a large extent on simulations of chemical reactions," says Ingrid Lieten.

The Flemish Supercomputer Centre is open to all Flemish researchers, from the knowledge institutions and strategic research centres as well as from companies. It offers opportunities for universities and industry, but also for governments, health insurance funds and other care organisations. The supercomputer must make an important contribution to the search for solutions to the major societal challenges, in the most diverse domains. For instance, the supercomputer can help develop new drugs, or analyse demographic evolutions for the human and social sciences, such as the ageing of the population and how to deal with it. The supercomputer will also be used to design state-of-the-art wind turbines and to compute complex models for predicting climate change.

To make the possibilities of the supercomputer better known and to stimulate its use in Flanders, the Hercules Foundation has been given the task of actively promoting the Flemish Supercomputer Centre and providing training. The Hercules Foundation is the Flemish agency for the funding of medium-scale and large-scale infrastructure for fundamental and strategic basic research. It will ensure that associations, knowledge institutions, strategic research centres, industry, etc. all get equally smooth access to the Tier 1 supercomputer. Housing and technical operation remain with the associations.

"With the inauguration of the Tier 1 system, Flanders is now truly on the European map as far as high performance computing is concerned. Flemish researchers get the opportunity to join important European research projects, in fundamental as well as applied research," says Ingrid Lieten.

The Flemish Supercomputer Centre manages both the so-called 'Tier 2' computers, which are located locally at the universities, and the 'Tier 1' computer, which is used for even more complex applications.

Press contact:

Lot Wildemeersch, spokesperson for Ingrid Lieten
0477 810 176 | lot.wildemeersch@vlaanderen.be
www.ingridlieten.be

    " - diff --git a/HtmlDump/file_0479.html b/HtmlDump/file_0479.html deleted file mode 100644 index 50758bbbe..000000000 --- a/HtmlDump/file_0479.html +++ /dev/null @@ -1,2 +0,0 @@ -

    \"\"

    " - diff --git a/HtmlDump/file_0481.html b/HtmlDump/file_0481.html deleted file mode 100644 index b00896019..000000000 --- a/HtmlDump/file_0481.html +++ /dev/null @@ -1,35 +0,0 @@ - - - - - - -
    \"Logo - March 23 2009
    Launch Flemish Supercomputer Centre -

    The official launch took place on 23 March 2009 in the Promotiezaal of the Universiteitshal of the K.U.Leuven, Naamsestraat 22, 3000 Leuven. -

    The press mentioning the VSC launch event: -

    \"uitnodiging -

    The images at the top of this page are courtesy of NUMECA International and research groups at Antwerp University, the Vrije Universiteit Brussel and the KU Leuven. -

    " - diff --git a/HtmlDump/file_0483.html b/HtmlDump/file_0483.html deleted file mode 100644 index a6b01497c..000000000 --- a/HtmlDump/file_0483.html +++ /dev/null @@ -1 +0,0 @@ -

    The program contains links to some of the presentations. The copyright for the presentations remains with the original authors and not with the VSC. Reproducing parts of these presentations or using them in other presentations can only be done with the agreement of the author(s) of the presentation.

    14u15 Scientific program
    14u15 Dr. ir. Kurt Lust (Vlaams Supercomputer Centrum). Presentation of the VSC
    Presentation (PDF)
    14u30 Prof. dr. Patrick Bultinck (Universiteit Gent). In silico Chemistry: Quantum Chemistry and Supercomputers
    Presentation (PDF)
    14u45 Prof. dr. Wim Vanroose (Universiteit Antwerpen). Large scale calculations of molecules in laser fields
    Presentation (PDF)
    15u00 Prof. dr. Stefaan Tavernier (Vrije Universiteit Brussel). Grid applications in particle and astroparticle physics: The CMS and IceCube projects
    Presentation (PDF)
    15u15 Prof. dr. Dirk Van den Poel (Universiteit Gent). Research using HPC capabilities in the field of economics/business & management science
    Presentation (PDF)
    15u30 Dr. Kris Heylen (K.U.Leuven). Supercomputing and Linguistics
    Presentation (PDF)
    15u45 Dr. ir. Lies Geris (K.U.Leuven). Modeling in biomechanics and biomedical engineering
Presentation (PDF)
    16u00 Prof. dr. ir. Chris Lacor (Vrije Universiteit Brussel) and Prof. Dr. Stefaan Poedts (K.U.Leuven). Supercomputing in CFD and MHD
    16u15 Coffee break
    17u00 Academic session
    17u00 Prof. dr. ir. Karen Maex, Chairman of the steering group of the Vlaams Supercomputer Centrum
Presentation (PDF)
    17u10 Prof. dr. dr. Thomas Lippert, Director of the Institute for Advanced Simulation and head of the Jülich Supercomputer Centre, Forschungszentrum Jülich. European view on supercomputing and PRACE
    Presentation (PDF)
    17u50 Prof. dr. ir. Charles Hirsch, President of the HPC Working Group of the Royal Flemish Academy of Belgium for Sciences and the Arts (KVAB)
    Presentation (PDF)
    18u00 Prof. dr. ir. Bart De Moor, President of the Board of Directors of the Hercules Foundation
    Presentation (PDF)
    18u10 Minister Patricia Ceysens, Flemish Minister for Economy, Enterprise, Science, Innovation and Foreign Trade
    18u30 Reception

    Abstracts

    Prof. dr. Patrick Bultinck. In silico Chemistry: Quantum Chemistry and Supercomputers

    Universiteit Gent/Ghent University, Faculty of Sciences, Department of Inorganic and Physical Chemistry

    Quantum Chemistry deals with the chemical application of quantum mechanics to understand the nature of chemical substances, the reasons for their (in)stability but also with finding ways to predict properties of novel molecules prior to their synthesis. The working horse of quantum chemists is therefore no longer the laboratory but the supercomputer. The reason for this is that quantum chemical calculations are notoriously computationally demanding.
    These computational demands are illustrated by the scaling of computational demands with respect to the size of molecules and the level of theory applied. An example from Vibrational Circular Dichroism calculations shows how supercomputers play a role in stimulating innovation in chemistry.

    Prof. dr. Patrick Bultinck (° Blankenberge, 1971) is professor in Quantum Chemistry, Computational and inorganic chemistry at Ghent University, Faculty of Sciences, Department of Inorganic and Physical Chemistry. He is author of roughly 100 scientific publications and performs research in quantum chemistry with emphasis on the study of concepts such as the chemical bond, the atom in the molecule and aromaticity. Another main topic is the use of computational (quantum) chemistry in drug discovery. In 2002 and 2003 P. Bultinck received grants from the European Center for SuperComputing in Catalunya for his computationally demanding work in this field.

    Prof. dr. Wim Vanroose. Large scale calculations of molecules in laser fields

    Universiteit Antwerpen, Department of Mathematics and Computer Science

Over the last decade, calculations on large scale computers have caused a revolution
in the understanding of the ultrafast dynamics at play at the microscopic level. We give an overview of the international efforts to advance the computational tools for this area of science. We also discuss how the results of the calculations are guiding chemical experiments.

Prof. dr. Wim Vanroose is BOF research professor at the Department of Mathematics and Computer Science, Universiteit Antwerpen. He is involved in international efforts to build computational tools for large scale simulations of ultrafast microscopic dynamics. Between 2001 and 2004 he was a computational scientist at the NERSC computing center at Berkeley Lab, Berkeley, USA.

    Prof. dr. Stefaan Tavernier. Grid applications in particle and astroparticle physics: The CMS and IceCube projects

    Vrije Universiteit Brussel, Faculty of Science and Bio-engineering Sciences, Department of Physics, Research Group of Elementary Particle Physics

The Large Hadron Collider (LHC) at the international research centre CERN near Geneva is due to go into operation at the end of 2009. It will be the most powerful particle accelerator ever, and will give us a first glimpse of the new phenomena that are expected to occur at these energies. However, the analysis of the data produced by the experiments around this accelerator also represents an unprecedented challenge. The VUB, UGent and UA participate in the CMS project, one of the four major experiments to be performed at this accelerator. One year of CMS operation will result in about 10^6 GBytes of data. To cope with this flow of data, the CMS collaboration has set up a grid computing infrastructure with distributed computing resources spread over the participating laboratories on four continents.
    The IceCube Neutrino Detector is a neutrino observatory currently under construction at the South Pole. IceCube is being constructed in deep Antarctic ice by deploying thousands of optical sensors at depths between 1,450 and 2,450 meters. The main goal of the experiment is to detect very high energy neutrinos from the cosmos. The neutrinos are not detected themselves. Instead, the rare instance of a collision between a neutrino and an atom within the ice is used to deduce the kinematical parameters of the incoming neutrino. The sources of those neutrinos could be black holes, gamma ray bursts, or supernova remnants. The data that IceCube will collect will also contribute to our understanding of cosmic rays, supersymmetry, weakly interacting massive particles (WIMPS), and other aspects of nuclear and particle physics. The analysis of the data produced by ice cube requires similar computing facilities as the analysis of the LHC data.

    Prof. dr. Stefaan Tavernier is professor of physics at the Vrije Universiteit Brussel. He obtained a Ph.D. at the Faculté des sciences of Orsay(France) in 1968, and a \"Habilitation\" at de VUB in 1984. He spent most of his scientific career working on research projects at the international research centre CERN in Geneva. He has been project leader for the CERN/NA25 project, and he presently is the spokesperson of the CERN/Crystal Clear(RD18) collaboration. His main expertise is in experimental methods for particle physics. He has over 160 publications in peer reviewed international journals, made several contributions to books and has several patents. He is also the author of a textbook on experimental methods in nuclear and particle physics.

    Prof. dr. Dirk Van den Poel. Research using HPC capabilities in the field of economics/business & management science

    Universiteit Gent/Ghent University, Faculty of Economics and Business Administration, Department of Marketing, www.crm.UGent.be and www.mma.UGent.be

    HPC capabilities in the field of economics/business & management science are most welcome when optimizing specific quantities (e.g. maximizing sales, profits, service level, or minimizing costs) subject to certain constraints. Optimal solutions for common problems are usually computationally infeasible even with the biggest HPC installations, therefore researchers develop heuristics or use techniques such as genetic algorithms to come close to optimal solutions. One of the nice properties they possess is that they are typically easily parallelizable. In this talk, I will give several examples of typical research questions, which need an HPC infrastructure to obtain good solutions in a reasonable time window. These include the optimization of marketing actions towards different marketing segments in the domain of analytical CRM (customer relationship management) and solving multiple-TSP (traveling salesman problem) under load balancing, alternatively known as the vehicle routing problem under load balancing.

    Prof. dr. Dirk Van den Poel (° Merksem, 1969) is professor of marketing modeling/analytical customer relationship management (aCRM) at Ghent University. He obtained his MSc in management/business engineering as well as PhD from K.U.Leuven. He heads the modeling cluster of the Department of Marketing at Ghent University. He is program director of the Master of Marketing Analysis, a one-year program in English about predictive analytics in marketing. His main interest fields are aCRM, data mining (genetic algorithms, neural networks, random forests, random multinomial logit: RMNL), text mining, optimal marketing resource allocation and operations research.

    Dr. Kris Heylen. Supercomputing and Linguistics

    Katholieke Universiteit Leuven, Faculty of Arts, Research Unit Quantitative Lexicology and Variational Linguistics (QLVL)

    Communicating through language is arguably one of the most complex processes that the most powerful computer we know, the human brain, is capable of. As a science, Linguistics aims to uncover the intricate system of patterns and structures that make up human language and that allow us to convey meaning through words and sentences. Although linguists have been investigating and describing these structures for ages, it is only recently that large amounts of electronic data and the computational power to analyse them have become available and have turned linguistics into a truly data-driven science. The primary data for linguistic research is ordinary, everyday language use like conversations or texts. These are collected in very large electronic text collections, containing millions of words and these collections are then mined for meaningful structures and patterns. With increasing amounts of data and ever more advanced statistical algorithms, these analyses are not longer feasible on individual servers but require the computational power of interconnected super computers.
    In the presentation, I will briefly describe two case studies of computationally heavy linguistic research. A first case study has to do with the pre-processing of linguistic data. In order to find patterns at different levels of abstraction, each word in the text collection has to be enriched with information about its word class (noun, adjective, verb,..) and syntactic function within the sentence (subject, direct object, indirect object...). A piece of software, called a parser, can add this information automatically. For our research, we wanted to parse a text collection of 1.3 billion words, i.e. all issues from a 7 year period of 6 Flemish daily newspapers, representing a staggering 13 years of computing on an ordinary computer. Thanks to the K.U.Leuven's supercomputer, this could be done in just a few months. This data has now been made available to the wider research community.

    Dr. Kris Heylen obtained a Master in Germanic Linguistics (2000) and a Master in Artificial Intelligence (2001) from the K.U.Leuven. In 2005, he was awarded a PhD in Linguistics at the K.U.leuven for his research into the statistical modelling of German word order variation. Since 2006, he is a postdoctoral fellow at the Leuven research unit Quantitative Lexicology and Variational Linguistics (QLVL), where he has further pursued his research into statistical language modelling with a focus on lexical patterns and word meaning in Dutch.

    Dr. ir. Lies Geris. Modeling in biomechanics and biomedical engineering

    Katholieke Universiteit Leuven, Faculty of Engineering, Department of Mechanical Engineering, Division of Biomechanics and Engineering Design

    The first part of the presentation will discuss the development and applications of a mathematical model of fracture healing. The model encompasses several key-aspects of the bone regeneration process, such as the formation of blood vessels and the influence of mechanical loading on the progress of healing. The model is applied to simulate adverse healing conditions leading to a delayed or nonunion. Several potential therapeutic approaches are tested in silico in order to find the optimal treatment strategy. Going towards patient specific models will require even more computer power than is the case for the generic examples presented here.
    The second part of the presentation will give an overview of other modeling work in the field of biomechanics and biomedical engineering, taking place in Leuven and Flanders. The use of super computer facilities is required to meet the demand for more detailed models and patient specific modeling.

    Dr. ir. Liesbet Geris is a post-doctoral research fellow of the Research Foundation Flanders (FWO) working at the Division of Biomechanics and Engineering Design of the Katholieke Universiteit Leuven, Belgium. From the K.U.Leuven, she received her MSc degree in Mechanical Engineering in 2002 and her PhD degree in Engineering in 2007, both summa cum laude. In 2007 she worked for 4 months as an academic visitor at the Centre of Mathematical Biology of Oxford University. Her research interests encompass the mathematical modeling of bone regeneration during fracture healing, implant osseointegration and tissue engineering applications. The phenomena described in the mathematical models reach from the tissue level, over the cell level, down to the molecular level. She works in close collaboration with experimental and clinical researchers from the university hospitals Leuven, focusing on the development of mathematical models of impaired healing situations and the in silico design of novel treatment strategies. She is the author of 36 refereed journal and proceedings articles, 5 chapters and reviews and 18 peer-reviewed abstracts. She has received a number of awards, including the Student Award (2006) of the European Society of Biomechanics (ESB) and the Young Investigator Award (2008) of the International Federation for Medical and Biological Engineering (IFMBE).

    Prof. dr. ir. Chris Lacor1 en Prof. dr. Stefaan Poedts2. Supercomputing in CFD and MHD

    1Vrije Universiteit Brussel, Faculty of Applied Sciences, Department of Mechanical Engineering
    2Katholieke Universiteit Leuven, Faculty of Sciences, Department of Mathematics, Centre for Plasma Astrophysics

    CFD is an application field in which the available computing power is typically always lagging behind. With the increase of computer capacity CFD is looking towards more complex applications – because of increased geometrical complication or multidisciplinary aspects e.g. aeroacoustics, turbulent combustion, biological flows, etc – or more refined models such as Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS). In this presentation some demanding application fields of CFD will be highlighted, to illustrate this.
    Computational MHD has a broad range of applications. We will survey some of the most CPU demanding applications in Flanders in the context of examples of the joint initiatives combining expertise from multiple disciplines, the VSC will hopefully lead to, such as the customised applications built in the COOLFluiD and AMRVAC-CELESTE3D projects.

    Prof. dr. ir. Chris Lacor obtained a degree in Electromechanical Engineering at VUB in 79 and his PhD in 86 at the same university. Currently he is Head of the Research Group Fluid Mechanics and Thermodynamics of the Faculty of Engineering at VUB. His main research field is Computational Fluid Dynamics (CFD). He stayed at the NASA Ames CFD Branch as an Ames associate in 87 and at EPFL IMF in 89 where he got in contact with the CRAY supercomputers. In the early 90ies he was co-organizer of supercomputing lectures for the VUB/ULB CRAY X-MP computer. His current research focuses on Large Eddy Simulation, high-order accurate schemes and efficient solvers in the context of a variety of applications such as Computational Aeroacoustics, Turbulent Combustion, Non-Deterministic methods and Biological Flows. He is author of more than 100 articles in journals and on international conferences. He is also a fellow of the Flemish Academic Centre for Science and the Arts (VLAC).

    Prof. dr. Stefaan Poedts obtained his degree in Applied Mathematics in 1984 at the K.U.Leuven. As 'research assistant' of the Belgian National Fund for Scientific Research he obtained a PhD in Sciences (Applied Mathematics) in 1988 at the same university. He spent two years at the Max-Planck-Institut für Plasmaphysik in Garching bei München and five years at the FOM-Instituut voor Plasmafysica 'Rijnhuizen'. In October 1996 he returned to the K.U.Leuven as Research Associate of the FWO-Vlaanderen at the Centre for Plasma Astrophysics (CPA) in the Department of Mathematics. Since October 1, 2000 he is Academic Staff at the K.U.Leuven, presently as Full Professor. His research interests include solar astrophysics, space weather and controlled thermonuclear fusion. He co-authored two books and 170 journal articles on these subjects. He is president of the European Solar Physics Division (EPS & EAS) and chairman of the Leuven Mathematical Modeling and Computational Science Centre. He is also member of ESA’s Space Weather Working Team and Solar System Working Group.

    diff --git a/HtmlDump/file_0485.html b/HtmlDump/file_0485.html deleted file mode 100644 index 848c9557b..000000000 --- a/HtmlDump/file_0485.html +++ /dev/null @@ -1,87 +0,0 @@ - - - - - - -
March 23 2009
Launch Flemish Supercomputer Center

    - The Flemish Supercomputer Centre (Vlaams Supercomputer Centrum) cordially invites you to its official launch on 23 March 2009. -

    -
    - Supercomputing is a crucial technology for the twenty-first century. Fast and efficient compute power is needed for leading scientific research, the industrial development and the competitiveness of our industry. For this reason the Flemish government and the five university associations have decided to set up a Flemish Supercomputer Centre (VSC). This centre will combine the clusters at the various Flemish universities in a single high-performance network and expand it with a large cluster that can withstand international comparison. The VSC will make available a high-performance and user-friendly supercomputer infrastructure and expertise to users from academic institutions and the industry. -

    - Program -

14.15  Scientists from various disciplines tell about their experiences with HPC and grid computing
16.15  Coffee break
17.00  Official program, in the presence of minister Ceysens, Flemish minister of economy, enterprise, science, innovation and foreign trade of Flanders.
18.30  Reception

    - A detailed program is available by clicking on this link. All presentations will be in English. -

    - Location -

    - Promotiezaal of the Universiteitshal of the K.U.Leuven, -

    - Naamsestraat 22, 3000 Leuven. -

    - Please register by 16 March 2009 using this electronic form. -

    - Plan and parking -

    Parkings in the neighbourhood:
    -

    - The Universiteitshal is within walking distance of the train station of Leuven. Bus 1 (Heverlee Boskant) and 2 (Heverlee Campus) stop nearby. -

    - \"invitation -

    - The images at the top of this page are courtesy of NUMECA International and research groups at Antwerp University, the Vrije Universiteit Brussel and the K.U.Leuven. -

    " - diff --git a/HtmlDump/file_0487.html b/HtmlDump/file_0487.html deleted file mode 100644 index 8d3bb597e..000000000 --- a/HtmlDump/file_0487.html +++ /dev/null @@ -1,76 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    \"NUMECA - -

Free-surface simulation. Figure courtesy of NUMECA International.
Simulation of a turbine with coolring. Figure courtesy of NUMECA International.
Purkinje cell model. Figure courtesy of Erik De Schutter, Theoretical Neurobiology, Universiteit Antwerpen.
Electron density at adsorption of NO2 on graphene, computed using density functional theory (using the software package absint). Figure courtesy of Francois Peeters, Condensed Matter Theory (CMT) group, Universiteit Antwerpen.
Figure courtesy of Christine Van Broeckhoven, research group Molecular Genetics, Universiteit Antwerpen.
Three figures courtesy of the Centre for Plasma-Astrophysics, K.U.Leuven.
Figure courtesy of the research group Physics of Elementary Particles - IIHE, Vrije Universiteit Brussel.
    " - diff --git a/HtmlDump/file_0489.html b/HtmlDump/file_0489.html deleted file mode 100644 index 2c86a62df..000000000 --- a/HtmlDump/file_0489.html +++ /dev/null @@ -1,35 +0,0 @@ - - - - - - -
    -

    - De eerste jaarlijkse bijeenkomst was een succes, met dank aan al de sprekers en deelnemers. We kijken al uit om de gebruikersdag volgend jaar te herhalen en om een aantal van de opgeworpen ideeën te implementeren.

    -

    - Hieronder vind je de presentaties van de VSC 2014 gebruikersdag:

    -
    -

- The first annual event was a success, thanks to all the presenters and participants. We are already looking forward to implementing some of the ideas generated and gathering again next year.

    -

    - Below you can download the presentations of the VSC 2014 userday:

    -

    - State of the VSC, Flemish Supercomputer (Dane Skow, HPC manager Hercules Foundation)

    - Computational Neuroscience (Michele Giugliano, University of Antwerp)

    - The value of HPC for Molecular Modeling applications (Veronique Van Speybroeck, Ghent University)

    - Parallel, grid-adaptive computations for solar atmosphere dynamics (Rony Keppens, University of Leuven)

    - HPC for industrial wind energy applications (Rory Donnelly, 3E)

    - The PRACE architecture and future prospects into Horizon 2020 (Sergi Girona, PRACE)

    - Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure (Morris Riedel, EUDAT)

    - - - - - - -
    - Zoals je hieronder kan zijn was een mooi aantal deelnemers aanwezig. Wie wenst kan meer foto's vinden onder de link. - A nice number of participants attended the userday as you can see below. Click to see more pictures.

    - \"More

    " - diff --git a/HtmlDump/file_0491.html b/HtmlDump/file_0491.html deleted file mode 100644 index 619c1482e..000000000 --- a/HtmlDump/file_0491.html +++ /dev/null @@ -1,119 +0,0 @@ -

    The International Auditorium
    - Kon. Albert II laan 5, 1210 Brussels

The VSC User Day is the first annual meeting of current and prospective users of the Vlaams Supercomputer Centrum (VSC), along with staff and supporters of the VSC infrastructure. We will hold a series of presentations describing the status and results of the past year, as well as afternoon sessions about plans and priorities for 2014 and beyond. This is an excellent opportunity to become more familiar with the VSC and its personnel, become involved in constructing plans and priorities for new projects and initiatives, and network with fellow HPC-interested parties.
The day ends with a networking hour at 17:00, allowing time for informal discussions and follow-up on the day's activities.
    -

    - Program

9:30h   Welcome coffee
10:00h  Opening VSC USER DAY - Marc Luwel, Director Hercules Foundation
10:10h  State of the VSC, Flemish Supercomputer - Dane Skow, HPC manager Hercules Foundation
10:40h  Computational Neuroscience - Michele Giugliano, University of Antwerp
11:00h  The value of HPC for Molecular Modeling applications - Veronique Van Speybroeck, Ghent University
11:20h  Coffee break and posters
11:50h  Parallel, grid-adaptive computations for solar atmosphere dynamics - Rony Keppens, University of Leuven
12:10h  HPC for industrial wind energy applications - Rory Donnelly, 3E
12:30h  Lunch
13:30h  The PRACE architecture and future prospects into Horizon 2020 - Sergi Girona, PRACE
14:00h  EUDAT - Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure - Morris Riedel, EUDAT
14:20h  Breakout sessions:
        1: Long term strategy / Outreach, information and Documentation
        2: Industry and Research / Visualization
        3: Training and support / Integration of Data and Computation
15:20h  Coffee break and posters
16:00h  Summary presentations from the rapporteurs of the breakout sessions
16:30h  Closing remarks and Q&A - Bart De Moor, chair Hercules Foundation
17:00h  Network reception
    " - diff --git a/HtmlDump/file_0493.html b/HtmlDump/file_0493.html deleted file mode 100644 index 99028f9be..000000000 --- a/HtmlDump/file_0493.html +++ /dev/null @@ -1,4 +0,0 @@ -

    De eerste jaarlijkse bijeenkomst was een succes, met dank aan al de sprekers en deelnemers. We kijken al uit om de gebruikersdag volgend jaar te herhalen en om een aantal van de opgeworpen ideeën te implementeren.

    -

    Hieronder vind je de presentaties van de VSC 2014 gebruikersdag: -

    " - diff --git a/HtmlDump/file_0495.html b/HtmlDump/file_0495.html deleted file mode 100644 index fba155aa7..000000000 --- a/HtmlDump/file_0495.html +++ /dev/null @@ -1 +0,0 @@ -

The first annual event was a success, thanks to all the presenters and participants. We are already looking forward to implementing some of the ideas generated and gathering again next year.

    Below you can download the presentations of the VSC 2014 userday:

    diff --git a/HtmlDump/file_0497.html b/HtmlDump/file_0497.html deleted file mode 100644 index 4e535b275..000000000 --- a/HtmlDump/file_0497.html +++ /dev/null @@ -1 +0,0 @@ -

    State of the VSC, Flemish Supercomputer (Dane Skow, HPC manager Hercules Foundation)
    Computational Neuroscience (Michele Giugliano, University of Antwerp)
    The value of HPC for Molecular Modeling applications (Veronique Van Speybroeck, Ghent University)
    Parallel, grid-adaptive computations for solar atmosphere dynamics (Rony Keppens, University of Leuven)
    HPC for industrial wind energy applications (Rory Donnelly, 3E)
    The PRACE architecture and future prospects into Horizon 2020 (Sergi Girona, PRACE)
Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure (Morris Riedel, EUDAT)

    Full program of the day

    diff --git a/HtmlDump/file_0499.html b/HtmlDump/file_0499.html deleted file mode 100644 index 2352bde4b..000000000 --- a/HtmlDump/file_0499.html +++ /dev/null @@ -1 +0,0 @@ -

    Zoals je hieronder kan zijn was een mooi aantal deelnemers aanwezig. Wie wenst kan meer foto's vinden onder de link.

    diff --git a/HtmlDump/file_0501.html b/HtmlDump/file_0501.html deleted file mode 100644 index 860db3aff..000000000 --- a/HtmlDump/file_0501.html +++ /dev/null @@ -1 +0,0 @@ -

    A nice number of participants attended the userday as you can see below. Click to see more pictures.

    diff --git a/HtmlDump/file_0503.html b/HtmlDump/file_0503.html deleted file mode 100644 index be2a25675..000000000 --- a/HtmlDump/file_0503.html +++ /dev/null @@ -1,3 +0,0 @@ -

    \"More -

    " - diff --git a/HtmlDump/file_0505.html b/HtmlDump/file_0505.html deleted file mode 100644 index 48268b17d..000000000 --- a/HtmlDump/file_0505.html +++ /dev/null @@ -1,27 +0,0 @@ -

    Next- generation Supercomputing in Flanders: value creation for your business!

Tuesday 27 January 2015

    Technopolis Mechelen
    -

The first industry day was a success, thanks to all the presenters and participants. We would especially like to thank the minister for his presence. The success stories of European HPC centres showed how beneficial HPC can be for all kinds of industry. The testimonials of the Flemish firms that are already using large scale computing only stressed the importance of HPC. We will continue to work on the ideas generated at this meeting so that the VSC can strengthen its service to industry.

    \"All -

    Below you can download the presentations of the VSC 2015 industry day. Pictures are published. -

    The importance of High Performance Computing for future science, technology and economic growth
    - Prof. Dr Bart De Moor, Herculesstichting -

    The 4 Forces of Change for Supercomputing
    - Cliff Brereton, director Hartree Centre (UK) -

    The virtual Engineering Centre and its multisector virtual prototyping activities
    - Dr Gillian Murray, Director UK virtual engineering centre (UK) -

How SMEs can benefit from High-Performance Computing
    - Dr Andreas Wierse, SICOS BW GmbH (D) -

    European HPC landscape- its initiatives towards supporting innovation and its regional perspectives
    - Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B)
    - Belgian delegate to the Prace Council
    -

    Big data and Big Compute for Drug Discovery & Development of the future
    - Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D (B) -

    HPC key enabler for R&D innovation @ Bayer CropScience
    - Filip Nollet, Computation Life Science Platform
    - Architect Bayer Cropscience (B)
    -

    How becoming involved in VSC: mechanisms for HPC industrial newcomers
    - Dr Marc Luwel, Herculesstichting
    - Dr Ewald Pauwels, Ugent - Tier1 -

    Closing
    - Philippe Muyters, Flemish Minister of Economics and Innovation -

    Full program

    " - diff --git a/HtmlDump/file_0507.html b/HtmlDump/file_0507.html deleted file mode 100644 index 4c1f30c9f..000000000 --- a/HtmlDump/file_0507.html +++ /dev/null @@ -1,128 +0,0 @@ -

The VSC Industry Day is being organised for the first time to create awareness about the potential of HPC for industry and to help firms overcome the hurdles to using supercomputing. We are proud to present an exciting program with success stories from European HPC centres that successfully collaborate with industry, and testimonials from some Flemish firms that have already discovered the opportunities of large scale computing. The day ends with a networking hour allowing time for informal discussions.

Program - Next-generation supercomputing in Flanders: value creation for your business!

13.00-13.30  Registration

13.30-13.35  Welcome and introduction
Prof. Dr Colin Whitehouse (chair)

13.35-13.45  The importance of High Performance Computing for future science, technology and economic growth
Prof. Dr Bart De Moor, Herculesstichting

13.45-14.05  The 4 Forces of Change for Supercomputing
Cliff Brereton, director Hartree Centre (UK)

14.05-14.25  The virtual Engineering Centre and its multisector virtual prototyping activities
Dr Gillian Murray, Director UK virtual engineering centre (UK)

14.25-14.45  How SMEs can benefit from High-Performance Computing
Dr Andreas Wierse, SICOS BW GmbH (D)

14.45-15.15  Coffee break

15.15-15.35  European HPC landscape - its initiatives towards supporting innovation and its regional perspectives
Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B), Belgian delegate to the PRACE Council

15.35-15.55  Big data and Big Compute for Drug Discovery & Development of the future
Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D (B)

15.55-16.15  HPC key enabler for R&D innovation @ Bayer CropScience
Filip Nollet, Computation Life Science Platform Architect, Bayer Cropscience (B)

16.15-16.35  How becoming involved in VSC: mechanisms for HPC industrial newcomers
Dr Marc Luwel, Herculesstichting

16.35-17.05  Q&A discussion
Panel/chair

17.05-17.15  Closing
Philippe Muyters, Flemish Minister of Economics and Innovation

    -
17.15-18.15  Networking reception
    " - diff --git a/HtmlDump/file_0509.html b/HtmlDump/file_0509.html deleted file mode 100644 index 7eccc413d..000000000 --- a/HtmlDump/file_0509.html +++ /dev/null @@ -1 +0,0 @@ -

Below you will find the complete list of Tier-1 projects since the start of the regular project application programme.

    diff --git a/HtmlDump/file_0511.html b/HtmlDump/file_0511.html deleted file mode 100644 index ec5e412ad..000000000 --- a/HtmlDump/file_0511.html +++ /dev/null @@ -1,8 +0,0 @@ -

    User support

    -

    KU Leuven/UHasselt: HPCinfo@kuleuven.be
    Ghent University: hpc@ugent.be
    - Antwerp University: hpc@uantwerpen.be
    - VUB: hpc@vub.ac.be -

    -

Please take a look at the information that you should provide with your support question.

    " - diff --git a/HtmlDump/file_0513.html b/HtmlDump/file_0513.html deleted file mode 100644 index f51d00594..000000000 --- a/HtmlDump/file_0513.html +++ /dev/null @@ -1 +0,0 @@ -

    Tier-1

    Experimental setup

    Tier-2

    Four university-level cluster groups are also embedded in the VSC and partly funded from VSC budgets:

    diff --git a/HtmlDump/file_0517.html b/HtmlDump/file_0517.html deleted file mode 100644 index 3b4e39733..000000000 --- a/HtmlDump/file_0517.html +++ /dev/null @@ -1 +0,0 @@ -

The only short answer to this question is: maybe yes, maybe no. There are a number of things you need to figure out first.

    Will my application run on a supercomputer?

Maybe yes, maybe no. All VSC clusters - and the majority of large supercomputers in the world - run the Linux operating system, so they don't run Windows or OS X applications. Your application will have to support Linux, and the specific variants that we use on our clusters, but these are popular versions and rarely pose problems.

Next, supercomputers are not really built to run interactive applications well. They are built to be shared by many people using command line applications. There are several issues:

    Will my application run faster on a supercomputer?

You'll be disappointed to hear that the answer is actually quite often "no". It is not uncommon that an application runs faster on a good workstation than on a supercomputer. Supercomputers are optimised for large applications that access large chunks of memory (RAM or disk) in a particular way and are very parallel, i.e., they can keep a lot of processor cores busy. Their CPUs are optimised to do as much work in parallel as fast as possible, at the cost of lower performance for programs that don't exploit parallelism, while high-end workstation processors are more optimised for programs that run sequentially or don't use a lot of parallelism, and they often have disk systems that can better deal with many small files.

    That being said, even that doesn't have to be disastrous. Parallelism can come in different forms. Sometimes you may have to run the same program for a large number of test cases, and if the memory consumption for a program for a simple test case is reasonable, you may be able to run a lot of instances of that program simultaneously on the same multi-core processor chip. This is called capacity computing. And some applications are very well written and can exploit all the forms of parallelism that a modern supercomputer offers, provided you solve a large enough problem with that program. This is called capability computing. We support both at the VSC.
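As a simple sketch of capacity computing, a single job can keep a whole node busy by running several independent instances of the same sequential program; the program name, core count and file names below are placeholders:

#!/bin/bash -l
#PBS -l nodes=1:ppn=8
cd $PBS_O_WORKDIR

# Start 8 independent instances of the same sequential program,
# each with its own input file, and wait until all of them finish
for i in $(seq 1 8); do
    ./my_program input_$i.dat > output_$i.log &
done
wait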

    OK, my application can exploit a supercomputer. What's next?

Have a look at our web page on requesting access in the general section. It explains who can get access to the supercomputers. And as that text explains, you may need to install some additional software on the system from which you want to access the clusters (which for the majority of our users is their laptop or desktop computer).

Basically, you communicate with the cluster through a protocol called "SSH", which stands for "Secure Shell". It encrypts all the information that is passed to the clusters, and also provides an authentication mechanism that is a bit safer than just sending passwords. The protocol can be used both to get a console on the system (a "command line interface" like the one offered by CMD.EXE on Windows or the Terminal app on OS X) and to transfer files to the system. The absolute minimum you need before you can actually request your account is an SSH client to generate the key that will be used to talk to the clusters. For Windows, you can use PuTTY (freely available, see the link on our PuTTY page), on macOS/OS X you can use the built-in OpenSSH client, and Linux systems typically also come with OpenSSH. But to actually use the clusters, you may want to install some additional software, such as a GUI sftp client to transfer files. We've got links to a lot of useful client software on our web page on access and data transfer.
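For example, from a Linux or macOS machine the basic workflow looks like the sketch below; the user name and host name are placeholders, not actual VSC login addresses:

# Open a command line (shell) on the cluster
ssh vscXXXXX@login.examplecluster.vscentrum.be

# Copy a file to the cluster over the same SSH protocol
scp results.tar.gz vscXXXXX@login.examplecluster.vscentrum.be: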

    Yes, I'm ready

    Then follow the links on our user portal page on requesting an account. And don't forget we've got training programs to get you started and technical support for when you run into trouble.

    diff --git a/HtmlDump/file_0519.html b/HtmlDump/file_0519.html deleted file mode 100644 index 5a85c1dc1..000000000 --- a/HtmlDump/file_0519.html +++ /dev/null @@ -1 +0,0 @@ -

    Even if you don't do software development yourself (and software development includes, e.g., developing R- or Matlab routines), working on a supercomputer differs from using a PC, so some training is useful for everybody.

    Linux

    If you are familiar with a Linux or UNIX environment, there is no need to take any course. Working with Linux on a supercomputer is not that different from working with Linux on a PC, so you'll likely find your way around quickly.

    Otherwise, there are several options to learn more about Linux

    A basic HPC introduction

    Such a course at the VSC has a double goal: learning more about HPC in general, but also about the specific properties of the VSC systems that you need to know to run your programs efficiently.

    What next?

    We also run courses on many other aspects of supercomputing, such as program development or the use of specific applications. Like the other courses, they are announced on our "Education and Training" page. Or you can read some good books, look at training programs offered at the European level through PRACE, or check some web courses. We maintain links to several of those on the "Tutorials and books" pages.

    Be aware that some tools that are useful to prototype applications on a PC may be very inefficient when run at a large scale on a supercomputer. Matlab programs can often be accelerated through compiling with the Matlab compiler. R isn't the most efficient tool either. And Python is an excellent "glue language" to get a number of applications or optimised (non-Python) libraries to work together, but shouldn't be used for entire applications that consume a lot of CPU time either. We've got courses on several of those languages where you also learn how to use them efficiently, and you'll also notice that on some clusters there are restrictions on the use of these tools.

    diff --git a/HtmlDump/file_0521.html b/HtmlDump/file_0521.html deleted file mode 100644 index 87871e7dc..000000000 --- a/HtmlDump/file_0521.html +++ /dev/null @@ -1,28 +0,0 @@ -" - diff --git a/HtmlDump/file_0523.html b/HtmlDump/file_0523.html deleted file mode 100644 index e8ff285ca..000000000 --- a/HtmlDump/file_0523.html +++ /dev/null @@ -1 +0,0 @@ -

    © FWO

    Use of this website means that you acknowledge and accept the terms and conditions below.

    Content disclaimer

    The FWO takes great care of its website and strives to ensure that all the information provided is as complete, correct, understandable, accurate and up-to-date as possible. In spite of all these efforts, the FWO cannot guarantee that the information provided on this website is always complete, correct, accurate or up-to-date. Where necessary, the FWO reserves the right to change and update information at its own discretion. The publication of official texts (legislation, Flemish Parliament Acts, regulations, etc.) on this website has no official character.

    If the information provided on or by this website is inaccurate then the FWO will do everything possible to correct this as quickly as possible. Should you notice any errors, please contact the website administrator: kurt.lust@uantwerpen.be. The FWO makes every effort to ensure that the website does not become unavailable as a result of technical errors. However, the FWO cannot guarantee the website's availability or the absence of other technical problems.

    The FWO cannot be held liable for any direct or indirect damage arising from the use of the website or from reliance on the information provided on or through the website. This also applies without restriction to all losses, delays or damage to your equipment, software or other data on your computer system.

    Protection of personal data

    The FWO is committed to protecting your privacy. Most information is available on or through the website without your having to provide any personal data. In some cases, however, you may be asked to provide certain personal details. In such cases, your data will be processed in accordance with the Law of 8 December 1992 on the protection of privacy with regard to the processing of personal data and with the Royal Decree of 13 February 2001, which implements the Law of 8 December 1992 on the protection of privacy with regard to the processing of personal data.

    The FWO provides the following guarantees in this context:

    Providing personal information through the online registration module

    By providing your personal information, you consent to this personal information being recorded and processed by the FWO and its representatives. The information you provided will be treated as confidential.

    The FWO may also use your details to invite you to events or keep you informed about activities of the VSC.

    Cookies

    What are cookies and why do we use them?

    Cookies are small text or data files that a browser saves on your computer when you visit a website.

    This web site saves cookies on your computer in order to improve the website’s usability and also to analyse how we can improve our web services.

    Which cookies does this website use?

    Can you block or delete cookies?

    You can prevent certain cookies being installed on your computer by adjusting the settings in your browser’s options. In the ‘privacy’ section, you can specify any cookies you wish to block.

    Cookies can also be deleted in your browser’s options via ‘delete browsing history’.

    We use cookies to collect statistics which help us simplify and improve your visit to our website. As a result, we advise you to allow your browser to use cookies.

    Hyperlinks and references

    The website contains hyperlinks which redirect you to the websites of other institutions and organisations and to information sources managed by third parties. The FWO has no technical control over these websites, nor does it control their content, which is why it cannot offer any guarantees as to the completeness or correctness of the content or availability of these websites and information sources.

    The provision of hyperlinks to other websites does not imply that the FWO endorses these external websites or their content. The links are provided for information purposes and for your convenience. The FWO accepts no liability for any direct or indirect damage arising from the consultation or use of such external websites or their content.

    Copyright

    All texts and illustrations included on this website, as well as its layout and functionality, are protected by copyright. The texts and illustrations may be printed out for private use; distribution is permitted only after receiving the authorisation of the FWO. You may quote from the website providing you always refer to the original source. Reproductions are permitted, providing you always refer to the original source, except for commercial purposes, in which case reproductions are never permitted, even when they include a reference to the source.

    Permission to reproduce copyrighted material applies only to the elements of this site for which the FWO is the copyright owner. Permission to reproduce material for which third parties hold the copyright must be obtained from the relevant copyright holder.

    diff --git a/HtmlDump/file_0529.html b/HtmlDump/file_0529.html deleted file mode 100644 index 1811889fb..000000000 --- a/HtmlDump/file_0529.html +++ /dev/null @@ -1,2 +0,0 @@ -

    relates to

    - diff --git a/HtmlDump/file_0531.html b/HtmlDump/file_0531.html deleted file mode 100644 index dc5eed6ac..000000000 --- a/HtmlDump/file_0531.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Quick access

    - diff --git a/HtmlDump/file_0533.html b/HtmlDump/file_0533.html deleted file mode 100644 index e0e35e01e..000000000 --- a/HtmlDump/file_0533.html +++ /dev/null @@ -1,2 +0,0 @@ -

    New user

    -

    first link

    diff --git a/HtmlDump/file_0535.html b/HtmlDump/file_0535.html deleted file mode 100644 index 008715fab..000000000 --- a/HtmlDump/file_0535.html +++ /dev/null @@ -1,119 +0,0 @@ -

    The UGent compute infrastructure consists of several specialised clusters, jointly called Stevin. These clusters share a lot of their file space so that users can easily move between clusters depending on the specific job they have to run.

    Login nodes

    The HPC-UGent Tier-2 login nodes can be accessed through the generic name login.hpc.ugent.be.

    Connecting to a specific login node

    There are multiple login nodes (gligar01-gligar03) and you will be connected to one of them when using the generic alias login.hpc.ugent.be. (You can check which one you are connected to using the hostname command.)

    If you need to connect to a specific login node, use either gligar01.ugent.be, gligar02.ugent.be, or gligar03.ugent.be.
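
    For example (vsc40000 is a placeholder for your own VSC account name), to log in to one specific login node and verify where you ended up:

    $ ssh vsc40000@gligar01.ugent.be
    $ hostname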

    Compute clusters

    delcatty: 128 nodes, 2 x 8-core Intel E5-2670 (Sandy Bridge @ 2.6 GHz), 64 GB memory/node, 400 GB disk space/node, FDR InfiniBand

    phanpy: 16 nodes, 2 x 12-core Intel E5-2680v3 (Haswell-EP @ 2.5 GHz), 512 GB memory/node, 3x 400 GB (SSD, striped) disk space/node, FDR InfiniBand

    golett: 196 nodes, 2 x 12-core Intel E5-2680v3 (Haswell-EP @ 2.5 GHz), 64 GB memory/node, 500 GB disk space/node, FDR-10 InfiniBand

    swalot: 128 nodes, 2 x 10-core Intel E5-2660v3 (Haswell-EP @ 2.6 GHz), 128 GB memory/node, 1 TB disk space/node, FDR InfiniBand

    skitty: 72 nodes, 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz), 192 GB memory/node, 1 TB + 240 GB SSD disk space/node, EDR InfiniBand

    victini: 96 nodes, 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz), 96 GB memory/node, 1 TB + 240 GB SSD disk space/node, 10 GbE

    Only clusters with an InfiniBand interconnect network are suited for multi-node jobs. Other clusters are for single-node usage only.
    -

    Shared storage

    General Parallel File System (GPFS) partitions: -

    " - diff --git a/HtmlDump/file_0537.html b/HtmlDump/file_0537.html deleted file mode 100644 index 032c32d03..000000000 --- a/HtmlDump/file_0537.html +++ /dev/null @@ -1 +0,0 @@ -

    When using the VSC-infrastructure for your research, you must acknowledge the VSC in all relevant publications. This will help the VSC secure funding, and hence you will benefit from it in the long run as well. It is also a contractual obligation for the VSC.

    Please use the following phrase to do so in Dutch “De rekeninfrastructuur en dienstverlening gebruikt in dit werk, werd voorzien door het VSC (Vlaams Supercomputer Centrum), gefinancierd door het FWO en de Vlaamse regering – departement EWI”, or in English: “The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI”.

    Moreover, if you are in the KU Leuven association, you are also requested to add the relevant papers to the virtual collection "High Performance Computing" in Lirias so that we can easily generate the publication lists with relevant publications.

    diff --git a/HtmlDump/file_0539.html b/HtmlDump/file_0539.html deleted file mode 100644 index af36bdb2a..000000000 --- a/HtmlDump/file_0539.html +++ /dev/null @@ -1 +0,0 @@ -

    Need technical support? Contact your local help desk.

    diff --git a/HtmlDump/file_0543.html b/HtmlDump/file_0543.html deleted file mode 100644 index 6d6cc2e23..000000000 --- a/HtmlDump/file_0543.html +++ /dev/null @@ -1,15 +0,0 @@ -

    In order to smoothly go through the account creation process for students, several actions from the lecturer are required.

    1. Submit the request to HPCinfo(at)icts.kuleuven.be providing a short description of the course and an explanation of why HPC facilities are necessary for teaching the course. Please also attach the list of students attending the course (2 weeks before the beginning of the course).
    2. Inform the students that they have a 1 week time window to apply for their account (the last day on which account creation can be processed is the day before the course starts). Students should follow the regular account creation routine, which starts with generating a private-public key pair and ends with submitting the public key via our account management web site. After 1 week, the list of students that have already submitted the request for an account and the corresponding VSC account numbers will be sent to the lecturer.
    3. The students should be informed to bring their private key with them to be able to connect and attend the course.
    4. Since introductory credits are supposed to be used for private projects (e.g., master thesis computations), we encourage creating a project that will be used for computations related to the course. This also gives the lecturer the opportunity to trace the use of the cluster during the course. For more information about the procedure for creating a project, please refer to the page on credit system basics. Once the project is accepted, the students that have already applied for an account will be automatically added to the project (1 week before the beginning of the course).
    5. Students that failed to submit their request in the given time window will have to follow the regular procedure for applying for an account, involving communication with the HPC support staff and delaying the account creation process (these students will have to motivate their reason for applying for an account and send a request for using the project credits). Students that submit their request later than 2 days before the beginning of the course are not guaranteed to get an account in time.
    6. Both the accounts and the generated key pairs are strictly PRIVATE and students are not supposed to share accounts, not even for the purpose of the course.
    7. Please remember to instruct your students to bring their private key to the class. Students may forget it, and without the key they will not be able to log in to the cluster even if they have an account.
    8. If a reservation of a few nodes is necessary during the exercise classes, please let us know 1 week before the exercise class so that it can be scheduled. To submit a job during the class, the following command should be used:
       $ qsub -A project-name -W group_list=project-name script-file
       where project-name refers to the project created by the lecturer for the purpose of the course.
    9. Make sure that the software to connect to the cluster (PuTTY, Xming, FileZilla, NX) is available in the PC class that will be used during the course. For KU Leuven courses: please follow the procedure at https://icts.kuleuven.be/sc/pcklas/ictspcklassen (1 month before the beginning of the course).
    " - diff --git a/HtmlDump/file_0545.html b/HtmlDump/file_0545.html deleted file mode 100644 index e52d9b36c..000000000 --- a/HtmlDump/file_0545.html +++ /dev/null @@ -1,34 +0,0 @@ -

    Purpose

    Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software. However, this information is valuable, since it helps to determine the characteristics of the compute nodes a job using this application should run on.

    Although the tool presented here can also be used to support the software development process, better tools are almost certainly available. -

    Note that currently only single node jobs are supported, MPI support may be added in a future release. -

    Prerequisites

    The user should be familiar with the Linux bash shell.

    Monitoring a program

    To start using monitor, first load the appropriate module: -

    $ module load monitor
    -

    Starting a program, e.g., simulation, to monitor is very straightforward -

    $ monitor simulation
    -

    monitor will write the CPU usage and memory consumption of simulation to standard error. Values will be displayed every 5 seconds. This is the rate at which monitor samples the program's metrics. -

    Log file

    Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows: -

    $ monitor -l simulation.log simulation
    -

    For long running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

    $ monitor -l simulation.log -n 12 simulation
    -

    Note that this option is only available when monitor writes its metrics to a log file, not when standard error is used. -

    Modifying the sample resolution

    The interval at which monitor will show the metrics can be modified by specifying delta, the sample rate: -

    $ monitor -d 60 simulation
    -

    monitor will now print the program's metrics every 60 seconds. Note that the minimum delta value is 1 second. -

    File sizes

    Some programs use temporary files, the size of which may also be a useful metric. monitor provides an option to display the size of one or more files: -

    $ monitor -f tmp/simulation.tmp,cache simulation
    -

    Here, the size of the file simulation.tmp in directory tmp, as well as the size of the file cache will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by ','. -

    Programs with command line options

    Many programs, e.g., matlab, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way: -

    $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m
    -

    The use of '--' will ensure that monitor does not get confused by matlab's '-nojvm' and '-nodisplay' options. -

    Subprocesses and multicore programs

    Some processes spawn one or more subprocesses. In that case, the metrics shown by monitor are aggregated over the process and all of its subprocesses (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100 %. -

    Some (well, since this is an HPC cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100 %.

    When programs of this type are running on a computer with n cores, the CPU usage can go up to n x 100 %. -

    Exit codes

    monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well. -

    When monitor has to terminate in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value. -
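
    For example (the value 201 is arbitrary), one can move monitor's error code out of the way and afterwards inspect the exit status of the monitored program:

    $ export MONITOR_EXIT_ERROR=201
    $ monitor -l simulation.log simulation
    $ echo $?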

    Monitoring a running process

    It is also possible to "attach" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

    $ monitor -p 18749
    -

    Note that this feature can be (ab)used to monitor specific subprocesses. -

    More information

    Help is available for monitor by issuing: -

    $ monitor -h
    -
    " - diff --git a/HtmlDump/file_0547.html b/HtmlDump/file_0547.html deleted file mode 100644 index f4000463c..000000000 --- a/HtmlDump/file_0547.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Remark

    -

    Logging in on the site does not yet function (expected around July 10), so you cannot yet see the overview of systems below.

    diff --git a/HtmlDump/file_0549.html b/HtmlDump/file_0549.html deleted file mode 100644 index c26b43ec2..000000000 --- a/HtmlDump/file_0549.html +++ /dev/null @@ -1,32 +0,0 @@ -

    " - diff --git a/HtmlDump/file_0551.html b/HtmlDump/file_0551.html deleted file mode 100644 index a958e80b1..000000000 --- a/HtmlDump/file_0551.html +++ /dev/null @@ -1,439 +0,0 @@ -

    What are toolchains?

    A toolchain is a collection of tools to build (HPC) software consistently. It consists of -

    - -

    Toolchains are versioned, and refreshed twice a year. All software available on the cluster is rebuilt when a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of their definition, followed by either a or b, e.g., 2014a. Note that the software components are not necessarily the most recent releases; rather, they are selected for stability and reliability.
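
    For example, to check which versions of the two standard toolchains are installed on a particular cluster and to load one of them (the version shown is just an example):

    $ module avail intel
    $ module avail foss
    $ module load intel/2014a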

    -

    Two toolchain flavors are standard across the VSC on all machines that can support them: intel (based on Intel software components) and -foss (based on free and open source software). -

    -

    It may be of interest to note that the Intel C/C++ compilers are more strict with respect to the standards than the GCC C/C++ compilers, while for Fortran the GCC compiler tracks the standard more closely and Intel's Fortran allows for many extensions added during Fortran's long history. When developing code, one should always build with both compiler suites and eliminate all warnings.
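
    As a small sketch of that advice (the source file mycode.c and the warning flags are only examples; consult the compiler documentation for the full set of diagnostic options), the same file can be compiled with warnings enabled in both suites:

    # With the intel toolchain loaded:
    $ icc -Wall -O2 -c mycode.c
    # With the foss toolchain loaded:
    $ gcc -Wall -Wextra -O2 -c mycode.c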

    -

    On average, the Intel compiler suite produces executables that are 5 to 10 % faster than those generated with the GCC compiler suite. For individual applications, however, the difference can be much larger in either direction: some applications are significantly faster when built with the Intel compilers, while others run much faster when built with the GNU compilers.

    -

    Additional toolchains may be defined on specialised hardware to extract the maximum performance from that hardware. -

    - -

    Intel toolchain

    -

    The intel toolchain consists almost entirely of software components -developed by Intel. When building third-party software, or developing your own, -load the module for the toolchain: -

    -
    $ module load intel/<version>
    -
    -

    where <version> should be replaced by the one to be used, e.g., 2014a. See the documentation on the software module system for more details.

    -

    Starting with the 2014b toolchain, the GNU compilers are also included in -this toolchain as the Intel compilers use some of the libraries and as it is possible -(though some care is needed) to link code generated with the Intel compilers with code -compiled with the GNU compilers. -

    -

    Compilers: Intel and Gnu

    -

    Three compilers are available: -

    - -

    Recent versions of -

    -

    For example, to compile/link a Fortran program fluid.f90 to an executable -fluid with architecture specific optimization, use: -

    -
    $ ifort  -O2  -xhost  -o fluid  fluid.f90
    -
    -

    Documentation on Intel compiler flags and options is -provided -by Intel. Do not forget to load the toolchain module first! -

    -

    Intel OpenMP

    -

    The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -openmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with architecture-specific optimization, use:

    -
    $ icc  -openmp  -O2  -xhost  -o scatter  scatter.c
    -
    -

    Remember to specify as many processes per node as the number of threads the executable -is supposed to run. This can be done using the ppn resource, e.g., --l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP -threads. The number of threads should not exceed the number of cores on a compute node. -

    -

    Communication library: Intel MPI

    -

    For the intel toolchain, impi, i.e., Intel MPI is used as the -communications library. To compile/link MPI programs, wrappers are supplied, so that -the correct headers and libraries are used automatically. These wrappers are: -

    - -

    Note that the names differ from those of other MPI implementations. -The compiler wrappers take the same options as the corresponding compilers. -

    -

    Using the Intel MPI compilers

    -

    For example, to compile/link a C program thermo.c to an executable -thermodynamics with architecture specific optimization, use: -

    -
    $ mpiicc -O2  -xhost  -o thermodynamics  thermo.c
    -
    -

    Extensive documentation is -provided -by Intel. Do not forget to load the toolchain module first. -

    -

    Running an MPI program with Intel MPI

    -

    Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    -
    #!/bin/bash -l
    -module load intel/<version>
    -cd $PBS_O_WORKDIR
    -n_proc=$( cat $PBS_NODEFILE | wc -l )
    -mpirun -np $n_proc ./thermodynamics
    -
    -

    The number of processes is computed from the length of the node list in the -$PBS_NODEFILE file, which in turn is specified as a resource specification -when submitting the job to the queue system. -

    -

    Intel mathematical libraries

    -

    The Intel Math Kernel Library (MKL) is a comprehensive collection of highly optimized -libraries that form the core of many scientific HPC codes. Among other functionality, -it offers: -

    - -

    Intel offers -extensive -documentation on this library and how to use it. -

    -

    There are two ways to link the MKL library: -

    - -

    MKL also offers a very fast streaming pseudorandom number generator, see the -documentation for details. -

    -

    Intel toolchain version numbers

                  2014a             2014b             2015a
    icc           13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    icpc          13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    ifort         13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    Intel MPI     4.1.3.045         4.1.3.049         5.0.2.044
    Intel MKL     11.1.1.106        11.1.2.144        11.2.1.133
    GCC           /                 4.8.3             4.9.2

    Further information on Intel tools

    - -

    FOSS toolchain

    -

    The foss toolchain consists entirely of free and open source software -components. When building third-party software, or developing your own, -load the module for the toolchain: -

    -
    $ module load foss/<version>
    -
    -

    where <version> should be replaced by the one to be used, e.g., -2014a. See the documentation on the software module system for more details. -

    -

    Compilers: GNU

    -

    Three GCC compilers are available: -

    - -

    For example, to compile/link a Fortran program fluid.f90 to an executable -fluid with architecture specific optimization for processors that support AVX instructions, use: -

    -
    $ gfortran -O2 -march=corei7-avx -o fluid fluid.f90
    -
    -

    Documentation on GCC compiler flags and options is available on the -project's website. Do not forget to load the -toolchain module first! -

    -

    GCC OpenMP

    -

    The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -fopenmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with optimization for processors that support the AVX instruction set, use:

    -
    $ gcc -fopenmp -O2 -march=corei7-avx -o scatter scatter.c
    -
    -

    Remember to specify as many processes per node as the number of threads the -executable is supposed to run. This can be done using the ppn resource, e.g., --l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. -The number of threads should not exceed the number of cores on a compute node. -

    -

    Note that the OpenMP runtime library used by GCC is of inferior quality when compared -to Intel's, so developers are strongly encouraged to use the -intel toolchain when developing/building OpenMP software. -

    -

    Communication library: OpenMPI

    -

    For the foss toolchain, OpenMPI is used as the communications library. -To compile/link MPI programs, wrappers are supplied, so that the correct headers and -libraries are used automatically. These wrappers are: -

    - -

    The compiler wrappers take the same options as the corresponding compilers. -

    -

    Using the MPI compilers from OpenMPI

    -

    For example, to compile/link a C program thermo.c to an executable -thermodynamics with architecture specific optimization for the AVX -instruction set, use: -

    -
    $ mpicc -O2 -march=corei7-avx -o thermodynamics thermo.c
    -
    -

    Extensive documentation is provided on the -project's website. Do not forget to load the toolchain module first. -

    -

    Running an OpenMPI program

    -

    Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    -
    #!/bin/bash -l 
    -module load foss/<version> 
    -cd $PBS_O_WORKDIR 
    -mpirun ./thermodynamics
    -
    -

    The hosts and number of processes is retrieved from the queue system, that gets this -information from the resource specification for that job. -

    -

    FOSS mathematical libraries

    -

    The foss toolchain contains the basic HPC mathematical libraries, it offers: -

    - -

    Version numbers FOSS toolchain

                  2014a    2014b    2015a
    GCC           4.8.2    4.8.3    4.9.2
    OpenMPI       1.6.5    1.8.1    1.8.3
    OpenBLAS      0.2.8    0.2.9    0.2.13
    LAPACK        3.5.0    3.5.0    3.5.0
    ScaLAPACK     2.0.2    2.0.2    2.0.2
    FFTW          3.3.3    3.3.4    3.3.4

    Further information on FOSS components

    -" - diff --git a/HtmlDump/file_0555.html b/HtmlDump/file_0555.html deleted file mode 100644 index 014febefe..000000000 --- a/HtmlDump/file_0555.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to the KU Leuven Tier-2 setup (ThinKing and Cerebro). For more information about these systems, visit the hardware description page.

    diff --git a/HtmlDump/file_0557.html b/HtmlDump/file_0557.html deleted file mode 100644 index 1dd1dd1f8..000000000 --- a/HtmlDump/file_0557.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to the UGent Tier-2 setup Stevin. For more information about the setup, visit the UGent hardware page.

    diff --git a/HtmlDump/file_0559.html b/HtmlDump/file_0559.html deleted file mode 100644 index 0b8ec790c..000000000 --- a/HtmlDump/file_0559.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to the UAntwerp Hopper cluster. Some or all of it may also apply to the older Turing cluster, but that system does not fully implement the VSC environment module structure. For more details about the specifics of those systems, visit the UAntwerp hardware page.

    diff --git a/HtmlDump/file_0561.html b/HtmlDump/file_0561.html deleted file mode 100644 index a886b34e7..000000000 --- a/HtmlDump/file_0561.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to the VUB Hydra cluster. For more specifics about the Hydra cluster, check the VUB hardware page.

    diff --git a/HtmlDump/file_0563.html b/HtmlDump/file_0563.html deleted file mode 100644 index 5bd6261aa..000000000 --- a/HtmlDump/file_0563.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to the Tier-1 cluster Muk installed at UGent. Check the Muk hardware description for more specifics about this system.

    diff --git a/HtmlDump/file_0565.html b/HtmlDump/file_0565.html deleted file mode 100644 index da9c84efe..000000000 --- a/HtmlDump/file_0565.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to client systems running a recent version of Microsoft Windows (though you may need to install some additional software as specified on the page).

    diff --git a/HtmlDump/file_0567.html b/HtmlDump/file_0567.html deleted file mode 100644 index 1f8d8522f..000000000 --- a/HtmlDump/file_0567.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to client systems with a recent version of Microsoft Windows and a UNIX-compatibility layer. We tested using the freely available Cygwin system maintained by Red Hat.

    diff --git a/HtmlDump/file_0569.html b/HtmlDump/file_0569.html deleted file mode 100644 index d77c47c15..000000000 --- a/HtmlDump/file_0569.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to Apple Mac client systems with a recent version of OS X installed, though you may need some additional software as specified on the page.

    diff --git a/HtmlDump/file_0571.html b/HtmlDump/file_0571.html deleted file mode 100644 index b17d51a3e..000000000 --- a/HtmlDump/file_0571.html +++ /dev/null @@ -1 +0,0 @@ -

    The documentation page you visited applies to client systems running a popular Linux distribution (though some of the packages you need may not be installed by default).

    diff --git a/HtmlDump/file_0577.html b/HtmlDump/file_0577.html deleted file mode 100644 index b7dd219b7..000000000 --- a/HtmlDump/file_0577.html +++ /dev/null @@ -1 +0,0 @@ -

    First approach

    diff --git a/HtmlDump/file_0579.html b/HtmlDump/file_0579.html deleted file mode 100644 index 0b1847b97..000000000 --- a/HtmlDump/file_0579.html +++ /dev/null @@ -1 +0,0 @@ -

    Second approach

    diff --git a/HtmlDump/file_0585.html b/HtmlDump/file_0585.html deleted file mode 100644 index 6547fafa5..000000000 --- a/HtmlDump/file_0585.html +++ /dev/null @@ -1 +0,0 @@ -

    The page you're trying to visit does not exist or has been moved to a different URL.

    Some common causes of this problem are:

    1. Maybe you arrived at the page through a search engine. Search engines - including the one implemented on our own pages, which uses the Google index - don't immediately know that a page has been moved or does not exist anymore and continue to show old pages in the search results.
    2. Maybe you followed a link on another site. The site owner may not yet have noticed that our web site has changed.
    3. Or maybe you followed a link in a somewhat older e-mail or document. It is entirely normal that links age and don't work anymore after some time.
    4. Or maybe you found a bug on our web site? Even though we check regularly for dead links, errors can occur. You can contact us at Kurt.Lust@uantwerpen.be.
    diff --git a/HtmlDump/file_0605.html b/HtmlDump/file_0605.html deleted file mode 100644 index 169b835c6..000000000 --- a/HtmlDump/file_0605.html +++ /dev/null @@ -1,8 +0,0 @@ -

    You're looking for:

    " - diff --git a/HtmlDump/file_0611.html b/HtmlDump/file_0611.html deleted file mode 100644 index cb0607015..000000000 --- a/HtmlDump/file_0611.html +++ /dev/null @@ -1,62 +0,0 @@ -

    Inline code with <code>...</code>

    We used inline code on the old vscentrum.be to clearly mark system commands etc. in text.

    Example: At UAntwerpen you'll have to use module avail MATLAB and - module load MATLAB/2014a respectively. -

    However, If you enter both <code>-blocks on the same line in a HTML file, the editor doesn't process them well: module avail MATLAB and <code>module load MATLAB. -

    Test: test 1 en test 2.

    Code in <pre>...</pre>

    This was used a lot on the old vscentrum.be site to display fragments of code or display output in a console windows. -

    #!/bin/bash -l
    -#PBS -l nodes=1:nehalem
    -#PBS -l mem=4gb
    -module load matlab
    -cd $PBS_O_WORKDIR
    -...
    -

    And this is a test with a very long block: -

    ln03-1003: monitor -h
    -### usage: monitor [-d <delta>] [-l <logfile>] [-f <files>]
    -# [-h] [-v] <cmd> | -p <pid>
    -# Monitor can be used to sample resource utilization of a process
    -# over time. Monitor can sample a running process if the latter's PID
    -# is specified using the -p option, or it can start a command with
    -# parameters passed as arguments. When one has to specify flags for
    -# the command to run, '--' can be used to delimit monitor's options, e.g.,
    -# monitor -delta 5 -- matlab -nojvm -nodisplay calc.m
    -# Resources that can be monitored are memory and CPU utilization, as
    -# well as file sizes.
    -# The sampling resolution is determined by delta, i.e., monitor samples
    -# every <delta> seconds.
    -# -d <delta> : sampling interval, specified in
    -# seconds, or as [[dd:]hh:]mm:ss
    -# -l <logfile> : file to store sampling information; if omitted,
    -# monitor information is printed on stderr
    -# -n <lines> : retain only the last <lines> lines in the log file,
    -# note that this option only makes sense when combined
    -# with -l, and that the log file lines will not be sorted
    -# according to time
    -# -f <files> : comma-separated list of file names that are monitored
    -# for size; if a file doesn't exist at a given time, the
    -# entry will be 'N/A'
    -# -v : give verbose feedback
    -# -h : print this help message and exit
    -# <cmd> : actual command to run, followed by whatever
    -# parameters needed
    -# -p <pid> : process ID to monitor
    -#
    -# Exit status: * 65 for any montor related error
    -# * exit status of <cmd> otherwise
    -# Note: if the exit code 65 conflicts with those of the
    -# command to run, it can be customized by setting the
    -# environment variables 'MONITOR_EXIT_ERROR' to any value
    -# between 1 and 255 (0 is not prohibited, but this is probably.
    -# not what you want).
    -

    The <code> style in the editor

    In fact, the Code style of the editor works on a paragraph basis and all it does is put the paragraph between <pre> and </pre>-tags, so the problem mentioned above remains. The next text was edited in WYSIWIG mode: -

    #!/bin/bash -l
    -#PBS -l nodes=4:ivybridge
    -...
    -

    Another editor bug is that it isn't possible to switch back to regular text mode at the end of a code fragment if that is at the end of the text widget: The whole block is converted back to regular text instead and the formatting is no longer shown. -

    " - diff --git a/HtmlDump/file_0613.html b/HtmlDump/file_0613.html deleted file mode 100644 index fc6f8b951..000000000 --- a/HtmlDump/file_0613.html +++ /dev/null @@ -1,78 +0,0 @@ -

    After the successful first VSC users day in January 2014, the second users day took place at the University of Antwerp on Monday November 30 2015. The users committee organized the day. The plenary sessions were given by an external and an internal speaker. Moreover, 4 workshops were organized:

    Some impressions...


    More pictures can be found in the image bank. -

    Program

    09:50 - Welcome – Bart De Moor (chair Hercules Foundation) -
    10:00 - Invited lecture: High performance and multiscale computing: blood, clay, stars and humans – Derek Groen (Centre for Computational Science, University College London) [slides - PDF 8.3MB]
    11:00 - Coffee -
    11:30 - Workshops / hands-on sessions (parallel sessions) -
    12:45 - Lunch -
    14:00 - Lecture internal speaker: High-performance computing of wind farms in the atmospheric boundary layer – Johan Meyers (Department of Mechanical Engineering, KU Leuven) [slides - PDF 9.9MB]
    14:30 - ‘1 minute’ poster presentations -
    14:45 - Workshops / hands-on sessions (parallel sessions) -
    16:15 - Coffee & Poster session -
    17:00 - Closing – Dirk Roose (representative of users committee) -
    17:10 - Drink -

    Titles and abstracts

    An overview of the posters that will be presented during the poster session is available here. -

    " - diff --git a/HtmlDump/file_0619.html b/HtmlDump/file_0619.html deleted file mode 100644 index 0079623d4..000000000 --- a/HtmlDump/file_0619.html +++ /dev/null @@ -1,83 +0,0 @@ -

    TurboVNC is a good way to provide access to remote visualization applications; it works together with VirtualGL, a popular package for remote visualization.

    Installing TurboVNC client (viewer)

    TurboVNC client Configuration & Start Guide

    Note: These instructions are for the KU Leuven visualization nodes only. The UAntwerp visualization node also uses TurboVNC, but the setup is different as the visualization node is currently not in the job queueing system and as TurboVNC is also supported on the regular login nodes (but without OpenGL support). Specific instructions for the use of TurboVNC on the UAntwerp clusters can be found on the page "Remote visualization @ UAntwerp".

    1. Request an interactive job on the visualization partition:
       $ qsub -I -X -l partition=visualization -l pmem=6gb -l nodes=1:ppn=20
    2. Once you are on one of the visualization nodes (r10n3 or r10n4), load the TurboVNC module:
       $ module load TurboVNC/1.2.3-foss-2014a
    3. Create a password to authenticate your session:
       $ vncpasswd
       In case of problems with saving your password, please create the appropriate path first:
       $ mkdir .vnc; touch .vnc/passwd; vncpasswd
    4. Start the VNC server on the visualization node (optionally with geometry settings):
       $ vncserver (-depth 24 -geometry 1600x1000)
       As a result you will get the information about the display <d> that you are using, e.g. for <d>=1:
       Desktop 'TurboVNC: r10n3:1 (vsc30000)' started on display r10n3:1
    5. Establish the SSH tunnel connection.
       In Linux/macOS:
            $ ssh -L 590<d>:host:590<d> -N vsc30000@login.hpc.kuleuven.be
       e.g. $ ssh -L 5901:r10n3:5901 -N vsc30000@login.hpc.kuleuven.be
       In Windows: in PuTTY, go to the Connection - SSH - Tunnels tab and add the source port 590<d> (e.g. 5901) and destination host:590<d> (e.g. r10n3:5901).
       (screenshot: PuTTY SSH tunnel settings)
       Once the tunnel is added it will appear in the list of forwarded ports:
       (screenshot: PuTTY list of forwarded ports)
       With those settings, continue logging in to the cluster.
    6. Start the VNC viewer connection. In the client, specify the VSC server as localhost:<d> (where <d> is the display number), e.g. localhost:1.
       (screenshot: TurboVNC viewer connection dialog)
       Authenticate with your password.
       (screenshot: TurboVNC authentication dialog)
    7. After your work is done, do not forget to close your connection:
            $ vncserver -kill :<d>; exit
       e.g. $ vncserver -kill :1; exit

    How to start using visualization node?

    1. TurboVNC works with the tab Window Manager twm (more info on how to use it can be found on the Wikipedia twm page or on the twm man page).
       (screenshot: twm desktop)
    2. To start a new terminal, left-click the mouse and choose xterm.
       (screenshot: twm menu)
    3. Load the appropriate visualization module (Paraview, VisIt, VMD, Avizo), e.g.
       $ module load Paraview
    4. Start the application. In general the application has to be started through the VirtualGL package, e.g.
       $ vglrun -d :0 paraview
       but to make it easier we created scripts (starting with capital letters: Paraview, Visit, VMD) that execute the necessary commands and start the application, e.g.
       $ Paraview
    5. To check how many GPUs are involved in your visualization, you can execute gpuwatch in a new terminal:
       $ gpuwatch

    Attached documents

    Slides from the lunchbox session -

    " - diff --git a/HtmlDump/file_0621.html b/HtmlDump/file_0621.html deleted file mode 100644 index dd67512eb..000000000 --- a/HtmlDump/file_0621.html +++ /dev/null @@ -1,225 +0,0 @@ -

    The intel toolchain consists almost entirely of software components developed by Intel. When building third-party software, or developing your own, -load the module for the toolchain: -

    $ module load intel/<version>
    -

    where <version> should be replaced by the one to be used, e.g., 2016b. See the documentation on the software module system for more details. - -

    Starting with the 2014b toolchain, the GNU compilers are also included in -this toolchain as the Intel compilers use some of the libraries and as it is possible -(though some care is needed) to link code generated with the Intel compilers with code -compiled with the GNU compilers. -

    Compilers: Intel and Gnu

    Three compilers are available: -

    Compatible versions of the GNU C (gcc), C++ (g++) and Fortran (gfortran) compilers are also provided. -

    For example, to compile/link a Fortran program fluid.f90 to an executable - fluid with architecture specific optimization, use: -

    $ ifort -O2 -xhost -o fluid fluid.f90
    -

    For documentation on available compiler options, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first! -

    Intel OpenMP

    The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -qopenmp in recent versions of the compiler (toolchain intel/2015a and later) or - -openmp in older versions. For example, to compile/link a OpenMP C program - scatter.c to an executable - scatter with architecture specific -optimization, use: -

    $ icc -qopenmp -O2 -xhost -o scatter scatter.c
    -

    Remember to specify as many processes per node as the number of threads the executable -is supposed to run. This can be done using the - ppn resource, e.g., - -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP -threads. The number of threads should not exceed the number of cores on a compute node. -
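
    As a minimal sketch (the resource values and the executable name scatter are just examples), a job script for a 10-thread OpenMP run could look as follows:

    #!/bin/bash -l
    #PBS -l nodes=1:ppn=10
    module load intel/<version>
    cd $PBS_O_WORKDIR
    # Use as many OpenMP threads as cores requested through ppn.
    export OMP_NUM_THREADS=10
    ./scatter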

    Communication library: Intel MPI

    For the intel toolchain, impi, i.e., Intel MPI is used as the -communications library. To compile/link MPI programs, wrappers are supplied, so that -the correct headers and libraries are used automatically. These wrappers are: -

    Note that the names differ from those of other MPI implementations. -The compiler wrappers take the same options as the corresponding compilers. -

    Using the Intel MPI compilers

    For example, to compile/link a C program thermo.c to an executable - thermodynamics with architecture specific optimization, use: -

    $ mpiicc -O2 -xhost -o thermodynamics thermo.c
    -

    For further documentation, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first. -

    Running an MPI program with Intel MPI

    Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    #!/bin/bash -l
    -module load intel/<version>
    -cd $PBS_O_WORKDIR
    -mpirun -np $PBS_NP ./thermodynamics
    -

    The resource manager passes the number of processes to the job script through the environment variable $PBS_NP, but if you use a recent implementation of Intel MPI, you can even omit -np $PBS_NP as Intel MPI recognizes the Torque resource manager and requests the number of cores itself from the resource manager if the number is not specified. -

    Intel mathematical libraries

    The Intel Math Kernel Library (MKL) is a comprehensive collection of highly optimized -libraries that form the core of many scientific HPC codes. Among other functionality, -it offers: -

    For further documentation, we refer to the links to the Intel documentation at the bottom of this page. -

    There are two ways to link the MKL library: -

    MKL also offers a very fast streaming pseudorandom number generator, see the -documentation for details. -
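
    As an illustration of the simplest of these approaches (a sketch only; solver.c is a hypothetical source file, the threading model and exact flags depend on your needs and the compiler version, and the explicit link line for the other approach can be generated with Intel's MKL link line advisor), recent Intel compilers accept an -mkl switch that takes care of the link line:

    $ icc -O2 -xhost -mkl=sequential -o solver solver.c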

    Intel toolchain version numbers

              icc/icpc/ifort     Intel MPI      Intel MKL      GCC      binutils
    2018a     2018.1.163         2018.1.163     2018.1.163     6.4.0    2.28
    2017b     2017.4.196         2017.3.196     2017.3.196     6.4.0    2.28
    2017a     2017.1.132         2017.1.132     2017.1.132     6.3.0    2.27
    2016b     16.0.3 20160425    5.1.3.181      11.3.3.210     4.9.4    2.26
    2016a     16.0.1 20151021    5.1.2.150      11.3.1.150     4.9.3    2.25
    2015b     15.0.3 20150407    5.03.3048      11.2.3.187     4.9.3    2.25
    2015a     15.0.1 20141023    5.0.2.044      11.2.1.133     4.9.2    /
    2014b     13.1.3 20130617    4.1.3.049      11.1.2.144     4.8.3    /
    2014a     13.1.3 20130607    4.1.3.045      11.1.1.106     /        /

    Further information on Intel tools

    " - diff --git a/HtmlDump/file_0623.html b/HtmlDump/file_0623.html deleted file mode 100644 index 5309d421e..000000000 --- a/HtmlDump/file_0623.html +++ /dev/null @@ -1,212 +0,0 @@ -

    The foss toolchain consists entirely of free and open source software components. When building third-party software, or developing your own, -load the module for the toolchain: -

    $ module load foss/<version>
    -

    where <version> should be replaced by the one to be used, e.g., - 2014a. See the documentation on the software module system for more details. -

    Compilers: GNU

    Three GCC compilers are available: -

    For example, to compile/link a Fortran program fluid.f90 to an executable - fluid with architecture specific optimization for processors that support AVX instructions, use: -

    $ gfortran -O2 -march=corei7-avx -o fluid fluid.f90
    -

    Documentation on GCC compiler flags and options is available on the - project's website. Do not forget to load the -toolchain module first! -

    GCC OpenMP

    The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -fopenmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with optimization for processors that support the AVX instruction set, use:

    $ gcc -fopenmp -O2 -march=corei7-avx -o scatter scatter.c
    -

    Remember to specify as many processes per node as the number of threads the -executable is supposed to run. This can be done using the - ppn resource, e.g., - -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. -The number of threads should not exceed the number of cores on a compute node. -

    Note that the OpenMP runtime library used by GCC is of inferior quality when compared -to Intel's, so developers are strongly encouraged to use the - intel toolchain when developing/building OpenMP software. -

    Communication library: Open MPI

    For the foss toolchain, Open MPI is used as the communications library. -To compile/link MPI programs, wrappers are supplied, so that the correct headers and -libraries are used automatically. These wrappers are: -

    The compiler wrappers take the same options as the corresponding compilers. -

    Using the MPI compilers from Open MPI

    For example, to compile/link a C program thermo.c to an executable - thermodynamics with architecture specific optimization for the AVX -instruction set, use: -

    $ mpicc -O2 -march=corei7-avx -o thermodynamics thermo.c
    -

    Extensive documentation is provided on the Open MPI project's website. Do not forget to load the toolchain module first. -

    Running an Open MPI program

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    #!/bin/bash -l 
-module load foss/<version> 
    -cd $PBS_O_WORKDIR 
    -mpirun ./thermodynamics
    -

The hosts and the number of processes are retrieved from the queue system, which gets this information from the resource specification for that job.

    FOSS mathematical libraries

The foss toolchain contains the basic HPC mathematical libraries; it offers OpenBLAS, LAPACK, ScaLAPACK and FFTW.

    Other components

    Version numbers

    Toolchain   GCC     OpenMPI   OpenBLAS   LAPACK   ScaLAPACK   FFTW    binutils
    2018a       6.4.0   2.1.2     0.2.20     3.8.0    2.0.2       3.3.7   2.28
    2017b       6.4.0   2.1.1     0.2.20     3.8.0    2.0.2       3.3.6   2.28
    2017a       6.3.0   2.0.2     0.2.19     3.3.6    2.0.2       3.3.6   2.27
    2016b       5.4.0   1.10.3    0.2.18     3.6.1    2.0.2       3.3.4   2.26
    2016a       4.9.3   1.10.2    0.2.15     3.6.0    2.0.2       3.3.4   2.25
    2015b       4.9.3   1.8.8     0.2.14     3.5.0    2.0.2       3.3.4   2.25
    2015a       4.9.2   1.8.4     0.2.13     3.5.0    2.0.2       3.3.4   /
    2014b       4.8.3   1.8.1     0.2.9      3.5.0    2.0.2       3.3.4   /
    2014a       4.8.2   1.6.5     0.2.8      3.5.0    2.0.2       3.3.3   /

    Further information on FOSS components

    " - diff --git a/HtmlDump/file_0625.html b/HtmlDump/file_0625.html deleted file mode 100644 index 32dd48074..000000000 --- a/HtmlDump/file_0625.html +++ /dev/null @@ -1,283 +0,0 @@ -

    MPI and OpenMP both have their advantages and disadvantages.

    -

MPI can be used on distributed memory clusters and can scale to thousands of nodes. However, it was designed in the days when clusters had nodes with only one or two cores. Nowadays CPUs often have more than ten cores and sometimes support multiple hardware threads (or logical cores) per physical core (and in fact may need multiple threads to run at full performance). At the same time, the amount of memory per hardware thread is not increasing and is in fact quite low on several architectures that rely on a large number of slower cores or hardware threads to obtain a high performance within a reasonable power budget. Starting one MPI process per hardware thread is then a waste of resources, as each process needs its own communication buffers, OS resources, etc. Managing the hundreds of thousands of MPI processes that we nowadays see on the biggest clusters is very hard.

    -

    OpenMP on the other hand is limited to shared memory parallelism, typically - within a node of a cluster. Moreover, many OpenMP programs don't scale past some - tens of threads partly because of thread overhead in the OS implementation and partly - because of overhead in the OpenMP run-time. -

    -

    Hybrid programs try to combine the advantages of both to deal with the - disadvantages. Hybrid programs use a limited number of MPI processes (\"MPI ranks\") - per node and use OpenMP threads to further exploit the parallelism within the node. - An increasing number of applications is designed or re-engineered in this way. - The optimum number of MPI processes (and hence OpenMP threads per process) depends - on the code, the cluster architecture and the problem that is being solved, but - often one or, on newer CPUs such as the Intel Haswell, two MPI processes per socket (so - two to four for a typical two-socket node) is close to optimal. Compiling and - starting such applications requires some care as we explain on this page. -

    -

    Preparing your hybrid application to run

    -

    To compile and link your hybrid application, you basically have to combine the - instructions for MPI and OpenMP - programs: use - mpicc -fopenmp for the GNU - compilers and - mpiicc -qopenmp for the Intel - compilers ( - mpiicc -openmp for older versions) or the corresponding - MPI Fortran compiler wrappers for Fortran programs. -

    -

    Running hybrid programs on the VSC clusters

    -

When running a hybrid MPI/OpenMP program, fewer MPI processes have to be started than there are logical cores available to the application, as every process uses multiple cores for OpenMP parallelism. Yet when requesting logical cores per node from the scheduler, one still has to request the total number of cores needed per node. Hence the PBS property "ppn" should not be read as "processes per node" but rather as "logical cores per node" or "processing units per node". Instead we have to tell the MPI launcher (mpirun for most applications) to launch fewer processes than there are logical cores on a node, and tell each MPI process to use the correct number of OpenMP threads.

    -

For optimal performance, the threads of one MPI process should be placed as close together as possible in the logical core hierarchy implied by the cache and core topology of a given node. E.g., on a dual-socket node it may make a lot of sense to run 2 MPI processes, with each MPI process using all cores of a single socket. In other applications, it might be better to run only one MPI process per node, or multiple MPI processes per socket. In more technical terms, each MPI process runs in its own MPI domain consisting of a number of logical cores; we want these domains to be non-overlapping and fixed in time during the life of the MPI job, and the logical cores in a domain to be "close" to each other. This optimises the use of the memory hierarchy (cache and RAM).

    -

    OpenMP has several environment variables that can then control the number - of OpenMP threads and the placement of the threads in the MPI domain. All of these - may also be overwritten by the application, so it is not a - bullet-proof way to control the behaviour of OpenMP applications. - Moreover, some of these environment variables are - implementation-specific and hence are different between the Intel - and GNU OpenMP runtimes. The most important variable is - OMP_NUM_THREADS. It - sets the number of threads to be used in parallel regions. As - parallel constructs can be nested, a process may still start more - threads than indicated by - OMP_NUM_THREADS. However, - the total number of threads can be limited by the variable - OMP_THREAD_LIMIT. -
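For example, to allow 5 threads in a parallel region while capping the total number of threads of the process at 20 (values chosen purely for illustration), one would set:

export OMP_NUM_THREADS=5
export OMP_THREAD_LIMIT=20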

    -

    Script mympirun (VSC)

    -

The mympirun script is developed by the UGent VSC-team to cope with differences between MPI implementations automatically. It offers support for hybrid programs through the --hybrid command line switch to specify the number of processes per node. The number of threads per process can then be computed by dividing the number of logical cores per node by the number of processes per node.

    -

    E.g., to run a hybrid MPI/OpenMP program on 2 nodes using 20 - cores on each node and running 4 MPI ranks per node (hence 5 - OpenMP threads per MPI rank), your script would contain -

    -
#PBS -l nodes=2:ppn=20
    -
    -

near the top to request the resources from the scheduler. It would then load the module that provides the mympirun command:

    -
    module load vsc-mympirun
    -
    -

    (besides other modules that are needed to run your application) - and finally start your application: -

    -
    mympirun --hybrid=4 ./hybrid_mpi
    -
    -

    assuming your executable is called hybrid_mpi and resides in the - working directory. The mympirun launcher will automatically - determine the correct number of MPI processes to start based on - the resource specifications and the given number of processes per - node (the - --hybrid switch). -
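Putting the pieces above together, a complete job script could look like the following sketch (the walltime line is a placeholder, and the modules needed by your own application still have to be added):

#!/bin/bash -l
#PBS -l nodes=2:ppn=20
#PBS -l walltime=1:00:00

module load vsc-mympirun
# load the modules needed by hybrid_mpi here

cd $PBS_O_WORKDIR
mympirun --hybrid=4 ./hybrid_mpi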

    -

    Intel toolchain

    -

With Intel MPI, defining the MPI domains is done through the environment variable I_MPI_PIN_DOMAIN. Note however that the Linux scheduler is still free to move all threads of an MPI process to any core within its MPI domain at any time, so it may make sense to further pin the OpenMP threads through the OpenMP environment variables as well. This is definitely the case if there are more logical cores available in the process partition than there are OpenMP threads. Some environment variables that influence the thread placement are the Intel-specific variable KMP_AFFINITY and the OpenMP 3.1 standard environment variable OMP_PROC_BIND.

    -

    In our case, we want to use all logical cores of a node but make sure - that all cores for a domain are as close together as possible. The - easiest way to accomplish this is to set - OMP_NUM_THREADS - to the desired number of OpenMP threads per MPI process and then set - I_MPI_PIN_DOMAIN to the value omp: -

    -
    export I_MPI_PIN_DOMAIN=omp
    -
    -

    The longer version is -

    -
    export I_MPI_PIN_DOMAIN=omp,compact
    -
    -

    where compact tells the launcher explicitly to pack threads for - a single MPI process as close together as possible. This layout is - the default on current versions of Intel MPI so it is not really - needed to set this. An alternative, when running 1 MPI process per - socket, is to set -

    -
    export I_MPI_PIN_DOMAIN=socket
    -
    -

    To enforce binding of each OpenMP thread to a particular logical core, one can set -

    -
    export OMP_PROC_BIND=true
    -
    -

As an example, assume again that we want to run the program hybrid_mpi on 2 nodes containing 20 cores each, running 4 MPI processes per node, so 5 OpenMP threads per process.

    -

    The following are then essential components of the job script: -

    - -

    In this case we do need to specify both the total number of MPI - ranks and the number of MPI ranks per host as we want the same - number of MPI ranks on each host. -
    - In case you need a more automatic script that is easy to adapt to - a different node configuration or different number of processes - per node, you can do some of the computations in Bash. The number - of processes per node is set in the shell variable - MPI_RANKS_PER_NODE. The above commands become: -

    -
    #! /bin/bash -l
-# Adapt nodes and ppn on the next line according to the cluster you're using!
-#PBS -l nodes=2:ppn=20
    -...
    -MPI_RANKS_PER_NODE=4
    -#
    -module load intel
    -#
    -export HOSTS=`sort -u $PBS_NODEFILE | paste -s -d,`
    -#
    -export OMP_NUM_THREADS=$(($PBS_NUM_PPN / $MPI_RANKS_PER_NODE))
    -#
    -export OMP_PROC_BIND=true
    -#
    -export I_MPI_PIN_DOMAIN=omp
    -#
-mpirun -hosts $HOSTS -perhost $MPI_RANKS_PER_NODE ./hybrid_mpi
    -
    -

    Intel documentation on hybrid programming

    -

    Some documents on the Intel web site that contain more - information on developing and running hybrid programs: -

    - -

    Foss toolchain (GCC and Open MPI)

    -

    Open MPI has very flexible options for process and thread placement, but they are not always easy to use. There is however also a simple option to indicate the number of logical cores you want to assign to each MPI rank (MPI process): -cpus-per-proc <num> with <num> the number of logical cores assigned to each MPI rank. -

    -

You may want to further control the thread placement using the standard OpenMP mechanisms, e.g., the GNU-specific variable GOMP_CPU_AFFINITY or the OpenMP 3.1 standard environment variable OMP_PROC_BIND. As long as we want to use all cores, it does not matter much whether OMP_PROC_BIND is set to true, close or spread. However, setting OMP_PROC_BIND to true is generally a safe choice to ensure that each thread stays on the core it was started on, which improves cache performance.

    -

    Essential elements of our job script are: -

    -
    #! /bin/bash -l
-# Adapt nodes and ppn on the next line according to the cluster you're using!
    -#PBS -lnodes=2:ppn=20
    -...
    -#
    -module load foss
    -#
    -export OMP_NUM_THREADS=5
    -#
    -export OMP_PROC_BIND=true
    -#
    -mpirun -cpus-per-proc $OMP_NUM_THREADS ./hybrid_mpi
    -
    -

    Advanced issues

    -

    Open MPI allows a lot of control over process placement and rank assignment. The Open MPI mpirun command has several options that influence this process: -

    - -
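As a sketch (the exact spelling of these options differs between Open MPI versions, so check the mpirun manual page of the version you load), a run that maps one MPI rank per socket, binds each rank to the cores of that socket and reports the resulting bindings could look like:

mpirun --map-by socket --bind-to socket --report-bindings ./hybrid_mpi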

More information can be found in the mpirun manual pages on the Open MPI web site and in the following presentations:

    -" - diff --git a/HtmlDump/file_0627.html b/HtmlDump/file_0627.html deleted file mode 100644 index 9e1da924b..000000000 --- a/HtmlDump/file_0627.html +++ /dev/null @@ -1,25 +0,0 @@ -
    1. Studying gene family evolution on the VSC Tier-2 and Tier-1 infrastructure
      Setareh Tasdighian et al. (VIB/UGent)
    2. -
    3. Genomic profiling of murine carcinoma models
      B. Boeckx, M. Olvedy, D. Nasar, D. Smeets, M. Moisse, M. Dewerchin, C. Marine, T. Voet, C. Blanpain,D. Lambrechts (VIB/KU Leuven)
    4. -
    5. Modeling nucleophilic aromatic substitution reactions with ab initio molecular dynamics
      Samuel L. Moors et al. (VUB)
    6. -
    7. Climate modeling on the Flemish Supercomputers
      Fabien Chatterjee, Alexandra Gossart, Hendrik Wouters, Irina Gorodetskaya, Matthias Demuzere, Niels Souverijns, Sajjad Saeed, Sam Vanden Broucke, Wim Thiery, Nicole van Lipzig (KU Leuven)
    8. -
    9. Simulating the evolution of large grain structures using the phase-field approach
      Hamed Ravash, Liesbeth Vanherpe, Nele Moelans (KU Leuven)
    10. -
    11. Multi-component multi-phase field model combined with tensorial decomposition
      Inge Bellemans, Kim Verbeken, Nico Vervliet, Nele Moelans, Lieven De Lathauwer (UGent, KU Leuven)
    12. -
    13. First-principle modeling of planetary magnetospheres: Mercury and the Earth
      Jorge Amaya, Giovanni Lapenta (KU Leuven)
    14. -
    15. Modeling the interaction of the Earth with the solar wind: the Earth magnetopause
      Emanuele Cazzola, Giovanni Lapenta (KU Leuven)
    16. -
    17. Jupiter's magnetosphere
      Emmanuel Chané, Joachim Saur, Stefaan Poedts (KU Leuven)
    18. -
    19. High-performance computing of wind-farm boundary layers
      Dries Allaerts, Johan Meyers (KU Leuven)
    20. -
    21. Large-eddy simulation study of Horns Rev windfarm in variable mean wind directions
      Wim Munters, Charles Meneveau, Johan Meyers (KU Leuven)
    22. -
    23. Modeling defects in the light absorbing layers of photovoltaic cells
      Rolando Saniz, Jonas Bekaert, Bart Partoens, Dirk Lamoen (UAntwerpen)
    24. -
    25. Molecular Spectroscopy : Where Theory Meets Experiment
      Carl Mensch, Evelien Van de Vondel, Yannick Geboes, Pilar Rodríguez Ortega, Liene De Beuckeleer, Sam Jacobs, Jonathan Bogaerts, Filip Desmet, Christian Johannessen, Wouter Herrebout (UAntwerpen)
    26. -
    27. On the added value of complex stock trading rules in short-term equity price direction prediction
      Dirk Van den Poel, Céline Chesterman, Maxim Koppen, Michel Ballings (UGent University, University of Tennessee at Knoxville)
    28. -
    29. First-principles study of the surface and adsorption properties of α-Cr2O3
      Samira Dabaghmanesh, Erik C. Neyts, Bart Partoens (UAntwerpen)
    30. -
    31. The surface chemistry of plasma-generated radicals on reduced titanium dioxide
      Stijn Huygh, Erik C. Neyts (UAntwerpen)
    32. -
    33. The High Throughput Approach to Computational Materials Design
      Michael Sluydts, Titus Crepain, Karel Dumon, Veronique Van Speybroeck, Stefaan Cottenier (UGent)
    34. -
    35. Distributed Memory Reduction in Presence of Process Desynchronization
      Petar Marendic, Jan Lemeire, Peter Schelkens (Vrije Universiteit Brussel, iMinds)
    36. -
    37. Visualization @HPC KU Leuven
      Mag Selwa (KU Leuven)
    38. -
    39. Multi-fluid modeling of the solar chromosphere
      Yana G. Maneva, Alejandro Alvarez-Laguna, Andrea Lani, Stefaan Poedts (KU Leuven)
    40. -
    41. Molecular dynamics in momentum space
      Filippo Morini (UHasselt)
    42. -
    43. Predicting sound in planetary inner cores using quantum physics
      Jan Jaeken, Attilio Rivoldini, Tim van Hoolst, Veronique Van Speybroeck, Michel Waroquier, Stefaan Rottener (UGent)
    44. -
    45. High Fidelity CFD Simulations on Tier-1
      Leonidas Siozos-Rousoulis, Nikolaos Stergiannis, Nathan Ricks, Ghader Ghorbaniasl, Chris Lacor (VUB)
    46. -
    " - diff --git a/HtmlDump/file_0629.html b/HtmlDump/file_0629.html deleted file mode 100644 index 8a69b649d..000000000 --- a/HtmlDump/file_0629.html +++ /dev/null @@ -1,11 +0,0 @@ -

    High performance and multiscale computing: blood, clay, stars and humans

    Speaker: Derek Groen (Centre for Computational Science, University College London)

    Multiscale simulations are becoming essential across many scientific disciplines. The concept of having multiple models form a single scientific simulation, with each model operating on its own space and time scale, gives rise to a range of new challenges and trade-offs. In this talk, I will present my experiences with high performance and multiscale computing. I have used supercomputers for modelling clay-polymer nanocomposites [1], blood flow in the human brain [2], and dark matter structure formation in the early universe [3]. I will highlight some of the scientific advances we made, and present the technologies we developed and used to enable simulations across supercomputers (using multiple models where convenient). In addition, I will reflect on the non-negligible aspect of human effort and policy constraints, and share my experiences in enabling challenging calculations, and speeding up more straightforward ones. -

    [slides - PDF 8.3MB]

    References

      -
    1. James L. Suter, Derek Groen, and Peter V. Coveney. Chemically Specific Multiscale Modeling of Clay–Polymer Nanocomposites Reveals Intercalation Dynamics, Tactoid Self-Assembly and Emergent Materials Properties. Advanced Materials, volume 27, issue 6, pages 966–984. (DOI: 10.1002/adma.201403361)
    2. -
    3. Mohamed A. Itani, Ulf D. Schiller, Sebastian Schmieschek, James Hetherington, Miguel O. Bernabeu, Hoskote Chandrashekar, Fergus Robertson, Peter V. Coveney, and Derek Groen. An automated multiscale ensemble simulation approach for vascular blood flow. Journal of Computational Science, volume 9, pages 150-155. (DOI: 10.1016/j.jocs.2015.04.008)
    4. -
5. Derek Groen and Simon Portegies Zwart. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing. 2015 IEEE 11th International Conference on e-Science, IEEE, pages 565-571. (DOI: 10.1109/eScience.2015.81)
    6. -

    High-performance computing of wind farms in the atmospheric boundary layer

    Speaker: Johan Meyers (Department of Mechanical Engineering, KU Leuven) -

The aerodynamics of large wind farms are governed by the interaction between turbine wakes, and by the interaction of the wind farm as a whole with the atmospheric boundary layer. The deceleration of the flow in the farm that is induced by this interaction leads to an efficiency loss for wind turbines downstream in the farm that can amount to 40% or more. Research into a better understanding of wind-farm boundary layer interaction is an important driver for reducing this efficiency loss. The physics of the problem involves a wide range of scales, from farm scale and ABL scale (requiring domains of several kilometers cubed) down to turbine and turbine-blade scale with flow phenomena that take place on millimeter scale. Modelling such a system requires a multi-scale approach in combination with extensive supercomputing. To this end, our simulation code SP-Wind is used. Implementation issues and parallelization are discussed. Next to that, new physical insights gained from our simulations at the VSC are highlighted.

    [slides - PDF 9.9MB]

    " - diff --git a/HtmlDump/file_0631.html b/HtmlDump/file_0631.html deleted file mode 100644 index e59cfa6cd..000000000 --- a/HtmlDump/file_0631.html +++ /dev/null @@ -1,14 +0,0 @@ -

    Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives.

    -

    Not only have supercomputers changed scientific research in a fundamental way ...

    -

    Supercomputers are indispensable for scientific research and for a modern R&D environment. ‘Computational Science’ is - alongside theory and experiment - the third fully fledged pillar of science. For centuries, scientists used pen and paper to develop new theories based on scientific experiments. They also set up new experiments to verify the predictions derived from these theories (a process often carried out with pen and paper). It goes without saying that this method was slow and cumbersome. -

    -

As an astronomer you cannot simply make Jupiter a little bigger to see what effect this larger size would have on our solar system. As a nuclear scientist it would be difficult to deliberately lose control over a nuclear reaction to ascertain the consequences of such a move. (Super)computers can do this and are indeed revolutionizing science.

    -

Complex theoretical models - too advanced for ‘pen and paper’ results - are simulated on computers. The results they deliver are then compared with reality and used for prediction purposes. Supercomputers have the ability to handle huge amounts of data, thus enabling experiments that would not be achievable in any other way. Large radio telescopes or the LHC particle accelerator at CERN could not function without supercomputers processing mountains of data.

    -

… but also industry and our society

    -

But supercomputers are not just an expensive toy for researchers at universities. Numerical simulation also opens up new possibilities in industrial R&D, for example in the search for new medicinal drugs, new materials or even the development of a new car model. Biotechnology also requires the large data processing capacity of a supercomputer. The quest for clean energy, a better understanding of the weather and climate evolution, and new technologies in health care all require a powerful supercomputer.

    -

    Supercomputers have a huge impact on our everyday lives. Have you ever wondered why the showroom of your favourite car brand contains many more car types than 20 years ago? Or how each year a new and faster smartphone model is launched on the market? We owe all of this to supercomputers. -

    " - diff --git a/HtmlDump/file_0637.html b/HtmlDump/file_0637.html deleted file mode 100644 index f99007fcf..000000000 --- a/HtmlDump/file_0637.html +++ /dev/null @@ -1,3 +0,0 @@ -

    What is a supercomputer?

    -

    A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone. But there are also important differences.

    " - diff --git a/HtmlDump/file_0639.html b/HtmlDump/file_0639.html deleted file mode 100644 index d0aca95a8..000000000 --- a/HtmlDump/file_0639.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Impact on research, industry and society

    -

    Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives.

    diff --git a/HtmlDump/file_0645.html b/HtmlDump/file_0645.html deleted file mode 100644 index 735331103..000000000 --- a/HtmlDump/file_0645.html +++ /dev/null @@ -1,8 +0,0 @@ -

    Tier-1b thin node supercomputer BrENIAC

This system has been in production use since October 2016.

    Purpose

    On this cluster you can run highly parallel, large scale computations that rely critically on efficient communication. -

    Hardware

    Software

You will find the standard Linux HPC software stack installed on the Tier-1 cluster. On request, user support will install additional (Linux) software for you, but you are responsible for taking care of the licensing issues (including associated costs).

    Access

    You can get access to this infrastructure by applying for a starting grant, submitting a project proposal that will be evaluated on scientific and technical merits, or by buying compute time.

    " - diff --git a/HtmlDump/file_0649.html b/HtmlDump/file_0649.html deleted file mode 100644 index 654f161cd..000000000 --- a/HtmlDump/file_0649.html +++ /dev/null @@ -1,15 +0,0 @@ -

    The VSC account

In order to use the infrastructure of the VSC, you need a VSC-userid, also called a VSC account. The account gives you access to most of the infrastructure, though only with a limited compute time allocation on some of the systems. Also, for the main Tier-1 compute cluster you need to submit a project application (or you should be covered by a project application within your research group). For some more specialised hardware you have to request access separately, typically via the coordinator of your institution, because we want to be sure that this (usually rather expensive) hardware is used efficiently for the type of applications for which it was purchased.

    Who can get a VSC account?

    Additional information

Before you apply for a VSC account, it is useful to first check whether the infrastructure is suitable for your application. Windows or OS X programs, for instance, cannot run on our infrastructure, as we use the Linux operating system on the clusters. The infrastructure should also not be used to run applications for which the compute power of a good laptop is sufficient. The pages on the Tier-1 and Tier-2 infrastructure in this part of the website give a high-level description of our infrastructure. You can find more detailed information in the user documentation on the user portal. When in doubt, you can also contact your local support team. This does not require a VSC account.

You should also first check the page "Account request" in the user documentation and install the necessary software on your PC. You can also find links to information about that software on the "Account Request" page.

Furthermore, it can also be useful to take one of the introductory courses that we organise periodically at all universities. It is best to apply for your VSC account before the course, so that you can also do the exercises during the course. We strongly urge people who are not familiar with the use of a Linux supercomputer to take such a course, as we do not have enough staff to help everyone individually with all those generic issues.

    There is an exception to the rule that you need a VSC account to access the VSC systems: Users with a valid VUB account can access the Tier-2 systems at the VUB. -

    Your account also includes two “blocks” of disk space: your home directory and data directory. Both are accessible from all VSC clusters. When you log in to a particular cluster, you will also be assigned one or more blocks of temporary disk space, called scratch directories. Which directory should be used for which type of data, is explained in the user documentation. -
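On the clusters these locations are typically also exposed through environment variables; a sketch (the names $VSC_HOME, $VSC_DATA and $VSC_SCRATCH are the usual ones, but check the user documentation of your site):

echo $VSC_HOME      # your home directory
echo $VSC_DATA      # your data directory
echo $VSC_SCRATCH   # the main scratch directory on the current cluster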

Your VSC account does not give you access to all available software. You can use all free software and a number of compilers and other development tools. For most commercial software, you must first prove that you have a valid license, or the person who has paid for the license on the cluster must allow you to use it. For this you can contact your local support team.

    " - diff --git a/HtmlDump/file_0655.html b/HtmlDump/file_0655.html deleted file mode 100644 index 1531cfe79..000000000 --- a/HtmlDump/file_0655.html +++ /dev/null @@ -1,41 +0,0 @@ -

    A collaboration with the VSC offers your company a good number of benefits.

    " - diff --git a/HtmlDump/file_0659.html b/HtmlDump/file_0659.html deleted file mode 100644 index 8caf56d96..000000000 --- a/HtmlDump/file_0659.html +++ /dev/null @@ -1,2 +0,0 @@ -

Modern microelectronics has created many new opportunities. Today powerful supercomputers enable us to collect and process huge amounts of data. Complex systems can be studied through numerical simulation without having to build a prototype or set up a scaled experiment beforehand. All this leads to a quicker and cheaper design of new products, cost-efficient processes and innovative services. To support this development in Flanders, the VSC was founded by the Flemish Government in late 2007. Our accumulated expertise and infrastructure is also available to industry for R&D.

    " - diff --git a/HtmlDump/file_0661.html b/HtmlDump/file_0661.html deleted file mode 100644 index a5eefb1c8..000000000 --- a/HtmlDump/file_0661.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Our offer to you

    -

Thanks to our embedding in academic institutions, we can offer you not only infrastructure at competitive rates, but also expert advice and training.

    diff --git a/HtmlDump/file_0663.html b/HtmlDump/file_0663.html deleted file mode 100644 index c71fff3c9..000000000 --- a/HtmlDump/file_0663.html +++ /dev/null @@ -1,2 +0,0 @@ -

    About us

    -

    The VSC is a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background.

    diff --git a/HtmlDump/file_0671.html b/HtmlDump/file_0671.html deleted file mode 100644 index 6061c15bb..000000000 --- a/HtmlDump/file_0671.html +++ /dev/null @@ -1,4 +0,0 @@ -

    The VSC was launched in late 2007 as a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. Our team also collaborates with many research groups at various universities and helps them and their industrial partners with all aspects of infrastructure usage.

    Besides a competitive infrastructure, the VSC team also offers full assistance with the introduction of High Performance Computing within your company. -

    Contact

    Coordinator industry access and services: industry@fwo.be

    Alternatively, you can contact one of the VSC coordinators.
    -

    " - diff --git a/HtmlDump/file_0673.html b/HtmlDump/file_0673.html deleted file mode 100644 index 9b718ea9b..000000000 --- a/HtmlDump/file_0673.html +++ /dev/null @@ -1 +0,0 @@ -

    Get in touch with us!

    diff --git a/HtmlDump/file_0681.html b/HtmlDump/file_0681.html deleted file mode 100644 index 039c769d6..000000000 --- a/HtmlDump/file_0681.html +++ /dev/null @@ -1,44 +0,0 @@ -

    Overview of the storage infrastructure

Storage is an important part of a cluster, but not all storage has the same characteristics. The HPC cluster storage at KU Leuven consists of 3 different storage tiers, optimized for different usage patterns.

The picture below gives a quick overview of the different components.

    Storage Types

As described on the web page "Where can I store what kind of data?", different types of data can be stored in different places. There is also an extra storage space for archive use.

    Archive Storage

    The archive tier is built with DDN WOS storage. It is intended to store data for longer term. The storage is optimized for capacity, not for speed. The storage by default is mirrored. -

    No deletion rules are executed on this storage. The data will be kept until the user deletes it. -

Use for: storing data that will not be used for a longer period and which should be kept. Compute nodes have no direct access to this storage area, so it should not be used for job I/O operations.

    How to request: Please send a request from the storage request webpage. -

    How much does it cost: For all the prices please refer to our service catalog (login required).
    -

    Working with archive storage

    The archive storage should not be used to perform I/O in a compute job. Data should first be copied to the faster scratch filesystem. To accommodate user groups that have a large archive space, a staging area is foreseen. The staging area is a part of the same hardware platform as the fast scratch filesystem, but other rules apply. Data is not deleted automatically after 21 days. When the staging area is full it will be the user’s responsibility to make sure that enough space is available. Data created on scratch or in the staging location which needs to be kept for longer time should be copied to the archive. -

    Location of Archive/Staging

The name of the user's archive directory has the format /archive/leuven/arc_XXXXX, where XXXXX is a number that will be given to you by the HPC admins once your archive request is handled.

    The name of your staging directory is in this format: /staging/leuven/stg_XXXXX, where XXXXX is the same number as for the archive directory. -

    Use case: Data is in archive, how can I use it in a compute job?

    In this use case you want to start to compute on older data in your archive. -

If you want to compute with data in your archive stored in ‘archive_folder’, you can copy this data to your scratch using the following command:

    rsync -a <PATH_to_archive/archive_folder> <PATH_to_scratch>
    -

    Afterwards you may want to archive the new produced results back to archive therefore you should follow the steps in the following use case. -

    Use case: Data produced on cluster, stored for longer time?

    This procedure applies to the case when you have jobs producing output results on the scratch area and you want to archive those results in your archive area. -

In that case you have a folder on scratch called ‘archive_folder’ in which you are working, and the same folder already exists in your archive space. Now you want to update your archive space with the new results produced on scratch.

    You could run the command: -

    rsync -i -u -r --dry-run <PATH_to_scratch/archive_folder> <PATH_to_archive/archive_folder>
    -

This command will not perform the copy yet, but it will give an overview of all data changed since the last copy from the archive, so not all data needs to be copied back. If you agree with this overview, you can run the command again without the --dry-run option. If you are syncing a large number of files, please contact HPC support for follow-up.

    Use case : How to get local data on archive?

Data that is stored at the user's local facilities can be copied to the archive through scp/bbcp/sftp (an example is given below the following links). For this, please refer to the appropriate VSC documentation:

    for linux: openssh -

    for windows: filezilla or winscp -

    for OS X: data-cyberduck. -
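For example, with scp from a Linux or OS X machine, copying a local folder into your archive space could look like this sketch (the login node address, your VSC user id and the arc_000XX directory are placeholders to replace with your own values):

scp -r my_local_folder vsc3XXXX@<vsc-login-node>:/archive/leuven/arc_000XX/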

    Use case : How to check the disk usage?

To check the occupied disk space, an additional option is necessary with the du command:

    du --apparent-size folder-name
    -

    How to stage in or stage out using torque?

Torque also offers the possibility to specify data staging as a job requirement. This way Torque will copy your data to scratch while your job is in the queue and will not start the job before all data has been copied. The same mechanism is possible for stage-out requirements. In the example below, Torque copies your data back from scratch to the archive storage tier when your job is finished:

    qsub -W stagein=/scratch/leuven/3XX/vsc3XXXX@login1:/archive/leuven/arc_000XX/foldertostagein 
    --W stageout=/scratch/leuven/3XX/vsc3XXXX/foldertostageout@login1:/archive/leuven/arc_000XX/
    -

    -

The hostname is always one of the login nodes, because these are the only nodes on the cluster where the archive is available.

    For stagein the copy goes from /archive/leuven/arc_000XX/foldertostagein to /scratch/leuven/3XX/vsc3XXXX -

    For stageout the copy goes from /scratch/leuven/3XX/vsc3XXXX/foldertostageout to /archive/leuven/arc_000XX/ -

    Attached documents

    " - diff --git a/HtmlDump/file_0683.html b/HtmlDump/file_0683.html deleted file mode 100644 index 693a6719e..000000000 --- a/HtmlDump/file_0683.html +++ /dev/null @@ -1,7 +0,0 @@ -

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc nec interdum velit, et viverra arcu. Donec ac nisl vehicula orci mattis pellentesque vel sed magna. Ut vulputate ipsum in bibendum suscipit. Phasellus tristique molestie cursus. Suspendisse sed luctus diam. Duis dignissim tincidunt congue. Sed laoreet nunc ac hendrerit congue. Aenean semper dolor sit amet tincidunt pharetra. Fusce malesuada iaculis enim eu venenatis. Maecenas commodo laoreet eros eu feugiat. Integer dignissim sapien at vehicula fermentum. Sed quis odio in dui luctus tempus. Praesent porttitor nisl varius, mattis eros laoreet, eleifend magna. Curabitur vehicula vitae eros vel egestas. Fusce at metus velit.

    Test

    Test movie

    The movie below illustrates the use of supercomputing for the design of a cooling element from a report on Kanaal Z. -

Method 1, following the embed code generated by the Kanaal Z website: does not play...

    - -

Method 2: video tag, works only in HTML5 browsers, and I fear that Kanaal Z will not be happy with this method...

    " - diff --git a/HtmlDump/file_0687.html b/HtmlDump/file_0687.html deleted file mode 100644 index 5b4039691..000000000 --- a/HtmlDump/file_0687.html +++ /dev/null @@ -1,133 +0,0 @@ -

    The industry day has been postponed to a later date, probably in the autumn around the launch of the second Tier-1 system in Flanders.

    Supercharge your business with supercomputing

    When? New date to be determined
    Where? Technopolis, Mechelen
    Admission free, but registration required -

The VSC Industry Day is the second in a series of annual events. The goals are to create awareness about the potential of HPC for industry and to help firms overcome the hurdles to using supercomputing. We are proud to present an exciting program with testimonials from some Flemish firms that have already discovered the opportunities of large-scale computing, success stories from a European HPC centre that successfully collaborates with industry, and a presentation by an HPC vendor that has been very successful in delivering solutions to several industries.

Preliminary program - Supercharge your business with supercomputing

Given that the industry day has been postponed, the program is subject to change.

    13.00-13.30   Registration and welcome drink
    13.30-13.45   Introduction and opening
                  Prof. dr. Colin Whitehouse (chair)
    13.45-14.15   The future is now - physics-based simulation opens new gates in heart disease treatment
                  Matthieu De Beule (FEops)
    13.45-14.05   Hydrodynamic and morfologic modelling of the river Scheldt estuary
                  Sven Smolders and Abdel Nnafie (Waterbouwkundig Laboratorium)
    14.15-14.45   HPC in Metal Industry: Modelling Wire Manufacturing
                  Peter De Jaeger (Bekaert)
    15.15-15.45   Coffee break
    15.45-16.15   NEC industrial customers HPC experiences
                  Fredrik Unger (NEC)
    16.15-16.45   Exploiting business potential with supercomputing
                  Karen Padmore (HPC Wales and SESAME repres.)
    16.45-17.05   What VSC has to offer to your business
                  Ingrid Barcena Roig and Ewald Pauwels (VSC)
    17.05-17.25   Q&A discussion
                  Panel/chair
    17.25-17.30   Closing
                  Prof. dr. Colin Whitehouse (chair)
    17.30-18.30   Networking reception

    Registration

Registration is closed for now. Once the new date is determined, a new registration form will be made available.

    How to reach Technopolis.

    " - diff --git a/HtmlDump/file_0689.html b/HtmlDump/file_0689.html deleted file mode 100644 index a6758dc3f..000000000 --- a/HtmlDump/file_0689.html +++ /dev/null @@ -1 +0,0 @@ -

    VSC Industry Day - Thursday April 14, 2016

    diff --git a/HtmlDump/file_0691.html b/HtmlDump/file_0691.html deleted file mode 100644 index a6758dc3f..000000000 --- a/HtmlDump/file_0691.html +++ /dev/null @@ -1 +0,0 @@ -

    VSC Industry Day - Thursday April 14, 2016

    diff --git a/HtmlDump/file_0695.html b/HtmlDump/file_0695.html deleted file mode 100644 index 7f7d07fae..000000000 --- a/HtmlDump/file_0695.html +++ /dev/null @@ -1,35 +0,0 @@ -

    A batch system

Apart from the amount of compute power it can deliver when used properly, there are two important differences between a supercomputer and your personal laptop or smartphone. First, since it is a large and expensive machine and not every program can use all of its processing power, it is a multi-user machine. Second, it is optimised to run large parallel programs in such a way that they don't interfere too much with each other, so your compute resources will be isolated as much as possible from those assigned to other users. The latter is necessary to ensure fast and predictable execution of large parallel jobs, as the performance of a parallel application will always be limited by the slowest node, process or thread.

    This has some important consequences:

      -
    1. As a user, you don't get the whole machine, but a specific part of it, and so you'll have to specify which part you need for how long.
    2. -
    3. Often more capacity is requested than available at that time. Hence you may have to wait a little before you get the resources that you request. To organise this in a proper way, every supercomputer provides a queueing system.
    4. -
    5. Moreover, as you often have to wait a bit before you get the requested resources, it is not well suited for interactive work. Instead, most work on a supercomputer is done in batch mode: Programs run without user interaction, reading their input from file and storing their results in files.
    6. -

In fact, another reason why interactive work is discouraged on most clusters is that interactive programs rarely fully utilise the available processors but waste a lot of time waiting for new user input. Since that time cannot be used by another user either (remember that your work is isolated from that of other users), it is a waste of very expensive compute resources.

    A job is an entity of work that you want to do on a supercomputer. A job consists of the execution of one or more programs and needs certain resources for some time to be able to execute. Batch jobs are described by a job script. This is like a regular linux shell script (usually for the bash shell), but it usually contains some extra information: a description of the resources that are needed for the job. A job is then submitted to the cluster and placed in a queue (managed by a piece of software called the queue manager). A scheduler will decide on the priority of the job that you submitted (based on the resources that you request, your past history and policies determined by the system managers of the cluster). It will use the resource manager to check which resources are available and to start the job on the cluster when suitable resources are available and the scheduler decides it is the job's time to run. -

    At the VSC we use two software packages to perform these tasks. Torque is an open source package that performs the role of queue and resource manager. Moab is a commercial package that provides way more scheduling features than its open source alternatives. Though both packages are developed by the same company and are designed to work well with each other, they both have their own set of commands with often confusing command line options. -

    Anatomy of a job script

    A typical job script looks like: -

    #!/bin/bash
-#PBS -l nodes=1:ppn=20
-#PBS -l walltime=1:00:00
    -#PBS -o stdout.$PBS_JOBID
    -#PBS -e stderr.$PBS_JOBID
    -
    -module load MATLAB
    -cd $PBS_O_WORKDIR
    -
    -matlab -r fibo
    -

    We can distinguish 4 sections in the script: -

      -
    1. The first line simply tells that this is a shell script.
    2. -
    3. The second block, the lines that start with #PBS, specify the resources and tell the resource manager where to store the standard output and standard error from the program. To ensure unique file names, the author of this script has chosen to put the \"Job ID\", a unique ID for every job, in the name.
    4. -
5. The next two lines create the proper environment to run the job: they load a module and change the working directory to the directory from which the job was submitted (this is what is stored in the environment variable $PBS_O_WORKDIR).
    6. -
    7. Finally the script executes the commands that are the core of the job. In this simple example, this is just a single command, but it could as well be a whole bash script.
    8. -

    In other pages of the documentation in this section, we'll go into more detail on specifying resource requirements, output redirection and notifications and on environment variables that are set by the scheduler and can be used in your job. -

    Assuming that this script is called myscript.pbs, the job can then be submitted to the queueing system with the command qsub myscript.pbs. -
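A typical submit-and-monitor cycle then looks as follows (qstat and qdel are the standard Torque commands to inspect and delete jobs; their exact output differs from cluster to cluster):

$ qsub myscript.pbs     # submit the job, prints the job ID
$ qstat                 # show the status of your queued and running jobs
$ qdel <jobid>          # remove a queued job or kill a running one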

Note that if you use a system at the KU Leuven, including the Tier-1 system BrENIAC, you need credits. When submitting your job, you also need to tell qsub which credits to use. We refer to the page on "Credit system basics".

    Structure of this documentation section

    Some background information

    For those readers who want some historical background to understand where the complexity comes from. -

    In the ’90s of the previous century, there was a popular resource manager called Portable Batch System, developed by a contractor for NASA. This was open-sourced. But that contractor was acquired by another company that then sold the rights to Altair Engineering that evolved the product into the closed-source product PBSpro (which was then open-sourced again in the summer of 2016). The open-source version was forked by another company that is now known as Adaptive Computing and renamed to Torque. Torque remained open–source. The name stands for Terascale Open-source Resource and QUEue manager. Even though the name was changed, the commands remained which explains why so many commands still have the abbreviation PBS in their name. -

    The scheduler Moab evolved from MAUI, an open-source scheduler. Adaptive Computing, the company behind Torque and Moab, contributed a lot to MAUI but then decided to start over with a closed source product. They still offer MAUI on their website though. MAUI used to be widely used in large USA supercomputer centres, but most now throw their weight behind SLURM with or without another scheduler. -

    " - diff --git a/HtmlDump/file_0697.html b/HtmlDump/file_0697.html deleted file mode 100644 index 75e34fbd1..000000000 --- a/HtmlDump/file_0697.html +++ /dev/null @@ -1,53 +0,0 @@ -

    In general, there are two ways to pass the resource requirements or other job properties to the queue manager:

      -
    1. They can be specified on the command line of the qsub command
    2. -
    3. Or they can be put in the job script on lines that start with #PBS (so-called in-PBS directives). Each line can contain one or more command line options written in exactly the same way as on the command line of qsub. These lines have to come at the top of the job script, before any command (but after the line telling the shell that this is a bash script).
    4. -

    And of course both strategies can be mixed at will: Some options can be put in the job script, while others are specified on the command line. This can be very useful, e.g., if you run a number of related jobs from different directories using the same script. The few things that have to change can then be specified at the command line. The options given at the command line always overrule those in the job script in case of conflict. -

    Resource specifications

    Resources are specified using the -l command line argument. -

    Wall time

Walltime is specified through the option -l walltime=HH:MM:SS, with HH:MM:SS the walltime that you expect to need for the job. (The format DD:HH:MM:SS can also be used when the walltime exceeds 1 day, and MM:SS or simply SS are also viable options for very short jobs.)

    To specify a run time of 30 hours, 25 minutes and 5 seconds, you'd use -

    $ qsub -l walltime=30:25:05 myjob.pbs
    -

    on the command line or the line -

    #PBS -l walltime=30:25:05
    -

in the job script (or alternatively walltime=1:06:25:05).

    If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters). -

    It is important that you do an effort to estimate the wall clock time that your job will need properly. If your job exceeds the specified wall time, it will be killed, but this is not an invitation to simply specify the longest wall time possible (the limit differs from cluster to cluster). To make sure that the cluster cannot be monopolized by one or a few users, many of our clusters have a stricter limit on the number of long-running jobs than on the number of jobs with a shorter wall time. And several clusters will also allow short jobs to pass longer jobs in the queue if the scheduler finds a gap (based on the estimated end time of the running jobs) that is short enough to run that job before it has enough resources to start a large higher-priority parallel job. This process is called backfilling. -

The maximum allowed wall time for a job is cluster-dependent. Since these policies can change over time (as do other properties of the clusters), we bundle them on one page per cluster in the "Available hardware" section.

    Single- and multi-node jobs: Cores and memory

The following options can be used to specify the number of cores, amount of RAM and virtual memory needed for the job: -l nodes=<nodenum>:ppn=<cores per node> for the number of nodes and cores per node, and -l pmem=<amount> and -l pvmem=<amount> for the resident and virtual memory per process respectively.

Note that specifying -l nodes=<nodenum>:ppn=<cores per node> does not guarantee that you actually get <nodenum> physical nodes. You may get multiple groups of <cores per node> cores on a single node instead. E.g., -l nodes=4:ppn=5 may result in an allocation of 20 cores on a single node in a cluster that has nodes with 20 or more cores, if that node also contains enough memory.

    Note also that the job script will only run once on the first node of your allocation. To start processes on the other nodes, you'll need to use tools like pbsdsh or mpirun/mpiexec to start those processes. -

    Single node jobs only: Cores and memory

For single-node jobs there is an alternative for specifying the amount of resident memory and virtual memory needed for the application: -l mem=<amount> and -l vmem=<amount>. These settings make more sense from the point of view of starting a single multi-threaded application.

    These options should not be used for multi-node jobs as the meaning of the parameter is undefined (mem) or badly defined (vmem) for multi-node jobs with different sections and different versions of the Torque manual specifying different behaviour for these options. -

    Specifying further node properties

    Several clusters at the VSC have nodes with different properties. E.g., a cluster may have nodes of two different CPU generations and your program may be compiled to take advantage of new instructions on the newer generation and hence not run on the older generation. Or some nodes may have more physical memory or a larger hard disk and support more virtual memory. Or not all nodes may be connected to the same high speed interconnect (which is mostly an issue on the older clusters). You can then specify which node type you want by adding further properties to the -l nodes= specification. E.g., assume a cluster with both Ivy Bridge and Haswell generation nodes. The Haswell CPU supports new and useful floating point instructions, but programs that use these will not run on the older Ivy Bridge nodes. The cluster will then specify the property ivybridge for the Ivy Bridge nodes and haswell for the Haswell nodes. Specifying -l nodes=8:ppn=6:haswell then tells the scheduler that you want to use nodes with the haswell property only (and in this case, since Haswell nodes often have 24 cores, you will likely get 2 physical nodes). -

The exact list of properties depends on the cluster and is given on the page for your cluster in the "Available hardware" section of this manual. Note that even for a given cluster, this list may evolve over time, e.g., when new nodes are added to the cluster, so check these pages again from time to time!

    Combining resource specifications

It is possible to combine multiple -l options into a single one by separating the arguments with a comma (,). E.g., the block

    #PBS -l walltime=2:30:00
    -#PBS -l nodes=2:ppn=16:sandybridge
    -#PBS -l pmem=2gb
    -

    is equivalent with the line -

    #PBS -l walltime=2:30:00,nodes=2:ppn=16:sandybridge,pmem=2gb
    -

    The same holds when using -l at the command line of qsub. -

    Enforcing the node specification

    These are very asocial options as they typically result in lots of resources remaining unused, so use them with care and talk to user support to see if you really need them. But there are some rare scenarios in which they are actually useful. -

    If you don't use all cores of a node in your job, the scheduler may decide to bundle the tasks of several nodes in your resource request on a single node, may put other jobs you have in the queue on the same node(s) or may - depending on how the system manager has configured the scheduler - put jobs of other users on the same node. In fact, most VSC clusters have a single user per node policy as misbehaving jobs of one user may cause a crash or performance degradation of another user's job. -

    Naming jobs and output files

    The default name of a job is derived from the file name of the job script. This is not very useful if the same job script is used to launch multiple jobs, e.g., by launching jobs from multiple directories with different input files. It is possible to overwrite the default name of the job with -N <job_name>. -

Most jobs on a cluster run in batch mode. This implies that they are not connected to a terminal, so the output sent to the Linux stdout (standard output) and stderr (standard error) devices cannot be displayed on screen. Instead it is captured in two files that are put in the directory where your job was started at the end of your job. The default names of those files are <job_name>.o<job id> and <job_name>.e<job id> respectively, i.e., made from the name of the job (the one assigned with -N if any, or the default one) and the number the job was assigned when you submitted it to the queue. You can however change those names using -o <output file> and -e <error file>.

    It is also possible to merge both output streams in a single output stream. The option -j oe will merge stderr into stdout (and hence the -e option does not make sense), the option -j eo will merge stdout into stderr.
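For example, the following in-PBS directives give the job a meaningful name, pick the name of the output file and merge standard error into standard output (the names are chosen purely for illustration):

#PBS -N thermo_run
#PBS -o thermo_run.out
#PBS -j oe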

    Notification of job events

    Our scheduling system can also notify you when a job starts or ends by e-mail. Jobs can stay queued for hours or sometimes even days before actually starting, so it is useful to be notified so that you can monitor the progress of your job while it runs or kill it when it misbehaves or produces clearly wrong results. Two command line options are involved in this process: -
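A sketch of what this typically looks like with the standard Torque options (-m selects the events: a for abort, b for begin, e for end; -M sets the e-mail address; the address below is a placeholder):

#PBS -m abe
#PBS -M my.name@example.com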

    Other options

This page describes the most used options and their most common use cases. There are however more parameters for resource specification and other options that can be used. Advanced users who want to know more are referred to the documentation of the qsub command in the Torque manual on the Adaptive Computing documentation web site, which lists all options.

    " - diff --git a/HtmlDump/file_0699.html b/HtmlDump/file_0699.html deleted file mode 100644 index 34ebb9ca3..000000000 --- a/HtmlDump/file_0699.html +++ /dev/null @@ -1,287 +0,0 @@ -

    To set up your environment for using a particular (set of) software package(s), you can use the modules that are provided centrally.
On the Tier-2 of UGent and VUB, interacting with the modules is done via Lmod (since August 2016), using the module command or the handy shortcut command ml.

    Quick introduction

    A very quick introduction to Lmod. Below you will find more details and examples. -

    Module commands: using module (or ml)


    Listing loaded modules: module list (or ml)

    To get an overview of the currently loaded modules, use module list or ml (without specifying extra arguments). -

    In a default environment, you should see a single cluster module loaded: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty (S)
    -  Where:
    -   S:  Module is Sticky, requires --force to unload or purge
    -

    (for more details on sticky modules, see the section on ml purge) -


    Searching for available modules: module avail (or ml av) and ml spider

    Printing all available modules: module avail (or ml av)

    To get an overview of all available modules, you can use module avail or simply ml av: -

    $ ml av
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -   ABAQUS/6.12.1-linux-x86_64           libXext/1.3.3-intel-2016a                  (D)
    -   ABAQUS/6.14.1-linux-x86_64    (D)    libXfixes/5.0.1-gimkl-2.11.5
    -   ADF/2014.02                          libXfixes/5.0.1-intel-2015a
    -   ...                                  ...
    -

    In the current module naming scheme, each module name consists of two parts: the name and version of the software package itself, and a suffix indicating the compiler toolchain used for the installation (plus, where needed, an extra version suffix).

    For example, the module name matplotlib/1.5.1-intel-2016a-Python-2.7.11 will set up the environment for using matplotlib version 1.5.1, which was installed using the intel/2016a compiler toolchain; the version suffix -Python-2.7.11 indicates it was installed for Python version 2.7.11.

    The (D) indicates that this particular version of the module is the default, but we strongly recommend not relying on this, as the default can change at any point. Usually, the default will point to the latest version available.


    Searching for modules: ml spider

    The (Lmod-specific) spider subcommand lets you search for modules across all clusters. -

    If you just provide a software name, for example gcc, it prints an overview of all available modules for GCC.

    $ ml spider gcc
    ----------------------------------------------------------------------------------
    -  GCC:
    ----------------------------------------------------------------------------------
    -     Versions:
    -        GCC/4.7.2
    -        GCC/4.8.1
    -        GCC/4.8.2
    -        GCC/4.8.3
    -        GCC/4.9.1
    -        GCC/4.9.2
    -        GCC/4.9.3-binutils-2.25
    -        GCC/4.9.3
    -        GCC/4.9.3-2.25
    -        GCC/5.3.0
    -     Other possible modules matches:
    -        GCCcore
    ----------------------------------------------------------------------------------
    -  To find other possible module matches do:
    -      module -r spider '.*GCC.*'
    ----------------------------------------------------------------------------------
    -  For detailed information about a specific \"GCC\" module (including how to load the modules) use the module's full name.
    -  For example:
    -     $ module spider GCC/4.9.3
    ----------------------------------------------------------------------------------
    -

    Note that spider is case-insensitive. -

    If you use spider on a full module name like GCC/4.9.3-2.25, it will tell you on which cluster(s) that module is available:

    $ ml spider GCC/4.9.3-2.25
    ----------------------------------------------------------------------------------
    -  GCC: GCC/4.9.3-2.25
    ----------------------------------------------------------------------------------
    -     Other possible modules matches:
    -        GCCcore
    -    You will need to load all module(s) on any one of the lines below before the \"GCC/4.9.3-2.25\" module
    -    is available to load.
    -      cluster/delcatty
    -      cluster/golett
    -      cluster/phanpy
    -      cluster/raichu
    -      cluster/swalot
    -    Help:
    -       The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada,
    -       as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/
    ----------------------------------------------------------------------------------
    -  To find other possible module matches do:
    -      module -r spider '.*GCC/4.9.3-2.25.*'
    -

    This tells you that the module named GCC/4.9.3-2.25 is available on the clusters delcatty, golett, phanpy, raichu and swalot. It also tells you what the module contains and gives a URL to the homepage of the software.


    Available modules for a particular software package: module avail <name> (or ml av <name>)

    To check which modules are available for a particular software package, you can provide the software name to ml av. -

    For example, to check which versions of IPython are available: -

    $ ml av ipython
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -IPython/3.2.3-intel-2015b-Python-2.7.10    IPython/3.2.3-intel-2016a-Python-2.7.11 (D)
    -

    Note that the specified software name is treated case-insensitively. -

    Lmod does a partial match on the module name, so sometimes you need to use / to indicate the end of the software name you are interested in: -

    $ ml av GCC/
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -GCC/4.9.2    GCC/4.9.3-binutils-2.25    GCC/4.9.3    GCC/4.9.3-2.25    GCC/5.3.0    GCC/6.1.0-2.25 (D)
    -

    Inspecting a module using module show (or ml show)

    To see how a module would change the environment, use module show or ml show: -

    $ ml show matplotlib/1.5.1-intel-2016a-Python-2.7.11
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -whatis(\"Description: matplotlib is a python 2D plotting library which produces publication quality figures in a variety of 
    -hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python 
    -and ipython shell, web application servers, and six graphical user interface toolkits. - Homepage: http://matplotlib.org \")
    -conflict(\"matplotlib\")
    -load(\"intel/2016a\")
    -load(\"Python/2.7.11-intel-2016a\")
    -load(\"freetype/2.6.2-intel-2016a\")
    -load(\"libpng/1.6.21-intel-2016a\")
    -prepend_path(\"LD_LIBRARY_PATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib\")
    -prepend_path(\"LIBRARY_PATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib\")
    -setenv(\"EBROOTMATPLOTLIB\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11\")
    -setenv(\"EBVERSIONMATPLOTLIB\",\"1.5.1\")
    -setenv(\"EBDEVELMATPLOTLIB\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/easybuild/matplotlib-1.5.1-intel-2016a-Python-2.7.11-easybuild-devel\")
    -prepend_path(\"PYTHONPATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib/python2.7/site-packages\")
    -setenv(\"EBEXTSLISTMATPLOTLIB\",\"Cycler-0.9.0,matplotlib-1.5.1\")
    -help([[ matplotlib is a python 2D plotting library which produces publication quality figures in a variety of
    - hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python
    - and ipython shell, web application servers, and six graphical user interface toolkits. - Homepage: http://matplotlib.org
    -

    Note that both the direct changes to the environment as well as other modules that will be loaded are shown. -

    If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try using the software.


    Loading modules: module load <modname(s)> (or ml <modname(s)>)

    To effectively apply the changes to the environment that are specified by a module, use module load or ml and specify the name of the module. -

    For example, to set up your environment to use matplotlib: -

    $ ml matplotlib/1.5.1-intel-2016a-Python-2.7.11
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty                                    (S)  12) zlib/1.2.8-intel-2016a
    -  2) GCCcore/4.9.3                                          13) libreadline/6.3-intel-2016a
    -  3) binutils/2.25-GCCcore-4.9.3                            14) ncurses/6.0-intel-2016a
    -  4) icc/2016.1.150-GCC-4.9.3-2.25                          15) Tcl/8.6.4-intel-2016a
    -  5) ifort/2016.1.150-GCC-4.9.3-2.25                        16) SQLite/3.9.2-intel-2016a
    -  6) iccifort/2016.1.150-GCC-4.9.3-2.25                     17) Tk/8.6.4-intel-2016a-no-X11
    -  7) impi/5.1.2.150-iccifort-2016.1.150-GCC-4.9.3-2.25      18) GMP/6.1.0-intel-2016a
    -  8) iimpi/8.1.5-GCC-4.9.3-2.25                             19) Python/2.7.11-intel-2016a
    -  9) imkl/11.3.1.150-iimpi-8.1.5-GCC-4.9.3-2.25             20) freetype/2.6.2-intel-2016a
    - 10) intel/2016a                                            21) libpng/1.6.21-intel-2016a
    - 11) bzip2/1.0.6-intel-2016a                                22) matplotlib/1.5.1-intel-2016a-Python-2.7.11
    -

    Note that even though we only loaded a single module, the output of ml shows that a whole bunch of modules were loaded, which are required dependencies for matplotlib, including both the compiler toolchain that was used to install matplotlib (i.e. intel/2016a, and its dependencies) and the module providing the Python installation for which matplotlib was installed (i.e. Python/2.7.11-intel-2016a).


    Conflicting modules

    It is important to note that only modules that are compatible with each other can be loaded together. In particular, modules must be installed either with the same toolchain as the modules that are already loaded, or with a compatible (sub)toolchain. -

    For example, once you have loaded one or more modules that were installed with the intel/2016a toolchain, all other modules that you load should have been installed with the same toolchain. -

    In addition, only one single version of each software package can be loaded at a particular time. For example, once you have the Python/2.7.11-intel-2016a module loaded, you cannot load a different version of Python in the same session/job script; neither directly, nor indirectly as a dependency of another module you want to load.

    See also the topic "module conflicts" in the list of key differences with the previously used module system.


    Unloading modules: module unload <modname(s)> (or ml -<modname(s)>)

    To revert the changes to the environment that were made by a particular module, you can use module unload or ml -<modname>. -

    For example: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)
    -$ which gcc
    -/usr/bin/gcc
    -$ ml GCC/4.9.3
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)   2) GCC/4.9.3
    -$ which gcc
    -/apps/gent/CO7/haswell-ib/software/GCC/4.9.3/bin/gcc
    -$ ml -GCC/4.9.3
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)
    -$ which gcc
    -/usr/bin/gcc
    -

    Resetting by unloading all modules: ml purge (module purge)

    To reset your environment back to a clean state, you can use module purge or ml purge: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty                                    (S)  11) bzip2/1.0.6-intel-2016a
    -  2) GCCcore/4.9.3                                          12) zlib/1.2.8-intel-2016a
    -  3) binutils/2.25-GCCcore-4.9.3                            13) libreadline/6.3-intel-2016a
    -  4) icc/2016.1.150-GCC-4.9.3-2.25                          14) ncurses/6.0-intel-2016a
    -  5) ifort/2016.1.150-GCC-4.9.3-2.25                        15) Tcl/8.6.4-intel-2016a
    -  6) iccifort/2016.1.150-GCC-4.9.3-2.25                     16) SQLite/3.9.2-intel-2016a
    -  7) impi/5.1.2.150-iccifort-2016.1.150-GCC-4.9.3-2.25      17) Tk/8.6.4-intel-2016a-no-X11
    -  8) iimpi/8.1.5-GCC-4.9.3-2.25                             18) GMP/6.1.0-intel-2016a
    -  9) imkl/11.3.1.150-iimpi-8.1.5-GCC-4.9.3-2.25             19) Python/2.7.11-intel-2016a
    - 10) intel/2016a
    -$ ml purge
    -The following modules were not unloaded:
    -   (Use \"module --force purge\" to unload all):
    -  1) cluster/delcatty
    -[15:21:20] vsc40023@node2626:~ $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty (S)
    -

    Note that, on HPC-UGent, the cluster module will always remain loaded, since it defines some important environment variables that point to the location of centrally installed software/modules, and others that are required for submitting jobs and interfacing with the cluster resource manager (qsub, qstat, ...).

    As such, you should not (re)load the cluster module anymore after running ml purge. See also the topic on the purge command in the list of key differences with the previously used module implementation.


    Module collections: ml save, ml restore

    If you have a set of modules that you need to load often, you can save these in a collection (only works with Lmod). -

    First, load all the modules you need, for example: -

    ml HDF5/1.8.16-intel-2016a GSL/2.1-intel-2016a Python/2.7.11-intel-2016a
    -

    Now store them in a collection using ml save: -

    $ ml save my-collection
    -

    Later, for example in a job script, you can reload all these modules with ml restore: -

    $ ml restore my-collection
    -

    With ml savelist you can get a list of all saved collections: -

    $ ml savelist
    -Named collection list:
    -  1) my-collection
    -  2) my-other-collection
    -

    To inspect a collection, use ml describe. -

    To remove a module collection, remove the corresponding entry in $HOME/.lmod.d. -
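    For example, to remove the collection that was saved earlier (the exact file name inside $HOME/.lmod.d may differ slightly depending on the Lmod version):

    rm $HOME/.lmod.d/my-collection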


    -

    Lmod vs Tcl-based environment modules

    In August 2016, we switched to Lmod as a modules tool, a modern alternative to the outdated and no longer actively maintained Tcl-based environment modules tool.

    Consult the Lmod documentation web site for more information. -


    Benefits


    Key differences

    The switch to Lmod should be mostly transparent, i.e. you should not have to change your existing job scripts. -

    Existing module commands should keep working as they were before the switch to Lmod. -

    However, there are a couple of minor differences between Lmod & the old modules tool you should be aware of: -


    See below for more detailed information.


    -

    Module conflicts are strictly enforced

    Conflicting modules can no longer be loaded together. -

    Lmod has been configured to report an error if any module conflict occurs -(as opposed to the default behaviour which is to unload the conflicting module and replace it with the one being loaded). -

    Although it seemed like the old modules did allow conflicting modules to be loaded together, this was already highly discouraged, since it usually resulted in a broken environment. Lmod will ensure no changes are made to your existing environment if a module that conflicts with an already loaded module is loaded.

    If you do try to load conflicting modules, you will run into an error message like: -

    $ module load Python/2.7.11-intel-2016a
    -$ module load Python/3.5.1-intel-2016a 
    -Lmod has detected the following error:  Your site prevents the automatic swapping of modules with same name.
    -You must explicitly unload the loaded version of \"Python\" before you can load the new one. Use swap (or an unload
    -followed by a load) to do this:
    -   $ module swap Python  Python/3.5.1-intel-2016a
    -Alternatively, you can set the environment variable LMOD_DISABLE_SAME_NAME_AUTOSWAP to \"no\" to re-enable same name
    -

    Note that although Lmod suggests unloading or swapping, we recommend trying to make sure you only load compatible modules together, and certainly not defining $LMOD_DISABLE_SAME_NAME_AUTOSWAP.


    -

    module purge does not unload the cluster module

    Using module purge effectively resets your environment to a pristine working state, i.e. the cluster module stays loaded after the purge.
    - As such, it is no longer required to run module load cluster to restore your environment to a working state. -

    When you do run module load cluster when a cluster is already loaded, you will see the following warning message: -

    WARNING: 'module load cluster' has no effect when a 'cluster' module is already loaded.
    -For more information, please see https://www.vscentrum.be/cluster-doc/software/modules/lmod#module_load_cluster
    -

    To change to another cluster, use module swap or ml swap; for example, to change your environment for the golett cluster, use ml swap cluster/golett.

    If you frequently see the warning above pop up, you may have something like this in your $VSC_HOME/.bashrc file:

    . /etc/profile.d/modules.sh
    -module load cluster
    -

    If you do, please remove that, and include this at the top of your ~/.bashrc file: -

    if [ -f /etc/bashrc ]; then
    -        . /etc/bashrc
    -fi
    -

    modulecmd is not available anymore

    The modulecmd command is not available anymore; it has been replaced by the lmod command.

    This is only relevant for EasyBuild, which has to be configured to use Lmod as a modules tool, since by default it expects that modulecmd is readily available.
    For example: -

    export EASYBUILD_MODULES_TOOL=Lmod
    -

    See the EasyBuild documentation for other ways of configuring EasyBuild to use Lmod.

    You should not be using lmod directly in other circumstances, use either ml or module instead. -

    Questions or problems

    In case of questions or problems, please do not hesitate to contact the HPC support team. The HPC-UGent support team can be reached via hpc@ugent.be; the HPC-VUB support team can be reached via hpc@vub.ac.be.

    " - diff --git a/HtmlDump/file_0701.html b/HtmlDump/file_0701.html deleted file mode 100644 index 8192d0179..000000000 --- a/HtmlDump/file_0701.html +++ /dev/null @@ -1,18 +0,0 @@ -

    Job submission and credit reservations

    -

    When you submit a job, a reservation is made. This means that the number of credits required to run your job is marked as reserved. Of course, this is the number of credits that is required to run the job during the walltime specified, i.e., the reservation is computed based on the requested walltime.

    Hence, if you submit a largish number of jobs and the walltime is overestimated, reservations will be made for a total that is potentially much larger than what you'll actually be debited for upon job completion (you're only debited for the walltime used, not the walltime requested).

    Now, suppose you know that your job will most probably end within 24 hours, but you specify 36 hours to be on the safe side (which is a good idea). Say, by way of example, that the average cost of a single job will be 300 credits. You have 3400 credits, so you can probably run at least 10 such jobs, and you submit all 10.

    Here's the trap: for each job, a reservation is made, not of 300 credits, but of 450. Hence everything goes well for the first 7 jobs (7*450 = 3150 < 3400), but for the 8th up to the 10th job, your account no longer has sufficient credits to make a reservation. Those 3 jobs will be blocked by a SystemHold and will never execute (unless additional credits are requested and a sysadmin releases the jobs).

    We actually have a nice tool to compute the maximum number of credits a job can take. It is called gquote, and you can use it as follows. Suppose that you submit your job using, e.g.:

    $ qsub  -l walltime=4:00:00 my_job.pbs
    -

    Then you can compute its cost (before actually doing the qsub) by: -

    $ module load accounting
    -$ gquote  -l walltime=4:00:00  my_job.pbs
    -

    If this is a worker job, and you submit it as, e.g.: -

    $ wsub  -data data.csv  -batch my_job.pbs  -l nodes=4:ppn=20
    -

    Then you can compute its cost (before actually doing the qsub) by: -

    $ module load accounting
    -$ gquote  -l nodes=4:ppn=20  my_job.pbs
    -

    As you can see, gquote takes the same arguments as qsub (so if you use wsub, drop the -data and -batch options and pass the actual job script directly). It will use both the arguments on the command line and the PBS directives in your script to compute the cost of the job in the same way PBS Torque computes the resources for your job.

    You will notice when using gquote that it will sometimes give you quotes that are more expensive than you expect. This typically happens when you don't specify the processor attribute for the nodes resource: gquote will then assume that your job is executed on the most expensive processor type, which inflates the price.

    The price of a processor is of course proportional to its performance, so when the job finishes, you will be charged approximately the same regardless of the processor type the job ran on (it ran for a shorter time on a faster, and hence more expensive, processor).

    " - diff --git a/HtmlDump/file_0705.html b/HtmlDump/file_0705.html deleted file mode 100644 index 46a152657..000000000 --- a/HtmlDump/file_0705.html +++ /dev/null @@ -1,57 +0,0 @@ -

    This page describes the part of the job script that actually does the useful work and runs the programs you want to run.

    When your job is started by the scheduler and the resource manager, your job script will run as a regular script on the first core of the first node assigned to the job. The script runs in your home directory, which is not the directory where you will do your work, and with the standard user environment. So before you can actually start your program(s), you need to set up a proper environment. On a cluster, this is a bit more involved than on your PC, partly also because multiple versions of the same program may be present on the cluster, or there may be conflicting programs that make it impossible to offer a single set-up that suits all users. -

    Setting up the environment

    Changing the working directory

    As explained above, the job script will start in your home directory, which is not the place where you should run programs. So the first step will almost always be to switch to the actual working directory (the bash cd command). -
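    In practice this usually means changing to the directory from which the job was submitted. Torque exposes that directory in the PBS_O_WORKDIR environment variable (also used in the example script further down this page), so the first line of useful work in most job scripts is simply:

    cd $PBS_O_WORKDIR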

    Loading modules

    The next step consists of loading the appropriate modules. This is no different from loading the modules on the login nodes to prepare for your job or when running programs on interactive nodes, so we refer to the "Modules" page in the "Running software" section.

    Useful Torque environment variables

    Torque defines a lot of environment variables on the compute nodes on which your job runs; they can be very useful in your job scripts. Some of the more important ones are illustrated in the sketch below.
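    A minimal sketch that echoes a few commonly used Torque variables (this selection is illustrative, not exhaustive):

    echo "Job ID:             $PBS_JOBID"
    echo "Submission dir:     $PBS_O_WORKDIR"
    echo "Nodes x cores/node: $PBS_NUM_NODES x $PBS_NUM_PPN"
    echo "Node file:          $PBS_NODEFILE"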

    There are also some variables that are useful if you use the Torque command pbsdsh to execute a command on another node/core of your allocation. We mention them here for completeness, but they will also be elaborated on in the paragraph on "Starting a single-core program on each assigned core" further down this page.

    Starting programs

    We show some very common start scenarios for programs on a cluster: -

    Starting a single multithreaded program (e.g., an OpenMP program)

    Starting a multithreaded program is easy. In principle, all you need to do is call its executable as you would do with any program at the command line. -

    However, often the program needs to be told how many threads to use. The default behaviour depends on the program. Most programs will either use only one thread unless told otherwise, or use one thread per core it can detect. The problem with programs that do the latter is that if you have requested only a subset of the cores on the node, the program will still detect the total number of cores or hyperthreads on the node and start that number of threads. Depending on the cluster you are using, these threads will swarm out over the whole node and sit in the way of other programs (often the case on older clusters) or will be contained in the set of cores/hyperthreads allocated to the job and sit in each others way (e.g., because they compete for the same limited cache space). In both cases, the program will run way slower than it could. -

    You will also need to experiment a bit with the number of cores that can actually be used in a useful way. This depends on the code and the size of the problem you are trying to solve. The same code may scale to only 4 threads for a small problem yet be able to use all cores on a node well when solving a much larger problem. -

    How to tell the program the number of threads to use, also differs between programs. Typical ways are through an environment variable or a command line option, though for some programs this is actually a parameter in the input file. Many scientific shared memory programs are developed using OpenMP directives. For these programs, the number of threads can be set through the environment variable OMP_NUM_THREADS. The line -

    export OMP_NUM_THREADS=$PBS_NUM_PPN
    -

    will set the number of threads to the value of ppn used in your job script. -

    Starting a distributed memory program (e.g., an MPI program)

    Starting a distributed memory program is a bit more involved, as it always involves more than one Linux process. Most distributed memory programs in scientific computing are written using the Single Program, Multiple Data paradigm: a single executable is run on each core, but each core works on a different part of the data. The most popular technique for developing such programs is the MPI (Message Passing Interface) library.

    Distributed memory programs are usually started through a starter command. For MPI programs, this is mpirun or mpiexec (often one is an alias for the other). The command line arguments for mpirun differ between MPI implementations. We refer to the documentation on toolchains in the \"Software development\" section of this web site for more information on the implementations supported at the VSC. As most MPI implementations in use at the VSC recognise our resource manager software and get their information about the number of nodes and cores directly from the resource manager, it is usually sufficient to start your MPI program using -

    mpirun <mpi-program>
    -

    where <mpi-program> is your MPI program and its command line arguments. This will start one instance of your MPI program on each core or hyperthread assigned to the job. -

    Programs using different distributed memory libraries may use a different starter program, and some programs come with a script that will call mpirun for you, so you can start those as a regular program. -

    Some programs use a mix of MPI and OpenMP (or a combination of another distributed and shared memory programming technique). Examples are some programs in Gromacs and QuantumESPRESSO. The rationale is that a single node on a cluster may not be enough, so you need distributed memory, while a shared memory paradigm is often more efficient in exploiting parallelism within a node. You'll need additional implementation-dependent options to mpirun to start such programs and also to define how many threads each instance can use. There is some information specifically for hybrid MPI/OpenMP programs on the "Hybrid MPI/OpenMP programs" page in the software development section. We advise you to contact user support to help you figure out the right options and values for those options if you are not sure which ones to use.
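    As a purely illustrative sketch, assuming the Intel MPI implementation and an OpenMP-capable executable called hybrid_program (a made-up name), a hybrid launch with 2 MPI ranks per node and 10 threads per rank might look like the lines below; check with user support for the options appropriate to your cluster and MPI implementation.

    export OMP_NUM_THREADS=10
    mpirun -ppn 2 -genv OMP_NUM_THREADS 10 ./hybrid_program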

    Starting a single-core program on each assigned core

    A rather common use case on a cluster is running many copies of the same program independently on different data sets. It is not uncommon that those programs are not or very poorly parallelised and run on only a single core. Rather than submitting a lot of single-core jobs, it is easier for the scheduler if those jobs are bundled in a single job that fills a whole node. Our job scheduler will try to fill a whole node using multiple of your jobs, but this doesn't always work right. E.g., assume a cluster with 20-core nodes where some nodes have 3 GB per core available for user jobs and some nodes have 6 GB available. If your job needs 5 GB per core (and you specify that using the mem or pmem parameters), but you don't explicitly tell the scheduler that you want to use the nodes with 6 GB per core, it may still schedule the first job on a node with only 3 GB per core, then try to fill up that node further with jobs from you, but once half the node is filled discover that there is not enough memory left to start more jobs, leaving half of the CPU capacity unused.

    To ease combining jobs in a single larger job, we advise to have a look at the Worker framework. It helps you to organise the input to the various instances of your program for many common scenarios. -

    Should you decide to start the instances of your program yourself, we advise to have a look at the Torque pbsdsh command rather than ssh. This assures that all programs will execute under the full control of the resource manager on the cores allocated to your job. The variables PBS_NODENUM, PBS_VNODENUM and PBS_TASKNUM can be used to determine on which core you are running and to select the appropriate input files. Note that in most cases, it will actually be necessary to write a second script besides your job script. That second script then uses these variables to compute the names of the input and the output files and start the actual program you want to run on that core. -
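    A minimal sketch of such a second script (the file names and the program name are hypothetical; it also assumes PBS_O_WORKDIR is available in the task's environment, otherwise pass the work directory as an argument):

    #!/bin/bash
    # run_one_task.sh - started on every assigned core via: pbsdsh bash $PBS_O_WORKDIR/run_one_task.sh
    cd $PBS_O_WORKDIR
    # Use the task number to pick a different input and output file per core
    INPUT="input_${PBS_VNODENUM}.dat"
    OUTPUT="output_${PBS_VNODENUM}.dat"
    ./my_program < "$INPUT" > "$OUTPUT"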

    To further explore the meaning of PBS_NODENUM, PBS_VNODENUM and PBS_TASKNUM and to illustrate the use of pbsdsh, consider the job script -

    #! /bin/bash
    -cd $PBS_O_WORKDIR
    -echo \"Started with nodes=$PBS_NUM_NODES:ppn=$PBS_NUM_PPN\"
    -echo \"First call of pbsdsh\"
    -pbsdsh bash -c 'echo \"Hello from node $PBS_NODENUM ($HOSTNAME) vnode $PBS_VNODENUM task $PBS_TASKNUM\"'
    -echo \"Second call of pbsdsh\"
    -pbsdsh bash -c 'echo \"Hello from node $PBS_NODENUM ($HOSTNAME) vnode $PBS_VNODENUM task $PBS_TASKNUM\"'
    -

    Save this script as \"testscript.pbs\" and execute it for different numbers of nodes and cores-per-node using -

    qsub -l nodes=4:ppn=5 testscript.pbs
    -

    (so using 4 nodes and 5 cores per node in this example). When calling qsub, it will return a job number, and when the job ends you will find a file testscript.pbs.o<number_of_the_job> in the directory where you executed qsub. -

    For more information on the pbsdsh command, we refer to the Torque manual on the Adaptive Computing documentation web site, or to the manual page ("man pbsdsh").

    " - diff --git a/HtmlDump/file_0707.html b/HtmlDump/file_0707.html deleted file mode 100644 index 2ed86312b..000000000 --- a/HtmlDump/file_0707.html +++ /dev/null @@ -1,68 +0,0 @@ -

    Submitting your job: the qsub command

    Once your job script is finished, you submit it to the scheduling system using the qsub command:

    qsub <jobscript>
    -

    places your job script in the queue. As explained on the page on "Specifying resources, output files and notifications", there are several options to tell the scheduler which resources you need or how you want to be notified of events surrounding your job. They can be given at the top of your job script or as additional command line options to qsub. In case both are used, options given on the command line take precedence over the specifications in the job script. E.g., if a different number of nodes and cores is requested through a command line option than is specified in the job script, the specification on the command line will be used.

    Starting interactive jobs

    Though our clusters are mainly meant to be used for batch jobs, there are some facilities for interactive work: -

    In the latter scenario, two options of qsub are particularly useful: -I to request a node for interactive use, and -X to add support for X to the request. You would typically also add several -l options to specify for how long you need the node and the amount of resources that you need. E.g.,

    qsub -I -l walltime=2:00:00 -l nodes=1:ppn=20
    -

    to use 20 cores on a single node for 2 hours. qsub will block until it gets a node and then you get the command prompt for that node. If the wait is too long however, qsub will return with an error message and you'll need to repeat the call. -

    If you want to run programs that use X in your interactive job, you have to add the -X option to the above command. This will set up the forwarding of X traffic to the login node, and ultimately to your terminal if you have set up the connection to the login node properly for X support. -

    Please remain reasonable in your request for interactive resources. On some clusters, a short walltime will give you a higher priority, and on most clusters a request for a multi-day interactive session will fail simply because the cluster cannot give you such a node before the time-out of qsub kicks in. Interactive use of nodes is mostly meant for debugging, for large compiles or for larger visualisations on clusters that don't have dedicated nodes for visualisation.

    Viewing your jobs in the queue: qstat and showq -

    Two commands can be used to show your jobs in the queue: -

    Both commands will also show you the name of the queue (qstat) or class (showq), which in most cases is actually the same as the queue. All VSC clusters have multiple queues. Queues are used to define policies for each cluster. E.g., users may be allowed to have a lot of short jobs running simultaneously, as they will finish soon anyway, but may be limited to a few multi-day jobs to avoid long-time monopolisation of a cluster by a single user; this would typically be implemented by having separate queues with separate policies for short and long jobs. When you submit a job, qsub will put the job in a particular queue based on the resources requested. The qsub command does allow you to specify the queue to use, but unless instructed to do so by user support, we strongly advise against using this option. Putting the job in the wrong queue may actually result in your job being refused by the queue manager, and we may also choose to change the available queues on a system to implement new policies.
    -

    qstat

    On the VSC clusters, users can only use a subset of the options that qstat offers, and the output is always restricted to the user's own jobs.

    To see your jobs in the queue, enter -

    qstat
    -

    This will give you an overview of all your jobs and their status: queued but not yet running (Q), running (R) or completed (C).

    qstat <jobid>
    -

    where <jobid> is the number of the job, will show you the information about this job only. -

    Several command line options can be specified to modify the output of qstat; a couple of examples are sketched below.
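    Two commonly used variants (shown as an illustration; see the qstat manual page for the full list of options):

    qstat -n <jobid>    # also show the node(s) on which the job runs
    qstat -f <jobid>    # show all information the resource manager keeps about the job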

    showq

    The showq command will show you information about the queue from the scheduler's perspective. Jobs are subdivided in three categories: active, eligible and blocked jobs.

    The showq command will split its output according to the three major categories. Active jobs are sorted according to their expected end time while eligible jobs are sorted according to their current priority. -

    There are also some useful options; a few are sketched below.
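    An illustrative sketch (assuming the usual Moab showq flags; check the showq manual page on your cluster):

    showq -r    # show only the running (active) jobs, with extra detail
    showq -i    # show only the eligible (idle) jobs
    showq -b    # show only the blocked jobs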

    Getting detailed information about a job: qstat -f and checkjob

    We've discussed the Torque qstat -f command already in the previous section. It gives detailed information about a job from the resource manager's perspective. -

    The checkjob command does the same, but from the perspective of the scheduler, so the information that you get is different. -

    checkjob 323323
    -

    will produce information about the job with jobid 323323. -

    checkjob -v 323323
    -

    where -v stands for verbose produces even more information. -

    For a running job, checkjob will give you an overview of the allocated resources and the wall time consumed so far. For blocked jobs, the end of the output typically contains clues about why a job is blocked. -

    Deleting a job that is queued or running

    This is easily done with qdel:

    qdel 323323

    will delete the job with job ID 323323. If the job is already running, the processes will be killed and the resources will be returned to the scheduler for another job.

    Getting an estimate for the start time of your job: showstart

    This is a very simple tool that will tell you, based on the current status of the cluster, when your job is scheduled to start. Note however that this is merely an estimate, and should not be relied upon: jobs can start sooner if other jobs finish early, get removed, etc., but jobs can also be delayed when other jobs with higher priority are submitted. -

    $ showstart 20030021
    -job 20030021 requires 896 procs for 1:00:00
    -Earliest start in       5:20:52:52 on Tue Mar 24 07:36:36
    -Earliest completion in  5:21:52:52 on Tue Mar 24 08:36:36
    -Best Partition: DEFAULT
    -

    Note however that this is only an estimate, starting from the jobs that are currently running or in the queue and the wall time that users gave for these jobs. Jobs may always end earlier than predicted based on the requested wall time, so your job may start earlier. But other jobs with a higher priority may also enter the queue and delay the start of your job.

    See if there are free resources that you might use for a short job: showbf

    When the scheduler performs its scheduling task, there are bound to be some gaps between jobs on a node. These gaps can be backfilled with small jobs. To get an overview of these gaps, you can execute the command showbf:

    $ showbf
    -backfill window (user: 'vsc30001' group: 'vsc30001' partition: ALL) Wed Mar 18 10:31:02
    -323 procs available for      21:04:59
    -136 procs available for   13:19:28:58

    There is however no guarantee that if you submit a job that would fit in the available resources, it will also run immediately. Another user might be doing the same thing at the same time, or you may simply be blocked from running more jobs because you already have too many jobs running or have made heavy use of the cluster recently.


    " - diff --git a/HtmlDump/file_0709.html b/HtmlDump/file_0709.html deleted file mode 100644 index 8fd97c37c..000000000 --- a/HtmlDump/file_0709.html +++ /dev/null @@ -1,13 +0,0 @@ -

    The basics of the job system

    Common problems

    Advanced topics

    " - diff --git a/HtmlDump/file_0711.html b/HtmlDump/file_0711.html deleted file mode 100644 index 223a92f77..000000000 --- a/HtmlDump/file_0711.html +++ /dev/null @@ -1,63 +0,0 @@ -

    Access restriction

    Once your project has been approved, your login on the Tier-1 cluster will be enabled. You use the same vsc-account (vscXXXXX) as at your home institution, and you use the same $VSC_HOME and $VSC_DATA directories, though the Tier-1 does have its own scratch directories.

    You can log in to the following login nodes: -

    These nodes are also accessible from outside KU Leuven. Unlike for the Tier-1 system muk, it is not necessary to first log on to your home cluster and proceed to BrENIAC from there. Have a look at the quickstart guide for more information.

    Hardware details

    The Tier-1 cluster BrENIAC is primarily aimed at large parallel computing jobs that require a high-bandwidth, low-latency interconnect, but jobs that consist of a multitude of small independent tasks are also accepted.

    The main architectural features are: -

    Compute time on BrENIAC is only available upon approval of a project. Information on requesting projects is available in Dutch and in English.
    -

    Accessing your data

    BrENIAC supports the standard VSC directories. -

    Running jobs and specifying node characteristics

    The cluster uses Torque/Moab as all other clusters at the VSC, so the generic documentation applies to BrENIAC also. -

    Several \"MOAB features\" are defined to select nodes of a particular type on the cluster. You can specify them in your job scirpt using, e.g., -

    #PBS -l feature=mem256
    -

    to request only nodes with the mem256 feature. Some important features: -

    feature - explanation -
    mem128 - Select nodes with 128 GB of RAM (roughly 120 GB available to users)
    mem256 - Select nodes with 256 GB of RAM (roughly 250 GB available to users)
    rXiY - Request nodes in a specific InfiniBand island. X ranges from 01 to 09; Y can be 01, 11 or 23. The islands rXi01 have 20 nodes each, the islands rXi11 and rXi23 with X = 01, 02, 03, 04, 06, 07, 08 or 09 have 24 nodes each, and the island r5i11 has 16 nodes. This may be helpful to make sure that the nodes used by a job are as close to each other as possible, but in general it will increase the waiting time before your job starts.

    Compile and debug nodes

    8 nodes with 256 GB of RAM are set aside for compiling or debugging small jobs. You can run jobs on them by specifying

    #PBS -lqos=debugging

    in your job script.

    The following limitations apply:

    Credit system

    BrENIAC uses Moab Accounting Manager for accounting the compute time used by a user. Tier-1 users have a credit account for each granted Tier-1 project. When starting a job, you need to specify which credit account to use via

    #PBS -A lpt1_XXXX-YY

    where lpt1_XXXX-YY is the name of your project account. You can also specify the -A option at the command line of qsub, as shown below.
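    For example (with lpt1_XXXX-YY again standing in for your own project account and my_job.pbs for your job script):

    qsub -A lpt1_XXXX-YY my_job.pbs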

    Further information

    Software specifics

    BrENIAC uses the standard VSC toolchains. However, not all VSC toolchains are made available on BrENIAC. For now, only the 2016a toolchain is available. The Intel toolchain has slightly newer versions of the compilers, MKL library and MPI library than the standard VSC 2016a toolchain to be fully compatible with the machine hardware and software stack.

    Some history

    BrENIAC was installed during the spring of 2016, followed by several months of testing, first by the system staff and next by pilot users. The system was officially launched on October 17 of that year, and by the end of the month new Tier-1 projects started computing on the cluster. -

    We have a time lapse movie of the construction of BrENIAC: -

    - -

    Documentation

    " - diff --git a/HtmlDump/file_0713.html b/HtmlDump/file_0713.html deleted file mode 100644 index de6fab2c3..000000000 --- a/HtmlDump/file_0713.html +++ /dev/null @@ -1,2 +0,0 @@ -

    (Test text) The Flemish Supercomputer Centre (VSC) is a virtual centre making supercomputer infrastructure available for both the academic and industrial world. This centre is managed by the Research Foundation - Flanders (FWO) in partnership with the five Flemish university associations.

    " - diff --git a/HtmlDump/file_0715.html b/HtmlDump/file_0715.html deleted file mode 100644 index 983106450..000000000 --- a/HtmlDump/file_0715.html +++ /dev/null @@ -1,2 +0,0 @@ -

    HPC for industry (test version)

    -

    The collective expertise, training programs and infrastructure of VSC together with participating university associations have the potential to create significant added value to your business.

    diff --git a/HtmlDump/file_0717.html b/HtmlDump/file_0717.html deleted file mode 100644 index af6956230..000000000 --- a/HtmlDump/file_0717.html +++ /dev/null @@ -1,2 +0,0 @@ -

    HPC for academics (test version)

    -

    With HPC-technology you can refine your research and gain new insights to take your research to new heights.

    diff --git a/HtmlDump/file_0719.html b/HtmlDump/file_0719.html deleted file mode 100644 index c2e4e6f84..000000000 --- a/HtmlDump/file_0719.html +++ /dev/null @@ -1,2 +0,0 @@ -

    What is supercomputing? (test version)

    -

    Supercomputers have an immense impact on our daily lives. Their scope extends far beyond the weather forecast after the news.

    diff --git a/HtmlDump/file_0739.html b/HtmlDump/file_0739.html deleted file mode 100644 index aca1d13f6..000000000 --- a/HtmlDump/file_0739.html +++ /dev/null @@ -1,13 +0,0 @@ -

    Basic job system use

    Advanced job system use

    Miscellaneous topics

    " - diff --git a/HtmlDump/file_0741.html b/HtmlDump/file_0741.html deleted file mode 100644 index a2c83fe0c..000000000 --- a/HtmlDump/file_0741.html +++ /dev/null @@ -1,9 +0,0 @@ -

    Access

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>

    or -

    qsub  -l partition=gpu,nodes=1:K40c <jobscript>
    -

    depending on which GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this:

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -
    " - diff --git a/HtmlDump/file_0745.html b/HtmlDump/file_0745.html deleted file mode 100644 index 4de0e4c1e..000000000 --- a/HtmlDump/file_0745.html +++ /dev/null @@ -1,128 +0,0 @@ -

    The application

    The designated way to get access to the Tier-1 for research purposes is through a project application.

    You have to submit a proposal to get compute time on the Tier-1 cluster Muk. -

    You should include a realistic estimate of the compute time needed for the project in your application. These estimates are best supported by Tier-1 benchmarks. To be able to perform these tests for new codes, you can request a starting grant through a short and quick procedure.

    You can submit proposals continuously, but they will be gathered, evaluated and resources allocated at a number of cut-off dates. There are 3 cut-off dates in 2016:

    Proposals submitted since the last cut-off and before each of these dates are reviewed together. -

    The FWO appoints an evaluation commission to do this. -

    Because of the international composition of the evaluation commission, the preferred language for the proposals is English. If a proposal is in Dutch, you must also send an English translation. Please have a look at the documentation of standard terms like CPU, core, node-hour, memory and storage, and use these consistently in the proposal.

    For applications in 2014 or 2015, costs for resources used will be invoiced, with various discounts for Flemish-funded academic researchers. You should be aware that the investments and operational costs for the Tier-1 infrastructure are considerable. -

    You can submit your application via EasyChair using the application forms below.

    Relevant documents - 2016

    On October 26 the Board of Directors of the Hercules foundation decided to make a major adjustment to the regulations regarding applications to use the Flemish supercomputer. -

    For applications for computing time on the Tier-1 granted in 2016 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants will no longer have to pay a contribution towards the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications.

    For applications granted in 2015 the current pricing structure remains in place and contributions will be asked. -

    The adjusted Regulations for 2016 can be found in the links below. -

    From January 1, 2016 onwards, the responsibility for the funding of HPC and the management of the Tier-1 has been transferred to the FWO, including all current decisions and ongoing contracts.

    If you need help to fill out the application, please consult your local support team. -

    Relevant documents - 2015

    Pricing - applications in 2015

    When you receive compute time through a Tier-1 project application, we expect a contribution in the cost of compute time and storage. -

    Summary of rates (contribution per CPU/nodeday ; private disk per TB per month):

    Universities, VIB and iMINDS: 0.68€ (5%) ; 2€ (5%)
    Other SOCs and other Flemish public research institutes: 1.35€ (10%) ; 4€ (10%)
    Flemish public research institutes - contract research with possibility of full cost accounting (*): 13.54€ ; 46.80€
    Flemish public research institutes - European projects with possibility of full cost accounting (*): 13.54€ ; 46.80€
    (*) The price for one nodeday is 13.54 euro (incl. overhead and support of the Tier-1 technical support team, but excl. advanced support by specialized staff). The price for 1 TB of storage per month is 46.80 euro (incl. overhead and support of the Tier-1 technical support team, but excl. advanced support by specialized staff). Approved Tier-1 projects get a default quota of 1 TB. Only storage requests higher than 1 TB will be charged, and only for the amount above 1 TB.

    EasyChair procedure

    You have to submit your proposal on EasyChair for the conference Tier12016. This requires the following steps: -

    1. If you do not yet have an EasyChair account, you first have to create one:
       1. Complete the CAPTCHA.
       2. Provide first name, name and e-mail address.
       3. A confirmation e-mail will be sent; please follow the instructions in this e-mail (click the link).
       4. Complete the required details.
       5. When the account has been created, a link will appear to log in on the TIER1 submission page.
    2. Log in to the EasyChair system.
    3. Select 'New submission'.
    4. If asked, accept the EasyChair terms of service.
    5. Add one or more authors; if they have an EasyChair account, they can follow up on and/or adjust the present application.
    6. Complete the title and abstract.
    7. You must specify at least three keywords: include the institution of the promoter of the present project and the field of research.
    8. As a paper, submit a PDF version of the completed application form. You must submit the complete proposal, including the enclosures, as one single PDF file to the system.
    9. Click "Submit".
    10. EasyChair will send a confirmation e-mail to all listed authors.
    " - diff --git a/HtmlDump/file_0747.html b/HtmlDump/file_0747.html deleted file mode 100644 index 325c14092..000000000 --- a/HtmlDump/file_0747.html +++ /dev/null @@ -1,87 +0,0 @@ -

    From version 2017a of the Intel toolchains onwards, the setup on the UAntwerp clusters is different from the one on some other VSC clusters:

    Compilers

    Debuggers

    Libraries

    Math Kernel Library (MKL)

    MKL works exactly as in the regular VSC Intel toolchain. See the MKL section of the web page on the VSC Intel toolchain for more information.

    Integrated Performance Primitives (IPP)

    Threading Building Blocks (TBB)

    Data Analytics Acceleration Library (DAAL)

    Code and performance analysis

    VTune Amplifier XE

    ITAC - Intel Trace Analyzer and Collector

    Advisor

    Inspector

    " - diff --git a/HtmlDump/file_0749.html b/HtmlDump/file_0749.html deleted file mode 100644 index 2a59e4bba..000000000 --- a/HtmlDump/file_0749.html +++ /dev/null @@ -1,35 +0,0 @@ -

    The third VSC Users Day was held at the "Paleis der Academiën", the seat of the "Royal Flemish Academy of Belgium for Science and the Arts", Hertogstraat 1, 1000 Brussels, on June 2, 2017.

    Program

    Abstracts of workshops -

    Poster sessions -

    An overview of the posters that were presented during the poster session is available here. -

    " - diff --git a/HtmlDump/file_0751.html b/HtmlDump/file_0751.html deleted file mode 100644 index e80c7829f..000000000 --- a/HtmlDump/file_0751.html +++ /dev/null @@ -1,19 +0,0 @@ -


    " - diff --git a/HtmlDump/file_0753.html b/HtmlDump/file_0753.html deleted file mode 100644 index d2716f4e1..000000000 --- a/HtmlDump/file_0753.html +++ /dev/null @@ -1,77 +0,0 @@ -

    Important changes

    The 2017a toolchain is the toolchain that will be carried forward to Leibniz and will be available after the operating system upgrade of Hopper. Hence it is meant to be as complete as possible. We will only make a limited number of programs available in the 2016b toolchain (basically those that show much better performance with the older compiler or that do not compile with the compilers in the 2017a toolchain).

    Important changes in the 2017a toolchain:

    We will skip the 2017b toolchain as defined by the VSC as we have already upgraded the 2017a toolchain to a more recent update of the Intel 2017 compilers to avoid problems with certain applications. -

    Available toolchains

    There are currently three major toolchains on the UAntwerp clusters:

    The tables below list the last available module for a given software package and the corresponding version in the 2017a toolchain. Older versions can only be installed on demand with a very good motivation, as older versions of packages also often fail to take advantage of advances in supercomputer architecture and offer lower performance. Packages that have not been used recently will only be installed on demand.

    Several of the packages in the system toolchain are still listed as “on demand” since they require licenses, and interaction with their users is needed before we can install them.

    " - diff --git a/HtmlDump/file_0755.html b/HtmlDump/file_0755.html deleted file mode 100644 index 9a546f712..000000000 --- a/HtmlDump/file_0755.html +++ /dev/null @@ -1 +0,0 @@ -

    Several of the software packages running on the UAntwerp cluster have restrictions in their licenses and cannot be used by all users. If a module does not load, it is very likely that you have no access to the package.

    Access to such packages is managed by UNIX groups. You can request membership to the group, but that membership will only be granted if you are eligible for use of the package.

    ANSYS

    CPMD

    CPMD can be used for free for non-commercial research in education institutions under the CPMD Free License.

    To get access:

    COMSOL

    FINE/Marine

    FINE/Marine is commercial CFD software from NUMECA International for the simulation of flow around ships and similar applications. The license has been granted to the Solar Boat Team as sponsorship from NUMECA and cannot be used by others.

    Gaussian

    To use Gaussian, you should work or study at the University of Antwerp and your research group should contribute to the cost of the license.

    Contact Wouter Herrebout for more information.

    Gurobi

    MATLAB

    We do not encourage the use of Matlab on the cluster, as it is neither designed for HPC use (despite a number of toolboxes that support parallel computing) nor efficient.

    Matlab on the UAntwerp clusters can be used by everybody who can legally use Matlab within the UAntwerp Campus Agreement with The Mathworks. You should have access to the modules if you are eligible. If you cannot load the Matlab modules yet think you are allowed to use Matlab under the UAntwerp license, please contact support.

    TurboMole

    VASP

    diff --git a/HtmlDump/file_0759.html b/HtmlDump/file_0759.html deleted file mode 100644 index a53966eac..000000000 --- a/HtmlDump/file_0759.html +++ /dev/null @@ -1,5 +0,0 @@ -

    You may notice that leibniz is not always faster than hopper, and this is a trend that we expect to continue for the following clusters as well. In the past five years, individual cores did not become much more efficient on an instructions-per-clock-cycle basis. Instead, faster chips were built by including more cores, though at a lower clock speed to stay within the power budget for a socket, and by adding new vector instructions.

    Compared to hopper,

    For programs that manage to use all of this, the peak performance of a node is effectively about twice as high as for a node on hopper. But single core jobs with code that does not use vectorization may very well run slower.

    Module system

    We use different software for managing modules on leibniz (Lmod instead of TCL-based modules). The new software supports the same commands as the old software, and more.

    Job submission

One important change is that the new version of the operating system (CentOS 7.3, based on Red Hat 7), combined with our job management software, allows much better control of the amount of memory that a job uses. Hence we can better protect the cluster against jobs that use more memory than requested. This is particularly important since leibniz does not support swapping on the nodes. This choice was made deliberately: swapping to hard disk slows a node down to a crawl, while SSDs that are robust enough to be used for swapping are expensive (memory cells on cheap SSDs can only be written a few hundred times, sometimes as few as 150 times). Instead, we increased the amount of memory available to each core. The better protection of jobs against each other may also allow us to consider setting apart some nodes for jobs that cannot fill a node and then allowing multiple users on such a node, rather than having those nodes used very inefficiently while other users wait for resources, as is now the case.
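As a rough sketch only (the core count and memory values below are placeholders, not the actual limits on leibniz; consult the job submission documentation for the correct values), a Torque/PBS job script that requests memory explicitly could look like:

#!/bin/bash
#PBS -l nodes=1:ppn=28        # placeholder core count, adjust to the node type
#PBS -l pmem=4gb              # placeholder per-process memory request
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
./my_program                  # hypothetical executable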

    MPI jobs

    " - diff --git a/HtmlDump/file_0761.html b/HtmlDump/file_0761.html deleted file mode 100644 index 506a11b8d..000000000 --- a/HtmlDump/file_0761.html +++ /dev/null @@ -1 +0,0 @@ -

    Poster sessions

    1. Computational study of the properties of defects at grain boundaries in CuInSe2
      R. Saniz, J. Bekaert, B. Partoens, and D. Lamoen
      CMT and EMAT groups, Dept. of Physics, U Antwerpen
    2. First-principles study of superconductivity in atomically thin MgB2
      J. Bekaert, B. Partoens, M. V. Milosevic, A. Aperis, P. M. Oppeneer
      CMT group, Dept. of Physics, U Antwerpen & Dept. of Physics and Astronomy, Uppsala University
3. Molecular Spectroscopy: Where Theory Meets Experiment
      C. Mensch, E. Van De Vondel, Y. Geboes, J. Bogaerts, R. Sgammato, E. De Vos, F. Desmet, C. Johannessen, W. Herrebout
      Molecular Spectroscopy group, Dept. Chemistry, U Antwerpen
    4. Bridging time scales in atomistic simulations: from classical models to density functional theory
      Kristof M. Bal and Erik C. Neyts
      PLASMANT, Department of Chemistry, U Antwerpen
    5. Bimetallic nanoparticles: computational screening for chirality-selective carbon nanotube growth
      Charlotte Vets and Erik C. Neyts
      PLASMANT, Department of Chemistry, U Antwerpen
    6. Ab initio molecular dynamics of aromatic sulfonation with sulfur trioxide reveals its mechanism
      Samuel L.C. Moors, Xavier Deraet, Guy Van Assche, Paul Geerlings, Frank De Proft
      Quantum Chemistry Group, Department of Chemistry, VUB
    7. Acceleration of the Best First Search Algorithm by using predictive analytics
      J.L. Teunissen, F. De Vleeschouwer, F. De Proft
      Quantum Chemistry Group, VUB, Department of Chemistry, VUB
    8. Investigating molecular switching properties of octaphyrins using DFT
      Tatiana Woller, Paul Geerlings, Frank De Proft, Mercedes Alonso
      Quantum Chemistry Group, VUB, Department of Chemistry, VUB
    9. Using the Tier-1 infrastructure for high-resolution climate modelling over Europe and Central Asia
      Lesley De Cruz, Rozemien De Troch, Steven Caluwaerts, Piet Termonia, Olivier Giot, Daan Degrauwe, Geert Smet, Julie Berckmans, Alex Deckmyn, Pieter De Meutter, Luc Gerard, Rafiq Hamdi, Joris Van den Bergh, Michiel Van Ginderachter, Bert Van Schaeybroeck
      Department of Physics and Astronomy, U Gent
    10. Going where the wind blows – Fluid-structure interaction simulations of a wind turbine
      Gilberto Santo, Mathijs Peeters, Wim Van Paepegem, Joris Degroote
      Dept. of Flow, Heat and Combustion Mechanics, U Gent
    11. Towards Crash-Free Drones – A Large-Scale Computational Aerodynamic Optimization
      Jolan Wauters, Joris Degroote, Jan Vierendeels
      Dept. of Flow, Heat and Combustion Mechanics, U Gent
    12. Characterisation of fragment binding to TSLPR using molecular dynamics
      Dries Van Rompaey, Kenneth Verstraete, Frank Peelman, Savvas N. Savvides, Pieter Van Der Veken, Koen Augustyns, Hans De Winter
      Medicinal Chemistry, UAntwerpen and Center for Inflammation Research , VIB-UGent
    13. A hybridized DG method for unsteady flow problems
      Alexander Jaust, Jochen Schütz
      Computational Mathematics (CMAT) group, U Hasselt
    14. HPC-based materials research: From Metal-Organic Frameworks to diamond
      Danny E. P. Vanpoucke, Ken Haenen
      Institute for Materials Research (IMO), UHasselt & IMOMEC, IMEC
    15. Improvements to coupled regional climate model simulations over Antarctica
      Souverijns Niels, Gossart Alexandra, Demuzere Matthias, van Lipzig Nicole
      Dept. of Earth and Environmental Sciences, KU Leuven
    16. Climate modelling of Lake Victoria thunderstorms
      Wim Thiery, Edouard L. Davin, Sonia I. Seneviratne, Kristopher Bedka, Stef Lhermitte, Nicole van Lipzig
      Dept. of Earth and Environmental Sciences, KU Leuven
    17. Improved climate modeling in urban areas in sub Saharan Africa for malaria epidemiological studies
      Oscar Brousse, Nicole Van Lipzig, Matthias Demuzere, Hendrik Wouters, Wim Thiery
      Dept. of Earth and Environmental Sciences, KU Leuven
    18. Adaptive Strategies for Multi-Index Monte Carlo
      Dirk Nuyens, Pieterjan Robbe, Stefan Vandewalle
      NUMA group, Dept. of Computer Science, KU Leuven
    19. SP-Wind: A scalable large-eddy simulation code for simulation and optimization of wind-farm boundary layers
      Wim Munters, Athanasios Vitsas, Dries Allaerts, Ali Emre Yilmaz, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    20. Control Optimization of Wind Turbines and Wind Farms
      Ali Emre Yilmaz, Wim Munters, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    21. Simulations of large wind farms with varying atmospheric complexity using Tier-1 Infrastructure
      Dries Allaerts, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    22. Stability of relativistic, two-component jets
      Dimitrios Millas, Rony Keppens, Zakaria Meliani
      Plasma-astrophysics, Dept. Mathematics, KU Leuven
    23. HPC in Theoretical and Computational Chemistry
      Jeremy Harvey, Eliot Boulanger, Andrea Darù, Milica Feldt, Carlos Martín-Fernández, Ana Sanz Matias, Ewa Szlapa
      Quantum Chemistry and Physical Chemistry Section, Dept. of Chemistry, KU Leuven
    diff --git a/HtmlDump/file_0765.html b/HtmlDump/file_0765.html deleted file mode 100644 index b0b270fe9..000000000 --- a/HtmlDump/file_0765.html +++ /dev/null @@ -1,35 +0,0 @@ -

The third VSC Users Day was held at the "Paleis der Academiën", the seat of the "Royal Flemish Academy of Belgium for Science and the Arts", Hertogstraat 1, 1000 Brussels, on June 2, 2017.

    Program

Abstracts of workshops

Poster sessions

An overview of the posters that were presented during the poster session is available here.

    " - diff --git a/HtmlDump/file_0769.html b/HtmlDump/file_0769.html deleted file mode 100644 index 501abae22..000000000 --- a/HtmlDump/file_0769.html +++ /dev/null @@ -1 +0,0 @@ -

    Other pictures of the VSC User Day 2017.

    diff --git a/HtmlDump/file_0773.html b/HtmlDump/file_0773.html deleted file mode 100644 index 1ab5d76fa..000000000 --- a/HtmlDump/file_0773.html +++ /dev/null @@ -1,2 +0,0 @@ -

    Below is a selection of photos from the user day 2017. A larger set of photos at a higher resolution can be downloaded as a zip file (23MB).

    " - diff --git a/HtmlDump/file_0777.html b/HtmlDump/file_0777.html deleted file mode 100644 index 52465e620..000000000 --- a/HtmlDump/file_0777.html +++ /dev/null @@ -1,67 +0,0 @@ -

The UAntwerp clusters offer limited support for remote visualization on the login nodes of hopper and on the visualization node of leibniz, using a VNC-based remote display technology. On the regular login nodes of hopper there is no acceleration of 3D graphics, but the visualization node of leibniz is equipped with an NVIDIA M5000 card that, when used properly, offers accelerated rendering of OpenGL applications. The setup is similar to that of the visualization nodes at KU Leuven.

Using VNC turns out to be more complicated than one would think, and things sometimes go wrong. It is a good solution for those who absolutely need a GUI tool or a visualization tool on the cluster rather than on their local desktop; it is not a good solution for those who don't want to invest in learning Linux properly and are only looking for the ease-of-use of a PC.

    The idea behind the setup

    2D and 3D graphics on Linux

Graphics (local and remote) on Linux machines is based on the X Window System version 11, X11 for short. This technology is pretty old (1987) and not really up to the task anymore on today's powerful computers, yet so many applications support it that it is still the standard in practice (though there are efforts going on to replace it with Wayland on modern Linux systems).

X11 applications talk to an X server, which draws the commands on your screen. These commands can go over a network, so applications on a remote machine can draw on your local screen. Note the somewhat confusing terminology: the server is the program that draws on the screen and thus runs on your local system (which for other applications would usually be called the client), while the application is called the client (and in this scenario runs on a computer that you would usually call the server). However, partly due to the way the X11 protocol works and partly because modern applications are very graphics-heavy, the network has become a bottleneck, and graphics-heavy applications (e.g., the Matlab GUI) will feel sluggish on all but the fastest network connections.

X11 is a protocol for 2D graphics only. However, it is extensible. Enter OpenGL, a standard cross-platform API for professional 3D graphics. Even though its importance on Windows and macOS has decreased as Microsoft and Apple both promote their own APIs (DirectX and Metal respectively), it is still very popular for professional applications and in the Linux world. It is supported by X11 servers through the GLX extension (OpenGL for the X Window System). When set up properly, OpenGL commands can be passed to the X server and use any OpenGL graphics accelerator available on the computer running the X server. In principle, if you have an X server with the GLX extension on your desktop, you should be able to run OpenGL programs on the cluster and use the graphics accelerator of your desktop to display the graphics. In practice, however, this works well when the application and the X server run on the same machine, but the typical OpenGL command stream is too extensive to work well over a network connection and performance will be sluggish.
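For reference, this baseline scenario (a remote X11 application drawing on your local X server, with OpenGL passed through the GLX extension) is what you get with plain ssh X forwarding, sketched below; it works, but for the reasons given above it will feel slow over anything but a fast local network:

# Forward X11 traffic from the cluster to your local X server
ssh -X vsc20XXX@login.hpc.uantwerpen.be
# On the login node: check which OpenGL renderer a remote application would use
glxinfo | grep -i opengl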

    Optimizing remote graphics

The solution offered on the visualization node of leibniz (and in a reduced setting on the login nodes of hopper) consists of two elements to deal with the issues of network bandwidth and, more importantly, network latency.

VirtualGL is a technology that redirects OpenGL commands to a 3D graphics accelerator on the computer where the application is running, or to a software rendering library. It then pushes the rendered image to the X server. Instead of a stream of thousands or millions of OpenGL commands, one large image is now passed over the network to the X server, reducing the effect of latency. These images can be large, but with an additional piece of software on your client, called the VGL client, VirtualGL can send the images in compressed form, which strongly reduces the bandwidth requirements. To use VirtualGL, you have to start the OpenGL application through the vglrun command. That command sets up the application to redirect OpenGL calls to the VirtualGL libraries.

VirtualGL does not solve the issue of slow 2D rendering caused by network latency, and it also requires the user to set up a VGL client and an X server on the local desktop, which is cumbersome for less experienced users. We solve this problem through VNC (Virtual Network Computing). VNC consists of three components: a server on the computer where your application runs, a client on your desktop, and a standardized protocol for the communication between server and client. The server renders the graphics on the computer on which it runs and sends compressed images to the client. The client in turn takes care of keyboard and mouse input and sends this to the server. A VNC server for X applications will in fact emulate an X server. Since the protocol between client and server is pretty standard, most clients will work with most servers, though some combinations of client and server will be more efficient because they may support a more efficient compression technology. Our choice of server is TurboVNC, which is maintained by the same group that also develops VirtualGL and has an advanced implementation of a compression algorithm very well suited for 3D graphics. TurboVNC has clients for Windows, macOS and Linux. However, our experience is that it also works with several other VNC clients (e.g., Apple Remote Desktop), though it may be a bit less efficient as it may not be able to use the best compression strategies.

    The concept of a Window Manager

When working with Windows or macOS, we're used to seeing a title bar on most windows with buttons to maximize or hide the window, and borders that allow you to resize a window. You'd think this functionality is provided by the X server, but in the true UNIX spirit of having a separate component for every bit of functionality, this is not the case. On X11, this functionality is provided by the Window Manager, a separate software package that you start after starting the X server (or that may be started for you automatically by the startup script that is run when starting the X server). The basic window managers from the early days of X11 have evolved into feature-rich desktop environments that do not only offer a window manager, but also a task bar etc. Gnome and KDE are currently the most popular desktop environments (or Unity on Ubuntu, though future editions of Ubuntu will return to Gnome). However, these require a lot of resources and are difficult to install on top of TurboVNC. Examples of very basic old-style window managers are the Tab Window Manager (command twm) and the Motif Window Manager (command mwm); both are currently available on the login nodes of hopper.

For the remote visualization setup on the UAntwerp clusters, we have chosen the Xfce Desktop Environment, which is definitely more user-friendly than the rather primitive Tab Window Manager and Motif Window Manager, yet requires fewer system resources and is easier to set up than the more advanced Gnome and KDE desktops.

    Prerequisites

You'll need an ssh client on your desktop that provides port forwarding functionality. We refer to the "Access and data transfer" section of the documentation on the user portal for information about ssh clients for various client operating systems. PuTTY (Windows) and OpenSSH (macOS, Linux, unix-compatibility environments on Windows) both provide all required functionality.

Furthermore, you'll need a VNC client, preferably the TurboVNC client.

    Windows

We have tested the setup with three different clients:

All three viewers are quite fast and offer good performance, even when run from home over a typical broadband internet connection. TigerVNC seems to be a bit quicker than the other two, while TightVNC doesn't allow you to resize your window. With the other two implementations, when you resize your desktop window, the desktop is also properly resized.

    macOS

    Here also there are several possible setups:

    Linux

RPM and Debian packages for TurboVNC can be downloaded from the TurboVNC web site and are available in some Linux distributions. You can also try another VNC client provided by your Linux distribution at your own risk, as we cannot guarantee that all VNC viewers (even recent ones) work efficiently with TurboVNC.

    How do I run an application with TurboVNC?

Running an application with TurboVNC requires three steps:

    Starting the server

1. Log on in the regular way to one of the login nodes of hopper or to the visualization node of leibniz. Note that the latter should only be used for running demanding visualizations that benefit from the 3D acceleration. The node is not meant for those who just want to run some lightweight 2D GUI application, e.g., an editor with a GUI.
2. Load the module vsc-vnc:
   module load vsc-vnc
   This module does not only put the TurboVNC server in the path, but also provides wrapper scripts to start the VNC server with a supported window manager / desktop environment. Try module help vsc-vnc for more info about the specific wrappers.
3. Use your wrapper of choice to start the VNC server. We encourage you to use the one for the Xfce desktop environment:
   vnc-xfce
4. The first time you use VNC, it will ask you to create a password. For security reasons, please use a password that you don't use for anything else. If you have forgotten your password, it can easily be changed with the vncpasswd command; it is stored in the file ~/.vnc/passwd in encrypted form. It will also ask you for a viewer-only password. If you don't know what this is, you don't need it.
5. Among other information, the VNC server will show a line similar to:
   Desktop 'TurboVNC: viz1.leibniz:2 (vsc20XXX)' started on display viz1.leibniz:2
   Note the number after TurboVNC: viz1.leibniz, in this case 2. This is the number of your VNC server, and it will in general be the same as the X display number, which is the last number on the line. You'll need that number to connect to the VNC server.
6. It is in fact safe, though not mandatory, to log out now from your SSH session, as the VNC server will continue running in the background.

The standard way of starting a VNC server as described in the TurboVNC documentation is the vncserver command. However, you should only use this command if you fully understand how it works and what it does. Also, please don't forget to kill the VNC server when you have finished using it, as it will not be killed automatically when started through this command (or use the -autokill command line option at startup). The default startup script (xstartup.turbovnc), which is put in the ~/.vnc directory on first use, does not function properly on our systems. We know this and have no intent to repair it, as we prefer to install the vncserver command unmodified from the distribution and instead provide wrapper scripts that use working startup files.
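For completeness, this is roughly what using the plain vncserver command with the options mentioned above looks like (server number 2 is simply the one from the earlier example):

# Start a TurboVNC server that is killed automatically when you log out of the session
vncserver -autokill
# Kill a server that is still running (here server number 2)
vncserver -kill :2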

    Connecting to the server

1. In most cases, you'll not be able to connect directly to the TurboVNC server (which runs on port 5900 + the server number, 5902 in the above example), but you will need to create an SSH tunnel to forward traffic to the VNC server. The exact procedure is explained at length in the pages "Creating a SSH tunnel using PuTTY" (for Windows) and "Creating a SSH tunnel using OpenSSH" (for Linux and macOS). You'll need to tunnel port number (5900 + server number) (5902 in the example above) on your local machine to the same port number on the node on which the VNC server is running. You cannot use the generic login names (such as login.hpc.uantwerpen.be) for that, as you may be assigned a different login node than the one you were assigned just minutes ago. Instead, use the full names of the specific nodes, e.g., login1-hopper.uantwerpen.be, login2-leibniz.uantwerpen.be or viz1-leibniz.uantwerpen.be. In brief:
   - With OpenSSH, your command will look like
     ssh -L 5902:viz1-leibniz.uantwerpen.be:5902 -N vsc20XXX@viz1-leibniz.uantwerpen.be
   - In PuTTY, select "Connections - SSH - Tunnel" in the left pane. As "Source port", use 5900 + the server number (5902 in our example) and as destination the full name of the node on which the VNC server is running, e.g., viz1-leibniz.uantwerpen.be.
2. Once your tunnel is up and running, start your VNC client. The procedure depends on the precise client you are using, but in general the client will ask for the VNC server. That server is localhost:x where x is the number of your VNC server, 2 in the above example. It will then ask you for the password that you assigned when you first started VNC.
3. If all went well, you will now get a window with the desktop environment that you chose when starting the VNC server.
4. Do not forget to close your tunnel when you log out from the VNC server. Otherwise the next user might not be able to connect.

Note that the first time you start an Xfce session with TurboVNC, you'll see a panel "Welcome to the first start of the panel". Please select "Use default config", as otherwise you get a very empty desktop.

    Starting an application

1. Open a terminal window (if one was not already created when you started your session). In the default Xfce environment, you can open a terminal by selecting "Terminal Emulator" in the "Applications" menu in the top left. The first time it will let you choose between selected terminal applications.
2. Load the modules that are required to start your application of choice.
3. 2D applications or applications that use a software renderer for 3D start as usual. However, to start an application using hardware-accelerated OpenGL, you'll need to start it through vglrun. Usually adding vglrun at the start of the command line is sufficient. This however doesn't work with all applications; some applications require a special setup.
   - Matlab: start Matlab with the -nosoftwareopengl option to enable accelerated OpenGL:
     vglrun matlab -nosoftwareopengl
     The Matlab command opengl info will then show that you are indeed using the GPU.
4. When you've finished, don't forget to log out (when you use one of our wrapper scripts) or kill the VNC server otherwise (using vncserver -kill :x with x the number of the server).

Note: For a quick test of your setup, enter

vglrun glxinfo
vglrun glxgears

The first command prints some information about the OpenGL functionality that is supported. The second command displays a set of rotating gears. Don't be fooled if they appear to stand still; look at the "frames per second" value printed in the terminal window.

    Common problems

    Links

    Components used in the UAntwerp setup

    Related technologies

    " - diff --git a/HtmlDump/file_0779.html b/HtmlDump/file_0779.html deleted file mode 100644 index 05e043603..000000000 --- a/HtmlDump/file_0779.html +++ /dev/null @@ -1,24 +0,0 @@ -

Leibniz has one compute node equipped with a Xeon Phi coprocessor from the Knights Landing generation (the first generation with support for the AVX-512 instruction set). For cost reasons we have opted for the PCIe coprocessor model rather than an independent node based on that processor. The downside is the lower memory capacity directly available to the Xeon Phi processor.

The goals for the system are:

The system is set up in such a way that once you have access to the Xeon Phi node, you can also log on to the Xeon Phi card itself and use it as an independent system. Your regular VSC directories will be mounted (at least for UAntwerp users; others on request). As such you can also test code meant to run on independent Xeon Phi systems, the kind of setup that Intel is currently promoting.

The module system is not yet implemented on the Xeon Phi coprocessor, but modules do work on the host. This does imply that some setup may be required when running native programs on the Xeon Phi.

    Getting access

    Contact the UAntwerp support team to get access to the Xeon Phi node.

    Users of the Xeon Phi node are expected to report back on their experiences. We are most interested in users who can also compare with running on regular nodes as we will use this information for future purchase decisions.

Currently the node is not yet integrated in the job system; you can log on to the node manually, but you need to check that no one else is using it.
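A minimal way to check whether the node is free before you start, using only standard Linux tools:

# See who else is logged in on the node
who
# Take a one-off snapshot of the busiest processes
top -b -n 1 | head -20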

    Compiling for the Xeon Phi

We currently support compiling code for the Xeon Phi with the Intel compilers included in the 2017a and later toolchains (i.e., Intel compiler version 17 and higher).

Compared to the earlier Knights Corner based Xeon Phi system installed in the Tier-2 infrastructure at the KU Leuven, there are a number of changes. All come down to the fact that the Knights Landing Xeon Phi has much more in common with the regular Intel CPUs than was the case for the earlier generation.
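Purely as an illustration (the source file name is hypothetical and the exact options depend on your code and on whether you build native or offload binaries), compiling a native Knights Landing binary with the Intel 2017 compilers could look like this; -xMIC-AVX512 targets the AVX-512 instruction set of the Knights Landing processor:

module load intel/2017a
# Build a native Knights Landing (AVX-512) binary from a hypothetical source file
icc -O2 -qopenmp -xMIC-AVX512 -o myprog.knl myprog.c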

    Running applications on the Xeon Phi

    " - diff --git a/HtmlDump/file_0781.html b/HtmlDump/file_0781.html deleted file mode 100644 index 0beda98fe..000000000 --- a/HtmlDump/file_0781.html +++ /dev/null @@ -1,79 +0,0 @@ -

Leibniz has two compute nodes, each equipped with two NVIDIA Tesla P100 GPU compute cards, the most powerful cards available at the time the system was installed. We run the regular NVIDIA software stack on these systems.

The main goal of the system is to assess the performance of GPUs for applications used by our researchers. We want to learn for which applications GPU computing is economically viable. Users should realise that these nodes cost three times as much as a regular compute node and might also be shorter-lived (in the past, some NVIDIA GPUs have proven to be rather fragile). So these nodes are only interesting, and should only be used, for applications that run at least three times faster than on a regular CPU-based equivalent.

As such we give precedence to users who want to work with us towards this goal and either develop high-quality GPU software or are willing to benchmark their application on GPUs and regular CPUs.

    Getting access

Contact the UAntwerp support team to get access to the GPU compute nodes.

Users of the GPU compute nodes are expected to report back on their experiences. We are most interested in users who can also compare with running on regular nodes as we will use this information for future purchase decisions.

Currently the nodes are not yet integrated in the job system; you can log on to a node manually, but you need to check that no one else is using it.

    Monitoring GPU nodes

Monitoring of CPU use by jobs running on the GPU nodes can be done in the same way as for regular compute nodes.

One useful command to monitor the use of the GPUs is nvidia-smi. It shows information on both GPUs in the GPU node and, among other things, lets you easily verify whether the GPUs are being used by the job.
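For example, while your job is running on one of the GPU nodes:

# One-off overview of both Tesla P100 cards: utilisation, memory use and running processes
nvidia-smi
# Repeat the overview every 5 seconds (stop with Ctrl-C)
nvidia-smi -l 5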

    Software on the GPU

Software is installed on demand. As these systems are also new to us, we do expect some collaboration from the user to get software running on the GPUs.

Package - Module(s) - Description
CP2K - CP2K/5.1-intel-2017a-bare-GPU-noMPI - GPU-accelerated version of CP2K. The -GPU-noMPI versions are ssmp binaries without support for MPI, so they can only be used on a single GPU node. The binaries are compiled with options equivalent to the corresponding -bare-multiver modules for CPU-only computations.
CUDA - CUDA/8.0.61, CUDA/9.0.176, CUDA/9.1.85 - Various versions of the CUDA development kit.
cuDNN - cuDNN/6.0-CUDA-8.0.61, cuDNN/7.0.5-CUDA-8.0.61, cuDNN/7.0.5-CUDA-9.0.176, cuDNN/7.0.5-CUDA-9.1.85 - The CUDA Deep Neural Network library, versions 6.0 and 7.0, both installed from the standard NVIDIA tarballs but in the directory structure of our module system.
GROMACS - GROMACS/2016.4-foss-2017a-GPU-noMPI, GROMACS/2016.4-intel-2017a-GPU-noMPI - GROMACS with GPU acceleration. The -GPU-noMPI versions are ssmp binaries without support for MPI, so they can only be used on a single GPU node.
Keras - Keras/2.1.3-intel-2017c-GPU-Python-3.6.3 - Keras with TensorFlow as the backend (1.4 for Keras 2.1.3), using the GPU-accelerated version of TensorFlow. For comparison purposes there is an identical version using the CPU-only version of TensorFlow 1.4.
NAMD - / - Work in progress.
TensorFlow - Tensorflow/1.3.0-intel-2017a-GPU-Python-3.6.1, Tensorflow/1.4.0-intel-2017c-GPU-Python-3.6.3 - GPU versions of TensorFlow 1.3 and 1.4. Google-provided binaries were used for the installation. There are CPU-only equivalents of these modules for comparison. The 1.3 version was installed from the standard PyPI wheel, which is not well optimized for modern processors; the 1.4 version was installed from a Python wheel compiled by Intel engineers and should be well optimized for all our systems.
    " - diff --git a/HtmlDump/file_0783.html b/HtmlDump/file_0783.html deleted file mode 100644 index 2151f9123..000000000 --- a/HtmlDump/file_0783.html +++ /dev/null @@ -1,61 +0,0 @@ -

    HPC Tutorial

This is our standard introduction to the VSC HPC systems. It is complementary to the information in this user portal, the latter being more of a reference manual.

We have separate versions depending on your home institution and the operating system from which you access the cluster:
Institution - Windows - macOS - Linux
UAntwerpen - [PDF] - [PDF] - [PDF]
VUB - [PDF] - [PDF] - [PDF]
UGent - [PDF] - [PDF] - [PDF]
KU Leuven/UHasselt - [PDF] - [PDF] - [PDF]
    " - diff --git a/HtmlDump/file_0785.html b/HtmlDump/file_0785.html deleted file mode 100644 index 120f3d870..000000000 --- a/HtmlDump/file_0785.html +++ /dev/null @@ -1,3064 +0,0 @@ -

    Important changes

The 2017a toolchain is the toolchain that will be carried forward to Leibniz and will be available after the operating system upgrade of Hopper. Hence it is meant to be as complete as possible. We will only make a limited number of programs available in the 2016b toolchain (basically those that show much better performance with the older compiler or that do not compile with the compilers in the 2017a toolchain).

Important changes in the 2017a toolchain:

We will skip the 2017b toolchain as defined by the VSC, as we have already upgraded the 2017a toolchain to a more recent update of the Intel 2017 compilers to avoid problems with certain applications.

    Available toolchains

There are currently three major toolchains on the UAntwerp clusters:

The tables below list the last available module for a given software package and the corresponding version in the 2017a toolchain. Older versions can only be installed on demand with a very good motivation, as older versions of packages also often fail to take advantage of advances in supercomputer architecture and offer lower performance. Packages that have not been used recently will only be installed on demand.

Several of the packages in the system toolchain are still listed as “on demand” since they require licenses and interaction with their users is needed before we can install them.

    Intel toolchain

Latest pre-2017a - 2017a - Comments
    - ABINIT/8.0.7-intel-2016a - - - Work in progress -
    - Advisor/2016_update4 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Not yet available on Leibniz due to compile problems. -
    - augustus/3.0.1-intel-2015a - / - - Installed on - demand - -
    - Autoconf/2.69-intel-2016b - - Autoconf/2.69 - - Moved to the system - toolchain -
    - AutoDock/1.1.2 - - AutoDock_Vina/1.1.2 - - Naming modified - to the standard naming used in our build tools -
    - Automake/1.15-intel-2016b - - Automake/1.15 - - Moved to the system - toolchain -
    - Autotools/20150215-intel-2016b - - Autotools/2016123 - - Moved to the system - toolchain -
    / - BAli-Phy/2.3.8-intel-2017a-OpenMP
    BAli-Phy/2.3.8-intel-2017a-MPI
    -
    By Ben Redelings, documentation on the software web site. This package supports either OpenMP or MPI, but not both together in a hybrid mode. -
    - beagle-lib/2.1.2-intel-2016b - - beagle-lib/2.1.2-intel-2017a - -
    - Beast/2.4.4-intel-2016b - - Beast/2.4.5-intel-2017a - - Version with beagle-lib -
    - Biopython/1.68-intel-2016b-Python-2.7.12 - - Biopython/1.68-intel-2017a-Python-2.7.13
    Biopython/1.68-intel-2017a-Python-3.6.1 -
    - Builds for Python - 2,7 and Python 3.6 -
    - bismark/0.13.1-intel-2015a - - Bismark/0.17.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - Bison/3.0.4-intel-2016b - - Bison/3.0.4-intel-2017a - -
    - BLAST+/2.6.0-intel-2016b-Python-2.7.12 - - BLAST+/2.6.0-intel-2017a-Python-2.7.13 - -
    - Boost/1.63.0-intel-2016b-Python-2.7.12 - - Boost/1.63.0-intel-2017a-Python-2.7.13 - -
    - Bowtie2/2.2.9-intel-2016a - - Bowtie2/2.2.9-intel-2017a - -
    - byacc/20160606-intel-2016b - - byacc/20170201 - - Moved to the system - toolchain -
    - bzip2/1.0.6-intel-2016b - - bzip2/1.0.6-intel-2017a - -
    - cairo/1.15.2-intel-2016b - - cairo/1.15.4-intel-2017a - -
    - CASINO/2.12.1-intel-2015a - / - - Installed - on demand - -
    - CASM/0.2.0-Python-2.7.12 - / - - Installed on demand, compiler problems. - -
    / - CGAL/4.9-intel-2017a-forOpenFOAM - Installed without the components that require Qt and/or OpenGL. -
    - CMake/3.5.2-intel-2016b - - CMake/3.7.2-intel-2017a - -
    - CP2K/4.1-intel-2016b - CP2K/4.1-intel-2017a-bare
    CP2K/4.1-intel-2017a -

    -
    - CPMD/4.1-intel-2016b - / - - Installed on - demand - -
    - cURL/7.49.1-intel-2016b - - cURL/7.53.1-intel-2017a - -
    / - DIAMOND/0.9.12-intel-2017a - -
    - DLCpar/1.0-intel-2016b-Python-2.7.12 - - DLCpar/1.0-intel-2017a-Python-2.7.13
    DLCpar/1.0-intel-2017a-Python-3.6.1
    -
    - Installed for - Python 2.7.13 and Pyton 3.6.1 -
    - Doxygen/1.8.11-intel-2016b - - Doxygen/1.8.13 - - Moved to the - system toolchain -
    - DSSP/2.2.1-intel-2016a - DSSP/2.2.1-intel-2017a -
    -
    - Eigen/3.2.9-intel-2016b - - Eigen/3.3.3-intel-2017a - -
    - elk/3.3.17-intel-2016a - - Elk/4.0.15-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - exonerate/2.2.0-intel-2015a - - Exonerate/2.4.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - expat/2.2.0-intel-2016b - - expat/2.2.0-intel-2017a - -
    / - FastME/2.1.5.1-intel-2017a - -
    - FFTW/3.3.4-intel-2015a - - FFTW/3.3.6-intel-2017a - - There is also a - FFTW-compatible interface in intel/2017a, but it does not work for all - packages. -
    - - file/5.30-intel-2017a - -
    - fixesproto/5.0-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - flex/2.6.0-intel-2016b - - flex/2.6.3-intel-2017a - -
    - fontconfig/2.12.1-intel-2016b - - fontconfig/2.12.1-intel-2017a - -
    - freeglut/3.0.0-intel-2016a - - freeglut/3.0.0-intel-2017a - - Not yet - operational on CentOS 7 - -
    - freetype/2.7-intel-2016b - - freetype/2.7.1-intel-2017a - -
    - FSL/5.0.9-intel-2016a - / - - Installed on - demand - -
    - GAMESS-US/20141205-R1-intel-2015a - / - - Installed on - demand - -
    - gc/7.4.4-intel-2016b - - gc/7.6.0-intel-2017a - - Installed on - demand - -
    - GDAL/2.1.0-intel-2016b - - GDAL/2.1.3-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - genometools/1.5.4-intel-2015a - - GenomeTools/1.5.9-intel-2017a - -
    - GEOS/3.5.0-intel-2015a-Python-2.7.9 - - GEOS/3.6.1-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - gettext/0.19.8-intel-2016b - - gettext/0.19.8.1-intel-2017a - -
    - GLib/2.48.1-intel-2016b - - GLib/2.49.7-intel-2017a - -
    - GMAP-GSNAP/2014-12-25-intel-2015a - - GMAP-GSNAP/2017-03-17-intel-2017a - -
    - GMP/6.1.1-intel-2016b - - GMP/6.1.2-intel-2017a - -
    - gnuplot/5.0.0-intel-2015a - - gnuplot/5.0.6-intel-2017a - -
    - GObject-Introspection/1.44.0-intel-2015a - - GObject-Introspection/1.49.2-intel-2017a - -
    - GROMACS/5.1.2-intel-2016a-hybrid - - GROMACS/5.1.2-intel-2017a-hybrid
    GROMACS/2016.3-intel-2017a
    -
    -
    - GSL/2.3-intel-2016b - - GSL/2.3-intel-2017a - -
    / - gtest/1.8.0-intel-2017a - Google C++ Testing Framework -
    - Guile/1.8.8-intel-2016b - - Guile/1.8.8-intel-2017a - -
    - Guile/2.0.11-intel-2016b - - Guile/2.2.0-intel-2017a - -
    - hanythingondemand/3.2.0-intel-2016b-Python-2.7.12 - - hanythingondemand/3.2.0-intel-2017a-Python-2.7.13 - -
    - / - - HarfBuzz/1.3.1-intel-2017a - -
    - HDF5/1.8.17-intel-2016b - - HDF5/1.8.18-intel-2017a
    HDF5/1.8.18-intel-2017a-noMPI -
    HDF5 with and without MPI-support. -
    / - HISAT2/2.0.5-intel-2017a - -
    - HTSeq/0.6.1p1-intel-2016a-Python-2.7.11 - - HTSeq/0.7.2-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - icc/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - iccifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - ifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - imkl/11.3.3.210-iimpi-2016b - - intel/2017a - - Intel compiler - components in a single module. -
    - impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - inputproto/2.3.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - Inspector/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - ipp/8.2.1.133 - - intel/2017a - - Intel compiler - components in a single module. -
    - itac/9.0.2.045 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - / - - JasPer/2.0.12-intel-2017a - -
    - Julia/0.6.0-intel-2017a-Python-2.7.13 - Julia, command line version (so without the Juno IDE). -
    - kbproto/1.0.7-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - kwant/1.2.2-intel-2016a-Python-3.5.1 - kwant/1.2.2-intel-2017a-Python-3.6.1 - Built with single-threaded libraries as advised in the documentation which implies that kwant is not exactly a HPC program. -
    - LAMMPS/14May16-intel-2016b - - LAMMPS/31Mar2017-intel-2017a - -
    - - libcerf/1.5-intel-2017a - -
    - libffi/3.2.1-intel-2016b - - libffi/3.2.1-intel-2017a - -
    - - libgd/2.2.4-intel-2017a - -
    - Libint/1.1.6-intel-2016b - - Libint/1.1.6-intel-2017a
    Libint/1.1.6-intel-2017a-CP2K -
    -
    - libint2/2.0.3-intel-2015a - / - - Installed on - demand. - -
    - libjpeg-turbo/1.5.0-intel-2016b - - libjpeg-turbo/1.5.1-intel-2017a - -
    - libmatheval/1.1.11-intel-2016b - - libmatheval/1.1.11-intel-2017a - -
    - libpng/1.6.26-intel-2016b - - libpng/1.6.28-intel-2017a - -
    - libpthread-stubs/0.3-intel-2016b - / - Installed on demand. -
    - libreadline/6.3-intel-2016b - - libreadline/7.0-intel-2017a - -
    - LibTIFF/4.0.6-intel-2016b - - LibTIFF/4.0.7-intel-2017a - -
    - libtool/2.4.6-intel-2016b - - libtool/2.4.6 - - Moved to the - system toolchain -
    - libunistring/0.9.6-intel-2016b - - libunistring/0.9.7-intel-2017a - -
    - libX11/1.6.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXau/1.0.8-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxc/2.2.3-intel-2016b - - libxc/3.0.0-intel-2017a - -
    - libxcb/1.12-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXdmcp/1.1.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXext/1.3.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXfixes/5.0.1-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXi/1.7.6-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxml2/2.9.4-intel-2016b - - libxml2/2.9.4-intel-2017a - -
    - libXrender/0.9.9-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxslt/1.1.28-intel-2016a-Python-3.5.1 - - libxslt/1.1.29-intel-2017a - -
    - libxsmm/1.6.4-intel-2016b - - libxsmm/1.7.1-intel-2017a
    libxsmm/1.8-intel-2017a -
    -
    - libyaml/0.1.6-intel-2016a - / - Installed on demand -
    - LLVM/3.9/.1-intel-2017a - LLVM compiler backend with libLLVM.so. -
    - lxml/3.5.0-intel-2016a-Python-3.5.1 - - Python/2.7.13-intel-2017a - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 2.7 and 3.6 modules. -
    - M4/1.4.17-intel-2016b - - M4/1.4.18 - - Moved to the - system toolchain -
    / - MAFFT/7.312-intel-2017a-with-extensions - -
    - MAKER-P/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - MAKER-P-mpi/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - matplotlib/1.5.3-intel-2016b-Python-2.7.12 - - Python/2.7.13-intel-2017a
    Python/3.6.1-intel-2017a
    -
    - Integrated in - the standard Python 2.7 and 3.6 modules -
    - MCL/14.137-intel-2016b - - MCL/14.137-intel-2017a - -
    - mdust/1.0-intel-2015a - - mdust/1.0-intel-2017a - -
    - METIS/5.1.0-intel-2016a - - METIS/5.1.0-intel-2017a - -
    - MITE_Hunter/11-2011-intel-2015a - - / - - Installed on - demand. - -
    - molmod/1.1-intel-2016b-Python-2.7.12 - molmod/1.1-intel-2017a-Python-2.7.13 - - Work - in progress, compile problems with newer compilers. - -
    - Mono/4.6.2.7-intel-2016b - - Mono/4.8.0.495-intel-2017a - -
    - Mothur/1.34.4-intel-2015a - / - Installed on demand -
    - MUMPS/5.0.1-intel-2016a-serial
    MUMPS/5.0.0-intel-2015a-parmetis
    -
    - MUMPS-5.1.1-intel-2017a-openmp-noMPI
    MUMPS-5.1.1-intel-2017a-openmp-MPI
    MUMPS-5.1.1-intel-2017a-noOpenMP-noMPI
    -
    -
    - MUSCLE/3.8.31-intel-2015a - - MUSCLE/3.8.31-intel-2017a - -
- NASM/2.12.02-intel-2016b - - NASM/2.12.02 - - Moved to the system toolchain -
    - - ncbi-vdb/2.8.2-intel-2017a - -
    - ncurses/6.0-intel-2016b - - ncurses/6.0-intel-2017a - -
    - NEURON/7.4-intel-2017a - Yale NEURON code -
    - netaddr/0.7.14-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - netCDF/4.4.1-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netCDF-Fortran/4.4.4-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netifaces/0.10.4-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - - NGS/1.3.0 - -
    - numpy/1.9.2-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - numpy/1.10.4-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module -
    - NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.10 - - NWChem/6.6.r27746-intel-2017a-Python-2.7.13 - - On demand on Hopper. -
    / - OpenFOAM/4.1-intel-2017a - Installed without the components that require OpenGL and/or Qt (which should only be in the postprocessing) -
    - OpenMX/3.8.1-intel-2016b - - OpenMX/3.8.3-intel-2017a - -
    / - OrthoFinder/1.1.10-intel-2017a - -
    - / - - Pango/1.40.4-intel-2017a - -
    - ParMETIS/4.0.3-intel-2015b - - ParMETIS/4.0.3-intel-2017a - -
    - pbs-drmaa/1.0.18-intel-2015a - / - Installed on demand -
    - / - - pbs_PRISMS/1.0.1-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by CASM -
    - pbs_python/4.6.0-intel-2016b-Python-2.7.12 - - pbs_python/4.6.0-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by hanythingondemand -
    - PCRE/8.38-intel-2016b - - PCRE/8.40-intel-2017a - -
    - Perl/5.20.1-intel-2015a - - Perl/5.24.1-intel-2017a - -
    - pixman/0.34.0-intel-2016b - - pixman/0.34.0-intel-2017a - -
    - pkg-config/0.29.1-intel-2016b - - pkg-config/0.29.1 - - Moved to the - system toolchain -
    - PLUMED/2.3.0-intel-2016b - - PLUMED/2.3.0-intel-2017a - -
    - PROJ/4.9.2-intel-2016b - - PROJ/4.9.3-intel-2017a - -
    / - protobuf/3.4.0-intel-2017a - Google Protocol Buffers -
    - Pysam/0.9.1.4-intel-2016a-Python-2.7.11 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module. Also load SAMtools to use. -
    - Pysam/0.9.1.2-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. Also load SAMtools to use. -
    - Python/2.7.12-intel-2016b - - Python/2.7.13-intel-2017a - -
    - Python/3.5.1-intel-2016a - - Python/3.6.1-intel-2017a - -
    - QuantumESPRESSO/5.2.1-intel-2015b-hybrid - QuantumESPRESSO/6.1-intel-2017a - - Work in progress. -
    - R/3.3.1-intel-2016b - - R/3.3.3-intel-2017a - -
    - RAxML/8.2.9-intel-2016b-hybrid-avx - RAxML/8.2.10-intel-2017a-hybrid - We suggest users try RAxML-ng (still beta) which is supposedly much faster and better adapted to new architectures and can be installed on demand. -
    / - RAxML-NG/0.4.1-intel-2017a-pthreads
    - RAxML-NG/0.4.1-intel-2017a-hybrid -
RAxML Next Generation beta, compiled for shared memory (pthreads) and hybrid distributed-shared memory (hybrid, uses MPI and pthreads). -
    - R-bundle-Bioconductor/3.3-intel-2016b-R-3.3.1 - - R/3.3.3-intel-2017a - - Integrated in - the standard R module. -
    - renderproto/0.11.1-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - RepeatMasker/4.0.5-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - RMBlast/2.2.28-intel-2015a-Python-2.7.9 - / - - Installed - on demand; compiler problems. - -
    - SAMtools/0.1.19-intel-2015a - - SAMtools/1.4-intel-2017a - -
    - scikit-umfpack/0.2.1-intel-2015b-Python-2.7.10 - / - Installed on demand -
    - scikit-umfpack/0.2.1-intel-2016a-Python-3.5.1 - scikit-umfpack/0.2.3-intel-2017a-Python-3.6.1 - -
    - scipy/0.15.1-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - scipy/0.16.1-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. -
    - SCons/2.5.1-intel-2016b-Python-2.7.12 - - SCons/2.5.1-intel-2017a-Python-2.7.13 - - On demand on - CentOS 7; also in the system toolchain. - -
    - SCOTCH/6.0.4-intel-2016a - - SCOTCH/6.0.4-intel-2017a - -
    - Siesta/3.2-pl5-intel-2015a - - Siesta/4.0-intel-2017a - -
    - SNAP/2013-11-29-intel-2015a - / - - Installed on - demand - -
    - spglib/1.7.4-intel-2016a - / - Installed on demand -
    - SQLite/3.13.0-intel-2016b - - SQLite/3.17.0-intel-2017a - -
    - SuiteSparse/4.4.5-intel-2015b-ParMETIS-4.0.3 - SuiteSparse/4.5.5-intel-2015b-ParMETIS-4.0.3 - -
    - SuiteSparse/4.4.5-intel-2016a-METIS-5.1.0 - SuiteSparse/4.4.5-intel-2017a-METIS-5.1.0
    SuiteSparse/4.5.5-intel-2017a-METIS-5.1.0
    -
    Older version as it is known to be compatible with our Python packages. -
    - SWIG/3.0.7-intel-2015b-Python-2.7.10 - - SWIG/3.0.12-intel-2017a-Python-2.7.13 - -
    - SWIG/3.0.8-intel-2016a-Python-3.5.1 - - SWIG/3.0.12-intel-2017a-Python-3.6.1 - -
    - Szip/2.1-intel-2016b - - Szip/2.1.1-intel-2017a - -
    - tbb/4.3.2.135 - - intel/2017a - - Intel compiler - components in a single module. -
    - Tcl/8.6.5-intel-2016b - - Tcl/8.6.6-intel-2017a - -
    - TELEMAC/v7p2r0-intel-2016b - - Work in progress. -
    - TINKER/7.1.3-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - Tk/8.6.5-intel-2016b - - Tk/8.6.6-intel-2017a - -
    - TopHat/2.1.1-intel-2016a - / - - TopHat is no - longer developed, its developers advise considering switching to - HISAT2 which is more accurate and more efficient. It does not compile with the intel/2017a compilers. -
    VASP - VASP/5.4.4-intel-2016b
    VASP/5.4.4-intel-2016b-vtst-173 -
    VASP has not been installed in the 2017a toolchain due to performance regressions and occasional run time errors with the Intel 2017 compilers and hence has been made available in the intel/2016b toolchain. -
    - Voro++/0.4.6-intel-2016b - - Voro++/0.4.6-intel-2017a - -
    - vsc-base/2.5.1-intel-2016b-Python-2.7.12 - - / - -
    - vsc-install/0.10.11-intel-2016b-Python-2.7.12 - - vsc-install/0.10.25-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - vsc-mympirun/3.4.3-intel-2016b-Python-2.7.12 - - vsc-mympirun/3.4.3-intel-2017a-Python-2.7.13 - -
    - VTune/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - worker/1.5.1-intel-2015a - - worker-1.6.7-intel-2017a - -
    - X11/20160819-intel-2016b - - X11/20170129-intel-2017a - -
    - xcb-proto/1.12 - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xextproto/7.3.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xorg-macros/1.19.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xproto/7.0.29-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xtrans/1.3.5-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - XZ/5.2.2-intel-2016b - - XZ/5.2.3-intel-2017a - -
    - zlib/1.2.8-intel-2016b - - zlib/1.2.11-intel-2017a - -

    Foss toolchain

Latest pre-2017a - 2017a - Comments
    - ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Moved to the Intel toolchain.
    -
    - ATLAS/3.10.2-foss-2015a-LAPACK-3.4.2 - - - Installed on - demand - -
    - CMake/3.5.2-foss-2016b - - CMake/3.7.2-foss-2017a - -
    - Cufflinks/2.2.1-foss-2015a - - - Installed - on demand - -
    - cURL/7.41.0-foss-2015a - - - Installed - on demand - -
    - Cython/0.22.1-foss-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated into - the standard Python module for the intel toolchains -
    - FFTW/3.3.4-gompi-2016b - - FFTW/3.3.6-gompi-2017a - -
    - GSL/2.1-foss-2015b - - - Installed - on demand - -
    - HDF5/1.8.14-foss-2015a - - - Installed - on demand - -
    - libpng/1.6.16-foss-2015a - - - Installed - on demand - -
    - libreadline/6.3-foss-2015a - - - Installed - on demand - -
    - makedepend/1.0.5-foss-2015a - - -
    - MaSuRCA/2.3.2-foss-2015a - - - Installed - on demand - -
    - ncurses/6.0-foss-2016b - - - Installed - on demand - -
    - pbs-drmaa/1.0.18-foss-2015a - - - Installed - on demand - -
    - Perl/5.20.1-foss-2015a - - - Installed - on demand - -
    - Python/2.7.9-foss-2015a - - - Python is - available in the Intel toolchain. -
    - SAMtools/0.1.19-foss-2015a - - - Newer versions - with intel toolchain -
    - SPAdes/3.10.1-foss-2016b - - SPAdes/3.10.1-foss-2017a - -
    - Szip/2.1-foss-2015a - - - Installed - on demand - -
    - zlib/1.2.8-foss-2016b - - zlib/1.2.11-foss-2017a - -

    Gompi

Latest pre-GCC-6.3.0 (2017a) - gompi-2017a - Comments
    - ScaLAPACK/2.0.2-gompi-2016b-OpenBLAS-0.2.18-LAPACK-3.6.1 - - ScaLAPACK/2.0.2-gompi-2017a-OpenBLAS-0.2.19-LAPACK-3.7.0 - -

    GCC

Latest pre-gompi-2017a - GCC-6.3.0 (2017a) - Comments
    - OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1 - - OpenBLAS/0.2.19-GCC-6.3.0-2.27-LAPACK-3.7.0 - -
    - numactl/2.0.11-GCC-5.4.0-2.26 - - numactl/2.0.11-GCC-6.3.0-2.27 - -
    - OpenMPI/1.10.3-GCC-5.4.0-2.26 - - OpenMPI/2.0.2-GCC-6.3.0-2.27 - -
    - MPICH/3.1.4-GCC-4.9.2 - - / - -

    GCCcore

Latest pre-GCCcore-6.3.0 (2017a) - GCCcore-6.3.0 (2017a) - Comments
    - binutils/2.26-GCCcore-5.4.0 - - binutils/2.27-GCCcore-6.3.0 - -
    - flex/2.6.0-GCCcore-5.4.0 - - flex/2.6.3-GCCcore-6.3.0 - -
    - Lmod/7.0.5 - - - Default - module tool on CentOS 7 -

    System toolchain

" - diff --git a/HtmlDump/file_0787.html b/HtmlDump/file_0787.html deleted file mode 100644 index 95f84b35a..000000000 --- a/HtmlDump/file_0787.html +++ /dev/null @@ -1,2101 +0,0 @@ -
Pre-2017 - Latest module - Comments
    - ant/1.9.4-Java-8 - - ant/1.10.1-Java-8 - -
    - / - - Autoconf/2.69 - -
    - / - - AutoDock_Vina/1.1.2 - -
    - / - - Automake/1.15 - -
    - / - - Autotools/2016123 - -
/ - Bazel/0.5.3 - Google's build tool. Not installed on the Scientific Linux 6 nodes of hopper. -
    - binutils/2.26 - - binutils/2.27 - -
    - Bison/3.0.4 - - Bison/3.0.4 - -
    - BRATNextGen/20150505 - - - Installed on - demand - -
    - / - - byacc/20170201 - -
    - / - - CMake/3.7.2 - -
    - - core-counter/1.1 - -
    - CPLEX/12.6.3 - - - Installed on - demand on Leibniz. - -
    - DFTB+/1.2.2 - - - Installed - on demand on Leibniz. - -
    - / - - Doxygen/1.8.13 - -
- EasyBuild/… - - EasyBuild/3.1.2 - -
    - FastQC/0.11.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - FINE-Marine/5.2 - - - Installed - on demand on Leibniz. - -
    - - flex/2.6.0
    flex/2.6.3
    -
    -
    - GATK/3.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - Gaussian16/g16_A3-AVX - - - Work in progress. -
    - Gurobi/6.5.1 - - - Installed - on demand on Leibniz. - -
    - Hadoop/2.6.0-cdh5.4.5-native - - - Installed - on demand on Leibniz. - -
    - - help2man/1.47.4 - -
    - Java/8 - - Java/8 - -
    - - JUnit/4.12-Java-8 - -
    - / - - libtool/2.4.6 - -
    - M4/1.4.17 - - M4/1.4.18 - -
    - MATLAB/R2016a - - MATLAB/R2017a - -
    - Maven/3.3.9 - - - Installed on - demand on Leibniz. - -
    - MGLTools/1.5.7rc1 - - - Installed on - demand on Leibniz. - -
    - MlxLibrary/1.0.0 - - - Lixoft Simulx -
    - MlxPlore/1.1.1 - - - Lixoft MLXPlore -
    - monitor/1.1.2 - - monitor/1.1.2 - -
    - Monolix/2016R1 - - - Installed on - demand on Leibniz. - -
    - / - - NASM/2.12.02 - -
    - Newbler/2.9 - - / - - On request, has - not been used recently. -
    - Novoalign/3.04.02 - - - Installed on - demand on Leibniz. - -
    - ORCA/3.0.3 - - - Installed on - demand on Leibniz. - -
    - p4vasp/0.3.29 - - - Installed on - demand on Leibniz. - -
    - parallel/20160622 - - parallel/20170322 - -
    - / - - pkg-config/0.29.1 - -
    - protobuf/2.5.0 - - protobuf/2.6.1 - -
    - Ruby/2.1.10 - - Ruby/2.4.0 - -
    - / - - SCons/2.5.1 - -
    - scripts/4.0.0 - - -
- Latest pre-2017a - 2017a - Comments -
    - ABINIT/8.0.7-intel-2016a - - - Work in progress -
    - Advisor/2016_update4 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Not yet available on Leibniz due to compile problems. -
    - augustus/3.0.1-intel-2015a - / - - Installed on - demand - -
    - Autoconf/2.69-intel-2016b - - Autoconf/2.69 - - Moved to the system - toolchain -
    - AutoDock/1.1.2 - - AutoDock_Vina/1.1.2 - - Naming modified - to the standard naming used in our build tools -
    - Automake/1.15-intel-2016b - - Automake/1.15 - - Moved to the system - toolchain -
    - Autotools/20150215-intel-2016b - - Autotools/2016123 - - Moved to the system - toolchain -
    / - BAli-Phy/2.3.8-intel-2017a-OpenMP
    BAli-Phy/2.3.8-intel-2017a-MPI
    -
    By Ben Redelings, documentation on the software web site. This package supports either OpenMP or MPI, but not both together in a hybrid mode. -
    - beagle-lib/2.1.2-intel-2016b - - beagle-lib/2.1.2-intel-2017a - -
    - Beast/2.4.4-intel-2016b - - Beast/2.4.5-intel-2017a - - Version with beagle-lib -
    - Biopython/1.68-intel-2016b-Python-2.7.12 - - Biopython/1.68-intel-2017a-Python-2.7.13
    Biopython/1.68-intel-2017a-Python-3.6.1 -
- Builds for Python 2.7 and Python 3.6 -
    - bismark/0.13.1-intel-2015a - - Bismark/0.17.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - Bison/3.0.4-intel-2016b - - Bison/3.0.4-intel-2017a - -
    - BLAST+/2.6.0-intel-2016b-Python-2.7.12 - - BLAST+/2.6.0-intel-2017a-Python-2.7.13 - -
    - Boost/1.63.0-intel-2016b-Python-2.7.12 - - Boost/1.63.0-intel-2017a-Python-2.7.13 - -
    - Bowtie2/2.2.9-intel-2016a - - Bowtie2/2.2.9-intel-2017a - -
    - byacc/20160606-intel-2016b - - byacc/20170201 - - Moved to the system - toolchain -
    - bzip2/1.0.6-intel-2016b - - bzip2/1.0.6-intel-2017a - -
    - cairo/1.15.2-intel-2016b - - cairo/1.15.4-intel-2017a - -
    - CASINO/2.12.1-intel-2015a - / - - Installed - on demand - -
    - CASM/0.2.0-Python-2.7.12 - / - - Installed on demand, compiler problems. - -
    / - CGAL/4.9-intel-2017a-forOpenFOAM - Installed without the components that require Qt and/or OpenGL. -
    - CMake/3.5.2-intel-2016b - - CMake/3.7.2-intel-2017a - -
    - CP2K/4.1-intel-2016b - CP2K/4.1-intel-2017a-bare
    CP2K/4.1-intel-2017a-bare-multiver
    CP2K/5.1-intel-2017a-bare-multiver
    CP2K-5.1/intel-2017a-bare-GPU-noMPI
    -
    The multiver modules contain the sopt, popt, ssmp and psmp binaries.
    The bare-GPU version only works on a single GPU node, support for MPI was not included. It is a ssmp binary using GPU acceleration.
    - CPMD/4.1-intel-2016b - CPMD/4.1-intel-2017a - CPMD is licensed software. -
    - cURL/7.49.1-intel-2016b - - cURL/7.53.1-intel-2017a - -
    / - DIAMOND/0.9.12-intel-2017a - -
    - DLCpar/1.0-intel-2016b-Python-2.7.12 - - DLCpar/1.0-intel-2017a-Python-2.7.13
    DLCpar/1.0-intel-2017a-Python-3.6.1
    -
- Installed for Python 2.7.13 and Python 3.6.1 -
    - Doxygen/1.8.11-intel-2016b - - Doxygen/1.8.13 - - Moved to the - system toolchain -
    - DSSP/2.2.1-intel-2016a - DSSP/2.2.1-intel-2017a -
    -
    - Eigen/3.2.9-intel-2016b - - Eigen/3.3.3-intel-2017a - -
    - elk/3.3.17-intel-2016a - - Elk/4.0.15-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - exonerate/2.2.0-intel-2015a - - Exonerate/2.4.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - expat/2.2.0-intel-2016b - - expat/2.2.0-intel-2017a - -
    / - FastME/2.1.5.1-intel-2017a - -
    - FFTW/3.3.4-intel-2015a - - FFTW/3.3.6-intel-2017a - - There is also a - FFTW-compatible interface in intel/2017a, but it does not work for all - packages. -
    - - file/5.30-intel-2017a - -
    - fixesproto/5.0-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - flex/2.6.0-intel-2016b - - flex/2.6.3-intel-2017a - -
    - fontconfig/2.12.1-intel-2016b - - fontconfig/2.12.1-intel-2017a - -
    - freeglut/3.0.0-intel-2016a - - freeglut/3.0.0-intel-2017a - - Not yet - operational on CentOS 7 - -
    - freetype/2.7-intel-2016b - - freetype/2.7.1-intel-2017a - -
    - FSL/5.0.9-intel-2016a - / - - Installed on - demand - -
    - GAMESS-US/20141205-R1-intel-2015a - / - - Installed on - demand - -
    - gc/7.4.4-intel-2016b - - gc/7.6.0-intel-2017a -
    - GDAL/2.1.0-intel-2016b - - GDAL/2.1.3-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - genometools/1.5.4-intel-2015a - - GenomeTools/1.5.9-intel-2017a - -
    - GEOS/3.5.0-intel-2015a-Python-2.7.9 - - GEOS/3.6.1-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - gettext/0.19.8-intel-2016b - - gettext/0.19.8.1-intel-2017a - -
    - GLib/2.48.1-intel-2016b - - GLib/2.49.7-intel-2017a - -
    - GMAP-GSNAP/2014-12-25-intel-2015a - - GMAP-GSNAP/2017-03-17-intel-2017a - -
    - GMP/6.1.1-intel-2016b - - GMP/6.1.2-intel-2017a - -
    - gnuplot/5.0.0-intel-2015a - - gnuplot/5.0.6-intel-2017a - -
    - GObject-Introspection/1.44.0-intel-2015a - - GObject-Introspection/1.49.2-intel-2017a - -
    - GROMACS/5.1.2-intel-2016a-hybrid - - GROMACS/5.1.2-intel-2017a-hybrid
    GROMACS/2016.3-intel-2017a
    GROMACS/2016.4-intel-2017a-GPU-noMPI
    -
    The GROMACS -GPU-noMPI binary is a binary for the GPU nodes, without support for MPI, so it can only be used on a single GPU node.
    - GSL/2.3-intel-2016b - - GSL/2.3-intel-2017a - -
    / - gtest/1.8.0-intel-2017a - Google C++ Testing Framework -
    - Guile/1.8.8-intel-2016b - - Guile/1.8.8-intel-2017a - -
    - Guile/2.0.11-intel-2016b - - Guile/2.2.0-intel-2017a - -
    - hanythingondemand/3.2.0-intel-2016b-Python-2.7.12 - - hanythingondemand/3.2.0-intel-2017a-Python-2.7.13 - -
    - / - - HarfBuzz/1.3.1-intel-2017a - -
    - HDF5/1.8.17-intel-2016b - - HDF5/1.8.18-intel-2017a
    HDF5/1.8.18-intel-2017a-noMPI -
    HDF5 with and without MPI-support. -
    / - HISAT2/2.0.5-intel-2017a - -
    - HTSeq/0.6.1p1-intel-2016a-Python-2.7.11 - - HTSeq/0.7.2-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - icc/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - iccifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - ifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - imkl/11.3.3.210-iimpi-2016b - - intel/2017a - - Intel compiler - components in a single module. -
    - impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - inputproto/2.3.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - Inspector/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - ipp/8.2.1.133 - - intel/2017a - - Intel compiler - components in a single module. -
    - itac/9.0.2.045 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - / - - JasPer/2.0.12-intel-2017a - -
    - Julia/0.6.0-intel-2017a-Python-2.7.13 - Julia, command line version (so without the Juno IDE). -
    - kbproto/1.0.7-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
- kwant/1.2.2-intel-2016a-Python-3.5.1 - kwant/1.2.2-intel-2017a-Python-3.6.1 - Built with single-threaded libraries as advised in the documentation, which implies that kwant is not exactly an HPC program. -
    - LAMMPS/14May16-intel-2016b - - LAMMPS/31Mar2017-intel-2017a - -
    - - libcerf/1.5-intel-2017a - -
    - libffi/3.2.1-intel-2016b - - libffi/3.2.1-intel-2017a - -
    - - libgd/2.2.4-intel-2017a - -
    - Libint/1.1.6-intel-2016b - - Libint/1.1.6-intel-2017a
    Libint/1.1.6-intel-2017a-CP2K -
    -
    - libint2/2.0.3-intel-2015a - / - - Installed on - demand. - -
    - libjpeg-turbo/1.5.0-intel-2016b - - libjpeg-turbo/1.5.1-intel-2017a - -
    - libmatheval/1.1.11-intel-2016b - - libmatheval/1.1.11-intel-2017a - -
    - libpng/1.6.26-intel-2016b - - libpng/1.6.28-intel-2017a - -
    - libpthread-stubs/0.3-intel-2016b - / - Installed on demand. -
    - libreadline/6.3-intel-2016b - - libreadline/7.0-intel-2017a - -
    - LibTIFF/4.0.6-intel-2016b - - LibTIFF/4.0.7-intel-2017a - -
    - libtool/2.4.6-intel-2016b - - libtool/2.4.6 - - Moved to the - system toolchain -
    - libunistring/0.9.6-intel-2016b - - libunistring/0.9.7-intel-2017a - -
    - libX11/1.6.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXau/1.0.8-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxc/2.2.3-intel-2016b - - libxc/3.0.0-intel-2017a - -
    - libxcb/1.12-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXdmcp/1.1.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXext/1.3.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXfixes/5.0.1-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXi/1.7.6-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxml2/2.9.4-intel-2016b - - libxml2/2.9.4-intel-2017a - -
    - libXrender/0.9.9-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxslt/1.1.28-intel-2016a-Python-3.5.1 - - libxslt/1.1.29-intel-2017a - -
    - libxsmm/1.6.4-intel-2016b - - libxsmm/1.7.1-intel-2017a
    libxsmm/1.8-intel-2017a -
    -
    - libyaml/0.1.6-intel-2016a - / - Installed on demand -
- LLVM/3.9.1-intel-2017a - LLVM compiler backend with libLLVM.so. -
    - lxml/3.5.0-intel-2016a-Python-3.5.1 - - Python/2.7.13-intel-2017a - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 2.7 and 3.6 modules. -
    - M4/1.4.17-intel-2016b - - M4/1.4.18 - - Moved to the - system toolchain -
    / - MAFFT/7.312-intel-2017a-with-extensions - -
    - MAKER-P/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - MAKER-P-mpi/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - matplotlib/1.5.3-intel-2016b-Python-2.7.12 - - Python/2.7.13-intel-2017a
    Python/3.6.1-intel-2017a
    -
    - Integrated in - the standard Python 2.7 and 3.6 modules -
    - MCL/14.137-intel-2016b - - MCL/14.137-intel-2017a - -
    - mdust/1.0-intel-2015a - - mdust/1.0-intel-2017a - -
    - METIS/5.1.0-intel-2016a - - METIS/5.1.0-intel-2017a - -
    - MITE_Hunter/11-2011-intel-2015a - - / - - Installed on - demand. - -
    - molmod/1.1-intel-2016b-Python-2.7.12 - molmod/1.1-intel-2017a-Python-2.7.13 - - Work - in progress, compile problems with newer compilers. - -
    - Mono/4.6.2.7-intel-2016b - - Mono/4.8.0.495-intel-2017a - -
    - Mothur/1.34.4-intel-2015a - / - Installed on demand -
    - MUMPS/5.0.1-intel-2016a-serial
    MUMPS/5.0.0-intel-2015a-parmetis
    -
    - MUMPS-5.1.1-intel-2017a-openmp-noMPI
    MUMPS-5.1.1-intel-2017a-openmp-MPI
    MUMPS-5.1.1-intel-2017a-noOpenMP-noMPI
    -
    -
    - MUSCLE/3.8.31-intel-2015a - - MUSCLE/3.8.31-intel-2017a - -
- NASM/2.12.02-intel-2016b - - NASM/2.12.02 - - Moved to the system toolchain -
    - - ncbi-vdb/2.8.2-intel-2017a - -
    - ncurses/6.0-intel-2016b - - ncurses/6.0-intel-2017a - -
    - NEURON/7.4-intel-2017a - Yale NEURON code -
    - netaddr/0.7.14-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - netCDF/4.4.1-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netCDF-Fortran/4.4.4-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netifaces/0.10.4-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - - NGS/1.3.0 - -
    - numpy/1.9.2-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - numpy/1.10.4-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module -
    - NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.10 - - NWChem/6.6.r27746-intel-2017a-Python-2.7.13 - - On demand on Hopper. -
    / - OpenFOAM/4.1-intel-2017a - Installed without the components that require OpenGL and/or Qt (which should only be in the postprocessing) -
    - OpenMX/3.8.1-intel-2016b - - OpenMX/3.8.3-intel-2017a - -
    / - OrthoFinder/1.1.10-intel-2017a - -
    - / - - Pango/1.40.4-intel-2017a - -
    - ParMETIS/4.0.3-intel-2015b - - ParMETIS/4.0.3-intel-2017a - -
    - pbs-drmaa/1.0.18-intel-2015a - / - Installed on demand -
    - / - - pbs_PRISMS/1.0.1-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by CASM -
    - pbs_python/4.6.0-intel-2016b-Python-2.7.12 - - pbs_python/4.6.0-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by hanythingondemand -
    - PCRE/8.38-intel-2016b - - PCRE/8.40-intel-2017a - -
    - Perl/5.20.1-intel-2015a - - Perl/5.24.1-intel-2017a - -
    - pixman/0.34.0-intel-2016b - - pixman/0.34.0-intel-2017a - -
    - pkg-config/0.29.1-intel-2016b - - pkg-config/0.29.1 - - Moved to the - system toolchain -
    - PLUMED/2.3.0-intel-2016b - - PLUMED/2.3.0-intel-2017a - -
    - PROJ/4.9.2-intel-2016b - - PROJ/4.9.3-intel-2017a - -
    / - protobuf/3.4.0-intel-2017a - Google Protocol Buffers -
    - Pysam/0.9.1.4-intel-2016a-Python-2.7.11 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module. Also load SAMtools to use. -
    - Pysam/0.9.1.2-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. Also load SAMtools to use. -
    - Python/2.7.12-intel-2016b - - Python/2.7.13-intel-2017a - -
    - Python/3.5.1-intel-2016a - - Python/3.6.1-intel-2017a - -
    - QuantumESPRESSO/5.2.1-intel-2015b-hybrid - QuantumESPRESSO/6.1-intel-2017a - - Work in progress. -
    - R/3.3.1-intel-2016b - - R/3.3.3-intel-2017a - -
    - RAxML/8.2.9-intel-2016b-hybrid-avx - RAxML/8.2.10-intel-2017a-hybrid - We suggest users try RAxML-ng (still beta) which is supposedly much faster and better adapted to new architectures and can be installed on demand. -
    / - RAxML-NG/0.4.1-intel-2017a-pthreads
    - RAxML-NG/0.4.1-intel-2017a-hybrid -
    RAxML Next Generation beta, compiled for shared memory (pthreads) and hybrid -distributed-shared memory (hybrid, uses MPI and pthreads). -
    - R-bundle-Bioconductor/3.3-intel-2016b-R-3.3.1 - - R/3.3.3-intel-2017a - - Integrated in - the standard R module. -
    - renderproto/0.11.1-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - RepeatMasker/4.0.5-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - RMBlast/2.2.28-intel-2015a-Python-2.7.9 - / - - Installed - on demand; compiler problems. - -
    - SAMtools/0.1.19-intel-2015a - - SAMtools/1.4-intel-2017a - -
    - scikit-umfpack/0.2.1-intel-2015b-Python-2.7.10 - / - Installed on demand -
    - scikit-umfpack/0.2.1-intel-2016a-Python-3.5.1 - scikit-umfpack/0.2.3-intel-2017a-Python-3.6.1 - -
    - scipy/0.15.1-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - scipy/0.16.1-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. -
    - SCons/2.5.1-intel-2016b-Python-2.7.12 - - SCons/2.5.1-intel-2017a-Python-2.7.13 - - On demand on - CentOS 7; also in the system toolchain. - -
    - SCOTCH/6.0.4-intel-2016a - - SCOTCH/6.0.4-intel-2017a - -
    - Siesta/3.2-pl5-intel-2015a - - Siesta/4.0-intel-2017a - -
    - SNAP/2013-11-29-intel-2015a - / - - Installed on - demand - -
    - spglib/1.7.4-intel-2016a - / - Installed on demand -
    - SQLite/3.13.0-intel-2016b - - SQLite/3.17.0-intel-2017a - -
    - SuiteSparse/4.4.5-intel-2015b-ParMETIS-4.0.3 - SuiteSparse/4.5.5-intel-2015b-ParMETIS-4.0.3 - -
    - SuiteSparse/4.4.5-intel-2016a-METIS-5.1.0 - SuiteSparse/4.4.5-intel-2017a-METIS-5.1.0
    SuiteSparse/4.5.5-intel-2017a-METIS-5.1.0
    -
    Older version as it is known to be compatible with our Python packages. -
    - SWIG/3.0.7-intel-2015b-Python-2.7.10 - - SWIG/3.0.12-intel-2017a-Python-2.7.13 - -
    - SWIG/3.0.8-intel-2016a-Python-3.5.1 - - SWIG/3.0.12-intel-2017a-Python-3.6.1 - -
    - Szip/2.1-intel-2016b - - Szip/2.1.1-intel-2017a - -
    - tbb/4.3.2.135 - - intel/2017a - - Intel compiler - components in a single module. -
    - Tcl/8.6.5-intel-2016b - - Tcl/8.6.6-intel-2017a - -
    - TELEMAC/v7p2r0-intel-2016b - TELEMAC/v7p2r0-intel-2017a
    TELEMAC/v7p2r1-intel-2017a
    TELEMAC/v7p2r2-intel-2017a
    TELEMAC/v7p3r0-intel-2017a

    - TINKER/7.1.3-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - Tk/8.6.5-intel-2016b - - Tk/8.6.6-intel-2017a - -
- TopHat/2.1.1-intel-2016a - / - - TopHat is no longer developed; its developers advise switching to HISAT2, which is more accurate and more efficient. TopHat does not compile with the intel/2017a compilers. -
    VASP - VASP/5.4.4-intel-2016b
    VASP/5.4.4-intel-2016b-vtst-173 -
    VASP has not been installed in the 2017a toolchain due to performance regressions and occasional run time errors with the Intel 2017 compilers and hence has been made available in the intel/2016b toolchain. -
    - Voro++/0.4.6-intel-2016b - - Voro++/0.4.6-intel-2017a - -
    - vsc-base/2.5.1-intel-2016b-Python-2.7.12 - - / - -
    - vsc-install/0.10.11-intel-2016b-Python-2.7.12 - - vsc-install/0.10.25-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - vsc-mympirun/3.4.3-intel-2016b-Python-2.7.12 - - vsc-mympirun/3.4.3-intel-2017a-Python-2.7.13 - -
    - VTune/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - worker/1.5.1-intel-2015a - - worker-1.6.7-intel-2017a - -
    - X11/20160819-intel-2016b - - X11/20170129-intel-2017a - -
    - xcb-proto/1.12 - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xextproto/7.3.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xorg-macros/1.19.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xproto/7.0.29-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xtrans/1.3.5-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - XZ/5.2.2-intel-2016b - - XZ/5.2.3-intel-2017a - -
    - zlib/1.2.8-intel-2016b - - zlib/1.2.11-intel-2017a - -
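As a minimal usage sketch (module names taken from the table above; availability and default versions differ per cluster, so treat the exact version string as an assumption), the 2017a builds are loaded with the standard module commands, e.g.:
$ module avail Python
$ module load Python/3.6.1-intel-2017a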
    " - diff --git a/HtmlDump/file_0789.html b/HtmlDump/file_0789.html deleted file mode 100644 index b98554c37..000000000 --- a/HtmlDump/file_0789.html +++ /dev/null @@ -1,393 +0,0 @@ -

    Foss toolchain

    - Latest pre-2017a - - 2017a - - Comments -
    - ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Moved to the Intel toolchain.
    -
    - ATLAS/3.10.2-foss-2015a-LAPACK-3.4.2 - - - Installed on - demand - -
    - CMake/3.5.2-foss-2016b - - CMake/3.7.2-foss-2017a - -
    - Cufflinks/2.2.1-foss-2015a - - - Installed - on demand - -
    - cURL/7.41.0-foss-2015a - - - Installed - on demand - -
    - Cython/0.22.1-foss-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated into - the standard Python module for the intel toolchains -
    - FFTW/3.3.4-gompi-2016b - - FFTW/3.3.6-gompi-2017a - -
    - GSL/2.1-foss-2015b - - - Installed - on demand - -
    - HDF5/1.8.14-foss-2015a - - - Installed - on demand - -
    - libpng/1.6.16-foss-2015a - - - Installed - on demand - -
    - libreadline/6.3-foss-2015a - - - Installed - on demand - -
    - makedepend/1.0.5-foss-2015a - - -
    - MaSuRCA/2.3.2-foss-2015a - - - Installed - on demand - -
    - ncurses/6.0-foss-2016b - - - Installed - on demand - -
    - pbs-drmaa/1.0.18-foss-2015a - - - Installed - on demand - -
    - Perl/5.20.1-foss-2015a - - - Installed - on demand - -
    - Python/2.7.9-foss-2015a - - - Python is - available in the Intel toolchain. -
    - SAMtools/0.1.19-foss-2015a - - - Newer versions - with intel toolchain -
    - SPAdes/3.10.1-foss-2016b - - SPAdes/3.10.1-foss-2017a - -
    - Szip/2.1-foss-2015a - - - Installed - on demand - -
    - zlib/1.2.8-foss-2016b - - zlib/1.2.11-foss-2017a - -
    -

    Gompi

- Latest pre-GCC-6.3.0 (2017a) - gompi-2017a - Comments -
    - ScaLAPACK/2.0.2-gompi-2016b-OpenBLAS-0.2.18-LAPACK-3.6.1 - - ScaLAPACK/2.0.2-gompi-2017a-OpenBLAS-0.2.19-LAPACK-3.7.0 - -
    -

    GCC

    - Latest pre-gompi-2017a - - GCC-6.3.0 (2017a) - - Comments -
    - OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1 - - OpenBLAS/0.2.19-GCC-6.3.0-2.27-LAPACK-3.7.0 - -
    - numactl/2.0.11-GCC-5.4.0-2.26 - - numactl/2.0.11-GCC-6.3.0-2.27 - -
    - OpenMPI/1.10.3-GCC-5.4.0-2.26 - - OpenMPI/2.0.2-GCC-6.3.0-2.27 - -
    - MPICH/3.1.4-GCC-4.9.2 - - / - -
    -

    GCCcore

- Latest pre-GCCcore-6.3.0 (2017a) - GCCcore-6.3.0 (2017a) - Comments -
    - binutils/2.26-GCCcore-5.4.0 - - binutils/2.27-GCCcore-6.3.0 - -
    - flex/2.6.0-GCCcore-5.4.0 - - flex/2.6.3-GCCcore-6.3.0 - -
    - Lmod/7.0.5 - - - Default - module tool on CentOS 7 -
    " - diff --git a/HtmlDump/file_0791.html b/HtmlDump/file_0791.html deleted file mode 100644 index 61736be6c..000000000 --- a/HtmlDump/file_0791.html +++ /dev/null @@ -1,524 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    - Pre-2017 - - Latest module - - Comments -
    - ant/1.9.4-Java-8 - - ant/1.10.1-Java-8 - -
    - / - - Autoconf/2.69 - -
    - / - - AutoDock_Vina/1.1.2 - -
    - / - - Automake/1.15 - -
    - / - - Autotools/2016123 - -
/ - Bazel/0.5.3 - Google's build tool. Not installed on the Scientific Linux 6 nodes of hopper. -
    - binutils/2.26 - - binutils/2.27 - -
    - Bison/3.0.4 - - Bison/3.0.4 - -
    - BRATNextGen/20150505 - - - Installed on - demand - -
    - / - - byacc/20170201 - -
    - / - - CMake/3.7.2 - -
    - - core-counter/1.1 - -
    - CPLEX/12.6.3 - - - Installed on - demand on Leibniz. - -
    - DFTB+/1.2.2 - - - Installed - on demand on Leibniz. - -
    - / - - Doxygen/1.8.13 - -
- EasyBuild/… - - EasyBuild/3.1.2 - -
    - FastQC/0.11.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - FINE-Marine/5.2 - - - Installed - on demand on Leibniz. - -
    - - flex/2.6.0
    flex/2.6.3
    -
    -
    - GATK/3.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - Gaussian16/g16_A3-AVX - - - Work in progress. -
    - Gurobi/6.5.1 - - - Installed - on demand on Leibniz. - -
    - Hadoop/2.6.0-cdh5.4.5-native - - - Installed - on demand on Leibniz. - -
    - - help2man/1.47.4 - -
    - Java/8 - - Java/8 - -
    - - JUnit/4.12-Java-8 - -
    - / - - libtool/2.4.6 - -
    - M4/1.4.17 - - M4/1.4.18 - -
    - MATLAB/R2016a - - MATLAB/R2017a - -
    - Maven/3.3.9 - - - Installed on - demand on Leibniz. - -
    - MGLTools/1.5.7rc1 - - - Installed on - demand on Leibniz. - -
    - MlxLibrary/1.0.0 - - - Lixoft Simulx -
    - MlxPlore/1.1.1 - - - Lixoft MLXPlore -
    - monitor/1.1.2 - - monitor/1.1.2 - -
    - Monolix/2016R1 - - - Installed on - demand on Leibniz. - -
    - / - - NASM/2.12.02 - -
    - Newbler/2.9 - - / - - On request, has - not been used recently. -
    - Novoalign/3.04.02 - - - Installed on - demand on Leibniz. - -
    - ORCA/3.0.3 - - - Installed on - demand on Leibniz. - -
    - p4vasp/0.3.29 - - - Installed on - demand on Leibniz. - -
    - parallel/20160622 - - parallel/20170322 - -
    - / - - pkg-config/0.29.1 - -
    - protobuf/2.5.0 - - protobuf/2.6.1 - -
    - Ruby/2.1.10 - - Ruby/2.4.0 - -
    - / - - SCons/2.5.1 - -
    - scripts/4.0.0 - - On request, has not been used recently. -
setuptools/1.4.2 - On request, has not been used recently.
Spark/2.0.2 - On request, has not been used recently.
TRF/4.07.b - On request, has not been used recently.
TRIQS/1.2.0 - On request, has not been used recently.
viral-ngs/1.4.2 - On request, has not been used recently.
vsc-base/2.5.1 - Used to be in compiler toolchains
    " - diff --git a/HtmlDump/file_0793.html b/HtmlDump/file_0793.html deleted file mode 100644 index 3c367f045..000000000 --- a/HtmlDump/file_0793.html +++ /dev/null @@ -1,71 +0,0 @@ -

    Introduction

Most of the useful R functionality comes in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure. Given the astounding number of packages, it is not sustainable to install each and every one of them system-wide. Since it is very easy for users to install packages just for themselves or for their research group, this is not a problem. Do not hesitate to contact support whenever you encounter trouble doing so. -

    Installing your own packages using conda

    The easiest way to install and manage your own R environment is conda. -

    Installing Miniconda

If you already have Miniconda installed, you can skip ahead to the next section; if not, we start by installing it. Download the Bash script that will install it from conda.io using, e.g., wget: -

    $ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    -

    Once downloaded, run the installation script: -

    $ bash Miniconda3-latest-Linux-x86_64.sh -b -p $VSC_DATA/miniconda3
    -

    Optionally, you can add the path to the Miniconda -installation to the PATH environment variable in your .bashrc file. -This is convenient, but may lead to conflicts when working with the -module system, so make sure that you know what you are doing in either -case. The line to add to your .bashrc file would be: -

export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"
    -
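If you prefer not to edit your .bashrc, you can instead make conda available in the current shell session only and verify that the installation works; a minimal sketch (the reported version will differ):
$ export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"
$ conda --version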

    Creating an environment

    First, ensure that the -Miniconda installation is in your PATH environment variable. The -following command should return the full path to the conda command: -

    $ which conda
    -

If the result is blank, or reports that conda cannot be found, modify the `PATH` environment variable appropriately by adding Miniconda's bin directory to PATH. -

    Creating a new conda environment is straightforward: -

    $ conda create -n science -c r r-essentials r-rodbc
    -

    This command creates a new conda environment called science, -and installs a number of R packages that you will probably want to -have handy in any case to preprocess, visualize, or postprocess your -data. You can of course install more, depending on your requirements and - personal taste. -
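To verify what was created, you can, for example, list your conda environments and the packages installed in the new one (science is the environment name used above):
$ conda env list
$ conda list -n science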

    Working with the environment

    To work with an environment, you have to activate it. This is done with, e.g., -

    $ source activate science
    -

    Here, science is the name of the environment you want to work in. -
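Once the environment is activated, the R interpreter from that environment should be the one found in your PATH; a quick check could look like this:
$ which R
$ R --version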

    Install an additional package

To install an additional package, e.g., `ggplot2`, first ensure that the environment you want to work in is activated. -

    $ source activate science
    -

    Next, install the package: -

    $ conda install -c r r-ggplot2
    -

Note that conda will take care of all dependencies, including non-R libraries. This ensures that you work in a consistent environment. -
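If you are unsure under which name an R package is available, you can search the channel first; packages in the r channel are typically prefixed with r-, e.g.:
$ conda search -c r 'r-rodbc*'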

    Updating/removing

    Using conda, it is easy to keep your packages up-to-date. Updating a single package (and its dependencies) can be done using: -

    $ conda update r-rodbc
    -

Updating all packages in the environment is trivial: -

    $ conda update --all
    -

    Removing an installed package: -

    $ conda remove r-mass
    -
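If you no longer need an environment at all, it can be removed completely; note that this deletes the environment named science and everything installed in it:
$ conda env remove -n science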

    Deactivating an environment

    To deactivate a conda environment, i.e., return the shell to its original state, use the following command -

    $ source deactivate
    -

    More information

    Additional information about conda can be found on its documentation site. -

    Alternatives to conda -

Setting up your own R package library is straightforward. -

1. Load the appropriate R module, i.e., the one you want the R package to be available for:
   $ module load R/3.2.1-foss-2014a-x11-tcl
2. Start R and install the package:
   > install.packages("DEoptim")
3. Alternatively, you can download the desired package:
   $ wget cran.r-project.org/src/contrib/Archive/DEoptim/DEoptim_2.0-0.tar.gz
   and install it from the command line into your own library directory:
   $ R CMD INSTALL DEoptim_2.0-0.tar.gz -l $VSC_HOME/R/
4. These packages might depend on the specific R version, so you may need to reinstall them for each R version you use. A sketch for making such a personal library location persistent is given below.
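A minimal sketch of making such a personal library persistent across sessions, assuming you keep it in $VSC_HOME/R as in the example above: create the directory once and point the standard R_LIBS_USER environment variable to it (e.g., in your .bashrc), so that R adds it to its library search path automatically:
$ mkdir -p $VSC_HOME/R
$ export R_LIBS_USER=$VSC_HOME/R
$ R CMD INSTALL DEoptim_2.0-0.tar.gz -l $R_LIBS_USER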
    " - diff --git a/HtmlDump/file_0795.html b/HtmlDump/file_0795.html deleted file mode 100644 index e162d9c95..000000000 --- a/HtmlDump/file_0795.html +++ /dev/null @@ -1,37 +0,0 @@ -

The 4th VSC Users Day was held at the "Paleis der Academiën", the seat of the "Royal Flemish Academy of Belgium for Science and the Arts", at Hertogstraat 1, 1000 Brussels, on May 22, 2018.

    Program

    The titles in the program link to slides or abstracts of the presentations. -

    Abstracts of workshops

    VSC for starters -

The workshop provides a smooth introduction to supercomputing for new users. Starting from common concepts in personal computing, the similarities and differences with supercomputing are highlighted and some essential terminology is introduced. It is explained what users can expect from supercomputing and what not, as well as what is expected from them as users. -

    Start to GPU -

GPUs have become an important source of computational power. They are extremely well suited to some workloads, e.g. machine learning frameworks, and application vendors are providing more and more support, so it is important to keep track of what is happening in your research field. This workshop will provide you with an overview of the available GPU power within the VSC and will give you guidelines on how you can start using it. -

    Code debugging -

    All code contains bugs, and that is frustrating. Trying to identify and eliminate them is tedious work. The extra complexity in parallel code makes this even harder. However, using coding best practices can reduce the number of bugs in your code considerably, and using the right tools for debugging parallel code will simplify and streamline the process of fixing your code. Familiarizing yourself with best practices will give you an excellent return on investment. -

    Code optimization -

Performance is a key concern in HPC (High Performance Computing). As a developer, but also as an application user, you have to be aware of the impact of modern computer architecture on the efficiency of your code. Profilers can help you identify performance hotspots so that you can improve the performance of your code systematically. Profilers can also help you to find the limiting factors when you run an application, so that you can improve your workflow to try and overcome those as much as possible. -

    Paying attention to efficiency will allow you to scale your research to higher accuracy and fidelity.

    " - diff --git a/Makefile b/Makefile index f1ee0b0b4..a8bbfb37c 100644 --- a/Makefile +++ b/Makefile @@ -20,8 +20,12 @@ web: Makefile check: Makefile $(RM) -r build mkdir build - @$(SPHINXBUILD) -b html -a -n -q -N "$(SOURCEDIR)" "$(BUILDDIR)" \ - $(SPHINXOPTS) $(O) + $(SPHINXBUILD) -b html -a -n -q "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +linkcheck: Makefile + $(RM) -r build + mkdir build + $(SPHINXBUILD) -b linkcheck -a -n "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). diff --git a/Other/file_0001.rst b/Other/file_0001.rst deleted file mode 100644 index 528bcfd68..000000000 --- a/Other/file_0001.rst +++ /dev/null @@ -1 +0,0 @@ -© FWO diff --git a/Other/file_0002.rst b/Other/file_0002.rst deleted file mode 100644 index fe947b672..000000000 --- a/Other/file_0002.rst +++ /dev/null @@ -1,9 +0,0 @@ -The VSC-infrastructure consists of two layers. The central Tier-1 -infrastructure is designed to run large parallel jobs. It also contains -a small accelerator testbed to experiment with upcoming technologies. -The Tier-2 layer runs the smaller jobs, is spread over a number of -sites, is closer to users and more strongly embedded in the campus -networks. The Tier-2 clusters are also interconnected and integrated -with each other. - -" diff --git a/Other/file_0002_uniq.rst b/Other/file_0002_uniq.rst deleted file mode 100644 index 00ddc910a..000000000 --- a/Other/file_0002_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -closer diff --git a/Other/file_0003.rst b/Other/file_0003.rst deleted file mode 100644 index 71e5d4470..000000000 --- a/Other/file_0003.rst +++ /dev/null @@ -1,6 +0,0 @@ -This infrastructure is accessible to all scientific research taking -place in Flemish universities and public research institutes. In some -cases a small financial contribution is required. Industry can use the -infrastructure for a fee to cover the costs associated with this. - -" diff --git a/Other/file_0004.rst b/Other/file_0004.rst deleted file mode 100644 index 1b4b369f3..000000000 --- a/Other/file_0004.rst +++ /dev/null @@ -1,6 +0,0 @@ -What is a supercomputer? -======================== - -A supercomputer is a very fast and extremely parallel computer. Many of -its technological properties are comparable to those of your laptop or -even smartphone. But there are also important differences. diff --git a/Other/file_0005.rst b/Other/file_0005.rst deleted file mode 100644 index adb000465..000000000 --- a/Other/file_0005.rst +++ /dev/null @@ -1,9 +0,0 @@ -The VSC in Flanders -=================== - -The VSC is a partnership of five Flemish university associations. The -Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, -Brussels, Ghent and Louvain. There is also a local support office in -Hasselt. - -" diff --git a/Other/file_0006.rst b/Other/file_0006.rst deleted file mode 100644 index 40b1b1c81..000000000 --- a/Other/file_0006.rst +++ /dev/null @@ -1,5 +0,0 @@ -Tier-1 infrastructure -===================== - -Central infrastructure for large parallel compute jobs and an -experimental accelerator system. diff --git a/Other/file_0007.rst b/Other/file_0007.rst deleted file mode 100644 index 6a71ef689..000000000 --- a/Other/file_0007.rst +++ /dev/null @@ -1,5 +0,0 @@ -Tier-2 infrastructure -===================== - -| An integrated distributed infrastructure for smaller supercomputing - jobs with varying hardware needs. 
diff --git a/Other/file_0008.rst b/Other/file_0008.rst deleted file mode 100644 index 9c8a682d9..000000000 --- a/Other/file_0008.rst +++ /dev/null @@ -1,4 +0,0 @@ -Getting access -============== - -Who can access, and how do I get my account? diff --git a/Other/file_0009.rst b/Other/file_0009.rst deleted file mode 100644 index ffdaf44e5..000000000 --- a/Other/file_0009.rst +++ /dev/null @@ -1,6 +0,0 @@ -Tier-1 starting grant -===================== - -A programme to get a free allocation on the Tier-1 supercomputer to -perform the necessary tests to prepare a regular Tier-1 project -application. diff --git a/Other/file_0010.rst b/Other/file_0010.rst deleted file mode 100644 index ad95888e6..000000000 --- a/Other/file_0010.rst +++ /dev/null @@ -1,5 +0,0 @@ -Project access Tier-1 -===================== - -A programme to get a compute time allocation on the Tier-1 -supercomputers based on an scientific project with evaluation. diff --git a/Other/file_0011.rst b/Other/file_0011.rst deleted file mode 100644 index c790d7900..000000000 --- a/Other/file_0011.rst +++ /dev/null @@ -1,6 +0,0 @@ -Buying compute time -=================== - -Without an awarded scientific project, it is possible to buy compute -time. We also offer a free try-out so you can test if our infrastructure -is suitable for your needs. diff --git a/Other/file_0012.rst b/Other/file_0012.rst deleted file mode 100644 index 5d3fa761e..000000000 --- a/Other/file_0012.rst +++ /dev/null @@ -1 +0,0 @@ -Need help ? Have more questions ? diff --git a/Other/file_0013.rst b/Other/file_0013.rst deleted file mode 100644 index da8c6c040..000000000 --- a/Other/file_0013.rst +++ /dev/null @@ -1,6 +0,0 @@ -User portal -=========== - -| On these pages, you will find everything that is useful for users of - our infrastructure: the user documentation, server status, upcoming - training programs and links to other useful information on the web. diff --git a/Other/file_0015.rst b/Other/file_0015.rst deleted file mode 100644 index ac9546e27..000000000 --- a/Other/file_0015.rst +++ /dev/null @@ -1,2 +0,0 @@ -Below we give information about current downtime (if applicable) and -planned maintenance of the various VSC clusters. diff --git a/Other/file_0015_uniq.rst b/Other/file_0015_uniq.rst deleted file mode 100644 index 9099b2774..000000000 --- a/Other/file_0015_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -downtime diff --git a/Other/file_0023.rst b/Other/file_0023.rst deleted file mode 100644 index 780052ae0..000000000 --- a/Other/file_0023.rst +++ /dev/null @@ -1,112 +0,0 @@ -There is no clear agreement on the exact definition of the term -‘supercomputer’. Some say a supercomputer is a computer with at least 1% -of the computing power of the fastest computer in the world. But -according to this definition, there are currently only a few hundred -supercomputers in the world. `The TOP500 -list <\%22https://www.top500.org/\%22>`__ is a list of the supposedly -500 fastest computers in the world, updated twice a year. - -One could take 1‰ of the performance of the fastest computer as the -criterion, but it is an arbitrary criterion. Stating that a -supercomputer should perform at least X trillion computations per -second, is not a useful definition. Because of the fast evolution of the -technology, this definition would be outdated in a matter of years. The -first smartphone of a well-known manufacturer launched in 2007 had about -the same computing power and more memory than the computer used to -predict the weather in Europe 30 years earlier. 
- -So what is considered as a ‘supercomputer’ is very time-bound, at least -in terms of absolute compute power. So let us just agree that a -supercomputer is a computer that is hundreds or thousands times faster -than your smartphone or laptop. - -But is a supercomputer so different from your laptop or smartphone? Yes -and no. Since roughly 1975 the key word in supercomputing is -parallelism. But this also applies for your PC or smartphone. PC -processor manufacturers started to experiment with simple forms of -parallelism at the end of the nineties. A few years later the first -processors appeared with multiple cores that could perform calculations -independently from each other. A laptop has mostly 2 or 4 cores and -modern smartphones have 2, 4 or in some rare cases 8 cores. Although it -must be added that they are a little slower than the ones on a typical -laptop. - -Around 1975 manufacturers started to experiment with vector processors. -These processors perform the same operation to a set of numbers -simultaneously. Shortly thereafter supercomputers with multiple -processors working independently from each other, appeared on the -market. Similar technologies are nowadays used in the processor chips of -laptops and smartphones. In the eighties, supercomputer designers -started to experiment with another kind of parallelism. Several rather -simple processors - this was sometimes just standard PC processors like -the venerable Intel 80386 were linked together with fast networks and -collaborated to solve large problems. These computers were cheaper to -develop, much simpler to build, but required frequent changes to the -software. - -In modern supercomputers, parallelism is pushed to extremes. In most -supercomputers, all forms of parallelism mentioned above are combined at -an unprecedented scale and can take on extreme forms. All modern -supercomputers rely on some form of vector computing or related -technologies and consist of building blocks - *nodes* - uniting tens of -cores and interconnecting through a fast network to a larger whole. -Hence the term ‘compute cluster’ is often used. - -Supercomputers must also be able to read and interpret data is ‘at a -very high speed. Here the key word is also parallellism. Many -supercomputers have several network connections to the outside world. -Their permanent storage system consists of hundreds or even thousands of -hard disks or SSDs linked together to one extremely large and extremely -fast storage system. This type of technology has probably not influenced -significantly the development of laptops as it would not be very -practical to carry a laptop around with 4 hard drives. Yet this -technology does appear to some extent in modern, fast SSD drives in some -laptops and smartphones. The faster ones use several memory chips in -parallel to increase their performance and it is a standard technology -in almost any server storing data. - -As we have already indicated to some extent in the text above, a -supercomputer is more than just hardware. It also needs properly written -software. or Java program you wrote during your student years will not -run a 10. 000 times faster because you run it on a supercomputer. On the -contrary, there is a fair chance that it won't run at all or run slower -than on your PC. Most supercomputers - and all supercomputers at the VSC -- use a variant of the Linux operating system enriched with additional -software to combine all compute nodes in one powerful supercomputer. 
Due -to the high price of such a computer, you're rarely the only user but -will rather share the infrastructure with others. - -So you may have to wait a little before your program runs. Furthermore -your monitor is not directly connected to the supercomputer. Proper -software is also required here with your application software having to -be adapted to run well on a supercomputer. Without these changes, your -program will not run much faster than on a regular PC. You may of course -still run hundreds or thousands copies simultaneously, when you for -example wish to explore a parameter space. This is called ‘capacity -computing’. - -If you wish to solve truly large problems within a reasonable timeframe, -you will have to adapt your application software to maximize every form -of parallellism within a modern supercomputer and use several hundreds, -or even thousands, of compute cores simultaneously to solve one large -problem. This is called ‘capability computing’. Of course, the problem -you wish to solve has to be large enough for this approach to make -sense. Every problem has an intrinsic limit to the speedup you can -achieve on a supercomputer. The larger the problem, the higher speedup -you can achieve. - -This also implies that a software package that was cutting edge in your -research area 20 years ago, is unlikely to be so anymore because it is -not properly adapted to modern supercomputers, while new applications -exploit supercomputers much more efficiently and subsequently generate -faster, more accurate results. - -To some extent this also applies to your PC. Here again you are dealing -with software exploiting the parallelism of a modern PC quite well or -with software that doesn't. As a ‘computational scientist’ or -supercomputer user you constantly have to be open to new developments -within this area. Fortunately, in most application domains, a lot of -efficient software already exists which succeeds in using all the -parallellism that can be found in modern supercomputers. - -" diff --git a/Other/file_0023_uniq.rst b/Other/file_0023_uniq.rst deleted file mode 100644 index 75b552b62..000000000 --- a/Other/file_0023_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -dealing chance top500 unlikely laptops Shortly smartphones nineties extremes intrinsic TOP500 parallellism extent constantly interconnecting cutting pushed criterion Proper 000 extreme Fortunately manufacturer manufacturers uniting subsequently hundred Stating 80386 edge designers collaborated trillion eighties thereafter 1975 interpret diff --git a/Other/file_0025.rst b/Other/file_0025.rst deleted file mode 100644 index b5a23ccd1..000000000 --- a/Other/file_0025.rst +++ /dev/null @@ -1,92 +0,0 @@ -The successor of Muk is expected to be installed in the spring 2016. - -There is also a small test cluster for experiments with accellerators -(GPU and Intel Xeon Phi) with a view to using this technology in future -VSC clusters. - -The Tier-1 cluster Muk ----------------------- - -The Tier-1 cluster Muk has 528 computing nodes, each with two 8-core -Intel Xeon processors from the Sandy Bridge generation (E5-2670, 2.6 -GHz). Each node features 64 GiB RAM, for a total memory capacity of more -than 33 TiB. The computing nodes are connected by an FDR InfiniBand -interconnect with a fat tree topology. This network has a high bandwidth -(more than 6,5GB / s per direction per link) and a low latency. The -storage is provided by a disk system with a total disk capacity of 400 -TB and a peak bandwidth of 9.5 GB / s. 
- -The cluster achieves a peak performance of more than 175 Tflops and a -Linpack performance of 152.3 Tflops. With this result, the cluster was -for 5 consecutive periods in the Top500 list of fastest supercomputers -in the world: - -+-----------+-----------+-----------+-----------+-----------+-----------+ -| List | 06/2012 | 11/2012 | 06/2013 | 11/2013 | 06/2014 | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| Position | 118 | 163 | 239 | 306 | 430 | -+-----------+-----------+-----------+-----------+-----------+-----------+ - -In November 2014 the cluster fell just outside the list but still took -99% of the performance of the system in place 500. - -Accellerator testbed --------------------- - -In addition to the tier-1 cluster Muk, the VSC has an experimental GPU / -Xeon Phi cluster. 8 nodes in this cluster have 2 K20x nVidia GPUs with -accompanying software stack, and 8 nodes are equipped with two Intel -Xeon Phi 5110P (\"Knight's Corner\" generation) boards. The nodes are -interconnected by means of a QDR InfiniBand network. For practical -reasons, these nodes were integrated into the KU Leuven / Hasselt -University Tier-2 infrastructure. - -Software --------- - -Like on all other VSC-clusters, the operating system of Muk is a variant -of Linux, in this case Scientific Linux which in turn based on Red Hat -Linux. The system also features a comprehensive stack of software -development tools which includes the GNU and Intel compilers, debugger -and profiler for parallel applications and different versions of OpenMPI -and Intel MPI. - -There is also an extensive set of freely available applications -installed on the system. More software can be installed at the request -of the user. Users however have to take care of the software licenses -when the software is not freely available, and therefore also for the -financing of that license. - -`Detailed overview of the installed -software <\%22/cluster-doc/software/tier1-muk\%22>`__ - -Access to the Tier-1 system ---------------------------- - -Academic users can access the Tier-1 cluster Muk through a project -application. There are two types of project applications - -- The Tier-1 starting grant of up to 100 node days to test and / or to - optimize software, typically with a view to a regular request for - computing time. There is a continuous assessment process for this - project type. - `Learn - more <\%22/en/access-and-infrastructure/tier1-starting-grant\%22>`__ -- The regular project application, for allocations between 500 and 5000 - node days. The applications are assessed on scientific excellence and - technical feasibility by an evaluation committee of foreign experts. - There are three cut-off dates a year at which the submitted project - proposals are evaluated. The users are also expected to pay a small - contribution towards the cost. - `Learn - more <\%22/en/access-and-infrastructure/project-access-tier1\%22>`__ - -To use the GPU / Xeon Phi cluster it is sufficient to contact the `HPC -coordinator of your institution <\%22/en/about-vsc/contact\%22>`__. - -Industrial users and non-Flemish research institutions and -not-for-profit organizations can also `purchase computing time on the -Tier-1 -Infrastructure <\%22/en/access-and-infrastructure/access-industry\%22>`__. -For this you can contact the `Hercules -Foundation <\%22/en/about-vsc/contact\%22>`__. 
diff --git a/Other/file_0025_uniq.rst b/Other/file_0025_uniq.rst deleted file mode 100644 index 710ada613..000000000 --- a/Other/file_0025_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -optimize continuous fell allocations Learn organizations excellence Accellerator Position Detailed assessed Knight achieves profiler feasibility 5GB accompanying diff --git a/Other/file_0027.rst b/Other/file_0027.rst deleted file mode 100644 index 0e17e420f..000000000 --- a/Other/file_0027.rst +++ /dev/null @@ -1,56 +0,0 @@ -| The VSC does not only rely on the Tier-1 supercomputer to respond to - the need for computing capacity. The HPC clusters of the University of - Antwerp, VUB, Ghent University and KU Leuven constitute the VSC Tier-2 - infrastructure, with a total computing capacity of 416.2 TFlops. - Hasselt University invests in the HPC cluster of Leuven. Each cluster - has its own specificity and is managed by the university’s dedicated - HPC/ICT team. The clusters are interconnected with a 10 Gbps BELNET - network, ensuring maximal cross-site access to the different cluster - architectures. For instance, a VSC user from Antwerp can easily log in - to the infrastructure at Leuven. - -Infrastructure --------------- - -- `The Tier-2 of the University of - Antwerp <\%22/infrastructure/hardware/hardware-ua\%22>`__ consists of - a cluster with 168 nodes, accounting for 3.360 cores (336 processors) - and 75 TFlops. Storage capacity is 100 TB. By the spring of 2017 a - new cluster will gradually becoming available, containing 152 regular - compute nodes and some facilities for visualisation and to test - GPU-computing and Xeon Phi computing. -- `The Tier-2 of VUB - (Hydra) <\%22/infrastructure/hardware/hardware-vub\%22>`__ consists - of 3 clusters of successive generations of processors with a peak - capacity of 75 TFlops (estimated). The total storage capacity is 446 - TB. It has a relatively large memory per computing node and is - therefore best fit for computing jobs that require a lot of memory - per node or per core. This configuration is complemented by a High - Troughput Computing (HTC) grid infrastructure. -- `The Tier-2 of Ghent University - (Stevin) <\%22/infrastructure/hardware/hardware-ugent\%22>`__ - represents a capacity of 226 TFlops (11.328 cores over 568 nodes) and - a storage capacity of 1,430 TB. It is composed of several clusters, 1 - of which is intended for single-node computing jobs and 4 for - multi-node jobs. One cluster has been optimized for memory-intensive - computing jobs and BigData problems. -- `The joint KU Leuven/UHasselt - Tier-2 <\%22/infrastructure/hardware/hardware-kul\%22>`__ housed by - KU Leuven focuses on small capability computing and tasks requiring a - fairly high disk bandwidth. The infrastructure consists of a thin - node cluster with 7.616 cores and a total capacity of 230 TFlops. A - shared memory system with 14 TB of RAM and 640 cores yields an - additional 12 TFlops. A total storage of 280 TB provides the - necessary I/O capacity. Furthermore, there are a number of nodes with - accellerators (including the GPU/Xeon Phi cluster purchased as an - experimental tier-1 setup) and 2 visualization nodes. - -More information ----------------- - -A more detailed description of the complete infrastructure is available -in the \\"\ `Available -hardware <\%22/en/infrastructure/hardware\%22>`__\\" section of the -`user portal <\%22/en/user-portal\%22>`__. 
- -" diff --git a/Other/file_0027_uniq.rst b/Other/file_0027_uniq.rst deleted file mode 100644 index 25db81441..000000000 --- a/Other/file_0027_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -328 HTC BigData 336 complemented ensuring 616 226 specificity invests 280 gradually Troughput 568 constitute 416 230 yields respond 446 successive diff --git a/Other/file_0037.rst b/Other/file_0037.rst deleted file mode 100644 index b88eb466f..000000000 --- a/Other/file_0037.rst +++ /dev/null @@ -1,38 +0,0 @@ -Computational science has - alongside experiments and theory - become -the fully fledged third pillar of science. Supercomputers offer -unprecedented opportunities to simulate complex models and as such to -test theoretical models against reality. They also make it possible to -extract valuable knowledge from massive amounts of data. - -For many calculations, a laptop or workstation is no longer sufficient. -Sometimes dozens or hundreds of CPU cores and hundreds of gigabytes or -even terabytes of RAM-memory are necessary to produce an acceptable -solution within a reasonable amount of time. - -Our offer ---------- - -An overview of our services: - -- Access to a variety of **supercomputing infrastructure**, suited for - many applications. -- **Guidance and advice** when determining whether your software is - suited to our infrastructure. -- **Training** (from beginner to advanced level) on the use of - supercomputers. In this training all aspects are covered: how to run - a program on a supercomputer, how to develop software, and for some - application domains even how to use a couple of popular packages. -- **Support** with optimizing the use of your infrastructure. -- **A wide range of free software.** When using commercial software it - is the responsibility of the user to take care of a license with a - number of packages as an exception to this. For these packages we - ourselves are responsible to ensure optimal running. - -More information? ------------------ - -More information can be found in our `training -section <\%22/en/education-and-trainings\%22>`__ and `user -portal <\%22/en/user-portal\%22>`__. - -" diff --git a/Other/file_0037_uniq.rst b/Other/file_0037_uniq.rst deleted file mode 100644 index 403f4fa60..000000000 --- a/Other/file_0037_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -beginner Guidance determining dozens terabytes diff --git a/Other/file_0041.rst b/Other/file_0041.rst deleted file mode 100644 index cd3bef828..000000000 --- a/Other/file_0041.rst +++ /dev/null @@ -1,49 +0,0 @@ -Not only have supercomputers changed scientific research in a -fundamental way, they also enable the development of new, affordable -products and services which have a major impact on our daily lives. - -Not only have supercomputers changed scientific research in a fundamental way ... ---------------------------------------------------------------------------------- - -Supercomputers are indispensable for scientific research and for a -modern R&D environment. ‘Computational Science’ is - alongside theory -and experiment - the third fully fledged pillar of science. For -centuries, scientists used pen and paper to develop new theories based -on scientific experiments. They also set up new experiments to verify -the predictions derived from these theories (a process often carried out -with pen and paper). It goes without saying that this method was slow -and cumbersome. - -As an astronomer you can not simply make Jupiter a little bigger to see -what effect this would lager size would have on our solar system. 
As a -nuclear scientist it would be difficult to deliberately lose control -over a nuclear reaction to ascertain the consequences of such a move. -(Super)computers can do this and are indeed revolutionizing science. - -Complex theoretical models - too advanced for ‘pen and paper’ results - -are simulated on computers. The results they deliver, are then compared -with reality and used for prediction purposes. Supercomputers have the -ability to handle huge amounts of data, thus enabling experiments that -would not be achievable in any other way. Large radio telescopes or the -LHC particle accelerator at CERN could not function without -supercomputers processing mountains of data. - -… but also the industry and out society ---------------------------------------- - -But supercomputers are not just an expensive toy for researchers at -universities. Numerical simulation also opens up new possibilities in -industrial R&D. For example in the search for new medicinal drugs, new -materials or even the development of a new car model. Biotechnology also -requires the large data processing capacity of a supercomputer. The -quest for clean energy, a better understanding of the weather and -climate evolution, or new technologies in health care all require a -powerful supercomputer. - -Supercomputers have a huge impact on our everyday lives. Have you ever -wondered why the showroom of your favourite car brand contains many more -car types than 20 years ago? Or how each year a new and faster -smartphone model is launched on the market? We owe all of this to -supercomputers. - -" diff --git a/Other/file_0045.rst b/Other/file_0045.rst deleted file mode 100644 index 3daa34aab..000000000 --- a/Other/file_0045.rst +++ /dev/null @@ -1,67 +0,0 @@ -In the past few decades supercomputers have not only revolutionized -scientific research but have also been used increasingly by businesses -all over the world to accelerate design, production processes and the -development of innovative services. - -Situation ---------- - -Modern microelectronics has created many new opportunities. Today -powerful supercomputers enable us to collect and process huge amounts of -data. Complex systems can be studied through numerical simulation -without having to build a prototype or set up a scaled experiment -beforehand. All this leads to a quicker and cheaper design of new -products, cost-efficient processes and innovative services. To support -this development in Flanders, the Flemish Government founded in late -2007 the Flemish Supercomputer Center (VSC) as a partnership between the -government and Flemish university associations. The accumulated -expertise and infrastructure are assets we want to make available to the -industry. - -Technology Offer ----------------- - -A collaboration with the VSC offers your company a good number of -benefits. - -- Together we will identify which expertise within the Flemish - universities and their associations is appropriate for you when - rolling out High Performance Computing (HPC) within your company. -- We can also assist with the technical writing of a project proposal - for financing for example through the IWT (Agency for Innovation by - Science and Technology). -- You can participate in courses on HPC, including tailor-made courses - provided by the VSC. -- You will have access to a supercomputer infrastructure with a - dedicated, on-site team assisting you during the start-up phase. 
-- As a software developer, you can also deploy HPC software - technologies to develop more efficient software which makes better - use of modern hardware. -- A shorter turnaround time for your simulation or data study boosts - productivity and increases the responsiveness of your business to new - developments. -- The possibility to carry out more detailed simulations or to analyse - larger amounts of data can yield new insights which in turn lead to - improved products and more efficient processes. -- A quick analysis of the data collected during a production process - helps to detect and correct abnormalities early on. -- Numerical simulation and virtual engineering reduce the number of - prototypes and accelerate the discovery of potential design problems. - As a result you are able to market a superior product faster and - cheaper. - -About the VSC -------------- - -The VSC was launched in late 2007 as a collaboration between the Flemish -Government and five Flemish university associations. Many of the VSC -employees have a strong technical and scientific background. Our team -also collaborates with many research groups at various universities and -helps them and their industrial partners with all aspects of -infrastructure usage. - -Besides a competitive infrastructure, the VSC team also offers full -assistance with the introduction of High Performance Computing within -your company. - -" diff --git a/Other/file_0045_uniq.rst b/Other/file_0045_uniq.rst deleted file mode 100644 index c527cae73..000000000 --- a/Other/file_0045_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -decades Situation Offer businesses revolutionized diff --git a/Other/file_0049.rst b/Other/file_0049.rst deleted file mode 100644 index 33df739c4..000000000 --- a/Other/file_0049.rst +++ /dev/null @@ -1,5 +0,0 @@ -| The Flemish Supercomputer Centre (**VSC**) is a virtual centre making - supercomputer infrastructure available for both the **academic** and - **industrial** world. This centre is managed by the Research - Foundation - Flanders (FWO) in partnership with the five Flemish - university associations. diff --git a/Other/file_0051.rst b/Other/file_0051.rst deleted file mode 100644 index ac1209b24..000000000 --- a/Other/file_0051.rst +++ /dev/null @@ -1,7 +0,0 @@ -HPC for academics -================= - -| With HPC-technology you can refine your research and gain new insights - to take your research to new heights. 
- -" diff --git a/Other/file_0065.rst b/Other/file_0065.rst deleted file mode 100644 index 3ead47ef6..000000000 --- a/Other/file_0065.rst +++ /dev/null @@ -1,10 +0,0 @@ -- `HPC glossary <\%22/support/tut-book/hpc-glossary\%22>`__: Terms - often used in HPC -- `VSC tutorials <\%22/support/tut-book/vsc-tutorials\%22>`__: Our own - tutorial texts, used in some of the introductory courses -- `A list of books <\%22/support/tut-book/books\%22>`__ from general - introduction to specific technologies -- `Freely available tutorials on the - web <\%22/support/tut-book/web-tutorials\%22>`__ - -" diff --git a/Other/file_0065_uniq.rst b/Other/file_0065_uniq.rst deleted file mode 100644 index f03c5a535..000000000 --- a/Other/file_0065_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Freely Terms diff --git a/Other/file_0097.rst b/Other/file_0097.rst deleted file mode 100644 index 65cd35eff..000000000 --- a/Other/file_0097.rst +++ /dev/null @@ -1,8 +0,0 @@ -HPC for industry -================ - -| The collective expertise, training programs and infrastructure of VSC - together with participating university associations have the potential - to create significant added value to your business. - -" diff --git a/Other/file_0099.rst b/Other/file_0099.rst deleted file mode 100644 index d89aa16f6..000000000 --- a/Other/file_0099.rst +++ /dev/null @@ -1,7 +0,0 @@ -What is supercomputing? -======================= - -| Supercomputers have an immense impact on our daily lives. Their scope - extends far beyond the weather forecast after the news. - -" diff --git a/Other/file_0109.rst b/Other/file_0109.rst deleted file mode 100644 index 15879cb38..000000000 --- a/Other/file_0109.rst +++ /dev/null @@ -1,7 +0,0 @@ -Projects and cases -================== - -| The VSC infrastructure being used by many academic and industrial - users. Here are just a few case studies of work involving the VSC - infrastructure and an overview of actual projects run on the tier-1 - infrastructure. diff --git a/Other/file_0115.rst b/Other/file_0115.rst deleted file mode 100644 index 8dcbc58d3..000000000 --- a/Other/file_0115.rst +++ /dev/null @@ -1,12 +0,0 @@ -FWO -=== - -| Research Foundation - Flanders (FWO) -| Egmontstraat 5 -| 1000 Brussel - -| Tel. 
+32 (2) 512 91 10 -| E-mail: `post@fwo.be <\%22mailto:post@fwo.be\%22>`__ -| `Web page of the FWO <\%22http://www.fwo.be/en/\%22>`__ - -" diff --git a/Other/file_0115_uniq.rst b/Other/file_0115_uniq.rst deleted file mode 100644 index 757db5eb8..000000000 --- a/Other/file_0115_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -91 Egmontstraat diff --git a/Other/file_0117.rst b/Other/file_0117.rst deleted file mode 100644 index 58d5695d9..000000000 --- a/Other/file_0117.rst +++ /dev/null @@ -1,17 +0,0 @@ -Antwerp University Association -============================== - -| **Stefan Becuwe** -| Antwerp University -| Department of Mathematics and Computer Science -| Middelheimcampus M.G 310 -| Middelheimlaan 1 -| 2020 Antwerpen - -| Tel.: +32 (3) 265 3860 -| E-mail: - `Stefan.Becuwe@uantwerpen.be <\%22mailto:Stefan.Becuwe@uantwerpen.be\%22>`__ -| `Contact page on the UAntwerp - site <\%22https://www.uantwerpen.be/nl/personeel/stefan-becuwe/\%22>`__ - -" diff --git a/Other/file_0117_uniq.rst b/Other/file_0117_uniq.rst deleted file mode 100644 index bb9fa87bc..000000000 --- a/Other/file_0117_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -310 becuwe Becuwe 265 Middelheimcampus 3860 personeel Middelheimlaan diff --git a/Other/file_0119.rst b/Other/file_0119.rst deleted file mode 100644 index b379f1a5b..000000000 --- a/Other/file_0119.rst +++ /dev/null @@ -1,13 +0,0 @@ -KU Leuven Association -===================== - -| **Leen Van Rentergem** -| KU Leuven, Directie ICTS -| Willem de Croylaan 52c - bus 5580 -| 3001 Heverlee - -| Tel.:+32 (16) 32 21 55 or +32 (16) 32 29 99 -| E-mail: - `leen.vanrentergem@kuleuven.be <\%22mailto:leen.vanrentergem@kuleuven.be\%22>`__ -| `Contact page on the KU Leuven - site <\%22https://www.kuleuven.be/wieiswie/nl/person/00025349\%22>`__ diff --git a/Other/file_0119_uniq.rst b/Other/file_0119_uniq.rst deleted file mode 100644 index 12f045f84..000000000 --- a/Other/file_0119_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -3001 wieiswie Willem 52c 5580 00025349 Directie bus leen Croylaan Rentergem vanrentergem Leen diff --git a/Other/file_0121.rst b/Other/file_0121.rst deleted file mode 100644 index b3860c3c9..000000000 --- a/Other/file_0121.rst +++ /dev/null @@ -1,13 +0,0 @@ -Universitaire Associatie Brussel -================================ - -| **Stefan Weckx** -| VUB, Research Group of Industrial Microbiology and Food Biotechnology -| Pleinlaan 2 -| 1050 Brussel - -| Tel.: +32 (2) 629 38 63 -| E-mail: - `Stefan.Weckx@vub.ac.be <\%22mailto:Stefan.Weckx@vub.ac.be\%22>`__ -| `Contact page on the VUB - site <\%22http://we.vub.ac.be/nl/stefan-weckx\%22>`__ diff --git a/Other/file_0121_uniq.rst b/Other/file_0121_uniq.rst deleted file mode 100644 index 5e1e5bb93..000000000 --- a/Other/file_0121_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -weckx 629 1050 Food Weckx Pleinlaan Universitaire diff --git a/Other/file_0123.rst b/Other/file_0123.rst deleted file mode 100644 index dbd968b8a..000000000 --- a/Other/file_0123.rst +++ /dev/null @@ -1,15 +0,0 @@ -Ghent University Association -============================ - -| **Ewald Pauwels** -| Ghent University, ICT Department -| Krijgslaan 281 S89 -| 9000 Gent - -| Tel: +32 (9) 264 4716 -| E-mail: - `Ewald.Pauwels@ugent.be <\%22mailto:Ewald.Pauwels@ugent.be\%22>`__ -| `Contact page on the UGent - site <\%22https://telefoonboek.ugent.be/nl/people/801001384834\%22>`__ - -" diff --git a/Other/file_0123_uniq.rst b/Other/file_0123_uniq.rst deleted file mode 100644 index c65dc7b0e..000000000 --- a/Other/file_0123_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -S89 801001384834 264 
telefoonboek Krijgslaan 4716 diff --git a/Other/file_0125.rst b/Other/file_0125.rst deleted file mode 100644 index 7044aa8f2..000000000 --- a/Other/file_0125.rst +++ /dev/null @@ -1,16 +0,0 @@ -Associatie Universiteit-Hogescholen Limburg -=========================================== - -| **Geert Jan Bex - VSC course coordinator** -| UHasselt, Dienst Onderzoekscoördinatie -| Campus Diepenbeek -| Agoralaan Gebouw D -| 3590 Diepenbeek - -| Tel.: +32 (11) 268231 or +32 (16) 322241 -| E-mail: - `GeertJan.Bex@uhasselt.be <\%22mailto:GeertJan.Bex@uhasselt.be\%22>`__ -| `Contact page on the UHasselt - site <\%22https://www.uhasselt.be/fiche?voornaam=geertjan&naam=bex\%22>`__ - and `personal web page <\%22http://alpha.uhasselt.be/~gjb/\%22>`__ diff --git a/Other/file_0125_uniq.rst b/Other/file_0125_uniq.rst deleted file mode 100644 index 0e51a6936..000000000 --- a/Other/file_0125_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Dienst Hogescholen Gebouw fiche 268231 3590 322241 GeertJan voornaam naam gjb Onderzoekscoördinatie Agoralaan diff --git a/Other/file_0127.rst b/Other/file_0127.rst deleted file mode 100644 index b452c0c42..000000000 --- a/Other/file_0127.rst +++ /dev/null @@ -1,4 +0,0 @@ -Contact us -========== - -You can also contact the coordinators by filling in the form below. diff --git a/Other/file_0127_uniq.rst b/Other/file_0127_uniq.rst deleted file mode 100644 index 27615f561..000000000 --- a/Other/file_0127_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -filling diff --git a/Other/file_0129.rst b/Other/file_0129.rst deleted file mode 100644 index 25bb48cfb..000000000 --- a/Other/file_0129.rst +++ /dev/null @@ -1,6 +0,0 @@ -Technical problems? -=================== - -Don't use this form, but contact your support team directly using `the -contact information in the user -portal <\%22/support/contact-support\%22>`__. diff --git a/Other/file_0131.rst b/Other/file_0131.rst deleted file mode 100644 index 211edec69..000000000 --- a/Other/file_0131.rst +++ /dev/null @@ -1,2 +0,0 @@ -Need help? Have more questions? `Contact -us <\%22/en/about-vsc/contact\%22>`__! diff --git a/Other/file_0133.rst b/Other/file_0133.rst deleted file mode 100644 index 95a18f2b5..000000000 --- a/Other/file_0133.rst +++ /dev/null @@ -1,6 +0,0 @@ -The VSC is a partnership of five Flemish university associations. The -Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, -Brussels, Ghent and Louvain. There is also a local support office in -Hasselt. - -" diff --git a/Other/file_0135.rst b/Other/file_0135.rst deleted file mode 100644 index 995a97537..000000000 --- a/Other/file_0135.rst +++ /dev/null @@ -1,45 +0,0 @@ -Ghent ------ - -The recent data center of UGhent (2011) on Campus Sterre features a room -which is especially equipped to accommodate the VSC framework. This room -currently houses the majority of the Tier-2 infrastructure of Ghent -University and the first VSC Tier-1 capability system. The adjacent -building of the ICT Department hosts the Ghent University VSC --employees, including support staff for the Ghent University Association -(AUGent). - -Louvain -------- - -The KU Leuven equiped its new data center (2012) in Heverlee with a -separate room for the VSC framework. This room currently houses the -joint Tier-2 infrastructure of KU Leuven and Hasselt University and an -experimental GPU / Xeon Phi cluster. This space will also house the next -VSC Tier-1 computer. The nearby building of ICTS houses the KU Leuven -VSC employees, including the support team for the KU Leuven Association. 
- -Hasselt -------- - -The VSC does not feature a computer room in Hasselt, but there is a -local user support office for the Association University-Colleges -Limburg (AU-HL) at Campus Diepenbeek. - -Brussels --------- - -The VUB shares a data center with the ULB on Solbosch Campus also -housing the VUB Tier-2 cluster and a large part of the BEgrid -infrastructure. The VSC also has a local team responsible for the -management of this infrastructure and for the user support within the -University Association Brussels (UAB) and for BEgrid. - -Antwerp -------- - -The University of Antwerp features a computer room equipped for HPC -infrastructure in the building complex Campus Groenenborger. A little -further, on the Campus Middelheim, the UAntwerpen VSC members have their -offices in the Mathematics and Computer Science building. This team also -handles user support for the Association Antwerp University (AUHA). diff --git a/Other/file_0135_uniq.rst b/Other/file_0135_uniq.rst deleted file mode 100644 index daafb6b29..000000000 --- a/Other/file_0135_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Groenenborger offices UGhent housing Middelheim shares Solbosch Sterre Colleges room adjacent diff --git a/Other/file_0137.rst b/Other/file_0137.rst deleted file mode 100644 index e1d8ca073..000000000 --- a/Other/file_0137.rst +++ /dev/null @@ -1,100 +0,0 @@ -The VSC is a consortium of five Flemish universities. This consortium -has no legal personality. Its objective is to build a Tier-1 and Tier-2 -infrastructure in accordance with the European pyramid model. Staff -appointed at five Flemish universities form an integrated team dedicated -to training and user support. - -For specialized support each institution can appeal to an expert -independent of where he or she is employed. The universities also invest -in HPC infrastructure and the VSC can appeal to the central services of -these institutions. In addition, embedment in an academic environment -creates opportunities for cooperation with industrial partners. - -The VSC project is managed by the Research Foundation - Flanders (FWO), -that receives the necessary financial resources for this task from the -Flemish Government. - -| Operationally, the VSC is controlled by the HPC workgroup consisting - of employees from the FWO and HPC coordinators from the various - universities. The HPC workgroup meets monthly. During these meetings - operational issues are discussed and agreed upon and strategic advice - is offered to the Board of Directors of the FWO. - -In addition, four committees are involved in the operation of the VSC: -the Tier-1 user committee, the Tier-1 evaluation committee, the -Industrial Board and the Scientific Advisory Board. - -VSC users' committee --------------------- - -The VSC user's committee was established to provide advise on the needs -of users and ways to improve the services, including the training of -users. The user's committee also plays a role in maintaining contact -with users by spreading information about the VSC, making (potential) -users aware of the possibilities offered by HPC and organising the -annual user day. 
- -These members of the committee are given below in alphabetical order, -according to which university they are associated with: - -- AUHA: Wouter Herrebout, substitute Bart Partoens -- UAB: Frank De Proft, substitute Wim Thiery -- AUGent: Marie-Françoise Reyniers or Veronique Van Speybroeck -- AU-HL: Sorin Pop, substitute Sofie Thijs -- KU Leuven association: Dirk Roose, substitute Nele Moelans - -The members representing the strategic research institutes are - -- VIB: Steven Maere, substitute Frederik Coppens -- imec: Wilfried Verachtert -- VITO: Clemens Mensinck, substitute Katrijn Dirix -- Flanders Make: Mark Engels, substitute Paola Campestrini - -The representation of the Industrial Board: - -- Benny Westaedt, substitute Mia Vanstraelen - -Tier-1 evaluation committee ---------------------------- - -This committee evaluates applications for computing time on the Tier-1. -Based upon admissibility and other evaluation criteria the committee -grants the appropriate computing time. - -This committee is composed as follows: - -- Walter Lioen, chairman (SURFsara, The Netherlands); -- Derek Groen (Computer Science, Brunel University London, UK); -- Sadaf Alam (CSCS, Switzerland); -- Nicole Audiffren (Cines, France); -- Gavin Pringle (EPCC, UK). - -The FWO provides the secretariat of the committee. - -Industrial Board ----------------- - -The Industrial Board serves as a communication channel between the VSC -and the industry in Flanders. The VSC offers a scientific/technical -computing infrastructure to the whole Flemish research community and -industry. The Industrial Board can facilitate the exchange of ideas and -expertise between the knowledge institutions and industry. - -The Industrial Board also develops initiatives to inform companies and -non-profit institutions about the added value that HPC delivers in the -development and optimisation of services and products and promotes the -services that the VSC delivers to companies, such as consultancy, -research collaboration, training and compute power. - -The members are: - -- Mia Vanstraelen (IBM) -- Charles Hirsch (Numeca) -- Herman Van der Auweraer (Siemens Industry Software NV) -- Benny Westaedt (Van Havermaet) -- Marc Engels (Flanders Make) -- Marcus Drosson (Umicore) -- Sabien Vulsteke (BASF Agricultural Solutions) -- Birgitta Brys (Worldline) - -" diff --git a/Other/file_0137_uniq.rst b/Other/file_0137_uniq.rst deleted file mode 100644 index 8a2b57256..000000000 --- a/Other/file_0137_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -embedment meetings optimisation personality Engels Cines inform serves Mia agreed Pop Frederik Birgitta Herman representation Havermaet Wilfried Thijs Audiffren Sofie Verachtert Operationally Katrijn imec NV Benny Agricultural cooperation evaluates strategic employed Marcus Gavin Westaedt Vanstraelen Coppens Siemens Alam Mensinck Paola Worldline organising delivers promotes Dirix spreading Marie workgroup Umicore Maere Reyniers Advisory Françoise Brunel BASF Numeca appointed Campestrini Sabien alphabetical consultancy Clemens Solutions maintaining Pringle VITO Sorin admissibility Drosson Vulsteke Brys Auweraer monthly Sadaf diff --git a/Other/file_0141.rst b/Other/file_0141.rst deleted file mode 100644 index 2db06e866..000000000 --- a/Other/file_0141.rst +++ /dev/null @@ -1,3 +0,0 @@ -A supercomputer is a very fast and extremely parallel computer. Many of -its technological properties are comparable to those of your laptop or -even smartphone but there are important differences. 
diff --git a/Other/file_0149.rst b/Other/file_0149.rst deleted file mode 100644 index df3224f1d..000000000 --- a/Other/file_0149.rst +++ /dev/null @@ -1,38 +0,0 @@ -We offer you the opportunity of a free trial of the Tier-1 to prepare a -future regular Tier-1 project application. You can test if your software -runs well on the Tier-1 and do the scalability tests that are required -for a project application. - -If you want to check if buying compute time on our infrastructure is an -option, we offer a `very similar free programme for a test -ride <\%22/en/access-and-infrastructure/access-industry\%22>`__. - -Characteristics of a Starting Grant ------------------------------------ - -- The maximum amount is 100 nodedays. -- The maximal allowed period to use the compute time is 2 months. -- The allocation is personal and can't be transferred or shared with - other researchers. -- Requests can be done at any time, there are no cutoff days. -- The use of this compute time is free of charge. - -Procedure to apply and grant the request ----------------------------------------- - -#. Download the `application form for a starting grant version 2018 - (docx, 31 kB) <\%22/assets/1331\%22>`__\ . -#. Send the completed application by e-mail to the Tier-1 contact - address - (`hpcinfo@icts.kuleuven.be <\%22mailto:hpcinfo@icts.kuleuven.be\%22>`__), - with your local VSC coordinator in cc. -#. The request will be judged for its validity by the Tier-1 - coordinator. -#. After approval the Tier-1 coordinator will give you access and - compute time. - If not approved, you will get an answer with a motivation for the - decision. -#. The granted requests are published on the VSC website. Therefore you - need to provide a short abstract in the application. - -" diff --git a/Other/file_0149_uniq.rst b/Other/file_0149_uniq.rst deleted file mode 100644 index cdcefddfb..000000000 --- a/Other/file_0149_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -trial cutoff Grant nodedays Requests validity ride 1331 judged diff --git a/Other/file_0153.rst b/Other/file_0153.rst deleted file mode 100644 index e24a63bae..000000000 --- a/Other/file_0153.rst +++ /dev/null @@ -1,133 +0,0 @@ -The application ---------------- - -The designated way to get access to the Tier-1 for research purposes is -through a project application. - -You have to submit a proposal to get compute time on the Tier-1 cluster -BrENIAC. - -You should include a realistic estimate of the compute time needed in -the project in your application. These estimations can best be endorsed -by Tier-1 benchmarks. To be able to perform these tests for new codes, -you can request a `starting -grant <\%22/en/access-and-infrastructure/tier1-starting-grant\%22>`__ -through a short and quick procedure. - -You can submit proposals continuously, but they will be gathered, -evaluated and resources allocated at a number of cut-off dates. There -are 3 cut-off dates in 2018 : - -- February 5, 2018 -- June 4, 2018 -- October 1, 2018 - -Proposals submitted since the last cut-off and before each of these -dates are reviewed together. - -The FWO appoints an evaluation commission to do this. - -Because of the international composition of the `evaluation -commission <\%22/en/about-vsc/organisation-structure#tier1-evaluation\%22>`__, -the preferred language for the proposals is English. If a proposal is in -Dutch, you must also sent an English translation. 
Please have a look at -the documentation of standard terms like: CPU, core, node-hour, memory, -storage, and use these consistently in the proposal. - -| You can submit you application `via - EasyChair <\%22https://easychair.org/conferences/?conf=tier12017\%22>`__ - using the application forms below. - -Relevant documents - 2018 -------------------------- - -As was already the case for applications for computing time on the -Tier-1 granted in 2016 and 2017 and coming from researchers at -universities, the Flemish SOCs and the Flemish public knowledge -institutions, applicants do not have to pay a contribution in the cost -of compute time and storage. Of course, the applications have to be of -outstanding quality. The evaluation commission remains responsible for -te review of the applications. For industry the price for compute time -is 13 EURO per node day including VAT and for storage 15 EURO per TB per -month including VAT. - -The adjusted Regulations for 2018 can be found in the links below. - -- `Reglement betreffende aanvragen voor het gebruik van de Vlaamse - supercomputer (Dutch only, applicable as of 1 January 2018) (PDF, 791 - kB) <\%22/assets/1327\%22>`__ -- Enclosure 1: `The application form for 2018 (docx, 82 kB, last update - March 2018) <\%22/assets/1329\%22>`__ -- `An overview of standard terms used in - HPC <\%22/support/tut-book/hpc-glossary\%22>`__ -- ` <\%22/support/tut-book/hpc-glossary\%22>`__\ `The list of - scientific - domains <\%22/en/access-and-infrastructure/project-access-tier1/domains\%22>`__ -- Submission is done via `EasyChair <\%22#easychair\%22>`__ - -If you need help to fill out the application, please consult your local -support team. - -Relevant documents - 2017 -------------------------- - -As was already the case for applications for computing time on the -Tier-1 granted in 2016 and coming from researchers at universities, the -Flemish SOCs and the Flemish public knowledge institutions, applicants -do not have to pay a contribution in the cost of compute time and -storage. Of course, the applications have to be of outstanding quality. -The evaluation commission remains responsible for te review of the -applications. For industry the price for compute time is 13 EURO per -node day including VAT and for storage 15 EURO per TB per month -including VAT. - -The adjusted Regulations for 2017 can be found in the links below. - -- `Reglement betreffende aanvragen voor het gebruik van de Vlaamse - supercomputer (Dutch only, applicable as of 1 January 2017) (PDF, 215 - kB) <\%22/assets/1171\%22>`__ -- Enclosure 1: `The application form (docx, 54 kB, last update May - 2017) <\%22/assets/1193\%22>`__. There is only a single category of - projects in 2017. Research projects that have not yet been evaluated - scientifically, should get an approval of the proposed research - project by the university of the promotor. See §5 of the Regulations. -- `An overview of standard terms used in - HPC <\%22/support/tut-book/hpc-glossary\%22>`__ -- ` <\%22/support/tut-book/hpc-glossary\%22>`__\ `The list of - scientific - domains <\%22/en/access-and-infrastructure/project-access-tier1/domains\%22>`__ -- Submission is done via `EasyChair <\%22#easychair\%22>`__ - -EasyChair procedure -------------------- - -| You have to submit your proposal on `EasyChair for the conference - Tier12018 <\%22https://easychair.org/conferences/?conf=tier12018\%22>`__. - This requires the following steps: - -#. If you do not yet have an EasyChair account, you first have to create - one: - - #. 
Complete the CAPTCHA - #. Provide first name, name, e-mail address - #. A confirmation e-mail will be sent, please follow the instructions - in this e-mail (click the link) - #. Complete the required details. - #. When the account has been created, a link will appear to log in on - the TIER1 submission page. - -#. Log in to the EasyChair system. -#. Select ‘New submission’. -#. If asked, accept the EasyChair terms of service. -#. Add one or more authors; if they have an EasyChair account, they can - follow up on and/or adjust the present application. -#. Complete the title and abstract. -#. You must specify at least three keywords: Include the institution of - the promoter of the present project and the field of research. -#. As a paper, submit a PDF version of the completed Application form. - You must submit the complete proposal, including the enclosures, as 1 - single PDF file to the system. -#. Click \\"Submit\". -#. EasyChair will send a confirmation e-mail to all listed authors. - -" diff --git a/Other/file_0153_uniq.rst b/Other/file_0153_uniq.rst deleted file mode 100644 index e6b59016a..000000000 --- a/Other/file_0153_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -proposed tier12018 EURO Tier12018 tier12017 1193 791 Submission 1327 1329 82 1171 diff --git a/Other/file_0155.rst b/Other/file_0155.rst deleted file mode 100644 index 832e5ab6e..000000000 --- a/Other/file_0155.rst +++ /dev/null @@ -1,67 +0,0 @@ -The VSC infrastructure can also be used by industry and non-Flemish -research institutes. Here we describe the modalities. - -Tier-1 ------- - -It is possible to get paid access to the Tier-1 infrastructure of the -VSC. In a first phase, you can get up to 100 free node-days of compute -time to verify that the infrastructure is suitable for your -applications. You can also get basic support for software installation -and the use of the infrastructure. When your software requires a -license, you should take care of that yourself. - -For further use, a three-party legal agreement is required with -KU Leuven as the operator of the system and the Research Foundation - -Flanders (FWO). You will be billed only for the computing time used and -reserved disk space, according to the following rates: - -**Summary of rates (VAT included):** - -- Non-Flemish public research institutes and not-for-profit organisations: - € 13 per node day (compute) and € 15 per TB per month (storage) -- Industry: € 13 per node day (compute) and € 15 per TB per month (storage) - -These prices include the university overhead and basic support from the -Tier-1 support staff, but no advanced level support by specialised -staff. - -For more information you can `contact our industry account manager -(FWO) <\%22mailto:industry@fwo.be\%22>`__. - -Tier-2 ------- - -It is also possible to gain access to the Tier-2 infrastructure within -the VSC. Within the Tier-2 infrastructure, there are also clusters -tailored to special applications such as small clusters with GPU or Xeon -Phi boards, a large shared memory machine or a cluster for Hadoop -applications. See the `high-level -overview <\%22/en/access-and-infrastructure/tier-2-clusters\%22>`__ or -`detailed pages about the available -infrastructure <\%22/infrastructure/hardware\%22>`__ for more -information. - -For more information and specific arrangements please contact `the -coordinator of the institution which operates the -infrastructure <\%22/en/about-vsc/contact\%22>`__. In this case you only -need an agreement with this institution without involvement of the FWO. 
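To make the rates above concrete, the fragment below gives a minimal back-of-the-envelope estimate; the job size and storage reservation in it are purely hypothetical::

   #!/bin/bash
   # rough cost estimate at the listed rates (VAT included)
   NODEDAYS=200      # hypothetical example: 50 nodes for 4 days
   TB_MONTHS=2       # hypothetical example: 1 TB reserved for 2 months
   echo "Estimated cost: $(( NODEDAYS * 13 + TB_MONTHS * 15 )) euro"

For this hypothetical example the script prints an estimate of 2630 euro (2600 euro of compute plus 30 euro of storage).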
- -" diff --git a/Other/file_0155_uniq.rst b/Other/file_0155_uniq.rst deleted file mode 100644 index fcdcf80f2..000000000 --- a/Other/file_0155_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -operates arrangements modalities billed operator involvement tailored diff --git a/Other/file_0177.rst b/Other/file_0177.rst deleted file mode 100644 index 7196fbfd1..000000000 --- a/Other/file_0177.rst +++ /dev/null @@ -1,29 +0,0 @@ -The VSC is responsible for the development and management of High -Performance Computer Infrastructure used for research and innovation. -The quality level of the infrastructure is comparable to other -computational infrastructures in comparable European regions. In -addition, the VSC is internationally connected through European projects -such as PRACE\ :sup:`(1)` (traditional supercomputing) and -EGI\ :sup:`(2)` (grid computing). Belgium has been a member of PRACE and -participates in EGI via BEgrid, since October 2012. - -The VSC infrastructure consists of two layers in the European -multi-layer model for an integrated HPC infrastructure. Local clusters -(Tier-2) at the Flemish universities are responsible for processing the -mass of smaller computational tasks and provide a solid base for the HPC -ecosystem. A larger central supercomputer (Tier-1) is necessary for more -complicated calculations while simultaneously serving as a bridge to -infrastructures at a European level. - -The VSC assists researchers active in academic institutions and also the -industry when using HPC through training programs and targeted advice. -This offers the advantage that academia and industrialists come into -contact with each other. - -In addition, the VSC also works on raising awareness of the added value -HPC can offer both in academic research and in industrial applications. - -| :sup:`(1)` PRACE: Partnership for Advanced Computing in Europe -| :sup:`(2)` EGI: European Grid Infrastructure - -" diff --git a/Other/file_0177_uniq.rst b/Other/file_0177_uniq.rst deleted file mode 100644 index f26a8d266..000000000 --- a/Other/file_0177_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -ecosystem internationally academia industrialists raising traditional serving assists solid bridge diff --git a/Other/file_0179.rst b/Other/file_0179.rst deleted file mode 100644 index 8725ca974..000000000 --- a/Other/file_0179.rst +++ /dev/null @@ -1,95 +0,0 @@ -On 20 July 2006 the Flemish Government decided on the action plan -'Flanders i2010, time for a digital momentum in the innovation chain'. A -study made by the steering committee e-Research, published in November -2007, indicated the need for more expertise, support and infrastructure -for grid and High Performance Computing. - -Around the same time, the Royal Flemish Academy of Belgium for Science -and the Arts (KVAB) published an advisory illustrating the need for a -dynamic High Performance Computing strategy for Flanders. This -recommendation focused on a Flemish Supercomputer Center with the -ability to compete with existing infrastructures at regional or national -level in comparable countries. - -Based on these recommendations, the Flemish Government decided on 14 -December 2007 to fund the Flemish Supercomputer Center, an initiative of -five Flemish universities. They joined forces to coordinate and to -integrate their High Performance Computing infrastructures and to make -their knowledge available to the public and for privately funded -research. - -The grants were used to fund both capital expenditures and staff. 
As a -result the existing university infrastructure was integrated through -fast network connections and additional software. Thus, the pyramid -model, recommended by PRACE, is applied. According to this model a -central Tier-1 cluster is responsible for rolling out large parallel -computing jobs. Tier-2 focuses on local use at various universities but -is also open to other users. Hasselt University decided to collaborate -with the University of Leuven to build a shared infrastructure while -other universities opted to do it alone. - -Some milestones ---------------- - -- **January 2008**: Start of the \\"VSC preparatory phase\" project -- **May 2008**: The VSC submitted a first proposal for further funding - to the Hercules Foundation -- **November 2008**: A technical and financial plan was presented to - the Flemish Government. In the following weeks this plan was - successfully defended before a committee of international experts. -- **23 March 2009**: Official launch of the VSC at an event with - researchers presenting their work in the presence of Patricia - Ceysens, Flemish Minister for Economy, Enterprise, Science, - Innovation and Foreign Trade. Several speakers highlighted the - history of the project together with VSC’s mission and the - international aspect of this project. -- **3 April 2009**: the Hercules Foundation and the Flemish Government - provided a grant of 7.29 million euros (2.09 million by the Hercules - Foundation and 5.2 million from the FFEU :sup:`(1)` for the further - expansion of the local Tier-2 clusters and the installation of a - central Tier-1 supercomputer for Flanders for large parallel - computations. It was also decided to entrust the project monitoring - to a supervisory committee for which the Hercules Foundation provides - the secretariat. -- **June 2009**: The VSC submitted a project proposal to the Hercules - Foundation to participate through - `PRACE <\%22http://www.prace-ri.eu/\%22>`__, the - `ESFRI <\%22http://ec.europa.eu/research/infrastructures/index_en.cfm?pg=esfri\%22>`__\ :sup:`(2)` - project in the field of supercomputing. After comparison with other - projects, the Hercules Foundation granted it the second highest - priority and advised the Flemish government as such. The Flemish - Government supported the project, and after consultation with other - regions and communities and federal authorities, Belgium joined PRACE - in October 2012. -- **February 2010**: The VSC submitted an updated operating plan to the - Hercules Foundation and the Flemish Government aiming to obtain - structural funding for the VSC. -- **9 October 2012**: Belgium became the twenty-fifth member of PRACE. - The Belgian delegation was made up of DG06 from the Walloon - Government and a technical advisor from VSC. -- **25 October 2012**: The first VSC Tier-1 cluster was inaugurated at - Ghent University. In the spring of 2012 the installation of this - cluster in the new data center at Ghent University campus took place. - In a video message Minister Ingrid Lieten encouraged researchers to - make optimum use of the new opportunities to drive research forward. -- **16 January 2014**: the first global VSC User Day. This event - brought together researchers from different universities and the - industry. -- **27 January 2015**: The first VSC industry day at Technopolis in - Mechelen. One of the points on the agenda was to investigate how - other companies abroad - in Germany and the United Kingdom – were - being approached. 
Several examples of companies in Flanders already - using VSC infrastructure were illustrated. Philippe Muyters, Flemish - Minister for Economy and Innovation, closed the event with an appeal - for stronger links between the public and private sector to - strengthen Flemish competitiveness. -- **1 January 2016**: The Research Foundation - Flanders (FWO) takes - over the tasks of the Hercules Foundation in the VSC project in a - restructuring of the research funding in Flanders. - -| :sup:`(1)` FFEU: Financieringsfonds voor Schuldafbouw en Eenmalige - investeringsuitgaven (Financing fund for debt reduction and one-time - investment) -| :sup:`(2)` ESFRI: European Strategy Forum on Research Infrastructures - -" diff --git a/Other/file_0179_uniq.rst b/Other/file_0179_uniq.rst deleted file mode 100644 index 677df1fe8..000000000 --- a/Other/file_0179_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -recommendations euros cfm investeringsuitgaven milestones chain recommendation Schuldafbouw FFEU highest debt aiming fund supervisory speakers index_en restructuring DG06 privately pg Walloon presenting defended digital countries million reduction agenda entrust i2010 delegation forces stronger fifth joined Eenmalige alone sector initiative illustrating authorities mission structural expenditures Financieringsfonds communities diff --git a/Other/file_0183.rst b/Other/file_0183.rst deleted file mode 100644 index 8191f74e3..000000000 --- a/Other/file_0183.rst +++ /dev/null @@ -1,64 +0,0 @@ -Strategic plans and annual reports ----------------------------------- - -- `Strategic plan 2015-2020 HPC in - Flanders <\%22https://www.vscentrum.be/assets/109\%22>`__ – Only - available in Dutch -- `VSC annual report - 2017 <\%22https://www.vscentrum.be/assets/1379\%22>`__ -- `VSC annual report - 2016 <\%22https://www.vscentrum.be/assets/1299\%22>`__ -- `VSC annual report - 2015 <\%22https://www.vscentrum.be/assets/1109\%22>`__ -- `VSC annual report - 2014 <\%22https://www.vscentrum.be/assets/987\%22>`__ - -Newsletter: VSC Echo --------------------- - -Our newsletter, VSC Echo, is distributed three times a year by e-mail. -The `latest edition <\%22/assets/1123\%22>`__, number 10, is dedicated -to : - -- The upcoming courses and other events, where we also pay attention to - the trainings organized by `CÉCI <\%22http://www.ceci-hpc.be/\%22>`__ -- News about the new Tier-1 system BrENIAC -- The new VSC web site - -Subscribe or unsubscribe -~~~~~~~~~~~~~~~~~~~~~~~~ - -If you would like to receive this newsletter by mail, just send an -e-mail to listserv@ls.kuleuven.be with as text **subscribe VSCECHO** in -the message body (and not in the subject line). (Please note the quotes -are not used in the subject line but in the message body.) Alternatively -(if your e-mail is correctly configured in your browser), you can also -`send an e-mail from your -browser <\%22mailto:listserv@ls.kuleuven.be?body=subscribe%20VSCECHO\%22>`__. - -You will receive a reply from LISTSERV@listserv.cc.kuleuven.ac.be asking -you to confirm your subscription. Follow this link in the e-mail and you -will be automatically subscribed to future issues of the newsletter. - -If you no longer wish to receive the newsletter, please send an e-mail -to listserv@ls.kuleuven.be with the text **unsubscribe VSCECHO** in the -message body (and not in the subject line). Alternatively (if your -e-mail is correctly configured in your browser), you can also `send an -e-mail from your -browser <\%22mailto:listserv@ls.kuleuven.be?body=unsubscribe%20VSCECHO\%22>`__. 
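If you prefer the command line over a mail client, the same request can be sent from any Unix machine with a working ``mail`` (or ``mailx``) command; this is only a sketch, and the exact command available on your system may differ::

   # the subscribe command must go in the message body, not in the subject line
   echo "subscribe VSCECHO" | mail -s "VSC Echo" listserv@ls.kuleuven.be
   # to unsubscribe again later
   echo "unsubscribe VSCECHO" | mail -s "VSC Echo" listserv@ls.kuleuven.be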
- -Archive -~~~~~~~ - -- `VSC Echo 10 - October 2016 <\%22/assets/1123\%22>`__ -- `VSC Echo 9 - January 2015 <\%22/assets/1063\%22>`__ -- `VSC Echo 8 - September 2015 <\%22/assets/997\%22>`__ -- `VSC Echo 7 - July 2015 <\%22/assets/939\%22>`__ -- `VSC Echo 6 - January 2015 <\%22/assets/107\%22>`__ -- `VSC Echo 5 - October 2014 <\%22/assets/105\%22>`__ -- `VSC Echo 4 - June 2014 <\%22/assets/103\%22>`__ -- `VSC Echo 3 - January 2014 <\%22/assets/101\%22>`__ -- `VSC Echo 2 - November 2013 <\%22/assets/97\%22>`__ -- `VSC Echo 1 - March 2013 <\%22/assets/93\%22>`__ - -" diff --git a/Other/file_0183_uniq.rst b/Other/file_0183_uniq.rst deleted file mode 100644 index 9786cebc4..000000000 --- a/Other/file_0183_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Subscribe News 109 subscribed LISTSERV reply 997 Newsletter unsubscribe 939 listserv 1063 101 105 asking 1299 subscription 1123 97 VSCECHO 1379 103 107 20VSCECHO Strategic 93 diff --git a/Other/file_0185.rst b/Other/file_0185.rst deleted file mode 100644 index e10554d1c..000000000 --- a/Other/file_0185.rst +++ /dev/null @@ -1,8 +0,0 @@ -Press contacts should be channeled through `the Research Foundation - -Flanders (FWO) <\%22/en/about-vsc/contact\%22>`__. - -Available material ------------------- - -`Zip file with the VSC logo in a number of -formats <\%22/assets/111\%22>`__. diff --git a/Other/file_0185_uniq.rst b/Other/file_0185_uniq.rst deleted file mode 100644 index abdc0b55c..000000000 --- a/Other/file_0185_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Zip 111 contacts channeled diff --git a/Other/file_0193.rst b/Other/file_0193.rst deleted file mode 100644 index f9fdbf1ba..000000000 --- a/Other/file_0193.rst +++ /dev/null @@ -1,3 +0,0 @@ -The Flemish Supercomputer Centre (VSC) is a virtual supercomputer center -for academics and industry. It is managed by the Hercules Foundation in -partnership with the five Flemish university associations. diff --git a/Other/file_0217.rst b/Other/file_0217.rst deleted file mode 100644 index 151a26b8a..000000000 --- a/Other/file_0217.rst +++ /dev/null @@ -1,57 +0,0 @@ -To access certain cluster login nodes, from outside your institute's -network (e.g., from home) you need to set a so-called VPN (Virtual -Private Network). By setting up a VPN to your institute, your computer -effectively becomes a computer on your institute's network and will -appear as such to other services that you access. Your network traffic -will be routed through your institute's network. If you want more -information: There's an `introductory page on -HowStuffWorks <\%22https://computer.howstuffworks.com/vpn.htm\%22>`__ -and a `page that is more for techies on -Wikipedia <\%22https://en.wikipedia.org/wiki/Virtual_private_network\%22>`__. - -The VPN service is not provided by the VSC but by your institute's ICT -centre, and they are your first contact for help. However, for your -convenience, we present some pointers to that information: - -- KU Leuven: Information `in - Dutch <\%22https://admin.kuleuven.be/icts/services/extranet/index\%22>`__ - and `in - English <\%22https://admin.kuleuven.be/icts/english/services/VPN/VPN\%22>`__. - Information on contacting the service desk for assistance is also - available `in - Dutch <\%22https://admin.kuleuven.be/icts/servicepunt\%22>`__ and `in - English <\%22https://admin.kuleuven.be/icts/english/servicedesk\%22>`__. -- UGent: Information `in - Dutch <\%22https://helpdesk.ugent.be/vpn/\%22>`__ and `in - English <\%22https://helpdesk.ugent.be/vpn/en/\%22>`__. 
Contact - information for the help desk is also available `in - Dutch <\%22https://helpdesk.ugent.be/extra/\%22>`__ and `in - English <\%22https://helpdesk.ugent.be/extra/en/\%22>`__ (with links - at the bottom of the VPN pages). -- UAntwerpen: Log in to `the Pintra - service <\%22https://pintra.uantwerpen.be/\%22>`__ and then visit - `the VPN - page <\%22https://pintra.uantwerpen.be/webapps/ua-pintrasite-BBLEARN/module/index.jsp?course_id=_8_1&tid=_525_1&lid=_11434_1&l=nl_PINTRA\%22>`__ - in the \\"Network\" section of the pages of \\"Departement ICT\". If - you only have a student account, you will find the same information - in the `Infocenter ICT on - Blackboard <\%22https://blackboard.uantwerpen.be/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_4177_1&handle=announcements_entry&mode=view\%22>`__, - which has a `page on - VPN <\%22https://blackboard.uantwerpen.be/webapps/blackboard/content/listContent.jsp?course_id=_4177_1&content_id=_397880_1\%22>`__ - in the Networks section. The contact information for the help desk in - on `the start page of the subsite \\"Departement - ICT\" <\%22https://pintra.uantwerpen.be/webapps/ua-pintrasite-BBLEARN/module/index.jsp?course_id=_8_1\%22>`__. - *Note that the configuration of the VPN changed on 25 October 2016, - so if you have trouble connecting, check your settings!* -- VUB: The VUB offers no central VPN system at this time. `There is a - VPN solution (\"Pulse Secure VPN\") which requires special - permission. <\%22http://vubnet.vub.ac.be/vpn.html\%22>`__ -- UHasselt: the pre-configured VPN software can be - `downloaded <\%22https://software.uhasselt.be/index.php?catid=410\%22>`__ - (intranet, only staff members), contact the `UHasselt helpdesk (mail - link) <\%22mailto:helpdesk@uhasselt.be\%22>`__ if you have problems. - There is also some information about this `on the page - \\"Accessibility from a distance\" of the University - Library <\%22https://bibliotheek.uhasselt.be/en/accessibility-distance\%22>`__. - -" diff --git a/Other/file_0217_uniq.rst b/Other/file_0217_uniq.rst deleted file mode 100644 index 1da23ffed..000000000 --- a/Other/file_0217_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -servicedesk content_id catid _397880_1 servicepunt techies _4177_1 Departement Virtual_private_network pintra routed extranet nl_PINTRA blackboard Infocenter BBLEARN webapps course_id _525_1 listContent _8_1 Blackboard HowStuffWorks tid Accessibility jsp 410 Pintra Pulse course_entry _11434_1 Networks lid vubnet pintrasite howstuffworks accessibility intranet bibliotheek announcements_entry diff --git a/Other/file_0219.rst b/Other/file_0219.rst deleted file mode 100644 index 27ace8d1e..000000000 --- a/Other/file_0219.rst +++ /dev/null @@ -1,11 +0,0 @@ -Linux is the operating system on all of the VSC-clusters. - -- `A basic linux - introduction <\%22/cluster-doc/using-linux/basic-linux-usage\%22>`__, - with the most basic commands and links to other material on the web. -- `Getting started with shell - scripts <\%22/cluster-doc/using-linux/how-to-get-started-with-shell-scripts\%22>`__, - small programs consisting of commands that you could also use on the - command line. 
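To give a flavour of what such a shell script looks like, here is a minimal, self-contained example (not taken from the tutorials themselves)::

   #!/bin/bash
   # a shell script is simply a series of commands you could also type by hand
   echo "Hello from $(hostname) on $(date)"
   # count the regular files in your home directory
   find "$HOME" -maxdepth 1 -type f | wc -l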
- -" diff --git a/Other/file_0279.rst b/Other/file_0279.rst deleted file mode 100644 index 93edb8837..000000000 --- a/Other/file_0279.rst +++ /dev/null @@ -1,5 +0,0 @@ -- `Credit system - basics <\%22/cluster-doc/running-jobs/credit-system-basics\%22>`__ - (KU Leuven-only currently, Tier-1 and Tier-2) - -" diff --git a/Other/file_0285.rst b/Other/file_0285.rst deleted file mode 100644 index 5bca5c854..000000000 --- a/Other/file_0285.rst +++ /dev/null @@ -1,10 +0,0 @@ -BEgrid is currently documented by BELNET. Some useful links are: - -- `BEgrid Wiki <\%22http://wiki.begrid.be/\%22>`__ -- `gLite 3.1 User Guide (PDF, op - CERN) <\%22https://edms.cern.ch/file/722398/1.2/gLite-3-UserGuide.pdf\%22>`__ - gLite is the grid middleware used on BEgrid. -- `Other related links on the Belnet - site. <\%22http://www.begrid.be/index.php?module=webpage&id=16\%22>`__ - -" diff --git a/Other/file_0285_uniq.rst b/Other/file_0285_uniq.rst deleted file mode 100644 index 962463176..000000000 --- a/Other/file_0285_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Belnet diff --git a/Other/file_0303.rst b/Other/file_0303.rst deleted file mode 100644 index 6ae5b6b4f..000000000 --- a/Other/file_0303.rst +++ /dev/null @@ -1,96 +0,0 @@ -Hardware details ----------------- - -The VUB cluster contains a mix of nodes with AMD and Intel processors -and different interconnects in different sections of the cluster. The -cluster also contains a number of nodes with NVIDIA GPUs. - -Login nodes: -~~~~~~~~~~~~ - -- ``login.hpc.vub.be`` -- use the above hostname if you read ``vsc.login.node`` in the - documentation and want to connect to one of the login nodes - -Compute nodes: -~~~~~~~~~~~~~~ - -+-----------+-----------+-----------+-----------+-----------+-----------+ -| nodes | processor | memory | disk | network | others | -+===========+===========+===========+===========+===========+===========+ -| 40 | 2x 8-core | 64 Gb | 900 Gb | QDR-IB | soon will | -| | AMD 6134 | | | | be phased | -| | (Magnycou | | | | out | -| | rs) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 11 | 2x | 128 Gb | 900 Gb | QDR-IB | | -| | 10-core | | | | | -| | INTEL | | | | | -| | E5-2680v2 | | | | | -| | (IvyBridg | | | | | -| | e) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 20 | 2x | 256 Gb | 900 Gb | QDR-IB | | -| | 10-core | | | | | -| | INTEL | | | | | -| | E5-2680v2 | | | | | -| | (IvyBridg | | | | | -| | e) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 6 | 2x | 128 Gb | 900 Gb | QDR-IB | 2x Tesla | -| | 10-core | | | | K20x | -| | INTEL | | | | NVIDIA | -| | E5-2680v2 | | | | GPGPUs | -| | (IvyBridg | | | | with 6Gb | -| | e) | | | | memory in | -| | | | | | each node | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 27 | 2x | 256 Gb | 1 Tb | 10 Gbps | | -| | 14-core | | | | | -| | INTEL | | | | | -| | E5-2680v4 | | | | | -| | (Broadwel | | | | | -| | l) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 1 | 4x | 1.5 Tb | 4 Tb | 10 Gbps | | -| | 10-core | | | | | -| | INTEL | | | | | -| | E7-8891v4 | | | | | -| | (Broadwel | | | | | -| | l) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 4 | 2x | 256 Gb | 2 Tb | 10 Gbps | 2x Tesla | -| | 12-core | | | | P100 | -| | INTEL | | | | NVIDIA | -| | E5-2650v4 | | | | GPGPUs | -| | (Broadwel | | | | with 16 | -| | l) | | | | Gb memory | -| | | | | | in each | -| | | | | | node | 
-+-----------+-----------+-----------+-----------+-----------+-----------+ -| 1 | 2x | 512 Gb | 8 Tb | 10 Gbps | 4x | -| | 16-core | | | | GeForce | -| | INTEL | | | | GTX 1080 | -| | E5-2683v4 | | | | Ti NVIDIA | -| | (Broadwel | | | | GPUs with | -| | l) | | | | 12 Gb | -| | | | | | memory in | -| | | | | | each node | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| 21 | 2x | 192 Gb | 1 Tb | 10 Gbps | | -| | 20-core | | | | | -| | INTEL | | | | | -| | Xeon Gold | | | | | -| | 6148 | | | | | -| | (Skylake) | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ - -Network Storage: -~~~~~~~~~~~~~~~~ - -- 19 TB NAS for Home directories (``$VSC_HOME``) and software storage - connected with 1Gb Ethernet -- 780 TB GPFS storage for global scratch (``$VSC_SCRATCH``) connected - with QDR-IB, 1Gb and 40 Gb Ethernet - -" diff --git a/Other/file_0303_uniq.rst b/Other/file_0303_uniq.rst deleted file mode 100644 index eee8ecd79..000000000 --- a/Other/file_0303_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -E7 1080 2683v4 4x hostnames rs Tb Broadwel GTX GPGPUs Ti 780 6Gb phased 2x 1Gb interconnects 2650v4 6134 8891v4 INTEL GeForce Magnycou 6148 IvyBridg diff --git a/Other/file_0305.rst b/Other/file_0305.rst deleted file mode 100644 index f9c72a3bd..000000000 --- a/Other/file_0305.rst +++ /dev/null @@ -1,432 +0,0 @@ -UAntwerpen has two clusters. `leibniz <\%22#leibniz\%22>`__ and -`hopper <\%22#hopper\%22>`__, `Turing <\%22#turing\%22>`__, an older -cluster, has been retired in the early 2017. - -Local documentation -------------------- - -- `Slides of the information sessions on \\"Transitioning to Leibniz - and CentOS 7\" (PDF) <\%22/assets/1323\%22>`__ -- `The 2017a toolchain at - UAntwerp <\%22/infrastructure/hardware/hardware-ua/toolchain-2017a\%22>`__: - In preparation of the integration of Leibniz in the UAntwerp - infrastructure, the software stack has been rebuild in the 2017a - toolchain. Several changes have been made to the naming and the - organization of the toolchains. The toolchain is now loaded by - default on Hopper, and is the main toolchain on Leibniz and later - also on Hopper after an OS upgrade. -- `The Intel compiler - toolchains <\%22/infrastructure/hardware/hardware-ua/intel\%22>`__: - From the 2017a toolchain on, the setup of the toolchains differs on - the UAntwerp clusters differs from most other VSC systems. We have - set up the Intel compilers, including all libraries, in a single - directory structure as intended by Intel. Some scripts, including - compiler configuration scripts, expect this setup to work properly. -- `Licensed software at - UAntwerp <\%22https://www.vscentrum.be/infrastructure/hardware/hardware-ua/licensed-software\%22>`__: - Some software has a restricted license and is not available to all - users. This page lists some of those packages and explains for some - how you can get access to the package. -- Special nodes: - - - `GUI programs and remote visualization - node <\%22/infrastructure/hardware/hardware-ua/visualization\%22>`__: - Leibniz offers a remote visualization node with software stack - based on TurboVNC and OpenGL. All other login nodes offer the same - features minus the OpenGL support (so applications have to link to - a OpenGL software emulation library). 
- - `GPU computing - nodes <\%22/infrastructure/hardware/hardware-ua/gpu-computing\%22>`__ - - `Xeon Phi - testbed <\%22/infrastructure/hardware/hardware-ua/xeonphi\%22>`__ - -- `Information for Leibniz test - users <\%22/infrastructure/hardware/hardware-ua/leibniz-instructions\%22>`__ - -Leibniz ------- - -Leibniz was installed in the spring of 2017. It is a NEC system -consisting of 152 nodes with 2 14-core Intel -`E5-2680v4 <\%22https://ark.intel.com/nl/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz\%22>`__ -Broadwell generation CPUs connected through an EDR InfiniBand network. -144 of these nodes have 128 GB RAM, the other 8 have 256 GB RAM. The -nodes do not have a sizeable local disk. The cluster also contains a -node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal -generation) and one node with an Intel Xeon Phi expansion board. - -Access restrictions -~~~~~~~~~~~~~~~~~~~ - -Access is available for faculty, students (master's projects under -faculty supervision), and researchers of the AUHA. The cluster is -integrated in the VSC network and runs the standard VSC software setup. -It is also available to all VSC-users, though we appreciate that you -contact the UAntwerpen support team so that we know why you want to use -the cluster. - -Jobs can have a maximal execution wall time of 3 days (72 hours). - -Hardware details -~~~~~~~~~~~~~~~~ - -- Interactive work: - - - 2 login nodes. These nodes have a very similar architecture to the - compute nodes. - - - 1 visualisation node with an NVIDIA P5000 GPU. This node is meant - to be used for interactive visualizations (`specific - instructions <\%22/infrastructure/hardware/hardware-ua/visualization\%22>`__). - -- Compute nodes: - - - 144 nodes with 2 14-core Intel - `E5-2680v4 <\%22https://ark.intel.com/nl/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz\%22>`__ - CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM. - - 8 nodes with 2 14-core Intel - `E5-2680v4 <\%22https://ark.intel.com/nl/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz\%22>`__ - CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM. - - 2 nodes with 2 14-core Intel - `E5-2680v4 <\%22https://ark.intel.com/nl/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz\%22>`__ - CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA - Tesla P100 GPUs with 16 GB HBM2 memory per GPU (delivering a peak - performance of 4.7 TFlops in double precision per GPU) (`specific - instructions <\%22/infrastructure/hardware/hardware-ua/gpu-computing\%22>`__). - - 1 node with 2 14-core Intel - `E5-2680v4 <\%22https://ark.intel.com/nl/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz\%22>`__ - CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon - Phi 7220P PCIe card with 16 GB of RAM (`specific - instructions <\%22/infrastructure/hardware/hardware-ua/xeonphi\%22>`__). - - All nodes are connected through an EDR InfiniBand network - - All compute nodes contain only a small SSD drive. This implies - that swapping is not possible and that users should preferably use - the main storage for all temporary files also. - -- Storage: The cluster relies on the storage provided by Hopper (a 100 - TB DDN SFA7700 system with 4 storage servers). - -Login infrastructure -~~~~~~~~~~~~~~~~~~~~ - -Direct login is possible to both login nodes and to the visualization -node. - -- From outside the VSC network: Use the external interface names. 
- Currently, one needs to be on the network of UAntwerp or some - associated institutions to be able to access the external interfaces. - Otherwise a VPN connection is needed to the UAntwerp network. -- From inside the VSC network (e.g., another VSC cluster): Use the - internal interface names. - -+--------------------+------------------------------+----------------------------+ -| | External interface | Internal interface | -+--------------------+------------------------------+----------------------------+ -| Login generic | login-leibniz.uantwerpen.be | | -+--------------------+------------------------------+----------------------------+ -| Login | login1-leibniz.uantwerpen.be | ln1.leibniz.antwerpen.vsc | -| | login2-leibniz.uantwerpen.be | ln2.leibniz.antwerpen.vsc | -+--------------------+------------------------------+----------------------------+ -| Visualisation node | viz1-leibniz.uantwerpen.be | viz1.leibniz.antwerpen.vsc | -+--------------------+------------------------------+----------------------------+ - -Storage organization -~~~~~~~~~~~~~~~~~~~~ - -See `the section on the storage organization of -hopper <\%22#hopper-storage\%22>`__. - -Characteristics of the compute nodes -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -| Since leibniz is currently a homogenous system with respect to CPU - type and interconnect, it is not needed to specify the corresponding - properties (see also the page on `specifying resources, output files - and - notifications <\%22https://www.vscentrum.be/cluster-doc/running-jobs/specifying-requirements\%22>`__). - -However, to make it possible to write job scripts that can be used on -both hopper and leibniz (or other VSC clusters) and to prepare for -future extensions of the cluster, the following features are defined: - -+-----------------------------------+-----------------------------------+ -| property | explanation | -+===================================+===================================+ -| broadwell | only use Intel processors from | -| | the Broadwell family (E5-XXXv4) | -| | (Not needed at the moment as this | -| | is the only CPU type) | -+-----------------------------------+-----------------------------------+ -| ib | use InfiniBand interconnect (not | -| | needed at the moment as all nodes | -| | are connected to the InfiniBand | -| | interconnect) | -+-----------------------------------+-----------------------------------+ -| mem128 | use nodes with 128 GB RAM | -| | (roughly 112 GB available). This | -| | is the majority of the nodes on | -| | leibniz. | -+-----------------------------------+-----------------------------------+ -| mem256 | use nodes with 256 GB RAM | -| | (roughly 240 GB available). This | -| | property is useful if you submit | -| | a batch of jobs that require more | -| | than 4 GB of RAM per processor | -| | but do not use all cores and you | -| | do not want to use a tool to | -| | bundle jobs yourself such as | -| | Worker, as it helps the scheduler | -| | to put those jobs on nodes that | -| | can be further filled with your | -| | jobs. 
| -+-----------------------------------+-----------------------------------+ - -These characteristics map to the following nodes on Hopper: - -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Type | CPU | Inter | # | # | # | insta | avail | local | -| of | type | conne | nodes | physi | logic | lled | mem | disc | -| node | | ct | | cal | al | mem | (per | | -| | | | | cores | cores | (per | node) | | -| | | | | (per | (per | node) | | | -| | | | | node) | node) | | | | -+=======+=======+=======+=======+=======+=======+=======+=======+=======+ -| broad | `Xeon | IB-ED | 144 | 28 | 28 | 128 | 112 | ~25 | -| well: | E5-26 | R | | | | GB | GB | GB | -| ib:me | 80v4 | | | | | | | | -| m128 | <\%22 | | | | | | | | -| | https | | | | | | | | -| | ://ar | | | | | | | | -| | k.int | | | | | | | | -| | el.co | | | | | | | | -| | m/pro | | | | | | | | -| | ducts | | | | | | | | -| | /7527 | | | | | | | | -| | 7\%22 | | | | | | | | -| | >`__ | | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| broad | `Xeon | IB-ED | 8 | 28 | 28 | 256 | 240 | ~25 | -| well: | E5-26 | R | | | | GB | GB | GB | -| ib:me | 80v4 | | | | | | | | -| m256 | <\%22 | | | | | | | | -| | https | | | | | | | | -| | ://ar | | | | | | | | -| | k.int | | | | | | | | -| | el.co | | | | | | | | -| | m/pro | | | | | | | | -| | ducts | | | | | | | | -| | /7527 | | | | | | | | -| | 7\%22 | | | | | | | | -| | >`__ | | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ - -Hopper ------- - -Hopper is the current UAntwerpen cluster. It is a HP system consisting -of 168 nodes with 2 10-core Intel E5-2680v2 Ivy Bridge generation CPUs -connected through a FDR10 InfiniBand network. 144 nodes have a memory -capacity of 64 GB while 24 nodes have 256 GB of RAM memory. The system -has been reconfigured to have a software setup that is essentially the -same as on Leibniz. - -.. _access-restrictions-1: - -Access restrictions -~~~~~~~~~~~~~~~~~~~ - -Access ia available for faculty, students (master's projects under -faculty supervision), and researchers of the AUHA. The cluster is -integrated in the VSC network and runs the standard VSC software setup. -It is also available to all VSC-users, though we appreciate that you -contact the UAntwerpen support team so that we know why you want to use -the cluster. - -Jobs can have a maximal execution wall time of 3 days (72 hours). - -.. _hardware-details-1: - -Hardware details -~~~~~~~~~~~~~~~~ - -- 4 login nodes accessible through the generic name - ``login.hpc.uantwerpen.be``. - - - Use this hostname if you read *vsc.login.node* in the - documentation and want to connect to this login node - -- Compute nodes - - - 144 (96 installed in the first round, 48 in the first expansion) - nodes with 2 10-core Intel - `E5-2680v2 <\%22https://ark.intel.com/products/75277\%22>`__ CPUs - (Ivy Bridge generation) with 64 GB of RAM. - - 24 nodes with 2 10-core Intel - `E5-2680v2 <\%22https://ark.intel.com/products/75277\%22>`__ CPUs - (Ivy Bridge generation) with 256 GB of RAM. - - All nodes are connected through an InfiniBand FDR10 interconnect. - -- Storage - - - Storage is provided through a 100 TB DDN SFA7700 disk array with 4 - storage servers. - -.. _login-infrastructure-1: - -Login infrastructure -~~~~~~~~~~~~~~~~~~~~ - -Direct login is possible to both login nodes and to the visualization -node. - -- From outside the VSC network: Use the external interface names. 
- Currently, one needs to be on the network of UAntwerp or some - associated institutions to be able to access the external interfaces. - Otherwise a VPN connection is needed to the UAntwerp network. -- From inside the VSC network (e.g., another VSC cluster): Use the - internal interface names. - -+---------------+-----------------------------+---------------------------+ -| | External interface | Internal interface | -+---------------+-----------------------------+---------------------------+ -| Login generic | login.hpc.uantwerpen.be | | -| | login-hopper.uantwerpen.be | | -+---------------+-----------------------------+---------------------------+ -| Login nodes | login1-hopper.uantwerpen.be | ln01.hopper.antwerpen.vsc | -| | login2-hopper.uantwerpen.be | ln02.hopper.antwerpen.vsc | -| | login3-hopper.uantwerpen.be | ln03.hopper.antwerpen.vsc | -| | login4-hopper.uantwerpen.be | ln04.hopper.antwerpen.vsc | -+---------------+-----------------------------+---------------------------+ - -Storage organisation -~~~~~~~~~~~~~~~~~~~~ - -The storage is organised according to the `VSC storage -guidelines <\%22/cluster-doc/access-data-transfer/where-store-data\%22>`__. - -+-----------+-----------+-----------+-----------+-----------+-----------+ -| Name | Variable | Type | Access | Backup | Default | -| | | | | | quota | -+===========+===========+===========+===========+===========+===========+ -| /user/ant | $VSC_HOME | GPFS | VSC | NO | 3 GB | -| werpen/20 | | | | | | -| X/vsc20XY | | | | | | -| Z | | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| /data/ant | $VSC_DATA | GPFS | VSC | NO | 25 GB | -| werpen/20 | | | | | | -| X/vsc20XY | | | | | | -| Z | | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| /scratch/ | $VSC_SCRA | GPFS | Hopper | NO | 25 GB | -| antwerpen | TCH | | Leibniz | | | -| /20X/vsc2 | $VSC_SCRA | | | | | -| 0XYZ | TCH_SITE | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| /small/an | | GPFS | Hopper | NO | 0 GB | -| twerpen/2 | | | Leibniz | | | -| 0X/vsc20X | | | | | | -| YZ:sup:`( | | | | | | -| *)` | | | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| /tmp | $VSC_SCRA | ext4 | Node | NO | 250 GB | -| | TCH_NODE | | | | hopper | -+-----------+-----------+-----------+-----------+-----------+-----------+ - -:sup:`(*)` /small is a file system optimised for the storage of small -files of types that do not belong in $VSC_HOME. The file systems pointed -at by $VSC_DATA and $VSC_SCRATCH have a large fragment size (128 kB) for -optimal performance on larger files and since each file occupies at -least one fragment, small files waste a lot of space on those file -systems. The file system is available on request. - -For users from other universities, the quota on $VSC_HOME and $VSC_DATA -will be determined by the local policy of your home institution as these -file systems are mounted from there. The pathnames will be similar with -trivial modifications based on your home institution and VSC account -number. - -.. _characteristics-of-the-compute-nodes-1: - -Characteristics of the compute nodes -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Since hopper is currently a homogenous system with respect to CPU type -and interconnect, it is not needed to specify these properties (see also -the page on `specifying resources, output files and -notifications <\%22/cluster-doc/running-jobs/specifying-requirements\%22>`__). 
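-
-For a straightforward job this means the resource request can stay
-generic. The lines below are only an illustrative sketch of such a
-minimal request (the 20 cores per node and the 72-hour wall time limit
-are taken from the hardware details above; ``myjob.sh`` is a
-hypothetical script name):
-
-::
-
-   # Ask for one full hopper node (20 cores) for at most 24 hours;
-   # no CPU or interconnect property is needed on a homogeneous cluster.
-   qsub -l nodes=1:ppn=20,walltime=24:00:00 myjob.sh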
- -However, to make it possible to write job scripts that can be used on -both hopper and turing (or other VSC clusters) and to prepare for future -extensions of the cluster, the following features are defined: - -+-----------------------------------+-----------------------------------+ -| property | explanation | -+===================================+===================================+ -| ivybridge | only use Intel processors from | -| | the Ivy Bridge family (E5-XXXv2) | -| | (Not needed at the moment as this | -| | is the only CPU type) | -+-----------------------------------+-----------------------------------+ -| ib | use InfiniBand interconnect (only | -| | for compatibility with Turing job | -| | scripts as all nodes have | -| | InfiniBand) | -+-----------------------------------+-----------------------------------+ -| mem64 | use nodes with 64 GB RAM (58 GB | -| | available) | -+-----------------------------------+-----------------------------------+ -| mem256 | use nodes with 256 GB RAM (250 GB | -| | available) | -+-----------------------------------+-----------------------------------+ - -These characteristics map to the following nodes on Hopper: - -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Type | CPU | Inter | # | # | # | insta | avail | local | -| of | type | conne | nodes | physi | logic | lled | mem | disc | -| node | | ct | | cal | al | mem | (per | | -| | | | | cores | cores | (per | node) | | -| | | | | (per | (per | node) | | | -| | | | | node) | node) | | | | -+=======+=======+=======+=======+=======+=======+=======+=======+=======+ -| ivybr | `Xeon | IB-FD | 144 | 20 | 20 | 64 GB | 56 GB | ~360 | -| idge: | E5-26 | R10 | | | | | | GB | -| ib:me | 80v2 | | | | | | | | -| m64 | <\%22 | | | | | | | | -| | https | | | | | | | | -| | ://ar | | | | | | | | -| | k.int | | | | | | | | -| | el.co | | | | | | | | -| | m/pro | | | | | | | | -| | ducts | | | | | | | | -| | /7527 | | | | | | | | -| | 7\%22 | | | | | | | | -| | >`__ | | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| ivybr | `Xeon | IB-FD | 24 | 20 | 20 | 256 | 248 | ~360 | -| idge: | E5-26 | R10 | | | | GB | GB | GB | -| ib:me | 80v2 | | | | | | | | -| m256 | <\%22 | | | | | | | | -| | https | | | | | | | | -| | ://ar | | | | | | | | -| | k.int | | | | | | | | -| | el.co | | | | | | | | -| | m/pro | | | | | | | | -| | ducts | | | | | | | | -| | /7527 | | | | | | | | -| | 7\%22 | | | | | | | | -| | >`__ | | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ - -Turing ------- - -In July 2009, the UAntwerpen bought a 768 core cluster (L5420 CPUs, 16 -GB RAM/node) from HP, that was installed and configured in December -2009. In December 2010, the cluster was extended with 768 cores (L5640 -CPUs, 24 GB RAM/node). In September 2011, another 96 cores (L5640 CPUs, -24 GB RAM/node) have been added. Turing has been retired in January -2017. 
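-
-As an illustration of the properties in the tables above, the
-directives below sketch how a job could request a specific node type.
-This is only an indicative example (the script body and program name
-are hypothetical); see the page on specifying resources for the
-authoritative syntax:
-
-::
-
-   #!/bin/bash -l
-   # One full 256 GB hopper node: chain the CPU and memory properties.
-   #PBS -l nodes=1:ppn=20:ivybridge:mem256
-   #PBS -l walltime=24:00:00
-   # On leibniz the same pattern would read e.g. nodes=1:ppn=28:broadwell:mem128.
-   cd $PBS_O_WORKDIR
-   ./my_program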
- -" diff --git a/Other/file_0305_uniq.rst b/Other/file_0305_uniq.rst deleted file mode 100644 index eda00e565..000000000 --- a/Other/file_0305_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -twerpen _characteristics sizeable werpen ln02 m256 SFA7700 ext4 ark m64 ln1 Internal XXXv2 login3 ln01 me _access Psscal 1323 VSC_SCRA HBM2 XXXv4 0X retired vsc20X TCH ar TCH_SITE 35M 0XYZ round 7220P vsc20XY login4 P5000 mem64 75277 vsc2 R10 L5420 FDR10 Variable cal xeonphi TCH_NODE physi 2_40 248 7527 2680 _hardware Special 91754 m128 broadwell ducts disc YZ Licensed 80v4 pathnames ln2 112 L5640 homogenous Backup Transitioning v4 occupies organization _login ln04 appreciate diff --git a/Other/file_0307.rst b/Other/file_0307.rst deleted file mode 100644 index 6e709f4c7..000000000 --- a/Other/file_0307.rst +++ /dev/null @@ -1,219 +0,0 @@ -Hardware details ----------------- - -- The cluster **login nodes**: - - - login.hpc.kuleuven.be and login2.hpc.kuleuven.be (use this - hostname if you read *vsc.login.node* in the documentation and - want to connect to this login node). - - two GUI login nodes through NX server. - -- **Compute nodes**: - - - **Thin node section**: - - - 208 nodes with two 10-core \\"Ivy Bridge\" Xeon E5-2680v2 CPUs - (2.8 GHz, 25 MB level 3 cache). 176 of those nodes have 64 GB - RAM while 32 are equiped with 128 GB RAM. The nodes are linked - to a QDR Infiniband network. All nodes have a small local disk, - mostly for swapping and the OS image. - - 144 nodes with two 12-core \\"Haswell\" Xeon E5-2680v3 CPUs - (2.5 GHz, 30 MB level 3 cache). 48 of those nodes have with 64 - GB RAM while 96 are equiped with 128 GB RAM. These nodes are - linked to a FDR Infiniband network which offers lower latency - and higher bandwidth than QDR. - - The total memory capacity of this section is 30 TB, the total peak - performance is about 232 Tflops in double precision arithmetic. - - **SMP section** (also known as Cerebro): a SGI UV2000 system with - 64 sockets with a 10-core \\"Ivy Bridge\" Xeon E5-4650 CPU (2.4 - GHz, 25 MB level 3 cache), spread over 32 blades and connected - through a SGI-proprietary NUMAlink6 interconnect. The interconnect - also offers support for global address spaces across shared memory - partitions and offload of some MPI functions. 16 sockets have 128 - GB RAM and 48 sockets have 256 GB RAM, for a total RAM capacity of - 14 TB. The peak compute performance is 12.3 Tflops in double - precision arithmetic. The SMP system also contains a fast 21.8 GB - disk storage system for swapping and temporary files. The system - is partitioned in 2 shared memory partitions. 1 partition has 480 - cores and 12 TB and 1 partition with 160 cores and 2TB. Both - partitions have 10TB local scratch space. - However, should the need arise it can be reconfigured into a - single large 64-socket shared memory machine.More information can - be found in the `cerebro quick start - guide <\%22https://www.vscentrum.be/assets/965\%22>`__ or `slides - from the - info-session. <\%22https://www.vscentrum.be/assets/947\%22>`__ - - **Accelerator section:** - - - 5 nodes with two 10-core \\"Haswell\" Xeon E5-2650v3 2.3GHz - CPUs, 64 GB of RAM and 2 GPUs Tesla K40 (2880 CUDA cores @ - Boost clocks 810 MHz and 875 MHz, 1.66 DP Tflops/GPU Boost - Clocks). - - - The central GPU and Xeon Phi system is also integrated in the - cluster and available to other sites. Each node has two - six-core Intel Xeon E5-2630 CPUs, 64 GB RAM and a local hard - disk. All nodes are on a QDR Infiniband interconnect. 
This - system consists of: - - 8 nodes have two nVidia K20x cards each installed. Each K20x - has 14 SMX processors (Kepler family; total of 2688 CUDA cores) - that run at 732MHz and 6 GB of GDDR5 memory with a peak memory - bandwidth of 250 GB/s (384-bit interface @ 2.6 GHz). The peak - floating point performance per card is 1.31 Tflops in double - and 3.95 Tflops in single precision. - - 8 nodes have two Intel Xeon Phi 5110P cards each installed. - Each Xeon Phi board has 60 cores running at 1.053 GHz (of which - one is reserved for the card OS and 59 are available for - applications). Each core supports a large subset of the 64-bit - Intel architecture instructions and a vector extension with - 512-bit vector instructions. Each board contains 8 GB of RAM, - distributed across 16 memory channels, with a peak memory - bandwidth of 320 GB/s. The peak performance (not counting the - core reserved for the OS) is 0.994 Tflops in double precision - and 1.988 Tflops in single precision. The Xeon Phi system is - not yet fully operational. MPI applications spanning multiple - nodes cannot be used at the moment. - - 20 nodes have four Nvidia Tesla P100 SXM2 cards each installed - (3584 CUDA cores @1328 MHz, 5.3 DP Tflops/GPU). - - To start working with accelerators please refer to `access - webpage <\%22https://www.vscentrum.be/infrastructure/hardware/hardware-kul/accelerators\%22>`__. - -- **Visualization nodes**: 2 nodes with two 10-core \\"Haswell\" Xeon - E5-2650v3 2.3GHz CPUs, 2 times 64 GB of RAM and 2 GPUs NVIDIA Quadro - K5200 (2304 CUDA cores @ 667 MHz). To start working on visualization - nodes, we refer to the `TurboVNC start - guide <\%22/client/multiplatform/turbovnc\%22>`__. -- **Central storage** available to all nodes: - - - A NetApp NAS system with 30 TB of storage, used for the home- and - permanent data directories. All data is mirrored almost - instantaneously to the KU Leuven disaster recovery data centre. - - A 284 TB GPFS parallel filesystem from DDN, mostly used for - temporary disk space. - - A 600 TB archive storage optimised for capacity and aimed at - long-term storage of very infrequently accessed data. To start - using the archive storage, we refer to the `WOS Storage quick - start - guide <\%22https://www.vscentrum.be/infrastructure/hardware/wos-storage\%22>`__. 
- -- For administrative purposes, there are also **service nodes** that - are not user-accessible - -|image0| - -Characteristics of the compute nodes ------------------------------------- - -The following properties allow you to select the appropriate node type -for your job (see also the page on `specifying resources, output files -and -notifications <\%22/cluster-doc/running-jobs/specifying-requirements\%22>`__): - -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Clust | Type | CPU | Inter | # | insta | avail | local | # | -| er | of | type | conne | cores | lled | mem | discs | nodes | -| | node | | ct | | mem | | | | -+=======+=======+=======+=======+=======+=======+=======+=======+=======+ -| Think | ivybr | Xeon | IB-QD | 20 | 64 GB | 60 GB | 250 | 176 | -| ing | idge | E5-26 | R | | | | GB | | -| | | 80v2 | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| ThinK | ivybr | Xeon | IB-QD | 20 | 128 | 124 | 250 | 32 | -| ing | idge | E5-26 | R | | GB | GB | GB | | -| | | 80v2 | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Think | haswe | Xeon | IB-FD | 24 | 64 GB | 60 GB | 150 | 48 | -| ing | ll | E5-26 | R | | | | GB | | -| | | 80v3 | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Think | haswe | Xeon | IB-FD | 24 | 128 | 124 | 150 | 96 | -| ing | ll | E5-26 | R | | GB | GB | GB | | -| | | 80v3 | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Geniu | skyla | Xeon | IB-ED | 36 | 192 | 188 | 800 | 86 | -| s | ke | 6140 | R | | GB | GB | GB | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Geniu | skyla | Xeon | IB-ED | 36 | 768 | 764 | 800 | 10 | -| s | ke-la | 6140 | R | | GB | GB | GB | | -| | rge | | | | | | | | -| | memor | | | | | | | | -| | y | | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ -| Geniu | skyla | Xeon | IB-ED | 36 | 192 | 188 | 800 | 20 | -| s | ke-GP | 6140 | R | | GB | GB | GB | | -| | U | 4xP10 | | | | | | | -| | | 0 | | | | | | | -| | | SXM2 | | | | | | | -+-------+-------+-------+-------+-------+-------+-------+-------+-------+ - -For using Cerebro, the shared memory section, we refer to the `Cerebro -Quick Start Guide <\%22/assets/965\%22>`__. - -Implementation of the VSC directory structure ---------------------------------------------- - -In the transition phase between Vic3 and ThinKing, the storage is -mounted on both systems. When switching from Vic3 to ThinKing you will -not need to migrate your data. - -The cluster uses the directory structure that is implemented on most VSC -clusters. This implies that each user has two personal directories: - -- A regular home directory which contains all files that a user might - need to log on to the system, and small 'utility' - scripts/programs/source code/.... The capacity that can be used is - restricted by quota and this directory should not be used for I/O - intensive programs. - For KU Leuven systems the full path is of the form /user/leuven/... , - but this might be different on other VSC systems. However, on all - systems, the environment variable VSC_HOME points to this directory - (just as the standard HOME variable does). -- A data directory which can be used to store programs and their - results. At the moment, there are no quota on this directory. For KU - Leuven the path name is /data/leuven/... . 
On all VSC systems, the - environment variable VSC_DATA points to this directory. - -There are three further environment variables that point to other -directories that can be used: - -- On each cluster you have access to a scratch directory that is shared - by all nodes on the cluster. The variable VSC_SCRATCH_SITE will point - to this directory. This directory is also accessible from the - loginnodes, so it is accessible while your jobs run, and after they - finish (for a limited time: files can be removed automatically after - 14 days.) -- Similarly, on each cluster you have a VSC_SCRATCH_NODE directory, - which is a scratch space local to each computenode. Thus, on each - node, this directory point to a different physical location, and the - connects are only accessible from that particular worknode, and - (typically) only during the runtime of your job. But, if more than - one job of you runs on the same node, they all see the same directory - (and thus you have to make sure they do not overwrite each others - data by creating subdirectories per job, or give proper filename, - ...) - -Access restrictions -------------------- - -Access is available for faculty, students (under faculty supervision), -and researchers of the KU Leuven, UHasselt and their associations. This -cluster is being integrated in the VSC network and as such becomes -available to all VSC users. - -History -------- - -In September 2013 a new thin node cluster (HP) and a shared memory -system (SGI) was bought. The thin node cluster was installed and -configured in January/February 2014 and extended in september 2014. -Installation and configuration of the SMP is done in April 2014. -Financing of this systems was obtained from the Hercules foundation and -the Flemish government. - -Do you want to see it ? Have a look at the movie - -" - -.. |image0| image:: \%22/assets/1335\%22 - diff --git a/Other/file_0307_uniq.rst b/Other/file_0307_uniq.rst deleted file mode 100644 index 014f353df..000000000 --- a/Other/file_0307_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -partitioned 994 764 124 computenode skyla 2TB 2688 SXM2 Thin er NUMAlink6 95 Quadro infrequently 2630 10TB Accelerator DP 320 208 disaster GDDR5 proprietary 3584 480 blades 3GHz y Clocks recovery 284 965 transition instantaneously migrate ing 1328 la channels worknode counting 384 4650 discs Vic3 2650v3 667 SMP K5200 80v3 arise 4xP10 NetApp 800 UV2000 SMX 947 ke haswe image0 Think 875 ThinK 053 GP 732MHz clocks wos Clust K40 QD 2304 spanning 988 1335 memor september Nvidia rge Geniu 232 MHz 2880 diff --git a/Other/file_0309.rst b/Other/file_0309.rst deleted file mode 100644 index 55172b102..000000000 --- a/Other/file_0309.rst +++ /dev/null @@ -1,121 +0,0 @@ -Overview --------- - -The tier-1 cluster *muk* is primarily aimed at large parallel computing -jobs that require a high-bandwidth low-latency interconnect, but jobs -that require a multitude of small independent tasks are also accepted. - -The main architectural features are: - -- 528 compute nodes with two Xeon E5-2670 processors (2,6GHz, 8 cores - per processor, Sandy Bridge architecture) and 64GiB of memory, for a - total memory capacity of 33 TiB and a peak performance of more than - 175 TFlops (Linpack result 152,3 TFlops) -- FDR Infiniband interconnect with a fat tree topology (1:2 - oversubscription) -- A storage system with a net capacity of approximately 400TB and a - peak bandwidth of 9.5 GB/s. 
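-
-As a quick sanity check, the headline figures above are mutually
-consistent, assuming the usual 8 double-precision flops per cycle for a
-Sandy Bridge core (the commands below are only back-of-the-envelope
-arithmetic, nothing muk-specific):
-
-::
-
-   echo $((528 * 2 * 8))    # 8448 cores in total
-   echo $((528 * 64))       # 33792 GiB, i.e. about 33 TiB of memory
-   # 8448 cores x 2.6 GHz x 8 DP flops/cycle ~ 175.7 TFlops peak
-   awk 'BEGIN { print 8448 * 2.6e9 * 8 / 1e12 }'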
- -The cluster appeared for several years in the Top500 list of -supercomputer sites: - -+---------+-----------+----------+-----------+----------+-----------+ -| | June 2012 | Nov 2012 | June 2013 | Nov 2013 | June 2014 | -+---------+-----------+----------+-----------+----------+-----------+ -| Ranking | 118 | 163 | 239 | 306 | 430 | -+---------+-----------+----------+-----------+----------+-----------+ - -Compute time on *muk* is only available upon approval of a project. -Information on requesting projects is available `in -Dutch <\%22/nl/systemen-en-toegang/projecttoegang-tier1\%22>`__ and `in -English <\%22/en/access-and-infrastructure/project-access-tier1\%22>`__ - -Access restriction ------------------- - -Once your project has been approved, your login on the tier-1 cluster -will be enabled. You use the same vsc-account (vscXXXXX) as at your home -institutions and you use the same $VSC_HOME and $VSC_DATA directories, -though the tier-1 does have its own scratch directories. - -A direct login from your own computer through the public network to -*muk* is not possible for security reasons. You have to enter via the -VSC network, which is reachable from all Flemish university networks. - -:: - - ssh login.hpc.uantwerpen.be - ssh login.hpc.ugent.be - ssh login.hpc.kuleuven.be or login2.hpc.kuleuven.be - -Make sure that you have at least once connected to the login nodes of -your institution, before attempting access to tier-1. - -Once on the VSC network, you can - -- connect to **login.muk.gent.vsc** to work on the tier-1 cluster muk, -- connect to **gligar01.gligar.gent.vsc** or - **gligar02.gligar.gent.vsc** for testing and debugging purposes - (e.g., check if a code compiles). There you'll find the same software - stack as on the tier-1. (On some machines gligar01.ugent.be and - gligar02.ugent.be might also work.) - -There are two options to log on to these systems over the VSC network: - -#. You log on to your home cluster. At the command line, you start a ssh - session to *login.muk.gent.vsc*. - - :: - - ssh login.muk.gent.vsc - -#. You set up a so-called *ssh proxy* through your usual VSC login node - *vsc.login.node* (the *proxy server* in this process) to - *login.muk.gent.vsc* or *gligar01.ugent.be*. - - - To set up a ssh proxy using OpenSSH, the client for Linux and OS X - or if you have Windows with the Cygwin emulation layer installed, - follow the instructions `in the Linux client - section <\%22/client/linux/openssh-proxy\%22>`__. - - To set up a ssh proxy on Windows using PuTTY, follow the - instructions `in the Windows client - section <\%22/client/windows/putty-proxy\%22>`__. - -Resource limits ---------------- - -Disk quota -~~~~~~~~~~ - -- As you are using your $VSC_HOME and $VSC_DATA directories from your - home institution, the quota policy from your home institution - applies. -- On the shared (across nodes) scratch volume $VSC_SCRATCH the standard - disk quota is 250GiB per user. If your project requires more disk - space, you should request it in your project application as we have - to make sure that the mix of allocated projects does not require more - disk space than available. -- Currently, each institute has a maximal scratch quotum of 75TiB. So, - please vacate as much as possible of the $VSC_SCRATCH at all times to - enable large jobs. - -Memory -~~~~~~ - -- Each node has 64GiB of RAM. However, not all of that memory is - available for user applications as some memory is needed for the - operating system and file system buffers. 
In practice, roughly 60GiB
-  is available to run your jobs. This also means that when using all
-  cores, you should not request more than 3.75GiB of RAM per core (pmem
-  resource attribute in qsub) or your job will be queued indefinitely
-  since the resource manager will not be able to assign nodes to it.
-- The maximum amount of total virtual memory per node ('vmem') you can
-  request is 83GiB, see also the output of the ``pbsmon`` command. The
-  job submit filter sets a default virtual memory limit if you don't
-  specify one with your job using e.g.
-
-  ::
-
-     #PBS -l vmem=83gb
-
-"
diff --git a/Other/file_0309_uniq.rst b/Other/file_0309_uniq.rst
deleted file mode 100644
index b9356fdbe..000000000
--- a/Other/file_0309_uniq.rst
+++ /dev/null
@@ -1 +0,0 @@
-oversubscription attempting vacate 400TB 64GiB pbsmon Ranking practise 83gb 75TiB 250GiB 6GHz reachable 60GiB quotum 75GiB 83GiB
diff --git a/Other/file_0313.rst b/Other/file_0313.rst
deleted file mode 100644
index dc9263fca..000000000
--- a/Other/file_0313.rst
+++ /dev/null
@@ -1,40 +0,0 @@
-Tier-1
-------
-
-- Our `current Tier-1 system is
-  BrENIAC <\%22/infrastructure/hardware/hardware-tier1-breniac\%22>`__,
-  operated by KU Leuven. The system is aimed at large parallel
-  computing jobs that require a high-bandwidth low-latency
-  interconnect. Compute time is again only available upon approval of a
-  project. See the `page on Tier-1 project access and the links on that
-  page <\%22https://www.vscentrum.be/en/access-and-infrastructure/project-access-tier1\%22>`__.
-- Our `first Tier-1 system is
-  muk <\%22/infrastructure/hardware/hardware-tier1-muk\%22>`__; it was
-  operated by UGent but is no longer in production.
-
-Experimental setup
-------------------
-
-- `There is a small GPU and Xeon Phi test
-  system <\%22/infrastructure/hardware/k20x-phi-hardware\%22>`__ which
-  can be used by all VSC members on request (though a project
-  approval is not required at the moment). `The documentation for this
-  system is under
-  development <\%22/infrastructure/hardware/k20x-phi-hardware\%22>`__.
-
-Tier-2
-------
-
-Four university-level cluster groups are also embedded in the VSC and
-partly funded from VSC budgets:
-
-- `The UAntwerpen clusters (hopper and
-  leibniz) <\%22/infrastructure/hardware/hardware-ua\%22>`__
-- `The VUB cluster
-  (hydra) <\%22/infrastructure/hardware/hardware-vub\%22>`__
-- `The UGent local
-  clusters <\%22/infrastructure/hardware/hardware-ugent\%22>`__
-- `The KU Leuven/UHasselt cluster (ThinKing and
-  Cerebro) <\%22/infrastructure/hardware/hardware-kul\%22>`__
-
-"
diff --git a/Other/file_0313_uniq.rst b/Other/file_0313_uniq.rst
deleted file mode 100644
index 3947d2627..000000000
--- a/Other/file_0313_uniq.rst
+++ /dev/null
@@ -1 +0,0 @@
-operated breniac
diff --git a/Other/file_0377.rst b/Other/file_0377.rst
deleted file mode 100644
index f621f265f..000000000
--- a/Other/file_0377.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-BEgrid has its `own documentation web site as it is a project at the
-federal level <\%22http://www.begrid.be/\%22>`__. Some useful links are:
-
-- `BEgrid Wiki <\%22http://wiki.begrid.be/\%22>`__
-- `gLite 3.1 User Guide (PDF, at
-  CERN) <\%22https://edms.cern.ch/ui/file/722398/1.2/gLite-3-UserGuide.pdf\%22>`__
-  gLite is the grid middleware used on BEgrid.
-- `Other related links on the BEgrid web
-  site.
<\%22http://www.begrid.be/index.php?module=webpage&id=16\%22>`__ - -" diff --git a/Other/file_0377_uniq.rst b/Other/file_0377_uniq.rst deleted file mode 100644 index 2c850c209..000000000 --- a/Other/file_0377_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -ui diff --git a/Other/file_0381.rst b/Other/file_0381.rst deleted file mode 100644 index 25c2e399a..000000000 --- a/Other/file_0381.rst +++ /dev/null @@ -1,11 +0,0 @@ -This is just some random text. Don't be worried if the remainder of this -paragraph sounds like Latin to you cause it is Latin. Cras mattis -consectetur purus sit amet fermentum. Cum sociis natoque penatibus et -magnis dis parturient montes, nascetur ridiculus mus. Sed posuere -consectetur est at lobortis. Morbi leo risus, porta ac consectetur ac, -vestibulum at eros. Cras mattis consectetur purus sit amet fermentum. -Cum sociis natoque penatibus et magnis dis parturient montes, nascetur -ridiculus mus. Sed posuere consectetur est at lobortis. Morbi leo risus, -porta ac consectetur ac, vestibulum at eros. - -" diff --git a/Other/file_0381_uniq.rst b/Other/file_0381_uniq.rst deleted file mode 100644 index a3266edab..000000000 --- a/Other/file_0381_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -posuere est worried penatibus nascetur dis sociis ridiculus Morbi purus magnis natoque Cum leo lobortis parturient montes mus vestibulum Cras risus porta diff --git a/Other/file_0385.rst b/Other/file_0385.rst deleted file mode 100644 index f7af79c29..000000000 --- a/Other/file_0385.rst +++ /dev/null @@ -1,8 +0,0 @@ -What I tried to do with the \\"Asset\" box in the right column: - -- I included two pictures from our asset toolbox. What is shown are - square thumbnails of the pictures. -- I also included two PDFs that have no picture attached to them. They - simply don't show up. - -| diff --git a/Other/file_0385_uniq.rst b/Other/file_0385_uniq.rst deleted file mode 100644 index beb1ab2fc..000000000 --- a/Other/file_0385_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -thumbnails PDFs square tried asset diff --git a/Other/file_0387.rst b/Other/file_0387.rst deleted file mode 100644 index abe40e0a4..000000000 --- a/Other/file_0387.rst +++ /dev/null @@ -1,94 +0,0 @@ -Inline code with ... ---------------------------------- - -We used inline code on the old vscentrum.be to clearly mark system -commands etc. in text. - -- For this we used the tag. -- There was support in the editor to set this tag -- It doesn't seem to work properly in the current editor. If the - fragment of code contains a slash (/), the closing tag gets omitted. - -Example: At UAntwerpen you'll have to use ``module avail MATLAB`` and -``module load MATLAB/2014a`` respectively. - -However, If you enter both -blocks on the same line in a HTML -file, the editor doesn't process them well: ``module avail MATLAB`` and -module load MATLAB. - -En dit is ``code inline`` als test. - -En dit dit wordt een nieuw pre-blok: - -:: - - #!/bin/bash - echo \"Hello, world!\" - -Code in
``<pre>`` ... ``</pre>``
    ----------------------- - -This was used a lot on the old vscentrum.be site to display fragments of -code or display output in a console windows. - -- Readability of fragments is definitely better if a fixed width font - is used as this is necessary to get a correct alignment. -- Formatting is important: Line breaks should be respected. The problem - with the CMS seems to be that the editor respects the line breaks, - the database also stores them as I can edit the code again, but the - CMS removes them when generating the final HTML-page as I don't see - the line breaks again in the resulting HTML-code that is loaded into - the browser. - -:: - - #!/bin/bash -l - #PBS -l nodes=1:nehalem - #PBS -l mem=4gb - module load matlab - cd $PBS_O_WORKDIR - ... - -The style in the editor ------------------------------- - -In fact, the Code style of the editor works on a paragraph basis and all -it does is put the paragraph between
``<pre>`` and ``</pre>``
    -tags, so the -problem mentioned above remains. The next text was edited in WYSIWIG -mode: - -:: - - #!/bin/bash -l - #PBS -l nodes=4:ivybridge - ... - -Another editor bug is that it isn't possible to switch back to regular -text mode at the end of a code fragment if that is at the end of the -text widget: The whole block is converted back to regular text instead -and the formatting is no longer shown. - -Een Workaround is misschien meerdere
    -blokken gebruiken?
    -
    -::
    -
    -   #!/bin/bash -l
    -
    -::
    -
    -   #PBS -l nodes=4:ivybridge
    -
    -::
    -
    -   ...
    -
-No, because then you end up with several separate grey boxes...
    -
    -En met 
    en de -tag? - -``#! /bin/bash -l#PBS -l nodes=4:ivybridge...`` - -Ook dit is niet ideaal, want alles staat niet aaneenin een kader, maar -het is beter dan niets... - -" diff --git a/Other/file_0387_uniq.rst b/Other/file_0387_uniq.rst deleted file mode 100644 index b66967230..000000000 --- a/Other/file_0387_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -dan br niets aaneenin ideaal Neen nieuw Workaround krijg En meerdere kader vakken blokken grijze gebruiken alles diff --git a/Other/file_0395.rst b/Other/file_0395.rst deleted file mode 100644 index 166dcb3ec..000000000 --- a/Other/file_0395.rst +++ /dev/null @@ -1,9 +0,0 @@ -Tier-1 infrastructure -===================== - -Our first Tier-1 cluster, Muk, was installed in the spring of 2012 and -became operationa a few months later. This system is primarily optimised -for the processing of large parallel computing tasks that need to have a -high-speed interconnect. - -" diff --git a/Other/file_0395_uniq.rst b/Other/file_0395_uniq.rst deleted file mode 100644 index a052c9a75..000000000 --- a/Other/file_0395_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -operationa diff --git a/Other/file_0399.rst b/Other/file_0399.rst deleted file mode 100644 index 909e7af70..000000000 --- a/Other/file_0399.rst +++ /dev/null @@ -1,381 +0,0 @@ -The list below gives an indication of which (scientific) software, -libraries and compilers are available on TIER1 on 1 December 2014. For -each package, the available version(s) is shown as well as (if -applicable) the compilers/libraries/options with which the software was -compiled. Note that some software packages are only available when the -end-user demonstrates to have valid licenses to use this software on the -TIER1 infrastructure of Ghent University. - -- ABAQUS/6.12.1-linux-x86_64 -- ALADIN/36t1_op2bf1-ictce-4.1.13 -- ALADIN/36t1_op2bf1-ictce-4.1.13-strict -- Allinea/4.1-32834-Redhat-6.0-x86_64 -- ANTLR/2.7.7-ictce-4.1.13 -- APR/0.9.18-ictce-4.1.13 -- APR/1.5.0-ictce-4.1.13 -- APR/1.5.0-ictce-5.5.0 -- APR-util/1.3.9-ictce-4.1.13 -- APR-util/1.5.3-ictce-4.1.13 -- APR-util/1.5.3-ictce-5.5.0 -- ASE/3.6.0.2515-ictce-4.1.13-Python-2.7.3 -- Autoconf/2.69-ictce-4.1.13 -- BEAGLE/20130408-ictce-4.0.6 -- beagle-lib/20120124-ictce-4.1.13 -- BEDTools/2.17.0-ictce-4.1.13 -- BEDTools/v2.17.0-ictce-4.1.13 -- Bison/2.5-ictce-4.1.13 -- Bison/2.6.5-ictce-4.1.13 -- Bison/2.7.1-ictce-5.5.0 -- Bison/2.7-ictce-4.1.13 -- Bison/2.7-ictce-5.5.0 -- Bison/3.0.2-intel-2014b -- BLACS/1.1-gompi-1.1.0-no-OFED -- Boost/1.51.0-ictce-4.1.13-Python-2.7.3 -- Boost/1.55.0-ictce-5.5.0-Python-2.7.6 -- Bowtie/1.0.0-ictce-4.1.13 -- Bowtie2/2.0.2-ictce-4.1.13 -- Bowtie2/2.1.0-ictce-5.5.0 -- BWA/0.6.2-ictce-4.1.13 -- bzip2/1.0.6-ictce-4.1.13 -- bzip2/1.0.6-ictce-5.5.0 -- bzip2/1.0.6-iomkl-4.6.13 -- CDO/1.6.2-ictce-5.5.0 -- CDO/1.6.3-ictce-5.5.0 -- Circos/0.64-ictce-5.5.0-Perl-5.18.2 -- CMake/2.8.10.2-ictce-4.0.6 -- CMake/2.8.10.2-ictce-4.1.13 -- CMake/2.8.12-ictce-5.5.0 -- CMake/2.8.4-ictce-4.1.13 -- CP2K/20130228-ictce-4.1.13 -- CP2K/20131211-ictce-5.5.0 -- CP2K/2.5.1-intel-2014b-psmp -- Cufflinks/2.1.1-ictce-4.1.13 -- Cufflinks/2.1.1-ictce-5.5.0 -- cURL/7.28.1-ictce-4.1.13 -- cURL/7.28.1-ictce-5.5.0 -- cURL/7.33.0-ictce-4.1.13 -- cURL/7.34.0-ictce-5.5.0 -- cutadapt/1.3-ictce-4.1.13-Python-2.7.3 -- Cython/0.17.4-ictce-4.1.13-Python-2.7.3 -- Cython/0.19.2-ictce-5.5.0-Python-2.7.6 -- DB/4.7.25-ictce-4.1.13 -- DBD-mysql/4.023-ictce-4.1.13-Perl-5.16.3 -- Doxygen/1.8.1.1-ictce-4.1.13 -- Doxygen/1.8.2-ictce-4.1.13 -- Doxygen/1.8.3.1-ictce-4.1.13 -- 
Doxygen/1.8.3.1-ictce-5.5.0 -- Doxygen/1.8.6-ictce-5.5.0 -- e2fsprogs/1.42.7-ictce-4.1.13 -- EasyBuild/1.10.0(default) -- EasyBuild/1.7.0 -- EasyBuild/1.8.2 -- EasyBuild/1.9.0 -- ed/1.9-ictce-4.1.13 -- Eigen/3.1.1-ictce-4.1.13 -- Eigen/3.2.0-ictce-5.5.0 -- ESMF/6.1.1-ictce-4.1.13 -- ESMF/6.1.1-ictce-5.5.0 -- expat/2.1.0-ictce-4.1.13 -- expat/2.1.0-ictce-5.5.0 -- fastahack/20110215-ictce-4.1.13 -- FFTW/3.3.1-gompi-1.1.0-no-OFED -- FFTW/3.3.3-ictce-4.1.13 -- FFTW/3.3.3-ictce-4.1.13-single -- FFTW/3.3.3-ictce-4.1.14 -- FFTW/3.3.3-ictce-4.1.14-single -- FFTW/3.3.3-iomkl-4.6.13-single -- FFTW/3.3.4-intel-2014b -- flex/2.5.35-ictce-4.1.13 -- flex/2.5.37-ictce-4.1.13 -- flex/2.5.37-ictce-5.5.0 -- flex/2.5.37-intel-2014b -- flex/2.5.39-intel-2014b -- FLTK/1.3.2-ictce-4.1.13 -- FLUENT/14.5 -- FLUENT/15.0.7 -- fontconfig/2.11.1-ictce-5.5.0 -- freetype/2.4.11-ictce-4.1.13 -- freetype/2.4.11-ictce-5.5.0 -- g2clib/1.4.0-ictce-4.1.13 -- g2clib/1.4.0-ictce-5.5.0 -- g2lib/1.4.0-ictce-4.1.13 -- g2lib/1.4.0-ictce-5.5.0 -- Gaussian/g09_B.01-ictce-4.1.13-amd64-gpfs-I12 -- Gaussian/g09_D.01-ictce-5.5.0-amd64-gpfs -- GCC/4.6.3 -- GCC/4.8.3 -- GD/2.52-ictce-5.5.0-Perl-5.18.2 -- GDAL/1.9.2-ictce-4.1.13 -- GDAL/1.9.2-ictce-5.5.0 -- GLib/2.34.3-ictce-4.1.13 -- glproto/1.4.16-ictce-4.1.13 -- GMAP/2013-11-27-ictce-5.5.0 -- gnuplot/4.4.4-ictce-4.1.13 -- gompi/1.1.0-no-OFED -- Greenlet/0.4.0-ictce-4.1.13-Python-2.7.3 -- grib_api/1.9.18-ictce-4.1.13 -- GROMACS/4.6.5-ictce-5.5.0-hybrid -- GROMACS/4.6.5-ictce-5.5.0-mpi -- GSL/1.16-ictce-4.1.13 -- GSL/1.16-ictce-5.5.0 -- gzip/1.4 -- h5py/2.1.0-ictce-4.1.13-Python-2.7.3 -- Hadoop/0.9.9-rdma -- Hadoop/2.0.0-cdh4.4.0 -- Hadoop/2.0.0-cdh4.5.0 -- Hadoop/2.3.0-cdh5.0.0 -- Hadoop/2.x-0.9.1-rdma -- hanythingondemand/2.1.1-ictce-5.5.0-Python-2.7.6 -- hanythingondemand/2.1.4-ictce-5.5.0-Python-2.7.6 -- HDF/4.2.8-ictce-4.1.13 -- HDF/4.2.8-ictce-5.5.0 -- HDF5/1.8.10-ictce-4.1.13-gpfs-mt -- HDF5/1.8.10-ictce-4.1.13-parallel-gpfs -- HDF5/1.8.10-ictce-5.5.0-gpfs -- HDF5/1.8.10-ictce-5.5.0-gpfs-mt -- HDF5/1.8.12-ictce-5.5.0 -- HDF5/1.8.9-ictce-4.1.13 -- hwloc/1.6-iccifort-2011.13.367 -- hwloc/1.9-GCC-4.8.3 -- icc/11.1.069 -- icc/11.1.073 -- icc/11.1.075 -- icc/2011.13.367 -- icc/2011.6.233 -- icc/2013.5.192 -- icc/2013.5.192-GCC-4.8.3 -- icc/2013_sp1.2.144 -- iccifort/2011.13.367 -- iccifort/2013.5.192-GCC-4.8.3 -- ictce/3.2.1.015.u4 -- ictce/3.2.2.u3 -- ictce/4.0.6 -- ictce/4.1.13 -- ictce/4.1.14 -- ictce/5.5.0 -- ictce/6.2.5 -- ifort/11.1.069 -- ifort/11.1.073 -- ifort/11.1.075 -- ifort/2011.13.367 -- ifort/2011.6.233 -- ifort/2013.5.192 -- ifort/2013.5.192-GCC-4.8.3 -- ifort/2013_sp1.2.144 -- iimpi/5.5.3-GCC-4.8.3 -- imkl/10.2.4.032 -- imkl/10.2.6.038 -- imkl/10.3.12.361 -- imkl/10.3.12.361-impi-4.1.0.030 -- imkl/10.3.12.361-MVAPICH2-1.9 -- imkl/10.3.12.361-OpenMPI-1.6.3 -- imkl/10.3.6.233 -- imkl/11.0.5.192 -- imkl/11.1.2.144 -- imkl/11.1.2.144-iimpi-5.5.3-GCC-4.8.3 -- impi/3.2.2.006 -- impi/4.0.0.028 -- impi/4.0.2.003 -- impi/4.1.0.027 -- impi/4.1.0.030 -- impi/4.1.1.036 -- impi/4.1.3.049 -- impi/4.1.3.049-GCC-4.8.3 -- impi/4.1.3.049-iccifort-2013.5.192-GCC-4.8.3 -- intel/2014b -- iomkl/4.6.13 -- IPython/0.13.1-ictce-4.1.13-Python-2.7.3 -- JasPer/1.900.1-ictce-4.1.13 -- JasPer/1.900.1-ictce-5.5.0 -- Java/1.7.0_10 -- Java/1.7.0_15 -- Java/1.7.0_17 -- Java/1.7.0_40 -- Java/1.7.0_60 -- Java/1.8.0_20 -- LAPACK/3.4.0-gompi-1.1.0-no-OFED -- libdrm/2.4.27-ictce-4.1.13 -- libffi/3.0.13-ictce-4.1.13 -- libffi/3.0.13-ictce-5.5.0 -- libgd/2.1.0-ictce-5.5.0 -- Libint/1.1.4-ictce-4.1.13 -- 
Libint/1.1.4-ictce-5.5.0 -- libint2/2.0.3-intel-2014b -- libjpeg-turbo/1.3.0-ictce-4.1.13 -- libjpeg-turbo/1.3.0-ictce-5.5.0 -- libpciaccess/0.13.1-ictce-4.1.13 -- libpng/1.6.10-ictce-5.5.0 -- libpng/1.6.3-ictce-4.1.13 -- libpng/1.6.6-ictce-4.1.13 -- libpng/1.6.6-ictce-5.5.0 -- libpthread-stubs/0.3-ictce-4.1.13 -- libreadline/6.2-ictce-4.1.13 -- libreadline/6.2-ictce-5.5.0 -- libreadline/6.2-intel-2014b -- libreadline/6.2-iomkl-4.6.13 -- libxc/2.0.1-ictce-5.5.0 -- libxc/2.2.0-intel-2014b -- libxml2/2.8.0-ictce-4.1.13-Python-2.7.3 -- libxml2/2.9.0-ictce-4.1.13 -- libxml2/2.9.1-ictce-4.1.13 -- libxml2/2.9.1-ictce-5.5.0 -- libXp/1.0.1 -- libXp/1.0.1-ictce-4.1.13 -- M4/1.4.16-ictce-3.2.2.u3 -- M4/1.4.16-ictce-4.1.13 -- M4/1.4.16-ictce-5.5.0 -- M4/1.4.17-ictce-5.5.0 -- M4/1.4.17-intel-2014b -- makedepend/1.0.4-ictce-4.1.13 -- makedepend/1.0.4-ictce-5.5.0 -- MariaDB/5.5.29-ictce-4.1.13 -- MATLAB/2010b -- MATLAB/2012b -- Mesa/7.11.2-ictce-4.1.13-Python-2.7.3 -- mpi4py/1.3-ictce-4.1.13-Python-2.7.3 -- MrBayes/3.2.0-ictce-4.1.13 -- MVAPICH2/1.9-iccifort-2011.13.367 -- NASM/2.07-ictce-4.1.13 -- NASM/2.07-ictce-5.5.0 -- NCL/6.1.2-ictce-4.1.13 -- NCL/6.1.2-ictce-5.5.0 -- NCO/4.4.4-ictce-4.1.13 -- ncurses/5.9-ictce-4.1.13 -- ncurses/5.9-ictce-5.5.0 -- ncurses/5.9-intel-2014b -- ncurses/5.9-iomkl-4.6.13 -- ncview/2.1.2-ictce-4.1.13 -- neon/0.30.0-ictce-4.1.13 -- netaddr/0.7.10-ictce-5.5.0-Python-2.7.6 -- netCDF/4.1.3-ictce-4.1.13 -- netCDF/4.2.1.1-ictce-4.1.13 -- netCDF/4.2.1.1-ictce-4.1.13-mt -- netCDF/4.2.1.1-ictce-5.5.0 -- netCDF/4.2.1.1-ictce-5.5.0-mt -- netCDF/4.3.0-ictce-5.5.0 -- netcdf4-python/1.0.7-ictce-5.5.0-Python-2.7.6 -- netCDF-C++/4.2-ictce-4.1.13 -- netCDF-C++/4.2-ictce-4.1.13-mt -- netCDF-C++/4.2-ictce-5.5.0-mt -- netCDF-Fortran/4.2-ictce-4.1.13 -- netCDF-Fortran/4.2-ictce-4.1.13-mt -- netCDF-Fortran/4.2-ictce-5.5.0 -- netCDF-Fortran/4.2-ictce-5.5.0-mt -- netifaces/0.8-ictce-5.5.0-Python-2.7.6 -- NEURON/7.2-ictce-4.1.13 -- numactl/2.0.9-GCC-4.8.3 -- numexpr/2.0.1-ictce-4.1.13-Python-2.7.3 -- numexpr/2.2.2-ictce-5.5.0-Python-2.7.6 -- NWChem/6.1.1-ictce-4.1.13-2012-06-27-Python-2.7.3 -- OpenBLAS/0.2.9-GCC-4.8.3-LAPACK-3.5.0 -- OpenFOAM/2.1.1-ictce-4.1.13 -- OpenFOAM/2.2.0-ictce-4.1.13 -- OpenFOAM/2.3.0-intel-2014b -- OpenMPI/1.4.5-GCC-4.6.3-no-OFED -- OpenMPI/1.6.3-iccifort-2011.13.367 -- OpenPGM/5.2.122-ictce-4.1.13 -- OpenPGM/5.2.122-ictce-5.5.0 -- PAML/4.7-ictce-4.1.13 -- pandas/0.11.0-ictce-4.1.13-Python-2.7.3 -- pandas/0.12.0-ictce-5.5.0-Python-2.7.6 -- pandas/0.13.1-ictce-5.5.0-Python-2.7.6 -- Paraview/4.1.0-ictce-4.1.13 -- paycheck/1.0.2 -- paycheck/1.0.2-ictce-4.1.13-Python-2.7.3 -- paycheck/1.0.2-iomkl-4.6.13-Python-2.7.3 -- pbs_python/4.3.5-ictce-5.5.0-Python-2.7.6 -- Perl/5.16.3-ictce-4.1.13 -- Perl/5.18.2-ictce-5.5.0 -- picard/1.100-ictce-4.1.13 -- Primer3/2.3.0-ictce-4.1.13 -- printproto/1.0.5 -- printproto/1.0.5-ictce-4.1.13 -- PROJ.4/4.8.0-ictce-5.5.0 -- pyproj/1.9.3-ictce-5.5.0-Python-2.7.6 -- pyTables/2.4.0-ictce-4.1.13-Python-2.7.3 -- pyTables/3.0.0-ictce-5.5.0-Python-2.7.6 -- Python/2.5.6-ictce-4.1.13-bare -- Python/2.7.3-ictce-4.1.13(default) -- Python/2.7.3-iomkl-4.6.13 -- Python/2.7.6-ictce-5.5.0 -- PyZMQ/14.0.1-ictce-5.5.0-Python-2.7.6 -- PyZMQ/2.2.0.1-ictce-4.1.13-Python-2.7.3 -- Qt/4.8.5-ictce-4.1.13 -- QuantumESPRESSO/5.0.2-ictce-5.5.0-hybrid -- QuantumESPRESSO/5.0.3-ictce-5.5.0-hybrid -- R/3.0.2-ictce-4.1.13 -- R/3.0.2-ictce-5.5.0 -- SAMtools/0.1.18-ictce-4.1.13 -- SAMtools/0.1.19-ictce-5.5.0 -- Schrodinger/2014-2_Linux-x86_64 -- 
SCOOP/0.6.0.final-ictce-4.1.13-Python-2.7.3 -- SCOTCH/6.0.0_esmumps-intel-2014b -- scripts/3.0.0 -- scripts/4.0.0 -- setuptools/1.4.2 -- Spark/1.0.0 -- SQLite/3.8.1-ictce-4.1.13 -- SQLite/3.8.4.1-ictce-4.1.13 -- SQLite/3.8.4.1-ictce-5.5.0 -- subversion/1.6.11-ictce-4.1.13 -- subversion/1.6.23-ictce-4.1.13 -- subversion/1.8.8-ictce-4.1.13 -- SURF/1.0-ictce-4.1.13-LINUXAMD64 -- Szip/2.1-ictce-4.1.13 -- Szip/2.1-ictce-5.5.0 -- Tachyon/0.5.0 -- Tcl/8.5.12-ictce-4.1.13 -- Tcl/8.6.1-ictce-4.1.13 -- Tcl/8.6.1-ictce-5.5.0 -- tcsh/6.18.01-ictce-4.1.13 -- tcsh/6.18.01-ictce-5.5.0 -- Tk/8.5.12-ictce-4.1.13 -- TopHat/2.0.10-ictce-5.5.0 -- TopHat/2.0.8-ictce-4.1.13 -- UDUNITS/2.1.24-ictce-4.1.13 -- UDUNITS/2.1.24-ictce-5.5.0 -- UNAFold/3.8-ictce-4.1.13 -- util-linux/2.24-ictce-5.5.0 -- uuid/1.6.2-ictce-4.1.13 -- Valgrind/3.8.1 -- VarScan/v2.3.6-ictce-4.1.13 -- VASP/5.2.11-ictce-4.1.13-mt -- VASP/5.3.2-ictce-4.1.13-vtst-3.0b-20121111-mt -- VASP/5.3.3-ictce-3.2.1.015.u4-mt -- VASP/5.3.3-ictce-4.1.13-mt -- VASP/5.3.3-ictce-4.1.13-mt-dftd3 -- VASP/5.3.3-ictce-4.1.13-mt-no-DNGXhalf -- VASP/5.3.3-ictce-4.1.13-vtst-3.0b-20121111-mt -- VASP/5.3.3-ictce-4.1.13-vtst-3.0c-20130327-mt -- VASP/5.3.3-ictce-5.5.0-mt -- VASP/5.3.3-ictce-6.2.5-mt -- VASP/5.3.5-intel-2014b-vtst-3.1-20140328-mt-vaspsol2.01 -- VASP/5.3.5-intel-2014b-vtst-3.1-20140328-mt-vaspsol2.01-gamma -- VMD/1.9.1-ictce-4.1.13 -- vsc-base/1.7.3 -- vsc-base/1.9.1 -- vsc-mympirun/3.2.3 -- vsc-mympirun/3.3.0 -- vsc-mympirun/3.4.2 -- VSC-tools/0.1.2-ictce-4.1.13-Python-2.7.3 -- VSC-tools/0.1.5 -- VSC-tools/0.1.5-ictce-4.1.13-scoop -- VSC-tools/1.7.1 -- VTK/6.0.0-ictce-4.1.13-Python-2.7.3 -- WIEN2k/14.1-intel-2014b -- WPS/3.5.1-ictce-4.1.13-dmpar -- WRF/3.4-ictce-5.5.0-dmpar -- WRF/3.5.1-ictce-4.1.13-dmpar -- XML-LibXML/2.0018-ictce-4.1.13-Perl-5.16.3 -- XML-Simple/2.20-ictce-4.1.13-Perl-5.16.3 -- xorg-macros/1.17 -- xorg-macros/1.17-ictce-4.1.13 -- YAXT/0.2.1-ictce-5.5.0 -- ZeroMQ/2.2.0-ictce-4.1.13 -- ZeroMQ/4.0.3-ictce-5.5.0 -- zlib/1.2.7-ictce-4.1.13 -- zlib/1.2.7-ictce-5.5.0 -- zlib/1.2.7-iomkl-4.6.13 -- zlib/1.2.8-ictce-5.5.0 - -" diff --git a/Other/file_0399_uniq.rst b/Other/file_0399_uniq.rst deleted file mode 100644 index f26f3157c..000000000 --- a/Other/file_0399_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -ncview gzip fastahack FLTK MrBayes 32834 0_esmumps 0018 075 Circos cdh4 uuid 20120124 20140328 006 gpfs 0b 0_60 2_Linux 367 DNGXhalf Tachyon VarScan hwloc 20110215 015 dftd3 numexpr WPS netcdf4 DB XML vaspsol2 libdrm mysql LibXML MariaDB MVAPICH2 361 0_17 Greenlet PyZMQ UNAFold 0_40 038 073 20131211 GD g2lib WRF NCL BEDTools 51 2010b APR Primer3 ZeroMQ HDF 027 ESMF 030 Schrodinger dmpar picard paycheck 2012b CDO demonstrates u4 I12 cutadapt 028 mt 0_15 003 39 pyproj ANTLR g09_D OFED 20130228 036 rdma ictce iomkl h5py 0_10 SURF Valgrind 069 g09_B BEAGLE ALADIN 20130408 ASE u3 g2clib 0c OpenPGM FLUENT 20121111 grib_api e2fsprogs YAXT libpciaccess 20130327 UDUNITS tcsh glproto 2515 printproto SCOOP libXp 023 36t1_op2bf1 LINUXAMD64 BLACS 0_20 122 Bowtie WIEN2k PAML Redhat BWA 032 NCO amd64 Mesa DBD scoop diff --git a/Other/file_0403.rst b/Other/file_0403.rst deleted file mode 100644 index 10bb16325..000000000 --- a/Other/file_0403.rst +++ /dev/null @@ -1,7 +0,0 @@ -VSC Echo newsletter -=================== - -VSC Echo is e-mailed three times a year to all subscribers. The -newsletter contains updates about our infrastructure, training programs -and other events and highlights some of the results obtained by users of -our clusters. 
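-
-Coming back to the TIER1 software list above: those packages are
-installed as environment modules, so a session typically starts by
-checking which versions exist and then loading one explicitly. The
-sketch below only illustrates that pattern with a version taken from
-the list (availability may have changed since 1 December 2014):
-
-::
-
-   module avail R                    # list every installed R version
-   module load R/3.0.2-ictce-5.5.0   # load one specific version
-   module list                       # show what is loaded in this session
-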
diff --git a/Other/file_0403_uniq.rst b/Other/file_0403_uniq.rst deleted file mode 100644 index fe530f449..000000000 --- a/Other/file_0403_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -mailed subscribers highlights diff --git a/Other/file_0407.rst b/Other/file_0407.rst deleted file mode 100644 index f914977e4..000000000 --- a/Other/file_0407.rst +++ /dev/null @@ -1,5 +0,0 @@ -Mission & vision -================ - -Upon the establishment of the VSC, the Flemish government assigned us a -number of tasks. diff --git a/Other/file_0407_uniq.rst b/Other/file_0407_uniq.rst deleted file mode 100644 index c358db002..000000000 --- a/Other/file_0407_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Mission diff --git a/Other/file_0409.rst b/Other/file_0409.rst deleted file mode 100644 index f7afba2ba..000000000 --- a/Other/file_0409.rst +++ /dev/null @@ -1,6 +0,0 @@ -The VSC in Flanders -=================== - -The VSC is a partnership of five Flemish university associations. The -infrastructure is spread over four locations: Antwerp, Brussels, Ghent -and Louvain. diff --git a/Other/file_0411.rst b/Other/file_0411.rst deleted file mode 100644 index a4b682e4f..000000000 --- a/Other/file_0411.rst +++ /dev/null @@ -1,7 +0,0 @@ -Our history -=========== - -Since its establishment in 2007, the VSC has evolved and grown -considerably. - -" diff --git a/Other/file_0411_uniq.rst b/Other/file_0411_uniq.rst deleted file mode 100644 index 192459ec8..000000000 --- a/Other/file_0411_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -grown diff --git a/Other/file_0413.rst b/Other/file_0413.rst deleted file mode 100644 index 40ae51e72..000000000 --- a/Other/file_0413.rst +++ /dev/null @@ -1,7 +0,0 @@ -Publications -============ - -In this section you’ll find all previous editions of our newsletter and -various other publications issued by the VSC. - -" diff --git a/Other/file_0415.rst b/Other/file_0415.rst deleted file mode 100644 index 62777c429..000000000 --- a/Other/file_0415.rst +++ /dev/null @@ -1,7 +0,0 @@ -Organisation structure -====================== - -In this section you can find more information about the structure of our -organisation and the various advisory committees. - -" diff --git a/Other/file_0417.rst b/Other/file_0417.rst deleted file mode 100644 index bcf9f424b..000000000 --- a/Other/file_0417.rst +++ /dev/null @@ -1,7 +0,0 @@ -Press material -============== - -Would you like to write about our services? On this page you will find -useful material such as our logo or recent press releases. - -" diff --git a/Other/file_0417_uniq.rst b/Other/file_0417_uniq.rst deleted file mode 100644 index c047b3ec3..000000000 --- a/Other/file_0417_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Would diff --git a/Other/file_0451.rst b/Other/file_0451.rst deleted file mode 100644 index da7cd5a2f..000000000 --- a/Other/file_0451.rst +++ /dev/null @@ -1,3 +0,0 @@ -Op 25 oktober 2012 organiseerde het VSC de plechtige ingebruikname van -de eerste Vlaamse tier 1 cluster aan de Universiteit Gent, waar de -cluster ook geplaatst werd. diff --git a/Other/file_0451_uniq.rst b/Other/file_0451_uniq.rst deleted file mode 100644 index 9910b74ae..000000000 --- a/Other/file_0451_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -waar geplaatst plechtige Op organiseerde diff --git a/Other/file_0455.rst b/Other/file_0455.rst deleted file mode 100644 index d5cc48589..000000000 --- a/Other/file_0455.rst +++ /dev/null @@ -1,2 +0,0 @@ -On 25 October 2012 the VSC inaugurated the first Flemish tier 1 compute -cluster. The cluster is housed in the data centre of Ghent University. 
diff --git a/Other/file_0459.rst b/Other/file_0459.rst deleted file mode 100644 index ef55909c2..000000000 --- a/Other/file_0459.rst +++ /dev/null @@ -1,22 +0,0 @@ -Programma / Programme - -- `Toespraak door professor Paul Van Cauwenberge, rector van de - Universiteit Gent <\%22/assets/83\%22>`__ -- `Film over onderzoek op de tier - 1 <\%22https://videolab.avnet.kuleuven.be/video/?id=d1d1ff47a891dd732b56a4b4e4c39be8&height=388&width=640&autostart=true\%22>`__ -- `Toespraak door professor Peter Marynen, voorzitter stuurgroep - VSC <\%22/assets/85\%22>`__ -- Een kort woordje door de heer Eric Van Bael, managing director HP - België -- `Toespraak door dr. ir. Kurt Lust, - VSC-coördinator <\%22/assets/87\%22>`__ (illustraties in - `PDF <\%22/assets/275\%22>`__) -- `Videoboodschap van minister Ingrid Lieten, viceminister-president - van de Vlaamse regering en Vlaams minister van Innovatie, - Overheidsinvesteringen, Media en - Armoedebestreiding <\%22https://videolab.avnet.kuleuven.be/video/?id=75270088b4163e233ce3adc66ad22f45&height=388&width=640&autostart=true\%22>`__ - -| Het programma werd gevolgd door de officiële ingebruikname van de - cluster in het datacentrum en een receptie. - -" diff --git a/Other/file_0459_uniq.rst b/Other/file_0459_uniq.rst deleted file mode 100644 index 9e073141a..000000000 --- a/Other/file_0459_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Programme d1d1ff47a891dd732b56a4b4e4c39be8 datacentrum illustraties Cauwenberge 83 Videoboodschap Armoedebestreiding autostart Marynen receptie Toespraak 75270088b4163e233ce3adc66ad22f45 Programma Innovatie 275 voorzitter woordje coördinator rector Eric height videolab Overheidsinvesteringen avnet stuurgroep officiële Bael gevolgd kort België programma heer 388 diff --git a/Other/file_0461.rst b/Other/file_0461.rst deleted file mode 100644 index f36fe221d..000000000 --- a/Other/file_0461.rst +++ /dev/null @@ -1,9 +0,0 @@ -Links -===== - -- `The invitation <\%22/events/tier1-launch-2012/invitation\%22>`__ (in - Dutch) -- `In the media <\%22/events/tier1-launch-2012/media\%22>`__ -- `Photo album <\%22/events/tier1-launch-2012/photo-album\%22>`__ - -" diff --git a/Other/file_0461_uniq.rst b/Other/file_0461_uniq.rst deleted file mode 100644 index ac926e746..000000000 --- a/Other/file_0461_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -album Photo photo diff --git a/Other/file_0465.rst b/Other/file_0465.rst deleted file mode 100644 index 48fd9c3f9..000000000 --- a/Other/file_0465.rst +++ /dev/null @@ -1,7 +0,0 @@ -We organize regular trainings on many HPC-related topics. The level -ranges fro introductory to advanced. We also actively promote some -courses organised elsewhere. The courses are open to participants at the -university associations. Many are also open to external users (the -limitations often caused by software licenses of the packages used -during hand-ons). For further info, you can contact the `course -coordinator Geert Jan Bex <\%22/en/about-vsc/contact\%22>`__. diff --git a/Other/file_0465_uniq.rst b/Other/file_0465_uniq.rst deleted file mode 100644 index 965b8665f..000000000 --- a/Other/file_0465_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -elsewhere fro ons diff --git a/Other/file_0467.rst b/Other/file_0467.rst deleted file mode 100644 index 5ed5e8326..000000000 --- a/Other/file_0467.rst +++ /dev/null @@ -1,5 +0,0 @@ -Previous events and training sessions -===================================== - -We keep links to our previous events and training sessions. Materials -used during the course can also be found on those pages. 
diff --git a/Other/file_0467_uniq.rst b/Other/file_0467_uniq.rst deleted file mode 100644 index ee2f17977..000000000 --- a/Other/file_0467_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Previous diff --git a/Other/file_0469.rst b/Other/file_0469.rst deleted file mode 100644 index 71d18d987..000000000 --- a/Other/file_0469.rst +++ /dev/null @@ -1,2 +0,0 @@ -More questions? `Contact the course coordinator or one of the other -coordinators <\%22/en/about-vsc/contact\%22>`__. diff --git a/Other/file_0471.rst b/Other/file_0471.rst deleted file mode 100644 index cc57b566d..000000000 --- a/Other/file_0471.rst +++ /dev/null @@ -1,351 +0,0 @@ -On you application form, you will be asked to indicate the scientific -domain of your application according to the NWO classification. Below we -present the list of domains and subdomains. You only need to give the -domain in your application, but the subdomains may make it easier to -determine the most suitable domain for your application. - -- Archaeology - - - Prehistory - - Antiquity and late antiquity - - Oriental archaeology - - Mediaeval archaeology - - Industrial archaeology - - Preservation and restoration, museums - - Methods and techniques - - Archeology, other - -- Area studies - - - Asian languages and literature - - Asian religions and philosophies - - Jewish studies - - Islamic studies - - Iranian and Armenian studies - - Central Asian studies - - Indian studies - - South-east Asian studies - - Sinology - - Japanese studies - - Area studies, other - -- Art and architecture - - - Pre-historic and pre-classical art - - Antiquity and late antiquity art - - Mediaeval art - - Renaissance and Baroque art - - Modern and contemporary art - - Oriental art and architecture - - Iconography - - History of architecture - - Urban studies - - Preservation and restoration of cultural heritage - - Museums and collections - - Art and architecture, other - -- Astronomy, astrophysics - - - Planetary science - - Astronomy, astrophysics, other - -- Biology - - - Microbiology - - Biogeography, taxonomy - - Animal ethology, animal psychology - - Ecology - - Botany - - Zoology - - Toxicology (plants, invertebrates) - - Biotechnology - - Biology, other - -- Business administration - - - Business administration - -- Chemistry - - - Analytical chemistry - - Macromolecular chemistry, polymer chemistry - - Organic chemistry - - Inorganic chemistry - - Physical chemistry - - Catalysis - - Theoretical chemistry, quantum chemistry - - Chemistry, other - -- Communication science - - - Communication science - -- Computer science - - - Computer systems, architectures, networks - - Software, algorithms, control systems - - Theoretical computer science - - Information systems, databases - - User interfaces, multimedia - - Artificial intelligence, expert systems - - Computer graphics - - Computer simulation, virtual reality - - Computer science, other - - Bioinformatics/biostatistics, biomathematics, biomechanics - -- Computers and the humanities - - - Software for humanities - - Textual and content analysis - - Textual and linguistic corpora - - Databases for humanities - - Hypertexts and multimedia - - Computers and the humanities, other - -- Cultural anthropology - - - Cultural anthropology - -- Demography - - - Demography - -- Development studies - - - Development studies - -- Earth sciences - - - Geochemistry, geophysics - - Paleontology, stratigraphy - - Geodynamics, sedimentation, tectonics, geomorphology - - Petrology, mineralogy, sedimentology - - Atmosphere sciences - - Hydrosphere 
sciences - - Geodesy, physical geography - - Earth sciences, other - -- Economy - - - Microeconomics - - Macroeconomics - - Econometrics - -- Environmental science - - - Environmental science - -- Gender studies - - - Gender studies - -- Geography / planning - - - Geography - - Planning - -- History - - - Pre-classical civilizations - - Antiquity and late antiquity history - - Mediaeval history - - Modern and contemporary history - - Social and economic history - - Cultural history - - Comparative political history - - Librarianschip, archive studies - - History, other - - History and philosophy of science and technology - - History of ancient science - - History of mediaeval science - - History of modern science - - History of contemporary science - - History of technology - - History of Science, other - - History of religions - - History of Christianity - - Theology and history of theology - -- History of science - - - History of ancient science - - History of mediaeval science - - History of modern science - - History of contemporary science - - History of technology - - Science museums and collections - - History of science, other - -- Language and literature - - - Pre-classical philology and literature - - Greek and Latin philology and literature - - Mediaeval and Neo-Latin languages and literature - - Mediaeval European languages and literature - - Modern European languages and literature - - Anglo-American literature - - Hispanic and Brazilian literature - - African languages and literature - - Comparative literature - - Language and literature, other - -- Law - - - Private law - - Constitutional and Administrative law - - International and European law - - Criminal law and Criminology - -- Life sciences - - - Bioinformatics/biostatistics, biomathematics, biomechanics - - Biophysics, clinical physics - - Biochemistry - - Genetics - - Histology, cell biology - - Anatomy, morphology - - Physiology - - Immunology, serology - - Life sciences, other - -- Life sciences and medicine - - - History and philosophy of the life sciences, ethics and evolution - biology - -- Linguistics - - - Phonetics and phonology - - Morphology, grammar and syntax - - Semantics and philosophy of language - - Linguistic typology and comparative linguistics - - Dialectology, linguistic geography, sociolinguistic - - Lexicon and lexicography - - Psycholinguistics and neurolinguistics - - Computational linguistics and philology - - Linguistic statistics - - Language teaching and acquisition - - Translation studies - - Linguistics, other - -- Medicine - - - Pathology, pathological anatomy - - Organs and organ systems - - Medical specialisms - - Health sciences - - Kinesiology - - Gerontology - - Nutrition - - Epidemiology - - Health Services Research - - Health law - - Health economics - - Medical sociology - - Medicine, other - -- Mathematics - - - Logic, set theory and arithmetic - - Algebra, group theory - - Functions, differential equations - - Fourier analysis, functional analysis - - Geometry, topology - - Probability theory, statistics - - Operations research - - Numerical analysis - - Mathematics, other - -- Music, theatre, performing arts and media - - - Ethnomusicology - - History of music and musical iconography - - Musicology - - Opera and dance - - Theatre studies and iconography - - Film, photography and audio-visual media - - Journalism and mass communications - - Media studies - - Music, theatre, performing arts and media, other - -- Pedagogics - - - Pedagogics - -- Philosophy - - - Metaphysics, 
theoretical philosophy - - Ethics, moral philosophy - - Logic and history of logic - - Epistemology, philosophy of science - - Aesthetics, philosophy of art - - Philosophy of language, semiotics - - History of ideas and intellectual history - - History of ancient and mediaeval philosophy - - History of modern and contemporary philosophy - - History of political and economic theory - - Philosophy, other - - History and philosophy of science and technology - -- Physics - - - Subatomic physics - - Nanophysics/technology - - Condensed matter and optical physics - - Processes in living systems - - Fusion physics - - Phenomenological physics - - Other physics - - Theoretical physics - -- Psychology - - - Clinical Psychology - - Biological and Medical Psychology - - Developmental Psychology - - Psychonomics and Cognitive Psychology - - Social and Organizational Psychology - - Psychometrics - -- Public administration and political science - - - Public administration - - Political science - -- Religious studies and theology - - - History of religions - - History of Christianity - - Theology and history of theology - - Bible studies - - Religious studies and theology, other - -- Science of Teaching - - - Science of Teaching - -- Science and technology - - - History and philosophy of science and technology - -- Sociology - - - Sociology - -- Technology - - - Materials technology - - Mechanical engineering - - Electrical engineering - - Civil engineering - - Chemical technology, process technology - - Geotechnics - - Technology assessment - - Nanotechnology - - Technology, other - -- Veterinary medicine - - - Veterinary medicine - -" diff --git a/Other/file_0471_uniq.rst b/Other/file_0471_uniq.rst deleted file mode 100644 index 2eef08e53..000000000 --- a/Other/file_0471_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Toxicology philosophies philosophy ethics comparative Nanophysics Hispanic Fusion Geodynamics Cognitive Econometrics living literature antiquity Atmosphere Criminology Clinical Dialectology Histology restoration civilizations philology subdomains Archaeology Anglo Botany Musicology Biochemistry Theatre Epidemiology biology music Music Museums Petrology historic Prehistory Bioinformatics databases Nutrition Philosophy Operations Urban Asian Planetary musical heritage anthropology Religious specialisms Biogeography Mediaeval Antiquity Art Geotechnics Sinology Geochemistry Geometry Probability audio Administrative Cultural Nanotechnology Christianity Paleontology political Macroeconomics multimedia Ecology geophysics humanities stratigraphy Demography Geodesy Hydrosphere Theology Psychonomics religions Organs Psycholinguistics Semantics Metaphysics theology Phonetics Subatomic Hypertexts serology biomathematics Psychometrics phonology Indian arts Neo Services photography sedimentation Japanese Health Jewish Linguistic Medicine sociology equations Journalism American Baroque Armenian Phenomenological semiotics Civil Oriental NWO Teaching Constitutional differential Opera mediaeval museums Area east Developmental sociolinguistic Librarianschip morphology Iconography anatomy cultural typology neurolinguistics Pedagogics geomorphology Islamic Textual law Iranian Microeconomics Ethics sedimentology Immunology grammar Psychology Aesthetics Gender Preservation moral Lexicon Analytical pathological Kinesiology Organizational tectonics Bible Pathology iconography mineralogy organ Veterinary corpora Biophysics Epistemology Databases Morphology Ethnomusicology Gerontology Criminal Chemical geography Translation 
Physiology Catalysis contemporary Social Methods ethology theatre Planning Animal psychology Comparative Sociology invertebrates animal dance medicine Logic biostatistics Zoology taxonomy Geography classification Greek African plants Brazilian Archeology Political Computers Processes archaeology Electrical Functions Macromolecular Renaissance lexicography diff --git a/Other/file_0475.rst b/Other/file_0475.rst deleted file mode 100644 index 70d620351..000000000 --- a/Other/file_0475.rst +++ /dev/null @@ -1,7 +0,0 @@ -- `Persmededeling van viceminister-president Ingrid Lieten, Vlaams - minister van innovatie, overheidsinvesteringen, media en - armoedebestrijding <\%22/events/tier1-launch-2012/press-announcement\%22>`__ -- `Bericht over de ingebruikname in \\"Het journaal\" op - één <\%22http://deredactie.be/cm/vrtnieuws/videozone/archief/programmas/journaal/2.24934/2.24935/1.1466027\%22>`__. - -" diff --git a/Other/file_0475_uniq.rst b/Other/file_0475_uniq.rst deleted file mode 100644 index 0a6bd39b3..000000000 --- a/Other/file_0475_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -archief journaal Persmededeling Bericht cm programmas 1466027 24935 armoedebestrijding deredactie overheidsinvesteringen innovatie videozone vrtnieuws 24934 diff --git a/Other/file_0477.rst b/Other/file_0477.rst deleted file mode 100644 index 37c90e40a..000000000 --- a/Other/file_0477.rst +++ /dev/null @@ -1,85 +0,0 @@ -|\\"\"| - -**PERSMEDEDELING VAN VICEMINISTER-PRESIDENT INGRID LIETEN -VLAAMS MINISTER VAN INNOVATIE, OVERHEIDSINVESTERINGEN, MEDIA EN -ARMOEDEBESTRIJDING** - -**Donderdag 25 oktober 2012** - -**Eerste TIER 1 Supercomputer wordt in gebruik genomen aan de UGent.** - -**Vandaag wordt aan de UGent de eerste Tier 1 supercomputer van het -Vlaams ComputerCentrum (VSC) plechtig in gebruik genomen. De -supercomputer is een initiatief van de Vlaamse overheid om aan -onderzoekers in Vlaanderen een bijzonder krachtige rekeninfrastructuur -ter beschikking te stellen om zo beter het hoofd te kunnen bieden aan de -maatschappelijke uitdagingen war we vandaag voor staan.“Het VSC moet -‘high performance computing’ toegankelijk maken voor kennisinstellingen -en bedrijven. Hierdoor kunnen doorbraken gerealiseerd worden in domeinen -als gezondheidszorg, chemie, en milieu”, zegt Ingrid Lieten.** - -| -| In de internationale onderzoekswereld zijn de supercomputers niet meer - weg te denken. Deze grote rekeninfrastructuren waren recent een - noodzakelijke schakel in de ontdekking van het Higgsdeeltje. Hun - rekencapaciteit laat toe steeds beter de werkelijkheid te simuleren. - Hierdoor is een nieuwe manier om onderzoek te verrichten ontstaan, met - belangrijke toepassingen voor onze economie en onze samenleving. - -“Dankzij supercomputers worden weersvoorspellingen over langere perioden -steeds betrouwbaarder, of kunnen klimaatveranderingen en natuurrampen -beter voorspeld worden. Auto’s worden veiliger omdat de constructeurs -het verloop van botsingen en de impact op passagiers in detail kunnen -simuleren. Ook aan de evolutie naar geneeskunde op maat van de patiënt, -kan de supercomputer fundamenteel bijdragen. De ontwikkeling van -geneesmiddelen gebeurt namelijk voor een groot deel via simulaties van -chemische reacties”, zegt Ingrid Lieten. - -Het Vlaamse Supercomputer Centrum staat open voor alle Vlaamse -onderzoekers, zowel uit de kennisinstellingen en strategische -onderzoekscentra als uit de bedrijven. 
Het levert opportuniteiten voor -universiteiten en industrie, maar ook voor overheden, mutualiteiten en -andere zorgorganisaties. De supercomputer moet een belangrijke bijdrage -leveren aan de zoektocht naar oplossingen voor de grote maatschappelijke -uitdagingen, en dit in de meest uiteenlopende domeinen. Zo kan de -supercomputer nieuwe geneesmiddelen ontwikkelen of demografische -evoluties voor humane en sociale wetenschappen analyseren, zoals de -vergrijzing en hoe daarmee om te gaan. Maar de supercomputer zal ook -ingezet worden om state of the art windmolens te ontwerpen en -ingewikkelde modellen te berekenen voor het voorspellen van -klimaatsveranderingen. - -Om de mogelijkheden van de supercomputer beter bekend te maken en het -gebruik te stimuleren in Vlaanderen, krijgt de Herculesstichting de -opdracht om het Vlaamse Supercomputer Centrum actief te promoten en -opleidingen te voorzien. De Herculesstichting is het Vlaamse agentschap -voor de financiering van middelzware en zware infrastructuur voor -fundamenteel en strategisch basisonderzoek. Zij zullen ervoor zorgen dat -associaties, kennisinstellingen, SOCs, het bedrijfsleven, enz. even vlot -toegang krijgen tot de TIER1 supercomputer. De huisvesting en technische -exploitatie blijven bij de associaties. - -“Met de ingebruikname van de TIER1 staat Vlaanderen nu echt op de kaart -in Europa wat betreft ‘high performance computing’. Vlaamse onderzoekers -krijgen de mogelijkheid om aan te sluiten bij belangrijke Europese -onderzoeksprojecten, zowel op het vlak van fundamenteel als van -toegepast onderzoek”, zegt Ingrid Lieten. - -Het Vlaams Supercomputer Centrum beheert zowel de zogenaamde ‘TIER2’ -computers, die lokaal bij de universiteiten staan, als de ‘TIER1’ -computer, die voor nog complexere toepassingen gebruikt wordt. - -Persinfo: -^^^^^^^^^ - -| Lot Wildemeersch, woordvoerster Ingrid Lieten -| 0477 810 176 \| lot.wildemeersch@vlaanderen.be -| www.ingridlieten.be - -|\\"\"| - -" - -.. |\\"\"| image:: \%22/assets/269\%22 -.. 
|\\"\"| image:: \%22/assets/271\%22 - diff --git a/Other/file_0477_uniq.rst b/Other/file_0477_uniq.rst deleted file mode 100644 index 67ed25b35..000000000 --- a/Other/file_0477_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -war werkelijkheid uitdagingen TIER VICEMINISTER bekend sluiten schakel complexere geneeskunde Deze economie strategische demografische technische namelijk zware andere ingewikkelde belangrijke Dankzij rekeninfrastructuren Hierdoor weg tot ontwerpen worden industrie nog bijdrage naar plechtig voorspellen nieuwe associaties beschikking overheid ingezet strategisch ingridlieten TIER2 reacties chemie grote MEDIA stimuleren daarmee onderzoeksprojecten ARMOEDEBESTRIJDING constructeurs onze Zo gebeurt deel enz ter gaan domeinen Vandaag simuleren bijzonder klimaatsveranderingen opdracht MINISTER wetenschappen zorgen vlak evoluties zorgorganisaties echt groot leveren 269 moet middelzware toepassingen levert woordvoerster evolutie INGRID zoals humane rekencapaciteit maken lokaal toe vergrijzing zowel bedrijven zullen LIETEN alle bijdragen meest ervoor basisonderzoek vlaanderen nu PERSMEDEDELING Wildemeersch zal samenleving promoten onderzoekers maatschappelijke voorspeld kaart laat gerealiseerd stellen vlot ontstaan windmolens toegepast 271 maat kennisinstellingen opportuniteiten Europa klimaatveranderingen agentschap verrichten internationale actief betreft kunnen langere gezondheidszorg toegankelijk Hun hoofd uiteenlopende hoe 0477 ontwikkeling mogelijkheden krachtige denken simulaties initiatief Donderdag botsingen INNOVATIE noodzakelijke ontwikkelen blijven Europese zogenaamde Zij huisvesting exploitatie OVERHEIDSINVESTERINGEN vandaag krijgt passagiers Met zoektocht Om bedrijfsleven mutualiteiten geneesmiddelen zegt overheden onderzoekswereld PRESIDENT opleidingen natuurrampen wildemeersch VLAAMS patiënt milieu VAN betrouwbaarder sociale genomen krijgen berekenen weersvoorspellingen fundamenteel Persinfo financiering onderzoekscentra Higgsdeeltje universiteiten veiliger wat bieden Lot oplossingen chemische doorbraken omdat steeds infrastructuur beheert perioden waren ontdekking verloop mogelijkheid ComputerCentrum manier modellen bij analyseren diff --git a/Other/file_0479.rst b/Other/file_0479.rst deleted file mode 100644 index 75b06fc94..000000000 --- a/Other/file_0479.rst +++ /dev/null @@ -1,6 +0,0 @@ -|\\"\"| - -" - -.. |\\"\"| image:: \%22/assets/273\%22 - diff --git a/Other/file_0481.rst b/Other/file_0481.rst deleted file mode 100644 index 1ac8d8c80..000000000 --- a/Other/file_0481.rst +++ /dev/null @@ -1,38 +0,0 @@ -+-----------+-----------------------------------------+ -| |\\"Logo| | March 23 2009 | -| | **Launch Flemish Supercomputer Centre** | -+-----------+-----------------------------------------+ - -The official launch took place on 23 March 2009 in the Promotiezaal of -the Universiteitshal of the K.U.Leuven, Naamsestraat 22, 3000 Leuven. - -- `Program <\%22/events/vsc-launch-2009/program\%22>`__, with links to - some of the presentations. 
-- `Invitation <\%22/events/vsc-launch-2009/invitation\%22>`__ - -The press mentioning the VSC launch event: - -- An article in `EnterTheGrid - - PrimeurWeekly <\%22http://primeurmagazine.com/\%22>`__, edition 23 - March 2009 -- `An article in the K.U.Leuven Campuskrant, edition 25 March - 2009 <\%22https://nieuws.kuleuven.be/nl/campuskrant/0809/07/het-vlaams-supercomputercentrum-kan-tellen\%22>`__ - (in Dutch) - -- An article on the web site of Knack (in Dutch) -- An article in the French edition of datanews, 24 maart 2009 (in - French) - -|\\"uitnodiging| - -The images at the top of this page are courtesy of `NUMECA -International <\%22https://www.numeca.com/home\%22>`__ and research -groups at Antwerp University, the Vrije Universiteit Brussel and the KU -Leuven. - -" - -.. |\\"Logo| image:: \%22/assets/277\%22 -.. |\\"uitnodiging| image:: \%22/assets/81\%22 - :class: \"image-inline\" - :target: \%22/events/vsc-launch-2009/figures\%22 diff --git a/Other/file_0481_uniq.rst b/Other/file_0481_uniq.rst deleted file mode 100644 index e39abb1f0..000000000 --- a/Other/file_0481_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -tellen EnterTheGrid Campuskrant primeurmagazine campuskrant 0809 vlaams Invitation PrimeurWeekly Knack supercomputercentrum maart mentioning datanews uitnodiging diff --git a/Other/file_0483.rst b/Other/file_0483.rst deleted file mode 100644 index f33224ff5..000000000 --- a/Other/file_0483.rst +++ /dev/null @@ -1,380 +0,0 @@ -+-----------------------------------------------------------------------+ -| The program contains links to some of the presentations. The | -| copyright for the presentations remains with the original authors and | -| not with the VSC. Reproducing parts of these presentations or using | -| them in other presentations can only be done with the agreement of | -| the author(s) of the presentation. | -+-----------------------------------------------------------------------+ - -14u15 - -Scientific program - -14u15 - -| Dr. ir. Kurt Lust (Vlaams Supercomputer Centrum). Presentation of the - VSC -| `Presentation (PDF) <\%22/assets/279\%22>`__ - -14u30 - -Prof. dr. Patrick Bultinck (Universiteit Gent). `In -silico <\%22#Bultinck\%22>`__\ `Chemistry: Quantum Chemistry and -Supercomputers - <\%22#Bultinck\%22>`__\ `Presentation (PDF) <\%22/assets/281\%22>`__ - -14u45 - -| Prof. dr. Wim Vanroose (Universiteit Antwerpen). `Large scale - calculations of molecules in laser fields <\%22#Vanroose\%22>`__ -| `Presentation (PDF) <\%22/assets/283\%22>`__ - -15u00 - -Prof. dr. Stefaan Tavernier (Vrije Universiteit Brussel). `Grid -applications in particle and astroparticle physics: The CMS and IceCube -projects - <\%22#Tavernier\%22>`__\ `Presentation (PDF) <\%22/assets/285\%22>`__ - -15u15 - -Prof. dr. Dirk Van den Poel (Universiteit Gent). `Research using HPC -capabilities in the field of economics/business & management science - <\%22#VandenPoel\%22>`__\ `Presentation (PDF) <\%22/assets/291\%22>`__ - -15u30 - -Dr. Kris Heylen (K.U.Leuven). `Supercomputing and Linguistics - <\%22#Heylen\%22>`__\ `Presentation (PDF) <\%22/assets/287\%22>`__ - -15u45 - -Dr. ir. Lies Geris (K.U.Leuven). `Modeling in biomechanics and -biomedical engineering - <\%22#Geris\%22>`__\ `Presentatie (PDF) <\%22/assets/289\%22>`__ - -16u00 - -Prof. dr. ir. Chris Lacor (Vrije Universiteit Brussel) and Prof. Dr. -Stefaan Poedts (K.U.Leuven). `Supercomputing in CFD and -MHD <\%22#LacorPoedts\%22>`__ - -16u15 - -Coffee break - -17u00 - -Academic session - -17u00 - -| Prof. dr. ir. 
Karen Maex, Chairman of the steering group of the Vlaams - Supercomputer Centrum -| `Presentatie (PDF) <\%22/assets/293\%22>`__ - -17u10 - -| Prof. dr. dr. Thomas Lippert, Director of the Institute for Advanced - Simulation and head of the Jülich Supercomputer Centre, - Forschungszentrum Jülich. European view on supercomputing and PRACE -| `Presentation (PDF) <\%22/assets/295\%22>`__ - -17u50 - -| Prof. dr. ir. Charles Hirsch, President of the HPC Working Group of - the Royal Flemish Academy of Belgium for Sciences and the Arts (KVAB) -| `Presentation (PDF) <\%22/assets/297\%22>`__ - -18u00 - -| Prof. dr. ir. Bart De Moor, President of the Board of Directors of the - Hercules Foundation -| `Presentation (PDF) <\%22/assets/299\%22>`__ - -18u10 - -Minister Patricia Ceysens, Flemish Minister for Economy, Enterprise, -Science, Innovation and Foreign Trade - -18u30 - -Reception - -Abstracts -========= - -Prof. dr. Patrick Bultinck. In silico Chemistry: Quantum Chemistry and Supercomputers --------------------------------------------------------------------------------------- - -*Universiteit Gent/Ghent University, Faculty of Sciences, Department of -Inorganic and Physical Chemistry* - -| Quantum Chemistry deals with the chemical application of quantum - mechanics to understand the nature of chemical substances, the reasons - for their (in)stability but also with finding ways to predict - properties of novel molecules prior to their synthesis. The - workhorse of quantum chemists is therefore no longer the laboratory but - the supercomputer. The reason for this is that quantum chemical - calculations are notoriously computationally demanding. -| These computational demands are illustrated by the scaling of - computational demands with respect to the size of molecules and the - level of theory applied. An example from Vibrational Circular - Dichroism calculations shows how supercomputers play a role in - stimulating innovation in chemistry. - -**Prof. dr. Patrick Bultinck** (° Blankenberge, 1971) is professor in -Quantum Chemistry, Computational and inorganic chemistry at Ghent -University, Faculty of Sciences, Department of Inorganic and Physical -Chemistry. He is the author of roughly 100 scientific publications and -performs research in quantum chemistry with emphasis on the study of -concepts such as the chemical bond, the atom in the molecule and -aromaticity. Another main topic is the use of computational (quantum) -chemistry in drug discovery. In 2002 and 2003 P. Bultinck received -grants from the European Center for SuperComputing in Catalunya for his -computationally demanding work in this field. - -Prof. dr. Wim Vanroose. Large scale calculations of molecules in laser fields -------------------------------------------------------------------------------- - -*Universiteit Antwerpen, Department of Mathematics and Computer Science* - -| Over the last decade, calculations with large scale computers have - caused a revolution -| in the understanding of the ultrafast dynamics that plays out at the - microscopic level. We give an overview of the international efforts to - advance the computational tools for this area of science. We also - discuss how the results of the calculations are guiding chemical - experiments. - -**Prof. dr. Wim Vanroose** is BOF-Research professor at the Department -of Mathematics and Computer Science, Universiteit Antwerpen. He is -involved in international efforts to build computational tools for -large scale simulations of ultrafast microscopic dynamics. 
Between 2001 -and 2004 he was a computational scientist at NERSC computing center at -the Berkeley Lab, Berkeley USA. - -Prof. dr. Stefaan Tavernier. Grid applications in particle and astroparticle physics: The CMS and IceCube projects ------------------------------------------------------------------------------------------------------------------- - -*Vrije Universiteit Brussel, Faculty of Science and Bio-engineering -Sciences, Department of Physics, Research Group of Elementary Particle -Physics* - -| The large hadron collider LHC at the international research centre - CERN near Geneva is due to go into operation at the end of 2009. It - will be the most powerful particle accelerator ever, and will give us - a first glimpse of the new phenomena that that are expected to occur - at these energies. However, the analysis of the data produced by the - experiments around this accelerator also represents an unprecedented - challenge. The VUB, UGent and UA participate in the CMS project. This - is one of the four major experiments to be performed at this - accelerator. One year of CMS operation will result in about 106 GBytes - of data. To cope with this flow of data, the CMS collaboration has - setup a GRID computing infrastructure with distributed computer - infrastructure scattered over the participating laboratories in 4 - continents. -| The IceCube Neutrino Detector is a neutrino observatory currently - under construction at the South Pole. IceCube is being constructed in - deep Antarctic ice by deploying thousands of optical sensors at depths - between 1,450 and 2,450 meters. The main goal of the experiment is to - detect very high energy neutrinos from the cosmos. The neutrinos are - not detected themselves. Instead, the rare instance of a collision - between a neutrino and an atom within the ice is used to deduce the - kinematical parameters of the incoming neutrino. The sources of those - neutrinos could be black holes, gamma ray bursts, or supernova - remnants. The data that IceCube will collect will also contribute to - our understanding of cosmic rays, supersymmetry, weakly interacting - massive particles (WIMPS), and other aspects of nuclear and particle - physics. The analysis of the data produced by ice cube requires - similar computing facilities as the analysis of the LHC data. - -**Prof. dr. Stefaan Tavernier** is professor of physics at the Vrije -Universiteit Brussel. He obtained a Ph.D. at the Faculté des sciences of -Orsay(France) in 1968, and a \\"Habilitation\" at de VUB in 1984. He -spent most of his scientific career working on research projects at the -international research centre CERN in Geneva. He has been project leader -for the CERN/NA25 project, and he presently is the spokesperson of the -CERN/Crystal Clear(RD18) collaboration. His main expertise is in -experimental methods for particle physics. He has over 160 publications -in peer reviewed international journals, made several contributions to -books and has several patents. He is also the author of a textbook on -experimental methods in nuclear and particle physics. - -Prof. dr. Dirk Van den Poel. 
Research using HPC capabilities in the field of economics/business & management science --------------------------------------------------------------------------------------------------------------------- - -*Universiteit Gent/Ghent University, Faculty of Economics and Business -Administration, Department of -Marketing,*\ `www.crm.UGent.be <\%22http://www.crm.UGent.be\%22>`__\ *and*\ `www.mma.UGent.be <\%22http://www.mma.UGent.be\%22>`__ - -HPC capabilities in the field of economics/business & management science -are most welcome when optimizing specific quantities (e.g. maximizing -sales, profits, service level, or minimizing costs) subject to certain -constraints. Optimal solutions for common problems are usually -computationally infeasible even with the biggest HPC installations, -therefore researchers develop heuristics or use techniques such as -genetic algorithms to come close to optimal solutions. One of the nice -properties they possess is that they are typically easily -parallelizable. In this talk, I will give several examples of typical -research questions, which need an HPC infrastructure to obtain good -solutions in a reasonable time window. These include the optimization of -marketing actions towards different marketing segments in the domain of -analytical CRM (customer relationship management) and solving -multiple-TSP (traveling salesman problem) under load balancing, -alternatively known as the vehicle routing problem under load balancing. - -**Prof. dr. Dirk Van den Poel** (° Merksem, 1969) is professor of -marketing modeling/analytical customer relationship management (aCRM) at -Ghent University. He obtained his MSc in management/business engineering -as well as PhD from K.U.Leuven. He heads the modeling cluster of the -Department of Marketing at Ghent University. He is program director of -the Master of Marketing Analysis, a one-year program in English about -predictive analytics in marketing. His main interest fields are aCRM, -data mining (genetic algorithms, neural networks, random forests, random -multinomial logit: RMNL), text mining, optimal marketing resource -allocation and operations research. - -Dr. Kris Heylen. Supercomputing and Linguistics ------------------------------------------------ - -*Katholieke Universiteit Leuven, Faculty of Arts, Research Unit -Quantitative Lexicology and Variational Linguistics (QLVL)* - -| Communicating through language is arguably one of the most complex - processes that the most powerful computer we know, the human brain, is - capable of. As a science, Linguistics aims to uncover the intricate - system of patterns and structures that make up human language and that - allow us to convey meaning through words and sentences. Although - linguists have been investigating and describing these structures for - ages, it is only recently that large amounts of electronic data and - the computational power to analyse them have become available and have - turned linguistics into a truly data-driven science. The primary data - for linguistic research is ordinary, everyday language use like - conversations or texts. These are collected in very large electronic - text collections, containing millions of words and these collections - are then mined for meaningful structures and patterns. With increasing - amounts of data and ever more advanced statistical algorithms, these - analyses are not longer feasible on individual servers but require the - computational power of interconnected super computers. 
-| In the presentation, I will briefly describe two case studies of - computationally heavy linguistic research. A first case study has to - do with the pre-processing of linguistic data. In order to find - patterns at different levels of abstraction, each word in the text - collection has to be enriched with information about its word class - (noun, adjective, verb,..) and syntactic function within the sentence - (subject, direct object, indirect object...). A piece of software, - called a parser, can add this information automatically. For our - research, we wanted to parse a text collection of 1.3 billion words, - i.e. all issues from a 7 year period of 6 Flemish daily newspapers, - representing a staggering 13 years of computing on an ordinary - computer. Thanks to the K.U.Leuven's supercomputer, this could be done - in just a few months. This data has now been made available to the - wider research community. - -**Dr. Kris Heylen** obtained a Master in Germanic Linguistics (2000) and -a Master in Artificial Intelligence (2001) from the K.U.Leuven. In 2005, -he was awarded a PhD in Linguistics at the K.U.leuven for his research -into the statistical modelling of German word order variation. Since -2006, he is a postdoctoral fellow at the Leuven research unit -Quantitative Lexicology and Variational Linguistics (QLVL), where he has -further pursued his research into statistical language modelling with a -focus on lexical patterns and word meaning in Dutch. - -Dr. ir. Lies Geris. Modeling in biomechanics and biomedical engineering ------------------------------------------------------------------------ - -*Katholieke Universiteit Leuven, Faculty of Engineering, Department of -Mechanical Engineering, Division of Biomechanics and Engineering Design* - -| The first part of the presentation will discuss the development and - applications of a mathematical model of fracture healing. The model - encompasses several key-aspects of the bone regeneration process, such - as the formation of blood vessels and the influence of mechanical - loading on the progress of healing. The model is applied to simulate - adverse healing conditions leading to a delayed or nonunion. Several - potential therapeutic approaches are tested in silico in order to find - the optimal treatment strategy. Going towards patient specific models - will require even more computer power than is the case for the generic - examples presented here. -| The second part of the presentation will give an overview of other - modeling work in the field of biomechanics and biomedical engineering, - taking place in Leuven and Flanders. The use of super computer - facilities is required to meet the demand for more detailed models and - patient specific modeling. - -Dr. ir. Liesbet Geris is a post-doctoral research fellow of the Research -Foundation Flanders (FWO) working at the Division of Biomechanics and -Engineering Design of the Katholieke Universiteit Leuven, Belgium. From -the K.U.Leuven, she received her MSc degree in Mechanical Engineering in -2002 and her PhD degree in Engineering in 2007, both summa cum laude. In -2007 she worked for 4 months as an academic visitor at the Centre of -Mathematical Biology of Oxford University. Her research interests -encompass the mathematical modeling of bone regeneration during fracture -healing, implant osseointegration and tissue engineering applications. -The phenomena described in the mathematical models reach from the tissue -level, over the cell level, down to the molecular level. 
She works in -close collaboration with experimental and clinical researchers from the -university hospitals Leuven, focusing on the development of mathematical -models of impaired healing situations and the in silico design of novel -treatment strategies. She is the author of 36 refereed journal and -proceedings articles, 5 chapters and reviews and 18 peer-reviewed -abstracts. She has received a number of awards, including the Student -Award (2006) of the European Society of Biomechanics (ESB) and the Young -Investigator Award (2008) of the International Federation for Medical -and Biological Engineering (IFMBE). - -Prof. dr. ir. Chris Lacor\ 1 en Prof. dr. Stefaan Poedts\ 2. Supercomputing in CFD and MHD ------------------------------------------------------------------------------------------- - -*1\ Vrije Universiteit Brussel, Faculty of Applied Sciences, Department -of Mechanical Engineering -2\ Katholieke Universiteit Leuven, Faculty of Sciences, Department of -Mathematics, Centre for Plasma Astrophysics* - -| CFD is an application field in which the available computing power is - typically always lagging behind. With the increase of computer - capacity CFD is looking towards more complex applications – because of - increased geometrical complication or multidisciplinary aspects e.g. - aeroacoustics, turbulent combustion, biological flows, etc – or more - refined models such as Large Eddy Simulation (LES) or Direct Numerical - Simulation (DNS). In this presentation some demanding application - fields of CFD will be highlighted, to illustrate this. -| Computational MHD has a broad range of applications. We will survey - some of the most CPU demanding applications in Flanders in the context - of examples of the joint initiatives combining expertise from multiple - disciplines, the VSC will hopefully lead to, such as the customised - applications built in the COOLFluiD and AMRVAC-CELESTE3D projects. - -**Prof. dr. ir. Chris Lacor** obtained a degree in Electromechanical -Engineering at VUB in 79 and his PhD in 86 at the same university. -Currently he is Head of the Research Group Fluid Mechanics and -Thermodynamics of the Faculty of Engineering at VUB. His main research -field is Computational Fluid Dynamics (CFD). He stayed at the NASA Ames -CFD Branch as an Ames associate in 87 and at EPFL IMF in 89 where he got -in contact with the CRAY supercomputers. In the early 90ies he was -co-organizer of supercomputing lectures for the VUB/ULB CRAY X-MP -computer. His current research focuses on Large Eddy Simulation, -high-order accurate schemes and efficient solvers in the context of a -variety of applications such as Computational Aeroacoustics, Turbulent -Combustion, Non-Deterministic methods and Biological Flows. He is author -of more than 100 articles in journals and on international conferences. -He is also a fellow of the Flemish Academic Centre for Science and the -Arts (VLAC). - -**Prof. dr. Stefaan Poedts** obtained his degree in Applied Mathematics -in 1984 at the K.U.Leuven. As 'research assistant' of the Belgian -National Fund for Scientific Research he obtained a PhD in Sciences -(Applied Mathematics) in 1988 at the same university. He spent two years -at the Max-Planck-Institut für Plasmaphysik in Garching bei München and -five years at the FOM-Instituut voor Plasmafysica 'Rijnhuizen'. In -October 1996 he returned to the K.U.Leuven as Research Associate of the -FWO-Vlaanderen at the Centre for Plasma Astrophysics (CPA) in the -Department of Mathematics. 
Since October 1, 2000 he is Academic Staff at -the K.U.Leuven, presently as Full Professor. His research interests -include solar astrophysics, space weather and controlled thermonuclear -fusion. He co-authored two books and 170 journal articles on these -subjects. He is president of the European Solar Physics Division (EPS & -EAS) and chairman of the Leuven Mathematical Modeling and Computational -Science Centre. He is also member of ESA’s Space Weather Working Team -and Solar System Working Group. diff --git a/Other/file_0483_uniq.rst b/Other/file_0483_uniq.rst deleted file mode 100644 index 96cafa3d8..000000000 --- a/Other/file_0483_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -ages summa geometrical 2002 EPS Particle NA25 Presentation investigating Crystal Merksem IMF Award biomedical 1969 bursts PhD 2004 90ies horse Geneva deduce Plasmaphysik black She Lippert Catalunya ice meaningful proceedings 1996 journal aCRM Optimal conversations neural LacorPoedts 15u30 deals RMNL Geris aims possess ultrafast supernova verb 2005 17u10 MSc leader microscopic Between traveling Presentatie career profits Tavernier encompasses Dichroism billion synthesis Federation collision FOM IceCube Eddy laboratories Faculté customised MHD observatory nonunion aromaticity SuperComputing 15u45 multidisciplinary laboratory segments noun regeneration kinematical Thermodynamics forests fusion logit Administration TSP Head 1988 intricate aeroacoustics Faculty challenge Detector Lexicology pursued scaling Vibrational stayed atom Communicating Weather Lies Thomas 18u30 salesman therapeutic decade neutrinos parallelizable novel CRM focus newspapers advance WIMPS weakly postdoctoral chemical lexical He Berkeley sensors Fund fracture 289 Chairman Young vessels schemes glimpse 15u15 hopefully visitor patents President guiding EPFL 279 Pole Aeroacoustics focusing Bio osseointegration staggering Associate Circular Applied implant thermonuclear scattered turbulent ordinary solvers Ph turned molecules cosmic minimizing Reproducing astroparticle degree 170 Katholieke Quantitative 299 Patrick finding refereed CELESTE3D demands Antarctic collider cube depths driven authored COOLFluiD Plasmafysica Marketing Blankenberge 291 Intelligence BOF Professor Over MP convey incoming cosmos Variational holes interests DNS silico 1971 continents molecule Kris spokesperson linguists VandenPoel supersymmetry variation encompass sales Habilitation healing multinomial mined Neutrino IFMBE bei Bultinck rays installations reviews heuristics biological impaired laude GRID adjective LES Student adverse 285 energies alternatively wanted vehicle neutrino super hadron QLVL ESA EAS Deterministic meters subjects sentence stimulating His syntactic drug 297 genetic analytical journals complication sentences substances CRAY Maex awards ray 1968 18u00 1984 AMRVAC bone Investigator cum Mathematical laser 18u10 associate lagging 283 RD18 deploying crm Her NERSC remnants tissue Biomechanics Flows Lab Heylen combustion bond chapters GBytes survey presently Ames Rijnhuizen assistant Division head Clear Oxford heads routing 293 refined spent doctoral Orsay quantities 295 customer Germanic VLAC ESB feasible Electromechanical revolution mma mechanical inorganic computationally organizer 17u50 Liesbet deep 287 constructed uncover Instituut Vanroose infeasible diff --git a/Other/file_0485.rst b/Other/file_0485.rst deleted file mode 100644 index 36193e805..000000000 --- a/Other/file_0485.rst +++ /dev/null @@ -1,80 +0,0 @@ 
-+-----------+-----------------------------------------+ -| |\\"Logo| | March 23 2009 | -| | **Launch Flemish Supercomputer Center** | -+-----------+-----------------------------------------+ - -The Flemish Supercomputer Centre (Vlaams Supercomputer Centrum) -cordially invites you to its official launch on **23 March 2009**. - -| -| **Supercomputing** is a crucial technology for the twenty-first - century. Fast and efficient compute power is needed for leading - scientific research, the industrial development and the - competitiveness of our industry. For this reason the Flemish - government and the five university associations have decided to set up - a Flemish Supercomputer Centre (VSC). This centre will combine the - clusters at the various Flemish universities in a single - high-performance network and expand it with a large cluster that can - withstand international comparison. The VSC will make available a - high-performance and user-friendly supercomputer infrastructure and - expertise to users from academic institutions and the industry. - -**Program** - -+-----------------------+-----------------------+-----------------------+ -| | 14.15 | Scientists from | -| | | various disciplines | -| | | tell about their | -| | | experiences with HPC | -| | | and grid computing | -+-----------------------+-----------------------+-----------------------+ -| | 16.15 | Coffee break | -+-----------------------+-----------------------+-----------------------+ -| | 17.00 | Official program, in | -| | | the presence of | -| | | minister Ceysens, | -| | | Flemish minister of | -| | | economy, enterprise, | -| | | science, innovation | -| | | and foreign trade of | -| | | Flanders. | -+-----------------------+-----------------------+-----------------------+ -| | 18.30 | Reception | -+-----------------------+-----------------------+-----------------------+ - -`A detailed program is available by clicking on this -link <\%22/events/vsc-launch-2009/program\%22>`__. All presentations -will be in English. - -**Location** - -Promotiezaal of the `Universiteitshal of the -K.U.Leuven, <\%22https://www.google.be/maps/search/Naamsestraat+22,+3000+Leuven/@50.805935,4.432983,583739m/data=!3m1!4b1?source=s_q&hl=nl&dg=dbrw&newdg=1\%22>`__ - -`Naamsestraat 22, 3000 -Leuven <\%22https://www.google.be/maps/search/Naamsestraat+22,+3000+Leuven/@50.805935,4.432983,583739m/data=!3m1!4b1?source=s_q&hl=nl&dg=dbrw&newdg=1\%22>`__. - -**Please register** by 16 March 2009 using this electronic form. - -**Plan and parking** - -| Parkings in the neighbourhood: - -- Parking garage Ladeuze, Mgr. Ladeuzeplein 20, Leuven. -- H. Hart parking, Naamsestraat 102, Leuven. - -The Universiteitshal is within walking distance of the train station of -Leuven. Bus 1 (Heverlee Boskant) and 2 (Heverlee Campus) stop nearby. - -|\\"invitation| - -The images at the top of this page are courtesy of `NUMECA -International <\%22https://www.numeca.com/home\%22>`__ and research -groups at Antwerp University, the Vrije Universiteit Brussel and the -K.U.Leuven. - -" - -.. |\\"Logo| image:: \%22/assets/277\%22 -.. 
|\\"invitation| image:: \%22/assets/81\%22 - :target: \%22/events/vsc-launch-2009/figures\%22 diff --git a/Other/file_0485_uniq.rst b/Other/file_0485_uniq.rst deleted file mode 100644 index f784bf283..000000000 --- a/Other/file_0485_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Mgr Bus hl Plan invites Ladeuze garage 4b1 walking Ladeuzeplein withstand newdg 3m1 s_q Boskant 805935 Fast dbrw 432983 cordially dg economy maps Hart 583739m 102 Scientists Parkings diff --git a/Other/file_0487.rst b/Other/file_0487.rst deleted file mode 100644 index 416c377cf..000000000 --- a/Other/file_0487.rst +++ /dev/null @@ -1,90 +0,0 @@ -+-----------------------------------+-----------------------------------+ -| |\\"NUMECA| | Free-surface simulation. | -| | | -| | Figure courtesy of `NUMECA | -| | International. <\%22https://www.n | -| | umeca.com/home\%22>`__ | -+-----------------------------------+-----------------------------------+ -| |\\"NUMECA| | Simulation of a turbine with | -| | coolring. | -| | | -| | Figure courtesy of `NUMECA | -| | International. <\%22https://www.n | -| | umeca.com/home\%22>`__ | -+-----------------------------------+-----------------------------------+ -| |\\"UA| | Purkinje cell model. | -| | | -| | Figure courtesy of Erik De | -| | Schutter, `Theoretical | -| | Neurobiology, <\%22http://www.tnb | -| | .ua.ac.be\%22>`__ | -| | Universiteit Antwerpen. | -+-----------------------------------+-----------------------------------+ -| |\\"UA| | This figure shows the electron | -| | density at adsorption of | -| | NO\ :sub:`2` at on graphene, | -| | computed using density functional | -| | theory (using the software | -| | package absint). | -| | | -| | Figure courtesy of Francois | -| | Peeters, `Condensed Matter Theory | -| | (CMT) | -| | group <\%22https://www.uantwerpen | -| | .be/en/research-groups/cmt/\%22>` | -| | __, | -| | Universiteit Antwerpen. | -+-----------------------------------+-----------------------------------+ -| |\\"UA| | Figure courtesy of Christine Van | -| | Broeckhoven, research group | -| | `Molecular | -| | Genetics <\%22http://www.molgen.v | -| | ib-ua.be/\%22>`__, | -| | Universiteit Antwerpen. | -+-----------------------------------+-----------------------------------+ -| |\\"CPA| | Figure courtesy of the `Centre | -| | for | -| | Plasma-Astrophysics <\%22https:// | -| | wis.kuleuven.be/CmPA\%22>`__, | -| | K.U.Leuven. | -+-----------------------------------+-----------------------------------+ -| |\\"| | Figure courtesy of the `Centre | -| | for | -| | Plasma-Astrophysics <\%22https:// | -| | wis.kuleuven.be/CmPA\%22>`__, | -| | K.U.Leuven. | -+-----------------------------------+-----------------------------------+ -| |\\"KULeuven| | Figure courtesy of the `Centre | -| | for | -| | Plasma-Astrophysics <\%22https:// | -| | wis.kuleuven.be/CmPA\%22>`__, | -| | K.U.Leuven. | -+-----------------------------------+-----------------------------------+ -| |\\"VUB| | Figure courtesy of the research | -| | group `Physics of Elementary | -| | Particles - | -| | IIHE <\%22http://w3.iihe.ac.be/\% | -| | 22>`__, | -| | Vrije Universiteit Brussel. | -+-----------------------------------+-----------------------------------+ - -" - -.. |\\"NUMECA| image:: \%22/assets/63\%22 - :target: \%22/assets/63\%22 -.. |\\"NUMECA| image:: \%22/assets/65\%22 - :target: \%22/assets/65\%22 -.. |\\"UA| image:: \%22/assets/949\%22 - :target: \%22/assets/949\%22 -.. |\\"UA| image:: \%22/assets/69\%22 - :target: \%22/assets/69\%22 -.. 
|\\"UA| image:: \%22/assets/71\%22 - :target: \%22/assets/71\%22 -.. |\\"CPA| image:: \%22/assets/73\%22 - :target: \%22/assets/73\%22 -.. |\\"| image:: \%22/assets/75\%22 - :target: \%22/assets/75\%22 -.. |\\"KULeuven| image:: \%22/assets/77\%22 - :target: \%22/assets/77\%22 -.. |\\"VUB| image:: \%22/assets/79\%22 - :target: \%22/assets/79\%22 diff --git a/Other/file_0487_uniq.rst b/Other/file_0487_uniq.rst deleted file mode 100644 index e5ee7a03a..000000000 --- a/Other/file_0487_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Neurobiology w3 wis Particles iihe Matter Figure graphene Purkinje IIHE molgen 77 Francois cmt Christine CmPA tnb KULeuven Broeckhoven Schutter coolring electron umeca absint 949 diff --git a/Other/file_0489.rst b/Other/file_0489.rst deleted file mode 100644 index 538b0c739..000000000 --- a/Other/file_0489.rst +++ /dev/null @@ -1,51 +0,0 @@ -+-----------------------------------+-----------------------------------+ -| De eerste jaarlijkse bijeenkomst | The first annual event was a | -| was een succes, met dank aan al | success, thanks to all the | -| de sprekers en deelnemers. We | presenters and participates. We | -| kijken al uit om de gebruikersdag | are already looking forward to | -| volgend jaar te herhalen en om | implementing some of the ideas | -| een aantal van de opgeworpen | generated and gathering again | -| ideeën te implementeren. | next year. | -| | | -| Hieronder vind je de presentaties | Below you can download the | -| van de VSC 2014 gebruikersdag: | presentations of the VSC 2014 | -| | userday: | -+-----------------------------------+-----------------------------------+ - -`State of the VSC <\%22/assets/315\%22>`__, Flemish Supercomputer (*Dane -Skow, HPC manager Hercules Foundation*) - -`Computational Neuroscience <\%22/assets/303\%22>`__ (*Michele -Giugliano, University of Antwerp*) - -`The value of HPC for Molecular Modeling -applications <\%22/assets/305\%22>`__ (*Veronique Van Speybroeck, Ghent -University*) - -`Parallel, grid-adaptive computations for solar atmosphere -dynamics <\%22/assets/307\%22>`__ (*Rony Keppens, University of Leuven*) - -`HPC for industrial wind energy applications <\%22/assets/309\%22>`__ -(*Rory Donnelly, 3E*) - -`The PRACE architecture and future prospects into Horizon -2020 <\%22/assets/311\%22>`__ (*Sergi Girona, PRACE*) - -`Towards A Pan-European Collaborative Data Infrastructure, European Data -Infrastructure <\%22/assets/313\%22>`__ (*Morris Riedel, EUDAT*) - -+-----------------------------------+-----------------------------------+ -| Zoals je hieronder kan zijn was | A nice number of participants | -| een mooi aantal deelnemers | attended the userday as you can | -| aanwezig. Wie wenst kan `meer | see below. Click to see `more | -| foto's <\%22/events/userday-2014/ | pictures <\%22/events/userday-201 | -| pictures\%22>`__ | 4/pictures\%22>`__. | -| vinden onder de link. | | -+-----------------------------------+-----------------------------------+ - -|\\"More| - -" - -.. |\\"More| image:: \%22/assets/301\%22 - :target: \%22/events/userday-2014/pictures\%22 diff --git a/Other/file_0491.rst b/Other/file_0491.rst deleted file mode 100644 index 6d141889f..000000000 --- a/Other/file_0491.rst +++ /dev/null @@ -1,86 +0,0 @@ -| `The International - Auditorium <\%22http://www.theinternationalauditorium.be/\%22>`__ -| Kon. 
Albert II laan 5, 1210 Brussels - -| The VSC User Day is the first annual meeting of current and - prospective users of the Vlaams Supercomputing Center (VSC) along with - staff and supporters of the VSC infrastructure. We will hold a series - of presentations describing the status and results of the past year as - well as afternoon sessions talking about plans and priorities for 2014 - and beyond. This is an excellent opportunity to become more familiar - with the VSC and it personnel, become involved in constructing plans - and priorities for new projects and initiatives, and network with - fellow HPC interested parties. -| The day ends with a networking hour at 17:00 allowing time for - informal discussions and followup from the day's activities. - -*Program* - -+-----------------------------------+-----------------------------------+ -| 9:30h | Welcome coffee | -+-----------------------------------+-----------------------------------+ -| 10:00h | Opening VSC USER DAY | -| | *Marc Luwel, Director Hercules | -| | Foundation* | -+-----------------------------------+-----------------------------------+ -| 10:10h | State of the VSC, Flemish | -| | Supercomputer | -| | *Dane Skow, HPC manager Hercules | -| | Foundation* | -+-----------------------------------+-----------------------------------+ -| 10:40h | Computational Neuroscience | -| | *Michele Giugliano, University of | -| | Antwerp* | -+-----------------------------------+-----------------------------------+ -| 11:00h | The value of HPC for Molecular | -| | Modeling applications | -| | *Veronique Van Speybroeck, Ghent* | -| | *University* | -+-----------------------------------+-----------------------------------+ -| 11:20h | Coffee Break and posters | -+-----------------------------------+-----------------------------------+ -| 11:50h | Parallel, grid-adaptive | -| | computations for solar atmosphere | -| | dynamics | -| | *Rony Keppens, University of | -| | Leuven* | -+-----------------------------------+-----------------------------------+ -| 12:10h | HPC for industrial wind energy | -| | applications | -| | *Rory Donnelly, 3E* | -+-----------------------------------+-----------------------------------+ -| 12:30h | Lunch | -+-----------------------------------+-----------------------------------+ -| 13:30h | The PRACE architecture and future | -| | prospects into Horizon 2020 | -| | *Sergi Girona, PRACE* | -+-----------------------------------+-----------------------------------+ -| 14:00h | EUDAT – Towards A Pan-European | -| | Collaborative Data | -| | Infrastructure, European Data | -| | Infrastructure | -| | *Morris Reidel, EUDAT* | -+-----------------------------------+-----------------------------------+ -| 14:20h | Breakout Sessions: | -| | 1 : Long term strategy / | -| | Outreach, information and | -| | Documentation | -| | 2 : Industry and Research / | -| | Visualization | -| | 3 : Training and support / | -| | Integration of Data and | -| | Computation | -+-----------------------------------+-----------------------------------+ -| 15:20h | Coffee break and posters | -+-----------------------------------+-----------------------------------+ -| 16:00h | Summary Presentations from | -| | Rapporteurs breakout sessions | -+-----------------------------------+-----------------------------------+ -| 16:30h | Closing remarks and Q&A | -| | *Bart De Moor, chair Hercules | -| | Foundation* | -+-----------------------------------+-----------------------------------+ -| 17:00h | Network reception | 
-+-----------------------------------+-----------------------------------+ - -" diff --git a/Other/file_0491_uniq.rst b/Other/file_0491_uniq.rst deleted file mode 100644 index 0e1a932de..000000000 --- a/Other/file_0491_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Kon USER Reidel Breakout Opening Break prospective Auditorium Outreach theinternationalauditorium Presentations 1210 40h 00h 30h DAY supporters afternoon Long 20h Rapporteurs followup Albert laan breakout 50h diff --git a/Other/file_0493.rst b/Other/file_0493.rst deleted file mode 100644 index 16960eb4f..000000000 --- a/Other/file_0493.rst +++ /dev/null @@ -1,8 +0,0 @@ -De eerste jaarlijkse bijeenkomst was een succes, met dank aan al de -sprekers en deelnemers. We kijken al uit om de gebruikersdag volgend -jaar te herhalen en om een aantal van de opgeworpen ideeën te -implementeren. - -Hieronder vind je de presentaties van de VSC 2014 gebruikersdag: - -" diff --git a/Other/file_0495.rst b/Other/file_0495.rst deleted file mode 100644 index 69384bc5d..000000000 --- a/Other/file_0495.rst +++ /dev/null @@ -1,5 +0,0 @@ -The first annual event was a success, thanks to all the presenters and -participates. We are already looking forward to implementing some of the -ideas generated and gathering again next year. - -Below you can download the presentations of the VSC 2014 userday: diff --git a/Other/file_0497.rst b/Other/file_0497.rst deleted file mode 100644 index 73f3b5371..000000000 --- a/Other/file_0497.rst +++ /dev/null @@ -1,18 +0,0 @@ -| `State of the VSC <\%22/assets/315\%22>`__, Flemish Supercomputer - (*Dane Skow, HPC manager Hercules Foundation*) -| `Computational Neuroscience <\%22/assets/303\%22>`__ (*Michele - Giugliano, University of Antwerp*) -| `The value of HPC for Molecular Modeling - applications <\%22/assets/305\%22>`__ (*Veronique Van Speybroeck, - Ghent University*) -| `Parallel, grid-adaptive computations for solar atmosphere - dynamics <\%22/assets/307\%22>`__ (*Rony Keppens, University of - Leuven*) -| `HPC for industrial wind energy applications <\%22/assets/309\%22>`__ - (*Rory Donnelly, 3E*) -| `The PRACE architecture and future prospects into Horizon - 2020 <\%22/assets/311\%22>`__ (*Sergi Girona, PRACE*) -| `Towards A Pan-European Collaborative Data Infrastructure, European - Data Infrastructure <\%22/assets/313\%22>`__\ (*Morris Riedel, EUDAT*) - -`Full program of the day <\%22/events/userday-2014/program\%22>`__ diff --git a/Other/file_0499.rst b/Other/file_0499.rst deleted file mode 100644 index 163fa06d5..000000000 --- a/Other/file_0499.rst +++ /dev/null @@ -1,3 +0,0 @@ -Zoals je hieronder kan zijn was een mooi aantal deelnemers aanwezig. Wie -wenst kan `meer foto's <\%22/events/userday-2014/pictures\%22>`__ vinden -onder de link. diff --git a/Other/file_0501.rst b/Other/file_0501.rst deleted file mode 100644 index 01f923b64..000000000 --- a/Other/file_0501.rst +++ /dev/null @@ -1,2 +0,0 @@ -A nice number of participants attended the userday as you can see below. -Click to see `more pictures <\%22/events/userday-2014/pictures\%22>`__. diff --git a/Other/file_0503.rst b/Other/file_0503.rst deleted file mode 100644 index 7e88d5eb7..000000000 --- a/Other/file_0503.rst +++ /dev/null @@ -1,6 +0,0 @@ -|\\"More| - -" - -.. 
|\\"More| image:: \%22/assets/301\%22
   :target: \%22/events/userday-2014/pictures\%22 diff --git a/Other/file_0505.rst b/Other/file_0505.rst deleted file mode 100644 index cb8b899c5..000000000 --- a/Other/file_0505.rst +++ /dev/null @@ -1,65 +0,0 @@
-Next-generation Supercomputing in Flanders: value creation for your business!
-==============================================================================
-
-**Tuesday 27 January 2015**
-
-| Technopolis Mechelen
-
-The first industry day was a success, thanks to all the presenters and
-participants. We especially would like to thank the minister for his
-presence. The success stories of European HPC centres showed how
-beneficial HPC can be for all kinds of industry. The testimonials of the
-Flemish firms who already are using large scale computing could only
-stress the importance of HPC. We will continue to work on the ideas
-generated at this meeting so that the VSC can strengthen its service to
-industry.
-
-|\\"All|
-
-Below you can download the presentations of the VSC 2015 industry day.
-`Pictures <\%22/events/industryday-2015/pictures\%22>`__ are published.
-
-| The importance of High Performance Computing for future science,
-  technology and economic growth
-| *Prof. Dr Bart De Moor, Herculesstichting*
-
-| `The 4 Forces of Change for Supercomputing <\%22/assets/319\%22>`__
-| *Cliff Brereton, director Hartree Centre (UK)*
-
-| `The virtual Engineering Centre and its multisector virtual
-  prototyping activities <\%22/assets/321\%22>`__
-| *Dr Gillian Murray, Director UK virtual engineering centre (UK)*
-
-| `How SMEs can benefit from
-  High-Performance-Computing <\%22/assets/323\%22>`__
-| *Dr Andreas Wierse, SICOS BW GmbH (D)*
-
-| `European HPC landscape - its initiatives towards supporting innovation
-  and its regional perspectives <\%22/assets/325\%22>`__
-| *Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B)
-  Belgian delegate to the Prace Council*
-
-| `Big data and Big Compute for Drug Discovery & Development of the
-  future <\%22/assets/327\%22>`__
-| *Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D
-  (B)*
-
-| `HPC key enabler for R&D innovation @ Bayer
-  CropScience <\%22/assets/329\%22>`__
-| *Filip Nollet, Computation Life Science Platform
-  Architect Bayer Cropscience (B)*
-
-| `How to become involved in the VSC: mechanisms for HPC industrial
-  newcomers <\%22/assets/331\%22>`__
-| *Dr Marc Luwel, Herculesstichting*
-| *Dr Ewald Pauwels, Ugent - Tier1*
-
-| Closing
-| *Philippe Muyters, Flemish Minister of Economics and Innovation*
-
-`Full program <\%22/events/industryday-2015/program\%22>`__
-
-"
-
-.. |\\"All| image:: \%22/assets/317\%22
-   :target: \%22/events/industryday-2015/pictures\%22
diff --git a/Other/file_0505_uniq.rst b/Other/file_0505_uniq.rst deleted file mode 100644 index e3ab4aced..000000000 --- a/Other/file_0505_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-stress Tuesday Januari 325 327 showed Tier1 thank 317 319 benificial 331 321 Pictures 329
diff --git a/Other/file_0507.rst b/Other/file_0507.rst deleted file mode 100644 index 09ad731ce..000000000 --- a/Other/file_0507.rst +++ /dev/null @@ -1,86 +0,0 @@
-The VSC Industry day is organised for the first time to create awareness
-about the potential of HPC for industry and to help firms overcome the
-hurdles to using supercomputing.
We are proud to present an exciting -program with success stories of European HPC centres that successfully -collaborate with industry and testimonials of some Flemish firms who -already have discovered the opportunities of large scale computing. The -day ends with a networking hour allowing time for informal discussions. - -**Program - Next-generation supercomputing in Flanders: value creation -for your business!** - -13.00-13.30 - -Registration - -13.30-13.35 - -| Welcome and introduction -| *Prof. Dr Colin Whitehouse (chair)* - -13.35-13.45 - -| The importance of High Performance Computing for future science, - technology and economic growth -| *Prof. Dr Bart De Moor, Herculesstichting* - -13.45-14.05 - -| The 4 Forces of Change for Supercomputing -| *Cliff Brereton, director Hartree Centre (UK)* - -14.05-14.25 - -| The virtual Engineering Centre and its multisector virtual prototyping - activities -| *Dr Gillian Murray, Director UK virtual engineering centre (UK)* - -14.25-14.45 - -| How SMEs can benefit from High-Performence-Computing -| *Dr Andreas Wierse, SICOS BW GmbH (D)* - -14.45-15.15 - -Coffeebreak - -15.15-15.35 - -| European HPC landscape- its initiatives towards supporting innovation - and its regional perspectives -| *Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B) - Belgian delegate to the Prace Council* - -15.35-15.55 - -| Big data and Big Compute for Drug Discovery & Development of the - future -| *Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D - (B)* - -15.55-16.15 - -| HPC key enabler for R&D innovation @ Bayer CropScience -| *Filip Nollet, Computation Life Science Platform - Architect Bayer Cropscience (B)* - -16.15-16.35 - -| How becoming involved in VSC: mechanisms for HPC industrial newcomers -| *Dr Marc Luwel, Herculesstichting* - -16.35-17.05 - -| Q&A discussion -| Panel/chair - -17.05-17.15 - -| Closing -| *Philippe Muyters, Flemish Minister of Economics and Innovation* - -17.15-18.15 - -Networking reception - -" diff --git a/Other/file_0507_uniq.rst b/Other/file_0507_uniq.rst deleted file mode 100644 index 5fcec5f09..000000000 --- a/Other/file_0507_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Coffeebreak diff --git a/Other/file_0509.rst b/Other/file_0509.rst deleted file mode 100644 index 93d385dda..000000000 --- a/Other/file_0509.rst +++ /dev/null @@ -1,2 +0,0 @@ -Below you find the complete list of Tier-1-projects since the start of -the regular project application programme. diff --git a/Other/file_0511.rst b/Other/file_0511.rst deleted file mode 100644 index 0b0da28cc..000000000 --- a/Other/file_0511.rst +++ /dev/null @@ -1,14 +0,0 @@ -User support -============ - -| KU Leuven/UHasselt: - `HPCinfo@kuleuven.be <\%22mailto:HPCinfo@kuleuven.be\%22>`__ -| Ghent University: `hpc@ugent.be <\%22mailto:hpc@ugent.be\%22>`__ -| Antwerp University: - `hpc@uantwerpen.be <\%22mailto:hpc@uantwerpen.be\%22>`__ -| VUB: `hpc@vub.be <\%22mailto:hpc@vub.be\%22>`__ - -`Please take a look at the information that you should provide with your -support question. <\%22/support/contact-support\%22>`__. - -" diff --git a/Other/file_0513.rst b/Other/file_0513.rst deleted file mode 100644 index fca7a4d71..000000000 --- a/Other/file_0513.rst +++ /dev/null @@ -1,34 +0,0 @@ -Tier-1 ------- - -- `The main tier-1 system is - muk <\%22http://hervsc.staging.statik.be/infrastructure/hardware/hardware-tier1-muk\%22>`__, - aimed at large parallel computing jobs that require a high-bandwidth - low-latency interconnect. 
Compute time on muk is only available upon
-  approval of a project. See the `pages on tier-1
-  allocation <\%22https://vscentrum.be/en/tier1-allocation\%22>`__.
-
-Experimental setup
-------------------
-
-- `There is a small GPU and Xeon Phi test
-  system <\%22http://hervsc.staging.statik.be/infrastructure/hardware/k20x-phi-hardware\%22>`__
-  which can be used by all VSC members on request (though a project
-  approval is not required at the moment). `The documentation for this
-  system is under
-  development <\%22http://hervsc.staging.statik.be/infrastructure/hardware/k20x-phi-hardware\%22>`__.
-
-Tier-2
-------
-
-Four university-level cluster groups are also embedded in the VSC and
-partly funded from VSC budgets:
-
-- `The UAntwerpen clusters (hopper and
-  turing) <\%22http://hervsc.staging.statik.be/infrastructure/hardware/hardware-ua\%22>`__
-- `The VUB cluster
-  (hydra) <\%22http://hervsc.staging.statik.be/infrastructure/hardware/hardware-vub\%22>`__
-- `The UGent local
-  clusters <\%22http://www.ugent.be/hpc/en/infrastructure/overzicht.htm\%22>`__
-- `The KU Leuven/UHasselt cluster (ThinKing and
-  Cerebro) <\%22http://hervsc.staging.statik.be/infrastructure/hardware/hardware-kul\%22>`__
diff --git a/Other/file_0513_uniq.rst b/Other/file_0513_uniq.rst deleted file mode 100644 index a37051e1b..000000000 --- a/Other/file_0513_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-overzicht
diff --git a/Other/file_0517.rst b/Other/file_0517.rst deleted file mode 100644 index 57ba76691..000000000 --- a/Other/file_0517.rst +++ /dev/null @@ -1,94 +0,0 @@
-The only short answer to this question is: maybe yes, maybe no. There
-are a number of things you need to figure out first.
-
-Will my application run on a supercomputer?
--------------------------------------------
-
-Maybe yes, maybe no. All VSC clusters - and the majority of large
-supercomputers in the world - run the Linux operating system. So it
-doesn't run Windows or OS X applications. Your application will have to
-support Linux, and the specific variants that we use on our clusters,
-but these are popular versions and rarely pose problems.
-
-Next, supercomputers are not really built to run interactive applications
-well. They are built to be shared by many people and to be used through
-command-line applications. There are several issues:
-
-- Since you share the machine with many users, you may have to wait a
-  while before your job might launch. This is organised through a
-  queueing system: you submit your job to a waiting line and a
-  scheduler decides who's next to run based on a large number of
-  parameters: job duration, number of processors needed, have you run a
-  lot of jobs recently, ... So by the time your job starts, you may have
-  gone home already.
-- You don't sit at a monitor attached to the supercomputer. Even though
-  supercomputers can also be used for visualisation, you'll still
-  need a suitable system on your desk to show the final image, and use
-  software that can send the drawing commands or images generated on
-  the supercomputer to your desktop.
-
-Will my application run faster on a supercomputer?
----------------------------------------------------
-
-You'll be disappointed to hear that the answer is actually quite often
-\\"no\". It is not uncommon that an application runs faster on a good
-workstation than on a supercomputer.
Supercomputers are optimised for
-large applications that access large chunks of memory (RAM or disk) in a
-particular way and are very parallel, i.e., they can keep a lot of
-processor cores busy. Their CPUs are optimised to do as much work in
-parallel as fast as possible, at the cost of lower performance for
-programs that don't exploit parallelism, while high-end workstation
-processors are more optimised for those programs that run sequentially
-or don't use a lot of parallelism and often have disk systems that can
-better deal with many small files.
-
-| That being said, even that doesn't have to be disastrous. Parallelism
-  can come in different forms. Sometimes you may have to run the same
-  program for a large number of test cases, and if the memory
-  consumption for a program for a simple test case is reasonable, you
-  may be able to run a lot of instances of that program simultaneously
-  on the same multi-core processor chip. This is called *capacity
-  computing*. And some applications are very well written and can
-  exploit all the forms of parallelism that a modern supercomputer
-  offers, provided you solve a large enough problem with that program.
-  This is called *capability computing*. We support both at the VSC.
-
-OK, my application can exploit a supercomputer. What's next?
--------------------------------------------------------------
-
-Have a look at our web page on `requesting access in the general
-section <\%22/en/access-and-infrastructure/requesting-access\%22>`__. It
-explains who can get access to the supercomputers. And as that text
-explains, you may need to install some additional software on the system
-from which you want to access the clusters (which for the majority of
-our users is their laptop or desktop computer).
-
-Basically, you communicate with the cluster through a protocol called
-\\"SSH\" which stands for \\"Secure SHell\". It encrypts all the
-information that is passed to the clusters, and also provides an
-authentication mechanism that is a bit safer than just sending
-passwords. The protocol can be used both to get a console on the system
-(a \\"command line interface\" like the one offered by CMD.EXE on Windows
-or the Terminal app on OS X) and to transfer files to the system. The
-absolute minimum you need before you can actually request your account
-is an SSH client to generate the key that will be used to talk to the
-clusters. For Windows, you can `use
-PuTTY <\%22/client/windows/keys-putty\%22>`__ (freely available, see
-`the link on our PuTTY page <\%22/client/windows/console-putty\%22>`__),
-on macOS/OS X you can `use the built-in OpenSSH
-client <\%22/client/macosx/keys-openssh\%22>`__, and Linux systems
-typically also `come with
-OpenSSH <\%22/client/linux/keys-openssh\%22>`__. But to actually use the
-clusters, you may want to install some additional software, such as a
-GUI sftp client to transfer files. We've got links to a lot of useful
-client software `on our web page on access and data
-transfer <\%22/cluster-doc/access-data-transfer\%22>`__.
-
-Yes, I'm ready
---------------
-
-Then follow the links on our `user portal page on requesting an
-account <\%22/cluster-doc/account-request\%22>`__. And don't forget
-we've got `training programs <\%22/en/education--training\%22>`__ to get
-you started and `technical support <\%22/support/contact-support\%22>`__
-for when you run into trouble.
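As a minimal sketch of the SSH workflow described above (the key file name
and the ``vscXXXXX`` account are placeholders, and the login host shown is
the HPC-UGent alias mentioned elsewhere in this documentation; use the
login node of your own institution):

::

   # Generate an SSH key pair with the OpenSSH client (Linux, macOS/OS X).
   # The private key stays on your own computer; the public key (.pub) is
   # typically what you register when requesting your VSC account.
   ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc

   # Once your account is active, open a console on a login node:
   ssh -i ~/.ssh/id_rsa_vsc vscXXXXX@login.hpc.ugent.be

   # Transfer files with scp (or use a GUI sftp client as mentioned above):
   scp -i ~/.ssh/id_rsa_vsc mydata.csv vscXXXXX@login.hpc.ugent.be: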
diff --git a/Other/file_0517_uniq.rst b/Other/file_0517_uniq.rst deleted file mode 100644 index 65c912ed1..000000000 --- a/Other/file_0517_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -hear drawing said Basically encrypts SHell sending disappointed Widows gone EXE disastrous disksystems chunks Will diff --git a/Other/file_0519.rst b/Other/file_0519.rst deleted file mode 100644 index 6f352e1a5..000000000 --- a/Other/file_0519.rst +++ /dev/null @@ -1,74 +0,0 @@ -Even if you don't do software development yourself (and software -development includes, e.g., developing R- or Matlab routines), working -on a supercomputer differs from using a PC, so some training is useful -for everybody. - -Linux ------ - -If you are familiar with a Linux or UNIX environment, there is no need -to take any course. Working with Linux on a supercomputer is not that -different from working with Linux on a PC, so you'll likely find your -way around quickly. - -Otherwise, there are several options to learn more about Linux - -- We have some `very basic pages on Linux - use <\%22/cluster-doc/using-linux\%22>`__ in the user documentation. - The `introductory - page <\%22/cluster-doc/using-linux/basic-linux-usage\%22>`__ contains - a number of links to courses on the web. -- Several institutions at the VSC also organise regular Linux - introductory courses. Check the \\"\ `Education and - Training <\%22/en/education--training\%22>`__\\" page on upcoming - courses. - -A basic HPC introduction ------------------------- - -Such a course at the VSC has a double goal: Learning more about HPC in -general but also about specific properties of the system at the VSC that -you need to know to run programs sufficiently efficiently. - -- Several institutions at the VSC organise periodic introductions to - their infrastructure or update sessions for users when new additions - are made to the infrastructure. Check the \\"\ `Education and - Training <\%22/en/education--training\%22>`__\\" page on upcoming - courses. -- We are working on a new introductory text that will soon be available - on this site. The text covers both the software that you need to - install on your own computer and working on the clusters, with - specific information for your institution. -- Or you can work your way through the documentation on the user - portal. This is probably sufficient if you are already familiar with - supercomputers. Of particular interest may be the page on our - `implementation of the module - system <\%22/cluster-doc/software/modules\%22>`__, the pages on - `running jobs <\%22/cluster-doc/running-jobs\%22>`__ (as there are - different job submission systems around, we use Torque/Moab), and the - `pages about the available - hardware <\%22/en/infrastructure/hardware\%22>`__ that also contain - information about the settings needed for each specific system. - -What next? ----------- - -We also run courses on many other aspects of supercomputing such as -program development or use of specific applications. As the other -courses, they are announced on our \\"\ `Education and -Training <\%22/en/education--training\%22>`__\\" page. Or you can read a -some good books, look at training programs offered at the European level -through PRACE or check some web courses. We maintain links to several of -those on the \\"\ `Tutorials and books <\%22/support/tut-book\%22>`__\\" -pages. - -Be aware that some tools that are useful to prototype applications on a -PC, may be very inefficient when run at a large scale on a -supercomputer. 
Matlab programs can often be accelerated through -compiling with the Matlab compiler. R isn't the most efficient tool -either. And Python is an excellent \\"glue language\" to get a number of -applications or optimised (non-Python) libraries to work together, but -shouldn't be used for entire applications that consume a lot of CPU time -either. We've got courses on several of those languages where you also -learn how to use them efficiently, and you'll also notice that on some -clusters there are restrictions on the use of these tools. diff --git a/Other/file_0519_uniq.rst b/Other/file_0519_uniq.rst deleted file mode 100644 index feb14edac..000000000 --- a/Other/file_0519_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -introductions Learning glue covers periodic inefficient diff --git a/Other/file_0521.rst b/Other/file_0521.rst deleted file mode 100644 index 348efe4b6..000000000 --- a/Other/file_0521.rst +++ /dev/null @@ -1,13 +0,0 @@ -Preparing to use the clusters ------------------------------ - -- `Account - request <\%22http://hervsc.staging.statik.be/cluster-doc/account-request\%22>`__ -- `Account - management <\%22http://hervsc.staging.statik.be/en/user-portal/account-management\%22>`__ -- `User - support <\%22http://hervsc.staging.statik.be/support/contact-support\%22>`__ -- `Tutorials and - books <\%22http://hervsc.staging.statik.be/support/tut-book\%22>`__ - -" diff --git a/Other/file_0523.rst b/Other/file_0523.rst deleted file mode 100644 index b39f16071..000000000 --- a/Other/file_0523.rst +++ /dev/null @@ -1,143 +0,0 @@ -© FWO - -Use of this website means that you acknowledge and accept the terms and -conditions below. - -Content disclaimer -~~~~~~~~~~~~~~~~~~ - -The FWO takes great care of its website and strives to ensure that all -the information provided is as complete, correct, understandable, -accurate and up-to-date as possible. In spite of all these efforts, the -FWO cannot guarantee that the information provided on this website is -always complete, correct, accurate or up-to-date. Where necessary, the -FWO reserves the right to change and update information at its own -discretion. The publication of official texts (legislation, Flemish -Parliament Acts, regulations, etc.) on this website has no official -character. - -If the information provided on or by this website is inaccurate then the -FWO will do everything possible to correct this as quickly as possible. -Should you notice any errors, please contact the website administrator: -`kurt.lust@uantwerpen.be <\%22mailto:kurt.lust@uantwerpen.be\%22>`__. -The FWO makes every effort to ensure that the website does not become -unavailable as a result of technical errors. However, the FWO cannot -guarantee the website's availability or the absence of other technical -problems. - -The FWO cannot be held liable for any direct or indirect damage arising -from the use of the website or from reliance on the information provided -on or through the website. This also applies without restriction to all -losses, delays or damage to your equipment, software or other data on -your computer system. - -Protection of personal data -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The FWO is committed to protecting your privacy. Most information is -available on or through the website without your having to provide any -personal data. In some cases, however, you may be asked to provide -certain personal details. 
In such cases, your data will be processed in -accordance with the Law of 8 December 1992 on the protection of privacy -with regard to the processing of personal data and with the Royal Decree -of 13 February 2001, which implements the Law of 8 December 1992 on the -protection of privacy with regard to the processing of personal data. - -The FWO provides the following guarantees in this context: - -- Your personal data will be collected and processed only in order to - provide you with the information or service you requested online. The - processing of your personal data is limited to the intended - objective. -- Your personal data will not be disclosed to third parties or used for - direct marketing purposes unless you have formally consented to this - by opting in. -- The FWO implements the best possible safety measures in order to - prevent abuse of your personal data by third parties. - -Providing personal information through the online registration module -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -By providing your personal information, you consent to this personal -information being recorded and processed by the FWO and its -representatives. The information you provided will be treated as -confidential. - -The FWO may also use your details to invite you to events or keep you -informed about activities of the VSC. - -Cookies -~~~~~~~ - -What are cookies and why do we use them? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cookies are small text or data files that a browser saves on your -computer when you visit a website. - -This web site saves cookies on your computer in order to improve the -website’s usability and also to analyse how we can improve our web -services. - -Which cookies does this website use? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -- Functional cookies: Cookies used as part of the website’s security. - These cookies are deleted shortly after your visit to our website - ends. -- Non-functional cookies - - - **Google Analytics: \_GA - **\ We monitor our website’s usage statistics with Google - Analytics, a system which loads a number of cookies whenever you - visit the website. These \_GA cookies allow us to check how many - visitors our website gets and also to collect certain demographic - details (e.g. country of origin). - -Can you block or delete cookies? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can prevent certain cookies being installed on your computer by -adjusting the settings in your browser’s options. In the ‘privacy’ -section, you can specify any cookies you wish to block. - -Cookies can also be deleted in your browser’s options via ‘delete -browsing history’. - -We use cookies to collect statistics which help us simplify and improve -your visit to our website. As a result, we advise you to allow your -browser to use cookies. - -Hyperlinks and references -~~~~~~~~~~~~~~~~~~~~~~~~~ - -The website contains hyperlinks which redirect you to the websites of -other institutions and organisations and to information sources managed -by third parties. The FWO has no technical control over these websites, -nor does it control their content, which is why it cannot offer any -guarantees as to the completeness or correctness of the content or -availability of these websites and information sources. - -The provision of hyperlinks to other websites does not imply that the -FWO endorses these external websites or their content. The links are -provided for information purposes and for your convenience. 
The FWO
-accepts no liability for any direct or indirect damage arising from the
-consultation or use of such external websites or their content.
-
-Copyright
-~~~~~~~~~
-
-All texts and illustrations included on this website, as well as its
-layout and functionality, are protected by copyright. The texts and
-illustrations may be printed out for private use; distribution is
-permitted only after receiving the authorisation of the FWO. You may
-quote from the website provided you always refer to the original
-source. Reproductions are permitted, provided you always refer to the
-original source, except for commercial purposes, in which case
-reproductions are never permitted, even when they include a reference to
-the source.
-
-Permission to reproduce copyrighted material applies only to the
-elements of this site for which the FWO is the copyright owner.
-Permission to reproduce material for which third parties hold the
-copyright must be obtained from the relevant copyright holder.
diff --git a/Other/file_0523_uniq.rst b/Other/file_0523_uniq.rst deleted file mode 100644 index c828425cb..000000000 --- a/Other/file_0523_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-liability disclosed authorisation Content Hyperlinks reproductions websites abuse Providing consented provision privacy confidential understandable liable Permission great references usability receiving cookies reserves losses legislation country recorded hyperlinks permitted 1992 _GA safety Acts Functional equipment measures visitors illustrations endorses correctness Protection arising Cookies copyrighted reliance regard opting Copyright discretion holder invite Parliament disclaimer unavailable representatives formally spite consent demographic adjusting strives Can Reproductions
diff --git a/Other/file_0529.rst b/Other/file_0529.rst deleted file mode 100644 index eb31ec19e..000000000 --- a/Other/file_0529.rst +++ /dev/null @@ -1,4 +0,0 @@
-relates to
-==========
-
-- `muk cluster Ugent <\%22http://statik.be\%22>`__
diff --git a/Other/file_0529_uniq.rst b/Other/file_0529_uniq.rst deleted file mode 100644 index 1dd75b448..000000000 --- a/Other/file_0529_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-relates
diff --git a/Other/file_0531.rst b/Other/file_0531.rst deleted file mode 100644 index 635cbb246..000000000 --- a/Other/file_0531.rst +++ /dev/null @@ -1,4 +0,0 @@
-Quick access
-============
-
-- `available server software <\%22http://statik.be\%22>`__
diff --git a/Other/file_0531_uniq.rst b/Other/file_0531_uniq.rst deleted file mode 100644 index d8b646cf8..000000000 --- a/Other/file_0531_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-Auick
diff --git a/Other/file_0533.rst b/Other/file_0533.rst deleted file mode 100644 index e3579f0cd..000000000 --- a/Other/file_0533.rst +++ /dev/null @@ -1,4 +0,0 @@
-New user
-========
-
-`first link <\%22http://statik.be\%22>`__
diff --git a/Other/file_0535.rst b/Other/file_0535.rst deleted file mode 100644 index 654baf69a..000000000 --- a/Other/file_0535.rst +++ /dev/null @@ -1,98 +0,0 @@
-The UGent compute infrastructure consists of several specialised
-clusters, jointly called Stevin. These clusters share a lot of their
-file space so that users can easily move between clusters depending on
-the specific job they have to run.
-
-Login nodes
------------
-
-The HPC-UGent Tier-2 login nodes can be accessed through the generic name
-``login.hpc.ugent.be``.
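For example, a login session through this generic alias could look as
follows (a sketch only; ``vscXXXXX`` is a placeholder for your own VSC
account name):

::

   # Connect through the generic alias; you will land on one of the
   # available login nodes.
   ssh vscXXXXX@login.hpc.ugent.be

   # Show which login node you were actually assigned to.
   hostname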
- -Connecting to a specific login node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -There are multiple login nodes (gligar01-gligar03) and you will be -connected with one of them when using the generic alias -``login.hpc.ugent.be``. (You can check which one you are connected to -using the ``hostname`` command). - -If you need to connect with as specific login node, use either -``gligar01.ugent.be``, ``gligar02.ugent.be``, or -``gligar03.ugent.be``\ . - -Compute clusters ----------------- - -+-----------+-----------+-----------+-----------+-----------+-----------+ -| | #nodes | CPU | Mem/node | Diskspace | Network | -| | | | | /node | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 128 | 2 x | 64 GB | 400 GB | FDR | -| :: delcat | | 8-core | | | InfiniBan | -| ty | | Intel | | | d | -| :name: | | E5-2670 | | | | -| delcatty | | (Sandy | | | | -| | | Bridge @ | | | | -| | | 2.6 GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 16 | 2 x | 512 GB | 3x 400 GB | FDR | -| :: phanpy | | 12-core | | (SSD, | InfiniBan | -| :name: | | Intel | | striped) | d | -| phanpy | | E5-2680v3 | | | | -| | | (Haswell- | | | | -| | | EP | | | | -| | | @ 2.5 | | | | -| | | GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 196 | 2 x | 64 GB | 500 GB | FDR-10 | -| :: golett | | 12-core | | | InfiniBan | -| :name: | | Intel | | | d | -| golett | | E5-2680v3 | | | | -| | | (Haswell- | | | | -| | | EP | | | | -| | | @ 2.5 | | | | -| | | GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 128 | 2 x | 128 GB | 1 TB | FDR | -| :: swalot | | 10-core | | | InfiniBan | -| :name: | | Intel | | | d | -| swalot | | E5-2660v3 | | | | -| | | (Haswell- | | | | -| | | EP | | | | -| | | @ 2.6 | | | | -| | | GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 72 | 2 x | 192 GB | 1 TB | EDR | -| :: skitty | | 18-core | | 240 GB | InfiniBan | -| :name: | | Intel | | SSD | d | -| skitty | | Xeon Gold | | | | -| | | 6140 | | | | -| | | (Skylake | | | | -| | | @ 2.3 | | | | -| | | GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ -| .. rubric | 96 | 2 x | 96 GB | 1 TB | 10 GbE | -| :: victin | | 18-core | | 240 GB | | -| i | | Intel | | SSD | | -| :name: | | Xeon Gold | | | | -| victini | | 6140 | | | | -| | | (Skylake | | | | -| | | @ 2.3 | | | | -| | | GHz) | | | | -+-----------+-----------+-----------+-----------+-----------+-----------+ - -| Only clusters with an InfiniBand interconnect network are suited for - multi-node jobs. Other clusters are for single-node usage only. - -Shared storage --------------- - -General Parallel File System (GPFS) partitions: - -- ``$VSC_HOME``: 35 TB -- ``$VSC_DATA``: 702 TB -- ``$VSC_SCRATCH``: 1 PB (equivalent to ``$VSC_SCRATCH_KYUKON``) -- ``$VSC_SCRATCH_PHANPY``: 35TB (very fast, powered by SSDs) - -" diff --git a/Other/file_0535_uniq.rst b/Other/file_0535_uniq.rst deleted file mode 100644 index 56e30a86f..000000000 --- a/Other/file_0535_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -35TB InfiniBan GbE victin VSC_SCRATCH_KYUKON 3x 196 victini ty skitty delcat EP VSC_SCRATCH_PHANPY PB 2660v3 jointly 702 striped gligar03 Mem powered diff --git a/Other/file_0539.rst b/Other/file_0539.rst deleted file mode 100644 index dbc7d67aa..000000000 --- a/Other/file_0539.rst +++ /dev/null @@ -1,2 +0,0 @@ -Need technical support? 
`Contact your local help -desk <\%22/support/contact-support\%22>`__. diff --git a/Other/file_0547.rst b/Other/file_0547.rst deleted file mode 100644 index 0d30003d8..000000000 --- a/Other/file_0547.rst +++ /dev/null @@ -1,5 +0,0 @@ -Remark -====== - -Logging in on the site does not yet function (expected around July 10), -so you cannot yet see the overview of systems below. diff --git a/Other/file_0547_uniq.rst b/Other/file_0547_uniq.rst deleted file mode 100644 index 6bf327929..000000000 --- a/Other/file_0547_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Remark diff --git a/Other/file_0555.rst b/Other/file_0555.rst deleted file mode 100644 index 165905465..000000000 --- a/Other/file_0555.rst +++ /dev/null @@ -1,4 +0,0 @@ -The documentation page you visited applies to the KU Leuven Tier-2 setup -(THinking and Cerebro). For more information about these systems, visit -`the hardware description -page <\%22/infrastructure/hardware/hardware-kul\%22>`__. diff --git a/Other/file_0555_uniq.rst b/Other/file_0555_uniq.rst deleted file mode 100644 index af5b07613..000000000 --- a/Other/file_0555_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -THinking diff --git a/Other/file_0557.rst b/Other/file_0557.rst deleted file mode 100644 index 16d1132e2..000000000 --- a/Other/file_0557.rst +++ /dev/null @@ -1,3 +0,0 @@ -The documentation page you visited applies to the UGent Tier-2 setup -Stevin. For more information about the setup, visit `the UGent hardware -page <\%22/infrastructure/hardware/hardware-ugent\%22>`__. diff --git a/Other/file_0559.rst b/Other/file_0559.rst deleted file mode 100644 index 8fd9f7708..000000000 --- a/Other/file_0559.rst +++ /dev/null @@ -1,6 +0,0 @@ -The documentation page you visited applies to the UAntwerp Hopper -cluster. Some or all of it may also apply to the older Turing cluster, -but that system does not fully implement the VSC environment module -structure. For more details about the specifics of those systems, visit -`the UAntwerp hardware -page <\%22/infrastructure/hardware/hardware-ua\%22>`__. diff --git a/Other/file_0561.rst b/Other/file_0561.rst deleted file mode 100644 index 2829537ce..000000000 --- a/Other/file_0561.rst +++ /dev/null @@ -1,3 +0,0 @@ -The documentation page you visited applies to the VUB Hydra cluster. For -more specifics about the Hydra cluster, check `the VUB hardware -page <\%22/infrastructure/hardware/hardware-vub\%22>`__. diff --git a/Other/file_0563.rst b/Other/file_0563.rst deleted file mode 100644 index bd16624a0..000000000 --- a/Other/file_0563.rst +++ /dev/null @@ -1,4 +0,0 @@ -The documentation page you visited applies to the Tier-1 cluster Muk -installed at UGent. Check `the Muk hardware -description <\%22/infrastructure/hardware/hardware-tier1-muk\%22>`__ for -more specifics about this system. diff --git a/Other/file_0565.rst b/Other/file_0565.rst deleted file mode 100644 index 62d13c255..000000000 --- a/Other/file_0565.rst +++ /dev/null @@ -1,3 +0,0 @@ -The documentation page you visited applies to client systems running a -recent version of Microsoft Windows (though you may need to install some -additional software as specified on the page). diff --git a/Other/file_0567.rst b/Other/file_0567.rst deleted file mode 100644 index 3601a273f..000000000 --- a/Other/file_0567.rst +++ /dev/null @@ -1,4 +0,0 @@ -| The documentation page you visited applies to client systems with a - recent version of Microsoft Windows and a UNIX-compatibility layer. 
We - tested using the `freely available Cygwin - system <\%22https://www.cygwin.com/\%22>`__ maintained by Red Hat. diff --git a/Other/file_0569.rst b/Other/file_0569.rst deleted file mode 100644 index 9d9d90443..000000000 --- a/Other/file_0569.rst +++ /dev/null @@ -1,3 +0,0 @@ -The documentation page you visited applies to Apple Mac client systems -with a recent version of OS X installed, though you may need some -additional software as specified on the page. diff --git a/Other/file_0571.rst b/Other/file_0571.rst deleted file mode 100644 index 28422be5b..000000000 --- a/Other/file_0571.rst +++ /dev/null @@ -1,3 +0,0 @@ -The documentation page you visited applies to client systems running a -popular Linux distribution (though some of the packages you need may not -be installed by default). diff --git a/Other/file_0577.rst b/Other/file_0577.rst deleted file mode 100644 index 75e112cbe..000000000 --- a/Other/file_0577.rst +++ /dev/null @@ -1,6 +0,0 @@ -Eerste aanpak - -- Titel Systems -- Call-to-Action Label system name, Node Docu Target, Type - [label.cat.link] -- Style->Container: block--related diff --git a/Other/file_0577_uniq.rst b/Other/file_0577_uniq.rst deleted file mode 100644 index a1ff53a5c..000000000 --- a/Other/file_0577_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Label Call Container Target Titel Style Docu label diff --git a/Other/file_0579.rst b/Other/file_0579.rst deleted file mode 100644 index 694aea7ea..000000000 --- a/Other/file_0579.rst +++ /dev/null @@ -1,7 +0,0 @@ -Tweede aanpak - -- Text widget met enkel de titel Systems -- Asset widget, selecteer uit System Icons/Regular -- Maar eigenlijk zou het mooier zijn als dit allemaal in één widget zou - zitten, de icoontjes tegen elkaar zouden staan of in ieder geval - dichter, en misschien in een grijs blok of zo? diff --git a/Other/file_0579_uniq.rst b/Other/file_0579_uniq.rst deleted file mode 100644 index 62de35ee3..000000000 --- a/Other/file_0579_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -eigenlijk tegen grijs Icons elkaar selecteer ieder geval zou enkel mooier dichter icoontjes titel Tweede zouden allemaal zitten diff --git a/Other/file_0585.rst b/Other/file_0585.rst deleted file mode 100644 index cb99fe99a..000000000 --- a/Other/file_0585.rst +++ /dev/null @@ -1,18 +0,0 @@ -The page you're trying to visit, does not exist or has been moved to a -different URL. - -Some common causes of this problem are: - -#. Maybe you arrived at the page through a search engine. Search engines - - including the one implemented on our own pages, which uses the - Google index - don't immediately know that a page has been moved or - does not exist anymore and continue to show old pages in the search - results. -#. Maybe you followed a link on another site. The site owner may not yet - have noticed that our web site has changed. -#. Or maybe you followed a link in a somewhat older e-mail or document. - It is entirely normal that links age and don't work anymore after - some time. -#. Or maybe you found a bug on our web site? Even though we check - regularly for dead links, errors can occur. You can contact us at - `Kurt.Lust@uantwerpen.be <\%22mailto:Kurt.Lust@uantwerpen.be\%22>`__. 
diff --git a/Other/file_0585_uniq.rst b/Other/file_0585_uniq.rst deleted file mode 100644 index 05d7e50d6..000000000 --- a/Other/file_0585_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-moved engines regularly arrived noticed dead
diff --git a/Other/file_0605.rst b/Other/file_0605.rst deleted file mode 100644 index 60d7523c8..000000000 --- a/Other/file_0605.rst +++ /dev/null @@ -1,20 +0,0 @@
-You're looking for:
-
-- Contact information or other information about our organisation? Go
-  to the \\"\ `About the VSC <\%22/en/about-vsc\%22>`__\\" section
-- A high-level overview of our services and infrastructure? Go to the
-  \\"\ `Access and
-  Infrastructure <\%22/en/access-and-infrastructure\%22>`__\\" section
-- Information on our training programs? Go to the \\"\ `Education and
-  Training <\%22/en/education-and-trainings\%22>`__\\" section.
-- Examples of concrete projects on our largest cluster? We've got a
-  `list of projects <\%22/en/projects\%22>`__ available.
-- Some examples of HPC being used in actual applications? We've got
-  some use cases in
-  `academics <\%22https://www.vscentrum.be/en/academics-use-cases\%22>`__,
-  `industry <\%22/en/industry-use-cases\%22>`__ and `some texts
-  targeted to a broader, less technical
-  audience <\%22/en/use-cases-for-a-broad-audience\%22>`__ on our web
-  site (some cases only in Dutch).
-
-"
diff --git a/Other/file_0605_uniq.rst b/Other/file_0605_uniq.rst deleted file mode 100644 index fce2f0853..000000000 --- a/Other/file_0605_uniq.rst +++ /dev/null @@ -1 +0,0 @@
-concrete audience broader
diff --git a/Other/file_0611.rst b/Other/file_0611.rst deleted file mode 100644 index 47576cdcb..000000000 --- a/Other/file_0611.rst +++ /dev/null @@ -1,106 +0,0 @@
-Inline code with ``<code>`` ... ``</code>``
---------------------------------------------
-
-We used inline code on the old vscentrum.be to clearly mark system
-commands etc. in text.
-
-- For this we used the ``<code>`` tag.
-- There was support in the editor to set this tag.
-- It doesn't seem to work properly in the current editor. If the
-  fragment of code contains a slash (/), the closing tag gets omitted.
-
-Example: At UAntwerpen you'll have to use ``module avail MATLAB`` and
-``module load MATLAB/2014a`` respectively.
-
-However, if you enter both ``<code>`` blocks on the same line in an HTML
-file, the editor doesn't process them well: ``module avail MATLAB`` and
-module load MATLAB.
-
-Test: ``test 1`` and ``test 2``.
-
-Code in ``<pre>`` ... ``</pre>``
    ----------------------- - -This was used a lot on the old vscentrum.be site to display fragments of -code or display output in a console windows. - -- Readability of fragments is definitely better if a fixed width font - is used as this is necessary to get a correct alignment. -- Formatting is important: Line breaks should be respected. The problem - with the CMS seems to be that the editor respects the line breaks, - the database also stores them as I can edit the code again, but the - CMS removes them when generating the final HTML-page as I don't see - the line breaks again in the resulting HTML-code that is loaded into - the browser. - -:: - - #!/bin/bash -l - #PBS -l nodes=1:nehalem - #PBS -l mem=4gb - module load matlab - cd $PBS_O_WORKDIR - ... - -And this is a test with a very long block: - -:: - - ln03-1003: monitor -h - ### usage: monitor [-d ] [-l ] [-f ] - # [-h] [-v] | -p - # Monitor can be used to sample resource utilization of a process - # over time. Monitor can sample a running process if the latter's PID - # is specified using the -p option, or it can start a command with - # parameters passed as arguments. When one has to specify flags for - # the command to run, '--' can be used to delimit monitor's options, e.g., - # monitor -delta 5 -- matlab -nojvm -nodisplay calc.m - # Resources that can be monitored are memory and CPU utilization, as - # well as file sizes. - # The sampling resolution is determined by delta, i.e., monitor samples - # every seconds. - # -d : sampling interval, specified in - # seconds, or as [[dd:]hh:]mm:ss - # -l : file to store sampling information; if omitted, - # monitor information is printed on stderr - # -n : retain only the last lines in the log file, - # note that this option only makes sense when combined - # with -l, and that the log file lines will not be sorted - # according to time - # -f : comma-separated list of file names that are monitored - # for size; if a file doesn't exist at a given time, the - # entry will be 'N/A' - # -v : give verbose feedback - # -h : print this help message and exit - # : actual command to run, followed by whatever - # parameters needed - # -p : process ID to monitor - # - # Exit status: * 65 for any montor related error - # * exit status of otherwise - # Note: if the exit code 65 conflicts with those of the - # command to run, it can be customized by setting the - # environment variables 'MONITOR_EXIT_ERROR' to any value - # between 1 and 255 (0 is not prohibited, but this is probably. - # not what you want). - -The style in the editor ------------------------------- - -In fact, the Code style of the editor works on a paragraph basis and all -it does is put the paragraph between
``<pre>`` and ``</pre>``
    -tags, so the -problem mentioned above remains. The next text was edited in WYSIWIG -mode: - -:: - - #!/bin/bash -l - #PBS -l nodes=4:ivybridge - ... - -Another editor bug is that it isn't possible to switch back to regular -text mode at the end of a code fragment if that is at the end of the -text widget: The whole block is converted back to regular text instead -and the formatting is no longer shown. - -" diff --git a/Other/file_0611_uniq.rst b/Other/file_0611_uniq.rst deleted file mode 100644 index baccd0398..000000000 --- a/Other/file_0611_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -1003 diff --git a/Other/file_0613.rst b/Other/file_0613.rst deleted file mode 100644 index 50aff04e0..000000000 --- a/Other/file_0613.rst +++ /dev/null @@ -1,93 +0,0 @@ -After the successful first VSC users day in January 2014, the second -users day took place at the University of Antwerp on Monday November 30 -2015. The users committee organized the day. The plenary sessions were -given by an external and an internal speaker. Moreover, 4 workshops were -organized: - -- VSC for starters (UAntwerp) - *Upscaling to HPC. We will present you some best practices, give - advice when using HPC clusters and show some pros and cons when - moving from desktop to HPC. Even more experienced researchers may be - interested. - * -- Specialized Tier-2 infrastructure: shared memory (KU Leuven) - *Shared memory: when distributing data is not/no longer an option. We - will introduce you to the available shared memory infrastructure by - means of some use cases.* -- Big data (UGent) - *We present Hanythingondemand (hod), a solution for running Hadoop, - Spark and other services on HPC clusters.* -- Cloud and grid access (VUB) - *The availability of grid and cloud resources is not so well known in - VSC. We will introduce you to the cloud environment, explain how it - can be useful to you and show how you can gain access.* - -Some impressions... -------------------- - -|\\"More| - -More pictures can be found in the `image -bank <\%22https://beeldbank.uantwerpen.be/index.php/collection/zoom/755b4dad5ae6430b802a0716fffc2a76453de3161bae495ca95ce3225ca091feece20a1ef2154b52b4ff0986228f87e6/1#1\%22>`__. 
- -Program -------- - -+-----------------------------------+-----------------------------------+ -| 09:50 | Welcome – Bart De Moor (chair | -| | Hercules Foundation) | -+-----------------------------------+-----------------------------------+ -| 10:00 | Invited lecture: `High | -| | performance and multiscale | -| | computing: blood, clay, stars and | -| | humans <\%22/events/userday-2015/ | -| | lectures#DerekGroen\%22>`__ | -| | – Derek Groen (Centre for | -| | Computational Science, University | -| | College London) [`slides - PDF | -| | 8.3MB <\%22/assets/1057\%22>`__] | -+-----------------------------------+-----------------------------------+ -| 11:00 | Coffee | -+-----------------------------------+-----------------------------------+ -| 11:30 | Workshops / hands-on sessions | -| | (parallel sessions) | -+-----------------------------------+-----------------------------------+ -| 12:45 | Lunch | -+-----------------------------------+-----------------------------------+ -| 14:00 | Lecture internal speaker: | -| | `High-performance computing of | -| | wind farms in the atmospheric | -| | boundary | -| | layer <\%22/events/userday-2015/l | -| | ectures#JohanMeyers\%22>`__ | -| | – Johan Meyers (Department of | -| | Mechanical Engineering, KU | -| | Leuven) [`slides - PDF | -| | 9.9MB <\%22/assets/1055\%22>`__] | -+-----------------------------------+-----------------------------------+ -| 14:30 | ‘1 minute’ poster presentations | -+-----------------------------------+-----------------------------------+ -| 14:45 | Workshops / hands-on sessions | -| | (parallel sessions) | -+-----------------------------------+-----------------------------------+ -| 16:15 | Coffee & `Poster | -| | session <\%22/events/userday-2015 | -| | /posters\%22>`__ | -+-----------------------------------+-----------------------------------+ -| 17:00 | Closing – Dirk Roose | -| | (representative of users | -| | committee) | -+-----------------------------------+-----------------------------------+ -| 17:10 | Drink | -+-----------------------------------+-----------------------------------+ - -Titles and abstracts --------------------- - -An overview of the posters that will be presented during the poster -session is `available here <\%22/events/userday-2015/posters\%22>`__. - -" - -.. |\\"More| image:: \%22https://www.vscentrum.be/assets/1025\%22 - :target: \%22https://beeldbank.uantwerpen.be/index.php/collection/zoom/755b4dad5ae6430b802a0716fffc2a76453de3161bae495ca95ce3225ca091feece20a1ef2154b52b4ff0986228f87e6/1#1\%22 diff --git a/Other/file_0613_uniq.rst b/Other/file_0613_uniq.rst deleted file mode 100644 index eaae88f28..000000000 --- a/Other/file_0613_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Titles zoom 1025 hands speaker Lecture pros impressions beeldbank distributing Specialized JohanMeyers Workshops representative lecture 755b4dad5ae6430b802a0716fffc2a76453de3161bae495ca95ce3225ca091feece20a1ef2154b52b4ff0986228f87e6 plenary cons Hanythingondemand Invited hod Drink moving ectures Upscaling Monday DerekGroen diff --git a/Other/file_0627.rst b/Other/file_0627.rst deleted file mode 100644 index 9bd89c306..000000000 --- a/Other/file_0627.rst +++ /dev/null @@ -1,71 +0,0 @@ -#. Studying gene family evolution on the VSC Tier-2 and Tier-1 - infrastructure - *Setareh Tasdighian et al. (VIB/UGent)* -#. Genomic profiling of murine carcinoma models - *B. Boeckx, M. Olvedy, D. Nasar, D. Smeets, M. Moisse, M. Dewerchin, - C. Marine, T. Voet, C. Blanpain,D. Lambrechts (VIB/KU Leuven)* -#. 
Modeling nucleophilic aromatic substitution reactions with ab initio - molecular dynamics - *Samuel L. Moors et al. (VUB)* -#. Climate modeling on the Flemish Supercomputers - *Fabien Chatterjee, Alexandra Gossart, Hendrik Wouters, Irina - Gorodetskaya, Matthias Demuzere, Niels Souverijns, Sajjad Saeed, Sam - Vanden Broucke, Wim Thiery, Nicole van Lipzig (KU Leuven)* -#. Simulating the evolution of large grain structures using the - phase-field approach - *Hamed Ravash, Liesbeth Vanherpe, Nele Moelans (KU Leuven)* -#. Multi-component multi-phase field model combined with tensorial - decomposition - *Inge Bellemans, Kim Verbeken, Nico Vervliet, Nele Moelans, Lieven De - Lathauwer (UGent, KU Leuven)* -#. First-principle modeling of planetary magnetospheres: Mercury and the - Earth - *Jorge Amaya, Giovanni Lapenta (KU Leuven)* -#. Modeling the interaction of the Earth with the solar wind: the Earth - magnetopause - *Emanuele Cazzola, Giovanni Lapenta (KU Leuven)* -#. Jupiter's magnetosphere - *Emmanuel Chané, Joachim Saur, Stefaan Poedts (KU Leuven)* -#. High-performance computing of wind-farm boundary layers - *Dries Allaerts, Johan Meyers (KU Leuven)* -#. Large-eddy simulation study of Horns Rev windfarm in variable mean - wind directions - *Wim Munters, Charles Meneveau, Johan Meyers (KU Leuven)* -#. Modeling defects in the light absorbing layers of photovoltaic cells - *Rolando Saniz, Jonas Bekaert, Bart Partoens, Dirk Lamoen - (UAntwerpen)* -#. Molecular Spectroscopy : Where Theory Meets Experiment - *Carl Mensch, Evelien Van de Vondel, Yannick Geboes, Pilar Rodríguez - Ortega, Liene De Beuckeleer, Sam Jacobs, Jonathan Bogaerts, Filip - Desmet, Christian Johannessen, Wouter Herrebout (UAntwerpen)* -#. On the added value of complex stock trading rules in short-term - equity price direction prediction - *Dirk Van den Poel, Céline Chesterman, Maxim Koppen, Michel Ballings - (UGent University, University of Tennessee at Knoxville)* -#. First-principles study of the surface and adsorption properties of - α-Cr\ :sub:`2`\ O\ :sub:`3` - *Samira Dabaghmanesh, Erik C. Neyts, Bart Partoens (UAntwerpen)* -#. The surface chemistry of plasma-generated radicals on reduced - titanium dioxide - *Stijn Huygh, Erik C. Neyts (UAntwerpen)* -#. The High Throughput Approach to Computational Materials Design - *Michael Sluydts, Titus Crepain, Karel Dumon, Veronique Van - Speybroeck, Stefaan Cottenier (UGent)* -#. Distributed Memory Reduction in Presence of Process Desynchronization - *Petar Marendic, Jan Lemeire, Peter Schelkens (Vrije Universiteit - Brussel, iMinds)* -#. Visualization @HPC KU Leuven - *Mag Selwa (KU Leuven)* -#. Multi-fluid modeling of the solar chromosphere - *Yana G. Maneva, Alejandro Alvarez-Laguna, Andrea Lani, Stefaan - Poedts (KU Leuven)* -#. Molecular dynamics in momentum space - *Filippo Morini (UHasselt)* -#. Predicting sound in planetary inner cores using quantum physics - *Jan Jaeken, Attilio Rivoldini, Tim van Hoolst, Veronique Van - Speybroeck, Michel Waroquier, Stefaan Rottener (UGent)* -#. 
High Fidelity CFD Simulations on Tier-1 - *Leonidas Siozos-Rousoulis, Nikolaos Stergiannis, Nathan Ricks, - Ghader Ghorbaniasl, Chris Lacor (VUB)* - -" diff --git a/Other/file_0627_uniq.rst b/Other/file_0627_uniq.rst deleted file mode 100644 index 7515c0f9c..000000000 --- a/Other/file_0627_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Liene Horns Hoolst Reduction nucleophilic Inge magnetospheres photovoltaic Kim substitution Saeed absorbing Mag equity titanium Jorge reactions Fidelity Sam Rolando Smeets Rottener mean Irina magnetopause Chesterman Olvedy inner Michael Lieven Ricks Simulating Emmanuel Filippo Schelkens Evelien Vervliet Céline Attilio Joachim Karel Boeckx Gorodetskaya Waroquier Liesbeth Alvarez Morini Meneveau murine Crepain Bellemans Stergiannis Beuckeleer Saur Koppen Sluydts Predicting Setareh Voet Rivoldini dioxide plasma Throughput Yannick Jacobs stock Hamed Maxim Huygh Ravash Ghorbaniasl windfarm Lemeire Pilar Tim Samira Dumon Desynchronization Laguna Moisse radicals Fabien Ghader Selwa Rousoulis Petar magnetosphere Christian Leonidas Lambrechts tensorial Jaeken Genomic planetary Emanuele Carl Cottenier Chané Jonas Ballings Cazzola Ortega trading Siozos Nikolaos Broucke Giovanni gene Michel Blanpain Nathan Dewerchin Sajjad Alejandro Amaya Studying Chatterjee carcinoma Nasar Jonathan chromosphere Tennessee Approach Yana Presence Vanden Lani Marendic sound Verbeken Titus Rodríguez Lathauwer Knoxville Nico Vanherpe Lapenta Cr α Dabaghmanesh Rev Tasdighian Stijn Maneva Mercury diff --git a/Other/file_0629.rst b/Other/file_0629.rst deleted file mode 100644 index e032f30b7..000000000 --- a/Other/file_0629.rst +++ /dev/null @@ -1,71 +0,0 @@ -High performance and multiscale computing: blood, clay, stars and humans ------------------------------------------------------------------------- - -*Speaker: Derek Groen (Centre for Computational Science, University -College London)* - -Multiscale simulations are becoming essential across many scientific -disciplines. The concept of having multiple models form a single -scientific simulation, with each model operating on its own space and -time scale, gives rise to a range of new challenges and trade-offs. In -this talk, I will present my experiences with high performance and -multiscale computing. I have used supercomputers for modelling -clay-polymer nanocomposites [1], blood flow in the human brain [2], and -dark matter structure formation in the early universe [3]. I will -highlight some of the scientific advances we made, and present the -technologies we developed and used to enable simulations across -supercomputers (using multiple models where convenient). In addition, I -will reflect on the non-negligible aspect of human effort and policy -constraints, and share my experiences in enabling challenging -calculations, and speeding up more straightforward ones. - -[`slides - PDF 8.3MB <\%22/assets/1057\%22>`__] - -References -~~~~~~~~~~ - -#. James L. Suter, Derek Groen, and Peter V. Coveney. `Chemically - Specific Multiscale Modeling of Clay–Polymer Nanocomposites Reveals - Intercalation Dynamics, Tactoid Self-Assembly and Emergent Materials - Properties <\%22http://dx.doi.org/10.1002/adma.201403361\%22>`__. - Advanced Materials, volume 27, issue 6, pages 966–984. (DOI: - `10.1002/adma.201403361 <\%22http://dx.doi.org/10.1002/adma.201403361\%22>`__) -#. Mohamed A. Itani, Ulf D. Schiller, Sebastian Schmieschek, James - Hetherington, Miguel O. Bernabeu, Hoskote Chandrashekar, Fergus - Robertson, Peter V. Coveney, and Derek Groen. 
`An automated - multiscale ensemble simulation approach for vascular blood - flow <\%22http://dx.doi.org/10.1016/j.jocs.2015.04.008\%22>`__. - Journal of Computational Science, volume 9, pages 150-155. (DOI: - `10.1016/j.jocs.2015.04.008 <\%22http://dx.doi.org/10.1016/j.jocs.2015.04.008\%22>`__) -#. Derek Groen and Simon Portugies Zwart. `From Thread to - Transcontinental Computer: Disturbing Lessons in Distributed - Supercomputing <\%22http://dx.doi.org/10.1109/eScience.2015.81>`__. - 2015 IEEE 11th International Conference on e-Science, IEEE, pages - 565-571. (DOI: - `10.1109/eScience.2015.81 <\%22http://dx.doi.org/10.1109/eScience.2015.81>`__) - -High-performance computing of wind farms in the atmospheric boundary layer --------------------------------------------------------------------------- - -*Speaker: Johan Meyers (Department of Mechanical Engineering, KU -Leuven)* - -The aerodynamics of large wind farms are governed by the interaction -between turbine wakes, and by the interaction of the wind farm as a -whole with the atmospheric boundary layer. The deceleration of the flow -in the farm that is induced by this interaction, leads to an efficiency -loss for wind turbines downstream in the farm that can amount up to 40% -and more. Research into a better understanding of wind-farm boundary -layer interaction is an important driver for reducing this efficiency -loss. The physics of the problem involves a wide range of scales, from -farm scale and ABL scale (requiring domains of several kilometers cubed) -down to turbine and turbine blade scale with flow phenomena that take -place on millimeter scale. Modelling such a system, requires a -multi-scale approach in combination with extensive supercomputing. To -this end, our simulation code SP-Wind is used. Implementation issues and -parallelization are discussed. Next to that, new physical insights -gained from our simulations at the VSC are highlighted. - -[`slides - PDF 9.9MB <\%22/assets/1055\%22>`__] - -" diff --git a/Other/file_0629_uniq.rst b/Other/file_0629_uniq.rst deleted file mode 100644 index bfee56d5d..000000000 --- a/Other/file_0629_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -201403361 doi millimeter 571 Assembly Fergus vascular Journal Tactoid Self Portugies James cubed 11th 1002 kilometers DOI blade Suter Conference eScience wakes nanocomposites induced ensemble dx jocs turbines Sebastian Speaker Multiscale Itani 966 Schiller Hoskote Nanocomposites Properties Lessons offs Polymer negligible Coveney driver ABL rise Bernabeu Miguel adma Ulf Transcontinental universe Mohamed dark involves Clay Chemically Chandrashekar Intercalation gained 984 Simon 1016 565 Hetherington Robertson Disturbing IEEE aerodynamics Reveals References 008 governed deceleration challenging Zwart speeding downstream Emergent Schmieschek diff --git a/Other/file_0637.rst b/Other/file_0637.rst deleted file mode 100644 index 2a6e3b85a..000000000 --- a/Other/file_0637.rst +++ /dev/null @@ -1,8 +0,0 @@ -What is a supercomputer? -======================== - -A supercomputer is a very fast and extremely parallel computer. Many of -its technological properties are comparable to those of your laptop or -even smartphone. But there are also important differences. 
- -" diff --git a/Other/file_0639.rst b/Other/file_0639.rst deleted file mode 100644 index 966d4e4ef..000000000 --- a/Other/file_0639.rst +++ /dev/null @@ -1,6 +0,0 @@ -Impact on research, industry and society -======================================== - -Not only have supercomputers changed scientific research in a -fundamental way, they also enable the development of new, affordable -products and services which have a major impact on our daily lives. diff --git a/Other/file_0639_uniq.rst b/Other/file_0639_uniq.rst deleted file mode 100644 index 2fc48080a..000000000 --- a/Other/file_0639_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Impact diff --git a/Other/file_0645.rst b/Other/file_0645.rst deleted file mode 100644 index 10a9d04b5..000000000 --- a/Other/file_0645.rst +++ /dev/null @@ -1,47 +0,0 @@ -Tier-1b thin node supercomputer BrENIAC ---------------------------------------- - -This system is since October 2016 in production use. - -Purpose -~~~~~~~ - -On this cluster you can run highly parallel, large scale computations -that rely critically on efficient communication. - -Hardware -~~~~~~~~ - -- 580 computing nodes - - - Two 14-core Intel Xeon processors (Broadwell, E5-2680v4) - - 128 GiB RAM (435 nodes) or 256 GiB (145 nodes) - -- EDR InfiniBand interconnect - - - High bandwidth (11.75 GB/s per direction, per link) - - Slightly improved latency over FDR - -- Storage system - - - Capacity of 634 TB - - Peak bandwidth of 20 GB/s - -Software --------- - -You will find the standard Linux HPC software stack installed on the -Tier-1 cluster. If required, user support will install additional -(Linux) software you require, but you are responsible for taking care of -the licensing issues (including associated costs). - -Access -~~~~~~ - -You can get access to this infrastructure by applying for a starting -grant, submitting a project proposal that will be evaluated on -scientific and technical merits, or by buying compute time. - -.. _section-1: - -" diff --git a/Other/file_0645_uniq.rst b/Other/file_0645_uniq.rst deleted file mode 100644 index 63ce5734a..000000000 --- a/Other/file_0645_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Slightly merits Capacity licensing critically 1b Peak _section diff --git a/Other/file_0655.rst b/Other/file_0655.rst deleted file mode 100644 index d278cd895..000000000 --- a/Other/file_0655.rst +++ /dev/null @@ -1,30 +0,0 @@ -A collaboration with the VSC offers your company a good number of -benefits. - -- Together we will identify which expertise within the Flemish - universities and their associations is appropriate for you when - rolling out High Performance Computing (HPC) within your company. -- We can also assist with the technical writing of a project proposal - for financing for example through the IWT (Agency for Innovation by - Science and Technology). -- You can participate in courses on HPC, including tailor-made courses - provided by the VSC. -- You will have access to a supercomputer infrastructure with a - dedicated, on-site team assisting you during the start-up phase. -- As a software developer, you can also deploy HPC software - technologies to develop more efficient software which makes better - use of modern hardware. -- A shorter turnaround time for your simulation or data study boosts - productivity and increases the responsiveness of your business to new - developments. 
-- The possibility to carry out more detailed simulations or to analyse larger amounts of data can yield new insights which in turn lead to improved products and more efficient processes.
-- A quick analysis of the data collected during a production process helps to detect and correct abnormalities early on.
-- Numerical simulation and virtual engineering reduce the number of prototypes and accelerate the discovery of potential design problems. As a result you are able to bring a superior product to market faster and cheaper.

" diff --git a/Other/file_0659.rst b/Other/file_0659.rst deleted file mode 100644 index 19f406cc5..000000000 --- a/Other/file_0659.rst +++ /dev/null @@ -1,11 +0,0 @@
-Modern microelectronics has created many new opportunities. Today powerful supercomputers enable us to collect and process huge amounts of data. Complex systems can be studied through numerical simulation without having to build a prototype or set up a scaled experiment beforehand. All this leads to a quicker and cheaper design of new products, cost-efficient processes and innovative services. To support this development in Flanders, the Flemish Government founded the VSC in late 2007. Our accumulated expertise and infrastructure are also available to industry for R&D.

" diff --git a/Other/file_0661.rst b/Other/file_0661.rst deleted file mode 100644 index 7008c3649..000000000 --- a/Other/file_0661.rst +++ /dev/null @@ -1,6 +0,0 @@
-Our offer to you
================

Thanks to our embedding in academic institutions, we can offer you not only infrastructure at competitive rates but also expert advice and training. diff --git a/Other/file_0663.rst b/Other/file_0663.rst deleted file mode 100644 index 6b1a7b115..000000000 --- a/Other/file_0663.rst +++ /dev/null @@ -1,6 +0,0 @@
-About us
========

The VSC is a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. diff --git a/Other/file_0671.rst b/Other/file_0671.rst deleted file mode 100644 index cda201f68..000000000 --- a/Other/file_0671.rst +++ /dev/null @@ -1,21 +0,0 @@
-The VSC was launched in late 2007 as a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. Our team also collaborates with many research groups at various universities and helps them and their industrial partners with all aspects of infrastructure usage.

Besides a competitive infrastructure, the VSC team also offers full assistance with the introduction of High Performance Computing within your company.

Contact
-------

Coordinator industry access and services: `industry@fwo.be <\%22mailto:industry@fwo.be\%22>`__

| Alternatively, you can contact one of `the VSC coordinators <\%22/en/contact\%22>`__.

" diff --git a/Other/file_0671_uniq.rst b/Other/file_0671_uniq.rst deleted file mode 100644 index c21423ba4..000000000 --- a/Other/file_0671_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Coordinator diff --git a/Other/file_0673.rst b/Other/file_0673.rst deleted file mode 100644 index 115360754..000000000 --- a/Other/file_0673.rst +++ /dev/null @@ -1,2 +0,0 @@ -`Get in touch with us!
<\%22/en/hpc-for-industry/about-us#contact\%22>`__ diff --git a/Other/file_0683.rst b/Other/file_0683.rst deleted file mode 100644 index 8389faacd..000000000 --- a/Other/file_0683.rst +++ /dev/null @@ -1,27 +0,0 @@ -Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc nec -interdum velit, et viverra arcu. Donec ac nisl vehicula orci mattis -pellentesque vel sed magna. Ut vulputate ipsum in bibendum suscipit. -Phasellus tristique molestie cursus. Suspendisse sed luctus diam. Duis -dignissim tincidunt congue. Sed laoreet nunc ac hendrerit congue. Aenean -semper dolor sit amet tincidunt pharetra. Fusce malesuada iaculis enim -eu venenatis. Maecenas commodo laoreet eros eu feugiat. Integer -dignissim sapien at vehicula fermentum. Sed quis odio in dui luctus -tempus. Praesent porttitor nisl varius, mattis eros laoreet, eleifend -magna. Curabitur vehicula vitae eros vel egestas. Fusce at metus velit. - -Test - -Test movie ----------- - -The movie below illustrates the use of supercomputing for the design of -a cooling element from `a report on Kanaal -Z <\%22http://kanaalz.knack.be/nieuws/2-vlaamse-ingenieurs-ontketenen-revolutie-in-koeling/video-normal-864019.html\%22>`__. - -Methode 1, conform de code voor embedding gegenereerd door de Kanaal Z -website: speelt niet af... - -Methode 2: Video tag, werkt alleen in HTML5 browsers, en ik vrees dat -Kanaal Z niet gelukkig is met deze methode... - -" diff --git a/Other/file_0683_uniq.rst b/Other/file_0683_uniq.rst deleted file mode 100644 index c878fe195..000000000 --- a/Other/file_0683_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -speelt gegenereerd pellentesque sed sapien ipsum methode arcu porttitor dui feugiat semper orci odio enim Nunc velit Phasellus browsers luctus bibendum Methode Lorem gelukkig varius venenatis hendrerit malesuada vitae HTML5 congue vlaamse viverra eleifend Curabitur suscipit vehicula ontketenen pharetra interdum af dignissim cooling knack vulputate ik Duis Ut Suspendisse ingenieurs alleen magna revolutie quis koeling elit tincidunt kanaalz 864019 metus tristique laoreet werkt Aenean molestie Praesent nunc Donec Kanaal vel diam dolor iaculis cursus egestas vrees conform nisl Fusce tempus adipiscing commodo Integer deze Maecenas diff --git a/Other/file_0687.rst b/Other/file_0687.rst deleted file mode 100644 index 757ff867b..000000000 --- a/Other/file_0687.rst +++ /dev/null @@ -1,98 +0,0 @@ -**The industry day has been postponed to a later date, probably in the -autumn around the launch of the second Tier-1 system in Flanders.** - -Supercharge your business with supercomputing ---------------------------------------------- - -| **When?** New date to be determined -| **Where?** `Technopolis, - Mechelen <\%22https://www.technopolis.be/en/directions-and-contact/\%22>`__ -| **Admission free, but registration required** - -The VSC Industry day is the second in a series of annual events. The -goals are to create awareness about the potential of HPC for industry -and to help firms overcome the hurdles to use supercomputing. We are -proud to present an exciting program with testimonials of some Flemish -firms who already have discovered the opportunities of large scale -computing, success stories from a European HPC centre that successfully -collaborates with industry and a presentation by a HPC vendor who has -been very successful delivering solutions to several industries. 
- -**Preliminary program - Supercharge your business with supercomputing** - -**Given that the industry day has been postponed, the program is subject -to change. -** - -13.00-13.30 - -Registration and welcome drink - -13.30-13.45 - -| Introduction and opening -| *Prof. dr Colin Whitehouse (chair)* - -13.45-14.15 - -| The future is now - physics-based simulation opens new gates in heart - disease treatment -| *Matthieu De Beule (*\ `FEops <\%22http://www.feops.com/\%22>`__\ *)* - -13.45-14.05 - -| Hydrodynamic and morfologic modelling of the river Scheldt estuary -| *Sven Smolders and Abdel Nnafie (*\ `Waterbouwkundig - Laboratorium <\%22http://www.waterbouwkundiglaboratorium.be/\%22>`__\ *)* - -14.15-14.45 - -| HPC in Metal Industry: Modelling Wire Manufacturing -| *Peter De Jaeger - (*\ `Bekaert <\%22https://www.bekaert.com/\%22>`__\ *)* - -15.15-15.45 - -Coffee break - -15.45-16.15 - -| NEC industrial customers HPC experiences -| *Fredrik Unger - (*\ `NEC <\%22https://de.nec.com/de_DE/global/solutions/hpc/index.html\%22>`__\ *)* - -16.15-16.45 - -| Exploiting business potential with supercomputing -| *Karen Padmore (HPC Wales - and*\ `SESAME <\%22https://sesamenet.eu/\%22>`__\ *repres.)* - -16.45-17.05 - -| What VSC has to offer to your business -| *Ingrid Barcena Roig and Ewald Pauwels (VSC)* - -17.05-17.25 - -| Q&A discussion -| Panel/chair - -17.25-17.30 - -| Closing -| *Prof dr. Colin Whitehouse (chair)* - -17.30-18.30 - -Networking reception - -Registration ------------- - -The registrations are closed now. Ones the new date is determined, a new -registration form will be made available. - -| `How to reach - Technopolis <\%22https://www.technopolis.be/en/directions-and-contact/\%22>`__. - -" diff --git a/Other/file_0687_uniq.rst b/Other/file_0687_uniq.rst deleted file mode 100644 index 51e1ac2b5..000000000 --- a/Other/file_0687_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -disease postponed gates Matthieu Wire Waterbouwkundig Supercharge morfologic Beule river autumn Smolders industries Abdel estuary Exploiting Hydrodynamic Roig customers Barcena Sven Unger SESAME Nnafie Ones waterbouwkundiglaboratorium registrations FEops Laboratorium Admission repres Jaeger sesamenet feops Preliminary Padmore Manufacturing Fredrik Scheldt bekaert technopolis de_DE diff --git a/Other/file_0689.rst b/Other/file_0689.rst deleted file mode 100644 index a85455f01..000000000 --- a/Other/file_0689.rst +++ /dev/null @@ -1,2 +0,0 @@ -`VSC Industry Day - Thursday April 14, -2016 <\%22/events/industryday-2016\%22>`__ diff --git a/Other/file_0691.rst b/Other/file_0691.rst deleted file mode 100644 index a85455f01..000000000 --- a/Other/file_0691.rst +++ /dev/null @@ -1,2 +0,0 @@ -`VSC Industry Day - Thursday April 14, -2016 <\%22/events/industryday-2016\%22>`__ diff --git a/Other/file_0711.rst b/Other/file_0711.rst deleted file mode 100644 index ff985b2bf..000000000 --- a/Other/file_0711.rst +++ /dev/null @@ -1,210 +0,0 @@ -Access restriction ------------------- - -Once your project has been approved, your login on the Tier-1 cluster -will be enabled. You use the same vsc-account (vscXXXXX) as at your home -institutions and you use the same $VSC_HOME and $VSC_DATA directories, -though the Tier-1 does have its own scratch directories. - -You can log in to the following login nodes: - -- login1-tier1.hpc.kuleuven.be -- login2-tier1.hpc.kuleuven.be - -These nodes are also accessible from outside the KU Leuven. 
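For example, a login from a terminal on your own machine could look like this (replace vscXXXXX with your own VSC account number; this is only a generic sketch, the details of your SSH client setup may differ)::

   ssh vscXXXXX@login1-tier1.hpc.kuleuven.be
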
Unlike for the Tier-1 system muk, you do not need to first log in to your home cluster and proceed from there to BrENIAC. Have a look at the `quickstart guide <\%22https://www.vscentrum.be/assets/1155\%22>`__ for more information.

Hardware details
----------------

The Tier-1 cluster *BrENIAC* is primarily aimed at large parallel computing jobs that require a high-bandwidth, low-latency interconnect, but jobs that consist of a multitude of small independent tasks are also accepted.

The main architectural features are:

- 580 compute nodes with two Xeon E5-2680v4 processors (2.4 GHz, 14 cores per processor, Broadwell architecture). 435 nodes are equipped with 128 GB RAM and 145 nodes with 256 GB. The total number of cores is 16,240, the total memory capacity is 90.6 TiB and the peak performance is more than 623 TFlops (Linpack result 548 TFlops). The Broadwell CPU supports the 256-bit AVX2 vector instructions with fused multiply-add operations. Each core can execute up to 16 double precision floating point operations per cycle (two 4-element FMAs), but to be able to use the AVX2 instructions, you need to recompile your program for the Haswell or Broadwell architecture. The CPU also uses what Intel calls the \\"Cluster-on-Die\" approach, which means that each processor chip internally has two groups of 7 cores. For hybrid MPI/OpenMP processes (or in general distributed/shared memory programs), 4 MPI processes per node each using 7 cores might be a good choice.
- EDR InfiniBand interconnect with a fat tree topology (blocking factor 2:1)
- A storage system with a net capacity of approximately 634 TB and a peak bandwidth of 20 GB/s, using the GPFS file system.
- 2 login nodes with a configuration similar to that of the compute nodes

| Compute time on *BrENIAC* is only available upon approval of a project. Information on requesting projects is available `in Dutch <\%22https://www.vscentrum.be/nl/systemen-en-toegang/projecttoegang-tier1\%22>`__ and `in English <\%22https://www.vscentrum.be/en/access-and-infrastructure/project-access-tier1\%22>`__.

Accessing your data
-------------------

BrENIAC supports the standard VSC directories.

- $VSC_HOME points to your VSC home directory. It is your standard home directory which is accessed over the VSC network, and available as /user/<institution>/XXX/vscXXXYY, e.g., /user/antwerpen/201/vsc20001. So the quota on this directory is set by your home institution.
- $VSC_DATA points to your standard VSC data directory, accessed over the VSC network. It is available as /data/<institution>/XXX/vscXXXYY. The quota on this directory is set by your home institution. The directory is mounted via NFS, which lacks some of the features of the parallel file system that may be available at your home institution. Certain programs using parallel I/O may fail when running from this directory, so you are strongly encouraged to run programs only from $VSC_SCRATCH.
- $VSC_SCRATCH is a Tier-1 specific fast parallel file system using the GPFS file system. The default quota is 1 TiB but may be changed depending on your project request. The directory is also available as /scratch/leuven/XXX/vscXXXYY (note \\"leuven\" in the name, not your own institution's, as this directory is physically located on the Tier-1 system at KU Leuven). The variable $VSC_SCRATCH_SITE points to the same directory.
- $VSC_NODE_SCRATCH points to a small (roughly 70 GB) local scratch directory on the SSD of each node.
It is also available as /node_scratch/. The contents are only accessible from a particular node and during the job.

Running jobs and specifying node characteristics
------------------------------------------------

The cluster uses Torque/Moab, like all other clusters at the VSC, so the generic documentation also applies to BrENIAC.

- BrENIAC uses a single-job-per-node policy. So if a user submits single-core jobs, the nodes will usually be used very inefficiently and you will quickly run out of your compute time allocation. Users are strongly encouraged to use the Worker framework (e.g., module worker/1.6.7-intel-2016a) to group such single-core jobs. Worker makes the scheduler's task easier as it does not have to deal with too many jobs. It has `a documentation page on this user portal <\%22/cluster-doc/running-jobs/worker-framework\%22>`__ and a `more detailed external documentation site <\%22http://worker.readthedocs.io/en/latest/\%22>`__.
- The maximum regular job duration is 3 days.
- Take into account that each node has 28 cores. These are logically grouped in 2 sets of 14 (socket) or 4 sets of 7 (NUMA-on-chip domains). Hence for hybrid MPI/OpenMP programs, 4 MPI processes per node with 7 threads each (or two with 14 threads each) may be a better choice than 1 MPI process per node with 28 threads.

Several \\"MOAB features\" are defined to select nodes of a particular type on the cluster. You can specify them in your job script using, e.g.,

::

    #PBS -l feature=mem256

to request only nodes with the mem256 feature. Some important features:

+-----------------------------------+-----------------------------------+
| feature                           | explanation                       |
+===================================+===================================+
| mem128                            | Select nodes with 128 GB of RAM   |
|                                   | (roughly 120 GB available to      |
|                                   | users)                            |
+-----------------------------------+-----------------------------------+
| mem256                            | Select nodes with 256 GB of RAM   |
|                                   | (roughly 250 GB available to      |
|                                   | users)                            |
+-----------------------------------+-----------------------------------+
| rXiY                              | Request nodes in a specific       |
|                                   | InfiniBand island. X ranges from  |
|                                   | 01 to 09, Y can be 01, 11 or 23.  |
|                                   | The islands rXi01 have 20 nodes   |
|                                   | each, the islands rXi11 and rXi23 |
|                                   | with X = 01, 02, 03, 04, 06, 07,  |
|                                   | 08 or 09 have 24 nodes each and   |
|                                   | the island r5i11 has 16 nodes.    |
|                                   | This may be helpful to make sure  |
|                                   | that nodes used by a job are as   |
|                                   | close to each other as possible,  |
|                                   | but in general will increase      |
|                                   | waiting time before your job      |
|                                   | starts.                           |
+-----------------------------------+-----------------------------------+

Compile and debug nodes
~~~~~~~~~~~~~~~~~~~~~~~

8 nodes with 256 GB of RAM are set aside for compiling or debugging small jobs. You can run jobs on them by specifying

::

    #PBS -lqos=debugging

in your job script.

The following limitations apply:

- Maximum 1 job per user at a time
- Maximum 8 nodes per job
- Maximum accumulated wall time is 1 hour, e.g., a job using 1 node for 1 hour or a job using 4 nodes for 15 minutes.

Credit system
~~~~~~~~~~~~~

BrENIAC uses Moab Accounting Manager to account for the compute time used by each user. Tier-1 users have a credit account for each granted Tier-1 project. When starting a job, you need to specify which credit account to use via

::

    #PBS -A lpt1_XXXX-YY

with lpt1_XXXX-YY the name of your project account; a sketch of a complete job script that combines these options is given below.
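As an illustration, the options above can be combined into a single job script. The sketch below is not taken from the BrENIAC documentation: the resource values, the module name, the account name lpt1_XXXX-YY and the executable ./my_program are placeholders or assumptions that you have to adapt to your own project, and the exact mpirun options depend on the MPI library you use. It assumes the hybrid MPI/OpenMP layout of 4 MPI processes with 7 threads each per node recommended above::

   #!/bin/bash -l
   #PBS -l nodes=4:ppn=28
   #PBS -l walltime=24:00:00
   #PBS -l feature=mem128
   #PBS -A lpt1_XXXX-YY

   # load a toolchain module (assumed name; only the 2016a generation is available on BrENIAC)
   module load intel/2016a

   # go to the submission directory, ideally located under $VSC_SCRATCH (see above)
   cd $PBS_O_WORKDIR

   # hybrid layout: 4 MPI ranks per node, 7 OpenMP threads per rank
   export OMP_NUM_THREADS=7
   mpirun -ppn 4 ./my_program
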
You can also specify the -A option on the qsub command line.

Further information:

- `BrENIAC Quick Start Guide <\%22/assets/1155\%22>`__

Software specifics
------------------

BrENIAC uses the standard VSC toolchains. However, not all VSC toolchains are made available on BrENIAC. For now, only the 2016a toolchain is available. The Intel toolchain has slightly newer versions of the compilers, MKL library and MPI library than the standard VSC 2016a toolchain, to be fully compatible with the machine hardware and software stack.

Some history
------------

BrENIAC was installed during the spring of 2016, followed by several months of testing, first by the system staff and next by pilot users. The system was officially launched on October 17 of that year, and by the end of the month new Tier-1 projects started computing on the cluster.

We have a time lapse movie of the construction of BrENIAC:

Documentation
-------------

- `BrENIAC Quick Start Guide (PDF) <\%22/assets/1155\%22>`__

" diff --git a/Other/file_0711_uniq.rst b/Other/file_0711_uniq.rst deleted file mode 100644 index 61f0898e3..000000000 --- a/Other/file_0711_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -XXX lapse Certain pilot rXi11 Maximum lpt1_XXXX officially lacks logically node_scratch RxI01 548 islands quickstart rXiY FMAs vsc20001 island 1155 multiply 623 Clustter fused Unless scirpt 4GHz vscXXXYY r5i11 rXi23 YY Die VSC_NODE_SCRATCH aside Accounting Take diff --git a/Other/file_0745.rst b/Other/file_0745.rst deleted file mode 100644 index 0cfa0cd6c..000000000 --- a/Other/file_0745.rst +++ /dev/null @@ -1,179 +0,0 @@
-The application
---------------

The designated way to get access to the Tier-1 for research purposes is through a project application.

You have to submit a proposal to get compute time on the Tier-1 cluster Muk.

You should include a realistic estimate of the compute time needed in the project in your application. These estimates are best supported by Tier-1 benchmarks. To be able to perform these tests for new codes, you can request a `starting grant <\%22/en/access-and-infrastructure/tier1-starting-grant\%22>`__ through a short and quick procedure.

You can submit proposals continuously, but they will be gathered, evaluated and resources allocated at a number of cut-off dates. There are 3 cut-off dates in 2016:

- February 1, 2016
- June 6, 2016
- October 3, 2016

Proposals submitted since the last cut-off and before each of these dates are reviewed together.

The FWO appoints an evaluation commission to do this.

Because of the international composition of the `evaluation commission <\%22/en/about-vsc/organisation-structure#tier1-evaluation\%22>`__, the preferred language for the proposals is English. If a proposal is in Dutch, you must also send an English translation. Please have a look at the documentation of standard terms like: CPU, core, node-hour, memory, storage, and use these consistently in the proposal.

For applications in 2014 or 2015, costs for resources used will be invoiced, with various discounts for Flemish-funded academic researchers. You should be aware that the investments and operational costs for the Tier-1 infrastructure are considerable.

You can submit your application `via EasyChair <\%22#easychair\%22>`__ using the application forms below.
- -Relevant documents - 2016
-------------------------

On October 26 the Board of Directors of the Hercules foundation decided to make a major adjustment to the regulations regarding applications to use the Flemish supercomputer.

For applications for computing time on the Tier-1 granted in 2016 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants will no longer have to pay a contribution to the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications.

For applications granted in 2015 the current pricing structure remains in place and a contribution will still be requested.

The adjusted Regulations for 2016 can be found in the links below.

From January 1, 2016 on, the responsibility for the funding of HPC and the management of the Tier-1 has been transferred to the FWO, including all current decisions and ongoing contracts.

- `Reglement betreffende aanvragen voor het gebruik van de Vlaamse supercomputer (Dutch only, applicable as of 1 January 2016) (PDF, 213 kB) <\%22/assets/1043\%22>`__
- Enclosure 1: `The application form for Category 1 Applications <\%22/assets/1041\%22>`__ (research project of which the scientific quality has already been evaluated, see §1 of the Regulations for the criteria) (docx, 60 kB)
- Enclosure 2: `The application form for Category 2 projects <\%22/assets/1045\%22>`__ (research projects that have not yet been evaluated scientifically) (docx, 71 kB)
- Enclosure 3: `The procedure for the use of EasyChair <\%22#easychair\%22>`__
- `The official version of the letter highlighting the 2016 changes is only available in Dutch by following this link <\%22https://www.vscentrum.be/assets/1029\%22>`__.
- `An overview of the available (scientific) software <\%22/cluster-doc/software/tier1-muk\%22>`__
- `An overview of standard terms used in HPC <\%22/support/tut-book/hpc-glossary\%22>`__
- `The list of scientific domains <\%22/en/access-and-infrastructure/project-access-tier1/domains\%22>`__

If you need help to fill out the application, please consult your local support team.

Relevant documents - 2015
-------------------------

- `Regulations regarding applications to use the Flemish supercomputer (applicable as of 1 January 2015) <\%22/assets/207\%22>`__
- Enclosure 1: `Cost price computing time and \\"SCRATCH\" disk storage <\%22#pricing\%22>`__

Pricing - applications in 2015
------------------------------

When you receive compute time through a Tier-1 project application, we expect a contribution to the cost of compute time and storage.

**Summary of rates** (CPU per nodeday / private disk per TB per month):

- Universities, VIB and iMINDS: 0.68€ (5%) / 2€ (5%)
- Other SOCs and other Flemish public research institutes: 1.35€ (10%) / 4€ (10%)
- Flemish public research institutes - contract research with possibility of full cost accounting (*): 13.54€ / 46.80€
- Flemish public research institutes - European projects with possibility of full cost accounting (*): 13.54€ / 46.80€

(*) The price for one nodeday is 13.54 euro (incl. overhead and support of the Tier-1 technical support team, but excl.
-advanced support by specialized staff). The price for 1TB storage per month is 46.80 euro (incl. overhead and support of the Tier-1 technical support team, but excl. advanced support by specialized staff). Approved Tier-1 projects get a default quota of 1TB. Only storage requests higher than 1TB will be charged, and only for the amount above 1TB.

EasyChair procedure
-------------------

You have to submit your proposal on `EasyChair for the conference Tier12016 <\%22https://easychair.org/conferences/?conf=tier12016\%22>`__. This requires the following steps:

#. If you do not yet have an EasyChair account, you first have to create one:

   #. Complete the CAPTCHA
   #. Provide your first name, last name and e-mail address
   #. A confirmation e-mail will be sent; please follow the instructions in this e-mail (click the link)
   #. Complete the required details.
   #. When the account has been created, a link will appear to log in on the TIER1 submission page.

#. Log in to the EasyChair system.
#. Select ‘New submission’.
#. If asked, accept the EasyChair terms of service.
#. Add one or more authors; if they have an EasyChair account, they can follow up on and/or adjust the present application.
#. Complete the title and abstract.
#. You must specify at least three keywords: include the institution of the promoter of the present project and the field of research.
#. As a paper, submit a PDF version of the completed application form. You must submit the complete proposal, including the enclosures, as a single PDF file to the system.
#. Click \\"Submit\".
#. EasyChair will send a confirmation e-mail to all listed authors.

" diff --git a/Other/file_0745_uniq.rst b/Other/file_0745_uniq.rst deleted file mode 100644 index 358682802..000000000 --- a/Other/file_0745_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Pricing flemish nodeday contracts adjustment considerable Approved Universities Cost 207 iMINDS 80 discounts 213 1029 1041 tier12016 SCRATCH excl invoiced pricing highlighting mo Tier12016 contract 1TB investments 1043 incl 1045 diff --git a/Other/file_0749.rst b/Other/file_0749.rst deleted file mode 100644 index 04e7c491b..000000000 --- a/Other/file_0749.rst +++ /dev/null @@ -1,96 +0,0 @@
-The third VSC Users Day was held at the \\"Paleis der Academiën\", the seat of the \\"Royal Flemish Academy of Belgium for Science and the Arts\", in the Hertogstraat 1, 1000 Brussels, on June 2, 2017.

Program
-------

- 9u50: welcome
- 10u00: dr. Achim Basermann, German Aerospace Center, High Performance Computing with Aeronautics and Space Applications [`slides PDF - 4,9MB <\%22/assets/1225>`__]
- 11u00: coffee break
- 11u30: workshop sessions – part 1

  - VSC for starters (by VSC personnel) [`slides PDF - 5.3MB <\%22/assets/1215\%22>`__]
  - Profiler and Debugger (by VSC personnel) [`slides PDF - 2,2MB <\%22/assets/1217>`__]
  - Programming GPUs (dr. Bart Goossens - dr. Vule Strbac)

- 12u45: lunch
- 14u00: dr. Ehsan Moravveji, KU Leuven, A Success Story on Tier-1: A Grid of Stellar Models [`slides - PDF 11,9MB <\%22/assets/1219\%22>`__]
- 14u30: ‘1-minute’ poster presentations
- 15u00: workshop sessions – part 2

  - VSC for starters (by VSC personnel)
  - Profiler and Debugger (by VSC personnel)
  - Feedback from Tier-1 Evaluation Committee (dr. Walter Lioen, chairman) [`slides - PDF 0.5MB <\%22/assets/1221\%22>`__]

- 16u00: coffee and poster session
- 17u00: drink

**Abstracts of workshops**

- **VSC for starters** [`slides PDF - 5.3MB <\%22/assets/1215\%22>`__] The workshop provides a smooth introduction to supercomputing for new users.
Starting from common concepts in personal computing the - similarities and differences with supercomputing are highlighted and - some essential terminology is introduced. It is explained what users - can expect from supercomputing and what not, as well as what is - expected from them as users -- **Profiler and Debugger** [`slides PDF - 2,2MB <\%22/assets/1217>`__] - Both profiling and debugging play an important role in the software - development process, and are not always appreciated. In this session - we will introduce profiling and debugging tools, but the emphasis is - on methodology. We will discuss how to detect common performance - bottlenecks, and suggest some approaches to tackle them. For - debugging, the most important point is avoiding bugs as much as - possible. -- **Programming GPUs** - - - **Quasar, a high-level language and a development environment to - reduce the complexity of heterogeneous programming of CPUs and - GPUs**, *Prof dr. Bart Goosens, UGent* [`slides PDF - - 2,1MB <\%22/assets/1227>`__] - In this workshop we present Quasar, a new programming framework - that takes care of many common challenges for GPU programming, - e.g., parallelization, memory management, load balancing and - scheduling. Quasar consists of a high-level programming language - with a similar abstraction level as Python or Matlab, making it - well suited for rapid prototyping. We highlight some of the - automatic parallelization strategies of Quasar and show how - high-level code can efficiently be compiled to parallel code that - takes advantage of the available CPU and GPU cores, while offering - a computational performance that is on a par with a manual - low-level C++/CUDA implementation. We explain how multi-GPU - systems can be programmed from Quasar and we demonstrate some - recent image processing and computer vision results obtained with - Quasar. - - **GPU programming opportunities and challenges: nonlinear finite - element analysis**, *dr. Vule Strbac, KU Leuven* [`slides PDF - - 2,1MB <\%22/assets/1223>`__] - From a computational perspective, finite element analysis - manifests substantial internal parallelism. Exposing and - exploiting this parallelism using GPUs can yield significant - speedups against CPU execution. The details of the mapping between - a requested FE scenario and the hardware capabilities of the GPU - device greatly affect this resulting speedup. Factors such as: (1) - the types of materials present (elasticity), (2) the local memory - pool and (3) fp32/fp64 computation impact GPU solution times - differently than their CPU counterparts. - We present results of both simple and complex FE analyses - scenarios on a multitude of GPUs and show an objective estimation - of general performance. In doing so, we detail the overall - opportunities, challenges as well as the limitations of the GPU FE - approach. - -**Poster sessions** - -An overview of the posters that were presented during the poster session -is available `here <\%22/events/userday-2017/posters\%22>`__. - -" diff --git a/Other/file_0751.rst b/Other/file_0751.rst deleted file mode 100644 index 33141139c..000000000 --- a/Other/file_0751.rst +++ /dev/null @@ -1,26 +0,0 @@ -| - -- **By train:** The closest railway station is - Brussel-Centraal/Bruxelles Central. From there it is a ten minutes - walk to the venue, or you can take the metro. -- **By metro (MIVB):** Metro station Troon - - - From the Central Station: Line 1 or 5 till Kunst-Wet, then line 2 - or 6. 
- - From the North Station: metro Rogier, line 2 or 6 towards - \\"Koning Boudewijn\" or \\"Simonis (Leopold II)\". - - From the South Station: Line 2 or 6 towards Simonis (Elisabeth) - -- **By car:** - - - The \\"Paleis der Academiën\" has two free parking areas at the - side of the \\"Kleine ring\". Access is via the Regentlaan which - you should enter at the Belliardstraat. - - There are limited non-free parking spots at the Regentlaan or the - Paleizenplein - - Two nearby parking garages are: - - - Parking 2 Portes: Waterloolaan 2a, 1000 Brussel - - Parking Industrie: Industriestraat 26-38, 1040 Brussel - -" diff --git a/Other/file_0751_uniq.rst b/Other/file_0751_uniq.rst deleted file mode 100644 index 71e4d0539..000000000 --- a/Other/file_0751_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Industrie railway Elisabeth Wet Boudewijn garages Simonis closest Belliardstraat Leopold Kleine walk Bruxelles Rogier 2a North Industriestraat Waterloolaan Kunst spots Station MIVB Koning Portes Regentlaan Troon 1040 Paleizenplein ring Metro Centraal metro diff --git a/Other/file_0761.rst b/Other/file_0761.rst deleted file mode 100644 index 2d4fc5d7e..000000000 --- a/Other/file_0761.rst +++ /dev/null @@ -1,104 +0,0 @@ -**Poster sessions** - -#. *Computational study of the properties of defects at grain boundaries - in CuInSe2 - *\ R. Saniz, J. Bekaert, B. Partoens, and D. Lamoen - CMT and EMAT groups, Dept. of Physics, U Antwerpen -#. *First-principles study of superconductivity in atomically thin MgB2 - *\ J. Bekaert, B. Partoens, M. V. Milosevic, A. Aperis, P. M. - Oppeneer - CMT group, Dept. of Physics, U Antwerpen & Dept. of Physics and - Astronomy, Uppsala University -#. *Molecular Spectroscopy : Where Theory Meets Experiment - *\ C. Mensch, E. Van De Vondel, Y. Geboes, J. Bogaerts, R. Sgammato, - E. De Vos, F. Desmet, C. Johannessen, W. Herrebout - Molecular Spectroscopy group, Dept. Chemistry, U Antwerpen -#. *Bridging time scales in atomistic simulations: from classical models - to density functional theory - *\ Kristof M. Bal and Erik C. Neyts - PLASMANT, Department of Chemistry, U Antwerpen -#. *Bimetallic nanoparticles: computational screening for - chirality-selective carbon nanotube growth - *\ Charlotte Vets and Erik C. Neyts - PLASMANT, Department of Chemistry, U Antwerpen -#. *Ab initio molecular dynamics of aromatic sulfonation with sulfur - trioxide reveals its mechanism - *\ Samuel L.C. Moors, Xavier Deraet, Guy Van Assche, Paul Geerlings, - Frank De Proft - Quantum Chemistry Group, Department of Chemistry, VUB -#. *Acceleration of the Best First Search Algorithm by using predictive - analytics - *\ J.L. Teunissen, F. De Vleeschouwer, F. De Proft - Quantum Chemistry Group, VUB, Department of Chemistry, VUB -#. *Investigating molecular switching properties of octaphyrins using - DFT - *\ Tatiana Woller, Paul Geerlings, Frank De Proft, Mercedes Alonso - Quantum Chemistry Group, VUB, Department of Chemistry, VUB -#. *Using the Tier-1 infrastructure for high-resolution climate - modelling over Europe and Central Asia - *\ Lesley De Cruz, Rozemien De Troch, Steven Caluwaerts, Piet - Termonia, Olivier Giot, Daan Degrauwe, Geert Smet, Julie Berckmans, - Alex Deckmyn, Pieter De Meutter, Luc Gerard, Rafiq Hamdi, Joris Van - den Bergh, Michiel Van Ginderachter, Bert Van Schaeybroeck - Department of Physics and Astronomy, U Gent -#. *Going where the wind blows – Fluid-structure interaction simulations - of a wind turbine - *\ Gilberto Santo, Mathijs Peeters, Wim Van Paepegem, Joris Degroote - Dept. 
of Flow, Heat and Combustion Mechanics, U Gent -#. *Towards Crash-Free Drones – A Large-Scale Computational Aerodynamic - Optimization - *\ Jolan Wauters, Joris Degroote, Jan Vierendeels - Dept. of Flow, Heat and Combustion Mechanics, U Gent -#. *Characterisation of fragment binding to TSLPR using molecular - dynamics - *\ Dries Van Rompaey, Kenneth Verstraete, Frank Peelman, Savvas N. - Savvides, Pieter Van Der Veken, Koen Augustyns, Hans De Winter - Medicinal Chemistry, UAntwerpen and Center for Inflammation Research - , VIB-UGent -#. *A hybridized DG method for unsteady flow problems - *\ Alexander Jaust, Jochen Schütz - Computational Mathematics (CMAT) group, U Hasselt -#. *HPC-based materials research: From Metal-Organic Frameworks to - diamond - *\ Danny E. P. Vanpoucke, Ken Haenen - Institute for Materials Research (IMO), UHasselt & IMOMEC, IMEC -#. *Improvements to coupled regional climate model simulations over - Antarctica - *\ Souverijns Niels, Gossart Alexandra, Demuzere Matthias, van Lipzig - Nicole - Dept. of Earth and Environmental Sciences, KU Leuven -#. *Climate modelling of Lake Victoria thunderstorms - *\ Wim Thiery, Edouard L. Davin, Sonia I. Seneviratne, Kristopher - Bedka, Stef Lhermitte, Nicole van Lipzig - Dept. of Earth and Environmental Sciences, KU Leuven -#. *Improved climate modeling in urban areas in sub Saharan Africa for - malaria epidemiological studies - *\ Oscar Brousse, Nicole Van Lipzig, Matthias Demuzere, Hendrik - Wouters, Wim Thiery - Dept. of Earth and Environmental Sciences, KU Leuven -#. *Adaptive Strategies for Multi-Index Monte Carlo - *\ Dirk Nuyens, Pieterjan Robbe, Stefan Vandewalle - NUMA group, Dept. of Computer Science, KU Leuven -#. *SP-Wind: A scalable large-eddy simulation code for simulation and - optimization of wind-farm boundary layers - *\ Wim Munters, Athanasios Vitsas, Dries Allaerts, Ali Emre Yilmaz, - Johan Meyers - Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of - Mechanics, KU Leuven -#. *Control Optimization of Wind Turbines and Wind Farms - *\ Ali Emre Yilmaz, Wim Munters, Johan Meyers - Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of - Mechanics, KU Leuven -#. *Simulations of large wind farms with varying atmospheric complexity - using Tier-1 Infrastructure - *\ Dries Allaerts, Johan Meyers - Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of - Mechanics, KU Leuven -#. *Stability of relativistic, two-component jets - *\ Dimitrios Millas, Rony Keppens, Zakaria Meliani - Plasma-astrophysics, Dept. Mathematics, KU Leuven -#. *HPC in Theoretical and Computational Chemistry - Jeremy Harvey, Eliot Boulanger, Andrea Darù, Milica Feldt, Carlos - Martín-Fernández, Ana Sanz Matias, Ewa Szlapa* - Quantum Chemistry and Physical Chemistry Section, Dept. 
of Chemistry, - KU Leuven diff --git a/Other/file_0761_uniq.rst b/Other/file_0761_uniq.rst deleted file mode 100644 index c611c2798..000000000 --- a/Other/file_0761_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Fernández Smet chirality Gilberto Sonia Jaust Hamdi Bert Luc Vos Alexander Bergh superconductivity Ginderachter Medicinal Index Mercedes Sanz Haenen Improved epidemiological Drones Tatiana screening relativistic Degroote Carlos Xavier sulfur carbon Dept Yilmaz Rompaey Vierendeels Mathijs urban Darù Frameworks Vleeschouwer Augustyns Lesley Carlo Rozemien thunderstorms Vandewalle Inflammation Crash Bridging Jochen jets Dimitrios Emre Feldt Martín Lhermitte unsteady Savvides Seneviratne Giot Brousse Guy Rafiq Teunissen IMOMEC Daan octaphyrins sulfonation atomically trioxide Uppsala Stability DFT selective Boulanger Aerodynamic Charlotte malaria Deckmyn Gerard Savvas Ab Investigating Geerlings Danny Nuyens Saharan Winter Cruz EMAT Athanasios Improvements Davin MgB2 Turbines TFSO Schaeybroeck Bimetallic Verstraete Michiel Matias Schütz Edouard CMAT Ken nanoparticles Woller Jeremy Vets Paepegem Section Alonso Asia Ewa Peelman Julie Santo hybridized Bal Farms Aperis Olivier Szlapa Zakaria Flow Koen Scale Antarctica Piet Meutter Ana Monte PLASMANT Kristof IMO Berckmans Strategies Milosevic Heat Ali Milica Der Degrauwe Kristopher diamond Kenneth Hans Caluwaerts Sgammato Alex Stef Africa Characterisation Termonia Robbe CuInSe2 TSLPR atomistic Wauters Vitsas Pieterjan Harvey nanotube Oscar Bedka Vanpoucke Deraet Victoria Troch Assche DG Oppeneer Meliani Veken Jolan Algorithm Eliot reveals blows IMEC Millas diff --git a/Other/file_0765.rst b/Other/file_0765.rst deleted file mode 100644 index d2f0ec856..000000000 --- a/Other/file_0765.rst +++ /dev/null @@ -1,96 +0,0 @@ -The third VSC Users Day was held at the \\"Paleis der Academiën\", the -seat of the \\"Royal Flemish Academy of Belgium for Science and the -Arts\", in the Hertogstraat 1, 1000 Brussels, on June 2, 2017. - -Program -------- - -- 9u50 : welcome -- 10u00: dr. Achim Basermann, German Aerospace Center, High Performance - Computing with Aeronautics and Space Applications [`slides PDF - - 4,9MB <\%22/assets/1225>`__] -- 11u00: coffee break -- 11u30: workshop sessions – part 1 - - - VSC for starters (by VSC personnel) [`slides PDF - - 5.3MB <\%22/assets/1215\%22>`__] - - Profiler and Debugger (by VSC personnel) [`slides PDF - - 2,2MB <\%22/assets/1217>`__] - - Programming GPUs (dr. Bart Goossens - dr. Vule Strbac) - -- 12u45: lunch -- 14u00: dr. Ehsan Moravveji, KU Leuven A Success Story on Tier-1: A - Grid of Stellar Models [`slides - PDF - 11,9MB <\%22/assets/1219\%22>`__] -- 14u30: ‘1-minute’ poster presentations -- 15u00: workshop sessions – part 2 - - - VSC for starters (by VSC personnel) - - Profiler and Debugger (by VSC personnel) - - Feedback from Tier-1 Evaluation Committee (dr. Walter Lioen, - chairman) [`slides - PDF 0.5MB <\%22/assets/1221\%22>`__] - -- 16u00: coffee and poster session -- 17u00: drink - -**Abstracts of workshops** - -- **VSC for starters** [`slides PDF - 5.3MB <\%22/assets/1215\%22>`__] - The workshop provides a smooth introduction to supercomputing for new - users. Starting from common concepts in personal computing the - similarities and differences with supercomputing are highlighted and - some essential terminology is introduced. 
It is explained what users - can expect from supercomputing and what not, as well as what is - expected from them as users -- **Profiler and Debugger** [`slides PDF - 2,2MB <\%22/assets/1217>`__] - Both profiling and debugging play an important role in the software - development process, and are not always appreciated. In this session - we will introduce profiling and debugging tools, but the emphasis is - on methodology. We will discuss how to detect common performance - bottlenecks, and suggest some approaches to tackle them. For - debugging, the most important point is avoiding bugs as much as - possible. -- **Programming GPUs** - - - **Quasar, a high-level language and a development environment to - reduce the complexity of heterogeneous programming of CPUs and - GPUs**, *Prof dr. Bart Goosens, UGent* [`slides PDF - - 2,1MB <\%22/assets/1227>`__] - In this workshop we present Quasar, a new programming framework - that takes care of many common challenges for GPU programming, - e.g., parallelization, memory management, load balancing and - scheduling. Quasar consists of a high-level programming language - with a similar abstraction level as Python or Matlab, making it - well suited for rapid prototyping. We highlight some of the - automatic parallelization strategies of Quasar and show how - high-level code can efficiently be compiled to parallel code that - takes advantage of the available CPU and GPU cores, while offering - a computational performance that is on a par with a manual - low-level C++/CUDA implementation. We explain how multi-GPU - systems can be programmed from Quasar and we demonstrate some - recent image processing and computer vision results obtained with - Quasar. - - **GPU programming opportunities and challenges: nonlinear finite - element analysis**, *dr. Vule Strbac, KU Leuven* [`slides PDF - - 2,1MB <\%22/assets/1223>`__] - From a computational perspective, finite element analysis - manifests substantial internal parallelism. Exposing and - exploiting this parallelism using GPUs can yield significant - speedups against CPU execution. The details of the mapping between - a requested FE scenario and the hardware capabilities of the GPU - device greatly affect this resulting speedup. Factors such as: (1) - the types of materials present (elasticity), (2) the local memory - pool and (3) fp32/fp64 computation impact GPU solution times - differently than their CPU counterparts. - We present results of both simple and complex FE analyses - scenarios on a multitude of GPUs and show an objective estimation - of general performance. In doing so, we detail the overall - opportunities, challenges as well as the limitations of the GPU FE - approach. - -**Poster sessions** - -`An overview of the posters that were presented during the poster -session is available here <\%22/events/userday-2017/posters\%22>`__. - -" diff --git a/Other/file_0769.rst b/Other/file_0769.rst deleted file mode 100644 index 07db204ae..000000000 --- a/Other/file_0769.rst +++ /dev/null @@ -1,2 +0,0 @@ -`Other pictures of the VSC User Day -2017 <\%22/events/userday-2017/photos\%22>`__. diff --git a/Other/file_0773.rst b/Other/file_0773.rst deleted file mode 100644 index 003a1770b..000000000 --- a/Other/file_0773.rst +++ /dev/null @@ -1,5 +0,0 @@ -Below is a selection of photos from the user day 2017. A larger set of -photos at a higher resolution can be `downloaded as a zip file -(23MB) <\%22/assets/1291\%22>`__. 
- -" diff --git a/Other/file_0773_uniq.rst b/Other/file_0773_uniq.rst deleted file mode 100644 index 740ee9a8b..000000000 --- a/Other/file_0773_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -zip 1291 23MB diff --git a/Other/file_0795.rst b/Other/file_0795.rst deleted file mode 100644 index 42836aef1..000000000 --- a/Other/file_0795.rst +++ /dev/null @@ -1,88 +0,0 @@ -The 4th VSC Users Day was held at the \\"Paleis der Academiën\", the -seat of the \\"\ `Royal Flemish Academy of Belgium for Science and the -Arts <\%22https://www.vscentrum.be/events/userday-2017/venue\%22>`__\\", -in the Hertogstraat 1, 1000 Brussels, on May 22, 2018. - -Program -------- - -The titles in the program link to slides or abstracts of the -presentations. - -- 9u50 : Welcome -- 10u00: `“Ultrascalable algorithms for complex flows” – Ulrich Rüde, - CERFACS and Universitaet - Erlangen-Nuernberg <\%22https://www.researchgate.net/publication/325285871_Ultra_scalable_Algorithms_for_Complex_Flows\%22>`__ -- 11u00: Coffee break -- 11u25: Workshop sessions part 1 – VSC staff - - - Start to VSC - - `Start to GPU <\%22/assets/1361\%22>`__ - - `Code optimization <\%22/assets/1363\%22>`__ - -- 12u15: `1 minute poster - presentations <\%22/events/userday-2018/posters\%22>`__ -- 12u45: Lunch -- 13h30: `Poster session <\%22/events/userday-2018/posters\%22>`__ -- 14u15: `“Processing Genomics Data: High Performance Computing meets - Big Data” – Jan Fostier, UGent <\%22/assets/1359\%22>`__ -- 14u45: Workshop sessions part 2 – VSC staff - - - Start to VSC - - `Start to GPU <\%22/assets/1361\%22>`__ - - `Code debugging <\%22/assets/1365\%22>`__ - -- 15u35: Coffee break -- 15u55: “Why HPC and artificial intelligence engineering go hand in - hand” – Joris Coddé, CTO Diabatix -- 16u15: Tier-1 supercomputing platform as a service – -- 16u35: Poster award and closing -- 16u45: Reception -- 18u: end - -Abstracts of workshops -~~~~~~~~~~~~~~~~~~~~~~ - -**VSC for starters** - -The workshop provides a smooth introduction to supercomputing for new -users. Starting from common concepts in personal computing the -similarities and differences with supercomputing are highlighted and -some essential terminology is introduced. It is explained what users can -expect from supercomputing and what not, as well as what is expected -from them as users. - -**Start to GPU** - -GPU’s have become an important resource of computational power. For some -workloads they are extremely suited eg. Machine learning frameworks, but -also applications vendors are providing more and more support. So it is -important to keep track of things happening in your research field. This -workshop will provide you with an overview of available GPU power within -VSC and will give you guidelines how you can start using it. - -**Code debugging** - -All code contains bugs, and that is frustrating. Trying to identify and -eliminate them is tedious work. The extra complexity in parallel code -makes this even harder. However, using coding best practices can reduce -the number of bugs in your code considerably, and using the right tools -for debugging parallel code will simplify and streamline the process of -fixing your code. Familiarizing yourself with best practices will give -you an excellent return on investment. - -**Code optimization** - -Performance is a key concern in HPC (High Performance Computing). As a -developer, but also as an application user you have to be aware of the -impact of modern computer architecture on the efficiency of you code. 
-Profilers can help you identify performance hotspots so that you can -improve the performance of your code systematically. Profilers can also -help you to find the limiting factors when you run an application, so -that you can improve your workflow to try and overcome those as much as -possible. - -Paying attention to efficiency will allow you to scale your research to -higher accuracy and fidelity. - -" diff --git a/Other/file_0795_uniq.rst b/Other/file_0795_uniq.rst deleted file mode 100644 index ff421c773..000000000 --- a/Other/file_0795_uniq.rst +++ /dev/null @@ -1 +0,0 @@ -Ultrascalable concern 1365 Fostier Genomics CTO accuracy 4th researchgate Ulrich 16u35 coding fidelity frustrating 325285871_Ultra_scalable_Algorithms_for_Complex_Flows 15u55 frameworks 12u15 1359 Coddé 18u Diabatix 16u45 artificial titles award 13h30 systematically fixing 1363 Nuernberg Universitaet 15u35 Paying 1361 Workshop Rüde Familiarizing hotspots happening CERFACS Profilers 11u25 limiting streamline Trying diff --git a/README.md b/README.md index 54d9265c5..64db8988d 100644 --- a/README.md +++ b/README.md @@ -96,6 +96,10 @@ $ git commit -m "some new stuff added to VSC docs" $ git push origin feature/new_stuff ``` +⚠️ You can also automatically verify that hyperlinks to external websites are in +working condition. Run the *linkcheker* builder in Sphinx with `make +linkcheck`. + ### Pull request When you are done, create a pull request to the `master` branch of this diff --git a/img/links.png b/img/links.png deleted file mode 100755 index b99939489..000000000 Binary files a/img/links.png and /dev/null differ diff --git a/requirements.txt b/requirements.txt index 3a29d5ec5..f46681c7e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,40 +1,52 @@ -accessible-pygments==0.0.3 -alabaster==0.7.12 -Babel==2.11.0 -beautifulsoup4==4.11.1 -certifi==2023.7.22 -charset-normalizer==2.1.1 +accessible-pygments==0.0.5 +alabaster==1.0.0 +anyio==4.8.0 +babel==2.16.0 +beautifulsoup4==4.12.3 +certifi==2024.12.14 +charset-normalizer==3.4.1 +click==8.1.8 colorama==0.4.6 -docutils==0.19 -idna==3.4 +docutils==0.21.2 +h11==0.14.0 +idna==3.10 imagesize==1.4.1 -importlib-metadata==5.0.0 -Jinja2==3.1.3 -livereload==2.6.3 -MarkupSafe==2.1.1 -myst-parser==1.0.0 -packaging==21.3 -pydata-sphinx-theme==0.13.3 -Pygments==2.15.0 -pyparsing==3.0.9 -pytz==2022.6 -requests==2.31.0 -six==1.16.0 +importlib_metadata==8.5.0 +Jinja2==3.1.5 +livereload==2.7.1 +markdown-it-py==3.0.0 +MarkupSafe==3.0.2 +mdit-py-plugins==0.4.2 +mdurl==0.1.2 +myst-parser==4.0.0 +packaging==24.2 +pydata-sphinx-theme==0.16.1 +Pygments==2.19.1 +pyparsing==3.2.1 +pytz==2024.2 +PyYAML==6.0.2 +requests==2.32.3 +six==1.17.0 +sniffio==1.3.1 snowballstemmer==2.2.0 -soupsieve==2.3.2.post1 -Sphinx==5.3.0 -sphinx-autobuild==2021.3.14 -sphinx-notfound-page==0.8.3 -sphinx-reredirects==0.1.2 -sphinx-sitemap==2.5.1 -sphinx_design==0.3.0 -sphinxcontrib-applehelp==1.0.2 -sphinxcontrib-devhelp==1.0.2 -sphinxcontrib-htmlhelp==2.0.0 +soupsieve==2.6 +Sphinx==8.1.3 +sphinx-autobuild==2024.10.3 +sphinx-notfound-page==1.0.4 +sphinx-sitemap==2.6.0 +sphinx_design==0.6.1 +sphinxcontrib-applehelp==2.0.0 +sphinxcontrib-devhelp==2.0.0 +sphinxcontrib-htmlhelp==2.1.0 sphinxcontrib-jsmath==1.0.1 -sphinxcontrib-qthelp==1.0.3 -sphinxcontrib-serializinghtml==1.1.5 -tornado==6.3.3 -typing_extensions==4.5.0 -urllib3==1.26.18 -zipp==3.10.0 +sphinxcontrib-qthelp==2.0.0 +sphinxcontrib-redirects==0.2.0 +sphinxcontrib-serializinghtml==2.0.0 +starlette==0.45.2 +tornado==6.4.2 +typing_extensions==4.12.2 
+urllib3==2.3.0 +uvicorn==0.34.0 +watchfiles==1.0.4 +websockets==14.2 +zipp==3.21.0 diff --git a/source/_static/css/vsc.css b/source/_static/css/vsc.css index 69fa32b8b..10d07174c 100644 --- a/source/_static/css/vsc.css +++ b/source/_static/css/vsc.css @@ -1,22 +1,30 @@ html[data-theme="light"] { --pst-color-primary: #d96c31; /* VSC orange */ - --pst-color-link: #40807D; /* dark teal (complementary) */ + --pst-color-secondary: #2989bc; /* Haddock’s Sweater (VSC orange complementary) */ + --pst-color-link: var(--pst-color-secondary); --pst-color-link-hover: #f28e24; /* VSC secondary orange */ --pst-color-background: #ffffff; /* actual white */ --pst-color-on-background: #f9f9f9; /* lighter VSC white */ --pst-color-surface: #f5f5f5; /* cards background */ --pst-color-border: #dbdbdb; + --pst-heading-color: var(--pst-color-light-text); + --pst-color-accent: var(--pst-color-primary); + --pst-color-table-row-hover-bg: #fcf1dd; /* very light VSC yellow */ --sd-color-card-border-hover: var(--pst-color-link-hover); } html[data-theme="dark"] { --pst-color-primary: #f28e24; /* VSC secondary orange */ - --pst-color-link: #90f2f2; /* light teal (complementary) */ - --pst-color-link-hover: #f9bf19; + --pst-color-secondary: #51cbda; /* Sea serpent blue (VSC orange complementary) */ + --pst-color-link: var(--pst-color-secondary); + --pst-color-link-hover: #f9bf19; /* VSC bright orange */ --pst-color-background: #282828; /* darker VSC black */ --pst-color-on-background: #383839; /* VSC black */ --pst-color-surface: #303030; /* cards background */ --pst-color-border: #4e4e4e; + --pst-heading-color: var(--pst-color-dark-text); + --pst-color-accent: var(--pst-color-primary); + --pst-color-table-row-hover-bg: #3c282c; /* very dark VSC brick */ --sd-color-card-border-hover: var(--pst-color-link-hover); } @@ -39,7 +47,7 @@ html[data-theme="light"] .bd-content { } html[data-theme="dark"] .bd-sidebar-primary { - background-color: #3c2c28; /* very dark VSC brick */ + background-color: #3c282c; /* very dark VSC brick */ } html[data-theme="dark"] .bd-content { background-color: var(--pst-color-background); @@ -48,47 +56,117 @@ html[data-theme="dark"] .bd-content { html[data-theme="dark"] .bd-container { background: var(--pst-color-background); /* dark VSC brick: #673527 */ - background: linear-gradient(90deg, rgb(40, 40, 40) 0%, rgb(76, 39, 29) 50%, rgb(40, 40, 40) 80%, rgb(40, 40, 40) 100%) + background: linear-gradient(90deg, rgb(40, 40, 40) 0%, rgb(60, 40, 55) 50%, rgb(40, 40, 40) 80%, rgb(40, 40, 40) 100%) } html[data-theme="dark"] .bd-sidebar-primary { background-color: transparent; } } -/* toctree format */ -.toctree-wrapper .toctree-l1 { - font-size: 1.2rem; - font-weight: bold; -} -.toctree-wrapper .toctree-l2 { - font-size: 1.0rem; - font-weight: normal; +/* Text decorations on links and buttons*/ +/* replace default underlines with a dotted underline on hover only */ +a, +a.nav-link, +html .sd-btn:hover { + text-decoration: none; } -/* make sphinx-design cards slightly non-transparent */ -.sd-card { - background-color: var(--pst-color-surface); +a:hover, +a.nav-link:hover, +.toc-entry a.nav-link:hover, +nav.bd-links li > a:hover, +.bd-content .sd-tab-set > input:not(:checked, :focus-visible) + label:hover { + text-decoration: underline dotted 1.5pt; + text-decoration-skip-ink: auto; } -/* line numbers in codeblocks on transparent column */ -html[data-theme="light"] .highlight span.linenos { background-color: transparent; } -html[data-theme="dark"] .highlight span.linenos { background-color: transparent; } 
- -/* style x-twitter img icon depending on active theme */ +/* fix styling of local icons to match other icons on navbar*/ +html[data-theme="light"] a.nav-link img.icon-link-image { + filter: brightness(0) saturate(100%) invert(31%) sepia(45%) saturate(301%) hue-rotate(176deg) brightness(91%) contrast(85%); + height: 1.4em; +} html[data-theme="light"] a.nav-link:hover img.icon-link-image { - /* Dark VSC orange: #d96c31. Equivalent to --pst-color-link-hover */ - filter: brightness(0) saturate(100%) invert(46%) sepia(89%) saturate(502%) hue-rotate(337deg) brightness(91%) contrast(85%); + /* Haddock's Sweater: #2989bc (--pst-color-secondary) */ + filter: brightness(0) saturate(100%) invert(49%) sepia(51%) saturate(667%) hue-rotate(156deg) brightness(86%) contrast(91%); } html[data-theme="dark"] a.nav-link img.icon-link-image { - filter: brightness(150%) contrast(120%); + filter: brightness(0) saturate(100%) invert(70%) sepia(8%) saturate(362%) hue-rotate(175deg) brightness(91%) contrast(94%); + height: 1.4em; } html[data-theme="dark"] a.nav-link:hover img.icon-link-image { - /* Light VSC orange: #f28e24. Equivalent to --pst-color-link-hover */ - filter: brightness(0) saturate(100%) invert(56%) sepia(81%) saturate(1394%) hue-rotate(348deg) brightness(106%) contrast(90%); + /* Sea serpent blue: #51cbda (--pst-color-secondary) */ + filter: brightness(0) saturate(100%) invert(72%) sepia(87%) saturate(394%) hue-rotate(147deg) brightness(92%) contrast(85%); +} + +/* make sphinx-design cards slightly non-transparent */ +.sd-card { background-color: var(--pst-color-surface); } +/* add class to handle nested cards */ +.nested-card-container { + display: grid; + grid-template-rows: auto auto 1fr; +} +.nested-card-container .sd-container-fluid .sd-row { + height: 100%; + align-content: center; +} +.nested-card-top { z-index: 20; } +/* theme of cards displaying services */ +.service-card-toc .sd-card .sd-card-body { padding: 1.5rem } +.service-card-toc .sd-card .sd-card-body a { color: var(--pst-heading-color); } +.service-card-toc .toctree-wrapper .toctree-l1 { font-size: 1.2rem; } +.service-card-toc .toctree-wrapper .toctree-l1 > ul { font-size: 1.0rem; margin-top: 0.5rem; } +.service-card-toc .toctree-wrapper .toctree-l1 > ul > li { list-style-type: disc; } +.service-card-tier1 .sd-card .sd-card-body { + background: #83C0C1; /* fallback for old browsers */ + background: linear-gradient(to bottom, #83C0C133, #83C0C166); +} +.service-card-tier2 .sd-card .sd-card-body { + background: #6962AD; /* fallback for old browsers */ + background: linear-gradient(to bottom, #6962AD33, #6962AD66); } +.service-card-term .sd-card .sd-card-body { + background: #640D5F; /* fallback for old browsers */ + background: linear-gradient(to bottom, #640D5F33, #640D5F66); +} +.service-card-portal .sd-card .sd-card-body { + background: #D91656; /* fallback for old browsers */ + background: linear-gradient(to bottom, #D9165633, #D9165666); +} +.service-card-soft .sd-card .sd-card-body { + background: #FFB200; /* fallback for old browsers */ + background: linear-gradient(to bottom, #FFB20033, #FFB20066); +} +.service-card-jobs .sd-card .sd-card-body { + background: #EB5B00; /* fallback for old browsers */ + background: linear-gradient(to bottom, #EB5B0033, #EB5B0066); +} +.service-card-globus .sd-card .sd-card-body { + background: #0E21A0; /* fallback for old browsers */ + background: linear-gradient(to bottom, #0E21A033, #0E21A066); +} +.service-card-vms .sd-card .sd-card-body { + background: #206A5D; /* fallback for old 
browsers */ + background: linear-gradient(to bottom, #206A5D33, #206A5D66); +} +.service-card-vmsgpu .sd-card .sd-card-body { + background: #81B214; /* fallback for old browsers */ + background: linear-gradient(to bottom, #81B21433, #81B21466); +} +.service-card-orch .sd-card .sd-card-body { + background: #FFCC29; /* fallback for old browsers */ + background: linear-gradient(to bottom, #FFCC2933, #FFCC2966); +} + +/* line numbers in codeblocks on transparent column */ +html[data-theme="light"] .highlight span.linenos { background-color: transparent; } +html[data-theme="dark"] .highlight span.linenos { background-color: transparent; } /* add space after icons in breadcrumb titles */ .breadcrumb-item > span.fas { margin-right: 0.3rem; } +/* fix higlight table row under cursor */ +.table tbody tr:hover { + background-color: var(--pst-color-table-row-hover-bg); +} diff --git a/source/_static/fa-square-bluesky.svg b/source/_static/fa-square-bluesky.svg new file mode 100644 index 000000000..9009595a4 --- /dev/null +++ b/source/_static/fa-square-bluesky.svg @@ -0,0 +1,37 @@ + + + + + + + diff --git a/source/_static/fa-square-x-twitter.svg b/source/_static/fa-square-x-twitter.svg deleted file mode 100644 index b4186b1ba..000000000 --- a/source/_static/fa-square-x-twitter.svg +++ /dev/null @@ -1,39 +0,0 @@ - - - - - - - diff --git a/source/access/access_methods.rst b/source/access/access_methods.rst deleted file mode 100644 index 512acbf74..000000000 --- a/source/access/access_methods.rst +++ /dev/null @@ -1,124 +0,0 @@ -.. _access methods: - -################################################# -:fas:`right-to-bracket` Access VSC Infrastructure -################################################# - -.. toctree:: - :hidden: - - windows_client - macos_client - linux_client - -We provide multiple methods to access the VSC clusters and use their -computational resources. Not all options may be equally supported across all -clusters though. In case of doubt, please contact the corresponding -:ref:`support team `. - -Terminal interface -================== - -You can access the command line on any VSC cluster by logging in via SSH to the -corresponding login node. To this end, you will need to install and configure -some SSH client software in your computer. - -.. grid:: 3 - :gutter: 4 - - .. grid-item-card:: :fab:`windows` Windows - :columns: 12 4 4 4 - :link: windows_client - :link-type: doc - - SSH client setup - - .. grid-item-card:: :fab:`apple` macOS - :columns: 12 4 4 4 - :link: macos_client - :link-type: doc - - SSH client setup - - .. grid-item-card:: :fab:`linux` Linux - :columns: 12 4 4 4 - :link: linux_client - :link-type: doc - - SSH client setup - -.. note:: - - |KUL| When logging in to a KU Leuven cluster, take a look - at the page on :ref:`Multi Factor Authentication`. - -GUI applications on the clusters -================================ - -If you wish to use programs with a graphical user interface (GUI), you'll need -an X server on your client system. The available options depend on the -operating system in your computer: - -.. grid:: 3 - :gutter: 4 - - .. grid-item-card:: :fab:`windows` Windows - :columns: 12 4 4 4 - :link: windows_gui - :link-type: ref - - GUI access setup - - .. grid-item-card:: :fab:`apple` macOS - :columns: 12 4 4 4 - :link: macos_gui - :link-type: ref - - GUI access setup - - .. 
grid-item-card:: :fab:`linux` Linux - :columns: 12 4 4 4 - :link: linux_gui - :link-type: ref - - GUI access setup - -Alternative solutions do also exist that might be more performant or cover more -specific use cases. In all cases, it is necessary to install some extra -software in your computer to be able to run graphical applications on the VSC -clusters. See below for guides on available solutions: - -.. warning:: - - The following options might not be equally supported across all VSC - clusters. - -.. tab-set:: - - .. tab-item:: General - - .. toctree:: - :maxdepth: 1 - - paraview_remote_visualization - - .. tab-item:: KU Leuven/UHasselt - - .. toctree:: - :maxdepth: 1 - - nx_start_guide - ../leuven/services/openondemand - -VPN -=== - -Logging in to the login nodes of your institute's cluster may not work -if your computer is not on your institute's network (e.g., when you work -from home). In those cases you will have to set up a -:doc:`VPN (Virtual Private Networking) ` connection if your institute -provides this service. - -.. toctree:: - - vpn diff --git a/source/access/access_using_mobaxterm.rst b/source/access/access_using_mobaxterm.rst deleted file mode 100644 index c2a797427..000000000 --- a/source/access/access_using_mobaxterm.rst +++ /dev/null @@ -1,231 +0,0 @@ -.. _access using mobaxterm: - -Text-mode access using MobaXterm -================================ - -Prerequisite -============ - -.. tab-set:: - - .. tab-item:: KU Leuven - - To access KU Leuven clusters, only an approved :ref:`VSC account ` is needed. - - .. tab-item:: UGent, VUB, UAntwerpen - - To access clusters hosted at these sites, you need a - :ref:`public/private key pair ` of which the public key - needs to be :ref:`uploaded via the VSC account page `. - -Download and setup MobaXterm -============================ - -Go to the `MobaXterm`_ website and download the free version. Make sure to -select the 'Portable edition' from the download page. Create a folder called -``MobaXterm`` in a known location in your computer and decompress the contents -of the downloaded zip file inside it. - -Setup a shortcut for a remote session -------------------------------------- - -#. Double click the ``MobaXterm_Personal`` executable file inside the - ``MobaXterm`` folder. - The MobaXterm main window will appear on your screen. It should be similar to this one: - - .. _mobaxterm-main-window: - .. figure:: access_using_mobaxterm/mobaxterm_main_window.png - :alt: mobaxterm main - -#. Click on the `Session` icon in the top left corner. - -#. The 'Session settings' configuration panel will open; click on the SSH icon in the top row - and you should see a window like this: - - .. figure:: access_using_mobaxterm/mobaxterm_session_settings_ssh.png - :alt: ssh settings window - -#. In the 'Remote host' field introduce the cluster remote address of - your :ref:`VSC cluster `, which should be written in the form ``my-vsc-cluster.example.com``. - Tick the 'Specify username' box and introduce your VSC account username. - Click the 'Advanced SSH settings' tab for additional configurations. - - The next few steps depends on the choice of VSC site you are trying to connect to. - - .. _step-advanced-ssh-settings: - - .. tab-set:: - - .. tab-item:: KU Leuven - - Make sure that the 'Use private key' option is disabled. - You may additionally opt for enabling the 'X11-Forwarding' and the - 'Compression' options. - - .. 
figure:: access_using_mobaxterm/mobaxterm_adv_kul.png - :alt: advanced SSH options for KU Leuven clusters - - With this configuration, it is strongly recommended to setup your - :ref:`SSH agent in MobaXterm ` which is - described below. - - Upon successful connection attempt you will be prompted to copy/paste - the firewall URL in your browser as part of the MFA login procedure: - - .. _vsc_firewall_certificate_authentication: - .. figure:: access_using_mobaxterm/vsc_firewall_certificate_authentication.PNG - :alt: vsc_firewall_certificate_authentication - - Confirm by clicking 'Yes'. - Once the MFA has been completed you will be connected to a login node. - - .. tab-item:: UGent, VUB, UAntwerpen - - Tick the 'Use private key' box and click on the file icon in that field. - A file browser will be opened; locate the private SSH key file you created - when requesting your VSC account. - Please keep in mind that these settings have to be updated if the location - of the private SSH key ever changes. - Check that the 'SSH-browser type' is set to 'SFTP protocol'. - - .. figure:: access_using_mobaxterm/mobaxterm_advanced_ssh.png - :alt: advanced ssh options - - .. _step-sftp-tab: - - Press the 'OK' button and you should be prompted for your passphrase. - Enter here the passphrase you chose while creating your public/private key pair. - The characters will be hidden and nothing at all will appear as you - type (no circles, no symbols). - -#. You should connect to the cluster and be greeted by a screen similar to this one: - - .. figure:: access_using_mobaxterm/mobaxterm_hydra_login.png - :alt: hmem greeting - - On the left sidebar (in the 'Sftp' tab) there is a file browser of your - home directory in the cluster. You will see by default many files whose - names start with a dot ('.') symbol. These are hidden files of the - Linux environment and you should neither delete nor move them. You can hide - the hidden files by clicking on the right most button at the top of the file - browser. - -#. Once you disconnect from the cluster (by typing ``exit`` or closing the - terminal tab) you will find on the left sidebar (in the 'Sessions' tab) - a shortcut to the session you just setup. From now on, when you open - MobaXterm, you can just double click that shortcut and you will start - a remote session on the :ref:`VSC cluster ` that you used in previous steps. - - To create a direct shortcut on your desktop (optional), - right click on the saved session name and choose - 'Create a desktop shortcut' (see image below). An icon will appear on your - Desktop that will start MobaXterm and open a session in the corresponding cluster. - - .. figure:: access_using_mobaxterm/mobaxterm_session_shortcut.png - :alt: session desktop shortcut - - -#. Now you can create connections to other :ref:`VSC clusters ` - by repeating these steps and changing the address of the cluster. - You will have then a shortcut on the Sessions tab of the left sidebar - for each of them to connect to. - - -Import PuTTY sessions ---------------------- - -If you have already configured remote sessions within PuTTY, then MobaXterm -will automatically import them upon installation and they will appear on the -left-side pane. -To edit a session, right-click on the session and then choose 'Edit session'. -Ensure that all settings are correct under the 'SSH' tab and the -'Advanced SSH settings' sub-tab: - -.. _mobaxterm_putty_imported_sessions: -.. 
figure:: access_using_mobaxterm/mobaxterm_putty_imported_sessions.PNG - :alt: mobaxterm_putty_imported_sessions - -If the session has been properly imported you will see that all the necessary -fields are already filled in. -Click 'OK' to close the 'Edit session' window. - - - .. _copying-files-mobaxterm: - -Copying files to and from the cluster -------------------------------------- - -Once you've setup the shortcut for connecting to a cluster, as we -noted in `step 6 <#step-sftp-tab>`_ of the previous section, you will see -on the left sidebar (in the 'Sftp' tab) a file browser on the cluster you are -connected to. - -You can simply drag and drop files from your computer to that panel and they -will be copied to the cluster. You can also drag and drop files from the -cluster to your computer. Alternatively, you can use the file tools located at the -top of the file browser. - -Remember to always press the ``Refresh current folder`` button after you -copied something or created/removed a file or folder on the cluster. - -.. _mobaxterm-ssh-agent: - -Setup an SSH agent to avoid typing the passphrase at each login ---------------------------------------------------------------- - -Once you've successfully setup the connection to your cluster, -you will notice that you are prompted for the passphrase at -each connection you make to a cluster. -To avoid retyping it each time, you can setup an internal SSH agent in -MobaXterm that will take care of unlocking the private key or using an -SSH certificate for :ref:`Multi-Factor Authentication ` when you -open the application. The SSH agent will save the passphrase after you have -introduced it once. - -#. Open the MobaXterm program and go to the menu 'Settings -> - Configuration' - -#. You should see the `MobaXterm Configuration` panel. In the 'General' tab - choose the 'MobaXterm passwords management' option; a new panel will be - opened; make sure that 'Save sessions passwords' has the options - 'Always' and 'Save SSH keys passphrases as well' selected (as shown below) - and click 'OK'. - - .. figure:: access_using_mobaxterm/mobaxterm_save_passwords.png - :alt: mobaxterm save passwords option - -#. Open the 'SSH' tab in the same `MobaXterm Configuration` panel. - Make sure that all the boxes below the 'SSH agents' section are - ticked. - -#. Press the '+' button in the 'Load following keys at MobAgent startup' - field, look for your private key file and select it. At the end of the process, the panel should - look like this (the location of your private SSH key may be different): - - .. figure:: access_using_mobaxterm/mobaxterm_ssh_agent.png - :alt: mobaxterm ssh agent setup - - Please, keep in mind that these settings will have to be updated if the - location of private key ever changes. - -#. Press OK and when prompted for restarting MobaXterm, choose to do so. - -#. Once MobaXterm restarts you will be asked for the private key passphrase at - launch. This will occur only once and after you introduce it correctly it will stay saved for all - following sessions. Double clicking on a shortcuts for a cluster - should open the corresponding connection directly. - -.. _troubleshoot_mobaxterm: - -Troubleshooting MobaXterm connection issues -------------------------------------------- - -If you have trouble accessing the infrastructure, the support staff will -likely ask you to provide a log. After you have made a failed attempt to connect, -you can obtain the connection log by - -#. 
ctrl-right-clicking in the MobaXterm terminal and selecting 'Event Log'. -#. In the dialog window that appears, click the 'Copy' button to copy the - log messages. They are copied as text and can be pasted in your message - to support. - diff --git a/source/access/account_management.rst b/source/access/account_management.rst deleted file mode 100644 index a7d7bf2f7..000000000 --- a/source/access/account_management.rst +++ /dev/null @@ -1,65 +0,0 @@ -################################### -:fas:`user-gear` Account management -################################### - -Account management at the VSC is mostly done through the `VSC account page`_ -using your institute account rather than your VSC account. - -Managing user credentials -------------------------- - -- You use the VSC account page to request your account as explained on - the ":ref:`apply for account`" page. You'll also need to - create an SSH-key which is also explained on those pages. -- Once your account is active and you can log on to your home cluster, - you can use the account management pages for many other operations: - - - If you want to :ref:`access the VSC clusters from more than one - computer `, - it is good practice to use a different key for each computer. You - can upload additional keys via the account management page. In - that way, if your computer is stolen, all you need to do is remove - the key for that computer and your account is safe again. - - If you've :ref:`messed up your keys `, - you can restore the keys on the cluster or upload a new key and - then delete the old one. - -Group management ----------------- - -Once your VSC account is active and you can log on to your home cluster, -you can also manage groups through the account management web interface. -Groups (a Linux/UNIX concept) are used to control access to licensed -software (e.g., software licenses paid for by one or more research -groups), to create subdirectories where researchers working on the same -project can collaborate and control access to those files, and to -control access to project credits on clusters that use these (all -clusters at KU Leuven). - -.. toctree:: - :maxdepth: 2 - - how_to_create_manage_vsc_groups - -.. _virtual_organization: - -Virtual Organization management -------------------------------- - -For UGent and VUB users only: You can create or join a so-called Virtual -Organization or VO, which gives access to extra storage in the HPC cluster that -is shared between the members of the VO. VUB users may consult the VUB-HPC docs -on `Virtual Organization `_ for more info. - -Managing disk space -------------------- - -The amount of disk space that a user can use on the various file systems -on the system is limited by quota on the amount of disk space and number -of files. UGent and VUB users can see and request upgrades for their quota on -the Account management site (Users need to be in a VO (Virtual -Organization) to request additional quota. Creating and joining a VO is -also done through the Account Management website). On other sites -checking your disk space use is still :ref:`mostly done from the command -line ` and requesting more quota is done via email. - diff --git a/source/access/authentication.rst b/source/access/authentication.rst deleted file mode 100644 index 31af6405b..000000000 --- a/source/access/authentication.rst +++ /dev/null @@ -1,30 +0,0 @@ -######################### -:fas:`key` Authentication -######################### - -Connections to VSC clusters are always encrypted to secure your data. 
-Depending on the destination VSC site that you are going to log in to, -and based on your affiliation to one of the major Flemish universities, -you either need a cryptographic key pair (for UAntwerpen, UGent and VUB), -or you need to go through multi-factor authentication (for KU Leuven). - -If you are accessing the VSC clusters from abroad, you need to first -authenticate yourself via the `VSC firewall page `_. - -Below, we elaborate further on how to authenticate yourself depending on your -institute affiliation and your target VSC site. - -.. toctree:: - :maxdepth: 3 - - generating_keys - -|KUL| Multi Factor Authentication (MFA) is an augmented level of security -which, as the name suggests, requires multiple steps to successfully -authenticate. This method is necessary to connect to the KU Leuven clusters. - -.. toctree:: - :maxdepth: 3 - - mfa_login - diff --git a/source/access/creating_a_ssh_tunnel_using_putty.rst b/source/access/creating_a_ssh_tunnel_using_putty.rst deleted file mode 100644 index e4b3c0476..000000000 --- a/source/access/creating_a_ssh_tunnel_using_putty.rst +++ /dev/null @@ -1,64 +0,0 @@ -.. _ssh tunnel using PuTTY: - -Creating a SSH tunnel using PuTTY -================================= - -Prerequisites ------------- - -`PuTTY`_ must be installed on -your computer, and you should be able to :ref:`connect via SSH to the -cluster's login node `. - -Background ---------- - -Because of one or more firewalls between your desktop and the HPC -clusters, it is generally impossible to communicate directly with a -process on the cluster from your desktop except when the network -managers have given you explicit permission (which for security reasons -is not often done). One way to work around this limitation is SSH -tunneling. - -There are several cases where this is useful: - -- Running X applications on the cluster: The X program cannot directly - communicate with the X server on your local system. In this case, the - tunneling is easy to set up as PuTTY will do it for you if you select - the right options on the X11 settings page as explained on the :ref:`page - about text-mode access using PuTTY `. -- Running a server application on the cluster that a client on the - desktop connects to. One example of this scenario is :ref:`ParaView in - remote visualization mode `, - with the interactive client on the desktop and the data processing - and image rendering on the cluster. How to set up the tunnel for that - scenario is also :ref:`explained on that page `. -- Running clients on the cluster and a server on your desktop. In this - case, the source port is a port on the cluster and the destination - port is on the desktop. - -Procedure: A tunnel from a local client to a server on the cluster ------------------------------------------------------------------- - -#. Log in on the login node. - -#. Start the server job, note the name of the compute node the job is running - on (e.g., 'r1i3n5'), as well as the port the server is listening on - (e.g., '44444'). - -#. Set up the tunnel: - - .. figure:: creating_a_ssh_tunnel_using_putty/putty_tunnel_config.png - - #. Right-click in PuTTY's title bar, and select 'Change Settings...'. - #. In the 'Category' pane, expand 'Connection' -> 'SSH', and select - 'Tunnels' as shown below: - #. In the 'Source port' field, enter the local port to use (e.g., - 11111). - #. In the 'Destination' field, enter the compute node's name and port, - separated by a colon (e.g., r1i3n5:44444 as in the example above). - #. Click the 'Add' button. - #. Click the 'Apply' button. - -The tunnel is now ready to use. 
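For comparison only (this is not part of the PuTTY procedure above), the same
port forwarding can be expressed with a single OpenSSH command. The account,
login node, compute node and ports below are just the example values used in
this walkthrough, so substitute your own::

    # Forward local port 11111 to port 44444 on compute node r1i3n5,
    # tunnelling through the cluster's login node.
    ssh -L 11111:r1i3n5:44444 vscXXXXX@vsc.login.node

Any client running on your desktop can then reach the remote server by
connecting to ``localhost:11111``.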
- diff --git a/source/access/eclipse_intro.rst b/source/access/eclipse_intro.rst deleted file mode 100644 index 3a780bbf7..000000000 --- a/source/access/eclipse_intro.rst +++ /dev/null @@ -1,11 +0,0 @@ -Eclipse is a popular multi-platform Integrated Development -Environment (IDE) very well suited for code development on clusters. - -* Read our :ref:`Eclipse introduction ` to - find out why you should consider using Eclipse if you develop code - and how to get it. -* You can use :ref:`Eclipse on the desktop as a remote editor for the - cluster `. -* You can combine the remote editor feature with version control - from Eclipse, but some care is needed, and :ref:`here's how to do - it `. diff --git a/source/access/generating_keys_with_openssh_on_os_x.rst b/source/access/generating_keys_with_openssh_on_os_x.rst deleted file mode 100644 index 89958b4dc..000000000 --- a/source/access/generating_keys_with_openssh_on_os_x.rst +++ /dev/null @@ -1,23 +0,0 @@ -.. _generating keys macos: - -##################################### -:fab:`apple` Generating keys on macOS -##################################### - -Requirements: - -* macOS operating system -* OpenSSH - -Every macOS install comes with its own implementation of OpenSSH, so you -don't need to install any third-party software to use it. Just open a -Terminal window and jump in! Because of this, you can use the same -commands as specified in the on the ":ref:`generating keys linux`" -section to access the cluster and transfer files. - -Create a public/private key pair -================================ - -Generating a public/private key pair is identical to what is described -for the :ref:`linux client `, that is, by using the -ssh-keygen command in a Terminal window. diff --git a/source/access/linux_client.rst b/source/access/linux_client.rst deleted file mode 100644 index 4bb0bb1d6..000000000 --- a/source/access/linux_client.rst +++ /dev/null @@ -1,81 +0,0 @@ -.. _linux_client: - -############################## -:fab:`linux` Access from Linux -############################## - -Since all VSC clusters use Linux as their main operating system, you -will need to get acquainted with Linux using the command-line interface and -using the terminal. To open a terminal in Linux when using KDE, choose -Applications > System > Terminal > Konsole. When using GNOME, choose -Applications > Accessories > Terminal. - -If you don't have any experience with using the command-line interface -in Linux, we suggest you to read the :ref:`basic Linux -usage ` section first. - -Getting ready to login -====================== - -Before requesting an account, you need to generate a pair of ssh -keys. One popular way to do this on Linux is :ref:`using the freely -available OpenSSH client ` -which you can then also use to log on to the clusters. - -Connecting to the cluster -========================= - -Text-mode session ------------------ - -The OpenSSH :ref:`ssh command ` can be used to open -a connection in a Linux terminal session. - -.. toctree:: - :maxdepth: 2 - - text_mode_access_using_openssh - -.. _linux_gui: - -Display graphical programs -========================== - -X server --------- - -No extra software is needed on a Linux client system, but you need -to use the appropriate options with the ssh command as explained -on :ref:`the page on OpenSSH `. - -NX client ---------- - -|KUL| On the KU Leuven/UHasselt clusters it is also possible to -:ref:`use the NX Client ` to log -on to the machine and run graphical programs. 
This requires -additional client software that is currently available for -Windows, macOS, Linux, Android and iOS. The advantage over -displaying X programs directly on your Linux screen is that you -can put your laptop to sleep, disconnect and move to another network -without losing your X-session. Performance may also be better -with many programs over high-latency networks. - -VNC --- - -.. include:: vnc_support.rst - -Software development ==================== - -Eclipse ------- - -.. include:: eclipse_intro.rst - -Version control --------------- - -Linux supports all popular version control systems. See :ref:`our -introduction to version control systems `. diff --git a/source/access/macos_client.rst b/source/access/macos_client.rst deleted file mode 100644 index 1da8c604e..000000000 --- a/source/access/macos_client.rst +++ /dev/null @@ -1,80 +0,0 @@ -.. _macos_client: - -############################## -:fab:`apple` Access from macOS -############################## - -Since all VSC clusters use Linux as their main operating system, you -will need to get acquainted with using the command-line interface and -using the Terminal. To open a Terminal window in macOS (formerly OS X), -choose Applications > Utilities > Terminal in the Finder. - -If you don't have any experience with using the Terminal, we suggest -you read the :ref:`basic Linux usage ` section -first (which also applies to macOS). - -Getting ready to login ====================== - -Before requesting an account, you need to generate a pair of ssh -keys. One popular way to do this on macOS is :ref:`using the OpenSSH -client ` included with macOS -which you can then also use to log on to the clusters. - -Connecting to the cluster ========================= - -.. toctree:: - :maxdepth: 2 - - text_mode_access_using_openssh_or_jellyfissh - -.. _macos_gui: - -Display graphical programs ========================== - -X server -------- - -Linux programs use the X protocol to display graphics on local or -remote screens. To use your Mac as a remote screen, you need to -install an X server. `XQuartz `_ -is one that is freely available. Once the X server is up and -running, you can simply open a terminal window and connect to the -cluster using the command line SSH client in the same way as you -would on Linux. - -NX client --------- - -|KUL| On the KU Leuven/UHasselt clusters it is possible to :ref:`use the NX -Client ` to log on to the machine and run graphical -programs. Instead of an X-server, another piece of client software is -required. - - -VNC --- - -.. include:: vnc_support.rst - -Software development ==================== - -Eclipse ------- - -.. include:: eclipse_intro.rst - -.. note:: - To get the full functionality of the Parallel Tools Platform and Fortran - support on macOS, you need :ref:`to install some additional software and - start Eclipse in a special way as we explain here `. - -Version control --------------- - -Most popular version control systems, including Subversion and git, -are supported on macOS. See :ref:`our introduction to version control -systems `. diff --git a/source/access/setting_up_a_ssh_proxy_with_putty.rst b/source/access/setting_up_a_ssh_proxy_with_putty.rst deleted file mode 100644 index dc134dabc..000000000 --- a/source/access/setting_up_a_ssh_proxy_with_putty.rst +++ /dev/null @@ -1,143 +0,0 @@ -.. _ssh proxy with PuTTY: - -Setting up an SSH proxy with PuTTY ================================== - -.. 
warning:: - - If you simply want to configure PuTTY to connect to login nodes - of the VSC clusters, this is not the page you are looking for. - Please check out :ref:`how to configure PuTTY - `. - -Rationale ---------- - -SSH provides a safe way of connecting to a computer, encrypting traffic -and avoiding passing passwords across public networks where your traffic -might be intercepted by someone else. Yet making a server accessible -from all over the world makes that server very vulnerable. Therefore -servers are often put behind a *firewall*, another computer or device -that filters traffic coming from the internet. - -In the VSC, all clusters are behind a firewall, but for the tier-1 -cluster Muk this firewall is a bit more restrictive than for other -clusters. Muk can only be approached from certain other computers in the -VSC network, and only via the internal VSC network and not from the -public network. To avoid having to log on twice, first to another login -node in the VSC network and then from there on to Muk, one can set up a -so-called *ssh proxy*. You then connect through another computer (the -*proxy server*) to the computer that you really want to connect to. - -This all sounds quite complicated, but once things are configured -properly it is really simple to log on to the host. - -Setting up a proxy in PuTTY --------------------------- - -.. warning:: - - In the screenshots, we show the proxy setup for user ``vscXXXXX`` to - the ``login.muk.gent.vsc`` login node for the muk cluster at UGent - via the login node ``vsc.login.node``. - You will have to - - #. replace ``vscXXXXX`` with your own VSC account, and - #. replace ``login.muk.gent.vsc`` by the node that is behind the - firewall that you want to access, and - #. find the name of the login node for the cluster you want - to use as a proxy in the sections on :ref:`the local VSC - clusters `, and replace ``vsc.login.node`` accordingly. - -Setting up the connection in PuTTY is a bit more complicated than for a -simple direct connection to a login node. - -#. First you need to start up Pageant and load your private key into it. - :ref:`See the instructions on our "Using Pageant" - page `. - -#. In PuTTY, go first to the "Proxy" category (under - "Connection"). In the Proxy tab sheet, you need to fill in the - following information: - - .. figure:: setting_up_a_ssh_proxy_with_putty/putty_proxy_section.png - - #. Select the proxy type: "Local" - #. Give the name of the "proxy server". This is *vsc.login.node*, - your usual VSC login node, and not the computer on which you - want to log on and work. - #. Make sure that the "Port" number is 22. - #. Enter your VSC-id in the "Username" field. - #. In the "Telnet command, or local proxy command" field, enter the string :: - - plink -agent -l %user %proxyhost -nc %host:%port - - .. note:: - - "plink" (PuTTY Link) is a Windows program and comes with the full - PuTTY suite of applications. It is the command line version of PuTTY. - In case you've only installed the executables putty.exe and - pageant.exe, you'll need to download plink.exe from the `PuTTY`_ - web site as well. We strongly advise you to simply install the whole - PuTTY suite of applications using the installer provided on the `PuTTY download - site`_. - -#. Now go to the "Data" category in PuTTY, again under "Connection". - - .. figure:: setting_up_a_ssh_proxy_with_putty/putty_data_section.png - - #. Fill in your VSC-id in the "Auto-login username" field. - #. 
Leave the other values untouched (the values shown in the - screen dump are most likely fine). - -#. Now go to the "Session" category - - .. figure:: setting_up_a_ssh_proxy_with_putty/putty_session_section.png - - #. Set the field "Host Name (or IP address)" to the computer - you want to log on to. If you are setting up a proxy - connection to access a computer on the VSC network, - you will have to use its name on the internal VSC network. - E.g., for the login nodes of the tier-1 cluster Muk at - UGent, this is **login.muk.gent.vsc** and for the cluster - on which you can test applications for the Muk, this is - **gligar.gligar.gent.vsc**. - #. Make sure that the "Port" number is 22. - #. Finally give the configuration a name in the field "Saved - Sessions" and press "Save". Then you won't have to enter - all the above information again. - #. And now you're all set up to go. Press the "Open" button - on the "Session" tab to open a terminal window. - -For advanced users ------------------- - -If you have an X-server on your Windows PC, you can also use X11 -forwarding and run X11-applications on the host. All you need to do is -click the box next to "Enable X11 forwarding" in the category -"Connection" -> "SSH" -> "X11". - -What happens behind the scenes: - -- By specifying "Local" as the proxy type, you tell PuTTY not to use - one of its own built-in ways of setting up a proxy, but to use the - command that you specify in the "Telnet command" field of the "Proxy" - category. - -- In the command :: - - plink -agent -l %user %proxyhost -nc %host:%port - - ``%user`` will be replaced by the userid you specify in the "Proxy" - category screen, ``%proxyhost`` will be replaced by the host you specify - in the "Proxy" category screen (**vsc.login.node** in the - example), ``%host`` by the host you specified in the "Session" - category (login.muk.gent.vsc in the example) and ``%port`` by the number - you specified in the "Port" field of that screen (and this will - typically be 22). - -- The plink command will then set up a connection to ``%proxyhost`` using - the userid ``%user``. The ``-agent`` option tells plink to use Pageant for - the credentials. The ``-nc`` option tells plink to tell the SSH - server on ``%proxyhost`` to further connect to ``%host:%port``. - diff --git a/source/access/text_mode_access_using_openssh.rst b/source/access/text_mode_access_using_openssh.rst deleted file mode 100644 index 188f32b02..000000000 --- a/source/access/text_mode_access_using_openssh.rst +++ /dev/null @@ -1,149 +0,0 @@ -.. _OpenSSH access: - -Text-mode access using OpenSSH ============================== - -Prerequisite: OpenSSH --------------------- - -Before connecting with OpenSSH, make sure you have completed the following steps: - -#. :ref:`Create a public/private SSH key pair `, which - will be used to authenticate when making a connection. - -#. :ref:`Apply for a VSC account` and upload your public SSH - key to the `VSC accountpage `_. - -#. :ref:`Link your private key to your VSC-id ` - in your :ref:`SSH configuration file ` at ``~/.ssh/config``. - - -How to connect? --------------- - -In many cases, a text mode connection to one of the VSC clusters is -sufficient. To make such a connection, the ``ssh`` command is used: - -:: - - $ ssh @ - -Here, - -- ```` is your VSC username that you have received by mail - after your request was approved, e.g., ``vsc98765``, and -- ```` is the name of the login node of the VSC cluster you - want to connect to, e.g., ``login.hpc.kuleuven.be``. 
- -You can find the names of the login nodes for the various clusters -in the sections on the :ref:`available hardware `. - -.. note:: - - The first time you make a connection to a login node, you will be prompted - to verify the authenticity of the login node, e.g., - - :: - - $ ssh vsc98765@login.hpc.kuleuven.be - The authenticity of host 'login.hpc.kuleuven.be (134.58.8.192)' can't be established. - RSA key fingerprint is b7:66:42:23:5c:d9:43:e8:b8:48:6f:2c:70:de:02:eb. - Are you sure you want to continue connecting (yes/no)? - - -How to connect with support for graphics? ------------------------------------------ - -On most clusters, we support a number of programs that have a GUI mode -or display graphics otherwise through the X system. To be able to -display the output of such a program on the screen of your Linux -machine, you need to tell ssh to forward X traffic from the cluster to -your Linux desktop/laptop by specifying the ``-X`` option. There is also -an option ``-x`` to disable such traffic, depending on the default options -on your system as specified in ``/etc/ssh/ssh_config``, or ``~/.ssh/config``. - -Example: - -:: - - $ ssh -X vsc98765@login.hpc.kuleuven.be - -To test the connection, you can try to start a simple X program on the -login nodes, e.g., ``xeyes``. The latter will open a new -window with a pair of eyes. The pupils of these eyes should follow your -mouse pointer around. Close the program by typing \\"ctrl+c\": the -window should disappear. - -If you get the error 'DISPLAY is not set', you did not correctly enable -the X-Forwarding. - - -How to configure the OpenSSH client? ------------------------------------- - -The SSH configuration file ``~/.ssh/config`` can be used to configure your SSH -connections. For instance, to automatically define your username, or the -location of your key, or add X forwarding. See below for some useful tips to -help you save time when working on a terminal-based session. - -.. toctree:: - - ssh_config - -Managing keys with an SSH agent -------------------------------- - -It is convenient to use an SSH-agent to avoid having to enter your private -key's passphrase all the time when establishing a new connection. - -.. toctree:: - - using_ssh_agent - -Proxies and network tunnels to compute nodes --------------------------------------------- - -Network communications between your local machine and some node in the cluster -other than the login nodes will be blocked by the cluster firewall. In such a -case, you can directly open a shell in the compute node with an SSH connection -using the login node as a proxy or, alternatively, you can also open a network -tunnel to the compute node which will allow direct communication from software -in your computer to certain ports in the remote system. - -.. toctree:: - - setting_up_a_ssh_proxy - creating_a_ssh_tunnel_using_openssh - -.. _troubleshoot_openssh: - -Troubleshooting OpenSSH connection issues ------------------------------------------ - -When contacting support regarding connection issues, it saves time if you -provide the verbose output of the ``ssh`` command. This can be obtained by -adding the ``-vvv`` option for maximal verbosity. - -If you get a ``Permission denied`` error message, one of the things to verify -is that your private key is in the default location, i.e., the output of -``ls ~/.ssh`` should show a file named ``id_rsa_vsc``. 
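As an illustration of that first check, a healthy setup typically looks
similar to this (the exact list of files on your machine will differ)::

    $ ls ~/.ssh
    config  id_rsa_vsc  id_rsa_vsc.pub  known_hosts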
- -The second thing to check is that your -:ref:`private key is linked to your VSC-id ` -in your :ref:`SSH configuration file ` at ``~/.ssh/config``. - -If your private key is not stored in ``~/.ssh/id_rsa_vsc``, you need to adapt -the path to it in your ``~/.ssh/config`` file. - -Alternatively, you can provide the path as an option to the ``ssh`` command when -making the connection: - -:: - - $ ssh -i @ - -SSH Manual ----------- - -- `ssh manual page`_ - diff --git a/source/access/text_mode_access_using_openssh_or_jellyfissh.rst b/source/access/text_mode_access_using_openssh_or_jellyfissh.rst deleted file mode 100644 index 12010cd81..000000000 --- a/source/access/text_mode_access_using_openssh_or_jellyfissh.rst +++ /dev/null @@ -1,68 +0,0 @@ -.. _JellyfiSSH access: - -Text-mode access using OpenSSH -============================== - -Prerequisites -------------- - -- macOS comes with its own implementation of OpenSSH, so you don't need - to install any third-party software to use it. Just open a Terminal - window and jump in! - -Using OpenSSH on macOS ----------------------- - -You can use the same commands as specified in the -:ref:`Linux client section ` to access the cluster and transfer -files: - -* :ref:`ssh-keygen ` to generate the keys -* :ref:`ssh ` to log on to the cluster -* :ref:`scp and sftp ` for file transfer - - -Text-mode access using JellyfiSSH -================================= - -|Optional| You can use `JellyfiSSH`_ to store your ssh session settings. - -Prerequisites -------------- - -* Install JellyfiSSH. The most recent version is available - for a small fee from the Mac App Store, but if you `google for - JellyfiSSH 4.5.2 `_, - the version used for the screenshots in this page, you can still find - some free downloads for that version. Installation is easy: just drag - the program's icon to the Application folder in the Finder, and - you're done. - -Using JellyfiSSH for bookmarking ssh connection settings --------------------------------------------------------- - -You can use JellyfiSSH to create a user-friendly bookmark for your ssh -connection settings. To do this, follow these steps: - -#. Start JellyfiSSH and select 'New'. This will open a window where you - can specify the connection settings. - -#. In the 'Host or IP' field, type in . In the 'Login - name' field, type in your . - In the screenshot below we have filled in the fields for a connection - to the Genius cluster at KU Leuven as user vsc98765. - - .. figure:: text_mode_access_using_openssh_or_jellyfissh/text_mode_access_using_openssh_or_jellyfissh_01.png - -#. You might also want to change the Terminal window settings, which can - be done by clicking on the icon in the lower left corner of the - JellyfiSSH window. - -#. When done, provide a name for the bookmark in the 'Bookmark Title' - field and press 'Add' to create the bookmark. - -#. To make a connection, select the bookmark in the 'Bookmark' field and - click on 'Connect'. Optionally, you can make the bookmark the default - by selecting it as the 'Startup Bookmark' in the JellyfiSSH > - Preferences menu entry. - diff --git a/source/access/text_mode_access_using_putty.rst b/source/access/text_mode_access_using_putty.rst deleted file mode 100644 index f812dee01..000000000 --- a/source/access/text_mode_access_using_putty.rst +++ /dev/null @@ -1,176 +0,0 @@ -.. _text mode access using PuTTY: - -Text-mode access using PuTTY -============================ - -Prerequisite ------------- - - -.. tab-set:: - - .. 
tab-item:: KU Leuven - - To access KU Leuven clusters, only an approved :ref:`VSC account ` is needed. - - .. tab-item:: UGent, VUB, UAntwerpen - - To access clusters hosted at these sites, you need a - :ref:`public/private key pair ` of which the public key - needs to be :ref:`uploaded via the VSC account page `. - -Connecting to the VSC clusters ------------------------------- - -When you start the PuTTY executable 'putty.exe', a configuration screen -pops up. Follow the steps below to setup the connection to (one of) the -VSC clusters. - -.. warning:: - - In the screenshots, we show the setup for user ``vsc98765`` to the - genius cluster at KU Leuven via the login node ``login.hpc.kuleuven.be``. - You will have to - - #. replace ``vsc98765`` with your own VSC user name, and - #. find the name of the login node for the cluster you want - to login in on in the sections on :ref:`the local VSC clusters - `, and replace ``login.hpc.kuleuven.be`` accordingly. - - -- Within the category 'Session', in the field 'Host Name', type in - a valid hostname of the :ref:`login node of the VSC cluster ` - you want to connect to. - - .. figure:: text_mode_access_using_putty/text_mode_access_using_putty_01.png - -- In the category Connection > Data, in the field 'Auto-login - username', put in , which is your VSC username that you - have received by mail after your request was approved. - - .. figure:: text_mode_access_using_putty/text_mode_access_using_putty_02.png - -- Based on the destination VSC site that you want to login to, choose one of the - tabs below and proceed. - - - .. tab-set:: - - .. tab-item:: KU Leuven - - Select the SSH > Auth > Credentials' tab, and remove any private key from the - box 'Private key file for authentication'. - - .. _putty_auth_panel: - .. figure:: text_mode_access_using_putty/putty_priv_key.PNG - :alt: putty private key - - In the category Connection > SSH > Auth, make sure that the option - 'Attempt authentication using Pageant' is selected. - It is also recommended to enable agent forwarding by ticking the - 'Allow agent forwarding' checkbox. - - .. figure:: text_mode_access_using_putty/text_mode_access_using_putty_03.png - .. tab-item:: UGent, VUB, UAntwerpen - - In the category Connection > SSH > Auth > Credentials, click on 'Browse', - and select the private key that you generated and saved above. - - .. figure:: text_mode_access_using_putty/text_mode_access_using_putty_04.png - - Here, the private key was previously saved in the folder - ``C:\Users\Me\Keys``. - In older versions of Windows, you would have to use - ``C:\Documents and Settings\Me\Keys``. - - -- In the category Connection > SSH > X11, click the 'Enable X11 Forwarding' checkbox: - - .. figure:: text_mode_access_using_putty/text_mode_access_using_putty_05.png - -- Now go back to the 'Session' tab, and fill in a name in the 'Saved Sessions' - field and press 'Save' to permanently store the session information. - -- To start a session, load it from Sessions > Saved Sessions, and click 'Open'. - - .. _putty_load_saved_session: - .. figure:: text_mode_access_using_putty/putty_load_saved_session.PNG - :alt: putty_load_saved_session - - - .. tab-set:: - - .. tab-item:: KU Leuven - - You will be then prompted to copy/paste the firewall link into your browser and complete - the :ref:`Multi Factor Authentication (MFA) ` procedure. - With PuTTY, users only need to highlight the link with their mouse in order to copy it to - the clipboard. - - .. 
figure:: text_mode_access_using_putty/putty_mfa.PNG - :alt: PuTTY MFA URL - - Then, with the right-click from your mouse or CTRL-V, you can paste the MFA link - into your browser to proceed with the authentication. - - .. tab-item:: UGent, VUB, UAntwerpen - - Now pressing 'Open' should ask for your passphrase, and connect - you to . - -The first time you make a connection to the login node, a Security Alert -will appear and you will be asked to verify the authenticity of the -login node. - -.. figure:: text_mode_access_using_putty/text_mode_access_using_putty_06.png - -For future sessions, just select your saved session from the list and -press 'Open'. - -Managing SSH keys with Pageant ------------------------------- - -At this point, we highly recommend setting up an :ref:`SSH agent `. -A widely used SSH agent is :ref:`Pageant ` which is installed -automatically with PuTTY. - -Pageant can be used to manage SSH keys and certificates for -multiple clients, such as PuTTY, :ref:`WinSCP`, :ref:`FileZilla`, -as well as the :ref:`NX client for Windows` so that you don't need -to enter the passphrase all the time. - -.. toctree:: - - using_pageant - -Proxies and network tunnels to compute nodes --------------------------------------------- - -Network communications between your local machine and some node in the cluster -other than the login nodes will be blocked by the cluster firewall. In such a -case, you can directly open a shell in the compute node with an SSH connection -using the login node as a proxy or, alternatively, you can also open a network -tunnel to the compute node which will allow direct communication from software -in your computer to certain ports in the remote system. This is also useful to -run client software on your Windows machine, e.g., ParaView or Jupyter -notebooks that run on a compute node. - -.. toctree:: - - setting_up_a_ssh_proxy_with_putty - creating_a_ssh_tunnel_using_putty - -.. _troubleshoot_putty: - -Troubleshooting PuTTY connection issues ---------------------------------------- - -If you have trouble accessing the infrastructure, the support staff will -likely ask you to provide a log. After you have made a failed attempt to connect, -you can obtain the connection log by - -#. right-clicking in PuTTY's title bar and selecting **Event Log**. - -#. In the dialog window that appears, click the **Copy** button to copy the - log messages. They are copied as text and can be pasted in your message - to support. diff --git a/source/access/using_pageant.rst b/source/access/using_pageant.rst deleted file mode 100644 index f35883f17..000000000 --- a/source/access/using_pageant.rst +++ /dev/null @@ -1,110 +0,0 @@ -.. _using Pageant: - -Using Pageant -============= - -Getting started with Pageant ----------------------------- - -Pageant is an SSH authentication agent that couples seamlessly with Putty, MobaXterm, -NoMachine and FileZilla to make user authentication an easy task. -Pageant is part of the `PuTTY`_ distribution. -As of version 0.78, Pageant can hold certificates in addition to SSH private keys. - -Prerequisites -============= - -.. tab-set:: - - .. tab-item:: KU Leuven - - To access KU Leuven clusters, only an approved :ref:`VSC account ` is needed - as a prerequisite. - - .. tab-item:: UGent, VUB, UAntwerpen - - Before you run Pageant, you need to have a private key in PPK format - (filename ends with ``.ppk``). See :ref:`our page on generating keys with - PuTTY ` to find out how to - generate and use one. 
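As a convenience (not required for the steps below), Pageant can also be
started with one or more key files on its command line, for example from a
Windows shortcut, so that it prompts for the passphrase once and loads the
key right away; the paths below are purely illustrative::

    "C:\Program Files\PuTTY\pageant.exe" C:\Users\Me\Keys\id_rsa_vsc.ppk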
- -When you run Pageant, it will put an icon (of a computer wearing a hat) -into the System tray, which looks like this: - - .. _pageant_logo: - .. figure:: using_pageant/Pageant_logo.PNG - :alt: pageant_logo - - -Pageant runs silently in the background and does nothing until you load a private key into it. -If you click the Pageant icon with the right mouse button, you will see a menu. -Select ‘View Keys’ from this menu. The Pageant main window will appear. -You can also bring this window up by double-clicking on the Pageant icon. - - .. _pageant_add_key: - .. figure:: using_pageant/Pageant_add_key.PNG - :alt: pageant_add_key - - -The Pageant window contains a list box. -This shows the private keys and/or certificates that Pageant is holding. -Initially this list is empty. -After you add one or more keys or certificates, they will show up in the list box. - -To add a key to Pageant, press the ‘Add Key’ button. Pageant will bring -up a file dialog, labelled ‘Select Private Key File’. Find your private -key file in this dialog, and press ‘Open’. Pageant will now load the -private key. If the key is protected by a passphrase, Pageant will ask -you to type the passphrase. When the key has been loaded, it will appear -in the list in the Pageant window. -For adding an SSH key, the window dialog looks like this: - - .. _pageant_passphrase: - .. figure:: using_pageant/Pageant_passphrase.PNG - :alt: pageant_passphrase - -Now start PuTTY (or FileZilla) and open an SSH session to a site that -accepts your key or certificate. PuTTY (or Filezilla) will notice that Pageant is -running; they retrieve the key or certificate automatically from Pageant, and use it to -authenticate you as a recognized user. - -.. tab-set:: - - .. tab-item :: KU Leuven - - Follow the steps in :ref:`Connecting with an SSH agent ` - to get an SSH certificate into your agent. - At this point, a new certificate will be stored in Pageant that holds your - identity for a limited period of time. - You can verify that the certificate is actually stored by right-clicking on - Pageant and selecting ‘View Keys’: - - .. _pageant_view_keys: - .. figure:: using_pageant/Pageant_view_keys.PNG - :alt: pageant_view_keys - - .. tab-item:: UGent, VUB, UAntwerpern - - You can now open as many PuTTY sessions as you like without having to type your passphrase again. - -Pageant provides your credentials to other applications (such as PuTTY, NoMachine, -FileZilla, MobaXterm) whenever you are prompted for your identity. - -When you want to shut down Pageant, click the right button on the -Pageant icon in the system tray, and select 'Exit' from the menu. -Closing the Pageant main window does *not* shut down Pageant, because -a SSH agent sits silently in the background. - -You can find more info `in the on-line -manual `_. - -.. warning:: - - SSH authentication agents are very handy as you no longer need to - type your passphrase every time that you try to log in to the cluster. - It also implies that when someone gains access to your computer, he - also automatically gains access to your account on the cluster. So be - very careful and lock your screen when you're not with your computer! - It is your responsibility to keep your computer safe and prevent easy - intrusion of your VSC-account due to an obviously unprotected PC! - diff --git a/source/access/using_ssh_agent.rst b/source/access/using_ssh_agent.rst deleted file mode 100644 index 8e3c1f37e..000000000 --- a/source/access/using_ssh_agent.rst +++ /dev/null @@ -1,359 +0,0 @@ -.. 
_SSH agent: - -Using ssh-agent -=============== - -The OpenSSH program ssh-agent is a program to hold private keys used for -public key authentication (RSA, DSA). The idea is that you store your -private key in the ssh authentication agent and can then log in or use -sftp as often as you need without having to enter your passphrase again. -This is particularly useful when setting up a :ref:`ssh proxy ` -connection (e.g., for the Tier-1 system muk) as these connections are more -difficult to set up when your key is not loaded into an ssh-agent. - -This all sounds very easy. The reality is more difficult though. The -problem is that subsequent commands, e.g., the command to add a key to -the agent or the ssh or sftp commands, must be able to find the ssh -authentication agent. Therefore some information needs to be passed from -ssh-agent to subsequent commands, and this is done through two -*environment variables*: ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID``. The -problem is to make sure that these variables are defined with the -correct values in the shell where you start the other ssh commands. - -.. _start SSH agent: - -Starting ssh-agent: Basic scenarios ------------------------------------ - -There are a number of basic scenarios - -#. You're lucky and your system manager has set up everything so that - ssh-agent is started automatically when the GUI starts after logging - in and the environment variables are hence correctly defined in all - subsequent shells. You can check for that easily: type - - :: - - $ ssh-add -l - - If the command returns with the message - - :: - - Could not open a connection to your authentication agent. - - then ssh-agent is not running or not configured properly, and you'll - need to follow one of the following scenarios. - -#. Start an xterm (or whatever your favourite terminal client is) and - continue to work in that xterm window or other terminal windows - started from that one: - - :: - - $ ssh-agent xterm & - - - The shell in that xterm is then configured correctly, and when that - xterm is killed, the ssh-agent will also be killed. - -#. ssh-agent can also output the commands that are needed to configure - the shell. These can then be used to configure the current shell or - any further shell, e.g., if you're a bash user, an easy way to start - a ssh-agent and configure it in the current shell, is to type - - :: - - $ eval `ssh-agent -s` - - - at the command prompt. If you start a new shell (e.g., by starting an - xterm) from that shell, it should also be correctly configured to - contact the ssh authentication agent. A better idea though is to - store the commands in a file and excute them in any shell where you - need access to the authentication agent, e.g., for bash users: - - :: - - $ ssh-agent -s >~/.ssh-agent-environment - . ~/.ssh-agent-environment - - - and you can then configure any shell that needs access to the - authentication agent by executing - - :: - - $ . ~/.ssh-agent-environment - - - - Note that this will not necessarily shut down the ssh-agent when you - log out of the system. It is not a bad idea to explicitly kill the - ssh-agent before you log out: - - :: - - $ ssh-agent -k - - -Managing keys -------------- - -Once you have an ssh-agent up and running, it is easy to add your key to it. -Assuming your key is ``~/.ssh/id_rsa_vsc``, type the following at the command -prompt: - -:: - - $ ssh-add ~/.ssh/id_rsa_vsc - -You will then be asked to enter your passphrase. 
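The exchange then looks roughly like this (the path in the prompt and the key
comment will differ on your system)::

    $ ssh-add ~/.ssh/id_rsa_vsc
    Enter passphrase for /home/me/.ssh/id_rsa_vsc:
    Identity added: /home/me/.ssh/id_rsa_vsc (/home/me/.ssh/id_rsa_vsc)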
- -To list the keys that ssh-agent is managing, type - -:: - - $ ssh-add -l - -You can now use the OpenSSH commands :ref:`ssh `, -:ref:`sftp and scp ` without having to enter your passphrase -again. - -Starting ssh-agent: Advanced options ------------------------------------- - -In case ssh-agent is not started by default when you log in to your -computer, there's a number of things you can do to automate the startup -of ssh-agent and to configure subsequent shells. - -Ask your local system administrator -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you're not managing your system yourself, you can always ask your -system manager if he can make sure that ssh-agent is started when you -log on and in such a way that subsequent shells opened from the desktop -have the environmental variables ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID`` set -(with the first one being the most important one). - -And if you're managing your own system, you can dig into the manuals to -figure out if there is a way to do so. Since there are so many desktop -systems avaiable for Linux systems (gnome, KDE, Ubuntu unity, ...) we -cannot offer help here. - -A semi-automatic solution in bash -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This solution requires some modifications to .bash_profile and .bashrc. -Be careful when making these modifications as errors may lead to trouble -to log on to your machine. So test by executing these files with -``source ~/.bash_profile`` and ``source ~/.bashrc``. - -This simple solution is based on option 3 given above to start -ssh-agent. - -#. You can define a new shell command by using the `bash alias - mechanism `_. - Add the following line to the file .bashrc in your home directory: - - :: - - alias start-ssh-agent='/usr/bin/ssh-agent -s >~/.ssh-agent-environment; . ~/.ssh-agent-environment' - - - The new command start-ssh-agent will now start a new ssh-agent, store - the commands to set the environment variables in the file - .ssh-agent-environment in your home directory and then "source" - that file to execute the commands in the current shell (which then - sets ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID`` to appropriate values). - -#. Also put the line - - :: - - [[ -s ~/.ssh-agent-environment ]] && . ~/.ssh-agent-environment &>/dev/null - - - in your .bashrc file. This line will check if the file - ssh-agent-environment exists in your home directory and "source" - it to set the appropriate environment variables. - -#. As explained in the `GNU bash manual `_, - ``.bashrc`` is only read when starting so-called interactive non-login - shells. Interactive login shells will not read this file by default. - Therefore it is `advised in the GNU bash manual - `_ - to add the line - - :: - - [[ -s ~/.bashrc ]] && . ~/.bashrc - - - to your ``.bash_profile``. This will execute ``.bashrc`` if it exists - whenever ``.bash_profile`` is called. - -You can now start a SSH authentication agent by issuing the command -``start-ssh-agent`` and add your key :ref:`as indicated -above ` with ``ssh-add``. - -An automatic and safer solution in bash -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -One disadvantage of the previous solution is that a new ssh-agent will -be started every time you execute the command start-ssh-agent, and all -subsequent shells will then connect to that one. - -The following solution is much more complex, but a lot safer as it will -first do an effort to see if there is already a ssh-agent running that -can be contacted: - -#. 
It will first check if the environment variable ``SSH_AUTH_SOCK`` is - defined, and try to contact that agent. This makes sure that no new - agent will be started if you log on onto a system that automatically - starts an ssh-agent. -#. Then it will check for a file .ssh-agent-environment, source that - file and try to connect to the ssh-agent. This will make sure that no - new agent is started if another agent can be found through that file. -#. And only if those two tests fail will a new ssh-agent be started. - -This solution uses a Bash function. - -#. Add the following block of text to your ``.bashrc`` file: - - :: - - start-ssh-agent() { - # - # Start an ssh agent if none is running already. - # * First we try to connect to one via SSH_AUTH_SOCK - # * If that doesn't work out, we try via the file ssh-agent-environment - # * And if that doesn't work out either, we just start a fresh one and write - # the information about it to ssh-agent-environment for future use. - # - # We don't really test for a correct value of SSH_AGENT_PID as the only - # consequence of not having it set seems to be that one cannot kill - # the ssh-agent with ssh-agent -k. But starting another one wouldn't - # help to clean up the old one anyway. - # - # Note: ssh-add return codes: - # 0 = success, - # 1 = specified command fails (e.g., no keys with ssh-add -l) - # 2 = unable to contact the authentication agent - # - sshfile=~/.ssh-agent-environment - # - # First effort: Via SSH_AUTH_SOCK/SSH_AGENT_PID - # - if [ -n \"$SSH_AUTH_SOCK\" ]; then - # SSH_AUTH_SOCK is defined, so try to connect to the authentication agent - # it should point to. If it succeeds, reset newsshagent. - ssh-add -l &>/dev/null - if [[ $? != 2 ]]; then - echo \"SSH agent already running.\" - unset sshfile - return 0 - else - echo \"Could not contact the ssh-agent pointed at by SSH_AUTH_SOCK, trying more...\" - fi - fi - # - # Second effort: If we're still looking for an ssh-agent, try via $sshfile - # - if [ -e \"$sshfile\" ]; then - # Load the environment given in $sshfile - . $sshfile &>/dev/null - # Try to contact the ssh-agent - ssh-add -l &>/dev/null - if [[ $? != 2 ]]; then - echo \"SSH agent already running; reconfigured the environment.\" - unset sshfile - return 0 - else - echo \"Could not contact the ssh-agent pointed at by $sshfile.\" - fi - fi - # - # And if we haven't found a working one, start a new one... - # - #Create a new ssh-agent - echo \"Creating new SSH agent.\" - ssh-agent -s > $sshfile && . $sshfile - unset sshfile - } - - - A shorter version without all the comments and that does not generate - output is - - :: - - start-ssh-agent() { - sshfile=~/.ssh-agent-environment - # - if [ -n \"$SSH_AUTH_SOCK\" ]; then - ssh-add -l &>/dev/null - [[ $? != 2 ]] && unset sshfile && return 0 - fi - # - if [ -e \"$sshfile\" ]; then - . $sshfile &>/dev/null - ssh-add -l &>/dev/null - [[ $? != 2 ]] && unset sshfile && return 0 - fi - # - ssh-agent -s > $sshfile && . $sshfile &>/dev/null - unset sshfile - } - - - This defines the command ``start-ssh-agent``. - -#. Since start-ssh-agent will now first check for a usable running - agent, it doesn't harm to simply execute this command in your .bashrc - file to start a SSH authentication agent. So add the line - - :: - - start-ssh-agent &>/dev/null - - - after the above function definition. 
All output is sent to ``/dev/null`` - (and hence not shown) as a precaution, since ``scp`` or ``sftp`` - sessions fail when output is generated in ``.bashrc`` on many systems - (typically with error messages such as \\"Received message too long\" - or "Received too large sftp packet"). You can also use the newly - defined command start-ssh-agent at the command prompt. It will then - check your environment, reset the environment variables ``SSH_AUTH_SOCK`` - and ``SSH_AGENT_PID`` or startk a new ssh-agent. - -#. As explained in the `GNU bash manual - `_, - ``.bashrc`` is only read when starting so-called interactive non-login - shells. Interactive login shells will not read this file by default. - Therefore it is `advised in the GNU bash - manual `_ - to add the line - - :: - - [[ -s ~/.bashrc ]] && . ~/.bashrc - - - to your ``.bash_profile``. This will execute ``.bashrc`` if it exists - whenever ``.bash_profile`` is called. - -You can now simply add your key :ref:`as indicated above ` with -``ssh-add`` and it will become available in all shells. - -The only remaining problem is that the ssh-agent process that you -started may not get killed when you log out, and if it fails to contact -again to the ssh-agent when you log on again, the result may be a -built-up of ssh-agent processes. You can always kill it by hand before -logging out with ``ssh-agent -k``. - -Links ------ - -- `ssh-agent manual page `_ (external) -- `ssh-add manual page `_ (external) diff --git a/source/access/vnc_support.rst b/source/access/vnc_support.rst deleted file mode 100644 index cd2b0e542..000000000 --- a/source/access/vnc_support.rst +++ /dev/null @@ -1,29 +0,0 @@ -Most VSC sites offer some form of support for visualization software through -Virtual Network Computing (VNC). VNC renders images on the cluster and -transfers the resulting images to your client device. VNC clients are available -for Windows, macOS, Linux, Android, iOS or can be even used directly on web -browsers. - -.. tab-set:: - - .. tab-item:: KU Leuven/UHasselt - - On the KUL clusters, users can use NX :ref:`NX start guide`. - - .. tab-item:: UGent - - VNC is supported through the :ref:`hortense_web_portal` interface. - - .. tab-item:: UAntwerp (AUHA) - - On the UAntwerp clusters, TurboVNC is supported on all regular login - nodes (without OpenGL support) and on the visualization node of Leibniz - (with OpenGL support through VirtualGL). - See the page :ref:`Remote visualization UAntwerp` for instructions. - - .. tab-item:: VUB - - On the VUB clusters, TigerVNC is supported on all nodes. See the documentation on - `remote desktop sharing `_ - for instructions. - diff --git a/source/access/windows_client.rst b/source/access/windows_client.rst deleted file mode 100644 index 8378ea602..000000000 --- a/source/access/windows_client.rst +++ /dev/null @@ -1,142 +0,0 @@ -.. _windows_client: - -################################## -:fab:`windows` Access from Windows -################################## - -Getting ready to login -====================== - -Before you can log in with SSH to a VSC cluster, you need to generate a pair of -SSH keys and upload them to your VSC account. There multiple ways to create -yours keys in Windows, please check our documentation on -:ref:`generating keys windows`. - -Connecting to the cluster -========================= - -Text-mode session using PuTTY ------------------------------- - -PuTTY is a simple-to-use and freely available GUI SSH client for Windows that -is :ref:`easy to set up `. - -.. 
toctree:: - :maxdepth: 2 - - text_mode_access_using_putty - -Text-mode and graphical browser using MobaXterm ------------------------------------------------ - -MobaXterm is a free and easy to use SSH client for Windows that has text-mode, -a graphical file browser, an X server, an SSH agent, and more, all in one. -No installation is required when using the *Portable edition*. See -:ref:`detailed instructions on how to setup MobaXterm `. - -.. toctree:: - :maxdepth: 2 - - access_using_mobaxterm - -Alternatives ------------- - -Recent versions of Windows come with an OpenSSH installed, and you can use -it from PowerShell or the Command Prompt as you would in the termial on Linux -systems and all pages about SSH and data transfer from :ref:`the Linux client -pages ` apply. - -The Windows Subsystem for Linux can be an alternative if you are using -Windows 10 build 1607 or later. The available Linux distributions have -SSH clients, so you can refer to all pages about SSH and data transfer -from :ref:`the Linux client pages ` as well. - -.. _windows_gui: - -Display graphical programs -========================== - -X server --------- - -X11 is the protocol that is used by most Linux applications to display -graphics on a local or remote screen. It is necessary to run an X server on -your Windows system to display graphical applications running on the Linux -system of the cluster. - -|Recommended| Use the X server included in :ref:`MobaXterm `. - -Alternatively, you can install an X server such as `Xming `_ on -Windows as well. - -.. toctree:: - - using_the_xming_x_server_to_display_graphical_programs - -NX client ---------- - -|KUL| On the KU Leuven/UHasselt clusters it is also possible to use the -:ref:`NX Client` to log on to the machine and run graphical -programs. Instead of an X server, another piece of client software is required. - -VNC ---- - -.. include:: vnc_support.rst - -Programming tools -================= - -.. warning:: - Although it is convenient to develop software on your local machine, - you should bear in mind that the hardware architecture is likely to - differ substantially from the VSC HPC hardware. Therefore it is - recommended that performance optimizations are done on the target - system. - -Windows Subsystem for Linux (WSL/WSL2) --------------------------------------- -If you're running Windows 10 build 1607 (Anniversary Edition) or -later, you may consider running the ":ref:`Windows Subsystem for -Linux `" -that will give you a Ubuntu-like environment on Windows and allow you -to install some Ubuntu packages. *In build 1607 this is still -considered experimental technology and we offer no support.* - -.. toctree:: - :hidden: - - wsl - - - - -Microsoft Visual Studio ------------------------ -:ref:`Microsoft Visual Studio ` can also -be used to develop OpenMP or MPI programs. If you do not use any -Microsoft-specific libraries but stick to plain C or C++, the -programs can be recompiled on the VSC clusters. Microsoft is slow in -implementing new standards though. In Visual Studio 2015, OpenMP -support is still stuck at version 2.0 of the standard. An alternative -is to get a license for the Intel compilers which plug into Visual -Studio and give you the best of both worlds, the power of a -full-blown IDE and compilers that support the latest technologies in -the HPC world on Windows. - -Eclipse -------- - -.. include:: eclipse_intro.rst - -.. 
note:: - On Windows Eclipse relies by default on the `Cygwin`_ toolchain for its - compilers and other utilities, so you need to install that too. - -Version control ---------------- -Information on tools for version control (git and subversion) is -available on the :ref:`version control systems` introduction page. - diff --git a/source/access/wsl.rst b/source/access/wsl.rst deleted file mode 100644 index e5df8dbbc..000000000 --- a/source/access/wsl.rst +++ /dev/null @@ -1,58 +0,0 @@ -.. _wsl: - -################################################ -Installing WSL2 on windows -################################################ - - -As a Windows user if you don't already use any virtualisation system to operate Linux you can install Windows Subsystem for Linux (WSL2). - -To be able to install WSL 2 on your Windows 10, you need the following: - -- Windows 10 May 2020 (2004), Windows 10 May 2019 (1903), or Windows 10 November 2019 (1909) or later -- Hyper-V Virtualization support - -Users who are using a system managed by KU Leuven should fulfill these requirements. - -The requirements can be checked as follows: - -To know your Windows version, type ``winver`` on your search bar, a informative popup appears. - -https://support.microsoft.com/en-us/topic/c75c6a43-9c87-e412-9a9e-10a0dabac4d5Anyone who cannot see 2004 should look at this link. - -The installation of WSL2 will consist of the following steps: - -Enable WSL 2, -Enable ‘Virtual Machine Platform', -Set WSL 2 as default, -Install a Linux distro. -We will complete all steps by using Power Shell of Windows. However you can do some of the steps by graphical screens as an option. Here you can find all steps: - -Run Windows PowerShell as administrator, -Type the following to enable WSL: - -.. code-block:: - - dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart - -To enable Virtual Machine Platform on Windows 10 (2004), execute the following command: - -.. code-block:: - - dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart - - -To set WSL 2 as default execute the command below (You might need to restart your PC): - -.. code-block:: - - wsl --set-default-version 2 - - -To install your Linux distribution of choice on Windows 10, open the Microsoft Store app, search for it, and click the “Get” button. -The first time you launch a newly installed Linux distribution, a console window will open and you'll be asked to wait for a minute or two. -You will then need to create a user account and password for your new Linux distribution. This password will give you ‘sudo' rights when asked. -If you see ‘WSLRegisterDistribution Failed with Error:' or you may find that things don't work as intended you should restart your system at this point. -After all these steps when you type ‘wsl' to your Windows PowerShell, you will be directed to your Ubuntu machine mounted on your Windows' C drive. From now on, you can execute all Linux commands. It is advised to use the home directory instead of your Windows drives. So if you type ‘cd‘ you will be forwarded to your Ubuntu home. - -You can also install (optional) the Windows Terminal app, which enables multiple tabs operation, search feature, and custom themes etc. 
\ No newline at end of file diff --git a/source/access/access_from_multiple_machines.rst b/source/accounts/access_from_multiple_machines.rst similarity index 100% rename from source/access/access_from_multiple_machines.rst rename to source/accounts/access_from_multiple_machines.rst diff --git a/source/accounts/authentication.rst b/source/accounts/authentication.rst new file mode 100644 index 000000000..f3f91163b --- /dev/null +++ b/source/accounts/authentication.rst @@ -0,0 +1,81 @@ +#########################
+:fas:`key` Authentication
+#########################
+
+Connections to VSC clusters and web services are always encrypted to secure
+your data. We currently support two types of authentication for connections to
+VSC clusters:
+
+* :ref:`auth_key_pair`
+
+* :ref:`auth_mfa`
+
+The difference between cryptographic key-based authentication and multi-factor
+authentication (MFA) lies in the risk that somebody else could impersonate you and use
+your credentials to log in to VSC clusters and services. MFA requires that you
+validate the authentication with another device apart from the computer being
+used to connect to the cluster, making it much harder for an attacker to use
+your login credentials.
+
+It is important to note that the security of your data with both methods is the
+same once the connection has been established. The type of encryption of the
+resulting connection does not depend on the authentication method.
+
+.. _auth_key_pair:
+
+Cryptographic Key Pair
+======================
+
+Connections with key-based authentication are only possible to the
+:ref:`terminal interface` of the following VSC clusters:
+
+.. include:: clusters_ssh_key.rst
+
+Connections to VSC web-based services, such as the `VSC account page`_ or the
+:ref:`compute portal` to VSC clusters, are always handled with MFA following
+the security policies of your home institution.
+
+The following sections explain how to create and manage your cryptographic keys
+to connect to supported clusters.
+
+.. toctree::
+   :maxdepth: 3
+
+   generating_keys
+
+.. _auth_mfa:
+
+Multi-factor Authentication
+===========================
+
+Multi Factor Authentication (MFA) is an augmented level of security
+which, as the name suggests, requires multiple steps to successfully
+authenticate.
+
+Connections with MFA are currently supported on all VSC web-based services,
+such as the `VSC account page`_ or the :ref:`compute portal` to VSC clusters,
+and also on the :ref:`terminal interface` of the following VSC clusters:
+
+.. include:: clusters_mfa.rst
+
+The following sections explain how to set up MFA to connect to supported
+clusters.
+
+.. toctree::
+   :maxdepth: 3
+
+   mfa_login
+
+Connections from Abroad
+=======================
+
+All VSC clusters are behind a firewall, which is configured by default to block
+all traffic from abroad. If you want to access any VSC cluster from
+abroad, it is necessary that you first authorize your own connection on the
+`VSC Firewall`_. Once your connection is authorized, you can proceed as usual.
+
+.. note::
+
+   Keep the `VSC Firewall`_ page open for the duration of your session on the
+   VSC cluster.
+
diff --git a/source/accounts/clusters_mfa.rst b/source/accounts/clusters_mfa.rst new file mode 100644 index 000000000..0fbc9c54a --- /dev/null +++ b/source/accounts/clusters_mfa.rst @@ -0,0 +1,10 @@ +.. grid:: 3
+   :gutter: 4
+
+   .. 
grid-item-card:: |KULUH| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Genius ` + * Tier-2 :ref:`Superdome ` + * Tier-2 :ref:`wICE ` + diff --git a/source/accounts/clusters_ssh_key.rst b/source/accounts/clusters_ssh_key.rst new file mode 100644 index 000000000..955ae1ff4 --- /dev/null +++ b/source/accounts/clusters_ssh_key.rst @@ -0,0 +1,21 @@ +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: |UA| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Vaughan ` + * Tier-2 :ref:`Leibniz ` + * Tier-2 :ref:`Breniac ` + + .. grid-item-card:: |UG| + :columns: 12 4 4 4 + + * Tier-1 :ref:`Hortense ` + * Tier-2 :ref:`All clusters ` + + .. grid-item-card:: |VUB| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Hydra ` + * Tier-2 :ref:`Anansi ` diff --git a/source/access/generating_keys.rst b/source/accounts/generating_keys.rst similarity index 72% rename from source/access/generating_keys.rst rename to source/accounts/generating_keys.rst index 0d7fcbaa8..0b6b67f82 100644 --- a/source/access/generating_keys.rst +++ b/source/accounts/generating_keys.rst @@ -1,12 +1,11 @@ -############# -Security Keys -############# +################## +Cryptographic Keys +################## Connections to VSC clusters are always encrypted to secure your data. Hence, you will need a personal cryptographic key to connect to the VSC clusters via -the terminal interface. This secure connection uses the `SSH protocol -`_ and we might refer in the -following to the security keys as SSH keys. +the terminal interface. This secure connection uses the `Secure Shell`_ (SSH) +protocol and we might refer in the following to the security keys as SSH keys. .. _create key pair: @@ -44,36 +43,34 @@ describe the generation of key pairs in the client sections below: .. toctree:: :hidden: - generating_keys_on_windows - generating_keys_with_openssh_on_os_x - generating_keys_with_openssh + generating_keys_windows + generating_keys_macos + generating_keys_linux .. grid:: 3 :gutter: 4 .. grid-item-card:: :fab:`windows` Windows :columns: 12 4 4 4 - :link: generating_keys_on_windows + :link: generating_keys_windows :link-type: doc Generating keys .. grid-item-card:: :fab:`apple` macOS :columns: 12 4 4 4 - :link: generating_keys_with_openssh_on_os_x + :link: generating_keys_macos :link-type: doc Generating keys .. grid-item-card:: :fab:`linux` Linux :columns: 12 4 4 4 - :link: generating_keys_with_openssh + :link: generating_keys_linux :link-type: doc Generating keys -.. _upload public key: - Upload public key to VSC account page ===================================== @@ -95,8 +92,7 @@ First key of your account You already have an active VSC account and this is the first public key you will add to it: -#. Go to the `Edit VO `_ tab - of your `VSC account page`_ +#. Go to your `VSC Account - Edit Account`_ page #. Scroll down to the section *Add public key* #. Click *Browse* to select the file of your public key #. Click *Upload extra public key* and wait for the upload to complete @@ -110,8 +106,7 @@ You already have an active VSC account with a public key and want to add an additional key to be able to connect to the VSC clusters from a different computer: -#. Go to the `Edit VO `_ tab - of your `VSC account page`_ +#. Go to your `VSC Account - Edit Account`_ page #. Scroll down to the section *Add public key* #. Click *Browse* to select the file of your public key #. 
Click *Upload extra public key* and wait for the upload to complete @@ -127,8 +122,7 @@ Replace compromised key You already have an active VSC account with a public key, but it got compromised and must be replaced with a new one: -#. Go to the `Edit VO `_ tab - of your `VSC account page`_ +#. Go to your `VSC Account - Edit Account`_ page #. Scroll down to *Manage public keys* #. Select the *Delete this key* checkbox of the compromised key #. Scroll down to the section *Add public key* @@ -137,3 +131,43 @@ compromised and must be replaced with a new one: #. Click *Update* #. Verify that the new public key is listed under *Manage public keys* +.. _ssh agent: + +SSH Agent +========= + +An SSH agent is a software program that can hold unencrypted keys on memory +and make those available to other programs. This is useful to minimize user +interaction and automate any script or program than needs to connect to a VSC +cluster. The agent will ask for any passphrase needed to unlock your keys once +and then it will provide those keys to any other program or script requesting +them. + +.. toctree:: + :hidden: + + SSH Agent: Pageant + SSH Agent: MobaXterm + SSH Agent: OpenSSH + +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: :fab:`windows` Windows + :columns: 12 4 4 4 + + * :ref:`Pageant` + * :ref:`MobaXterm` + + .. grid-item-card:: :fab:`apple` macOS + :columns: 12 4 4 4 + + * :ref:`OpenSSH` + + .. grid-item-card:: :fab:`linux` Linux + :columns: 12 4 4 4 + + * :ref:`OpenSSH` + +.. _upload public key: + diff --git a/source/access/generating_keys_with_openssh.rst b/source/accounts/generating_keys_linux.rst similarity index 73% rename from source/access/generating_keys_with_openssh.rst rename to source/accounts/generating_keys_linux.rst index 85603d76c..4c16bf63c 100644 --- a/source/access/generating_keys_with_openssh.rst +++ b/source/accounts/generating_keys_linux.rst @@ -4,25 +4,27 @@ :fab:`linux` Generating keys on Linux ##################################### -Requirements: +`OpenSSH`_ is a reputable suite of secure networking utilities based on the +`Secure Shell`_ (SSH) protocol. OpenSSH is open-source software and is readily +available on all popular Linux distributions, and most often installed by +default as well. -* Linux operating system -* OpenSSH - -On all popular Linux distributions, the OpenSSH software is readily -available, and most often installed by default. +Requirements +============ -Check the OpenSSH installation -============================== +* Linux operating system +* `OpenSSH`_ -You can check whether the OpenSSH software is installed by opening -a terminal and typing: +You can check whether the OpenSSH software is installed on your Linux computer +by opening a terminal and typing: -:: +.. code-block:: bash $ ssh -V OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS 28 May 2019 +If it is not installed, check our +:ref:`installation instructions for OpenSSH `. Create a public/private key pair ================================ @@ -30,7 +32,7 @@ Create a public/private key pair A key pair might already be present in the default location inside your home directory: -:: +.. code-block:: bash $ ls ~/.ssh authorized_keys id_rsa id_rsa.pub known_hosts @@ -55,7 +57,7 @@ You will need to generate a new key pair, when: To generate a new public/private pair, use the following command (make sure to generate a 4096-bit key): -:: +.. code-block:: text $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc Generating public/private rsa key pair. 
@@ -76,9 +78,8 @@ The system will ask you for your passphrase every time you want to use the private key, that is, every time you want to access the cluster or transfer your files, unless you use an :ref:`SSH agent`. -Next, make sure to follow the instructions to :ref:`link your key with your -VSC-id `. - +Next, make sure to configure your OpenSSH client to automatically +:ref:`link your key with your VSC ID `. Converting SSH2 keys to OpenSSH format ====================================== @@ -86,16 +87,14 @@ Converting SSH2 keys to OpenSSH format *This section is only relevant if you did not use OpenSSH (as described above) to generate an SSH key.* -If you have a public key ``id_rsa_vsc_ssh2.pub`` in the SSH2 format, -you can use OpenSSH's ssh-keygen to convert it to the OpenSSH format in -the following way: +If you have an existing public key ``id_rsa_vsc_ssh2.pub`` in the SSH2 format, +you can use OpenSSH's ``ssh-keygen`` command to convert it to the OpenSSH +format in the following way: -:: +.. code-block:: bash $ ssh-keygen -i -f ~/.ssh/id_rsa_vsc_ssh2.pub > ~/.ssh/id_rsa_vsc_openssh.pub - - Additional information ====================== diff --git a/source/accounts/generating_keys_macos.rst b/source/accounts/generating_keys_macos.rst new file mode 100644 index 000000000..cf5bdbf57 --- /dev/null +++ b/source/accounts/generating_keys_macos.rst @@ -0,0 +1,23 @@ +.. _generating keys macos: + +##################################### +:fab:`apple` Generating keys on macOS +##################################### + +Every macOS install comes with its own implementation of `OpenSSH`_, so you +don't need to install any third-party software to use it. Just open a +Terminal window and jump in! + +Requirements +============ + +* macOS operating system +* `OpenSSH`_ + +Create a public/private key pair +================================ + +Generating a public/private key pair on macOS is identical to what is described +for :ref:`generating keys linux`. The underlying implementation of SSH in macOS +is the same `OpenSSH`_ used on Linux systems, so the same commands with the +same syntax apply. diff --git a/source/access/generating_keys_with_mobaxterm.rst b/source/accounts/generating_keys_mobaxterm.rst similarity index 62% rename from source/access/generating_keys_with_mobaxterm.rst rename to source/accounts/generating_keys_mobaxterm.rst index 63a612244..6bc3caa54 100644 --- a/source/access/generating_keys_with_mobaxterm.rst +++ b/source/accounts/generating_keys_mobaxterm.rst @@ -4,27 +4,32 @@ Generating keys with MobaXterm ############################## -The following steps explain how to generate an SSH key pair in ``OpenSSH`` format -using the MobaXterm application. +By default, there is no SSH client software available on Windows, so you +will typically have to install one yourself. A popular option is +:ref:`MobaXterm `, which is a freely terminal client that +can also generate your keys on Windows. -#. Go to the `MobaXterm`_ website and download the free version. Make sure to - select the **Portable edition** from the download page. Create a folder - called ``MobaXterm`` in a known location in your computer and decompress the - contents of the downloaded zip file inside it. +Requirements +------------ -#. Double click the ``MobaXterm_Personal`` executable file inside the - ``MobaXterm`` folder. - The MobaXterm main window will appear on your screen. 
It should be similar to this one: +* Windows operating system +* :ref:`MobaXterm ` + +Create a public/private key pair +-------------------------------- - .. _mobaxterm-main-window-sshkey: - .. figure:: access_using_mobaxterm/mobaxterm_main_window.png - :alt: mobaxterm main +The following steps explain how to generate an SSH key pair in ``OpenSSH`` format +using the MobaXterm application. You can install MobaXterm on your computer +following :ref:`our installation instructions `. + +#. Launch the ``MobaXterm_Personal`` executable file inside the + ``MobaXterm`` folder. #. In the **Tools** menu choose the **MobaKeyGen (SSH key generator)** option, a panel like this one will appear: .. _mobaxterm-sshkey-generator: - .. figure:: generating_keys_with_mobaxterm/mobaxterm_sshkey_generator.png + .. figure:: generating_keys_mobaxterm/mobaxterm_sshkey_generator.png :alt: mobaxterm ssh key generator @@ -34,19 +39,19 @@ using the MobaXterm application. entropy; do so until the green bar is completely filled. .. _mobaxterm-sshkey-entropy: - .. figure:: generating_keys_with_mobaxterm/mobaxterm_sshkey_entropy.png + .. figure:: generating_keys_mobaxterm/mobaxterm_sshkey_entropy.png :alt: mobaxterm ssh key entropy #. When the process is over you will see its result as shown below. Enter a comment in the **Key comment** field and a strong passphrase. .. _mobaxterm-sshkey-passphrase: - .. figure:: generating_keys_with_mobaxterm/mobaxterm_sshkey_passphrase.png + .. figure:: generating_keys_mobaxterm/mobaxterm_sshkey_passphrase.png :alt: mobaxterm ssh key passphrase #. Click on the **Save public key** button and save it to some desired - location; we recommend to name it ``id_rsa_vsc.pub``. You must upload this public key to your - your `VSC accountpage `__ before you can login to a VSC cluster. + location; we recommend to name it ``id_rsa_vsc.pub``. You must upload this + public key to your `VSC account page`_ before you can login to a VSC cluster. #. Finally click on the **Save private key** button and save that file also; we recommend to name this file ``id_rsa_vsc.ppk``. 
As the *private* part of diff --git a/source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_entropy.png b/source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_entropy.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_entropy.png rename to source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_entropy.png diff --git a/source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_generator.png b/source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_generator.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_generator.png rename to source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_generator.png diff --git a/source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_passphrase.png b/source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_passphrase.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/mobaxterm_sshkey_passphrase.png rename to source/accounts/generating_keys_mobaxterm/mobaxterm_sshkey_passphrase.png diff --git a/source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_entropy.png b/source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_entropy.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_entropy.png rename to source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_entropy.png diff --git a/source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_generator.png b/source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_generator.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_generator.png rename to source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_generator.png diff --git a/source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_passphrase.png b/source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_passphrase.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_passphrase.png rename to source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_passphrase.png diff --git a/source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_public.png b/source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_public.png similarity index 100% rename from source/access/generating_keys_with_mobaxterm/old/mobaxterm_sshkey_public.png rename to source/accounts/generating_keys_mobaxterm/old/mobaxterm_sshkey_public.png diff --git a/source/access/generating_keys_with_putty.rst b/source/accounts/generating_keys_putty.rst similarity index 79% rename from source/access/generating_keys_with_putty.rst rename to source/accounts/generating_keys_putty.rst index f43dc8da8..69db89f0b 100644 --- a/source/access/generating_keys_with_putty.rst +++ b/source/accounts/generating_keys_putty.rst @@ -4,32 +4,31 @@ Generating keys with PuTTY ########################## -Requirements: - -* Windows operating system -* PuTTY - By default, there is no SSH client software available on Windows, so you will typically have to install one yourself. We recommend to use `PuTTY`_, -which is freely available. You do not even need to install; just -download the executable and run it! Alternatively, an installation -package (MSI) is also available from the `PuTTY download site`_ -that will install all other tools that you might need also. +which is freely available. 
Follow the instructions on :ref:`terminal putty` to +install it on your computer. -You can copy the PuTTY executables together with your private key on a -USB stick to connect easily from other Windows computers. +Requirements +------------ + +* Windows operating system +* :ref:`PuTTY` Create a public/private key pair -------------------------------- -To generate a public/private key pair, you can use the PuTTYgen key -generator, which is available on the `PuTTY download site`_. -Start it and follow the following steps. +To generate a public/private key pair, you can use the *PuTTYgen* key +generator. This will already be available if you +:ref:`installed PuTTY as described in our documentation`. +If it is not the case, you can download it from the `PuTTY download site`_. + +Start *PuTTYgen* on your computer and follow the following steps: #. In 'Parameters' (at the bottom of the window), choose 'RSA' and set the number of bits in the key to 4096: - .. figure:: generating_keys_with_putty/puttygen_initial.png + .. figure:: generating_keys_putty/puttygen_initial.png :alt: Initial PuTTYgen screen #. Click on 'Generate'. To generate the key, you must move the mouse @@ -45,7 +44,7 @@ Start it and follow the following steps. is adviced to fill in the 'Key comment' field to make it easier identifiable afterwards. - .. figure:: generating_keys_with_putty/puttygen_filled_out.png + .. figure:: generating_keys_putty/puttygen_filled_out.png :alt: Filled PuTTYgen screen #. Finally, save both the public and private keys in a secure place diff --git a/source/access/generating_keys_with_putty/puttygen_filled_out.png b/source/accounts/generating_keys_putty/puttygen_filled_out.png similarity index 100% rename from source/access/generating_keys_with_putty/puttygen_filled_out.png rename to source/accounts/generating_keys_putty/puttygen_filled_out.png diff --git a/source/access/generating_keys_with_putty/puttygen_initial.png b/source/accounts/generating_keys_putty/puttygen_initial.png similarity index 100% rename from source/access/generating_keys_with_putty/puttygen_initial.png rename to source/accounts/generating_keys_putty/puttygen_initial.png diff --git a/source/access/generating_keys_on_windows.rst b/source/accounts/generating_keys_windows.rst similarity index 63% rename from source/access/generating_keys_on_windows.rst rename to source/accounts/generating_keys_windows.rst index c3ecc3a09..fa53b82be 100644 --- a/source/access/generating_keys_on_windows.rst +++ b/source/accounts/generating_keys_windows.rst @@ -4,12 +4,12 @@ :fab:`windows` Generating keys on Windows ######################################### -To get access from a Windows computer, we currently provide documentation for two SSH clients, -:ref:`PuTTY ` and :ref:`MobaXterm `, +To get access from a Windows computer, we currently provide documentation for +two SSH clients, :ref:`PuTTY ` and :ref:`MobaXterm `, both of which require a public/private key pair in a different format: .. toctree:: :maxdepth: 2 - generating_keys_with_putty - generating_keys_with_mobaxterm + generating_keys_putty + generating_keys_mobaxterm diff --git a/source/access/how_to_request_more_quota.rst b/source/accounts/how_to_request_more_quota.rst similarity index 66% rename from source/access/how_to_request_more_quota.rst rename to source/accounts/how_to_request_more_quota.rst index 4482dab15..507e56fd1 100644 --- a/source/access/how_to_request_more_quota.rst +++ b/source/accounts/how_to_request_more_quota.rst @@ -3,10 +3,10 @@ How can I request more disk quota? 
################################## If the current quota limits of your :ref:`personal storage ` or -:ref:`Virtual Organization (VO) ` are not large enough to -carry out your research project, it might be possible to increase them. This -option depends on data storage policies of the site managing your VSC account, -VO or Tier-1 project as well as on current capacity of the storage system. +:ref:`Virtual Organization (VO) ` are not large enough to carry out your +research project, it might be possible to increase them. This option depends on +data storage policies of the site managing your VSC account, VO or Tier-1 +project as well as on current capacity of the storage system. Before requesting more storage, please check carefully the :ref:`current data usage of your VSC account ` and identify which file system diff --git a/source/access/index.rst b/source/accounts/index.rst similarity index 59% rename from source/access/index.rst rename to source/accounts/index.rst index 926183dd5..abb6a7b0c 100644 --- a/source/access/index.rst +++ b/source/accounts/index.rst @@ -1,8 +1,8 @@ .. _access: -###################################### -:fas:`user-circle` Accounts and access -###################################### +########################### +:fas:`user-circle` Accounts +########################### In order to use the infrastructure of the VSC, you need a VSC user-ID, also called a VSC account. Check the `VSC website `_ @@ -22,19 +22,18 @@ purchased for. Contact your `local VSC coordinator `_ to arrange access when required. For the main Tier-1 compute cluster you need to submit a -`project application `_ (or you should be -covered by a project application within your research group). - -Before you apply for VSC account, it is useful to first check whether -the infrastructure is suitable for your application. Windows or macOS -programs for instance cannot run on our infrastructure as we use the -Linux operating system on the clusters. The infrastructure also should -not be used to run applications for which the compute power of a good -laptop is sufficient. The pages on the :ref:`tier1 hardware` and -:ref:`tier2 hardware` give a high-level description of our -infrastructure. You can find more detailed information in the user -documentation on the user portal. When in doubt, you can also contact -your `local support team `_. This does not require a VSC account. +`Tier-1 project application`_ (or you should be covered by a project +application within your research group). + +Before you apply for VSC account, it is useful to first check whether the +infrastructure is suitable for your application. Windows or macOS programs for +instance cannot run on our infrastructure as we use the Linux operating system +on the clusters. The infrastructure also should not be used to run applications +for which the compute power of a good laptop is sufficient. The pages on the +:ref:`tier1 hardware` and :ref:`tier2 hardware` give a high-level description +of our infrastructure. You can find more detailed information in the user +documentation on the user portal. When in doubt, you can also contact your +:ref:`local support team `. This does not require a VSC account. VSC Accounts ============ @@ -44,10 +43,8 @@ their VSC account and access the VSC infrastructure: .. 
toctree:: :maxdepth: 2 - :numbered: 1 vsc_account authentication - access_methods - account_management + management diff --git a/source/leuven/lecturer_s_procedure_to_request_student_accounts_ku_leuven_uhasselt.rst b/source/accounts/lecturer_procedure_student_accounts_kuleuven_uhasselt.rst similarity index 93% rename from source/leuven/lecturer_s_procedure_to_request_student_accounts_ku_leuven_uhasselt.rst rename to source/accounts/lecturer_procedure_student_accounts_kuleuven_uhasselt.rst index 2bef47bf9..778c71f1b 100644 --- a/source/leuven/lecturer_s_procedure_to_request_student_accounts_ku_leuven_uhasselt.rst +++ b/source/accounts/lecturer_procedure_student_accounts_kuleuven_uhasselt.rst @@ -1,8 +1,3 @@ -.. _lecturer procedure leuven: - -Lecturer's procedure to request student accounts (KU Leuven/UHasselt) -===================================================================== - We support using HPC for educational purposes, and we fully assist the lecturers with all aspects of using HPC in the classroom, such as creating studing accounts, granting credits, resource reservation (if needed), and also live technical support @@ -29,7 +24,7 @@ take the following actions: to track the use of the Tier-2 clusters by individual users during the course. For more information about the procedure of requesting the project please refer to the page :ref:`Slurm accounting `. -#. We advise to use :ref:`Open OnDemand ` service for the student to get access to the +#. We advise to use :ref:`Open OnDemand ` service for the student to get access to the login nodes, file browser and the job submission. The student will only need to use a browser and does not need to install any other software. Students will login through the KU Leuven :ref:`Multi Factor Authentication (MFA) `, no additional ssh-agent is required. #. To ensure that students jobs do not wait in the queue during the hands-on sessions, we offer diff --git a/source/accounts/management.rst b/source/accounts/management.rst new file mode 100644 index 000000000..30eb6c8b8 --- /dev/null +++ b/source/accounts/management.rst @@ -0,0 +1,69 @@ +################################### +:fas:`user-gear` Account management +################################### + +Account management at the VSC is mostly done through the `VSC account page`_ +using your institute account rather than your VSC account. + +User Credentials +================ + +You use the VSC account page to request your account as explained on +the ":ref:`apply for account`" page. You'll also need to +create an SSH-key which is also explained on those pages. + +Once your account is active and you can log on to your home cluster, +you can use the account management pages for many other operations: + +* If you want to :ref:`access the VSC clusters from more than one + computer `, + it is good practice to use a different key for each computer. You + can upload additional keys via the account management page. In + that way, if your computer is stolen, all you need to do is remove + the key for that computer and your account is safe again. + +* If you've :ref:`messed up your keys `, + you can restore the keys on the cluster or upload a new key and + then delete the old one. + +User Groups +============ + +Once your VSC account is active and you can log on to your home cluster, +you can also manage groups through the account management web interface. 
+Groups (a Linux/UNIX concept) are used to control access to licensed +software (e.g., software licenses paid for by one or more research +groups), to create subdirectories where researchers working on the same +project can collaborate and control access to those files, and to +control access to project credits on clusters that use these (all +clusters at KU Leuven). + +.. toctree:: + :maxdepth: 2 + + vsc_user_groups + +Virtual Organizations +===================== + +|UG| |VUB| You can create or join a so-called *Virtual Organization* (VO), +which gives access to extra storage in the HPC cluster that is shared between +the members of the VO. + +.. toctree:: + :maxdepth: 2 + + vo + +Managing disk space +=================== + +The amount of disk space that a user can use on the various file systems +on the system is limited by quota on the amount of disk space and number +of files. UGent and VUB users can see and request upgrades for their quota on +the Account management site (Users need to be in a VO (Virtual +Organization) to request additional quota. Creating and joining a VO is +also done through the Account Management website). On other sites +checking your disk space use is still :ref:`mostly done from the command +line ` and requesting more quota is done via email. + diff --git a/source/access/managing_disk_usage.rst b/source/accounts/managing_disk_usage.rst similarity index 100% rename from source/access/managing_disk_usage.rst rename to source/accounts/managing_disk_usage.rst diff --git a/source/access/messed_up_keys.rst b/source/accounts/messed_up_keys.rst similarity index 100% rename from source/access/messed_up_keys.rst rename to source/accounts/messed_up_keys.rst diff --git a/source/access/mfa_login.rst b/source/accounts/mfa_login.rst similarity index 60% rename from source/access/mfa_login.rst rename to source/accounts/mfa_login.rst index ec10653e0..ff0c27a2f 100644 --- a/source/access/mfa_login.rst +++ b/source/accounts/mfa_login.rst @@ -3,24 +3,25 @@ Multi Factor Authentication (MFA) ================================= -|KUL| Multi Factor Authentication (MFA) is an augmented level of security. +Multi Factor Authentication (MFA) is an augmented level of security. As the name suggests, MFA requires additional steps with human intervention when authenticating. -MFA is mandatory for accessing KU Leuven infrastructures. -In this page, we explain how to login to the -:ref:`KU Leuven Open OnDemand portal `, and how to use SSH clients -(such as PuTTY, terminal etc) with and without using an SSH agent. + +MFA is mandatory for accessing the :ref:`terminal interface` on the following +VSC clusters: + +.. include:: clusters_mfa.rst .. note:: - When connecting from abroad, you first need to login via the - `VSC firewall page `_. + If you are connecting from abroad, it is necessary that you first authorize + your own connection on the `VSC Firewall`_ Login to Open OnDemand ---------------------- Users from all VSC sites can access the Open OnDemand portal at KU Leuven site. -For that, proceed to the :ref:`Open OnDemand portal `. +For that, proceed to the :ref:`Open OnDemand portal `. If you are affiliated with KU Leuven, click on the KU Leuven logo. Otherwise, click on the VSC logo to choose your institute. You will be then forwarded to the Identity Provider (IdP) of your institute to @@ -32,19 +33,20 @@ Once that succeeds, you will automatically login to the Open OnDemand homepage. 
Connecting with an SSH agent ---------------------------- -Using an SSH agent allows to store so-called SSH certificates which various -client programs (PuTTY, MobaXterm, NoMachine, FileZilla, WinSCP, ...) -can then use to authenticate. -Getting an SSH certificate involves MFA but this only needs to performed once -since a certificate can be used multiple times as long as it remains valid. +Using an :ref:`ssh agent` allows to store so-called SSH certificates which then +are made available to any other client program needing to use that same connection. +Getting an SSH certificate involves MFA but this only needs to be performed +once since a certificate can be used multiple times as long as it remains valid. You can acquire such an SSH certificate as follows: -- Start up your SSH agent. - Windows users are recommended to use :ref:`Pageant `, - while Linux and MacOS users can e.g. rely on :ref:`OpenSSH`. +* Start up your SSH agent + + * Windows: we recommend to use :ref:`Pageant` + * macOS: use the default :ref:`OpenSSH agent` + * Linux: use the default :ref:`OpenSSH agent` -- Connect to either the cluster's login node or to ``firewall.vscentrum.be`` +* Connect to either the cluster's login node or to ``firewall.vscentrum.be`` with your terminal application of choice and with agent forwarding enabled. With e.g. OpenSSH you can do: @@ -59,7 +61,7 @@ You can acquire such an SSH certificate as follows: OpenSSH users may also automatically enable agent forwarding in their :ref:`SSH config file `. -- You will then be shown a URL which you will need to open in a browser: +* You will then be shown a URL which you will need to open in a browser: .. _firewall_link_mfa: .. figure:: mfa_login/firewall_link_mfa.PNG @@ -70,14 +72,14 @@ You can acquire such an SSH certificate as follows: Avoid using 'CTRL-C', or it will send a ``SIGINT`` signal interrupting your process instead of performing a copy operation. -- From the drop-down menu, choose the institute you are affiliated with. +* From the drop-down menu, choose the institute you are affiliated with. Below, we show an example of a KU Leuven user, but one has to pick the institute he/she is affiliated with. .. figure:: mfa_login/vsc_firewall_institute.PNG :alt: Choose your institute -- You will be forwarded to the Identity Provider (IdP) of your institute, +* You will be forwarded to the Identity Provider (IdP) of your institute, and you need to login in a usual way using your registered credentials. For KU Leuven users, the page looks like the following: @@ -85,7 +87,7 @@ You can acquire such an SSH certificate as follows: .. figure:: mfa_login/idp_page.PNG :alt: idp_page -- If you are already connected to the internal network, then you will be only asked to +* If you are already connected to the internal network, then you will be only asked to identify yourself with the MFA authenticator app on your personal phone: .. _reauthenticate_phone: @@ -99,7 +101,7 @@ You can acquire such an SSH certificate as follows: then you might not be required to log in again depending on your browser session settings (e.g., accepted cookies). -- Once you are successfully authenticated, you end up on a page telling you that your VSC +* Once you are successfully authenticated, you end up on a page telling you that your VSC identity is confirmed. If you have already performed the previous login in that browser session, you will immediately end up on this page: @@ -108,7 +110,7 @@ You can acquire such an SSH certificate as follows: .. 
figure:: mfa_login/firewall_confirmed.PNG :alt: firewall_confirmed -- An SSH certificate will now be injected back into the agent. +* An SSH certificate will now be injected back into the agent. That's it! You can continue doing your HPC work as usual. @@ -119,14 +121,15 @@ when opening new connections (thereby making use of the certificates). For a few common clients the corresponding documentation pages are listed below. -=========================================== ==================== ===================== -SSH Client name Purpose Operating System -=========================================== ==================== ===================== -:ref:`PuTTY ` text-based terminal Windows -:ref:`MobaXterm ` text-based terminal Windows -:ref:`NoMachine ` graphical desktop Windows, Linux, MacOS -:ref:`FileZilla ` file transfer Windows, Linux, MacOS -=========================================== ==================== ===================== +====================================== ==================== ===================== +SSH Client name Purpose Operating System +====================================== ==================== ===================== +:ref:`OpenSSH ` text-based terminal Linux, macOS +:ref:`PuTTY ` text-based terminal Windows +:ref:`MobaXterm ` text-based terminal Windows +:ref:`NoMachine ` graphical desktop Windows, Linux, macOS +:ref:`FileZilla ` file transfer Windows, Linux, macOS +====================================== ==================== ===================== .. _mfa quick start: @@ -135,8 +138,8 @@ Connecting without an SSH agent ------------------------------- Most clients (such as PuTTY or MobaXterm) can also be made to work *without* -an :ref:`SSH agent `. Keep in mind, however, that this approach -tends to be less convenient since each new connection will require multi-factor +an :ref:`ssh agent`. Keep in mind, however, that this approach tends to be +less convenient since each new connection will require multi-factor authentication. Certain clients (such as :ref:`FileZilla `, ``sshfs`` or @@ -146,17 +149,17 @@ agent holding an SSH certificate. This being said, the agentless procedure runs as follows: -- Connect to a :ref:`Tier-2 login node ` +* Connect to a :ref:`Tier-2 login node ` using your chosen client application (e.g. MobaXterm). -- The application is then supposed to show the link to complete the MFA procedure - (similar to the the previous section). +* The application is then supposed to show the link to complete the MFA procedure + (similar to the previous section). -- After passing the MFA challenge, you should now be connected to a login node. +* After passing the MFA challenge, you should now be connected to a login node. In plain SSH connections a successful login is rewarded with a welcome message: - .. _login_node: - .. figure:: mfa_login/login_node.PNG - :alt: login_node + .. _login_node: + .. 
figure:: mfa_login/login_node.PNG + :alt: login_node diff --git a/source/access/mfa_login/filezilla_sitemanager_setup.PNG b/source/accounts/mfa_login/filezilla_sitemanager_setup.PNG similarity index 100% rename from source/access/mfa_login/filezilla_sitemanager_setup.PNG rename to source/accounts/mfa_login/filezilla_sitemanager_setup.PNG diff --git a/source/access/mfa_login/firewall_confirmed.PNG b/source/accounts/mfa_login/firewall_confirmed.PNG similarity index 100% rename from source/access/mfa_login/firewall_confirmed.PNG rename to source/accounts/mfa_login/firewall_confirmed.PNG diff --git a/source/access/mfa_login/firewall_link_mfa.PNG b/source/accounts/mfa_login/firewall_link_mfa.PNG similarity index 100% rename from source/access/mfa_login/firewall_link_mfa.PNG rename to source/accounts/mfa_login/firewall_link_mfa.PNG diff --git a/source/access/mfa_login/idp_page.PNG b/source/accounts/mfa_login/idp_page.PNG similarity index 100% rename from source/access/mfa_login/idp_page.PNG rename to source/accounts/mfa_login/idp_page.PNG diff --git a/source/access/mfa_login/login_node.PNG b/source/accounts/mfa_login/login_node.PNG similarity index 100% rename from source/access/mfa_login/login_node.PNG rename to source/accounts/mfa_login/login_node.PNG diff --git a/source/access/mfa_login/mobaxterm_create_new_session.PNG b/source/accounts/mfa_login/mobaxterm_create_new_session.PNG similarity index 100% rename from source/access/mfa_login/mobaxterm_create_new_session.PNG rename to source/accounts/mfa_login/mobaxterm_create_new_session.PNG diff --git a/source/access/mfa_login/nx_config.PNG b/source/accounts/mfa_login/nx_config.PNG similarity index 100% rename from source/access/mfa_login/nx_config.PNG rename to source/accounts/mfa_login/nx_config.PNG diff --git a/source/access/mfa_login/nx_mod.PNG b/source/accounts/mfa_login/nx_mod.PNG similarity index 100% rename from source/access/mfa_login/nx_mod.PNG rename to source/accounts/mfa_login/nx_mod.PNG diff --git a/source/access/mfa_login/nx_profile.png b/source/accounts/mfa_login/nx_profile.png similarity index 100% rename from source/access/mfa_login/nx_profile.png rename to source/accounts/mfa_login/nx_profile.png diff --git a/source/access/mfa_login/reauthenticate_phone.PNG b/source/accounts/mfa_login/reauthenticate_phone.PNG similarity index 100% rename from source/access/mfa_login/reauthenticate_phone.PNG rename to source/accounts/mfa_login/reauthenticate_phone.PNG diff --git a/source/access/mfa_login/vsc_firewall_institute.PNG b/source/accounts/mfa_login/vsc_firewall_institute.PNG similarity index 100% rename from source/access/mfa_login/vsc_firewall_institute.PNG rename to source/accounts/mfa_login/vsc_firewall_institute.PNG diff --git a/source/accounts/pageant.rst b/source/accounts/pageant.rst new file mode 100644 index 000000000..87fa97a9f --- /dev/null +++ b/source/accounts/pageant.rst @@ -0,0 +1,166 @@ +.. _Pageant: + +####### +Pageant +####### + +Pageant is an SSH authentication agent that couples seamlessly with +:ref:`Putty`, :ref:`MobaXterm`, +:ref:`NoMachine` and :ref:`FileZilla` to make user +authentication an easy task. As of version 0.78, Pageant can also hold SSH +certificates in addition to SSH private keys. + +.. warning:: + + SSH authentication agents are very handy as you no longer need to + type your passphrase every time that you try to log in to the cluster. + It also implies that when someone gains access to your computer, he + also automatically gains access to your account on the cluster. 
So be
+ very careful and lock your screen when you are away from your computer!
+ It is your responsibility to keep your computer safe and to prevent easy
+ intrusion into your VSC account through an unprotected PC!
+
+Prerequisites
+=============
+
+.. tab-set::
+ :sync-group: vsc-sites
+
+ .. tab-item:: KU Leuven
+ :sync: kuluh
+
+ To access KU Leuven clusters, only an approved :ref:`VSC account ` is needed
+ as a prerequisite.
+
+ .. tab-item:: UAntwerpen
+ :sync: ua
+
+ Before you run Pageant, you need to have a private key in PPK format
+ (filename ends with ``.ppk``).
+ See :ref:`our page on generating keys with PuTTY `
+ to find out how to generate and use one.
+
+ .. tab-item:: UGent
+ :sync: ug
+
+ Before you run Pageant, you need to have a private key in PPK format
+ (filename ends with ``.ppk``).
+ See :ref:`our page on generating keys with PuTTY `
+ to find out how to generate and use one.
+
+ .. tab-item:: VUB
+ :sync: vub
+
+ Before you run Pageant, you need to have a private key in PPK format
+ (filename ends with ``.ppk``).
+ See :ref:`our page on generating keys with PuTTY `
+ to find out how to generate and use one.
+
+Installation
+============
+
+Pageant is part of the :ref:`PuTTY distribution `. Follow
+:ref:`our installation instructions for PuTTY ` to install
+Pageant on your computer.
+
+Running Pageant
+===============
+
+Once you launch Pageant, it will put an icon of a computer wearing a hat
+onto the system tray, which looks like this:
+
+.. _pageant_logo:
+.. figure:: pageant/Pageant_logo.png
+ :alt: pageant_logo
+
+Pageant runs silently in the background and does nothing until you load a
+private key into it.
+
+To open the main window of Pageant:
+
+#. Click the Pageant icon with the right mouse button
+#. Select "View Keys" from the menu
+
+ .. _pageant_menu:
+ .. figure:: pageant/Pageant_add_key.png
+ :alt: pageant_menu
+
+You can also bring this window up by double-clicking on the Pageant icon.
+
+Adding keys to Pageant
+======================
+
+The Pageant window contains a list box.
+This shows the private keys and/or certificates that Pageant is holding.
+Initially this list is empty.
+After you add one or more keys or certificates, they will show up in the list box.
+
+Steps to add a key to Pageant:
+
+#. Press the "Add Key" button
+#. A file dialog opens labelled "Select Private Key File"
+#. Find your private key file in this dialog, and press "Open"
+#. Pageant will now load the private key. If the key is protected by a
+ passphrase, Pageant will ask you to type its passphrase.
+
+ .. _pageant_passphrase:
+ .. figure:: pageant/Pageant_passphrase.png
+ :alt: pageant_passphrase
+
+#. When the key has been loaded, it will appear in the list in the Pageant window.
+
+Now start PuTTY (or FileZilla) and open an SSH session to a site that
+accepts your key or certificate. PuTTY (or FileZilla) will notice that Pageant is
+running; they retrieve the key or certificate automatically from Pageant, and use it to
+authenticate you as a recognized user.
+
+.. tab-set::
+ :sync-group: vsc-sites
+
+ .. tab-item:: KU Leuven
+ :sync: kuluh
+
+ Follow the steps in :ref:`Connecting with an SSH agent `
+ to get an SSH certificate into your agent.
+ At this point, a new certificate will be stored in Pageant that holds your
+ identity for a limited period of time.
+ You can verify that the certificate is actually stored by right-clicking on
+ Pageant and selecting 'View Keys':
+
+ .. figure:: pageant/Pageant_view_keys.png
+ :alt: pageant_view_keys
+
+
+ .. tab-item:: UAntwerpen
+ :sync: ua
+
+ You can now open as many PuTTY sessions as you like without having to
+ type your passphrase again.
+
+ .. tab-item:: UGent
+ :sync: ug
+
+ You can now open as many PuTTY sessions as you like without having to
+ type your passphrase again.
+
+ .. tab-item:: VUB
+ :sync: vub
+
+ You can now open as many PuTTY sessions as you like without having to
+ type your passphrase again.
+
+Pageant provides your credentials to other applications (such as PuTTY, NoMachine,
+FileZilla, MobaXterm) whenever you are prompted for your identity.
+
+Stopping Pageant
+================
+
+When you want to shut down Pageant, click the right button on the
+Pageant icon in the system tray, and select 'Exit' from the menu.
+Closing the Pageant main window does *not* shut down Pageant, because
+an SSH agent sits silently in the background.
+
+.. seealso::
+
+ You can find more information in the
+ `on-line manual `_.
+
diff --git a/source/access/using_pageant/Pageant_add_key.PNG b/source/accounts/pageant/Pageant_add_key.png similarity index 100% rename from source/access/using_pageant/Pageant_add_key.PNG rename to source/accounts/pageant/Pageant_add_key.png
diff --git a/source/access/using_pageant/Pageant_logo.PNG b/source/accounts/pageant/Pageant_logo.png similarity index 100% rename from source/access/using_pageant/Pageant_logo.PNG rename to source/accounts/pageant/Pageant_logo.png
diff --git a/source/access/using_pageant/Pageant_passphrase.PNG b/source/accounts/pageant/Pageant_passphrase.png similarity index 100% rename from source/access/using_pageant/Pageant_passphrase.PNG rename to source/accounts/pageant/Pageant_passphrase.png
diff --git a/source/access/using_pageant/Pageant_view_keys.PNG b/source/accounts/pageant/Pageant_view_keys.png similarity index 100% rename from source/access/using_pageant/Pageant_view_keys.PNG rename to source/accounts/pageant/Pageant_view_keys.png
diff --git a/source/access/scientific_domains.rst b/source/accounts/scientific_domains.rst similarity index 100% rename from source/access/scientific_domains.rst rename to source/accounts/scientific_domains.rst
diff --git a/source/accounts/ssh_agent.rst b/source/accounts/ssh_agent.rst new file mode 100644 index 000000000..f08373aae --- /dev/null +++ b/source/accounts/ssh_agent.rst @@ -0,0 +1,407 @@
+.. _OpenSSH agent:
+
+######################
+SSH Agent with OpenSSH
+######################
+
+The OpenSSH program ``ssh-agent`` holds the private keys used for
+public key authentication (RSA, DSA). The idea is that you store your
+private key in the ssh authentication agent and can then log in or use
+sftp as often as you need without having to enter your passphrase again.
+This is particularly useful when setting up a :ref:`ssh proxy `
+connection (e.g., for the Tier-1 system muk) as these connections are more
+difficult to set up when your key is not loaded into an ssh-agent.
+
+This all sounds very easy. The reality is a bit more complicated, though. The
+problem is that subsequent commands, e.g., the command to add a key to
+the agent or the ssh or sftp commands, must be able to find the ssh
+authentication agent. Therefore some information needs to be passed from
+ssh-agent to subsequent commands, and this is done through two
+*environment variables*: ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID``. The
+challenge is to make sure that these variables are defined with the
+correct values in the shell where you start the other ssh commands.
+
+Prerequisites
+=============
+
+..
tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + To access KU Leuven clusters, only an approved + :ref:`VSC account ` is needed as a prerequisite. + + .. tab-item:: UAntwerpen + :sync: ua + + Before you run ``ssh-agent``, you need to have a private key in OpenSSH + format, which can be created with OpenSSH itself. + See :ref:`generating keys linux` to find out how to generate and use one. + + .. tab-item:: UGent + :sync: ug + + Before you run ``ssh-agent``, you need to have a private key in OpenSSH + format, which can be created with OpenSSH itself. + See :ref:`generating keys linux` to find out how to generate and use one. + + .. tab-item:: VUB + :sync: vub + + Before you run ``ssh-agent``, you need to have a private key in OpenSSH + format, which can be created with OpenSSH itself. + See :ref:`generating keys linux` to find out how to generate and use one. + +.. _start SSH agent: + +Starting ssh-agent +================== + +.. _ssh agent basic: + +Basic scenarios +--------------- + +There are a number of likely basic scenarios: + +#. You're lucky and your system manager has set up everything so that + ``ssh-agent`` is started automatically when the GUI starts after logging + in and the environment variables are hence correctly defined in all + subsequent shells. You can check for that easily: type + + .. code-block:: bash + + $ ssh-add -l + + If the command is successful you are all set and can jump to + :ref:`ssh agent managing keys`. + + On the other hand, if the previous command returns the message: + + .. code-block:: text + + Could not open a connection to your authentication agent. + + then ``ssh-agent`` is not running or not configured properly, and you'll + need to follow one of the following scenarios. + +#. Launch a new window of your favourite terminal client (``xterm`` in the + example below) and switch to it. + + .. code-block:: bash + + $ ssh-agent xterm & + + + The shell in that terminal will now run the SSH agent until it is killed. + + .. _ssh agent basic 3: + +#. ``ssh-agent`` can also output the commands that are needed to configure + the shell. These can then be used to configure the current shell or + any further shell, e.g., if you're a bash user, an easy way to start + a ssh-agent and configure it in the current shell, is to type + + .. code-block:: bash + + $ eval `ssh-agent -s` + + + If you start a new shell (e.g., by starting an xterm) from that shell, it + should also be correctly configured to contact the ssh authentication agent. + A better idea though is to store the commands in a file and execute them in + any shell where you need access to the authentication agent, e.g., for bash + users: + + .. code-block:: bash + + $ ssh-agent -s >~/.ssh-agent-environment + . ~/.ssh-agent-environment + + + Then you can then configure any shell that needs access to the + authentication agent by executing + + .. code-block:: bash + + $ . ~/.ssh-agent-environment + + Note that this will not necessarily shut down the ssh-agent when you + log out of the system. It is not a bad idea to explicitly kill the + ssh-agent before you log out: + + .. code-block:: bash + + $ ssh-agent -k + + +If the command ``ssh-add -l`` is now successful you are all set, your SSH agent +is working and you can jump to :ref:`ssh agent managing keys`. 
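+
+As a quick recap of this basic scenario, the following minimal sketch (assuming
+a bash shell and a private key stored as ``~/.ssh/id_rsa_vsc``; adjust the path
+to your own key) starts an agent for the current shell, loads the key and
+verifies that it is available:
+
+.. code-block:: bash
+
+   # Start an agent and configure the current shell to find it
+   $ eval `ssh-agent -s`
+
+   # Load your private key; you will be asked for its passphrase once
+   $ ssh-add ~/.ssh/id_rsa_vsc
+
+   # Check which keys the agent is now holding
+   $ ssh-add -l
+
+   # Optionally, shut the agent down again before logging out
+   $ ssh-agent -k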
+ +Advanced options +---------------- + +In case ``ssh-agent`` is not started by default when you log in to your +computer, there's a number of things you can do to automate the startup +of ssh-agent and to configure subsequent shells. + +Ask your local system administrator +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If you're not managing your system yourself, you can always ask your +system manager if he can make sure that ssh-agent is started when you +log on and in such a way that subsequent shells opened from the desktop +have the environmental variables ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID`` +set (with the first one being the most important one). + +And if you're managing your own system, you can dig into the manuals to +figure out if there is a way to do so. Since there are so many desktop +systems available for Linux systems (gnome, KDE, Ubuntu unity, ...) we +cannot offer help here. + +Semi-automatic solution in bash +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This solution requires some modifications to ``.bash_profile`` and ``.bashrc``. +Be careful when making these modifications as errors may lead to trouble to log +on to your machine. So test by executing these files with the commands +``source ~/.bash_profile`` and ``source ~/.bashrc`` before logging out of your +current session. + +This simple solution is based on the ``.ssh-agent-environment`` file generated +in :ref:`option 3 ` given above to start the SSH agent: + +#. You can define a new shell command in bash by using its + `alias mechanism `__. + Add the following line to the file .bashrc in your home directory: + + .. code-block:: bash + + alias start-ssh-agent='/usr/bin/ssh-agent -s >~/.ssh-agent-environment; . ~/.ssh-agent-environment' + + + The new command start-ssh-agent will now start a new ssh-agent, store + the commands to set the environment variables in the file + ``.ssh-agent-environment`` in your home directory and then run ``source`` + on that file to execute the commands in the current shell (which then + sets ``SSH_AUTH_SOCK`` and ``SSH_AGENT_PID`` to appropriate values). + +#. Also put the line + + .. code-block:: bash + + [[ -s ~/.ssh-agent-environment ]] && . ~/.ssh-agent-environment &>/dev/null + + + in your ``.bashrc`` file. This line will check if the file + ``.ssh-agent-environment`` exists in your home directory and run ``source`` + on it to set the appropriate environment variables. + +#. As explained in the `GNU bash manual `__, + ``.bashrc`` is only read when starting so-called interactive non-login + shells. Interactive login shells will not read this file by default. + Therefore it is `advised in the GNU bash manual `__ + to add the line + + .. code-block:: bash + + [[ -s ~/.bashrc ]] && . ~/.bashrc + + + to your ``.bash_profile``. This will execute ``.bashrc`` if it exists + whenever ``.bash_profile`` is called. + +You can now start a SSH authentication agent by issuing the command +``start-ssh-agent`` and jump to :ref:`ssh agent managing keys` to add your own +keys to it. + +Automatic and safer solution in bash +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +One disadvantage of the previous solution is that a new ssh-agent will +be started every time you execute the command start-ssh-agent, and all +subsequent shells will then connect to that one. + +The following solution is much more complex, but a lot safer as it will +first do an effort to see if there is already a ssh-agent running that +can be contacted: + +#. 
It will first check if the environment variable ``SSH_AUTH_SOCK`` is + defined, and try to contact that agent. This makes sure that no new + agent will be started if you log on onto a system that automatically + starts an ssh-agent. +#. Then it will check for a file ``.ssh-agent-environment``, source that + file and try to connect to the ssh-agent. This will make sure that no + new agent is started if another agent can be found through that file. +#. And only if those two tests fail will a new ``ssh-agent`` be started. + +This solution uses a Bash function to define a new ``start-ssh-agent`` command: + +#. Add the following block of text to your ``.bashrc`` file with the new + function definition: + + .. dropdown:: Code of start-ssh-agent function + + .. code-block:: bash + + start-ssh-agent() { + # + # Start an ssh agent if none is running already. + # * First we try to connect to one via SSH_AUTH_SOCK + # * If that doesn't work out, we try via the file ssh-agent-environment + # * And if that doesn't work out either, we just start a fresh one and write + # the information about it to ssh-agent-environment for future use. + # + # We don't really test for a correct value of SSH_AGENT_PID as the only + # consequence of not having it set seems to be that one cannot kill + # the ssh-agent with ssh-agent -k. But starting another one wouldn't + # help to clean up the old one anyway. + # + # Note: ssh-add return codes: + # 0 = success, + # 1 = specified command fails (e.g., no keys with ssh-add -l) + # 2 = unable to contact the authentication agent + # + sshfile=~/.ssh-agent-environment + # + # First effort: Via SSH_AUTH_SOCK/SSH_AGENT_PID + # + if [ -n "$SSH_AUTH_SOCK" ]; then + # SSH_AUTH_SOCK is defined, so try to connect to the authentication agent + # it should point to. If it succeeds, reset newsshagent. + ssh-add -l &>/dev/null + if [[ $? != 2 ]]; then + echo "SSH agent already running." + unset sshfile + return 0 + else + echo "Could not contact the ssh-agent pointed at by SSH_AUTH_SOCK, trying more..." + fi + fi + # + # Second effort: If we're still looking for an ssh-agent, try via $sshfile + # + if [ -e "$sshfile" ]; then + # Load the environment given in $sshfile + . $sshfile &>/dev/null + # Try to contact the ssh-agent + ssh-add -l &>/dev/null + if [[ $? != 2 ]]; then + echo "SSH agent already running; reconfigured the environment." + unset sshfile + return 0 + else + echo "Could not contact the ssh-agent pointed at by $sshfile." + fi + fi + # + # And if we haven't found a working one, start a new one... + # + #Create a new ssh-agent + echo "Creating new SSH agent." + ssh-agent -s > $sshfile && . $sshfile + unset sshfile + } + + + A shorter version without all the comments and that does not generate + output is + + .. code-block:: bash + + start-ssh-agent() { + sshfile=~/.ssh-agent-environment + # + if [ -n "$SSH_AUTH_SOCK" ]; then + ssh-add -l &>/dev/null + [[ $? != 2 ]] && unset sshfile && return 0 + fi + # + if [ -e "$sshfile" ]; then + . $sshfile &>/dev/null + ssh-add -l &>/dev/null + [[ $? != 2 ]] && unset sshfile && return 0 + fi + # + ssh-agent -s > $sshfile && . $sshfile &>/dev/null + unset sshfile + } + + + This defines the command ``start-ssh-agent``. + +#. Since ``start-ssh-agent`` will now first check for a usable running + agent, it doesn't harm to simply execute this command in your ``.bashrc`` + file to start a SSH authentication agent. So add the line + + .. code-block:: bash + + start-ssh-agent &>/dev/null + + + after the above function definition. 
All output is sent to ``/dev/null``
+ (and hence not shown) as a precaution, since ``scp`` or ``sftp``
+ sessions fail when output is generated in ``.bashrc`` on many systems
+ (typically with error messages such as "Received message too long"
+ or "Received too large sftp packet"). You can also use the newly
+ defined command ``start-ssh-agent`` at the command prompt. It will then
+ check your environment, reset the environment variables ``SSH_AUTH_SOCK``
+ and ``SSH_AGENT_PID``, or start a new ssh-agent.
+
+#. As explained in the `GNU bash manual
+ `_,
+ ``.bashrc`` is only read when starting so-called interactive non-login
+ shells. Interactive login shells will not read this file by default.
+ Therefore it is `advised in the GNU bash
+ manual `_
+ to add the line
+
+ .. code-block:: bash
+
+ [[ -s ~/.bashrc ]] && . ~/.bashrc
+
+
+ to your ``.bash_profile``. This will execute ``.bashrc`` if it exists
+ whenever ``.bash_profile`` is called.
+
+You can now simply add your key :ref:`as indicated above ` with
+``ssh-add`` and it will become available in all shells.
+
+The only remaining problem is that the ssh-agent process that you
+started may not get killed when you log out, and if your next session
+fails to reconnect to that ssh-agent when you log on again, the result may be a
+build-up of ssh-agent processes. You can always kill it by hand before
+logging out with ``ssh-agent -k``.
+
+.. _ssh agent managing keys:
+
+Managing keys with SSH agent
+============================
+
+Once you have an ssh-agent up and running, it is easy to add your key to it.
+Assuming your key is ``~/.ssh/id_rsa_vsc``, type the following at the command
+prompt:
+
+.. code-block:: bash
+
+ $ ssh-add ~/.ssh/id_rsa_vsc
+
+You will then be asked to enter your passphrase.
+
+To list the keys that ssh-agent is managing, type
+
+.. code-block:: bash
+
+ $ ssh-add -l
+
+You can now use the OpenSSH commands :ref:`ssh `,
+:ref:`sftp and scp ` without having to enter your passphrase
+again.
+
+Links
+-----
+
+* `ssh-agent manual page `_ (external)
+* `ssh-add manual page `_ (external)
diff --git a/source/accounts/ssh_agent_mobaxterm.rst b/source/accounts/ssh_agent_mobaxterm.rst new file mode 100644 index 000000000..aeebc313d --- /dev/null +++ b/source/accounts/ssh_agent_mobaxterm.rst @@ -0,0 +1,88 @@
+.. _mobaxterm ssh agent:
+
+######################
+SSH agent on MobaXterm
+######################
+
+Once you've successfully set up the connection to your cluster, you will notice
+that you are prompted for the passphrase at each connection you make to a
+cluster. To avoid the need to re-type it each time, you can set up an internal
+SSH agent in :ref:`MobaXterm ` that will take care of
+unlocking your SSH private key or SSH certificate for
+:ref:`Multi-Factor Authentication ` when you open the application.
+The SSH agent will save the passphrase after you have entered it once.
+
+Prerequisites
+=============
+
+.. tab-set::
+ :sync-group: vsc-sites
+
+ .. tab-item:: KU Leuven
+ :sync: kuluh
+
+ To access KU Leuven clusters, only an approved
+ :ref:`VSC account ` is needed as a prerequisite.
+
+ .. tab-item:: UAntwerpen
+ :sync: ua
+
+ Before you run an SSH agent on MobaXterm, you need to have a private key
+ in OpenSSH format, which can be created with MobaXterm itself. See
+ :ref:`generating keys mobaxterm` to find out how to generate and use one.
+
+ .. tab-item:: UGent
+ :sync: ug
+
+ Before you run an SSH agent on MobaXterm, you need to have a private key
+ in OpenSSH format, which can be created with MobaXterm itself. See
+ :ref:`generating keys mobaxterm` to find out how to generate and use one.
+
+ .. tab-item:: VUB
+ :sync: vub
+
+ Before you run an SSH agent on MobaXterm, you need to have a private key
+ in OpenSSH format, which can be created with MobaXterm itself. See
+ :ref:`generating keys mobaxterm` to find out how to generate and use one.
+
+Enable SSH Agent
+================
+
+The following steps explain how to enable the SSH Agent on the MobaXterm
+application. You can install MobaXterm on your computer following
+:ref:`our installation instructions `.
+
+#. Open the MobaXterm program and go to the menu 'Settings ->
+ Configuration'
+
+#. You should see the `MobaXterm Configuration` panel. In the 'General' tab
+ choose the 'MobaXterm passwords management' option; a new panel will be
+ opened; make sure that 'Save sessions passwords' has the options
+ 'Always' and 'Save SSH keys passphrases as well' selected (as shown below)
+ and click 'OK'.
+
+ .. figure:: ssh_agent_mobaxterm/mobaxterm_save_passwords.png
+ :alt: mobaxterm save passwords option
+
+#. Open the 'SSH' tab in the same `MobaXterm Configuration` panel.
+ Make sure that all the boxes below the 'SSH agents' section are
+ ticked.
+
+#. Press the '+' button in the 'Load following keys at MobAgent startup'
+ field, look for your private key file and select it. At the end of the
+ process, the panel should look like this (the location of your private SSH
+ key may be different):
+
+ .. figure:: ssh_agent_mobaxterm/mobaxterm_ssh_agent.png
+ :alt: mobaxterm ssh agent setup
+
+ Please keep in mind that these settings will have to be updated if the
+ location of your private key ever changes.
+
+#. Press 'OK' and, when prompted to restart MobaXterm, choose to do so.
+
+#. Once MobaXterm restarts, you will be asked for the private key passphrase at
+ launch. This will occur only once: after you enter it correctly, it
+ will stay saved for all following sessions. Double clicking on a shortcut
+ for a cluster should open the corresponding connection directly.
+
diff --git a/source/access/access_using_mobaxterm/mobaxterm_save_passwords.png b/source/accounts/ssh_agent_mobaxterm/mobaxterm_save_passwords.png similarity index 100% rename from source/access/access_using_mobaxterm/mobaxterm_save_passwords.png rename to source/accounts/ssh_agent_mobaxterm/mobaxterm_save_passwords.png
diff --git a/source/access/access_using_mobaxterm/mobaxterm_ssh_agent.png b/source/accounts/ssh_agent_mobaxterm/mobaxterm_ssh_agent.png similarity index 100% rename from source/access/access_using_mobaxterm/mobaxterm_ssh_agent.png rename to source/accounts/ssh_agent_mobaxterm/mobaxterm_ssh_agent.png
diff --git a/source/accounts/vo-storage.rst b/source/accounts/vo-storage.rst new file mode 100644 index 000000000..545033c18 --- /dev/null +++ b/source/accounts/vo-storage.rst @@ -0,0 +1,68 @@
+.. grid:: 1 1 2 2
+ :gutter: 4
+
+ .. grid-item-card::
+ :class-header: bg-secondary text-white text-center font-weight-bold
+
+ :fas:`truck` VO Data
+ ^^^^^^^^^^^^^^^^^^^^
+
+ Location
+ ``$VSC_DATA_VO``, ``$VSC_DATA_VO_USER``
+
+ Purpose
+ Storage of datasets or resulting data that must be stored in the
+ cluster to carry out further computational jobs, and that can be shared
+ with co-workers.
+
+ Availability: Very high
+ Accessible from login nodes, compute nodes and from all clusters in VSC.
+
+ Capacity: High and expandable
+ By default, 112.5 GB (soft limit), 125.0 GB (hard limit, 7 days grace time).
+ Can be expanded upon request.
+
+ Performance: Low
+ Jobs must always copy any data needed from ``$VSC_DATA_VO`` to the scratch
+ before the run and save any results from scratch into ``$VSC_DATA_VO``
+ after the run.
+
+ Reliability: Very High
+ Data is stored in a redundant file system with data replication.
+
+ Back-ups:
+ The back-up policy depends on each VSC cluster. Check the :ref:`storage
+ hardware` characteristics of the ``VSC_DATA`` storage of your
+ cluster.
+
+ .. grid-item-card::
+ :class-header: bg-secondary text-white text-center font-weight-bold
+
+ :fas:`rocket` VO Scratch
+ ^^^^^^^^^^^^^^^^^^^^^^^^
+
+ Location
+ ``$VSC_SCRATCH_VO``, ``$VSC_SCRATCH_VO_USER``
+
+ Purpose
+ Storage of temporary or transient data that can be shared with co-workers.
+
+ Availability: High
+ Accessible from login nodes and compute nodes in the local cluster. Not
+ accessible from other VSC clusters.
+
+ Capacity: High and expandable
+ 225 GB (soft limit), 250 GB (hard limit, 7 days grace time).
+ Can be expanded upon request.
+
+ Performance: High
+ Preferred location for all data files read or written during the
+ execution of a job. Suitable for all workload types.
+
+ Reliability: Medium
+ Data is stored in a redundant file system, but without replication.
+
+ Back-ups:
+ The back-up policy depends on each VSC cluster. Check the :ref:`storage
+ hardware` characteristics of the ``VSC_SCRATCH`` storage of your
+ cluster.
diff --git a/source/accounts/vo.rst b/source/accounts/vo.rst new file mode 100644 index 000000000..b3d333b30 --- /dev/null +++ b/source/accounts/vo.rst @@ -0,0 +1,141 @@
+.. _vo:
+
+####################
+Virtual Organization
+####################
+
+A Virtual Organization (VO) is a special type of group. The members of a VO get
+access to extra storage in the HPC cluster with shared directories between them
+to easily collaborate with their colleagues. Any user can only be a member of a
+single VO in each VSC institution.
+
+VSC clusters that support Virtual Organizations:
+
+.. grid:: 3
+ :gutter: 4
+
+ .. grid-item-card:: |UG|
+ :columns: 12 4 4 4
+
+ * Tier-1 :ref:`Hortense ` [#f1]_
+ * Tier-2 :ref:`All clusters `
+
+ .. grid-item-card:: |VUB|
+ :columns: 12 4 4 4
+
+ * Tier-2 :ref:`Hydra `
+ * Tier-2 :ref:`Anansi `
+
+.. [#f1] partial support, only ``VSC_DATA_VO``
+
+VO directories
+==============
+
+Members of the VO have access to additional directories in the scratch and data
+storage of the cluster.
+
+.. include:: vo-storage.rst
+
+* ``$VSC_SCRATCH_VO`` and ``$VSC_DATA_VO``: top directory shared by all members
+ of the VO
+
+* ``$VSC_SCRATCH_VO_USER`` and ``$VSC_DATA_VO_USER``: each member of the VO has
+ its own personal folder inside the VO that can only be accessed by its owner.
+ Both folders can be used as alternatives to the member's personal ``$VSC_SCRATCH``
+ and ``$VSC_DATA``.
+
+.. _join_vo:
+
+Joining an existing VO
+======================
+
+.. warning:: Keep in mind that users can only be a member of a single VO in the
+ same VSC institution. Thus, if you are a member of a VO and join another VO,
+ you will lose access to the data in the first VO.
+
+Members of the research team can make a request to join the VO of their
+research group:
+
+#. Get the ID of the VO of the research group you belong to. VO IDs at VUB
+ are formed by the letters ``bvo`` followed by 5 digits.
+
+#. Fill in the section **Join VO** of your `VSC Account - New/Join VO`_ page
+
+ * Select the corresponding VO ID from the drop-down box below *Group*
+
+ * Fill out the *Message* box with a message identifying yourself to the
+ moderator of the VO
+
+ * Upon submission, the moderator of the VO (somebody from the research
+ group) will receive and review your request
+
+.. _create_vo:
+
+Creating a new VO
+=================
+
+.. warning:: VO requests from (PhD) students or postdocs will be rejected. Only
+ group leaders (ZAP members) are allowed to create a VO.
+
+Group leaders (ZAP members) can make a motivated request to the HPC team to
+create a new VO for their research group:
+
+#. Make sure you have an active VSC account
+
+ If you don't have a VSC account yet, follow the instructions in
+ :ref:`apply for account` to request it. The process is fairly easy if you
+ don't need access to the HPC cluster; in that case it's not required to
+ generate an SSH key pair.
+
+#. Go to the section **Request new VO** in your `VSC Account - New/Join VO`_ page
+
+ * Fill out the form below *'Why do you want to request a VO'*
+
+ * Fill out both the internal and public VO names. These cannot contain
+ spaces, and should be 8-10 characters long. For example, ``genome25`` is
+ a valid VO name.
+
+ * Fill out the rest of the form and press submit. This will send a message
+ to the HPC administrators, who will then review your request
+
+#. If the request is approved, you will become a **member and moderator** of
+ your newly created VO
+
+Requesting more storage space
+=============================
+
+VO moderators can request additional quota for the VO and its members:
+
+#. Go to the section **Request additional quota** in your
+ `VSC Account - Edit VO`_ page
+
+ * Fill out the amount of additional storage you want for the data
+ storage of your VO (named ``VSC_DATA`` in this section) and/or the
+ scratch storage of your VO (named ``VSC_SCRATCH_RHEA`` in this section)
+
+ * Add a comment explaining why you need additional storage space and submit
+ the form
+
+#. Your request will be reviewed by the HPC administrators
+
+Setting per-member VO quota
+===========================
+
+VO moderators can tweak the share of the VO quota that each member can
+maximally use. By default, this is set to 50% for each user, but a moderator
+can change this: it is possible to give a particular user more than half of the
+VO quota (for example 80%), or significantly less (for example 10%).
+
+#. Go to the section **Manage per-member quota share** in your
+ `VSC Account - Edit VO`_ page
+
+ * Fill out the share (%) of the available space you want each user to be
+ able to use and press confirm
+
+#. The per-member VO quota will be updated within 30 minutes at most
+
+.. note:: The total per-member percentage can be above 100%. The share for any
+ user indicates what he/she can maximally use, but the actual limit will then
+ depend on the usage of the other members. The total storage space of the VO
+ will always be respected.
+
diff --git a/source/access/vsc_account.rst b/source/accounts/vsc_account.rst similarity index 73% rename from source/access/vsc_account.rst rename to source/accounts/vsc_account.rst index 8f362b1bf..2aabc3b13 100644 --- a/source/access/vsc_account.rst +++ b/source/accounts/vsc_account.rst @@ -17,8 +17,10 @@ Applying for your VSC account
=============================
.. tab-set::
+ :sync-group: vsc-sites
..
tab-item:: KU Leuven/UHasselt + :sync: kuluh UHasselt has an agreement with KU Leuven to run a shared infrastructure. Therefore the procedure is the same for both institutions. @@ -32,32 +34,25 @@ Applying for your VSC account Researchers with a regular personnel account (u-number) can use the :ref:`generic procedure `. - - If you are in one of the higher education institutions associated + * If you are in one of the higher education institutions associated with KU Leuven, the :ref:`generic procedure ` may not work. In that case, please e-mail hpcinfo@kuleuven.be to get an account. You will have to provide a public ssh key generated as described above. - - Lecturers of KU Leuven and UHasselt that need HPC access for giving - their courses: The procedure requires action both from the lecturers - and from the students. Lecturers should follow the :ref:`specific - procedure for lecturers `, - while the students should simply apply for the account through the - :ref:`generic procedure `. - .. tab-item:: UGent + * Lecturers of KU Leuven and UHasselt that need HPC access for giving + their courses should be aware that the procedure requires action both + from the lecturers and from the students. + Students can simply apply for the account through the + :ref:`generic procedure ` while lecturers + should follow the specific procedure outlined herein: - All information about the access policy is available `in - English `_ at the `UGent - HPC web pages `_. - - Who? - Access is available for faculty students (master's projects under - faculty supervision), and researchers of UGent. + .. dropdown:: Lecturer’s procedure to request student accounts - How? - Researchers and students can use the :ref:`generic procedure `. + .. include:: lecturer_procedure_student_accounts_kuleuven_uhasselt.rst .. tab-item:: UAntwerp (AUHA) + :sync: ua Who? Access is available for faculty students (master's projects under @@ -66,37 +61,63 @@ Applying for your VSC account How? Researchers can use the :ref:`generic procedure `. - - Master students can also use the infrastructure for their master - thesis work. The promotor of the thesis should first send a + * Master students can also use the infrastructure for their master + thesis work. The promoter of the thesis should first send a motivation to hpc@uantwerpen.be and then the :ref:`generic procedure ` should be followed (using your student UAntwerpen id) to request the account. + .. tab-item:: UGent + :sync: ug + + All information about the access policy is available `in + English `_ at the `UGent + HPC web pages `_. + + Who? + Access is available for faculty students (master's projects under + faculty supervision), and researchers of UGent. + + How? + Researchers and students can use the + :ref:`generic procedure `. + .. tab-item:: VUB + :sync: vub - All information about the access policy is available on the `VUB - HPC documentation website `_. + All information about the access policy is available on the + `VUB-HPC documentation website `_. Who? Access is available for faculty students (under faculty supervision), and researchers of VUB and their associations. How? - Researchers with a regular VUB account (`@vub.be`) can use - the :ref:`generic procedure `. + Researchers with a regular VUB account (`@vub.be`) can use the + :ref:`generic procedure `. + VUB and UZB staff, including PhD students, will get automatic approval + of their VSC account. + + * Non-PhD students of VUB may get access subject to the specific + conditions detailed below. 
Their VSC account must be requested by their + Professor or Promoter by filling out + `VUB's VSC account request form `_. + Note that this form is not accessible to students. + + * Courses: All students can use the HPC cluster if required for + practical courses. - - Master students can also use the infrastructure for their master - thesis work. The promotor of the thesis should first send a - motivation to hpc@vub.be and then the :ref:`generic - procedure ` should be followed to request the account. + * Bachelor or Master Thesis: students working towards a Bachelor or + Master thesis can use the HPC cluster for their research. .. tab-item:: Others + :sync: other Who? Check that `you are eligible to use VSC infrastructure `_. How? - Ask your VSC contact for help. If you don't have a VSC contact yet, and please + Ask your VSC contact for help. If you don't have a VSC contact yet, please `get in touch`_ with us. @@ -141,7 +162,6 @@ procedure does not work. :hidden: scientific_domains - ../leuven/lecturer_s_procedure_to_request_student_accounts_ku_leuven_uhasselt Next steps ========== @@ -150,7 +170,7 @@ Register for an HPC Introduction course. These are organized at all universities on a regular basis. Information on our training program and the schedule is available on the -`VSC website `_. +`VSC Training`_ website. .. note:: @@ -161,12 +181,12 @@ Information on our training program and the schedule is available on the Additional information ====================== -Your account also includes two “blocks” of disk space: your home +Your account also includes two *blocks* of disk space: your home directory and data directory. Both are accessible from all VSC clusters. When you log in to a particular cluster, you will also be assigned one or more blocks of temporary disk space, called scratch directories. Which directory should be used for which type of data, is explained in -the page ":ref:`data location`". +the page :ref:`data location`. Your VSC account does not give you access to all available software. You can use all free software and a number of compilers and other diff --git a/source/access/how_to_create_manage_vsc_groups.rst b/source/accounts/vsc_user_groups.rst similarity index 100% rename from source/access/how_to_create_manage_vsc_groups.rst rename to source/accounts/vsc_user_groups.rst diff --git a/source/access/where_can_i_store_what_kind_of_data.rst b/source/accounts/where_can_i_store_what_kind_of_data.rst similarity index 100% rename from source/access/where_can_i_store_what_kind_of_data.rst rename to source/accounts/where_can_i_store_what_kind_of_data.rst diff --git a/source/antwerp/old_hardware/hopper_hardware.rst b/source/antwerp/old_hardware/hopper_hardware.rst index 5f0999ffa..9dc455767 100644 --- a/source/antwerp/old_hardware/hopper_hardware.rst +++ b/source/antwerp/old_hardware/hopper_hardware.rst @@ -34,11 +34,11 @@ When the option was omitted, your job was submitted to the default partition (** The maximum execution wall time for jobs was **7 days** (168 hours). 
-=============== ====== =================================================================================== ====== ========== ======== -Slurm partition nodes processors per node memory local disk network -=============== ====== =================================================================================== ====== ========== ======== -**ivybridge** 23 2x 10-core Intel Xeon `E5-2680v2 `_ \@2.8 GHz 256 GB 500 GB FDR10-IB -=============== ====== =================================================================================== ====== ========== ======== +=============== ====== ============================================ ====== ========== ======== +Slurm partition nodes processors per node memory local disk network +=============== ====== ============================================ ====== ========== ======== +**ivybridge** 23 2x 10-core `Intel Xeon E5-2680v2`_ \@2.8 GHz 256 GB 500 GB FDR10-IB +=============== ====== ============================================ ====== ========== ======== ******* History @@ -46,9 +46,9 @@ History Hopper was a compute cluster at UAntwerp in operation from late 2014 till the summer of 2020. The cluster had 168 compute nodes with -dual 10-core Intel `E5-2680v2 `_ -Ivy Bridge generation CPUs connected through an InfiniBand FDR10 network, -144 of these compute nodes having 64 GB RAM and 24 having 256 GB RAM. +dual 10-core `Intel Xeon E5-2680v2`_ Ivy Bridge generation CPUs connected +through an InfiniBand FDR10 network, 144 of these compute nodes having 64 GB +RAM and 24 having 256 GB RAM. When the cluster was moved out in the summer of 2020 to make space for the installation of :ref:`Vaughan`, the 24 nodes with 256 GB RAM @@ -64,4 +64,4 @@ Hopper was named after `Grace Hopper ` |login-vaughan.hpc.uantwerpen.be | | login1-vaughan.hpc.uantwerpen.be| -| | | | login1-vaughan.hpc.uantwerpen.be| -+---------------------------------------------------+-----------------------------------+-----------------------------------+ -|:ref:`Leibniz` | | login-leibniz.hpc.uantwerpen.be | | login1-leibniz.hpc.uantwerpen.be| -| | | **login.hpc.uantwerpen.be** | | login2-leibniz.hpc.uantwerpen.be| -+---------------------------------------------------+-----------------------------------+-----------------------------------+ -|:ref:`Visualization`|viz1-leibniz.hpc.uantwerpen.be | | -+---------------------------------------------------+-----------------------------------+-----------------------------------+ -|:ref:`Breniac` |login-breniac.hpc.uantwerpen.be | | -+---------------------------------------------------+-----------------------------------+-----------------------------------+ ++----------------------------------------------------+-----------------------------------+-----------------------------------+ +| Cluster | Generic login name | Individual login node | ++====================================================+===================================+===================================+ +|:ref:`Vaughan` |login-vaughan.hpc.uantwerpen.be | | login1-vaughan.hpc.uantwerpen.be| +| | | | login1-vaughan.hpc.uantwerpen.be| ++----------------------------------------------------+-----------------------------------+-----------------------------------+ +|:ref:`Leibniz` | | login-leibniz.hpc.uantwerpen.be | | login1-leibniz.hpc.uantwerpen.be| +| | | **login.hpc.uantwerpen.be** | | login2-leibniz.hpc.uantwerpen.be| ++----------------------------------------------------+-----------------------------------+-----------------------------------+ +|:ref:`Visualization` |viz1-leibniz.hpc.uantwerpen.be 
| | ++----------------------------------------------------+-----------------------------------+-----------------------------------+ +|:ref:`Breniac` |login-breniac.hpc.uantwerpen.be | | ++----------------------------------------------------+-----------------------------------+-----------------------------------+ .. note:: Direct login is possible to all login nodes and to the visualization node *from within Belgium only*. From outside of Belgium, a :ref:`VPN connection ` to the UAntwerp network is required. @@ -40,44 +40,44 @@ Partitions in **bold** are the default partition for the corresponding cluster. :ref:`Vaughan ` ================================= -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| Partition | Nodes | CPU-GPU | Memory | Maximum wall time | -+==============+=======+============================================================================================================+======================+=====================+ -| **zen2** | 152 | 2x 32-core AMD `Epyc 7452 `_ | 256 GB | 3 days | -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| zen3 | 24 | 2x 32-core AMD `Epyc 7543 `_ | 256 GB | 3 days | -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| zen3_512 | 16 | 2x 32-core AMD `Epyc 7543 `_ | 512 GB | 3 days | -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| ampere_gpu | 1 | | 2x 32-core AMD `Epyc 7452 `_ | 256 GB | 1 day | -| | | | 4x NVIDIA `A100 (Ampere) `_ 40 GB SXM4 | | | -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| arcturus_gpu | 2 | | 2x 32-core AMD `Epyc 7452 `_ | 256 GB | 1 day | -| | | | 2x AMD `MI100 (Arcturus) `_ 32 GB HBM2| | | -+--------------+-------+------------------------------------------------------------------------------------------------------------+----------------------+---------------------+ ++--------------+-------+---------------------------------------------------+----------+---------------------+ +| Partition | Nodes | CPU-GPU | Memory | Maximum wall time | ++==============+=======+===================================================+==========+=====================+ +| **zen2** | 152 | 2x 32-core `AMD EPYC 7452`_ | 256 GB | 3 days | ++--------------+-------+---------------------------------------------------+----------+---------------------+ +| zen3 | 24 | 2x 32-core `AMD EPYC 7543`_ | 256 GB | 3 days | ++--------------+-------+---------------------------------------------------+----------+---------------------+ +| zen3_512 | 16 | 2x 32-core `AMD EPYC 7543`_ | 512 GB | 3 days | ++--------------+-------+---------------------------------------------------+----------+---------------------+ +| ampere_gpu | 1 | | 2x 32-core `AMD EPYC 7452`_ | 256 GB | 1 day | +| | | | 4x `NVIDIA A100`_ (Ampere) 40 GB SXM4 | | | ++--------------+-------+---------------------------------------------------+----------+---------------------+ +| arcturus_gpu | 2 | | 2x 32-core `AMD EPYC 7452`_ | 256 GB | 1 day | 
+| | | | 2x `AMD Instinct MI100`_ (Arcturus) 32 GB HBM2 | | | ++--------------+-------+---------------------------------------------------+----------+---------------------+ :ref:`Leibniz ` ================================= -+---------------+-------+------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| Partition | Nodes | CPU-GPU | Memory | Maximum wall time | -+===============+=======+================================================================================================+======================+=====================+ -| **broadwell** | 144 | 2x 14-core Intel Xeon `E5-2680v4 `_ | 128 GB | 3 days | -+---------------+-------+------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| broadwell_256 | 8 | 2x 14-core Intel Xeon `E5-2680v4 `_ | 256 GB | 3 days | -+---------------+-------+------------------------------------------------------------------------------------------------+----------------------+---------------------+ -| pascal_gpu | 2 | | 2x 14-core Intel Xeon `E5-2680v4 `_ | 128 GB | 1 day | -| | | | 2x NVIDIA `P100 (Pascal) `_ 16 GB HBM2 | | | -+---------------+-------+------------------------------------------------------------------------------------------------+----------------------+---------------------+ ++---------------+-------+------------------------------------------------+----------------------+---------------------+ +| Partition | Nodes | CPU-GPU | Memory | Maximum wall time | ++===============+=======+================================================+======================+=====================+ +| **broadwell** | 144 | 2x 14-core `Intel Xeon E5-2680v4`_ | 128 GB | 3 days | ++---------------+-------+------------------------------------------------+----------------------+---------------------+ +| broadwell_256 | 8 | 2x 14-core `Intel Xeon E5-2680v4`_ | 256 GB | 3 days | ++---------------+-------+------------------------------------------------+----------------------+---------------------+ +| pascal_gpu | 2 | | 2x 14-core `Intel Xeon E5-2680v4`_ | 128 GB | 1 day | +| | | | 2x `NVIDIA Tesla P100`_ (Pascal) 16 GB HBM2 | | | ++---------------+-------+------------------------------------------------+----------------------+---------------------+ :ref:`Breniac ` ========================================== -+--------------+-------+----------------------------------------------------------------------------+--------+---------------------+ -| Partition | Nodes | CPU | Memory | Maximum wall time | -+==============+=======+============================================================================+========+=====================+ -| **skylake** | 23 | 2x 14-core Intel Xeon `Gold 6132 `_ | 192 GB | 7 days | -+--------------+-------+----------------------------------------------------------------------------+--------+---------------------+ ++--------------+-------+-------------------------------------+--------+---------------------+ +| Partition | Nodes | CPU | Memory | Maximum wall time | ++==============+=======+=====================================+========+=====================+ +| **skylake** | 23 | 2x 14-core `Intel Xeon Gold 6132`_ | 192 GB | 7 days | ++--------------+-------+-------------------------------------+--------+---------------------+ ********************** Storage infrastructure diff --git a/source/antwerp/tier2_hardware/breniac_hardware.rst 
b/source/antwerp/tier2_hardware/breniac_hardware.rst index b52d0685d..d462897a9 100644 --- a/source/antwerp/tier2_hardware/breniac_hardware.rst +++ b/source/antwerp/tier2_hardware/breniac_hardware.rst @@ -25,11 +25,11 @@ When the option is omitted, your job is submitted to the only partition (**skyla The maximum execution wall time for jobs is **7 days** (168 hours). -=============== ====== =================================================================================== ====== ========== ======= -Slurm partition nodes processors per node memory local disk network -=============== ====== =================================================================================== ====== ========== ======= -**skylake** 23 2x 14-core Intel Xeon `Gold 6132 `_ \@2.6GHz 192 GB 500 GB EDR-IB -=============== ====== =================================================================================== ====== ========== ======= +=============== ====== =========================================== ====== ========== ======= +Slurm partition nodes processors per node memory local disk network +=============== ====== =========================================== ====== ========== ======= +**skylake** 23 2x 14-core `Intel Xeon Gold 6132`_ \@2.6GHz 192 GB 500 GB EDR-IB +=============== ====== =========================================== ====== ========== ======= .. _Breniac login UAntwerp: @@ -53,7 +53,7 @@ From inside the VSC network (e.g., when connecting from another VSC cluster), us - 1 login node - - 2 Xeon `Gold 6132 `_ CPUs\@2.6GHz (Skylake), 14 cores each + - 2 `Intel Xeon Gold 6132`_ CPUs\@2.6GHz (Skylake), 14 cores each - 192 GB RAM - 2x 500 GB HDD local disk (raid 1) @@ -109,4 +109,4 @@ History In 2023, the :ref:`KU Leuven Tier-1 Breniac cluster` was decommissioned. During the summer of 2023, 24 of the Breniac compute nodes were recovered for use at UAntwerp, replacing the -:ref:`Hopper` compute cluster. \ No newline at end of file +:ref:`Hopper` compute cluster. diff --git a/source/antwerp/tier2_hardware/leibniz_hardware.rst b/source/antwerp/tier2_hardware/leibniz_hardware.rst index 69b22c1de..80999fe7d 100644 --- a/source/antwerp/tier2_hardware/leibniz_hardware.rst +++ b/source/antwerp/tier2_hardware/leibniz_hardware.rst @@ -24,23 +24,23 @@ CPU compute nodes The maximum execution wall time for jobs is **3 days** (72 hours). 
-=============== ====== ============================================================================= ====== ========== ======= -Slurm partition nodes processors per node memory local disk network -=============== ====== ============================================================================= ====== ========== ======= -**broadwell** 144 2x 14-core Xeon `E5-2680v4 `_ \@2.4 GHz 128 GB 120 GB SSD EDR-IB -broadwell_256 8 2x 14-core Xeon `E5-2680v4 `_ \@2.4 GHz 256 GB 120 GB SSD EDR-IB -=============== ====== ============================================================================= ====== ========== ======= +=============== ====== ============================================ ====== ========== ======= +Slurm partition nodes processors |nbsp| per |nbsp| node memory local disk network +=============== ====== ============================================ ====== ========== ======= +**broadwell** 144 2x 14-core `Intel Xeon E5-2680v4`_ \@2.4 GHz 128 GB 120 GB SSD EDR-IB +broadwell_256 8 2x 14-core `Intel Xeon E5-2680v4`_ \@2.4 GHz 256 GB 120 GB SSD EDR-IB +=============== ====== ============================================ ====== ========== ======= GPU compute nodes ================= The maximum execution wall time for jobs is **1 day** (24 hours). -=============== ===== ======================================================================================= ========== ============================================================================= ====== ========== ======= -Slurm partition nodes GPUs per node GPU memory processors per node memory local disk network -=============== ===== ======================================================================================= ========== ============================================================================= ====== ========== ======= -pascal_gpu 2 2x NVIDIA Tesla `P100 (Pascal) `_ 16 GB HBM2 2x 14-core Xeon `E5-2680v4 `_ \@2.4 GHz 128 GB 120 GB EDR-IB -=============== ===== ======================================================================================= ========== ============================================================================= ====== ========== ======= +=============== ===== ================================ ========== ============================================= ====== ========== ======= +Slurm partition nodes GPUs |nbsp| per |nbsp| node GPU memory processors |nbsp| per |nbsp| node memory local disk network +=============== ===== ================================ ========== ============================================= ====== ========== ======= +pascal_gpu 2 2x `NVIDIA Tesla P100`_ (Pascal) 16 GB HBM2 2x 14-core `Intel Xeon E5-2680v4`_ \@2.4 GHz 128 GB 120 GB EDR-IB +=============== ===== ================================ ========== ============================================= ====== ========== ======= .. seealso:: See :ref:`GPU computing UAntwerp` for more information on using the GPU nodes. 
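+
+As an illustration only (a minimal sketch using generic Slurm syntax; the
+site-specific options for requesting GPUs at UAntwerp are described in the
+page linked above), a batch job asking for one GPU on the ``pascal_gpu``
+partition could look like this:
+
+.. code-block:: bash
+
+   #!/bin/bash
+   #SBATCH --partition=pascal_gpu   # GPU partition of Leibniz
+   #SBATCH --gres=gpu:1             # request a single GPU (generic Slurm syntax, assumed here)
+   #SBATCH --time=02:00:00          # must stay within the 1 day wall time limit
+
+   # Print information about the GPU that was allocated to this job
+   nvidia-smi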
@@ -72,13 +72,13 @@ From inside the VSC network (e.g., when connecting from another VSC cluster), us - 2 login nodes - - 2x 14-core Xeon `E5-2680v4 `_ CPUs\@2.4 GHz (Broadwell) + - 2x 14-core `Intel Xeon E5-2680v4`_ CPUs\@2.4 GHz (Broadwell) - 256 GB RAM - 2x 1 TB HDD local disk (raid 1) - 1 visualization node - - 2x 14-core Xeon `E5-2680v4 `_ CPUs\@2.4 GHz (Broadwell) + - 2x 14-core `Intel Xeon E5-2680v4`_ CPUs\@2.4 GHz (Broadwell) - 1 NVIDIA Quadro P5000 - 256 GB RAM - 2x 1 TB HDD local disk (raid 1) diff --git a/source/antwerp/tier2_hardware/vaughan_hardware.rst b/source/antwerp/tier2_hardware/vaughan_hardware.rst index 2dfc4785e..346c2e430 100644 --- a/source/antwerp/tier2_hardware/vaughan_hardware.rst +++ b/source/antwerp/tier2_hardware/vaughan_hardware.rst @@ -27,25 +27,25 @@ CPU compute nodes The maximum execution wall time for jobs is **3 days** (72 hours). -=============== ====== ========================================================================================== ====== ========== ========= -Slurm partition nodes processors per node memory local disk network -=============== ====== ========================================================================================== ====== ========== ========= -**zen2** 152 2x 32-core AMD `Epyc 7452 `_ \@2.35 GHz 256 GB 240 GB SSD HDR100-IB -zen3 24 2x 32-core AMD `Epyc 7543 `_ \@2.80 GHz 256 GB 500 GB SSD HDR100-IB -zen3_512 16 2x 32-core AMD `Epyc 7543 `_ \@2.80 GHz 512 GB 500 GB SSD HDR100-IB -=============== ====== ========================================================================================== ====== ========== ========= +=============== ====== ====================================== ====== ========== ========= +Slurm partition nodes processors |nbsp| per |nbsp| node memory local disk network +=============== ====== ====================================== ====== ========== ========= +**zen2** 152 2x 32-core `AMD Epyc 7452`_ \@2.35 GHz 256 GB 240 GB SSD HDR100-IB +zen3 24 2x 32-core `AMD Epyc 7543`_ \@2.80 GHz 256 GB 500 GB SSD HDR100-IB +zen3_512 16 2x 32-core `AMD Epyc 7543`_ \@2.80 GHz 512 GB 500 GB SSD HDR100-IB +=============== ====== ====================================== ====== ========== ========= GPU compute nodes ================= The maximum execution wall time for GPU jobs is **1 day** (24 hours). 
-=============== ====== ====================================================================================================== ========== ========================================================================================== ====== ========== ========= -Slurm partition nodes GPUs per node GPU memory processors per node memory local disk network -=============== ====== ====================================================================================================== ========== ========================================================================================== ====== ========== ========= -ampere_gpu 1 4x NVIDIA Tesla `A100 (Ampere) `_ 40 GB SXM4 2x 32-core AMD `Epyc 7452 `_ \@2.35 GHz 256 GB 480 GB SSD HDR100-IB -arcturus_gpu 2 2x AMD Instinct `MI100 (Arcturus) `_ 32 GB HBM2 2x 32-core AMD `Epyc 7452 `_ \@2.35 GHz 256 GB 480 GB SSD HDR100-IB -=============== ====== ====================================================================================================== ========== ========================================================================================== ====== ========== ========= +=============== ====== =================================== ========== ====================================== ====== ================= ========= +Slurm partition nodes GPUs |nbsp| per |nbsp| node GPU memory processors |nbsp| per |nbsp| node memory local |nbsp| disk network +=============== ====== =================================== ========== ====================================== ====== ================= ========= +ampere_gpu 1 4x `NVIDIA A100`_ (Ampere) 40 GB SXM4 2x 32-core `AMD Epyc 7452`_ \@2.35 GHz 256 GB 480 GB SSD HDR100-IB +arcturus_gpu 2 2x `AMD Instinct MI100`_ (Arcturus) 32 GB HBM2 2x 32-core `AMD Epyc 7452`_ \@2.35 GHz 256 GB 480 GB SSD HDR100-IB +=============== ====== =================================== ========== ====================================== ====== ================= ========= .. seealso:: See :ref:`GPU computing UAntwerp` for more information on using the GPU nodes. @@ -75,7 +75,7 @@ From inside the VSC network (e.g., when connecting from another VSC cluster), us - 2 login nodes - - 2x 16-core AMD `Epyc 7282 `_ CPUs\@2.8 GHz (zen2) + - 2x 16-core `AMD EPYC 7282`_ CPUs\@2.8 GHz (zen2) - 256 GB RAM - 2x 480 GB HDD local disk (raid 1) @@ -140,15 +140,15 @@ History ******* The Vaughan cluster was installed in the summer of 2020. It is a NEC system consisting of -152 compute nodes with dual 32-core AMD `Epyc 7452 `_ -Rome generation CPUs with 256 GB RAM, connected through an HDR100 InfiniBand network. -It also has 1 node with four NVIDIA (Tesla) Ampere A100 GPU compute cards and -2 nodes equipped with two AMD Instinct (Arcturus) MI100 GPU compute cards. +152 compute nodes with dual 32-core `AMD EPYC 7452`_ Rome generation CPUs with +256 GB RAM, connected through an HDR100 InfiniBand network. +It also has 1 node with four `NVIDIA A100`_ (Ampere) GPU compute cards and +2 nodes equipped with two `AMD Instinct MI100`_ (Arcturus) GPU compute cards. In the summer of 2023, the Vaughan cluster was extended with -40 compute nodes with dual 32-core AMD `Epyc 7543 `_ -Milan generation CPUs, 24 nodes with 256 GB RAM and 16 nodes 512 GB RAM. -All Milan nodes are connected through an HDR200 InfiniBand network. +40 compute nodes with dual 32-core `AMD EPYC 7543`_ Milan generation CPUs, 24 +nodes with 256 GB RAM and 16 nodes 512 GB RAM. All Milan nodes are connected +through an HDR200 InfiniBand network. 
Origin of the name ================== diff --git a/source/antwerp/uantwerp_slurm_specifics.rst b/source/antwerp/uantwerp_slurm_specifics.rst index 1b3af3f98..d04e282a1 100644 --- a/source/antwerp/uantwerp_slurm_specifics.rst +++ b/source/antwerp/uantwerp_slurm_specifics.rst @@ -79,15 +79,15 @@ access to a single, dedicated GPU at the same time. In total, three GPU partitions are available: -+----------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------+ -| Cluster | Partition | Available nodes | -+==================================+==============+===============================================================================================================================+ -| :ref:`Vaughan` | ampere_gpu | 2 nodes with 4 NVIDIA Tesla `A100 (Ampere) `_ 40 GB SXM4 | -+ +--------------+-------------------------------------------------------------------------------------------------------------------------------+ -| | arcturus_gpu | 2 nodes with 2 AMD Instinct `MI100 (Arcturus) `_ 32 GB HBM2| -+----------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------+ -| :ref:`Leibniz` | pascal_gpu | 2 nodes with 2 NVIDIA Tesla `P100 (Pascal) `_ 16 GB HBM2 | -+----------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------+ ++----------------------------------+--------------+-------------------------------------------------------------+ +| Cluster | Partition | Available nodes | ++==================================+==============+=============================================================+ +| :ref:`Vaughan` | ampere_gpu | 2 nodes with 4 `NVIDIA A100`_ (Ampere) 40 GB SXM4 | ++ +--------------+-------------------------------------------------------------+ +| | arcturus_gpu | 2 nodes with 2 `AMD Instinct MI100`_ (Arcturus) 32 GB HBM2 | ++----------------------------------+--------------+-------------------------------------------------------------+ +| :ref:`Leibniz` | pascal_gpu | 2 nodes with 2 `NVIDIA Tesla P100`_ (Pascal) 16 GB HBM2 | ++----------------------------------+--------------+-------------------------------------------------------------+ To submit a job on a GPU compute node belonging to a certain partition and get a single GPU, use the ``sbatch`` command diff --git a/source/brussels/tier2_hardware.rst b/source/brussels/tier2_hardware.rst index 24f63716a..29b78cb68 100644 --- a/source/brussels/tier2_hardware.rst +++ b/source/brussels/tier2_hardware.rst @@ -1,9 +1,11 @@ -VUB Tier-2 Infrastructure -========================= +################### +VUB Tier-2 Clusters +################### .. toctree:: :maxdepth: 2 - tier2_hardware/hydra_hardware + tier2_hardware/hydra + tier2_hardware/anansi tier2_hardware/vub_storage vub_docs diff --git a/source/brussels/tier2_hardware/anansi.rst b/source/brussels/tier2_hardware/anansi.rst new file mode 100644 index 000000000..4faf29402 --- /dev/null +++ b/source/brussels/tier2_hardware/anansi.rst @@ -0,0 +1,32 @@ +.. _Anansi cluster: + +Anansi Cluster +============== + +The VUB Anansi cluster is a small system designed for interactive use and +test/debug jobs. 
It sits next to :ref:`Hydra cluster` and both clusters share the +same network and :ref:`storage system `, which simplifies testing +jobs that will later run on Hydra. + +The distinguishing feature of Anansi is that its computational resources are allocated +following a non-exclusive policy. This means that resources such as CPU cores +and GPUs allocated to jobs in Anansi might be shared with other jobs. The only +resource that is kept exclusive is system memory. + +Shared resources combined with a maximum walltime of 12 hours maximize the +availability of this cluster for quick interactive use and test/debug tasks, +avoiding wait times in the queue. + +Technical characteristics of Anansi: + +=============== ===== ============================== ========== ============================================== ========== ================= ======= +Slurm partition nodes GPUs |nbsp| per |nbsp| node GPU memory processors |nbsp| per |nbsp| node CPU memory local |nbsp| disk network +=============== ===== ============================== ========== ============================================== ========== ================= ======= +| pascal_gpu 1 | 4x `NVIDIA GeForce 1080Ti`_ 11 GB 2x 16-core `Intel Xeon E5-2683v4`_ (Broadwell) 512 GB 250 GB HDD 10 Gbps +=============== ===== ============================== ========== ============================================== ========== ================= ======= + +Login nodes +----------- + +Anansi uses the same :ref:`login nodes of Hydra `. + diff --git a/source/brussels/tier2_hardware/hydra.rst b/source/brussels/tier2_hardware/hydra.rst new file mode 100644 index 000000000..adbe5a711 --- /dev/null +++ b/source/brussels/tier2_hardware/hydra.rst @@ -0,0 +1,51 @@ +.. _Hydra cluster: + +Hydra Cluster +============= + +The VUB Hydra cluster is a heterogeneous Tier-2 cluster with a mixture of nodes +with varied hardware. The majority of nodes are non-GPU nodes for generic +multi-purpose compute; they are distributed in partitions depending on their CPU +microarchitectures and network interconnects. The cluster also contains a number +of nodes with NVIDIA GPUs, which are also distributed in partitions depending on +their GPU generation.
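A quick way to inspect how these partitions are configured on the running system is a generic Slurm query, shown here purely as an illustration (the tables below remain the authoritative overview)::

   $ sinfo --format="%P %D %c %m %G"   # partition, node count, CPUs, memory and GRES (GPUs) per node type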
+ +CPU-only nodes +-------------- + +=============== ====== ============================================== ====== ========== ======= +Slurm partition nodes processors |nbsp| per |nbsp| node memory local disk network +=============== ====== ============================================== ====== ========== ======= +broadwell_himem 1 4x 10-core `Intel Xeon E7-8891v4`_ (Broadwell) 1.5 TB 4 TB HDD 10 Gbps +skylake 22 2x 20-core `Intel Xeon Gold 6148`_ (Skylake) 192 GB 1 TB HDD 10 Gbps +skylake_mpi 32 2x 20-core `Intel Xeon Gold 6148`_ (Skylake) 192 GB 1 TB HDD EDR-IB +skylake_mpi 16 2x 14-core `Intel Xeon Gold 6132`_ (Skylake) 192 GB 450 GB HDD EDR-IB +zen4 20 2x 32-core `AMD EPYC 9384X`_ (Genoa-X) 384 GB 450 GB SSD 25 Gbps +=============== ====== ============================================== ====== ========== ======= + +GPU nodes +--------- + +=============== =============== ===== ================================== ========== ============================================== ========== ================= ======= +Slurm partition features nodes GPUs |nbsp| per |nbsp| node GPU memory processors |nbsp| per |nbsp| node CPU memory local |nbsp| disk network +=============== =============== ===== ================================== ========== ============================================== ========== ================= ======= +| pascal_gpu 4 | 2x `NVIDIA Tesla P100`_ (Pascal) 16 GB 2x 12-core `Intel Xeon E5-2650v4`_ (Broadwell) 256 GB 2 TB HDD 10 Gbps +| ampere_gpu 6 | 2x `NVIDIA A100`_ (Ampere) 40 GB 2x 16-core `AMD EPYC 7282`_ (Zen2 - Rome) 256 GB 2 TB SSD EDR-IB +| ampere_gpu | big_local_ssd 4 | 2x `NVIDIA A100`_ (Ampere) 40 GB 2x 16-core `AMD EPYC 7282`_ (Zen2 - Rome) 256 GB 5.9 TB SSD EDR-IB +=============== =============== ===== ================================== ========== ============================================== ========== ================= ======= + +.. _Hydra login nodes: + +Login nodes +----------- + +* nodes: 2 (fair share between all users) + +* processors per node: 2x 12-core Intel Xeon Gold 6126 (Skylake) + +* memory: 96GB (maximum per user: 12GB) + +* 10GbE network connection + +* Infiniband EDR connection to the storage + diff --git a/source/brussels/tier2_hardware/hydra_hardware.rst b/source/brussels/tier2_hardware/hydra_hardware.rst deleted file mode 100644 index 284a55ff3..000000000 --- a/source/brussels/tier2_hardware/hydra_hardware.rst +++ /dev/null @@ -1,49 +0,0 @@ -.. _Hydra hardware: - -Hydra hardware -=============== - -The VUB Hydra cluster is an heterogeneous cluster with a mixture of nodes with -varied hardware. The majority of nodes are non-GPU nodes for generic -multi-purpose compute, they are distributed in partitions depending on their CPU -microarchitectures and network interconnects. The cluster also contains a number -of nodes with NVIDIA GPUs, which are also distributed in partitions depending on -their GPU generation. 
- -CPU-only nodes --------------- - -=============== ====== ========================================== ====== ========== ======= -Slurm partition nodes processors per node memory local disk network -=============== ====== ========================================== ====== ========== ======= -broadwell_himem 1 4x 10-core INTEL E7-8891v4 (Broadwell) 1.5 TB 4 TB HDD 10 Gbps -skylake 22 2x 20-core INTEL Xeon Gold 6148 (Skylake) 192 GB 1 TB HDD 10 Gbps -skylake_mpi 32 2x 20-core INTEL Xeon Gold 6148 (Skylake) 192 GB 1 TB HDD EDR-IB -skylake_mpi 16 2x 14-core INTEL Xeon Gold 6132 (Skylake) 192 GB 450 GB HDD EDR-IB -zen4 20 2x 32-core AMD EPYC 9384X (Genoa-X) 384 GB 450 GB SSD 25 Gbps -=============== ====== ========================================== ====== ========== ======= - -GPU nodes ---------- - -=============== =============== ===== =============================== ========== ======================================= ========== ========== ======= -Slurm partition features nodes GPUs per node GPU memory processors per node CPU memory local disk network -=============== =============== ===== =============================== ========== ======================================= ========== ========== ======= -| pascal_gpu 4 2x Nvidia Tesla P100 (Pascal) 16 GB 2x 12-core INTEL E5-2650v4 (Broadwell) 256 GB 2 TB HDD 10 Gbps -| ampere_gpu 6 2x Nvidia A100 (Ampere) 40 GB 2x 16-core AMD EPYC 7282 (Zen2 - Rome) 256 GB 2 TB SSD EDR-IB -| ampere_gpu | big_local_ssd 4 2x Nvidia A100 (Ampere) 40 GB 2x 16-core AMD EPYC 7282 (Zen2 - Rome) 256 GB 5.9 TB SSD EDR-IB -=============== =============== ===== =============================== ========== ======================================= ========== ========== ======= - -Login nodes ------------ - -* nodes: 2 (fair share between all users) - -* processors per node: 2x 12-core INTEL Xeon Gold 6126 (Skylake) - -* memory: 96GB (maximum per user: 12GB) - -* 10GbE network connection - -* Infiniband EDR connection to the storage - diff --git a/source/cloud/access.md b/source/cloud/access.md index d23b132f0..b5ff83acc 100644 --- a/source/cloud/access.md +++ b/source/cloud/access.md @@ -9,7 +9,7 @@ need a separate login or password. In order to use the cloud services, - your account must be a member of one or more OpenStack projects. New users can obtain an account by following [the procedure described -here](/access/vsc_account.rst). +here](/accounts/vsc_account.rst). Once you have an account, contact us if you want to start a new OpenStack project, or join an existing one. diff --git a/source/cloud/configure_instances.md b/source/cloud/configure_instances.md index 7a1f359f9..f0e012819 100644 --- a/source/cloud/configure_instances.md +++ b/source/cloud/configure_instances.md @@ -1,4 +1,4 @@ -# Configure access and security for instance +# Instance access and security The security and accessibility of your cloud resources is governed by a few different aspects, which we discuss more detail in the following @@ -188,7 +188,7 @@ image that the instance is based on must contain the **cloud-init** package, or have in place another mechanism in place that will interact with the OpenStack metadata server to install the appropriate key. For general instructions on SSH keys, we refer to the section -[Security Keys](/access/generating_keys.rst) of this documentation. +[Security Keys](/accounts/generating_keys.rst) of this documentation. If you have generated a key pair with an external tool, you can import it into OpenStack. 
The key pair can be used for multiple instances that @@ -208,39 +208,40 @@ project, each user needs to import it in the OpenStack project. ### Add a key pair -1. Open the Compute tab. +1. Open the Compute tab. -2. Click the Key Pairs tab, which shows the key pairs that are - available for this project. +2. Click the Key Pairs tab, which shows the key pairs that are + available for this project. -3. Click Create Key Pair. +3. Click Create Key Pair. -4. In the Create Key Pair dialog box, enter a name for your key pair, - and click Create Key Pair. +4. In the Create Key Pair dialog box, enter a name for your key pair, + and click Create Key Pair. -5. Respond to the prompt to download the key pair. +5. Respond to the prompt to download the key pair. -6. Save the **\*.pem** file locally. +6. Save the **\*.pem** file locally. + +7. To change its permissions so that only you can read and write to the + file, run the following command: + + ```shell + chmod 0600 yourPrivateKey.pem + ``` -7. To change its permissions so that only you can read and write to the - file, run the following command: -```shell -chmod 0600 yourPrivateKey.pem -``` :::{note} If you are using the OpenStack Dashboard from a Windows computer, use PuTTYgen -to load the **\*.pem** file and convert and save it as **\*.ppk**. +to load the `*.pem` file and convert and save it as `*.ppk`. For more information see our documentation on -[Generating keys with PuTTY](/access/generating_keys_with_putty.rst) and also +[Generating keys with PuTTY](/accounts/generating_keys_putty.rst) and also the [*WinSCP web page for PuTTYgen*](https://winscp.net/eng/docs/ui_puttygen). ::: -* To make the key pair known to SSH, run the **ssh-add** command. - -```shell -ssh-add yourPrivateKey.pem -``` +* To make the key pair known to SSH, run the `ssh-add` command: + ```shell + ssh-add yourPrivateKey.pem + ``` ### Import a key pair diff --git a/source/cloud/gpus.md b/source/cloud/gpus.md index 73ac374c0..00d972fab 100644 --- a/source/cloud/gpus.md +++ b/source/cloud/gpus.md @@ -1,9 +1,9 @@ -# GPUs +# Instances with GPUs VSC Tier-1 Cloud users can also deploy VMs with different kind of GPUs. -A full GPU card is connected directly to the VM via -[PCI passthrough](https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Guest_Domains-Device_Config-PCI_Pass.html) -and it is not shared between VMs. +A full GPU card is connected directly to the VM via PCI passthrough, which +means that the VM has direct access to its GPU devices and they are not shared +between VMs. See section [Instance types and flavors](flavors.md#instance-types-and-flavors) for more information about the different GPUs available (`GPUv*` flavors). diff --git a/source/cloud/launch_instance.md b/source/cloud/launch_instance.md index 3f3f2b19a..eb41844d4 100644 --- a/source/cloud/launch_instance.md +++ b/source/cloud/launch_instance.md @@ -217,7 +217,7 @@ If you want to access ports outside the public range, you'll need to connect to the UGent login node _login.hpc.ugent.be_ first, and hop to your instance from there. To make this work without storing the required private key for the instance in your VSC storage space, you need to set -up an [SSH agent with key forwarding locally](/access/using_ssh_agent.rst), +up an [SSH agent with key forwarding locally](/accounts/ssh_agent.rst), i.e. on the machine where you store the private key of an authorized keypair for the instance. ::: @@ -327,7 +327,7 @@ fingerprints. 
The following examples show output and commands for OpenSSH, the most common client on Linux and macOS. If you are working from a windows system using using PuTTY, see our documentation on -[Generating keys with PuTTY](/access/generating_keys_with_putty.rst). +[Generating keys with PuTTY](/accounts/generating_keys_putty.rst). #### Connecting for the first time diff --git a/source/compute.rst b/source/compute.rst deleted file mode 100644 index 8f298cbf9..000000000 --- a/source/compute.rst +++ /dev/null @@ -1,12 +0,0 @@ -.. _compute: - -##################### -:fas:`rocket` Compute -##################### - -.. toctree:: - :maxdepth: 3 - - hardware - jobs/index - software/index diff --git a/source/compute/index.rst b/source/compute/index.rst new file mode 100644 index 000000000..bb06cb474 --- /dev/null +++ b/source/compute/index.rst @@ -0,0 +1,61 @@ +.. _compute: + +##################### +:fas:`rocket` Compute +##################### + +.. grid:: 2 + :gutter: 4 + + .. grid-item-card:: + :class-item: service-card-toc service-card-tier1 + :columns: 12 12 5 5 + + .. toctree:: + :maxdepth: 2 + + tier1 + + .. grid-item-card:: + :class-item: service-card-toc service-card-tier2 + :columns: 12 12 7 7 + + .. toctree:: + :maxdepth: 2 + + tier2 + + .. grid-item-card:: + :class-item: service-card-toc service-card-term + :columns: 12 12 7 7 + + .. toctree:: + :maxdepth: 2 + + terminal/index + + .. grid-item-card:: + :class-item: service-card-toc service-card-portal + :columns: 12 12 5 5 + + .. toctree:: + :maxdepth: 2 + + portal/index + + .. grid-item-card:: + :class-item: service-card-toc service-card-soft + + .. toctree:: + :maxdepth: 2 + + software/index + + .. grid-item-card:: + :class-item: service-card-toc service-card-jobs + + .. toctree:: + :maxdepth: 2 + + jobs/index + diff --git a/source/compute/infrastructure.rst b/source/compute/infrastructure.rst new file mode 100644 index 000000000..b33743dfd --- /dev/null +++ b/source/compute/infrastructure.rst @@ -0,0 +1,13 @@ +.. document not part of any TOC tree, only accessible directly + +:orphan: + +###################### +Compute Infrastructure +###################### + +.. toctree:: + :maxdepth: 3 + + tier1 + tier2 diff --git a/source/jobs/checkpointing_framework.rst b/source/compute/jobs/checkpointing_framework.rst similarity index 100% rename from source/jobs/checkpointing_framework.rst rename to source/compute/jobs/checkpointing_framework.rst diff --git a/source/jobs/clusters_slurm.rst b/source/compute/jobs/clusters_slurm.rst similarity index 69% rename from source/jobs/clusters_slurm.rst rename to source/compute/jobs/clusters_slurm.rst index 6fb3116b7..d072e3270 100644 --- a/source/jobs/clusters_slurm.rst +++ b/source/compute/jobs/clusters_slurm.rst @@ -1,21 +1,22 @@ .. grid:: 3 :gutter: 4 - .. grid-item-card:: UAntwerp (AUHA) - :columns: 12 4 4 4 - - * Tier-2 :ref:`Vaughan ` - * Tier-2 :ref:`Leibniz ` - * Tier-2 :ref:`Breniac ` - - .. grid-item-card:: KU Leuven/UHasselt + .. grid-item-card:: |KULUH| :columns: 12 4 4 4 * Tier-2 :ref:`Genius ` * Tier-2 :ref:`Superdome ` * Tier-2 :ref:`wICE ` - .. grid-item-card:: VUB + .. grid-item-card:: |UA| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Vaughan ` + * Tier-2 :ref:`Leibniz ` + * Tier-2 :ref:`Breniac ` + + .. 
grid-item-card:: |VUB| :columns: 12 4 4 4 - * Tier-2 :ref:`Hydra ` + * Tier-2 :ref:`Hydra ` + * Tier-2 :ref:`Anansi ` diff --git a/source/compute/jobs/clusters_torque.rst b/source/compute/jobs/clusters_torque.rst new file mode 100644 index 000000000..15ab4e465 --- /dev/null +++ b/source/compute/jobs/clusters_torque.rst @@ -0,0 +1,9 @@ +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: |UG| + :columns: 12 4 4 4 + + * Tier-1 :ref:`Hortense ` + * Tier-2 :ref:`All clusters ` + diff --git a/source/jobs/credits.rst b/source/compute/jobs/credits.rst similarity index 57% rename from source/jobs/credits.rst rename to source/compute/jobs/credits.rst index ce37c61ac..5679fcc75 100644 --- a/source/jobs/credits.rst +++ b/source/compute/jobs/credits.rst @@ -6,5 +6,5 @@ Job Credits .. toctree:: :maxdepth: 3 - ../leuven/credits - ../leuven/slurm_accounting + /leuven/credits + /leuven/slurm_accounting diff --git a/source/jobs/gpus.rst b/source/compute/jobs/gpus.rst similarity index 77% rename from source/jobs/gpus.rst rename to source/compute/jobs/gpus.rst index 77e695f93..6d8e114fc 100644 --- a/source/jobs/gpus.rst +++ b/source/compute/jobs/gpus.rst @@ -6,15 +6,24 @@ VSC clusters that provide them. For cluster-specific usage instructions, please consult the respective documentation sources: .. tab-set:: + :sync-group: vsc-sites .. tab-item:: KU Leuven/UHasselt + :sync: kuluh Genius cluster: :ref:`Submit to a GPU node ` .. tab-item:: UAntwerp (AUHA) + :sync: ua Leibniz and Vaughan clusters: :ref:`GPU computing UAntwerp` + .. tab-item:: UGent + :sync: ug + + Tier-1 Hortense: :ref:`tier1_request_gpus` + .. tab-item:: VUB + :sync: vub Hydra cluster: `How to use GPUs in Hydra `_ diff --git a/source/jobs/index.rst b/source/compute/jobs/index.rst similarity index 72% rename from source/jobs/index.rst rename to source/compute/jobs/index.rst index 85f5e56e4..bf03578f2 100644 --- a/source/jobs/index.rst +++ b/source/compute/jobs/index.rst @@ -1,6 +1,6 @@ -################ -:fas:`gear` Jobs -################ +######################### +:fas:`gear` Job Scheduler +######################### An HPC cluster is a multi-user system. This implies that your computations run on a part of the cluster that will be temporarily reserved for you by @@ -12,8 +12,8 @@ the scheduler. nodes are shared among all active users, so putting a heavy load on those nodes will annoy other users. -Although you can :ref:`work interactively ` on an HPC system, -most computations are performed in batch mode. The workflow in batch mode is straightforward: +Although you can work interactively on an HPC system, most computations are +performed in batch mode. The workflow in batch mode is straightforward: #. Create a job script #. Submit it as a job to the scheduler @@ -44,14 +44,3 @@ and monitoring of your jobs in the HPC. running_jobs_torque --------- - -Linux System -============ - -.. 
toctree:: - :maxdepth: 2 - - basic_linux_usage - how_to_get_started_with_shell_scripts - diff --git a/source/jobs/job_advanced.rst b/source/compute/jobs/job_advanced.rst similarity index 100% rename from source/jobs/job_advanced.rst rename to source/compute/jobs/job_advanced.rst diff --git a/source/jobs/job_management.rst b/source/compute/jobs/job_management.rst similarity index 100% rename from source/jobs/job_management.rst rename to source/compute/jobs/job_management.rst diff --git a/source/jobs/job_submission.rst b/source/compute/jobs/job_submission.rst similarity index 100% rename from source/jobs/job_submission.rst rename to source/compute/jobs/job_submission.rst diff --git a/source/jobs/job_types.rst b/source/compute/jobs/job_types.rst similarity index 94% rename from source/jobs/job_types.rst rename to source/compute/jobs/job_types.rst index 2e34e43c0..b0e4e10a1 100644 --- a/source/jobs/job_types.rst +++ b/source/compute/jobs/job_types.rst @@ -315,25 +315,26 @@ denoting the command prompt of the compute node): The ``exit`` command at the end ends the shell and hence the interactive job. -Running X11 programs -"""""""""""""""""""" +Running graphical programs +"""""""""""""""""""""""""" -You can also use ``srun`` to start an interactive session with X11 support. However, before -starting a session you should ensure that you can start X11 programs from the session from -you will be starting ``srun``. Check the corresponding guide for your operating system: +You can also use ``srun`` to start an interactive session with support for +graphical applications. This requires a terminal connection with support for +the `X Window System`_ protocol (also known as X11) to display graphics +remotely on your screen. -- :ref:`Windows ` -- :ref:`Linux ` -- :ref:`macOS ` +There are solutions to enable X11 for all operating systems. Please check the +corresponding guide for your operating system in :ref:`terminal x11`. -X11 programs rarely use distributed memory parallelism, so in most case you will be requesting -just a single task. To add support for X11, use the ``--x11`` option before ``--pty``: +X11 programs rarely use distributed memory parallelism, so in most case you +will be requesting just a single task. To add support for X11, use the +``--x11`` option before ``--pty``: .. code:: bash - login$ srun -n 1 -c 64 -t 1:00:00 --x11 --pty bash - r0c00cn0$ xclock - r0c00cn0$ exit + [login_node] $ srun -n 1 -c 64 -t 1:00:00 --x11 --pty bash + [compute_node] $ xclock + [compute_node] $ exit -would allocate 64 cores, and the second line starts a simple X11 program, ``xclock``, -to test if X11 programs work. +would allocate 64 cores, and the second line starts a simple X11 program, +``xclock``, to test if X11 programs work. diff --git a/source/jobs/monitoring_memory_and_cpu_usage_of_programs.rst b/source/compute/jobs/monitoring_memory_and_cpu_usage_of_programs.rst similarity index 100% rename from source/jobs/monitoring_memory_and_cpu_usage_of_programs.rst rename to source/compute/jobs/monitoring_memory_and_cpu_usage_of_programs.rst diff --git a/source/jobs/running_jobs.rst b/source/compute/jobs/running_jobs.rst similarity index 99% rename from source/jobs/running_jobs.rst rename to source/compute/jobs/running_jobs.rst index 34cb0a178..f489a672a 100644 --- a/source/jobs/running_jobs.rst +++ b/source/compute/jobs/running_jobs.rst @@ -81,7 +81,7 @@ spin-off company of the Slurm development. command is very easy. .. 
toctree:: - :maxdepth: 3 + :maxdepth: 2 job_submission job_management @@ -89,7 +89,4 @@ spin-off company of the Slurm development. job_advanced credits slurm_pbs_comparison - - - diff --git a/source/jobs/running_jobs_torque.rst b/source/compute/jobs/running_jobs_torque.rst similarity index 96% rename from source/jobs/running_jobs_torque.rst rename to source/compute/jobs/running_jobs_torque.rst index 3b6e95ab9..a8108b69b 100644 --- a/source/jobs/running_jobs_torque.rst +++ b/source/compute/jobs/running_jobs_torque.rst @@ -172,11 +172,8 @@ Line 11 is the actual output of your job script. Troubleshooting --------------- -.. toctree:: - :maxdepth: 2 - - Why doesn't my job start immediately? - Why does my job fail after a successful start? +* :ref:`why_not_job_start` +* :ref:`job failure` Advanced topics diff --git a/source/jobs/slurm_pbs_comparison.rst b/source/compute/jobs/slurm_pbs_comparison.rst similarity index 100% rename from source/jobs/slurm_pbs_comparison.rst rename to source/compute/jobs/slurm_pbs_comparison.rst diff --git a/source/jobs/specifying_output_files_and_notifications.rst b/source/compute/jobs/specifying_output_files_and_notifications.rst similarity index 100% rename from source/jobs/specifying_output_files_and_notifications.rst rename to source/compute/jobs/specifying_output_files_and_notifications.rst diff --git a/source/jobs/specifying_resources.rst b/source/compute/jobs/specifying_resources.rst similarity index 94% rename from source/jobs/specifying_resources.rst rename to source/compute/jobs/specifying_resources.rst index a7dcc03a4..c90eb895d 100644 --- a/source/jobs/specifying_resources.rst +++ b/source/compute/jobs/specifying_resources.rst @@ -59,11 +59,10 @@ has several disadvantages. a few users, many of our clusters have a stricter limit on the number of long-running jobs than on the number of jobs with a shorter walltime. -The maximum allowed walltime for a job is cluster-dependent. Since -these policies can change over time (as do other properties from -clusters), we bundle these on one page per cluster in the -":ref:`hardware`" section. - +The maximum allowed walltime for a job can be different from cluster to +cluster, as it depends on their technical characteristics and policies. +You can find the maximum walltime of each cluster on their pages in the +:ref:`tier1 hardware` or :ref:`tier2 hardware` sections. .. _nodes and ppn: @@ -198,9 +197,9 @@ For example, to run on a node with 36 cores and 192 GB RAM, -l nodes=1:ppn=36 -l pmem=10gb -Check the :ref:`hardware specification ` of the cluster/nodes -you want to run on for the available memory and core count of the nodes. - +Check the hardware specification of your :ref:`Tier-1` or +:ref:`Tier-2` VSC cluster to find the available memory and core +count of the compute nodes you want to use. .. warning:: @@ -263,8 +262,8 @@ want to use the cascadelake nodes, specify:: Since cascadelake nodes often have 24 cores, you will likely get 2 physical nodes. -The exact list of properties depends on the cluster and is given in the -page for your cluster in the :ref:`hardware specification pages `. +The exact list of properties depends on each cluster and is available on the +pages covering :ref:`tier1 hardware` and :ref:`tier2 hardware`. .. 
note:: diff --git a/source/jobs/starting_programs_in_a_job.rst b/source/compute/jobs/starting_programs_in_a_job.rst similarity index 100% rename from source/jobs/starting_programs_in_a_job.rst rename to source/compute/jobs/starting_programs_in_a_job.rst diff --git a/source/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst b/source/compute/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst similarity index 97% rename from source/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst rename to source/compute/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst index 4f7d6d01b..a6299b490 100644 --- a/source/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst +++ b/source/compute/jobs/submitting_and_managing_jobs_with_torque_and_moab.rst @@ -3,7 +3,6 @@ Submitting and managing jobs with Torque and Moab ================================================= - .. _qsub: Submitting your job: qsub @@ -51,8 +50,10 @@ are some facilities for interactive work: - The login nodes can be used for light interactive work. They can typically run the same software as the compute nodes. Some sites also have special interactive nodes for special tasks, e.g., scientific - data visualization. See the ":ref:`hardware`" section - where each site documents what is available. + data visualization. See the :ref:`tier1 hardware` and :ref:`tier2 hardware` + sections for information on the available login/interactive nodes on each + VSC cluster. + Examples of work that can be done on the login nodes : - running a GUI program that generates the input files for your diff --git a/source/jobs/what_if_jobs_fail_after_starting_successfully.rst b/source/compute/jobs/what_if_jobs_fail_after_starting_successfully.rst similarity index 98% rename from source/jobs/what_if_jobs_fail_after_starting_successfully.rst rename to source/compute/jobs/what_if_jobs_fail_after_starting_successfully.rst index bbe22a695..0912db6e7 100644 --- a/source/jobs/what_if_jobs_fail_after_starting_successfully.rst +++ b/source/compute/jobs/what_if_jobs_fail_after_starting_successfully.rst @@ -52,7 +52,7 @@ However, your home directory may unexpectedly fill up in two ways: .. _large output: Large amounts of output or errors -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +--------------------------------- To deal with the first issue, simply redirect the standard output of the command to a file that is in your data or scratch directory, or, if you @@ -82,7 +82,7 @@ If you don't care for the standard output, simply write:: .. _core dump: Core dump -~~~~~~~~~ +--------- When a program crashes, a core file is generated. This can be used to try and analyze the cause of the crash. However, if you don't need cores diff --git a/source/jobs/why_doesn_t_my_job_start.rst b/source/compute/jobs/why_doesn_t_my_job_start.rst similarity index 98% rename from source/jobs/why_doesn_t_my_job_start.rst rename to source/compute/jobs/why_doesn_t_my_job_start.rst index ca3228c1b..77109ffc4 100644 --- a/source/jobs/why_doesn_t_my_job_start.rst +++ b/source/compute/jobs/why_doesn_t_my_job_start.rst @@ -1,3 +1,5 @@ +.. _why_not_job_start: + Why doesn't my job start? 
========================= diff --git a/source/jobs/worker_framework.rst b/source/compute/jobs/worker_framework.rst similarity index 100% rename from source/jobs/worker_framework.rst rename to source/compute/jobs/worker_framework.rst diff --git a/source/jobs/worker_or_atools.rst b/source/compute/jobs/worker_or_atools.rst similarity index 100% rename from source/jobs/worker_or_atools.rst rename to source/compute/jobs/worker_or_atools.rst diff --git a/source/jobs/workflows_using_job_dependencies.rst b/source/compute/jobs/workflows_using_job_dependencies.rst similarity index 100% rename from source/jobs/workflows_using_job_dependencies.rst rename to source/compute/jobs/workflows_using_job_dependencies.rst diff --git a/source/jobs/workflows_using_job_dependencies/workflow_using_job_dependencies.png b/source/compute/jobs/workflows_using_job_dependencies/workflow_using_job_dependencies.png similarity index 100% rename from source/jobs/workflows_using_job_dependencies/workflow_using_job_dependencies.png rename to source/compute/jobs/workflows_using_job_dependencies/workflow_using_job_dependencies.png diff --git a/source/compute/portal/index.rst b/source/compute/portal/index.rst new file mode 100644 index 000000000..a0cbfeb3e --- /dev/null +++ b/source/compute/portal/index.rst @@ -0,0 +1,10 @@ +.. _compute portal: + +##################### +:fas:`eye` Web Portal +##################### + +.. toctree:: + :maxdepth: 2 + + ondemand diff --git a/source/leuven/services/openondemand.rst b/source/compute/portal/ondemand.rst similarity index 93% rename from source/leuven/services/openondemand.rst rename to source/compute/portal/ondemand.rst index e476308ff..ba647d017 100644 --- a/source/leuven/services/openondemand.rst +++ b/source/compute/portal/ondemand.rst @@ -1,19 +1,25 @@ -.. _ood_t2_leuven: +.. _ood: -Open OnDemand on the KULeuven Tier2 cluster -=========================================== +############# +Open OnDemand +############# -.. sectnum:: - :depth: 3 - -About -===== - -Open OnDemand provides a user interface to HPC clusters from within a web browser. +`Open OnDemand`_ provides a user interface to HPC clusters from within a web browser. This tool supports a range of different apps and features that not only allow the user to easily submit jobs from within the browser, but also provide different coding GUIs, tools for plotting and more. -Open OnDemand is available for the Tier-2 Genius and wICE clusters. + +Open OnDemand is available on the Tier-2 Genius and wICE clusters of KU Leuven. + +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: |KULUH| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Genius ` + * Tier-2 :ref:`Superdome ` + * Tier-2 :ref:`wICE ` You can use this interface by navigating to the `KU Leuven Open OnDemand page`_. You can log in using your KU Leuven or VSC credentials. @@ -21,7 +27,7 @@ You can log in using your KU Leuven or VSC credentials. General features ================ -The KU Leuven Open OnDemand page provides a range of functions: +The `KU Leuven Open OnDemand page`_ provides a range of functions: - Browsing, creating, transferring, viewing and/or editing files - Submitting and monitoring jobs, creating job templates @@ -273,8 +279,7 @@ The same applies for other choices of partitions on Genius or wICE clusters. JupyterLab ----------- -With this app you can write and run -`Jupyter `_ notebooks containing +With this app you can write and run `Jupyter`_ notebooks containing annotated Python, R or Julia code (among other languages). IPython consoles are available as well. 
One of the benefits of JupyterLab is that it supports different types of user-defined environments, as will become clear below. @@ -320,7 +325,7 @@ For R, you may create your customized environment using :ref:`Conda environments To override this and store your kernel specifications in a non-default location, you may drop the following line in your ``${VSC_HOME}/.bashrc``:: - export XDG_DATA_HOME=${VSC_DATA}/.local/share + export XDG_DATA_HOME=${VSC_DATA}/.local/share When the ``${XDG_DATA_HOME}`` variable is set, subsequent kernel installations (for both Python and R) will reside in ``${XDG_DATA_HOME}/jupyter/kernels``. @@ -368,15 +373,15 @@ would typically look like (to be done from a shell, e.g. using 'Login Server She .. code-block :: bash - cd ${VSC_DATA} - # the line below is needed if you use the 'Interactive Shell' app - module use /apps/leuven/${VSC_OS_LOCAL}/${VSC_ARCH_LOCAL}${VSC_ARCH_SUFFIX}/2023a/modules/all - module load Python/3.11.3-GCCcore-12.3.0 - python -m venv - source /bin/activate - pip install ipykernel - # note that unlike for Conda environments the "--env ..." argument is not needed below - python -m ipykernel install --user --name --display-name + cd ${VSC_DATA} + # the line below is needed if you use the 'Interactive Shell' app + module use /apps/leuven/${VSC_OS_LOCAL}/${VSC_ARCH_LOCAL}${VSC_ARCH_SUFFIX}/2023a/modules/all + module load Python/3.11.3-GCCcore-12.3.0 + python -m venv + source /bin/activate + pip install ipykernel + # note that unlike for Conda environments the "--env ..." argument is not needed below + python -m ipykernel install --user --name --display-name On the JupyterLab form, choose a partition to your liking and select the same toolchain as above. Once you connect to your session, your new kernel will be @@ -406,12 +411,12 @@ Conda environments for R For R, you need both the ``jupyter_client`` and the ``irkernel`` Conda packages installed. With the following command you can create the kernel:: - Rscript -e 'IRkernel::installspec(name="", displayname="")' + Rscript -e 'IRkernel::installspec(name="", displayname="")' Once the kernel is created, you will see it in the 'Launcher' menu. You can now start working in your own customized environment. -For more general information, please refer to the `official JupyterLab documentation`_. +For more general information, please refer to the `JupyterLab documentation`_. RStudio Server -------------- @@ -422,14 +427,15 @@ of R module that would be loaded for your session (such as `R/4.2.2-foss-2022b`) Additionally, the `R-bundle-CRAN` and `R-bundle-Bioconductor` modules can be loaded on top of the base R module to provide easy access to hundreds of preinstalled packages. -It is also possible to use locally installed R packages with RStudio, see :ref:`R package management`. +It is also possible to use locally installed R packages with RStudio, see +:ref:`R package management`. RStudio furthermore allows to create RStudio projects to manage your R environments. When doing so, we recommend to select the `renv `_ option to ensure a completely independent R environment. Without `renv`, loading an RStudio project may lead to incomplete R library paths. -For more information on how to use RStudio, check out the `official documentation `__. +For more information on how to use RStudio, check out the `RStudio documentation`_. **Remarks:** @@ -447,7 +453,7 @@ For more information on how to use RStudio, check out the `official documentatio .. 
code-block:: bash - echo "export XDG_DATA_HOME=$VSC_DATA/.local/share" >> ~/.bashrc + echo "export XDG_DATA_HOME=$VSC_DATA/.local/share" >> ~/.bashrc - Additionally, it is advised to change the default behaviour of RStudio to not restore .RData into the workspace on start up and to never Save the workspace to .RData on exit. @@ -458,8 +464,7 @@ Tensorboard ----------- Tensorboard is an interactive app that allows you to visualize and measure different aspects of -your machine learning workflow. -Have a look at the `official guidelines `_ +your machine learning workflow. Have a look at the `TensorBoard documentation`_ for more detailed information. The Tensorboard interactive session requires you to specify a project (or log) directory in @@ -474,7 +479,7 @@ Code Server ----------- This is the browser version of Visual Studio Code. -For more information, check out `VSCode official guidelines `_. +For more information, check out `VSCode documentation`_. As a default, a Python and a Git module are already loaded, which means you can use both Python and git from a terminal window within code-server. @@ -528,9 +533,9 @@ For the time being, there are some issues with using modules together with funct There are some package requirements if you want to use R in code-server. The following command creates a functional environment (of course, add any other packages you need): - .. code-block:: bash +.. code-block:: bash - conda create -n -c conda-forge r-base r-remotes r-languageserver r-httpgd r-jsonlite + conda create -n -c conda-forge r-base r-remotes r-languageserver r-httpgd r-jsonlite Once you've created your environment, go ahead and start a code-server session on Open Ondemand. On the lefthand side, go to the extension menu and search for 'R'. @@ -587,8 +592,4 @@ desktop as a compute job. - Currently, using GPUs in ParaView is not supported yet, and just the CPU-only modules are offered. -.. _KU Leuven Open OnDemand page: https://ondemand.hpc.kuleuven.be/ -.. _official JupyterLab documentation: https://docs.jupyter.org/en/latest/ -.. _RStudio official documentation: https://docs.rstudio.com/ -.. _noVNC: https://novnc.com/ diff --git a/source/software/blas_and_lapack.rst b/source/compute/software/blas_and_lapack.rst similarity index 100% rename from source/software/blas_and_lapack.rst rename to source/compute/software/blas_and_lapack.rst diff --git a/source/compute/software/books_parallel.rst b/source/compute/software/books_parallel.rst new file mode 100644 index 000000000..2218132dd --- /dev/null +++ b/source/compute/software/books_parallel.rst @@ -0,0 +1,133 @@ +.. _books: + +Books about Parallel Computing +============================== + +This is a very incomplete list, permanently under construction, of +books about parallel computing. + +General +------- + +* G. Hager and G. Wellein. + `Introduction to high performance computing for scientists and engineers `_. + Chapman & Hall, 2010. + + This book first introduces the architecture of modern cache-based + microprocessors and discusses their inherent performance limitations, before + describing general optimization strategies for serial code on cache-based + architectures. It next covers shared- and distributed-memory parallel + computer architectures and the most relevant network topologies. After + discussing parallel computing on a theoretical level, the authors show how to + avoid or ameliorate typical performance problems connected with OpenMP. 
They + then present cache-coherent nonuniform memory access (ccNUMA) optimization + techniques, examine distributed-memory parallel programming with message + passing interface (MPI), and explain how to write efficient MPI code. The + final chapter focuses on hybrid programming with MPI and OpenMP. + +* V. Eijkhout. + `Introduction to high performance scientific computing `_. + 2011. + + This is a textbook that teaches the bridging topics between numerical + analysis, parallel computing, code performance, large scale applications. It + can be freely accessed on `archive.org + `_ (though you have to + respect the copyright of course). + +* A. Grama, A. Gupta, G. Kapyris, and V. Kumar. + `Introduction to parallel computing (2nd edition) `_. + Pearson Addison Wesley, 2003. ISBN 978-0-201-64865-2. + + A somewhat older book, but still used a lot as textbook in academic courses + on parallel computing. + +* C. Lin and L. Snyder. + `Principles of parallel programming `_. + Pearson Addison Wesley, 2008. ISBN 978-0-32148790-2. + + This books discusses parallel programming both from a more abstract level and + a more practical level, touching briefly threads programming, OpenMP, MPI and + PGAS-languages (using ZPL). + +* M. McCool, A.D. Robinson, and J. Reinders. + `Structured parallel programming: patterns for efficient computation `_. + Morgan Kaufmann, 2012. ISBN 978-0-12-415993-8. + +Grid computing +-------------- + +* F. Magoules, J. Pan, K.-A. Tan, and A. Kumar. + `Introduction to grid computing `_. + CRC Press, 2019. ISBN 9780367385828. + +MPI +--- + +* A two-volume set in tutorial style: + + * W. Gropp, E. Lusk, and A. Skjellum. + `Using MPI: portable parallel programming with the Message-Passing Interface, third edition `__. + MIT Press, 2014. ISBN 978-0-262-57139-2 (paperback) or 978-0-262-32659-9 (ebook). + + This edition of the book is based on the MPI-3.0 specification. + + * W. Gropp, T. Hoeffler, R. Thakur and E. Lusk. + `Using advanced MPI: modern features of the Message-Passing Interface `_. + MIT Press, 2014. ISBN 978-0-262-52763-7 (paperback) or 978-0-262-32662-9 (ebook). + + These books replace the earlier editions of "Using MPI: Portable + Parallel Programming with the Message-Passing Interface" and the + book "Using MPI-2: Advanced Features of the Message-Passing + Interface". + +* A two-volume set in reference style, but somewhat outdated: + + * M. Snir, S.W. Otto, S. Huss-Lederman, D.W. Walker, and J. Dongarra. + `MPI: the complete reference. Volume 1: the MPI core (2nd Edition) `_. + MIT Press, 1998. ISBN 978-0-262-69215-1. + + * W. Gropp, S. Huss-Lederman, A. Lumsdaine, E. Lusk, B. Nitzberg, W. Saphir, and M. Snir. + `MPI: the complete reference, Volume 2: the MPI-2 extensions `_. + MIT Press, 1998. ISBN 978-0-262-57123-4. + + These two volumes are also available as one set with + `ISBN number 978-0-262-69216-8 `_. + +OpenMP +------ + +* B. Chapman, G. Jost, and R. van der Pas. + `Using OpenMP - portable shared memory parallel Programming `_. + The MIT Press, 2008. ISBN 978-0-262-53302-7. + +* R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, and R. Menon. + `Parallel programming in OpenMP `_. + Academic Press, 2000. ISBN 978-1-55860-671-5. + +GPU computing +------------- + +* M. Scarpino. + `OpenCL in action `_. + Manning Publications Co., 2012. ISBN 978-1-617290-17-6 + +* D.R. Kaeli, P. Mistry, D. Schaa, and D.P. Zhang. + `Heterogeneous computing with OpenCL 2.0, 1st Edition `_. + Morgan Kaufmann, 2015. ISBN 978-0-12-801414-1 (print) or 978-0-12-801649-7 (eBook). 
+ + A thorough rewrite of the earlier well-selling book for OpenCL 1.2 that saw + 2 editions. + +Case studies and examples of programming paradigms +-------------------------------------------------- + +* J. Reinders and J. Jeffers (editors). + `High performance parallelism pearls. Volume 1: multicore and many-core programming approaches `_. + Morgan Kaufmann, 2014. ISBN 978-0-12-802118-7 + +* J. Reinders and J. Jeffers (editors). + `High performance parallelism pearls. Volume 2: multicore and many-core programming approaches `_. + Morgan Kaufmann, 2015. ISBN 978-0-12-803819-2 + +*Please mail further suggestions to geertjan.bex@uhasselt.be* diff --git a/source/software/singularity.rst b/source/compute/software/containers.rst similarity index 65% rename from source/software/singularity.rst rename to source/compute/software/containers.rst index b7ecc8a47..d8cfccb2a 100644 --- a/source/software/singularity.rst +++ b/source/compute/software/containers.rst @@ -1,18 +1,21 @@ -Can I run containers on the HPC systems? -======================================== +############################# +Containers on the HPC systems +############################# -The best-known container implementation is doubtlessly `Docker`_. However, -due to security concerns HPC sites typically don't allow users to run +The best-known container implementation is undoubtedly `Docker`_. However, +Docker needs to run as the *root* superuser of the system, which has several +security implications. Hence, HPC sites do not typically allow users to run Docker containers. -Fortunately, `Singularity`_ addresses container related security issues, -so Singularity images can be used on the cluster. Since a Singularity -image can be built from a Docker container, that should not be a severe -limitation. +Fortunately, `Apptainer`_ provides an alternative and safer approach for +containers that can be used by any regular user without *root* permissions. +Since Apptainer also provides the option to build images from Docker container +files, it is a suitable replacement for Docker itself. Therefore, Apptainer is +fully supported on all VSC clusters. When should I use containers? ------------------------------ +============================= If the software you intend to use is available on the VSC infrastructure, don't use containers. This software has been built to use specific @@ -21,42 +24,45 @@ typically built for the common denominator. Good use cases include: -- Containers can be useful to run software that is hard to install +* Containers can be useful to run software that is hard to install on HPC systems, e.g., GUI applications, legacy software, and so on. -- Containers can be useful to deal with compatibility issues between + +* Containers can be useful to deal with compatibility issues between Linux flavors. -- You want to create a workflow that can run on VSC infrastructure, + +* You want to create a workflow that can run on VSC infrastructure, but can also be burst to a third-party compute cloud (e.g., AWS or Microsoft Azure) when required. -- You want to maximize the period your software can be run in a + +* You want to maximize the period your software can be run in a reproducible way. -How can I create a Singularity image? -------------------------------------- +How can I create an Apptainer image? +==================================== You have three options to build images, locally on your machine, in the cloud or on the VSC infrastructure.
Building on VSC infrastructure -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +------------------------------ Given that most build procedures require superuser privileges, your options on the VSC infrastructure are limited. You can build an image from a Docker container, e.g., to build an image that contains a version of TensorFlow and has Jupyter as well, use:: - $ export SINGULARITY_TMPDIR=$VSC_SCRATCH/singularity_tmp - $ mkdir -p $SINGULARITY_TMPDIR - $ export SINGULARITY_CACHEDIR=$VSC_SCRATCH/singularity_cache - $ mkdir -p $SINGULARITY_CACHEDIR - $ singularity build tensorflow.sif docker://tensorflow/tensorflow:latest-jupyter + $ export APPTAINER_TMPDIR=$VSC_SCRATCH/apptainer_tmp + $ mkdir -p $APPTAINER_TMPDIR + $ export APPTAINER_CACHEDIR=$VSC_SCRATCH/apptainer_cache + $ mkdir -p $APPTAINER_CACHEDIR + $ apptainer build tensorflow.sif docker://tensorflow/tensorflow:latest-jupyter .. warning:: - Don't forget to define and create the ``$SINGULARITY_TMPDIR`` and - ``$SINGULARITY_CACHEDIR`` since if you fail to do so, Singularity + Don't forget to define and create the ``$APPTAINER_TMPDIR`` and + ``$APPTAINER_CACHEDIR`` since if you fail to do so, Apptainer will use directories in your home directory, and you will exceed the quota on that file system. @@ -75,53 +81,53 @@ container, you should consider the alternatives. Local builds -~~~~~~~~~~~~ +------------ The most convenient way to create an image is on your own machine, since you will have superuser privileges, and hence the most options to chose -from. At this point, Singularity only runs under Linux, so you would +from. At this point, Apptainer only runs under Linux, so you would have to use a virtual machine when using Windows or macOS. For detailed -instructions, see the `Singularity installation documentation`_. +instructions, see the `Apptainer Quick Start`_ guide. Besides building images from Docker containers, you have the option to create them from a definition file, which allows you to completely customize -your image. We provide a brief :ref:`introduction to Singularity definition files -`, but for more details, we refer you to the -`Singularity definition file documentation`_. +your image. We provide a brief :ref:`introduction to Apptainer definition files +`, but for more details, we refer you to the +documentation on `Apptainer Definition Files`_. -When you have a Singularity definition file, e.g., ``my_image.def``, you can +When you have a Apptainer definition file, e.g., ``my_image.def``, you can build your image file ``my_image.sif``:: - your_machine> singularity build my_image.sif my_image.def + your_machine> apptainer build my_image.sif my_image.def Once your image is built, you can :ref:`transfer ` it to the VSC infrastructure to use it. .. warning:: - Since Singularity images can be very large, transfer your image + Since Apptainer images can be very large, transfer your image to a directory where you have sufficient quota, e.g., ``$VSC_DATA``. Remote builds -~~~~~~~~~~~~~ +------------- -You can build images on the Singularity website, and download +You can build images on the Apptainer website, and download them to the VSC infrastructure. You will have to create an account at Sylabs. Once this is done, you can use `Sylabs Remote Builder`_ -to create an image based on a :ref:`Singularity definition -`. If the build succeeds, you can +to create an image based on an `Apptainer definition file +`. 
If the build succeeds, you can pull the resulting image from the library:: - $ export SINGULARITY_CACHEDIR=$VSC_SCRATCH/singularity_cache - $ mkdir -p $SINGULARITY_CACHEDIR - $ singularity pull library://gjbex/remote-builds/rb-5d6cb2d65192faeb1a3f92c3:latest + $ export APPTAINER_CACHEDIR=$VSC_SCRATCH/apptainer_cache + $ mkdir -p $APPTAINER_CACHEDIR + $ apptainer pull library://gjbex/remote-builds/rb-5d6cb2d65192faeb1a3f92c3:latest .. warning:: - Don't forget to define and create the ``$SINGULARITY_CACHEDIR`` - since if you fail to do so, Singularity will use directories in + Don't forget to define and create the ``$APPTAINER_CACHEDIR`` + since if you fail to do so, Apptainer will use directories in your home directory, and you will exceed the quota on that file system. @@ -138,12 +144,12 @@ However, local builds still offer more flexibility, especially when some interactive setup is required. -.. _Singularity definition files: +.. _apptainer_definition_files: -Singularity definition files -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Apptainer definition files +========================== -Below is an example of a Singularity definition file:: +Below is an example of an Apptainer definition file:: Bootstrap: docker From: ubuntu:xenial @@ -166,24 +172,23 @@ package will be installed. is no longer maintained can successfully be run on modern infrastructure. It is by no means intended to encourage you to start using Grace. -Singularity definition files are very flexible. For more details, -we refer you to the `Singularity definition file documentation`_. +Apptainer definition files are very flexible. For more details, +we refer you to the documentation on `Apptainer Definition Files`_. An important advantage of definition files is that they can easily be shared, and improve reproducibility. Conda environment in a definition file -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -`Conda environments -`_ -are a convinient solution when it comes to handling own Python-dependent +-------------------------------------- +:ref:`Conda environments` +are a convenient solution when it comes to handling your own Python-dependent software installations. Having a containerized conda environment is often useful for groups when working collectively on a common project. One way to have a conda environment in a container is to create it from an existing environment YAML file. If we have a conda environment exported in a YAML format file called, e.g., ``user_conda_environment.yml``, then -from that file one can recreate the same environment in a Singularity definition file:: +from that file one can recreate the same environment in an Apptainer definition file:: Bootstrap: docker From: continuumio/miniconda3 @@ -209,29 +214,29 @@ The ``exec "$@"`` line will accept the user's input command, e.g., on the cluster login nodes. -How can I run a Singularity image? ----------------------------------- +How can I run an Apptainer image? +================================= Once you have an image, there are several options to run the container. #. You can invoke any application that is in the ``$PATH`` of the container, e.g., for the image containing Grace:: - $ singularity exec grace.sif xmgrace + $ apptainer exec grace.sif xmgrace #. In case the definition file specified a ``%runscript`` directive, this can be executed using:: - $ singularity run grace.sif + $ apptainer run grace.sif #. 
The container can be run as a shell:: - $ singularity shell grace.sif + $ apptainer shell grace.sif By default, your home directory in the container will be mounted with the same path as it has on the host. The current working directory in the container is that on the host in which you -invoked ``singularity``. +invoked ``apptainer``. .. note:: @@ -245,7 +250,7 @@ using the ``-B`` option. Mount points are created dynamically (using overlays), so they do not have to exist in the image. For example, to mount the ``$VSC_SCRATCH`` directory, you would use:: - $ singularity exec -B $VSC_SCRATCH:/scratch grace.sif xmgrace + $ apptainer exec -B $VSC_SCRATCH:/scratch grace.sif xmgrace Your ``$VSC_SCRATCH`` directory is now accessible from within the image in the directory ``/scratch``. @@ -257,20 +262,20 @@ image in the directory ``/scratch``. mount points in the image and on the host, e.g., for the ``$VSC_DATA`` directory:: - $ singularity exec -B $VSC_DATA:$VSC_DATA grace.sif xmgrace + $ apptainer exec -B $VSC_DATA:$VSC_DATA grace.sif xmgrace Or, more concisely:: - $ singularity exec -B $VSC_DATA grace.sif xmgrace + $ apptainer exec -B $VSC_DATA grace.sif xmgrace The host environment variables are defined in the image, hence scripts that use those will work. -Can I use singularity images in a job? --------------------------------------- +Can I use Apptainer images in a job? +------------------------------------ -Yes, you can. Singularity images can be part of any workflow, e.g., +Yes, you can. Apptainer images can be part of any workflow, e.g., the following script would create a plot in the Grace container:: #!/bin/bash -l #PBS -l nodes=1:ppn=1 #PBS -l walltime=00:30:00 cd $PBS_O_WORKDIR - singularity exec grace.sif gracebat -data data.dat \ + apptainer exec grace.sif gracebat -data data.dat \ -batch plot.bat Ensure that the container has access to all the required directories by providing additional bindings if necessary. -Can I run parallel applications using a Singularity image? +Can I run parallel applications using an Apptainer image? ---------------------------------------------------------- For shared memory applications there is absolutely no problem. @@ -305,17 +310,13 @@ support `. degradation. -Can I run a service from a Singularity image? +Can I run a service from an Apptainer image? --------------------------------------------- Yes, it is possible to run services such as databases or web -applications that are installed in Singularity images. +applications that are installed in Apptainer images. For this type of scenario, it is probably best to contact :ref:`user support `. -.. _Singularity installation documentation: https://singularity.hpcng.org/user-docs/3.8/quick_start.html#quick-installation-steps -.. _Singularity definition file documentation: https://singularity.hpcng.org/user-docs/3.8/definition_files.html -.. 
_Sylabs Remote Builder: https://cloud.sylabs.io/builder - diff --git a/source/software/eclipse.rst b/source/compute/software/eclipse.rst similarity index 100% rename from source/software/eclipse.rst rename to source/compute/software/eclipse.rst diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository.rst b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository.rst similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository.rst rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository.rst diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_add_repository.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_add_repository.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_add_repository.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_add_repository.png diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_checkout.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_checkout.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_checkout.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_checkout.png diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_ssh_credentials.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_ssh_credentials.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_ssh_credentials.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_ssh_credentials.png diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_username_password.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_username_password.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_username_password.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_enter_username_password.png diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_in_eclipse_marketplace.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_in_eclipse_marketplace.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_in_eclipse_marketplace.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_in_eclipse_marketplace.png diff --git a/source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_installation.png b/source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_installation.png similarity index 100% rename from source/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_installation.png rename to source/compute/software/eclipse_access_to_a_vsc_subversion_repository/subclipse_installation.png diff --git a/source/software/eclipse_as_a_remote_editor.rst b/source/compute/software/eclipse_as_a_remote_editor.rst similarity index 100% rename from source/software/eclipse_as_a_remote_editor.rst rename to source/compute/software/eclipse_as_a_remote_editor.rst diff 
--git a/source/software/eclipse_as_a_remote_editor/install_software.png b/source/compute/software/eclipse_as_a_remote_editor/install_software.png similarity index 100% rename from source/software/eclipse_as_a_remote_editor/install_software.png rename to source/compute/software/eclipse_as_a_remote_editor/install_software.png diff --git a/source/software/eclipse_as_a_remote_editor/open_perspective.png b/source/compute/software/eclipse_as_a_remote_editor/open_perspective.png similarity index 100% rename from source/software/eclipse_as_a_remote_editor/open_perspective.png rename to source/compute/software/eclipse_as_a_remote_editor/open_perspective.png diff --git a/source/software/eclipse_introduction_and_installation.rst b/source/compute/software/eclipse_introduction_and_installation.rst similarity index 100% rename from source/software/eclipse_introduction_and_installation.rst rename to source/compute/software/eclipse_introduction_and_installation.rst diff --git a/source/software/eclipse_with_ptp_and_version_control.rst b/source/compute/software/eclipse_with_ptp_and_version_control.rst similarity index 100% rename from source/software/eclipse_with_ptp_and_version_control.rst rename to source/compute/software/eclipse_with_ptp_and_version_control.rst diff --git a/source/software/foss_toolchain.rst b/source/compute/software/foss_toolchain.rst similarity index 100% rename from source/software/foss_toolchain.rst rename to source/compute/software/foss_toolchain.rst diff --git a/source/software/git.rst b/source/compute/software/git.rst similarity index 100% rename from source/software/git.rst rename to source/compute/software/git.rst diff --git a/source/software/hybrid_mpi_openmp_programs.rst b/source/compute/software/hybrid_mpi_openmp_programs.rst similarity index 93% rename from source/software/hybrid_mpi_openmp_programs.rst rename to source/compute/software/hybrid_mpi_openmp_programs.rst index 16d872f10..499b3b0b4 100644 --- a/source/software/hybrid_mpi_openmp_programs.rst +++ b/source/compute/software/hybrid_mpi_openmp_programs.rst @@ -226,11 +226,12 @@ Intel documentation on hybrid programming Some documents on the Intel web site that contain more information on developing and running hybrid programs: -- `Interoperability with OpenMP API`_ in the `MPI Reference Manual`_ - explains the concept of MPI domains and how they should be used/set - for hybrid programs. -- `Beginning Hybrid MPI/OpenMP Development`_, - useful if you develop your own code. +* `Interoperability with OpenMP API`_ in the `Intel MPI Documentation`_ + explains the concept of MPI domains and how they should be used/set for + hybrid programs. + +* `Intel MPI - Beginning Hybrid MPI/OpenMP Development`_ is a *getting started* + guide that is useful if you develop your own code. FOSS toolchain (GCC and Open MPI) --------------------------------- @@ -275,18 +276,20 @@ Open MPI allows a lot of control over process placement and rank assignment. The Open MPI mpirun command has several options that influence this process: -- ``--map-by`` influences the mapping of processes on the available - processing resources -- ``--rank-by`` influences the rank assignment -- ``--bind-to`` influences the binding of processes to sets of - processing resources -- ``--report-bindings`` can then be used to report on the process - binding. 
+* ``--map-by`` influences the mapping of processes on the available + processing resources + +* ``--rank-by`` influences the rank assignment + +* ``--bind-to`` influences the binding of processes to sets of + processing resources + +* ``--report-bindings`` can then be used to report on the process + binding. More information can be found in the manual pages for ``mpirun`` which can be found on the Open MPI webpages `Open MPI Documentation`_ and in the following presentations: -- Poster paper \\"`Locality-Aware Parallel Process Mapping for Multi-Core HPC Systems`_\" -- Slides from the presentation \\"`Open MPI Explorations in Process Affinity`_\" from EuroMPI'13 +* Slides from the presentation `Open MPI Explorations in Process Affinity`_ from EuroMPI'13 diff --git a/source/compute/software/index.rst b/source/compute/software/index.rst new file mode 100644 index 000000000..ae354f079 --- /dev/null +++ b/source/compute/software/index.rst @@ -0,0 +1,11 @@ +################################ +:fas:`cubes` Scientific Software +################################ + +.. toctree:: + :maxdepth: 2 + + using_software + software_development + postprocessing_tools + diff --git a/source/software/intel_toolchain.rst b/source/compute/software/intel_toolchain.rst similarity index 78% rename from source/software/intel_toolchain.rst rename to source/compute/software/intel_toolchain.rst index 14a6d4526..26b1dc4df 100644 --- a/source/software/intel_toolchain.rst +++ b/source/compute/software/intel_toolchain.rst @@ -213,31 +213,36 @@ documentation at the bottom of this page `. There are two ways to link the MKL library: -- If you use icc, icpc or ifort to link your code, you can use the -mkl - compiler option: - - - -mkl=parallel or -mkl: Link the multi-threaded version of the - library. - - -mkl=sequential: Link the single-threaded version of the library - - -mkl=cluster: Link the cluster-specific and sequential library, - i.e., ScaLAPACK will be included, but assumes one process per core - (so no hybrid MPI/multi-threaded approach) - - The Fortran95 interface library for lapack is not automatically - included though. You'll have to specify that library seperately. You - can get the value from the `MKL Link Line Advisor`_, - see also the next item. -- Or you can specify all libraries explictly. To do this, it is - strongly recommended to use Intel's `MKL Link Line Advisor`_, - and will also tell you how to link the MKL library with code - generated with the GNU and PGI compilers. - **Note:** On most VSC systems, the variable MKLROOT has a different - value from the one assumed in the Intel documentation. Wherever you - see ``$(MKLROOT)`` you may have to replace it with - ``$(MKLROOT)/mkl``. +* If you use icc, icpc or ifort to link your code, you can use the -mkl + compiler option: + + * ``-mkl=parallel`` or ``-mkl``: Link the multi-threaded version of the + library. + + * ``-mkl=sequential``: Link the single-threaded version of the library + + * ``-mkl=cluster``: Link the cluster-specific and sequential library, + i.e., ScaLAPACK will be included, but assumes one process per core + (so no hybrid MPI/multi-threaded approach) + + The Fortran95 interface library for LAPACK is not automatically + included though. You'll have to specify that library separately. You + can get the value from the `Intel oneAPI MKL Link Line Advisor`_, + see also the next item. + +* Alternatively, you can specify all libraries explicitly. 
To do this, it is + strongly recommended to use the `Intel oneAPI MKL Link Line Advisor`_, which will + also tell you how to link the MKL library with code generated with the GNU + and PGI compilers. + +.. note:: + + On most VSC systems, the variable MKLROOT has a different value from the one + assumed in the Intel documentation. Wherever you see ``$(MKLROOT)`` you may + have to replace it with ``$(MKLROOT)/mkl``. MKL also offers a very fast streaming pseudorandom number generator, see -the documentation for details. +the `Intel oneAPI MKL Documentation`_ for details. Intel toolchain version numbers ------------------------------- @@ -269,43 +274,30 @@ Intel toolchain version numbers +-----------+----------------+------------+------------+--------+--------+----------+ - .. _Intel documentation: Further information on Intel tools ---------------------------------- -- All Intel documentation of recent software versions is available in - the `Intel Software Documentation Library`_ - The documentation is typically available for the most recent version - and sometimes one older version of te compiler and libraries. -- Some other useful documents: - - - `Quick-Reference Guide to Optimization with Intel® Compilers `_. - - `Direct link to the C/C++ compiler developer and reference - guide `_ - - `Direct link to the Fortran compiler user and reference - guide `_ - - `Page with links to the documentation of the most recent version - of Intel - MPI `_ - -- MKL - - - `Link page to the documentation of MKL on the Intel web - site `_ - - `MKL Link Line - Advisor `_ - -- :ref:`Generic BLAS/LAPACK/ScaLAPACK documentation ` - - - .. index:: - single: compiler - single: MPI - single: OpenMP - single: Intel MPI - single: MKL - single: BLAS - single: LAPACK +All Intel documentation of recent software versions is available in the `Intel +Software Documentation Library`_. The documentation is typically available for +the most recent version and sometimes one older version of the compiler and +libraries. + +Some other useful documents: + +* Compilers: + + * `Intel oneAPI DPC Compiler Documentation`_ + * `Intel Fortran Compiler Documentation`_ + +* MPI: + + * `Intel MPI Documentation`_ + +* Numeric libraries: + + * `Intel oneAPI MKL Documentation`_ + * `Intel oneAPI MKL Link Line Advisor`_ + * :ref:`Generic BLAS/LAPACK/ScaLAPACK documentation ` diff --git a/source/software/intel_trace_analyzer_collector.rst b/source/compute/software/intel_trace_analyzer_collector.rst similarity index 81% rename from source/software/intel_trace_analyzer_collector.rst rename to source/compute/software/intel_trace_analyzer_collector.rst index 26402bbce..5f83af35c 100644 --- a/source/software/intel_trace_analyzer_collector.rst +++ b/source/compute/software/intel_trace_analyzer_collector.rst @@ -1,8 +1,11 @@ -.. _ITAC: - Intel Trace Analyzer & Collector ================================ +.. warning:: + + `Intel discontinued ITAC `_ + in 2022 with its last version, 2022.3. Users are encouraged to transition to `Intel oneAPI VTune Profiler`_. + Purpose ------- @@ -32,9 +35,9 @@ however, more sophisticated options are available. .. note:: - - Users of the UAntwerpen clusters should load the inteldevtools module - instead, which makes also available Intel's debugger, VTune, Advisor + + |UA| Users of the UAntwerpen clusters should load the ``inteldevtools`` + module instead, which also makes available Intel's debugger, VTune, Advisor and Inspector development tools. #. 
Compile your application so that it can generate a trace: @@ -80,8 +83,3 @@ however, more sophisticated options are available. $ traceanalyzer myapp.stf -Further information ------------------- - -Intel's `ITAC documentation`_ - diff --git a/source/software/matlab_getting_started.rst b/source/compute/software/matlab_getting_started.rst similarity index 100% rename from source/software/matlab_getting_started.rst rename to source/compute/software/matlab_getting_started.rst diff --git a/source/software/matlab_parallel_computing.rst b/source/compute/software/matlab_parallel_computing.rst similarity index 100% rename from source/software/matlab_parallel_computing.rst rename to source/compute/software/matlab_parallel_computing.rst diff --git a/source/software/module_system_basics.rst b/source/compute/software/module_system_basics.rst similarity index 100% rename from source/software/module_system_basics.rst rename to source/compute/software/module_system_basics.rst diff --git a/source/software/mpi_for_distributed_programming.rst b/source/compute/software/mpi_for_distributed_programming.rst similarity index 86% rename from source/software/mpi_for_distributed_programming.rst rename to source/compute/software/mpi_for_distributed_programming.rst index f4d4c6eaf..562c69c59 100644 --- a/source/software/mpi_for_distributed_programming.rst +++ b/source/compute/software/mpi_for_distributed_programming.rst @@ -65,8 +65,8 @@ specification. When developing your own software, this is the preferred order to select an implementation. The performance should be very similar, however, more -development tools are available for Intel MPI -(e.g., ":ref:`ITAC`" for performance monitoring). +development tools are available for Intel MPI +(e.g., `Intel oneAPI VTune Profiler`_ for performance monitoring). Several other implementations may be installed, e.g., `MVAPICH`_, but we assume you know what you're doing if you choose to use them. @@ -88,7 +88,8 @@ Allinea MAP) are now bundled nito ArmForge, which is available as a module on KU Leuven systems. Video tutorials are available on the Arm website: `ARM-DDT video`_. (KU Leuven-only). -When using the Intel toolchain, ":ref:`ITAC`" (ITAC) may also prove useful. +When using the Intel toolchain, the `Intel oneAPI VTune Profiler`_ may also +prove useful. Profiling --------- @@ -99,23 +100,22 @@ MAP) or `Scalasca docs`_. 
(KU Leuven-only) Further information ------------------- -- `Intel MPI`_ web site +* `Intel MPI`_ web site - - `Intel MPI Documentation`_ (Latest version) + * `Intel MPI Documentation`_ (Latest version) -- `Open MPI`_ web site +* `Open MPI`_ web site - - `Open MPI Documentation`_ + * `Open MPI Documentation`_ -- SGI MPT, now HPE Performance Software MPI +* SGI MPT, now HPE Performance Software MPI - - `HPE MPT Documentation`_ + * `HPE MPT Documentation`_ -- `MPI forum`_, where you can also - find the standard specifications +* `MPI Forum`_, where you can also find the standard specifications - - `MPI Standard documents`_ + * `MPI Documents`_ -- See also the pages in the tutorials section e.g., for - :ref:`books` and online tutorial :ref:`web tutorials` +See also the pages in the tutorials section, e.g., for :ref:`books` and the +online :ref:`web tutorials`. diff --git a/source/software/ms_visual_studio.rst b/source/compute/software/ms_visual_studio.rst similarity index 95% rename from source/software/ms_visual_studio.rst rename to source/compute/software/ms_visual_studio.rst index 0ab24898d..dfa1a5561 100644 --- a/source/software/ms_visual_studio.rst +++ b/source/compute/software/ms_visual_studio.rst @@ -97,10 +97,3 @@ appropriate main thread in the Threads view. .. figure:: ms_visual_studio/ms_visual_studio_debugging.png -Useful links ------------- - -- A `tutorial on debugging - `_ - in Microsoft Visual C++ - diff --git a/source/software/ms_visual_studio/ms_visual_studio_debugging.png b/source/compute/software/ms_visual_studio/ms_visual_studio_debugging.png similarity index 100% rename from source/software/ms_visual_studio/ms_visual_studio_debugging.png rename to source/compute/software/ms_visual_studio/ms_visual_studio_debugging.png diff --git a/source/software/ms_visual_studio/ms_visual_studio_mpi.png b/source/compute/software/ms_visual_studio/ms_visual_studio_mpi.png similarity index 100% rename from source/software/ms_visual_studio/ms_visual_studio_mpi.png rename to source/compute/software/ms_visual_studio/ms_visual_studio_mpi.png diff --git a/source/software/ms_visual_studio/ms_visual_studio_openmp.png b/source/compute/software/ms_visual_studio/ms_visual_studio_openmp.png similarity index 100% rename from source/software/ms_visual_studio/ms_visual_studio_openmp.png rename to source/compute/software/ms_visual_studio/ms_visual_studio_openmp.png diff --git a/source/software/ms_visual_studio/ms_visual_studio_run_environment.png b/source/compute/software/ms_visual_studio/ms_visual_studio_run_environment.png similarity index 100% rename from source/software/ms_visual_studio/ms_visual_studio_run_environment.png rename to source/compute/software/ms_visual_studio/ms_visual_studio_run_environment.png diff --git a/source/software/openmp_for_shared_memory_programming.rst b/source/compute/software/openmp_for_shared_memory_programming.rst similarity index 90% rename from source/software/openmp_for_shared_memory_programming.rst rename to source/compute/software/openmp_for_shared_memory_programming.rst index a270f445f..f7e2e71a3 100644 --- a/source/software/openmp_for_shared_memory_programming.rst +++ b/source/compute/software/openmp_for_shared_memory_programming.rst @@ -76,12 +76,12 @@ Running OpenMP programs We assume you are already familiar with the job submission procedure. If not, check the :ref:`Running jobs` section first. 
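For example, a minimal job script for an OpenMP program that runs four threads on a single node could look like this (a sketch only: the module version, program name and walltime are placeholders to adapt to your own case)::

   #!/bin/bash -l
   #PBS -l nodes=1:ppn=4
   #PBS -l walltime=01:00:00

   cd $PBS_O_WORKDIR

   # load the toolchain used to build the program (version is an example)
   module load foss/2023a

   # match the number of threads to the ppn request above
   export OMP_NUM_THREADS=4
   ./my_openmp_program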
-Since OpenMP is intended for use in a shared memory context, when -submitting a job to the queue system, remember to request a single node -and as many processors as you need parallel -threads (e.g., ``-l nodes=1:ppn=4``). The latter should not exceed the number of -cores on the machine the job runs on. For relevant hardware information, -please consult the list of available :ref:`hardware `. +Since OpenMP is intended for use in a shared memory context, when submitting a +job to the queue system, remember to request a single node and as many +processors as you need parallel threads (e.g., ``-l nodes=1:ppn=4``). +The latter should not exceed the number of cores on the machine the job runs +on. Please consult the description of the VSC :ref:`tier1 hardware` and +:ref:`tier2 hardware` to find out the hardware specifications of your cluster. You may have to set the number of cores that the program should use by hand, e.g., when you don't use all cores on a node, because the diff --git a/source/software/parallel_software.rst b/source/compute/software/parallel_software.rst similarity index 99% rename from source/software/parallel_software.rst rename to source/compute/software/parallel_software.rst index 3468a3fbd..6d83901e8 100644 --- a/source/software/parallel_software.rst +++ b/source/compute/software/parallel_software.rst @@ -110,7 +110,7 @@ There are a few commonly used approaches to create a multi-threaded application: available for compiling and running OpenMP application with the :ref:`foss ` and :ref:`Intel ` toolchains. -`Threading Building Blocks`_ (TBB) +`oneAPI Threading Building Blocks`_ (TBB) Originally developed by Intel, this open source library offers many primitives for shared memory and data driven programming in C++. diff --git a/source/software/parameterweaver.rst b/source/compute/software/parameterweaver.rst similarity index 100% rename from source/software/parameterweaver.rst rename to source/compute/software/parameterweaver.rst diff --git a/source/access/paraview_remote_visualization.rst b/source/compute/software/paraview_remote_visualization.rst similarity index 88% rename from source/access/paraview_remote_visualization.rst rename to source/compute/software/paraview_remote_visualization.rst index e06264a74..54ba5d637 100644 --- a/source/access/paraview_remote_visualization.rst +++ b/source/compute/software/paraview_remote_visualization.rst @@ -1,14 +1,17 @@ -.. _Paraview: +.. _Paraview remote: Paraview remote visualization ============================= -Prerequisits ------------- +Prerequisites +------------- You should have ParaView installed on your desktop, and know how to use -it (the latter is outside the scope of this page). **Note**: the client -and server version should match to avoid problems! +it (the latter is outside the scope of this page). + +.. note:: + + The client and server should have matching versions to avoid problems! Overview -------- @@ -47,11 +50,12 @@ the next step to establish the required SSH tunnel. Establish an SSH tunnel ~~~~~~~~~~~~~~~~~~~~~~~ -To connect the desktop ParaView client with the desktop with the -ParaView server on the compute node, an SSH tunnel has to be established -between your desktop and that compute node. Details for :ref:`Windows using -PuTTY ` and :ref:`Linux using ssh -` are given in the appropriate client software sections. +To connect the ParaView client on your desktop with the ParaView +server on the compute node, an SSH tunnel has to be established between your +desktop and that compute node. 
Details for +:ref:`Windows using PuTTY ` and +:ref:`Linux using SSH ` are given in the appropriate client +software sections. Connect to the remote server using ParaView on your desktop ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/access/paraview_remote_visualization/paraview_remote_visualization_01.png b/source/compute/software/paraview_remote_visualization/paraview_remote_visualization_01.png similarity index 100% rename from source/access/paraview_remote_visualization/paraview_remote_visualization_01.png rename to source/compute/software/paraview_remote_visualization/paraview_remote_visualization_01.png diff --git a/source/access/paraview_remote_visualization/paraview_remote_visualization_02.png b/source/compute/software/paraview_remote_visualization/paraview_remote_visualization_02.png similarity index 100% rename from source/access/paraview_remote_visualization/paraview_remote_visualization_02.png rename to source/compute/software/paraview_remote_visualization/paraview_remote_visualization_02.png diff --git a/source/access/paraview_remote_visualization/paraview_remote_visualization_03.png b/source/compute/software/paraview_remote_visualization/paraview_remote_visualization_03.png similarity index 100% rename from source/access/paraview_remote_visualization/paraview_remote_visualization_03.png rename to source/compute/software/paraview_remote_visualization/paraview_remote_visualization_03.png diff --git a/source/access/paraview_remote_visualization/paraview_remote_visualization_04.png b/source/compute/software/paraview_remote_visualization/paraview_remote_visualization_04.png similarity index 100% rename from source/access/paraview_remote_visualization/paraview_remote_visualization_04.png rename to source/compute/software/paraview_remote_visualization/paraview_remote_visualization_04.png diff --git a/source/software/perl_package_management.rst b/source/compute/software/perl_package_management.rst similarity index 100% rename from source/software/perl_package_management.rst rename to source/compute/software/perl_package_management.rst diff --git a/source/compute/software/postprocessing_tools.rst b/source/compute/software/postprocessing_tools.rst new file mode 100644 index 000000000..5a9aec893 --- /dev/null +++ b/source/compute/software/postprocessing_tools.rst @@ -0,0 +1,35 @@ +##################### +Post-processing tools +##################### + +*This section is still rather empty. It will be expanded over time.* + +Visualization software +====================== + +.. _Paraview: + +Paraview +-------- + +`ParaView `__ is a free +visualization package. It can be used in three modes: + +* *Installed on your desktop*: you have to transfer your data to your desktop + system + +* *Interactive process on the cluster*: this option is available only for + :ref:`NoMachine NX users ` (go to Applications menu -> HPC -> + Visualisation -> Paraview). + +* *In client-server mode*: The interactive application of ParaView runs on your + desktop, while its server component runs on the cluster. The server reads the + data, renders the images (no GPU required as ParaView also contains a + software OpenGL renderer) and sends the rendered images to the Paraview + application on your desktop. Setting up ParaView for this scenario is + explained in the following chapters: + + .. 
toctree:: + :maxdepth: 2 + + paraview_remote_visualization diff --git a/source/software/python_package_management.rst b/source/compute/software/python_package_management.rst similarity index 94% rename from source/software/python_package_management.rst rename to source/compute/software/python_package_management.rst index 3be2c5f0c..6261745ea 100644 --- a/source/software/python_package_management.rst +++ b/source/compute/software/python_package_management.rst @@ -79,10 +79,9 @@ default package is not optimized for the CPUs in our infrastructure, and will run sub-optimally. (Note that this is not the case when you run TensorFlow on GPUs, since conda will install the appropriate CUDA libraries.) -These issues can be avoided by using Intel's Python distribution that contains -Intel MPI and optimized versions of packages such as scikit-learn and TensorFlow. -You will find `installation instructions `_ -provided by Intel. +These issues can be avoided by using the `Intel oneAPI Python Distribution`_ +that contains `Intel MPI`_ and optimized versions of packages such as +scikit-learn and TensorFlow. .. _install_miniconda_python: @@ -132,10 +131,11 @@ directory to ``PATH``. You can create an environment based on the default conda channels, but it is recommended to at least consider the Intel Python distribution. -Intel provides instructions on `how to install the Intel Python distribution -`_. +Intel provides instructions on how to install the `Intel oneAPI Python +Distribution`_ with conda. -Alternatively, to creating a new conda environment based on the default channels: +Alternatively, you can create a new conda environment based on the default +channels:: $ conda create -n science numpy scipy matplotlib diff --git a/source/software/r_command_line_arguments_in_scripts.rst b/source/compute/software/r_command_line_arguments_in_scripts.rst similarity index 100% rename from source/software/r_command_line_arguments_in_scripts.rst rename to source/compute/software/r_command_line_arguments_in_scripts.rst diff --git a/source/software/r_devtools.rst b/source/compute/software/r_devtools.rst similarity index 100% rename from source/software/r_devtools.rst rename to source/compute/software/r_devtools.rst diff --git a/source/software/r_integrating_c_functions.rst b/source/compute/software/r_integrating_c_functions.rst similarity index 100% rename from source/software/r_integrating_c_functions.rst rename to source/compute/software/r_integrating_c_functions.rst diff --git a/source/software/r_package_management.rst b/source/compute/software/r_package_management.rst similarity index 100% rename from source/software/r_package_management.rst rename to source/compute/software/r_package_management.rst diff --git a/source/software/software_development.rst b/source/compute/software/software_development.rst similarity index 100% rename from source/software/software_development.rst rename to source/compute/software/software_development.rst index fb8da1104..b0940279a 100644 --- a/source/software/software_development.rst +++ b/source/compute/software/software_development.rst @@ -21,8 +21,8 @@ Development tools :maxdepth: 2 toolchains - intel_toolchain foss_toolchain + intel_toolchain intel_trace_analyzer_collector eclipse ms_visual_studio diff --git a/source/software/specific_eclipse_issues_on_os_x.rst b/source/compute/software/specific_eclipse_issues_on_os_x.rst similarity index 100% rename from source/software/specific_eclipse_issues_on_os_x.rst rename to source/compute/software/specific_eclipse_issues_on_os_x.rst diff 
--git a/source/software/subversion.rst b/source/compute/software/subversion.rst similarity index 100% rename from source/software/subversion.rst rename to source/compute/software/subversion.rst diff --git a/source/software/toolchains.rst b/source/compute/software/toolchains.rst similarity index 100% rename from source/software/toolchains.rst rename to source/compute/software/toolchains.rst diff --git a/source/software/tortoisesvn.rst b/source/compute/software/tortoisesvn.rst similarity index 100% rename from source/software/tortoisesvn.rst rename to source/compute/software/tortoisesvn.rst diff --git a/source/software/tortoisesvn/tortoisesvn-browsing.png b/source/compute/software/tortoisesvn/tortoisesvn-browsing.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-browsing.png rename to source/compute/software/tortoisesvn/tortoisesvn-browsing.png diff --git a/source/software/tortoisesvn/tortoisesvn-checkout.png b/source/compute/software/tortoisesvn/tortoisesvn-checkout.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-checkout.png rename to source/compute/software/tortoisesvn/tortoisesvn-checkout.png diff --git a/source/software/tortoisesvn/tortoisesvn-commit.png b/source/compute/software/tortoisesvn/tortoisesvn-commit.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-commit.png rename to source/compute/software/tortoisesvn/tortoisesvn-commit.png diff --git a/source/software/tortoisesvn/tortoisesvn-import.png b/source/compute/software/tortoisesvn/tortoisesvn-import.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-import.png rename to source/compute/software/tortoisesvn/tortoisesvn-import.png diff --git a/source/software/tortoisesvn/tortoisesvn-working-copy.png b/source/compute/software/tortoisesvn/tortoisesvn-working-copy.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-working-copy.png rename to source/compute/software/tortoisesvn/tortoisesvn-working-copy.png diff --git a/source/software/tortoisesvn/tortoisesvn-working-cycle.png b/source/compute/software/tortoisesvn/tortoisesvn-working-cycle.png similarity index 100% rename from source/software/tortoisesvn/tortoisesvn-working-cycle.png rename to source/compute/software/tortoisesvn/tortoisesvn-working-cycle.png diff --git a/source/software/tortoisesvn/winmerge.png b/source/compute/software/tortoisesvn/winmerge.png similarity index 100% rename from source/software/tortoisesvn/winmerge.png rename to source/compute/software/tortoisesvn/winmerge.png diff --git a/source/software/using_software.rst b/source/compute/software/using_software.rst similarity index 94% rename from source/software/using_software.rst rename to source/compute/software/using_software.rst index e8c0a827c..f482fd25b 100644 --- a/source/software/using_software.rst +++ b/source/compute/software/using_software.rst @@ -27,8 +27,8 @@ please consult the following pages: :maxdepth: 1 module_system_basics - ../gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters - ../leuven/leuven_module_system + /gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters + /leuven/leuven_module_system Packages with additional documentation diff --git a/source/software/version_control_systems.rst b/source/compute/software/version_control_systems.rst similarity index 100% rename from source/software/version_control_systems.rst rename to source/compute/software/version_control_systems.rst diff --git a/source/jobs/basic_linux_usage.rst 
b/source/compute/terminal/basic_linux.rst similarity index 62% rename from source/jobs/basic_linux_usage.rst rename to source/compute/terminal/basic_linux.rst index ddc46acaf..539909995 100644 --- a/source/jobs/basic_linux_usage.rst +++ b/source/compute/terminal/basic_linux.rst @@ -3,13 +3,8 @@ Basic Linux usage ================= -All the VSC clusters run the Linux operating system: - -- KU Leuven: Rocky Linux release 8.x (wICE) and CentOS 7.x (Genius) - (Santiago), 64 bit -- UAntwerpen: CentOS 7.x -- UGent: CentOS 7.x -- VUB: CentOS 7.x +All the VSC clusters run the Linux operating system. Specifically, all clusters +currently run some flavor of `Red Hat Enterprise Linux `_. This means that, when you connect to one of them, you get a command line interface, which looks something like this: @@ -18,7 +13,7 @@ interface, which looks something like this: vsc30001@login1:~> -When you see this, we also say you are inside a \\"shell\". The shell +When you see this, we also say you are inside a *shell*. The shell will accept your commands, and execute them. Some of the most often used commands include: +------+----------------------------------------------------+ | echo | Prints its parameters to the screen | +------+----------------------------------------------------+ -Most commands will accept or even need parameters, which are placed -after the command, separated by spaces. A simple example with the 'echo' -command: +Most commands will accept or even need parameters, which are placed after the +command, separated by spaces. A simple example with the ``echo`` command: :: $ echo This is a test This is a test -Important here is the \\"$\" sign in front of the first line. This -should not be typed, but is a convention meaning \\"the rest of this -line should be typed at your shell prompt\". The lines not starting with -the \\"$\" sign are usually the feedback or output from the command. +Important here is the ``$`` sign in front of the first line. This +should not be typed, but is a convention meaning that *"the rest of this +line should be typed at your shell prompt"*. The lines not starting with +the ``$`` sign are usually the feedback or output from the command. More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more @@ -60,7 +54,7 @@ either of the following: $ man ls $ info ls -(You can exit the last two \\"manuals\" by using the 'q' key.) +You can exit the last two *manuals* by using the ``q`` key. Tutorials --------- @@ -68,11 +62,9 @@ For more exhaustive tutorials about Linux usage, please refer to the following sites: - - -- `Linux Tutorials YouTube Channel`_ -- `Linux Basics on Lifewire`_ -- `Linux Newbie Administrator Guide`_ -- We organise regular Linux introductory courses, see the - `VSC website `_. +* `Linux Tutorials YouTube Channel`_ +* `DigitalOcean Introduction to Linux Basics`_ +* `Linux Newbie Administrator Guide`_ +* The VSC organises regular Linux introductory courses, see the `VSC Training`_ + website.
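To give a concrete feel for how commands and their parameters combine, the following short session creates a directory, moves into it, and lists its (still empty) contents; all of these are standard commands available on every VSC cluster::

   $ mkdir test_project
   $ cd test_project
   $ ls -l
   total 0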
diff --git a/source/compute/terminal/index.rst b/source/compute/terminal/index.rst new file mode 100644 index 000000000..47afcaa97 --- /dev/null +++ b/source/compute/terminal/index.rst @@ -0,0 +1,227 @@ +.. _terminal interface: + +################################## +:fas:`terminal` Terminal Interface +################################## + +.. toctree:: + :hidden: + + windows_client + macos_client + linux_client + +We provide multiple methods to access the VSC clusters and use their +computational resources. Not all options may be equally supported across all +clusters though. In case of doubt, please contact the corresponding +:ref:`support team `. + +.. _terminal ssh: + +Secure Shell Connection +======================= + +You can open a terminal with a command prompt on any VSC cluster by logging in +via the `Secure Shell`_ (SSH) protocol to the corresponding login node of that +cluster. To this end, you will need to install and configure some SSH client +software on your computer. + +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: :fab:`windows` Windows + :columns: 12 4 4 4 + :link: windows_client + :link-type: doc + + SSH client setup + + .. grid-item-card:: :fab:`apple` macOS + :columns: 12 4 4 4 + :link: macos_client + :link-type: doc + + SSH client setup + + .. grid-item-card:: :fab:`linux` Linux + :columns: 12 4 4 4 + :link: linux_client + :link-type: doc + + SSH client setup + +.. note:: + + |KUL| Logging in to a KU Leuven cluster with SSH requires + :ref:`Multi Factor Authentication`. + +.. _terminal linux system: + +Linux System +============ + +All VSC clusters run the `Linux kernel`_ and a `GNU`_ operating system, +so-called GNU/Linux or often just referred to as Linux. Specifically, all our +HPC clusters currently run some flavor of `Red Hat Enterprise Linux`_, which +means that all clusters share a common toolbox that can be used across VSC +sites. + +Once you connect to the terminal interface of a VSC cluster, you will be +presented with a command line prompt that accepts Linux commands. It is hence +necessary to have some knowledge of how to use the terminal in Linux to be able +to perform any task in the system. The terminal might look daunting at first, +as you have to know what commands to type to carry out even the simplest +operations, like making folders and moving files. But making the effort to +master the terminal is a guaranteed good investment of your time, as it is a +very powerful tool that allows you to extensively automate your workflows. + +The following sections provide an introduction to the Linux terminal. + +.. toctree:: + :maxdepth: 2 + + basic_linux + shell_scripts + +.. _terminal gui apps: + +Graphical applications on the terminal +====================================== + +.. include:: recommend_web_portal.rst + +Launching programs with a graphical user interface (GUI) through the terminal +interface of the cluster requires additional support on your SSH client. You +need some software component that can encrypt and transfer through the network +the graphical data of your application running on the cluster and display it on +your screen. + +.. _terminal x11: + +X Server +-------- + +Most SSH clients provide integration with a so called `X Server`_. This is a +client/server solution that uses the X Window System protocol to display +graphics on local or remote screens. + +.. toctree:: + :hidden: + + Xming X Server + NoMachine NX + +.. tab-set:: + :sync-group: operating-system + + .. tab-item:: Windows + :sync: win + + Displaying graphical applications running on the Linux system of the VSC + cluster on your Windows system can be done by setting up an X Server on + your computer: + + * |Recommended| :ref:`MobaXterm ` provides an X + Server, :ref:`enable X11-Forwarding ` on + your SSH connections to use graphical applications with it. 
+ * :ref:`PuTTY ` provides an X Server, :ref:`enable + X11-Forwarding ` to use graphical applications on + all your SSH connections. + + * Install a standalone X server such as :ref:`Xming `. + + .. tab-item:: macOS + :sync: mac + + You can display remote graphical applications on your Mac with an X + server. The recommended option is `XQuartz `__ + which is an X Window System implementation freely available and supported + by Apple. + + Once XQuartz is installed and running on your Mac, you can simply open a + terminal window and connect to a VSC cluster with + :ref:`SSH enabling support for graphics `. + + .. tab-item:: Linux + :sync: lin + + The `X server`_ is available on all popular Linux distributions, and most + often installed by default as well. You just need to use the appropriate + options with the ``ssh`` command to :ref:`connect with support for + graphics `. + +.. _terminal remote desktop: + +Remote Desktop Environment +-------------------------- + +You can launch a full-fledged remote desktop environment running on a remote VSC +cluster with the `VNC`_ system. This solution generates a video stream of the +remote graphical display, encrypts it and sends it over the SSH connection to +your computer for visualization. + +In this case all graphical processing occurs on the remote VSC cluster and your +computer is only used for visualization and input. This can be useful in +scenarios where you need heavy processing of graphics with the GPUs of the +cluster. + +Different options exist that provide a VNC-like solution. The available options +depend on the operating system on your computer and the VSC cluster that you +want to use: + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven/UHasselt + :sync: kuluh + + On the KUL clusters, users can use NX; see the :ref:`NX start guide`. + + .. tab-item:: UAntwerp + :sync: ua + + On the UAntwerp clusters, TurboVNC is supported on all regular login + nodes (without OpenGL support) and on the visualization node of Leibniz + (with OpenGL support through VirtualGL). + + See the page :ref:`Remote visualization UAntwerp` for instructions. + + .. tab-item:: UGent + :sync: ug + + VNC is supported through the :ref:`hortense_web_portal` interface. + + .. tab-item:: VUB + :sync: vub + + On the VUB clusters, TigerVNC is supported on all nodes. See the + documentation on `remote desktop sharing `_ + for instructions. + +Applications supporting SSH +--------------------------- + +Some graphical applications provide their own functionality to run on +remote servers through SSH. This can be used to run the GUI of such +applications locally while the heavy lifting of the computation is done on a +VSC cluster. + +* :ref:`Paraview for remote visualization ` + +* :ref:`Eclipse for remote development ` + +VPN +=== + +Some institutes may have security policies forbidding access to the login nodes +of your institute's cluster from outside of the institute's network (*e.g.* +when you work from home) or from abroad. In such a case, you will need to set up +a :doc:`VPN (Virtual Private Networking) ` connection to your institute's +network (if your institute provides this service) to be able to login to those +VSC clusters. + +.. toctree:: + + vpn + diff --git a/source/compute/terminal/linux_client.rst b/source/compute/terminal/linux_client.rst new file mode 100644 index 000000000..4ebe94000 --- /dev/null +++ b/source/compute/terminal/linux_client.rst @@ -0,0 +1,38 @@ +.. 
_linux_client: + +######################################## +:fab:`linux` Terminal Interface on Linux +######################################## + +If you are using a Linux system on your own computer, then you already have all +the tools to start a terminal interface on the VSC clusters. You just need to +open your favorite terminal and use the ``ssh`` command, which is available by +default on most Linux distributions. + +All VSC clusters run on a :ref:`terminal linux system` as well, so you will +feel at ease once you connect to our clusters. Most commands will work in the +same way as they do on your computer. The biggest difference you might +experience is just that you are a regular user on the VSC cluster without +superuser (*root*) permissions. + +Getting ready to login +====================== + +Before you can log in with SSH to a VSC cluster, you need to generate a pair of +SSH keys and upload them to your VSC account. You can create your keys in Linux +with `OpenSSH`_, please check our documentation on :ref:`generating keys linux`. + +Connecting to the cluster +========================= + +OpenSSH + `OpenSSH`_ is a reputable suite of secure networking utilities based on the + `Secure Shell`_ (SSH) protocol. OpenSSH is open-source software and is readily + available on all popular Linux distributions, and most often installed by + default as well. + + .. toctree:: + :maxdepth: 2 + + openssh_access + diff --git a/source/compute/terminal/macos_client.rst b/source/compute/terminal/macos_client.rst new file mode 100644 index 000000000..85c56b96e --- /dev/null +++ b/source/compute/terminal/macos_client.rst @@ -0,0 +1,41 @@ +.. _macos_client: + +######################################## +:fab:`apple` Terminal Interface on macOS +######################################## + +If you are using a macOS system on your own computer, then you already have all +the tools to start a terminal interface on the VSC clusters. You just need to +launch the Terminal app and use the ``ssh`` command, which is available by default. + +To open a Terminal window in macOS (formerly OS X), go to *Applications* > +*Utilities* > *Terminal* in *Finder*. If you don't have any experience using +the Terminal, we suggest you first read our :ref:`basic linux` guide, which +also applies to macOS as it is based on the same `GNU`_ operating system as Linux. + +Getting ready to login +====================== + +Before you can log in with SSH to a VSC cluster, you need to generate a pair of +SSH keys and upload them to your VSC account. You can create your keys in macOS +with `OpenSSH`_, please check our documentation on :ref:`generating keys macos`. + +Connecting to the cluster +========================= + +OpenSSH + `OpenSSH`_ is a reputable suite of secure networking utilities based on the + `Secure Shell`_ (SSH) protocol. OpenSSH is open-source software and is readily + available on all macOS versions. + +JellyfiSSH + `JellyfiSSH`_ is a bookmark manager specifically built for storing SSH + connections. Sitting in the dock or accessible via menulet, JellyfiSSH + allows you to easily store SSH connections and launch new terminal windows + using customisable saved settings. + +.. toctree:: + :maxdepth: 2 + + openssh_jellyfissh_access + 
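As a quick sketch of what logging in looks like once your SSH key (or, for KU Leuven, your certificate) is set up, you would run something like the following in the Terminal, replacing ``vscXXXXX`` with your own VSC account name and the address with the login node of your cluster::

   $ ssh vscXXXXX@login.hpc.kuleuven.be

The ``-X`` option can be added to enable X11 forwarding for graphical applications, provided an X server such as XQuartz is running.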
diff --git a/source/compute/terminal/mobaxterm_access.rst b/source/compute/terminal/mobaxterm_access.rst new file mode 100644 index 000000000..a91ae40ef --- /dev/null +++ b/source/compute/terminal/mobaxterm_access.rst @@ -0,0 +1,239 @@ +.. _terminal mobaxterm: + +######################## +Terminal using MobaXterm +######################## + +Prerequisites +============= + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + To access KU Leuven clusters, only an approved + :ref:`VSC account` is needed. + + .. tab-item:: UAntwerpen + :sync: ua + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + + Since you will be using MobaXterm, it is probably the easiest to + :ref:`generate your keys with MobaXterm ` as + well. + + .. tab-item:: UGent + :sync: ug + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + + Since you will be using MobaXterm, it is probably the easiest to + :ref:`generate your keys with MobaXterm ` as + well. + + .. tab-item:: VUB + :sync: vub + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + + Since you will be using MobaXterm, it is probably the easiest to + :ref:`generate your keys with MobaXterm ` as + well. + +.. _mobaxterm install: + +Installation +============ + +Go to the `MobaXterm`_ website and download the free version. Make sure to +select the **Portable edition** from the download page. Create a folder +called ``MobaXterm`` in a known location on your computer and decompress the +contents of the downloaded zip file inside it. + +.. _mobaxterm setup: + +Setup a remote session +====================== + +#. Double click the ``MobaXterm_Personal`` executable file inside the + ``MobaXterm`` folder. The MobaXterm main window will appear on your screen. + It should be similar to this one: + + .. _mobaxterm-main-window: + .. figure:: mobaxterm_access/mobaxterm_main_window.png + :alt: mobaxterm main + +#. Click on the `Session` icon in the top left corner. + +#. The 'Session settings' configuration panel will open; click on the SSH icon in the top row + and you should see a window like this: + + .. figure:: mobaxterm_access/mobaxterm_session_settings_ssh.png + :alt: ssh settings window + +#. The next few steps depend on the VSC site you are trying to connect to. + + .. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + In the 'Remote host' field introduce the cluster remote address: + ``login.hpc.kuleuven.be``. + + Tick the 'Specify username' box and introduce your VSC account username. + + Click the 'Advanced SSH settings' tab for additional configurations: + + * Check that the 'SSH-browser type' is set to 'SFTP protocol' + * Make sure that the 'Use private key' option is disabled + + .. figure:: mobaxterm_access/mobaxterm_adv_kul.png + :alt: advanced SSH options for KU Leuven clusters + + With this configuration, it is strongly recommended to set up your + :ref:`SSH agent in MobaXterm ` which is + described below. + + Upon a successful connection attempt you will be prompted to copy/paste + the firewall URL in your browser as part of the MFA login procedure: + + .. figure:: mobaxterm_access/vsc_firewall_certificate_authentication.png + :alt: vsc_firewall_certificate_authentication + + Confirm by clicking 'Yes'. Once the MFA has been completed you will be + connected to the login node. + + .. 
tab-item:: UAntwerpen + :sync: ua + + In the 'Remote host' field introduce the cluster remote address: + ``login.hpc.uantwerpen.be`` + + .. include:: mobaxterm_access_ssh_keys.rst + + .. tab-item:: UGent + :sync: ug + + In the 'Remote host' field introduce the cluster remote address: + ``login.hpc.ugent.be`` + + .. include:: mobaxterm_access_ssh_keys.rst + + .. tab-item:: VUB + :sync: vub + + In the 'Remote host' field introduce the cluster remote address: + ``login.hpc.vub.be`` + + .. include:: mobaxterm_access_ssh_keys.rst + +#. |Optional| You may additionally enable 'X11-Forwarding' and 'Compression' options in the 'Session settings': + + .. _mobaxterm advanced options: + + * *X11-Forwarding*: allows the use of graphical applications over the SSH connection + * *Compression*: is useful in situations with limited network bandwidth + +#. You should connect to the cluster and be greeted by a screen similar to this one: + + .. figure:: mobaxterm_access/mobaxterm_hydra_login.png + :alt: hmem greeting + + On the left sidebar (in the 'Sftp' tab) there is a file browser of your + home directory in the cluster. You will see by default many files whose + names start with a dot ('.') symbol. These are hidden files of the + Linux environment and you should neither delete nor move them. You can hide + the hidden files by clicking on the right most button at the top of the file + browser. + +#. Once you disconnect from the cluster (by typing ``exit`` or closing the + terminal tab) you will find on the left sidebar (in the 'Sessions' tab) + a shortcut to the session you just set up. From now on, when you open + MobaXterm, you can just double click that shortcut and you will start + a remote session on the VSC cluster that you used in previous steps. + + To create a direct shortcut on your desktop (optional), + right click on the saved session name and choose + 'Create a desktop shortcut' (see image below). An icon will appear on your + Desktop that will start MobaXterm and open a session in the corresponding cluster. + + .. figure:: mobaxterm_access/mobaxterm_session_shortcut.png + :alt: session desktop shortcut + + +Now you can create connections to any :ref:`Tier-1` or +:ref:`Tier-2` VSC cluster by repeating these steps and changing +the address of the cluster. You will then have a shortcut on the Sessions tab +of the left sidebar for each of them to connect to. + + +Import PuTTY sessions +===================== + +If you have already configured remote sessions within PuTTY, then MobaXterm +will automatically import them upon installation and they will appear on the +left-side pane. +To edit a session, right-click on the session and then choose 'Edit session'. +Ensure that all settings are correct under the 'SSH' tab and the +'Advanced SSH settings' sub-tab: + +.. _mobaxterm_putty_imported_sessions: +.. figure:: mobaxterm_access/mobaxterm_putty_imported_sessions.png +  :alt: mobaxterm_putty_imported_sessions + +If the session has been properly imported you will see that all the necessary +fields are already filled in. +Click 'OK' to close the 'Edit session' window. + +.. _copying-files-mobaxterm: + +Copying files to/from the cluster +================================= + +Once you've set up the shortcut for connecting to a cluster, as we +noted in `step 6 <#step-sftp-tab>`_ of the previous section, you will see +on the left sidebar (in the 'Sftp' tab) a file browser on the cluster you are +connected to. 
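If you prefer the command line over the graphical file browser, the same transfers can also be done with ``scp``, which is available in MobaXterm's local terminal if the bundled OpenSSH tools are installed. A sketch, assuming your key or certificate is set up and with the account name, address and file names as placeholders to replace with your own::

   $ scp data.txt vscXXXXX@login.hpc.ugent.be:
   $ scp vscXXXXX@login.hpc.ugent.be:results.dat .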
+ +You can simply drag and drop files from your computer to that panel and they +will be copied to the cluster. You can also drag and drop files from the +cluster to your computer. Alternatively, you can use the file tools located at the +top of the file browser. + +Remember to always press the ``Refresh current folder`` button after you +have copied something or created/removed a file or folder on the cluster. + +Setup an SSH agent +================== + +Once you've successfully set up the connection to your cluster, you will notice +that you are prompted for the passphrase at each connection you make to a +cluster. You can avoid the need to re-type it by setting up an SSH agent on MobaXterm. + +Check the documentation in: :ref:`mobaxterm ssh agent` + +.. _troubleshoot_mobaxterm: + +Troubleshooting MobaXterm connection issues +=========================================== + +If you have trouble accessing the infrastructure, the support staff will +likely ask you to provide a log. After you have made a failed attempt to connect, +you can obtain the connection log by: + +#. ctrl-right-clicking in the MobaXterm terminal and selecting 'Event Log'. +#. In the dialog window that appears, click the 'Copy' button to copy the + log messages. They are copied as text and can be pasted in your message + to support. 
rename to source/compute/terminal/mobaxterm_access/mobaxterm_session_settings_ssh.png diff --git a/source/access/access_using_mobaxterm/mobaxterm_session_shortcut.png b/source/compute/terminal/mobaxterm_access/mobaxterm_session_shortcut.png similarity index 100% rename from source/access/access_using_mobaxterm/mobaxterm_session_shortcut.png rename to source/compute/terminal/mobaxterm_access/mobaxterm_session_shortcut.png diff --git a/source/access/access_using_mobaxterm/mobaxterm_ssh_gateway.png b/source/compute/terminal/mobaxterm_access/mobaxterm_ssh_gateway.png similarity index 100% rename from source/access/access_using_mobaxterm/mobaxterm_ssh_gateway.png rename to source/compute/terminal/mobaxterm_access/mobaxterm_ssh_gateway.png diff --git a/source/access/access_using_mobaxterm/vsc_firewall_certificate_authentication.PNG b/source/compute/terminal/mobaxterm_access/vsc_firewall_certificate_authentication.png similarity index 100% rename from source/access/access_using_mobaxterm/vsc_firewall_certificate_authentication.PNG rename to source/compute/terminal/mobaxterm_access/vsc_firewall_certificate_authentication.png diff --git a/source/compute/terminal/mobaxterm_access_ssh_keys.rst b/source/compute/terminal/mobaxterm_access_ssh_keys.rst new file mode 100644 index 000000000..31d688ab9 --- /dev/null +++ b/source/compute/terminal/mobaxterm_access_ssh_keys.rst @@ -0,0 +1,19 @@ +Tick the 'Specify username' box and introduce your VSC account username. + +Click the 'Advanced SSH settings' tab for additional configurations: + +* Check that the 'SSH-browser type' is set to 'SFTP protocol' +* Tick the 'Use private key' box and click on the file icon in that field. A + file browser will open; locate the private SSH key file you + :ref:`created` and which had its public part + :ref:`uploaded to your VSC account`. Please keep in mind + that these settings have to be updated if the location of the private SSH key + ever changes. + +.. figure:: mobaxterm_access/mobaxterm_advanced_ssh.png + :alt: advanced ssh options + +Press the 'OK' button and you should be prompted for your passphrase. +Enter here the passphrase you chose while creating your public/private key pair. +The characters will be hidden and nothing at all will appear as you +type (no circles, no symbols). diff --git a/source/access/nx_start_guide.rst b/source/compute/terminal/nx_start_guide.rst similarity index 66% rename from source/access/nx_start_guide.rst rename to source/compute/terminal/nx_start_guide.rst index 006528087..1da35b717 100644 --- a/source/access/nx_start_guide.rst +++ b/source/compute/terminal/nx_start_guide.rst @@ -1,13 +1,24 @@ .. _NX start guide: -NX start guide +NX Start Guide ============== -|KUL| NoMachine is a remote desktop application which can be used -in connection with the Tier-2 login infrastructure at KU Leuven. +`NoMachine`_ (NX) is a remote desktop application which can be used +launch graphical applications on remote servers. It is currently supported on +the Tier-2 login infrastructure at KU Leuven. -Installing NX NoMachine client ------------------------------- +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: |KULUH| + :columns: 12 4 4 4 + + * Tier-2 :ref:`Genius ` + * Tier-2 :ref:`Superdome ` + * Tier-2 :ref:`wICE ` + +Installing NX client +-------------------- Download the enterprise version of the client from the `NX Client download`_ page. 
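The client is typically distributed as an installer for Windows and macOS and as a regular package for Linux. The following is a minimal sketch for a Debian-based Linux desktop, assuming the download is a ``.deb`` package; the exact file name depends on the version you downloaded from the `NX Client download`_ page.

.. code-block:: bash

   # Hypothetical package name: substitute the file you actually downloaded
   # from the NX Client download page.
   sudo dpkg -i nomachine-enterprise-client_*_amd64.deb

On Windows and macOS, simply run the downloaded installer and follow the wizard.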
@@ -15,12 +26,11 @@ Steps before configuring NoMachine ---------------------------------- For NoMachine connections to the (KU Leuven) HPC infrastructure, you need to use -an SSH agent such as :ref:`Pageant ` for Windows users and the -default `:ref:`agent included with OpenSSH ` for Linux/MacOS users. +an SSH agent such as :ref:`Pageant` for Windows users or the default +:ref:`SSH agent of OpenSSH ` for Linux/MacOS users. -Once your SSH agent is up and running, you need to issue an SSH certificate to be stored -in your agent. -For that, please refer to the instructions given in +Once your SSH agent is up and running, you need to issue an SSH certificate to +be stored in your agent. For that, please refer to the instructions given in :ref:`using SSH clients with SSH agent `. NoMachine NX Client Configuration @@ -31,21 +41,21 @@ NoMachine NX Client Configuration #. Press 'Add' to create a new connection -#. In the 'Addres' pane +#. In the 'Addres' pane: #. choose a name for the connection, e.g. 'Genius' #. change the Protocol to 'SSH' #. choose the hostname ``nx.hpc.kuleuven.be`` for Genius and port ``22`` - .. note:: + .. note:: - This NX login host cannot be used to access the cluster - from the terminal, directly. + This NX login host cannot be used to access the cluster + from the terminal, directly. -#. In the 'Configuration' pane +#. In the 'Configuration' pane: - - choose 'Use key-based authentication with a SSH agent' - - press 'Modify' and select 'Forward authentication' + * choose 'Use key-based authentication with a SSH agent' + * press 'Modify' and select 'Forward authentication' #. Press 'Connect' @@ -90,14 +100,14 @@ How to start using NX on Genius? software that is listed within the Applications menu. Software is divided into several groups: - - Accessories (e.g. Calculator, Character Map, Emacs, Gedit, GVim) - - Graphics (e.g. gThumb Image Viewer, Xpdf PDF Viewer) - - Internet (e.g. Firefox with pdf support, Filezilla) - - 'HPC' (modules related to HPC use: 'Computation' sub-menu with - MATLAB and SAS, 'Visualisation' sub-menu with ParaView, VisIt, - VMD and XCrySDen) - - Programming (e.g. Meld Diff Viewer, Microsoft Visual Studio Code), - - System tools (e.g. File Browser, Terminal) + - Accessories (e.g. Calculator, Character Map, Emacs, Gedit, GVim) + - Graphics (e.g. gThumb Image Viewer, Xpdf PDF Viewer) + - Internet (e.g. Firefox with pdf support, Filezilla) + - 'HPC' (modules related to HPC use: 'Computation' sub-menu with + MATLAB and SAS, 'Visualisation' sub-menu with ParaView, VisIt, + VMD and XCrySDen) + - Programming (e.g. Meld Diff Viewer, Microsoft Visual Studio Code), + - System tools (e.g. File Browser, Terminal) #. Running the applications in the text mode requires having a terminal open. To launch the terminal please go to Applications -> System @@ -111,5 +121,5 @@ How to start using NX on Genius? 
Attached documents ------------------ -- :download:`Slides from the lunchbox session ` +* :download:`Slides from the lunchbox session ` diff --git a/source/access/nx_start_guide/nx_config_guide.pdf b/source/compute/terminal/nx_start_guide/nx_config_guide.pdf similarity index 100% rename from source/access/nx_start_guide/nx_config_guide.pdf rename to source/compute/terminal/nx_start_guide/nx_config_guide.pdf diff --git a/source/access/nx_start_guide/nx_slides.pdf b/source/compute/terminal/nx_start_guide/nx_slides.pdf similarity index 100% rename from source/access/nx_start_guide/nx_slides.pdf rename to source/compute/terminal/nx_start_guide/nx_slides.pdf diff --git a/source/compute/terminal/openssh_access.rst b/source/compute/terminal/openssh_access.rst new file mode 100644 index 000000000..94dd8e7a2 --- /dev/null +++ b/source/compute/terminal/openssh_access.rst @@ -0,0 +1,212 @@ +.. _OpenSSH access: + +############################ +Remote Terminal with OpenSSH +############################ + +Prerequisites +============= + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + To access KU Leuven clusters, only an approved + :ref:`VSC account ` is needed as a prerequisite. + + .. tab-item:: UAntwerpen + :sync: ua + + Before attempting to launch a terminal on UAntwerpen clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + + .. tab-item:: UGent + :sync: ug + + Before attempting to launch a terminal on UGent clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + + .. tab-item:: VUB + :sync: vub + + Before attempting to launch a terminal on VUB clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + +.. _openssh install: + +Installation +============ + +You can check whether the OpenSSH software is installed on your Linux computer +by opening a terminal and typing: + +.. code-block:: bash + + $ ssh -V + OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS 28 May 2019 + +If it is not installed, you need to know the Linux distribution on your +computer and use the corresponding command to install it with its package +manager. The following are the installation commands for some popular Linux +distributions: + +* Distros with APT package manager (Debian, Ubuntu) + + .. code-block:: bash + + $ sudo apt install openssh-client + +* Distros with RPM package manager (Red Hat, Fedora, SuSE) + + .. code-block:: bash + + $ sudo dnf install openssh + +* Distros with Pacman package manager (Arch) + + .. code-block:: bash + + $ sudo pacman -S openssh + +Connecting to VSC clusters +========================== + +Start an SSH connection to the VSC cluster of your choice with the ``ssh`` +command. Once the secure connection is established, a terminal shell will open +ready to accept your commands. + +.. code-block:: bash + + $ ssh -i ~/.ssh/id_rsa_vsc @ + +You have to adapth the following placeholder elements on this command: + +* ```` is your VSC username, which you get after completing the + :ref:`application of a VSC account `. It is of the form + ``vsc00000`` and you can check it on the `VSC account page`_ + +* ```` is the name of the login node of the VSC cluster you + want to connect to. It is of the form ``login.hpc..be`` and you + can find the exact name of the login node of any VSC cluster in + :ref:`tier1 hardware` or :ref:`tier2 hardware`. 
+ +* ``~/.ssh/id_rsa_vsc`` is the path to your private SSH key. This value is the + default used in our guide about :ref:`generating keys linux`, but the private + key file can have any name of your choice. + +.. note:: + + The first time you make a connection to a login node, you will be prompted + to verify the authenticity of the login node, e.g., + + .. code-block:: text + + $ ssh vsc98765@login.hpc.kuleuven.be + The authenticity of host 'login.hpc.kuleuven.be (134.58.8.192)' can't be established. + RSA key fingerprint is b7:66:42:23:5c:d9:43:e8:b8:48:6f:2c:70:de:02:eb. + Are you sure you want to continue connecting (yes/no)? + +Configuration of OpenSSH client +=============================== + +The SSH configuration file ``~/.ssh/config`` can be used to configure your SSH +connections. For instance, it can automatically set your username or the +location of your key, or enable X forwarding. See below for some useful tips to +help you save time when working on a terminal-based session. + +.. toctree:: + + openssh_ssh_config + +Managing keys with SSH agent +============================ + +Once you've successfully connected to a VSC cluster, you will notice that you +are prompted for the passphrase of your SSH key every time you connect +to it. You can avoid the need to re-type it by using an SSH agent. + +Check the documentation in: :ref:`OpenSSH agent` + +.. _openssh x11 forwarding: + +Connecting with support for graphics +==================================== + +On most clusters, we support a number of programs that can display graphics or +provide a graphical interface (GUI). Those programs can be displayed over the +SSH terminal interface on your computer by enabling *X11-Forwarding*. This +option allows graphical applications to use the X Window System protocol to +send their graphical data over the network. + +You can enable *X11-Forwarding* on your SSH connections with the ``-X`` option. + +.. code-block:: bash + + $ ssh -X vsc98765@login.hpc.kuleuven.be + +To test the connection, you can try to start a simple X program on the login +nodes, e.g., ``xeyes``. The latter will open a new window with a pair of eyes. +The pupils of these eyes should follow your mouse pointer around. Close the +program by typing *CTRL+C* and the window should disappear. + +If you get the error 'DISPLAY is not set', you did not correctly enable +the *X11-Forwarding*. + +.. note:: + + There is also the opposite option ``-x``, which disables X traffic. This might + be useful depending on the default options on your system as specified in + ``/etc/ssh/ssh_config``, or ``~/.ssh/config``. + +Proxies and network tunnels to compute nodes +-------------------------------------------- + +Network communications between your local machine and a node in the cluster +other than the login nodes will be blocked by the cluster firewall. In such a +case, you can open a shell directly on the compute node with an SSH connection +that uses the login node as a proxy or, alternatively, you can open a network +tunnel to the compute node, which allows direct communication from software +on your computer to certain ports in the remote system. + +.. toctree:: + + openssh_ssh_proxy + openssh_ssh_tunnel + +.. _troubleshoot_openssh: + +Troubleshooting OpenSSH connection issues +----------------------------------------- + +When contacting support regarding connection issues, it saves time if you +provide the verbose output of the ``ssh`` command.
This can be obtained by +adding the ``-vvv`` option for maximal verbosity. + +If you get a ``Permission denied`` error message, one of the things to verify +is that your private key is in the default location, i.e., the output of +``ls ~/.ssh`` should show a file named ``id_rsa_vsc``. + +The second thing to check is that your +:ref:`private key is linked to your VSC ID ` +in your :ref:`SSH configuration file ` at ``~/.ssh/config``. + +If your private key is not stored in ``~/.ssh/id_rsa_vsc``, you need to adapt +the path to it in your ``~/.ssh/config`` file. + +Alternatively, you can provide the path as an option to the ``ssh`` command when +making the connection: + +.. code-block:: bash + + $ ssh -i @ + +SSH Manual +---------- + +* `ssh manual page`_ + diff --git a/source/compute/terminal/openssh_jellyfissh_access.rst b/source/compute/terminal/openssh_jellyfissh_access.rst new file mode 100644 index 000000000..747fb59de --- /dev/null +++ b/source/compute/terminal/openssh_jellyfissh_access.rst @@ -0,0 +1,101 @@ +######################## +Remote Terminal on macOS +######################## + +Prerequisites +============= + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + To access KU Leuven clusters, only an approved + :ref:`VSC account ` is needed as a prerequisite. + + .. tab-item:: UAntwerpen + :sync: ua + + Before attempting to launch a terminal on UAntwerpen clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + + .. tab-item:: UGent + :sync: ug + + Before attempting to launch a terminal on UGent clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + + .. tab-item:: VUB + :sync: vub + + Before attempting to launch a terminal on VUB clusters, you need + to have :ref:`a private key in OpenSSH format ` + that is already :ref:`uploaded to your VSC account `. + +.. _mac openssh access: + +Using OpenSSH on macOS +====================== + +The Terminal on macOS works in the same way as a Linux terminal. Hence, you can +use the same commands for the :ref:`Linux terminal with OpenSSH ` +to access the VSC clusters and transfer files from your Mac: + +* use :ref:`ssh ` to connect and open a remote terminal on the + cluster + +* use :ref:`scp and sftp ` for file transfers + +.. _mac jellyfissh access: + +Managing SSH with JellyfiSSH +============================ + +|Optional| You can use `JellyfiSSH`_ to store your SSH session settings for the +different VSC clusters and easily connect to them. + +Installation +------------ + +Install `JellyfiSSH`_. The most recent version is available for a small fee +from the Mac App Store, but if you search for *JellyfiSSH 4.5.2*, which is the +version used for the screenshots in this page, you might still find some free +downloads for that version. + +Installation is easy: just drag the program's icon to the Application folder in +the Finder, and you're done. + +Bookmarking SSH connections +--------------------------- + +You can use JellyfiSSH to create a user-friendly bookmark for your ssh +connection settings. To do this, follow these steps: + +#. Start JellyfiSSH and select 'New'. This will open a window where you + can specify the connection settings. + +#. In the 'Host or IP' field, type in . In the 'Login + name' field, type in your . + In the screenshot below we have filled in the fields for a connection + to the Genius cluster at KU Leuven as user vsc98765. + + .. 
figure:: openssh_jellyfissh_access/text_mode_access_using_openssh_or_jellyfissh_01.png + +#. You might also want to change the Terminal window settings, which can + be done by clicking on the icon in the lower left corner of the + JellyfiSSH window. + +#. When done, provide a name for the bookmark in the 'Bookmark Title' + field and press 'Add' to create the bookmark. + +Connecting to SSH connections +----------------------------- + +To make a connection, select the bookmark in the 'Bookmark' field and +click on 'Connect'. Optionally, you can make the bookmark the default +by selecting it as the 'Startup Bookmark' in the JellyfiSSH > +Preferences menu entry. + diff --git a/source/access/text_mode_access_using_openssh_or_jellyfissh/text_mode_access_using_openssh_or_jellyfissh_01.png b/source/compute/terminal/openssh_jellyfissh_access/text_mode_access_using_openssh_or_jellyfissh_01.png similarity index 100% rename from source/access/text_mode_access_using_openssh_or_jellyfissh/text_mode_access_using_openssh_or_jellyfissh_01.png rename to source/compute/terminal/openssh_jellyfissh_access/text_mode_access_using_openssh_or_jellyfissh_01.png diff --git a/source/access/ssh_config.rst b/source/compute/terminal/openssh_ssh_config.rst similarity index 63% rename from source/access/ssh_config.rst rename to source/compute/terminal/openssh_ssh_config.rst index b1c742b7a..a4b63b1b7 100644 --- a/source/access/ssh_config.rst +++ b/source/compute/terminal/openssh_ssh_config.rst @@ -1,48 +1,34 @@ .. _ssh_config: -SSH config -========== +OpenSSH Configuration +===================== -The SSH configuration file resides in the ``.ssh`` directory in your home -directory (at least when using Linux or macOS). It is simply called -``config``. It is not created by default, so you would have to create the -initial version. +The SSH configuration file is located in the ``.ssh`` folder in your home +directory (*e.g.* on Linux or macOS) and it is simply called ``config``. +This ``.ssh/config`` file is not created by default, so you will probably have +to create the initial version yourself. .. warning:: - Make sure only the owner has read and write permissions, - neither group nor world should be able to read the file, i.e., - :: - - $ chmod 700 ~/.ssh/config + Make sure only the owner has read and write permissions, neither group nor + others should be able to read this configuration file: + .. code-block:: bash -.. _linking key with vsc-id linux: + $ chmod 600 ~/.ssh/config -Linking your private key with your VSC-id ------------------------------------------ +Basic configuration +------------------- -To avoid having to specify your VSC private key every time you login, we highly -recommend linking your key to your VSC-id. +The main usage of the OpenSSH configuration is to automatically set options for +the ``ssh`` connections based on the hostname of the server. Avoiding having to +type the same options over and over again. -Assuming your private key is ``~/.ssh/id_rsa_vsc``, add the following -lines to the end of your ``~/.ssh/config``: - -:: - - Match User vscXXXXX - IdentityFile ~/.ssh/id_rsa_vsc +The following is an example that simplifies the connection to a KU Leuven +cluster to just a ``ssh hpc`` command. The full hostname of the login node and +the VSC ID of the user will be automatically filled in by OpenSSH. -Replace vscXXXXX with your VSC-id. - - -Simple usage ------------- - -To simplify login in to a host, e.g., ``login.hpc.kuleuven.be`` as user -``vsc50005``, you can add the following: - -:: +.. 
code-block:: text Host * ServerAliveInterval 60 @@ -68,15 +54,36 @@ flags respectively. Now you can simply log in to ``login.hpc.kuleuven.be`` using the ``hpc`` alias: -:: +.. code-block:: bash $ ssh hpc -How to link your key with a host? ---------------------------------- +.. _ssh config link key vsc: + +Link private key with VSC ID +---------------------------- + +You can avoid having to specify your VSC private key on your ``ssh`` command +every time with the option ``-i ~/.ssh/id_rsa_vsc`` by configuring OpenSSH to +automatically link link your private key to your VSC ID. Hence, whenever you +connect to any VSC cluster (or other server) with your VSC ID, the correct SSH +key will be used. + +Assuming your private key is ``~/.ssh/id_rsa_vsc``, add the following +lines to the end of your ``~/.ssh/config``: + +.. code-block:: text + + Match User vscXXXXX + IdentityFile ~/.ssh/id_rsa_vsc + +Replace ``vscXXXXX`` with your VSC ID. + +Link key with a host +-------------------- -As alternative to linking your key with your VSC-id, you can also link your key +As alternative to linking your key with your VSC ID, you can also link your key with a specific host. Specifying identity files allows you to have distinct keys for different hosts, e.g., you can use one key pair to connect to VSC infrastructure, and a different one for your departmental server. @@ -84,7 +91,7 @@ infrastructure, and a different one for your departmental server. Assuming your private key is ``~/.ssh/id_rsa_vsc``, then you can use it to connect by specifying the ``IdentityFile`` attribute, i.e., -:: +.. code-block:: text Host hpc HostName login.hpc.kuleuven.be @@ -94,13 +101,13 @@ use it to connect by specifying the ``IdentityFile`` attribute, i.e., IdentityFile ~/.ssh/id_rsa_vsc -How to use a proxy host? ------------------------- +Using a proxy host +------------------ To use a host as a proxy, but log in through it on another node, the following entry can be added: -:: +.. code-block:: text Host leibniz Hostname login.leibniz.antwerpen.vsc @@ -120,13 +127,13 @@ you through to the leibniz login node. be used to specify the proxy jump host. -How to set up a tunnel? ------------------------ +Setting up a tunnel +------------------- If you require a tunnel to a remote host on a regular basis, you can define a connection in the SSH configuration file, e.g., -:: +.. code-block:: text Host hpc_tunnel HostName login.hpc.kuleuven.be @@ -146,19 +153,19 @@ accessed from your computer on that same port number. The tunnel can now be established as follows: -:: +.. code-block:: bash $ ssh -N hpc_tunnel -How to create a modular configuration file? -------------------------------------------- +Modular configuration file +-------------------------- If you access many hosts, your ``.ssh/config`` file can grow very long. In that case, it might be convenient to group hosts into distinct files, and include those into your main ``.ssh/config`` file, e.g., -:: +.. 
code-block:: text Include ~/.ssh/config_vsc @@ -166,6 +173,6 @@ include those into your main ``.ssh/config`` file, e.g., Links ----- -- `ssh_config manual page`_ -- `ssh manual page`_ +* `ssh_config manual page`_ +* `ssh manual page`_ diff --git a/source/access/setting_up_a_ssh_proxy.rst b/source/compute/terminal/openssh_ssh_proxy.rst similarity index 97% rename from source/access/setting_up_a_ssh_proxy.rst rename to source/compute/terminal/openssh_ssh_proxy.rst index bed09cac4..999f24041 100644 --- a/source/access/setting_up_a_ssh_proxy.rst +++ b/source/compute/terminal/openssh_ssh_proxy.rst @@ -1,7 +1,7 @@ .. _ssh_proxy: -Setting up an SSH proxy -======================= +SSH proxy with OpenSSH +====================== .. warning:: @@ -76,8 +76,7 @@ In your ``$HOME/.ssh/config`` file, add the following lines: User vscXXXXX where you replace *vsc.login.node* with the name of the login node of -your home tier-2 cluster (see also the :ref:`overview of available -hardware `). +your VSC cluster (see :ref:`tier1 hardware` or :ref:`tier2 hardware`). Replace ``vscXXXXX`` your own VSC account name (e.g., ``vsc40000``). diff --git a/source/access/creating_a_ssh_tunnel_using_openssh.rst b/source/compute/terminal/openssh_ssh_tunnel.rst similarity index 97% rename from source/access/creating_a_ssh_tunnel_using_openssh.rst rename to source/compute/terminal/openssh_ssh_tunnel.rst index 6186962ab..0a2d0d9e2 100644 --- a/source/access/creating_a_ssh_tunnel_using_openssh.rst +++ b/source/compute/terminal/openssh_ssh_tunnel.rst @@ -1,7 +1,7 @@ .. _tunnel OpenSSH: -Creating a SSH tunnel using OpenSSH -=================================== +SSH tunnel using OpenSSH +======================== Prerequisits ------------ diff --git a/source/compute/terminal/putty_access.rst b/source/compute/terminal/putty_access.rst new file mode 100644 index 000000000..48ef61de3 --- /dev/null +++ b/source/compute/terminal/putty_access.rst @@ -0,0 +1,246 @@ +.. _terminal putty: + +#################### +Terminal using PuTTY +#################### + +Prerequisite +============ + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + To access KU Leuven clusters, only an approved + :ref:`VSC account` is needed. + + .. tab-item:: UAntwerpen + :sync: ua + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + + .. tab-item:: UGent + :sync: ug + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + + .. tab-item:: VUB + :sync: vub + + To access clusters hosted at these sites, you need a + :ref:`public/private key pair ` of which the public key + needs to be :ref:`uploaded via the VSC account page `. + +.. _putty install: + +Installation +============ + +`PuTTY`_ is a Windows program that has to be installed on your computer. Open +`PuTTY's download page `__ +and download the *Package file* that corresponds to your system (usually 64-bit +x86). Once the download completes, execute the downloaded ``.msi`` installer. +The installer will guide you through the rest of the installation. + +Connecting to VSC clusters +========================== + +When you start the PuTTY executable 'putty.exe', a configuration screen +pops up. Follow the steps below to setup the connection to (one of) the +VSC clusters. + +.. 
warning:: + + In the screenshots, we show the setup for user ``vsc98765`` to the + genius cluster at KU Leuven via the login node ``login.hpc.kuleuven.be``. + Please keep in mind to: + + #. replace ``vsc98765`` with your own VSC user name + + #. replace ``login.hpc.kuleuven.be`` with the name of the login node of the + VSC cluster you want to access, which can be found in the cluster + description on :ref:`tier1 hardware` or :ref:`tier2 hardware` + +* Within the category 'Session', in the field 'Host Name', type in a valid + hostname of the :ref:`Tier-1` or + :ref:`Tier-2` VSC cluster you want to connect to. + + .. figure:: putty_access/text_mode_access_using_putty_01.png + +* In the category *Connection* > *Data*, in the field 'Auto-login + username', put in , which is your VSC username that you + have received by mail after your request was approved. + + .. figure:: putty_access/text_mode_access_using_putty_02.png + +* Based on the destination VSC site that you want to login to, choose one of the + tabs below and proceed. + + .. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven + :sync: kuluh + + Select the *SSH* > *Auth* > *Credentials* tab, and remove any private + key from the box 'Private key file for authentication'. + + .. figure:: putty_access/putty_priv_key.png + :alt: putty private key + + In the category *Connection* > *SSH* > *Auth*, make sure that the + option 'Attempt authentication using Pageant' is selected. + It is also recommended to enable agent forwarding by ticking the + 'Allow agent forwarding' checkbox. + + .. figure:: putty_access/text_mode_access_using_putty_03.png + + .. tab-item:: UAntwerpen + :sync: ua + + In the category *Connection* > *SSH* > *Auth* > *Credentials*, click on + 'Browse', and select the private key that you generated and saved above. + + .. figure:: putty_access/text_mode_access_using_putty_04.png + + Here, the private key was previously saved in the folder + ``C:\Users\Me\Keys``. + In older versions of Windows, you would have to use + ``C:\Documents and Settings\Me\Keys``. + + .. tab-item:: UGent + :sync: ug + + In the category *Connection* > *SSH* > *Auth* > *Credentials*, click on + 'Browse', and select the private key that you generated and saved above. + + .. figure:: putty_access/text_mode_access_using_putty_04.png + + Here, the private key was previously saved in the folder + ``C:\Users\Me\Keys``. + In older versions of Windows, you would have to use + ``C:\Documents and Settings\Me\Keys``. + + .. tab-item:: VUB + :sync: vub + + In the category *Connection* > *SSH* > *Auth* > *Credentials*, click on + 'Browse', and select the private key that you generated and saved above. + + .. figure:: putty_access/text_mode_access_using_putty_04.png + + Here, the private key was previously saved in the folder + ``C:\Users\Me\Keys``. + In older versions of Windows, you would have to use + ``C:\Documents and Settings\Me\Keys``. + +* In the category Connection > SSH > X11, click the 'Enable X11 Forwarding' + checkbox: + + .. _putty x11 forwarding: + + .. figure:: putty_access/text_mode_access_using_putty_05.png + +* Now go back to the 'Session' tab, and fill in a name in the 'Saved Sessions' + field and press 'Save' to permanently store the session information. + +* To start a session, load it from Sessions > Saved Sessions, and click 'Open'. + + .. _putty_load_saved_session: + .. figure:: putty_access/putty_load_saved_session.png + :alt: putty_load_saved_session + + .. tab-set:: + :sync-group: vsc-sites + + .. 
tab-item:: KU Leuven + :sync: kuluh + + You will be then prompted to copy/paste the firewall link into your + browser and complete the :ref:`Multi Factor Authentication (MFA) ` + procedure. With PuTTY, users only need to highlight the link with their + mouse in order to copy it to the clipboard. + + .. figure:: putty_access/putty_mfa.png + :alt: PuTTY MFA URL + + Then, with the right-click from your mouse or CTRL-V, you can paste the MFA link + into your browser to proceed with the authentication to ``login.hpc.kuleuven.be``. + + .. tab-item:: UAntwerpen + :sync: ua + + Now pressing 'Open' should ask for your passphrase, and connect + you to ``login.hpc.uantwerpen.be``. + + .. tab-item:: UGent + :sync: ug + + Now pressing 'Open' should ask for your passphrase, and connect + you to ``login.hpc.ugent.be``. + + .. tab-item:: VUB + :sync: vub + + Now pressing 'Open' should ask for your passphrase, and connect + you to ``login.hpc.vub.be``. + +The first time you make a connection to the login node, a Security Alert +will appear and you will be asked to verify the authenticity of the +login node. + +.. figure:: putty_access/text_mode_access_using_putty_06.png + +For future sessions, just select your saved session from the list and +press 'Open'. + +Managing SSH keys with Pageant +============================== + +At this point, we highly recommend setting up an :ref:`ssh agent`. A widely +used SSH agent is :ref:`Pageant` which is installed automatically with PuTTY. + +Pageant can be used to manage SSH keys and certificates for multiple clients, +such as PuTTY, :ref:`WinSCP`, :ref:`FileZilla`, +as well as the :ref:`NX client for Windows` so that you don't +need to enter the passphrase all the time. + +:ref:`pageant` + +Proxies and network tunnels to compute nodes +============================================ + +Network communications between your local machine and some node in the cluster +other than the login nodes will be blocked by the cluster firewall. In such a +case, you can directly open a shell in the compute node with an SSH connection +using the login node as a proxy or, alternatively, you can also open a network +tunnel to the compute node which will allow direct communication from software +in your computer to certain ports in the remote system. This is also useful to +run client software on your Windows machine, e.g., ParaView or Jupyter +notebooks that run on a compute node. + +.. toctree:: + + putty_ssh_proxy + putty_ssh_tunnel + +.. _troubleshoot_putty: + +Troubleshooting PuTTY connection issues +======================================= + +If you have trouble accessing the infrastructure, the support staff will +likely ask you to provide a log. After you have made a failed attempt to connect, +you can obtain the connection log by + +#. right-clicking in PuTTY's title bar and selecting **Event Log**. + +#. In the dialog window that appears, click the **Copy** button to copy the + log messages. They are copied as text and can be pasted in your message + to support. 
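If you prefer to capture a log from the command line instead, the ``plink`` tool that ships with the PuTTY suite accepts a ``-v`` flag for verbose output. The sketch below assumes ``plink.exe`` is in your ``PATH`` and reuses the example account and login node from the screenshots above; the printed messages can be pasted into your support request.

.. code-block:: text

   plink -v vsc98765@login.hpc.kuleuven.be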
diff --git a/source/access/text_mode_access_using_putty/putty_agent_fwd.PNG b/source/compute/terminal/putty_access/putty_agent_fwd.png old mode 100755 new mode 100644 similarity index 100% rename from source/access/text_mode_access_using_putty/putty_agent_fwd.PNG rename to source/compute/terminal/putty_access/putty_agent_fwd.png diff --git a/source/access/text_mode_access_using_putty/putty_load_saved_session.PNG b/source/compute/terminal/putty_access/putty_load_saved_session.png similarity index 100% rename from source/access/text_mode_access_using_putty/putty_load_saved_session.PNG rename to source/compute/terminal/putty_access/putty_load_saved_session.png diff --git a/source/access/text_mode_access_using_putty/putty_mfa.PNG b/source/compute/terminal/putty_access/putty_mfa.png similarity index 100% rename from source/access/text_mode_access_using_putty/putty_mfa.PNG rename to source/compute/terminal/putty_access/putty_mfa.png diff --git a/source/access/text_mode_access_using_putty/putty_priv_key.PNG b/source/compute/terminal/putty_access/putty_priv_key.png old mode 100755 new mode 100644 similarity index 100% rename from source/access/text_mode_access_using_putty/putty_priv_key.PNG rename to source/compute/terminal/putty_access/putty_priv_key.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_01.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_01.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_01.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_01.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_02.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_02.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_02.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_02.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_03.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_03.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_03.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_03.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_04.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_04.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_04.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_04.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_05.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_05.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_05.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_05.png diff --git a/source/access/text_mode_access_using_putty/text_mode_access_using_putty_06.png b/source/compute/terminal/putty_access/text_mode_access_using_putty_06.png similarity index 100% rename from source/access/text_mode_access_using_putty/text_mode_access_using_putty_06.png rename to source/compute/terminal/putty_access/text_mode_access_using_putty_06.png diff --git a/source/compute/terminal/putty_ssh_proxy.rst b/source/compute/terminal/putty_ssh_proxy.rst new 
file mode 100644 index 000000000..2145e6fe5 --- /dev/null +++ b/source/compute/terminal/putty_ssh_proxy.rst @@ -0,0 +1,145 @@ +.. _putty ssh proxy: + +SSH proxy with PuTTY +==================== + +.. warning:: + + If you simply want to configure PuTTY to connect to the login node + of a VSC cluster, this is not the page you are looking for. + Please check out :ref:`terminal putty`. + +Rationale +--------- + +The SSH protocol provides a safe way of connecting to a computer, encrypting +traffic and avoiding passing passwords across public networks where your +traffic might be intercepted by someone else. Yet making a server accessible +from all over the world makes that server very vulnerable. Therefore +servers are often put behind a *firewall*, another computer or device +that filters traffic coming from the internet. + +All VSC clusters are behind a firewall, which is configured by default to block +all traffic from abroad. That is why if you are accessing the VSC clusters from +abroad, it is necessary that you first authorize your own connection on the +`VSC Firewall`_. + +Another example are the compute nodes of the HPC cluster. You can (usually) +directly connect to the login nodes of the cluster, but compute nodes are not +reachable from outside. In that case you can only open a connection to a +compute node from your computer by using the login node as *proxy*. + +.. note:: + + Connections to compute nodes are restricted to users with active jobs on + those nodes. + +This all sounds quite complicated, but once things are configured +properly it is really simple to log on to the host. + +Setting up a proxy in PuTTY +--------------------------- + +.. warning:: + + In the screenshots below we show the proxy setup for user ``vscXXXXX`` to + the ``login.muk.gent.vsc`` login node for the muk cluster at UGent + via the login node ``vsc.login.node``. + You will have to + + #. replace ``vscXXXXX`` with your own VSC account + + #. replace ``login.muk.gent.vsc`` by the node that is behind a + a firewall that you want to access + + #. replace ``vsc.login.node`` with the name of the login node of the VSC + cluster you want to use as a proxy, which can be found in the cluster + description on :ref:`tier1 hardware` or :ref:`tier2 hardware` + +Setting up the connection in PuTTY is a bit more complicated than for a +simple direct connection to a login node. + +#. First you need to start up pageant and load your private key into it. + See the instructions on using :ref:`Pageant`. + +#. In PuTTY, go first to the "Proxy" category (under "Connection"). In the + Proxy tab sheet, you need to fill in the following information: + + .. figure:: putty_ssh_proxy/putty_proxy_section.png + + #. Select the proxy type: "Local" + #. Give the name of the *proxy server*. This is your usual VSC login node, + with a hostname of the form ``login.hpc..be``, and not the + computer on which you want to connect to. + #. Make sure that the "Port" number is 22. + #. Enter your VSC-id in the "Username" field. + #. In the "Telnet command, or local proxy command", enter the string :: + + plink -agent -l %user %proxyhost -nc %host:%port + + .. note:: + + ``plink`` (PuTTY Link) is a Windows program and comes with the full + PuTTY suite of applications. It is the command line version of PuTTY. 
+ In case you've only installed the executables putty.exe and + pageant.exe, you'll need to download plink.exe from the `PuTTY`_ + web site as well. We strongly advise simply installing the whole PuTTY suite of + applications using the installer provided on the `PuTTY download + site`_. + +#. Now go to the "Data" category in PuTTY, again under "Connection". + + .. figure:: putty_ssh_proxy/putty_data_section.png + + #. Fill in your VSC-id in the "Auto-login username" field. + #. Leave the other values untouched (they will likely match the values + shown in the screenshot) + +#. Now go to the "Session" category + + .. figure:: putty_ssh_proxy/putty_session_section.png + + #. Set the field "Host Name (or IP address)" to the computer + you want to log on to. If you are setting up a proxy + connection to access a computer on the VSC network, + you will have to use its name on the internal VSC network. + #. Make sure that the "Port" number is 22. + #. Finally give the configuration a name in the field "Saved + Sessions" and press "Save". Then you won't have to enter + all the above information again. + #. And now you're all set up to go. Press the "Open" button + on the "Session" tab to open a terminal window. + +For advanced users +------------------ + +If you have an X server on your Windows PC, you can also use X11 +forwarding and run X11 applications on the host. All you need to do is +click the box next to "Enable X11 forwarding" in the category +"Connection" -> "SSH" -> "X11". + +What happens behind the scenes: + +By specifying "local" as the proxy type, you tell PuTTY not to use +one of its own built-in ways of setting up a proxy, but to use the +command that you specify in the "Telnet command" of the "Proxy" +category. + +The following command contains templated values that will be replaced by real +values depending on your settings :: + + plink -agent -l %user %proxyhost -nc %host:%port + +* ``%user`` will be replaced by the userid you specify in the "Proxy" category + screen +* ``%proxyhost`` will be replaced by the host you specify in the "Proxy" + category screen (**vsc.login.node** in the example) +* ``%host`` by the host you specified in the "Session" category + (login.muk.gent.vsc in the example) and ``%port`` by the number you specified in + the "Port" field of that screen (and this will typically be 22). + +The ``plink`` command will then set up a connection to ``%proxyhost`` using +the user ID ``%user``. The ``-agent`` option tells plink to use pageant for +the credentials. And the ``-nc`` option tells plink to tell the SSH +server on ``%proxyhost`` to further connect to ``%host:%port``.
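For comparison, if an OpenSSH client is also available to you (for example in Windows PowerShell, or on Linux and macOS), the same two-hop connection can be made with OpenSSH's *ProxyJump* feature. This is a minimal sketch using the example hosts above, assuming your key or certificate is loaded in an SSH agent:

.. code-block:: bash

   # Jump through the login node (the proxy) and open a shell on the internal host;
   # -J is the command-line form of the ProxyJump configuration option.
   ssh -J vscXXXXX@vsc.login.node vscXXXXX@login.muk.gent.vsc

The same setup can be made permanent with a ``ProxyJump`` entry in your ``~/.ssh/config`` file.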
+ diff --git a/source/access/setting_up_a_ssh_proxy_with_putty/putty_data_section.png b/source/compute/terminal/putty_ssh_proxy/putty_data_section.png similarity index 100% rename from source/access/setting_up_a_ssh_proxy_with_putty/putty_data_section.png rename to source/compute/terminal/putty_ssh_proxy/putty_data_section.png diff --git a/source/access/setting_up_a_ssh_proxy_with_putty/putty_proxy_section.png b/source/compute/terminal/putty_ssh_proxy/putty_proxy_section.png similarity index 100% rename from source/access/setting_up_a_ssh_proxy_with_putty/putty_proxy_section.png rename to source/compute/terminal/putty_ssh_proxy/putty_proxy_section.png diff --git a/source/access/setting_up_a_ssh_proxy_with_putty/putty_session_section.png b/source/compute/terminal/putty_ssh_proxy/putty_session_section.png similarity index 100% rename from source/access/setting_up_a_ssh_proxy_with_putty/putty_session_section.png rename to source/compute/terminal/putty_ssh_proxy/putty_session_section.png diff --git a/source/compute/terminal/putty_ssh_tunnel.rst b/source/compute/terminal/putty_ssh_tunnel.rst new file mode 100644 index 000000000..a758d038a --- /dev/null +++ b/source/compute/terminal/putty_ssh_tunnel.rst @@ -0,0 +1,69 @@ +.. _putty ssh tunnel: + +SSH tunnel with PuTTY +===================== + +Prerequisites +------------- + +:ref:`PuTTY ` must be installed on your computer, and you +should be able to connect via SSH to your VSC cluster of choice. + +Background +---------- + +Because of one or more firewalls between your desktop and the HPC +clusters, it is generally impossible to communicate directly with a +process on the cluster from your desktop except when the network +managers have given you explicit permission (which for security reasons +is not often done). One way to work around this limitation is SSH +tunneling. + +There are several cases where this is useful: + +* Running X applications on the cluster: The X program cannot directly + communicate with the X server on your local system. In this case, the + tunneling is easy to set up as PuTTY will do it for you if you select + the right options on the X11 settings page as explained on the + :ref:`page about text-mode access using PuTTY `. + +* Running a server application on the cluster that a client on the + desktop connects to. One example of this scenario is :ref:`ParaView in + remote visualization mode `, + with the interactive client on the desktop and the data processing + and image rendering on the cluster. How to set up the tunnel for that + scenario is also :ref:`explained on that page `. + +* Running clients on the cluster and a server on your desktop. In this + case, the source port is a port on the cluster and the destination + port is on the desktop. + +Procedure: Tunnel from a local client to a server on the cluster +------------------------------------------------------------------ + +#. Log in on the login node of your VSC cluster as usual + +#. Start a job on the compute node running the server, take note of the name of + the compute node (*e.g.* ``r1i3n5``), as well as the port the server is + listening on (*e.g.* 44444) + +#. Open PuTTY on your computer to set up the tunnel + +#. Right-click in PuTTY's title bar, and select "Change Settings..." + +#. In the "Category" pane, expand "Connection" -> "SSH", and select + 'Tunnels' as shown below: + + .. figure:: putty_ssh_tunnel/putty_tunnel_config.png + +#. In the "Source port" field, enter the local port to use (*e.g.* + 11111) + +#. 
In the "Destination" field, enter ``:`` (*e.g.* + ``r1i3n5:44444`` as in the example above) + +#. Click the "Add" button +#. Click the "Apply" button + +The tunnel is now ready to use. + diff --git a/source/access/creating_a_ssh_tunnel_using_putty/putty_tunnel_config.png b/source/compute/terminal/putty_ssh_tunnel/putty_tunnel_config.png similarity index 100% rename from source/access/creating_a_ssh_tunnel_using_putty/putty_tunnel_config.png rename to source/compute/terminal/putty_ssh_tunnel/putty_tunnel_config.png diff --git a/source/compute/terminal/recommend_web_portal.rst b/source/compute/terminal/recommend_web_portal.rst new file mode 100644 index 000000000..1ff64714f --- /dev/null +++ b/source/compute/terminal/recommend_web_portal.rst @@ -0,0 +1,6 @@ +.. important:: + + |Recommended| The :ref:`compute portal` provides a much better solution to + run graphical applications on VSC clusters. Performance is faster and it is + also easier to use. Consider running your graphical apps on it if available. + diff --git a/source/jobs/how_to_get_started_with_shell_scripts.rst b/source/compute/terminal/shell_scripts.rst similarity index 90% rename from source/jobs/how_to_get_started_with_shell_scripts.rst rename to source/compute/terminal/shell_scripts.rst index 39b6b809c..4c4d4b8f3 100644 --- a/source/jobs/how_to_get_started_with_shell_scripts.rst +++ b/source/compute/terminal/shell_scripts.rst @@ -1,10 +1,7 @@ -How to get started with shell scripts -===================================== +Introduction to Linux Shell Scripts +=================================== -Shell scripts -------------- - -Scripts are basically uncompiled pieces of code: they are just text +Scripts are basically pieces of non-compiled plain code: they are just text files. Since they don't contain machine code, they are executed by what is called a "parser" or an "interpreter". This is another program that understands the command in the script, and converts them to machine @@ -22,7 +19,7 @@ on one line. A very simple example of a script may be: :: - echo \"Hello! This is my hostname:\" + echo "Hello! This is my hostname:" hostname You can type both lines at your shell prompt, and the result will be the @@ -30,10 +27,10 @@ following: :: - $ echo \"Hello! This is my hostname:\" + $ echo "Hello! This is my hostname:" Hello! This is my hostname: $ hostname - login1 + login.hpc.cluster Suppose we want to call this script ``myhostname``. You open a new file for editing, and name it ``myhostname``: @@ -42,7 +39,7 @@ for editing, and name it ``myhostname``: $ nano myhostname -You get a \\"New File\", where you can type the content of this new +You get a "New File", where you can type the content of this new file. Help is available by pressing the 'Ctrl+G' key combination. You may want to familiarize you with the other options at some point; now we will just type the content of the file, save it and exit the editor. @@ -51,7 +48,7 @@ You can type the content of the script: :: - echo \"Hello! This is my hostname:\" + echo "Hello! This is my hostname:" hostname You save the file and exit the editor by pressing the 'Ctrl+x' key diff --git a/source/access/vpn.rst b/source/compute/terminal/vpn.rst similarity index 63% rename from source/access/vpn.rst rename to source/compute/terminal/vpn.rst index 9c87bda20..eda1f1d1a 100644 --- a/source/access/vpn.rst +++ b/source/compute/terminal/vpn.rst @@ -17,27 +17,32 @@ and they are your first contact for help. However, for your convenience, we present some pointers to that information: .. 
tab-set:: + :sync-group: vsc-sites - .. tab-item:: KU Leuven + .. tab-item:: KU Leuven/UHasselt + :sync: kuluh - Information `in Dutch `__ - and `in English `__. - - Information on contacting the service desk for assistance is also available - `in Dutch `__ and - `in English `__. + * KU Leuven - .. tab-item:: UGent + Information `in Dutch `__ + and `in English `__. + + Information on contacting the service desk for assistance is also available + `in Dutch `__ and + `in English `__. - Information `in Dutch `__ and - `in English `__. - - Contact information for the help desk is also available - `in Dutch `__ and - `in English `__ - (with links at the bottom of the VPN pages). + * UHasselt + + The pre-configured VPN software can be downloaded from + `software.uhasselt.be `__ + (intranet, only staff members). + + Contact helpdesk@uhasselt.be if you have problems. There is also some + information about this on the university library page + `Accessibility from distance `__ .. tab-item:: UAntwerpen + :sync: ua Information `in Dutch `__ and `in English `__ (staff) and @@ -51,19 +56,28 @@ present some pointers to that information: `in English `__ (staff) and `in Dutch `__ (students). + .. tab-item:: UGent + :sync: ugent + + Information `in Dutch `__ and + `in English `__. + + Contact information for the help desk is also available + `in Dutch `__ and + `in English `__ + (with links at the bottom of the VPN pages). + .. tab-item:: VUB + :sync: vub + + The `VPN of VUB `__ is accessible + upon request for VUB and non-VUB users. Access is subject of approval at + the discretion of VUB SDC team. - The VUB offers no central VPN system at this time. There is a VPN - solution, `Pulse Secure VPN `__, which - requires special permission. + * Users from VUB can can request their VPN access through + `VUB ServiceNow portal `__ - .. tab-item:: UHasselt + * Non-VUB users must apply for an external/partner VPN account - The pre-configured VPN software can be - `downloaded `__ - (intranet, only staff members), contact the - `UHasselt help desk `__ (mail link) if you - have problems. There is also some information about this on the page - `"Accessibility from distance" `__ - of the University library. + Please contact hpc@vub.be for more information. diff --git a/source/compute/terminal/windows_client.rst b/source/compute/terminal/windows_client.rst new file mode 100644 index 000000000..b9b59aa1b --- /dev/null +++ b/source/compute/terminal/windows_client.rst @@ -0,0 +1,60 @@ +.. _windows_client: + +############################################ +:fab:`windows` Terminal Interface on Windows +############################################ + +Getting ready to login +====================== + +Before you can log in with SSH to a VSC cluster, you need to generate a pair of +SSH keys and upload them to your VSC account. There are multiple ways to create +your keys in Windows, please check our documentation on +:ref:`generating keys windows`. + +Connecting to the cluster +========================= + +There are multiple solutions on Windows that provide a `Secure Shell`_ (SSH) +client to connect to remote machines. You need to have such a tool to connect +to the login nodes of our HPC clusters. The following are the main options +supported by VSC. + +PuTTY + PuTTY is a simple-to-use and freely available GUI SSH client for Windows that + is :ref:`easy to set up `. + + .. 
toctree:: + :maxdepth: 2 + + putty_access + +MobaXterm + MobaXterm is a free and easy to use SSH client for Windows that has + text-mode, a graphical file browser, an X server, an SSH agent, and more, + all in one. Installation is very simple when using its *Portable edition*. + + .. toctree:: + :maxdepth: 2 + + mobaxterm_access + +Windows PowerShell + Recent versions of Windows come with OpenSSH installed. This means that you + can use it from `PowerShell`_ or the Windows Command Prompt as you would in + the terminal of a Linux system. All information about SSH and data transfer + on the :ref:`Linux client ` pages apply to OpenSSH on + Windows in the same way. + +WSL2 + The `Windows Subsystem for Linux`_ (WSL2) can be an alternative if you are + using Windows 10 build 1607 or later. This solution allows to install a + Linux distribution on your Windows computer and use SSH from within it. + Hence, you can refer to all our documentation about SSH and data transfer + found in the :ref:`Linux client ` section. + + .. toctree:: + :maxdepth: 2 + + wsl + diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs.rst b/source/compute/terminal/windows_xming.rst similarity index 68% rename from source/access/using_the_xming_x_server_to_display_graphical_programs.rst rename to source/compute/terminal/windows_xming.rst index 424726296..58c001a71 100644 --- a/source/access/using_the_xming_x_server_to_display_graphical_programs.rst +++ b/source/compute/terminal/windows_xming.rst @@ -1,7 +1,7 @@ .. _Xming: -Using the Xming X server to display graphical programs -====================================================== +Xming X server +============== To display graphical applications from a Linux computer (such as the VSC clusters) on your Windows desktop, you need to install an X Window @@ -19,34 +19,34 @@ Installing Xming #. Run the Xming setup program on your Windows desktop. Make sure to select 'XLaunch wizard' and 'Normal PuTTY Link SSH client'. -.. figure:: using_the_xming_x_server_to_display_graphical_programs/xming_installation.png +.. figure:: windows_xming/xming_installation.png -Running Xming: --------------- +Running Xming +------------- #. To run Xming, select XLaunch from the Start Menu. #. Select 'Multiple Windows'. This will open each application in a separate window. - .. figure:: using_the_xming_x_server_to_display_graphical_programs/xming_display_settings.png + .. figure:: windows_xming/xming_display_settings.png #. Select 'Start no client' to make XLaunch wait for other programs (such as PuTTY). - .. figure:: using_the_xming_x_server_to_display_graphical_programs/xming_session_type.png + .. figure:: windows_xming/xming_session_type.png #. Select 'Clipboard' to share the clipboard. - .. figure:: using_the_xming_x_server_to_display_graphical_programs/xming_clipboard.png + .. figure:: windows_xming/xming_clipboard.png #. Finally save the configuration. - .. figure:: using_the_xming_x_server_to_display_graphical_programs/xming_save.png + .. figure:: windows_xming/xming_save.png #. Now Xming is running ... and you can launch a graphical application in your PuTTY terminal. Do not forget to enable X11 forwarding as - explained on :ref:`our PuTTY page `. + explained on :ref:`our PuTTY page `. To test the connection, you can try to start a simple X program on the login nodes, e.g., xterm or xeyes. The latter will open a new window with a pair of eyes. 
The pupils of these eyes should follow diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs/xming_clipboard.png b/source/compute/terminal/windows_xming/xming_clipboard.png similarity index 100% rename from source/access/using_the_xming_x_server_to_display_graphical_programs/xming_clipboard.png rename to source/compute/terminal/windows_xming/xming_clipboard.png diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs/xming_display_settings.png b/source/compute/terminal/windows_xming/xming_display_settings.png similarity index 100% rename from source/access/using_the_xming_x_server_to_display_graphical_programs/xming_display_settings.png rename to source/compute/terminal/windows_xming/xming_display_settings.png diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs/xming_installation.png b/source/compute/terminal/windows_xming/xming_installation.png similarity index 100% rename from source/access/using_the_xming_x_server_to_display_graphical_programs/xming_installation.png rename to source/compute/terminal/windows_xming/xming_installation.png diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs/xming_save.png b/source/compute/terminal/windows_xming/xming_save.png similarity index 100% rename from source/access/using_the_xming_x_server_to_display_graphical_programs/xming_save.png rename to source/compute/terminal/windows_xming/xming_save.png diff --git a/source/access/using_the_xming_x_server_to_display_graphical_programs/xming_session_type.png b/source/compute/terminal/windows_xming/xming_session_type.png similarity index 100% rename from source/access/using_the_xming_x_server_to_display_graphical_programs/xming_session_type.png rename to source/compute/terminal/windows_xming/xming_session_type.png diff --git a/source/compute/terminal/wsl.rst b/source/compute/terminal/wsl.rst new file mode 100644 index 000000000..b47995681 --- /dev/null +++ b/source/compute/terminal/wsl.rst @@ -0,0 +1,62 @@ +.. _wsl: + +########################## +Installing WSL2 on windows +########################## + +As a Windows user, if you don't already use any virtualisation system to +operate Linux you can install Windows Subsystem for Linux (WSL2). + +You must be running Windows 10 version 2004 and higher (Build 19041 and higher) +or Windows 11 to be able to use WSL2. +The requirements can be checked by typing ``winver`` on your search bar, a +informative popup appears showing your Windows version. + +|KUL| Users who are using a system managed by KU Leuven should fulfill these +requirements. + +The installation of WSL2 will consist of the following steps: + +1. Enable WSL 2 +2. Enable *Virtual Machine Platform* +3. Set WSL 2 as default +4. Install a Linux distro + +We will complete all steps by using Power Shell of Windows. However you can do +some of the steps by graphical screens as an option. Here you can find all +steps. + +Run Windows PowerShell as administrator and type the following to enable WSL: + +.. code-block:: + + dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart + +To enable Virtual Machine Platform on Windows 10 (2004), execute the following command: + +.. code-block:: + + dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart + +To set WSL 2 as default execute the command below (You might need to restart your PC): + +.. 
+ +To install your Linux distribution of choice on Windows 10, open the Microsoft Store app, search for it, and click the “Get” button. +The first time you launch a newly installed Linux distribution, a console window will open and you'll be asked to wait for a minute or two. +You will then need to create a user account and password for your new Linux distribution. This password will give you ``sudo`` rights when asked. +If you see 'WSLRegisterDistribution Failed with Error:' or if things don't work as intended, you should restart your system at this point. +After all these steps, typing ``wsl`` in your Windows PowerShell will drop you into your Linux distribution (e.g. Ubuntu), which has your Windows C drive mounted inside it. From now +on, you can execute all Linux commands. It is advised to use the Linux home directory +instead of your Windows drives: typing ``cd`` will take you to +your Linux home directory. + +Optionally, you can also install the Windows Terminal app, which provides +multiple tabs, a search feature, custom themes and more.
diff --git a/source/compute/tier1-archive.rst b/source/compute/tier1-archive.rst new file mode 100644 index 000000000..e94a4fa20 --- /dev/null +++ b/source/compute/tier1-archive.rst @@ -0,0 +1,13 @@ +.. _archive tier1: + +########################## +Archive of Tier-1 Clusters +########################## + +The following Tier-1 clusters are old VSC clusters that have already been +decommissioned. Their documentation is kept for preservation and historical purposes. + +.. toctree:: + :maxdepth: 2 + + /leuven/old_hardware/breniac/breniac_hardware
diff --git a/source/compute/tier1.rst b/source/compute/tier1.rst new file mode 100644 index 000000000..2d0db4da3 --- /dev/null +++ b/source/compute/tier1.rst @@ -0,0 +1,25 @@ +.. _tier1 hardware: + +################################## +:fa:`server` Tier-1 Infrastructure +################################## + +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: |UG| + :columns: 12 4 4 4 + + * Tier-1 :ref:`Hortense ` + +.. toctree:: + :maxdepth: 2 + + /gent/tier1_hortense + +------ + +.. toctree:: + :maxdepth: 2 + + tier1-archive
diff --git a/source/compute/tier2-archive.rst b/source/compute/tier2-archive.rst new file mode 100644 index 000000000..e246986e3 --- /dev/null +++ b/source/compute/tier2-archive.rst @@ -0,0 +1,15 @@ +.. _archive tier2: + +########################## +Archive of Tier-2 Clusters +########################## + +The following Tier-2 clusters are old VSC clusters that have already been +decommissioned. Their documentation is kept for preservation and historical purposes. + +.. toctree:: + :maxdepth: 2 + + /leuven/old_hardware/thinking_hardware + /leuven/old_hardware/genius_hardware + /antwerp/old_hardware/hopper_hardware
diff --git a/source/compute/tier2.rst b/source/compute/tier2.rst new file mode 100644 index 000000000..5018ab09d --- /dev/null +++ b/source/compute/tier2.rst @@ -0,0 +1,48 @@ +.. _tier2 hardware: + +####################################### +:fas:`hard-drive` Tier-2 Infrastructure +####################################### + +.. grid:: 4 + :gutter: 4 + + .. grid-item-card:: |KUL| |UH| + :columns: 6 6 3 3 + + * :ref:`Genius ` + * :ref:`Superdome ` + * :ref:`wICE ` + + .. grid-item-card:: |UA| + :columns: 6 6 3 3 + + * :ref:`Vaughan ` + * :ref:`Leibniz ` + * :ref:`Breniac ` + + .. grid-item-card:: |UG| + :columns: 6 6 3 3 + + * :ref:`All Tier-2 ` + + .. grid-item-card:: |VUB| + :columns: 6 6 3 3 + + * :ref:`Hydra ` + * :ref:`Anansi ` + +.. 
toctree:: + :maxdepth: 2 + + /antwerp/tier2_hardware + /brussels/tier2_hardware + /gent/tier2_hardware + /leuven/tier2_hardware + +--------- + +.. toctree:: + :maxdepth: 2 + + tier2-archive diff --git a/source/conf.py b/source/conf.py index b2c25e652..7cc2c9651 100644 --- a/source/conf.py +++ b/source/conf.py @@ -26,9 +26,9 @@ author = "VSC (Vlaams Supercomputing Center)" # The short X.Y version -version = "2.0" +version = "2.1" # The full version, including alpha/beta/rc tags -release = "2.0" +release = "2.1" # -- General configuration --------------------------------------------------- @@ -45,7 +45,7 @@ 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.ifconfig', - 'sphinx_reredirects', + 'sphinxcontrib.redirects', 'sphinx_sitemap', 'notfound.extension', ] @@ -113,9 +113,14 @@ "icon": "fa-brands fa-square-github", }, { - "name": "VSC on Twitter", + "name": "VSC on X", "url": "https://x.com/VSC_HPC", - "icon": "_static/fa-square-x-twitter.svg", + "icon": "fa-brands fa-square-x-twitter", + }, + { + "name": "VSC on BlueSky", + "url": "https://bsky.app/profile/vschpc.bsky.social", + "icon": "_static/fa-square-bluesky.svg", "type": "local", }, { @@ -144,8 +149,8 @@ "show_prev_next": False, "footer_start": ["copyright"], "footer_end": [], - "pygment_light_style": "manni", # toned-up comments - "pygment_dark_style": "inkpot", # toned-up comments + "pygments_light_style": "gruvbox-light", # contrasty readable comments + "pygments_dark_style": "monokai", # contrasty readable comments } # Add any paths that contain custom static files (such as style sheets) here, @@ -275,27 +280,7 @@ sitemap_filename = "vsc-docs-sitemap.xml" # -- Page redirects ---------------------------------------------------------- -redirects = { - "access/getting_access": "/access/vsc_account.html", - "access/account_request": "/access/vsc_account.html", - "access/access_and_data_transfer": "/access/access_methods.html", - "access/upload_new_key": "/access/generating_keys.html", - "access/data_transfer": "/data/transfer.html", - "access/data_transfer_using_winscp": "/data/transfer/winscp.html", - "access/data_transfer_with_filezilla": "/data/transfer/filezilla.html", - "access/data_transfer_with_scp_sftp": "/data/transfer/scp_sftp.html", - "access/eclipse_as_a_remote_editor": "/software/eclipse.html", - "access/multiplatform_client_tools": "/access/access_methods.html", - "data/tier1_data_main_index": "/data/tier1_data_service.html", - "globus/globus_main_index": "/globus/index.html", - "globus/globus_platform": "/globus/index.html", - "jobs/job_submission_and_credit_reservations": "/jobs/credits.html", - "jobs/the_job_system_what_and_why": "/jobs/index.html", - "jobs/using_software": "/software/using_software.html", - "leuven/data_transfer_kuleuven_network_drives": "/data/transfer/network_drives/kuleuven.html", - "leuven/mfa_quickstart": "/access/mfa_quickstart.html", - "leuven/tier2_hardware/mfa_login": "/access/mfa_login.html", -} +redirects_file = "redirects.list" # -- MyST -------------------------------------------------------------------- @@ -303,8 +288,11 @@ myst_enable_extensions = ["colon_fence"] # -- RST Prolog -------------------------------------------------------------- -rst_prolog = "" - +# Non-brakable space +rst_prolog = """ +.. |nbsp| unicode:: U+00A0 + :trim: +""" # Badges rst_prolog += """ .. |Optional| replace:: :bdg-primary:`Optional` @@ -314,22 +302,77 @@ .. |Warning| replace:: :bdg-warning:`Warning` .. |Info| replace:: :bdg-info:`Info` """ - # VSC Institute Badges rst_prolog += """ .. 
|KUL| replace:: :bdg-info:`KU Leuven` +.. |KULUH| replace:: :bdg-info:`KU Leuven/UHasselt` .. |UA| replace:: :bdg-danger:`UAntwerp` -.. |UG| replace:: :bdg-primary:`UGent` -.. |VUB| replace:: :bdg-warning:`VUB` +.. |UG| replace:: :bdg-secondary:`UGent` +.. |UH| replace:: :bdg-info:`UHasselt` +.. |VUB| replace:: :bdg-primary:`VUB` """ - -# Links used multiple times across the documentation +### Links used multiple times across the documentation ### +# Links to VSC and VSC sites +rst_prolog += """ +.. _eligible users: https://www.vscentrum.be/getaccess +.. _get in touch: https://www.vscentrum.be/getintouch +.. _Tier-1 project application: https://www.vscentrum.be/compute +.. _VSC account page: https://account.vscentrum.be/ +.. _VSC Account - Edit Account: https://account.vscentrum.be/django/account/edit +.. _VSC Account - Edit VO: https://account.vscentrum.be/django/vo/edit +.. _VSC Account - New/Join VO: https://account.vscentrum.be/django/vo/join +.. _VSC Firewall: https://firewall.vscentrum.be +.. _VSC Training: https://www.vscentrum.be/vsctraining +.. _KU Leuven Open OnDemand page: https://ondemand.hpc.kuleuven.be/ +.. _Service Catalog: https://icts.kuleuven.be/sc/HPC +.. _training waiting list: https://admin.kuleuven.be/icts/onderzoek/hpc/HPCintro-waitinglist +""" +# Links to hardware specifications +rst_prolog += """ +.. _AMD EPYC 7282: https://www.amd.com/en/support/downloads/drivers.html/processors/epyc/epyc-7002-series/amd-epyc-7282.html#amd_support_product_spec +.. _AMD EPYC 7452: https://www.amd.com/en/support/downloads/drivers.html/processors/epyc/epyc-7002-series/amd-epyc-7452.html#amd_support_product_spec +.. _AMD EPYC 7543: https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-7543.html +.. _AMD EPYC 9384X: https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9384x.html +.. _AMD Instinct MI100: https://www.amd.com/en/products/accelerators/instinct/mi100.html +.. _Intel Xeon E5-2650v4: https://www.intel.com/content/www/us/en/products/sku/91767/intel-xeon-processor-e52650-v4-30m-cache-2-20-ghz/specifications.html +.. _Intel Xeon E5-2680v2: https://www.intel.com/content/www/us/en/products/sku/75277/intel-xeon-processor-e52680-v2-25m-cache-2-80-ghz/specifications.html +.. _Intel Xeon E5-2680v4: https://www.intel.com/content/www/us/en/products/sku/91754/intel-xeon-processor-e52680-v4-35m-cache-2-40-ghz/specifications.html +.. _Intel Xeon E5-2683v4: https://www.intel.com/content/www/us/en/products/sku/91766/intel-xeon-processor-e52683-v4-40m-cache-2-10-ghz/specifications.html +.. _Intel Xeon E7-8891v4: https://www.intel.com/content/www/us/en/products/sku/93795/intel-xeon-processor-e78891-v4-60m-cache-2-80-ghz/specifications.html +.. _Intel Xeon Gold 6132: https://www.intel.com/content/www/us/en/products/sku/123541/intel-xeon-gold-6132-processor-19-25m-cache-2-60-ghz/specifications.html +.. _Intel Xeon Gold 6148: https://www.intel.com/content/www/us/en/products/sku/120489/intel-xeon-gold-6148-processor-27-5m-cache-2-40-ghz/specifications.html +.. _NVIDIA A100: https://www.nvidia.com/en-us/data-center/a100/ +.. _NVIDIA Tesla P100: https://www.nvidia.com/en-in/data-center/tesla-p100/ +.. _NVIDIA GeForce 1080Ti: https://www.nvidia.com/en-us/geforce/10-series/#1080-ti-spec +""" +# Links to Globus +rst_prolog += """ +.. _Globus: https://www.globus.org +.. _Globus Documentation: https://docs.globus.org +.. _Globus Web Interface: https://app.globus.org/ +.. _Globus Management Console: https://www.globus.org/app/login +.. 
_Globus Connect Server Installation Guide: https://docs.globus.org/globus-connect-server-installation-guide +.. _Globus How-To pages: https://docs.globus.org/how-to +.. _Globus Connect Personal: https://www.globus.org/globus-connect-personal +.. _Globus Groups How-To page: https://docs.globus.org/how-to/managing-groups +.. _Globus CLI documentation: https://docs.globus.org/cli/examples +.. _Globus-Timer-CLI on PyPi: https://pypi.org/project/globus-timer-cli +.. _Globus Python SDK documentation: https://globus-sdk-python.readthedocs.io/en/stable/index.html +.. _developers.globus.org: https://developers.globus.org/ +.. _docs.globus.org: https://docs.globus.org +.. _www.globus.org: https://www.globus.org +""" +# Other links rst_prolog += """ .. _Adaptive Computing documentation: https://support.adaptivecomputing.com/hpc-cloud-support-portal/ +.. _Apptainer: https://apptainer.org +.. _Apptainer Documentation: https://apptainer.org/docs/user/main/ +.. _Apptainer Quick Start: https://apptainer.org/docs/user/main/quick_start.html +.. _Apptainer Definition Files: https://apptainer.org/docs/user/main/definition_files.html +.. _Sylabs Remote Builder: https://cloud.sylabs.io/builder .. _ARM-DDT video: https://developer.arm.com/tools-and-software/server-and-hpc/debug-and-profile/arm-forge/resources/videos .. _ARM-MAP: https://www.arm.com/products/development-tools/hpc-tools/cross-platform/forge/map .. _atools documentation: https://atools.readthedocs.io/en/latest/ -.. _Beginning Hybrid MPI/OpenMP Development: https://software.intel.com/en-us/articles/beginning-hybrid-mpiopenmp-development .. _CP2K: https://www.cp2k.org/ .. _CPMD: http://www.cpmd.org/ .. _CUDA: https://developer.nvidia.com/cuda-zone @@ -337,60 +380,63 @@ .. _Cyberduck: https://cyberduck.io .. _Cygwin: https://www.cygwin.com/ .. _Docker: https://www.docker.com/ -.. _docs.globus.org: https://docs.globus.org .. _download FileZilla: https://filezilla-project.org/download.php?show_all=1 .. _Eclipse download page: http://www.eclipse.org/downloads .. _Eclipse packages download page: https://www.eclipse.org/downloads/packages/ .. _Eclipse: https://www.eclipse.org/ -.. _eligible users: https://www.vscentrum.be/getaccess +.. _EuroHPC: https://eurohpc-ju.europa.eu +.. _EuroHPC Access Calls: https://eurohpc-ju.europa.eu/access-our-supercomputers/eurohpc-access-calls_en .. _FFTW documentation: http://www.fftw.org/#documentation .. _FFTW: http://www.fftw.org/ .. _FileZilla project page: https://filezilla-project.org/ .. _GCC documentation: http://gcc.gnu.org/onlinedocs/ -.. _get in touch: https://www.vscentrum.be/getintouch -.. _Globus Web Interface: https://app.globus.org/ -.. _Globus Management Console: https://www.globus.org/app/login -.. _Globus Connect Server Installation Guide: https://docs.globus.org/globus-connect-server-installation-guide -.. _Globus How-To pages: https://docs.globus.org/how-to -.. _Globus Connect Personal: https://www.globus.org/globus-connect-personal -.. _Globus Groups How-To page: https://docs.globus.org/how-to/managing-groups -.. _Globus CLI documentation: https://docs.globus.org/cli/examples -.. _Globus-Timer-CLI on PyPi: https://pypi.org/project/globus-timer-cli -.. _Globus Python SDK documentation: https://globus-sdk-python.readthedocs.io/en/stable/index.html +.. _GNU: https://www.gnu.org/ .. _GNU binutils documentation: https://sourceware.org/binutils/docs/ .. _GROMACS: http://www.gromacs.org/ .. _HPE MPT documentation: https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00037728en_us&docLocale=en_US -.. 
_Intel MPI Documentation: https://software.intel.com/en-us/articles/intel-mpi-library-documentation -.. _Intel MPI: https://software.intel.com/en-us/intel-mpi-library -.. _Intel Software Documentation Library: https://software.intel.com/en-us/documentation -.. _Interoperability with OpenMP API: https://software.intel.com/en-us/node/528819 +.. _Intel Fortran Compiler Documentation: https://www.intel.com/content/www/us/en/developer/tools/oneapi/fortran-compiler-documentation.html +.. _Intel MPI: https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html +.. _Intel MPI Documentation: https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library-documentation.html +.. _Intel MPI - Beginning Hybrid MPI/OpenMP Development: https://www.intel.com/content/www/us/en/developer/articles/technical/beginning-hybrid-mpiopenmp-development.html +.. _Intel oneAPI DPC Compiler Documentation: https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler-documentation.html +.. _Intel oneAPI MKL Documentation: https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-documentation.html +.. _Intel oneAPI MKL Link Line Advisor: https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html +.. _Intel oneAPI Python Distribution: https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html +.. _Intel oneAPI VTune Profiler: https://www.intel.com/content/www/us/en/developer/tools/oneapi/vtune-profiler.html +.. _Intel Software Documentation Library: https://www.intel.com/content/www/us/en/resources-documentation/developer.html +.. _Interoperability with OpenMP API: https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-14/interoperability-with-openmp-api.html +.. _DigitalOcean Introduction to Linux Basics: https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-basics .. _irods.org: https://irods.org/ -.. _ITAC documentation: https://software.intel.com/en-us/articles/intel-trace-analyzer-and-collector-documentation/ -.. _JellyfiSSH: http://www.m-works.co.nz/jellyfissh.php +.. _JellyfiSSH: https://apps.apple.com/gb/app/jellyfissh/id416399476?mt=12 +.. _Jupyter: https://jupyter.org/ +.. _JupyterLab documentation: https://docs.jupyter.org/en/latest/ .. _Keras: https://keras.io/ .. _LAPACK user guide: http://www.netlib.org/lapack/lug/ .. _LAPACK95 user guide: http://www.netlib.org/lapack95/lug95/ -.. _Linux Basics on Lifewire : https://www.lifewire.com/learn-how-linux-basics-4102692 +.. _Linux kernel: https://www.kernel.org/ .. _Linux Newbie Administrator Guide: http://lnag.sourceforge.net/ -.. _Linux Tutorials YouTube Channel: https://www.youtube.com/channel/UCut99_Fv1YEcpYRXNnUM7LQ -.. _LLNL openMP tutorial: https://computing.llnl.gov/tutorials/openMP +.. _Linux Tutorials YouTube Channel: https://www.youtube.com/channel/UCut99_Fv1YEcpYRXNnUM7LQ +.. _LLNL Tutorials: https://hpc.llnl.gov/documentation/tutorials +.. _LLNL OpenMP Tutorial: https://hpc-tutorials.llnl.gov/openmp/ +.. _LLNL Parallel Computing Tutorial: https://hpc.llnl.gov/documentation/tutorials/introduction-parallel-computing-tutorial +.. _LLNL Advanced MPI: https://hpc.llnl.gov/sites/default/files/DavidCronkSlides.pdf .. _Lmod documentation: http://lmod.readthedocs.io/en/latest/ .. _Lmod: http://lmod.readthedocs.io/en/latest/ -.. _Locality-Aware Parallel Process Mapping for Multi-Core HPC Systems: http://www.joshuahursey.com/papers/2011/hursey-cluster-poster-2011.pdf .. 
_MathWorks: https://nl.mathworks.com/ .. _MATLAB compiler documentation: https://nl.mathworks.com/help/compiler/index.html -.. _MKL Link Line Advisor: https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor .. _MobaXterm: https://mobaxterm.mobatek.net -.. _MPI forum: https://www.mpi-forum.org/ -.. _MPI Reference Manual: https://software.intel.com/en-us/articles/intel-mpi-library-documentation -.. _MPI Standard documents: https://www.mpi-forum.org/docs/ +.. _MPI Forum: https://www.mpi-forum.org/ +.. _MPI Documents: https://www.mpi-forum.org/docs/ .. _MPICH: https://www.mpich.org/ .. _MVAPICH: http://mvapich.cse.ohio-state.edu/ .. _NAMD: http://www.ks.uiuc.edu/Research/namd/ .. _Netlib BLAS repository: http://www.netlib.org/blas/ .. _Netlib LAPACK repository: http://www.netlib.org/lapack/ .. _Netlib ScaLAPACK repository: http://www.netlib.org/scalapack/ +.. _noVNC: https://novnc.com/ +.. _NoMachine: https://www.nomachine.com/ .. _NX Client download: https://www.nomachine.com/download-enterprise#NoMachine-Enterprise-Client +.. _oneAPI Threading Building Blocks: https://uxlfoundation.github.io/oneTBB/ .. _Open MPI Documentation: https://www.open-mpi.org/doc .. _Open MPI Explorations in Process Affinity: https://www.slideshare.net/jsquyres/open-mpi-explorations-in-process-affinity-eurompi13-presentation .. _Open MPI: https://www.open-mpi.org/ @@ -398,39 +444,45 @@ .. _OpenBLAS: https://www.openblas.net/ .. _OpenMP compilers and tools: https://www.openmp.org/resources/openmp-compilers-tools/ .. _OpenMP: https://www.openmp.org +.. _Open OnDemand: https://openondemand.org/ .. _OpenSHMEM: http://www.openshmem.org/site/ -.. _Paraview tutorial: https://www.vtk.org/Wiki/images/8/88/ParaViewTutorial38.pdf +.. _OpenSSH: https://www.openssh.com/ +.. _Paraview tutorial: https://vtk.org/Wiki/images/8/88/ParaViewTutorial38.pdf .. _Paraview website: https://www.paraview.org/ .. _POSIX threads: https://en.wikipedia.org/wiki/POSIX_Threads +.. _PowerShell: https://learn.microsoft.com/en-us/powershell/ +.. _PRACE: https://prace-ri.eu/ +.. _PRACE Training Portal: https://training.prace-ri.eu/ +.. _PRACE Tutorials: https://training.prace-ri.eu/index.php/prace-tutorials/ .. _PuTTY download site: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html .. _PuTTY: https://www.chiark.greenend.org.uk/~sgtatham/putty/ +.. _Red Hat Enterprise Linux: https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux .. _qsub documentation: http://docs.adaptivecomputing.com/torque/6-1-2/adminGuide/torque.htm#topics/torque/commands/qsub.htm +.. _RStudio documentation: https://docs.posit.co/ide/user/ .. _ScaLAPACK user guide: http://netlib.org/scalapack/slug/ .. _Scalasca docs: http://www.scalasca.org/software/scalasca-2.x/documentation.html .. _scp manual page: http://man.openbsd.org/scp -.. _Service Catalog: https://icts.kuleuven.be/sc/HPC +.. _Secure Shell: https://en.wikipedia.org/wiki/Secure_Shell .. _sftp manual page: http://man.openbsd.org/sftp -.. _Singularity documentation: https://singularity.hpcng.org/user-docs/3.8/ -.. _Singularity: https://singularity.hpcng.org/ .. _sbatch manual page: https://slurm.schedmd.com/sbatch.html .. _ssh manual page: http://man.openbsd.org/ssh .. _ssh-keygen manual page: http://man.openbsd.org/ssh-keygen .. _ssh_config manual page: http://man.openbsd.org/ssh_config -.. _Sylabs Singularity: https://sylabs.io/singularity/ +.. _TensorBoard documentation: https://www.tensorflow.org/tensorboard/get_started .. _TensorFlow: https://www.tensorflow.org/ -.. 
_Threading Building Blocks: https://www.threadingbuildingblocks.org -.. _tier-1 project application: https://www.vscentrum.be/tier1 .. _TigerVNC: https://tigervnc.org/ -.. _Torque 6.0.1 documentation: http://docs.adaptivecomputing.com/torque/6-1-2/adminGuide/torque.htm -.. _training waiting list: https://admin.kuleuven.be/icts/onderzoek/hpc/HPCintro-waitinglist +.. _Torque 6.0.1 documentation: http://docs.adaptivecomputing.com/torque/6-1-2/adminGuide/torque.htm .. _TurboVNC download page: https://github.com/TurboVNC/turbovnc/releases .. _TurboVNC: https://www.turbovnc.org/ .. _VirtualGL: https://en.wikipedia.org/wiki/VirtualGL -.. _VSC account page: https://account.vscentrum.be/ -.. _VSC training: https://www.vscentrum.be/training +.. _VNC: https://en.wikipedia.org/wiki/VNC +.. _VSCode documentation: https://code.visualstudio.com/docs +.. _WinSCP: https://winscp.net .. _WinSCP docs: https://winscp.net/eng/docs/start .. _worker documentation: http://worker.readthedocs.io/en/latest/ .. _worker framework documentation: https://worker.readthedocs.io/en/latest/ -.. _www.globus.org: https://www.globus.org +.. _Windows Subsystem for Linux: https://learn.microsoft.com/en-us/windows/wsl/ .. _Xming website: http://www.straightrunning.com/XmingNotes/ +.. _X Server: https://en.wikipedia.org/wiki/X.Org_Server +.. _X Window System: https://en.wikipedia.org/wiki/X_Window_System """ diff --git a/source/contact_vsc.rst b/source/contact_vsc.rst index 10af99331..f5554ea40 100644 --- a/source/contact_vsc.rst +++ b/source/contact_vsc.rst @@ -19,7 +19,7 @@ General enquiries ----------------- For non-technical questions about the VSC, you can contact the FWO or -one of the coordinators from participating universities. This may -include questions on admission requirements to questions about setting -up a course or other questions that are not directly related to -technical problems. +one of the `VSC coordinators in participating universities `_. +This may include questions on admission requirements to questions about setting +up a course or other questions that are not directly related to technical +problems. diff --git a/source/data/index.rst b/source/data/index.rst index ce6ddbe73..89ae31efb 100644 --- a/source/data/index.rst +++ b/source/data/index.rst @@ -5,8 +5,8 @@ .. toctree:: :maxdepth: 2 + tier1_data_service storage transfer ../globus/index - tier1_data_service diff --git a/source/data/managing_storage_usage.rst b/source/data/managing_storage_usage.rst index d3a797b23..bbf1efa18 100644 --- a/source/data/managing_storage_usage.rst +++ b/source/data/managing_storage_usage.rst @@ -50,7 +50,7 @@ available in the *Usage* section of the You will find the usage data for your :ref:`personal storage ` space such as ``VSC_HOME``, ``VSC_DATA`` and ``VSC_SCRATCH`` as well as your -:ref:`Virtual Organization ` if you are in one. +:ref:`Virtual Organization ` if you are in one. Terminal in the cluster ----------------------- diff --git a/source/data/request_more_storage.rst b/source/data/request_more_storage.rst index aa6cc2576..91a82b725 100644 --- a/source/data/request_more_storage.rst +++ b/source/data/request_more_storage.rst @@ -5,10 +5,10 @@ Request more storage #################### If the current quota limits of your :ref:`personal storage ` or -:ref:`Virtual Organization (VO) ` are not large enough to -carry out your research project, it might be possible to increase them. 
This -option depends on data storage policies of the site managing your VSC account, -VO or Tier-1 project as well as on current capacity of the storage system. +:ref:`Virtual Organization (VO) ` are not large enough to carry out your +research project, it might be possible to increase them. This option depends on +data storage policies of the site managing your VSC account, VO or Tier-1 +project as well as on current capacity of the storage system. Before requesting more storage, please check carefully the :ref:`current data usage of your VSC account ` and identify which file system @@ -76,11 +76,10 @@ Increase storage in virtual organizations ========================================= VSC_DATA_VO, VSC_SCRATCH_VO - The storage quotas of your :ref:`Virtual Organization (VO) ` - are managed by the moderator of the VO, who is typically the leader of your - research group. The moderator can manage all quotas of the VO in the - `Edit VO `_ tab of the VSC - account page. + The storage quotas of your :ref:`Virtual Organization (VO) ` are managed + by the moderator of the VO, who is typically the leader of your research + group. The moderator can manage all quotas of the VO in the + `VSC Account - Edit VO`_ page. Requesting more storage space ----------------------------- @@ -88,7 +87,7 @@ Requesting more storage space VO moderators can request additional quota for ``VSC_DATA_VO`` and ``VSC_SCRATCH_VO``: #. Go to the section **Request additional quota** in the - `Edit VO `_ tab + `VSC Account - Edit VO`_ page #. Fill in the amount of additional storage you want for ``VSC_DATA_VO`` (labelled ``VSC_DATA`` in this section) and/or ``VSC_SCRATCH_VO`` (labelled @@ -107,7 +106,7 @@ VO moderators can tweak the share of the VO quota that each member can maximally use. By default, this is set to 50% of the total quota for each user. #. Go to the section **Request additional quota** in the - `Edit VO `_ tab + `VSC Account - Edit VO`_ page #. Adjust the share (%) of the available space available to each user diff --git a/source/data/storage.rst b/source/data/storage.rst index ae2e448e4..5d2b827c3 100644 --- a/source/data/storage.rst +++ b/source/data/storage.rst @@ -1,16 +1,18 @@ -############################ -:fas:`database` Data Storage -############################ +################################### +:fas:`database` Data Storage on HPC +################################### -Your VSC account comes with certain amount of data storage capacity in at least -three subdirectories on each VSC cluster. Please check the following sections -to familiarise yourself with the characteristics of each storage system and how -to manage your data in them. +Your VSC account comes with certain amount of data storage capacity on each VSC +cluster. This storage is provided through (at least) three main directories +accessible from our clusters. Please check the following sections to +familiarise yourself with the characteristics of each storage system and how to +manage your data in them. .. toctree:: :maxdepth: 2 storage_locations + tier2-infrastructure managing_storage_usage request_more_storage diff --git a/source/data/storage_locations.rst b/source/data/storage_locations.rst index 61f0d1fb6..73f296b6e 100644 --- a/source/data/storage_locations.rst +++ b/source/data/storage_locations.rst @@ -26,8 +26,8 @@ on the size and usage of these data. Following locations are available: Accessible from login nodes, compute nodes and from all clusters in VSC. 
Capacity: Low - <10 GB, check the :ref:`storage quota of your home institute `. + <10 GB, check the :ref:`storage quota ` of your + home institute. Perfomance: Low Jobs must never use files in your home directory. @@ -49,9 +49,9 @@ on the size and usage of these data. Following locations are available: Accessible from login nodes, compute nodes and from all clusters in VSC. Capacity: Medium - <100 GB, check the :ref:`storage quota of your home institute - `. Capacity might be :ref:`expandable upon - request `. + <100 GB, check the :ref:`storage quota ` + of your home institute. Capacity might be + :ref:`expandable upon request `. Perfomance: Low There is no performance guarantee. Depending on the cluster, @@ -76,9 +76,9 @@ on the size and usage of these data. Following locations are available: accessible from other VSC clusters. Capacity: Medium-High - 50-500 GB, check the :ref:`storage quota of your home institute - `. Capacity might be :ref:`expandable upon - request `. + 50-500 GB, check the :ref:`storage quota ` + of your home institute. Capacity might be + :ref:`expandable upon request `. Perfomance: High Preferred location for all data files read or written during the @@ -101,8 +101,8 @@ on the size and usage of these data. Following locations are available: Capacity: Variable Maximum data usage depends on the local disk space of the node - executing your job. Check the :ref:`storage quota of your home - institute `. Note that the available disk space + executing your job. Check the :ref:`storage quota ` + of your home institute. Note that the available disk space is shared among all jobs running in the node. Perfomance: High @@ -121,13 +121,11 @@ by the capacity of the disk system, to prevent that the disk system fills up accidentally. You can see your current usage and the current limits with the appropriate quota command as explained on the :ref:`page on managing disk space `. -The actual disk capacity, shared by *all* users, can be found on the -:ref:`Available hardware ` page. .. seealso:: - The default quotas on each VSC site are gathered in the :ref:`storage - hardware` tables. + The actual disk capacity, shared by *all* users, and the default quotas on + each VSC site can be found on the :ref:`storage hardware` pages. You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. diff --git a/source/data/tier1data/clients/mango_portal.rst b/source/data/tier1data/clients/mango_portal.rst index c70ad23f2..131b8e160 100644 --- a/source/data/tier1data/clients/mango_portal.rst +++ b/source/data/tier1data/clients/mango_portal.rst @@ -108,7 +108,7 @@ A tar file is similar to a Zip folder, and can be extracted with a program like Uploads and downloads via the ManGO portal are limited to 5GB and 50GB per file respectively. While it is possible to upload/download multiple files at once, it isn't possible to upload a folder or download a collection as a whole at the moment. -If you want to transfer larger amounts of data via a graphical interface, you can use `Globus `_. +If you want to transfer larger amounts of data via a graphical interface, you can use the :ref:`globus platform`. .. _edit-permissions: diff --git a/source/data/tier1data/schemas/metadata-schemas-tech.rst b/source/data/tier1data/schemas/metadata-schemas-tech.rst index 2358e3bd5..fc7b028cb 100644 --- a/source/data/tier1data/schemas/metadata-schemas-tech.rst +++ b/source/data/tier1data/schemas/metadata-schemas-tech.rst @@ -1,3 +1,5 @@ +.. 
_t1data_metadata_tech_spec: + ########################################## Metadata schemas: technical specifications ########################################## diff --git a/source/data/tier1data/schemas/metadata-schemas.rst b/source/data/tier1data/schemas/metadata-schemas.rst index d99576569..e7a2dd149 100644 --- a/source/data/tier1data/schemas/metadata-schemas.rst +++ b/source/data/tier1data/schemas/metadata-schemas.rst @@ -8,8 +8,7 @@ This article describes the ManGO portal functionalities related to metadata schemas: how to design them and how to apply them. Users who might want to design their own schemas independently and load them via JSON, as well as developers interested in implemented this feature -outside the portal, are directed to `the technical -specifications `__. +outside the portal, are directed to :ref:`t1data_metadata_tech_spec`. One crucial principle of the metadata schema functionality in the ManGO portal is that schemas that can be used to apply metadata cannot be diff --git a/source/data/tier2-infrastructure.rst b/source/data/tier2-infrastructure.rst new file mode 100644 index 000000000..53a5ddb2a --- /dev/null +++ b/source/data/tier2-infrastructure.rst @@ -0,0 +1,50 @@ +.. _storage hardware: + +############################# +Tier-2 Storage Infrastructure +############################# + +The storage attached to our HPC clusters is organized according to the +:ref:`VSC storage guidelines `. + +All HPC clusters in VSC have their own shared storage solution accessible +from all nodes within that cluster. The so-called ``VSC_DATA`` and +``VSC_SCRATCH`` use this type of storage and constitute the main storage of the +cluster. + +Each compute node also has a local storage that can be used by jobs for +temporary storage. This is usually referred to as ``VSC_SCRATCH_NODE`` or just +``TMPDIR``. In general, ``VSC_SCRATCH`` is preferred over the local storage on +the node as the most performant option for scratch files. + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven/UHasselt + :sync: kuluh + + .. include:: /leuven/tier2_hardware/kuleuven_storage_quota_table.rst + + For more information check: :ref:`KU Leuven storage` + + .. tab-item:: UGent + :sync: ug + + .. include:: /gent/storage_quota_table.rst + + For more information check: :ref:`HPC-UGent Shared storage ` + + .. tab-item:: UAntwerp (AUHA) + :sync: ua + + .. include:: /antwerp/tier2_hardware/uantwerp_storage_quota_table.rst + + For more information check: :ref:`UAntwerp storage` + + .. tab-item:: VUB + :sync: vub + + .. include:: /brussels/tier2_hardware/vub_storage_quota_table.rst + + For more information check: :ref:`VUB storage` + diff --git a/source/data/transfer.rst b/source/data/transfer.rst index bc34c95ef..fa921d5f0 100644 --- a/source/data/transfer.rst +++ b/source/data/transfer.rst @@ -9,15 +9,88 @@ that you need for your research from your personal or department computer to the :ref:`storage of VSC clusters `. Then, once you get your results, you might want to transfer some files back. -|Recommended| The preferred way to transfer data to/from the VSC clusters is -the :ref:`globus platform`. +.. important:: -For those systems not supporting Globus, we provide instructions on alternative -transfer methods that can also be used in VSC clusters: + |Recommended| The preferred way to transfer data to/from the VSC clusters is + the :ref:`globus platform`. + +.. 
_data transfer external comp: + +Data transfer on external computers +=================================== + +For those systems not supporting :ref:`Globus `, we provide +instructions on alternative transfer methods that can be used to transfer data +between your computer and VSC clusters: .. toctree:: - :maxdepth: 3 + :hidden: + + transfer/windows + transfer/mac + transfer/linux + +.. grid:: 3 + :gutter: 4 + + .. grid-item-card:: :fab:`windows` Windows + :columns: 12 4 4 4 + + * :ref:`FileZilla ` + * :ref:`WinSCP ` + + .. grid-item-card:: :fab:`apple` macOS + :columns: 12 4 4 4 + + * :ref:`Cyberduck ` + * :ref:`FileZilla ` + * :ref:`Terminal ` + + .. grid-item-card:: :fab:`linux` Linux + :columns: 12 4 4 4 + + * :ref:`scp and sftp ` + +.. _data transfer net drives: + +Data transfer on network drives +=============================== + +Some VSC clusters provide specific integration with network storage platforms +available in their home institution. + +.. toctree:: + :hidden: + + transfer/network_drives/kuleuven + transfer/network_drives/vub_onedrive + +.. tab-set:: + :sync-group: vsc-sites + + .. tab-item:: KU Leuven/UHasselt + :sync: kuluh + + On clusters hosted at KU Leuven it is possible to transfer data to + and from KU Leuven network drives to which you may have access. + + Follow the instructions in: :ref:`KU Leuven network drives` + + .. tab-item:: UAntwerpen + :sync: ua + + *No specific integration* + + .. tab-item:: UGent + :sync: ug + + *No specific integration* + + .. tab-item:: VUB + :sync: vub + + You can directly copy files between the :ref:`Hydra cluster` and your + OneDrive from VUB. - transfer/external_computer - transfer/network_drives + Follow the instructions in: :ref:`vub onedrive` diff --git a/source/data/transfer/external_computer.rst b/source/data/transfer/external_computer.rst deleted file mode 100644 index 22ee0ddad..000000000 --- a/source/data/transfer/external_computer.rst +++ /dev/null @@ -1,41 +0,0 @@ -.. _data transfer external comp: - -################################### -Data transfer on external computers -################################### - -|Recommended| The preferred way to transfer data between your personal computer -and the VSC clusters is the :ref:`globus platform`. You can set up a -:ref:`local Globus endpoint in your computer `. - -Alternatively, we provide instructions for alternative solutions on the major -three operating systems. - -.. toctree:: - :hidden: - - windows - mac - linux - -.. grid:: 3 - :gutter: 4 - - .. grid-item-card:: :fab:`windows` Windows - :columns: 12 4 4 4 - - * :ref:`FileZilla ` - * :ref:`WinSCP ` - - .. grid-item-card:: :fab:`apple` macOS - :columns: 12 4 4 4 - - * :ref:`Cyberduck ` - * :ref:`FileZilla ` - * :ref:`Terminal ` - - .. grid-item-card:: :fab:`linux` Linux - :columns: 12 4 4 4 - - * :ref:`scp and sftp ` - diff --git a/source/data/transfer/filezilla-key-management.rst b/source/data/transfer/filezilla-key-management.rst new file mode 100644 index 000000000..644f109c3 --- /dev/null +++ b/source/data/transfer/filezilla-key-management.rst @@ -0,0 +1,11 @@ +As long as you use an :ref:`SSH agent ` to manage your SSH +keys, you stay connected via FileZilla and you do not require additional +configuration. + +Alternatively, recent versions of FileZilla also can manage private keys +on their own. The path to the private key must be provided in the option: +*Edit Tab* -> *options* -> *connection* -> *SFTP*. After that you should +be able to connect after being asked for passphrase. + +.. 
figure:: filezilla/prefs_private_key.jpg + :alt: FileZilla site manager with settings diff --git a/source/data/transfer/filezilla.rst b/source/data/transfer/filezilla.rst index 4b338fe65..0c3ab6282 100644 --- a/source/data/transfer/filezilla.rst +++ b/source/data/transfer/filezilla.rst @@ -16,16 +16,30 @@ Prerequisites All users need to setup an :ref:`SSH agent ` before proceeding. .. tab-set:: + :sync-group: vsc-sites .. tab-item:: KU Leuven + :sync: kuluh You need to :ref:`get an SSH certificate into your agent `, if you haven't done so already. - .. tab-item:: UGent, VUB, UAntwerpen + .. tab-item:: UAntwerpen + :sync: ua You need to load your private SSH key into your :ref:`SSH agent `. + .. tab-item:: UGent + :sync: ug + + You need to load your private SSH key into your :ref:`SSH agent `. + + .. tab-item:: VUB + :sync: vub + + You need to load your private SSH key into your :ref:`SSH agent `. + + Configuration of FileZilla to connect to a login node ===================================================== @@ -36,55 +50,83 @@ Configuration of FileZilla to connect to a login node fields remain blank): .. tab-set:: + :sync-group: vsc-sites .. tab-item:: KU Leuven + :sync: kuluh + + * Host: ``login.hpc.kuleuven.be`` + * Server Type: 'SFTP - SSH File Transfer Protocol' + * Logon Type: 'Interactive' + * User: *your own* VSC user ID + + .. figure:: filezilla/site_manager_kul.png + :alt: FileZilla's site manager for KU Leuven clusters - - Host: ``login.hpc.kuleuven.be`` - - Server Type: 'SFTP - SSH File Transer Protocol' - - Logon Type: 'Interactive' - - User: *your own* VSC user ID + .. tab-item:: UAntwerpen + :sync: ua - .. figure:: filezilla/site_manager_kul.png - :alt: FileZilla's site manager for KU Leuven clusters + * Host: ``login.hpc.uantwerpen.be`` + * Server Type: 'SFTP - SSH File Transfer Protocol' + * Logon Type: 'Normal' + * User: *your own* VSC user ID - .. tab-item:: UGent, VUB, UAntwerpen + .. figure:: filezilla/site_manager_non_kul.png + :alt: FileZilla's site manager for UGent, VUB, UAntwerpen - - Host: fill in the hostname of the VSC login node of your home - institution. You can find this information in the :ref:`overview - of available hardware on this site `. - - Server Type: 'SFTP - SSH File Transfer Protocol' - - Logon Type: 'Normal' - - User: *your own* VSC user ID, e.g. vsc98765 + .. tab-item:: UGent + :sync: ug - .. figure:: filezilla/site_manager_non_kul.png - :alt: FileZilla's site manager for UGent, VUB, UAntwerpen + * Host: ``login.hpc.ugent.be`` + * Server Type: 'SFTP - SSH File Transfer Protocol' + * Logon Type: 'Normal' + * User: *your own* VSC user ID + + .. figure:: filezilla/site_manager_non_kul.png + :alt: FileZilla's site manager for UGent, VUB, UAntwerpen + + .. tab-item:: VUB + :sync: vub + + * Host: ``login.hpc.vub.be`` + * Server Type: 'SFTP - SSH File Transfer Protocol' + * Logon Type: 'Normal' + * User: *your own* VSC user ID + + .. figure:: filezilla/site_manager_non_kul.png + :alt: FileZilla's site manager for UGent, VUB, UAntwerpen #. Optionally, rename this setting to your liking by pressing the 'Rename' button -#. Press 'Connect' and enter your passphrase when requested +#. Press 'Connect' and enter your passphrase when requested .. tab-set:: + :sync-group: vsc-sites .. tab-item:: KU Leuven + :sync: kuluh As long as your SSH agent is running and keeping a valid SSH certificate, you stay connected via FileZilla and you do not require additional configuration. - .. tab-item:: UGent, VUB, UAntwerpen + .. tab-item:: UAntwerpen + :sync: ua + + .. 
include:: filezilla-key-management.rst + + .. tab-item:: UGent + :sync: ug - Recent versions of FileZilla have a screen in the settings to - manage private keys. The path to the private key must be provided in - options (Edit Tab -> options -> connection -> SFTP): + .. include:: filezilla-key-management.rst - .. figure:: filezilla/prefs_private_key.jpg - :alt: FileZilla site manager with settings + .. tab-item:: VUB + :sync: vub - After that you should be able to connect after being asked for - passphrase. As an alternative you can choose to use an :ref:`SSH agent `. + .. include:: filezilla-key-management.rst -Under the 'Advanced' tab you can also set the directory you wish to open by +Under the *'Advanced'* tab you can also set the directory you wish to open by default upon login. -For example, to set your default path to your ``VSC_DATA`` directory, you need to -provide the full path, like ``/data/brussels/1xx/vsc1xxxxx``. +For example, setting your default path to your ``VSC_DATA`` directory can be done by +providing its full path, like ``/data/brussels/100/vsc10000``. diff --git a/source/data/transfer/network_drives.rst b/source/data/transfer/network_drives.rst deleted file mode 100644 index c3084c011..000000000 --- a/source/data/transfer/network_drives.rst +++ /dev/null @@ -1,29 +0,0 @@ -.. _data transfer net drives: - -############################### -Data transfer on network drives -############################### - -Some VSC clusters provide specific integration with network storage platforms available in their home institution. - -.. toctree:: - :hidden: - - network_drives/kuleuven - network_drives/vub_onedrive - -.. tab-set:: - - .. tab-item:: KU Leuven/UHasselt - - On clusters hosted at KU Leuven it is possible to transfer data to - and from KU Leuven network drives to which you may have access. - - Follow the instructions in: :ref:`KU Leuven network drives` - - .. tab-item:: VUB - - You can directly copy files between Hydra and your OneDrive from VUB. - - Follow the instructions in: :ref:`vub onedrive` - diff --git a/source/data/transfer/network_drives/kuleuven.rst b/source/data/transfer/network_drives/kuleuven.rst index 0faff5624..b3388be22 100644 --- a/source/data/transfer/network_drives/kuleuven.rst +++ b/source/data/transfer/network_drives/kuleuven.rst @@ -1,8 +1,8 @@ .. _KU Leuven network drives: -######################################### -Data transfer on KU Leuven network drives -######################################### +######################## +KU Leuven network drives +######################## On clusters hosted at KU Leuven it is possible to transfer data to and from KU Leuven network drives to which you may have access. diff --git a/source/data/transfer/network_drives/vub_onedrive.rst b/source/data/transfer/network_drives/vub_onedrive.rst index 83f342a01..1cb8ad9e0 100644 --- a/source/data/transfer/network_drives/vub_onedrive.rst +++ b/source/data/transfer/network_drives/vub_onedrive.rst @@ -6,7 +6,7 @@ Your OneDrive in VUB You can directly copy files between Hydra and your `OneDrive in VUB `_ using the third-party sync app -`OneDrive Client for Linux `_. +`OneDrive Client for Linux `_. This method avoids any additional step to copy the files to/from OneDrive to/your your local computer before transferring them to the HPC. @@ -134,5 +134,5 @@ Synchronize with personal OneDrive .. 
seealso:: - `Onedrive Client for Linux documentation `_ + `Onedrive Client for Linux documentation `_ diff --git a/source/data/transfer/winscp.rst b/source/data/transfer/winscp.rst index 4d9451edb..c04eace5a 100644 --- a/source/data/transfer/winscp.rst +++ b/source/data/transfer/winscp.rst @@ -1,4 +1,4 @@ -.. _WinSCP: +.. _WinSCP transfer: ########################## Data transfer using WinSCP @@ -7,20 +7,18 @@ Data transfer using WinSCP Prerequisites ============= -To transfer files to and from the cluster, we recommend the use of -`WinSCP `__, which is a graphical ftp-style program (but -than one that uses the ssh protocol to communicate with the cluster rather then -the less secure ftp) that is also freely available. WinSCP can be downloaded -both as an installation package and as a standalone portable executable. When -using the portable version, you can copy WinSCP together with your private key -on a USB stick to have access to your files from any internet-connected Windows -PC. - -WinSCP also works together well with the PuTTY suite of applications. It -uses the :ref:`keys generated with the PuTTY key generation -program `, can :ref:`launch terminal -sessions in PuTTY ` and :ref:`use -ssh keys managed by pageant `. +To transfer files to and from the cluster, we recommend the use of `WinSCP`_, +which is a graphical ftp-style program (but than one that uses the ssh protocol +to communicate with the cluster rather than the less secure ftp) that is also +freely available. WinSCP can be downloaded both as an installation package and +as a standalone portable executable. When using the portable version, you can +copy WinSCP together with your private key on a USB stick to have access to +your files from any internet-connected Windows PC. + +WinSCP also works together well with the PuTTY suite of applications. It uses +:ref:`keys generated with PuTTY `, can launch +:ref:`terminal sessions in PuTTY ` and use ssh keys managed by +:ref:`Pageant`. Transfers to and from the VSC clusters ====================================== @@ -35,12 +33,17 @@ connecting and add host key to the cache'; select 'Yes'. .. figure:: winscp/winscp_config-new-red.png #. Fill in the hostname of the VSC login node of your home - institution. You can find this information in the :ref:`overview - of available hardware on this site `. - #. Fill in your VSC username. - #. Double check that the port number is 22. + institution. You can find this information in the + :ref:`tier1 hardware` and :ref:`tier2 hardware` sections + + #. Fill in your VSC username + + #. Double check that the port number is 22 -#. If you are not using pageant to manage your ssh keys, you have to point WinSCP to the private key file (in PuTTY .ppk format) that should be used. You can do that using "Advanced" button and then choose "SSH" "Authentication" from the list. When using pageant, you can leave this field blank. +#. If you are not using pageant to manage your ssh keys, you have to point + WinSCP to the private key file (in PuTTY .ppk format) that should be used. + You can do that using "Advanced" button and then choose "SSH Authentication" + from the list. When using pageant, you can leave this field blank. .. figure:: winscp/winscp_config-advanced-new-red.png diff --git a/source/faq.rst b/source/faq.rst index 2a7b5ff62..4421bf006 100644 --- a/source/faq.rst +++ b/source/faq.rst @@ -25,12 +25,12 @@ Access to the infrastructure .. toctree:: :maxdepth: 1 - I messed up my authentication keys, what can I do? 
- How can I access from multiple computers? + I messed up my authentication keys, what can I do? + How can I access from multiple computers? How can I access from abroad? - access/where_can_i_store_what_kind_of_data - access/managing_disk_usage - access/how_to_request_more_quota + accounts/where_can_i_store_what_kind_of_data + accounts/managing_disk_usage + accounts/how_to_request_more_quota .. _job faqs: @@ -41,10 +41,10 @@ Running jobs .. toctree:: :maxdepth: 1 - jobs/why_doesn_t_my_job_start - jobs/what_if_jobs_fail_after_starting_successfully - jobs/worker_or_atools - jobs/workflows_using_job_dependencies + compute/jobs/why_doesn_t_my_job_start + compute/jobs/what_if_jobs_fail_after_starting_successfully + compute/jobs/worker_or_atools + compute/jobs/workflows_using_job_dependencies .. _software faqs: @@ -55,8 +55,8 @@ Software .. toctree:: :maxdepth: 2 - software/parallel_software - software/singularity + compute/software/parallel_software + compute/software/containers Tutorials and additional resources diff --git a/source/gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters.rst b/source/gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters.rst index b240d1ebd..2714ed539 100644 --- a/source/gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters.rst +++ b/source/gent/setting_up_the_environment_using_lmod_at_the_hpc_ugent_clusters.rst @@ -528,9 +528,10 @@ by the ``lmod`` command. export EASYBUILD_MODULES_TOOL=Lmod -See `the EasyBuild documentation -`_ -for other ways of configuring EasyBuild to use Lmod. +.. seealso:: + + See the documentation on `Configuring EasyBuild `_ + for other ways of setting up EasyBuild to use Lmod. You should not be using ``lmod`` directly in other circumstances, use either ``ml`` or ``module`` instead. diff --git a/source/gent/tier1_hortense.rst b/source/gent/tier1_hortense.rst index 275de91af..de63f7e30 100644 --- a/source/gent/tier1_hortense.rst +++ b/source/gent/tier1_hortense.rst @@ -123,7 +123,9 @@ You can use SSH to connect to the login nodes of the Tier-1 Hortense cluster wit * from the public internet, use ``tier1.hpc.ugent.be`` * from within the VSC network, use ``tier1.gent.vsc`` -More general information about SSH login is available at :ref:`access methods`. +More general information about SSH login is available in the +:ref:`terminal +interface` section. There are 2 login nodes for Hortense: ``login55`` and ``login56``. When logging in using SSH, you will be assigned to either of these login nodes, @@ -178,21 +180,18 @@ The type of fingerprint that will be shown depends on the version and configurat Web portal ********** -To access Tier-1 Hortense you can also use the `Open On-Demand `_ -web portal https://tier1.hpc.ugent.be. +To access Tier-1 Hortense you can also use the `Open On-Demand` web portal +https://tier1.hpc.ugent.be. More information about the usage of the web portal is available in https://docs.hpc.ugent.be/web_portal/. - .. note:: If you are using the Hortense web portal from outside of the network of a Flemish university, - you will first need to open the `VSC firewall app `_ - and log in via the VSC account page. + you will first need to open the `VSC Firewall`_ web app and log in with your VSC account. Keep the browser tab with firewall app open as long as you want to use the web portal! - .. 
_hortense_scratch_globus: Hortense scratch via Globus @@ -262,7 +261,7 @@ Do not hesitate to give your feedback on using the Resource Application via comp Practical usage: -* Open a webbrowser to https://resapp.hpc.ugent.be (The app will redirect you via the VSC firewall application first, if needed.) +* Open a webbrowser to https://resapp.hpc.ugent.be (The app will redirect you via the `VSC Firewall`_ application first, if needed.) * The Resource Application shows you all Tier1-Hortense projects that you are a member of. * By clicking on the dropdown arrow on the right in the initial Projects tab, you can consult the raw usage of one of your projects (in CPU hours and GPU hours). * You can also view Logs and get more fine-grained usage details. @@ -553,6 +552,8 @@ A list of available partitions can be obtained using ``module avail cluster/dodr To check the currently active partition, use ``module list cluster``. +.. _tier1_request_gpus: + Requesting GPU resources ++++++++++++++++++++++++ diff --git a/source/gent/tier2_hardware.rst b/source/gent/tier2_hardware.rst index 83ed9e6ab..c352654ec 100644 --- a/source/gent/tier2_hardware.rst +++ b/source/gent/tier2_hardware.rst @@ -1,8 +1,8 @@ .. _UGentT2 hardware: -############################### -HPC-UGent Tier-2 Infrastructure -############################### +######################### +HPC-UGent Tier-2 Clusters +######################### The Stevin computing infrastructure consists of several Tier2 clusters which are hosted in the S10 datacenter of Ghent University. This infrastructure is co-financed by FWO and Department of Economy, Science and Innovation (EWI). diff --git a/source/globus/access.rst b/source/globus/access.rst index f0511458b..c3bce7f43 100644 --- a/source/globus/access.rst +++ b/source/globus/access.rst @@ -7,22 +7,33 @@ Access to Globus Log in with your institution: -Visit `www.globus.org`_ and click :bgrnd1:`Login` at the top of the page. On the Globus login page, choose an organization you are already registered with, such as your school or your employer. +Visit `www.globus.org`_ and click :bgrnd1:`Login` at the top of the page. On +the Globus login page, choose an organization you are already registered with, +such as your school or your employer. .. figure:: access/access-login-screen.png -When you find it, click :bgrnd1:`Continue`. If you cannot find your organization in the list please contact the support team at data@vscentrum.be. +When you find it, click :bgrnd1:`Continue`. If you cannot find your +organization in the list please contact the support team at data@vscentrum.be. -You will be redirected to your organization's login page. Use your credentials for that organization to login. +You will be redirected to your organization's login page. Use your credentials +for that organization to login. -If that is your first time logging into Globus some organizations may ask for your permission to -release your account information to Globus. In most cases that would be a one-time request. +If that is your first time logging into Globus some organizations may ask for +your permission to release your account information to Globus. In most cases +that would be a one-time request. -Once you have logged in with your organization, Globus will ask if you would like to link to an existing account. If this is your first time logging in to Globus, click :bgrnd1:`Continue`. If you have already used another account with Globus, you can choose :bgrnd1:`Link to an existing account`. 
+Once you have logged in with your organization, Globus will ask if you would +like to link to an existing account. If this is your first time logging in to +Globus, click :bgrnd1:`Continue`. If you have already used another account with +Globus, you can choose :bgrnd1:`Link to an existing account`. -You may be prompted to provide additional information such as your organization and whether or not Globus will be used for commercial purposes. Complete the form and click :bgrnd1:`Continue`. +You may be prompted to provide additional information such as your organization +and whether or not Globus will be used for commercial purposes. Complete the +form and click :bgrnd1:`Continue`. -Finally, you need to give Globus permission to use your identity to access information and perform actions (like file transfers) on your behalf. +Finally, you need to give Globus permission to use your identity to access +information and perform actions (like file transfers) on your behalf. .. figure:: access/access-first-time-login-permissions.png diff --git a/source/globus/index.rst b/source/globus/index.rst index 7dbf2dd98..03c525e25 100644 --- a/source/globus/index.rst +++ b/source/globus/index.rst @@ -4,10 +4,10 @@ :fa:`cloud-upload-alt` Globus data sharing platform ################################################### -The `Globus platform `__ enables developers to provide -robust file transfer, sharing and search capabilities within their own research -data applications and services, while leveraging advanced identity management, -single sign-on, and authorization capabilities. +The `Globus`_ platform enables developers to provide robust file transfer, +sharing and search capabilities within their own research data applications and +services, while leveraging advanced identity management, single sign-on, and +authorization capabilities. This document is a hands-on guide to the Globus file sharing platform. It complements the official documentation at `docs.globus.org`_ from the VSC diff --git a/source/globus/python_sdk.rst b/source/globus/python_sdk.rst index e74c0622f..fb903b4ee 100644 --- a/source/globus/python_sdk.rst +++ b/source/globus/python_sdk.rst @@ -15,7 +15,7 @@ Getting started with the Python SDK Before creating your own scripts or tools on top of Globus, you first need to register them. -To do so, go to the `developers page `_ and click on 'Register your app with Globus'. +To do so, go to `developers.globus.org`_ and click on 'Register your app with Globus'. First, you register your project, or select an existing one. A project is a collection of clients with a shared list of administrators. Projects let you share the administrative burden of a collection of apps. diff --git a/source/hardware-archive.rst b/source/hardware-archive.rst deleted file mode 100644 index 0789e7909..000000000 --- a/source/hardware-archive.rst +++ /dev/null @@ -1,13 +0,0 @@ -.. _archive hardware: - -####################### -Archive of Old Clusters -####################### - -.. toctree:: - :maxdepth: 3 - - leuven/tier1_breniac - leuven/old_hardware/thinking_hardware - leuven/old_hardware/genius_hardware - antwerp/old_hardware/hopper_hardware \ No newline at end of file diff --git a/source/hardware-storage.rst b/source/hardware-storage.rst deleted file mode 100644 index 38f3169b8..000000000 --- a/source/hardware-storage.rst +++ /dev/null @@ -1,34 +0,0 @@ -.. 
_storage hardware: - -###################### -Storage Infrastructure -###################### - -The storage is organized according to the :ref:`VSC storage guidelines ` - -.. tab-set:: - - .. tab-item:: KU Leuven/UHasselt - - .. include:: leuven/tier2_hardware/kuleuven_storage_quota_table.rst - - For more information check: :ref:`KU Leuven storage` - - .. tab-item:: UGent - - .. include:: gent/storage_quota_table.rst - - For more information check: :ref:`HPC-UGent Shared storage ` - - .. tab-item:: UAntwerp (AUHA) - - .. include:: antwerp/tier2_hardware/uantwerp_storage_quota_table.rst - - For more information check: :ref:`UAntwerp storage` - - .. tab-item:: VUB - - .. include:: brussels/tier2_hardware/vub_storage_quota_table.rst - - For more information check: :ref:`VUB storage` - diff --git a/source/hardware-tier1.rst b/source/hardware-tier1.rst deleted file mode 100644 index fd642ff7d..000000000 --- a/source/hardware-tier1.rst +++ /dev/null @@ -1,11 +0,0 @@ -.. _tier1 hardware: - -##################### -Tier-1 Infrastructure -##################### - -.. toctree:: - :maxdepth: 3 - - gent/tier1_hortense - diff --git a/source/hardware-tier2.rst b/source/hardware-tier2.rst deleted file mode 100644 index 83d6af40d..000000000 --- a/source/hardware-tier2.rst +++ /dev/null @@ -1,14 +0,0 @@ -.. _tier2 hardware: - -##################### -Tier-2 Infrastructure -##################### - -.. toctree:: - :maxdepth: 3 - - antwerp/tier2_hardware - brussels/tier2_hardware - gent/tier2_hardware - leuven/tier2_hardware - diff --git a/source/hardware.rst b/source/hardware.rst deleted file mode 100644 index 96236d0c4..000000000 --- a/source/hardware.rst +++ /dev/null @@ -1,14 +0,0 @@ -.. _hardware: - -########################### -:fa:`server` Infrastructure -########################### - -.. toctree:: - :maxdepth: 3 - :numbered: 3 - - hardware-tier1 - hardware-tier2 - hardware-storage - hardware-archive diff --git a/source/index.rst b/source/index.rst index 9b746debf..bde943374 100644 --- a/source/index.rst +++ b/source/index.rst @@ -27,65 +27,143 @@ information about the services provided by the `Vlaams Supercomputer Centrum about_vsc contact_vsc - Accounts + Accounts + Compute Data - Compute Cloud FAQs .. grid:: 3 :gutter: 4 - .. grid-item-card:: :fas:`user-circle` Accounts and access + .. grid-item-card:: :fas:`user-circle` VSC Accounts :columns: 12 - :link: access/index + :link: accounts/index :link-type: doc - :class-title: h3 + :class-title: fs-3 - How to get your VSC account and access the different VSC services and platforms. - - .. grid-item-card:: :fas:`floppy-disk` Research Data - :columns: 12 - :link: data/index - :link-type: doc - :class-title: h3 - - Data transfer and storage in the VSC infrastructure. + How to get your VSC account to use the different VSC services and platforms. .. grid-item-card:: :fas:`rocket` Compute - :columns: 12 4 4 4 - :link: compute + :class-body: nested-card-container + :columns: 12 12 4 4 + :link: compute/index :link-type: doc - :class-title: h3 + :class-title: fs-3 The high-performance computing (HPC) platform provides multiple tiers of parallel processing enabling researchers to run advanced application programs efficiently, reliably and quickly. - .. grid-item-card:: :fas:`cloud` Tier-1 Cloud - :columns: 12 4 4 4 + .. grid:: 2 + :gutter: 2 + + .. grid-item-card:: Tier-1 HPC + :class-item: nested-card-top service-card-tier1 + :text-align: center + :link: compute/tier1 + :link-type: doc + + .. 
grid-item-card:: Tier-2 HPC + :class-item: nested-card-top service-card-tier2 + :text-align: center + :link: compute/tier2 + :link-type: doc + + .. grid-item-card:: Terminal Interface + :columns: 12 12 12 12 + :class-item: nested-card-top service-card-term + :text-align: center + :link: compute/terminal/index + :link-type: doc + + .. grid-item-card:: Web Portal + :class-item: nested-card-top service-card-portal + :text-align: center + :link: compute/portal/index + :link-type: doc + + .. grid-item-card:: Job Queue + :class-item: nested-card-top service-card-jobs + :text-align: center + :link: compute/jobs/index + :link-type: doc + + .. grid-item-card:: Scientific Software + :columns: 12 12 12 12 + :class-item: nested-card-top service-card-soft + :text-align: center + :link: compute/software/index + :link-type: doc + + .. grid-item-card:: :fas:`floppy-disk` Data + :class-body: nested-card-container + :columns: 12 12 4 4 + :link: data/index + :link-type: doc + :class-title: fs-3 + + The VSC Data component enables research data to remain close to the + computing infrastructure during the active phase of the data life + cycle. + + .. grid:: 1 + :gutter: 2 + + .. grid-item-card:: Tier-1 Data + :class-item: nested-card-top service-card-tier1 + :text-align: center + :link: data/tier1_data_service + :link-type: doc + + .. grid-item-card:: Tier-2 Storage + :class-item: nested-card-top service-card-tier2 + :text-align: center + :link: data/storage + :link-type: doc + + .. grid-item-card:: Globus + :class-item: nested-card-top service-card-globus + :text-align: center + :link: globus/index + :link-type: doc + + .. grid-item-card:: :fas:`cloud` Cloud + :class-body: nested-card-container + :columns: 12 12 4 4 :link: cloud/index :link-type: doc - :class-title: h3 + :class-title: fs-3 The VSC Cloud component provides *on-demand* resources in a more flexible and cloud-like manner. - .. grid-item-card:: :fas:`database` Tier-1 Data - :columns: 12 4 4 4 - :link: data/tier1_data_service - :link-type: doc - :class-title: h3 + .. grid:: 1 + :gutter: 2 - The VSC Data component enables research data to remain close to the - computing infrastructure during the active phase of the data life - cycle. + .. grid-item-card:: Virtual Machines + :class-item: nested-card-top service-card-vms + :text-align: center + :link: cloud/manage_images + :link-type: doc + + .. grid-item-card:: VMs with GPUs + :class-item: nested-card-top service-card-vmsgpu + :text-align: center + :link: cloud/gpus + :link-type: doc + + .. grid-item-card:: Orchestration + :class-item: nested-card-top service-card-orch + :text-align: center + :link: cloud/terraform + :link-type: doc .. grid-item-card:: :fas:`question-circle` FAQs :columns: 12 :link: faq :link-type: doc - :class-title: h3 + :class-title: fs-3 Collection of frequently asked questions. diff --git a/source/interesting_links.rst b/source/interesting_links.rst index d0b812fff..c12194e10 100644 --- a/source/interesting_links.rst +++ b/source/interesting_links.rst @@ -4,152 +4,135 @@ Interesting links Getting compute time in other centres ------------------------------------- -- `PRACE - Partnership for Advanced Computing in - Europe `_ has a `program to get - access to the tier-0 infrastructure in - Europe `_. -- The `DOE leadership computing program - INCITE `_ also - offers compute time to non-US-groups. It is more or less the - equivalent of the PRACE tier-0 computing program. The annual deadline - for proposals is usually end of June. 
+* `EuroHPC`_ (The European High Performance Computing Joint Undertaking) has a program to get + access to the tier-0 infrastructure in Europe. See the `EuroHPC Access Calls`_ website. + +* The `DOE leadership computing program INCITE `_ + also offers compute time to non-US-groups. It is more or less the equivalent + of the PRACE tier-0 computing program. The annual deadline for proposals is + usually end of June. Training programs in other centres ---------------------------------- -- `PRACE training - programs `_ -- `HLRS, Stuttgart - (Germany) `_ -- `Leibniz RechenZentrum, Garching, near München - (Germany) `_, - organised together with the `Erlangen Computing - Centre `_. +* `PRACE Training Portal`_ +* `HLRS, Stuttgart (Germany) `_ +* `Leibniz RechenZentrum, Garching, near München (Germany) `_, + organised together with the `Erlangen Computing Centre `_. EU initiatives -------------- -- `PRACE, Partnership for Advanced Computing in - Europe `_ -- `EGI - European Grid Initiative `_, the - successor of the `EGEE - Enabling Grids for - E-SciencE `_ - program -- `HET, HPC in Europe - Taskforce `_, a project - within `ESFRI, European Strategy Forum on Research - Infrastructures `_ -- The `e-IRG, e-Infrastructure Reflection - Group `_ - -Some grid efforts +* `EuroHPC`_ (The European High Performance Computing Joint Undertaking) +* `PRACE`_ (Partnership for Advanced Computing in Europe) +* `EGI - European Grid Initiative `_, the + successor of the `EGEE - Enabling Grids for E-SciencE `_ + program +* `HET, HPC in Europe Taskforce `_, a project + within `ESFRI, European Strategy Forum on Research Infrastructures `_ +* The `e-IRG, e-Infrastructure Reflection Group `_ + +Some Grid efforts ----------------- -- `WLCG - World-wide LHC Computing - Grid `_, the compute grid - supporting the Large Hedron Collider at Cern -- The `XSEDE `_ program in the US - which combines a large spectrum of resources across the USA in a - single virtual infrastructure -- The `Open Science Grid - (OSG) `_ is a grid focused on - high throughput computing in the US and one of the resource providers - in the XSEDE project +* `WLCG - World-wide LHC Computing Grid `_, the + compute grid supporting the Large Hedron Collider at Cern +* The `XSEDE `_ program in the US which combines a + large spectrum of resources across the USA in a single virtual infrastructure +* The `Open Science Grid (OSG) `_ is a grid + focused on high throughput computing in the US and one of the resource + providers in the XSEDE project Some HPC centres in Europe -------------------------- -- Belgium - - - `CÉCI - Consortium des Équipements de Calcul - Intensif `_, the equivalent of - the VSC run by the French Community of Belgium - -- Danmark: The `DeIC - Danish e-Infrastructure - Cooperation `_ is a virtual - organisation just as the VSC in which several universities - participate -- Germany: - - - `GCS, Gauss Centre for - Supercomputing `_, - a collaboration of three German national supercomputer centres - - - `JSC, Jülich Supercomputer - Centre `_ - of the `Forschungszentrum - Jülich `_. - - `HLRS, Höchstleistungsrechenzentrum - Stuttgart `_. 
- - `LRZ, Leibniz Rechenzentrum der Bayerischen Akademie der - Wissenschaften `_ - - - `HLRN, Norddeutscher Verbund für Hoch- und - Höchstleistungsrechnen `_, - a German supercomputer center in which 7 Bundesländer (\"States\") - in Northern Germany participate - - `Max Planck Computing and Data Facility, Rechenzentrum - Garching `_ of the Max Planck - Society and the IPP (Institute for Plasma Physics) - -- Finland: `CSC `_. -- France: - - - `GENCI, Grand Equipement National de Calcul - Intensif `_, coordinates the 3 - French national supercomputer centres: - - - `CCRT/CEA, Centre de Calcul Recherche et - Technologie `_, which also - runs the French Tier-0 cluster for the PRACE program. - - `CINES, Centre Informatique National de l’Enseignement - Supérieur `_. - - `IDRIS, Institut du Développement et des Ressources en - Informatique Scientifique `_. - -- Ireland: `ICHEC, Irish Centre for High-End - Computing `_. -- Italy: `CINECA `_, a non profit - Consortium, made up of 32 Italian universities, The National - Institute of Oceanography and Experimental Geophysics - OGS, the CNR - (National Research Council), and the Ministry of University and - Research. -- Netherlands: - `SURFsara `_, - the organisation running the Dutch academic supercomputers -- Norway: `UNINETT Sigma2 AS `_ - manages the national infrastructure for computational science in - Norway, and offers services in high performance computing and data - storage. -- Spain: `BSC, Barcelona Supercomputing - Center `_. -- Sweden: - - - `SNIC, the Swedish National Infrastructure for - Computing `_, is a meta-centre that - coordinates high-performance and grid computing in 6 Swedish - supercomputer centres and that represents Sweden in PRACE - - `PDC Center for High Performance - Computing `_ at - `KTH `_ houses the largest - supercomputer of Sweden. 
- -- Switzerland: `CSCS, the Swiss National Supercomputer - Center `_, an autonomous unit of ETH - Zürich -- United Kingdom: - - - `Archer `_, the UK national - supercomputer service run by EPCC - - `University of Bristol Advanced Computing Research - Centre `_ - - `University of Cambridge High Performance Computing - Service `_ - - `Advanced Research Computing @ - Cardiff `_ - - `EPCC, Edinburgh Parallel Computing - Centre `_ - - `Supercomputing - Wales `_, also a - consortium of universities similar to the VSC +* Belgium + + * `CÉCI - Consortium des Équipements de Calcul Intensif `_, + the equivalent of the VSC run by the French Community of Belgium + +* Denmark: + + * The `DeIC - Danish e-Infrastructure Cooperation `_ is + a virtual organisation just as the VSC in which several universities + participate + +* Germany: + + * `GCS, Gauss Centre for Supercomputing `_, + a collaboration of three German national supercomputer centres + + * `JSC, Jülich Supercomputer Centre `_ + of the `Forschungszentrum Jülich `_ + * `HLRS, Höchstleistungsrechenzentrum Stuttgart `_ + * `LRZ, Leibniz Rechenzentrum der Bayerischen Akademie der Wissenschaften `_ + + * `HLRN, Norddeutscher Verbund für Hoch- undHöchstleistungsrechnen `_, + a German supercomputer center in which 7 Bundesländer (*States*) in Northern Germany participate + * `Max Planck Computing and Data Facility, Rechenzentrum Garching `_ + of the Max Planck Society and the IPP (Institute for Plasma Physics) + +* Finland: + + * `CSC `_ + +* France: + + * `GENCI, Grand Equipement National de Calcul Intensif `_, + coordinates the 3 French national supercomputer centres: + + * `CCRT/CEA, Centre de Calcul Recherche et Technologie `_, + which also runs the French Tier-0 cluster for the `PRACE`_ program + * `CINES, Centre Informatique National de l’Enseignement Supérieur `_ + * `IDRIS, Institut du Développement et des Ressources en Informatique Scientifique `_ + +* Ireland: + + * `ICHEC, Irish Centre for High-End Computing `_ + +* Italy: + + * `CINECA `_, a non profit Consortium, made up of 32 + Italian universities, The National Institute of Oceanography and + Experimental Geophysics - OGS, the CNR (National Research Council), and the + Ministry of University and Research. + +* Netherlands: + + * `SURF `_, is the ICT cooperative of Dutch education + and research institutions and runs the Dutch academic supercomputers + +* Norway: + + * `UNINETT Sigma2 AS `_ manages the national + infrastructure for computational science in Norway, and offers services in + high performance computing and data storage. 
+ +* Spain: + + * `BSC, Barcelona Supercomputing Center `_ + +* Sweden: + + * `SNIC, the Swedish National Infrastructure for Computing `_, + is a meta-centre that coordinates high-performance and grid computing in 6 Swedish + supercomputer centres and that represents Sweden in `PRACE`_ + * `PDC Center for High Performance Computing `_ at + `KTH `_ houses the largest supercomputer of Sweden + +* Switzerland: + + * `CSCS, the Swiss National Supercomputer Center `_, + an autonomous unit of ETH Zürich + +* United Kingdom: + + * `Archer `_, the UK national supercomputer service + run by EPCC + * `University of Bristol Advanced Computing Research Centre `_ + * `University of Cambridge High Performance Computing Service `_ + * `Advanced Research Computing @ Cardiff `_ + * `EPCC, Edinburgh Parallel Computing Centre `_ + * `Supercomputing Wales `_, also a + consortium of universities similar to the VSC diff --git a/source/jobs/clusters_torque.rst b/source/jobs/clusters_torque.rst deleted file mode 100644 index 6b671dcd7..000000000 --- a/source/jobs/clusters_torque.rst +++ /dev/null @@ -1,16 +0,0 @@ -.. grid:: 3 - :gutter: 4 - - .. grid-item-card:: UGent - :columns: 12 4 4 4 - - * Tier-1 :ref:`Hortense ` - * Tier-2 :ref:`All clusters ` - - .. grid-item-card:: VUB - :columns: 12 4 4 4 - - (backwards compatibility) - - * Tier-2 :ref:`Hydra ` - diff --git a/source/leuven/genius_quick_start.rst b/source/leuven/genius_quick_start.rst index 05b6c6815..49996e0aa 100644 --- a/source/leuven/genius_quick_start.rst +++ b/source/leuven/genius_quick_start.rst @@ -10,7 +10,8 @@ for most HPC workloads. Access to the cluster --------------------- -Genius can be accessed from the :ref:`Genius login nodes `, or from your web browser via the :ref:`Open On-Demand ` service. +Genius can be accessed from the :ref:`Genius login nodes `, +or from your web browser via the :ref:`Open OnDemand ` service. For example, you can log in to any of the login node using SSH:: diff --git a/source/leuven/slurm_specifics.rst b/source/leuven/slurm_specifics.rst index ddc9ad527..a5504c495 100644 --- a/source/leuven/slurm_specifics.rst +++ b/source/leuven/slurm_specifics.rst @@ -8,17 +8,16 @@ information regarding Slurm, there are additional points to consider when using Slurm on Tier-2 clusters hosted at KU Leuven. +.. _leuven_compute_credits: + Compute credits --------------- When submitting a job, you need to provide a valid Slurm credit account holding enough compute credits for the job using the ``-A/--account`` option. For more information, please consult the following pages: -.. toctree:: - :maxdepth: 1 - - ./credits - ./slurm_accounting +* :ref:`KU Leuven credits` +* :ref:`accounting_leuven` .. _leuven_job_shell: diff --git a/source/leuven/tier1_breniac.rst b/source/leuven/tier1_breniac.rst deleted file mode 100644 index dbd1297ed..000000000 --- a/source/leuven/tier1_breniac.rst +++ /dev/null @@ -1,9 +0,0 @@ -############## -Tier-1 Breniac -############## - - -.. toctree:: - :maxdepth: 3 - - old_hardware/breniac/breniac_hardware diff --git a/source/leuven/tier2_hardware.rst b/source/leuven/tier2_hardware.rst index acaa685cf..957f7314e 100644 --- a/source/leuven/tier2_hardware.rst +++ b/source/leuven/tier2_hardware.rst @@ -1,8 +1,9 @@ -KU Leuven/UHasselt Tier-2 Infrastructure -======================================== - .. _kul_tier2: +################################## +KU Leuven/UHasselt Tier-2 Clusters +################################## + .. 
toctree:: :maxdepth: 2 diff --git a/source/leuven/tier2_hardware/tier2_login_nodes.rst b/source/leuven/tier2_hardware/tier2_login_nodes.rst index e72e63b5f..decc6c32b 100644 --- a/source/leuven/tier2_hardware/tier2_login_nodes.rst +++ b/source/leuven/tier2_hardware/tier2_login_nodes.rst @@ -9,7 +9,7 @@ The access to both machines is possible - either via the Genius login nodes (see below), as wICE itself has no dedicated login node -- or via the :ref:`Open On-Demand ` on your web browser +- or via the :ref:`Open OnDemand ` on your web browser Login infrastructure -------------------- diff --git a/source/leuven/wice_quick_start.rst b/source/leuven/wice_quick_start.rst index b570de36d..ab2fb2484 100644 --- a/source/leuven/wice_quick_start.rst +++ b/source/leuven/wice_quick_start.rst @@ -10,7 +10,7 @@ nodes with GPUs. wICE does not have separate login nodes and can be accessed either from the :ref:`Genius login nodes `, or from your web browser via the -:ref:`Open On-Demand ` service. +:ref:`Open OnDemand ` service. .. _running jobs on wice: diff --git a/source/redirects.list b/source/redirects.list new file mode 100644 index 000000000..9302aee80 --- /dev/null +++ b/source/redirects.list @@ -0,0 +1,135 @@ +access/access_and_data_transfer.html /compute/terminal/index.html +access/access_from_multiple_machines.html /accounts/access_from_multiple_machines.html +access/access_methods.html /compute/terminal/index.html +access/access_using_mobaxterm.html /terminal/compute/mobaxterm_access.html +access/access_using_mobaxterm_advanced_ssh_keys.html /compute/terminal/mobaxterm_access_ssh_keys.html +access/account_management.html /accounts/management.html +access/account_request.html /accounts/vsc_account.html +access/authentication.html /accounts/authentication.html +access/creating_a_ssh_tunnel_using_openssh.html /accounts/creating_a_ssh_tunnel_using_openssh.html +access/creating_a_ssh_tunnel_using_putty.html /compute/terminal/putty_ssh_tunnel.html +access/data_transfer.html /data/transfer.html +access/data_transfer_using_winscp.html /data/transfer/winscp.html +access/data_transfer_with_filezilla.html /data/transfer/filezilla.html +access/data_transfer_with_scp_sftp.html /data/transfer/scp_sftp.html +access/eclipse_as_a_remote_editor.html /software/eclipse.html +access/eclipse_intro.html /compute/software/eclipse.html +access/generating_keys.html /accounts/generating_keys.html +access/generating_keys_on_windows.html /accounts/generating_keys_windows.html +access/generating_keys_with_mobaxterm.html /accounts/generating_keys_mobaxterm.html +access/generating_keys_with_openssh.html /accounts/generating_keys_linux.html +access/generating_keys_with_openssh_on_os_x.html /accounts/generating_keys_macos.html +access/generating_keys_with_putty.html /accounts/generating_keys_putty.html +access/getting_access.html /accounts/vsc_account.html +access/how_to_create_manage_vsc_groups.html /accounts/vsc_user_groups.html +access/how_to_request_more_quota.html /accounts/how_to_request_more_quota.html +access/index.html /accounts/index.html +access/linux_client.html /compute/terminal/linux_client.html +access/macos_client.html /compute/terminal/macos_client.html +access/managing_disk_usage.html /accounts/managing_disk_usage.html +access/messed_up_keys.html /accounts/messed_up_keys.html +access/mfa_login.html /accounts/mfa_login.html +access/multiplatform_client_tools.html /compute/terminal/index.html +access/nx_start_guide.html /compute/terminal/nx_start_guide.html +access/paraview_remote_visualization.html 
/compute/software/paraview_remote_visualization.html +access/scientific_domains.html /accounts/scientific_domains.html +access/setting_up_a_ssh_proxy.html /accounts/setting_up_a_ssh_proxy.html +access/setting_up_a_ssh_proxy_with_putty.html /compute/terminal/putty_ssh_proxy.html +access/ssh_config.html /compute/terminal/ssh_config.html +access/text_mode_access_using_openssh.html /compute/terminal/openssh_access.html +access/text_mode_access_using_openssh_or_jellyfissh.html /compute/terminal/openssh_jellyfissh_access.html +access/text_mode_access_using_putty.html /compute/terminal/putty_access.html +access/upload_new_key.html /accounts/generating_keys.html +access/using_pageant.html /accounts/using_pageant.html +access/using_ssh_agent.html /accounts/ssh_agent.html +access/using_the_xming_x_server_to_display_graphical_programs.html /compute/terminal/windows_xming.html +access/vnc_support.html /compute/terminal/index.html +access/vpn.html /compute/terminal/vpn.html +access/vsc_account.html /accounts/vsc_account.html +access/where_can_i_store_what_kind_of_data.html /accounts/where_can_i_store_what_kind_of_data.html +access/windows_client.html /compute/terminal/windows_client.html +access/wsl.html /compute/terminal/wsl.html +brussels/tier2_hardware/hydra_hardware.html /brussels/tier2_hardware/hydra.html +compute.html /compute/index.html +compute/cluster-archive.html /compute/tier2-archive.html +compute/storage.html /data/tier2-infrastructure.html +data/tier1_data_main_index.html /data/tier1_data_service.html +data/transfer/external_computer.html /data/transfer.html +data/transfer/network_drives.html /data/transfer.html +globus/globus_main_index.html /globus/index.html +globus/globus_platform.html /globus/index.html +hardware.html /compute/infrastructure.html +hardware-archive.html /compute/tier2-archive.html +hardware-storage.html /compute/storage.html +hardware-tier1.html /compute/tier1.html +hardware-tier2.html /compute/tier2.html +jobs/basic_linux_usage.html /compute/terminal/basic_linux.html +jobs/checkpointing_framework.html /compute/jobs/checkpointing_framework.html +jobs/clusters_slurm.html /compute/jobs/clusters_slurm.html +jobs/clusters_torque.html /compute/jobs/clusters_torque.html +jobs/credits.html /compute/jobs/credits.html +jobs/gpus.html /compute/jobs/gpus.html +jobs/how_to_get_started_with_shell_scripts.html /compute/terminal/shell_scripts.html +jobs/index.html /compute/jobs/index.html +jobs/job_advanced.html /compute/jobs/job_advanced.html +jobs/job_management.html /compute/jobs/job_management.html +jobs/job_submission.html /compute/jobs/job_submission.html +jobs/job_types.html /compute/jobs/job_types.html +jobs/monitoring_memory_and_cpu_usage_of_programs.html /compute/jobs/monitoring_memory_and_cpu_usage_of_programs.html +jobs/running_jobs.html /compute/jobs/running_jobs.html +jobs/running_jobs_torque.html /compute/jobs/running_jobs_torque.html +jobs/slurm_pbs_comparison.html /compute/jobs/slurm_pbs_comparison.html +jobs/specifying_output_files_and_notifications.html /compute/jobs/specifying_output_files_and_notifications.html +jobs/specifying_resources.html /compute/jobs/specifying_resources.html +jobs/starting_programs_in_a_job.html /compute/jobs/starting_programs_in_a_job.html +jobs/submitting_and_managing_jobs_with_torque_and_moab.html /compute/jobs/submitting_and_managing_jobs_with_torque_and_moab.html +jobs/what_if_jobs_fail_after_starting_successfully.html /compute/jobs/what_if_jobs_fail_after_starting_successfully.html +jobs/why_doesn_t_my_job_start.html 
/compute/jobs/why_doesn_t_my_job_start.html +jobs/worker_framework.html /compute/jobs/worker_framework.html +jobs/worker_or_atools.html /compute/jobs/worker_or_atools.html +jobs/workflows_using_job_dependencies.html /compute/jobs/workflows_using_job_dependencies.html +jobs/job_submission_and_credit_reservations.html /compute/jobs/credits.html +jobs/the_job_system_what_and_why.html /compute/jobs/index.html +jobs/using_software.html /compute/software/using_software.html +leuven/data_transfer_kuleuven_network_drives.html /data/transfer/network_drives/kuleuven.html +leuven/lecturer_s_procedure_to_request_student_accounts_ku_leuven_uhasselt.html accounts/lecturer_procedure_student_accounts_kuleuven_uhasselt.html +leuven/mfa_quickstart.html /access/mfa_quickstart.html +leuven/tier2_hardware/mfa_login.html /access/mfa_login.html +leuven/services/openondemand.html /compute/portal/ondemand.html +leuven/tier1_breniac.html /leuven/old_hardware/breniac/breniac_hardware.html +software/eclipse_with_ptp_and_version_control.html /compute/software/eclipse_with_ptp_and_version_control.html +software/parameterweaver.html /compute/software/parameterweaver.html +software/perl_package_management.html /compute/software/perl_package_management.html +software/postprocessing_tools.html /compute/software/postprocessing_tools.html +software/r_command_line_arguments_in_scripts.html /compute/software/r_command_line_arguments_in_scripts.html +software/r_integrating_c_functions.html /compute/software/r_integrating_c_functions.html +software/specific_eclipse_issues_on_os_x.html /compute/software/specific_eclipse_issues_on_os_x.html +software/toolchains.html /compute/software/toolchains.html +software/blas_and_lapack.html /compute/software/blas_and_lapack.html +software/books_parallel.html /compute/software/books_parallel.html +software/eclipse.html /compute/software/eclipse.html +software/eclipse_access_to_a_vsc_subversion_repository.html /compute/software/eclipse_access_to_a_vsc_subversion_repository.html +software/eclipse_as_a_remote_editor.html /compute/software/eclipse_as_a_remote_editor.html +software/eclipse_introduction_and_installation.html /compute/software/eclipse_introduction_and_installation.html +software/foss_toolchain.html /compute/software/foss_toolchain.html +software/hybrid_mpi_openmp_programs.html /compute/software/hybrid_mpi_openmp_programs.html +software/index.html /compute/software/index.html +software/intel_toolchain.html /compute/software/intel_toolchain.html +software/intel_trace_analyzer_collector.html /compute/software/intel_trace_analyzer_collector.html +software/matlab_getting_started.html /compute/software/matlab_getting_started.html +software/mpi_for_distributed_programming.html /compute/software/mpi_for_distributed_programming.html +software/ms_visual_studio.html /compute/software/ms_visual_studio.html +software/openmp_for_shared_memory_programming.html /compute/software/openmp_for_shared_memory_programming.html +software/parallel_software.html /compute/software/parallel_software.html +software/python_package_management.html /compute/software/python_package_management.html +software/software_development.html /compute/software/software_development.html +software/subversion.html /compute/software/subversion.html +software/tortoisesvn.html /compute/software/tortoisesvn.html +software/version_control_systems.html /compute/software/version_control_systems.html +software/git.html /compute/software/git.html +software/matlab_parallel_computing.html /compute/software/matlab_parallel_computing.html 
+software/module_system_basics.html /compute/software/module_system_basics.html +software/r_devtools.html /compute/software/r_devtools.html +software/r_package_management.html /compute/software/r_package_management.html +software/singularity.html /compute/software/containers.html +software/using_software.html /compute/software/using_software.html diff --git a/source/security_measures_20200520.rst b/source/security_measures_20200520.rst index efc60c1db..433516c5d 100644 --- a/source/security_measures_20200520.rst +++ b/source/security_measures_20200520.rst @@ -1,9 +1,8 @@ Security measures 20 May 2020 ============================= -In response to reports of security incidents in several high-profile HPC centers -throughout Europe -(https://csirt.egi.eu/academic-data-centers-abused-for-crypto-currency-mining/), all +In response to reports of `security incidents in several high-profile HPC centers +throughout Europe `_, all VSC sites are taking a number of concerted pre-emptive security actions. These actions will affect you, although we are trying to minimize the impact as much as possible. diff --git a/source/software/books_parallel.rst b/source/software/books_parallel.rst deleted file mode 100644 index 69e1d47df..000000000 --- a/source/software/books_parallel.rst +++ /dev/null @@ -1,134 +0,0 @@ -.. _books: - -Books about Parallel Computing -============================== - -This is a very incomplete list, permanently under construction, of -books about parallel computing. - -General -------- - -- G. Hager and G. Wellein. `Introduction to high performance computing for - scientists and engineers `_. - Chapman & Hall, 2010. This book first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. -- V. Eijkhout. `Introduction to high performance scientific - computing `_. - 2011. This is a textbook that teaches the bridging topics between - numerical analysis, parallel computing, code performance, large scale - applications. It can be freely downloaded from `the author's page on - the book `_ - (though you have to respect the copyright of course). -- A. Grama, A. Gupta, G. Kapyris, and V. Kumar. `Introduction to - parallel computing (2nd edition) `_. - Pearson Addison Wesley, 2003. ISBN 978-0-201-64865-2. A somewhat - older book, but still used a lot as textbook in academic courses on - parallel computing. -- C. Lin and L. Snyder. `Principles of parallel programming - `_. - Pearson Addison Wesley, 2008. ISBN 978-0-32148790-2. This books - discusses parallel programming both from a more abstract level and a - more practical level, touching briefly threads programming, OpenMP, - MPI and PGAS-languages (using ZPL). -- M. McCool, A.D. Robinson, and J. Reinders. `Structured parallel - programming: patterns for efficient computation - `_. - Morgan Kaufmann, 2012. 
ISBN 978-0-12-415993-8 - -Grid computing --------------- - -- F. Magoules, J. Pan, K.-A. Tan, and A. Kumar. `Introduction to grid - computing `_. - CRC Press, 2019. ISBN 9780367385828. - -MPI ---- - -- A two-volume set in tutorial style: - - - W. Gropp, E. Lusk, and A. Skjellum. `Using MPI: portable parallel - programming with the Message-Passing Interface, third - edition `__. - MIT Press, 2014. ISBN 978-0-262-57139-2 (paperback) or - 978-0-262-32659-9 (ebook). This edition of the book is based on - the MPI-3.0 specification. - - W. Gropp, T. Hoeffler, R. Thakur and E. Lusk. `Using advanced MPI: - modern features of the Message-Passing Interface `_. - MIT Press, 2014. ISBN 978-0-262-52763-7 (paperback) or - 978-0-262-32662-9 (ebook). - - The books replace the earlier editions of "Using MPI: Portable - Parallel Programming with the Message-Passing Interface" and the - book "Using MPI-2: Advanced Features of the Message-Passing - Interface". -- A two-volume set in reference style, but somewhat outdated: - - - M. Snir, S.W. Otto, S. Huss-Lederman, D.W. Walker, and J. - Dongarra. `MPI: the complete reference. Volume 1: the MPI core - (2nd - Edition) `_. - MIT Press, 1998. ISBN 978-0-262-69215-1. - - W. Gropp, S. Huss-Lederman, A. Lumsdaine, E. Lusk, B. Nitzberg, W. - Saphir, and M. Snir. `MPI: the complete reference, Volume 2: the - MPI-2 extensions `_. - MIT Press, 1998. ISBN 978-0-262-57123-4. - - The two volumes are also available as one set with `ISBN number - 978-0-262-69216-8 `_. - -OpenMP ------- - -- B. Chapman, G. Jost, and R. van der Pas. `Using OpenMP - portable - shared memory parallel - Programming `_. - The MIT Press, 2008. ISBN 978-0-262-53302-7. -- R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, and R. Menon. - `Parallel programming in OpenMP `_. - Academic Press, 2000. ISBN 978-1-55860-671-5. - -GPU computing -------------- - -- M. Scarpino. `OpenCL in action `_. - Manning Publications Co., 2012. ISBN 978-1-617290-17-6 -- D.R. Kaeli, P. Mistry, D. Schaa, and D.P. Zhang. `Heterogeneous - computing with OpenCL 2.0, 1st - Edition `_. - Morgan Kaufmann, 2015. ISBN 978-0-12-801414-1 (print) or - 978-0-12-801649-7 (eBook). A thourough rewrite of the earlier - well-selling book for OpenCL 1.2 that saw 2 editions. - -Xeon Phi computing ------------------- - -- R. Rahman. `Intel Xeon Phi coprocessor architecture and tools: the - guide for application - Developers `_. - Apress, 2013. SBN13: 978-1-4302-5926-8. This is a free book that is - aimed at the Knights Corner generation of Xeon Phi processors. The - newer Knights Landing generation has a reworked vector instruction - set, but most principles explained in this book remain valid also for - the newer generation(s). -- J. Jeffers, J. Reinders, and A. Sodani. `Intel Xeon Phi processor - high performance programming, 2nd Edition (Knights Landing - Edition) `_. - Morgan Kaufmann, 2016. ISBN 978-0-12-809194-4. Errata and - downloadable code examples for this book and other books by Jeffers - and Reinders are maintained on the blog - `lotsofcores.com `_. - -Case studies and examples of programming paradigms --------------------------------------------------- - -- J. Reinders and J. Jeffers (editors). `High performance parallelism - pearls. Volume 1: multicore and many-core programming - approaches `_. - Morgan Kaufmann, 2014. ISBN 978-0-12-802118-7 -- J. Reinders and J. Jeffers (editors). `High performance parallelism - pearls. Volume 2: multicore and many-core programming - approaches `_. - Morgan Kaufmann, 2015. 
ISBN 978-0-12-803819-2 - -*Please mail further suggestions to geertjan.bex@uhasselt.be* diff --git a/source/software/index.rst b/source/software/index.rst deleted file mode 100644 index 7be0b8140..000000000 --- a/source/software/index.rst +++ /dev/null @@ -1,11 +0,0 @@ -##################### -:fas:`cubes` Software -##################### - -.. toctree:: - :maxdepth: 3 - - using_software - software_development - postprocessing_tools - diff --git a/source/software/postprocessing_tools.rst b/source/software/postprocessing_tools.rst deleted file mode 100644 index a1a0d1f8a..000000000 --- a/source/software/postprocessing_tools.rst +++ /dev/null @@ -1,24 +0,0 @@ -Postprocessing tools -==================== - -This section is still rather empty. It will be expanded over time. - -Visualization software ----------------------- - -- `ParaView `__ is a free - visualization package. It can be used in three modes: - - - Installed on your desktop: you have to transfer your data to your - desktop system - - As an interactive process on the cluster: this option is available - only for :ref:`NoMachine NX - users ` (go to the - Applications menu -> HPC -> Visualisation -> Paraview). - - In client-server mode: The interactive part of ParaView is running - on your desktop, while the server part that reads the data and - renders the images (no GPU required as ParaView also contains a - software OpenGL renderer) and sends the rendered images to the - client on the desktop. Setting up ParaView for this scenario is - explained in the :ref:`page on ParaView remote - visualization `. diff --git a/source/user_support_addresses.rst b/source/user_support_addresses.rst index 23513172a..71c34144d 100644 --- a/source/user_support_addresses.rst +++ b/source/user_support_addresses.rst @@ -1,8 +1,20 @@ -- KU Leuven/Hasselt University: hpcinfo@kuleuven.be -- Ghent University: hpc@ugent.be, for further info, see - the `web site `_ -- University of Antwerp: hpc@uantwerpen.be, for further - info on the `CalcUA Core Facility web - page `_ -- Vrije Universiteit Brussel: hpc@vub.be, see also our `website `_ for VUB-HPC specific info. -- Tier-1 compute: compute@vscentrum.be, Tier-1 cloud: cloud@vscentrum.be, Tier-1 data: data@vscentrum.be. +* VSC Sites: + + * KU Leuven/Hasselt University: hpcinfo@kuleuven.be + + * Ghent University: hpc@ugent.be, for further info, see + the `UGent HPC website `_ + + * University of Antwerp: hpc@uantwerpen.be, for further info on the `CalcUA + Core Facility web page `_ + + * Vrije Universiteit Brussel: hpc@vub.be, see also the + `VUB-HPC website `_ for information specific to VUB + +* Tier-1 VSC Services: + + * Tier-1 compute: compute@vscentrum.be + + * Tier-1 cloud: cloud@vscentrum.be + + * Tier-1 data: data@vscentrum.be diff --git a/source/vsc_tutorials.rst b/source/vsc_tutorials.rst index 1aec74a4c..ab2f30759 100644 --- a/source/vsc_tutorials.rst +++ b/source/vsc_tutorials.rst @@ -6,10 +6,10 @@ Site Tutorials ============== VSC sites carry out regular trainings about Linux and the VSC HPC systems. -You may always find the updated list of `VSC trainings `__ -via the VSC website. It is complementary to the information found in this -user portal, the latter being the reference manual. Below you can find links -to those training materials. +You may always find the updated list of trainings via the `VSC Training`_ +website. It is complementary to the information found in this user portal, the +latter being the reference manual. Below you can find links to those training +materials. 
KULeuven/UHasselt ----------------- diff --git a/source/web_tutorials.rst b/source/web_tutorials.rst index ceab80b8a..d9d6922f9 100644 --- a/source/web_tutorials.rst +++ b/source/web_tutorials.rst @@ -6,32 +6,28 @@ Web tutorials PRACE ----- -| The `PRACE Training - Portal `__ has a number of - `training - videos `__ - online from their courses. +The `PRACE Training Portal`_ has a number of training +videos in the `PRACE Tutorials`_ section. LLNL - Lawrence Livermore National Laboratory (USA) --------------------------------------------------- -`LLNL provides several -tutorials. `__ Not all -are applicable to the VSC clusters, but some are. E.g., +You can find many training materials in the `LLNL Tutorials`_ web page. Not all +information in there applies to the VSC clusters, but all the basics concepts +are the same. -- `Introduction to Parallel - Computing `__ -- `OpenMP `__ -- `Advanced - MPI `__ +For instance, the following contain valuable information about parallelization: + +* `LLNL OpenMP Tutorial`_ +* `LLNL Parallel Computing Tutorial`_ +* `LLNL Advanced MPI`_ There are also some tutorials on Python. NCSA - National Center for Supercomputing Applications (USA) ------------------------------------------------------------ -NCSA runs the `CI-Tutor (Cyberinfrastructure -Tutor) `__ service that also -contains a number of interesting tutorials. At the moment of writing, -there is no fee and everybody can subscribe. +NCSA runs `hpc-training.org `_ that provides free +online training on HPC. At the moment of writing, there is no fee and everybody +can subscribe. diff --git a/source/what_are_standard_terms_used_in_hpc.rst b/source/what_are_standard_terms_used_in_hpc.rst index d00c5881b..dcd7a8542 100644 --- a/source/what_are_standard_terms_used_in_hpc.rst +++ b/source/what_are_standard_terms_used_in_hpc.rst @@ -4,20 +4,21 @@ What are standard terms used in HPC? ==================================== HPC cluster - A relatively tightly coupled collection of compute - nodes, the interconnect typically allows for high bandwidth, low - latency communication. Access to the cluster is provided through a - login node. A resource manager and scheduler provide the logic to - schedule jobs efficiently on the cluster. A detailed description of - the :ref:`VSC clusters and other - hardware ` is available. + A relatively tightly coupled collection of compute nodes, the interconnect + typically allows for high bandwidth, low latency communication. Access to + the cluster is provided through a login node. A resource manager and + scheduler provide the logic to schedule jobs efficiently on the cluster. + The pages on :ref:`tier1 hardware` and :ref:`tier2 hardware` provide a + detailed description of the technical characteristics of the HPC clusters + managed by VSC. Compute node - An individual computer, part of an HPC cluster. - Currently most compute nodes have two sockets, each with a single CPU, - volatile working memory (RAM), a hard drive, typically small, and - only used to store temporary files, and a network card. The hardware - specifications for the various :ref:`VSC compute - nodes ` is available. + An individual computer that is part of an HPC cluster. Currently most + compute nodes have two sockets, each with a single CPU, volatile working + memory (RAM), a hard drive, typically small, and only used to store + temporary files, and a network card. 
+ The pages on :ref:`tier1 hardware` and :ref:`tier2 hardware` provide the + the technical specifications of the compute nodes found in the HPC clusters + managed by VSC. CPU Central Processing Unit, the chip that performs the actual computation in a compute node. A modern CPU is composed of numerous diff --git a/styleguide.md b/styleguide.md index 77cd37654..d7cf0c0b4 100644 --- a/styleguide.md +++ b/styleguide.md @@ -100,6 +100,11 @@ some VSC sites: Example: [Applying for your VSC account](source/access/vsc_account.rst?plain=1#L19) + Tabs can be synced, which means that whenever the tab of one VSC site is + selected, all other tab panels with that same tab will automatically switch + to that one as well. This happens not only on the open page, but across the + whole documentation website. + * [Badges](#badges) are useful for situations where information that is general needs a remark that a applies to one or a few VSC site. We have pre-defined one badge for each VSC site: `|KUL|` for KU Leuven, `|UA|` for UAntwerp, @@ -300,26 +305,41 @@ Figure are automatically centered, scaled and can have captions. Information can be organized in tabs using the `tab-set::` directive. This is specially useful to show site-specific information in a compact manner. +Tabs can be synced, which means that whenever one tab is selected, all other +tab panels in that same ``sync-group`` will automatically switch to that same +tab as well. The title of the tab is irrelevant, and the sync feature is +controlled with the ``sync`` property. Syncing of tabs happens not only on the +tab panes on the open page, but across the whole documentation website. + ``` .. tab-set:: + :sync-group: vsc-sites .. tab-item:: KU Leuven/UHasselt + :sync: kuluh Information specific to KU Leuven/UHasselt - .. tab-item:: UGent + .. tab-item:: UAntwerp + :sync: ua - Information specific to UGent + Information specific to UAntwerp - .. tab-item:: UAntwerp (AUHA) + .. tab-item:: UGent + :sync: ug - Information specific to UAntwerp + Information specific to UGent .. tab-item:: VUB + :sync: vub Information specific to VUB ``` +Keep in mind to always include these 4 tabs on tab panels for ``vsc-sites``. +Even if there is no information for some of the sites. Otherwise users having +selected the missing tab will get the first one activated. + ## Grids and Cards It is possible to present some information inside its own frame. These frames diff --git a/tasks.md b/tasks.md deleted file mode 100644 index 23f2951f3..000000000 --- a/tasks.md +++ /dev/null @@ -1,39 +0,0 @@ -# Goals of the re-design - -1. Organize the existing documentation in 4 main sections: Access, Compute, - Cloud and Data -2. All documents should be visible and reachable organically through - navigation. Users should be able to figure out were to find information - without search. -3. Site-specific information should be minimized. Disagreements between sites - will be forwarded to a CUE meeting. Remaining site-specific information will - be organized in tabs to avoid clutter. - -## Main Tasks - -1. Phase 1 - * ️ configure PyData theme - * ️✅ define color scheme - * ✅ adapt documentation to Sphinx_Design formatting elements - * ✅ organize documentation in 4 main sections - * ✅ fix all formatting errors from RST files - * ✅ add all RST files to a TOC tree - * ✅ disable automatic labels from section names -2. 
Phase 2 - * ✅ re-structure separation between OS with cards - * ✅ review mixed use of TOCs and sections - * ✅ review sections with large differences between sites: VNC, VPN - * ✅ seek outdated information in documentation - * ✅ add custom 404 page -3. Phase 3 - * ✅ split _Access and data transfer_ in 2 different sections - * ✅ reorganize _Accounts and access_ info to make SSH keys optional - * ✅ merge all information about data and storage in a single section - * ✅ replace images with figures - * ✅ fix broken links with redirects (sphinx-reredirects) - * ✅ write style guide - * ✅ make CI fail on any Sphinx warning -4. Phase 4 - * 🔄 add support for markdown with MyST - * ⬜ add widget to preselect home institute - diff --git a/website_dump.txt b/website_dump.txt deleted file mode 100644 index a5be72fdb..000000000 --- a/website_dump.txt +++ /dev/null @@ -1,17051 +0,0 @@ -"id","title","body" -1,"","

    © FWO

    " -2,"","

    The VSC-infrastructure consists of two layers. The central Tier-1 infrastructure is designed to run large parallel jobs. It also contains a small accelerator testbed to experiment with upcoming technologies. The Tier-2 layer runs the smaller jobs, is spread over a number of sites, is closer to users and more strongly embedded in the campus networks. The Tier-2 clusters are also interconnected and integrated with each other. -

    " -3,"","

    This infrastructure is accessible to all scientific research taking place in Flemish universities and public research institutes. In some cases a small financial contribution is required. Industry can use the infrastructure for a fee to cover the costs associated with this. -

    " -4,"What is a supercomputer?","

    A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone. But there are also important differences.

    " -5,"The VSC in Flanders","

    The VSC is a partnership of five Flemish university associations. The Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain. There is also a local support office in Hasselt. -

    " -6,"Tier-1 infrastructure","

    Central infrastructure for large parallel compute jobs and an experimental accelerator system.

    " -7,"Tier-2 infrastructure","

    An integrated distributed infrastructure for smaller supercomputing jobs with varying hardware needs.

    " -8,"Getting access","

    Who can access, and how do I get my account?

    " -9,"Tier-1 starting grant","

    A programme to get a free allocation on the Tier-1 supercomputer to perform the necessary tests to prepare a regular Tier-1 project application.

    " -10,"Project access Tier-1","

    A programme to get a compute time allocation on the Tier-1 supercomputers based on a scientific project with evaluation.

    " -11,"Buying compute time","

    Without an awarded scientific project, it is possible to buy compute time. We also offer a free try-out so you can test if our infrastructure is suitable for your needs.

    " -12,"","

    Need help? Have more questions?

    " -13,"User portal","

    On these pages, you will find everything that is useful for users of our infrastructure: the user documentation, server status, upcoming training programs and links to other useful information on the web.

    " -15,"","

    Below we give information about current downtime (if applicable) and planned maintenance of the various VSC clusters.

    " -23,"","

    There is no clear agreement on the exact definition of the term ‘supercomputer’. Some say a supercomputer is a computer with at least 1% of the computing power of the fastest computer in the world. But according to this definition, there are currently only a few hundred supercomputers in the world. The TOP500 list is a list of the supposedly 500 fastest computers in the world, updated twice a year. -

    One could take 1‰ of the performance of the fastest computer as the criterion, but that is just as arbitrary. Stating that a supercomputer should perform at least X trillion computations per second is not a useful definition either. Because of the fast evolution of the technology, this definition would be outdated in a matter of years. The first smartphone of a well-known manufacturer, launched in 2007, had about the same computing power and more memory than the computer used to predict the weather in Europe 30 years earlier. -

    So what is considered a ‘supercomputer’ is very time-bound, at least in terms of absolute compute power. Let us just agree that a supercomputer is a computer that is hundreds or thousands of times faster than your smartphone or laptop. -

    But is a supercomputer so different from your laptop or smartphone? Yes and no. Since roughly 1975 the key word in supercomputing has been parallelism. But this also applies to your PC or smartphone. PC processor manufacturers started to experiment with simple forms of parallelism at the end of the nineties. A few years later the first processors appeared with multiple cores that could perform calculations independently from each other. A laptop mostly has 2 or 4 cores, and modern smartphones have 2, 4 or in some rare cases 8 cores, although these cores are a little slower than the ones in a typical laptop. -

    Around 1975 manufacturers started to experiment with vector processors. These processors apply the same operation to a set of numbers simultaneously. Shortly thereafter, supercomputers with multiple processors working independently from each other appeared on the market. Similar technologies are nowadays used in the processor chips of laptops and smartphones. In the eighties, supercomputer designers started to experiment with another kind of parallelism. Several rather simple processors - sometimes just standard PC processors like the venerable Intel 80386 - were linked together with fast networks and collaborated to solve large problems. These computers were cheaper to develop and much simpler to build, but required frequent changes to the software. -

    In modern supercomputers, parallelism is pushed to extremes. In most supercomputers, all forms of parallelism mentioned above are combined at an unprecedented scale and can take on extreme forms. All modern supercomputers rely on some form of vector computing or related technologies and consist of building blocks - nodes - uniting tens of cores and interconnecting through a fast network to a larger whole. Hence the term ‘compute cluster’ is often used. -

    Supercomputers must also be able to read and interpret data at a very high speed. Here the key word is again parallelism. Many supercomputers have several network connections to the outside world. Their permanent storage system consists of hundreds or even thousands of hard disks or SSDs linked together into one extremely large and extremely fast storage system. This type of technology has probably not significantly influenced the development of laptops, as it would not be very practical to carry a laptop around with 4 hard drives. Yet this technology does appear to some extent in the fast SSD drives of some modern laptops and smartphones: the faster ones use several memory chips in parallel to increase their performance, and it is a standard technology in almost any server storing data. -

    As we have already indicated to some extent in the text above, a supercomputer is more than just hardware. It also needs properly written software. The program you wrote during your student years will not run 10,000 times faster just because you run it on a supercomputer. On the contrary, there is a fair chance that it won't run at all, or will run slower than on your PC. Most supercomputers - and all supercomputers at the VSC - use a variant of the Linux operating system, enriched with additional software to combine all compute nodes into one powerful supercomputer. Due to the high price of such a computer, you're rarely the only user, but will rather share the infrastructure with others. -

    So you may have to wait a little before your program runs. Furthermore, your monitor is not directly connected to the supercomputer. Proper software is also required here: your application software has to be adapted to run well on a supercomputer. Without these changes, your program will not run much faster than on a regular PC. You may of course still run hundreds or thousands of copies simultaneously, for example when you wish to explore a parameter space. This is called ‘capacity computing’. -
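
    As a minimal illustration of the idea (the parameter names and values below are invented for this sketch and not taken from any real VSC workload), a capacity-computing campaign is simply a collection of fully independent runs over a parameter grid, each of which could be submitted as its own job::

        # Hypothetical sketch: enumerate independent runs over a parameter grid.
        # Each combination would become one independent job on the cluster.
        from itertools import product

        temperatures = [280, 290, 300]   # example values only
        pressures = [1.0, 2.0]

        for i, (t, p) in enumerate(product(temperatures, pressures)):
            print(f"run {i}: temperature={t} K, pressure={p} bar")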

    If you wish to solve truly large problems within a reasonable timeframe, you will have to adapt your application software to exploit every form of parallelism within a modern supercomputer and use several hundreds, or even thousands, of compute cores simultaneously to solve one large problem. This is called ‘capability computing’. Of course, the problem you wish to solve has to be large enough for this approach to make sense. Every problem has an intrinsic limit to the speedup you can achieve on a supercomputer. The larger the problem, the higher the speedup you can achieve. -
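
    One standard way to formalise this intrinsic limit is Amdahl's law (mentioned here purely as an illustration; it is not part of the original text): if a fraction s of the work is inherently serial, the speedup on n cores can never exceed 1/(s + (1-s)/n), which tends to 1/s no matter how many cores you add. Larger problems usually have a smaller serial fraction, which is why they scale further. A minimal sketch in Python::

        def amdahl_speedup(serial_fraction: float, cores: int) -> float:
            # Upper bound on the speedup when part of the work cannot be parallelised.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        # With a 5% serial fraction the speedup saturates near 20x,
        # even on ten thousand cores.
        for n in (16, 256, 10_000):
            print(n, round(amdahl_speedup(0.05, n), 1))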

    This also implies that a software package that was cutting edge in your research area 20 years ago is unlikely to still be so, because it is not properly adapted to modern supercomputers, while newer applications exploit supercomputers much more efficiently and consequently generate faster, more accurate results. -

    To some extent this also applies to your PC. Here again you are dealing either with software that exploits the parallelism of a modern PC quite well, or with software that doesn't. As a ‘computational scientist’ or supercomputer user you constantly have to be open to new developments within this area. Fortunately, in most application domains, a lot of efficient software already exists which succeeds in using all the parallelism that can be found in modern supercomputers. -

    " -25,"","

    The successor of Muk is expected to be installed in the spring of 2016.

    There is also a small test cluster for experiments with accelerators (GPU and Intel Xeon Phi) with a view to using this technology in future VSC clusters.

    The Tier-1 cluster Muk

    The Tier-1 cluster Muk has 528 computing nodes, each with two 8-core Intel Xeon processors from the Sandy Bridge generation (E5-2670, 2.6 GHz). Each node features 64 GiB RAM, for a total memory capacity of more than 33 TiB. The computing nodes are connected by an FDR InfiniBand interconnect with a fat tree topology. This network has a high bandwidth (more than 6.5 GB/s per direction per link) and a low latency. The storage is provided by a disk system with a total disk capacity of 400 TB and a peak bandwidth of 9.5 GB/s.
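
    As a rough sanity check of these figures (assuming 8 double-precision floating-point operations per core per cycle for these AVX-capable Sandy Bridge cores, a detail not stated in the text), the core count, total memory and peak performance can be reproduced with a few lines of Python::

        nodes, sockets, cores_per_cpu = 528, 2, 8
        clock_ghz = 2.6
        flops_per_cycle = 8   # assumption: double precision with AVX

        cores = nodes * sockets * cores_per_cpu                     # 8448 cores
        memory_tib = nodes * 64 / 1024                              # 33 TiB in total
        peak_tflops = cores * clock_ghz * flops_per_cycle / 1000    # about 175.7 TFlops

        print(cores, round(memory_tib), round(peak_tflops, 1))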

    The cluster achieves a peak performance of more than 175 Tflops and a Linpack performance of 152.3 Tflops. With this result, the cluster was for 5 consecutive periods in the Top500 list of fastest supercomputers in the world:

    List       Position
    06/2012    118
    11/2012    163
    06/2013    239
    11/2013    306
    06/2014    430

    In November 2014 the cluster fell just outside the list, but still delivered 99% of the performance of the system in position 500.

    Accelerator testbed

    In addition to the Tier-1 cluster Muk, the VSC has an experimental GPU/Xeon Phi cluster. Eight nodes in this cluster have two NVIDIA K20X GPUs with the accompanying software stack, and eight nodes are equipped with two Intel Xeon Phi 5110P ("Knights Corner" generation) boards. The nodes are interconnected by means of a QDR InfiniBand network. For practical reasons, these nodes were integrated into the KU Leuven/Hasselt University Tier-2 infrastructure.

    Software

    Like on all other VSC-clusters, the operating system of Muk is a variant of Linux, in this case Scientific Linux, which in turn is based on Red Hat Linux. The system also features a comprehensive stack of software development tools, including the GNU and Intel compilers, a debugger and profiler for parallel applications, and different versions of OpenMPI and Intel MPI.

    There is also an extensive set of freely available applications installed on the system. More software can be installed at the request of users. However, when software is not freely available, users have to take care of the license themselves, and therefore also of its financing.

    Detailed overview of the installed software

    Access to the Tier-1 system

    Academic users can access the Tier-1 cluster Muk through a project application. There are two types of project applications:

    • The Tier-1 starting grant of up to 100 node days to test and/or optimize software, typically with a view to a regular request for computing time. There is a continuous assessment process for this project type.
      Learn more
    • The regular project application, for allocations between 500 and 5000 node days. The applications are assessed on scientific excellence and technical feasibility by an evaluation committee of foreign experts. There are three cut-off dates a year at which the submitted project proposals are evaluated. The users are also expected to pay a small contribution towards the cost.
      Learn more

    To use the GPU / Xeon Phi cluster it is sufficient to contact the HPC coordinator of your institution.

    Industrial users, non-Flemish research institutions and not-for-profit organizations can also purchase computing time on the Tier-1 infrastructure. For this you can contact the Hercules Foundation.

    " -27,"","

    The VSC does not only rely on the Tier-1 supercomputer to respond to the need for computing capacity. The HPC clusters of the University of Antwerp, VUB, Ghent University and KU Leuven constitute the VSC Tier-2 infrastructure, with a total computing capacity of 416.2 TFlops. Hasselt University invests in the HPC cluster of Leuven. Each cluster has its own specificity and is managed by the university’s dedicated HPC/ICT team. The clusters are interconnected with a 10 Gbps BELNET network, ensuring maximal cross-site access to the different cluster architectures. For instance, a VSC user from Antwerp can easily log in to the infrastructure at Leuven.
    -

    Infrastructure

    • The Tier-2 of the University of Antwerp consists of a cluster with 168 nodes, accounting for 3,360 cores (336 processors) and 75 TFlops. Storage capacity is 100 TB. By the spring of 2017 a new cluster will gradually become available, containing 152 regular compute nodes and some facilities for visualisation and for testing GPU computing and Xeon Phi computing.
    • The Tier-2 of VUB (Hydra) consists of 3 clusters of successive generations of processors with an estimated peak capacity of 75 TFlops. The total storage capacity is 446 TB. It has a relatively large memory per compute node and is therefore best suited for computing jobs that require a lot of memory per node or per core. This configuration is complemented by a High Throughput Computing (HTC) grid infrastructure.
    • The Tier-2 of Ghent University (Stevin) represents a capacity of 226 TFlops (11,328 cores over 568 nodes) and a storage capacity of 1,430 TB. It is composed of several clusters, 1 of which is intended for single-node computing jobs and 4 for multi-node jobs. One cluster has been optimized for memory-intensive computing jobs and Big Data problems.
    • The joint KU Leuven/UHasselt Tier-2, housed by KU Leuven, focuses on smaller capability computing jobs and tasks requiring a fairly high disk bandwidth. The infrastructure consists of a thin node cluster with 7,616 cores and a total capacity of 230 TFlops. A shared memory system with 14 TB of RAM and 640 cores yields an additional 12 TFlops. A total storage of 280 TB provides the necessary I/O capacity. Furthermore, there are a number of nodes with accelerators (including the GPU/Xeon Phi cluster purchased as an experimental Tier-1 setup) and 2 visualization nodes.

    More information

    A more detailed description of the complete infrastructure is available in the "Available hardware" section of the user portal.


    Computational science has - alongside experiments and theory - become the fully fledged third pillar of science. Supercomputers offer unprecedented opportunities to simulate complex models and as such to test theoretical models against reality. They also make it possible to extract valuable knowledge from massive amounts of data. -

    -

    For many calculations, a laptop or workstation is no longer sufficient. Sometimes dozens or hundreds of CPU cores and hundreds of gigabytes or even terabytes of RAM-memory are necessary to produce an acceptable solution within a reasonable amount of time. -

    -

    Our offer

    -

    An overview of our services: -

    • Access to a variety of supercomputing infrastructure, suited for many applications.
    • Guidance and advice when determining whether your software is suited to our infrastructure.
    • Training (from beginner to advanced level) on the use of supercomputers. This training covers all aspects: how to run a program on a supercomputer, how to develop software, and for some application domains even how to use a couple of popular packages.
    • Support with optimizing your use of the infrastructure.
    • A wide range of free software. For commercial software it is the responsibility of the user to take care of a license, with a number of packages as an exception; for those packages we ourselves ensure that they run optimally.

    More information?

    -

    More information can be found in our training section and user portal. -

    " -41,"","

    Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives. -

    Not only have supercomputers changed scientific research in a fundamental way ...

    Supercomputers are indispensable for scientific research and for a modern R&D environment. ‘Computational Science’ is - alongside theory and experiment - the third fully fledged pillar of science. For centuries, scientists used pen and paper to develop new theories based on scientific experiments. They also set up new experiments to verify the predictions derived from these theories (a process often carried out with pen and paper). It goes without saying that this method was slow and cumbersome.

    As an astronomer you cannot simply make Jupiter a little bigger to see what effect this larger size would have on our solar system. As a nuclear scientist it would be difficult to deliberately lose control over a nuclear reaction to ascertain the consequences of such a move. (Super)computers can do this and are indeed revolutionizing science.

    Complex theoretical models - too advanced for ‘pen and paper’ results - are simulated on computers. The results they deliver are then compared with reality and used for prediction purposes. Supercomputers have the ability to handle huge amounts of data, thus enabling experiments that would not be achievable in any other way. Large radio telescopes or the LHC particle accelerator at CERN could not function without supercomputers processing mountains of data.

    … but also industry and our society

    But supercomputers are not just an expensive toy for researchers at universities. Numerical simulation also opens up new possibilities in industrial R&D. For example in the search for new medicinal drugs, new materials or even the development of a new car model. Biotechnology also requires the large data processing capacity of a supercomputer. The quest for clean energy, a better understanding of the weather and climate evolution, or new technologies in health care all require a powerful supercomputer.

    Supercomputers have a huge impact on our everyday lives. Have you ever wondered why the showroom of your favourite car brand contains many more car types than 20 years ago? Or how each year a new and faster smartphone model is launched on the market? We owe all of this to supercomputers.

    " -45,"","

    In the past few decades supercomputers have not only revolutionized scientific research but have also been used increasingly by businesses all over the world to accelerate design, production processes and the development of innovative services. -

    Situation

    Modern microelectronics has created many new opportunities. Today powerful supercomputers enable us to collect and process huge amounts of data. Complex systems can be studied through numerical simulation without having to build a prototype or set up a scaled experiment beforehand. All this leads to a quicker and cheaper design of new products, cost-efficient processes and innovative services. To support this development in Flanders, the Flemish Government founded the Flemish Supercomputer Center (VSC) in late 2007 as a partnership between the government and the Flemish university associations. The accumulated expertise and infrastructure are assets we want to make available to industry.

    Technology Offer

    A collaboration with the VSC offers your company a good number of benefits. -

    • Together we will identify which expertise within the Flemish universities and their associations is appropriate for you when rolling out High Performance Computing (HPC) within your company.
    • We can also assist with the technical writing of a project proposal for financing, for example through the IWT (Agency for Innovation by Science and Technology).
    • You can participate in courses on HPC, including tailor-made courses provided by the VSC.
    • You will have access to a supercomputer infrastructure with a dedicated, on-site team assisting you during the start-up phase.
    • As a software developer, you can also deploy HPC software technologies to develop more efficient software which makes better use of modern hardware.
    • A shorter turnaround time for your simulation or data study boosts productivity and increases the responsiveness of your business to new developments.
    • The possibility to carry out more detailed simulations or to analyse larger amounts of data can yield new insights which in turn lead to improved products and more efficient processes.
    • A quick analysis of the data collected during a production process helps to detect and correct abnormalities early on.
    • Numerical simulation and virtual engineering reduce the number of prototypes and accelerate the discovery of potential design problems. As a result you are able to market a superior product faster and cheaper.

    About the VSC

    The VSC was launched in late 2007 as a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. Our team also collaborates with many research groups at various universities and helps them and their industrial partners with all aspects of infrastructure usage. -

    Besides a competitive infrastructure, the VSC team also offers full assistance with the introduction of High Performance Computing within your company. -

    " -49,"","

    The Flemish Supercomputer Centre (VSC) is a virtual centre making supercomputer infrastructure available for both the academic and industrial world. This centre is managed by the Research Foundation - Flanders (FWO) in partnership with the five Flemish university associations.

    " -51,"HPC for academics","

    With HPC technology you can refine your research and gain new insights that take it to new heights.


    " -57,"","

    You can fix this yourself in a few easy steps via the account management web site.

    There are two ways in which you may have messed up your keys:

    1. The keys that were stored in the .ssh subdirectory of your home directory on the cluster were accidentally deleted, or the authorized_keys file was accidentally deleted:
      1. Go to account.vscentrum.be
      2. Choose your institute and log in.
      3. At the top of the page, click 'Edit Account'.
      4. Press the 'Update' button on that web page.
      5. Exercise some patience, within 30 minutes, your account should be accessible again.
    2. You deleted your (private) keys on your own computer, or don't know the passphrase anymore
      1. Generate a new public/private key pair. Follow the procedure outlined in the client sections for Linux, Windows and macOS (formerly OS X); a short command-line example is given below the list.
      2. Go to account.vscentrum.be
      3. Choose your institute and log in.
      4. At the top of the page, click 'Edit Account'.
      5. Upload your new public key adding it in the 'Add Public Key' section of the page. Use 'Browse...' to find your public key, press 'Add' to upload it.
      6. You may now delete the entry for the "lost" key if you know which one that is, but this is not crucial.
      7. Exercise some patience, within 30 minutes, your account should be accessible again.
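
    For case 2, a new key pair can be generated on a Linux or macOS client with OpenSSH's ssh-keygen. This is only a minimal sketch (the file name is just an example); the full procedure is described in the client pages:

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc

    Enter a passphrase when prompted; the resulting ~/.ssh/id_rsa_vsc.pub file is the public key that you upload in the 'Add Public Key' section.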
    " -59,"","

    Before you can really start using one of the clusters, there are several things you need to do or know:

    1. You need to log on to the cluster via an ssh client to one of the login nodes. This will give you a command line. The software you'll need to use on your client system depends on its operating system (a short example is given further below).
    2. Your account also comes with a certain amount of data storage capacity in at least three subdirectories on each cluster. You'll need to familiarise yourself with them.
    3. Before you can do some work, you'll have to transfer the files that you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to do that is by using an sftp client. It again requires some software on your client system which depends on its operating system.
    4. Optionally, if you wish to use programs with a graphical user interface, you'll need an X server on your client system. Again, this depends on the latter's operating system.
    5. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you'll need to select and load the modules that you need.

    Logging in to the login nodes of your institute's cluster may not work if your computer is not on your institute's network (e.g., when you work from home). In those cases you will have to set up a VPN (Virtual Private Network) connection if your institute provides this service.
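
    As an illustration of steps 1 and 3 above on a Linux or macOS client (the VSC user ID and login node name are placeholders; use your own ID and the login node listed for your institute's cluster):

    $ ssh vsc40000@login.hpc.ugent.be
    $ sftp vsc40000@login.hpc.ugent.be
    sftp> put results.tar.gz
    sftp> get output.log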

    " -61,"","

    What is a group?

    The concept of group as it is used here is that of a POSIX group and is a user management concept from the Linux OS (and many other OSes, not just UNIX-like systems). Groups are a useful concept to control access to data or programs for groups of users at once, using so-called group permissions. Three important use cases are:

    1. Controlling access to licensed software, e.g., when one or only some research groups pay for the license
    2. Creating a shared subdirectory to collaborate with several VSC-users on a single project
    3. Controlling access to a project allocation on clusters implementing a credit system (basically all clusters at KU Leuven)

    VSC groups are managed without any interaction from the system administrators. This provides a highly flexible way for users to organise themselves. Each VSC group has members and moderators: -

    • A user can become a member of a group after a moderator approves it. As a regular user, you can check all groups you belong to on the VSC account management web site account.vscentrum.be.
    • A moderator can add or delete members and moderators.
      • When you create a new group, you become both the first member and moderator of that group.

    Warning: do not go overboard in creating new groups. Mounting file systems over NFS, which happens when you log on to a VSC cluster at a different site, does not work properly if you belong to more than 32 different groups, and so far we have not found a solution for this.

    Managing groups

    Viewing the groups you belong to

    You will in fact see that you always belong to at least two groups, depending on the institution from which you have your VSC account.
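
    On the cluster itself you can also list the POSIX groups your account belongs to from the command line (the output below is purely illustrative):

    $ groups
    vsc30001 lexamplegroup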

    Join an existing group

    • Go to the VSC account management web site
    • Click on "New group"
    • Fill in the name of the group
      • The name of the group will automatically begin with the first letter of the hosting institute (a for Antwerp, b for Brussels, g for Ghent, l for Leuven)
      • If the name is wrong, the request will be treated as a request for a new group
    • In the message field, describe who you are to motivate the request, so the moderator knows who is making the request
      • Moderators will deny all unclear requests

    Create new group

    • Go to the VSC account management web site
    • Click on "New group"
    • Fill in the group name
    • You will receive a confirmation email
    • After the confirmation, you are member and moderator of the new group

    Working with file and directory permissions

    • The chgrp (from change group) command is used on Unix-like systems to change the group associated with a file. General syntax:
      chgrp [options] group target1 [target2 ...]
    • The chmod command (abbreviated from change mode) changes the file system modes of files and directories. The modes include permissions and special modes. General syntax:
      chmod [options] mode[,mode] file1 [file2 ...]
    • Hints:
      • To view the current permissions, type:
        $ ls -l file
      • -R: changes the modes of directories and files recursively.
      • Setting the setgid permission on a directory (chmod g+s) causes new files and subdirectories created within it to inherit its group ID, rather than the primary group ID of the user who created the file (the owner ID is never affected, only the group ID). Newly created subdirectories inherit the setgid bit. Note that setting the setgid permission on a directory only affects the group ID of files and subdirectories created after the setgid bit is set; it is not applied to existing entities. Setting the setgid bit on existing subdirectories must be done manually, with a command such as:
        $ find /path/to/directory -type d -exec chmod g+s '{}' \;
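
    Putting these commands together, a typical way to set up a directory to share data with the members of a VSC group could look as follows (the group name and path are just examples):

    $ mkdir $VSC_DATA/shared_project
    $ chgrp lexampleproject $VSC_DATA/shared_project
    $ chmod 2770 $VSC_DATA/shared_project

    The mode 2770 combines the setgid bit with read, write and execute permission for the owner and the group, while removing all access for other users, so files created in the directory remain accessible to the whole group.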
    " -63,""," - - - - - - -
    -

    Total disk space used on filesystems with quota

    -

    On filesystems with 'quota enabled', you can check the amount of disk space that is available for you, and the amount of disk space that is in use by you. Unfortunately, there is not a single command that will give you that information for all file systems in the VSC. -

    -
    • quota is the standard command to request your disk quota. Its output is in 'blocks', but can also be given in MB/GB if you use the '-s' option.
    • It does not work on GPFS file systems, however. On those you have to use mmlsquota (a minimal example is given below). This is the case for the scratch space at KU Leuven and on the Tier-1.
    • On some clusters, these commands are currently disabled.
    • Also, using these commands on another cluster than the one of your home institution will fail to return information about the quota on your VSC_HOME and VSC_DATA directories; it will only show the quota for your VSC_SCRATCH directory on that system.
    -
    $ quota -s
    Disk quotas for user vsc31234 (uid 123456):
      Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
    nas2-ib1:/mnt/home
                    648M   2919M   3072M            3685       0       0
    nas2-ib1:/mnt/data
                  20691M  24320M  25600M            134k       0       0
    nas1-ib1:/mnt/site_scratch
                       0  24320M  25600M               1       0       0

    Each line represents a file system you have access to, $VSC_HOME, $VSC_DATA, and, for this particular example, $VSC_SCRATCH_SITE. The blocks column shows your current usage, quota is the usage above which you will be warned, and limit is "hard", i.e., when your usage reaches this limit, no more information can be written to the file system, and programs that try will fail.

    Some file systems have limits on the number of files that can be stored, and those are represented by the last four columns. The number of files you currently have is listed in the column files, quota and limit represent the soft and hard limits for the number of files.
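
    On the GPFS file systems mentioned above, the corresponding information is obtained with mmlsquota instead; in its simplest form it is invoked without arguments (the exact options and output layout depend on the GPFS version installed on the cluster):

    $ mmlsquota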


    Disk space used by individual directories

    -

    The command to check the size of all subdirectories in the current directory is du:

    $ du -h
    4.0k    ./.ssh
    0       ./somedata/somesubdir
    52.0k   ./somedata
    56.0k   .

    This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory "." (this includes files stored in the current directory). The -h option ensures that sizes are displayed in human readable form; omitting it will show sizes in kilobytes.

    -

    If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

    $ du -s
    54864 .

    If you want to see the size of any file or top level subdirectory in the current directory, you could use the following command:

    $ du -s *
    12      a.out
    3564    core
    4       mpd.hosts
    51200   somedata
    4       start.sh
    4       test

    Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. If you also want this size to be "human readable" (and not always the total number of kilobytes), you add the parameter -h:

    $ du -h -s $VSC_DATA/*
    50M     /data/leuven/300/vsc30001/somedata
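
    A small additional trick: piping the output of du through sort lists the entries of the earlier example from small to large, which makes it easier to spot what takes up most space:

    $ du -s * | sort -n
    4       mpd.hosts
    4       start.sh
    4       test
    12      a.out
    3564    core
    51200   somedata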
    " -65,"","" -67,"","
      -
    • HPC cluster: relatively tightly coupled collection of compute nodes, the interconnect typically allows for high bandwidth, low latency communication. Access to the cluster is provided through a login node. A resource manager and scheduler provide the logic to schedule jobs efficiently on the cluster. A detailed description of the VSC clusters and other hardware is available.
    • -
    • Compute node: an individual computer, part of an HPC cluster. Currently most compute nodes have two sockets, each with a single CPU, volatile working memory (RAM), a hard drive (typically small, and only used to store temporary files) and a network card. The hardware specifications for the various VSC compute nodes are available.
    • -
    • CPU: Central Processing Unit, the chip that performs the actual computation in a compute node. A modern CPU is composed of numerous cores, typically 8 or 10. It has also several cache levels that help in data reuse.
    • -
    • Core: part of a modern CPU. A core is capable of running processes, and has its own processing logic and floating point unit. Each core has its own level 1 and level 2 cache for data and instructions. Cores share last level cache.
    • -
    • Cache: a relatively small amount of (very) fast memory (when compared to regular RAM), on the CPU chip. A modern CPU has three cache levels: L1 and L2 are specific to each core, while L3 (also referred to as Last Level Cache, LLC) is shared among all the cores of a CPU.
    • -
    • RAM: Random Access Memory used as working memory for the CPUs. On current hardware, the size of RAM is expressed in gigabytes (GB). The RAM is shared between the two CPUs on each of the sockets. This is volatile memory in the sense that once the process that creates the data ends, the data in the RAM is no longer available. The complete RAM can be accessed by each core.
    • -
    • Walltime: the actual time an application runs (as in clock on the wall), or is expected to run. When submitting a job, the walltime refers to the maximum amount of time the application can run, i.e., the requested walltime. For accounting purposes, the walltime is the amount of time the application actually ran, typically less than the requested walltime.
    • -
    • Node-hour: unit of work indicating that an application ran for a time t on n nodes, such that n*t = 1 hour. Using 1 node for 1 hour is 1 node-hour. This is irrespective of the number of cores on the node you actually use.
    • -
    • Core-hour: unit of work indicating that an application ran for a time t on p cores, such that p*t = 1 hour. Using 20 cores, no matter on how many nodes, for 1 hour results in 20 core-hours.
    • -
    • Node-day: unit of work indicating that an application ran for a time t on n nodes such that n*t = 24 hours. Using 3 nodes for 8 hours results in 1 node day.
    • -
    • Memory requirement: the amount of RAM needed to successfully run an application. It can be specified per process for a distributed application, expressed in GB.
    • -
    • Storage requirement: the amount of disk space needed to store the input and output of an application, expressed in GB or TB.
    • -
    • Temporary storage requirement: the amount of disk space needed to store temporary files during the run of an application, expressed in GB or TB.
    • -
    • Single user per node policy: indicates that when a process of user A runs on a compute node, no process of another user will run on that compute node concurrently, i.e., the compute node will be exclusive to user A. However, if one or more processes of user A are running on a compute node, and that node's capacity in terms of available cores and memory is not exceeded, processes part of another job submitted by user A may start on that compute node.
    • -
    • Shared memory application: an application that uses multiple cores for its computations, concurrent computations are executed by threads, typically one per core. Each thread has access to the application's global memory space (hence the name), and has some thread-private memory. A shared memory application runs on a single compute node. See also multi-core application.
    • -
    • Multi-core application: a multi-core application uses more than one core during its execution by running multiple threads, also called a shared memory application.
    • -
    • Distributed application: an application that uses multiple compute nodes for its computations, concurrent computations are executed as processes. These processes communicate by exchanging messages, typically implemented by calls to an MPI library. Messages can be used to exchange data and coordinate the execution.
    • -
    • Serial application: a program that runs a single process, with a single thread. All computations are done sequentially, i.e., one after the other, no explicit parallelism is used.
    • -
    • Process: an independent computation running on a computer. It may interact with other processes, and it may run multiple threads. A serial and shared memory application run as a single process, while a distributed application consists of multiple, coordinated processes.
    • -
    • Threads: a process can perform multiple computations, i.e., program flows, concurrently. In scientific applications, threads typically process their own subset of data, or a subset of loop iterations.
    • -
    • MPI: Message passing interface, a de-facto standard that defines functions for inter-process communication. Many implementations in the form of libraries exist for C/C++/Fortran, some vendor specific.
    • -
    • OpenMP: a standard for shared memory programming that abstracts away explicit thread management.

    This is a very incomplete list, permanently under construction, of books about parallel computing.

    General

    Grid computing

    MPI

    OpenMP

    GPU computing

    • M. Scarpino. OpenCL in Action. Manning Publications Co., 2012. ISBN 978-1-617290-17-6
    • D.R. Kaeli, P. Mistry, D. Schaa, and D.P. Zhang. Heterogeneous Computing with OpenCL 2.0, 1st Edition. Morgan Kaufmann, 2015. ISBN 978-0-12-801414-1 (print) or 978-0-12-801649-7 (eBook). A thorough rewrite of the earlier well-selling book for OpenCL 1.2 that saw two editions.

    Xeon Phi computing

    Case studies and examples of programming paradigms

    Please mail further suggestions to Kurt.Lust@uantwerpen.be. -

    " -71,"","

    PRACE

    The PRACE Training Portal has a number of training videos online from their courses.
    -

    LLNL - Lawrence Livermore National Laboratory (USA)

    LLNL provides several tutorials. Not all are applicable to the VSC clusters, but some are. E.g., -

    There are also some tutorials on Python. -

    NCSA - National Center for Supercomputing Applications (USA)

    NCSA runs the CI-Tutor (Cyberinfrastructure Tutor) service that also contains a number of interesting tutorials. At the moment of writing, there is no fee and everybody can subscribe. -

    " -73,"","

    Getting ready to request an account

    • Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on Windows is using the freely available PuTTY client which you can then also use to log on to the clusters.

    Connecting to the cluster

    • Open a text-mode session using an ssh client
      • PuTTY is a simple-to-use and freely available GUI SSH client for Windows.
      • pageant can be used to manage active keys for PuTTY, WinSCP and FileZilla so that you don't need to enter the passphrase all the time.
      • Setting up an SSH proxy with PuTTY to log on to a node protected by a firewall through another login node, e.g., to access the tier-1 system muk.
      • Creating an SSH tunnel using PuTTY to establish network communication between your local machine and the cluster that would otherwise be blocked by firewalls.
    • Transfer data using Secure FTP (SFTP) clients:
    • Display graphical programs:
      • You can install a so-called X server: Xming. X is the protocol that is used by most Linux applications to display graphics on a local or remote screen.
      • On the KU Leuven/UHasselt clusters it is also possible to use the NX Client to log on to the machine and run graphical programs. Instead of an X-server, another piece of client software is needed. That software is currently available for Windows, OS X, Linux, Android and iOS.
    • If you install the free UNIX emulation layer cygwin with the necessary packages, you can use the same OpenSSH client as on Linux systems and all pages about ssh and data transfer from the Linux client pages apply.

    Programming tools

    • By installing the UNIX emulation layer Cygwin with the appropriate packages you can mimic the VSC cluster environment very well (at least with the foss toolchain). Cygwin supports the GNU compilers and also contains packages for OpenMPI (look for "openmpi") and some other popular libraries (FFTW, HDF5, ...). As such it can turn your Windows PC into a computer that can be used to develop software for the cluster, provided you don't rely on too many external libraries (which may be hard to install). This can come in handy if you sometimes need to work off-line. If you have a 64-bit Windows system (which most recent computers have), it is best to go for the 64-bit version of Cygwin. After all, the VSC clusters are also running a 64-bit OS.
    • Microsoft Visual Studio can also be used to develop OpenMP or MPI programs. If you do not use any Microsoft-specific libraries but stick to plain C or C++, the programs can be recompiled on the VSC clusters. Microsoft is slow in implementing new standards though. As of January 2015, OpenMP support is still stuck at version 2.0 of the standard.
    • Eclipse is a popular multi-platform Integrated Development Environment (IDE) very well suited for code development on clusters. On Windows Eclipse relies by default on the cygwin toolchain for its compilers and other utilities, so you need to install that too.
    • There are also other ways to access subversion repositories on the VSC clusters or other subversion servers:
    " -75,"","

    Prerequisite: PuTTY

    By default, there is no ssh client software available on Windows, so you will typically have to install one yourself. We recommend using PuTTY, which is freely available. You do not even need to install it; just download the executable and run it! Alternatively, an installation package (MSI) is also available from the download site that will also install the other tools that you might need.

    You can copy the PuTTY executables together with your private key on a USB stick to connect easily from other Windows computers. -

    Generating a public/private key pair

    To generate a public/private key pair, you can use the PuTTYgen key generator. Start it and follow the steps below. Alternatively, you can follow a short video explaining step-by-step the process of generating a new key pair and saving it in a format required by the different VSC login nodes.

    1. In 'Parameters' (at the bottom of the window), choose 'SSH-2 RSA' and set the number of bits in the key to 2048.
    2. Click on 'Generate'. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field 'Public key for pasting into OpenSSH authorized_keys file'.
    3. Next, you should specify a passphrase in the 'Key passphrase' field and retype it in the 'Confirm passphrase' field. Remember, the passphrase protects the private key against unauthorized use, so it is best to choose one that is not too easy to guess. Additionally, it is advised to fill in the 'Key comment' field to make the key more easily identifiable afterwards.
    4. Finally, save both the public and private keys in a secure place (e.g., a folder on your personal computer or on your personal USB stick) with the buttons 'Save public key' and 'Save private key'. We recommend using the name "id_rsa.pub" for the public key, and "id_rsa.ppk" for the private key.

    If you use another program to generate a key pair, please remember that the keys need to be in the OpenSSH format to access the VSC clusters.

    Converting PuTTY keys to OpenSSH format

    OpenSSH is a very popular command-line SSH client originating from the Linux world but now available on many operating systems. Therefore its file format is a very popular one. Some applications, such as Eclipse's SSH components, can not handle private keys generated with PuTTY, only OpenSSH compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. -

    1. Start PuTTYgen.
    2. From the 'Conversions' menu, select 'Import key' and choose the file containing your PuTTY private key that is used to authenticate on the VSC cluster.
    3. When prompted, enter the appropriate passphrase.
    4. From the 'Conversions' menu, select 'Export OpenSSH key' and save it as 'id_rsa' (or any other name if the former already exists). Remember the file name and its location, it will have to be specified in the configuration process of, e.g., Eclipse.
    5. Exit PuTTYgen.
    " -79,"","

    2) Click on 'Generate'. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field 'Public key for pasting into OpenSSH authorized_keys file'.

    3) Next, you should specify a passphrase in the 'Key passphrase' field and retype it in the 'Confirm passphrase' field. Remember, the passphrase protects the private key against unauthorized use, so it is best to choose one that is not too easy to guess. Additionally, it is adviced to fill in the 'Key comment' field to make it easier identifiable afterwards.

    " -81,"","


    4) Finally, save both the public and private keys in a secure place (i.e., a folder on your personal computer, or on your personal USB stick, ...) with the buttons 'Save public key' and 'Save private key'. We recommend to use the name \"id_rsa.pub\" for the public key, and \"id_rsa.ppk\" for the private key.

    If you use another program to generate a key pair, please remember that they need to be in the OpenSSH format to access the VSC clusters.

    Converting PuTTY keys to OpenSSH format

    OpenSSH is a very popular command-line SSH client originating from the Linux world but now available on many operating systems. Therefore its file format is a very popular one. Some applications, such as Eclipse's SSH components, can not handle private keys generated with PuTTY, only OpenSSH compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse.

    1. Start PuTTYgen.
    2. From the 'Conversions' menu, select 'Import key' and choose the file containing your PuTTY private key that is used to authenticate on the VSC cluster.
    3. When prompted, enter the appropriate passphrase.
    4. From the 'Conversions' menu, select 'Export OpenSSH key' and save it as 'id_rsa' (or any other name if the former already exists). Remember the file name and its location, it will have to be specified in the configuration process of, e.g., Eclipse.
    5. Exit PuTTYgen.
    " -83,"","

    Each of the major VSC institutions has its own user support:

    What information should I provide when contacting user support?

    When you submit a support request, it helps if you always provide: -

    1. your VSC user ID (or VUB netID),
    2. contact information - it helps to specify your preferred mail address and phone number for contact,
    3. an informative subject line for your request,
    4. the time the problem occurred,
    5. the steps you took to resolve the problem.

    Below, you will find more useful information you can provide for various categories of problems you may encounter. Although it may seem like more work to you, it will often save a few iterations and get your problem solved faster. -

    If you have problems logging in to the system

    then provide the following information: -

    1. your operating system (e.g., Linux, Windows, macOS, ...),
    2. your client software (e.g., PuTTY, OpenSSH, ...),
    3. your location (e.g., on campus, at home, abroad),
    4. whether the problem is systematic (how many times did you try, over which period) or intermittent,
    5. any error messages shown by the client software, or an error log if it is available.

    If installed software malfunctions/crashes

    then provide the following information: -

    1. the name of the application (e.g., Ansys, Matlab, R, ...),
    2. the module(s) you load to use the software (e.g., R/3.1.2-intel-2015a),
    3. the error message the application produces,
    4. whether the error is reproducible,
    5. if possible, a procedure and data to reproduce the problem,
    6. if the application was run as a job, the jobID(s) of (un)successful runs.

    If your own software malfunctions/crashes

    then provide the following information: -

    1. the location of the source code,
    2. the error message produced at build time or runtime,
    3. the toolchain and other module(s) you load to build the software (e.g., intel/2015a with HDF5/1.8.4-intel-2015a),
    4. if possible and applicable, a procedure and data to reproduce the problem,
    5. if the software was run as a job, the jobID(s) of (un)successful runs.
    " -85,"","

    A complete list of all available software on a particular cluster can be obtained by typing:

    $ module av

    To use these software packages, you have to work with the module system. On the newer systems, we use the same naming conventions for packages across all sites. Due to the ever expanding list of packages, we have also made some adjustments and do not always show all packages by default, so be sure to check the page on the module system again to learn how you can see more packages.

    Note: Since August 2016, a different implementation of the module system, called Lmod, is in use on the UGent and VUB Tier-2 systems. Though highly compatible with the system used on the other clusters, it offers a number of new commands and some key differences.
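
    One example of such an extra command is Lmod's search facility, which also looks through modules that are not shown by default (the package name is just an illustration):

    $ module spider Python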

    Packages with additional documentation

    " -87,"","

    Software stack

    Software installation and maintenance on HPC infrastructure such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. For many libraries and programs, multiple versions have to be installed and maintained, as some users require specific versions of those. And those libraries or executables sometimes rely on specific versions of other libraries, further complicating the matter.

    The way Linux finds the right executable for a command, and a program loads the right version of a library or a plug-in, is through so-called environment variables. These can, e.g., be set in your shell configuration files (e.g., .bashrc), but this requires a certain level of expertise. Moreover, getting those variables right is tricky and requires knowledge of where all files are on the cluster. Having to manage all this by hand is clearly not an option. -

    We deal with this on the VSC clusters in the following way. First, we have defined the concept of a toolchain on most of the newer clusters. A toolchain consists of a set of compilers, an MPI library and basic libraries that work well together, plus a number of applications and other libraries compiled with that set of tools and thus often dependent on those. We use toolchains based on the Intel and GNU compilers, and refresh them twice a year, leading to version numbers like 2014a, 2014b or 2015a for the first and second refresh of a given year. Some tools are installed outside a toolchain, e.g., additional versions requested by a small group of users for specific experiments, or tools that only depend on basic system libraries. Second, we use the module system to manage the environment variables and all dependencies and possible conflicts between the various programs and libraries, and that is what this page focuses on.

    Note: Since August 2016, a different implementation of the module system, called Lmod, is in use on the UGent and VUB Tier-2 systems. Though highly compatible with the system used on the other clusters, it offers a lot of new commands and some key differences. Most of the commands below will still work though.

    Basic use of the module system

    Many software packages are installed as modules. These packages include compilers, interpreters, mathematical software such as Matlab and SAS, as well as other applications and libraries. This is managed with the module command. -

    To view a list of available software packages, use the command module av. The output will look similar to this: -

    $ module av
    ------ /apps/leuven/thinking/2014a/modules/all ------
    Autoconf/2.69-GCC-4.8.2
    Autoconf/2.69-intel-2014a
    Automake/1.14-GCC-4.8.2
    Automake/1.14-intel-2014a
    BEAST/2.1.2
    ...
    pyTables/2.4.0-intel-2014a-Python-2.7.6
    timedrun/1.0.1
    worker/1.4.2-foss-2014a
    zlib/1.2.8-foss-2014a
    zlib/1.2.8-intel-2014a

    This gives a list of software packages that can be loaded. Some packages in this list include intel-2014a or foss-2014a in their name. These are packages installed with the 2014a versions of the toolchains based on the Intel and GNU compilers respectively. The other packages do not belong to a particular toolchain. The name of the packages also includes a version number (right after the /) and sometimes other packages they need. -

    Often, when looking for some specific software, you will want to filter the list of available modules, since it tends to be rather large. The module command writes its output to standard error, rather than standard output, which is somewhat confusing when using pipes to filter. The following command would show only the modules that have the string 'python' in their name, regardless of the case.

    $ module av |& grep -i python
    -

    A module is loaded using the command module load with the name of the package. E.g., with the above list of modules, -

    $ module load BEAST
    -

    will load the BEAST/2.1.2 package. -

    For some packages, e.g., zlib in the above list, multiple versions are installed; the module load command will automatically choose the lexicographically last, which is typically, but not always, the most recent version. In the above example, -

     $ module load zlib
    -

    will load the module zlib/1.2.8-intel-2014a. This may not be the module that you want if you're using the GNU compilers. In that case, the user should specify a particular version, e.g., -

    $ module load zlib/1.2.8-foss-2014a
    -

    Obviously, the user needs to keep track of the modules that are currently loaded. After executing the above two load commands, the list of loaded modules will be very similar to: -

    $ module list
    Currently Loaded Modulefiles:
      1) /thinking/2014a
      2) Java/1.7.0_51
      3) icc/2013.5.192
      4) ifort/2013.5.192
      5) impi/4.1.3.045
      6) imkl/11.1.1.106
      7) intel/2014a
      8) beagle-lib/20140304-intel-2014a
      9) BEAST/2.1.2
     10) GCC/4.8.2
     11) OpenMPI/1.6.5-GCC-4.8.2
     12) gompi/2014a
     13) OpenBLAS/0.2.8-gompi-2014a-LAPACK-3.5.0
     14) FFTW/3.3.3-gompi-2014a
     15) ScaLAPACK/2.0.2-gompi-2014a-OpenBLAS-0.2.8-LAPACK-3.5.0
     16) foss/2014a
     17) zlib/1.2.8-foss-2014a

    It is important to note at this point that, e.g., icc/2013.5.192 is also listed, although it was not loaded explicitly by the user. This is because BEAST/2.1.2 depends on it, and the system administrator specified that the intel toolchain module that contains this compiler should be loaded whenever the BEAST module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it! -

    To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. One can however unload automatically loaded modules manually, to debug some problem. -

    $ module unload BEAST
    -

    Notice that the version was not specified: the module system is sufficiently clever to figure out what the user intends. However, checking the list of currently loaded modules is always a good idea, just to make sure... -

    In order to unload all modules at once, and hence be sure to start with a clean slate, use: -

    $ module purge
    -

    It is a good habit to use this command in PBS job scripts, prior to loading the modules specifically needed by the applications in that job script; a sketch is shown below. This ensures that no version conflicts occur if the user loads modules in his .bashrc file.
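
    A minimal sketch of that pattern in a PBS job script (the resource requests, toolchain version and program name are placeholders only):

    #!/bin/bash
    #PBS -l nodes=1:ppn=20
    #PBS -l walltime=01:00:00

    # Start from a clean environment, then load exactly what this job needs
    module purge
    module load foss/2014a

    # Run from the directory the job was submitted from
    cd $PBS_O_WORKDIR
    ./my_program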

    Finally, modules need not be loaded one by one; the two 'load' commands can be combined as follows: -

    $ module load  BEAST/2.1.2  zlib/1.2.8-foss-2014a
    -

    This will load the two modules and, automatically, the respective toolchains with just one command. -

    To get a list of all available module commands, type: -

    $ module help
    -

    Getting even more software

    The list of software available on a particular cluster can be unwieldy and the information that module av produces overwhelming. Therefore the administrators may have chosen to only show the most relevant packages by default, and not show, e.g., packages that target a different cluster, a particular node type or a less complete toolchain. Those additional packages can then be enabled by loading another module first. E.g., on hopper, the most recent UAntwerpen cluster when we wrote this text, the most complete and most used toolchains were the 2014a versions. Hence only the list of packages in those releases of the intel and foss (GNU) toolchains was shown at the time. Yet

    $ module av
    -

    returns at the end of the list: -

    ...
    ifort/2015.0.090                   M4/1.4.16-GCC-4.8.2
    iimpi/7.1.2                        VTune/2013_update10
    ------------------------ /apps/antwerpen/modules/calcua ------------------------
    hopper/2014a hopper/2014b hopper/2015a hopper/2015b hopper/2016a hopper/2016b
    hopper/all   hopper/sl6   perfexpert   turing

    Modules such as hopper/2014b enable additional packages when loaded.

    Similarly, on ThinKing, the KU Leuven cluster: -

    $ module av
    ...
    --------------------------- /apps/leuven/etc/modules/ --------------------------
    cerebro/2014a   K20Xm/2014a     K40c/2014a      M2070/2014a     thinking/2014a
    ictstest/2014a  K20Xm/2015a     K40c/2015a      phi/2014a       thinking2/2014a
    shows modules specifically for the thin node cluster ThinKing, the SGI shared memory system Cerebro, three types of NVIDIA GPU nodes and the Xeon Phi nodes. Loading one of these will show the appropriate packages in the list obtained with module av. E.g., -

    $ module load cerebro/2014a

    will make some additional modules available for Cerebro, including two additional toolchains with the SGI MPI libraries to take full advantage of the interconnect of that machine. -

    Explicit version numbers

    As a rule, once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behavior. -

    Consider the following example: the user decides to use the GSL library for numerical computations, and at that point in time, just a single version 1.15, compiled with the foss toolchain is installed on the cluster. The user loads the library using: -

    $ module load GSL
    -

    rather than -

    $ module load GSL/1.15-foss-2014a
    -

    Everything works fine, up to the point where a new version of GSL is installed, e.g., 1.16 compiled with both the intel and the foss toolchain. From then on, the user's load command will load the latter version, rather than the one he intended, which may lead to unexpected problems. -

    " -97,"HPC for industry","

    The collective expertise, training programs and infrastructure of the VSC and the participating university associations have the potential to create significant added value for your business.
    -

    " -99,"What is supercomputing?","

    Supercomputers have an immense impact on our daily lives. Their scope extends far beyond the weather forecast after the news.

    -

    " -109,"Projects and cases","

    The VSC infrastructure is used by many academic and industrial users. Here are just a few case studies of work involving the VSC infrastructure and an overview of actual projects run on the Tier-1 infrastructure.

    " -113,"","

    Technical support

    Please also take a look at our web page about technical support. It contains a lot of tips about the information that you can pass to us with your support question so that we can provide a helpful answer faster. -

    General enquiries

    For non-technical questions about the VSC, you can contact the FWO or one of the coordinators from the participating universities. This may range from questions on admission requirements to questions about setting up a course, or other questions that are not directly related to technical problems.
    -

    " -115,"FWO","

    Research Foundation - Flanders (FWO)
    Egmontstraat 5
    1000 Brussel -

    Tel. +32 (2) 512 91 10
    E-mail: post@fwo.be
    Web page of the FWO -

    " -117,"Antwerp University Association","

    Stefan Becuwe -
    Antwerp University
    - Department of Mathematics and Computer Science
    Middelheimcampus M.G 310
    Middelheimlaan 1
    2020 Antwerpen -

    Tel.: +32 (3) 265 3860
    E-mail: Stefan.Becuwe@uantwerpen.be
    Contact page on the UAntwerp site

    " -119,"KU Leuven Association","

    Leen Van Rentergem
    KU Leuven, Directie ICTS
    Willem de Croylaan 52c - bus 5580
    3001 Heverlee

    Tel.:+32 (16) 32 21 55 or +32 (16) 32 29 99
    E-mail: leen.vanrentergem@kuleuven.be
    Contact page on the KU Leuven site

    " -121,"Universitaire Associatie Brussel","

    Stefan Weckx
    VUB, Research Group of Industrial Microbiology and Food Biotechnology
    Pleinlaan 2
    1050 Brussel

    Tel.: +32 (2) 629 38 63
    E-mail: Stefan.Weckx@vub.ac.be
    Contact page on the VUB site

    " -123,"Ghent University Association","

    Ewald Pauwels
    Ghent University, ICT Department
    Krijgslaan 281 S89
    9000 Gent -

    Tel: +32 (9) 264 4716
    E-mail: Ewald.Pauwels@ugent.be
    Contact page on the UGent site -

    " -125,"Associatie Universiteit-Hogescholen Limburg","

    Geert Jan Bex
    VSC course coordinator

    UHasselt, Dienst Onderzoekscoördinatie
    Campus Diepenbeek
    Agoralaan Gebouw D
    3590 Diepenbeek

    Tel.: +32 (11) 268231 or +32 (16) 322241
    E-mail: GeertJan.Bex@uhasselt.be
    Contact page on the UHasselt site and personal web page

    " -127,"Contact us","

    You can also contact the coordinators by filling in the form below.

    " -129,"Technical problems?","

    Don't use this form, but contact your support team directly using the contact information in the user portal.

    " -131,"","

    Need help? Have more questions? Contact us!

    " -133,"","

    The VSC is a partnership of five Flemish university associations. The Tier-1 and Tier-2 infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain. There is also a local support office in Hasselt. -

    " -135,"","

    Ghent

    The recent data center of UGent (2011) on Campus Sterre features a room which is especially equipped to accommodate the VSC infrastructure. This room currently houses the majority of the Tier-2 infrastructure of Ghent University and the first VSC Tier-1 capability system. The adjacent building of the ICT Department hosts the Ghent University VSC employees, including support staff for the Ghent University Association (AUGent).

    Louvain

    The KU Leuven equipped its new data center (2012) in Heverlee with a separate room for the VSC framework. This room currently houses the joint Tier-2 infrastructure of KU Leuven and Hasselt University and an experimental GPU / Xeon Phi cluster. This space will also house the next VSC Tier-1 computer. The nearby building of ICTS houses the KU Leuven VSC employees, including the support team for the KU Leuven Association.

    Hasselt

    The VSC does not feature a computer room in Hasselt, but there is a local user support office for the Association University-Colleges Limburg (AU-HL) at Campus Diepenbeek.

    Brussels

    The VUB shares a data center with the ULB on the Solbosch Campus, which also houses the VUB Tier-2 cluster and a large part of the BEgrid infrastructure. The VSC also has a local team responsible for the management of this infrastructure and for the user support within the University Association Brussels (UAB) and for BEgrid.

    Antwerp

    The University of Antwerp features a computer room equipped for HPC infrastructure in the building complex Campus Groenenborger. A little further, on the Campus Middelheim, the UAntwerpen VSC members have their offices in the Mathematics and Computer Science building. This team also handles user support for the Association Antwerp University (AUHA).

    " -137,"","

    The VSC is a consortium of five Flemish universities; the consortium has no legal personality. Its objective is to build a Tier-1 and Tier-2 infrastructure in accordance with the European pyramid model. Staff appointed at the five Flemish universities form an integrated team dedicated to training and user support.

    For specialised support, each institution can call on an expert, independently of where he or she is employed. The universities also invest in HPC infrastructure, and the VSC can rely on the central services of these institutions. In addition, the embedding in an academic environment creates opportunities for cooperation with industrial partners.

    The VSC project is managed by the Research Foundation - Flanders (FWO), which receives the necessary financial resources for this task from the Flemish Government.

    Operationally, the VSC is controlled by the HPC workgroup consisting of employees from the FWO and HPC coordinators from the various universities. The HPC workgroup meets monthly. During these meetings operational issues are discussed and agreed upon and strategic advice is offered to the Board of Directors of the FWO.
    -

    In addition, four committees are involved in the operation of the VSC: the Tier-1 user committee, the Tier-1 evaluation committee, the Industrial Board and the Scientific Advisory Board. -

    VSC users' committee

    The VSC users' committee was established to provide advice on the needs of users and on ways to improve the services, including the training of users. The users' committee also plays a role in maintaining contact with users by spreading information about the VSC, making (potential) users aware of the possibilities offered by HPC and organising the annual user day.

    The members of the committee are listed below according to the university association they represent:

    • AUHA: Wouter Herrebout, substitute Bart Partoens
    • UAB: Frank De Proft, substitute Wim Thiery
    • AUGent: Marie-Françoise Reyniers or Veronique Van Speybroeck
    • AU-HL: Sorin Pop, substitute Sofie Thijs
    • KU Leuven association: Dirk Roose, substitute Nele Moelans

    The members representing the strategic research institutes are:

    • VIB: Steven Maere, substitute Frederik Coppens
    • imec: Wilfried Verachtert
    • VITO: Clemens Mensinck, substitute Katrijn Dirix
    • Flanders Make: Mark Engels, substitute Paola Campestrini

    The representation of the Industrial Board:

    • Benny Westaedt, substitute Mia Vanstraelen

    Tier-1 evaluation committee

    This committee evaluates applications for computing time on the Tier-1. Based upon admissibility and other evaluation criteria the committee grants the appropriate computing time. -

    This committee is composed as follows: -

    • Walter Lioen, chairman (SURFsara, The Netherlands)
    • Derek Groen (Computer Science, Brunel University London, UK)
    • Sadaf Alam (CSCS, Switzerland)
    • Nicole Audiffren (Cines, France)
    • Gavin Pringle (EPCC, UK)

    The FWO provides the secretariat of the committee. -

    Industrial Board

    The Industrial Board serves as a communication channel between the VSC and the industry in Flanders. The VSC offers a scientific/technical computing infrastructure to the whole Flemish research community and industry. The Industrial Board can facilitate the exchange of ideas and expertise between the knowledge institutions and industry. -

    The Industrial Board also develops initiatives to inform companies and non-profit institutions about the added value that HPC delivers in the development and optimisation of services and products and promotes the services that the VSC delivers to companies, such as consultancy, research collaboration, training and compute power. -

    The members are: -

    • Mia Vanstraelen (IBM)
    • Charles Hirsch (Numeca)
    • Herman Van der Auweraer (Siemens Industry Software NV)
    • Benny Westaedt (Van Havermaet)
    • Marc Engels (Flanders Make)
    • Marcus Drosson (Umicore)
    • Sabien Vulsteke (BASF Agricultural Solutions)
    • Birgitta Brys (Worldline)
    " -141,"","

    A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone but there are important differences.

    " -145,"","

    The VSC account

    In order to use the infrastructure of the VSC, you need a VSC userid, also called a VSC account. The only exception is users of the VUB who only want to use the VUB Tier-2 infrastructure; for them, their VUB userid is sufficient. You can use the same userid on all VSC infrastructure to which you have access.

    Your account also includes two “blocks” of disk space: your home directory and your data directory. Both are accessible from all VSC clusters. When you log in to a particular cluster, you will also be assigned one or more blocks of temporary disk space, called scratch directories. Which directory should be used for which type of data is explained in the user documentation.

    You do not automatically have access to all VSC clusters with your VSC account. For the main Tier-1 compute cluster you need to submit a project application (or you should be covered by a project application within your research group). For some more specialised hardware you have to request access separately, typically from the coordinator of your institution, because we want to be sure that this (usually rather expensive) hardware is used efficiently for the type of applications for which it was purchased. You also do not simply get automatic access to all available software. You can use all free software and a number of compilers and other development tools, but for most commercial software you must first prove that you have a valid license (or the person who has paid for the license on the cluster must allow you to use it). For this you can contact your local support team.

    Before you can apply for your account, you will usually have to install an extra piece of software on your computer, called an SSH client. How the actual account application should be made and where you can find the software is explained in the user documentation on the user portal.

    Who can get access?

    • All researchers at the Flemish university associations can get a VSC account. In many cases this is done through a fully automated application process, but in some cases you must submit a request to your local support team. Specific details about these procedures can be found on the "Account request" page in the user documentation.
    • Master students can also get access to the Tier-2 infrastructure in the framework of their master thesis if supercomputing is needed for the thesis. For this, you will first need the approval of your supervisor. The details about the procedure can again be found on the "Account request" page in the user documentation.
    • At KU Leuven and Hasselt University, lecturers can also use the local Tier-2 infrastructure in the context of some courses (when the software cannot run in the PC classes or the computers in those classes are not powerful enough). Again, you can find all the details about the application process on the "Account request" page in the user documentation. It is important that the application is submitted on time, at least two weeks before the start of the computer sessions.
    • Researchers from iMinds and VIB can also get access. The application is made through your host university. The same applies to researchers at the university hospitals and research institutes under the direction or supervision of a university or a university college, such as the special university institutes mentioned in Article 169quater of the Decree of 12 June 1991 concerning universities in the Flemish Community.
    • Researchers at other Flemish public research institutions can compute on the Tier-1 infrastructure through a project application or can contact one of the coordinators of the university associations to access Tier-2 infrastructure. For larger amounts of computing time a fair financial compensation may be asked, because the universities also co-finance the operation of the VSC from their own funds.
    • Businesses, non-Flemish public knowledge institutions and not-for-profit organisations can also gain access to the infrastructure. The procedures are explained on the page "Access for industry".

    Additional information

    Before you apply for a VSC account, it is useful to first check whether the infrastructure is suitable for your application. Windows or OS X programs, for instance, cannot run on our infrastructure, as we use the Linux operating system on the clusters. The infrastructure also should not be used to run applications for which the compute power of a good laptop is sufficient. The pages on the Tier-1 and Tier-2 infrastructure in this part of the website give a high-level description of our infrastructure. You can find more detailed information in the user documentation on the user portal. When in doubt, you can also contact your local support team. This does not require a VSC account.

    Furthermore, you should first check the page \"Account request\" in the user documentation and install the necessary software on your PC. You can also find links to information about that software on the “Account Request” page. -

    Furthermore, it can also be useful to take one of the introductory courses that we organise periodically at all universities. It is best to apply for your VSC account before the course, since you can then also do the exercises during the course. We strongly urge people who are not familiar with the use of a Linux supercomputer to take such a course, as we do not have enough staff to help everyone individually with all those generic issues.

    " -149,"","

    We offer you the opportunity of a free trial of the Tier-1 to prepare a future regular Tier-1 project application. You can test if your software runs well on the Tier-1 and do the scalability tests that are required for a project application. -

    If you want to check whether buying compute time on our infrastructure is an option, we offer a very similar free programme as a test run.

    Characteristics of a Starting Grant

    • The maximum amount is 100 node days.
    • The maximum allowed period to use the compute time is 2 months.
    • The allocation is personal and cannot be transferred to or shared with other researchers.
    • Requests can be submitted at any time; there are no cut-off dates.
    • The use of this compute time is free of charge.

    Procedure to apply and grant the request

    1. Download the application form for a starting grant version 2018 (docx, 31 kB).
    2. Send the completed application by e-mail to the Tier-1 contact address (hpcinfo@icts.kuleuven.be), with your local VSC coordinator in cc.
    3. The request will be judged for its validity by the Tier-1 coordinator.
    4. After approval, the Tier-1 coordinator will give you access and compute time. If not approved, you will get an answer with a motivation for the decision.
    5. The granted requests are published on the VSC website; therefore you need to provide a short abstract in the application.
    " -153,"","

    The application

    The designated way to get access to the Tier-1 for research purposes is through a project application. -

    You have to submit a proposal to get compute time on the Tier-1 cluster BrENIAC. -

    You should include a realistic estimate of the compute time needed for the project in your application. These estimates should preferably be supported by Tier-1 benchmarks. To be able to perform these tests for new codes, you can request a starting grant through a short and quick procedure.

    You can submit proposals continuously, but they will be gathered, evaluated and resources allocated at a number of cut-off dates. There are 3 cut-off dates in 2018:

    • February 5, 2018
    • June 4, 2018
    • October 1, 2018

    Proposals submitted since the last cut-off and before each of these dates are reviewed together. -

    The FWO appoints an evaluation commission to do this. -

    Because of the international composition of the evaluation commission, the preferred language for the proposals is English. If a proposal is in Dutch, you must also send an English translation. Please have a look at the documentation of standard terms like CPU, core, node-hour, memory and storage, and use these consistently in the proposal.

    You can submit your application via EasyChair using the application forms below.
    -

    Relevant documents - 2018

    As was already the case for applications for computing time on the Tier-1 granted in 2016 and 2017 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants do not have to pay a contribution towards the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications. For industry the price for compute time is 13 euro per node day including VAT, and for storage 15 euro per TB per month including VAT.

    The adjusted Regulations for 2018 can be found in the links below. -

    If you need help to fill out the application, please consult your local support team. -

    Relevant documents - 2017

    As was already the case for applications for computing time on the Tier-1 granted in 2016 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants do not have to pay a contribution towards the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications. For industry the price for compute time is 13 euro per node day including VAT, and for storage 15 euro per TB per month including VAT.

    The adjusted Regulations for 2017 can be found in the links below. -

    EasyChair procedure

    You have to submit your proposal on EasyChair for the conference Tier12018. This requires the following steps:
    -

    1. If you do not yet have an EasyChair account, you first have to create one:
       a. Complete the CAPTCHA.
       b. Provide first name, name and e-mail address.
       c. A confirmation e-mail will be sent; please follow the instructions in this e-mail (click the link).
       d. Complete the required details.
       e. When the account has been created, a link will appear to log in on the TIER1 submission page.
    2. Log in to the EasyChair system.
    3. Select 'New submission'.
    4. If asked, accept the EasyChair terms of service.
    5. Add one or more authors; if they have an EasyChair account, they can follow up on and/or adjust the present application.
    6. Complete the title and abstract.
    7. You must specify at least three keywords: include the institution of the promoter of the present project and the field of research.
    8. As a paper, submit a PDF version of the completed application form. You must submit the complete proposal, including the enclosures, as one single PDF file to the system.
    9. Click "Submit".
    10. EasyChair will send a confirmation e-mail to all listed authors.
    " -155,"","

    The VSC infrastructure can also be used by industry and non-Flemish research institutes. Here we describe the modalities.

    Tier-1

    It is possible to get paid access to the Tier-1 infrastructure of the VSC. In a first phase, you can get up to 100 free node-days of compute time to verify that the infrastructure is suitable for your applications. You can also get basic support for software installation and the use of the infrastructure. When your software requires a license, you should take care of that yourself. -

    For further use, a three-party legal agreement is required with KU Leuven, as the operator of the system, and the Research Foundation - Flanders (FWO). You will be billed only for the computing time used and the reserved disk space, according to the following rates:

    Summary of rates (VAT included):

                                                                                Compute            Storage
                                                                                (euro/node day)    (euro/TB/month)
    Non-Flemish public research institutes and not-for-profit organisations     € 13               € 15
    Industry                                                                    € 13               € 15
    These prices include the university overhead and basic support from the Tier-1 support staff, but no advanced level support by specialised staff. -
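    As a purely illustrative example at these rates, an industrial project that uses 100 node days of compute time and reserves 2 TB of storage for three months would pay 100 × € 13 + 2 × 3 × € 15 = € 1300 + € 90 = € 1390, VAT included; the exact amounts are settled in the agreement described above.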

    For more information you can contact our industry account manager (FWO). -

    Tier-2

    It is also possible to gain access to the Tier-2 infrastructure within the VSC. Within the Tier-2 infrastructure, there are also clusters tailored to special applications such as small clusters with GPU or Xeon Phi boards, a large shared memory machine or a cluster for Hadoop applications. See the high-level overview or detailed pages about the available infrastructure for more information. -

    For more information and specific arrangements please contact the coordinator of the institution which operates the infrastructure. In this case you only need an agreement with this institution without involvement of the FWO. -

    " -177,"","

    The VSC is responsible for the development and management of the High Performance Computing infrastructure used for research and innovation. The quality level of the infrastructure is comparable to that of computational infrastructures in similar European regions. In addition, the VSC is internationally connected through European projects such as PRACE(1) (traditional supercomputing) and EGI(2) (grid computing). Belgium has been a member of PRACE since October 2012 and participates in EGI via BEgrid.

    The VSC infrastructure consists of two layers in the European multi-layer model for an integrated HPC infrastructure. Local clusters (Tier-2) at the Flemish universities are responsible for processing the mass of smaller computational tasks and provide a solid base for the HPC ecosystem. A larger central supercomputer (Tier-1) is necessary for more complicated calculations while simultaneously serving as a bridge to infrastructures at a European level. -

    The VSC assists researchers at academic institutions as well as in industry with the use of HPC through training programmes and targeted advice. This also brings academic and industrial users into contact with each other.

    In addition, the VSC also works on raising awareness of the added value HPC can offer both in academic research and in industrial applications. -

    (1) PRACE: Partnership for Advanced Computing in Europe
    (2) EGI: European Grid Infrastructure

    " -179,"","

    On 20 July 2006 the Flemish Government decided on the action plan 'Flanders i2010, time for a digital momentum in the innovation chain'. A study made by the steering committee e-Research, published in November 2007, indicated the need for more expertise, support and infrastructure for grid and High Performance Computing. -

    Around the same time, the Royal Flemish Academy of Belgium for Science and the Arts (KVAB) published an advisory illustrating the need for a dynamic High Performance Computing strategy for Flanders. This recommendation focused on a Flemish Supercomputer Center with the ability to compete with existing infrastructures at regional or national level in comparable countries. -

    Based on these recommendations, the Flemish Government decided on 14 December 2007 to fund the Flemish Supercomputer Center, an initiative of five Flemish universities. They joined forces to coordinate and to integrate their High Performance Computing infrastructures and to make their knowledge available to the public and for privately funded research. -

    The grants were used to fund both capital expenditures and staff. As a result the existing university infrastructure was integrated through fast network connections and additional software. Thus, the pyramid model, recommended by PRACE, is applied. According to this model a central Tier-1 cluster is responsible for rolling out large parallel computing jobs. Tier-2 focuses on local use at various universities but is also open to other users. Hasselt University decided to collaborate with the University of Leuven to build a shared infrastructure while other universities opted to do it alone. -

    Some milestones

    • January 2008: Start of the "VSC preparatory phase" project.
    • May 2008: The VSC submitted a first proposal for further funding to the Hercules Foundation.
    • November 2008: A technical and financial plan was presented to the Flemish Government. In the following weeks this plan was successfully defended before a committee of international experts.
    • 23 March 2009: Official launch of the VSC at an event with researchers presenting their work in the presence of Patricia Ceysens, Flemish Minister for Economy, Enterprise, Science, Innovation and Foreign Trade. Several speakers highlighted the history of the project together with the VSC's mission and the international aspect of this project.
    • 3 April 2009: The Hercules Foundation and the Flemish Government provided a grant of 7.29 million euros (2.09 million by the Hercules Foundation and 5.2 million from the FFEU (1)) for the further expansion of the local Tier-2 clusters and the installation of a central Tier-1 supercomputer for Flanders for large parallel computations. It was also decided to entrust the project monitoring to a supervisory committee for which the Hercules Foundation provides the secretariat.
    • June 2009: The VSC submitted a project proposal to the Hercules Foundation to participate, through PRACE, in the ESFRI (2) project in the field of supercomputing. After comparison with other projects, the Hercules Foundation granted it the second highest priority and advised the Flemish Government as such. The Flemish Government supported the project, and after consultation with the other regions and communities and the federal authorities, Belgium joined PRACE in October 2012.
    • February 2010: The VSC submitted an updated operating plan to the Hercules Foundation and the Flemish Government, aiming to obtain structural funding for the VSC.
    • 9 October 2012: Belgium became the twenty-fifth member of PRACE. The Belgian delegation was made up of DG06 of the Walloon Government and a technical advisor from the VSC.
    • 25 October 2012: The first VSC Tier-1 cluster was inaugurated at Ghent University. In the spring of 2012 the installation of this cluster in the new data center on the Ghent University campus took place. In a video message Minister Ingrid Lieten encouraged researchers to make optimum use of the new opportunities to drive research forward.
    • 16 January 2014: The first global VSC User Day. This event brought together researchers from different universities and industry.
    • 27 January 2015: The first VSC industry day at Technopolis in Mechelen. One of the points on the agenda was to investigate how companies abroad, in Germany and the United Kingdom, were being approached. Several examples of companies in Flanders already using VSC infrastructure were presented. Philippe Muyters, Flemish Minister for Economy and Innovation, closed the event with an appeal for stronger links between the public and private sector to strengthen Flemish competitiveness.
    • 1 January 2016: The Research Foundation - Flanders (FWO) took over the tasks of the Hercules Foundation in the VSC project as part of a restructuring of research funding in Flanders.

    (1) FFEU: Financieringsfonds voor Schuldafbouw en Eenmalige Investeringsuitgaven (Financing fund for debt reduction and one-time investment expenditure)
    (2) ESFRI: European Strategy Forum on Research Infrastructures

    " -183,"","

    Strategic plans and annual reports

    Newsletter: VSC Echo

    Our newsletter, VSC Echo, is distributed three times a year by e-mail. The latest edition, number 10, is dedicated to:

    • The upcoming courses and other events, where we also pay attention to the trainings organised by CÉCI
    • News about the new Tier-1 system BrENIAC
    • The new VSC web site

    Subscribe or unsubscribe

    If you would like to receive this newsletter by mail, just send an e-mail to listserv@ls.kuleuven.be with the text "subscribe VSCECHO" in the message body (and not in the subject line). Alternatively (if your e-mail client is correctly configured in your browser), you can also send the e-mail directly from your browser.

    You will receive a reply from LISTSERV@listserv.cc.kuleuven.ac.be asking you to confirm your subscription. Follow the link in that e-mail and you will be automatically subscribed to future issues of the newsletter.

    If you no longer wish to receive the newsletter, please send an e-mail to listserv@ls.kuleuven.be with the text "unsubscribe VSCECHO" in the message body (and not in the subject line). Alternatively (if your e-mail client is correctly configured in your browser), you can also send the e-mail directly from your browser.

    Archive

    " -185,"","

    Press contacts should be channeled through the Research Foundation - Flanders (FWO).

    Available material

  • Zip file with the VSC logo in a number of formats.

    " -191,"","

    Getting compute time in other centres

    Training programs in other centres

    EU initiatives

    Some grid efforts

    • WLCG - the Worldwide LHC Computing Grid, the compute grid supporting the Large Hadron Collider at CERN
    • The XSEDE program in the US, which combines a large spectrum of resources across the USA in a single virtual infrastructure
    • The Open Science Grid (OSG), a grid focused on high-throughput computing in the US and one of the resource providers in the XSEDE project

    Some HPC centres in Europe

    " -193,"","

    The Flemish Supercomputer Centre (VSC) is a virtual supercomputer centre for academics and industry. It is managed by the Research Foundation - Flanders (FWO) in partnership with the five Flemish university associations.

    " -203,"","

    Account management at the VSC is mostly done through the web site account.vscentrum.be using your institute account rather than your VSC account.

    Managing user credentials

    • You use the VSC account page to request your account, as explained on the "Account request" pages. You'll also need to create an SSH key, which is also explained on those pages.
    • Once your account is active and you can log on to your home cluster, you can use the account management pages for many other operations:
      • If you want to access the VSC clusters from more than one computer, it is good practice to use a different key for each computer. You can upload additional keys via the account management page. In that way, if your computer is stolen, all you need to do is remove the key for that computer and your account is safe again.
      • If you've messed up your keys, you can restore the keys on the cluster or upload a new key and then delete the old one.

    Group and Virtual Organisation management

    Once your VSC account is active and you can log on to your home cluster, you can also manage groups through the account management web interface. Groups (a Linux/UNIX concept) are used to control access to licensed software (e.g., software licenses paid for by one or more research groups), to create subdirectories where researchers working on the same project can collaborate and control access to those files, and to control access to project credits on clusters that use these (all clusters at KU Leuven).
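    As a small illustration of how such a group can then be used from the command line (the group name 'myproject' and the directory name are placeholders; the group itself is created and managed via the account management web interface):

    $ ls -l $VSC_DATA/shared                 # check the current owner and group of the shared directory
    $ chgrp -R myproject $VSC_DATA/shared    # hand the directory over to the project group
    $ chmod -R g+rwX $VSC_DATA/shared        # give group members read and write access (capital X keeps directories traversable)

    These are standard Linux commands; the VSC-specific part is only that group membership itself is managed through the account management web interface.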

    Managing disk space

    The amount of disk space that a user can use on the various file systems is limited by quota on the amount of disk space and the number of files. UGent users can see and request upgrades for their quota on the account management site (users need to be in a VO (Virtual Organisation) to request additional quota; creating and joining a VO is also done through the account management website). On the other sites, checking your disk space use is still mostly done from the command line, and requesting more quota is done via e-mail.
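    As an illustration of such a command-line check (a sketch only: the exact quota tool and its output differ per site, so follow the user documentation of your home cluster):

    $ du -sh $VSC_HOME $VSC_DATA    # summarise how much space each directory currently uses
    $ quota -s                      # on many Linux systems, lists your usage and limits in human-readable form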

    " -211,"","

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. The following locations are available:

    • Home directory
      • Location available as $VSC_HOME
      • The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed) and should not generate very intense I/O during jobs. All kinds of configuration files are also stored here, e.g., ssh keys, .bashrc, or Matlab and Eclipse configuration, ...
    • Data directory
      • Location available as $VSC_DATA
      • A bigger 'workspace', for datasets, results, logfiles, ... This filesystem can be used for higher I/O loads, but for I/O-bound jobs you might be better off using one of the 'scratch' filesystems.
    • Scratch directories
      • Several types exist, available in $VSC_SCRATCH_XXX variables
      • For temporary or transient data; there is typically no backup for these filesystems, and 'old' data may be removed automatically.
      • Currently $VSC_SCRATCH_NODE, $VSC_SCRATCH_SITE and $VSC_SCRATCH_GLOBAL are defined, for space that is available per node, per site, or globally on all nodes of the VSC (currently there is no real 'global' scratch filesystem yet).

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
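    As a minimal, purely illustrative example (the paths shown are made up; the actual values differ per site and per user), you can inspect these variables and use them directly in commands:

    $ echo $VSC_DATA
    /data/leuven/301/vsc30001
    $ cp results.tar.gz $VSC_DATA/    # store a larger result file in your data directory
    $ cd $VSC_SCRATCH                 # switch to your site scratch directory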

    Quota is enabled on the three directories, which means the amount of data you can store there is limited by the operating system, and not by the boundaries of the hard disk. You can see your current usage and the current limits with the appropriate quota command, as explained on the page "How do I know how much disk space I am using?". The actual disk capacity, shared by all users, can be found on the "Available hardware" page.

    You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus you will lose these new files. You will however not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

    This directory is where you arrive by default when you log in to the cluster. Your shell refers to it as "~" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and usually used frequently. Also all kinds of configuration files are stored here, e.g., by Matlab, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

    .ssh/           This directory contains some files necessary for you to log in to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
    .profile        This script defines some general settings for your sessions.
    .bashrc         This script is executed every time you start a session on the cluster: when you log in to the cluster and when a job starts. You could edit this file and, e.g., add "module load XYZ" if you want to automatically load module XYZ whenever you log in to the cluster, although we do not recommend loading modules in your .bashrc.
    .bash_history   This file contains the commands you typed at your shell prompt, in case you need them again.
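    For illustration only (the module name XYZ is a placeholder, and as noted above the VSC discourages loading modules in .bashrc), such an addition is simply an extra line at the end of the file:

    # appended to ~/.bashrc: automatically load a module in every new session (discouraged)
    module load XYZ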

    Data directory

    In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume.

    Scratch space

    To enable quick writing from your job, a few extra file systems are available on the work nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

    You should remove any data from these file systems once your processing has finished. There are no guarantees about how long data will be kept on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch and can be anywhere between a day and a few weeks. We don't guarantee that these policies will remain unchanged forever, and may change them if this seems necessary for the healthy operation of the cluster.

    Each type of scratch has its own use:

    • Node scratch ($VSC_SCRATCH_NODE)
      Every node has its own scratch space, which is completely separated from the other nodes. Every job automatically gets its own temporary directory on this node scratch, available through the environment variable $TMPDIR. $TMPDIR is guaranteed to be unique for each job. Note however that when your job requests multiple cores and these cores happen to be on the same node, this $TMPDIR is shared among the cores! A typical usage pattern is sketched after this list.
    • Site scratch ($VSC_SCRATCH_SITE, $VSC_SCRATCH)
      To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has his or her own scratch directory. Because this scratch is also available from the login nodes, you can manually copy results to your data directory after your job has ended.
    • Global scratch ($VSC_SCRATCH_GLOBAL)
      In the long term, this scratch space will be available throughout the whole VSC. At the time of writing, the global scratch is just the same volume as the site scratch, and thus contains the same data.
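    A minimal sketch of that pattern in a job script (scheduler directives omitted), assuming an input file and a program that you have placed in $VSC_DATA beforehand; the file and program names are placeholders:

    #!/bin/bash
    # stage the input to the fast, job-private node scratch
    cp $VSC_DATA/input.dat $TMPDIR/
    cd $TMPDIR
    # run the actual computation on the local scratch
    $VSC_DATA/my_program input.dat > output.log
    # copy the results back before the job ends ($TMPDIR is temporary and may be cleaned)
    cp output.log $VSC_DATA/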
    " -213,"","

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. Following locations are available:

      -
    • Home directory - -
        -
      • Location available as $VSC_HOME
      • -
      • The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and not generating very intense I/O during jobs.
        - Also all kinds of configuration files are stored here, e.g., ssh-keys, .bashrc, or Matlab, and Eclipse configuration, ...
      • -
      -
    • -
    • Data directory -
        -
      • Location available as $VSC_DATA
      • -
      • A bigger 'workspace', for datasets, results, logfiles, ... . This filesystem can be used for higher I/O loads, but for I/O bound jobs, you might be better of using one of the 'scratch' filesystems.
      • -
      -
    • -
    • Scratch directories -
        -
      • Several types exist, available in $VSC_SCRATCH_XXX variables
      • -
      • For temporary or transient data; there is typically no backup for these filesystems, and 'old' data may be removed automatically.
      • -
      • Currently, $VSC_SCRATCH_NODE, $VSC_SCRATCH_SITE and $VSC_SCRATCH_GLOBAL are defined, for space that is available per node, per site, or globally on all nodes of the VSC (currenlty, there is no real 'global' scratch filesystem yet).
      • -
      -
    • -

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.

    Quota is enabled on the three directories, which means the amount of data you can store here is limited by the operating system, and not by \"the boundaries of the hard disk\". You can see your current usage and the current limits with the appropriate quota command as explained on How do I know how much disk space I am using?. The actual disk capacity, shared by all users, can be found on the Available hardware page.

    You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus you will lose these new files. You will however not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

    This directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and usually used frequently. Also all kinds of configuration files are stored here, e.g., by Matlab, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

    - - - - - - - - - - - - - - - - - - -
    .ssh/This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
    .profileThis script defines some general settings about your sessions,
    .bashrcThis script is executed everytime you start a session on the cluster: when you login to the cluster and when a job starts. You could edit this file and, e.g., add \"module load XYZ\" if you want to automatically load module XYZ whenever you login to the cluster, although we do not recommend to load modules in your .bashrc.
    .bash_historyThis file contains the commands you typed at your shell prompt, in case you need them again.

    Data directory

    In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume.

    Scratch space

    To enable quick writing from your job, a few extra file systems are available on the work nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

    You should remove any data from these systems after your processing them has finished. There are no gurarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular base. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

    Each type of scratch has his own use:

      -
    • Node scratch ($VSC_SCRATCH_NODE)
      - Every node has its own scratch space, which is completely seperated from the other nodes. Every job automatically gets its own temporary directory on this node scratch, available through the environment variable $TMPDIR. $TMPDIR is guaranteed to be unique for each job.
      - Note however that when your job requests multiple cores and these cores happen to be in the same node, this $TMPDIR is shared among the cores!
    • -
    • Site scratch ($VSC_SCRATCH_SITE, $VSC_SCRATCH)
      - To allow a job running on multiple nodes (or multiple jobs running on seperate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has its own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended.
    • -
    • Global scratch ($VSC_SCRATCH_GLOBAL)
      - In the long term, this scratch space will be available throughout the whole VSC. At the time of writing, the global scratch is just the same volume as the site scratch, and thus contains the same data.
    • -
    " -215,"","

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. Following locations are available:

    - -
      -
    • Home directory - -
        -
      • Location available as $VSC_HOME
      • -
      • The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and not generating very intense I/O during jobs. 
        - Also all kinds of configuration files are stored here, e.g., ssh-keys, .bashrc, or Matlab, and Eclipse configuration, ...
      • -
      -
    • -
    • Data directory -
        -
      • Location available as $VSC_DATA
      • -
      • A bigger 'workspace', for datasets, results, logfiles, ... . This filesystem can be used for higher I/O loads, but for I/O bound jobs, you might be better of using one of the 'scratch' filesystems.
      • -
      -
    • -
    • Scratch directories -
        -
      • Several types exist, available in $VSC_SCRATCH_XXX variables
      • -
      • For temporary or transient data; there is typically no backup for these filesystems, and 'old' data may be removed automatically.
      • -
      • Currently, $VSC_SCRATCH_NODE, $VSC_SCRATCH_SITE and $VSC_SCRATCH_GLOBAL are defined, for space that is available per node, per site, or globally on all nodes of the VSC (currenlty, there is no real 'global' scratch filesystem yet).
      • -
      -
    • -
    - -

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.

    - -

    Quota is enabled on the three directories, which means the amount of data you can store here is limited by the operating system, and not by \"the boundaries of the hard disk\". You can see your current usage and the current limits with the appropriate quota command as explained on How do I know how much disk space I am using?. The actual disk capacity, shared by all users, can be found on the  Available hardware page.

    - -

    You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus you will lose these new files. You will however not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    - -

    Home directory

    - -

    This directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), or via the environment variable $VSC_HOME.

    - -

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and usually used frequently. Also all kinds of configuration files are stored here, e.g., by Matlab, Eclipse, ...

    - -

    The operating system also creates a few files and folders here to manage your account. Examples are:

    - - - - - - - - - - - - - - - - - - - - -
    .ssh/This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
    .profileThis script defines some general settings about your sessions,
    .bashrcThis script is executed everytime you start a session on the cluster: when you login to the cluster and when a job starts. You could edit this file and, e.g., add \"module load XYZ\" if you want to automatically load module XYZ whenever you login to the cluster, although we do not recommend to load modules in your .bashrc.
    .bash_historyThis file contains the commands you typed at your shell prompt, in case you need them again.
    - -

    Data directory

    - -

    In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume.

    - -

    Scratch space

    - -

    To enable quick writing from your job, a few extra file systems are available on the work nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

    - -

    You should remove any data from these systems after your processing them has finished. There are no gurarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular base. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

    - -

    Each type of scratch has his own use:

    - -
      -
    • Node scratch ($VSC_SCRATCH_NODE)
      - Every node has its own scratch space, which is completely seperated from the other nodes. Every job automatically gets its own temporary directory on this node scratch, available through the environment variable $TMPDIR. $TMPDIR is guaranteed to be unique for each job.
      - Note however that when your job requests multiple cores and these cores happen to be in the same node, this $TMPDIR is shared among the cores!
    • -
    • Site scratch ($VSC_SCRATCH_SITE, $VSC_SCRATCH)
      - To allow a job running on multiple nodes (or multiple jobs running on seperate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has its own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended.
    • -
    • Global scratch ($VSC_SCRATCH_GLOBAL)
      - In the long term, this scratch space will be available throughout the whole VSC. At the time of writing, the global scratch is just the same volume as the site scratch, and thus contains the same data.
    • -
    " -217,"","

    To access certain cluster login nodes from outside your institute's network (e.g., from home), you need to set up a so-called VPN (Virtual Private Network) connection. By setting up a VPN to your institute, your computer effectively becomes a computer on your institute's network and will appear as such to other services that you access; your network traffic will be routed through your institute's network. If you want more information, there is an introductory page on HowStuffWorks and a more technical page on Wikipedia.

    The VPN service is not provided by the VSC but by your institute's ICT centre, and they are your first contact for help. However, for your convenience, we present some pointers to that information: -

    " -219,"","

    Linux is the operating system on all of the VSC-clusters.

    " -221,"","

    All the VSC clusters run the Linux operating system: -

    • KU Leuven: Red Hat Enterprise Linux ComputeNode release 6.5 (Santiago), 64 bit
    • UAntwerpen: CentOS 7.x
    • UGent: Scientific Linux

    This means that, when you connect to one of them, you get a command line interface, which looks something like this: -

    vsc30001@login1:~>

    When you see this, we also say you are inside a "shell". The shell will accept your commands and execute them.

    Some of the most often used commands include: -

    ls      Shows you a list of files in the current directory
    cd      Change current working directory
    rm      Remove file or directory
    joe     Text editor
    echo    Prints its parameters to the screen

    Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the 'echo' command:

    $ echo This is a test
    This is a test

    Important here is the "$" sign in front of the first line. This should not be typed, but is a convention meaning "the rest of this line should be typed at your shell prompt". The lines not starting with the "$" sign are usually the feedback or output from the command.

    More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the command 'ls', by trying either of the following:

    $ ls --help
    $ man ls
    $ info ls

    (You can exit the last two "manuals" by using the 'q' key.)

    Tutorials

    For more exhaustive tutorials about Linux usage, please refer to the following sites: -

    " -223,"","

    Shell scripts

    Scripts are basically uncompiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a "parser" or an "interpreter". This is another program that understands the commands in the script and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

    Another very common scripting language is shell scripting. In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script. -

    Typically, the scripts in the following examples have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be:

    echo \"Hello! This is my hostname:\"
    -hostname
    -

    You can type both lines at your shell prompt, and the result will be the following: -

    $ echo \"Hello! This is my hostname:\"
    -Hello! This is my hostname:
    -$ hostname
    -login1
    -

    Suppose we want to call this script "myhostname". You open a new file for editing, and name it "myhostname":

    $ nano myhostname

    You get a \"New File\", where you can type the content of this new file. Help is available by pressing the 'Çtrl+G' key combination. You may want to familiarize you with the other options at some point; now we will just type the content of the file, save it and exit the editor. -

    You can type the content of the script: -

    echo \"Hello! This is my hostname:\"
    -hostname
    -

    You save the file and exit the editor by pressing the 'Ctrl+X' key combination. Nano will ask you if you want to save the file. You should then be back at the prompt.

    The easiest way to run a script is by starting the interpreter and passing the script as a parameter. In the case of our script, the interpreter may be either 'sh' or 'bash' (which are the same on the cluster). So start the script with:

    $ bash myhostname
    Hello! This is my hostname:
    login1

    Congratulations, you just created and started your first shell script! -

    A more advanced way of executing your shell scripts is by making them executable on their own, without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to specify this in some way. The easiest way is by using the so-called "shebang" notation, explicitly created for this purpose: you put the line "#!/path/to/your/interpreter" at the top of your shell script.

    You can find this path with the "which" command. In our case, since we use bash as the interpreter, we get the following path:

    $ which bash
    /bin/bash

    We edit our script and change it with this information: -

    #!/bin/bash
    echo "Hello! This is my hostname:"
    hostname

    Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script. -

    Finally, we tell the operating system that this script is now executable. For this we change its file attributes: -

    $ chmod +x myhostname

    Now you can start your script by simply executing it: -

    $ ./myhostname
    Hello! This is my hostname:
    login1

    The same technique can be used for all other scripting languages, like Perl and Python. -

    Most scripting languages understand that lines beginning with "#" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results...

    Links

    " -225,"","

    What is a VSC account?

    To log on to and use the VSC infrastructure, you need a so-called VSC account. There is only one exception: Users of the Brussels University Association who only need access to the VUB/ULB cluster Hydra can use their institute account. -

    All VSC accounts start with the letters "vsc" followed by a five-digit number. The first digit gives information about your home institution. There is no relationship with your name, nor is the information about the link between VSC accounts and your name publicly accessible.

    Unlike your institute account, VSC accounts don't use regular fixed passwords but a key pair consisting of a public and a private key, because that is a more secure technique for authentication.

    Your VSC account is currently managed through your institute account. -

    Public/private key pairs

    A key pair consists of a private and a public key. The private key is stored on the computer(s) from which you want to access the VSC and always stays there. The public key is stored on the systems you want to access, granting access to anyone who can prove to have access to the corresponding private key. It is therefore very important to protect your private key, as anybody who has access to it can access your VSC account. For extra security, the private key itself should be encrypted with a 'passphrase', to prevent anyone from using it even if they manage to copy it. You have to 'unlock' the private key by typing the passphrase when you use it.

    How to generate such a key pair depends on your operating system. We describe the generation of key pairs in the client sections for Linux, Windows and macOS (formerly OS X).
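    On Linux and macOS this typically boils down to a single command (a minimal sketch; the key type and file name below are just examples, the exact options to use are given in the client sections mentioned above):

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc

    ssh-keygen will ask for a passphrase to encrypt the private key; as explained above, do not leave it empty.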

    Without your key pair, you won't be able to apply for a VSC account.

    It is clear from the above that it is very important to protect your private key well. Therefore:

    • You should not share your key pair with other users.
    • If you have accounts at multiple supercomputer centres (or on other systems that use SSH), you should seriously consider using a different key pair for each of those accounts. That way, if a key gets compromised, the damage can be contained.
    • For added security, you may also consider using a different key pair for each computer you use to access your VSC account. If your computer is stolen, it is then easy to disable access from that computer while you can still access your VSC account from all your other computers. The procedure is explained on a separate web page.

    Applying for the account

    Depending on restrictions imposed by the institution, not all users might get a VSC account. We describe who can apply for an account in the sections of the local VSC clusters. -

    Generic procedure for academic researchers

    For most researchers from the Flemish universities, the procedure has been fully automated and works by using your institute account to request a VSC account. Check below for exceptions or if the generic procedure does not work.

    Open the VSC account management web site and select your "home" institution. After you log in using your institution login and password, you will be asked to upload your public key. You will get an e-mail to confirm your application. After the account has been approved by the VSC, your account will be created and you will get a confirmation e-mail.

    Users from the KU Leuven and UHasselt association

    UHasselt has an agreement with KU Leuven to run a shared infrastructure. Therefore the procedure is the same for both institutions. -

    Who? -

    • Access is available for faculty, students (under faculty supervision), and researchers of the KU Leuven, UHasselt and their associations. See also the access restrictions.

    How? -

    • Researchers with a regular personnel account (u-number) can use the generic procedure.
    • If you are in one of the higher education institutions associated with KU Leuven, the generic procedure may not work. In that case, please e-mail hpcinfo(at)icts.kuleuven.be to get an account. You will have to provide a public ssh key generated as described above.
    • Lecturers of KU Leuven and UHasselt that need HPC access for giving their courses: the procedure requires action both from the lecturers and from the students. Lecturers should follow the specific procedure for lecturers, while the students should simply apply for the account through the generic procedure.

    How to start?

    • Please follow the information on the webpage
    • Or register for the HPC Introduction course
    • If there is no course announced, please register on our waiting list and we will organize a new session as soon as a few people have shown interest.

    Users of Ghent University Association

    All information about the access policy is available in English at the UGent HPC web pages. -

    • Researchers can use the generic procedure.
    • Master students can also use the infrastructure for their master thesis work. The promotor of the thesis should first send a motivation to hpc@ugent.be; then the generic procedure should be followed (using your student UGent id) to request the account.

    Users of the Antwerp University Association (AUHA)

    Who? -

    • Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. See also the access restrictions page.

    How? -

    • Researchers of the University of Antwerp with a regular UAntwerpen account can use the generic procedure.
    • Users from higher education institutions associated with UAntwerpen can get a VSC account via UAntwerpen. However, we have not yet set up an automated form. Please contact the user support at hpc@uantwerpen.be to get an account. You will have to provide a public ssh key generated as described above.

    Users of Brussels University Association

    • If you only need access to the VUB cluster Hydra, you don't necessarily need a full VSC account but can use your regular institute account. More information can be found on this VUB Web Notes page.

    Troubleshooting

    • If you can't connect to the VSC account management web site, try disabling your browser extensions: some extensions (in particular some security-related ones) have caused problems in the past.
    " -227,"","

    MATLAB has to be loaded using the module utility prior to running it. This ensures that the environment is correctly set. Get the list of available versions of MATLAB using -

    module avail matlab

    (KU Leuven clusters) or

    module avail MATLAB

    (UAntwerpen and VUB clusters).

    Load a specific version by specifying the MATLAB version in the command

    module load matlab/R2014a

    or

    module load MATLAB/2014a

    depending on the site you're at.

    Interactive use

    • Interactive use is possible, but is not the preferred way of using MATLAB on the cluster! Use batch processing of compiled MATLAB code instead.
    • If there is an X Window System server installed on your PC (as is by default the case under Linux; you can use Xming Server under Windows or XQuartz on macOS/OS X), the full graphical MATLAB Desktop is available. If the speed is acceptable to you - much of the MATLAB user interface is coded in Java and Java programs are known to be slow over remote X connections - this is the recommended way to start MATLAB for short testing purposes, simple calculations, writing programs and visualizing data. Please avoid doing extensive calculations this way, as you would be abusing the resources of the shared login node. Your program will disturb other users, and other users will slow down execution of your program. Moreover, only a limited amount of CPU time is available to you, after which your session will be killed (with possible data loss).
    • With matlab -nodesktop you can start MATLAB without the full desktop, while you are still able to use the visualisation features. The helpwin, helpdesk and edit commands also work and open GUI-style help windows or a GUI-based editor. Of course this also requires an X server.
    • You can always, i.e., without an X server, start MATLAB in console mode via
      matlab -nodisplay
      You get a MATLAB command prompt, from where you can start m-files, but have no access to the graphical facilities. The same limitations as above on CPU time apply.
    • For intensive calculations you want to run interactively, it is possible to use the PBS job system to reserve a node for your exclusive use, while still having access to, e.g., the graphical capabilities of MATLAB, by forwarding the X output (qsub -X -I).
    • WARNING: an interactive MATLAB session on a compute node can be very slow. A workaround (found at hpc.uark.edu) is:
      • launch an interactive session with qsub -I -X
      • once the interactive session is started (say it starts on r2i0n15), start another connection to that compute node (ssh -X r2i0n15). In this second connection, start MATLAB, and it will work at normal speed.

    Batch use

    For any non-trivial calculation, it is strongly suggested that you use the PBS batch system. -

    Running a MATLAB script

    You first have to write a MATLAB m-file that executes the required calculation. Make sure the last command of this m-file is 'quit' or 'exit', otherwise MATLAB might wait forever for more commands ... -

    Example (to be saved, e.g., in testmatlabscript.m) : -

    ndim = 600;
    a = rand(ndim,1)*10;
    b = rand(1,ndim)*100;
    c = a * b;
    d = max(c);
    e = min(d);
    save('testmatlab', 'd', 'e');
    exit;

    You can now run this program (as a test, still on the login node, from the directory where you saved the file testmatlabscript.m):

    matlab -nodisplay -r testmatlabscript

    The next thing is to write a small shell script, to be sent to the PBS Job System, so that the program can be executed on a compute node, rather than on the login node. -

    A simple example follows (to be saved, e.g., in testmatlabscript.sh):

    #!/bin/bash -l
    # The maximum duration of the program,
    #   in the format [days:]hours:minutes:seconds
    #PBS -l walltime=01:00:00
    # The requested amount of RAM (per process)
    #PBS -l pmem=950mb
    # The name of your job (used in mail, output file, showq, ...)
    #PBS -N matlab_test_job
    # Set the correct environment for MATLAB
    module load matlab
    # Go to the directory from where 'qsub' was run
    cd $PBS_O_WORKDIR
    # Start MATLAB, specifying the correct m-file
    matlab -nojvm -nodisplay -r testmatlabscript

    Now you submit your job with

    $ qsub testmatlabscript.sh

    and you get the jobid that was assigned to your job. With

    qstat

    you get an overview of the status of your jobs. When the job has run, output will be available in the file <jobname>.o<jobid> in the directory where you submitted the job from. In the case of the script testmatlabscript.m above, a file testmatlab.mat will have been created with the calculated data d and e; you can load this file into MATLAB for further processing.
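    For a quick look at the results without starting the full MATLAB desktop, you can, for instance, load the .mat file directly from the command line (a minimal sketch using the variable names of the example above):

    $ matlab -nodisplay -r "load('testmatlab'); disp(e); exit"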

    More commands and options of the job system are described in the general documentation on running jobs and in particular on the page "Submitting and managing jobs".

    Running a MATLAB function

    If instead of a script, a MATLAB function is used, parameters can be passed into the function. -

    Example (to be saved, e.g., in testmatlabfunction.m) : -

    function testmatlabfunction(input1,input2)
    % source: https://wiki.inf.ed.ac.uk/ANC/MatlabComputing
    % change arguments to numerics if necessary - only when compiling code
    if ~isnumeric(input1)
       input1n = str2num(input1);
       input2n = str2num(input2);
    else
       input1n = input1;
       input2n = input2;
    end
    sumofinputs = input1n + input2n;
    outputfilename = ['testfunction_' num2str(input1n) '_' num2str(input2n)];
    save(outputfilename, 'input1n', 'input2n', 'sumofinputs');
    exit;

    You can now run this program (as a test, still on the login node, from the directory where you saved the file testmatlabfunction.m):

    matlab -nodisplay -r "testmatlabfunction 3 6"

    Note the quotes around the function name and the parameters. Note also that the function name does not include the *.m extension. -

    MATLAB compiler

    Each job requires a MATLAB license while running. If you start lots of jobs, you'll use lots of licenses. When all licenses are in use, your further jobs will fail, and you'll block access to MATLAB for other people at your site. -

    However, when compiling your MATLAB program, no more runtime licenses are needed. -

    Compilation of MATLAB files is relatively easy with the MATLAB 'mcc' compiler. It works for 'function m-files' and for 'script m-files'. 'function m-files' are however preferred. -

    To deploy a MATLAB program as a standalone application, load the module for MATLAB as a first step and compile the code in a second step with the mcc command. -

    If we want to compile a MATLAB program 'main.m', the corresponding command line should be: -

    mcc -v -R -singleCompThread -m main.m

    Where the options are: -

    • -m: generate a standalone application
    • -v: verbose display of the compilation steps
    • -R: pass runtime options; useful ones are -singleCompThread, -nodisplay and -nojvm

    The deployed executable is compiled to run using a single thread via the option -singleCompThread. This is important when a number of processes are to run concurrently on the same node (e.g., the worker framework).

    Notes

    • Parameters are always considered as strings, and thus have to be converted to, e.g., numbers inside your function when needed. You can test with the 'isdeployed' or 'isstr' functions (see examples).
    • The function is allowed to return a value, but that value is *not* returned to the shell. Thus, to get results out, they have to be written to the screen or saved in a file.
    • Not all MATLAB functions are allowed in compiled code (see the "Compiler Support for MATLAB and Toolboxes" page at the MathWorks).

    Example 1: Simple matlab script file

    • File fibonacci.m contains:

    function a = fibonacci(n)
    % FIBONACCI Calculate the fibonacci value of n.
    % When compiled as standalone function,
    % arguments are always passed as strings, not nums ...
    if (isstr(n))
      n = str2num(n);
    end;
    if (length(n)~=1) || (fix(n) ~= n) || (n < 0)
      error(['MATLAB:factorial:NNotPositiveInteger', ...
            'N must be a positive integer.']);
    end
    first = 0; second = 1;
    for i=1:n-1
        next = first+second;
        first=second;
        second=next;
    end
    % When called from a compiled application, display result
    if (isdeployed)
      disp(sprintf('Fibonacci %d -> %d', n, first))
    end
    % Also return the result, so that the function remains usable
    % from other MATLAB scripts.
    a=first;
    • Run the compiler:

    mcc -m fibonacci

    • The executable file 'fibonacci' is created.
    • You can now run your application as follows:

    $ ./fibonacci 6
    Fibonacci 6 -> 5
    $ ./fibonacci 8
    Fibonacci 8 -> 13
    $ ./fibonacci 45
    Fibonacci 45 -> 701408733

    Example 2 : Function that uses other Matlab files

    • File multi_fibo.m contains:

    function multi_fibo()
    %MULTIFIBO Calls FIBONACCI multiple times in a loop
    % Function calculates Fibonacci number for a matrix by calling the
    % fibonacci function in a loop. Compiling this file would automatically
    % compile the fibonacci function also because dependencies are
    % automatically checked.
    n=10:20
    if max(n)<0
        f = NaN;
    else
        [r c] = size(n);
        for i = 1:r %#ok
            for j = 1:c %#ok
                try
                    f(i,j) = fibonacci(n(i,j));
                catch
                    f(i,j) = NaN;
                end
            end
        end
    end
    • Compile:

    mcc -m multi_fibo

    • Run:

    ./multi_fibo
    n =
        10    11    12    13    14    15    16    17    18    19    20
    Fibonacci 10 -> 34
    Fibonacci 11 -> 55
    Fibonacci 12 -> 89
    Fibonacci 13 -> 144
    Fibonacci 14 -> 233
    Fibonacci 15 -> 377
    Fibonacci 16 -> 610
    Fibonacci 17 -> 987
    Fibonacci 18 -> 1597
    Fibonacci 19 -> 2584
    Fibonacci 20 -> 4181
    f =
          34    55    89   144   233   377   610   987  1597  2584  4181

    Example 3: Function that uses other MATLAB files in other directories

    • If your script uses MATLAB files (e.g., self-made scripts, compiled mex files) other than those that are part of the MATLAB distribution, include them at compile time as follows:

    mcc -m -I /path/to/MyMatlabScripts1/ -I /path/to/MyMatlabScripts2 .... -I /path/to/MyMatlabScriptsN multi_fibo

    (on a single line).

    More info on the MATLAB Compiler

    Matlab compiler documentation on the Mathworks website. -

    " -229,"","

    Matlab has several products to facilitate parallel computing, e.g. -

    • The Parallel Computing Toolbox is a regular MATLAB toolbox that lets you write parallel MATLAB applications or use parallel implementations of algorithms in other toolboxes. Try help distcomp to see if the toolbox is installed for the version of MATLAB that you're using.
    " -231,"","

    Purpose

    Here it is shown how to use Rscript and pass arguments to an R script.

    Prerequisites

    It is assumed that the reader is familiar with the use of R as well as R scripting, and is familiar with the Linux bash shell.

    Using Rscript and command line arguments

    When performing computation on the cluster using R, it is necessary to run those scripts from the command line, rather than interactively using R's graphical user interface. Consider the following R function that is defined in, e.g., 'logistic.R':

    logistic <- function(r, x) {
        r*x*(1.0 - x)
    }

    From R's GUI interface, you typically use this from the console as follows:

    > source(\"logistic.R\")
    -> logistic(3.2, 0.5)

    It is trivial to write an R script 'logistic-wrapper.R' that can be run from the command line and that takes two arguments, the first being 'r', the second 'x'.

    args <- commandArgs(TRUE)
    r <- as.double(args[1])
    x <- as.double(args[2])

    source("logistic.R")

    logistic(r, x)

    The first line of this script stores all arguments passed to the script in the array 'args'. The second (third) line converts the first (second) element of that array from a string to a double precision number using the function 'as.double', and stores it in r (x).

    Now from the linux command line, one can run the script above for r = 3.2 and x = 0.5 as follows:

    $ Rscript logistic-wrapper.R 3.2 0.5

    Note that you should have loaded the appropriate R module, e.g.,

    $ module load R

    Suppose now that the script needs to be extended to iterate the logistic map 'n' times, where the latter value is passed as the third argument to the script.

    args <- commandArgs(TRUE)
    r <- as.double(args[1])
    x <- as.double(args[2])
    n <- as.integer(args[3])

    source("logistic.R")

    for (i in 1:n) x <- logistic(r, x)
    print(x)

    Note that since the third argument represents the number of iterations, it should be interpreted as an integer value, and hence be converted appropriately using the function 'as.integer'.

    The script is now invoked from the linux command line with three parameters as follows:

    $ Rscript logistic-wrapper.R 3.2 0.5 100

    Note that if you pass an argument that is to be interpreted as a string in your R program, no conversion is needed, e.g.,

    name <- args[4]

    Here it is assumed that the 'name' is passed as the fourth command line argument.
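    For instance (a sketch; the extra value 'myrun' is just an example string for such a fourth argument), the call could then look like:

    $ Rscript logistic-wrapper.R 3.2 0.5 100 myrun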

    " -233,"","

    Purpose

    Although R is a nice and fairly complete software package for statistical analysis, there are nevertheless situations where it is desirable to extend R. This may be either to add functionality that is implemented in some C library, or to eliminate performance bottlenecks in R code. In this how-to it is assumed that the user wants to call their own C functions from R.

    Prerequisites

    It is assumed that the reader is familiar with the use of R as well as R scripting, and is a reasonably proficient C programmer. Specifically the reader should be familiar with the use of pointers in C.

    Integration step by step

    Before all else, first load the appropriate R module to prepare your environment, e.g.,

    $ module load R

    If you want a specific version of R, you can first check which versions are available using

    $ module av R

    and then load the appropriate version of the module, e.g.,

    $ module load R/3.1.1-intel-2014b

    A first example

    No tutorial is complete without the mandatory 'hello world' example. The C code in file 'myRLib.c' is shown below:

    #include <R.h>

    void sayHello(int *n) {
        int i;
        for (i = 0; i < *n; i++)
            Rprintf("hello world!\n");
    }

    Three things should be noted at this point:

    1. the 'R.h' header file has to be included; this file is part of the R distribution, and R knows where to find it;
    2. function parameters are always pointers; and
    3. to print to the R console, 'Rprintf' rather than 'printf' should be used.

    From this 'myRLib.c' file a shared library can be built in one convenient step:

    $ R CMD SHLIB myRLib.c

    If all goes well, i.e., if the source code has no syntax errors and all functions have been defined, this command will produce a shared library called 'myRLib.so'.

    To use this function from within R in a convenient way, a simple R wrapper can be defined in 'myRLib.R':

    dyn.load(\"myRLib.so\");
    -sayHello <- function(n) {
    -    .C(\"sayHello\", as.integer(n))
    -}

    In this script, the first line loads the share library containing the 'sayHello' function. The second line defines a convenient wrapper to simplify calling the C function from R. The C function is called using the '.C' function. The latter's first parameter is the name of the C function to be called, i.e., 'sayHello', all other parameters will be passed to the C function, i.e., the number of times that 'sayHello' will say hello as an integer.

    Now, R can be started to be used interactively as usual, i.e.,

    $ R

    In R, we first source the library's definitions in 'myRLib.R', so that the wrapper functions can be used:

    > source(\"myRLib.R\")
    -> sayHello(2)
    -hello world!
    -hello world!
    -[[1]]
    -[1] 2

    Note that the 'sayHello' function is not particularly interesting since it does not return any value. The next example will illustrate how to accomplish this.

    A second, more engaging example

    Given R's pervasive use of vectors, a simple example of a function that takes a vector of real numbers as input, and returns its components' sum as output is shown next.

    #include <R.h>

    /* sayHello part not shown */

    void mySum(double *a, int *n, double *s) {
        int i;
        *s = 0.0;
        for (i = 0; i < *n; i++)
            *s += a[i];
    }

    Note that both 'a' and 's' are declared as pointers, the former being used as the address of the first array element, the latter as an address to store a double value, i.e., the sum of the array's components.

    To produce the shared library, it is built using the appropriate R command as before:

    $ R CMD SHLIB myRLib.c

    The wrapper code for this function is slightly more interesting since it will be programmed to provide a convenient "function feel".

    dyn.load("myRLib.so");

    # sayHello wrapper not shown

    mySum <- function(a) {
        n <- length(a);
        result <- .C("mySum", as.double(a), as.integer(n), s = double(1));
        result$s
    }

    Note that the wrapper function is now used to do some more work:

    1. it preprocesses the input by calculating the length of the input vector;
    2. it initializes 's', the parameter that will be used in the C function to store the result in; and
    3. it captures the result from the call to the C function, which contains all parameters passed to the function, extracting the actual result of the computation in the last statement.

    From R, 'mySum' can now easily be called:

    > source("myRLib.R")
    > mySum(c(1, 3, 8))
    [1] 12

    Note that 'mySum' will probably not be faster than R's own 'sum' function.

    A last example

    Functions can return vectors as well, so this last example illustrates how to accomplish this. The library is extended to:

    #include <R.h>

    /* sayHello and mySum not shown */

    void myMult(double *a, int *n, double *lambda, double *b) {
        int i;
        for (i = 0; i < *n; i++)
            b[i] = (*lambda)*a[i];
    }

    The semantics of the function is simply to take a vector and a real number as input, and return a vector of which each component is the product of the corresponding component in the original vector with that real number.

    After building the shared library as before, we can extend the wrapper script for this new function as follows:

    dyn.load("myRLib.so");

    # sayHello and mySum wrappers not shown

    myMult <- function(a, lambda) {
        n <- length(a);
        result <- .C("myMult", as.double(a), as.integer(n),
                     as.double(lambda), m = double(n));
        result$m
    }

    From within R, 'myMult' can be used as expected.

    > source(\"myRLib.R\")
    -> myMult(c(1, 3, 8), 9)
    -[1]  9 27 72
    -> mySum(myMult(c(1, 3, 8), 9))
    -[1] 108

    Further reading

    Obviously, this text is just for the impatient. More in-depth documentation can be found on the nearest CRAN site.

    " -235,"","

    Programming paradigms and models

    Development tools

    Libraries

      -

    Integrating code with software packages

    " -237,"","

    Purpose

    -

    MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation." MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today.

    -

    The current version of the MPI standard is 3.1, but only the newest implementations implement the full standard. The previous major version is the MPI-2.0 specification, with minor updates in the MPI-2.1 and MPI-2.2 specifications. The standardisation body for MPI is the MPI Forum.

    -

    Some background information

    -

    MPI-1.0 (1994) and its updates MPI-1.1 (1995), MPI-1.2 (1997) and MPI-1.3 (1998) concentrate on point-to-point communication (send/receive) and global operations in a static process topology. Major additions in MPI-2.0 (1997) and its updates MPI-2.1 (2008) and MPI-2.2 (2009) are one-sided communication (get/put), dynamic process management and a model for parallel I/O. MPI-3.0 (2012) adds non-blocking collectives, a major update of the one-sided communication model and neighbourhood collectives on graph topologies. The first update of MPI-3.0, the MPI-3.1 specification, was released in 2015, and work is ongoing on the next major update, MPI-4.0.

    -

    The two dominant Open Source implementations are Open MPI and MPICH. The latter has been through a couple of name changes: it was originally conceived in the early '90s as MPICH, then the complete rewrite was renamed MPICH2, but as this name caused confusion when the MPI standard evolved to MPI 3.x, the name was changed back to MPICH and the version number bumped to 3.0. MVAPICH, developed at Ohio State University, is an offspring of MPICH further optimised for InfiniBand and some other high-performance interconnect technologies. Most other MPI implementations are derived from one of these implementations.

    -

    At the VSC we offer both implementations: Open MPI is offered with the GNU compilers in the FOSS toolchain, while the Intel MPI used in the Intel toolchain is derived from the MPICH code base. -

    -

    Prerequisites

    -

    You have a program that uses an MPI library, either developed by you, or by others. In the latter case, the program's documentation should mention the MPI library it was developed with. -

    -

    Implementations

    -

    On VSC clusters, several MPI implementations are installed. We provide two MPI implementations on all newer machines that can support those implementations: -

    1. Intel MPI in the intel toolchain
       1. Intel MPI 4.1 (intel/2014a and intel/2014b toolchains) implements the MPI-2.2 specification
       2. Intel MPI 5.0 (intel/2015a and intel/2015b toolchains) and Intel MPI 5.1 (intel/2016a and intel/2016b toolchains) implement the MPI-3.0 specification
    2. Open MPI in the foss toolchain
       1. Open MPI 1.6 (foss/2014a toolchain) only implements the MPI-2.1 specification
       2. Open MPI 1.8 (foss/2014b, foss/2015a and foss/2015b toolchains) and Open MPI 1.10 (foss/2016a and foss/2016b) implement the MPI-3.0 specification

    When developing your own software, this is the preferred order in which to select an implementation. The performance should be very similar; however, more development tools are available for Intel MPI (e.g., ITAC for performance monitoring).

    -

    Specialised hardware sometimes requires specialised MPI-libraries. -

    -
    • The interconnect in Cerebro, the SGI UV shared memory machine at KU Leuven, provides hardware acceleration for some MPI functions. To take full advantage of the interconnect, it is necessary to use the SGI MPI library, part of the MPT package, which stands for Message Passing Toolkit (and also contains SGI's own implementation of OpenSHMEM). Support is offered through additional toolchains (intel-mpt and foss-mpt).
      • SGI MPT 2.09 (intel-mpt/2014a and foss-mpt/2014a toolchains) contains the SGI MPI 1.7 library which implements the MPI-2.2 specification.
      • SGI MPT 2.10 (not yet installed, contact KU Leuven support) contains the SGI MPI 1.8 library which implements the MPI-3.0 specification.

    Several other implementations may be installed, e.g., MVAPICH, but we assume you know what you're doing if you choose to use them. -

    -

    We also assume you are already familiar with the job submission procedure. If not, check the "Running jobs" section first.

    -

    Compiling and running

    -

    See the documentation about the toolchains.
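    As a minimal sketch (the source file name mpi_hello.c and the toolchain versions are placeholders; check the toolchain documentation for the exact modules and compiler wrappers on your cluster):

    # Intel toolchain (Intel MPI):
    $ module load intel/2016a
    $ mpiicc -O2 -o mpi_hello mpi_hello.c
    # FOSS toolchain (Open MPI):
    $ module load foss/2016a
    $ mpicc -O2 -o mpi_hello mpi_hello.c
    # Running, typically inside a job script:
    $ mpirun -np 4 ./mpi_hello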

    -

    Debugging

    -

    For debugging, we recommend the ARM DDT debugger (formerly Allinea DDT, module allinea-ddt). Video tutorials are available on the Arm web site. (KU Leuven-only). -

    -

    When using the intel toolchain, Intel's Trace Analyser & Collector (ITAC) may also prove useful. -

    -

    Profiling

    -

    To profile MPI applications, one may use Arm MAP (formerly Allinea MAP), or Scalasca. (KU Leuven-only) -

    -

    Further information

    -" -239,"","

    Purpose

    -

    OpenMP (Open Multi-Processing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior. -

    -

    OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer. The current version of the OpenMP specification is 4.0. It was released in July 2013 and is probably the biggest update of the specification so far; however, not all compilers fully support this standard yet. The previous specifications were the OpenMP 3.1 specification (July 2011) and the OpenMP 3.0 specification (May 2008). Versions prior to 4.0 concentrated on exploiting thread-level parallelism on multicore machines in a portable way, while version 4.0 of the specification adds support for vectorisation for the SIMD instruction sets on modern CPUs and for offloading computations to accelerators (GPU, Xeon Phi, ...). The latter feature is an alternative to the use of OpenACC directives.

    -

    Prerequisites

    -

    You should have a program that uses the OpenMP API. -

    -

    Implementations

    -

    On the VSC clusters, the following compilers support OpenMP: -

    -
    1. Intel compilers in the intel toolchain
       1. The Intel compiler version 13.1 (intel/2014a and intel/2014b toolchains) implements the OpenMP 3.1 specification
       2. The Intel compiler version 14.0 (installed on some systems outside the toolchains, sometimes in a package with icc/2013_sp1 in its name) implements the OpenMP 3.1 specification and some elements of the OpenMP 4.0 specification (which was only just approved when the compiler was released)
       3. The Intel compiler version 15.0 (intel/2015a and intel/2015b toolchains) supports all of the OpenMP 4.0 specification except user-defined reductions. It supports offload to a Xeon Phi system (and to some Intel processor-integrated graphics, but that is not relevant on the VSC clusters)
       4. The Intel compiler version 16.0 (intel/2016a and intel/2016b toolchains) offers almost complete OpenMP 4.0 support. User-defined reductions are now also supported
    2. GCC in the foss toolchain
       1. GCC versions 4.8.2 (foss/2014a toolchain) and 4.8.3 (foss/2014b toolchain) support the OpenMP 3.1 specification
       2. GCC versions 4.9.2 (foss/2015a toolchain) and 4.9.3 (foss/2015b and foss/2016a toolchains) support the full OpenMP 4.0 specification. However, "offloaded" code is run on the CPU and not on the GPU or any other accelerator. (In fact, OpenMP 4.0 is supported for C/C++ starting in GCC 4.9.0 and for Fortran in GCC 4.9.1.)
       3. GCC 5.4 (foss/2016b toolchain) offers full OpenMP 4.0 support and has the basics built in to support offloading
       4. GCC 6.x (not yet part of a toolchain) offers full OpenMP 4.5 support in C and C++, including offloading to some variants of the Xeon Phi and to AMD HSAIL, and some support for OpenACC on NVIDIA GPUs

    When developing your own software, this is the preferred order in which to select a toolchain: for most applications the GCC OpenMP runtime is inferior to the Intel implementation.

    -

    We also assume you are already familiar with the job submission procedure. If not, check the "Running jobs" section first.

    -

    Compiling OpenMP code

    -

    See the instructions on the page about toolchains for compiling OpenMP code with the Intel and GNU compilers. -
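    As a rough sketch (the source file names are placeholders; see the toolchain pages for the exact module names and recommended optimisation flags):

    # Intel toolchain:
    $ icc -qopenmp -O2 -o omp_hello omp_hello.c        # older Intel compilers use -openmp
    $ ifort -qopenmp -O2 -o omp_hello omp_hello.f90
    # FOSS toolchain:
    $ gcc -fopenmp -O2 -o omp_hello omp_hello.c
    $ gfortran -fopenmp -O2 -o omp_hello omp_hello.f90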

    -

    Note that it is in fact possible to link OpenMP object code compiled with gcc and with the Intel compiler, on the condition that the Intel OpenMP libraries and run-time are used (e.g., by linking with icc using the -openmp option). The Intel manual is, however, not clear about which versions of gcc and icc work together well. This is only for specialists, but it may be useful if you only have access to object files and not to the full source code.
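    A minimal sketch of such a mixed build (part1.c and part2.c are hypothetical source files):

    $ gcc -fopenmp -c part1.c                  # GCC-compiled object file
    $ icc -openmp -c part2.c                   # Intel-compiled object file
    $ icc -openmp part1.o part2.o -o mixed     # link with icc so the Intel OpenMP run-time is used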

    -

    Running OpenMP programs

    -

    Since OpenMP is intended for use in a shared memory context, when submitting a job to the queue system, remember to request a single node (i.e., -l nodes=1) and as many processors as you need parallel threads (e.g., -l ppn=4). The latter should not exceed the number of cores on the machine the job runs on. For relevant hardware information, please consult the list of available hardware. -

    -

    You may have to set the number of cores that the program should use by hand, e.g., when you don't use all cores on a node, because the mechanisms in the OpenMP runtime that determine the number of threads don't see the number of cores assigned to the job but the total number of cores in the machine. Depending on the program, this may be through a command line option to the executable, a value in the input file, or the environment variable OMP_NUM_THREADS. Failing to set this value may result in threads competing with each other for resources such as cache and access to the CPU and thus lower performance.
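    A minimal job script sketch (the program name omp_program is a placeholder; the thread count is set by hand to match the requested cores):

    #!/bin/bash -l
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # Match the number of threads to the number of cores requested above
    export OMP_NUM_THREADS=4
    ./omp_program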

    -

    Further information

    -" -241,"","

    What are toolchains?

    A toolchain is a collection of tools to build (HPC) software consistently. It consists of -

    • compilers for C/C++ and Fortran,
    • a communications library (MPI), and
    • mathematical libraries (linear algebra, FFT).

    Toolchains at the VSC are versioned and refreshed twice a year. All software available on the cluster is rebuilt when a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of their definition, followed by either a or b, e.g., 2014a. Note that the software components are not necessarily the most recent releases; rather, they are selected for stability and reliability.

    Available toolchains at the VSC

    Two toolchain flavours are standard across the VSC on all machines that can support them: the intel toolchain (based on the Intel compilers and Intel MPI) and the foss toolchain (based on the GCC compilers and Open MPI).

    It may be of interest to note that the Intel C/C++ compilers are stricter with respect to the standards than the GCC C/C++ compilers, while for Fortran the GCC compiler tracks the standard more closely and Intel's Fortran compiler allows many extensions added during Fortran's long history. When developing code, one should always build with both compiler suites and eliminate all warnings.
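    For example (a sketch; the source file names are placeholders), warnings can be enabled as follows:

    $ gcc -Wall -Wextra -c mycode.c
    $ icc -Wall -c mycode.c
    $ gfortran -Wall -c mycode.f90
    $ ifort -warn all -c mycode.f90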

    On average, the Intel compiler suite produces executables that are 5 to 10 % faster than those generated using the GCC compiler suite. However, for individual applications the differences may be more significant with sometimes significantly faster code produced by the Intel compilers while on other applications the GNU compiler may produce much faster code. -

    Additional toolchains may be defined on specialised hardware to extract the maximum performance from that hardware. -

    • On Cerebro, the SGI UV shared memory system at KU Leuven, you need to use the SGI MPI library (called MPT for Message Passing Toolkit) to get the maximum performance from the interconnect (which offers hardware acceleration for some MPI functions). On that machine, two additional toolchains are defined, intel-mpt and foss-mpt, equivalent to the standard intel and foss toolchains respectively but with the MPI library replaced by MPT.

    For detailed documentation on each of these toolchains, we refer to the pages linked above in this document.

    " -243,"","

    Why use a version control system?

    A version control system (VCS) helps you manage the changes to the source files of your project, and most systems also support team development. Since it remembers the history of your files, you can always return to an earlier version if you've messed up while making changes. By adding comments when you store a new version in the VCS, it also becomes much easier to track which change was made for what purpose at what time. And if you develop in a team, it helps to organise making coordinated changes to the code base, and it supports co-development even across file system borders (e.g., when working with a remote partner).

    Most Integrated Development Environments (IDE) offer support for one or more version control systems. E.g., Eclipse, the IDE which we recommend for the development of C/C++ or Fortran codes on clusters, supports all of the systems mentioned on this page, some out-of-the-box and others by adding an additional package. The systems mentioned on this page are all available on Linux, OS X and Windows (through the UNIX emulation layer cygwin and all except RCS also in at least one native implementation). -

    Types of version control systems

    An excellent introduction to the various types of version control systems can be found in the book Pro GIT by Scott Chacon and Ben Straub. -

    Local systems

    These first generation systems use a local database that stores previous versions of files. One of the most popular examples of this type is the venerable RCS (Revision Control System) system, distributed with many UNIX-like systems. It works by keeping patch sets (differences between various versions of a file) in a special format on disk. It can then return to a previous version of a file by adding up all the patches. -

    RCS and other "local systems" are very outdated. Hence we advise you to use one of the systems from the next two categories.


    Centralised systems

    Centralised version control systems were developed to enable people to collaborate on code and documents with people on different systems that may not share a common file system. The version files are now maintained by a server to which multiple clients can connect and check out files, and the systems help to manage concurrent changes to a file by several users (through a copy-modify-merge procedure). Popular examples of this type are CVS (Concurrent Versions System) and SVN (Subversion). Of those two, SVN is the more recent system while CVS is no longer further developed and less and less used. -

    Links: -

    • CVS Wikipedia page
    • SVN Wikipedia page
    • CVS implementations
      • A command-line client is included in most Linux distributions. On Windows, the cygwin UNIX emulation layer also has a cvs package. On OS X, it is available (though no longer maintained) through the MacPorts project.
      • The Eclipse IDE comes with built-in support for CVS.
    • SVN implementations
      • Command-line clients are included in most Linux distributions and OS X. On Windows, the cygwin UNIX emulation layer also has an svn package. The command line client is also available on the VSC clusters.
      • TortoiseSVN (or go straight to the TortoiseSVN web site) is a popular Windows-native GUI client that integrates well with the explorer. However, if you google for "SVN GUI" you'll find a plethora of other choices, not only for Windows but also for macOS and Linux.
      • SVN can be integrated with the Eclipse IDE through the "Subversive SVN team provider" plugin, which can be installed through the "Install New Software" panel in the Help menu. More information and instructions are available on the Subversive subsite of the main Eclipse web site.

    Distributed systems

    The weak point of the centralised systems is that they require you to be online to check out a file or to commit a revision. In a distributed system, the clients mirror the complete repository and not just the latest version of each file. When online, the user can then synchronise the local repository with the copy on a server. In a single-user scenario you can still keep all your files in the local repository without using a server, and hence it doesn't make sense anymore to use one of the old local-only version control systems. The disadvantage of a distributed system is that you are not forced to synchronise after every commit, so the local repositories of various users on a project can be very much out-of-sync with each other, making the job harder when those versions have to be merged again.

    Popular examples of systems of this type are Git (originally developed to manage the Linux kernel project) and Mercurial (sometimes abbreviated as Hg, chemists will understand why). -

    Links: -

    • Git on Wikipedia
    • Main Git web page
    • Mercurial on Wikipedia
    • Main Mercurial web page
    • Git implementations
      • If you have a Linux system, Git is most likely already installed on your system. On OS X, git is available through Xcode, though it is not always the most recent version. On Windows, there is a Git package in the UNIX emulation layer Cygwin. Downloads for all systems are also available in the download section of the main git web site. That page also contains links to a number of GUI options. Most if not all GUI tools store projects in a way that is fully compatible with the command line tools, so you can use both simultaneously. The command line client is also available on the VSC clusters.
      • TortoiseGit is an explorer-integrated interface to Git on Windows, similar to TortoiseSVN.
      • Another nice GUI application is SourceTree, produced by Atlassian. Atlassian is the company behind the Bitbucket cloud service, but their tool also works well with GitHub, one of their main competitors. It has a very nice way of representing the history of a local repository.
      • The Eclipse IDE comes with built-in support for Git through the standard plug-in EGit. More recent versions of this plugin may be available through the Eclipse Marketplace.
      • CollabNet GitEye is a Git GUI for Windows, OS X and Linux built on top of a number of Eclipse libraries, but you don't need to install Eclipse to be able to use it. It is a nice way, though, to browse through your Git repositories outside of the Eclipse environment. GitEye itself is free and integrates with several git cloud services and bug tracking services.
    • Mercurial (Hg) implementations

    Cloud services

    Many companies offer hosting services for SVN, Git or Mercurial repositories in the cloud. Google, e.g., for subversion hosting service, git hosting service or mercurial hosting service. Several offer free public hosting for Open Source projects or have free access for academic accounts. Some noteworthy ones that are popular for academic projects are: -

    • GitHub (github.com) offers free Git and Subversion hosting for Open Source projects. We use this service for some VSC in-house tools development. It is also possible to host private projects if you subscribe to one of their paying plans.
    • Bitbucket (bitbucket.org) offers both git and mercurial services. It also supports private projects with a limited number of users in free accounts (and has a special deal for academic institutions, allowing unlimited users), while the other services mentioned on this page only support open source projects for free.
    • SourceForge is a very well known service for hosting Open Source projects. It currently supports projects managed through Subversion, Git, Mercurial and a few other systems.

    However, we urge you to always carefully check the terms-of-use of these services to assure that, e.g., the way they deal with intellectual property is in line with your institute's requirements. -

    Which one should I use?

    It is not up to us to make this choice for you, but here are a number of elements that you should take into account: -

    • Subversion, Git and Mercurial are all recent systems that are well maintained and supported by several hosting services.
    • Subversion and Git are installed on most VSC systems. We use Git ourselves for some of our in-house development.
    • Centralised version management systems have a simpler concept than the distributed ones, but if you expect prolonged periods during which you are offline, keep in mind that you cannot make any commits during that period.
    • As you have only a single copy of the repository in a centralised system, a reliable hosting service or a good backup strategy is important. In a distributed system it would still be possible to reconstruct the contents of a repository from the other repositories.
    • If you want to use an IDE, it is good to check which systems are supported by the IDE. E.g., Eclipse supports Git out-of-the-box, and Subversion and Mercurial through a plug-in. Visual Studio also supports all three of these systems.
    " -245,"","

    This tutorial explains some of the basic use of the git command line client. It does not aim to be a complete tutorial on git but rather a brief introduction explaining some of the issues and showing you how to house your git repository at the VSC. At the end of this text, we provide some links to further and more complete documentation. -

    -

    Preparing your local machine for using git

    -

    It is best to first configure git on your local machine using git config. -

    -
    git config --global user.name "Kurt Lust"
    git config --global user.email kurt.lust@uantwerpen.be
    git config --global core.editor vi

    These settings are stored in the file .gitconfig in your home directory (OS X, Linux, Cygwin). The file is a simple user-editable text file.
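    After running the three commands above, the file looks roughly like this:

    $ cat ~/.gitconfig
    [user]
            name = Kurt Lust
            email = kurt.lust@uantwerpen.be
    [core]
            editor = vi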

    -

    Some remarks on accessing a remote repository using command line tools

    -

    Many cloud git hosting services offer a choice between ssh and https access to a repository through the git command line tools. If you want to use one of the VSC clusters for a remote repository, you'll have to use the ssh protocol. -

    -

    Https access

    -

    Https access uses your account and password of the cloud service. Every time you access the remote repository, the git command-line client will ask for the password. This can be solved by using a credential manager in recent versions of the git client (1.7.9 and newer). -

    -
    • On Windows, the Git Credential Manager for Windows can be used to safely store your password in the Windows credential store.
    • The Apple OS X version of git comes with credential-osxkeychain to store your credentials in the OS X keychain. Enable it using
      git config --global credential.helper osxkeychain
    • There are also various solutions for Linux systems, e.g., using the GNOME keyring. To tell git to use it, use:
      git config --global credential.helper /usr/share/doc/git/contrib/credential/gnome-keyring/git-credential-gnome-keyring
      This of course depends on the setup of your Linux machine. The program might not be installed or might be installed in a different directory.
    • Various GUI clients may have their own way of managing the credentials.

    Ssh access

    -

    The git command line client uses the standard ssh mechanism to manage ssh keys. It is sufficient to use an ssh agent (as you are probably doing already when you log on to the VSC clusters) and load the key for the service in the agent (using ssh-add).
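    For example (a sketch; the key file name is just an example, use the key that belongs to the repository service):

    $ eval $(ssh-agent -s)           # start an agent if one is not already running
    $ ssh-add ~/.ssh/id_rsa_vsc      # load the key; you will be asked for its passphrase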

    -

    Setting up a new repository

    -

    Getting an existing code base into a local repository

    -

    Git stores its repository with your code in a hidden directory. -

    -
    1. Go to the top directory of what has to become your repository (most likely the top directory of the files that you want to version control) and run
       git init
       This will create a hidden .git subdirectory with the local repository database.
    2. Now you can add the files to your new repository, e.g., if you want to add all files:
       git add .
       (don't forget the dot at the end, it means "add the current directory"!)
    3. Next you can do your first commit:
       git commit -m "Project brought under Git control"
    4. And you're done! The current version of your files is now stored in your local repository. Try, e.g.,
       git show
       git status
       to get some info about the repository.

    Bringing an existing local repository into a cloud service

    -

    Here we assume that you have a local repository and now want to put it into a cloud service to collaborate with others on a project. -

    -

    You may want to make a backup of your local repository at this point in case things go wrong. -

    -
    1. Create an empty project on your favorite cloud service. Follow the instructions provided by the service.
    2. Now you'll need to tell your local repository about the remote one. Most cloud services have a button to show you the URL of the remote repository that you have just set up, either using the http or the ssh-based protocol. E.g.,
       git remote add origin ssh://git@bitbucket.org/username/myproject.git
       connects to the repository myproject on Bitbucket. It will be known on your computer by the short name origin. The short name saves you from having to use the full repository URL each time you want to refer to it.
    3. Push the code from your local repository into the remote repository:
       git push -u --mirror origin
       will create a mirror of your local repository on the remote site. Use the --mirror option with care, as it may destroy part of your remote repository if that one is not empty and contains information that is not contained in your local repository!

    You can also use this procedure to create a so-called bare remote repository in your account on the VSC clusters. A bare repository is a repository that does not also contain its own source file tree, so you cannot edit directly in that directory and also use it as a local repository on the cluster. However, you can push to and pull from that repository, so it will work just like a repository on one of the hosting services. The access to the repository will be through ssh. The first two steps have to be modified:

    -
    1. To create an empty repository, log in to your home cluster and go to the directory where you want to store the repository. Now create the repository (assuming its name is repository-name):
       git init --bare repository-name
       This will create the directory repository-name that stores a number of files which together form your git repository.
    2. The URL of the repository will be of the form vscXXXXX@vsc.login.node:<full path to the repository>, e.g., if you're vsc20XYZ (a UAntwerpen account) and the repository is in the subdirectory testrepository of your data directory, the URL is vsc20XYZ@login.hpc.uantwerpen.be:/data/antwerpen/20X/vsc20XYZ/testrepository. So use this URL in the git remote add command. You don't need to specify ssh:// in the URL if you use the scp syntax as we did in this example.

    The access to this repository will be regulated through the file access permissions on that subdirectory. Everybody who has read and write access to that directory can also use the repository (but using his/her own login name in the URL of course, as VSC accounts should not be shared by multiple users).
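    As a rough sketch (assuming your collaborators share a Unix group, here called 'mygroup', and that plain group permissions are what regulates access on your cluster's file system; the path is the example path used above):

    $ chgrp -R mygroup /data/antwerpen/20X/vsc20XYZ/testrepository
    $ chmod -R g+rwX /data/antwerpen/20X/vsc20XYZ/testrepository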

    -

    NOTE: Technically speaking, git can also be used in full peer-to-peer mode where all repos also have a source directory in which files can be edited. It does require a good organisation of the work flow. E.g., different people in the team should not be working in the same branch, as one cannot push changes to a repo for the branch that is active (i.e., mirrored in the source files), since this may create an inconsistent state. So our advice is that if you want to use the cluster as a git server and also edit files on the cluster, you simply use two repositories: one that you use as a local repository in which you also work, and one that is only used as a central repository to which the various users push changes and from which they pull changes.

    -

    As a clone from an existing local or remote repository

    -

    Another way to create a new repository is from an existing repository on your local machine or on a remote service. The latter is useful, e.g., if you want to join an existing project and create a local copy of the remote repository on your machine to do your own work. This can be accomplished through cloning of a repository, a very easy operation in git as there is a command that combines all necessary steps in a single command: -

    -
    1. Go to the directory where you want to store the repository and corresponding source tree (in a subdirectory of that directory called directoryname).
    2. You have to know the URL of the repository that you want to clone. Once you know the URL, all you need to do is
       git clone URL directoryname
       where you replace URL with the URL of the repository that you want to clone.

    Note: If you start from scratch and want to use a remote repository on one of the cloud services, it might be easiest to first create a repository over there using the instructions of the server system or cloud service, and then clone that (even if it is still empty) to a local repository in which you actually work. -
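
    For example, to clone the Bitbucket repository used earlier on this page into a directory myproject (the URL is of course specific to your own account and project):

    git clone ssh://git@bitbucket.org/username/myproject.git myproject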

    -

    Working with your local repository

    -

    If you are only using a local repository, the basic workflow to add the modifications to the git database is fairly simple: -

    -
      -
    1. - Edit the files. -
    2. -
    3. - Add the modified files to the index using: - git add filename - This process is called staging. -
    4. -
    5. - You can continue to further edit files if you want and also stage them. -
    6. -
    7. - Commit all staged files to the repository: - git commit - Git will ask you to enter a message describing the commit, or you can specify a message with the -m option. -
    8. -
    -

    This is not very exciting though. Version control becomes really useful once you want to return to a previous version, or create a branch of the code to try something out or fix a bug without immediately changing the main branch of the code (which you might be using for production runs). You can then merge the modifications back into your main code. Branching and merging branches are essential in all this, and if you use git to collaborate with others you'll be confronted with branches sooner rather than later. In fact, every git repository has at least one branch, the main branch, as -

    -

    git status -

    -

    shows. -

    -

    Assume you want to start a new branch to try something without affecting your main code, e.g., because you also want to further evolve your main code branch while you're working. You can create a branch (let's assume we name it branch2) with -

    -

    git branch branch2 -

    -

    And then switch to it with -

    -

    git checkout branch2 -

    -

    Or combine both steps with -

    -

    git checkout -b branch2 -

    -

    You can then switch between this branch and the master branch with -

    -

    git checkout master -

    -

    and -

    -

    git checkout branch2 -

    -

    at will and make updates to the active branch using the regular git add and git commit cycle. -

    -

    The second important operation with branches is merging them back together. One way to do this is with git merge. Assume you want to merge the branch branch2 back into the master branch. You'd do this by first switching to the master branch using -

    -

    git checkout master -

    -

    and then ask git to merge both branches: -

    -

    git merge branch2 -

    -

    Git will make a good effort to merge both sets of modifications since their common ancestor, but this may not always work, especially if you've made changes to the same area of a file on both branches. Git will then warn you that there is a conflict for certain files, after which you can edit those files (the conflict zones will be clearly marked in the files), add them to the index and commit the modifications again. -
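
    A typical conflict-resolution cycle then looks as follows (the file name is just an example):

    git merge branch2    # reports a conflict in, e.g., solver.c
    # edit solver.c and resolve the clearly marked conflict zones, then
    git add solver.c
    git commit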

    -

    When learning to work with this mechanism, it is very instructive to use a GUI that depicts all commits and branches in a graphical form, e.g., the program SourceTree mentioned before. -

    -

    Synchronising with a remote repository

    -

    If you want to collaborate with other people on a project, you need multiple repositories. Each person has his or her own local repository on his or her computer. The workflow is simplest if you also have a repository that is used to collect all contributions. The collaboration mechanism through synchronisation of repositories relies very much on the branching mechanism to resolve conflicts when several contributors have made modifications to the repository. -

    -
      -
    • - To push modifications that you have made in your local repository to a different repository, use - git push -u remote_name - where you replace remote_name with the shorthand for the remote repository. - This process may fail however if someone else has made modifications to the same branch in the repository that you're pushing to. Git will then warn you and ask you to first fetch the modifications that others have made and merge them into your code before trying another push. -
    • -
    • - The opposite of push is fetch and merge, or pull. You'll need to do this to see and integrate modifications that others have made to the repository. The first step is to update your repository with the contents of the remote repository. Assume the remote repository has the shorthand name origin. - git fetch origin - will get all the information from the repository origin into your local repository, but it will not change your work files. If you try - git branch -av - to get an overview of all branches in your local repository and information about the latest commit for each branch, you'll see that there might be a number of branches with a name that starts with origin/ in the repository. That means that there were commits in the remote repository that were newer than the data you last synchronised with, and you'll need to merge them into your working code base. E.g., if you're working on the branch master and someone else has made changes to that branch also, there will now be a branch origin/master in your repository with a more recent commit. You merge it again into your code with - git merge origin/master - (and there may be some conflicts here which you'd have to resolve and commit as before). -
    • -
    • - After a git fetch you may also notice that someone else has added a new branch. Assume, e.g., that git branch -av tells you there is now a branch origin/branch3 and that you want to contribute to that branch as well. Before you can do so, you'll first have to create a local so-called tracking branch, by using - git checkout -b branch3 origin/branch3 - which will also switch to that branch and update the files in your workspace accordingly, or, if you just want to create the tracking branch for later use without switching to it now, - git branch branch3 origin/branch3 -
    • -
    -

    Further information

    -

    We have only covered the bare essentials of git (and even less than that). Due to its power, it is also a fairly complicated system to use. If you want to know more about git or need a more complete tutorial, we suggest you check out the following links: -

    -" -247,"","

    Preparation

    The Subversion software is installed on the cluster. On most systems it is available by default and does not need a module (try which svn and which svnadmin to check whether the system can find the subversion commands). On some systems you may have to load the appropriate module, i.e., -

    $ module load subversion
    -

    When you are frequently using Subversion, it may be convenient to load this module from your '.bashrc' file. (Note that in general we strongly caution against loading modules from '.bashrc', so this is an exception.) -

    Since some Subversion operations require editing, it may be convenient to define a default editor in your '.bashrc' file. This can be done by setting the 'EDITOR' variable to the path of your favorite editor, e.g., emacs. When this line is added to your '.bashrc' file, Subversion will automatically launch this editor whenever it is required. -

    export EDITOR=/usr/bin/emacs
    -

    Of course, any editor you are familiar with will do. -

    Creating a repository

    To create a Subversion repository on a VSC cluster, the user first has to decide on its location. We suggest using the data directory since -

      -
    1. its default quota is quite sufficient;
    2. if the repository is to be shared, the permissions on the user's home directory need not be modified, hence decreasing potential security risks; and
    3. only for users of the K.U.Leuven cluster, the data directory is backed up (so is the user's home directory, incidentally).

    Actually creating a repository is very simple: -

    1. Log in on the login node.
    2. Change to the data directory using
       $ cd $VSC_DATA
    3. Create the repository using
       $ svnadmin create svn-repo

    Note that a directory with the name 'svn-repo' will be created in your '$VSC_DATA' directory. You can choose any name you want for this directory. Do not modify the contents of this directory since this will corrupt your repository unless you know quite well what you are doing. -

    At this point, it may be a good idea to read the section in the Subversion book on the repository layout. In this How-To, we will assume that each project has its own directory at the root level of the repository, and that each project will have a 'trunk', 'branches' and 'tags' directory. This is recommended practice, but you may wish to take a different approach. -

    To make life easier, it is convenient to define an environment variable that contains the URI to the repository you just created. If you work with a single repository, you may consider adding this to your '.bashrc' file. -

    export SVN=\"svn+ssh://vsc98765@vsc.login.node DATA/svn-repo\"
    -

    Here you would replace 'vsc98765' by your own VSC user ID, 'vsc.login.node' by the login node of your VSC cluster, and finally, 'DATA' by the value of your '$VSC_DATA' variable. -
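
    As a purely hypothetical example for a user vsc30140 whose '$VSC_DATA' is '/data/leuven/301/vsc30140' (the actual login node name and data path depend on your site), the line would read:

    export SVN="svn+ssh://vsc30140@vsc.login.node/data/leuven/301/vsc30140/svn-repo"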

    Putting a project under Subversion control

    Here, we assume that you already have a directory that contains an initial version of the source code for your project. If not, create one, and populate it with some relevant files. For the purpose of this How-To, the directory currently containing the source code will be called '$VSC_DATA/simulation', and it will contain two source files, 'simulation.c' and 'simulation.h', as well as a make file 'Makefile'. -

    Preparing the repository

    Since we follow the Subversion community's recommended practice, we start by creating the appropriate directories in the repository to house our project. -

    $ svn mkdir -m 'simulation: creating dirs' --parents \
                $SVN/simulation/trunk    \
                $SVN/simulation/branches \
                $SVN/simulation/tags
    -

    The repository is now prepared so that the actual code can be imported. -

    Importing your source code

    As mentioned, the source code for your project exists in the directory '$VSC_DATA/simulation'. Since the semantics of the 'trunk' directory of a project is that this is the location where the bulk of the development work is done, we will import the project into the trunk. -

    1. First, prepare the source directory '$VSC_DATA/simulation' by deleting all files that you don't want to place under version control. Remove artefacts such as, e.g., object files or executables, as well as text files not to be imported into the repository.
    2. Now the directory can be imported by simply typing:
       $ svn import -m 'simulation: import' \
                    $VSC_DATA/simulation   \
                    $SVN/simulation/trunk

    The project is now almost ready for development under version control. -

    Checking out

    Although the source directory has been imported into the subversion repository, this directory is not under version control. We first have to check out a working copy of the directory. -

    Since you are not yet familiar with subversion and may have made a mistake along the way, it may be a good idea at this point to make a backup of the original directory first, by, e.g., -

    $ tar czf $VSC_DATA/simulation.tar.gz $VSC_DATA/simulation
    -

    Now it is safe to checkout the project from the repository using: -

    $ svn checkout $SVN/simulation/trunk $VSC_DATA/simulation
    -

    Note that the existing files in the '$VSC_DATA/simulation' directory have been replaced by those downloaded from the repository, and that a new directory '$VSC_DATA/simulation/.svn' has been created. It is the latter that contains the information needed for version control operations. -

    Subversion work cycle

    The basic work cycle for development on your project is fairly straightforward. -

    1. Change to the directory containing your project's working copy, e.g.,
       $ cd $VSC_DATA/simulation
    2. Update your working copy to the latest version, see the section on updating below for a brief introduction to the topic.
       $ svn update
    3. Edit the project's files to your heart's content, or add new files to the repository after you created them, e.g., 'utils.c' and 'utils.h'. Note that the new files will only be stored in the repository upon the next commit operation, see below.
       $ svn add utils.c utils.h
    4. Examine your changes, this will be elaborated upon in the next section.
       $ svn status
    5. Commit your changes, i.e., all changes you made to the working copy are now transferred to the repository as a new revision.
       $ svn commit -m 'simulation: implemented a very interesting feature'
    6. Repeat steps 2 to 5 until you are done.

    If you are the sole developer working on this project and exclusively on the VSC cluster, you need not update since your working copy will be the latest anyway. However, an update is vital when others can commit changes, or when you work in various locations such as your desktop or laptop. -

    Other subversion features

    It would be beyond the scope of this How-To to attempt to stray too far from the mechanics of the basic work cycle. However, a few features will be highlighted since they may prove useful. -

    A central concept to almost all version control systems is that of a version number. In Subversion, all operations that modify the current version in the repository will result in an automatic increment of the revision number. In the example above, the 'mkdir' would result in revision 1, the 'import' in revision 2, and each consecutive 'commit' will further increment the revision number. -
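
    You can check which revision your working copy is based on at any time with svn info; its output contains, among other things, a 'Revision:' line.

    $ svn info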

    Reverting to a previous version

    The most important feature of any version control system is that it is possible to revert to some revision if necessary. Suppose you want to revert to the state of the original import, then this can be accomplished as follows: -

    $ svn checkout -r 2 $SVN/simulation/trunk simulation-old
    -

    Finding changes between revisions

    Finding changes between revisions, or between a certain revision and the current state of the working copy is also fairly easy: -

    $ svn diff -r HEAD simulation.c
    -

    Examining history

    A message can be added to many Subversion operations, e.g., 'mkdir' and 'commit' (the '-m <string>' in the commands of the previous section), and it will be associated with the resulting revision number. When used consistently, these comments can be very useful since they can be reviewed later whenever one has to examine changes made to the project. If a repository hosts multiple projects, it is wise to have some sort of convention, e.g., to start the comments on a project with its name as a tag. Note that this convention was followed in the examples above. One can for instance show all messages associated with changes to the file 'simulation.c' using: -

    $ svn log simulation.c
    -

    Deleting and renaming

    When a file is no longer needed, it can be removed from the current version in the repository, as well as from the working copy. -

    $ svn rm Makefile
    -

    The previous command would remove the file 'Makefile' from the working directory, and tag it for deletion from the current revision upon the next commit operation. Note that the file is not removed from the repository, it is still part of older revisions. -

    Similarly, a file may have to be renamed, an operation that is also directly supported by Subversion. -

    $ svn mv utils.c util.c
    -

    Again, the change will only be propagated to the repository upon the next commit operation. -

    Examining status

    While development progresses, the working copy differs more and more from the latest revision in the repository, i.e., HEAD. To get an overview of files that were modified, added, deleted, etc., one can examine the status. -

    $ svn status
    -

    This results in a list of files and directories, each preceded by a character: -

      -
    • M: file is modified
    • -
    • A: file has been added
    • -
    • D: file has been deleted
    • -
    • ?: file is not (yet) under version control (it should be added if it needs to be)
    • -

    When nothing has been modified since the last commit, this command shows no output. -
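
    For example, after modifying 'simulation.c', adding 'utils.c' and creating a file 'notes.txt' that is not under version control, the output might look like this (hypothetical example):

    $ svn status
    M       simulation.c
    A       utils.c
    ?       notes.txt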

    Updating the working copy

    When the latest revision in the repository has changed with respect to the working copy, an update of the latter should be done before continuing the development. -

    $ svn update
    -

    This may be painless, or require some work. Subversion will try to reconcile the revision in the repository with your working copy. When changes can safely be applied, subversion does so automatically. The output of the 'update' command is a list of files, preceded by characters denoting status information: -

      -
    • A: file was not in the working copy, and has now been checked out.
    • -
    • U: file was modified in the repository, but not in the working copy, the latter has been modified to reflect the changes.
    • -
    • G: file was modified in both the working copy and the repository, the changes have been merged automatically.
    • -

    In case of conflict, e.g., the same line of a file was changed in both the repository and the working copy, Subversion will offer a number of options to resolve the conflict. -

    Conflict discovered in 'simulation.c'.
    -Select: (p) postpone, (df) diff-full, (e) edit,
    - (mc) mine-conflict, (tc) theirs-conflict,
    - (s) show all options:
    -

    The safest option is to choose to edit the file, i.e., type 'e'. The file will be opened in an editor with the conflicts clearly marked. An example is shown below: -

    <<<<<<< .mine
        printf("bye world simulation!\n");
    =======
        printf("hello nice world simulation\n");
    >>>>>>> .r7
    -

    Here '.mine' indicates the state in your working copy, '.r7' that of revision 7 (i.e., HEAD) in the repository. You can now resolve the conflicts manually by editing the file. Upon saving the changes and quitting the editor, the option 'resolved' will be added to the list above. Enter 'r' to indicate that the conflict has indeed been resolved successfully. -

    Tagging

    Some revisions are more important than others. For example, the version that was used to generate the data you used in the article that was submitted to Nature is fairly important. You will probably continue to work on the code, adding several revisions while the referees do their job. In their report, they may require some additional data, and you will have to run the program as it was at the time of submission, so you want to retrieve that version from the repository. Unfortunately, revision numbers have no semantics, so it will be fairly hard to find exactly the right version. -

    Important revisions may be tagged explicitly in Subversion, so choosing an appropriate tag name adds semantics to a revision. Tagging is essentially copying to the tags directory that was created upon setting up the repository for the project. -

    $ svn copy --parents -m 'simulation: tagging Nature submission' \
               $SVN/simulation/trunk           \
               $SVN/simulation/tags/nature-submission
    -

    It is now trivial to check out the version that was used to compute the relevant data: -

    $ svn checkout $SVN/simulation/tags/nature-submission \
                   simulation-nature
    -

    Desktop access

    It is also possible to access VSC subversion repositories from your desktop. See the pages in the Windows client, OS X client and Linux client sections. -

    Further information on Subversion

    Subversion is a rather sophisticated version control system, and in this mini-tutorial for the impatient we have barely scratched the surface. Further information is available in an online book on Subversion, a must read for everyone involved in a non-trivial software development project that uses Subversion. -

    Subversion can also provide help on commands: -

    $ svn help
    -$ svn help commit
    -

    The former lists all available subversion commands, the latter form displays help specific to the command, 'commit' in this example. -

    " -249,"","

    Purpose

    Debugging MPI applications is notoriously hard. The Intel Trace Analyzer & Collector (ITAC) can be used to generate a trace while running an application, and visualizing it later for analysis. -

    Prerequisites

    You will need an MPI program (C/C++ or Fortran) to instrument and run. -

    Step by step

    The following steps are the easiest way to use the Intel Trace Analyzer; more sophisticated options are available. -

      -
    1. - Load the relevant modules. The exact modules may differ from system to system, but will typically include the itac module and a compatible Intel toolchain, e.g., -
      $ module load intel/2015a
      -$ module load itac/9.0.2.045
      -	
      -
    2. -
    3. - Compile your application so that it can generate a trace: -
      $ mpiicc -trace myapp.c -o myapp
      -	
      - where myapp.c is your C/C++ source code. For a Fortran program, this would be: -
      $ mpiifort -trace myapp.f -o myapp
      -	
      -
    4. -
    5. - Run your application using a PBS script such as this one: -
      #!/bin/bash -l
      -#PBS -N myapp-job
      -#PBS -l walltime=00:05:00
      -#PBS -l nodes=4
      -
      -module load intel/2015a
      -module load itac/9.0.2.045
      -# Set environment variables for ITAC.
      -# Unfortunately, the name of the script differs between versions of ITAC
      -source $EBROOTITAC/bin/itacvars.sh
      -
      -cd $PBS_O_WORKDIR
      -
      -mpirun -trace myapp
      -	
      -
    6. -
    7. - When the job is finished, check whether files with names myapp.stf.* have been generated. If so, start the visual analyzer using: -
      $ traceanalyzer myapp.stf
      -	
      -
    8. -

    Further information

    Intel provides product documentation for ITAC. -

    " -251,"","

    Introduction & motivation

    When working on the command line such as in the Bash shell, applications support command line flags and parameters. Many programming languages offer support to conveniently deal with command line arguments out of the box, e.g., Python. However, quite a number of languages used in a scientific context, e.g., C/C++, Fortran, R, Matlab do not. Although those languages offer the necessary facilities, it is at best somewhat cumbersome to use them, and often the process is rather error prone. -

    Quite a number of libraries have been developed over the years that can be used to conveniently handle command line arguments. However, this complicates the deployment of the application since it will have to rely on the presence of these libraries. -

    ParameterWeaver has a different approach: it generates the necessary code to deal with the command line arguments of the application in the target language, so that these source files can be distributed along with those of the application. This implies that systems that don't have ParameterWeaver installed can still run that application. -

    Using ParameterWeaver is as simple as writing a definition file for the command line arguments, and executing the code generator via the command line. This can be conveniently integrated into a standard build process such as make. -

    ParameterWeaver currently supports the following target languages: -

      -
    • C/C++
    • -
    • Fortran 90
    • -
    • R
    • -

    High-level overview & concepts

    Parameter definition files

    A parameter definition file is a CSV text file where each line defines a parameter. A parameter has a type, a name, a default value, and optionally, a description. To add documentation, comments can be added to the definition file. The types are specific to the target language, e.g., an integer would be denoted by int for C/C++, and by integer for Fortran 90. The supported types are documented for each implemented target language. -

    By way of illustration, a parameter definition file is given below for C as a target language, additional examples are shown in the target language specific sections: -

    int,numParticles,1000,number of particles in the system
    -double,temperature,273,system temperature in Kelvin
    -char*,intMethod,'newton',integration method to use
    -

    Note that this parameter definition file should be viewed as an integral part of the source code. -

    Code generation

    ParameterWeaver will generate code to -

      -
    1. initialize the parameter variables to the default values as specified in the parameter definition file;
    2. -
    3. parse the actual command line arguments at runtime to determine the user specified values, and
    4. -
    5. print the values of the parameters to an output stream.
    6. -

    The implementation and features of the resulting code fragments are specific to the target language, and try to be as close as possible to the idioms of that language. Again, this is documented for each target language specifically. The nature and number of these code fragments varies from one target language to the other, again trying to match the language's idioms as closely as possible. For C/C++, a declaration file (.h) and a definition file (.c) will be generated, while for Fortran 90 a single file (.f90) is generated that contains both declarations and definitions. -

    Language specific documentation

    C/C++ documentation

    Data types

    For C/C++, ParameterWeaver supports the following data types: -

      -
    1. int
    2. -
    3. long
    4. -
    5. float
    6. -
    7. double
    8. -
    9. bool
    10. -
    11. char *
    12. -

    Example C program

    Suppose we want to pass command line parameters to the following C program: -

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    int main(int argc, char *argv[]) {
        FILE *fp;
        int i;
        if (strlen(out) > 0) {
            fp = fopen(out, "w");
        } else {
            fp = stdout;
        }
        if (verbose) {
            fprintf(fp, "# n = %d\n", n);
            fprintf(fp, "# alpha = %.16f\n", alpha);
            fprintf(fp, "# out = '%s'\n", out);
            fprintf(fp, "# verbose = %d\n", verbose);
        }
        for (i = 0; i < n; i++) {
            fprintf(fp, "%d\t%f\n", i, i*alpha);
        }
        if (fp != stdout) {
            fclose(fp);
        }
        return EXIT_SUCCESS;
    }
    -

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this program. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well. -

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as int, double, char pointer and bool respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    int n   10
    -double  alpha   0.19
    -char *  out 'output.txt'
    -bool    verbose false
    -

    This parameter definition file can be created in a text editor such as the one used to write the C program, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, boolean values are also supported, however, the semantics is slightly different from that of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true. Also note that using a parameter of type bool implies that the program will have to be compiled as C99, rather than C89. All modern compilers fully support C99, so that should not be an issue. However, if your program needs to adhere strictly to the C89 standard, simply use a parameter of type int instead, with 0 interpreted as false and all other values as true. In that case, the option takes a value on the command line. -

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l C -d param_defs.txt
    -

    A number of type declarations and functions are generated, the declarations in the header file cl_params.h, the definitions in the source file cl_params.c. -

      -
    1. data structure: a type Params is defined as a typedef of a struct with the parameters as fields, e.g., -
      typedef struct {
      -    int n;
      -    double alpha;
      -    char *out;
      -    bool verbose;
      -} Params;
      -    
      -
    2. -
    3. Initialization function: the default values of the command line parameters are assigned to the fields of the Params variable, the address of which is passed to the function
    4. -
    5. Parsing: the options passed to the program via the command line are assigned to the appropriate fields of the Params variable. Moreover, the argv array will contain the remaining command line arguments, and the argc variable is set appropriately.
    6. -
    7. Dumper: a function is defined that takes three arguments: a file pointer, a prefix and the address of a Params variable. This function writes the values of the command line parameters to the file pointer, each on a separate line, preceded by the specified prefix.
    8. -
    9. Finalizer: a function that deallocates memory allocated in the initialization or the parsing functions to avoid memory leaks.
    10. -

    Using the code fragments

    The declarations are simply included using preprocessor directives: -

      #include \"cl_params.h\"
    -

    A variable to hold the parameters has to be defined and its values initialized: -

      Params params;
    -  initCL(&params);
    -

    Next, the command line parameters are parsed and their values assigned: -

      parseCL(&params, &argc, &argv);
    -

    The dumper can be called whenever the user likes, e.g., -

      dumpCL(stdout, "", &params);
    -

    The code for the program is thus modified as follows: -

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "cl_params.h"
    int main(int argc, char *argv[]) {
        FILE *fp;
        int i;
        Params params;
        initCL(&params);
        parseCL(&params, &argc, &argv);
        if (strlen(params.out) > 0) {
            fp = fopen(params.out, "w");
        } else {
            fp = stdout;
        }
        if (params.verbose) {
            dumpCL(fp, "# ", &params);
        }
        for (i = 0; i < params.n; i++) {
            fprintf(fp, "%d\t%f\n", i, i*params.alpha);
        }
        if (fp != stdout) {
            fclose(fp);
        }
        finalizeCL(&params);
        return EXIT_SUCCESS;
    }
    -

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available in the array argv: argv[0] will hold the program's name, and the subsequent elements up to argc - 1 contain the remaining command line parameters. -
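
    By way of illustration, the resulting program could then be run as shown below; note that the program name and the exact option syntax accepted by the generated parser are assumptions here, so check the generated cl_params.c or the ParameterWeaver documentation for the details.

    $ ./my_program -n 100 -alpha 0.5 -out results.txt -verbose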

    Fortran 90 documentation

    Data types

    For Fortran 90, ParameterWeaver supports the following data types: -

      -
    1. integer
    2. -
    3. real
    4. -
    5. double precision
    6. -
    7. logical
    8. -
    9. character(len=1024)
    10. -

    Example Fortran 90 program

    Suppose we want to pass command line parameters to the following Fortran program: -

    program main
    use iso_fortran_env
    implicit none
    integer :: unit_nr = 8, i, istat
    if (len(trim(out)) > 0) then
        open(unit=unit_nr, file=trim(out), action="write")
    else
        unit_nr = output_unit
    end if
    if (verbose) then
        write (unit_nr, "(A, I20)") "# n = ", n
        write (unit_nr, "(A, F24.15)") "# alpha = ", alpha
        write (unit_nr, "(A, '''', A, '''')") "# out = ", out
        write (unit_nr, "(A, L)") "# verbose = ", verbose
    end if
    do i = 1, n
        write (unit_nr, "(I3, F5.2)") i, i*alpha
    end do
    if (unit_nr /= output_unit) then
        close(unit=unit_nr)
    end if
    stop
    end program main
    -

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this program. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well. -

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as integer, double precision, character(len=1024) and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    integer n   10
    -double precision    alpha   0.19
    -character(len=1024) out 'output.txt'
    -logical verbose false
    -

    This parameter definition file can be created in a text editor such as the one used to write the Fortran program, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported, however, the semantics is slightly different from that of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true. -

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l Fortran -d param_defs.txt
    -

    A number of type declarations and functions are generated in the module file cl_params.f90. -

      -
    1. data structure: a type params_type is defined as a structure with the parameters as fields, e.g., -
          type :: params_type
      -        integer :: n
      -        double precision :: alpha
      -        character(len=1024) :: out
      -        logical :: verbose
      -    end type params_type
      -    
      -
    2. -
    3. Initialization function: the default values of the command line parameters are assigned to the fields of the params_type variable
    4. -
    5. Parsing: the options passed to the program via the command line are assigned to the appropriate fields of the params_type variable. Moreover, the next variable of type integer will hold the index of the next command line parameter, i.e., the first of the remaining command line parameters that was not handled by the parsing function.
    6. -
    7. Dumper: a function is defined that takes three arguments: a unit number for output, a prefix and the params_type variable. This function writes the values of the command line parameters to the output stream associated with the unit number, each on a separate line, preceded by the specified prefix.
    8. -

    Using the code fragments

    The module file is included by the use directive: -

      use cl_params
    -

    A variable to hold the parameters has to be defined and its values initialized: -

      type(params_type) :: params
    -  call init_cl(params)
    -

    Next, the command line parameters are parsed and their values assigned: -

        integer :: next
    -    call parse_cl(params, next)
    -

    The dumper can be called whenever the user likes, e.g., -

      call dump_cl(output_unit, "", params)
    -

    The code for the program is thus modified as follows: -

    program main
    use cl_params
    use iso_fortran_env
    implicit none
    type(params_type) :: params
    integer :: unit_nr = 8, i, istat, next
    call init_cl(params)
    call parse_cl(params, next)
    if (len(trim(params % out)) > 0) then
        open(unit=unit_nr, file=trim(params % out), action="write")
    else
        unit_nr = output_unit
    end if
    if (params % verbose) then
        call dump_cl(unit_nr, "# ", params)
    end if
    do i = 1, params % n
        write (unit_nr, "(I3, F5.2)") i, i*params % alpha
    end do
    if (unit_nr /= output_unit) then
        close(unit=unit_nr)
    end if
    stop
    end program main
    -

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available using the standard get_command_argument function, starting from the value of the variable next set by the call to parse_cl. -

    R documentation

    Data types

    For R, ParameterWeaver supports the following data types: -

      -
    1. integer
    2. -
    3. double
    4. -
    5. logical
    6. -
    7. string
    8. -

    Example R script

    Suppose we want to pass command line parameters to the following R script: -

    if (nchar(out) > 0) {
        conn <- file(out, 'w')
    } else {
        conn = stdout()
    }
    if (verbose) {
        write(sprintf("# n = %d\n", n), conn)
        write(sprintf("# alpha = %.16f\n", alpha), conn)
        write(sprintf("# out = '%s'\n", out), conn)
        write(sprintf("# verbose = %s\n", verbose), conn)
    }
    for (i in 1:n) {
        write(sprintf("%d\t%f\n", i, i*alpha), conn)
    }
    if (conn != stdout()) {
        close(conn)
    }
    -

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this script. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well. -

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as integer, double, string and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted, just as it would be in R code. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    integer n   10
    -double  alpha   0.19
    -string  out 'output.txt'
    -logical verbose F
    -

    This parameter definition file can be created in a text editor such as the one used to write R scripts, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported, however, the semantics is slightly different from that of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true. -

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l R -d param_defs.txt
    -

    Three code fragments are generated, all grouped in a single R file cl_params.r. -

      -
    1. Initialization: the default values of the command line parameters are assigned to global variables with the names as specified in the parameter definition file.
    2. -
    3. Parsing: the options passed to the program via the command line are assigned to the appropriate variables. Moreover, an array containing the remaining command line arguments is created as cl_params.
    4. -
    5. Dumper: a function is defined that takes two arguments: a file connector and a prefix. This function writes the values of the command line parameters to the file connector, each on a separate line, preceded by the specified prefix.
    6. -

    Using the code fragments

    The code fragments can be included into the R script by sourcing it: -

      source(\"cl_parser.r\")
    -

    The parameter initialization and parsing are executed at this point, the dumper can be called whenever the user likes, e.g., -

      dump_cl(stdout(), "")
    -

    The code for the script is thus modified as follows: -

    source('cl_params.r')
    if (nchar(out) > 0) {
        conn <- file(out, 'w')
    } else {
        conn = stdout()
    }
    if (verbose) {
        dump_cl(conn, "# ")
    }
    for (i in 1:n) {
        cat(paste(i, "\t", i*alpha), file = conn, sep = "\n")
    }
    if (conn != stdout()) {
        close(conn)
    }
    -

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they are available in the vector cl_params if needed. -

    Octave documentation

    Data types

    For Octave, ParameterWeaver supports the following data types: -

      -
    1. double
    2. -
    3. logical
    4. -
    5. string
    6. -

    Example Octave script

    Suppose we want to pass command line parameters to the following Octave script: -

    if (size(out) > 0)
        fid = fopen(out, "w");
    else
        fid = stdout;
    end
    if (verbose)
        fprintf(fid, "# n = %.16f\n", n);
        fprintf(fid, "# alpha = %.16f\n", alpha);
        fprintf(fid, "# out = '%s'\n", out);
        fprintf(fid, "# verbose = %1d\n", verbose);
    end
    for i = 1:n
        fprintf(fid, "%d\t%f\n", i, i*alpha);
    end
    if (fid != stdout)
        fclose(fid);
    end
    -

    We would like to set the number of iterations n, the factor alpha, the name of the file to write the output to out and the verbosity verbose at runtime, i.e., without modifying the source code of this script. -

    Moreover, the code to print the values of the variables is error prone: if we later add or remove a parameter, this part of the code has to be updated as well. -

    Defining the command line parameters in a parameter definition file to automatically generate the necessary code simplifies matters considerably. -

    Example parameter definition file

    The following file defines four command line parameters named n, alpha, out and verbose. They are to be interpreted as double, double, string and logical respectively, and if no values are passed via the command line, they will have the default values 10, 0.19, output.txt and false respectively. Note that a string default value is quoted, just as it would be in Octave code. In this case, the columns in the file are separated by tab characters. The following is the contents of the parameter definition file param_defs.txt: -

    double  n   10
    -double  alpha   0.19
    -string  out 'output.txt'
    -logical verbose F
    -

    This parameter definition file can be created in a text editor such as the one used to write Octave scripts, or from a Microsoft Excel worksheet by saving the latter as a CSV file. -

    As mentioned above, logical values are also supported, however, the semantics is slightly different from that of other data types. The default value of a logical variable is always false, regardless of what is specified in the parameter definition file. As opposed to parameters of other types, a logical parameter acts like a flag, i.e., it is a command line option that doesn't take a value. Its absence is interpreted as false, its presence as true. -

    Generating code

    Generating the code fragments is now very easy. If appropriate, load the module (VIC3): -

    $ module load parameter-weaver
    -

    Next, we generate the code based on the parameter definition file: -

    $ weave -l octave -d param_defs.txt
    -

    Three code fragments are generated, each in its own file, i.e., init_cl.m, parse_cl.m, and dump_cl.m. -

      -
    1. Initialization: the default values of the command line parameters are assigned to global variables with the names as specified in the parameter definition file.
    2. -
    3. Parsing: the options passed to the program via the command line are assigned to the appropriate variables. Moreover, an array containing the remaining command line arguments is returned as the second value from parse_cl.
    4. -
    5. Dumper: a function is defined that takes two arguments: a file connector and a prefix. This function writes the values of the command line parameters to the file connector, each on a separate line, preceded by the specified prefix.
    6. -

    Using the code fragments

    The generated functions can be used by simply calling them from the main script. The code for the script is thus modified as follows: -

    params = init_cl();
    params = parse_cl(params);
    if (size(params.out) > 0)
        fid = fopen(params.out, "w");
    else
        fid = stdout;
    end
    if (params.verbose)
        dump_cl(fid, "# ", params);
    end
    for i = 1:params.n
        fprintf(fid, "%d\t%f\n", i, i*params.alpha);
    end
    if (fid != stdout)
        fclose(fid);
    end
    -

    Note that in this example, additional command line parameters are simply ignored. As mentioned before, they can be obtained as the second return value from the call to parse_cl. -

    Future work

    The following features are planned in future releases: -

      -
    • Additional target languages: -
        -
      • Matlab
      • -
      • Java
      • -
      - Support for Perl and Python is not planned, since these languages have facilities to deal with command line arguments in their respective standard libraries.
    • -
    • Configuration files are an alternative way to specify parameters for an application, so ParameterWeaver will also support this in a future release.
    • -

    Contact & support

    Bug reports and feature requests can be sent to Geert Jan Bex. -

    " -253,"","

    Scope

    -

    On modern CPUs the actual performance of a program depends very much on making optimal use of the caches. -

    -

    Many standard mathematical algorithms have been coded in standard libraries, and several vendors and research groups build optimised versions of those libraries for certain computers. They are key to extracting optimal performance from modern processors. Don't think you can write a better dense matrix-matrix multiplication routine or dense matrix solver than the specialists (unless you're a real specialist yourself)! -

    -

    Many codes use dense linear algebra routines. Hence it is no surprise that in this field, collaboration led to the definition of a lot of standard functions and many groups worked hard to build optimal implementations: -

    -
      -
    • BLAS (Basic Linear Algebra Subroutines) is a library of vector-vector, matrix-vector and matrix-matrix operations.
    • -
    • Lapack, a library of dense and banded matrix linear algebra routines such as solving linear systems and the eigenvalue- and singular value decomposition. Lapack95 defines Fortran95 interfaces for all routines.
    • -
    • ScaLapack is a distributed memory parallel library offering some functionality similar to Lapack.
    • -
    -

    Standard Fortran implementations do exist, so you can always recompile code using these libraries on systems on which the libraries are not available. -

    -

    Blas and Lapack at the VSC

    -

    We provide BLAS and LAPACK routines through the toolchains. Hence the instructions for linking with the libraries are given on the toolchains page. -

    -
      -
    • The intel toolchain provides the BLAS, LAPACK and ScaLAPACK interfaces through the Intel Math Kernel Library (MKL)
    • -
    • The foss toolchain provides open source implementations: -
        -
      • The OpenBLAS BLAS library
      • -
      • The standard LAPACK implementation
      • -
      • The standard ScaLAPACK implementation
      • -
      -
    • -
    -
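
    As a minimal sketch (the module version and the exact library flags are assumptions, check the toolchains page for the recommended link lines), compiling a C program against the open source stack of the foss toolchain could look like:

    $ module load foss/2018a    # pick a foss version that is available on your cluster
    $ gcc -o solver solver.c -lopenblas

    Here OpenBLAS also provides the LAPACK routines; with the intel toolchain you would load the intel module and link against MKL instead.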

    Links

    -" -255,"","

    Introduction

    (Note: the Perl community uses the term 'modules' rather than 'packages', however, in the documentation, we use the term 'packages' to try and avoid confusion with the module system for loading software.) -

    Perl comes with an extensive standard library, and you are strongly encouraged to use those packages as much as possible, since this will ensure that your code can be run on any platform that supports Perl. -

    However, many useful extensions to and libraries for Perl come in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure. -

    Given the astounding number of packages, it is not sustainable to install each and every one of them system wide. Since it is very easy for a user to install them just for himself or for his research group, that is not a problem though. Do not hesitate to contact support whenever you encounter trouble doing so. -

    Checking for installed packages

    To check which Perl packages are installed, the cpan utility is useful. It will list all packages that are installed for the Perl distribution you are using, including those installed by you, i.e., those in your PERL5LIB environment variable. -

      -
    1. Load the module for the Perl version you wish to use, e.g.,:
      - $ module load Perl/5.18.2-foss-2014a-bare
    2. -
    3. Run cpan:
      - $ cpan -l
    4. -

    Installing your own packages

    Setting up your own package repository for Perl is straightforward. For this purpose, the cpan utility first needs to be configured. Replace the path /user/leuven/301/vsc30140 by the one to your own home directory. -

      -
    1. Load the appropriate Perl module, e.g.,
      - $ module load Perl/5.18.2-foss-2014a-bare
    2. -
    3. Create a directory to install in, i.e.,
      - $ mkdir /user/leuven/301/vsc30140/perl5
    4. -
    5. Run cpan:
      - $ cpan
    6. -
    7. Configure internet access and mirror sites:
      - cpan[1]> o conf init connect_to_internet_ok urllist
    8. -
    9. Set the install base, i.e., directory created above:
      - cpan[2]> o conf makepl_arg INSTALL_BASE=/user/leuven/301/vsc30140/perl5
    10. -
    11. Fix the preference directory path:
      - cpan[3]> o conf prefs_dir /user/leuven/301/vsc30140/.cpan/prefs
    12. -
    13. Commit changes so that they are stored in ~/.cpan/CPAN/MyConfig.pm, i.e.,
      - cpan[4]> o conf commit
    14. -
    15. Quit cpan:
      - cpan[5]> q
    16. -

    Now Perl packages can be installed easily, e.g., -

    $ cpan IO::Scalar
    -

    Note that this will install all dependencies as needed, though you may be prompted. -

    To effortlessly use locally installed packages, install the local::lib package first, and use the following code fragment in Perl scripts that depend on locally installed packages. -

    use local::lib;
    -
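
    Alternatively, you can point Perl to the install base from your '.bashrc'; with the INSTALL_BASE used above, the packages end up under its lib/perl5 subdirectory, so a line such as the following should work (adapt the path to your own setup):

    export PERL5LIB="/user/leuven/301/vsc30140/perl5/lib/perl5:${PERL5LIB}"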
    " -257,"","

    Introduction

    Python comes with an extensive standard library, and you are strongly encouraged to use those packages as much as possible, since this will ensure that your code can be run on any platform that supports Python. -

    However, many useful extensions to and libraries for Python come in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure, others have been made available through the module system and must be loaded explicitly. -

    Given the astounding number of packages, it is not sustainable to install each and every one of them system wide. Since it is very easy for a user to install them just for himself or for his research group, that is not a problem though. Do not hesitate to contact support whenever you encounter trouble doing so. -

    Checking for installed packages

    To check which Python packages are installed, the pip utility is useful. It will list all packages that are installed for the Python distribution you are using, including those installed by you, i.e., those in your PYTHONPATH environment variable. -

      -
    1. Load the module for the Python version you wish to use, e.g.,:
      - $ module load Python/2.7.6-foss-2014a
    2. -
    3. Run pip:
      - $ pip freeze
    4. -

    Note that some packages, e.g., mpi4py, h5py, pytables, ..., are available through the module system, and have to be loaded separately. These packages will not be listed by pip unless you have loaded the corresponding module. -

    Installing your own packages using conda

    The easiest way to install and manage your own Python environment is conda. -

    Installing Miniconda

    If you have Miniconda already installed, you can skip ahead to the next section. If Miniconda is not installed, we start with that. Download the Bash script that will install it from conda.io using, e.g., wget: -

    $ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    -

    Once downloaded, run the installation script: -

    $ bash Miniconda3-latest-Linux-x86_64.sh -b -p $VSC_DATA/miniconda3
    -

    Optionally, you can add the path to the Miniconda installation to the PATH environment variable in your .bashrc file. This is convenient, but may lead to conflicts when working with the module system, so make sure that you know what you are doing in either case. The line to add to your .bashrc file would be: -

    export PATH=\"${VSC_DATA}/miniconda3/bin:${PATH}\"
    -

    Creating an environment

    First, ensure that the Miniconda installation is in your PATH environment variable. The following command should return the full path to the conda command: -

    $ which conda
    -

    If the result is blank, or reports that conda cannot be found, modify the PATH environment variable appropriately by adding Miniconda's bin directory to PATH. -

    At this point, you may wish to load a module for a recent compiler (GCC is likely giving the least problems). Note that this module should also be loaded when using the environment you are about to create. -

    Creating a new conda environment is straightforward: -

    $ conda create  -n science  numpy scipy matplotlib
    -

    This command creates a new conda environment called science, and installs a number of Python packages that you will probably want to have handy in any case to preprocess, visualize, or postprocess your data. You can of course install more, depending on your requirements and personal taste. -

    This will default to the latest Python 3 version; if you need a specific version, e.g., Python 2.7.x, this can be specified as follows: -

    $ conda create -n science  python=2.7  numpy scipy matplotlib
    -

    Working with the environment

    To work with an environment, you have to activate it. This is done with, e.g., -

    $ source activate science
    -

    Here, science is the name of the environment you want to work in. -
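    To verify that the environment is indeed active, you can for instance list all environments (the active one is marked with an asterisk) and check which Python interpreter is picked up:

    $ conda env list
    $ which python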

    Install an additional package

    To install an additional package, e.g., `tensorflow-gpu`, first ensure that the environment you want to work in is activated. -

    $ source activate science
    -

    Next, install the package: -

    $ conda install tensorflow-gpu
    -

    Note that conda will take care of all dependencies, including non-Python libraries (e.g., cuDNN and CUDA for the example above). This ensures that you work in a consistent environment. -
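    To see which packages and versions conda actually installed in the active environment, you can list them, optionally filtering on a package name:

    $ conda list
    $ conda list tensorflow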

    Updating/removing

    Using conda, it is easy to keep your packages up-to-date. Updating a single package (and its dependencies) can be done using: -

    $ conda update pandas
    -

    Updating all packages in the environment is trivial: -

    $ conda update --all
    -

    Removing an installed package: -

    $ conda remove tensorflow-gpu
    -

    Deactivating an environment

    To deactivate a conda environment, i.e., return the shell to its original state, use the following command -

    $ source deactivate
    -

    More information

    Additional information about conda can be found on its documentation site. -

    Alternatives to conda -

    Setting up your own package directory for Python is straightforward. -

    1. Load the appropriate Python module, i.e., the one you want the Python package to be available for:
       $ module load Python/2.7.6-foss-2014a
    2. Create a directory to hold the packages you install; the last three directory names are mandatory:
       $ mkdir -p \"${VSC_HOME}/python_lib/lib/python2.7/site-packages/\"
    3. Add that directory to the PYTHONPATH environment variable for the current shell to do the installation:
       $ export PYTHONPATH=\"${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}\"
    4. Add the following to your .bashrc so that Python knows where to look next time you use it:
       export PYTHONPATH=\"${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}\"
    5. Install the package, using the prefix option to specify the install path (this would install the sphinx package):
       $ easy_install --prefix=\"${VSC_HOME}/python_lib\" sphinx

    If you prefer using pip, you can perform an install in your own directories as well by providing an install option; a shorter variant using pip's --prefix option is sketched after the list below. -

    1. Load the appropriate Python module, i.e., the one you want the Python package to be available for:
       $ module load Python/2.7.6-foss-2014a
    2. Create a directory to hold the packages you install; the last three directory names are mandatory:
       $ mkdir -p \"${VSC_HOME}/python_lib/lib/python2.7/site-packages/\"
    3. Add that directory to the PYTHONPATH environment variable for the current shell to do the installation:
       $ export PYTHONPATH=\"${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}\"
    4. Add the following to your .bashrc so that Python knows where to look next time you use it:
       export PYTHONPATH=\"${VSC_HOME}/python_lib/lib/python2.7/site-packages/:${PYTHONPATH}\"
    5. Install the package, using the prefix install option to specify the install path (this would install the sphinx package):
       $ pip install --install-option=\"--prefix=${VSC_HOME}/python_lib\" sphinx
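    With more recent pip versions, the same result can usually be obtained in one go with the --prefix option (a sketch, assuming the same directory layout as created above):

    $ pip install --prefix=\"${VSC_HOME}/python_lib\" sphinx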

    Installing Anaconda on NX node (KU Leuven ThinKing)

    1. Before installing, make sure that you do not have a .local/lib directory in your $VSC_HOME. In case it exists, please move it to some other location or a temporary archive, as it creates conflicts with Anaconda.
    2. Download the appropriate version (64-bit x86 installer) of Anaconda from https://www.anaconda.com/download/#linux
    3. Change the permissions of the file (if necessary): chmod u+x Anaconda3-5.0.1-Linux-x86_64.sh
    4. Execute the installer: ./Anaconda3-5.0.1-Linux-x86_64.sh
    5. Go to the directory where Anaconda is installed, e.g., cd anaconda3/bin/, and check for updates: conda update anaconda-navigator
    6. You can start the navigator from that directory with ./anaconda-navigator
    " -259,"","

    The basics of the job system

    Common problems

    Advanced topics

    • Credit system basics: credits are used on all clusters at the KU Leuven (including the Tier-1 system BrENIAC) to control your compute time allocation
    • Monitoring memory and CPU usage of programs, which helps to find the right parameters to improve your specification of the job requirements
    • Worker framework: To manage lots of small jobs on a cluster. The cluster scheduler isn't meant to deal with tons of small jobs. Those create a lot of overhead, so it is better to bundle those jobs in larger sets.
    • The checkpointing framework can be used to run programs that take longer than the maximum time allowed by the queue. It can break a long job in shorter jobs, saving the state at the end to automatically start the next job from the point where the previous job was interrupted.
    • Running jobs on GPU or Xeon Phi nodes: The procedure is not standardised across the VSC, so we refer to the pages for each cluster in the \"Available hardware\" section of this web site
    " -261,"","

    This page is outdated. Please check our updated \"Running jobs\" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search) you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be. -

    Purpose

    When you connect to a cluster of the VSC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, are not performed on this login node. The actual work is done on the cluster's compute nodes. These compute nodes are managed by the job scheduling software, which decides when and on which compute nodes the jobs are run. This how-to explains how to make use of the job system. -

    Defining and submitting your job

    Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program can be started without user intervention, i.e., without you having to enter any information or press any buttons. All necessary input or options have to be specified on the command line, or in input/config files. For the purpose of this how-to, we will assume you want to run a Matlab calculation that you have programmed in a file 'my_calc.m'. On the command line, you would run this using: -

    $ matlab -r my_calc
    -

    Next, you create a PBS script — a description of the job — and save it as, e.g., 'my_calc.pbs'. It contains: -

    #!/bin/bash -l
    -module load matlab
    -cd $PBS_O_WORKDIR
    -matlab -r my_calc
    -

    Important note: this PBS file has to be in UNIX format; if it is not, your job will fail and generate rather weird error messages. If necessary, you can convert it using -

    $ dos2unix my_calc.pbs
    -

    It is this PBS script that can now be submitted to the cluster's job system for execution, using the qsub command: -

    $ qsub my_calc.pbs
    -20030021.icts-p-svcs-1
    -

    The qsub command returns a job ID, i.e., a line similar to the one above, that can be used to further manage your job, if needed. The important part is the number, i.e., '20030021'. The latter is a unique identifier for the job, and it can be used to monitor and manage your job. -

    Note: if you want to use project credits to run a job, you should specify the project's name (e.g., 'lp_fluid_dynamics') using the following option: -

    $ qsub -A lp_fluid_dynamics my_calc.pbs
    -

    For more information on working with credits, see How to work with job credits. -

    Monitoring and managing your job(s)

    Using the job ID qsub returned, there are various ways to monitor the status of your job, e.g., -

    $ qstat <jobid>
    -

    get the status information on your job -

    $ showstart <jobid>
    -

    show an estimated start time for your job (note that this may be very inaccurate) -

    $ checkjob <jobid>
    -

    shows the status, but also the resources required by the job, along with error messages that may prevent your job from starting -

    $ qstat -n <jobid>
    -

    show on which compute nodes your job is running, at least when it is running -

    $ qdel <jobid>
    -

    removes a job from the queue so that it will not run, or stops a job that is already running. -

    When you have multiple jobs submitted (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and have not finished yet using -

    $ qstat -u <uid>
    -

    lists the status information of all your jobs, including their job IDs; here, uid is your VSC user name on the system. -

    Specifying job requirements

    Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs. -

    It is important to estimate the resources you need to successfully run your program, e.g., the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly. -

    For the simplest cases, only the amount of time is really important, and it does not harm too much if you slightly overestimate it. -

    The qsub command takes several options to specify the requirements: -

    -l walltime=2:30:00
    -

    the job will require 2 hours, 30 minutes to complete -

    -l mem=4gb
    -

    the job requires 4 Gb of memory -

    -l nodes=5:ppn=2
    -

    the job requires 5 compute nodes, and two CPUs (actually cores) on each (ppn stands for processors per node) -

    -l nodes=1:ivybridge
    -

    The job requires just one node, but it should have an Ivy Bridge processor. A list with site-specific properties can be found in the next section. -

    These options can either be specified on the command line, e.g., -

    $ qsub -l nodes=1:ivybridge,mem=16gb my_calc.pbs
    -

    or in the PBS script itself, so 'my_calc.pbs' would be modified to: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ivybridge
    -#PBS -l mem=4gb
    -module load matlab
    -cd $PBS_O_WORKDIR
    -matlab -r my_calc
    -

    Note that the resources requested on the command line will override those specified in the PBS file. -

    Available queues

    Apart from specifying the walltime, you can also explicitly define the queue you're submitting your job to. Queue names and/or properties might be different on different sites. To specify the queue, add: -

    -q queuename
    -

    where queuename is one of the possible queues shown below. A maximum walltime is associated with each queue. Jobs specifying a walltime which is larger than the maximal walltime of the requested queue, will not start. The number of jobs currently running in the queue is shown in the Run column, whereas the number of jobs waiting to get started, is shown in the Que column. -

    We strongly advise against the explicit use of queue names. In almost all cases it is much better to specify the resources you need with walltime etc. The system will then determine the optimal queue for your application. -

    KU Leuven

    $ qstat -q
    -server: icts-p-svcs-1
    -Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ----------------- ------ -------- -------- ----  --- --- --  -----
    -q24h               --      --    24:00:00   --   36  17 --   E R
    -qreg               --      --    30:00:00   --    0   0 --   D R
    -qlong              --      --    168:00:0   --    0   0 --   E S
    -q21d               --      --    504:00:0     5   6   5 --   E R
    -qicts              --      --       --      --    0   0 --   E R
    -q1h                --      --    01:00:00   --    0  22 --   E R
    -qdef               --      --       --      --    0  50 --   E R
    -q72h               --      --    72:00:00   --   12   1 --   E R
    -q7d                --      --    168:00:0    25  38   1 --   E R
    -                                               ----- -----
    -                                                  92    96
    -

    The queues q1h, q24h, q72h, q7d and q21d use the new queue naming scheme, while the other ones are still provided for compatibility with older job scripts. -

    Submit to a gpu-node:

    qsub  -l partition=gpu,nodes=1:M2070 <jobscript>
    -

    or -

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>
    -

    depending on which GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this: -

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -

    Submit to a debug node:

    For very short/small jobs (max 30 minutes, max 2 nodes) you could request (a) debug node(s). This could be useful if the cluster is very busy and you want to avoid a long queue time for a debug job. There is a limit on the number of jobs that a user can concurrently submit in this quality of service. -

    You can submit like this to a debug node (remember to request a walltime equal to or smaller than 30 minutes): -

    qsub -lqos=debugging,walltime=30:00 <jobscript>
    -

    UAntwerpen

    On hopper: -

    $ qstat -q
    -server: mn.hopper.antwerpen.vsc
    -Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ----------------- ------ -------- -------- ----  --- --- --  -----
    -q1h                --      --    01:00:00   --    0  24 --   E R
    -batch              --      --       --      --    0   0 --   E R
    -q72h               --      --    72:00:00   --   64   0 --   E R
    -q7d                --      --    168:00:0   --    9   0 --   E R
    -q24h               --      --    24:00:00   --   17   0 --   E R
    -                                               ----- -----
    -                                                  90    24
    -

    The maximum job (wall)time on hopper is 7 days (168 hours). -

    On turing: -

    $ qstat -q
    -server: master1.turing.antwerpen.vsc
    -Queue            Memory CPU Time Walltime Node  Run Que Lm  State
    ----------------- ------ -------- -------- ----  --- --- --  -----
    -qreg               --      --       --      --    0   0 --   E R
    -batch              --      --       --      --    0   0 --   E R
    -qshort             --      --       --      --    0   0 --   E R
    -qxlong             --      --       --      --    0   0 --   E R
    -qxxlong            --      --       --      --    0   0 --   E R
    -q21d               --      --    504:00:0   --    4   0 --   E R
    -q7d                --      --    168:00:0   --   20   0 --   E R
    -qlong              --      --       --      --    0   0 --   E R
    -q24h               --      --    24:00:00   --   22   2 --   E R
    -q72h               --      --    72:00:00   --   46   0 --   E R
    -q1h                --      --    01:00:00   --    0   0 --   E R
    -                                               ----- -----
    -                                                  92     2
    -

    The essential queues are q1h, q24h, q72h, q7d and q21d. The other queues route jobs to one of these queues and exist for compatibility with older job scripts. The maximum job execution (wall)time on turing is 21 days or 504 hours. -

    To obtain more detailed information on the queues, e.g., qxlong, the following command can be used: -

    $ qstat -f -Q qxlong
    -

    This will list additional restrictions such as the maximum number of jobs that a user can have in that queue. -

    Site-specific properties

    The following table contains the most common site-specific properties. -

    site                   property    explanation
    UAntwerpen             harpertown  only use Intel processors from the Harpertown family (54xx)
    UAntwerpen             westmere    only use Intel processors from the Westmere family (56xx)
    KU Leuven, UAntwerpen  ivybridge   only use Intel processors from the Ivy Bridge family (E5-XXXXv2)
    KU Leuven              haswell     only use Intel processors from the Haswell family (E5-XXXXv3)
    UAntwerpen             fat         only use large-memory nodes
    KU Leuven              M2070       only use nodes with NVIDIA Tesla M2070 cards (combine with partition=gpu at KU Leuven)
    KU Leuven              K20Xm       only use nodes with NVIDIA Tesla K20Xm cards (combine with partition=gpu at KU Leuven)
    KU Leuven              K40c        only use nodes with NVIDIA Tesla K40c cards (combine with partition=gpu at KU Leuven)
    KU Leuven              phi         only use nodes with Intel Xeon Phi cards (combine with partition=phi at KU Leuven)
    UAntwerpen             ib          use Infiniband interconnect (only needed on turing)
    UAntwerpen             gbe         use GigaBit Ethernet interconnect (only on turing)

    To get a list of all properties defined for all nodes, enter -

    $ pbsnodes | grep properties
    -

    This list will also contain properties referring to, e.g., network components, rack number, ... -

    You can check the pages on available hardware to find out how many nodes of each type a cluster has. -

    Job output and error files

    At some point your job finishes, so you will no longer see the job ID in the list of jobs when you run qstat. You will find the standard output and error of your job by default in the directory where you issued the qsub command. When you navigate to that directory and list its contents, you should see them: -

    $ ls
    -my_calc.e20030021 my_calc.m my_calc.pbs my_calc.o20030021
    -

    The standard output and error files have the name of the PBS script, i.e. 'my_calc' as base name, followed by the extension '.o' and '.e' respectively, and the job number, '20030021' for this example. The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation. -

    At KU Leuven, it contains extra information about your job as well. -

     $ cat my_calc.o20030021
    - ... lots of interesting Matlab results ...
    - =========================================================== 
    - Epilogue args: 
    - Date: Tue Mar 17 16:40:36 CET 2009 
    - Allocated nodes: r2i2n12 
    - Job ID: 20030021.icts-p-svcs-1 
    - User ID: vsc98765 Group ID: vsc98765 
    - Job Name: my_calc Session ID: 2659 
    - Resource List: neednodes=1:ppn=1:nehalem,nodes=1:ppn=1,walltime=02:30:00 
    - Resources Used: cput=01:52:17,mem=4160kb,vmem=28112kb,walltime=01:54:31 
    - Queue Name: qreg 
    - Account String:
    -

    As mentioned, there are two parts, separated by the horizontal line composed of equality signs. The part above the horizontal line is the output from our script, the part below is some extra information generated by the scheduling software. -

    Finally, 'Resources used' shows our wall time is 1 hour, 54 minutes, and 31 seconds. Note that this is the time the job will be charged for, not the walltime you requested in the resource list. -

    Regular interactive jobs, without X support

    The most basic way to start an interactive job is the following: -

    vsc30001@login1:~> qsub -I
    -qsub: waiting for job 20030021.icts-p-svcs-1 to start
    -qsub: job 20030021.icts-p-svcs-1 ready
    -
    vsc30001@r2i2n15:~>
    -

    Interactive jobs with X support

    Before starting an interactive job with X support, you have to make sure that you have logged in to the cluster with X support enabled. If that is not the case, you won't be able to use the X support inside the cluster either! -

    The easiest way to start a job with X support is: -

    vsc30001@login1:~> qsub -X -I
    -qsub: waiting for job 20030021.icts-p-svcs-1 to start
    -qsub: job 20030021.icts-p-svcs-1 ready
    -vsc30001@r2i2n15:~>
    -
    " -263,"","

    Introduction

    The accounting system on ThinKing is very similar to a regular bank. Individual users have accounts that will be charged for the jobs they run. However, the number of credits on such accounts is fairly small, so research projects will typically have one or more project accounts associated with them. Users that are project members can have their project-related jobs charged to such a project account. In this how-to, the technical aspects of accounting are explained. -

    How to request credits on the KU Leuven Tier-2 systems

    You can request 2 types of job credits: introduction credits and project credits. Introduction credits are a limited amount of free credits for test and development purposes. Project credits are job credits used for research. -

    How to request introduction credits

    You can find all relevant information in the HPC section of the Service Catalog (login required). -

    How to request project credits

    You can find all relevant information in the HPC section of the Service Catalog (login required). -

    Prices

    All details about prices you can find on HPC section of the Service Catalog (login required) . -

    Checking an account balance

    Since no calculations can be done without credits, it is quite useful to determine the amount of credits at your disposal. This can be done quite easily: -

    $ module load accounting
    -$ mam-balance
    -

    This will provide an overview of the balance on the user's personal account, as well as on all project accounts the user has access to. -

    Obtaining a job quote

    In order to determine the cost of a job, the user can request a quote. The gquote command takes the same options as the qsub command that are relevant for resource specification (-l, -q, -C), and/or the PBS script that will be used to run the job. The command will calculate the maximum cost based on the resources that are requested, taking into account walltime, number of compute nodes and node type. -

    $ module load accounting
    -$ gquote -q qlong -l nodes=3:ppn=20:ivybridge
    -

    Details of how to tailor job requirements can be found on the page on \"Specifying resources, output files and notifications\". -

    Note that when a queue is specified and no explicit walltime is given, the walltime used to produce the quote is the longest walltime allowed by that queue. Also note that unless specified by the user, gquote will assume the most expensive node type. This implies that the cost calculated by gquote will always be larger than the effective cost that is charged when the job finishes. -
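    To get a quote that is closer to what will actually be charged, you can therefore specify the walltime and the node type explicitly; assuming gquote accepts the same -l syntax as qsub, this could look like:

    $ gquote -l walltime=02:00:00,nodes=2:ppn=20:ivybridge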

    Running jobs: accounting workflow

    When a job is submitted using qsub, and it has to be charged against a project account, the name of the project has to be specified as an option. -

    $ qsub -A l_astrophysics_014 run-job.pbs
    -

    If the account to be charged, i.e., l_astrophysics_014, has insufficient credits for the job, the user receives a warning at this point. -

    Just prior to job execution, a reservation will be made on the specified project's account, or the user's personal account if no project was specified. When the user checks her balance at this point, she will notice that it has been decreased by an amount equal to, or less than, that provided by gquote. The latter may occur because the node type is only determined when the reservation is made, and it may be less expensive than the one assumed by gquote. If the relevant account has insufficient credits at this point, the job will be deleted from the queue. -

    When the job finishes, the account will effectively be charged. The balance of that account will be equal to or larger than it was right after the reservation. The latter can occur when the job has taken less walltime than the reservation was made for. This implies that although quotes and reservations may be overestimates, users will only be charged for the resources their jobs actually consumed. -

    Obtaining an overview of transactions

    A bank provides an overview of the financial transactions on your accounts under the form of statements. Similarly, the job accounting system provides statements that give the user an overview of the cost of each individual job. The following command will provide an overview of all transactions on all accounts the user has access to: -

    $ module load accounting
    -$ mam-statement
    -

    However, it is more convenient to filter this information so that only specific projects are displayed and/or information for a specific period of time, e.g., -

    $ mam-statement -a l_astrophysics_014 -s 2010-09-01 -e 2010-09-30
    -

    This will show the transactions on the account for the l_astrophysics_014 project for the month September 2010. -

    Note that it takes quite a while to compute such statements, so please be patient. -

    Adding the '--summarize' option to the 'mam-statement' command can be very useful: -

    vsc30002@login1:~> mam-statement -a lp_prodproject --summarize -s 2010-09-01 -e 2010-09-30
    -################################################################################
    -#
    -# Statement for project lp_prodproject
    -# Statement for user vsc30002
    -# Includes account 536 (lp_prodproject)
    -# Generated on Thu Nov 17 11:49:55 2010.
    -# Reporting account activity from 2010-09-01 to 2010-09-30.
    -#
    -################################################################################
    -Beginning Balance:                 0.00
    ------------------- --------------------
    -Total Credits:                 10000.00
    -Total Debits:                     -4.48
    ------------------- --------------------
    -Ending Balance:                 9995.52
    -############################### Credit Summary #################################
    -Object     Action   Amount
    ----------- -------- --------
    -Allocation Activate 10000.00
    -############################### Debit Summary ##################################
    -Object Action Project             User     Machine Amount Count
    ------- ------ ------------------- -------- ------- ------ -----
    -Job    Charge lp_prodproject      vsc30002 SVCS1    -4.26 13
    -Job    Charge lp_prodproject      vsc30140 SVCS1    -0.22 1
    -############################### End of Report ##################################
    -

    As you can see it will give you a summary of used credits (Amount) and number of jobs (Count) per user in a given timeframe for a specified project. -

    Reviewing job details

    A statement is an overview of transactions, but provides no details on the resources the jobs consumed. However, the user may want to examine the details of a specific job. This can be done using the following command: -

    $ module load accounting
    -$ mam-list-transactions -J 20030021
    -

    Note that the job ID does not have to be complete. -

    Job cost calculation

    The cost of a job depends on the resources it consumes. Generally speaking, one credit buys the user one hour of walltime on one reference node. The resources that are taken into account to charge for a job are the walltime it consumed, and the number and type of compute nodes it ran on. The following formula is used: -

    cost = (0.000278 * nodes * walltime) * nodetype -

    Here, -

    • nodes is the number of compute nodes the job ran on;
    • walltime is the effective duration of the job, expressed in seconds (the factor 0.000278 ≈ 1/3600 converts this to hours);
    • nodetype is the factor representing the node type's performance as listed in the table below.

    Since the Tier-2 cluster has several types of compute nodes, none of which is actually a reference node, the following values for nodetype apply: -

    node type    credit/hour
    Ivy Bridge   4.76
    Haswell      6.68
    GPU          2.86
    Cerebro      3.45

    The difference in cost between different machines/processors reflects the performance difference between those types of nodes. The total cost of a job will typically be about the same on any type of compute node, but the walltime will differ between node types. It is considerably more expensive to work on Cerebro since it has a large amount of memory, as well as local disk, and hence required a larger investment. -

    An example of a job running on multiple nodes and cores is given below: -

    $ qsub -A l_astrophysics_014 -lnodes=2:ppn=20:ivybridge simulation_3415.pbs
    -

    If this job finished in 2.5 hours (i.e., a walltime of 9000 seconds), the user will be charged: -

    (0.000278*2*9000)*4.76 = 23.8 credits -

    For a single node, single core job that also took 2.5 hours and was submitted as: -

    $ qsub -A l_astrophysics_014 -lnodes=1:ppn=1:ivybridge simulation_147.pbs
    -

    In this case, the user will be charged: -

    (0.000278*1*9000)*4.76 = 11.9 credits -

    Note that charging is done for the number of compute nodes used by the job, not the number of cores. This implies that a single core job on a single node is as expensive as a 20-core job on the same single node. The rationale is that the scheduler enforces a single-user-per-node policy. Hence using a single core on a node blocks all other cores for other users' jobs. If a user needs to run many single core jobs concurrently, she is advised to use the Worker framework. -

    " -265,"","

    Jobs are submitted to a queue system, which is monitored by a scheduler that determines when a job can be executed.

    The latter depends on two factors:

    1. the priority assigned to the job by the scheduler, and the priorities of the other jobs already in the queue, and
    2. the availability of the resources required to run the job.

    The priority of a job is calculated using a formula that takes into account a number of factors:

    1. the user's credentials (at the moment, all users are equal)
    2. fair share: this takes into account the amount of walltime that the user has used over the last seven days; the more used, the lower the resulting priority
    3. time queued: the longer a job spends in the queue, the larger its priority becomes, so that it will run eventually
    4. requested resources: larger jobs get a higher priority

    These factors are used to compute a weighted sum at each iteration of the scheduler to determine a job's priority. Due to the time queued and fair share, this is not static, but evolves over time while the job is in the queue.
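    Purely as an illustration (the actual weights and the exact definition of each factor are site-specific and may change over time), such a weighted sum could look like:

    priority = w_cred * f_credentials + w_fs * f_fairshare + w_queued * f_time_queued + w_res * f_resources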

    Different clusters use different policies as some clusters are optimised for a particular type of job.

    To get an idea when your job might start, you could try MOAB's 'showstart' command as described in the page on \"Submitting and managing jobs with Torque and Moab\".

    Also, don't try to outsmart the scheduler by explicitly specifying nodes that seem empty when you launch your job. The scheduler may be saving these nodes for a job for which it needs multiple nodes, and the result will be that you will have to wait even longer before your job starts as the scheduler will not launch your job on another node which may be available sooner.

    Remember that the cluster is not intended as a replacement for a decent desktop PC. Short, sequential jobs may spend quite some time in the queue, but this type of calculation is atypical from an HPC perspective. If you have large batches of (even relatively short) sequential jobs, you can still pack them as longer sequential or even parallel jobs and get to run them sooner. User support can help you with that.

    " -267,"","

    My jobs seem to run, but I don't see any output or errors?

    Most probably, you exceeded the disk quota for your home directory, i.e., the total file size for your home directory is just too large. When a job runs, it needs to store temporary output and error files in your home directory. When it fails to do so, the program will crash, and you won't get feedback, since that feedback would be in the error file that can't be written.

    See the FAQs listed below to check the amount of disk space you are currently using, and for a few hints on what data to store where.

    However, your home directory may unexpectedly fill up in two ways:

    1. a running program produces large amounts of output or errors;
    2. a program crashes and produces a core dump.

    Note that one job that produces output or a core that is too large for the file system quota will most probably cause all your jobs that are queued to fail.

    Large amounts of output or errors

    To deal with the first issue, simply redirect the standard output of the command to a file that is in your data or scratch directory, or, if you don't need that output anyway, redirect it to /dev/null. A few examples that can be used in your PBS scripts that execute, e.g., my-prog, are given below.

    To send standard output to a file, you can use:

    my-prog > $VSC_DATA/my-large-output.txt

    If you want to redirect both standard output and standard error, use:

    my-prog  > $VSC_DATA/my-large-output.txt \\
    -2> $VSC_DATA/my-large-error.txt

    To redirect both standard output and standard error to the same file, use:

    my-prog &> $VSC_DATA/my-large-output-error.txt

    If you don't care for the standard output, simply write:

    my-prog >/dev/null

    Core dump

    When a program crashes, a core file is generated. This can be used to try and analyse the cause of the crash. However, if you don't need cores for post-mortem analysis, simply add:

    ulimit -c 0

    to your .bashrc file. This can be done more selectively by adding this line to your PBS script prior to invoking your program.
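    For example, to disable core dumps for one particular job only, the line can go into the job script itself, just before the program is started (my-prog is the fictitious program used above):

    #!/bin/bash -l
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # disable core dumps for this job only
    ulimit -c 0
    my-prog > $VSC_DATA/my-large-output.txt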

    " -269,"","

    This page is outdated. Please check our updated \"Running jobs\" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search) you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be. -

    Resource management: PBS/Torque

    The resource manager has to be aware of available resources so that it can start the users' jobs on the appropriate compute nodes. These resources include, but are not limited to, the number of compute nodes, the number of cores in each node, as well as their type, and the amount of memory in each node. In addition to the hardware configuration, the resource manager has to be aware of resources that are currently in use (configured, but occupied by or reserved for running jobs) and of currently available resources. -

    The software we use for this is called PBS/Torque (Portable Batch System): -

    TORQUE Resource Manager provides control over batch jobs and distributed computing resources. It is an advanced open-source product based on the original PBS project* and incorporates the best of both community and professional development. It incorporates significant advances in the areas of scalability, reliability, and functionality and is currently in use at tens of thousands of leading government, academic, and commercial sites throughout the world. TORQUE may be freely used, modified, and distributed under the constraints of the included license. -

    TORQUE can integrate with Moab Workload Manager to improve overall utilization, scheduling and administration on a cluster. Customers who purchase Moab family products also receive free support for TORQUE. -

    (http://www.adaptivecomputing.com/products/open-source/torque/) -

    To make sure that the user's job obtains the appropriate resources to run, the user has to specify these requirements using PBS directives. PBS directives can either be specified on the command line when using 'qsub', or in a PBS script. -

    PBS directives for resource management

    Walltime

    By default, the scheduler assumes a run time for a job of one hour. This can be seen in the \"Resource List\" line in the standard output file: the walltime was set to one hour, since it was not specified otherwise by the user: -

    Resource List: neednodes=1:ppn=1,nodes=1:ppn=1,walltime=01:00:00
    -

    For many jobs, the default wall time will not be sufficient: some will need multiple hours or even days to complete. However, when a job exceeds the specified wall time, it will be automatically killed by the scheduler, and, unless the job saves intermediate results, all computations will be lost. On the other hand, a shorter wall time may move your job forward in the queue: the scheduler may notice that there is a gap of 30 minutes between two bigger jobs on a node, and decide to insert a shorter job (this process is called backfilling). -

    To specify a wall time of ten minutes, you can use the following parameter (or directive) for 'qsub': -

    $ qsub -l walltime=00:10:00 job.pbs
    -

    The walltime is specified as (H)HH:MM:SS, so a job that is expected to run for two days can be described using -

    $ qsub -l walltime=48:00:00 job.pbs
    -

    Characteristics of the compute nodes

    site       architecture   np     installed mem  avail mem
    KU Leuven  Ivy Bridge     20     64 GB          60 GB
    KU Leuven  Ivy Bridge     20     128 GB         120 GB
    KU Leuven  harpertown     8      8 GB           7 GB
    KU Leuven  nehalem        8      24 GB          23 GB
    KU Leuven  nehalem (fat)  16(*)  74 GB          73 GB
    KU Leuven  westmere       12     24 GB          23 GB
    UA         harpertown     8      16 GB          15 GB
    UA         westmere       24(*)  24 GB          23 GB

    (*): These nodes have hyperthreading enabled. They have only 8 (nehalem) or 12 (westmere) physical cores, but create the illusion of 16 or 24 \"virtual\" cores effectively running together (i.e., 16 or 24 simultaneous threads). Some programs benefit from using two threads per physical core, some do not. -

    There is more information on the specific characteristics of the compute nodes in the various VSC clusters on the hardware description page for each cluster in the \"Available hardware\" section. -

    Number of processors

    By default, only one core (or CPU, or processor) will be assigned to a job. However, parallel jobs need more than one core, e.g., MPI or OpenMP applications. After deciding on the number of cores, the \"layout\" has to be chosen: can all cores of a node be used simultaneously, or do memory requirements dictate that only some of the cores of a node can be used? The layout can be specified using the 'nodes' and 'ppn' attributes. -

    The following example assumes that 16 cores will be used for the job, and that all cores on a compute node can be used simultaneously: -

    $ qsub -l nodes=2:ppn=8 job.pbs
    -

    There's no point in requesting more cores per node than are available. The maximum available ppn is processor dependent and is shown in the table above. On the other hand, due to memory consumption or memory access patterns, it may be necessary to restrict the number of cores per node, e.g., -

    $ qsub -l nodes=4:ppn=4 job.pbs
    -

    As in the previous example, this job requires 16 cores, but now only 4 out of the 8 available cores per compute node will be used. -

    It is very important to note that the resource manager may put any multiple of the requested 'ppn' on one node (this is called \"packing\") as long as the total does not exceed the number of cores in the node (8 in this example). E.g., when the job description specifies 'nodes=4:ppn=2', the system may actually assign it 4 times the same node: 2 x 4 = 8 cores. This behavior can be circumvented by setting the memory requirements appropriately. -
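    For example, on a node type with 8 cores and roughly 15 GB of available memory (see the table above), requesting a per-core memory that only fits two processes per node forces the resource manager to spread the job over four different nodes; the exact figure to use obviously depends on the node type:

    $ qsub -l nodes=4:ppn=2,pmem=7gb job.pbs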

    Note that requesting multiple cores does not run your script on each of these cores! The system will start your script on one core only (the \"mother superior\") and provide it with a list of nodes that have cores available for you to use. This list is stored in a file '$PBS_NODEFILE'. You now have to \"manually\" start your program on these nodes. Some of this will be done automatically for you when you use MPI (see the section about Message Passing Interfaces). -
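    A minimal sketch of how a job script could inspect this node list and use it to start a parallel program (the program name my_parallel_program is fictitious, and the exact mpirun options depend on the MPI implementation you load, so treat this as an illustration only):

    #!/bin/bash -l
    #PBS -l nodes=2:ppn=8
    cd $PBS_O_WORKDIR
    # $PBS_NODEFILE contains one line per core assigned to the job
    NPROCS=$(wc -l < $PBS_NODEFILE)
    echo "Running on ${NPROCS} cores on the following nodes:"
    sort -u $PBS_NODEFILE
    mpirun -np ${NPROCS} -machinefile $PBS_NODEFILE ./my_parallel_program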

    Processor type

    As seen in the table above, we have different architectures and different amounts of memory in different kinds of nodes. In some situations, it is convenient or even necessary to request a specific architecture for a job to run on. This is easily accomplished by adding a feature to the resource description, e.g., -

    $ qsub -l nodes=1:nehalem job.pbs
    -

    Here, a single node is requested, but it should be equipped with a Nehalem Intel processor. The following example specifies a job running on 2 x 4 cores of type 'harpertown'. -

    $ qsub -l nodes=2:ppn=4:harpertown job.pbs
    -

    Memory

    Besides the number of processors, the required amount of memory for a job is an important resource. This can be specified in two ways, either for the job in its entirety, or by individual process, i.e., per core. The following directive requests 2 Gb of RAM for each core involved in the computation: -

    $ qsub -l nodes=2:ppn=4,pmem=2gb job.pbs
    -

    Note that a request for multiple resources, e.g., nodes and memory, are comma separated. -

    As indicated in the table above, not all of the installed memory is available to the end user for running jobs: also the operating system, the cluster management software and, depending on the site also the file system, require memory. This implies that the memory specification for a single compute node should not exceed the figures shown in the table. If the memory requested exceeds the amount of memory available in a single compute node, the job can not be executed, and will remain in the queue indefinitely. The user is informed of this when he runs 'checkjob'. -

    Note that specifying 'pmem' judiciously will prevent unwanted packing, mentioned in the previous section. -

    Similar to the required memory per core, it is also possible to specify the total memory required by the job using the 'mem' directive. -

    Non-resource related PBS directives

    PBS/Torque has a number of convenient features that are not related to resource management as such. -

    Notification

    Some users like to be notified when their jobs are done, and this can be accomplished using the appropriate PBS directives. -

    $ qsub -m ae -M albert.einstein@princeton.edu job.pbs
    -

    Here, the user indicates that he wants to be notified either when his job is aborted ('a') by PBS/Torque (when, e.g., the requested walltime was exceeded), or when his job ends ('e'). The notification will be sent to the email address specified using the '-M' flag. -

    Apart from the abort ('a') and end ('e') events, a notification can also be sent when the job begins ('b') execution. -
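    These notification settings can also be placed in the PBS script itself rather than on the command line (see the section on in-script PBS directives below), e.g.:

    #!/bin/bash -l
    #PBS -m abe
    #PBS -M albert.einstein@princeton.edu
    ...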

    Job name

    By default, the name of a job is that of the PBS script that defines it. However, it may be easier to keep track of multiple runs of the same job script by assigning a specific name to each. A name can be specified explicitly by the '-N' directive, e.g., -

    $ qsub -N 'spaceweather' job.pbs
    -

    Note that this will result in the standard output and error files to be named 'spaceweather.o<nnn>' and 'spaceweather.e<nnn>'. -

    In-script PBS directives

    Given all these options, specifying them for each individual job submission on the command line soon gets a trifle unwieldy. As an alternative to passing PBS directives as command line arguments to 'qsub', they can be specified in the script that is being submitted. So instead of typing: -

    qsub -l nodes=8:ppn=2 job.pbs
    -

    the 'job.pbs' script can be altered to contain the following: -

    #!/bin/bash -l
    -#PBS -l nodes=8:ppn=2
    -...
    -

    The \"#PBS\" prefix indicates that a line contains a PBS directive. Note that PBS directives should preceed all commands in your script, i.e., they have to be listed immediately after the '#!/bin/bash -l' line! -

    If this PBS script were submitted as follows, the command line resource description would override that in the 'job.pbs' script: -

    $ qsub -l nodes=5:ppn=2 job.pbs
    -

    The job would run on 5 nodes, 2 cores each, rather than on 8 nodes, 2 cores each as specified in 'job.pbs'. -

    Any number of PBS directives can be listed in a script, e.g., -

    #!/bin/bash -l
    -# Request 8 nodes, with 2 cores each
    -#PBS -l nodes=8:ppn=2
    -# Request 2 Gb per core
    -#PBS -l pmem=2gb
    -# Request a walltime of 10 minutes
    -#PBS -l walltime=00:10:00
    -# Keep both standard output, standard error
    -#PBS -j oe
    -#
    -...
    -
    " -271,"","

    This page is outdated. Please check our updated \"Running jobs\" section on the user portal. If you came to this page following a link on a web page of this site (and not via a search) you can help us improve the documentation by mailing the URL of the page that brought you here to kurt.lust@uantwerpen.be. -

    Job scheduling: Moab

    To map jobs to available resources, and to make sure the necessary resources are available when a job is started, the cluster is equipped with a job scheduler. The scheduler will accept new jobs from the users, and will schedule them according to walltime, number of processors needed, number of jobs the user already has scheduled, the number of jobs the user executed recently, etc. -

    For this task we currently use Moab: -

    Moab Cluster Suite is a policy-based intelligence engine that integrates scheduling, managing, monitoring and reporting of cluster workloads. It guarantees service levels are met while maximizing job throughput. Moab integrates with existing middleware for consolidated administrative control and holistic cluster reporting. Its graphical management interfaces and flexible policy capabilities result in decreased costs and increased ROI. (Adaptive Computing/Cluster Resources) -

    Most commands used so far were PBS/Torque commands. Moab also provides a few interesting commands, which are more related to the scheduling aspect of the system. For a full overview of all commands, please refer to the Moab user manual on their site. -

    Moab commands

    checkjob

    This is arguably the most useful Moab command since it provides detailed information on your job from the scheduler's point of view. It can give you important information about why your job fails to start. If a scheduling error occurs or your job is delayed, the reason will be shown here: -

    $ checkjob 20030021
    -checking job 20030021
    -State: Idle
    -Creds:  user:vsc30001  group:vsc30001  account:vsc30001  class:qreg  qos:basic
    -WallTime: 00:00:00 of 1:00:00
    -SubmitTime: Wed Mar 18 10:37:11
    -  (Time Queued  Total: 00:00:01  Eligible: 00:00:01)
    -Total Tasks: 896
    -Req[0]  TaskCount: 896  Partition: ALL
    -Network: [NONE]  Memory >= 0  Disk >= 0  Swap >= 0
    -Opsys: [NONE]  Arch: [NONE]  Features: [NONE]
    -IWD: [NONE]  Executable:  [NONE]
    -Bypass: 0  StartCount: 0
    -PartitionMask: [ALL]
    -Flags:       RESTARTABLE PREEMPTOR
    -PE:  896.00  StartPriority:  5000
    -job cannot run in partition DEFAULT (insufficient idle procs available: 752 < 896)
    -

    In this particular case, the job is delayed because the user asked for a total of 896 processors, and only 752 are available. The user will have to wait, or adapt his program to run on fewer processors. -

    showq

    This command will show you a list of running jobs, like qstat, but with somewhat different information per job. -

    showbf

    When the scheduler performs its scheduling task, there are bound to be some gaps between jobs on a node. These gaps can be backfilled with small jobs. To get an overview of these gaps, you can execute the command \"showbf\": -

    $ showbf
    -backfill window (user: 'vsc30001' group: 'vsc30001' partition: ALL) Wed Mar 18 10:31:02
    -323 procs available for      21:04:59
    -136 procs available for   13:19:28:58
    -

    showstart

    This is a very simple tool that will tell you, based on the current status of the cluster, when your job is scheduled to start. Note however that this is merely an estimate, and should not be relied upon: jobs can start sooner if other jobs finish early, get removed, etc., but jobs can also be delayed when other jobs with higher priority are submitted. -

    $ showstart 20030021
    -job 20030021 requires 896 procs for 1:00:00
    -Earliest start in       5:20:52:52 on Tue Mar 24 07:36:36
    -Earliest completion in  5:21:52:52 on Tue Mar 24 08:36:36
    -Best Partition: DEFAULT
    -
    " -273,"","

    Purpose

    The Worker framework has been developed to meet two specific use cases: -

    • many small jobs determined by parameter variations; the scheduler's task is easier when it does not have to deal with too many jobs.
    • job arrays: replace the -t for array requests; this was an experimental feature provided by the torque queue system, but it is not supported by Moab, the current scheduler.

    Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values. However, Worker's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach. -

    This how-to shows you how to use the Worker framework. -

    Prerequisites

    A (sequential) job you have to run many times for various parameter values. We will use a non-existent program cfd-test by way of a running example. -

    Step by step

    We will consider the following use cases already mentioned above: -

    • parameter variations, i.e., many small jobs determined by a specific parameter set;
    • job arrays, i.e., each individual job gets a unique numeric identifier.

    Parameter variations

    Suppose the program the user wishes to run is 'cfd-test' (this program does not exist, it is just an example), which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like: -

    cfd-test -t 20 -p 1.05 -v 4.3
    -

    The program will write its results to standard output. A PBS script (say run.pbs) that would run this as a job would then look like: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=1
    -#PBS -l walltime=00:15:00
    -cd $PBS_O_WORKDIR
    -cfd-test -t 20  -p 1.05  -v 4.3
    -

    When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3. To submit the job, the user would use: -

    $ qsub run.pbs
    -

    However, the user wants to run this program for many parameter instances, e.g., he wants to run the program on 100 instances of temperature, pressure and volume. To this end, the PBS file can be modified as follows: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=8
    -#PBS -l walltime=04:00:00
    -cd $PBS_O_WORKDIR
    -cfd-test -t $temperature  -p $pressure  -v $volume
    -

    Note that -

    1. the parameter values 20, 1.05, 4.3 have been replaced by variables $temperature, $pressure and $volume respectively;
    2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8); and
    3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

    The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1,500 minutes on one CPU. However, this job will use 7 CPUs (1 is reserved for delegating work), so the 100 calculations will be done in 1,500/7 = 215 minutes, i.e., 4 hours to be on the safe side. Note that starting from version 1.3, a dedicated core is no longer required for delegating work when using the -master flag. This is however not the default behavior since it is implemented using features that are not standard. This implies that in the previous example, the 100 calculations would be completed in 1,500/8 = 188 minutes. -

    The 100 parameter instances can be stored in a comma separated value file (CSV) that can be generated using a spreadsheet program such as Microsoft Excel, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file data.txt would look like: -

    temperature,pressure,volume
    -20,1.05,4.3
    -21,1.05,4.3
    -20,1.15,4.3
    -21,1.25,4.3
    -...
    -

    It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example. Items on a line are separated by commas. -

    The job can now be submitted as follows: -

    $ module load worker/1.5.0-intel-2014a
    -$ wsub -batch run.pbs -data data.txt
    -

    Note that the PBS file is the value of the -batch option. The cfd-test program will now be run for all 100 parameter instances—7 concurrently—until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance. -
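    If you want all 8 cores to process work items (which, as noted above, requires Worker version 1.3 or later and the -master flag, and is not the default behaviour), the submission would look like this sketch:

    $ wsub -master -batch run.pbs -data data.txt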

    Job arrays

    Worker also supports a job array-like usage pattern, since it offers a convenient workflow. -

    A typical PBS script run.pbs for use with job arrays would look like this: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=1
    -#PBS -l walltime=00:15:00
    -cd $PBS_O_WORKDIR
    -INPUT_FILE=\"input_${PBS_ARRAYID}.dat\"
    -OUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"
    -word-count -input ${INPUT_FILE}  -output ${OUTPUT_FILE}
    -

    As in the previous section, the word-count program does not exist. Input for this fictitious program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat that the user produced by whatever means, and the corresponding output computed by word-count is written to output_1.dat, output_2.dat, ..., output_100.dat. (Here we assume that the non-existent word-count program takes options -input and -output.) -

    The job would be submitted using: -

    $ qsub -t 1-100 run.pbs
    -

    The effect is that rather than submitting 1 job, the user actually submits 100 jobs to the queue system (this puts quite a burden on the scheduler, which is precisely why the current scheduler doesn't support job arrays). -

    Using worker, a feature akin to job arrays can be used with minimal modifications to the PBS script: -

    #!/bin/bash -l
    -#PBS -l nodes=1:ppn=8
    -#PBS -l walltime=04:00:00
    -cd $PBS_O_WORKDIR
    -INPUT_FILE=\"input_${PBS_ARRAYID}.dat\"
    -OUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"
    -word-count -input ${INPUT_FILE}  -output ${OUTPUT_FILE}
    -

    Note that -

    1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and
    2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

    The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculation take 1,500 minutes on one CPU. However, this job will use 7 CPUs (1 is reserved for delegating work), so the 100 calculations will be done in 1,500/7 = 215 minutes, i.e., 4 hours to be on the safe side. Note that starting from version 1.3 when using the -master flag, a dedicated core for delegating work is no longer required. This is however not the default behavior since it is implemented using features that are not standard. So in the previous example, the 100 calculations would be done in 1,500/8 = 188 minutes. -

    The job is now submitted as follows: -

    $ module load worker/1.5.0-intel-2014a
    -$ wsub -t 1-100  -batch run.pbs
    -

    The word-count program will now be run for all 100 input files—7 concurrently—until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak. Note that in contrast to torque job arrays, a worker job array submits a single job. -

    MapReduce: prologues and epilogue

Often, an embarrassingly parallel computation can be abstracted to three simple steps:

1. a preparation phase in which the data is split up into smaller, more manageable chunks;
2. on these chunks, the same algorithm is applied independently (these are the work items); and
3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

The Worker framework directly supports this scenario by using a prologue and an epilogue. The former is executed just once before work is started on the work items; the latter is executed just once after the work on all work items has finished. Technically, the prologue and epilogue are executed by the master, i.e., the process that is responsible for dispatching work and logging progress.

Suppose that 'split-data.sh' is a script that prepares the data by splitting it into 100 chunks and 'distr.sh' aggregates the data; then one can submit a MapReduce-style job as follows:

$ wsub -prolog split-data.sh -batch run.pbs -epilog distr.sh -t 1-100

Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

    Some notes on using Worker efficiently

1. Worker is implemented using MPI, so it is not restricted to a single compute node; it scales well to many nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.
2. Worker will be effective when
   • work items, i.e., individual computations, are neither too short nor too long (i.e., from a few minutes to a few hours); and
   • the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

    Monitoring a worker job

Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be 'run.pbs.log445948', assuming the job's ID is 445948. To keep an eye on the progress, one can use:

$ tail -f run.pbs.log445948

Alternatively, a Worker command that summarizes a log file can be used:

$ watch -n 60 wsummarize run.pbs.log445948

This will summarize the log file every 60 seconds.

    Time limits for work items

Sometimes, the execution of a work item takes longer than expected or, worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could complete successfully are not even started. Again, a simple yet versatile solution is offered by the Worker framework. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example:

#!/bin/bash -l
#PBS -l nodes=1:ppn=8
#PBS -l walltime=04:00:00
module load timedrun/1.0.1
cd $PBS_O_WORKDIR
timedrun -t 00:20:00 cfd-test -t $temperature -p $pressure -v $volume

Note that it is trivial to set individual time constraints for work items by introducing an extra parameter and including its values in the CSV file, along with those for the temperature, pressure and volume.
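A minimal sketch of such a setup, where time_limit is a hypothetical extra column added to data.txt and used in run.pbs:

temperature,pressure,volume,time_limit
20,1.05,4.3,00:20:00
21,1.05,4.3,00:30:00
...

and, in run.pbs, the per-item value replaces the fixed limit:

timedrun -t $time_limit cfd-test -t $temperature -p $pressure -v $volume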

Also note that 'timedrun' is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

    Resuming a Worker job

Unfortunately, it is not always easy to estimate the walltime for a job, and consequently, sometimes the latter is underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully and which remain to be computed. Suppose the job that did not complete all its work items had ID '445948':

$ wresume -jobid 445948

This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime:

$ wresume -l walltime=1:30:00 -jobid 445948

Work items may fail to complete successfully for a variety of reasons, e.g., a missing data file, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed):

$ wresume -jobid 445948 -retry

By default, a job's prologue is not executed when it is resumed, while its epilogue is. 'wresume' has options to modify this default behavior.

    Aggregating result data

In some settings, each work item produces a file as output, but the final result should be an aggregation of those files. Although this is not necessarily hard, it is tedious. Worker can help you achieve this easily since, typically, the name of the file produced by a work item is based on the parameters of that work item.

Consider the following data file data.csv:

a,   b
1.3, 5.7
2.7, 1.4
3.4, 2.1
4.1, 3.8

Processing it would produce 4 files, i.e., output-1.3-5.7.txt, output-2.7-1.4.txt, output-3.4-2.1.txt, output-4.1-3.8.txt. To obtain the final data, these files should be concatenated into a single file, output.txt. This can be done easily using wcat:

$ wcat -data data.csv -pattern output-[%a%]-[%b%].txt -output output.txt

The pattern describes the file names as generated by each work item in terms of the parameter names and values defined in the data file data.csv.

wcat can optionally skip the header lines of all but the first file when the -skip_first n option is used (n is the number of lines to skip). By default, blank lines are omitted, but when the -keep_blank option is used, they will be written to the output file. Help is available using the -help flag.
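For example, if every output file starts with a one-line header, a call along the following lines (a sketch based on the options described above) would keep only one copy of the header in output.txt:

$ wcat -data data.csv -pattern output-[%a%]-[%b%].txt -output output.txt -skip_first 1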

    Multithreaded work items

When a cluster is configured to use CPU sets, using Worker to execute multithreaded work items doesn't work by default. Suppose a node has 20 cores and each work item runs most efficiently on 4 cores; then one would expect that the following resource specification would work:

$ wsub -l nodes=10:ppn=5 -W x=nmatchpolicy=exactnode -batch run.pbs \
       -data my_data.csv

This would run 5 work items per node, so that each work item would have 4 cores at its disposal. However, this will not work when CPU sets are active, since the four threads of a work item would all run on a single core, which is detrimental for application performance and leaves 15 out of the 20 cores idle. Simply adding the -threaded option will ensure that the behavior and performance are as expected:

$ wsub -l nodes=10:ppn=5 -batch run.pbs -data my_data.csv -threaded 4

Note, however, that using multithreaded work items may actually be less efficient than single-threaded execution in this setting of many work items, since the thread management overhead accumulates.

Also note that this feature is only available from Worker version 1.5.x onwards.

    Further information

For information about the most recent version and new features, please check the official Worker documentation webpage.

For information on how to run MPI programs as work items, please contact your friendly system administrator.

This how-to introduces only Worker's basic features. The wsub command and all other Worker commands print some usage information when the -help option is specified:

### error: batch file template should be specified
### usage: wsub  -batch <batch-file>          \
#                [-data <data-files>]         \
#                [-prolog <prolog-file>]      \
#                [-epilog <epilog-file>]      \
#                [-log <log-file>]            \
#                [-mpiverbose]                \
#                [-master]                    \
#                [-threaded]                  \
#                [-dryrun] [-verbose]         \
#                [-quiet] [-help]             \
#                [-t <array-req>]             \
#                [<pbs-qsub-options>]
#
#   -batch <batch-file>   : batch file template, containing variables to be
#                           replaced with data from the data file(s) or the
#                           PBS array request option
#   -data <data-files>    : comma-separated list of data files (default CSV
#                           files) used to provide the data for the work
#                           items
#   -prolog <prolog-file> : prolog script to be executed before any of the
#                           work items are executed
#   -epilog <epilog-file> : epilog script to be executed after all the work
#                           items are executed
#   -mpiverbose           : pass verbose flag to the underlying MPI program
#   -verbose              : feedback information is written to standard error
#   -dryrun               : run without actually submitting the job, useful
#   -quiet                : don't show information
#   -help                 : print this help message
#   -master               : start an extra master process, i.e.,
#                           the number of slaves will be nodes*ppn
#   -threaded             : indicates that work items are multithreaded,
#                           ensures that CPU sets will have all cores,
#                           regardless of ppn, hence each work item will
#                           have <total node cores>/ppn cores for its
#                           threads
#   -t <array-req>        : qsub's PBS array request options, e.g., 1-10
#   <pbs-qsub-options>    : options passed on to the queue submission
#                           command

    Troubleshooting

The most common problem with the Worker framework is that it doesn't seem to work at all, showing messages in the error file about the module command failing to work. The cause is trivial and easy to remedy.

Like any PBS script, a Worker PBS file has to be in UNIX format!

If you edited a PBS script on your desktop, or something went wrong during sftp/scp, the PBS file may end up in DOS/Windows format, i.e., it has the wrong line endings. The PBS/Torque queue system cannot deal with that, so you will have to convert the file, e.g., for the file 'run.pbs':

$ dos2unix run.pbs

    Purpose

Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software. However, this information is valuable, since it helps to determine the characteristics of the compute nodes a job using this application should run on.

Although the tool presented here can also be used to support the software development process, better tools are almost certainly available.

Note that currently only single-node jobs are supported; MPI support may be added in a future release.

    Prerequisites

The user should be familiar with the Linux bash shell.

    Monitoring a program

    To start using monitor, first load the appropriate module:

$ module load monitor

Starting a program to monitor, e.g., simulation, is very straightforward:

    $ monitor simulation

    monitor will write the CPU usage and memory consumption of simulation to standard error. Values will be displayed every 5 seconds. This is the rate at which monitor samples the program's metrics.

    Log file

    Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows:

    $ monitor -l simulation.log simulation

For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

    $ monitor -l simulation.log -n 12 simulation

    Note that this option is only available when monitor writes its metrics to a log file, not when standard error is used.

    Modifying the sample resolution

The interval at which monitor will show the metrics can be modified by specifying delta, the sampling interval in seconds:

    $ monitor -d 60 simulation

    monitor will now print the program's metrics every 60 seconds. Note that the minimum delta value is 1 second.

    File sizes

    Some programs use temporary files, the size of which may also be a useful metric. monitor provides an option to display the size of one or more files:

    $ monitor -f tmp/simulation.tmp,cache simulation

Here, the size of the file simulation.tmp in directory tmp, as well as the size of the file cache, will be monitored. Files can be specified by an absolute as well as a relative path, and multiple files are separated by ','.

    Programs with command line options

    Many programs, e.g., matlab, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

    $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m

    The use of '--' will ensure that monitor does not get confused by matlab's '-nojvm' and '-nodisplay' options.

    Subprocesses and multicore programs

    Some processes spawn one or more subprocesses. In that case, the metrics shown by monitor are aggregated over the process and all of its subprocesses (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100 %.

Some (well, since this is an HPC cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100 %.

    When programs of this type are running on a computer with n cores, the CPU usage can go up to n x 100 %.

    Exit codes

monitor will propagate the exit code of the program it is watching. If the latter ends normally, monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

    When monitor has to terminate in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
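For example, to make monitor report its own errors with exit code 50 instead (the value 50 is an arbitrary choice for illustration), one could run:

$ export MONITOR_EXIT_ERROR=50
$ monitor -l simulation.log simulation
$ echo $?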

    Monitoring a running process

It is also possible to "attach" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

    $ monitor -p 18749

    Note that this feature can be (ab)used to monitor specific subprocesses.

    More information

Help is available for monitor by issuing:

$ monitor -h
### usage: monitor [-d <delta>] [-l <logfile>] [-f <files>]
#                  [-h] [-v] <cmd> | -p <pid>
# Monitor can be used to sample resource utilization of a process
# over time.  Monitor can sample a running process if the latter's PID
# is specified using the -p option, or it can start a command with
# parameters passed as arguments.  When one has to specify flags for
# the command to run, '--' can be used to delimit monitor's options, e.g.,
#    monitor -delta 5 -- matlab -nojvm -nodisplay calc.m
# Resources that can be monitored are memory and CPU utilization, as
# well as file sizes.
# The sampling resolution is determined by delta, i.e., monitor samples
# every <delta> seconds.
# -d <delta>   : sampling interval, specified in
#                seconds, or as [[dd:]hh:]mm:ss
# -l <logfile> : file to store sampling information; if omitted,
#                monitor information is printed on stderr
# -n <lines>   : retain only the last <lines> lines in the log file,
#                note that this option only makes sense when combined
#                with -l, and that the log file lines will not be sorted
#                according to time
# -f <files>   : comma-separated list of file names that are monitored
#                for size; if a file doesn't exist at a given time, the
#                entry will be 'N/A'
# -v           : give verbose feedback
# -h           : print this help message and exit
# <cmd>        : actual command to run, followed by whatever
#                parameters needed
# -p <pid>     : process ID to monitor
#
# Exit status: * 65 for any montor related error
#              * exit status of <cmd> otherwise
# Note: if the exit code 65 conflicts with those of the
#       command to run, it can be customized by setting the
#       environment variables 'MONITOR_EXIT_ERROR' to any value
#       between 1 and 255 (0 is not prohibited, but this is probably.
#       not what you want).

    What is checkpointing

Checkpointing makes it possible to run jobs that take weeks or months. Each time a subjob runs out of its requested wall time, a snapshot of the application memory (and much more) is taken and stored, after which a subsequent subjob picks up the checkpoint and continues.

    If the compute nodes have support for BLCR, checkpointing can be used.

    How to use it

    Using checkpointing is very simple: just use csub instead of qsub to submit a job.

    The csub command creates a wrapper around your job script, to take care of all the checkpointing stuff. In practice, you (usually) don't need to adjust anything, except for the command used to submit your job. Checkpointing does not require any changes to the application you are running, and should support most software. There are a few corner cases however (see the BLCR Frequently Asked Questions).

    The csub command

Typically, a job script is submitted with checkpointing support enabled by running:

    $ csub -s job_script.sh

One important caveat is that the job script (or the applications run in the script) should not create its own local temporary directories.

Also note that adding PBS directives (#PBS) in the job script is useless, as they will be ignored by csub. Controlling job parameters should be done via the csub command line.
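For example, a job that should be checkpointed roughly every 6 hours could be submitted along these lines (a sketch using options from the help output below; the value is arbitrary):

$ csub -s job_script.sh --job_time=6:00:00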

    Help on the various command line parameters supported by csub can be obtained using -h:

$ csub -h

    csub [opts] [-s jobscript]

    Options:
        -h or --help               Display this message

        -s                         Name of jobscript used for job.
                                   Warning: The jobscript should not create it's own local temporary directories.

        -q                         Queue to submit job in [default: scheduler default queue]

        -t                         Array job specification (see -t in man qsub) [default: none]

        --pre                      Run prestage script (Current: copy local files) [default: no prestage]

        --post                     Run poststage script (Current: copy results to localdir/result.) [default: no poststage]

        --shared                   Run in shared directory (no pro/epilogue, shared checkpoint) [default: run in local dir]

        --no_mimic_pro_epi         Do not mimic prologue/epilogue scripts [default: mimic pro/epi (bug workaround)]

        --job_time=<string>        Specify wall time for job (format: <hours>:<minutes>:<seconds>s, e.g. 3:12:47) [default: 10h]

        --chkpt_time=<string>      Specify time for checkpointing a job (format: see --job_time) [default: 15m]

        --cleanup_after_restart    Specify whether checkpoint file and tarball should be cleaned up after a successful restart
                                   (NOT RECOMMENDED!) [default: no cleanup]

        --no_cleanup_chkpt         Don't clean up checkpoint stuff in $VSC_SCRATCH/chkpt after job completion [default: do cleanup]

        --resume=<string>          Try to resume a checkpointed job; argument should be unique name of job to resume [default: none]

        --chkpt_save_opt=<string>  Save option to use for cr_checkpoint (all|exe|none) [default: exe]

        --term_kill_mode           Kill checkpointed process with SIGTERM instead of SIGKILL after checkpointing [defailt: SIGKILL]

        --vmem=<string>            Specify amount of virtual memory required [default: none specified]

    Below we discuss various command line parameters.

--pre and --post

The --pre and --post parameters steer whether local files are copied or not. The job submitted using csub (by default) runs on the local storage provided by a particular compute node. Thus, no changes will be made to the files on the shared storage (e.g. $VSC_SCRATCH). See the example after this overview.

If the job script needs (local) access to the files of the directory where csub is executed, --pre should be specified. This will copy all the files in the job script directory to the location where the job script will execute.

If the output of the job that was run, or additional output files created by the job in its working directory, are required, --post should be used. This will copy the entire job working directory to the location where csub was executed, in a directory named result.<jobname>. An alternative is to copy the interesting files to the shared storage at the end of the job script.

--shared

If the job needs to be run on the shared storage and not on the local storage of the worker node (for whatever reason), --shared should be specified. In this case, the job will be run in a subdirectory of $VSC_SCRATCH/chkpt. This will also disable the execution of the prologue and epilogue scripts, which prepare the job directory on the local storage.

--job_time and --chkpt_time

To specify the requested wall time per subjob, use the --job_time parameter. The default setting is 10 hours per subjob. Lowering this will result in more frequent checkpointing, and thus more subjobs.

To specify the time that is reserved for checkpointing the job, use --chkpt_time. By default, this is set to 15 minutes, which should be enough for most applications/jobs. Don't change this unless you really need to.

The total requested wall time per subjob is the sum of both job_time and chkpt_time. This should be taken into account when submitting to a specific job queue (e.g., queues which only support jobs of up to 1 hour).

--no_mimic_pro_epi

The option --no_mimic_pro_epi disables the workaround currently implemented for a permissions problem when using actual Torque prologue/epilogue scripts. Don't use this option unless you really know what you're doing!
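For example, a job that needs its input files copied to the node before it starts and its results copied back afterwards could be submitted as follows (a sketch combining the options above):

$ csub -s job_script.sh --pre --post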

    Support for csub

• Array jobs
  csub has support for checkpointing array jobs. Just specify "-t <spec>" on the csub command line (see qsub for details, and the example after this list).
• MPI support
  The BLCR checkpointing mechanism behind csub has support for checkpointing MPI applications. However, checkpointing MPI applications is pretty much untested up until now. If you would like to use csub with your MPI applications, please contact user support.
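For example, an array of 100 checkpointed jobs could be submitted as follows (a sketch combining the -s and -t options):

$ csub -s job_script.sh -t 1-100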

    Notes

If you would like to time how long the complete job executes, just prepend the main command in your job script with time, e.g.: time <command>. The real time will not make sense, as it also includes the time that passes between two checkpointed subjobs. However, the user time should give a good indication of the actual time it took to run your command, even if multiple checkpoints were performed.

This section is still rather empty. It will be expanded over time.

    Visualization software

• ParaView is a free visualization package. It can be used in three modes:
  • Installed on your desktop: you have to transfer your data to your desktop system.
  • As an interactive process on the cluster: this option is available only for NoMachine NX users (go to the Applications menu -> HPC -> Visualisation -> Paraview).
  • In client-server mode: the interactive part of ParaView runs on your desktop, while the server part reads the data, renders the images (no GPU required, as ParaView also contains a software OpenGL renderer) and sends the rendered images to the client on the desktop. Setting up ParaView for this scenario is explained in the page on ParaView remote visualization.

Prerequisites

You should have ParaView installed on your desktop and know how to use it (the latter is outside the scope of this page). Note: the client and server versions should match to avoid problems!

    Overview

Working with ParaView to remotely visualize data requires the following steps, which will be explained in turn in the subsections below:

1. start ParaView on the cluster;
2. establish an SSH tunnel;
3. connect to the remote server using ParaView on your desktop; and
4. terminate the server session on the compute node.

    Start ParaView on the cluster

First, start an interactive job on the cluster, e.g.,

$ qsub -I -l nodes=1:ppn=20

Given that remote visualization makes most sense for large data sets, 64 GB of RAM is probably the minimum you will need. To use a node with more memory, add a memory specification, e.g., -l mem=120gb. If this is not sufficient, you should consider using Cerebro.

Once this interactive session is active, you can optionally navigate to the directory containing the data to visualize (not shown below), load the appropriate module, and start the server:

$ module load Paraview/4.1.0-foss-2014a
$ n_proc=$(cat $PBS_NODEFILE | wc -l)
$ mpirun -np $n_proc pvserver --use-offscreen-rendering \
                              --server-port=11111

Note the name of the compute node your job is running on; you will need it in the next step to establish the required SSH tunnel.

    Establish an SSH tunnel

To connect the ParaView client on your desktop with the ParaView server on the compute node, an SSH tunnel has to be established between your desktop and that compute node. Details for Windows using PuTTY and Linux using ssh are given in the appropriate client software sections.
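On Linux or macOS, such a tunnel could be set up along the following lines (a sketch: replace r1c01 by the name of the compute node noted above, and vsc.login.node by your cluster's login node; the local port 11111 matches the --server-port used when starting pvserver):

$ ssh -L 11111:r1c01:11111 vsc.login.node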

    Connect to the remote server using ParaView on your desktop

Since ParaView's user interface is identical on all platforms, connecting from the client side is documented on this page. Note that this configuration step has to be performed only once if you always use the same local port.

• Start ParaView on your desktop machine.
• From the 'File' menu, choose 'Connect'; this opens the 'Choose Server' dialog (screenshot omitted).
• Click the 'Add Server' button; a server configuration dialog will appear (screenshot omitted).
• Enter a name in the 'Name' field, e.g., 'Thinking'. If you have used 11111 as the local port to establish the tunnel, just click the 'Configure' button; otherwise modify the 'Port' field appropriately and click 'Configure'. This opens the 'Configure Server' dialog (screenshot omitted).
• Set the 'Startup Type' from 'Command' to 'Manual' in the drop-down menu, and click 'Save'.
• In the 'Choose Server' dialog, select the server, i.e., 'Thinking', and click the 'Connect' button.

You can now work with ParaView as you would when visualizing local files.

    Terminating the server session on the compute node

Once you've quit ParaView on the desktop, the server process will terminate automatically. However, don't forget to close your session on the compute node, since leaving it open will consume credits:

$ logout

    Further information

More information on ParaView can be found on its website. A decent tutorial on using ParaView is also available from the VTK public wiki.

BEgrid is currently documented by BELNET; see the BELNET documentation for useful links.

Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. The following locations are available:

• Home directory
  • Location available as $VSC_HOME.
  • The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and not generating very intense I/O during jobs. All kinds of configuration files are also stored here, e.g., ssh keys, .bashrc, or Matlab and Eclipse configuration, ...
• Data directory
  • Location available as $VSC_DATA.
  • A bigger 'workspace', for datasets, results, logfiles, ... This filesystem can be used for higher I/O loads, but for I/O-bound jobs you might be better off using one of the 'scratch' filesystems.
• Scratch directories
  • Several types exist, available through $VSC_SCRATCH_XXX variables.
  • For temporary or transient data; there is typically no backup for these filesystems, and 'old' data may be removed automatically.
  • Currently, $VSC_SCRATCH_NODE, $VSC_SCRATCH_SITE and $VSC_SCRATCH_GLOBAL are defined, for space that is available per node, per site, or globally on all nodes of the VSC (currently, there is no real 'global' scratch filesystem yet).

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
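For example, a job script that stages data in and out might refer only to these variables (a minimal sketch; my_program and the file names are hypothetical):

#!/bin/bash -l
#PBS -l nodes=1:ppn=1
cd $VSC_SCRATCH
cp $VSC_DATA/input.dat .           # stage input in from the data directory
my_program input.dat output.dat    # do the actual work on the scratch filesystem
cp output.dat $VSC_DATA/           # copy the result back before the job ends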

Quota is enabled on the three directories, which means the amount of data you can store here is limited by the operating system, and not by "the boundaries of the hard disk". You can see your current usage and the current limits with the appropriate quota command, as explained on "How do I know how much disk space I am using?". The actual disk capacity, shared by all users, can be found on the Available hardware page.

    You will only receive a warning when you reach the soft limit of either quota. You will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus you will lose these new files. You will however not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

This directory is where you arrive by default when you login to the cluster. Your shell refers to it as "~" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is allowed), and usually used frequently. Also all kinds of configuration files are stored here, e.g., by Matlab, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

.ssh/          This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you're doing!
.profile       This script defines some general settings about your sessions.
.bashrc        This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. You could edit this file and, e.g., add "module load XYZ" if you want to automatically load module XYZ whenever you login to the cluster, although we do not recommend loading modules in your .bashrc.
.bash_history  This file contains the commands you typed at your shell prompt, in case you need them again.

    Data directory

    In this directory you can store all other data that you need for longer terms. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume.

    Scratch space

    To enable quick writing from your job, a few extra file systems are available on the work nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

You should remove any data from these systems after your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

Each type of scratch has its own use:

• Node scratch ($VSC_SCRATCH_NODE)
  Every node has its own scratch space, which is completely separated from the other nodes. Every job automatically gets its own temporary directory on this node scratch, available through the environment variable $TMPDIR. $TMPDIR is guaranteed to be unique for each job. Note, however, that when your job requests multiple cores and these cores happen to be in the same node, this $TMPDIR is shared among those cores!
• Site scratch ($VSC_SCRATCH_SITE, $VSC_SCRATCH)
  To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like for the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended.
• Global scratch ($VSC_SCRATCH_GLOBAL)
  In the long term, this scratch space will be available throughout the whole VSC. At the time of writing, the global scratch is just the same volume as the site scratch, and thus contains the same data.


    Hardware details

The VUB cluster contains a mix of nodes with AMD and Intel processors and different interconnects in different sections of the cluster. The cluster also contains a number of nodes with NVIDIA GPUs.

    Login nodes:

• login.hpc.vub.ac.be or hydra.vub.ac.be
• use one of those hostnames if you read vsc.login.node in the documentation and want to connect to this login node

    Compute nodes:

nodes | processor                                 | memory | disk   | network | others
40    | 2x 8-core AMD 6134 (Magny-Cours)          | 64 GB  | 900 GB | QDR-IB  | soon to be phased out
11    | 2x 10-core Intel E5-2680v2 (Ivy Bridge)   | 128 GB | 900 GB | QDR-IB  |
20    | 2x 10-core Intel E5-2680v2 (Ivy Bridge)   | 256 GB | 900 GB | QDR-IB  |
6     | 2x 10-core Intel E5-2680v2 (Ivy Bridge)   | 128 GB | 900 GB | QDR-IB  | 2x NVIDIA Tesla K20x GPGPUs with 6 GB memory in each node
27    | 2x 14-core Intel E5-2680v4 (Broadwell)    | 256 GB | 1 TB   | 10 Gbps |
1     | 4x 10-core Intel E7-8891v4 (Broadwell)    | 1.5 TB | 4 TB   | 10 Gbps |
4     | 2x 12-core Intel E5-2650v4 (Broadwell)    | 256 GB | 2 TB   | 10 Gbps | 2x NVIDIA Tesla P100 GPGPUs with 16 GB memory in each node
1     | 2x 16-core Intel E5-2683v4 (Broadwell)    | 512 GB | 8 TB   | 10 Gbps | 4x NVIDIA GeForce GTX 1080 Ti GPUs with 12 GB memory in each node
21    | 2x 20-core Intel Xeon Gold 6148 (Skylake) | 192 GB | 1 TB   | 10 Gbps |

    Network Storage:

• 19 TB NAS for home directories ($VSC_HOME) and software storage, connected with 1 Gb Ethernet
• 780 TB GPFS storage for global scratch ($VSC_SCRATCH), connected with QDR-IB, 1 Gb and 40 Gb Ethernet

UAntwerpen has two clusters, Leibniz and Hopper. Turing, an older cluster, was retired in early 2017.

    Local documentation

• Slides of the information sessions on "Transitioning to Leibniz and CentOS 7" (PDF)
• The 2017a toolchain at UAntwerp: in preparation of the integration of Leibniz in the UAntwerp infrastructure, the software stack has been rebuilt in the 2017a toolchain. Several changes have been made to the naming and the organization of the toolchains. The toolchain is now loaded by default on Hopper, and is the main toolchain on Leibniz and, after an OS upgrade, later also on Hopper.
• The Intel compiler toolchains: from the 2017a toolchain on, the setup of the toolchains on the UAntwerp clusters differs from most other VSC systems. We have set up the Intel compilers, including all libraries, in a single directory structure as intended by Intel. Some scripts, including compiler configuration scripts, expect this setup to work properly.
• Licensed software at UAntwerp: some software has a restricted license and is not available to all users. This page lists some of those packages and explains for some how you can get access to them.
• Special nodes
• Information for Leibniz test users

    Leibniz

Leibniz was installed in the spring of 2017. It is a NEC system consisting of 152 nodes with two 14-core Intel E5-2680v4 Broadwell generation CPUs connected through an EDR InfiniBand network. 144 of these nodes have 128 GB RAM, the other 8 have 256 GB RAM. The nodes do not have a sizeable local disk. The cluster also contains a node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal generation) and one node with an Intel Xeon Phi expansion board.

    Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC users, though we appreciate that you contact the UAntwerpen support team so that we know why you want to use the cluster.

Jobs can have a maximum execution wall time of 3 days (72 hours).

    Hardware details

• Interactive work:
  • 2 login nodes. These nodes have a very similar architecture to the compute nodes.
  • 1 visualisation node with an NVIDIA P5000 GPU. This node is meant to be used for interactive visualizations (specific instructions).
• Compute nodes:
  • 144 nodes with two 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM.
  • 8 nodes with two 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM.
  • 2 nodes with two 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU (delivering a peak performance of 4.7 TFlops in double precision per GPU) (specific instructions).
  • 1 node with two 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and an Intel Xeon Phi 7220P PCIe card with 16 GB of RAM (specific instructions).
  • All nodes are connected through an EDR InfiniBand network.
  • All compute nodes contain only a small SSD drive. This implies that swapping is not possible and that users should preferably use the main storage for all temporary files as well.
• Storage: the cluster relies on the storage provided by Hopper (a 100 TB DDN SFA7700 system with 4 storage servers).

    Login infrastructure

Direct login is possible to both login nodes and to the visualization node.

• From outside the VSC network: use the external interface names. Currently, one needs to be on the network of UAntwerp or some associated institutions to be able to access the external interfaces. Otherwise a VPN connection is needed to the UAntwerp network.
• From inside the VSC network (e.g., another VSC cluster): use the internal interface names.
                   | External interface           | Internal interface
Login generic      | login-leibniz.uantwerpen.be  |
Login nodes        | login1-leibniz.uantwerpen.be | ln1.leibniz.antwerpen.vsc
                   | login2-leibniz.uantwerpen.be | ln2.leibniz.antwerpen.vsc
Visualisation node | viz1-leibniz.uantwerpen.be   | viz1.leibniz.antwerpen.vsc

    Storage organization

See the section on the storage organisation of Hopper.

    Characteristics of the compute nodes

Since Leibniz is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify the corresponding properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Leibniz (or other VSC clusters) and to prepare for future extensions of the cluster, the following features are defined:

property  | explanation
broadwell | only use Intel processors from the Broadwell family (E5-XXXv4) (not needed at the moment as this is the only CPU type)
ib        | use the InfiniBand interconnect (not needed at the moment as all nodes are connected to the InfiniBand interconnect)
mem128    | use nodes with 128 GB RAM (roughly 112 GB available); this is the majority of the nodes on Leibniz
mem256    | use nodes with 256 GB RAM (roughly 240 GB available); this property is useful if you submit a batch of jobs that require more than 4 GB of RAM per processor but do not use all cores, and you do not want to use a tool such as Worker to bundle jobs yourself, as it helps the scheduler to put those jobs on nodes that can be further filled with your jobs
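For instance, a single-node job that needs one of the 256 GB nodes could request it as follows (a sketch using the mem256 property from the table above):

$ qsub -l nodes=1:ppn=28:mem256 job.pbs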

These characteristics map to the following nodes on Leibniz:

Type of node        | CPU type       | Interconnect | # nodes | # physical cores (per node) | # logical cores (per node) | installed mem (per node) | avail mem (per node) | local disc
broadwell:ib:mem128 | Xeon E5-2680v4 | IB-EDR       | 144     | 28                          | 28                         | 128 GB                   | 112 GB               | ~25 GB
broadwell:ib:mem256 | Xeon E5-2680v4 | IB-EDR       | 8       | 28                          | 28                         | 256 GB                   | 240 GB               | ~25 GB

    Hopper

Hopper is the current UAntwerpen cluster. It is an HP system consisting of 168 nodes with two 10-core Intel E5-2680v2 Ivy Bridge generation CPUs connected through an FDR10 InfiniBand network. 144 nodes have a memory capacity of 64 GB while 24 nodes have 256 GB of RAM. The system has been reconfigured to have a software setup that is essentially the same as on Leibniz.

    Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC users, though we appreciate that you contact the UAntwerpen support team so that we know why you want to use the cluster.

Jobs can have a maximum execution wall time of 3 days (72 hours).

    Hardware details

• 4 login nodes, accessible through the generic name login.hpc.uantwerpen.be.
  • Use this hostname if you read vsc.login.node in the documentation and want to connect to this login node.
• Compute nodes:
  • 144 nodes (96 installed in the first round, 48 in the first expansion) with two 10-core Intel E5-2680v2 CPUs (Ivy Bridge generation) and 64 GB of RAM.
  • 24 nodes with two 10-core Intel E5-2680v2 CPUs (Ivy Bridge generation) and 256 GB of RAM.
  • All nodes are connected through an InfiniBand FDR10 interconnect.
• Storage:
  • Storage is provided through a 100 TB DDN SFA7700 disk array with 4 storage servers.

    Login infrastructure

Direct login is possible to all login nodes.

• From outside the VSC network: use the external interface names. Currently, one needs to be on the network of UAntwerp or some associated institutions to be able to access the external interfaces. Otherwise a VPN connection is needed to the UAntwerp network.
• From inside the VSC network (e.g., another VSC cluster): use the internal interface names.
              | External interface          | Internal interface
Login generic | login.hpc.uantwerpen.be     |
              | login-hopper.uantwerpen.be  |
Login nodes   | login1-hopper.uantwerpen.be | ln01.hopper.antwerpen.vsc
              | login2-hopper.uantwerpen.be | ln02.hopper.antwerpen.vsc
              | login3-hopper.uantwerpen.be | ln03.hopper.antwerpen.vsc
              | login4-hopper.uantwerpen.be | ln04.hopper.antwerpen.vsc

    Storage organisation

The storage is organised according to the VSC storage guidelines.

Name                              | Variable                        | Type | Access          | Backup | Default quota
/user/antwerpen/20X/vsc20XYZ      | $VSC_HOME                       | GPFS | VSC             | NO     | 3 GB
/data/antwerpen/20X/vsc20XYZ      | $VSC_DATA                       | GPFS | VSC             | NO     | 25 GB
/scratch/antwerpen/20X/vsc20XYZ   | $VSC_SCRATCH, $VSC_SCRATCH_SITE | GPFS | Hopper, Leibniz | NO     | 25 GB
/small/antwerpen/20X/vsc20XYZ (*) |                                 | GPFS | Hopper, Leibniz | NO     | 0 GB
/tmp                              | $VSC_SCRATCH_NODE               | ext4 | Node            | NO     | 250 GB (Hopper)

(*) /small is a file system optimised for the storage of small files of types that do not belong in $VSC_HOME. The file systems pointed at by $VSC_DATA and $VSC_SCRATCH have a large fragment size (128 kB) for optimal performance on larger files, and since each file occupies at least one fragment, small files waste a lot of space on those file systems. The file system is available on request.

For users from other universities, the quota on $VSC_HOME and $VSC_DATA will be determined by the local policy of your home institution as these file systems are mounted from there. The pathnames will be similar with trivial modifications based on your home institution and VSC account number.

    Characteristics of the compute nodes

Since Hopper is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify these properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Turing (or other VSC clusters) and to prepare for future extensions of the cluster, the following features are defined:

property  | explanation
ivybridge | only use Intel processors from the Ivy Bridge family (E5-XXXv2) (not needed at the moment as this is the only CPU type)
ib        | use the InfiniBand interconnect (only for compatibility with Turing job scripts, as all nodes have InfiniBand)
mem64     | use nodes with 64 GB RAM (58 GB available)
mem256    | use nodes with 256 GB RAM (250 GB available)

These characteristics map to the following nodes on Hopper:

Type of node        | CPU type       | Interconnect | # nodes | # physical cores (per node) | # logical cores (per node) | installed mem (per node) | avail mem (per node) | local disc
ivybridge:ib:mem64  | Xeon E5-2680v2 | IB-FDR10     | 144     | 20                          | 20                         | 64 GB                    | 56 GB                | ~360 GB
ivybridge:ib:mem256 | Xeon E5-2680v2 | IB-FDR10     | 24      | 20                          | 20                         | 256 GB                   | 248 GB               | ~360 GB

    Turing

In July 2009, UAntwerpen bought a 768-core cluster (L5420 CPUs, 16 GB RAM/node) from HP, which was installed and configured in December 2009. In December 2010, the cluster was extended with 768 cores (L5640 CPUs, 24 GB RAM/node). In September 2011, another 96 cores (L5640 CPUs, 24 GB RAM/node) were added. Turing was retired in January 2017.

    Hardware details

    -
      -
• The cluster login nodes:
  • login.hpc.kuleuven.be and login2.hpc.kuleuven.be (use these hostnames if you read vsc.login.node in the documentation and want to connect to one of these login nodes).
  • two GUI login nodes, accessible through the NX server.
• Compute nodes:
  • Thin node section:
    • 208 nodes with two 10-core "Ivy Bridge" Xeon E5-2680v2 CPUs (2.8 GHz, 25 MB level 3 cache). 176 of those nodes have 64 GB RAM, while 32 are equipped with 128 GB RAM. The nodes are linked to a QDR InfiniBand network. All nodes have a small local disk, mostly for swapping and the OS image.
    • 144 nodes with two 12-core "Haswell" Xeon E5-2680v3 CPUs (2.5 GHz, 30 MB level 3 cache). 48 of those nodes have 64 GB RAM, while 96 are equipped with 128 GB RAM. These nodes are linked to an FDR InfiniBand network, which offers lower latency and higher bandwidth than QDR.
    The total memory capacity of this section is 30 TB; the total peak performance is about 232 Tflops in double precision arithmetic.
  • SMP section (also known as Cerebro): an SGI UV2000 system with 64 sockets, each with a 10-core "Ivy Bridge" Xeon E5-4650 CPU (2.4 GHz, 25 MB level 3 cache), spread over 32 blades and connected through an SGI-proprietary NUMAlink6 interconnect. The interconnect also offers support for global address spaces across shared memory partitions and offload of some MPI functions. 16 sockets have 128 GB RAM and 48 sockets have 256 GB RAM, for a total RAM capacity of 14 TB. The peak compute performance is 12.3 Tflops in double precision arithmetic. The SMP system also contains a fast 21.8 GB disk storage system for swapping and temporary files. The system is partitioned into 2 shared memory partitions: one with 480 cores and 12 TB of RAM, and one with 160 cores and 2 TB of RAM. Both partitions have 10 TB of local scratch space. However, should the need arise, it can be reconfigured into a single large 64-socket shared memory machine. More information can be found in the Cerebro quick start guide or the slides from the info-session.
  • Accelerator section:
    • 5 nodes with two 10-core "Haswell" Xeon E5-2650v3 2.3 GHz CPUs, 64 GB of RAM and 2 Tesla K40 GPUs (2880 CUDA cores @ boost clocks 810 MHz and 875 MHz, 1.66 DP Tflops/GPU at boost clocks).
    • The central GPU and Xeon Phi system is also integrated in the cluster and available to other sites. Each node has two six-core Intel Xeon E5-2630 CPUs, 64 GB RAM and a local hard disk. All nodes are on a QDR InfiniBand interconnect. This system consists of:
    • 8 nodes with two NVIDIA K20x cards each. Each K20x has 14 SMX processors (Kepler family; 2688 CUDA cores in total) that run at 732 MHz, and 6 GB of GDDR5 memory with a peak memory bandwidth of 250 GB/s (384-bit interface @ 2.6 GHz). The peak floating point performance per card is 1.31 Tflops in double and 3.95 Tflops in single precision.
    • 8 nodes with two Intel Xeon Phi 5110P cards each. Each Xeon Phi board has 60 cores running at 1.053 GHz (of which one is reserved for the card OS and 59 are available for applications). Each core supports a large subset of the 64-bit Intel architecture instructions and a vector extension with 512-bit vector instructions. Each board contains 8 GB of RAM, distributed across 16 memory channels, with a peak memory bandwidth of 320 GB/s. The peak performance (not counting the core reserved for the OS) is 0.994 Tflops in double precision and 1.988 Tflops in single precision. The Xeon Phi system is not yet fully operational; MPI applications spanning multiple nodes cannot be used at the moment.
    • 20 nodes with four NVIDIA Tesla P100 SXM2 cards each (3584 CUDA cores @ 1328 MHz, 5.3 DP Tflops/GPU).
    • To start working with accelerators, please refer to the access webpage.
• Visualization nodes: 2 nodes with two 10-core "Haswell" Xeon E5-2650v3 2.3 GHz CPUs, 2 x 64 GB of RAM and 2 NVIDIA Quadro K5200 GPUs (2304 CUDA cores @ 667 MHz). To start working on the visualization nodes, we refer to the TurboVNC start guide.
• Central storage available to all nodes:
  • A NetApp NAS system with 30 TB of storage, used for the home and permanent data directories. All data is mirrored almost instantaneously to the KU Leuven disaster recovery data centre.
  • A 284 TB GPFS parallel file system from DDN, mostly used for temporary disk space.
  • A 600 TB archive storage system, optimised for capacity and aimed at long-term storage of very infrequently accessed data. To start using the archive storage, we refer to the WOS Storage quick start guide.
• For administrative purposes, there are also service nodes that are not user-accessible.

    -

    -

    Characteristics of the compute nodes

    -

The following properties allow you to select the appropriate node type for your job (see also the page on specifying resources, output files and notifications); an example request is shown below the table:

Cluster  | Type of node           | CPU type                 | Interconnect | # cores | installed mem | avail mem | local disk | # nodes
ThinKing | ivybridge              | Xeon E5-2680v2           | IB-QDR       | 20      | 64 GB         | 60 GB     | 250 GB     | 176
ThinKing | ivybridge              | Xeon E5-2680v2           | IB-QDR       | 20      | 128 GB        | 124 GB    | 250 GB     | 32
ThinKing | haswell                | Xeon E5-2680v3           | IB-FDR       | 24      | 64 GB         | 60 GB     | 150 GB     | 48
ThinKing | haswell                | Xeon E5-2680v3           | IB-FDR       | 24      | 128 GB        | 124 GB    | 150 GB     | 96
Genius   | skylake                | Xeon 6140                | IB-EDR       | 36      | 192 GB        | 188 GB    | 800 GB     | 86
Genius   | skylake (large memory) | Xeon 6140                | IB-EDR       | 36      | 768 GB        | 764 GB    | 800 GB     | 10
Genius   | skylake (GPU)          | Xeon 6140 + 4x P100 SXM2 | IB-EDR       | 36      | 192 GB        | 188 GB    | 800 GB     | 20
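
For example, assuming the standard resource syntax described on that page (the script name below is a placeholder), a job that needs a ThinKing Haswell node could be submitted as:

qsub -l nodes=1:ppn=24:haswell myjob.pbs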

    For using Cerebro, the shared memory section, we refer to the Cerebro Quick Start Guide. -

    -

    Implementation of the VSC directory structure

    -

    In the transition phase between Vic3 and ThinKing, the storage is mounted on both systems. When switching from Vic3 to ThinKing you will not need to migrate your data. -

    -

    The cluster uses the directory structure that is implemented on most VSC clusters. This implies that each user has two personal directories: -

    -
      -
    • A regular home directory which contains all files that a user might need to log on to the system, and small 'utility' scripts/programs/source code/.... The capacity that can be used is restricted by quota and this directory should not be used for I/O intensive programs.
      - For KU Leuven systems the full path is of the form /user/leuven/... , but this might be different on other VSC systems. However, on all systems, the environment variable VSC_HOME points to this directory (just as the standard HOME variable does).
    • -
    • A data directory which can be used to store programs and their results. At the moment, there are no quota on this directory. For KU Leuven the path name is /data/leuven/... . On all VSC systems, the environment variable VSC_DATA points to this directory.
    • -
    -

    There are three further environment variables that point to other directories that can be used: -

    -
      -
• On each cluster you have access to a scratch directory that is shared by all nodes on the cluster. The variable VSC_SCRATCH_SITE will point to this directory. This directory is also accessible from the login nodes, so it is accessible while your jobs run and after they finish (for a limited time: files can be removed automatically after 14 days).
• Similarly, on each cluster you have a VSC_SCRATCH_NODE directory, which is a scratch space local to each compute node. Thus, on each node this directory points to a different physical location, and its contents are only accessible from that particular node, and (typically) only during the runtime of your job. If more than one of your jobs runs on the same node, they all see the same directory, so you have to make sure they do not overwrite each other's data, for example by creating a subdirectory per job or by using proper file names (a sketch of the per-job approach is shown below this list).
    • -
    -
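
As a sketch of the per-job approach mentioned above (assuming a Torque/PBS batch system, where $PBS_JOBID identifies the job; the result file name is a placeholder), a job script could use the node-local scratch like this:

# create a job-specific directory on the node-local scratch
mkdir -p "$VSC_SCRATCH_NODE/$PBS_JOBID"
cd "$VSC_SCRATCH_NODE/$PBS_JOBID"
# ... run your program here ...
# copy the results you want to keep and clean up
cp results.dat "$VSC_DATA/"
rm -rf "${VSC_SCRATCH_NODE:?}/$PBS_JOBID"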

    -

    Access restrictions

    -

Access is available for faculty, students (under faculty supervision), and researchers of KU Leuven, UHasselt and their associations. This cluster is being integrated in the VSC network and as such becomes available to all VSC users.

    -

    History

    -

In September 2013, a new thin node cluster (HP) and a shared memory system (SGI) were bought. The thin node cluster was installed and configured in January/February 2014 and extended in September 2014. Installation and configuration of the SMP system was done in April 2014. Financing of these systems was obtained from the Hercules Foundation and the Flemish government.

    -

    Do you want to see it ? Have a look at the movie -

    -

    - -

    " -309,"","

    Overview

    The tier-1 cluster muk is primarily aimed at large parallel computing jobs that require a high-bandwidth low-latency interconnect, but jobs that require a multitude of small independent tasks are also accepted. -

    The main architectural features are: -

      -
• 528 compute nodes with two Xeon E5-2670 processors (2.6 GHz, 8 cores per processor, Sandy Bridge architecture) and 64 GiB of memory, for a total memory capacity of 33 TiB and a peak performance of more than 175 TFlops (Linpack result: 152.3 TFlops)
• FDR InfiniBand interconnect with a fat tree topology (1:2 oversubscription)
• A storage system with a net capacity of approximately 400 TB and a peak bandwidth of 9.5 GB/s.
    • -

    The cluster appeared for several years in the Top500 list of supercomputer sites: -

        | June 2012 | Nov 2012 | June 2013 | Nov 2013 | June 2014
Ranking | 118       | 163      | 239       | 306      | 430

    Compute time on muk is only available upon approval of a project. Information on requesting projects is available in Dutch and in English -

    Access restriction

    Once your project has been approved, your login on the tier-1 cluster will be enabled. You use the same vsc-account (vscXXXXX) as at your home institutions and you use the same $VSC_HOME and $VSC_DATA directories, though the tier-1 does have its own scratch directories. -

    A direct login from your own computer through the public network to muk is not possible for security reasons. You have to enter via the VSC network, which is reachable from all Flemish university networks. -

ssh login.hpc.uantwerpen.be
ssh login.hpc.ugent.be
ssh login.hpc.kuleuven.be   (or login2.hpc.kuleuven.be)

    Make sure that you have at least once connected to the login nodes of your institution, before attempting access to tier-1. -

    Once on the VSC network, you can -

      -
    • connect to login.muk.gent.vsc to work on the tier-1 cluster muk,
    • -
    • connect to gligar01.gligar.gent.vsc or gligar02.gligar.gent.vsc for testing and debugging purposes (e.g., check if a code compiles). There you'll find the same software stack as on the tier-1. (On some machines gligar01.ugent.be and gligar02.ugent.be might also work.)
    • -

    There are two options to log on to these systems over the VSC network: -

      -
    1. You log on to your home cluster. At the command line, you start a ssh session to login.muk.gent.vsc. -
      ssh login.muk.gent.vsc
      -
    2. -
    3. You set up a so-called ssh proxy through your usual VSC login node vsc.login.node (the proxy server in this process) to login.muk.gent.vsc or gligar01.ugent.be. -
        -
• To set up an ssh proxy using OpenSSH (the client for Linux and OS X, or for Windows with the Cygwin emulation layer installed), follow the instructions in the Linux client section. A one-line OpenSSH sketch is also shown after this list.
• To set up an ssh proxy on Windows using PuTTY, follow the instructions in the Windows client section.
      • -
      -
    4. -
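
For OpenSSH users (Linux, macOS or Cygwin), option 2 boils down to a single command. This is a sketch only: vscXXXXX and vsc.login.node are placeholders for your own VSC account and your institution's login node.

ssh -o ProxyCommand="ssh -W %h:%p vscXXXXX@vsc.login.node" vscXXXXX@login.muk.gent.vsc

Recent OpenSSH versions can shorten this to: ssh -J vscXXXXX@vsc.login.node vscXXXXX@login.muk.gent.vsc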

    Resource limits

    Disk quota

      -
    • As you are using your $VSC_HOME and $VSC_DATA directories from your home institution, the quota policy from your home institution applies.
    • -
    • On the shared (across nodes) scratch volume $VSC_SCRATCH the standard disk quota is 250GiB per user. If your project requires more disk space, you should request it in your project application as we have to make sure that the mix of allocated projects does not require more disk space than available.
    • -
• Currently, each institute has a maximum scratch quota of 75 TiB. So please free up as much of $VSC_SCRATCH as possible at all times to enable large jobs.
    • -

    Memory

      -
• Each node has 64 GiB of RAM. However, not all of that memory is available for user applications, as some memory is needed for the operating system and file system buffers. In practice, roughly 60 GiB is available to run your jobs. This also means that when using all cores, you should not request more than 3.75 GiB of RAM per core (the pmem resource attribute in qsub) or your job will be queued indefinitely, since the resource manager will not be able to assign nodes to it. A sketch follows below this list.
    • -
• The maximum amount of total virtual memory per node ('vmem') you can request is 83 GiB; see also the output of the pbsmon command. The job submit filter sets a default virtual memory limit if you don't specify one with your job using, e.g.,
      #PBS -l vmem=83gb
      -
    • -
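
As an illustration of the per-core memory rule above (a sketch; the script name is a placeholder and the numbers assume a full 16-core node), a job could be submitted as:

qsub -l nodes=1:ppn=16,pmem=3gb myjob.pbs

With 16 cores times 3 GB each, the job stays within the roughly 60 GiB that is available per node.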
    " -311,"","

    Access

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>
    -

    or -

    qsub  -l partition=gpu,nodes=1:K40c <jobscript>
    -

depending on which type of GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this:

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -
    " -313,"","

    Tier-1

    Experimental setup

    Tier-2

    Four university-level cluster groups are also embedded in the VSC and partly funded from VSC budgets: -

    " -315,"","

    The icons

Windows  | Works on Windows, but may need additional pure Windows packages (free or commercial)
Windows+ | Works on Windows with a UNIX compatibility layer added, e.g., Cygwin or the "Windows Subsystem for Linux" in Windows 10 build 1607 (Anniversary Edition) or later

    Getting ready to request an account

      -
    • Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on Windows is using the freely available PuTTY client which you can then also use to log on to the clusters.
    • -

    Connecting to the cluster

      -
    • Open a text-mode session using an ssh client -
        -
      • PuTTY is a simple-to-use and freely available GUI SSH client for Windows.
      • -
      • pageant can be used to manage active keys for PuTTY, WinSCP and FileZilla so that you don't need to enter the passphrase all the time.
      • -
      • Setting up a SSH proxy with PuTTY to log on to a node protected by a firewall through another login node, e.g., to access the tier-1 system muk.
      • -
      • Creating a SSH tunnel using PuTTY to establish network communication between your local machine and the cluster otherwise blocked by firewalls.
      • -
      -
    • -
    • Transfer data using Secure FTP (SFTP) clients: - -
    • -
    • Display graphical programs: -
        -
      • You can install a so-called X server: Xming. X is the protocol that is used by most Linux applications to display graphics on a local or remote screen.
      • -
      • On the KU Leuven/UHasselt clusters it is also possible to use the NX Client to log on to the machine and run graphical programs. Instead of an X-server, another piece of client software is needed. That software is currently available for Windows, OS X, Linux, Android and iOS.
      • -
      • The KU Leuven/UHasselt and UAntwerp clusters also offer support for visualization software through TurboVNC. VNC renders images on the cluster and transfers the resulting images to your client device. VNC clients are available for Windows, macOS, Linux, Android and iOS. -
      • -
    • -
    • If you install the free UNIX emulation layer Cygwin with the necessary packages, you can use the same OpenSSH client as on Linux systems and all pages about ssh and data transfer from the Linux client pages apply.
    • -

    Programming tools

      -
    • By installing the UNIX emulation layer Cygwin with the appropriate packages you can mimic very well the VSC cluster environment (at least with the foss toolchain). Cygwin supports the GNU compilers and also contains packages for OpenMPI (look for \"openmpi\") and some other popular libraries (FFTW, HDF5, ...). As such it can turn your Windows PC in a computer that can be used to develop software for the cluster if you don't rely on too many external libraries (which may be hard to install). This can come in handy if you sometimes need to work off-line. If you have a 64-bit Windows system (which most recent computers have), it is best to go for the 64-bit version of Cygwin. After all, the VSC-clusters are also running a 64-bit OS.
    • -
    • If you're running Windows 10 build 1607 (Anniversary Edition) or later, you may consider running the \"Windows Subsystem for Linux\" that will give you a Ubuntu-like environment on Windows and allow you to install some Ubuntu packages. In build 1607 this is still considered experimental technology and we offer no support.
    • -
    • Microsoft Visual Studio can also be used to develop OpenMP or MPI programs. If you do not use any Microsoft-specific libraries but stick to plain C or C++, the programs can be recompiled on the VSC clusters. Microsoft is slow in implementing new standards though. In Visual Studio 2015, OpenMP support is still stuck at version 2.0 of the standard. An alternative is to get a license for the Intel compilers which plug into Visual Studio and give you the best of both worlds, the power of a full-blown IDE and compilers that support the latest technologies in the HPC world on Windows.
    • -
• Eclipse is a popular multi-platform Integrated Development Environment (IDE) very well suited for code development on clusters. On Windows, Eclipse relies by default on the Cygwin toolchain for its compilers and other utilities, so you need to install that too.
    • -
    • Information on tools for version control (git and subversion) is available on the \"Version control systems\" introduction page on this web site. -
        -
      -
    • -
    " -317,"","

    Prerequisite: PuTTY and WinSCP

    You've generated a public/private key pair with PuTTY and have an approved account on the VSC clusters. -

    Connecting to the VSC clusters

    When you start the PuTTY executable 'putty.exe', a configuration screen pops up. Follow the steps below to setup the connection to (one of) the VSC clusters. -

    In the screenshots, we show the setup for user vsc98765 to the ThinKing cluster at K.U.Leuven via the loginnode login.hpc.kuleuven.be. -

You can find the names and IP addresses of the login nodes in the sections on the local VSC clusters.

Alternatively, you can follow a short video explaining step by step how to connect to the VSC login nodes (the example is based on a KU Leuven cluster).

      -
1. Within the category Session, in the field 'Host Name', type in <vsc-loginnode>, which is the name of the login node of the VSC cluster you want to connect to.
    2. -
    3. In the category Connection > Data, in the field 'Auto-login username', put in <vsc-account>, which is your VSC username that you have received by mail after your request was approved.
    4. -
5. In the category Connection > SSH > Auth, click on 'Browse' and select the private key that you generated and saved above.
   Here, the private key was previously saved in the folder C:\Documents and Settings\Me\Keys. In newer versions of Windows, "C:\Users" is used instead of "C:\Documents and Settings".
    6. -
7. In the category Connection > SSH > X11, click the 'Enable X11 Forwarding' checkbox.
    8. -
    9. Now go back to Session, and fill in a name in the 'Saved Sessions' field and press 'Save' to store the session information.
    10. -
11. Now pressing 'Open' should ask for your passphrase and connect you to <vsc-loginnode>.
    12. -

The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

    For future sessions, just select your saved session from the list and press 'Open'. -

    " -319,"","

    Getting started with Pageant

Pageant is an SSH authentication agent that you can use with PuTTY and FileZilla. Before you run Pageant, you need to have a private key in PuTTY's PPK format (the file name ends in .ppk). See our page on generating keys with PuTTY to find out how to generate and use one. When you run Pageant, it will put an icon of a computer wearing a hat into the System tray. It will then sit and do nothing, until you load a private key into it. If you click the Pageant icon with the right mouse button, you will see a menu. Select ‘View Keys’ from this menu. The Pageant main window will appear. (You can also bring this window up by double-clicking on the Pageant icon.) The Pageant window contains a list box. This shows the private keys Pageant is holding. When you start Pageant, it has no keys, so the list box will be empty. After you add one or more keys, they will show up in the list box.

    - To add a key to Pageant, press the ‘Add Key’ button. Pageant will bring up a file dialog, labelled ‘Select Private Key File’. Find your private key file in this dialog, and press ‘Open’. Pageant will now load the private key. If the key is protected by a passphrase, Pageant will ask you to type the passphrase. When the key has been loaded, it will appear in the list in the Pageant window. -

    - Now start PuTTY (or Filezilla) and open an SSH session to a site that accepts your key. PuTTY (or Filezilla) will notice that Pageant is running, retrieve the key automatically from Pageant, and use it to authenticate. You can now open as many PuTTY sessions as you like without having to type your passphrase again. -

    - When you want to shut down Pageant, click the right button on the Pageant icon in the System tray, and select ‘Exit’ from the menu. Closing the Pageant main window does not shut down Pageant. -

    - You can find more info in the on-line manual. -

    SSH authentication agents are very handy as you no longer need to type your passphrase every time that you try to log in to the cluster. It also implies that when someone gains access to your computer, he also automatically gains access to your account on the cluster. So be very careful and lock your screen when you're not with your computer! It is your responsibility to keep your computer safe and prevent easy intrusion of your VSC-account due to an obviously unprotected PC!
    -

    " -321,"","

    -Rationale

    - ssh provides a safe way of connecting to a computer, encrypting traffic and avoiding passing passwords across public networks where your traffic might be intercepted by someone else. Yet making a server accessible from all over the world makes that server very vulnerable. Therefore servers are often put behind a firewall, another computer or device that filters traffic coming from the internet. -

    - In the VSC, all clusters are behind a firewall, but for the tier-1 cluster muk this firewall is a bit more restrictive than for other clusters. Muk can only be approached from certain other computers in the VSC network, and only via the internal VSC network and not from the public network. To avoid having to log on twice, first to another login node in the VSC network and then from there on to Muk, one can set up a so-called ssh proxy. You then connect through another computer (the proxy server) to the computer that you really want to connect to. -

    - This all sounds quite complicated, but once things are configured properly it is really simple to log on to the host. -

    -Setting up a proxy in PuTTY

    - Setting up the connection in PuTTY is a bit more complicated than for a simple direct connection to a login node. -

      -
    1. - First you need to start up pageant and load your private key into it. See the instructions on our \"Using Pageant\" page.
    2. -
3. In PuTTY, go first to the "Proxy" category (under "Connection"). In the Proxy tab sheet, you need to fill in the following information:
        -
      1. - Select the proxy type: \"Local\"
      2. -
      3. - Give the name of the \"proxy server\". This is vsc.login.node, your usual VSC login node, and not the computer on which you want to log on and work.
      4. -
      5. - Make sure that the \"Port\" number is 22.
      6. -
      7. - Enter your VSC-id in the \"Username\" field.
      8. -
      9. - In the \"Telnet command, or local proxy command\", enter the string
        -
        plink -agent -l %user %proxyhost -nc %host:%port
        -				
        - (the easiest is to just copy-and-paste this text).
        - \"plink\" (PuTTY Link) is a Windows program and comes with the full PuTTY suite of applications. It is the command line version of PuTTY. In case you've only installed the executables putty.exe and pageant.exe, you'll need to download plink.exe also from the PuTTY web site. We strongly advise to simply install the whole PuTTY-suite of applications using the installer provided on that site.
      10. -
      -
      -
    4. -
5. Now go to the "Data" category in PuTTY, again under "Connection".
        -
      1. - Fill in your VSC-id in the \"Auto-login username\" field.
      2. -
      3. - Leave the other values untouched (likely the values in the screen dump)
      4. -
      -
      -
    6. -
7. Now go to the "Session" category.
        -
      1. - Set the field \"Host Name (or IP address) to the computer you want to log on to. If you are setting up a proxy connection to access a computer on the VSC network, you will have to use its name on the internal VSC network. E.g., for the login nodes of the tier-1 cluster Muk at UGent, this is login.muk.gent.vsc and for the cluster on which you can test applications for the Muk, this is gligar.gligar.gent.vsc.
      2. -
      3. - Make sure that the \"Port\" number is 22.
      4. -
      5. - Finally give the configuration a name in the field \"Saved Sessions\" and press \"Save\". Then you won't have to enter all the above information again.
      6. -
      7. - And now you're all set up to go. Press the \"Open\" button on the \"Session\" tab to open a terminal window.
      8. -
      -
      -
    8. -

    -For advanced users

    - If you have an X-server on your Windows PC, you can also use X11 forwarding and run X11-applications on the host. All you need to do is click the box next to \"Enable X11 forwarding\" in the category \"Connection\" -> \"SSH\"-> \"X11\". -

    - What happens behind the scenes: -

      -
    • By specifying \"local\" as the proxy type, you tell PuTTY to not use one of its own build-in ways of setting up a proxy, but to use the command that you specify in the \"Telnet command\" of the \"Proxy\" category.
    • -
    • In the command
      -
      plink -agent -l %user %proxyhost -nc %host:%port
      -	
      %user will be replaced by the userid you specify in the \"Proxy\" category screen, %proxyhost will be replaced by the host you specify in the \"Proxy\" category screen (vsc.login.node in the example), %host by the host you specified in the \"Session\" category (login.muk.gent.vsc in the example) and %port by the number you specified in the \"Port\" field of that screen (and this will typically be 22).
    • -
    • The plink command will then set up a connection to %proxyhost using the userid %user. The -agent option tells plink to use pageant for the credentials. And the -nc option tells plink to tell the SSH server on %proxyhost to further connect to %host:%port.
    • -
    " -323,"","

Prerequisites

    PuTTY must be installed on your computer, and you should be able to connect via SSH to the cluster's login node. -

    -Background

    - Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunneling. -

There are several cases where this is useful:

      -
    • - Running X applications on the cluster: The X program cannot directly communicate with the X server on your local system. In this case, the tunneling is easy to set up as PuTTY will do it for you if you select the right options on the X11 settings page as explained on the page about text-mode access using PuTTY.
    • -
    • - Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualization mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. How to set up the tunnel for that scenario is also explained on that page.
    • -
    • - Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.
    • -

    -Procedure: A tunnel from a local client to a server on the cluster

      -
    1. - Log in on the login node
    2. -
    3. - Start the server job, note the compute node's name the job is running on (e.g., 'r1i3n5'), as well as the port the server is listening on (e.g., '44444').
    4. -
    5. - Set up the tunnel:
      - \"PuTTY -
        -
      1. - Right-click in PuTTY's title bar, and select 'Change Settings...'.
      2. -
3. In the 'Category' pane, expand 'Connection' -> 'SSH', and select 'Tunnels'.
      4. -
      5. - In the 'Source port' field, enter the local port to use (e.g., 11111).
      6. -
      7. - In the 'Destination' field, enter <hostname>:<server-port> (e.g., r1i3n5:44444 as in the example above).
      8. -
      9. - Click the 'Add' button.
      10. -
      11. - Click the 'Apply' button
      12. -
      -
    6. -

    -
    - The tunnel is now ready to use. -
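
For reference, users of the OpenSSH client (e.g., under Cygwin) can set up the same tunnel with a single command. The host name and port numbers below are the ones from the example above, and vscXXXXX is a placeholder for your VSC account:

ssh -L 11111:r1i3n5:44444 vscXXXXX@vsc.login.node

As long as this session stays open, connections to port 11111 on your desktop are forwarded to port 44444 on compute node r1i3n5.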

    " -325,"","

    FileZilla is an easy-to-use freely available ftp-style program to transfer files to and from your account on the clusters. -

    You can also put FileZilla with your private key on a USB stick to access your files from any internet-connected PC. -

    You can download Filezilla from the FileZilla project web page. -

    Configuration of FileZilla to connect to a login node

    Note: Pageant should be running and your private key should be loaded first (more info on our \"Using Pageant\" page). -

      -
    1. Start FileZilla;
    2. -
    3. Open the Site Manager using the 'File' menu;
    4. -
    5. Create a new site by clicking the New Site button;
    6. -
    7. In the tab marked General, enter the following values (all other fields remain blank): -
        -
      • Host: vsc.login.node, the name of the login node of your home institute VSC cluster
      • -
      • Servertype: SFTP - SSH File Transfer Protocol
      • -
      • Logontype: Normal
      • -
      • User: your own VSC user ID, e.g., vsc98765;
      • -
      -
    8. -
    9. Optionally, rename this setting to your liking by pressing the 'Rename' button;
    10. -
    11. Press 'Connect' and enter your passphrase when requested.
    12. -

    \"FileZilla -

    Note that recent versions of FileZilla have a screen in the settings to manage private keys. The path to the private key must be provided in options (Edit Tab -> options -> connection -> SFTP):

    \"FileZilla -

After that, you should be able to connect after being asked for your passphrase. As an alternative, you can use PuTTY's Pageant.

    " -327,"","

    Prerequisite: WinSCP

To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical ftp-style program (but one that uses the ssh way of communicating with the cluster rather than the less secure ftp) that is also freely available. WinSCP can be downloaded both as an installation package and as a standalone portable executable. When using the portable version, you can copy WinSCP together with your private key on a USB stick to have access to your files from any internet-connected Windows PC.

    WinSCP also works together well with the PuTTY suite of applications. It uses the keys generated with the PuTTY key generation program, can launch terminal sessions in PuTTY and use ssh keys managed by pageant. -

    Transferring your files to and from the VSC clusters

    The first time you make the connection, you will be asked to 'Continue connecting and add host key to the cache'; select 'Yes'. -

      -
1. Start WinSCP and go to the "Session" category. Fill in the following information:
        -
      1. Fill in the hostname of the VSC login node of your home institution. You can find this information in the overview of available hardware on this site.
      2. -
      3. Fill in your VSC username.
      4. -
      5. If you are not using pageant to manage your ssh keys, you have to point WinSCP to the private key file (in PuTTY .ppk format) that should be used. When using pageant, you can leave this field blank.
      6. -
      7. Double check that the port number is 22.
      8. -
      -
      -
    2. -
    3. - If you want to store this data for later use, click the \"Save\" button at the bottom and enter a name for the session. Next time you'll start WinSCP, you'll get a screen with stored sessions that you can open by selecting them and clicking the \"Login\" button. -
    4. -
    5. - Click the \"Login\" button to start the session that you just created. You'll be asked for your passphrase if pageant is not running with a valid key loaded. - The first time you make the connection, you will be asked to \"Continue connecting and add host key to the cache\"; select \"Yes\". -
    6. -

    Some remarks

    Two interfaces

    \"\"WinSCP has two modes for the graphical user interface: -

      -
    • The \"commander mode\" where you get a window with two columns, with the local directory in the left column and the host directory (remote directory) in the right column. You can then transfer files by dragging them from one column to the other.
    • -
    • The \"explorer mode\" where you only see the remote directory. You can transfer files by dragging them to and from other folder windows or the desktop.
    • -

During the installation of WinSCP, you'll be prompted for a mode. But you can always change your mind afterwards and select the interface mode by selecting the "Preferences" category after starting WinSCP.

    Enable logging

    When you experience trouble transferring files using WinSCP, the support team may ask you to enable logging and mail the results. -

      -
1. To enable logging:
        -
      1. Check \"Advanced options\".
      2. -
      3. Select the \"Logging\" category.
      4. -
5. Check the box next to "Enable session logging on level" and select the logging level requested by the user support team. Often normal logging will be sufficient.
      6. -
      7. - Enter a name and directory for the log file. The default is \"%TEMP%\\!S.log\" which will expand to a name that is system-dependent and depends on the name of your WinSCP session. %TEMP% is a Windows environment variable pointing to a directory for temporary files which on most systems is well hidden. \"!S\" will expand to the name of your session (for a stored session the name you used there). - You can always change this to another directory and/or file name that is easier for you to work with. -
      8. -
      -
      -
    2. -
    3. Now just run WinSCP as you would do without logging.
    4. -
    5. To mail the result if you used the default log file name %TEMP%\\!S.log: -
        -
      1. Start a new mail in your favourite mail program (it could even be a web mail service).
      2. -
      3. Click whatever button or menu choice you need to add an attachment.
      4. -
5. Many mail programs will now show you a standard Windows dialog window to select the file. In many mail programs, the top left of that window contains a URL bar (this applies to Windows 7 and later). Click to the right of the text in the URL bar in the upper left of the window. The contents will now change to a regular Windows path name and will be selected. Just type %TEMP% and press enter and you will see that %TEMP% expands to the name of the directory with the temporary files. This trick may not work with all mail programs!
      6. -
      7. Finish the mail text and send the mail to user support.
      8. -
      -
    6. -
    " -329,"","

To display graphical applications from a Linux computer (such as the VSC clusters) on your Windows desktop, you need to install an X Window server. Here we describe the installation of Xming, one such server that is freely available.

    Installing Xming

      -
1. Download the Xming installer from the Xming web site.
2. Either install Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.
3. Run the Xming setup program on your Windows desktop. Make sure to select 'XLaunch wizard' and 'Normal PuTTY Link SSH client'.
   [screenshot: Xming-Setup.png]


Running Xming:

1. To run Xming, select XLaunch from the Start Menu.
2. Select 'Multiple Windows'. This will open each application in a separate window.
   [screenshot: Xming-Display.png]
3. Select 'Start no client' to make XLaunch wait for other programs (such as PuTTY).
   [screenshot: Xming-Start.png]
4. Select 'Clipboard' to share the clipboard.
   [screenshot: Xming-Clipboard.png]
5. Finally, save the configuration.
   [screenshot: Xming-Finish.png]
6. Now Xming is running and you can launch a graphical application in your PuTTY terminal. Do not forget to enable X11 forwarding as explained on our PuTTY page.
   To test the connection, you can try to start a simple X program on the login nodes, e.g., xterm or xeyes. The latter will open a new window with a pair of eyes. The pupils of these eyes should follow your mouse pointer around. Close the program by typing "ctrl+c": the window should disappear.
   If you get the error 'DISPLAY is not set', you did not correctly enable the X forwarding.
    " -331,"","

    -Prerequisites

    - It is assumed that Microsoft Visual Studio Professional (at least the Microsoft Visual C++ component) is installed. Although Microsoft Visual C++ 2008 should be sufficient, this how-to assumes that Microsoft Visual C++ 2010 is used. Furthermore, one should be familiar with the basics of Visual Studio, i.e., how to create a new project, how to edit source code, how to compile and build an application. -

    - Note for KU Leuven and UHasselt users: Microsoft Visual Studio is covered by the campus license for Microsoft products of both KU Leuven and Hasselt University. Hence staff and students can download and use the software. -

    - Also note that although Microsoft offers a free evaluation version of its development tools, i.e., Visual Studio Express, this version does not support parallel programming. -

    -OpenMP

    - Microsoft Visual C++ offers support for developing openMP C/C++ programs out of the box. However, as of this writing, support is still limited to the ancient OpenMP 2.0 standard. The project type best suited is a Windows Console Application. It is best to switch 'Precompiled headers' off. -

    - Once the project is created, simply write the code, and enable the openMP compiler option in the project's properties as shown below. -

    - \"OpenMP -

    - Compiling, building and running your program can now be done in the familiar way. -

    -MPI

    - In order to develop C/C++ programs that use MPI, a few extra things have to be installed, so this will be covered first. -

    -Installation

      -
1. The MPI libraries and infrastructure are part of Microsoft's HPC Pack SDK. Download either the 32- or 64-bit version, whichever is appropriate for your desktop system (most probably the 32-bit version, denoted by 'x86'). Installing is merely a matter of double-clicking the downloaded installer.
    2. -
    3. - Although not strictly required, it is strongly recommended to install the MPI Project Template as well. Again, one simply downloads and double-clicks the installer.
    4. -

    -Development

    - To develop an MPI-based application, create an MPI project. -

    - \"New -

    - It is advisable not to use precompiled headers, so switch this setting off. -

    - Next, write your code. Once you are ready to debug or run your code, make the following adjustments to the project's properties in the 'Debugging' section. -

    - \"MPI -

    - A few settings should be verified, and if necessary, modified: -

      -
1. Make sure that the 'Debugger to launch' is indeed the 'MPI Cluster Debugger'.
    2. -
3. The 'Run environment' is 'localhost/1' by default. Since this implies that only one MPI process will be started, it is not very exciting, so change it to, e.g., 'localhost/4' in order to have some parallel processes (4 in this example). Do not make this number too large, since the code will execute on your desktop machine.
    4. -
    5. - The 'MPIExec Command' should be pointed to 'mpiexec' that is found in the 'Bin' directory of the HPC Pack 2008 SDK installation directory.
    6. -

- Debugging now proceeds as usual. One can switch between processes by selecting the main thread of the appropriate process in the Threads view.

    - \"Switching -

    -Useful links

    " -333,"","

    -Installation & setup

    -
      -
1. Download the appropriate version for your system (32- or 64-bit) and install it. You may need to reboot to complete the installation; do so if required.
    2. -
    3. - Optionally, but highly recommended: download and install WinMerge, a convenient GUI tool to compare and merge files.
    4. -
    5. - Start Pageant (the SSH agent that comes with PuTTY) and load your private key for authentication on the VSC cluster.
    6. -
    -

    -Checking out a project from a VSC cluster repository

    -
    svn+ssh://userid@svn.login.node/data/leuven/300/vsc30000/svn-repo/simulation/trunk
    -
    -
      -
    1. - Open Windows Explorer (by e.g., the Windows-E shortcut, or from the Start Menu) and navigate to the directory where you would like to check out your project that is in the VSC cluster repository.
    2. -
    3. - Right-click in this directory, you will notice 'SVN Checkout...' in the context menu, select it to open the 'Checkout' dialog.
      -
      - \"TortoiseSVN
    4. -
5. In the 'URL of repository' field, type the URL shown above this list, replacing userid by your VSC user ID, and '300' with '301', '302', ... as required (e.g., for user ID 'vsc30257', replace '300' by '302'). For svn.login.node, substitute the appropriate login node for the cluster the repository is on.
    6. -
    7. - Check whether the suggested default location for the project suits you, i.e., the 'Checkout directory' field, if not, modify it.
    8. -
    9. - Click 'OK' to proceed with the check out.
    10. -
    -

    - You now have a working copy of your project on your desktop and can continue to develop locally. -
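
For comparison (a sketch only, with the same placeholders for the user ID, directory number and login node as above), the equivalent check-out with the command line svn client would be:

svn checkout svn+ssh://userid@svn.login.node/data/leuven/300/vsc30000/svn-repo/simulation/trunk simulation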

    -

    -Work cycle

    -

- Suppose the file 'simulation.c' has been changed, and a new file 'readme.txt' has been created. The 'simulation' directory will now look as follows:

    -

- Files that were changed are marked with a red exclamation mark, while those marked in green were unchanged. Files without a mark, such as 'readme.txt', have not been placed under version control yet. The latter can be added to the repository by right-clicking on it, and choosing 'TortoiseSVN' and then 'Add...' from the context menu. Such files will be marked with a blue '+' sign until the project is committed.

    -

- By right-clicking in the project's directory, you will see context menu items 'SVN Update' and 'SVN Commit...'. These have exactly the same semantics as their command line counterparts introduced above. The 'TortoiseSVN' menu item expands into even more commands that are familiar, with the notable exception of 'Check for modifications', which is in fact equivalent to 'svn status'.

    -

    - Right-clicking in the directory and choosing 'SVN Commit...' will bring up a dialog to enter a comment and, if necessary, include or exclude files from the operation.
    -
    - \"TortoiseSVN -

    -

    -Merging

    -

- When a conflict that cannot be resolved automatically is detected during an update, TortoiseSVN behaves slightly differently from the command line client. Rather than requiring you to resolve the conflict immediately, it creates a number of extra files. Suppose the repository was at revision 12, and a conflict was detected in 'simulation.c', then it will create:

    -
      -
    • - 'simulation.c': this file is similar to the one subversion would open for you when you choose to edit a conflict via the command line client (this file is marked with a warning sign);
    • -
    • - 'simulation.c.mine': this is the file in your working copy, i.e., the one that contains changes that were not committed yet;
    • -
    • - 'simulation.c.r12': the last revision in the repository; and
    • -
    • - 'simulation.c.r11': the previous revision in the repository.
    • -
    -

    - You have now two options to resolve the conflict. -

    -
      -
    1. - Edit 'simulation.c', keeping those modification of either version that you need.
    2. -
    3. - Use WinMerge to compare 'simulation.c.mine' and 'simulation.c.r12' and resolve the conflicts in the GUI, saving the result as 'simulation.c'. When two files are selected in Windows Explorer, they can be compared using WinMerge by right-clicking on either, and choosing 'WinMerge' from the context menu.
      -
      - \"WinMerge
    4. -
    -

    - Once all conflicts have been resolved, commit your changes. -

    -

    -Tagging

    -

- Tagging can be done conveniently by right-clicking in Windows Explorer and selecting 'TortoiseSVN' and then 'Branch/tag...' from the context menu. After supplying the appropriate URL for the tag, e.g.,

    -
    svn+ssh://<user-id>@<login-node>/data/leuven/300/vsc30000/svn-repo/simulation/tag/nature-submission
    -
    -

    - you click 'OK'. -

    -

    -Browsing the repository

    -

    - Sometimes it is convenient to browse a subversion repository. TortoiseSVN makes this easy, right-click in a directory in Windows Explorer, and select 'TortoiseSVN' and then 'Repo-browser' from the context menu. -

    -

    -
    - \"TortoiseSVN -

    -

    -Importing a local project into the VSC cluster repository

    -

- As with the command line client, it is possible to import a local directory on your desktop system into your subversion repository on the VSC cluster. Let us assume that this directory is called 'calculation'. Right-click on it in Windows Explorer, and choose 'Subversion' and then 'Import...' from the context menu. This will open the 'Import' dialog.

    -

    - The repository's URL would be (modify the user ID and directory appropriately): -

    -
    svn+ssh://<user-id>@<login-node>/data/leuven/300/vsc30000/svn-repo/calculation/trunk
    -
    -

    - TortoiseSVN will automatically create the 'calculation' and 'trunk' directory for you (it uses the '--parents' option). -

    -

    - Creating directories such as 'branches' or 'tags' can be done using the repository browser. To invoke it, right-click in a directory in Windows Explorer and select 'TortoiseSVN' and then 'Repo-browser'. Navigate to the appropriate project directory and create a new directory by right-clicking in the parent directory's content view (right pane) and selecting 'Create folder...' from the context menu. -

    " -335,"","

    Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal. To open a terminal in Linux when using KDE, choose Applications > System > Terminal > Konsole. When using GNOME, choose Applications > Accessories > Terminal. -

    If you don't have any experience with using the command-line interface in Linux, we suggest you to read the basic Linux usage section first. -

    Getting ready to request an account

      -
    • Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on Linux is using the freely available OpenSSH client which you can then also use to log on to the clusters.
    • -

    Connecting to the cluster

      -
    • Open a text-mode session using an SSH client: - -
    • -
    • Transfer data using Secure FTP (SFTP) with the OpenSSH sftp and scp commands.
    • -
    • Display programs that use graphics or have a GUI -
        -
      • No extra software is needed on a Linux client system, but you need to use the appropriate options with the ssh command as explained on the page on OpenSSH.
      • -
• On the KU Leuven/UHasselt clusters it is also possible to use the NX Client to log on to the machine and run graphical programs. This requires additional client software that is currently available for Windows, OS X, Linux, Android and iOS. The advantage over displaying X programs directly on your Linux screen is that you can put your laptop to sleep, disconnect and move to another network without losing your X session. Performance may also be better with many programs over high-latency networks.
      • -
      • The KU Leuven/UHasselt and UAntwerp clusters also offer support for visualization software through TurboVNC. VNC renders images on the cluster and transfers the resulting images to your client device. VNC clients are available for Windows, macOS, Linux, Android and iOS. -
      • -
      -
    • -

    Software development

    " -337,"","

    -Prerequisite: OpenSSH

    -Linux

    - On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing: -

$ ssh -V
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

    - To access the clusters and transfer your files, you will use the following commands: -

      -
    • - ssh: to generate the ssh keys and to open a shell on a remote machine,
    • -
    • - sftp: a secure equivalent of ftp,
    • -
    • - scp: a secure equivalent of the remote copy command rcp.
    • -

    -Windows

    - You can use OpenSSH on Windows also if you install the free UNIX emulation layer Cygwin with the package \"openssh\". -

    macOS/OS X

    macOS/OS X comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! -

    -Generating a public/private key pair

    - Usually you already have the software needed and a key pair might already be present in the default location inside your home directory: -

$ ls ~/.ssh
authorized_keys2    id_rsa            id_rsa.pub         known_hosts

    - You can recognize a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. A popular alternative key type, instead of rsa, is dsa. However, we recommend to use rsa keys. -

    - You will need to generate a new key pair, when: -

      -
    • - you don't have a key pair yet,
    • -
    • - you forgot the passphrase protecting your private key,
    • -
    • - or your private key was compromised.
    • -

    - To generate a new public/private pair, use the following command: -

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.

    - This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasized that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is every time you want to access the cluster or transfer your files. -

    - Keys are required in the OpenSSH format. -

    - If you have a public key \"id_rsa_2048_ssh.pub\" in the SSH2 format, you can use OpenSSH's ssh-keygen to convert it to the OpenSSH format in the following way: -

    $ ssh-keygen -i -f ~/.ssh/id_rsa_2048_ssh.pub > ~/.ssh/id_rsa_2048_openssh.pub
    -
    " -339,"","

    Prerequisite: OpenSSH

    See the page on generating keys. -

    Connecting to the VSC clusters

    Text mode

    In many cases, a text mode connection to one of the VSC clusters is sufficient. To make such a connection, the ssh command is used: -

    $ ssh <vsc-account>@<vsc-loginnode>
    -

    Here, -

      -
    • <vsc-account> is your VSC username that you have received by mail after your request was approved,
    • -
    • <vsc-loginnode> is the name of the loginnode of the VSC cluster you want to connect to.
    • -

    You can find the names and ip-addresses of the loginnodes in the sections on the available hardware. -

    The first time you make a connection to the loginnode, you will be asked to verify the authenticity of the loginnode, e.g., -

$ ssh vsc98765@login.hpc.kuleuven.be
The authenticity of host 'login.hpc.kuleuven.be (134.58.8.192)' can't be established.
RSA key fingerprint is b7:66:42:23:5c:d9:43:e8:b8:48:6f:2c:70:de:02:eb.
Are you sure you want to continue connecting (yes/no)?

    Here, user vsc98765 wants to make a connection to the ThinKing cluster at KU Leuven via the loginnode login.hpc.kuleuven.be. -

    If your private key is not stored in a default file (~/.ssh/id_rsa) you need to provide the path to it while making the connection:

    $ ssh -i <path-to-your-private-key-file> <vsc-account>@<vsc-loginnode>

    Connection with support for graphics

    On most clusters, we support a number of programs that have a GUI mode or display graphics otherwise through the X system. To be able to display the output of such a program on the screen of your Linux machine, you need to tell ssh to forward X traffic from the cluster to your Linux desktop/laptop by specifying the -X option. There is also an option -x to disable such traffic, depending on the default options on your system as specified in /etc/ssh/ssh_config, or ~/.ssh/config.
    - Example: -

    ssh -X vsc123456@login.hpc.kuleuven.be
    -

    To test the connection, you can try to start a simple X program on the login nodes, e.g., xterm or xeyes. The latter will open a new window with a pair of eyes. The pupils of these eyes should follow your mouse pointer around. Close the program by typing \"ctrl+c\": the window should disappear. -

    If you get the error 'DISPLAY is not set', you did not correctly enable the X-Forwarding. -
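
If you connect regularly, you can store these options in the OpenSSH client configuration file ~/.ssh/config instead of typing them every time. A minimal sketch (the host alias, login node, user ID and key file are examples that you should adapt):

Host kuleuven
    HostName login.hpc.kuleuven.be
    User vsc98765
    IdentityFile ~/.ssh/id_rsa
    ForwardX11 yes

After this, 'ssh kuleuven' opens the same connection as the longer commands above.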

    Links

    " -341,"","

- The OpenSSH program ssh-agent holds private keys used for public key authentication (RSA, DSA). The idea is that you store your private key in the ssh authentication agent and can then log in or use sftp as often as you need without having to enter your passphrase again. This is particularly useful when setting up an ssh proxy connection (e.g., for the tier-1 system muk), as these connections are more difficult to set up when your key is not loaded into an ssh-agent.

    - This all sounds very easy. The reality is more difficult though. The problem is that subsequent commands, e.g., the command to add a key to the agent or the ssh or sftp commands, must be able to find the ssh authentication agent. Therefore some information needs to be passed from ssh-agent to subsequent commands, and this is done through two environment variables: SSH_AUTH_SOCK and SSH_AGENT_PID. The problem is to make sure that these variables are defined with the correct values in the shell where you start the other ssh commands. -

Starting ssh-agent: Basic scenarios

There are a number of basic scenarios:

1. You're lucky and your system manager has set up everything so that ssh-agent is started automatically when the GUI starts after logging in, and the environment variables are hence correctly defined in all subsequent shells. You can check for that easily: type
   $ ssh-add -l
   If the command returns with the message
   Could not open a connection to your authentication agent.
   then ssh-agent is not running or not configured properly, and you'll need to follow one of the following scenarios.
2. Start an xterm (or whatever your favourite terminal client is) and continue to work in that xterm window or other terminal windows started from that one:
   $ ssh-agent xterm &
   The shell in that xterm is then configured correctly, and when that xterm is killed, the ssh-agent will also be killed.
3. ssh-agent can also output the commands that are needed to configure the shell. These can then be used to configure the current shell or any further shell. E.g., if you're a bash user, an easy way to start an ssh-agent and configure it in the current shell is to type
   $ eval `ssh-agent -s`
   at the command prompt. If you start a new shell (e.g., by starting an xterm) from that shell, it should also be correctly configured to contact the ssh authentication agent. A better idea though is to store the commands in a file and execute them in any shell where you need access to the authentication agent. E.g., for bash users:
   $ ssh-agent -s >~/.ssh-agent-environment
   . ~/.ssh-agent-environment
   and you can then configure any shell that needs access to the authentication agent by executing
   $ . ~/.ssh-agent-environment
   Note that this will not necessarily shut down the ssh-agent when you log out of the system. It is not a bad idea to explicitly kill the ssh-agent before you log out:
   $ ssh-agent -k

Managing keys

Once you have an ssh-agent up and running, it is very easy to add your key to it. If your key has the default name (id_rsa), all you need to do is to type

$ ssh-add

at the command prompt. You will then be asked to enter your passphrase. If your key has a different name, e.g., id_rsa_cluster, you can specify that name as an additional argument to ssh-add:

$ ssh-add ~/.ssh/id_rsa_cluster

To list the keys that ssh-agent is managing, type

$ ssh-add -l

You can now use the OpenSSH commands ssh, sftp and scp without having to enter your passphrase again.
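
A few other ssh-add options can come in handy; a short overview (see man ssh-add for the full list, the key file name is just an example):

$ ssh-add -d ~/.ssh/id_rsa_cluster        # remove one specific key from the agent
$ ssh-add -D                              # remove all keys from the agent
$ ssh-add -t 3600 ~/.ssh/id_rsa_cluster   # add a key that expires after one hour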

Starting ssh-agent: Advanced options

In case ssh-agent is not started by default when you log in to your computer, there are a number of things you can do to automate the startup of ssh-agent and to configure subsequent shells.

Ask your local system administrator

If you're not managing your system yourself, you can always ask your system manager whether they can make sure that ssh-agent is started when you log on, and in such a way that shells opened from the desktop have the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID set (with the first one being the most important one).

And if you're managing your own system, you can dig into the manuals to figure out if there is a way to do so. Since there are so many desktop environments available for Linux systems (GNOME, KDE, Ubuntu Unity, ...) we cannot offer help here.

A semi-automatic solution in bash

This solution requires some modifications to .bash_profile and .bashrc. Be careful when making these modifications, as errors may make it difficult to log on to your machine. So test by executing these files with source ~/.bash_profile and source ~/.bashrc.

This simple solution is based on option 3 given above to start ssh-agent.

1. You can define a new shell command by using the bash alias mechanism. Add the following line to the file .bashrc in your home directory:
   alias start-ssh-agent='/usr/bin/ssh-agent -s >~/.ssh-agent-environment; . ~/.ssh-agent-environment'
   The new command start-ssh-agent will now start a new ssh-agent, store the commands to set the environment variables in the file .ssh-agent-environment in your home directory and then "source" that file to execute the commands in the current shell (which then sets SSH_AUTH_SOCK and SSH_AGENT_PID to appropriate values).
2. Also put the line
   [[ -s ~/.ssh-agent-environment ]] && . ~/.ssh-agent-environment &>/dev/null
   in your .bashrc file. This line will check if the file .ssh-agent-environment exists in your home directory and "source" it to set the appropriate environment variables.
3. As explained in the GNU bash manual, .bashrc is only read when starting so-called interactive non-login shells. Interactive login shells will not read this file by default. Therefore it is advised in the GNU bash manual to add the line
   [[ -s ~/.bashrc ]] && . ~/.bashrc
   to your .bash_profile. This will execute .bashrc if it exists whenever .bash_profile is called.

You can now start an SSH authentication agent by issuing the command start-ssh-agent and add your key as indicated above with ssh-add.
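
A typical session then looks like this (the account name and login node are just examples):

$ start-ssh-agent
$ ssh-add                         # enter your passphrase once
$ ssh vsc40000@vsc.login.node     # no passphrase needed anymore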

An automatic and safer solution in bash

One disadvantage of the previous solution is that a new ssh-agent will be started every time you execute the command start-ssh-agent, and all subsequent shells will then connect to that one.

The following solution is more complex, but a lot safer, as it first tries to find an already running ssh-agent that can be contacted:

1. It will first check if the environment variable SSH_AUTH_SOCK is defined, and try to contact that agent. This makes sure that no new agent is started if you log on to a system that automatically starts an ssh-agent.
2. Then it will check for a file .ssh-agent-environment, source that file and try to connect to the ssh-agent. This makes sure that no new agent is started if another agent can be found through that file.
3. Only if those two tests fail will a new ssh-agent be started.

This solution uses a bash function.

1. Add the following block of text to your .bashrc file:
   start-ssh-agent() {
   #
   # Start an ssh agent if none is running already.
   # * First we try to connect to one via SSH_AUTH_SOCK
   # * If that doesn't work out, we try via the file .ssh-agent-environment
   # * And if that doesn't work out either, we just start a fresh one and write
   #   the information about it to .ssh-agent-environment for future use.
   #
   # We don't really test for a correct value of SSH_AGENT_PID as the only
   # consequence of not having it set seems to be that one cannot kill
   # the ssh-agent with ssh-agent -k. But starting another one wouldn't
   # help to clean up the old one anyway.
   #
   # Note: ssh-add return codes:
   #   0 = success,
   #   1 = specified command fails (e.g., no keys with ssh-add -l)
   #   2 = unable to contact the authentication agent
   #
   sshfile=~/.ssh-agent-environment
   #
   # First effort: via SSH_AUTH_SOCK/SSH_AGENT_PID
   #
   if [ -n "$SSH_AUTH_SOCK" ]; then
     # SSH_AUTH_SOCK is defined, so try to connect to the authentication agent
     # it should point to. If that succeeds, we are done.
     ssh-add -l &>/dev/null
     if [[ $? != 2 ]]; then
       echo "SSH agent already running."
       unset sshfile
       return 0
     else
       echo "Could not contact the ssh-agent pointed at by SSH_AUTH_SOCK, trying more..."
     fi
   fi
   #
   # Second effort: if we're still looking for an ssh-agent, try via $sshfile
   #
   if [ -e "$sshfile" ]; then
     # Load the environment given in $sshfile
     . $sshfile &>/dev/null
     # Try to contact the ssh-agent
     ssh-add -l &>/dev/null
     if [[ $? != 2 ]]; then
       echo "SSH agent already running; reconfigured the environment."
       unset sshfile
       return 0
     else
       echo "Could not contact the ssh-agent pointed at by $sshfile."
     fi
   fi
   #
   # And if we haven't found a working one, start a new one...
   #
   echo "Creating new SSH agent."
   ssh-agent -s > $sshfile && . $sshfile
   unset sshfile
   }
   A shorter version, without the comments and without output, is
   start-ssh-agent() {
   sshfile=~/.ssh-agent-environment
   #
   if [ -n "$SSH_AUTH_SOCK" ]; then
     ssh-add -l &>/dev/null
     [[ $? != 2 ]] && unset sshfile && return 0
   fi
   #
   if [ -e "$sshfile" ]; then
     . $sshfile &>/dev/null
     ssh-add -l &>/dev/null
     [[ $? != 2 ]] && unset sshfile && return 0
   fi
   #
   ssh-agent -s > $sshfile && . $sshfile &>/dev/null
   unset sshfile
   }
   This defines the command start-ssh-agent.
2. Since start-ssh-agent will now first check for a usable running agent, it does no harm to simply execute this command in your .bashrc file to start an SSH authentication agent. So add the line
   start-ssh-agent &>/dev/null
   after the above function definition. All output is sent to /dev/null (and hence not shown) as a precaution, since scp or sftp sessions fail on many systems when output is generated in .bashrc (typically with error messages such as "Received message too long" or "Received too large sftp packet"). You can also use the newly defined command start-ssh-agent at the command prompt. It will then check your environment, reset the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID or start a new ssh-agent.
3. As explained in the GNU bash manual, .bashrc is only read when starting so-called interactive non-login shells. Interactive login shells will not read this file by default. Therefore it is advised in the GNU bash manual to add the line
   [[ -s ~/.bashrc ]] && . ~/.bashrc
   to your .bash_profile. This will execute .bashrc if it exists whenever .bash_profile is called.

You can now simply add your key as indicated above with ssh-add and it will become available in all shells.

The only remaining problem is that the ssh-agent process that you started may not get killed when you log out, and if the function fails to contact that ssh-agent when you log on again, the result may be a build-up of ssh-agent processes. You can always kill the agent by hand before logging out with ssh-agent -k.
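
If you prefer to automate this as well, one possible sketch (assuming bash and the setup above) is to add the following to ~/.bash_logout, so that the agent this session knows about is killed when the login shell exits:

# kill the agent this session knows about, if any
if [ -n "$SSH_AGENT_PID" ]; then
    ssh-agent -k >/dev/null 2>&1
fi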


Rationale

ssh provides a safe way of connecting to a computer, encrypting traffic and avoiding passing passwords across public networks where your traffic might be intercepted by someone else. Yet making a server accessible from all over the world makes that server very vulnerable. Therefore servers are often put behind a firewall, another computer or device that filters traffic coming from the internet.

In the VSC, all clusters are behind a firewall, but for the tier-1 cluster muk this firewall is a bit more restrictive than for other clusters. Muk can only be approached from certain other computers in the VSC network, via the internal VSC network and not from the public network. To avoid having to log on twice, first to another login node in the VSC network and from there on to muk, one can set up a so-called ssh proxy: you then connect through another computer (the proxy server) to the computer that you really want to connect to.

This all sounds quite complicated, but once things are configured properly it is really simple to log on to the host.

Setting up a proxy in OpenSSH

Setting up a proxy is done by adding a few lines to the file $HOME/.ssh/config on the machine from which you want to log on to another machine.

The basic structure is as follows:

Host <my_connectionname>
    ProxyCommand ssh -q %r@<proxy server> 'exec nc <target host> %p'
    User <userid>

where:

• <my_connectionname>: the name you want to use for this proxy connection. You can then log on to the <target host> using this proxy configuration with ssh <my_connectionname>.
• <proxy server>: the name of the proxy server for the connection.
• <target host>: the host to which you want to log on.
• <userid>: your userid on <target host>.

Caveat: Access via the proxy will only work if you have logged in to the proxy server itself at least once from the client you're using.

Some examples

A regular proxy without X forwarding

In Linux or macOS, SSH proxies are configured as follows.

In your $HOME/.ssh/config file, add the following lines:

Host tier1
    ProxyCommand ssh -q %r@vsc.login.node 'exec nc login.muk.gent.vsc %p'
    User vscXXXXX

where you replace vsc.login.node with the name of the login node of your home tier-2 cluster (see also the overview of available hardware).

Replace vscXXXXX with your own VSC account name (e.g., vsc40000).

The name 'tier1' in the 'Host' field is arbitrary. Any name will do, and this is the name you need to use when logging in:

$ ssh tier1
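
On recent OpenSSH versions (7.3 and newer), the same result can also be obtained with the simpler ProxyJump directive instead of the ProxyCommand/nc construction; a sketch using the same example host names:

Host tier1
    ProxyJump vscXXXXX@vsc.login.node
    HostName login.muk.gent.vsc
    User vscXXXXX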

A proxy with X forwarding

This requires a minor modification to the lines above that need to be added to $HOME/.ssh/config:

Host tier1X
    ProxyCommand ssh -X -q %r@vsc.login.node 'exec nc login.muk.gent.vsc %p'
    ForwardX11 yes
    User vscXXXXX

I.e., you need to add the -X option to the ssh command to enable X forwarding and add the line 'ForwardX11 yes'.

$ ssh tier1X

will then log you on to login.muk.gent.vsc with X forwarding enabled, provided that the $DISPLAY variable was correctly set on the client on which you executed the ssh command. Note that simply executing

$ ssh -X tier1

has the same effect: it is not necessary to specify the X forwarding in the config file, it can also be enabled when running ssh.

The proxy for testing/debugging on muk

For testing/debugging, you can log in to the UGent login node gengar1.gengar.gent.vsc over the VSC network. The following $HOME/.ssh/config can be used:

Host tier1debuglogin
    ProxyCommand ssh -q %r@vsc.login.node 'exec nc gengar1.gengar.gent.vsc %p'
    User vscXXXXX

Change vscXXXXX to your VSC username and connect with

$ ssh tier1debuglogin

For advanced users

You can define many more properties for an ssh connection in the config file, e.g., setting up ssh tunneling. On most Linux machines, you can get more information about all the possibilities by issuing

$ man 5 ssh_config

Alternatively, you can also search for this manual page on the internet.

    " -345,"","

Prerequisites

Background

Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop, except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunneling. There are several cases where this is useful:

• Running X applications on the cluster: the X program cannot directly communicate with the X server on your local system. In this case, the tunneling is easy to set up as OpenSSH will do it for you if you specify the -X option on the command line when you log on to the cluster in text mode:
  $ ssh -X <vsc-account>@<vsc-loginnode>
  where <vsc-account> is your VSC number and <vsc-loginnode> is the hostname of the cluster's login node you are using.
• Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualization mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. Setting up a tunnel for this scenario is also explained on that page.
• Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.

Procedure

1. Log in on the login node.
2. Start the server job and note the name of the compute node the job is running on (e.g., 'r1i3n5'), as well as the port the server is listening on (e.g., '44444').
3. In a terminal window on your client machine, issue the following command:

ssh -L11111:r1i3n5:44444 -N <vsc-account>@<vsc-loginnode>

where <vsc-account> is your VSC number and <vsc-loginnode> is the hostname of the cluster's login node you are using. The local port is given first (e.g., 11111), followed by the remote host (e.g., 'r1i3n5') and the server port (e.g., 44444).
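
Once the tunnel is up, a client on your desktop reaches the server by connecting to the local end of the tunnel. As a quick test you can, e.g., use netcat with the example port from above (the actual client of course depends on the server application):

$ nc localhost 11111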
    " -347,"","

Prerequisite: OpenSSH

See the page on generating keys.

Using scp

Files can be transferred with scp, which is more or less an equivalent of cp, but for copying to or from a remote machine.

For example, to copy the (local) file localfile.txt to your home directory on the cluster (where <vsc-loginnode> is a login node), use:

scp localfile.txt <vsc-account>@<vsc-loginnode>:

Likewise, to copy the remote file remotefile.txt from your home directory on the cluster to your local computer, use:

scp <vsc-account>@<vsc-loginnode>:remotefile.txt .

The colon is required!
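
scp also understands the -r option to copy directories recursively; e.g., to copy a whole directory into a subdirectory of your home directory on the cluster (the directory names are just examples):

scp -r results <vsc-account>@<vsc-loginnode>:project1/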

Using sftp

sftp is an equivalent of the ftp command, but it uses the secure ssh protocol to connect to the clusters.

One easy way of starting an sftp session is

sftp <vsc-account>@<vsc-loginnode>
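
Within the sftp session you can then navigate and transfer files with commands similar to those of a regular ftp client; a short example session (the file and directory names are just examples):

sftp> ls                  # list the files in the remote working directory
sftp> cd project1         # change the remote working directory
sftp> put localfile.txt   # upload a file
sftp> get remotefile.txt  # download a file
sftp> quit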


Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with the command line and the Terminal. To open a Terminal window in macOS (formerly OS X), choose Applications > Utilities > Terminal in the Finder.

If you don't have any experience with the Terminal, we suggest you read the basic Linux usage section first (which also applies to macOS).

    Getting ready to request an account

• Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on macOS is using the OpenSSH client included with macOS, which you can then also use to log on to the clusters.

    Connecting to the machine

• Open a text-mode session using an SSH client: the OpenSSH ssh command or JellyfiSSH.
• Transfer data using Secure FTP (SFTP) with the OpenSSH sftp and scp commands, Cyberduck or FileZilla.
• Run GUI programs or other programs that use graphics:
  • Linux programs use the X protocol to display graphics on local or remote screens. To use your Mac as a remote screen, you need to install an X server. XQuartz is one that is freely available. Once the X server is up and running, you can simply open a terminal window and connect to the cluster using the command line SSH client in the same way as you would on Linux.
  • On the KU Leuven/UHasselt clusters it is possible to use the NX Client to log on to the machine and run graphical programs. Instead of an X server, another piece of client software is needed. That software is currently available for Windows, macOS, Linux, Android and iOS.
  • The KU Leuven/UHasselt and UAntwerp clusters also offer support for visualization software through TurboVNC. VNC renders images on the cluster and transfers the resulting images to your client device. VNC clients are available for Windows, macOS, Linux, Android and iOS.


Prerequisite: OpenSSH

Every macOS install comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! Because of this, you can use the same commands as specified in the Linux client section to access the cluster and transfer files.

Generating a public/private key pair

Generating a public/private key pair is identical to what is described in the Linux client section, that is, by using the ssh-keygen command in a Terminal window.

    " -353,"","

    Prerequisites

• macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! Because of this, you can use the same commands as specified in the Linux client section to access the cluster and transfer files (ssh-keygen to generate the keys, ssh to log on to the cluster and scp and sftp for file transfer).
• Optional: you can use JellyfiSSH to store your ssh session settings. The most recent version is available for a small fee from the Mac App Store, but if you search for JellyfiSSH 4.5.2, the version used for the screenshots on this page, you can still find some free downloads of that version. Installation is easy: just drag the program's icon to the Application folder in the Finder, and you're done.

    Connecting using OpenSSH

As in the Linux client section, the ssh command is used to make a connection to (one of) the VSC clusters. In a Terminal window, execute:

$ ssh <vsc-account>@<vsc-loginnode>

where

• <vsc-account> is your VSC username that you received by mail after your request was approved,
• <vsc-loginnode> is the name of the login node of the VSC cluster you want to connect to.

You can find the names and IP addresses of the login nodes in the sections on the local VSC clusters.

SSH will ask you to enter your passphrase.

On sufficiently recent macOS/OS X versions (Leopard and newer) you can use the Keychain Access service to automatically provide your passphrase to ssh. All you need to do is add the key using

$ ssh-add ~/.ssh/id_rsa

(assuming that the private key you generated before is called id_rsa).
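
On more recent macOS versions (Sierra and newer), you can also instruct ssh to use the keychain automatically through your ~/.ssh/config file; a minimal sketch (the key file name is an example):

Host *
    AddKeysToAgent yes
    UseKeychain yes
    IdentityFile ~/.ssh/id_rsa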

    Using JellyfiSSH for bookmarking ssh connection settings

You can use JellyfiSSH to create a user-friendly bookmark for your ssh connection settings. To do this, follow these steps:

1. Start JellyfiSSH and select 'New'. This will open a window where you can specify the connection settings.
2. In the 'Host or IP' field, type in <vsc-loginnode>. In the 'Login name' field, type in your <vsc-account>. In the screenshot below we have filled in the fields for a connection to the ThinKing cluster at KU Leuven as user vsc98765.
   (screenshot: JellyfiSSH connection settings)
3. You might also want to change the Terminal window settings, which can be done by clicking on the icon in the lower left corner of the JellyfiSSH window.
4. When done, provide a name for the bookmark in the 'Bookmark Title' field and press 'Add' to create the bookmark.
5. To make a connection, select the bookmark in the 'Bookmark' field and click on 'Connect'. Optionally, you can make the bookmark the default by selecting it as the 'Startup Bookmark' in the JellyfiSSH > Preferences menu entry.
    " -355,"","

    Prerequisite: OpenSSH, Cyberduck or FileZilla

• macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a Terminal window and jump in! Because of this, you can use the same scp and sftp commands as in Linux to access the cluster and transfer files.
• We recommend Cyberduck as a graphical alternative to the scp command. This program is freely available (with a voluntary donation) from the Cyberduck web site and easy to use. Installation is easy: just drag the program's icon to the Application folder in the Finder, and you're done. The program can also be found in the App Store, but at a price.
• An alternative SFTP GUI is FileZilla. FileZilla for macOS is very similar to FileZilla for Windows (see also our page about FileZilla in the Windows section). It can be downloaded from the FileZilla download page.

    Transferring files with Cyberduck

Files can be easily transferred with Cyberduck. Setup is easy:

1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the '+' sign on the bottom left of the window. A new window will open.
2. In the 'Server' field, type in <vsc-loginnode>. In the 'Username' field, type in your <vsc-account>.
3. Click on 'More Options', select 'Use Public Key Authentication' and point it to your private key (the filename will be shown underneath). Please keep in mind that Cyberduck only works with passphrase-protected private keys.
4. Finally, type in a name for the bookmark in the 'Nickname' field and close the window by pressing on the red circle in the top left corner of the window.
   (screenshot: Cyberduck bookmark window)
5. To open the scp connection, click on the 'Bookmarks' icon (which resembles an open book) and double click on the bookmark you just created.

    Transferring files with FileZilla

To install FileZilla, follow these steps:

1. Download the appropriate file from the FileZilla download page.
2. The file you just downloaded is a compressed UNIX-style archive (with a name ending in .tar.bz2). Double-click on this file in the Finder (most likely in the Downloads folder) and drag the FileZilla icon that appears to the Applications folder.
3. Depending on the settings of your machine, you may get a notification that FileZilla.app cannot be opened because it is from an unidentified developer when you try to start it. Check out the macOS Gatekeeper information on this Apple support page.

FileZilla for macOS works in pretty much the same way as FileZilla for Windows:

1. Start FileZilla.
2. Open the 'Site Manager' using the 'File' menu.
3. Create a new site by clicking the New Site button.
4. In the tab marked General, enter the following values (all other fields remain blank):
   • Host: login.node.vsc (replace with the name of the login node of your home cluster)
   • Servertype: SFTP - SSH File Transfer Protocol
   • Logon Type: Normal
   • User: your own VSC user ID, e.g., vsc98765
5. Optionally, rename this setting to your liking by pressing the 'Rename' button.
6. Press 'Connect'. Enter your passphrase when requested. FileZilla will try to use the information in your macOS Keychain. See the page on 'Text-mode access using OpenSSH' to find out how to add your key to the keychain using ssh-add.

    \"FileZilla -

    Note that recent versions of FileZilla have a screen in the settings to -manage private keys. The path to the private key must be provided in -options (Edit Tab -> options -> connection -> SFTP): -

    \"FileZilla -

    After that you should be able to connect after being asked for -passphrase. As an alternative you can choose to use the built-in macOS keychain system. -

    " -357,"","

Installation

Eclipse doesn't come with its own compilers. By default, it relies on the Apple gcc toolchain. You can install this toolchain by installing the Xcode package from the App Store. This package is free, but since it takes quite some disk space and few users need it, it is not installed by default on OS X (though it used to be). After installing Xcode, you can install Eclipse according to the instructions on the Eclipse web site. Eclipse will then use the gcc command from the Xcode distribution. The Apple version of gcc is really just the gcc front-end layered on top of a different compiler, LLVM, and might behave differently from gcc on the cluster.

If you want a regular gcc, or need Fortran, MPI or mathematical libraries equivalent to those in the foss toolchain on the cluster, you'll need to install additional software. We recommend using MacPorts for this, as it contains ports to macOS of most tools that we include in our toolchains. Using MacPorts requires some familiarity with the bash shell, so you may have a look at our "Using Linux" section or search the web for a good bash tutorial (one in a Linux tutorial will mostly do). E.g., you'll have to add the directory where MacPorts installs the applications to your PATH environment variable. For a typical MacPorts installation, this directory is /opt/local/bin.

After installing MacPorts, the following commands will install libraries and tools that are very close to those of the foss2016b toolchain (tested September 2016):

sudo port install gcc5
sudo port select --set gcc mp-gcc5
sudo port install openmpi-gcc5 +threads
sudo port select --set mpi openmpi-gcc5-fortran
sudo port install OpenBLAS +gcc5 +lapack
sudo port install scalapack +gcc5 +openmpi
sudo port install fftw-3 +gcc5 +openmpi

    Some components may be slightly newer versions than provided in the foss2015a toolchain, while the MPI library is an older version (at least when tested in September 2016).
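
To verify that the MacPorts toolchain is the one that is picked up (assuming /opt/local/bin precedes /usr/bin in your PATH), you can check the versions on the command line; the output will of course depend on the versions that were installed:

$ gcc --version        # should report the MacPorts gcc selected above
$ gfortran --version
$ mpicc --version      # the Open MPI compiler wrapper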

If you also want a newer version of subversion that can integrate with the "Native JavaHL connector" in Eclipse, the following commands will install the appropriate packages:

sudo port install subversion
sudo port install subversion-javahlbindings

At the time of writing, this installed version 1.9.4 of subversion, which has a compatible "Native JavaHL connector" in Eclipse.

Configuring Eclipse for other compilers

Eclipse uses the PATH environment variable to find other software it uses, such as compilers but also some commands that give information on where certain libraries are stored or how they are configured. On a regular UNIX/Linux system, you'd set the variable in your shell configuration files (e.g., .bash_profile if you use the bash shell). This mechanism also works on OS X, but not for applications that are not started from the shell but from the Dock or by clicking on their icon in the Finder.

Because of security concerns, Apple has made it increasingly difficult to define the path for GUI applications that are not started through a shell script.

• In 10.7 and earlier, one could define environment variables for GUI applications in ~/.MacOSX/environment.plist.
• In 10.8 and 10.9 one had to modify the Info.plist file in the so-called application bundle.

Both tricks are explained in the Photran installation instructions on the Eclipse wiki. However, in OS X 10.10 (Yosemite) neither mechanism works for setting the path.

Our advice is to:

• Configure your bash shell so that you can find the gfortran executable and the corresponding gcc executable (e.g., try gfortran --version and gcc --version and check the output of these commands).
• Then start Eclipse also from a terminal window:
  • use the full path, for the default install procedure this is very likely /Applications/eclipse/eclipse,
  • or add the path to Eclipse to the PATH environment variable (so you likely have to add /Applications/eclipse to the path),
  • or define an alias to start Eclipse, e.g., by adding the line
    alias start-eclipse='/Applications/eclipse/eclipse >&/dev/null &'
    to your .bashrc file. This line defines a new command start-eclipse.
  This should work for all OS X versions.
    " -359,"","" -361,"","

    Software development on clusters

Eclipse is an extensible IDE for program development. The basic IDE is written in Java for the development of Java programs, but can be extended through packages. The IDE was originally developed by IBM, but was open-sourced and has become very popular. There are some interesting history tidbits on the Wikipedia entry for Eclipse.

    Some attractive features

• Multi-platform: available for Windows, OS X and Linux, and works mostly the same on all these platforms.
• Support for C/C++ (via the CDT plugin) and Fortran (via the Photran plugin) development. This goes far beyond syntax coloring and includes things like code refactoring, build process management, etc.
• Support for the development of parallel applications on a cluster, including automatic synchronisation of the source files on your laptop with one or more cluster accounts. So you can easily do code development while off-line. Eclipse is heavily promoted (and actively developed) within the XSEDE collaboration of supercomputer centres in the USA. If you have suitable compilers and libraries on your local machine, you may even be able to do part of the testing and debugging on your local machine, avoiding delays caused by the job queueing system. Another advantage of running Eclipse locally rather than on the cluster is that the GUI has all of the responsiveness of a local program, not influenced by network delays.
• It integrates with most popular version control systems (also offering a GUI for them).

    Caveat

The documentation of the Parallel Tools Platform also tells you how to launch and debug programs on the cluster from the Eclipse IDE. However, this is for very specific cluster configurations and we cannot support this on our clusters at the moment. You can use features such as synchronised projects (where Eclipse puts a copy of the project files from your desktop on the cluster, and even synchronises back if you change them on the cluster) or opening an SSH shell from the IDE to directly enter commands on the cluster.

    Release policy

The Eclipse project works with a "synchronised release policy". Major new versions of the IDE and a wide range of packages (including the C/C++ development package (CDT), the Parallel Tools Platform (PTP) and the Fortran development package (Photran), which is now integrated in the PTP) are released simultaneously in June of each year, which guarantees that there are no compatibility problems between packages if you upgrade your whole installation at once. Bug fixes are of course released in between version updates. Each version has its own code name, and the code name has become more popular than the actual version number (as version numbers for the packages differ). E.g., the whole June 2013 release (base IDE and packages) is known as the "Kepler" release (version number 4.3), the June 2014 release as the "Luna" release (version number 4.4), the June 2015 release as the "Mars" release (version number 4.5) and the June 2016 release as "Neon".

Getting Eclipse

The best place to get Eclipse is the official Eclipse download page. That site contains various pre-packaged versions with a number of extension packages already installed. The most interesting one for C/C++ or Fortran development on clusters is "Eclipse for Parallel Application Developers". The installation instructions depend on the machine you're installing on, but typically it is not more than unpacking some archive in the right location. You'll need a sufficiently recent Java runtime on your machine though. Instructions are available on the Eclipse wiki.

The CDT, Photran and PTP plugins integrate with compilers and libraries on your system. For Linux, Eclipse uses the gcc compiler on your system. On OS X it integrates with gcc, and on Windows you need to install Cygwin and its gcc toolchain (it may also work with the MinGW and MinGW-w64 gcc versions, but we haven't verified this).

The Eclipse documentation is also available online.

    Basic concepts

• A workspace is a place where Eclipse stores a set of projects. It corresponds to a folder in the file system. The actual files of a project can, but do not need to, be in that folder. However, all internal data that Eclipse maintains will be. A user can have more than one workspace. Eclipse will ask at the start which workspace to use for the current session. Workspaces are not easily portable between computers; they are simply a way to organise your projects on your local computer.
• Each workspace can contain one or more projects. Each project is a collection of resources, e.g., C files or Fortran files, and typically has a releasable component that can be built from those resources, e.g., an executable. It is a good idea to use workspaces to group a number of related projects. A project also corresponds to a folder in the file system. That folder does not have to be contained in the workspace folder. Projects can be transported easily from one workstation to another.
• A perspective defines the (initial) layout of views and editors for a particular task. E.g., the C/C++ perspective shows an editor to edit C/C++ files and views to quickly navigate in the code, check definitions, etc. The Debug perspective is used to debug an application. The PTP also has a system monitoring perspective to monitor jobs.

    Prerequisites

• The user should be familiar with the basic use of the Eclipse IDE.
• The Eclipse IDE has been installed on the user's desktop or laptop. We advise installing the bundle 'Eclipse for Parallel Application Developers' of a recent Eclipse release, e.g., the 4.6/Neon (2016) or the 4.5/Mars (2015) release, as they contain a lot of other useful tools, including the 'Remote System Explorer' used here. On older releases or other bundles you may have to install the 'Remote System Explorer End-User Runtime' and 'Remote System Explorer User Actions' components. This page was tested with the 4.6/Neon (2016) and Helios (2010) releases.
• The user should have a VSC account and be able to access it.

    Installing additional components

In order to use Eclipse as a remote editor, you may have to install two extra components: the "Remote System Explorer End-User Runtime" and the "Remote System Explorer User Actions". Here is how to do this:

1. From Eclipse's 'Help' menu, select 'Install New Software...'; an installation dialog will appear.
2. From the 'Work with:' drop down menu, select 'Neon - http://download.eclipse.org/releases/neon' (or replace "Neon" with the name of the release that you are using). The list of available components is now automatically populated.
3. From the category 'General Purpose Tools', select 'Remote System Explorer End-User Runtime' and 'Remote System Explorer User Actions'.
4. Click the 'Next >' button to get the installation details.
5. Click the 'Next >' button again to review the licenses.
6. Select the 'I accept the terms of the license agreement' radio button.
7. Click the 'Finish' button to start the download and installation process.
8. As soon as the installation is complete, you will be prompted to restart Eclipse; do so by clicking the 'Restart Now' button.

After restarting, the installation process of the necessary extra components is finished, and they are ready to be configured.

    Configuration

Before the new components can be used, some configuration needs to be done.

Microsoft Windows users who use the PuTTY SSH client software should first prepare a private key for use with Eclipse's authentication system. Users of the OpenSSH client on Microsoft Windows, Linux or macOS can skip this preparatory step.

Microsoft Windows PuTTY users only

Eclipse's SSH components cannot handle private keys generated with PuTTY, only OpenSSH-compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. See 'How to convert a PuTTY key to OpenSSH format?'

Microsoft Windows PuTTY users should now proceed with the instructions for all users, below.

    All users

1. From the 'Window' menu ('Eclipse' menu on OS X), select 'Preferences'.
2. In the category 'General', expand the subcategory 'Network Connections' and select 'SSH2'.
3. Point Eclipse to the directory where the OpenSSH private key is stored that is used for authentication on the VSC cluster. If that key is not called 'id_rsa', select it by clicking the 'Add Private Key...' button.
4. Close the 'Preferences' dialog by clicking 'OK'.

    Creating a remote connection

In order to work on a remote system, a connection should be created first.

1. From the 'Window' menu, select 'Open Perspective' and then 'Other...'; a dialog will open (the exact contents depend on the components installed in Eclipse).
2. Select 'Remote System Explorer' from the list and press 'OK'; the 'Remote Systems' view appears (at the left by default).
3. In that view, right-click and select 'New' and then 'Connection' from the context menu; the 'New Connection' dialog should now appear.
4. From the 'System type' list, select 'SSH Only' and press 'Next >'.
5. In the 'Host name' field, enter vsc.login.node; in the 'Connection Name' field, the same host name will appear automatically. The latter can be changed if desired. Optionally, a description can be added as well. Click 'Next >' to continue.
6. In the dialog 'Sftp Files' nothing needs to be changed, so just click 'Next >'.
7. In the dialog 'Ssh Shells' nothing needs to be changed either, so again just click 'Next >'.
8. In the dialog 'Ssh Terminals' (newer versions of Eclipse) nothing needs to be changed either; click 'Finish'.

The new connection has now been created successfully and can be used.

    Browsing the remote file system

One of the features of Eclipse's 'Remote Systems' component is browsing a remote file system.

1. In the 'Remote Systems' view, expand the 'Sftp Files' item under the newly created connection; 'My Home' and 'Root' will appear.
2. Expand 'My Home'; a dialog to enter your password will appear.
3. First enter your user ID in the 'User ID' field; by default this will be your user name on your local desktop or laptop. Change it to your VSC user ID.
4. Mark the 'Save user ID' checkbox so that Eclipse will remember your user ID for this connection.
5. Click 'OK' to proceed, leaving the 'Password' field blank.
6. If the login node is not in your known_hosts file, you will be prompted about the authenticity of vsc.login.node; confirm that you want to continue connecting by clicking 'Yes'.
7. If no known_hosts file exists, Eclipse will prompt you to create one; confirm this by clicking 'Yes'.
8. You will now be prompted to enter the passphrase for your private key; do so and click 'OK'. 'My Home' will now expand and show the contents of your home directory on the VSC cluster.

Any file on the remote file system can now be viewed or edited using Eclipse as if it were a local file.

It may be convenient to also display the content of your data directory (i.e., '$VSC_DATA'). This can be accomplished easily by creating a new filter.

1. Right-click on the 'Sftp Files' item in your VSC connection ('Remote Systems' view), and select 'New' and then 'Filter...' from the context menu.
2. In the 'Folder' field, type the path to your data directory (or use 'Browse...'). If you don't know where your data directory is located, type 'echo $VSC_DATA' on the login node's command line to see its value. Leave all other fields and checkboxes at their default values and press 'Next >'.
3. In the field 'Filter name', type any name you find convenient, e.g., 'My Data'. Leave the checkbox at its default value and click 'Finish'.

A new item called 'My Data' now appears under the VSC connection's 'Sftp Files' and can be expanded to see the files in '$VSC_DATA'. Obviously, the same can be done for your scratch directory.

    Using an Eclipse terminal

The 'Remote Systems' view also allows you to open a terminal to the remote connection. This can be used as an alternative to the PuTTY or OpenSSH client and may be convenient for software development (compiling, building and running programs) without leaving the Eclipse IDE.

A new terminal can be launched from the 'Remote Systems' view by right-clicking the VSC connection's 'Ssh Shells' item and selecting 'Launch Terminal' or 'Launch...' (depending on the version of Eclipse). The 'Terminals' view will open (at the bottom of the screen by default).

    Connecting/Disconnecting

Once a connection has been created, it is trivial to connect to it again. To connect to a remote host, right-click on the VSC cluster connection in the 'Remote Systems' view, and select 'Connect' from the context menu. You may be prompted to provide your private key's passphrase.

For security reasons, it may be useful to disconnect from the VSC cluster when Eclipse is no longer used to browse or edit files. Although this happens automatically when you exit the Eclipse IDE, you may want to disconnect without leaving the application.

To disconnect from a remote host, right-click on the VSC cluster connection in the 'Remote Systems' view, and select 'Disconnect' from the context menu.

Further information

More information on Eclipse's capabilities to interact with remote systems can be found in the Eclipse help files that were automatically installed with the respective components. The information can be accessed by selecting 'Help Contents' from the 'Help' menu, and is available under the 'RSE User Guide' heading.

    " -365,"","

    Prerequisites

It is assumed that a recent version of the Eclipse IDE is installed on the desktop, and that the user is familiar with Eclipse as a development environment. The installation instructions were tested with the Helios (2010), 4.4/Luna (2014) and 4.6/Neon (2016) releases of Eclipse but may be slightly different for other versions.

    Installation & setup

In order to interact with subversion repositories, some extra plugins have to be installed in Eclipse.

1. When you start Eclipse, note the code name of the version in the startup screen.
2. From the 'Help' menu, select 'Install New Software...'.
3. From the 'Work with' drop down menu, select 'Neon - http://download.eclipse.org/releases/neon' (where Neon is the name of the release, see the first step). This will populate the components list.
4. Expand 'Collaboration', check the box for 'Subversive SVN Team Provider' and click the 'Next >' button.
5. Click 'Next >' in the 'Install Details' dialog.
6. Indicate that you accept the license agreement by selecting the appropriate radio button and click 'Finish'.
7. When Eclipse prompts you to restart it, do so by clicking 'Restart Now'.
8. An additional component is needed (an SVN connector). To trigger the install, open the Eclipse "Preferences" menu (under the "File" menu, or under "Eclipse" on OS X) and go to "Team" and then "SVN".
9. Select the tab "SVN connector".
10. Then click on "Get Connectors" to open the 'Subversive Connectors Discovery' dialog. You will not see this button if a connector is already installed. If you need a different one, you can still install one via "Install New Software" in the "Help" menu. Search for "SVNKit" for connectors that don't need any additional software on the system (our preference), or "JavaHL" for another family that connects to the original implementation. Proceed in a similar way as below (step 13).
11. The easiest choice is to use one of the "SVN Kit" connectors, as they do not require the installation of other software on your computer, but you have to choose the appropriate version. The subversion project tries to maintain compatibility between server and client from different versions as much as possible, so the version shouldn't matter too much. However, if on your desktop/laptop you'd like to mix using svn through Eclipse and through another tool, you have to be careful that the SVN connector is compatible with the other SVN tools on your system. SVN Kit 1.8.12 should work with other SVN tools that support versions 1.7-1.9 according to the documentation (we cannot test all combinations ourselves). In case you prefer to use the "Native JavaHL" connector instead, make sure that you have subversion binaries including the Java bindings installed on your system, and pick the matching version of the connector. Also see the JavaHL subclipse Wiki page of the tigris.org community.
12. Mark the checkbox next to the appropriate version of 'SVN Kit' and click 'Next >'.
13. The 'Install' dialog opens, offering to install two components; click 'Next >'.
14. The 'Install Details' dialog opens; click 'Next >'.
15. Accept the license agreement terms by checking the appropriate radio button in the 'Review Licenses' dialog and click 'Finish'.
16. You may receive a warning that unsigned code is about to be installed; click 'OK' to continue the installation.
17. Eclipse prompts you to restart to finish the installation; do so by clicking 'Restart Now'.

Eclipse is now ready to interact with subversion repositories.

Microsoft Windows PuTTY users only

Eclipse's SSH components cannot handle private keys generated with PuTTY, only OpenSSH-compliant private keys. However, PuTTY's key generator 'PuTTYgen' (that was used to generate the public/private key pair in the first place) can be used to convert the PuTTY private key to one that can be used by Eclipse. See the section on converting PuTTY keys to OpenSSH format in the page on generating keys with PuTTY for details if necessary.

    Checking out a project from a VSC cluster repository

To check out a project from a VSC cluster repository, one uses the Eclipse 'Import' feature (don't ask...), specifying a repository URL of the form:

svn+ssh://userid@vsc.login.node/data/leuven/300/vsc30000/svn-repo

In the 'User' field, enter your VSC user ID.

• Switch to the 'SSH' tab of this dialog, and select 'Private key' for authentication. Use the 'Browse' button to find the appropriate private key file to authenticate on the VSC cluster. Note that this should be a private key in OpenSSH format. Also enter the passphrase for your private key. If you wish, you can store your passphrase here at this point, but this may pose a security risk.
• You will be prompted to select a resource to be checked out; click the 'Browse' button and select the project you want to check out. Remember that if you use the recommended repository layout, you will probably want to check out the project's 'trunk'. Click 'Finish'.
• The 'Check Out As' dialog offers several options; select 'Checkout as a project with the name specified' and click 'Finish' to proceed with the check out.

Note that Eclipse remembers repository URLs, hence checking out another project from the same repository will skip quite a number of the steps outlined above.

    Work cycle

The development cycle from the point of view of version control is exactly the same as that for a command line subversion client. Once a project has been checked out or placed under version control, all actions can be performed by right-clicking on the project or specific files in the 'Project Explorer' view and choosing the appropriate action from the 'Team' entry in the context menu. The menu items are fairly self-explanatory, but you may want to read the section on TortoiseSVN since Eclipse's version control interface is very akin to it.

Note that files and directories displayed in the 'Project Explorer' view are now decorated to indicate their version control status. A '>' preceding a file or directory's name indicates that it has been modified since the last update. A new file not yet under version control has a '?' embedded in its icon.

When a project is committed, Subversive opens a dialog to enter an appropriate comment, and offers to automatically add new files to the repository. Note that Eclipse also offers to commit its project settings, e.g., the '.project' file. Whether or not you wish to store these settings in the repository depends on your setup, but probably you don't.

    " -367,"","

    If you're not familiar with Eclipse, read our introduction page first.

    Eclipse also supports several version control systems out of the box or through optional plug-ins.

The PTP (Parallel Tools Platform) strongly encourages a model where you run Eclipse locally on your workstation and let Eclipse synchronise the project files with your cluster account. If you want to use version control in this scenario, the PTP manual advises to put your local files under version control (which can also be done through Eclipse) and synchronise that with some remote repository (e.g., one of the hosting providers), and not to put the automatically synchronised copy of the code that you use for compiling and running on the cluster under version control as well. In other words,

• the version control system is used to manage the versions of your files on your local workstation,
• and Eclipse PTP is then used to manage the files on the cluster.

    If you still want to use the cluster file space as a remote repository, we strongly recommend that you do this in a different directory from where you let Eclipse synchronise the files, and don't touch the files in that repository directly.

    For experts

The synchronised projects feature in Eclipse internally uses the Git version control system to take care of the synchronisation. That is also the reason why the Parallel Software Development bundle of Eclipse comes with the EGit plug-in included. It does this, however, in a way that does not interfere with regular git operations. In both your local and remote project directory, you'll find a hidden .ptp-sync directory which in fact is a regular git repository, but stored in a different subdirectory rather than the standard .git subdirectory. So you can still have a standard Git repository beside it and they will not interfere if you follow the guidelines on this page.

    " -369,"","

Prerequisites

Environment & general use

All operations introduced in the documentation page on using subversion repositories on the VSC clusters work as illustrated therein. The repository's URI can be conveniently assigned to an environment variable:

$ export SVN="svn+ssh://userid@vsc.login.node/data/leuven/300/vsc30000/svn-repo"

where userid should be replaced by your own VSC user ID, and vsc.login.node by the appropriate login node for the cluster the repository is on. In the above, it is assumed that the SVN repository you are going to use is in your VSC data directory (here shown for user vsc30000) and is called svn-repo. This should be changed appropriately.

Checking out a project from a VSC cluster repository

To check out the simulation project to a directory 'simulation' on your desktop, simply type:

$ svn checkout ${SVN}/simulation/trunk simulation

The passphrase for your private key used to authenticate on the VSC cluster will be requested.

Once the project is checked out, you can start editing or adding files and directories, committing your changes when done.

    -
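    A typical edit-and-commit cycle could then look as follows; the file name is only an example:

    $ cd simulation
    $ svn status                              # see which files changed or are new
    $ svn add new_module.py                   # put a new file under version control
    $ svn commit -m 'simulation: add new module'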

    Importing a local project into the VSC cluster repository

    Importing a project directory that is currently on your desktop and not yet on the VSC cluster is also possible, again by simply adapting the URLs of the previous section. Suppose the directory on your desktop is 'calculation', then the steps to take are the following:

    $ svn mkdir -m 'calculation: creating dirs' --parents   \
                $SVN/calculation/trunk    \
                $SVN/calculation/branches \
                $SVN/calculation/tags
    $ svn import -m 'calculation: import' \
                 calculation              \
                 $SVN/calculation/trunk

    Note that each time you access the repository you need to authenticate, which quickly becomes tedious. Consider using ssh-agent to simplify this; see, e.g., a short tutorial on a possible setup.
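    As a minimal sketch of such an ssh-agent setup on Linux or macOS (the key file name is just an example):

    $ eval $(ssh-agent)                  # start an agent for this shell
    $ ssh-add ~/.ssh/id_rsa              # enter your passphrase once
    $ svn update                         # subsequent SVN operations no longer prompt
    $ eval $(ssh-agent -k)               # stop the agent when you are done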

    Links

    • Apache Subversion, with documentation, source and binary packages for various operating systems.
    • Cygwin, a UNIX emulation layer for Windows. Search for subversion in the list of packages when running the setup program.
    " -371,"","

    Installing the NX NoMachine client

    NoMachine NX Client Configuration Guide

    1. NoMachine NX requires keys in OpenSSH format, so if you are working on Windows with PuTTY, your existing key needs to be converted into OpenSSH format first.
    2. Start the NoMachine client and press Continue twice to get to the connection screen. Press New to create a new connection.
    3. Change the Protocol to SSH.
    4. Choose the hostname:
       • for ThinKing (Tier-2): nx.hpc.kuleuven.be and port 22.
       • for BrENIAC (Tier-1): login2-tier1.hpc.kuleuven.be and port 22.
       • If you experience connection problems, switch to the NX protocol and port 4000.
    5. Choose the authentication Use the system login.
    6. Choose the authentication method Private key.
    7. Browse to your private key. This should be in OpenSSH format (not .ppk).
       • For Android users it is easy to transfer your key and save it in the chosen location with the Box (KU Leuven) or Dropbox (UHasselt) apps.
       • For iOS users (iPad running iOS 5 or later) it is possible to transfer the key with iTunes. Connect your device through iTunes, go to the connected device, choose the "apps" tab, scroll down to "file sharing". Select the NoMachine client and add files to NoMachine Documents. Remember to sync your device.
       • Browse to your file on a mobile device from the given location.
    8. Choose the option Don't use proxy for the network connection.
    9. Give a name to your connection, e.g. Connection to nx.hpc.kuleuven.be. You can optionally create a link to that connection on your desktop. Click the "Done" button to finish the configuration.
    10. Choose the newly created connection and press "Connect".
    11. Enter your username (vsc-account) and the passphrase for your private key and press "OK".
    12. If you are connecting for the first time, choose New desktop. Otherwise, go to step 16 for instructions on how to reconnect to your session.
    13. Choose Create a new virtual desktop and continue. Each user is allowed to have a maximum of 5 desktops open.
    14. Read the useful information regarding your session displayed on several screens. This step is especially important on mobile devices: if you miss the instructions, it is not easy to figure out how to operate NoMachine on your device. You can optionally choose not to show these messages again.
    15. Once connected you will see the virtual Linux desktop.
    16. When reconnecting, choose your desktop from the list. If there are too many, use the option to find a user or a desktop and type your username (vsc-account). Once you have found your desktop, press Connect.
    17. You will be prompted about the screen resolution (Change the server resolution to match the client when I connect). This is the recommended setting, as your session will then match the resolution of the device you are actually using. When reconnecting from a different device (e.g. a mobile device), it is highly recommended to change the resolution.

    For more detailed information about the configuration process, please refer to the short video (ThinKing configuration) showing the installation and configuration procedure step by step, or to the document containing graphical instructions.

    -

    How to start using NX on ThinKing?

    1. Once your desktop is open, you can use all the available GUI software listed in the Applications menu. The software is divided into several groups:
       • Accessories (e.g. Calculator, Character Map, Emacs, Gedit, GVim),
       • Graphics (e.g. gThumb Image Viewer, Xpdf PDF Viewer),
       • Internet (e.g. Firefox with pdf support, Filezilla),
       • HPC (modules related to HPC use: a Computation sub-menu with Matlab, RStudio and SAS, and a Visualisation sub-menu with Paraview and VisIt),
       • Programming (e.g. Meld Diff Viewer),
       • System tools (e.g. File Browser, Terminal).
    2. Running applications in text mode requires an open terminal. To launch the terminal, go to Applications -> System tools -> Terminal. From the terminal, all the commands available on a regular login node can be used.
    3. More information can be found in the slides from our lunchbox session. The slides also explain how to connect your local hard disk to the NX session for easier data transfer between the cluster and your local computer.

    Attached documents

    -" -373,"","

    There are two possibilities:

    1. You can copy your private key from the machine where you generated it to the other computers you want to use to access the VSC clusters. If you want to use both PuTTY on Windows and the traditional OpenSSH client on OS X or Linux (or Windows with Cygwin) and choose this scenario, you should generate the key using PuTTY and then export it in OpenSSH format as explained on the PuTTY pages.
    2. Alternatively, you can generate another keypair for the second machine, following the instructions for your respective client (Windows, macOS/OS X, Linux), and then upload the new public key to your account:
       1. Go to the account management web site account.vscentrum.be.
       2. Choose "Edit account".
       3. Add the public key via that page. It can take half an hour before you can use the key.

    We prefer the second scenario, in particular if you want to access the clusters from a laptop or tablet, as these are easily stolen. That way, if your computer is stolen or your key may have been compromised in some other way, all you need to do is delete that key on the account website (via "Edit account"). You can continue to work on your other devices.
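    As a minimal sketch of the second scenario on Linux or macOS (the key file name is only an example; follow the client-specific instructions linked above for the details):

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc_laptop   # choose a strong passphrase
    $ cat ~/.ssh/id_rsa_vsc_laptop.pub                        # upload this public key via "Edit account"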

    " -375,"","

    Data on the VSC clusters can be stored in several locations, depending on the size and usage of these data. The following locations are available:

    • Home directory
      • Location available as $VSC_HOME.
      • The data stored here should be relatively small and should not generate very intense I/O during jobs. Its main purpose is to store all kinds of configuration files, e.g., SSH keys, .bashrc, or Matlab and Eclipse configuration files.
      • Performance is tuned for the intended load: reading configuration files etc.
      • Readable and writable on all VSC sites.
      • As a best practice, the permissions on your home directory should grant access only to yourself, i.e., 700. To share data with others, use the data directory.
    • Data directory
      • Location available as $VSC_DATA.
      • A bigger 'workspace', for program code, datasets or results that must be stored for a longer period of time.
      • There is no performance guarantee; depending on the cluster, performance may not be very high.
      • Readable and writable on all VSC sites.
    • Scratch directories
      • Several types exist, available through the $VSC_SCRATCH_XXX variables.
      • For temporary or transient data; there is typically no backup for these file systems, and 'old' data may be removed automatically.
      • Currently, $VSC_SCRATCH_NODE, $VSC_SCRATCH_SITE and $VSC_SCRATCH_GLOBAL are defined, for space that is available per node, per site, or globally on all nodes of the VSC (currently, there is no real 'global' scratch file system yet).
      • These file systems are not exported to other VSC sites.

    Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
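    For example, rather than hard-coding paths, refer to these variables in your shell and job scripts; the file name below is hypothetical:

    $ echo $VSC_HOME $VSC_DATA $VSC_SCRATCH    # see where these point on the current cluster
    $ cp results.dat $VSC_DATA/                # works unchanged on every VSC site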

    Quota is enabled on these three directories, which means the amount of data you can store there is limited by the operating system, and not just by the capacity of the disk system, to prevent the disk system from filling up accidentally. You can see your current usage and the current limits with the appropriate quota command as explained on the page on managing disk space. The actual disk capacity, shared by all users, can be found on the Available hardware page.

    You will receive a warning when you reach the soft limit of either quota, but you will only start losing data when you reach the hard limit. Data loss occurs when you try to save new files: this will not work because you have no space left, and thus these new files are lost. You will, however, not be warned when data loss occurs, so keep an eye open for the general quota warnings! The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

    Home directory

    This directory is where you arrive by default when you log in to the cluster. Your shell refers to it as "~" (tilde), or via the environment variable $VSC_HOME.

    The data stored here should be relatively small (e.g., no files or directories larger than a gigabyte, although this is not enforced automatically) and is usually used frequently. The typical use is storing configuration files, e.g., for Matlab, Eclipse, ...

    The operating system also creates a few files and folders here to manage your account. Examples are:

    • .ssh/ : This directory contains some files necessary for you to log in to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!
    • .profile and .bash_profile : These scripts define some general settings for your sessions.
    • .bashrc : This script is executed every time you start a session on the cluster: when you log in to the cluster and when a job starts. You could edit this file to define variables and aliases. However, note that loading modules here is strongly discouraged.
    • .bash_history : This file contains the commands you typed at your shell prompt, in case you need them again.
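    To check, and if needed tighten, the permissions on your home directory as recommended above, you could do something like:

    $ ls -ld $VSC_HOME        # the first column should read drwx------
    $ chmod 700 $VSC_HOME     # restrict access to yourself only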

    Data directory

    In this directory you can store all other data that you need to keep for a longer term. The environment variable pointing to it is $VSC_DATA. There are no guarantees about the speed you'll achieve on this volume. I/O-intensive programs should not run directly from this volume (and if you're not sure whether your program is I/O-intensive, don't run it from this volume).

    This directory is also a good location to share subdirectories with other users working on the same research projects.
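    As a sketch of how such sharing could be set up with classic Unix groups (the group name and directory are hypothetical, and your site may prefer ACLs or a dedicated group mechanism instead):

    $ mkdir $VSC_DATA/shared_project
    $ chgrp -R myproject $VSC_DATA/shared_project    # 'myproject' is a hypothetical Unix group
    $ chmod -R g+rwX $VSC_DATA/shared_project        # group members may read, write and traverse
    # note: group members also need execute permission on the parent directories to reach it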

    Scratch space

    To enable quick writing from your job, a few extra file systems are available on the worker nodes. These extra file systems are called scratch folders and can be used for the storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

    You should remove any data from these systems after your processing has finished. There are no guarantees about the time your data will be stored on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain in place forever, and may change them if this seems necessary for the healthy operation of the cluster.

    Each type of scratch has its own use:

    • Shared scratch ($VSC_SCRATCH)
      To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you can manually copy results to your data directory after your job has ended. Different clusters on the same site may or may not share the scratch space pointed to by $VSC_SCRATCH.
      This scratch space is provided by a central file server that contains tens or hundreds of disks. Even though it is shared, it is usually very fast, as it is very rare that all nodes do I/O simultaneously. It also implements a parallel file system that allows a job to do parallel file I/O from multiple processes to the same file simultaneously, e.g., through MPI parallel I/O.
      For most jobs, this is the best scratch system to use.
    • Site scratch ($VSC_SCRATCH_SITE)
      A variant of the previous one, which may or may not be the same volume. On clusters that have access to both a cluster-local and a site-wide scratch file system, this variable will point to the site-wide available scratch volume. On other sites it will just point to the same volume as $VSC_SCRATCH.
    • Node scratch ($VSC_SCRATCH_NODE)
      Every node has its own scratch space, which is completely separate from the other nodes. On many cluster nodes, this space is provided by a local hard drive or SSD. Every job automatically gets its own temporary directory on this node scratch, available through the environment variable $TMPDIR. $TMPDIR is guaranteed to be unique for each job.
      Note however that when your job requests multiple cores and these cores happen to be on the same node, this $TMPDIR is shared among those cores! Also, you cannot access this space once your job has ended. And on a supercomputer, a local hard disk may not be faster than a remote file system, which often has tens or hundreds of drives working together to provide disk capacity.
    • Global scratch ($VSC_SCRATCH_GLOBAL)
      We may or may not implement a VSC-wide scratch volume in the future, and the environment variable $VSC_SCRATCH_GLOBAL is reserved to point to that scratch volume. Currently it just points to the same volume as $VSC_SCRATCH or $VSC_SCRATCH_SITE.
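    To make the different scratch types concrete, a job script might stage data through the node-local scratch like this; the program and file names are purely illustrative:

    #!/bin/bash -l
    #PBS -l nodes=1
    cd $TMPDIR                              # node-local scratch, unique for this job
    cp $VSC_DATA/input.dat .                # stage input from the data directory
    ./my_program input.dat > output.dat     # do the actual work on fast local storage
    cp output.dat $VSC_SCRATCH/             # save results before the job (and $TMPDIR) ends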
    " -377,"","

    BEgrid has its own documentation web site, as it is a project at the federal level. Some useful links are:

    " -381,"","

    This is just some random text. Don't be worried if the remainder of this paragraph sounds like Latin to you, because it is Latin. Cras mattis consectetur purus sit amet fermentum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed posuere consectetur est at lobortis. Morbi leo risus, porta ac consectetur ac, vestibulum at eros. Cras mattis consectetur purus sit amet fermentum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed posuere consectetur est at lobortis. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.

    " -385,"","

    What I tried to do with the "Asset" box in the right column:

    • I included two pictures from our asset toolbox. What is shown are square thumbnails of the pictures.
    • I also included two PDFs that have no picture attached to them. They simply don't show up.


    " -387,"","

    Inline code with <code>...</code>

    We used inline code on the old vscentrum.be to clearly mark system commands etc. in the text.

    • For this we used the <code> tag.
    • There was support in the editor to set this tag.
    • It doesn't seem to work properly in the current editor. If the fragment of code contains a slash (/), the closing tag gets omitted.

    Example: At UAntwerpen you'll have to use module avail MATLAB and module load MATLAB/2014a respectively.

    However, if you enter both <code> blocks on the same line in an HTML file, the editor doesn't process them well: module avail MATLAB and <code>module load MATLAB.

    And this is inline code as a test.

    And this becomes a new pre-block:

    #!/bin/bash
    echo "Hello, world!"

    Code in <pre>...</pre>

    This was used a lot on the old vscentrum.be site to display fragments of code or output in a console window.

    • Readability of fragments is definitely better if a fixed-width font is used, as this is necessary to get correct alignment.
    • Formatting is important: line breaks should be respected. The problem with the CMS seems to be that the editor respects the line breaks and the database also stores them (since I can edit the code again), but the CMS removes them when generating the final HTML page, as the line breaks no longer appear in the resulting HTML code that is loaded into the browser.

    #!/bin/bash -l
    #PBS -l nodes=1:nehalem
    #PBS -l mem=4gb
    module load matlab
    cd $PBS_O_WORKDIR
    ...

    The <code> style in the editor

    In fact, the Code style of the editor works on a paragraph basis, and all it does is put the paragraph between <pre> and </pre> tags, so the problem mentioned above remains. The next text was edited in WYSIWYG mode:

    #!/bin/bash -l
    #PBS -l nodes=4:ivybridge
    ...

    Another editor bug is that it isn't possible to switch back to regular text mode at the end of a code fragment if that fragment is at the end of the text widget: the whole block is converted back to regular text instead and the formatting is no longer shown.

    A workaround might be to use multiple <pre> blocks?

    #!/bin/bash -l

    #PBS -l nodes=4:ivybridge

    ...

    No, because then you get multiple grey boxes...

    And with <br> and the <code> tag?

    #! /bin/bash -l
    #PBS -l nodes=4:ivybridge
    ...

    This is not ideal either, because not everything sits together in one box, but it is better than nothing...

    " -395,"Tier-1 infrastructure","

    Our first Tier-1 cluster, Muk, was installed in the spring of 2012 and became operational a few months later. This system is primarily optimised for processing large parallel computing tasks that need a high-speed interconnect.

    " -399,"","

    The list below gives an indication of which (scientific) software, libraries and compilers are available on TIER1 on 1 December 2014. For each package, the available version(s) are shown, as well as (if applicable) the compilers/libraries/options with which the software was compiled. Note that some software packages are only available when the end user demonstrates that they hold valid licenses to use this software on the TIER1 infrastructure of Ghent University.
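    On the cluster itself you can always query the module system for the current situation; for example (the version shown is just one entry from the list below):

    $ module avail 2>&1 | less                 # browse everything that is installed
    $ module avail Python                      # look for a specific package
    $ module load Python/2.7.6-ictce-5.5.0     # load a specific version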

      -
    • ABAQUS/6.12.1-linux-x86_64
    • -
    • ALADIN/36t1_op2bf1-ictce-4.1.13
    • -
    • ALADIN/36t1_op2bf1-ictce-4.1.13-strict
    • -
    • Allinea/4.1-32834-Redhat-6.0-x86_64
    • -
    • ANTLR/2.7.7-ictce-4.1.13
    • -
    • APR/0.9.18-ictce-4.1.13
    • -
    • APR/1.5.0-ictce-4.1.13
    • -
    • APR/1.5.0-ictce-5.5.0
    • -
    • APR-util/1.3.9-ictce-4.1.13
    • -
    • APR-util/1.5.3-ictce-4.1.13
    • -
    • APR-util/1.5.3-ictce-5.5.0
    • -
    • ASE/3.6.0.2515-ictce-4.1.13-Python-2.7.3
    • -
    • Autoconf/2.69-ictce-4.1.13
    • -
    • BEAGLE/20130408-ictce-4.0.6
    • -
    • beagle-lib/20120124-ictce-4.1.13
    • -
    • BEDTools/2.17.0-ictce-4.1.13
    • -
    • BEDTools/v2.17.0-ictce-4.1.13
    • -
    • Bison/2.5-ictce-4.1.13
    • -
    • Bison/2.6.5-ictce-4.1.13
    • -
    • Bison/2.7.1-ictce-5.5.0
    • -
    • Bison/2.7-ictce-4.1.13
    • -
    • Bison/2.7-ictce-5.5.0
    • -
    • Bison/3.0.2-intel-2014b
    • -
    • BLACS/1.1-gompi-1.1.0-no-OFED
    • -
    • Boost/1.51.0-ictce-4.1.13-Python-2.7.3
    • -
    • Boost/1.55.0-ictce-5.5.0-Python-2.7.6
    • -
    • Bowtie/1.0.0-ictce-4.1.13
    • -
    • Bowtie2/2.0.2-ictce-4.1.13
    • -
    • Bowtie2/2.1.0-ictce-5.5.0
    • -
    • BWA/0.6.2-ictce-4.1.13
    • -
    • bzip2/1.0.6-ictce-4.1.13
    • -
    • bzip2/1.0.6-ictce-5.5.0
    • -
    • bzip2/1.0.6-iomkl-4.6.13
    • -
    • CDO/1.6.2-ictce-5.5.0
    • -
    • CDO/1.6.3-ictce-5.5.0
    • -
    • Circos/0.64-ictce-5.5.0-Perl-5.18.2
    • -
    • CMake/2.8.10.2-ictce-4.0.6
    • -
    • CMake/2.8.10.2-ictce-4.1.13
    • -
    • CMake/2.8.12-ictce-5.5.0
    • -
    • CMake/2.8.4-ictce-4.1.13
    • -
    • CP2K/20130228-ictce-4.1.13
    • -
    • CP2K/20131211-ictce-5.5.0
    • -
    • CP2K/2.5.1-intel-2014b-psmp
    • -
    • Cufflinks/2.1.1-ictce-4.1.13
    • -
    • Cufflinks/2.1.1-ictce-5.5.0
    • -
    • cURL/7.28.1-ictce-4.1.13
    • -
    • cURL/7.28.1-ictce-5.5.0
    • -
    • cURL/7.33.0-ictce-4.1.13
    • -
    • cURL/7.34.0-ictce-5.5.0
    • -
    • cutadapt/1.3-ictce-4.1.13-Python-2.7.3
    • -
    • Cython/0.17.4-ictce-4.1.13-Python-2.7.3
    • -
    • Cython/0.19.2-ictce-5.5.0-Python-2.7.6
    • -
    • DB/4.7.25-ictce-4.1.13
    • -
    • DBD-mysql/4.023-ictce-4.1.13-Perl-5.16.3
    • -
    • Doxygen/1.8.1.1-ictce-4.1.13
    • -
    • Doxygen/1.8.2-ictce-4.1.13
    • -
    • Doxygen/1.8.3.1-ictce-4.1.13
    • -
    • Doxygen/1.8.3.1-ictce-5.5.0
    • -
    • Doxygen/1.8.6-ictce-5.5.0
    • -
    • e2fsprogs/1.42.7-ictce-4.1.13
    • -
    • EasyBuild/1.10.0(default)
    • -
    • EasyBuild/1.7.0
    • -
    • EasyBuild/1.8.2
    • -
    • EasyBuild/1.9.0
    • -
    • ed/1.9-ictce-4.1.13
    • -
    • Eigen/3.1.1-ictce-4.1.13
    • -
    • Eigen/3.2.0-ictce-5.5.0
    • -
    • ESMF/6.1.1-ictce-4.1.13
    • -
    • ESMF/6.1.1-ictce-5.5.0
    • -
    • expat/2.1.0-ictce-4.1.13
    • -
    • expat/2.1.0-ictce-5.5.0
    • -
    • fastahack/20110215-ictce-4.1.13
    • -
    • FFTW/3.3.1-gompi-1.1.0-no-OFED
    • -
    • FFTW/3.3.3-ictce-4.1.13
    • -
    • FFTW/3.3.3-ictce-4.1.13-single
    • -
    • FFTW/3.3.3-ictce-4.1.14
    • -
    • FFTW/3.3.3-ictce-4.1.14-single
    • -
    • FFTW/3.3.3-iomkl-4.6.13-single
    • -
    • FFTW/3.3.4-intel-2014b
    • -
    • flex/2.5.35-ictce-4.1.13
    • -
    • flex/2.5.37-ictce-4.1.13
    • -
    • flex/2.5.37-ictce-5.5.0
    • -
    • flex/2.5.37-intel-2014b
    • -
    • flex/2.5.39-intel-2014b
    • -
    • FLTK/1.3.2-ictce-4.1.13
    • -
    • FLUENT/14.5
    • -
    • FLUENT/15.0.7
    • -
    • fontconfig/2.11.1-ictce-5.5.0
    • -
    • freetype/2.4.11-ictce-4.1.13
    • -
    • freetype/2.4.11-ictce-5.5.0
    • -
    • g2clib/1.4.0-ictce-4.1.13
    • -
    • g2clib/1.4.0-ictce-5.5.0
    • -
    • g2lib/1.4.0-ictce-4.1.13
    • -
    • g2lib/1.4.0-ictce-5.5.0
    • -
    • Gaussian/g09_B.01-ictce-4.1.13-amd64-gpfs-I12
    • -
    • Gaussian/g09_D.01-ictce-5.5.0-amd64-gpfs
    • -
    • GCC/4.6.3
    • -
    • GCC/4.8.3
    • -
    • GD/2.52-ictce-5.5.0-Perl-5.18.2
    • -
    • GDAL/1.9.2-ictce-4.1.13
    • -
    • GDAL/1.9.2-ictce-5.5.0
    • -
    • GLib/2.34.3-ictce-4.1.13
    • -
    • glproto/1.4.16-ictce-4.1.13
    • -
    • GMAP/2013-11-27-ictce-5.5.0
    • -
    • gnuplot/4.4.4-ictce-4.1.13
    • -
    • gompi/1.1.0-no-OFED
    • -
    • Greenlet/0.4.0-ictce-4.1.13-Python-2.7.3
    • -
    • grib_api/1.9.18-ictce-4.1.13
    • -
    • GROMACS/4.6.5-ictce-5.5.0-hybrid
    • -
    • GROMACS/4.6.5-ictce-5.5.0-mpi
    • -
    • GSL/1.16-ictce-4.1.13
    • -
    • GSL/1.16-ictce-5.5.0
    • -
    • gzip/1.4
    • -
    • h5py/2.1.0-ictce-4.1.13-Python-2.7.3
    • -
    • Hadoop/0.9.9-rdma
    • -
    • Hadoop/2.0.0-cdh4.4.0
    • -
    • Hadoop/2.0.0-cdh4.5.0
    • -
    • Hadoop/2.3.0-cdh5.0.0
    • -
    • Hadoop/2.x-0.9.1-rdma
    • -
    • hanythingondemand/2.1.1-ictce-5.5.0-Python-2.7.6
    • -
    • hanythingondemand/2.1.4-ictce-5.5.0-Python-2.7.6
    • -
    • HDF/4.2.8-ictce-4.1.13
    • -
    • HDF/4.2.8-ictce-5.5.0
    • -
    • HDF5/1.8.10-ictce-4.1.13-gpfs-mt
    • -
    • HDF5/1.8.10-ictce-4.1.13-parallel-gpfs
    • -
    • HDF5/1.8.10-ictce-5.5.0-gpfs
    • -
    • HDF5/1.8.10-ictce-5.5.0-gpfs-mt
    • -
    • HDF5/1.8.12-ictce-5.5.0
    • -
    • HDF5/1.8.9-ictce-4.1.13
    • -
    • hwloc/1.6-iccifort-2011.13.367
    • -
    • hwloc/1.9-GCC-4.8.3
    • -
    • icc/11.1.069
    • -
    • icc/11.1.073
    • -
    • icc/11.1.075
    • -
    • icc/2011.13.367
    • -
    • icc/2011.6.233
    • -
    • icc/2013.5.192
    • -
    • icc/2013.5.192-GCC-4.8.3
    • -
    • icc/2013_sp1.2.144
    • -
    • iccifort/2011.13.367
    • -
    • iccifort/2013.5.192-GCC-4.8.3
    • -
    • ictce/3.2.1.015.u4
    • -
    • ictce/3.2.2.u3
    • -
    • ictce/4.0.6
    • -
    • ictce/4.1.13
    • -
    • ictce/4.1.14
    • -
    • ictce/5.5.0
    • -
    • ictce/6.2.5
    • -
    • ifort/11.1.069
    • -
    • ifort/11.1.073
    • -
    • ifort/11.1.075
    • -
    • ifort/2011.13.367
    • -
    • ifort/2011.6.233
    • -
    • ifort/2013.5.192
    • -
    • ifort/2013.5.192-GCC-4.8.3
    • -
    • ifort/2013_sp1.2.144
    • -
    • iimpi/5.5.3-GCC-4.8.3
    • -
    • imkl/10.2.4.032
    • -
    • imkl/10.2.6.038
    • -
    • imkl/10.3.12.361
    • -
    • imkl/10.3.12.361-impi-4.1.0.030
    • -
    • imkl/10.3.12.361-MVAPICH2-1.9
    • -
    • imkl/10.3.12.361-OpenMPI-1.6.3
    • -
    • imkl/10.3.6.233
    • -
    • imkl/11.0.5.192
    • -
    • imkl/11.1.2.144
    • -
    • imkl/11.1.2.144-iimpi-5.5.3-GCC-4.8.3
    • -
    • impi/3.2.2.006
    • -
    • impi/4.0.0.028
    • -
    • impi/4.0.2.003
    • -
    • impi/4.1.0.027
    • -
    • impi/4.1.0.030
    • -
    • impi/4.1.1.036
    • -
    • impi/4.1.3.049
    • -
    • impi/4.1.3.049-GCC-4.8.3
    • -
    • impi/4.1.3.049-iccifort-2013.5.192-GCC-4.8.3
    • -
    • intel/2014b
    • -
    • iomkl/4.6.13
    • -
    • IPython/0.13.1-ictce-4.1.13-Python-2.7.3
    • -
    • JasPer/1.900.1-ictce-4.1.13
    • -
    • JasPer/1.900.1-ictce-5.5.0
    • -
    • Java/1.7.0_10
    • -
    • Java/1.7.0_15
    • -
    • Java/1.7.0_17
    • -
    • Java/1.7.0_40
    • -
    • Java/1.7.0_60
    • -
    • Java/1.8.0_20
    • -
    • LAPACK/3.4.0-gompi-1.1.0-no-OFED
    • -
    • libdrm/2.4.27-ictce-4.1.13
    • -
    • libffi/3.0.13-ictce-4.1.13
    • -
    • libffi/3.0.13-ictce-5.5.0
    • -
    • libgd/2.1.0-ictce-5.5.0
    • -
    • Libint/1.1.4-ictce-4.1.13
    • -
    • Libint/1.1.4-ictce-5.5.0
    • -
    • libint2/2.0.3-intel-2014b
    • -
    • libjpeg-turbo/1.3.0-ictce-4.1.13
    • -
    • libjpeg-turbo/1.3.0-ictce-5.5.0
    • -
    • libpciaccess/0.13.1-ictce-4.1.13
    • -
    • libpng/1.6.10-ictce-5.5.0
    • -
    • libpng/1.6.3-ictce-4.1.13
    • -
    • libpng/1.6.6-ictce-4.1.13
    • -
    • libpng/1.6.6-ictce-5.5.0
    • -
    • libpthread-stubs/0.3-ictce-4.1.13
    • -
    • libreadline/6.2-ictce-4.1.13
    • -
    • libreadline/6.2-ictce-5.5.0
    • -
    • libreadline/6.2-intel-2014b
    • -
    • libreadline/6.2-iomkl-4.6.13
    • -
    • libxc/2.0.1-ictce-5.5.0
    • -
    • libxc/2.2.0-intel-2014b
    • -
    • libxml2/2.8.0-ictce-4.1.13-Python-2.7.3
    • -
    • libxml2/2.9.0-ictce-4.1.13
    • -
    • libxml2/2.9.1-ictce-4.1.13
    • -
    • libxml2/2.9.1-ictce-5.5.0
    • -
    • libXp/1.0.1
    • -
    • libXp/1.0.1-ictce-4.1.13
    • -
    • M4/1.4.16-ictce-3.2.2.u3
    • -
    • M4/1.4.16-ictce-4.1.13
    • -
    • M4/1.4.16-ictce-5.5.0
    • -
    • M4/1.4.17-ictce-5.5.0
    • -
    • M4/1.4.17-intel-2014b
    • -
    • makedepend/1.0.4-ictce-4.1.13
    • -
    • makedepend/1.0.4-ictce-5.5.0
    • -
    • MariaDB/5.5.29-ictce-4.1.13
    • -
    • MATLAB/2010b
    • -
    • MATLAB/2012b
    • -
    • Mesa/7.11.2-ictce-4.1.13-Python-2.7.3
    • -
    • mpi4py/1.3-ictce-4.1.13-Python-2.7.3
    • -
    • MrBayes/3.2.0-ictce-4.1.13
    • -
    • MVAPICH2/1.9-iccifort-2011.13.367
    • -
    • NASM/2.07-ictce-4.1.13
    • -
    • NASM/2.07-ictce-5.5.0
    • -
    • NCL/6.1.2-ictce-4.1.13
    • -
    • NCL/6.1.2-ictce-5.5.0
    • -
    • NCO/4.4.4-ictce-4.1.13
    • -
    • ncurses/5.9-ictce-4.1.13
    • -
    • ncurses/5.9-ictce-5.5.0
    • -
    • ncurses/5.9-intel-2014b
    • -
    • ncurses/5.9-iomkl-4.6.13
    • -
    • ncview/2.1.2-ictce-4.1.13
    • -
    • neon/0.30.0-ictce-4.1.13
    • -
    • netaddr/0.7.10-ictce-5.5.0-Python-2.7.6
    • -
    • netCDF/4.1.3-ictce-4.1.13
    • -
    • netCDF/4.2.1.1-ictce-4.1.13
    • -
    • netCDF/4.2.1.1-ictce-4.1.13-mt
    • -
    • netCDF/4.2.1.1-ictce-5.5.0
    • -
    • netCDF/4.2.1.1-ictce-5.5.0-mt
    • -
    • netCDF/4.3.0-ictce-5.5.0
    • -
    • netcdf4-python/1.0.7-ictce-5.5.0-Python-2.7.6
    • -
    • netCDF-C++/4.2-ictce-4.1.13
    • -
    • netCDF-C++/4.2-ictce-4.1.13-mt
    • -
    • netCDF-C++/4.2-ictce-5.5.0-mt
    • -
    • netCDF-Fortran/4.2-ictce-4.1.13
    • -
    • netCDF-Fortran/4.2-ictce-4.1.13-mt
    • -
    • netCDF-Fortran/4.2-ictce-5.5.0
    • -
    • netCDF-Fortran/4.2-ictce-5.5.0-mt
    • -
    • netifaces/0.8-ictce-5.5.0-Python-2.7.6
    • -
    • NEURON/7.2-ictce-4.1.13
    • -
    • numactl/2.0.9-GCC-4.8.3
    • -
    • numexpr/2.0.1-ictce-4.1.13-Python-2.7.3
    • -
    • numexpr/2.2.2-ictce-5.5.0-Python-2.7.6
    • -
    • NWChem/6.1.1-ictce-4.1.13-2012-06-27-Python-2.7.3
    • -
    • OpenBLAS/0.2.9-GCC-4.8.3-LAPACK-3.5.0
    • -
    • OpenFOAM/2.1.1-ictce-4.1.13
    • -
    • OpenFOAM/2.2.0-ictce-4.1.13
    • -
    • OpenFOAM/2.3.0-intel-2014b
    • -
    • OpenMPI/1.4.5-GCC-4.6.3-no-OFED
    • -
    • OpenMPI/1.6.3-iccifort-2011.13.367
    • -
    • OpenPGM/5.2.122-ictce-4.1.13
    • -
    • OpenPGM/5.2.122-ictce-5.5.0
    • -
    • PAML/4.7-ictce-4.1.13
    • -
    • pandas/0.11.0-ictce-4.1.13-Python-2.7.3
    • -
    • pandas/0.12.0-ictce-5.5.0-Python-2.7.6
    • -
    • pandas/0.13.1-ictce-5.5.0-Python-2.7.6
    • -
    • Paraview/4.1.0-ictce-4.1.13
    • -
    • paycheck/1.0.2
    • -
    • paycheck/1.0.2-ictce-4.1.13-Python-2.7.3
    • -
    • paycheck/1.0.2-iomkl-4.6.13-Python-2.7.3
    • -
    • pbs_python/4.3.5-ictce-5.5.0-Python-2.7.6
    • -
    • Perl/5.16.3-ictce-4.1.13
    • -
    • Perl/5.18.2-ictce-5.5.0
    • -
    • picard/1.100-ictce-4.1.13
    • -
    • Primer3/2.3.0-ictce-4.1.13
    • -
    • printproto/1.0.5
    • -
    • printproto/1.0.5-ictce-4.1.13
    • -
    • PROJ.4/4.8.0-ictce-5.5.0
    • -
    • pyproj/1.9.3-ictce-5.5.0-Python-2.7.6
    • -
    • pyTables/2.4.0-ictce-4.1.13-Python-2.7.3
    • -
    • pyTables/3.0.0-ictce-5.5.0-Python-2.7.6
    • -
    • Python/2.5.6-ictce-4.1.13-bare
    • -
    • Python/2.7.3-ictce-4.1.13(default)
    • -
    • Python/2.7.3-iomkl-4.6.13
    • -
    • Python/2.7.6-ictce-5.5.0
    • -
    • PyZMQ/14.0.1-ictce-5.5.0-Python-2.7.6
    • -
    • PyZMQ/2.2.0.1-ictce-4.1.13-Python-2.7.3
    • -
    • Qt/4.8.5-ictce-4.1.13
    • -
    • QuantumESPRESSO/5.0.2-ictce-5.5.0-hybrid
    • -
    • QuantumESPRESSO/5.0.3-ictce-5.5.0-hybrid
    • -
    • R/3.0.2-ictce-4.1.13
    • -
    • R/3.0.2-ictce-5.5.0
    • -
    • SAMtools/0.1.18-ictce-4.1.13
    • -
    • SAMtools/0.1.19-ictce-5.5.0
    • -
    • Schrodinger/2014-2_Linux-x86_64
    • -
    • SCOOP/0.6.0.final-ictce-4.1.13-Python-2.7.3
    • -
    • SCOTCH/6.0.0_esmumps-intel-2014b
    • -
    • scripts/3.0.0
    • -
    • scripts/4.0.0
    • -
    • setuptools/1.4.2
    • -
    • Spark/1.0.0
    • -
    • SQLite/3.8.1-ictce-4.1.13
    • -
    • SQLite/3.8.4.1-ictce-4.1.13
    • -
    • SQLite/3.8.4.1-ictce-5.5.0
    • -
    • subversion/1.6.11-ictce-4.1.13
    • -
    • subversion/1.6.23-ictce-4.1.13
    • -
    • subversion/1.8.8-ictce-4.1.13
    • -
    • SURF/1.0-ictce-4.1.13-LINUXAMD64
    • -
    • Szip/2.1-ictce-4.1.13
    • -
    • Szip/2.1-ictce-5.5.0
    • -
    • Tachyon/0.5.0
    • -
    • Tcl/8.5.12-ictce-4.1.13
    • -
    • Tcl/8.6.1-ictce-4.1.13
    • -
    • Tcl/8.6.1-ictce-5.5.0
    • -
    • tcsh/6.18.01-ictce-4.1.13
    • -
    • tcsh/6.18.01-ictce-5.5.0
    • -
    • Tk/8.5.12-ictce-4.1.13
    • -
    • TopHat/2.0.10-ictce-5.5.0
    • -
    • TopHat/2.0.8-ictce-4.1.13
    • -
    • UDUNITS/2.1.24-ictce-4.1.13
    • -
    • UDUNITS/2.1.24-ictce-5.5.0
    • -
    • UNAFold/3.8-ictce-4.1.13
    • -
    • util-linux/2.24-ictce-5.5.0
    • -
    • uuid/1.6.2-ictce-4.1.13
    • -
    • Valgrind/3.8.1
    • -
    • VarScan/v2.3.6-ictce-4.1.13
    • -
    • VASP/5.2.11-ictce-4.1.13-mt
    • -
    • VASP/5.3.2-ictce-4.1.13-vtst-3.0b-20121111-mt
    • -
    • VASP/5.3.3-ictce-3.2.1.015.u4-mt
    • -
    • VASP/5.3.3-ictce-4.1.13-mt
    • -
    • VASP/5.3.3-ictce-4.1.13-mt-dftd3
    • -
    • VASP/5.3.3-ictce-4.1.13-mt-no-DNGXhalf
    • -
    • VASP/5.3.3-ictce-4.1.13-vtst-3.0b-20121111-mt
    • -
    • VASP/5.3.3-ictce-4.1.13-vtst-3.0c-20130327-mt
    • -
    • VASP/5.3.3-ictce-5.5.0-mt
    • -
    • VASP/5.3.3-ictce-6.2.5-mt
    • -
    • VASP/5.3.5-intel-2014b-vtst-3.1-20140328-mt-vaspsol2.01
    • -
    • VASP/5.3.5-intel-2014b-vtst-3.1-20140328-mt-vaspsol2.01-gamma
    • -
    • VMD/1.9.1-ictce-4.1.13
    • -
    • vsc-base/1.7.3
    • -
    • vsc-base/1.9.1
    • -
    • vsc-mympirun/3.2.3
    • -
    • vsc-mympirun/3.3.0
    • -
    • vsc-mympirun/3.4.2
    • -
    • VSC-tools/0.1.2-ictce-4.1.13-Python-2.7.3
    • -
    • VSC-tools/0.1.5
    • -
    • VSC-tools/0.1.5-ictce-4.1.13-scoop
    • -
    • VSC-tools/1.7.1
    • -
    • VTK/6.0.0-ictce-4.1.13-Python-2.7.3
    • -
    • WIEN2k/14.1-intel-2014b
    • -
    • WPS/3.5.1-ictce-4.1.13-dmpar
    • -
    • WRF/3.4-ictce-5.5.0-dmpar
    • -
    • WRF/3.5.1-ictce-4.1.13-dmpar
    • -
    • XML-LibXML/2.0018-ictce-4.1.13-Perl-5.16.3
    • -
    • XML-Simple/2.20-ictce-4.1.13-Perl-5.16.3
    • -
    • xorg-macros/1.17
    • -
    • xorg-macros/1.17-ictce-4.1.13
    • -
    • YAXT/0.2.1-ictce-5.5.0
    • -
    • ZeroMQ/2.2.0-ictce-4.1.13
    • -
    • ZeroMQ/4.0.3-ictce-5.5.0
    • -
    • zlib/1.2.7-ictce-4.1.13
    • -
    • zlib/1.2.7-ictce-5.5.0
    • -
    • zlib/1.2.7-iomkl-4.6.13
    • -
    • zlib/1.2.8-ictce-5.5.0
    • -
    " -403,"VSC Echo newsletter","

    VSC Echo is e-mailed three times a year to all subscribers. The newsletter contains updates about our infrastructure, training programs and other events and highlights some of the results obtained by users of our clusters.

    " -407,"Mission & vision","

    Upon the establishment of the VSC, the Flemish government assigned us a number of tasks.

    " -409,"The VSC in Flanders","

    The VSC is a partnership of five Flemish university associations. The infrastructure is spread over four locations: Antwerp, Brussels, Ghent and Louvain.

    " -411,"Our history","

    Since its establishment in 2007, the VSC has evolved and grown considerably. -

    " -413,"Publications","

    In this section you’ll find all previous editions of our newsletter and various other publications issued by the VSC. -

    " -415,"Organisation structure","

    In this section you can find more information about the structure of our organisation and the various advisory committees. -

    " -417,"Press material","

    Would you like to write about our services? On this page you will find useful material such as our logo or recent press releases. -

    " -451,"","

    On 25 October 2012, the VSC held the official inauguration of the first Flemish Tier-1 cluster at Ghent University, where the cluster is also housed.

    " -455,"","

    On 25 October 2012 the VSC inaugurated the first Flemish tier 1 compute cluster. The cluster is housed in the data centre of Ghent University.

    " -459,"","

    Programma / Programme

    The programme was followed by the official commissioning of the cluster in the data centre and a reception.

    " -461,"Links","" -465,"","

    We organise regular training sessions on many HPC-related topics. The level ranges from introductory to advanced. We also actively promote some courses organised elsewhere. The courses are open to participants from the university associations. Many are also open to external users (the limitations are often caused by the software licenses of the packages used during the hands-on sessions). For further information, you can contact the course coordinator Geert Jan Bex.

    " -467,"Previous events and training sessions","

    We keep links to our previous events and training sessions. Materials used during the course can also be found on those pages.

    " -469,"","

    More questions? Contact the course coordinator or one of the other coordinators.

    " -471,"","

    On your application form, you will be asked to indicate the scientific domain of your application according to the NWO classification. Below we present the list of domains and subdomains. You only need to give the domain in your application, but the subdomains may make it easier to determine the most suitable domain for your application.

      -
    • Archaeology -
        -
      • Prehistory
      • -
      • Antiquity and late antiquity
      • -
      • Oriental archaeology
      • -
      • Mediaeval archaeology
      • -
      • Industrial archaeology
      • -
      • Preservation and restoration, museums
      • -
      • Methods and techniques
      • -
      • Archeology, other
      • -
    • -
    • Area studies -
        -
      • Asian languages and literature
      • -
      • Asian religions and philosophies
      • -
      • Jewish studies
      • -
      • Islamic studies
      • -
      • Iranian and Armenian studies
      • -
      • Central Asian studies
      • -
      • Indian studies
      • -
      • South-east Asian studies
      • -
      • Sinology
      • -
      • Japanese studies
      • -
      • Area studies, other
      • -
    • -
    • Art and architecture -
        -
      • Pre-historic and pre-classical art
      • -
      • Antiquity and late antiquity art
      • -
      • Mediaeval art
      • -
      • Renaissance and Baroque art
      • -
      • Modern and contemporary art
      • -
      • Oriental art and architecture
      • -
      • Iconography
      • -
      • History of architecture
      • -
      • Urban studies
      • -
      • Preservation and restoration of cultural heritage
      • -
      • Museums and collections
      • -
      • Art and architecture, other
      • -
    • -
    • Astronomy, astrophysics -
        -
      • Planetary science
      • -
      • Astronomy, astrophysics, other
      • -
    • -
    • Biology -
        -
      • Microbiology
      • -
      • Biogeography, taxonomy
      • -
      • Animal ethology, animal psychology
      • -
      • Ecology
      • -
      • Botany
      • -
      • Zoology
      • -
      • Toxicology (plants, invertebrates)
      • -
      • Biotechnology
      • -
      • Biology, other
      • -
    • -
    • Business administration -
        -
      • Business administration
      • -
    • -
    • Chemistry -
        -
      • Analytical chemistry
      • -
      • Macromolecular chemistry, polymer chemistry
      • -
      • Organic chemistry
      • -
      • Inorganic chemistry
      • -
      • Physical chemistry
      • -
      • Catalysis
      • -
      • Theoretical chemistry, quantum chemistry
      • -
      • Chemistry, other
      • -
    • -
    • Communication science -
        -
      • Communication science
      • -
    • -
    • Computer science -
        -
      • Computer systems, architectures, networks
      • -
      • Software, algorithms, control systems
      • -
      • Theoretical computer science
      • -
      • Information systems, databases
      • -
      • User interfaces, multimedia
      • -
      • Artificial intelligence, expert systems
      • -
      • Computer graphics
      • -
      • Computer simulation, virtual reality
      • -
      • Computer science, other
      • -
      • Bioinformatics/biostatistics, biomathematics, biomechanics
      • -
    • -
    • Computers and the humanities -
        -
      • Software for humanities
      • -
      • Textual and content analysis
      • -
      • Textual and linguistic corpora
      • -
      • Databases for humanities
      • -
      • Hypertexts and multimedia
      • -
      • Computers and the humanities, other
      • -
    • -
    • Cultural anthropology -
        -
      • Cultural anthropology
      • -
    • -
    • Demography -
        -
      • Demography
      • -
    • -
    • Development studies -
        -
      • Development studies
      • -
    • -
    • Earth sciences -
        -
      • Geochemistry, geophysics
      • -
      • Paleontology, stratigraphy
      • -
      • Geodynamics, sedimentation, tectonics, geomorphology
      • -
      • Petrology, mineralogy, sedimentology
      • -
      • Atmosphere sciences
      • -
      • Hydrosphere sciences
      • -
      • Geodesy, physical geography
      • -
      • Earth sciences, other
      • -
    • -
    • Economy -
        -
      • Microeconomics
      • -
      • Macroeconomics
      • -
      • Econometrics
      • -
    • -
    • Environmental science -
        -
      • Environmental science
      • -
    • -
    • Gender studies -
        -
      • Gender studies
      • -
    • -
    • Geography / planning -
        -
      • Geography
      • -
      • Planning
      • -
    • -
    • History -
        -
      • Pre-classical civilizations
      • -
      • Antiquity and late antiquity history
      • -
      • Mediaeval history
      • -
      • Modern and contemporary history
      • -
      • Social and economic history
      • -
      • Cultural history
      • -
      • Comparative political history
      • -
      • Librarianship, archive studies
      • -
      • History, other
      • -
      • History and philosophy of science and technology
      • -
      • History of ancient science
      • -
      • History of mediaeval science
      • -
      • History of modern science
      • -
      • History of contemporary science
      • -
      • History of technology
      • -
      • History of Science, other
      • -
      • History of religions
      • -
      • History of Christianity
      • -
      • Theology and history of theology
      • -
    • -
    • History of science -
        -
      • History of ancient science
      • -
      • History of mediaeval science
      • -
      • History of modern science
      • -
      • History of contemporary science
      • -
      • History of technology
      • -
      • Science museums and collections
      • -
      • History of science, other
      • -
    • -
    • Language and literature -
        -
      • Pre-classical philology and literature
      • -
      • Greek and Latin philology and literature
      • -
      • Mediaeval and Neo-Latin languages and literature
      • -
      • Mediaeval European languages and literature
      • -
      • Modern European languages and literature
      • -
      • Anglo-American literature
      • -
      • Hispanic and Brazilian literature
      • -
      • African languages and literature
      • -
      • Comparative literature
      • -
      • Language and literature, other
      • -
    • -
    • Law -
        -
      • Private law
      • -
      • Constitutional and Administrative law
      • -
      • International and European law
      • -
      • Criminal law and Criminology
      • -
    • -
    • Life sciences -
        -
      • Bioinformatics/biostatistics, biomathematics, biomechanics
      • -
      • Biophysics, clinical physics
      • -
      • Biochemistry
      • -
      • Genetics
      • -
      • Histology, cell biology
      • -
      • Anatomy, morphology
      • -
      • Physiology
      • -
      • Immunology, serology
      • -
      • Life sciences, other
      • -
    • -
    • Life sciences and medicine -
        -
      • History and philosophy of the life sciences, ethics and evolution biology
      • -
    • -
    • Linguistics -
        -
      • Phonetics and phonology
      • -
      • Morphology, grammar and syntax
      • -
      • Semantics and philosophy of language
      • -
      • Linguistic typology and comparative linguistics
      • -
      • Dialectology, linguistic geography, sociolinguistic
      • -
      • Lexicon and lexicography
      • -
      • Psycholinguistics and neurolinguistics
      • -
      • Computational linguistics and philology
      • -
      • Linguistic statistics
      • -
      • Language teaching and acquisition
      • -
      • Translation studies
      • -
      • Linguistics, other
      • -
    • -
    • Medicine -
        -
      • Pathology, pathological anatomy
      • -
      • Organs and organ systems
      • -
      • Medical specialisms
      • -
      • Health sciences
      • -
      • Kinesiology
      • -
      • Gerontology
      • -
      • Nutrition
      • -
      • Epidemiology
      • -
      • Health Services Research
      • -
      • Health law
      • -
      • Health economics
      • -
      • Medical sociology
      • -
      • Medicine, other
      • -
    • -
    • Mathematics -
        -
      • Logic, set theory and arithmetic
      • -
      • Algebra, group theory
      • -
      • Functions, differential equations
      • -
      • Fourier analysis, functional analysis
      • -
      • Geometry, topology
      • -
      • Probability theory, statistics
      • -
      • Operations research
      • -
      • Numerical analysis
      • -
      • Mathematics, other
      • -
    • -
    • Music, theatre, performing arts and media -
        -
      • Ethnomusicology
      • -
      • History of music and musical iconography
      • -
      • Musicology
      • -
      • Opera and dance
      • -
      • Theatre studies and iconography
      • -
      • Film, photography and audio-visual media
      • -
      • Journalism and mass communications
      • -
      • Media studies
      • -
      • Music, theatre, performing arts and media, other
      • -
    • -
    • Pedagogics -
        -
      • Pedagogics
      • -
    • -
    • Philosophy -
        -
      • Metaphysics, theoretical philosophy
      • -
      • Ethics, moral philosophy
      • -
      • Logic and history of logic
      • -
      • Epistemology, philosophy of science
      • -
      • Aesthetics, philosophy of art
      • -
      • Philosophy of language, semiotics
      • -
      • History of ideas and intellectual history
      • -
      • History of ancient and mediaeval philosophy
      • -
      • History of modern and contemporary philosophy
      • -
      • History of political and economic theory
      • -
      • Philosophy, other
      • -
      • History and philosophy of science and technology
      • -
    • -
    • Physics -
        -
      • Subatomic physics
      • -
      • Nanophysics/technology
      • -
      • Condensed matter and optical physics
      • -
      • Processes in living systems
      • -
      • Fusion physics
      • -
      • Phenomenological physics
      • -
      • Other physics
      • -
      • Theoretical physics
      • -
    • -
    • Psychology -
        -
      • Clinical Psychology
      • -
      • Biological and Medical Psychology
      • -
      • Developmental Psychology
      • -
      • Psychonomics and Cognitive Psychology
      • -
      • Social and Organizational Psychology
      • -
      • Psychometrics
      • -
    • -
    • Public administration and political science -
        -
      • Public administration
      • -
      • Political science
      • -
    • -
    • Religious studies and theology -
        -
      • History of religions
      • -
      • History of Christianity
      • -
      • Theology and history of theology
      • -
      • Bible studies
      • -
      • Religious studies and theology, other
      • -
    • -
    • Science of Teaching -
        -
      • Science of Teaching
      • -
    • -
    • Science and technology -
        -
      • History and philosophy of science and technology
      • -
    • -
    • Sociology -
        -
      • Sociology
      • -
    • -
    • Technology -
        -
      • Materials technology
      • -
      • Mechanical engineering
      • -
      • Electrical engineering
      • -
      • Civil engineering
      • -
      • Chemical technology, process technology
      • -
      • Geotechnics
      • -
      • Technology assessment
      • -
      • Nanotechnology
      • -
      • Technology, other
      • -
    • -
    • Veterinary medicine -
        -
      • Veterinary medicine
      • -
    • -


      -

    " -475,"","" -477,"","

    - \"\" -

    - PERSMEDEDELING VAN VICEMINISTER-PRESIDENT INGRID LIETEN
    - VLAAMS MINISTER VAN INNOVATIE, OVERHEIDSINVESTERINGEN, MEDIA EN ARMOEDEBESTRIJDING
    -

    - Donderdag 25 oktober 2012 -

    - Eerste TIER 1 Supercomputer wordt in gebruik genomen aan de UGent. -

    - Vandaag wordt aan de UGent de eerste Tier 1 supercomputer van het Vlaams ComputerCentrum (VSC) plechtig in gebruik genomen. De supercomputer is een initiatief van de Vlaamse overheid om aan onderzoekers in Vlaanderen een bijzonder krachtige rekeninfrastructuur ter beschikking te stellen om zo beter het hoofd te kunnen bieden aan de maatschappelijke uitdagingen war we vandaag voor staan.“Het VSC moet ‘high performance computing’ toegankelijk maken voor kennisinstellingen en bedrijven. Hierdoor kunnen doorbraken gerealiseerd worden in domeinen als gezondheidszorg, chemie, en milieu”, zegt Ingrid Lieten. -

    -
    - In de internationale onderzoekswereld zijn de supercomputers niet meer weg te denken. Deze grote rekeninfrastructuren waren recent een noodzakelijke schakel in de ontdekking van het Higgsdeeltje. Hun rekencapaciteit laat toe steeds beter de werkelijkheid te simuleren. Hierdoor is een nieuwe manier om onderzoek te verrichten ontstaan, met belangrijke toepassingen voor onze economie en onze samenleving. -

    - “Dankzij supercomputers worden weersvoorspellingen over langere perioden steeds betrouwbaarder, of kunnen klimaatveranderingen en natuurrampen beter voorspeld worden. Auto’s worden veiliger omdat de constructeurs het verloop van botsingen en de impact op passagiers in detail kunnen simuleren. Ook aan de evolutie naar geneeskunde op maat van de patiënt, kan de supercomputer fundamenteel bijdragen. De ontwikkeling van geneesmiddelen gebeurt namelijk voor een groot deel via simulaties van chemische reacties”, zegt Ingrid Lieten. -

    - Het Vlaamse Supercomputer Centrum staat open voor alle Vlaamse onderzoekers, zowel uit de kennisinstellingen en strategische onderzoekscentra als uit de bedrijven. Het levert opportuniteiten voor universiteiten en industrie, maar ook voor overheden, mutualiteiten en andere zorgorganisaties. De supercomputer moet een belangrijke bijdrage leveren aan de zoektocht naar oplossingen voor de grote maatschappelijke uitdagingen, en dit in de meest uiteenlopende domeinen. Zo kan de supercomputer nieuwe geneesmiddelen ontwikkelen of demografische evoluties voor humane en sociale wetenschappen analyseren, zoals de vergrijzing en hoe daarmee om te gaan. Maar de supercomputer zal ook ingezet worden om state of the art windmolens te ontwerpen en ingewikkelde modellen te berekenen voor het voorspellen van klimaatsveranderingen. -

    - Om de mogelijkheden van de supercomputer beter bekend te maken en het gebruik te stimuleren in Vlaanderen, krijgt de Herculesstichting de opdracht om het Vlaamse Supercomputer Centrum actief te promoten en opleidingen te voorzien. De Herculesstichting is het Vlaamse agentschap voor de financiering van middelzware en zware infrastructuur voor fundamenteel en strategisch basisonderzoek. Zij zullen ervoor zorgen dat associaties, kennisinstellingen, SOCs, het bedrijfsleven, enz. even vlot toegang krijgen tot de TIER1 supercomputer. De huisvesting en technische exploitatie blijven bij de associaties. -

    - “Met de ingebruikname van de TIER1 staat Vlaanderen nu echt op de kaart in Europa wat betreft ‘high performance computing’. Vlaamse onderzoekers krijgen de mogelijkheid om aan te sluiten bij belangrijke Europese onderzoeksprojecten, zowel op het vlak van fundamenteel als van toegepast onderzoek”, zegt Ingrid Lieten. -

    - Het Vlaams Supercomputer Centrum beheert zowel de zogenaamde ‘TIER2’ computers, die lokaal bij de universiteiten staan, als de ‘TIER1’ computer, die voor nog complexere toepassingen gebruikt wordt. -

    -Persinfo:

    - Lot Wildemeersch, woordvoerster Ingrid Lieten
    - 0477 810 176 | lot.wildemeersch@vlaanderen.be
    - www.ingridlieten.be -

    - \"\" -

    " -479,"","

    - \"\"

    " -481,""," - - - - - - -
    \"Logo - March 23 2009
    Launch Flemish Supercomputer Centre -

    The official launch took place on 23 March 2009 in the Promotiezaal of the Universiteitshal of the K.U.Leuven, Naamsestraat 22, 3000 Leuven. -

    The press mentioning the VSC launch event:

    • An article on the web site of Knack (in Dutch)
    • An article in the French edition of datanews, 24 March 2009 (in French)

    The images at the top of this page are courtesy of NUMECA International and research groups at Antwerp University, the Vrije Universiteit Brussel and the KU Leuven. -

    " -483,"","

    The programme contains links to some of the presentations. The copyright for the presentations remains with the original authors and not with the VSC. Reproducing parts of these presentations or using them in other presentations can only be done with the agreement of the author(s) of the presentation.

    14u15 Scientific program
    14u15 Dr. ir. Kurt Lust (Vlaams Supercomputer Centrum). Presentation of the VSC
    Presentation (PDF)
    14u30 Prof. dr. Patrick Bultinck (Universiteit Gent). In silico Chemistry: Quantum Chemistry and Supercomputers
    Presentation (PDF)
    14u45 Prof. dr. Wim Vanroose (Universiteit Antwerpen). Large scale calculations of molecules in laser fields
    Presentation (PDF)
    15u00 Prof. dr. Stefaan Tavernier (Vrije Universiteit Brussel). Grid applications in particle and astroparticle physics: The CMS and IceCube projects
    Presentation (PDF)
    15u15 Prof. dr. Dirk Van den Poel (Universiteit Gent). Research using HPC capabilities in the field of economics/business & management science
    Presentation (PDF)
    15u30 Dr. Kris Heylen (K.U.Leuven). Supercomputing and Linguistics
    Presentation (PDF)
    15u45 Dr. ir. Lies Geris (K.U.Leuven). Modeling in biomechanics and biomedical engineering
    Presentation (PDF)
    16u00 Prof. dr. ir. Chris Lacor (Vrije Universiteit Brussel) and Prof. Dr. Stefaan Poedts (K.U.Leuven). Supercomputing in CFD and MHD
    16u15 Coffee break
    17u00 Academic session
    17u00 Prof. dr. ir. Karen Maex, Chairman of the steering group of the Vlaams Supercomputer Centrum
    Presentation (PDF)
    17u10 Prof. dr. dr. Thomas Lippert, Director of the Institute for Advanced Simulation and head of the Jülich Supercomputer Centre, Forschungszentrum Jülich. European view on supercomputing and PRACE
    Presentation (PDF)
    17u50 Prof. dr. ir. Charles Hirsch, President of the HPC Working Group of the Royal Flemish Academy of Belgium for Sciences and the Arts (KVAB)
    Presentation (PDF)
    18u00 Prof. dr. ir. Bart De Moor, President of the Board of Directors of the Hercules Foundation
    Presentation (PDF)
    18u10 Minister Patricia Ceysens, Flemish Minister for Economy, Enterprise, Science, Innovation and Foreign Trade
    18u30 Reception

    Abstracts

    Prof. dr. Patrick Bultinck. In silico Chemistry: Quantum Chemistry and Supercomputers

    Universiteit Gent/Ghent University, Faculty of Sciences, Department of Inorganic and Physical Chemistry

    Quantum Chemistry deals with the chemical application of quantum mechanics to understand the nature of chemical substances, the reasons for their (in)stability, but also with finding ways to predict properties of novel molecules prior to their synthesis. The workhorse of quantum chemists is therefore no longer the laboratory but the supercomputer. The reason for this is that quantum chemical calculations are notoriously computationally demanding.
    These computational demands are illustrated by the scaling of computational demands with respect to the size of molecules and the level of theory applied. An example from Vibrational Circular Dichroism calculations shows how supercomputers play a role in stimulating innovation in chemistry.

    Prof. dr. Patrick Bultinck (° Blankenberge, 1971) is professor in Quantum Chemistry, Computational and inorganic chemistry at Ghent University, Faculty of Sciences, Department of Inorganic and Physical Chemistry. He is author of roughly 100 scientific publications and performs research in quantum chemistry with emphasis on the study of concepts such as the chemical bond, the atom in the molecule and aromaticity. Another main topic is the use of computational (quantum) chemistry in drug discovery. In 2002 and 2003 P. Bultinck received grants from the European Center for SuperComputing in Catalunya for his computationally demanding work in this field.

    Prof. dr. Wim Vanroose. Large scale calculations of molecules in laser fields

    Universiteit Antwerpen, Department of Mathematics and Computer Science

    Over the last decade, calculations on large-scale computers have caused a revolution in the understanding of the ultrafast dynamics at play at the microscopic level. We give an overview of the international efforts to advance the computational tools for this area of science. We also discuss how the results of the calculations are guiding chemical experiments.

    Prof. dr. Wim Vanroose is BOF research professor at the Department of Mathematics and Computer Science, Universiteit Antwerpen. He is involved in international efforts to build the computational tools for large-scale simulations of ultrafast microscopic dynamics. Between 2001 and 2004 he was a computational scientist at the NERSC computing center at Berkeley Lab, Berkeley, USA.

    Prof. dr. Stefaan Tavernier. Grid applications in particle and astroparticle physics: The CMS and IceCube projects

    Vrije Universiteit Brussel, Faculty of Science and Bio-engineering Sciences, Department of Physics, Research Group of Elementary Particle Physics

    The large hadron collider LHC at the international research centre CERN near Geneva is due to go into operation at the end of 2009. It will be the most powerful particle accelerator ever, and will give us a first glimpse of the new phenomena that are expected to occur at these energies. However, the analysis of the data produced by the experiments around this accelerator also represents an unprecedented challenge. The VUB, UGent and UA participate in the CMS project. This is one of the four major experiments to be performed at this accelerator. One year of CMS operation will result in about 10^6 GBytes of data. To cope with this flow of data, the CMS collaboration has set up a grid computing infrastructure, with computing resources distributed over the participating laboratories on four continents.
    The IceCube Neutrino Detector is a neutrino observatory currently under construction at the South Pole. IceCube is being constructed in deep Antarctic ice by deploying thousands of optical sensors at depths between 1,450 and 2,450 meters. The main goal of the experiment is to detect very high energy neutrinos from the cosmos. The neutrinos are not detected themselves. Instead, the rare instance of a collision between a neutrino and an atom within the ice is used to deduce the kinematical parameters of the incoming neutrino. The sources of those neutrinos could be black holes, gamma ray bursts, or supernova remnants. The data that IceCube will collect will also contribute to our understanding of cosmic rays, supersymmetry, weakly interacting massive particles (WIMPs), and other aspects of nuclear and particle physics. The analysis of the data produced by IceCube requires computing facilities similar to those needed for the analysis of the LHC data.

    Prof. dr. Stefaan Tavernier is professor of physics at the Vrije Universiteit Brussel. He obtained a Ph.D. at the Faculté des sciences of Orsay (France) in 1968, and a \"Habilitation\" at the VUB in 1984. He spent most of his scientific career working on research projects at the international research centre CERN in Geneva. He has been project leader for the CERN/NA25 project, and he presently is the spokesperson of the CERN/Crystal Clear (RD18) collaboration. His main expertise is in experimental methods for particle physics. He has over 160 publications in peer reviewed international journals, made several contributions to books and holds several patents. He is also the author of a textbook on experimental methods in nuclear and particle physics.

    Prof. dr. Dirk Van den Poel. Research using HPC capabilities in the field of economics/business & management science

    Universiteit Gent/Ghent University, Faculty of Economics and Business Administration, Department of Marketing, www.crm.UGent.be and www.mma.UGent.be

    HPC capabilities in the field of economics/business & management science are most welcome when optimizing specific quantities (e.g. maximizing sales, profits, service level, or minimizing costs) subject to certain constraints. Optimal solutions for common problems are usually computationally infeasible even with the biggest HPC installations, therefore researchers develop heuristics or use techniques such as genetic algorithms to come close to optimal solutions. One of the nice properties they possess is that they are typically easily parallelizable. In this talk, I will give several examples of typical research questions, which need an HPC infrastructure to obtain good solutions in a reasonable time window. These include the optimization of marketing actions towards different marketing segments in the domain of analytical CRM (customer relationship management) and solving multiple-TSP (traveling salesman problem) under load balancing, alternatively known as the vehicle routing problem under load balancing.

    Prof. dr. Dirk Van den Poel (° Merksem, 1969) is professor of marketing modeling/analytical customer relationship management (aCRM) at Ghent University. He obtained his MSc in management/business engineering as well as PhD from K.U.Leuven. He heads the modeling cluster of the Department of Marketing at Ghent University. He is program director of the Master of Marketing Analysis, a one-year program in English about predictive analytics in marketing. His main interest fields are aCRM, data mining (genetic algorithms, neural networks, random forests, random multinomial logit: RMNL), text mining, optimal marketing resource allocation and operations research.

    Dr. Kris Heylen. Supercomputing and Linguistics

    Katholieke Universiteit Leuven, Faculty of Arts, Research Unit Quantitative Lexicology and Variational Linguistics (QLVL)

    Communicating through language is arguably one of the most complex processes that the most powerful computer we know, the human brain, is capable of. As a science, Linguistics aims to uncover the intricate system of patterns and structures that make up human language and that allow us to convey meaning through words and sentences. Although linguists have been investigating and describing these structures for ages, it is only recently that large amounts of electronic data and the computational power to analyse them have become available and have turned linguistics into a truly data-driven science. The primary data for linguistic research is ordinary, everyday language use like conversations or texts. These are collected in very large electronic text collections, containing millions of words, and these collections are then mined for meaningful structures and patterns. With increasing amounts of data and ever more advanced statistical algorithms, these analyses are no longer feasible on individual servers but require the computational power of interconnected supercomputers.
    In the presentation, I will briefly describe two case studies of computationally heavy linguistic research. A first case study has to do with the pre-processing of linguistic data. In order to find patterns at different levels of abstraction, each word in the text collection has to be enriched with information about its word class (noun, adjective, verb,..) and syntactic function within the sentence (subject, direct object, indirect object...). A piece of software, called a parser, can add this information automatically. For our research, we wanted to parse a text collection of 1.3 billion words, i.e. all issues from a 7 year period of 6 Flemish daily newspapers, representing a staggering 13 years of computing on an ordinary computer. Thanks to the K.U.Leuven's supercomputer, this could be done in just a few months. This data has now been made available to the wider research community.

    Dr. Kris Heylen obtained a Master in Germanic Linguistics (2000) and a Master in Artificial Intelligence (2001) from the K.U.Leuven. In 2005, he was awarded a PhD in Linguistics at the K.U.Leuven for his research into the statistical modelling of German word order variation. Since 2006, he has been a postdoctoral fellow at the Leuven research unit Quantitative Lexicology and Variational Linguistics (QLVL), where he has further pursued his research into statistical language modelling with a focus on lexical patterns and word meaning in Dutch.

    Dr. ir. Lies Geris. Modeling in biomechanics and biomedical engineering

    Katholieke Universiteit Leuven, Faculty of Engineering, Department of Mechanical Engineering, Division of Biomechanics and Engineering Design

    The first part of the presentation will discuss the development and applications of a mathematical model of fracture healing. The model encompasses several key-aspects of the bone regeneration process, such as the formation of blood vessels and the influence of mechanical loading on the progress of healing. The model is applied to simulate adverse healing conditions leading to a delayed or nonunion. Several potential therapeutic approaches are tested in silico in order to find the optimal treatment strategy. Going towards patient specific models will require even more computer power than is the case for the generic examples presented here.
    The second part of the presentation will give an overview of other modeling work in the field of biomechanics and biomedical engineering, taking place in Leuven and Flanders. The use of super computer facilities is required to meet the demand for more detailed models and patient specific modeling.

    Dr. ir. Liesbet Geris is a post-doctoral research fellow of the Research Foundation Flanders (FWO) working at the Division of Biomechanics and Engineering Design of the Katholieke Universiteit Leuven, Belgium. From the K.U.Leuven, she received her MSc degree in Mechanical Engineering in 2002 and her PhD degree in Engineering in 2007, both summa cum laude. In 2007 she worked for 4 months as an academic visitor at the Centre of Mathematical Biology of Oxford University. Her research interests encompass the mathematical modeling of bone regeneration during fracture healing, implant osseointegration and tissue engineering applications. The phenomena described in the mathematical models reach from the tissue level, over the cell level, down to the molecular level. She works in close collaboration with experimental and clinical researchers from the university hospitals Leuven, focusing on the development of mathematical models of impaired healing situations and the in silico design of novel treatment strategies. She is the author of 36 refereed journal and proceedings articles, 5 chapters and reviews and 18 peer-reviewed abstracts. She has received a number of awards, including the Student Award (2006) of the European Society of Biomechanics (ESB) and the Young Investigator Award (2008) of the International Federation for Medical and Biological Engineering (IFMBE).

    Prof. dr. ir. Chris Lacor¹ and Prof. dr. Stefaan Poedts². Supercomputing in CFD and MHD

    ¹ Vrije Universiteit Brussel, Faculty of Applied Sciences, Department of Mechanical Engineering
    ² Katholieke Universiteit Leuven, Faculty of Sciences, Department of Mathematics, Centre for Plasma Astrophysics

    CFD is an application field in which the available computing power is typically always lagging behind. With the increase of computer capacity CFD is looking towards more complex applications – because of increased geometrical complication or multidisciplinary aspects e.g. aeroacoustics, turbulent combustion, biological flows, etc – or more refined models such as Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS). In this presentation some demanding application fields of CFD will be highlighted, to illustrate this.
    Computational MHD has a broad range of applications. We will survey some of the most CPU-demanding applications in Flanders, as examples of the joint initiatives combining expertise from multiple disciplines that the VSC will hopefully lead to, such as the customised applications built in the COOLFluiD and AMRVAC-CELESTE3D projects.

    Prof. dr. ir. Chris Lacor obtained a degree in Electromechanical Engineering at the VUB in 1979 and his PhD in 1986 at the same university. Currently he is Head of the Research Group Fluid Mechanics and Thermodynamics of the Faculty of Engineering at the VUB. His main research field is Computational Fluid Dynamics (CFD). He stayed at the NASA Ames CFD Branch as an Ames associate in 1987 and at EPFL IMF in 1989, where he got in contact with CRAY supercomputers. In the early 1990s he was co-organizer of supercomputing lectures for the VUB/ULB CRAY X-MP computer. His current research focuses on Large Eddy Simulation, high-order accurate schemes and efficient solvers in the context of a variety of applications such as Computational Aeroacoustics, Turbulent Combustion, Non-Deterministic methods and Biological Flows. He is author of more than 100 articles in journals and at international conferences. He is also a fellow of the Flemish Academic Centre for Science and the Arts (VLAC).

    Prof. dr. Stefaan Poedts obtained his degree in Applied Mathematics in 1984 at the K.U.Leuven. As a research assistant of the Belgian National Fund for Scientific Research he obtained a PhD in Sciences (Applied Mathematics) in 1988 at the same university. He spent two years at the Max-Planck-Institut für Plasmaphysik in Garching bei München and five years at the FOM-Instituut voor Plasmafysica 'Rijnhuizen'. In October 1996 he returned to the K.U.Leuven as Research Associate of the FWO-Vlaanderen at the Centre for Plasma Astrophysics (CPA) in the Department of Mathematics. Since October 1, 2000 he has been a member of the academic staff at the K.U.Leuven, presently as full professor. His research interests include solar astrophysics, space weather and controlled thermonuclear fusion. He co-authored two books and 170 journal articles on these subjects. He is president of the European Solar Physics Division (EPS & EAS) and chairman of the Leuven Mathematical Modeling and Computational Science Centre. He is also a member of ESA's Space Weather Working Team and Solar System Working Group.

    " -485,""," - - - - - - -
    - \"Logo - - March 23 2009
    - Launch Flemish Supercomputer Center -

    - The Flemish Supercomputer Centre (Vlaams Supercomputer Centrum) cordially invites you to its official launch on 23 March 2009. -

    -
    - Supercomputing is a crucial technology for the twenty-first century. Fast and efficient compute power is needed for leading scientific research, industrial development and the competitiveness of our industry. For this reason the Flemish government and the five university associations have decided to set up a Flemish Supercomputer Centre (VSC). This centre will combine the clusters at the various Flemish universities in a single high-performance network and expand them with a large cluster that can withstand international comparison. The VSC will make a high-performance and user-friendly supercomputer infrastructure and expertise available to users from academic institutions and industry. -

    - Program -

    14.15  Scientists from various disciplines talk about their experiences with HPC and grid computing
    16.15  Coffee break
    17.00  Official programme, in the presence of minister Ceysens, Flemish minister of economy, enterprise, science, innovation and foreign trade of Flanders
    18.30  Reception

    - A detailed program is available by clicking on this link. All presentations will be in English. -

    - Location -

    - Promotiezaal of the Universiteitshal of the K.U.Leuven, -

    - Naamsestraat 22, 3000 Leuven. -

    - Please register by 16 March 2009 using this electronic form. -

    - Plan and parking -

    Parking in the neighbourhood:

    • Parking garage Ladeuze, Mgr. Ladeuzeplein 20, Leuven.
    • H. Hart parking, Naamsestraat 102, Leuven.

    - The Universiteitshal is within walking distance of the train station of Leuven. Bus 1 (Heverlee Boskant) and 2 (Heverlee Campus) stop nearby. -

    - \"invitation -

    - The images at the top of this page are courtesy of NUMECA International and research groups at Antwerp University, the Vrije Universiteit Brussel and the K.U.Leuven. -

    " -487,""," - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    \"NUMECA - -

    Free-surface simulation. -

    -

    Figure courtesy of NUMECA International. -

    -
    \"NUMECA - -

    Simulation of a turbine with coolring. -

    -

    Figure courtesy of NUMECA International. -

    -
    \"UA - -

    Purkinje cell model. -

    -

    Figure courtesy of Erik De Schutter, Theoretical Neurobiology, Universiteit Antwerpen. -

    -
    \"UA - -

    This figure shows the electron density at adsorption of NO2 at on graphene, computed using density functional theory (using the software package absint). -

    -

    Figure courtesy of Francois Peeters, Condensed Matter Theory (CMT) group, Universiteit Antwerpen. -

    -
    \"UA - -

    Figure courtesy of Christine Van Broeckhoven, research group Molecular Genetics, Universiteit Antwerpen. -

    -
    \"CPA - Figure courtesy of the Centre for Plasma-Astrophysics, K.U.Leuven. -
    \" - Figure courtesy of the Centre for Plasma-Astrophysics, K.U.Leuven. -
    \"KULeuven - Figure courtesy of the Centre for Plasma-Astrophysics, K.U.Leuven. -
    \"VUB - Figure courtesy of the research group Physics of Elementary Particles - IIHE, Vrije Universiteit Brussel. -
    " -489,""," - - - - - - -
    -

    - The first annual event was a success, thanks to all the presenters and participants. We are already looking forward to implementing some of the ideas generated and gathering again next year.

    - Below you can download the presentations of the VSC 2014 user day:

    - State of the VSC, Flemish Supercomputer (Dane Skow, HPC manager Hercules Foundation)

    - Computational Neuroscience (Michele Giugliano, University of Antwerp)

    - The value of HPC for Molecular Modeling applications (Veronique Van Speybroeck, Ghent University)

    - Parallel, grid-adaptive computations for solar atmosphere dynamics (Rony Keppens, University of Leuven)

    - HPC for industrial wind energy applications (Rory Donnelly, 3E)

    - The PRACE architecture and future prospects into Horizon 2020 (Sergi Girona, PRACE)

    - Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure (Morris Riedel, EUDAT)

    - A nice number of participants attended the user day, as you can see below. Click the link to see more pictures.

    - \"More

    " -491,"","

    - The International Auditorium
    - Kon. Albert II laan 5, 1210 Brussels

    - The VSC User Day is the first annual meeting of current and prospective users of the Vlaams Supercomputer Centrum (VSC) along with staff and supporters of the VSC infrastructure. We will hold a series of presentations describing the status and results of the past year as well as afternoon sessions about plans and priorities for 2014 and beyond. This is an excellent opportunity to become more familiar with the VSC and its personnel, become involved in constructing plans and priorities for new projects and initiatives, and network with fellow HPC-interested parties.
    - The day ends with a networking hour at 17:00 allowing time for informal discussions and follow-up on the day's activities.
    -

    - Program

    9:30h   Welcome coffee
    10:00h  Opening VSC User Day (Marc Luwel, Director Hercules Foundation)
    10:10h  State of the VSC, Flemish Supercomputer (Dane Skow, HPC manager Hercules Foundation)
    10:40h  Computational Neuroscience (Michele Giugliano, University of Antwerp)
    11:00h  The value of HPC for Molecular Modeling applications (Veronique Van Speybroeck, Ghent University)
    11:20h  Coffee break and posters
    11:50h  Parallel, grid-adaptive computations for solar atmosphere dynamics (Rony Keppens, University of Leuven)
    12:10h  HPC for industrial wind energy applications (Rory Donnelly, 3E)
    12:30h  Lunch
    13:30h  The PRACE architecture and future prospects into Horizon 2020 (Sergi Girona, PRACE)
    14:00h  EUDAT - Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure (Morris Riedel, EUDAT)
    14:20h  Breakout sessions:
            1: Long term strategy / Outreach, information and documentation
            2: Industry and research / Visualization
            3: Training and support / Integration of data and computation
    15:20h  Coffee break and posters
    16:00h  Summary presentations from the rapporteurs of the breakout sessions
    16:30h  Closing remarks and Q&A (Bart De Moor, chair Hercules Foundation)
    17:00h  Network reception
    " -493,"","

    The first annual event was a success, thanks to all the speakers and participants. We are already looking forward to repeating the user day next year and to implementing a number of the ideas that were raised. -

    Below you can find the presentations of the VSC 2014 user day: -

    " -495,"","

    The first annual event was a success, thanks to all the presenters and participants. We are already looking forward to implementing some of the ideas generated and gathering again next year.

    Below you can download the presentations of the VSC 2014 user day:

    " -497,"","

    State of the VSC, Flemish Supercomputer (Dane Skow, HPC manager Hercules Foundation)
    Computational Neuroscience (Michele Giugliano, University of Antwerp)
    The value of HPC for Molecular Modeling applications (Veronique Van Speybroeck, Ghent University)
    Parallel, grid-adaptive computations for solar atmosphere dynamics (Rony Keppens, University of Leuven)
    HPC for industrial wind energy applications (Rory Donnelly, 3E)
    The PRACE architecture and future prospects into Horizon 2020 (Sergi Girona, PRACE)
    Towards A Pan-European Collaborative Data Infrastructure, European Data Infrastructure (Morris Riedel, EUDAT)

    Full program of the day

    " -499,"","

    As you can see below, a nice number of participants attended. More pictures can be found via the link.

    " -501,"","

    A nice number of participants attended the user day, as you can see below. Click to see more pictures.

    " -503,"","

    - \"More -

    " -505,"","

    Next-generation Supercomputing in Flanders: value creation for your business!

    Tuesday 27 January 2015 -

    Technopolis Mechelen
    -

    The first industry day was a success, thanks to all the presenters and participants. We would especially like to thank the minister for his presence. The success stories of European HPC centres showed how beneficial HPC can be for all kinds of industry. The testimonials of the Flemish firms who are already using large scale computing could only stress the importance of HPC. We will continue to work on the ideas generated at this meeting so that the VSC can strengthen its service to industry. -

    \"All -

    Below you can download the presentations of the VSC 2015 industry day. Pictures are published. -

    The importance of High Performance Computing for future science, technology and economic growth
    - Prof. Dr Bart De Moor, Herculesstichting -

    The 4 Forces of Change for Supercomputing
    - Cliff Brereton, director Hartree Centre (UK) -

    The virtual Engineering Centre and its multisector virtual prototyping activities
    - Dr Gillian Murray, Director UK virtual engineering centre (UK) -

    How SMEs can benefit from High-Performance Computing
    - Dr Andreas Wierse, SICOS BW GmbH (D) -

    European HPC landscape- its initiatives towards supporting innovation and its regional perspectives
    - Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B)
    - Belgian delegate to the Prace Council
    -

    Big data and Big Compute for Drug Discovery & Development of the future
    - Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D (B) -

    HPC key enabler for R&D innovation @ Bayer CropScience
    - Filip Nollet, Computation Life Science Platform
    - Architect Bayer Cropscience (B)
    -

    How to become involved in the VSC: mechanisms for HPC industrial newcomers
    - Dr Marc Luwel, Herculesstichting
    - Dr Ewald Pauwels, Ugent - Tier1 -

    Closing
    - Philippe Muyters, Flemish Minister of Economics and Innovation -

    Full program

    " -507,"","

    The VSC Industry day is organised for the first time to create awareness about the potential of HPC for industry and to help firms overcome the hurdles to use supercomputing. We are proud to present an exciting program with success stories of European HPC centres that successfully collaborate with industry and testimonials of some Flemish firms who already have discovered the opportunities of large scale computing. The day ends with a networking hour allowing time for informal discussions.

    Program - Next-generation supercomputing in Flanders: value creation for your business!

    13.00-13.30  Registration
    13.30-13.35  Welcome and introduction
                 Prof. Dr Colin Whitehouse (chair)
    13.35-13.45  The importance of High Performance Computing for future science, technology and economic growth
                 Prof. Dr Bart De Moor, Herculesstichting
    13.45-14.05  The 4 Forces of Change for Supercomputing
                 Cliff Brereton, director Hartree Centre (UK)
    14.05-14.25  The virtual Engineering Centre and its multisector virtual prototyping activities
                 Dr Gillian Murray, Director UK virtual engineering centre (UK)
    14.25-14.45  How SMEs can benefit from High-Performance Computing
                 Dr Andreas Wierse, SICOS BW GmbH (D)
    14.45-15.15  Coffee break
    15.15-15.35  European HPC landscape - its initiatives towards supporting innovation and its regional perspectives
                 Serge Bogaerts, HPC & Infrastructure Manager, CENAERO (B), Belgian delegate to the PRACE Council
    15.35-15.55  Big data and Big Compute for Drug Discovery & Development of the future
                 Dr Pieter Peeters, Senior Director Computational Biology, Janssen R&D (B)
    15.55-16.15  HPC key enabler for R&D innovation @ Bayer CropScience
                 Filip Nollet, Computational Life Science Platform Architect, Bayer CropScience (B)
    16.15-16.35  How to become involved in the VSC: mechanisms for HPC industrial newcomers
                 Dr Marc Luwel, Herculesstichting
    16.35-17.05  Q&A discussion
                 Panel/chair
    17.05-17.15  Closing
                 Philippe Muyters, Flemish Minister of Economics and Innovation
    17.15-18.15  Networking reception
    " -509,"","

    Below you will find the complete list of Tier-1 projects since the start of the regular project application programme.

    " -511,"User support","

    KU Leuven/UHasselt: HPCinfo@kuleuven.be
    Ghent University: hpc@ugent.be
    Antwerp University: hpc@uantwerpen.be
    VUB: hpc@vub.ac.be

    -

    Please take a look at the information that you should provide with your support question. -

    " -513,"","

    Tier-1

    Experimental setup

    Tier-2

    Four university-level cluster groups are also embedded in the VSC and partly funded from VSC budgets:

    " -517,"","

    The only short answer to this question is: maybe yes, maybe no. There are a number of things you need to figure out first.

    Will my application run on a supercomputer?

    Maybe yes, maybe no. All VSC clusters - and the majority of large supercomputers in the world - run the Linux operating system, so they don't run Windows or OS X applications. Your application will have to support Linux, and the specific variants that we use on our clusters, but these are popular versions and rarely pose problems.

    Next, supercomputers are not really built to run interactive applications well. They are built to be shared by many people, using command line applications. There are several issues:

    • Since you share the machine with many users, you may have to wait a while before your job can launch. This is organised through a queueing system: you submit your job to a waiting line and a scheduler decides who's next to run based on a large number of parameters: job duration, number of processors needed, have you run a lot of jobs recently, ... So by the time your job starts, you may have gone home already.
    • You don't sit at a monitor attached to the supercomputer. Even though supercomputers can also be used for visualisation, you'll still need a suitable system on your desk to show the final image, and use software that can send the drawing commands or images generated on the supercomputer to your desktop.

    Will my application run faster on a supercomputer?

    You'll be disappointed to hear that the answer is actually quite often \"no\". It is not uncommon that an application runs faster on a good workstation than on a supercomputer. Supercomputers are optimised for large applications that access large chunks of memory (RAM or disk) in a particular way and are very parallel, i.e., they can keep a lot of processor cores busy. Their CPUs are optimised to do as much work in parallel as fast as possible, at the cost of lower performance for programs that don't exploit parallelism, while high-end workstation processors are more optimised for programs that run sequentially or don't use a lot of parallelism, and often have disk systems that can better deal with many small files.

    That being said, even that doesn't have to be disastrous. Parallelism can come in different forms. Sometimes you may have to run the same program for a large number of test cases, and if the memory consumption for a program for a simple test case is reasonable, you may be able to run a lot of instances of that program simultaneously on the same multi-core processor chip. This is called capacity computing. And some applications are very well written and can exploit all the forms of parallelism that a modern supercomputer offers, provided you solve a large enough problem with that program. This is called capability computing. We support both at the VSC.
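    As a minimal sketch of capacity computing (assuming a hypothetical program called simulation that reads an input file), the following shell snippet starts four independent instances at the same time on one multi-core node and waits until all of them have finished:

    $ for i in 1 2 3 4; do
    >   ./simulation input_${i}.dat > output_${i}.log &   # start each instance in the background
    > done
    $ wait                                                # block until all background instances are done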

    OK, my application can exploit a supercomputer. What's next?

    Have a look at our web page on requesting access in the general section. It explains who can get access to the supercomputers. And as that text explains, you may need to install some additional software on the system from which you want to access the clusters (which for the majority of our users is their laptop or desktop computer).

    Basically, you communicate with the cluster through a protocol called \"SSH\", which stands for \"Secure SHell\". It encrypts all the information that is passed to the clusters, and also provides an authentication mechanism that is a bit safer than just sending passwords. The protocol can be used both to get a console on the system (a \"command line interface\" like the one offered by CMD.EXE on Windows or the Terminal app on OS X) and to transfer files to the system. The absolute minimum you need before you can actually request your account is an SSH client to generate the key that will be used to talk to the clusters. For Windows, you can use PuTTY (freely available, see the link on our PuTTY page), on macOS/OS X you can use the built-in OpenSSH client, and Linux systems typically also come with OpenSSH. But to actually use the clusters, you may want to install some additional software, such as a GUI sftp client to transfer files. We've got links to a lot of useful client software on our web page on access and data transfer.
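    With the OpenSSH client on Linux or macOS/OS X, generating a key pair and opening a connection roughly looks like the sketch below; the account name vsc40000 and the host login.hpc.example.be are placeholders, the actual host name for your institution is listed on the infrastructure pages:

    $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc           # create a public/private key pair; keep the private key to yourself
    $ ssh -i ~/.ssh/id_rsa_vsc vsc40000@login.hpc.example.be   # open a console (command line) on a login node
    $ sftp -i ~/.ssh/id_rsa_vsc vsc40000@login.hpc.example.be  # transfer files over the same protocol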

    Yes, I'm ready

    Then follow the links on our user portal page on requesting an account. And don't forget we've got training programs to get you started and technical support for when you run into trouble.

    " -519,"","

    Even if you don't do software development yourself (and software development includes, e.g., developing R- or Matlab routines), working on a supercomputer differs from using a PC, so some training is useful for everybody.

    Linux

    If you are familiar with a Linux or UNIX environment, there is no need to take any course. Working with Linux on a supercomputer is not that different from working with Linux on a PC, so you'll likely find your way around quickly.

    Otherwise, there are several options to learn more about Linux.

    A basic HPC introduction

    Such a course at the VSC has a double goal: learning more about HPC in general, but also about specific properties of the systems at the VSC that you need to know to run programs sufficiently efficiently.

    • Several institutions at the VSC organise periodic introductions to their infrastructure or update sessions for users when new additions are made to the infrastructure. Check the \"Education and Training\" page on upcoming courses.
    • We are working on a new introductory text that will soon be available on this site. The text covers both the software that you need to install on your own computer and working on the clusters, with specific information for your institution.
    • Or you can work your way through the documentation on the user portal. This is probably sufficient if you are already familiar with supercomputers. Of particular interest may be the page on our implementation of the module system, the pages on running jobs (as there are different job submission systems around, we use Torque/Moab), and the pages about the available hardware that also contain information about the settings needed for each specific system.

    What next?

    We also run courses on many other aspects of supercomputing, such as program development or the use of specific applications. Like the other courses, they are announced on our \"Education and Training\" page. Or you can read some good books, look at training programs offered at the European level through PRACE, or check some web courses. We maintain links to several of those on the \"Tutorials and books\" pages.

    Be aware that some tools that are useful to prototype applications on a PC may be very inefficient when run at a large scale on a supercomputer. Matlab programs can often be accelerated through compiling with the Matlab compiler. R isn't the most efficient tool either. And Python is an excellent \"glue language\" to get a number of applications or optimised (non-Python) libraries to work together, but it shouldn't be used for entire applications that consume a lot of CPU time either. We've got courses on several of those languages where you also learn how to use them efficiently, and you'll also notice that on some clusters there are restrictions on the use of these tools.

    " -521,"","" -523,"","

    © FWO

    Use of this website means that you acknowledge and accept the terms and conditions below.

    Content disclaimer

    The FWO takes great care of its website and strives to ensure that all the information provided is as complete, correct, understandable, accurate and up-to-date as possible. In spite of all these efforts, the FWO cannot guarantee that the information provided on this website is always complete, correct, accurate or up-to-date. Where necessary, the FWO reserves the right to change and update information at its own discretion. The publication of official texts (legislation, Flemish Parliament Acts, regulations, etc.) on this website has no official character.

    If the information provided on or by this website is inaccurate then the FWO will do everything possible to correct this as quickly as possible. Should you notice any errors, please contact the website administrator: kurt.lust@uantwerpen.be. The FWO makes every effort to ensure that the website does not become unavailable as a result of technical errors. However, the FWO cannot guarantee the website's availability or the absence of other technical problems.

    The FWO cannot be held liable for any direct or indirect damage arising from the use of the website or from reliance on the information provided on or through the website. This also applies without restriction to all losses, delays or damage to your equipment, software or other data on your computer system.

    Protection of personal data

    The FWO is committed to protecting your privacy. Most information is available on or through the website without your having to provide any personal data. In some cases, however, you may be asked to provide certain personal details. In such cases, your data will be processed in accordance with the Law of 8 December 1992 on the protection of privacy with regard to the processing of personal data and with the Royal Decree of 13 February 2001, which implements the Law of 8 December 1992 on the protection of privacy with regard to the processing of personal data.

    The FWO provides the following guarantees in this context:

    • Your personal data will be collected and processed only in order to provide you with the information or service you requested online. The processing of your personal data is limited to the intended objective.
    • Your personal data will not be disclosed to third parties or used for direct marketing purposes unless you have formally consented to this by opting in.
    • The FWO implements the best possible safety measures in order to prevent abuse of your personal data by third parties.

    Providing personal information through the online registration module

    By providing your personal information, you consent to this personal information being recorded and processed by the FWO and its representatives. The information you provided will be treated as confidential.

    The FWO may also use your details to invite you to events or keep you informed about activities of the VSC.

    Cookies

    What are cookies and why do we use them?

    Cookies are small text or data files that a browser saves on your computer when you visit a website.

    This web site saves cookies on your computer in order to improve the website’s usability and also to analyse how we can improve our web services.

    Which cookies does this website use?

    • Functional cookies: Cookies used as part of the website’s security. These cookies are deleted shortly after your visit to our website ends.
    • Non-functional cookies
      • Google Analytics: _GA
        We monitor our website’s usage statistics with Google Analytics, a system which loads a number of cookies whenever you visit the website. These _GA cookies allow us to check how many visitors our website gets and also to collect certain demographic details (e.g. country of origin).

    Can you block or delete cookies?

    You can prevent certain cookies being installed on your computer by adjusting the settings in your browser’s options. In the ‘privacy’ section, you can specify any cookies you wish to block.

    Cookies can also be deleted in your browser’s options via ‘delete browsing history’.

    We use cookies to collect statistics which help us simplify and improve your visit to our website. As a result, we advise you to allow your browser to use cookies.

    Hyperlinks and references

    The website contains hyperlinks which redirect you to the websites of other institutions and organisations and to information sources managed by third parties. The FWO has no technical control over these websites, nor does it control their content, which is why it cannot offer any guarantees as to the completeness or correctness of the content or availability of these websites and information sources.

    The provision of hyperlinks to other websites does not imply that the FWO endorses these external websites or their content. The links are provided for information purposes and for your convenience. The FWO accepts no liability for any direct or indirect damage arising from the consultation or use of such external websites or their content.

    Copyright

    All texts and illustrations included on this website, as well as its layout and functionality, are protected by copyright. The texts and illustrations may be printed out for private use; distribution is permitted only after receiving the authorisation of the FWO. You may quote from the website providing you always refer to the original source. Reproductions are permitted, providing you always refer to the original source, except for commercial purposes, in which case reproductions are never permitted, even when they include a reference to the source.

    Permission to reproduce copyrighted material applies only to the elements of this site for which the FWO is the copyright owner. Permission to reproduce material for which third parties hold the copyright must be obtained from the relevant copyright holder.

    " -529,"relates to","" -531,"Auick access","" -533,"New user","

    first link

    " -535,"","

    The UGent compute infrastructure consists of several specialised clusters, jointly called Stevin. These clusters share a lot of their file space so that users can easily move between clusters depending on the specific job they have to run. -

    Login nodes

    The HPC-UGent Tier-2 login nodes can be accessed through the generic name login.hpc.ugent.be. -

    Connecting to a specific login node

    There are multiple login nodes (gligar01-gligar03) and you will be connected with one of them when using the generic alias login.hpc.ugent.be. (You can check which one you are connected to using the hostname command). -

    If you need to connect to a specific login node, use either gligar01.ugent.be, gligar02.ugent.be, or gligar03.ugent.be. -
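    For example, assuming a placeholder VSC account vsc40000, connecting to the generic alias or to one specific login node looks like this:

    $ ssh vsc40000@login.hpc.ugent.be   # connect to whichever login node the alias selects
    $ hostname                          # check which login node you ended up on
    $ ssh vsc40000@gligar02.ugent.be    # or connect to one specific login node instead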

    Compute clusters

    delcatty: 128 nodes, 2 x 8-core Intel E5-2670 (Sandy Bridge @ 2.6 GHz), 64 GB memory/node, 400 GB local disk/node, FDR InfiniBand
    phanpy: 16 nodes, 2 x 12-core Intel E5-2680v3 (Haswell-EP @ 2.5 GHz), 512 GB memory/node, 3x 400 GB (SSD, striped) local disk/node, FDR InfiniBand
    golett: 196 nodes, 2 x 12-core Intel E5-2680v3 (Haswell-EP @ 2.5 GHz), 64 GB memory/node, 500 GB local disk/node, FDR-10 InfiniBand
    swalot: 128 nodes, 2 x 10-core Intel E5-2660v3 (Haswell-EP @ 2.6 GHz), 128 GB memory/node, 1 TB local disk/node, FDR InfiniBand
    skitty: 72 nodes, 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz), 192 GB memory/node, 1 TB + 240 GB SSD local disk/node, EDR InfiniBand
    victini: 96 nodes, 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz), 96 GB memory/node, 1 TB + 240 GB SSD local disk/node, 10 GbE
    Only clusters with an InfiniBand interconnect network are suited for multi-node jobs. Other clusters are for single-node usage only.
    -

    Shared storage

    General Parallel File System (GPFS) partitions: -

    • $VSC_HOME: 35 TB
    • $VSC_DATA: 702 TB
    • $VSC_SCRATCH: 1 PB (equivalent to $VSC_SCRATCH_KYUKON)
    • $VSC_SCRATCH_PHANPY: 35 TB (very fast, powered by SSDs)
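    These partitions are exposed as environment variables in your session; a small sketch of how they are typically used (the file names are just examples):

    $ echo $VSC_DATA                          # show where your data directory is mounted
    $ cp $VSC_DATA/input.dat $VSC_SCRATCH/    # stage input data to the fast scratch space before a run
    $ cd $VSC_SCRATCH && ls -lh               # work from scratch during the job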
    " -537,"","

    When using the VSC-infrastructure for your research, you must acknowledge the VSC in all relevant publications. This will help the VSC secure funding, and hence you will benefit from it in the long run as well. It is also a contractual obligation for the VSC.

    Please use the following phrase to do so in Dutch “De rekeninfrastructuur en dienstverlening gebruikt in dit werk, werd voorzien door het VSC (Vlaams Supercomputer Centrum), gefinancierd door het FWO en de Vlaamse regering – departement EWI”, or in English: “The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI”.

    Moreover, if you are in the KU Leuven association, you are also requested to add the relevant papers to the virtual collection \"High Performance Computing\" in Lirias so that we can easily generate the publication lists with relevant publications.

    " -539,"","

    Need technical support? Contact your local help desk.

    " -543,"","

    In order to smoothly go through the account creation process for students, several actions from the lecturer are required. -

    1. Submit the request to HPCinfo(at)icts.kuleuven.be providing a short description of the course and an explanation of why HPC facilities are necessary for teaching the course. Please also attach the list of students attending the course (2 weeks before the beginning of the course).
    2. Inform the students that they have a 1 week time window to apply for their account (the last day on which account creation can be processed is the day before the course starts). Students should follow the regular account creation routine, which starts with generating a private-public key pair and ends with submitting the public key via our account management web site. After 1 week the list of students that have already submitted a request for an account, together with the corresponding vsc-account numbers, will be sent to the lecturer.
    3. The students should be informed to bring the private key with them to be able to connect and attend the course.
    4. Since introductory credits are supposed to be used for private projects (e.g. master thesis computations), we encourage you to create a project which will be used for computations related to the course. This also gives the lecturer the opportunity to trace the use of the cluster during the course. For more information about the procedure for creating a project, please refer to the page on credit system basics. Once the project is accepted, the students that have already applied for an account will be automatically added to the project (1 week before the beginning of the course).
    5. Students that fail to submit their request within the given time will have to follow the regular procedure for applying for an account, involving communication with the HPC support staff and delaying the account creation process (these students will have to motivate their reason for applying for an account and send a request for using the project credits). Students that submit their request later than 2 days before the beginning of the course are not guaranteed to get an account in time.
    6. Both the accounts and the generated key pairs are strictly PRIVATE and students are not supposed to share accounts, not even for the purpose of the course.
    7. Please remember to instruct your students to bring the private key to the class. Students may forget it, and without the key they will not be able to log in to the cluster even if they have an account.
    8. If a reservation of a few nodes is necessary during the exercise classes, please let us know 1 week before the exercise class so that it can be scheduled. To submit a job during the class, the following command should be used (a sketch of a job script follows after this list):
       $ qsub -A project-name -W group_list=project-name script-file
       where project-name refers to the project created by the lecturer for the purpose of the course.
    9. Make sure that the software to connect to the cluster (PuTTY, Xming, FileZilla, NX) is available in the PC class that will be used during the course. For KU Leuven courses: please follow the procedure at https://icts.kuleuven.be/sc/pcklas/ictspcklassen (1 month before the beginning of the course).
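    A job script is simply a shell script with some Torque/Moab directives at the top. A minimal sketch, in which the resource values, the script name course_job.pbs and the program simulation are only placeholders:

    #!/bin/bash
    #PBS -l nodes=1:ppn=4          # placeholder: 1 node, 4 cores
    #PBS -l walltime=00:30:00      # placeholder: 30 minutes of wall time
    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
    ./simulation input.dat

    $ qsub -A project-name -W group_list=project-name course_job.pbs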
    " -545,"","

    Purpose

    Estimating the amount of memory an application will use during execution is often non trivial, especially when one uses third-party software. However, this information is valuable, since it helps to determine the characteristics of the compute nodes a job using this application should run on. -

    Although the tool presented here can also be used to support the software development process, better tools are almost certainly available. -

    Note that currently only single node jobs are supported, MPI support may be added in a future release. -

    Prerequisites

    The user should be familiar with the Linux bash shell. -

    Monitoring a program

    To start using monitor, first load the appropriate module: -

    $ module load monitor
    -

    Starting a program, e.g., simulation, to monitor is very straightforward -

    $ monitor simulation
    -

    monitor will write the CPU usage and memory consumption of simulation to standard error. Values will be displayed every 5 seconds. This is the rate at which monitor samples the program's metrics. -

    Log file

    Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows: -

    $ monitor -l simulation.log simulation
    -

    For long running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute: -

    $ monitor -l simulation.log -n 12 simulation
    -

    Note that this option is only available when monitor writes its metrics to a log file, not when standard error is used. -

    Modifying the sample resolution

    The interval at which monitor will show the metrics can be modified by specifying delta, the sample rate: -

    $ monitor -d 60 simulation
    -

    monitor will now print the program's metrics every 60 seconds. Note that the minimum delta value is 1 second. -

    File sizes

    Some programs use temporary files, the size of which may also be a useful metric. monitor provides an option to display the size of one or more files: -

    $ monitor -f tmp/simulation.tmp,cache simulation
    -

    Here, the size of the file simulation.tmp in directory tmp, as well as the size of the file cache will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by ','. -

    Programs with command line options

    Many programs, e.g., matlab, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way: -

    $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m
    -

    The use of '--' will ensure that monitor does not get confused by matlab's '-nojvm' and '-nodisplay' options. -

    Subprocesses and multicore programs

    Some processes spawn one or more subprocesses. In that case, the metrics shown by monitor are aggregated over the process and all of its subprocesses (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100 %. -

    Some (well, since this is an HPC cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100 %. -

    When programs of this type are running on a computer with n cores, the CPU usage can go up to n x 100 %. -

    Exit codes

    monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well. -

    When monitor has to terminate in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value. -
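    In a job script this makes it easy to detect failures; a small sketch (the program name simulation and the chosen error code 99 are just examples):

    $ export MONITOR_EXIT_ERROR=99          # let monitor itself report 99 instead of the default 65
    $ monitor -l simulation.log simulation
    $ echo $?                               # 0 on success, the program's own exit code on failure, 99 if monitor failed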

    Monitoring a running process

    It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor: -

    $ monitor -p 18749
    -

    Note that this feature can be (ab)used to monitor specific subprocesses. -

    More information

    Help is available for monitor by issuing: -

    $ monitor -h
    -
    " -547,"Remark","

    Logging in on the site does not yet function (expected around July 10), so you cannot yet see the overview of systems below.

    " -549,"","

    Purpose

    Estimating the amount of memory an application will use during execution is often non trivial, especially when one uses third-party software. However, this information is valuable, since it helps to determine the characteristics of the compute nodes a job using this application should run on. -

    Although the tool presented here can also be used to support the software development process, better tools are almost certainly available. -

    Note that currently only single node jobs are supported, MPI support may be added in a future release. -

    Prerequisites

    The user should be familiar with the linux bash shell. -

    Monitoring a program

    To start using monitor, first load the appropriate module: -

    $ module load monitor
    -

    Starting a program, e.g., simulation, to monitor is very straightforward -

    $ monitor simulation
    -

    monitor will write the CPU usage and memory consumption of simulation to standard error. Values will be displayed every 5 seconds. This is the rate at which monitor samples the program's metrics. -

    Log file

    Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows: -

    $ monitor -l simulation.log simulation
    -

    For long running programs, it may be convenient to limit the output to, e.g., the last minute of the programs execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute: -

    $ monitor -l simulation.log -n 12 simulation
    -

    Note that this option is only available when monitor writes its metrics to a log file, not when standard error is used. -

    Modifying the sample resolution

The interval at which monitor shows the metrics can be modified by specifying delta, the sampling interval in seconds:

    $ monitor -d 60 simulation
    -

    monitor will now print the program's metrics every 60 seconds. Note that the minimum delta value is 1 second. -

    File sizes

    Some programs use temporary files, the size of which may also be a useful metric. monitor provides an option to display the size of one or more files: -

    $ monitor -f tmp/simulation.tmp,cache simulation
    -

    Here, the size of the file simulation.tmp in directory tmp, as well as the size of the file cache will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by ','. -
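
    These options can be combined. As an illustrative sketch (the program name simulation and the monitored file names are just placeholders), the following samples every 30 seconds, keeps only the last 20 entries in the log file and tracks the size of two files:

    $ monitor -d 30 -l simulation.log -n 20 -f tmp/simulation.tmp,cache simulation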

    Programs with command line options

    Many programs, e.g., matlab, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way: -

    $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m
    -

    The use of '--' will ensure that monitor does not get confused by matlab's '-nojvm' and '-nodisplay' options. -

    Subprocesses and multicore programs

    Some processes spawn one or more subprocesses. In that case, the metrics shown by monitor are aggregated over the process and all of its subprocesses (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100 %. -

Some (well, since this is an HPC cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100 %.

    When programs of this type are running on a computer with n cores, the CPU usage can go up to n x 100 %. -

    Exit codes

    monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well. -

    When monitor has to terminate in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value. -
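
    A minimal sketch (the value 201 is an arbitrary choice between 1 and 255, and simulation is a placeholder program):

    $ export MONITOR_EXIT_ERROR=201
    $ monitor -l simulation.log simulation
    $ echo $?

    The exit status printed by echo is then either the program's own exit code, or 201 if monitor itself failed.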

    Monitoring a running process

    It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor: -

    $ monitor -p 18749
    -

    Note that this feature can be (ab)used to monitor specific subprocesses. -
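
    For example, assuming a running (hypothetical) program simulation, one could look up its PID and attach monitor to it, writing the metrics to a log file:

    $ ps -u $USER -o pid,cmd | grep simulation
    $ monitor -p 18749 -l simulation.log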

    More information

Help is available for monitor by issuing:

    $ monitor -h
    " -551,"","

    What are toolchains?

    -

    A toolchain is a collection of tools to build (HPC) software consistently. It consists of -

    -
      -
    • compilers for C/C++ and Fortran,
    • -
    • a communications library (MPI), and
    • -
    • mathematical libraries (linear algebra, FFT).
    • -
    -

Toolchains are versioned and refreshed twice a year. All software available on the cluster is rebuilt when a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of their definition, followed by either a or b, e.g., 2014a. Note that the software components are not necessarily the most recent releases; rather, they are selected for stability and reliability.
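
    To see which toolchain versions are available on a particular cluster, you can query the module system (the versions listed will differ from site to site):

    $ module avail intel
    $ module avail foss
    $ module load foss/2014a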

    -

Two toolchain flavors are standard across the VSC on all machines that can support them: intel (based on Intel software components) and foss (based on free and open source software).

    -

It may be of interest to note that the Intel C/C++ compilers are stricter with respect to the standards than the GCC C/C++ compilers, whereas for Fortran the GCC compiler tracks the standard more closely and Intel's Fortran compiler allows many extensions added during Fortran's long history. When developing code, one should always build with both compiler suites and eliminate all warnings.

    -

On average, the Intel compiler suite produces executables that are 5 to 10 % faster than those generated with the GCC compiler suite. For individual applications, however, the difference can be much larger, in favour of either compiler suite depending on the code.

    -

    Additional toolchains may be defined on specialised hardware to extract the maximum performance from that hardware. -

    -
      -
• On Cerebro, the SGI UV shared memory system at KU Leuven, you need to use the SGI MPI library (called MPT, for Message Passing Toolkit) to get the maximum performance from the interconnect (which offers hardware acceleration for some MPI functions). On that machine, two additional toolchains are defined, intel-mpt and foss-mpt, equivalent to the standard intel and foss toolchains respectively but with the MPI library replaced by MPT.
    • -
    -

    Intel toolchain

    -

The intel toolchain consists almost entirely of software components developed by Intel. When building third-party software, or developing your own, load the module for the toolchain:

    -
    $ module load intel/<version>
    -
    -

where <version> should be replaced by the one to be used, e.g., 2014a. See the documentation on the software module system for more details.

    -

Starting with the 2014b toolchain, the GNU compilers are also included in this toolchain, as the Intel compilers use some of the libraries and as it is possible (though some care is needed) to link code generated with the Intel compilers with code compiled with the GNU compilers.

    -

    Compilers: Intel and Gnu

    -

    Three compilers are available: -

    -
      -
    • C: icc
    • -
    • C++: icpc
    • -
    • Fortran: ifort
    • -
    -

Recent versions of the toolchain also provide compatible versions of the GNU compilers (gcc, g++ and gfortran).

    -

    For example, to compile/link a Fortran program fluid.f90 to an executable -fluid with architecture specific optimization, use: -

    -
    $ ifort  -O2  -xhost  -o fluid  fluid.f90
    -
    -

Documentation on Intel compiler flags and options is provided by Intel. Do not forget to load the toolchain module first!

    -

    Intel OpenMP

    -

The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -openmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with architecture specific optimization, use:

    -
    $ icc  -openmp  -O2  -xhost  -o scatter  scatter.c
    -
    -

Remember to specify as many processes per node as the number of threads the executable is supposed to run. This can be done using the ppn resource, e.g., -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. The number of threads should not exceed the number of cores on a compute node.

    -

    Communication library: Intel MPI

    -

    For the intel toolchain, impi, i.e., Intel MPI is used as the -communications library. To compile/link MPI programs, wrappers are supplied, so that -the correct headers and libraries are used automatically. These wrappers are: -

    -
      -
    • C: mpiicc
    • -
    • C++: mpiicpc
    • -
    • Fortran: mpiifort
    • -
    -

    Note that the names differ from those of other MPI implementations. -The compiler wrappers take the same options as the corresponding compilers. -

    -

    Using the Intel MPI compilers

    -

    For example, to compile/link a C program thermo.c to an executable -thermodynamics with architecture specific optimization, use: -

    -
    $ mpiicc -O2  -xhost  -o thermodynamics  thermo.c
    -
    -

Extensive documentation is provided by Intel. Do not forget to load the toolchain module first.

    -

    Running an MPI program with Intel MPI

    -

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    -
#!/bin/bash -l
module load intel/<version>
cd $PBS_O_WORKDIR
n_proc=$( cat $PBS_NODEFILE | wc -l )
mpirun -np $n_proc ./thermodynamics
    -
    -

    The number of processes is computed from the length of the node list in the -$PBS_NODEFILE file, which in turn is specified as a resource specification -when submitting the job to the queue system. -

    -

    Intel mathematical libraries

    -

    The Intel Math Kernel Library (MKL) is a comprehensive collection of highly optimized -libraries that form the core of many scientific HPC codes. Among other functionality, -it offers: -

    -
      -
    • BLAS (Basic Linear Algebra Subsystem), and extensions to sparse matrices
    • -
    • Lapack (Linear algebra package) and ScaLAPACK (the distributed memory version)
    • -
    • FFT-routines including routines compatible with the FFTW2 and FFTW3 libraries - (Fastest Fourier Transform in the West)
    • -
    • Various vector functions and statistical functions that are optimised for the - vector instruction sets of all recent Intel processor families
    • -
    -

Intel offers extensive documentation on this library and how to use it.

    -

    There are two ways to link the MKL library: -

    -
      -
• If you use icc, icpc or ifort to link your code, you can use the -mkl compiler option:
  • -mkl=parallel or -mkl: link the multi-threaded version of the library.
  • -mkl=sequential: link the single-threaded version of the library.
  • -mkl=cluster: link the cluster-specific and sequential library, i.e., ScaLAPACK will be included, but one process per core is assumed (so no hybrid MPI/multi-threaded approach).
  The Fortran95 interface library for LAPACK is not automatically included though. You'll have to specify that library separately. You can get the value from the MKL Link Line Advisor, see also the next item.
• Or you can specify all libraries explicitly. To do this, it is strongly recommended to use Intel's MKL Link Line Advisor, which will also tell you how to link the MKL library with code generated with the GNU and PGI compilers.
  Note: On most VSC systems, the variable MKLROOT has a different value from the one assumed in the Intel documentation. Wherever you see $(MKLROOT) you may have to replace it with $(MKLROOT)/mkl.
    -

    MKL also offers a very fast streaming pseudorandom number generator, see the -documentation for details. -

    -

    Intel toolchain version numbers

                2014a             2014b             2015a
    icc         13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    icpc        13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    ifort       13.1.3 20130607   13.1.3 20130607   15.0.1 20141023
    Intel MPI   4.1.3.045         4.1.3.049         5.0.2.044
    Intel MKL   11.1.1.106        11.1.2.144        11.2.1.133
    GCC         /                 4.8.3             4.9.2
    -

    Further information on Intel tools

    - -

    FOSS toolchain

    -

The foss toolchain consists entirely of free and open source software components. When building third-party software, or developing your own, load the module for the toolchain:

    -
    $ module load foss/<version>
    -
    -

    where <version> should be replaced by the one to be used, e.g., -2014a. See the documentation on the software module system for more details. -

    -

    Compilers: GNU

    -

    Three GCC compilers are available: -

    -
      -
    • C: gcc
    • -
    • C++: g++
    • -
    • Fortran: gfortran
    • -
    -

    For example, to compile/link a Fortran program fluid.f90 to an executable -fluid with architecture specific optimization for processors that support AVX instructions, use: -

    -
    $ gfortran -O2 -march=corei7-avx -o fluid fluid.f90
    -
    -

    Documentation on GCC compiler flags and options is available on the -project's website. Do not forget to load the -toolchain module first! -

    -

    GCC OpenMP

    -

The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -fopenmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with optimization for processors that support the AVX instruction set, use:

    -
    $ gcc -fopenmp -O2 -march=corei7-avx -o scatter scatter.c
    -
    -

Remember to specify as many processes per node as the number of threads the executable is supposed to run. This can be done using the ppn resource, e.g., -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. The number of threads should not exceed the number of cores on a compute node.

    -

    Note that the OpenMP runtime library used by GCC is of inferior quality when compared -to Intel's, so developers are strongly encouraged to use the -intel toolchain when developing/building OpenMP software. -

    -

    Communication library: OpenMPI

    -

    For the foss toolchain, OpenMPI is used as the communications library. -To compile/link MPI programs, wrappers are supplied, so that the correct headers and -libraries are used automatically. These wrappers are: -

    -
      -
    • C: mpicc
    • -
    • C++: mpic++
    • -
    • Fortran: mpif77, - mpif90
    • -
    -

    The compiler wrappers take the same options as the corresponding compilers. -

    -

    Using the MPI compilers from OpenMPI

    -

    For example, to compile/link a C program thermo.c to an executable -thermodynamics with architecture specific optimization for the AVX -instruction set, use: -

    -
    $ mpicc -O2 -march=corei7-avx -o thermodynamics thermo.c
    -
    -

    Extensive documentation is provided on the -project's website. Do not forget to load the toolchain module first. -

    -

    Running an OpenMPI program

    -

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

    -
#!/bin/bash -l
module load foss/<version>
cd $PBS_O_WORKDIR
mpirun ./thermodynamics
    -
    -

The hosts and the number of processes are retrieved from the queue system, which gets this information from the resource specification for that job.

    -

    FOSS mathematical libraries

    -

The foss toolchain contains the basic HPC mathematical libraries. It offers:

    -
      -
    • OpenBLAS (Basic Linear Algebra Subsystem)
    • -
    • Lapack (Linear Algebra PACKage)
    • -
    • ScaLAPACK (Scalable Linear Algebra PACKage)
    • -
    • FFTW (Fastest Fourier Transform in the West)
    • -
    -

    Version numbers FOSS toolchain

                2014a   2014b   2015a
    GCC         4.8.2   4.8.3   4.9.2
    OpenMPI     1.6.5   1.8.1   1.8.3
    OpenBLAS    0.2.8   0.2.9   0.2.13
    LAPACK      3.5.0   3.5.0   3.5.0
    ScaLAPACK   2.0.2   2.0.2   2.0.2
    FFTW        3.3.3   3.3.4   3.3.4
    -

    Further information on FOSS components

    -" -555,"","

The documentation page you visited applies to the KU Leuven Tier-2 setup (ThinKing and Cerebro). For more information about these systems, visit the hardware description page.

    " -557,"","

    The documentation page you visited applies to the UGent Tier-2 setup Stevin. For more information about the setup, visit the UGent hardware page.

    " -559,"","

    The documentation page you visited applies to the UAntwerp Hopper cluster. Some or all of it may also apply to the older Turing cluster, but that system does not fully implement the VSC environment module structure. For more details about the specifics of those systems, visit the UAntwerp hardware page.

    " -561,"","

    The documentation page you visited applies to the VUB Hydra cluster. For more specifics about the Hydra cluster, check the VUB hardware page.

    " -563,"","

    The documentation page you visited applies to the Tier-1 cluster Muk installed at UGent. Check the Muk hardware description for more specifics about this system.

    " -565,"","

    The documentation page you visited applies to client systems running a recent version of Microsoft Windows (though you may need to install some additional software as specified on the page).

    " -567,"","

    The documentation page you visited applies to client systems with a recent version of Microsoft Windows and a UNIX-compatibility layer. We tested using the freely available Cygwin system maintained by Red Hat.

    " -569,"","

    The documentation page you visited applies to Apple Mac client systems with a recent version of OS X installed, though you may need some additional software as specified on the page.

    " -571,"","

    The documentation page you visited applies to client systems running a popular Linux distribution (though some of the packages you need may not be installed by default).

    " -577,"","

First approach

• Title: Systems
    • Call-to-Action Label system name, Node Docu Target, Type [label.cat.link]
    • Style->Container: block--related
    " -579,"","

Second approach

• Text widget with only the title Systems
• Asset widget, select from System Icons/Regular
• But actually it would be nicer if this were all in one widget, with the icons right next to each other or at least closer together, and perhaps in a grey block or something like that?
    " -585,"","

The page you're trying to visit does not exist or has been moved to a different URL.

    Some common causes of this problem are:

    1. Maybe you arrived at the page through a search engine. Search engines - including the one implemented on our own pages, which uses the Google index - don't immediately know that a page has been moved or does not exist anymore and continue to show old pages in the search results.
    2. Maybe you followed a link on another site. The site owner may not yet have noticed that our web site has changed.
    3. Or maybe you followed a link in a somewhat older e-mail or document. It is entirely normal that links age and don't work anymore after some time.
    4. Or maybe you found a bug on our web site? Even though we check regularly for dead links, errors can occur. You can contact us at Kurt.Lust@uantwerpen.be.
    " -605,"","

    You're looking for: -

    " -611,"","

    Inline code with <code>...</code>

    We used inline code on the old vscentrum.be to clearly mark system commands etc. in text. -

      -
    • For this we used the <code> tag.
    • -
    • There was support in the editor to set this tag
    • -
    • It doesn't seem to work properly in the current editor. If the fragment of code contains a slash (/), the closing tag gets omitted.
    • -

    Example: At UAntwerpen you'll have to use module avail MATLAB and - module load MATLAB/2014a respectively. -

However, if you enter both <code> blocks on the same line in an HTML file, the editor doesn't process them well: module avail MATLAB and <code>module load MATLAB.

Test: test 1 and test 2.

    Code in <pre>...</pre>

This was used a lot on the old vscentrum.be site to display fragments of code or output in a console window.

      -
    • Readability of fragments is definitely better if a fixed width font is used as this is necessary to get a correct alignment.
    • -
• Formatting is important: line breaks should be respected. The problem with the CMS seems to be that the editor respects the line breaks and the database also stores them (I can edit the code again), but the CMS removes them when generating the final HTML page, as the line breaks no longer appear in the resulting HTML code that is loaded into the browser.
    • -
#!/bin/bash -l
#PBS -l nodes=1:nehalem
#PBS -l mem=4gb
module load matlab
cd $PBS_O_WORKDIR
...
    -

    And this is a test with a very long block: -

    ln03-1003: monitor -h
### usage: monitor [-d <delta>] [-l <logfile>] [-f <files>]
#                  [-h] [-v] <cmd> | -p <pid>
# Monitor can be used to sample resource utilization of a process
# over time. Monitor can sample a running process if the latter's PID
# is specified using the -p option, or it can start a command with
# parameters passed as arguments. When one has to specify flags for
# the command to run, '--' can be used to delimit monitor's options, e.g.,
#   monitor -delta 5 -- matlab -nojvm -nodisplay calc.m
# Resources that can be monitored are memory and CPU utilization, as
# well as file sizes.
# The sampling resolution is determined by delta, i.e., monitor samples
# every <delta> seconds.
# -d <delta>   : sampling interval, specified in
#                seconds, or as [[dd:]hh:]mm:ss
# -l <logfile> : file to store sampling information; if omitted,
#                monitor information is printed on stderr
# -n <lines>   : retain only the last <lines> lines in the log file,
#                note that this option only makes sense when combined
#                with -l, and that the log file lines will not be sorted
#                according to time
# -f <files>   : comma-separated list of file names that are monitored
#                for size; if a file doesn't exist at a given time, the
#                entry will be 'N/A'
# -v           : give verbose feedback
# -h           : print this help message and exit
# <cmd>        : actual command to run, followed by whatever
#                parameters needed
# -p <pid>     : process ID to monitor
#
# Exit status: * 65 for any monitor related error
#              * exit status of <cmd> otherwise
# Note: if the exit code 65 conflicts with those of the
#       command to run, it can be customized by setting the
#       environment variable 'MONITOR_EXIT_ERROR' to any value
#       between 1 and 255 (0 is not prohibited, but this is probably
#       not what you want).
    -

    The <code> style in the editor

In fact, the Code style of the editor works on a paragraph basis and all it does is put the paragraph between <pre> and </pre> tags, so the problem mentioned above remains. The next text was edited in WYSIWYG mode:

#!/bin/bash -l
#PBS -l nodes=4:ivybridge
...
    -

    Another editor bug is that it isn't possible to switch back to regular text mode at the end of a code fragment if that is at the end of the text widget: The whole block is converted back to regular text instead and the formatting is no longer shown. -

    " -613,"","

    After the successful first VSC users day in January 2014, the second users day took place at the University of Antwerp on Monday November 30 2015. The users committee organized the day. The plenary sessions were given by an external and an internal speaker. Moreover, 4 workshops were organized: -

      -
    • VSC for starters (UAntwerp)
Upscaling to HPC. We will present some best practices, give advice on using HPC clusters, and show some pros and cons of moving from desktop to HPC. Even more experienced researchers may be interested.
    • -
    • Specialized Tier-2 infrastructure: shared memory (KU Leuven)
      Shared memory: when distributing data is not/no longer an option. We will introduce you to the available shared memory infrastructure by means of some use cases.
    • -
    • Big data (UGent)
      We present Hanythingondemand (hod), a solution for running Hadoop, Spark and other services on HPC clusters.
    • -
    • Cloud and grid access (VUB)
      The availability of grid and cloud resources is not so well known in VSC. We will introduce you to the cloud environment, explain how it can be useful to you and show how you can gain access.
    • -

    Some impressions...

    - \"More -

    More pictures can be found in the image bank. -

    Program

    09:50   Welcome – Bart De Moor (chair Hercules Foundation)
    10:00   Invited lecture: High performance and multiscale computing: blood, clay, stars and humans – Derek Groen (Centre for Computational Science, University College London) [slides - PDF 8.3MB]
    11:00   Coffee
    11:30   Workshops / hands-on sessions (parallel sessions)
    12:45   Lunch
    14:00   Lecture internal speaker: High-performance computing of wind farms in the atmospheric boundary layer – Johan Meyers (Department of Mechanical Engineering, KU Leuven) [slides - PDF 9.9MB]
    14:30   ‘1 minute’ poster presentations
    14:45   Workshops / hands-on sessions (parallel sessions)
    16:15   Coffee & Poster session
    17:00   Closing – Dirk Roose (representative of users committee)
    17:10   Drink

    Titles and abstracts

An overview of the posters that were presented during the poster session is available here.

    " -619,"","

TurboVNC is a good way to provide access to remote visualization applications. It works together with VirtualGL, a popular package for remote visualization.

    Installing TurboVNC client (viewer)

    TurboVNC client Configuration & Start Guide

    Note: These instructions are for the KU Leuven visualization nodes only. The UAntwerp visualization node also uses TurboVNC, but the setup is different as the visualization node is currently not in the job queueing system and as TurboVNC is also supported on the regular login nodes (but without OpenGL support). Specific instructions for the use of TurboVNC on the UAntwerp clusters can be found on the page \"Remote visualization @ UAntwerp\". -

      -
1. Request an interactive job on the visualization partition:

   $ qsub -I -X -l partition=visualization -l pmem=6gb -l nodes=1:ppn=20

2. Once you are on one of the visualization nodes (r10n3 or r10n4), load the TurboVNC module:

   $ module load TurboVNC/1.2.3-foss-2014a

3. Create a password to authenticate your session:

   $ vncpasswd

   In case of problems with saving your password, please create the appropriate path first:

   $ mkdir .vnc; touch .vnc/passwd; vncpasswd

4. Start the VNC server on the visualization node (optionally with geometry settings):

   $ vncserver (-depth 24 -geometry 1600x1000)

   As a result you will get the information about the display <d> that you are using (r10n3:<d>), e.g. for <d>=1:

   Desktop 'TurboVNC: r10n3:1 (vsc30000)' started on display r10n3:1

5. Establish the SSH tunnel connection.

   In Linux / Mac OS:

        $ ssh -L 590<d>:host:590<d> -N vsc30000@login.hpc.kuleuven.be
   e.g. $ ssh -L 5901:r10n3:5901 -N vsc30000@login.hpc.kuleuven.be

   In Windows:

   In PuTTY, go to the Connection-SSH-Tunnels tab and add the source port 590<d> (e.g. 5901) and the destination host:590<d> (e.g. r10n3:5901). Once the tunnel is added it will appear in the list of forwarded ports. With these settings, continue logging in to the cluster.

6. Start the VNC viewer connection. Start the client and specify the VNC server as localhost:<d> (where <d> is the display number), e.g. localhost:1. Authenticate with your password.

7. After your work is done, do not forget to close your connection:

        $ vncserver -kill :<d>; exit
   e.g. $ vncserver -kill :1; exit

How to start using the visualization node?

      -
1. TurboVNC works with the tab window manager twm (more info on how to use it can be found on the Wikipedia twm page or on the twm man page).

2. To start a new terminal, left-click the mouse and choose xterm.

3. Load the appropriate visualization module (Paraview, VisIt, VMD, Avizo), e.g.

   $ module load Paraview

4. Start the application. In general the application has to be started through the VirtualGL package, e.g.

   $ vglrun -d :0 paraview

   but to make it easier we created scripts (starting with capital letters: Paraview, Visit, VMD) that execute the necessary commands and start the application, e.g.

   $ Paraview

5. To check how many GPUs are involved in your visualization, you may execute gpuwatch in a new terminal:

   $ gpuwatch

Attached documents

Slides from the lunchbox session
    " -621,"","

The intel toolchain consists almost entirely of software components developed by Intel. When building third-party software, or developing your own, load the module for the toolchain:

    $ module load intel/<version>
    -

    where <version> should be replaced by the one to be used, e.g., 2016b. See the documentation on the software module system for more details. - -

Starting with the 2014b toolchain, the GNU compilers are also included in this toolchain, as the Intel compilers use some of the libraries and as it is possible (though some care is needed) to link code generated with the Intel compilers with code compiled with the GNU compilers.

    Compilers: Intel and Gnu

    Three compilers are available: -

      -
    • C: icc
    • -
    • C++: icpc
    • -
    • Fortran: ifort
    • -

    Compatible versions of the GNU C (gcc), C++ (g++) and Fortran (gfortran) compilers are also provided. -

    For example, to compile/link a Fortran program fluid.f90 to an executable - fluid with architecture specific optimization, use: -

    $ ifort -O2 -xhost -o fluid fluid.f90
    -

    For documentation on available compiler options, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first! -

    Intel OpenMP

The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -qopenmp in recent versions of the compiler (toolchain intel/2015a and later) or -openmp in older versions. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with architecture specific optimization, use:

    $ icc -qopenmp -O2 -xhost -o scatter scatter.c
    -

Remember to specify as many processes per node as the number of threads the executable is supposed to run. This can be done using the ppn resource, e.g., -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. The number of threads should not exceed the number of cores on a compute node.
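
    A minimal job script sketch for such a run (the executable name scatter and the resource values are only an illustration):

    #!/bin/bash -l
    #PBS -l nodes=1:ppn=10
    module load intel/<version>
    cd $PBS_O_WORKDIR
    export OMP_NUM_THREADS=10
    ./scatter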

    Communication library: Intel MPI

    For the intel toolchain, impi, i.e., Intel MPI is used as the -communications library. To compile/link MPI programs, wrappers are supplied, so that -the correct headers and libraries are used automatically. These wrappers are: -

      -
    • C: mpiicc
    • -
    • C++: mpiicpc
    • -
    • Fortran: mpiifort
    • -

    Note that the names differ from those of other MPI implementations. -The compiler wrappers take the same options as the corresponding compilers. -

    Using the Intel MPI compilers

    For example, to compile/link a C program thermo.c to an executable - thermodynamics with architecture specific optimization, use: -

    $ mpiicc -O2 -xhost -o thermodynamics thermo.c
    -

    For further documentation, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first. -

    Running an MPI program with Intel MPI

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

#!/bin/bash -l
module load intel/<version>
cd $PBS_O_WORKDIR
mpirun -np $PBS_NP ./thermodynamics
    -

    The resource manager passes the number of processes to the job script through the environment variable $PBS_NP, but if you use a recent implementation of Intel MPI, you can even omit -np $PBS_NP as Intel MPI recognizes the Torque resource manager and requests the number of cores itself from the resource manager if the number is not specified. -

    Intel mathematical libraries

    The Intel Math Kernel Library (MKL) is a comprehensive collection of highly optimized -libraries that form the core of many scientific HPC codes. Among other functionality, -it offers: -

      -
    • BLAS (Basic Linear Algebra Subsystem), and extensions to sparse matrices
    • -
    • Lapack (Linear algebra package) and ScaLAPACK (the distributed memory version)
    • -
    • FFT-routines including routines compatible with the FFTW2 and FFTW3 libraries - (Fastest Fourier Transform in the West) -
    • -
    • Various vector functions and statistical functions that are optimised for the - vector instruction sets of all recent Intel processor families -
    • -

    For further documentation, we refer to the links to the Intel documentation at the bottom of this page. -

    There are two ways to link the MKL library: -

      -
• If you use icc, icpc or ifort to link your code, you can use the -mkl compiler option (see the example below):
  • -mkl=parallel or -mkl: link the multi-threaded version of the library.
  • -mkl=sequential: link the single-threaded version of the library.
  • -mkl=cluster: link the cluster-specific and sequential library, i.e., ScaLAPACK will be included, but one process per core is assumed (so no hybrid MPI/multi-threaded approach).
  The Fortran95 interface library for LAPACK is not automatically included though. You'll have to specify that library separately. You can get the value from the MKL Link Line Advisor, see also the next item.
• Or you can specify all libraries explicitly. To do this, it is strongly recommended to use Intel's MKL Link Line Advisor, which will also tell you how to link the MKL library with code generated with the GNU and PGI compilers.
  Note: On most VSC systems, the variable MKLROOT has a different value from the one assumed in the Intel documentation. Wherever you see $(MKLROOT) you may have to replace it with $(MKLROOT)/mkl.
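
    As an example of the first approach, a sketch of compiling and linking a (hypothetical) Fortran program solver.f90 against the multi-threaded MKL:

    $ ifort -O2 -xhost -mkl=parallel -o solver solver.f90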

    MKL also offers a very fast streaming pseudorandom number generator, see the -documentation for details. -

    Intel toolchain version numbers

    Toolchain   icc/icpc/ifort    Intel MPI    Intel MKL    GCC     binutils
    2018a       2018.1.163        2018.1.163   2018.1.163   6.4.0   2.28
    2017b       2017.4.196        2017.3.196   2017.3.196   6.4.0   2.28
    2017a       2017.1.132        2017.1.132   2017.1.132   6.3.0   2.27
    2016b       16.0.3 20160425   5.1.3.181    11.3.3.210   4.9.4   2.26
    2016a       16.0.1 20151021   5.1.2.150    11.3.1.150   4.9.3   2.25
    2015b       15.0.3 20150407   5.03.3048    11.2.3.187   4.9.3   2.25
    2015a       15.0.1 20141023   5.0.2.044    11.2.1.133   4.9.2   /
    2014b       13.1.3 20130617   4.1.3.049    11.1.2.144   4.8.3   /
    2014a       13.1.3 20130607   4.1.3.045    11.1.1.106   /       /

    Further information on Intel tools

    " -623,"","

The foss toolchain consists entirely of free and open source software components. When building third-party software, or developing your own, load the module for the toolchain:

    $ module load foss/<version>
    -

    where <version> should be replaced by the one to be used, e.g., - 2014a. See the documentation on the software module system for more details. -

    Compilers: GNU

    Three GCC compilers are available: -

      -
    • C: gcc
    • -
    • C++: g++
    • -
    • Fortran: gfortran
    • -

    For example, to compile/link a Fortran program fluid.f90 to an executable - fluid with architecture specific optimization for processors that support AVX instructions, use: -

    $ gfortran -O2 -march=corei7-avx -o fluid fluid.f90
    -

    Documentation on GCC compiler flags and options is available on the - project's website. Do not forget to load the -toolchain module first! -

    GCC OpenMP

The compiler switch to use to compile/link OpenMP C/C++ or Fortran code is -fopenmp. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with optimization for processors that support the AVX instruction set, use:

    $ gcc -fopenmp -O2 -march=corei7-avx -o scatter scatter.c
    -

Remember to specify as many processes per node as the number of threads the executable is supposed to run. This can be done using the ppn resource, e.g., -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. The number of threads should not exceed the number of cores on a compute node.

    Note that the OpenMP runtime library used by GCC is of inferior quality when compared -to Intel's, so developers are strongly encouraged to use the - intel toolchain when developing/building OpenMP software. -

    Communication library: Open MPI

    For the foss toolchain, Open MPI is used as the communications library. -To compile/link MPI programs, wrappers are supplied, so that the correct headers and -libraries are used automatically. These wrappers are: -

      -
    • C: mpicc
    • -
    • C++: mpic++
    • -
    • Fortran: mpif77, - mpif90
    • -

    The compiler wrappers take the same options as the corresponding compilers. -
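
    If you want to see which underlying compiler command and flags a wrapper invokes, the Open MPI wrappers accept the --showme option (shown here for mpicc; the other wrappers behave the same way):

    $ mpicc --showme
    $ mpicc --showme:link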

    Using the MPI compilers from Open MPI

    For example, to compile/link a C program thermo.c to an executable - thermodynamics with architecture specific optimization for the AVX -instruction set, use: -

    $ mpicc -O2 -march=corei7-avx -o thermodynamics thermo.c
    -

    Extensive documentation is provided on the Open MPI project's website. Do not forget to load the toolchain module first. -

    Running an Open MPI program

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

#!/bin/bash -l
module load foss/<version>
cd $PBS_O_WORKDIR
mpirun ./thermodynamics
    -

The hosts and the number of processes are retrieved from the queue system, which gets this information from the resource specification for that job.

    FOSS mathematical libraries

The foss toolchain contains the basic HPC mathematical libraries. It offers:

      -
    • OpenBLAS (Basic Linear Algebra Subsystem)
    • -
    • Lapack (Linear Algebra PACKage)
    • -
    • ScaLAPACK (Scalable Linear Algebra PACKage)
    • -
    • FFTW (Fastest Fourier Transform in the West)
    • -

    Other components

      -
    • From the 2015b series on, binutils was added to the toolchain. The binutils package contains the assembler used by gcc, and the standard OS assembler doesn't always support the newer instructions that are used on newer cluster nodes.
    • -

    Version numbers

    Toolchain   GCC     OpenMPI   OpenBLAS   LAPACK   ScaLAPACK   FFTW    binutils
    2018a       6.4.0   2.1.2     0.2.20     3.8.0    2.0.2       3.3.7   2.28
    2017b       6.4.0   2.1.1     0.2.20     3.8.0    2.0.2       3.3.6   2.28
    2017a       6.3.0   2.0.2     0.2.19     3.3.6    2.0.2       3.3.6   2.27
    2016b       5.4.0   1.10.3    0.2.18     3.6.1    2.0.2       3.3.4   2.26
    2016a       4.9.3   1.10.2    0.2.15     3.6.0    2.0.2       3.3.4   2.25
    2015b       4.9.3   1.8.8     0.2.14     3.5.0    2.0.2       3.3.4   2.25
    2015a       4.9.2   1.8.4     0.2.13     3.5.0    2.0.2       3.3.4   /
    2014b       4.8.3   1.8.1     0.2.9      3.5.0    2.0.2       3.3.4   /
    2014a       4.8.2   1.6.5     0.2.8      3.5.0    2.0.2       3.3.3   /

    Further information on FOSS components

    " -625,"","

    MPI and OpenMP both have their advantages and disadvantages. -

    -

    MPI can be used on - distributed memory clusters and can scale to thousands of nodes. However, it was - designed in the days that clusters had nodes with only one or two cores. Nowadays CPUs - often have more than ten cores and sometimes support multiple hardware threads (or logical cores) per - physical core (and in fact may need multiple threads to run at full performance). At the same - time, the amount of memory per hardware thread is not increasing and is in fact quite - low on several architectures that rely on a large number of slower cores or hardware - threads to obtain a high performance within a reasonable power budget. Starting - one MPI process per hardware thread is then a waste of resources as each process needs - its communication buffers, OS resources, etc. Managing the hundreds of thousands of MPI - processes that we are nowadays seeing on the biggest clusters, is very hard. -

    -

    OpenMP on the other hand is limited to shared memory parallelism, typically - within a node of a cluster. Moreover, many OpenMP programs don't scale past some - tens of threads partly because of thread overhead in the OS implementation and partly - because of overhead in the OpenMP run-time. -

    -

    Hybrid programs try to combine the advantages of both to deal with the - disadvantages. Hybrid programs use a limited number of MPI processes (\"MPI ranks\") - per node and use OpenMP threads to further exploit the parallelism within the node. - An increasing number of applications is designed or re-engineered in this way. - The optimum number of MPI processes (and hence OpenMP threads per process) depends - on the code, the cluster architecture and the problem that is being solved, but - often one or, on newer CPUs such as the Intel Haswell, two MPI processes per socket (so - two to four for a typical two-socket node) is close to optimal. Compiling and - starting such applications requires some care as we explain on this page. -

    -

    Preparing your hybrid application to run

    -

    To compile and link your hybrid application, you basically have to combine the - instructions for MPI and OpenMP - programs: use - mpicc -fopenmp for the GNU - compilers and - mpiicc -qopenmp for the Intel - compilers ( - mpiicc -openmp for older versions) or the corresponding - MPI Fortran compiler wrappers for Fortran programs. -
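
    For example, a hybrid C code hybrid_mpi.c (a hypothetical file name, matching the executable used further on) could be compiled as follows:

    # intel toolchain
    $ mpiicc -qopenmp -O2 -xhost -o hybrid_mpi hybrid_mpi.c
    # foss toolchain
    $ mpicc -fopenmp -O2 -march=corei7-avx -o hybrid_mpi hybrid_mpi.c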

    -

    Running hybrid programs on the VSC clusters

    -

When running a hybrid MPI/OpenMP program, fewer MPI processes have to be started than there are logical cores available to the application, as every process uses multiple cores for OpenMP parallelism. Yet when requesting logical cores per node from the scheduler, one still has to request the total number of cores needed per node. Hence the PBS property "ppn" should not be read as "processes per node" but rather as "logical cores per node" or "processing units per node". Instead we have to tell the MPI launcher (mpirun for most applications) to launch fewer processes than there are logical cores on a node and tell each MPI process to use the correct number of OpenMP threads.

    -

For optimal performance, the threads of one MPI process should be put together as close as possible in the logical core hierarchy implied by the cache and core topology of a given node. E.g., on a dual socket node it may make a lot of sense to run 2 MPI processes, with each MPI process using all cores on a single socket. In other applications, it might be better to run only one MPI process per node, or multiple MPI processes per socket. In more technical words, each MPI process runs in its MPI domain consisting of a number of logical cores, and we want these domains to be non-overlapping and fixed in time during the life of the MPI job, and the logical cores in the domain to be "close" to each other. This optimises the use of the memory hierarchy (cache and RAM).

    -

    OpenMP has several environment variables that can then control the number - of OpenMP threads and the placement of the threads in the MPI domain. All of these - may also be overwritten by the application, so it is not a - bullet-proof way to control the behaviour of OpenMP applications. - Moreover, some of these environment variables are - implementation-specific and hence are different between the Intel - and GNU OpenMP runtimes. The most important variable is - OMP_NUM_THREADS. It - sets the number of threads to be used in parallel regions. As - parallel constructs can be nested, a process may still start more - threads than indicated by - OMP_NUM_THREADS. However, - the total number of threads can be limited by the variable - OMP_THREAD_LIMIT. -
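
    A minimal illustration (the values are arbitrary and would normally be derived from the requested resources):

    $ export OMP_NUM_THREADS=5
    $ export OMP_THREAD_LIMIT=5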

    -

    Script mympirun (VSC)

    -

The mympirun script is developed by the UGent VSC team to cope with differences between MPI implementations automatically. It offers support for hybrid programs through the --hybrid command line switch to specify the number of processes per node. The number of threads per process can then be computed by dividing the number of logical cores per node by the number of processes per node.

    -

    E.g., to run a hybrid MPI/OpenMP program on 2 nodes using 20 - cores on each node and running 4 MPI ranks per node (hence 5 - OpenMP threads per MPI rank), your script would contain -

    -
#PBS -l nodes=2:ppn=20
    -
    -

near the top to request the resources from the scheduler. It would then load the module that provides the mympirun command:

    -
    module load vsc-mympirun
    -
    -

    (besides other modules that are needed to run your application) - and finally start your application: -

    -
    mympirun --hybrid=4 ./hybrid_mpi
    -
    -

    assuming your executable is called hybrid_mpi and resides in the - working directory. The mympirun launcher will automatically - determine the correct number of MPI processes to start based on - the resource specifications and the given number of processes per - node (the - --hybrid switch). -

    -

    Intel toolchain

    -

    - On Intel MPI defining the MPI domains is done through the environment variable - I_MPI_PIN_DOMAIN. - Note however that the Linux scheduler is still - free to move all threads of a MPI process to any core within its MPI domain - at any time, so there may be a point in further pinning the OpenMP threads through - the OpenMP environment variables also. - This is definitely the case if there are more logical cores available - in the process partition than there are OpenMP threads. Some environment - variables to influence the thread placement are - the Intel-specific variable - KMP_AFFINITY and the OpenMP 3.1 - standard environment variable - OMP_PROC_BIND. -

    -

    In our case, we want to use all logical cores of a node but make sure - that all cores for a domain are as close together as possible. The - easiest way to accomplish this is to set - OMP_NUM_THREADS - to the desired number of OpenMP threads per MPI process and then set - I_MPI_PIN_DOMAIN to the value omp: -

    -
    export I_MPI_PIN_DOMAIN=omp
    -
    -

    The longer version is -

    -
    export I_MPI_PIN_DOMAIN=omp,compact
    -
    -

    where compact tells the launcher explicitly to pack threads for - a single MPI process as close together as possible. This layout is - the default on current versions of Intel MPI so it is not really - needed to set this. An alternative, when running 1 MPI process per - socket, is to set -

    -
    export I_MPI_PIN_DOMAIN=socket
    -
    -

    To enforce binding of each OpenMP thread to a particular logical core, one can set -

    -
    export OMP_PROC_BIND=true
    -
    -

As an example, assume again we want to run the program hybrid_mpi on 2 nodes containing 20 cores each, running 4 MPI processes per node, so 5 OpenMP threads per process.

    -

    The following are then essential components of the job script: -

    -
      -
    • - Specify the resource requirements:
      - #PBS -lnodes=2:ppn=20 -
    • -
    • - Load the modules, including one which contains Intel MPI, - e.g., -
      - module load intel -
    • -
    • - Create a list of unique hosts assigned to the job
      - export HOSTS=`sort -u $PBS_NODEFILE | paste -s -d,`
      -
      This step is very important; the program will not start - with the correct number of MPI ranks if it is not provided - with a list of unique host names. -
      -
      -
    • -
    • - Set the number of OpenMP threads per MPI process:
      - export OMP_NUM_THREADS=5 -
    • -
    • - Pin the MPI processes:
      - export I_MPI_PIN_DOMAIN=omp -
    • -
    • - And launch hybrid_mpi using the Intel MPI launcher and - specifying 4 MPI processes per host: -
      - mpirun -hosts $HOSTS -perhost 4 ./hybrid_mpi -
    • -
    -

    In this case we do need to specify both the total number of MPI - ranks and the number of MPI ranks per host as we want the same - number of MPI ranks on each host. -
    - In case you need a more automatic script that is easy to adapt to - a different node configuration or different number of processes - per node, you can do some of the computations in Bash. The number - of processes per node is set in the shell variable - MPI_RANKS_PER_NODE. The above commands become: -

    -
#! /bin/bash -l
# Adapt nodes and ppn on the next line according to the cluster you're using!
#PBS -lnodes=2:ppn=20
...
MPI_RANKS_PER_NODE=4
#
module load intel
#
export HOSTS=`sort -u $PBS_NODEFILE | paste -s -d,`
#
export OMP_NUM_THREADS=$(($PBS_NUM_PPN / $MPI_RANKS_PER_NODE))
#
export OMP_PROC_BIND=true
#
export I_MPI_PIN_DOMAIN=omp
#
mpirun -hosts $HOSTS -perhost $MPI_RANKS_PER_NODE ./hybrid_mpi
    -
    -

    Intel documentation on hybrid programming

    -

    Some documents on the Intel web site that contain more - information on developing and running hybrid programs: -

    - -

    Foss toolchain (GCC and Open MPI)

    -

    Open MPI has very flexible options for process and thread placement, but they are not always easy to use. There is however also a simple option to indicate the number of logical cores you want to assign to each MPI rank (MPI process): -cpus-per-proc <num> with <num> the number of logical cores assigned to each MPI rank. -

    -

You may want to further control the thread placement using the standard OpenMP mechanisms, e.g., the GNU-specific variable GOMP_CPU_AFFINITY or the OpenMP 3.1 standard environment variable OMP_PROC_BIND. As long as we want to use all cores, it won't matter whether OMP_PROC_BIND is set to true, close or spread. However, setting OMP_PROC_BIND to true is generally a safe choice to ensure that all threads remain on the core they were started on, which improves cache performance.

    -

    Essential elements of our job script are: -

    -
#! /bin/bash -l
# Adapt nodes and ppn on the next line according to the cluster you're using!
#PBS -lnodes=2:ppn=20
...
#
module load foss
#
export OMP_NUM_THREADS=5
#
export OMP_PROC_BIND=true
#
mpirun -cpus-per-proc $OMP_NUM_THREADS ./hybrid_mpi
    -
    -

    Advanced issues

    -

    Open MPI allows a lot of control over process placement and rank assignment. The Open MPI mpirun command has several options that influence this process: -

    -
      -
    • --map-by influences the mapping of processes on the available processing resources
    • -
    • --rank-by influences the rank assignment
    • -
    • --bind-to influences the binding of processes to sets of processing resources
    • -
• --report-bindings can then be used to report on the process binding (see the example below).
    • -
    -
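
    As a sketch (assuming Open MPI 1.8 or later and, as in the example above, 5 threads per MPI rank), these options can be combined as follows:

    $ mpirun --map-by socket:PE=5 --bind-to core --report-bindings ./hybrid_mpi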

More information can be found in the mpirun manual pages on the Open MPI web site and in the following presentations:

    -" -627,"","
      -
    1. Studying gene family evolution on the VSC Tier-2 and Tier-1 infrastructure
      Setareh Tasdighian et al. (VIB/UGent)
    2. -
    3. Genomic profiling of murine carcinoma models
      B. Boeckx, M. Olvedy, D. Nasar, D. Smeets, M. Moisse, M. Dewerchin, C. Marine, T. Voet, C. Blanpain,D. Lambrechts (VIB/KU Leuven)
    4. -
    5. Modeling nucleophilic aromatic substitution reactions with ab initio molecular dynamics
      Samuel L. Moors et al. (VUB)
    6. -
    7. Climate modeling on the Flemish Supercomputers
      Fabien Chatterjee, Alexandra Gossart, Hendrik Wouters, Irina Gorodetskaya, Matthias Demuzere, Niels Souverijns, Sajjad Saeed, Sam Vanden Broucke, Wim Thiery, Nicole van Lipzig (KU Leuven)
    8. -
    9. Simulating the evolution of large grain structures using the phase-field approach
      Hamed Ravash, Liesbeth Vanherpe, Nele Moelans (KU Leuven)
    10. -
    11. Multi-component multi-phase field model combined with tensorial decomposition
      Inge Bellemans, Kim Verbeken, Nico Vervliet, Nele Moelans, Lieven De Lathauwer (UGent, KU Leuven)
    12. -
    13. First-principle modeling of planetary magnetospheres: Mercury and the Earth
      Jorge Amaya, Giovanni Lapenta (KU Leuven)
    14. -
    15. Modeling the interaction of the Earth with the solar wind: the Earth magnetopause
      Emanuele Cazzola, Giovanni Lapenta (KU Leuven)
    16. -
    17. Jupiter's magnetosphere
      Emmanuel Chané, Joachim Saur, Stefaan Poedts (KU Leuven)
    18. -
    19. High-performance computing of wind-farm boundary layers
      Dries Allaerts, Johan Meyers (KU Leuven)
    20. -
    21. Large-eddy simulation study of Horns Rev windfarm in variable mean wind directions
      Wim Munters, Charles Meneveau, Johan Meyers (KU Leuven)
    22. -
    23. Modeling defects in the light absorbing layers of photovoltaic cells
      Rolando Saniz, Jonas Bekaert, Bart Partoens, Dirk Lamoen (UAntwerpen)
    24. -
    25. Molecular Spectroscopy : Where Theory Meets Experiment
      Carl Mensch, Evelien Van de Vondel, Yannick Geboes, Pilar Rodríguez Ortega, Liene De Beuckeleer, Sam Jacobs, Jonathan Bogaerts, Filip Desmet, Christian Johannessen, Wouter Herrebout (UAntwerpen)
    26. -
    27. On the added value of complex stock trading rules in short-term equity price direction prediction
      Dirk Van den Poel, Céline Chesterman, Maxim Koppen, Michel Ballings (UGent University, University of Tennessee at Knoxville)
    28. -
    29. First-principles study of the surface and adsorption properties of α-Cr2O3
      Samira Dabaghmanesh, Erik C. Neyts, Bart Partoens (UAntwerpen)
    30. -
    31. The surface chemistry of plasma-generated radicals on reduced titanium dioxide
      Stijn Huygh, Erik C. Neyts (UAntwerpen)
    32. -
    33. The High Throughput Approach to Computational Materials Design
      Michael Sluydts, Titus Crepain, Karel Dumon, Veronique Van Speybroeck, Stefaan Cottenier (UGent)
    34. -
    35. Distributed Memory Reduction in Presence of Process Desynchronization
      Petar Marendic, Jan Lemeire, Peter Schelkens (Vrije Universiteit Brussel, iMinds)
    36. -
    37. Visualization @HPC KU Leuven
      Mag Selwa (KU Leuven)
    38. -
    39. Multi-fluid modeling of the solar chromosphere
      Yana G. Maneva, Alejandro Alvarez-Laguna, Andrea Lani, Stefaan Poedts (KU Leuven)
    40. -
    41. Molecular dynamics in momentum space
      Filippo Morini (UHasselt)
    42. -
    43. Predicting sound in planetary inner cores using quantum physics
Jan Jaeken, Attilio Rivoldini, Tim van Hoolst, Veronique Van Speybroeck, Michel Waroquier, Stefaan Cottenier (UGent)
    44. -
    45. High Fidelity CFD Simulations on Tier-1
      Leonidas Siozos-Rousoulis, Nikolaos Stergiannis, Nathan Ricks, Ghader Ghorbaniasl, Chris Lacor (VUB)
    46. -
    " -629,"","

    High performance and multiscale computing: blood, clay, stars and humans

    Speaker: Derek Groen (Centre for Computational Science, University College London) -

    Multiscale simulations are becoming essential across many scientific disciplines. The concept of having multiple models form a single scientific simulation, with each model operating on its own space and time scale, gives rise to a range of new challenges and trade-offs. In this talk, I will present my experiences with high performance and multiscale computing. I have used supercomputers for modelling clay-polymer nanocomposites [1], blood flow in the human brain [2], and dark matter structure formation in the early universe [3]. I will highlight some of the scientific advances we made, and present the technologies we developed and used to enable simulations across supercomputers (using multiple models where convenient). In addition, I will reflect on the non-negligible aspect of human effort and policy constraints, and share my experiences in enabling challenging calculations, and speeding up more straightforward ones. -

    [slides - PDF 8.3MB]

    References

      -
    1. James L. Suter, Derek Groen, and Peter V. Coveney. Chemically Specific Multiscale Modeling of Clay–Polymer Nanocomposites Reveals Intercalation Dynamics, Tactoid Self-Assembly and Emergent Materials Properties. Advanced Materials, volume 27, issue 6, pages 966–984. (DOI: 10.1002/adma.201403361)
    2. -
    3. Mohamed A. Itani, Ulf D. Schiller, Sebastian Schmieschek, James Hetherington, Miguel O. Bernabeu, Hoskote Chandrashekar, Fergus Robertson, Peter V. Coveney, and Derek Groen. An automated multiscale ensemble simulation approach for vascular blood flow. Journal of Computational Science, volume 9, pages 150-155. (DOI: 10.1016/j.jocs.2015.04.008)
    4. -
    5. Derek Groen and Simon Portegies Zwart. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing. 2015 IEEE 11th International Conference on e-Science, IEEE, pages 565-571. (DOI: 10.1109/eScience.2015.81) -
    6. -

    High-performance computing of wind farms in the atmospheric boundary layer

    Speaker: Johan Meyers (Department of Mechanical Engineering, KU Leuven) -

    The aerodynamics of large wind farms are governed by the interaction between turbine wakes, and by the interaction of the wind farm as a whole with the atmospheric boundary layer. The deceleration of the flow in the farm that is induced by this interaction leads to an efficiency loss for wind turbines downstream in the farm that can amount to 40% or more. Research into a better understanding of wind-farm boundary layer interaction is an important driver for reducing this efficiency loss. The physics of the problem involves a wide range of scales, from farm scale and ABL scale (requiring domains of several kilometers cubed) down to turbine and turbine-blade scale, with flow phenomena that take place on millimeter scale. Modelling such a system requires a multi-scale approach in combination with extensive supercomputing. To this end, our simulation code SP-Wind is used. Implementation issues and parallelization are discussed. In addition, new physical insights gained from our simulations at the VSC are highlighted. -

    [slides - PDF 9.9MB]

    " -631,"","

    Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives. -

    -

    Not only have supercomputers changed scientific research in a fundamental way ...

    -

    Supercomputers are indispensable for scientific research and for a modern R&D environment. ‘Computational Science’ is - alongside theory and experiment - the third fully fledged pillar of science. For centuries, scientists used pen and paper to develop new theories based on scientific experiments. They also set up new experiments to verify the predictions derived from these theories (a process often carried out with pen and paper). It goes without saying that this method was slow and cumbersome. -

    -

    As an astronomer you cannot simply make Jupiter a little bigger to see what effect this larger size would have on our solar system. As a nuclear scientist it would be difficult to deliberately lose control over a nuclear reaction to ascertain the consequences of such a move. (Super)computers can do this and are indeed revolutionizing science. -

    -

    Complex theoretical models - too advanced for ‘pen and paper’ results - are simulated on computers. The results they deliver are then compared with reality and used for prediction purposes. Supercomputers have the ability to handle huge amounts of data, thus enabling experiments that would not be achievable in any other way. Large radio telescopes or the LHC particle accelerator at CERN could not function without supercomputers processing mountains of data. -

    -

    … but also industry and our society

    -

    But supercomputers are not just an expensive toy for researchers at universities. Numerical simulation also opens up new possibilities in industrial R&D. For example in the search for new medicinal drugs, new materials or even the development of a new car model. Biotechnology also requires the large data processing capacity of a supercomputer. The quest for clean energy, a better understanding of the weather and climate evolution, or new technologies in health care all require a powerful supercomputer. -

    -

    Supercomputers have a huge impact on our everyday lives. Have you ever wondered why the showroom of your favourite car brand contains many more car types than 20 years ago? Or how each year a new and faster smartphone model is launched on the market? We owe all of this to supercomputers. -

    " -637,"What is a supercomputer?","

    A supercomputer is a very fast and extremely parallel computer. Many of its technological properties are comparable to those of your laptop or even smartphone. But there are also important differences. -

    " -639,"Impact on research, industry and society","

    Not only have supercomputers changed scientific research in a fundamental way, they also enable the development of new, affordable products and services which have a major impact on our daily lives.

    " -645,"","

    Tier-1b thin node supercomputer BrENIAC

    This system has been in production use since October 2016. -

    Purpose

    On this cluster you can run highly parallel, large scale computations that rely critically on efficient communication. -

    Hardware

    • 580 computing nodes -
      • Two 14-core Intel Xeon processors (Broadwell, E5-2680v4)
      • 128 GiB RAM (435 nodes) or 256 GiB (145 nodes)
    • EDR InfiniBand interconnect -
      • High bandwidth (11.75 GB/s per direction, per link)
      • Slightly improved latency over FDR
    • Storage system -
      • Capacity of 634 TB
      • Peak bandwidth of 20 GB/s

    Software

    You will find the standard Linux HPC software stack installed on the Tier-1 cluster. If needed, user support will install additional (Linux) software for you, but you are responsible for taking care of the licensing issues (including associated costs). -

    Access

    You can get access to this infrastructure by applying for a starting grant, submitting a project proposal that will be evaluated on scientific and technical merits, or by buying compute time.

    " -649,"","

    The VSC account

    In order to use the infrastructure of the VSC, you need a VSC-userid, also called a VSC account. The account gives you access to most of the infrastructure, though only with a limited compute time allocation on some of the systems. For the main Tier-1 compute cluster you also need to submit a project application (or you should be covered by a project application within your research group). For some more specialised hardware you have to request access separately, typically via the coordinator of your institution, because we want to be sure that that (usually rather expensive) hardware is used efficiently for the type of applications for which it was purchased. -

    Who can get a VSC account?

      -
    • Researchers at the Flemish university associations. In many cases, this is done through a fully automated application process, but in some cases you must submit a request to your local support team. Specific details about these procedures can be found on the \"Account request\" page in the user documentation.
    • -
    • Master students in the framework of their master thesis if supercomputing is needed for the thesis. For this, you will first need the approval of your supervisor. The details about the procedure can again be found on the \"Account request\" page in the user documentation.
    • -
    • Use in courses at the University of Leuven and Hasselt University: Lecturers can also use the local Tier-2 infrastructure in the context of some courses (when the software cannot run in the PC classes or the computers in those classes are not powerful enough). Again, you can find all the details about the application process on the \"Account request\" page in the user documentation. It is important that the application is submitted on time, at least two weeks before the start of the computer sessions.
    • -
    • Researchers from iMinds and VIB. The application is made through your host university. The same applies to researchers at the university hospitals and research institutes under the direction or supervision of a university or a university college, such as the special university institutes mentioned in Article 169quater of the Decree of 12 June 1991 concerning universities in the Flemish Community.
    • -
    • Researchers at other Flemish public research institutions: You can get compute time on the Tier-1 infrastructure through a project application or access the Tier-2 infrastructure by contacting one of the coordinators.
    • -
    • Businesses, non-Flemish public knowledge institutions and not-for-profit organisations can buy compute time on the infrastructure. The procedures are explained on the page \"Buying compute time\".
    • -

    Additional information

    Before you apply for a VSC account, it is useful to first check whether the infrastructure is suitable for your application. Windows or OS X programs, for instance, cannot run on our infrastructure, as we use the Linux operating system on the clusters. The infrastructure should also not be used to run applications for which the compute power of a good laptop is sufficient. The pages on the Tier-1 and Tier-2 infrastructure in this part of the website give a high-level description of our infrastructure. You can find more detailed information in the user documentation on the user portal. When in doubt, you can also contact your local support team. This does not require a VSC account. -

    You should also first check the page \"Account request\" in the user documentation and install the necessary software on your PC. You can also find links to information about that software on the “Account Request” page. -

    Furthermore, it can also be useful to take one of the introductory courses that we organise periodically at all universities. However, it is best to apply for your VSC account before the course, since you can then also do the exercises during the course. We strongly urge people who are not familiar with the use of a Linux supercomputer to take such a course. After all, we do not have enough staff to help everyone individually with all those generic issues. -

    There is an exception to the rule that you need a VSC account to access the VSC systems: Users with a valid VUB account can access the Tier-2 systems at the VUB. -

    Your account also includes two “blocks” of disk space: your home directory and data directory. Both are accessible from all VSC clusters. When you log in to a particular cluster, you will also be assigned one or more blocks of temporary disk space, called scratch directories. Which directory should be used for which type of data, is explained in the user documentation. -
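    On most VSC clusters these locations are also exposed through environment variables. As a minimal sketch (assuming the usual variable names $VSC_DATA and $VSC_SCRATCH, which you should verify in the user documentation for your site), copying an input file to the scratch space before a job could look like:

    cp $VSC_DATA/input.dat $VSC_SCRATCH/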

    Your VSC account does not give you access to all available software. You can use all free software and a number of compilers and other development tools. For most commercial software, you must first prove that you have a valid license, or the person who paid for the license on the cluster must allow you to use it. For this you can contact your local support team. -

    " -655,"","

    A collaboration with the VSC offers your company a good number of benefits. -

      -
    • Together we will identify which expertise within the Flemish universities and their associations is appropriate for you when rolling out High Performance Computing (HPC) within your company. -
    • -
    • We can also assist with the technical writing of a project proposal for financing, for example through the IWT (Agency for Innovation by Science and Technology). -
    • -
    • You can participate in courses on HPC, including tailor-made courses provided by the VSC. -
    • -
    • You will have access to a supercomputer infrastructure with a dedicated, on-site team assisting you during the start-up phase. -
    • -
    • As a software developer, you can also deploy HPC software technologies to develop more efficient software which makes better use of modern hardware. -
    • -
    • A shorter turnaround time for your simulation or data study boosts productivity and increases the responsiveness of your business to new developments. -
    • -
    • The possibility to carry out more detailed simulations or to analyse larger amounts of data can yield new insights which in turn lead to improved products and more efficient processes. -
    • -
    • A quick analysis of the data collected during a production process helps to detect and correct abnormalities early on. -
    • -
    • Numerical simulation and virtual engineering reduce the number of prototypes and accelerate the discovery of potential design problems. As a result you are able to market a superior product faster and cheaper. -
    • -
    " -659,"","

    Modern microelectronics has created many new opportunities. Today powerful supercomputers enable us to collect and process huge amounts of data. Complex systems can be studied through numerical simulation without having to build a prototype or set up a scaled experiment beforehand. All this leads to a quicker and cheaper design of new products, cost-efficient processes and innovative services. To support this development in Flanders, the Flemish Government founded the VSC in late 2007. Our accumulated expertise and infrastructure is also available to industry for R&D. -

    " -661,"Our offer to you","

    Thanks to our embedding in academic institutions, we can offer you not only infrastructure at competitive rates but also expert advice and training.

    " -663,"About us","

    The VSC is a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background.

    " -671,"","

    The VSC was launched in late 2007 as a collaboration between the Flemish Government and five Flemish university associations. Many of the VSC employees have a strong technical and scientific background. Our team also collaborates with many research groups at various universities and helps them and their industrial partners with all aspects of infrastructure usage. -

    Besides a competitive infrastructure, the VSC team also offers full assistance with the introduction of High Performance Computing within your company. -

    Contact

    Coordinator industry access and services: industry@fwo.be

    Alternatively, you can contact one of the VSC coordinators.
    -

    " -673,"","

    Get in touch with us!

    " -681,"","

    Overview of the storage infrastructure

    Storage is an important part of a cluster, but not all storage has the same characteristics. HPC cluster storage at KU Leuven consists of 3 different storage tiers, optimized for different usage: -

      -
    • NAS storage, fully backed up with snapshots, for /home and /data
    • -
    • Scratch storage, fast parallel filesystem
    • -
    • Archive storage, to store large amounts of data for a long time
    • -

    The picture below gives a quick overview of the different components. -

    Storage Types

    As described on the web page \"Where can I store what kind of data?\" different types of data can be stored in different places. There is also an extra storage space for Archive use. -

    Archive Storage

    The archive tier is built with DDN WOS storage. It is intended to store data for the longer term. The storage is optimized for capacity, not for speed, and is mirrored by default. -

    No deletion rules are executed on this storage. The data will be kept until the user deletes it. -

    Use for: Storing data that will not be used for a longer period and which should be kept. Compute nodes have no direct access to that storage area and therefore it should not be used for job I/O operations. -

    How to request: Please send a request from the storage request webpage. -

    How much does it cost: For all the prices please refer to our service catalog (login required).
    -

    Working with archive storage

    The archive storage should not be used to perform I/O in a compute job. Data should first be copied to the faster scratch filesystem. To accommodate user groups that have a large archive space, a staging area is provided. The staging area is part of the same hardware platform as the fast scratch filesystem, but other rules apply: data is not deleted automatically after 21 days, and when the staging area is full it is the user's responsibility to make sure that enough space is available. Data created on scratch or in the staging location which needs to be kept for a longer time should be copied to the archive. -

    Location of Archive/Staging

    The name of your archive directory has the format /archive/leuven/arc_XXXXX, where XXXXX is a number that will be given to you by the HPC admins once your archive request is handled. -

    The name of your staging directory is in this format: /staging/leuven/stg_XXXXX, where XXXXX is the same number as for the archive directory. -

    Use case: Data is in archive, how can I use it in a compute job?

    In this use case you want to start to compute on older data in your archive. -

    If you want to compute with data stored in ‘archive_folder’ in your archive, you can copy this data to your scratch directory using the following command: -

    rsync -a <PATH_to_archive/archive_folder> <PATH_to_scratch>
    -

    Afterwards you may want to move the newly produced results back to the archive; to do so, follow the steps in the next use case. -

    Use case: Data produced on cluster, stored for longer time?

    This procedure applies to the case when you have jobs producing output results on the scratch area and you want to archive those results in your archive area. -

    In this case you have a folder on scratch called ‘archive_folder’ in which you are working, and the same folder already exists in your archive space. Now you want to update your archive space with the new results produced on scratch. -

    You could run the command: -

    rsync -i -u -r --dry-run <PATH_to_scratch/archive_folder> <PATH_to_archive/archive_folder>
    -

    This command will not perform the copy yet, but it will give an overview of all data changed since the last copy from the archive, so that not all data needs to be copied back. If you agree with this overview, you can run the command again without the ‘--dry-run’ option. If you are syncing a large number of files, please contact HPC support for follow-up. -
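    Once you agree with the overview, the same command without the dry-run flag performs the actual copy:

    rsync -i -u -r <PATH_to_scratch/archive_folder> <PATH_to_archive/archive_folder>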

    Use case: How to get local data on the archive?

    Data that is stored at the user's local facilities can be copied to the archive through scp/bbcp/sftp methods. For this please refer to the appropriate VSC documentation: -

    for Linux: OpenSSH -

    for Windows: FileZilla or WinSCP -

    for OS X: Cyberduck. -
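    As a minimal sketch (assuming a hypothetical local folder my_local_folder, the KU Leuven login node alias login.hpc.kuleuven.be and the archive directory /archive/leuven/arc_000XX), such a transfer with scp could look like:

    scp -r my_local_folder vsc3XXXX@login.hpc.kuleuven.be:/archive/leuven/arc_000XX/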

    Use case: How to check the disk usage?

    To check the occupied disk space, an additional option is necessary with the du command: -

    du --apparent-size folder-name
    -

    How to stage in or stage out using Torque?

    Torque also gives you the possibility to specify data staging as a job requirement. This way Torque will copy your data to scratch while your job is in the queue and will not start the job before all data is copied. The same mechanism is possible for stageout requirements. In the example below Torque will copy your data back from scratch to the archive storage tier when your job is finished: -

    qsub -W stagein=/scratch/leuven/3XX/vsc3XXXX@login1:/archive/leuven/arc_000XX/foldertostagein \
         -W stageout=/scratch/leuven/3XX/vsc3XXXX/foldertostageout@login1:/archive/leuven/arc_000XX/
    -

    -

    The hostname is always one of the login nodes, because these are the only nodes of the cluster where ‘archive’ is available. -

    For stagein the copy goes from /archive/leuven/arc_000XX/foldertostagein to /scratch/leuven/3XX/vsc3XXXX -

    For stageout the copy goes from /scratch/leuven/3XX/vsc3XXXX/foldertostageout to /archive/leuven/arc_000XX/ -

    Attached documents

    " -683,"","


    Test

    Test movie

    The movie below illustrates the use of supercomputing for the design of a cooling element from a report on Kanaal Z. -

    Method 1, following the embed code generated by the Kanaal Z website: does not play... -

    - -

    Method 2: video tag, works only in HTML5 browsers, and I fear Kanaal Z will not be happy with this method...

    " -687,"","

    The industry day has been postponed to a later date, probably in the autumn around the launch of the second Tier-1 system in Flanders.

    Supercharge your business with supercomputing

    When? New date to be determined
    Where? Technopolis, Mechelen
    Admission free, but registration required -

    The VSC Industry day is the second in a series of annual events. The goals are to create awareness about the potential of HPC for industry and to help firms overcome the hurdles to using supercomputing. We are proud to present an exciting program with testimonials from some Flemish firms who have already discovered the opportunities of large scale computing, success stories from a European HPC centre that successfully collaborates with industry, and a presentation by an HPC vendor who has been very successful delivering solutions to several industries. -


    Preliminary program - Supercharge your business with supercomputing -

    Given that the industry day has been postponed, the program is subject to change.

    13.00-13.30   Registration and welcome drink
    13.30-13.45   Introduction and opening - Prof. dr. Colin Whitehouse (chair)
    13.45-14.15   The future is now - physics-based simulation opens new gates in heart disease treatment - Matthieu De Beule (FEops)
    13.45-14.05   Hydrodynamic and morphologic modelling of the river Scheldt estuary - Sven Smolders and Abdel Nnafie (Waterbouwkundig Laboratorium)
    14.15-14.45   HPC in Metal Industry: Modelling Wire Manufacturing - Peter De Jaeger (Bekaert)
    15.15-15.45   Coffee break
    15.45-16.15   NEC industrial customers HPC experiences - Fredrik Unger (NEC)
    16.15-16.45   Exploiting business potential with supercomputing - Karen Padmore (HPC Wales and SESAME repres.)
    16.45-17.05   What VSC has to offer to your business - Ingrid Barcena Roig and Ewald Pauwels (VSC)
    17.05-17.25   Q&A discussion - Panel/chair
    17.25-17.30   Closing - Prof. dr. Colin Whitehouse (chair)
    17.30-18.30   Networking reception

    Registration

    Registration is currently closed. Once the new date is determined, a new registration form will be made available.

    How to reach Technopolis.

    " -689,"","

    VSC Industry Day - Thursday April 14, 2016

    " -691,"","

    VSC Industry Day - Thursday April 14, 2016

    " -695,"","

    A batch system

    Apart from the amount of compute power it can deliver when used properly, there are two important differences between a supercomputer and your personal laptop or smartphone. First, as it is a large and expensive machine and not every program can use all of its processing power, it is a multi-user machine. Second, it is optimised to run large parallel programs in such a way that they don't interfere too much with each other, so your compute resources will be isolated as much as possible from those assigned to other users. The latter is necessary to ensure fast and predictable execution of large parallel jobs, as the performance of a parallel application will always be limited by the slowest node, process or thread.

    This has some important consequences: -

      -
    1. As a user, you don't get the whole machine, but a specific part of it, and so you'll have to specify which part you need for how long.
    2. -
    3. Often more capacity is requested than available at that time. Hence you may have to wait a little before you get the resources that you request. To organise this in a proper way, every supercomputer provides a queueing system.
    4. -
    5. Moreover, as you often have to wait a bit before you get the requested resources, it is not well suited for interactive work. Instead, most work on a supercomputer is done in batch mode: Programs run without user interaction, reading their input from file and storing their results in files.
    6. -

    In fact, another reason why interactive work is discouraged on most clusters is because interactive programs rarely fully utilise the available processors but waste a lot of time waiting for new user input. Since that time cannot be used by another user either (remember that your work is isolated from that of other users), it is a waste of very expensive compute resources. -

    A job is an entity of work that you want to do on a supercomputer. A job consists of the execution of one or more programs and needs certain resources for some time to be able to execute. Batch jobs are described by a job script. This is like a regular Linux shell script (usually for the bash shell), but it usually contains some extra information: a description of the resources that are needed for the job. A job is then submitted to the cluster and placed in a queue (managed by a piece of software called the queue manager). A scheduler will decide on the priority of the job that you submitted (based on the resources that you request, your past history and policies determined by the system managers of the cluster). It will use the resource manager to check which resources are available and to start the job on the cluster when suitable resources are available and the scheduler decides it is the job's time to run. -

    At the VSC we use two software packages to perform these tasks. Torque is an open source package that performs the role of queue and resource manager. Moab is a commercial package that provides way more scheduling features than its open source alternatives. Though both packages are developed by the same company and are designed to work well with each other, they both have their own set of commands with often confusing command line options. -

    Anatomy of a job script

    A typical job script looks like: -

    #!/bin/bash
    #PBS -l nodes=1:ppn=20
    #PBS -l walltime=1:00:00
    #PBS -o stdout.$PBS_JOBID
    #PBS -e stderr.$PBS_JOBID

    module load MATLAB
    cd $PBS_O_WORKDIR

    matlab -r fibo
    -

    We can distinguish 4 sections in the script: -

      -
    1. The first line simply indicates that this is a shell script.
    2. -
    3. The second block, the lines that start with #PBS, specify the resources and tell the resource manager where to store the standard output and standard error from the program. To ensure unique file names, the author of this script has chosen to put the \"Job ID\", a unique ID for every job, in the name.
    4. -
    5. The next two lines create the proper environment to run the job: they load a module and change the working directory to the directory from which the job was submitted (this is what is stored in the environment variable $PBS_O_WORKDIR).
    6. -
    7. Finally the script executes the commands that are the core of the job. In this simple example, this is just a single command, but it could as well be a whole bash script.
    8. -

    In other pages of the documentation in this section, we'll go into more detail on specifying resource requirements, output redirection and notifications and on environment variables that are set by the scheduler and can be used in your job. -

    Assuming that this script is called myscript.pbs, the job can then be submitted to the queueing system with the command qsub myscript.pbs. -

    Note that if you use a system at the KU Leuven, including the Tier-1 system BrENIAC, you need credits. When submitting your job, you also need to tell qsub which credits to use. We refer to the page on \"Credit system basics\".
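    As a sketch, assuming your credit account is named lp_myproject (a hypothetical name), the submission could then look like:

    qsub -A lp_myproject myscript.pbs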

    Structure of this documentation section

      -
    • The page on specifying job requirements describes everything that goes in the second block of your job script: the specification of the resources, notifications, etc.
    • The page on starting programs in your job describes the third and fourth block: Setting up the environment and starting a program.
    • -
    • The page on starting and managing jobs describes the main Torque and Moab commands to submit and then manage your jobs and to follow up how they proceed through the scheduling software.
    • -
    • The worker framework is a framework developed at the VSC to bundle a lot of small but related jobs into a larger parallel job. This makes life a lot easier for the scheduler, as the scheduler is optimised to run a limited number of large long-duration jobs as efficiently as possible, not to deal with thousands or millions of small short jobs.
    • -

    Some background information

    For those readers who want some historical background, this is where the complexity comes from. -

    In the ’90s of the previous century, there was a popular resource manager called Portable Batch System, developed by a contractor for NASA. This was open-sourced. But that contractor was acquired by another company that then sold the rights to Altair Engineering, which evolved the product into the closed-source product PBSpro (which was then open-sourced again in the summer of 2016). The open-source version was forked by another company that is now known as Adaptive Computing and renamed to Torque. Torque remained open-source. The name stands for Terascale Open-source Resource and QUEue manager. Even though the name was changed, the commands remained the same, which explains why so many commands still have the abbreviation PBS in their name. -

    The scheduler Moab evolved from MAUI, an open-source scheduler. Adaptive Computing, the company behind Torque and Moab, contributed a lot to MAUI but then decided to start over with a closed source product. They still offer MAUI on their website though. MAUI used to be widely used in large USA supercomputer centres, but most now throw their weight behind SLURM with or without another scheduler. -

    " -697,"","

    In general, there are two ways to pass the resource requirements or other job properties to the queue manager: -

      -
    1. They can be specified on the command line of the qsub command
    2. -
    3. Or they can be put in the job script on lines that start with #PBS (so-called in-PBS directives). Each line can contain one or more command line options written in exactly the same way as on the command line of qsub. These lines have to come at the top of the job script, before any command (but after the line telling the shell that this is a bash script).
    4. -

    And of course both strategies can be mixed at will: Some options can be put in the job script, while others are specified on the command line. This can be very useful, e.g., if you run a number of related jobs from different directories using the same script. The few things that have to change can then be specified at the command line. The options given at the command line always overrule those in the job script in case of conflict. -
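    For example, a hypothetical job script fibo.pbs could fix the resources it always needs in in-PBS directives, while the wall time is overruled at submission time:

    # in the job script:
    #PBS -l nodes=1:ppn=20
    #PBS -l walltime=1:00:00

    # at submission time, the command line option overrules the wall time in the script:
    $ qsub -l walltime=4:00:00 fibo.pbs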

    Resource specifications

    Resources are specified using the -l command line argument. -

    Wall time

    Walltime is specified through the option -l walltime=HH:MM:SS with HH:MM:SS the walltime that you expect to need for the job. (The format DD:HH:MM:SS can also be used when the walltime exceeds 1 day, and MM:SS or simply SS are also viable options for very short jobs). -

    To specify a run time of 30 hours, 25 minutes and 5 seconds, you'd use -

    $ qsub -l walltime=30:25:05 myjob.pbs
    -

    on the command line or the line -

    #PBS -l walltime=30:25:05
    -

    in the job script (or alternatively walltime=1:06:25:05). -

    If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters). -

    It is important that you make an effort to properly estimate the wall clock time that your job will need. If your job exceeds the specified wall time, it will be killed, but this is not an invitation to simply specify the longest wall time possible (the limit differs from cluster to cluster). To make sure that the cluster cannot be monopolized by one or a few users, many of our clusters have a stricter limit on the number of long-running jobs than on the number of jobs with a shorter wall time. And several clusters will also allow short jobs to pass longer jobs in the queue if the scheduler finds a gap (based on the estimated end time of the running jobs) that is large enough to run that job before enough resources are available to start a larger higher-priority parallel job. This process is called backfilling. -

    The maximum allowed wall time for a job is cluster-dependent. Since these policies can change over time (as do other properties from clusters), we bundle these on one page per cluster in the \"Available hardware\" section. -

    Single- and multi-node jobs: Cores and memory

    The following options can be used to specify the number of cores, amount of RAM and virtual memory needed for the job: -

      -
    • -l nodes=<nodenum>:ppn=<cores per node>: This indicates that the job needs <nodenum> nodes with <cores per node> virtual cores per node. Depending on the settings for the particular system, these will be physical cores or hyperthreads on a physical core.
    • -
    • -l pmem=<memory>: The job needs <memory> RAM memory per core or hyperthread (the unit used by ppn). The units kb, mb, gb or tb can be used (though the latter does not make sense when talking about memory per core). Users are strongly advised to also use this parameter. If not specified, the system will use a default value, and that may be too small for your job and cause trouble if the scheduler puts multiple jobs on a single node. Moreover, recent versions of the resource manager software in use at the VSC can check the actual use of resources in a stricter way, so when this is enabled, they may simply terminate your job if it uses too much memory.
    • -
    • -l pvmem=<memory>: The job needs <memory> virtual memory per core or hyperthread (the unit used by ppn). This determines the total amount of RAM memory + swap space that can be used on any node. Similarly, kb, mb, gb or tb can be used, with gb making most sense. Note that on many clusters, there is not much swap space available. Moreover, swapping should be avoided as it causes a dramatic performance loss. Hence this option is not very useful in most cases.
    • -

    Note that specifying -l nodes=<nodenum>:ppn=<cores per node> does not guarantee you that you actually get <nodenum> physical nodes. You may get multiple groups of <cores per node> cores on a single node instead. E.g., -l nodes=4:ppn=5 may result in an allocation of 20 cores on a single node in a cluster that has nodes with 20 or more cores, if that node also contains enough memory. -

    Note also that the job script will only run once on the first node of your allocation. To start processes on the other nodes, you'll need to use tools like pbsdsh or mpirun/mpiexec to start those processes. -
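    Putting these options together, a resource request for a multi-node job could be sketched as follows (the numbers are purely illustrative):

    #PBS -l nodes=4:ppn=20
    #PBS -l pmem=4gb
    #PBS -l walltime=12:00:00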

    Single node jobs only: Cores and memory

    For single node jobs there is an alternative for specifying the amount of resident memory and virtual memory needed for the application. These settings make more sense from the point of view of starting a single multi-threaded application. -

      -
    • -l nodes=1:ppn=<cores per node>: This is still needed to specify the number of physical cores or hyperthreads needed for the job.
    • -
    • -l mem=<memory>: The job needs <memory> RAM memory on the node. Units are kb, mb, gb or tb as before.
    • -
    • -l vmem=<memory>: The job needs <memory> virtual memory on the node, i.e., RAM and swap space combined. As for the option pvmem above, this option is not useful on most clusters since the amount of swap space is very low and since swapping causes a very severe performance degradation.
    • -

    These options should not be used for multi-node jobs as the meaning of the parameter is undefined (mem) or badly defined (vmem) for multi-node jobs with different sections and different versions of the Torque manual specifying different behaviour for these options. -
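    As an illustration, a single-node job for one multi-threaded application could be sketched as follows (again with purely illustrative numbers):

    #PBS -l nodes=1:ppn=16
    #PBS -l mem=60gb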

    Specifying further node properties

    Several clusters at the VSC have nodes with different properties. E.g., a cluster may have nodes of two different CPU generations and your program may be compiled to take advantage of new instructions on the newer generation and hence not run on the older generation. Or some nodes may have more physical memory or a larger hard disk and support more virtual memory. Or not all nodes may be connected to the same high speed interconnect (which is mostly an issue on the older clusters). You can then specify which node type you want by adding further properties to the -l nodes= specification. E.g., assume a cluster with both Ivy Bridge and Haswell generation nodes. The Haswell CPU supports new and useful floating point instructions, but programs that use these will not run on the older Ivy Bridge nodes. The cluster will then specify the property ivybridge for the Ivy Bridge nodes and haswell for the Haswell nodes. Specifying -l nodes=8:ppn=6:haswell then tells the scheduler that you want to use nodes with the haswell property only (and in this case, since Haswell nodes often have 24 cores, you will likely get 2 physical nodes). -

    The exact list of properties depends on the cluster and is given on the page for your cluster in the \"Available hardware\" section of this manual. Note that even for a given cluster, this list may evolve over time, e.g., when new nodes are added to the cluster, so check these pages again from time to time! -
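    As a sketch, restricting the earlier example to Haswell nodes in a job script would look like this (haswell being one possible property name; check the page for your cluster for the actual list):

    #PBS -l nodes=8:ppn=6:haswell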

    Combining resource specifications

    It is possible to combine multiple -l options into a single one by separating the arguments with a comma (,). E.g., the block -

    #PBS -l walltime=2:30:00
    #PBS -l nodes=2:ppn=16:sandybridge
    #PBS -l pmem=2gb
    -

    is equivalent with the line -

    #PBS -l walltime=2:30:00,nodes=2:ppn=16:sandybridge,pmem=2gb
    -

    The same holds when using -l at the command line of qsub. -

    Enforcing the node specification

    These are very asocial options as they typically result in lots of resources remaining unused, so use them with care and talk to user support to see if you really need them. But there are some rare scenarios in which they are actually useful. -

    If you don't use all cores of a node in your job, the scheduler may decide to bundle the tasks of several nodes in your resource request on a single node, may put other jobs you have in the queue on the same node(s) or may - depending on how the system manager has configured the scheduler - put jobs of other users on the same node. In fact, most VSC clusters have a single user per node policy as misbehaving jobs of one user may cause a crash or performance degradation of another user's job. -

      -
    • Using -W x=nmatchpolicy:exactnode will result in the scheduler giving you resources on the exact number of nodes you request. However, other jobs may still be scheduled on the same nodes if not all cores are used.
    • -
    • Using -l naccesspolicy=singlejob will make sure that no other job can use the nodes allocated to your job (see the sketch after this list). In most cases it is very asocial to claim a whole node for a job that cannot fully utilise the resources on the node, but there are some rare cases when your program actually runs so much faster by leaving some resources unused that it actually improves the performance of the cluster. But these cases are very rare, so you shouldn't use this option unless, e.g., you are running the final benchmarks for a paper and want to exclude as many factors that can influence the results as possible.
    • -
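    For the naccesspolicy option mentioned above, a sketch of such a submission (with illustrative numbers) could be:

    qsub -l nodes=2:ppn=16 -l naccesspolicy=singlejob myjob.pbs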

    Naming jobs and output files

    The default name of a job is derived from the file name of the job script. This is not very useful if the same job script is used to launch multiple jobs, e.g., by launching jobs from multiple directories with different input files. It is possible to overwrite the default name of the job with -N <job_name>. -

    Most jobs on a cluster run in batch mode. This implies that they are not connected to a terminal, so the output sent to the Linux stdout (standard output) and stderr (standard error) devices cannot be displayed on screen. Instead it is captured in two files that are placed in the directory from which your job was started, at the end of your job. The default names of those files are <job_name>.o<job id> and <job_name>.e<job id> respectively, i.e., composed of the name of the job (the one assigned with -N if any, or the default one) and the number the job is assigned when you submit it to the queue. You can however change those names using -o <output file> and -e <error file>. -

    It is also possible to merge both output streams in a single output stream. The option -j oe will merge stderr into stdout (and hence the -e option does not make sense), the option -j eo will merge stdout into stderr.
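    For example, a hypothetical job could be given a recognisable name and merged output as follows:

    #PBS -N fibonacci
    #PBS -o fibonacci.out.$PBS_JOBID
    #PBS -j oe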

    Notification of job events

    Our scheduling system can also notify you when a job starts or ends by e-mail. Jobs can stay queued for hours or sometimes even days before actually starting, so it is useful to be notified so that you can monitor the progress of your job while it runs or kill it when it misbehaves or produces clearly wrong results. Two command line options are involved in this process: -

      -
    • -m abe or any subset of these three letters determines for which events you'll receive a mail notification: job start (b), job end (e) or job abort (a). In some scenarios this may bombard you with e-mails if you have a lot of jobs starting, but at other times it is very useful to be notified that your job starts, e.g., to monitor whether it is running properly and efficiently (see the sketch after this list).
    • -
    • With -M <mailaddress> you can set the mail address to which the notification will be sent. On most clusters the default will be the e-mail address with which you registered your VSC account, but on some clusters this fails and the option is required to receive the e-mail.
    • -
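    As a sketch, to be notified by e-mail when the job ends or is aborted (the address is a placeholder):

    #PBS -m ae
    #PBS -M myname@example.com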

    Other options

    This page describes the most used options in their most common use cases. There are however more parameters for resource specification and other options that can be used. Advanced users who want to know more are referred to the documentation of the qsub command in the Torque manual on the Adaptive Computing documentation web site, which mentions all options. -

    " -699,"","

    To set up your environment for using a particular (set of) software package(s), you can use the modules that are provided centrally.
    On the Tier-2 of UGent and VUB, interacting with the modules is done via Lmod (since August 2016), using the module command or the handy shortcut command ml. -

    Quick introduction

    A very quick introduction to Lmod. Below you will find more details and examples. -

      -
    • ml lists the currently loaded modules, and is equivalent with module list
    • -
    • ml GCC/4.9.3 loads the GCC/4.9.3 module, and is equivalent with module load GCC/4.9.3
    • -
    • ml -GCC unloads the currently loaded GCC module, and is equivalent with module unload GCC
    • -
    • ml av gcc prints the currently available modules that match gcc (case-insensitively), and is equivalent with module avail GCC
    • -
    • ml show GCC/4.9.3 prints more information about the GCC/4.9.3 module, and is equivalent with module show GCC
    • -
    • ml spider gcc searches (case-insensitive) for gcc in all available modules over all clusters
    • -
    • ml spider GCC/4.9.3 shows all information about the module GCC/4.9.3 and on which clusters it can be loaded.
    • -
    • ml save mycollection stores the currently loaded modules to a collection
    • -
    • ml restore mycollection restores a previously stored collection of modules
    • -

    Module commands: using module (or ml)


    Listing loaded modules: module list (or ml)

    To get an overview of the currently loaded modules, use module list or ml (without specifying extra arguments). -

    In a default environment, you should see a single cluster module loaded: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty (S)
    -  Where:
    -   S:  Module is Sticky, requires --force to unload or purge
    -

    (for more details on sticky modules, see the section on ml purge) -


    Searching for available modules: module avail (or ml av) and ml spider

    Printing all available modules: module avail (or ml av)

    To get an overview of all available modules, you can use module avail or simply ml av: -

    $ ml av
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -   ABAQUS/6.12.1-linux-x86_64           libXext/1.3.3-intel-2016a                  (D)
    -   ABAQUS/6.14.1-linux-x86_64    (D)    libXfixes/5.0.1-gimkl-2.11.5
    -   ADF/2014.02                          libXfixes/5.0.1-intel-2015a
    -   ...                                  ...
    -

    In the current module naming scheme, each module name consists of two parts: -

      -
    • the part before the first /, corresponding to the software name; and
    • -
    • the remainder, corresponding to the software version, the compiler toolchain that was used to install the software, and a possible version suffix
    • -

    For example, the module name matplotlib/1.5.1-intel-2016a-Python-2.7.11 will set up the environment for using matplotlib version 1.5.1, which was installed using the intel/2016a compiler toolchain; the version suffix -Python-2.7.11 indicates it was installed for Python version 2.7.11. -

    The (D) indicates that this particular version of the module is the default, but we strongly recommend not to rely on this, as the default can change at any point. Usually, the default will point to the latest version available. -


    Searching for modules: ml spider

    The (Lmod-specific) spider subcommand lets you search for modules across all clusters. -

    If you just provide a software name, for example gcc, it prints an overview of all available modules for GCC. -

    $ ml spider gcc
    ----------------------------------------------------------------------------------
    -  GCC:
    ----------------------------------------------------------------------------------
    -     Versions:
    -        GCC/4.7.2
    -        GCC/4.8.1
    -        GCC/4.8.2
    -        GCC/4.8.3
    -        GCC/4.9.1
    -        GCC/4.9.2
    -        GCC/4.9.3-binutils-2.25
    -        GCC/4.9.3
    -        GCC/4.9.3-2.25
    -        GCC/5.3.0
    -     Other possible modules matches:
    -        GCCcore
    ----------------------------------------------------------------------------------
    -  To find other possible module matches do:
    -      module -r spider '.*GCC.*'
    ----------------------------------------------------------------------------------
    -  For detailed information about a specific \"GCC\" module (including how to load the modules) use the module's full name.
    -  For example:
    -     $ module spider GCC/4.9.3
    ----------------------------------------------------------------------------------
    -

    Note that spider is case-insensitive. -

    If you use spider on a full module name like GCC/4.9.3-2.25, it will tell you on which cluster(s) that module is available: -

    $ ml spider GCC/4.9.3-2.25
    ----------------------------------------------------------------------------------
    -  GCC: GCC/4.9.3-2.25
    ----------------------------------------------------------------------------------
    -     Other possible modules matches:
    -        GCCcore
    -    You will need to load all module(s) on any one of the lines below before the \"GCC/4.9.3-2.25\" module
    -    is available to load.
    -      cluster/delcatty
    -      cluster/golett
    -      cluster/phanpy
    -      cluster/raichu
    -      cluster/swalot
    -    Help:
    -       The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada,
    -       as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/
    ----------------------------------------------------------------------------------
    -  To find other possible module matches do:
    -      module -r spider '.*GCC/4.9.3-2.25.*'
    -

    This tells you that the module named GCC/4.9.3-2.25 is available on the clusters delcatty, golett, phanpy, raichu and swalot. It also tells you what the module contains and gives a URL to the homepage of the software. -


    Available modules for a particular software package: module avail <name> (or ml av <name>)

    To check which modules are available for a particular software package, you can provide the software name to ml av. -

    For example, to check which versions of IPython are available: -

    $ ml av ipython
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -IPython/3.2.3-intel-2015b-Python-2.7.10    IPython/3.2.3-intel-2016a-Python-2.7.11 (D)
    -

    Note that the specified software name is treated case-insensitively. -

    Lmod does a partial match on the module name, so sometimes you need to use / to indicate the end of the software name you are interested in: -

    $ ml av GCC/
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -GCC/4.9.2    GCC/4.9.3-binutils-2.25    GCC/4.9.3    GCC/4.9.3-2.25    GCC/5.3.0    GCC/6.1.0-2.25 (D)
    -

    Inspecting a module using module show (or ml show)

    To see how a module would change the environment, use module show or ml show: -

    $ ml show matplotlib/1.5.1-intel-2016a-Python-2.7.11
    ------------------------------ /apps/gent/CO7/haswell-ib/modules/all -----------------------------
    -whatis(\"Description: matplotlib is a python 2D plotting library which produces publication quality figures in a variety of 
    -hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python 
    -and ipython shell, web application servers, and six graphical user interface toolkits. - Homepage: http://matplotlib.org \")
    -conflict(\"matplotlib\")
    -load(\"intel/2016a\")
    -load(\"Python/2.7.11-intel-2016a\")
    -load(\"freetype/2.6.2-intel-2016a\")
    -load(\"libpng/1.6.21-intel-2016a\")
    -prepend_path(\"LD_LIBRARY_PATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib\")
    -prepend_path(\"LIBRARY_PATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib\")
    -setenv(\"EBROOTMATPLOTLIB\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11\")
    -setenv(\"EBVERSIONMATPLOTLIB\",\"1.5.1\")
    -setenv(\"EBDEVELMATPLOTLIB\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/easybuild/matplotlib-1.5.1-intel-2016a-Python-2.7.11-easybuild-devel\")
    -prepend_path(\"PYTHONPATH\",\"/apps/gent/CO7/haswell-ib/software/matplotlib/1.5.1-intel-2016a-Python-2.7.11/lib/python2.7/site-packages\")
    -setenv(\"EBEXTSLISTMATPLOTLIB\",\"Cycler-0.9.0,matplotlib-1.5.1\")
    -help([[ matplotlib is a python 2D plotting library which produces publication quality figures in a variety of
    - hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python
    - and ipython shell, web application servers, and six graphical user interface toolkits. - Homepage: http://matplotlib.org
    -

    Note that both the direct changes to the environment as well as other modules that will be loaded are shown. -

    If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try using the software. -


    Loading modules: module load <modname(s)> (or ml <modname(s)>)

    To effectively apply the changes to the environment that are specified by a module, use module load or ml and specify the name of the module. -

    For example, to set up your environment to use matplotlib: -

    $ ml matplotlib/1.5.1-intel-2016a-Python-2.7.11
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty                                    (S)  12) zlib/1.2.8-intel-2016a
    -  2) GCCcore/4.9.3                                          13) libreadline/6.3-intel-2016a
    -  3) binutils/2.25-GCCcore-4.9.3                            14) ncurses/6.0-intel-2016a
    -  4) icc/2016.1.150-GCC-4.9.3-2.25                          15) Tcl/8.6.4-intel-2016a
    -  5) ifort/2016.1.150-GCC-4.9.3-2.25                        16) SQLite/3.9.2-intel-2016a
    -  6) iccifort/2016.1.150-GCC-4.9.3-2.25                     17) Tk/8.6.4-intel-2016a-no-X11
    -  7) impi/5.1.2.150-iccifort-2016.1.150-GCC-4.9.3-2.25      18) GMP/6.1.0-intel-2016a
    -  8) iimpi/8.1.5-GCC-4.9.3-2.25                             19) Python/2.7.11-intel-2016a
    -  9) imkl/11.3.1.150-iimpi-8.1.5-GCC-4.9.3-2.25             20) freetype/2.6.2-intel-2016a
    - 10) intel/2016a                                            21) libpng/1.6.21-intel-2016a
    - 11) bzip2/1.0.6-intel-2016a                                22) matplotlib/1.5.1-intel-2016a-Python-2.7.11
    -

    Note that even though we only loaded a single module, the output of ml shows that a whole bunch of modules were loaded, which are required dependencies for matplotlib, including both the compiler toolchain that was used to install matplotlib (i.e. intel/2016a and its dependencies) and the module providing the Python installation for which matplotlib was installed (i.e. Python/2.7.11-intel-2016a). -


    Conflicting modules

    It is important to note that only modules that are compatible with each other can be loaded together. In particular, modules must be installed either with the same toolchain as the modules that are already loaded, or with a compatible (sub)toolchain. -

    For example, once you have loaded one or more modules that were installed with the intel/2016a toolchain, all other modules that you load should have been installed with the same toolchain. -

    In addition, only one single version of each software package can be loaded at a particular time. For example, once you have the Python/2.7.11-intel-2016a module loaded, you cannot load a different version of Python in the same session/job script; neither directly, nor indirectly as a dependency of another module you want to load. -

    See also the topic \"module conflicts\" in the list of key differences with the previously used module system. -


    Unloading modules: module unload <modname(s)> (or ml -<modname(s)>)

    To revert the changes to the environment that were made by a particular module, you can use module unload or ml -<modname>. -

    For example: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)
    -$ which gcc
    -/usr/bin/gcc
    -$ ml GCC/4.9.3
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)   2) GCC/4.9.3
    -$ which gcc
    -/apps/gent/CO7/haswell-ib/software/GCC/4.9.3/bin/gcc
    -$ ml -GCC/4.9.3
    -$ ml
    -Currently Loaded Modules:
    -  1) cluster/golett (S)
    -$ which gcc
    -/usr/bin/gcc
    -

    Resetting by unloading all modules: ml purge (module purge)

    To reset your environment back to a clean state, you can use module purge or ml purge: -

    $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty                                    (S)  11) bzip2/1.0.6-intel-2016a
    -  2) GCCcore/4.9.3                                          12) zlib/1.2.8-intel-2016a
    -  3) binutils/2.25-GCCcore-4.9.3                            13) libreadline/6.3-intel-2016a
    -  4) icc/2016.1.150-GCC-4.9.3-2.25                          14) ncurses/6.0-intel-2016a
    -  5) ifort/2016.1.150-GCC-4.9.3-2.25                        15) Tcl/8.6.4-intel-2016a
    -  6) iccifort/2016.1.150-GCC-4.9.3-2.25                     16) SQLite/3.9.2-intel-2016a
    -  7) impi/5.1.2.150-iccifort-2016.1.150-GCC-4.9.3-2.25      17) Tk/8.6.4-intel-2016a-no-X11
    -  8) iimpi/8.1.5-GCC-4.9.3-2.25                             18) GMP/6.1.0-intel-2016a
    -  9) imkl/11.3.1.150-iimpi-8.1.5-GCC-4.9.3-2.25             19) Python/2.7.11-intel-2016a
    - 10) intel/2016a
    -$ ml purge
    -The following modules were not unloaded:
    -   (Use \"module --force purge\" to unload all):
    -  1) cluster/delcatty
    -[15:21:20] vsc40023@node2626:~ $ ml
    -Currently Loaded Modules:
    -  1) cluster/delcatty (S)
    -

    Note that, on HPC-UGent, the cluster module will always remain loaded, since it defines some important environment variables that point to the location of centrally installed software/modules, and others that are required for submitting jobs and interfacing with the cluster resource manager (qsub, qstat, ...).

    As such, you should not (re)load the cluster module anymore after running ml purge. See also the topic on the purge command in the list of key differences with the previously used module implementation.


    Module collections: ml save, ml restore

    If you have a set of modules that you need to load often, you can save these in a collection (only works with Lmod). -

    First, load all the modules you need, for example: -

    ml HDF5/1.8.16-intel-2016a GSL/2.1-intel-2016a Python/2.7.11-intel-2016a
    -

    Now store them in a collection using ml save: -

    $ ml save my-collection
    -

    Later, for example in a job script, you can reload all these modules with ml restore: -

    $ ml restore my-collection
    -

    With ml savelist you can get a list of all saved collections: -

    $ ml savelist
    -Named collection list:
    -  1) my-collection
    -  2) my-other-collection
    -

    To inspect a collection, use ml describe. -

    To remove a module collection, remove the corresponding entry in $HOME/.lmod.d. -



    Lmod vs Tcl-based environment modules

    In August 2016, we switched to Lmod as a modules tool, a modern alternative to the outdated and no longer actively maintained Tcl-based environment modules tool.

    Consult the Lmod documentation web site for more information. -


    Benefits

      -
    • significantly more responsive module commands, in particular module avail
    • -
    • a better and easier to use interface (e.g. case-insensitive avail, the ml command, etc.)
    • -
    • additional useful features, like defining & restoring module collections
    • -
    • drop-in replacement for Tcl-based environment modules (existing Tcl module files do not need to be modified to work)
    • -
    • module files can be written in either Tcl or Lua syntax (and both types of modules can be mixed together)
    • -

    Key differences

    The switch to Lmod should be mostly transparent, i.e. you should not have to change your existing job scripts. -

    Existing module commands should keep working as they were before the switch to Lmod. -

    However, there are a couple of minor differences between Lmod & the old modules tool you should be aware of: -

      -
    • module conflicts are strictly enforced
    • -
    • module purge does not unload the cluster module
    • -
    • modulecmd is not available anymore (only relevant for EasyBuild)
    • -


    See below for more detailed information.



    Module conflicts are strictly enforced

    Conflicting modules can no longer be loaded together. -

    Lmod has been configured to report an error if any module conflict occurs (as opposed to the default behaviour, which is to unload the conflicting module and replace it with the one being loaded).

    Although it seemed like the old modules did allow for conflicting modules to be loaded together, this was already highly discouraged since it usually resulted in a broken environment. Lmod will ensure no changes are made to your existing environment if a module that conflicts with an already loaded module is loaded.

    If you do try to load conflicting modules, you will run into an error message like: -

    $ module load Python/2.7.11-intel-2016a
    -$ module load Python/3.5.1-intel-2016a 
    -Lmod has detected the following error:  Your site prevents the automatic swapping of modules with same name.
    -You must explicitly unload the loaded version of \"Python\" before you can load the new one. Use swap (or an unload
    -followed by a load) to do this:
    -   $ module swap Python  Python/3.5.1-intel-2016a
    -Alternatively, you can set the environment variable LMOD_DISABLE_SAME_NAME_AUTOSWAP to \"no\" to re-enable same name
    -

    Note that although Lmod suggests unloading or swapping, we recommend that you make sure you only load compatible modules together, and certainly that you do not define $LMOD_DISABLE_SAME_NAME_AUTOSWAP.



    module purge does not unload the cluster module

    Using module purge effectively resets your environment to a pristine working state, i.e., the cluster module stays loaded after the purge. As such, it is no longer required to run module load cluster to restore your environment to a working state.

    When you do run module load cluster while a cluster module is already loaded, you will see the following warning message:

    WARNING: 'module load cluster' has no effect when a 'cluster' module is already loaded.
    -For more information, please see https://www.vscentrum.be/cluster-doc/software/modules/lmod#module_load_cluster
    -

    To change to another cluster, use module swap or ml swap; for example, to change your environment for the golett cluster, use ml swap cluster/golett.

    If you frequently see the warning above pop up, you may have something like this in your $VSC_HOME/.bashrc file:

    . /etc/profile.d/modules.sh
    -module load cluster
    -

    If you do, please remove that, and include this at the top of your ~/.bashrc file: -

    if [ -f /etc/bashrc ]; then
    -        . /etc/bashrc
    -fi
    -

    modulecmd is not available anymore

    The modulecmd command is not available anymore, and has been replaced by the lmod command.

    This is only relevant for EasyBuild, which has to be configured to use Lmod as a modules tool, since by default it expects that modulecmd is readily available.
    For example: -

    export EASYBUILD_MODULES_TOOL=Lmod
    -

    See the EasyBuild documentation for other ways of configuring EasyBuild to use Lmod.

    You should not use lmod directly in other circumstances; use either ml or module instead.

    Questions or problems

    In case of questions or problems, please do not hesitate to contact the HPC support team. The HPC-UGent support team can be reached via hpc@ugent.be. The HPC-VUB support team can be reached via hpc@vub.ac.be.

    " -701,"Job submission and credit reservations","

    When you submit a job, a reservation is made. This means that the number of credits required to run your job is marked as reserved. Of course, this is the number of credits that is required to run the job during the walltime specified, i.e., the reservation is computed based on the requested walltime. -

    Hence, if you submit a largish number of jobs and the walltime is overestimated, reservations will be made for a total that is potentially much larger than what you'll actually be debited for upon job completion (you're only debited for the walltime used, not the walltime requested).

    Now, suppose you know that your job will most probably end within 24 hours, but you specify 36 hours to be on the safe side (which is a good idea). Say, by way of example, that the average cost of a single job will be 300 credits. You have 3400 credits, so you can probably run at least 10 such jobs, so you submit all 10. -

    Here's the trap: for each job, a reservation is made, not of 300 credits, but of 450. Hence everything goes well for the first 7 jobs (7*450 = 3150 < 3400), but for the 8th up to the 10th job, your account no longer has sufficient credits to make a reservation. Those 3 jobs will be blocked by a SystemHold and never execute (unless additional credits are requested and a sysadmin releases them).

    We actually have a nice tool to compute the maximum number of credits a job can take. It is called gquote, and you can use it as follows. Suppose that you submit your job using, e.g.:

    $ qsub  -l walltime=4:00:00 my_job.pbs
    -

    Then you can compute its cost (before actually doing the qsub) by: -

    $ module load accounting
    -$ gquote  -l walltime=4:00:00  my_job.pbs
    -

    If this is a worker job, and you submit it as, e.g.: -

    $ wsub  -data data.csv  -batch my_job.pbs  -l nodes=4:ppn=20
    -

    Then you can compute its cost (before actually doing the qsub) by: -

    $ module load accounting
    -$ gquote  -l nodes=4:ppn=20  my_job.pbs
    -

    As you can see, gquote takes the same arguments as qsub (so if you use wsub, don't use the -batch option; just pass the job script directly). It will use both the arguments on the command line and the PBS directives in your script to compute the cost of the job, in the same way PBS Torque computes the resources for your job.

    You will notice when using gquote that it may give you quotes that are more expensive than you expect. This typically happens when you don't specify the processor attribute for the nodes resource. gquote will then assume that your job is executed on the most expensive processor type, which inflates the price.

    The price of a processor is of course proportional to its performance, so when the job finishes, you will be charged approximately the same regardless of the processor type the job ran on. (It ran for a shorter time on a faster, and hence more expensive, processor.)

    " -705,"","

    This page describes the part of the job script that actually does the useful work and runs the programs you want to run. -

    When your job is started by the scheduler and the resource manager, your job script will run as a regular script on the first core of the first node assigned to the job. The script runs in your home directory, which is not the directory where you will do your work, and with the standard user environment. So before you can actually start your program(s), you need to set up a proper environment. On a cluster, this is a bit more involved than on your PC, partly also because multiple versions of the same program may be present on the cluster, or there may be conflicting programs that make it impossible to offer a single set-up that suits all users. -

    Setting up the environment

    Changing the working directory

    As explained above, the job script will start in your home directory, which is not the place where you should run programs. So the first step will almost always be to switch to the actual working directory (the bash cd command). -

      -
    • In most cases, you simply want to start your job in the directory from which you submitted your job. Torque offers the environment variable PBS_O_WORKDIR for that purpose. So for most users, all you need is simply cd $PBS_O_WORKDIR as the first actual command in your job script.
    • -
    • On all VSC clusters we also define a number of environment variables that point to different file systems that you have access to. They may also be useful in job scripts, and may help to make your job script more portable to other VSC clusters. An overview of environment variables that point to various file systems is given on the page \"where should which data be stored?\".
    • -

    Loading modules

    The next step consists of loading the appropriate modules. This is no different from loading the modules on the login nodes to prepare for your job or when running programs on interactive nodes, so we refer to the \"Modules\" page in the \"Running software\" section. -

    Useful Torque environment variables

    Torque defines a lot of environment variables on the compute nodes on which your job runs. They can be very useful in your job scripts. Some of the more important ones are:

      -
    • PBS_O_WORKDIR : The directory from which your job was submitted.
    • -
    • PBS_JOBNAME : The name of the job
    • -
    • PBS_JOBID : The unique jobid. This is very useful when constructing, e.g., unique file names.
    • -
    • PBS_NUM_NODES : The number of nodes you requested.
    • -
    • PBS_NUM_PPN : The number of cores per node requested.
    • -
    • PBS_NP : The total number of cores requested.
    • -
    • PBS_NODEFILE : This variable is used by several MPI implementations to get the node list from the resource manager when starting an MPI program. It will contain $PBS_NP lines.
    • -

    There are also some variables that are useful if you use the Torque command pbsdsh to execute a command on another node/core of your allocation. We mention them here for completeness, but they will also be elaborated on in the paragraph on "Starting a single-core program on each assigned core" further down this page.

      -
    • PBS_NODENUM : The number of the node in your allocation. E.g., when starting a job with -l nodes=3:ppn=5, $PBS_NODENUM will be 0, 1 or 2 if the job script has actually been scheduled on three physically distinct nodes. As the job script executes on the first core of the allocation, its value will always be 0 in your job script.
    • -
    • PBS_VNODENUM : The number of the physical core or hyperthread in your allocation. The numbering continues across the nodes in the allocation, so in case of a job started with -l nodes=3:ppn=5, $PBS_VNODENUM will be a number between 0 and 14 (0 and 14 included). In your job script, its value will be 0.
    • -
    • PBS_TASKNUM : Number of the task. The numbering starts from 1 but continues across calls of pbsdsh. The job script runs with PBS_TASKNUM set to 1. The first call to pbsdsh will start its numbering from 2, and so on.
    • -

    Starting programs

    We show some very common start scenarios for programs on a cluster: -

      -
    • Shared memory programs with OpenMP as an example
    • -
    • Distributed memory programs with MPI programs as an example
    • -
    • An embarrassingly parallel job consisting of independent single-core runs combined in a single job script
    • -

    Starting a single multithreaded program (e.g., an OpenMP program)

    Starting a multithreaded program is easy. In principle, all you need to do is call its executable as you would do with any program at the command line. -

    However, often the program needs to be told how many threads to use. The default behaviour depends on the program. Most programs will either use only one thread unless told otherwise, or use one thread per core they can detect. The problem with programs that do the latter is that if you have requested only a subset of the cores on the node, the program will still detect the total number of cores or hyperthreads on the node and start that number of threads. Depending on the cluster you are using, these threads will swarm out over the whole node and sit in the way of other programs (often the case on older clusters) or will be contained in the set of cores/hyperthreads allocated to the job and sit in each other's way (e.g., because they compete for the same limited cache space). In both cases, the program will run much slower than it could.

    You will also need to experiment a bit with the number of cores that can actually be used in a useful way. This depends on the code and the size of the problem you are trying to solve. The same code may scale to only 4 threads for a small problem yet be able to use all cores on a node well when solving a much larger problem. -

    How to tell the program the number of threads to use, also differs between programs. Typical ways are through an environment variable or a command line option, though for some programs this is actually a parameter in the input file. Many scientific shared memory programs are developed using OpenMP directives. For these programs, the number of threads can be set through the environment variable OMP_NUM_THREADS. The line -

    export OMP_NUM_THREADS=$PBS_NUM_PPN
    -

    will set the number of threads to the value of ppn used in your job script. -

    Starting a distributed memory program (e.g., a MPI program)

    Starting a distributed memory program is a bit more involved, as it always involves more than one Linux process. Most distributed memory programs in scientific computing are written using the Single Program Multiple Data paradigm: a single executable is run on each core, but each core works on a different part of the data. The most popular technique for developing such programs is the MPI (Message Passing Interface) library.

    Distributed memory programs are usually started through a starter command. For MPI programs, this is mpirun or mpiexec (often one is an alias for the other). The command line arguments for mpirun differ between MPI implementations. We refer to the documentation on toolchains in the \"Software development\" section of this web site for more information on the implementations supported at the VSC. As most MPI implementations in use at the VSC recognise our resource manager software and get their information about the number of nodes and cores directly from the resource manager, it is usually sufficient to start your MPI program using -

    mpirun <mpi-program>
    -

    where <mpi-program> is your MPI program and its command line arguments. This will start one instance of your MPI program on each core or hyperthread assigned to the job. -

    Programs using different distributed memory libraries may use a different starter program, and some programs come with a script that will call mpirun for you, so you can start those as a regular program. -

    Some programs use a mix of MPI and OpenMP (or a combination of another distributed and shared memory programming technique). Examples are some programs in Gromacs and QuantumESPRESSO. The rationale is that a single node on a cluster may not be enough, so you need distributed memory, while a shared memory paradigm is often more efficient in exploiting parallelism within the node. You'll need additional implementation-dependent options to mpirun to start such programs and also to define how many threads each instance can use. There is some information specifically for hybrid MPI/OpenMP programs on the "Hybrid MPI/OpenMP programs" page in the software development section. We advise you to contact user support for help in figuring out the right options and values if you are not sure which ones to use.

    Starting a single-core program on each assigned core

    A rather common use case on a cluster is running many copies of the same program independently on different data sets. It is not uncommon that those programs are not parallelised at all or only poorly parallelised and run on only a single core. Rather than submitting a lot of single core jobs, it is easier for the scheduler if those jobs are bundled in a single job that fills a whole node. Our job scheduler will try to fill a whole node using multiple of your jobs, but this doesn't always work right. E.g., assume a cluster with 20-core nodes where some nodes have 3 GB per core available for user jobs and some nodes have 6 GB available. If your job needs 5 GB per core (and you specify that using the mem or pmem parameters), but you don't explicitly request the nodes with 6 GB per core, the scheduler may still schedule the first job on a node with only 3 GB per core, then try to fill up that node further with jobs from you, but once half the node is filled discover that there is not enough memory left to start more jobs, leaving half of the CPU capacity unused.

    To ease combining jobs in a single larger job, we advise to have a look at the Worker framework. It helps you to organise the input to the various instances of your program for many common scenarios. -

    Should you decide to start the instances of your program yourself, we advise you to have a look at the Torque pbsdsh command rather than ssh. This assures that all programs will execute under the full control of the resource manager on the cores allocated to your job. The variables PBS_NODENUM, PBS_VNODENUM and PBS_TASKNUM can be used to determine on which core you are running and to select the appropriate input files. Note that in most cases, it will actually be necessary to write a second script besides your job script. That second script then uses these variables to compute the names of the input and the output files and starts the actual program you want to run on that core.

    To further explore the meaning of PBS_NODENUM, PBS_VNODENUM and PBS_TASKNUM and to illustrate the use of pbsdsh, consider the job script -

    #! /bin/bash
    -cd $PBS_O_WORKDIR
    -echo \"Started with nodes=$PBS_NUM_NODES:ppn=$PBS_NUM_PPN\"
    -echo \"First call of pbsdsh\"
    -pbsdsh bash -c 'echo \"Hello from node $PBS_NODENUM ($HOSTNAME) vnode $PBS_VNODENUM task $PBS_TASKNUM\"'
    -echo \"Second call of pbsdsh\"
    -pbsdsh bash -c 'echo \"Hello from node $PBS_NODENUM ($HOSTNAME) vnode $PBS_VNODENUM task $PBS_TASKNUM\"'
    -

    Save this script as \"testscript.pbs\" and execute it for different numbers of nodes and cores-per-node using -

    qsub -l nodes=4:ppn=5 testscript.pbs
    -

    (so using 4 nodes and 5 cores per node in this example). When calling qsub, it will return a job number, and when the job ends you will find a file testscript.pbs.o<number_of_the_job> in the directory where you executed qsub. -

    For more information on the pbsdsh command, we refer to the Torque manual on the Adaptive Computing documentation web site, or to the manual page ("man pbsdsh").

    " -707,"","

    Submitting your job: the qsub command

    Once your job script is finished, you submit it to the scheduling system using the qsub command: -

    qsub <jobscript>
    -

    places your job script in the queue. As explained on the page on "Specifying resources, output files and notifications", there are several options to tell the scheduler which resources you need or how you want to be notified of events surrounding your job. They can be given at the top of your job script or as additional command line options to qsub. In case both are used, options given on the command line take precedence over the specifications in the job script. E.g., if a different number of nodes and cores is requested through a command line option than is specified in the job script, the specification on the command line will be used.

    Starting interactive jobs

    Though our clusters are mainly meant to be used for batch jobs, there are some facilities for interactive work: -

      -
    • The login nodes can be used for light interactive work. They can typically run the same software as the compute nodes. Some sites also have special interactive nodes for special tasks, e.g., scientific data visualisation. See the \"Available hardware\" section where each site documents what is available.
      Examples of work that can be done on the login nodes are running a GUI program that generates the input files for your simulation, a not too long compile, or a quick and not very resource intensive visualisation. We have set limits on the amount of time a program can use on the login nodes.
    • -
    • It is also possible to request one or more compute nodes for interactive work. This is also done through the qsub command. In this case, you can still use a job script to specify the resources, but the most common case is to specify them at the command line.
    • -

    In the latter scenario, two options of qsub are particularly useful: -I to request a node for interactive use, and -X to add support for X to the request. You would typically also add several -l options to specify how long you need the node and the amount of resources that you need. E.g.,

    qsub -I -l walltime=2:00:00 -l nodes=1:ppn=20
    -

    to use 20 cores on a single node for 2 hours. qsub will block until it gets a node and then you get the command prompt for that node. If the wait is too long however, qsub will return with an error message and you'll need to repeat the call. -

    If you want to run programs that use X in your interactive job, you have to add the -X option to the above command. This will set up the forwarding of X traffic to the login node, and ultimately to your terminal if you have set up the connection to the login node properly for X support. -

    Please remain reasonable in your request for interactive resources. On some clusters, a short walltime will give you a higher priority, and on most clusters a request for a multi-day interactive session will fail simply because the cluster cannot give you such a node before the time-out of qsub kicks in. Interactive use of nodes is mostly meant for debugging, for large compiles or for larger visualisations on clusters that don't have dedicated nodes for visualisation.

    Viewing your jobs in the queue: qstat and showq -

    Two commands can be used to show your jobs in the queue: -

      -
    • qstat shows the queue from the resource manager's perspective. It doesn't know about priorities, only about requested resources and the state of your job: still idle and waiting for resources, running, finishing, ...
    • -
    • showq shows the queue from the scheduler's perspective, taking priorities and policies into account.
    • -

    Both commands will also show you the name of the queue (qstat) or class (showq), which in most cases is actually the same as the queue. All VSC clusters have multiple queues. Queues are used to define policies for each cluster. E.g., users may be allowed to have a lot of short jobs running simultaneously as they will finish soon anyway, but may be limited to a few multi-day jobs to avoid long-time monopolisation of a cluster by a single user, and this would typically be implemented by having separate queues with separate policies for short and long jobs. When you submit a job, qsub will put the job in a particular queue based on the resources requested. The qsub command does allow you to specify the queue to use, but unless instructed to do so by user support, we strongly advise against using this option. Putting the job in the wrong queue may actually result in your job being refused by the queue manager, and we may also choose to change the available queues on a system to implement new policies.
    -

    qstat

    On the VSC clusters, users can only use a subset of the options that qstat offers, and the output is always restricted to the user's own jobs.

    To see your jobs in the queue, enter -

    qstat
    -

    This will give you an overview of all your jobs including their status, which can be queued but not yet running (Q), running (R) or finishing (C).

    qstat <jobid>
    -

    where <jobid> is the number of the job, will show you the information about this job only. -

    Several command line options can be specified to modify the output of qstat: -

      -
    • qstat -i will show you a bit more information.
    • -
    • qstat -n will also show you the nodes allocated to each running job.
    • -
    • qstat -f or qstat -f1 produces even more output. In fact, it produces so much output that it is better only used with the job ID as an argument to request information about a specific job.
    • -

    showq

    The showq command will show you information about the queue from the scheduler's perspective. Jobs are subdivided in three categories: -

      -
    • The active jobs are the jobs that are actually running, or are being started or terminated.
    • -
    • Eligible jobs are jobs that are queued and considered eligible for scheduling.
    • -
    • Blocked jobs are jobs that are ineligible to run or to be queued for scheduling. There are multiple reasons why a job might be in the blocked state:
      • idle: your job most likely violates a fairness policy, i.e., you've used too many resources recently.
      • BatchHold: either the cluster has repeatedly failed to start the job (which typically is a problem with the cluster, so contact user support if you see this happen) or your resource request cannot be granted on the cluster. This is also the case if you try to put more jobs in a queue than you are allowed to have queued or running at any particular moment.
      • deferred: a temporary hold after a failed start attempt, but the system will have another try at starting the job.
      • UserHold or SystemHold: the user or the system administrator has put a hold on the job (and it is up to him/her to also release that hold again).
      • NotQueued: the job has not been queued for some other reason.

    The showq command will split its output according to the three major categories. Active jobs are sorted according to their expected end time while eligible jobs are sorted according to their current priority. -

    There are also some useful options: -

      -
    • showq -r will show you the running jobs only, but will also give more information about these jobs, including an estimate about how efficiently they are using the CPU.
    • -
    • showq -i will give you more information about your eligible jobs.
    • -

    Getting detailed information about a job: qstat -f and checkjob

    We've discussed the Torque qstat -f command already in the previous section. It gives detailed information about a job from the resource manager's perspective. -

    The checkjob command does the same, but from the perspective of the scheduler, so the information that you get is different. -

    checkjob 323323
    -

    will produce information about the job with jobid 323323. -

    checkjob -v 323323
    -

    where -v stands for verbose produces even more information. -

    For a running job, checkjob will give you an overview of the allocated resources and the wall time consumed so far. For blocked jobs, the end of the output typically contains clues about why a job is blocked. -

    Deleting a job that is queued or running

    This is easily done with qdel:

    qdel 323323

    will delete the job with job ID 323323. If the job is already running, the processes will be killed and the resources will be returned to the scheduler for another job.

    Getting an estimate for the start time of your job: showstart

    This is a very simple tool that will tell you, based on the current status of the cluster, when your job is scheduled to start. Note however that this is merely an estimate, and should not be relied upon: jobs can start sooner if other jobs finish early, get removed, etc., but jobs can also be delayed when other jobs with higher priority are submitted. -

    $ showstart 20030021
    -job 20030021 requires 896 procs for 1:00:00
    -Earliest start in       5:20:52:52 on Tue Mar 24 07:36:36
    -Earliest completion in  5:21:52:52 on Tue Mar 24 08:36:36
    -Best Partition: DEFAULT
    -

    Note however that this is only an estimate, starting from the jobs that are currently running or in the queue and the wall time that users gave for these jobs. Jobs may always end earlier than predicted based on the requested wall time, so your job may start earlier. But other jobs with a higher priority may also enter the queue and delay the start of your job.

    See if there are free resources that you might use for a short job: showbf

    When the scheduler performs its scheduling task, there are bound to be some gaps between jobs on a node. These gaps can be backfilled with small jobs. To get an overview of these gaps, you can execute the command showbf:

    $ showbf
    -backfill window (user: 'vsc30001' group: 'vsc30001' partition: ALL) Wed Mar 18 10:31:02
    -323 procs available for      21:04:59
    -136 procs available for   13:19:28:58

    There is however no guarantee that if you submit a job that would fit in the available resources, it will also run immediately. Another user might be doing the same thing at the same time, or you may simply be blocked from running more jobs because you already have too many jobs running or have made heavy use of the cluster recently.


    " -709,"","

    The basics of the job system

    Common problems

    Advanced topics

      -
    • Credit system basics: credits are used on all clusters at the KU Leuven (including the Tier-1 system BrENIAC) to control your compute time allocation
    • Monitoring memory and CPU usage of programs, which helps to find the right parameters to improve your specification of the job requirements
    • -
    • Worker framework: To manage lots of small jobs on a cluster. The cluster scheduler isn't meant to deal with tons of small jobs. Those create a lot of overhead, so it is better to bundle those jobs in larger sets.
    • -
    • The checkpointing framework can be used to run programs that take longer than the maximum time allowed by the queue. It can break a long job in shorter jobs, saving the state at the end to automatically start the next job from the point where the previous job was interrupted.
    • Running jobs on GPU or Xeon Phi nodes: The procedure is not standardised across the VSC, so we refer to the pages for each cluster in the \"Available hardware\" section of this web site
    • -
    " -711,"","

    Access restriction

    Once your project has been approved, your login on the Tier-1 cluster will be enabled. You use the same vsc-account (vscXXXXX) as at your home institutions and you use the same $VSC_HOME and $VSC_DATA directories, though the Tier-1 does have its own scratch directories. -

    You can log in to the following login nodes: -

      -
    • login1-tier1.hpc.kuleuven.be
    • -
    • login2-tier1.hpc.kuleuven.be
    • -

    These nodes are also accessible from outside the KU Leuven. Unlike for the Tier-1 system muk, it is not necessary to first log on to your home cluster before proceeding to BrENIAC. Have a look at the quickstart guide for more information.

    Hardware details

    The tier-1 cluster BrENIAC is primarily aimed at large parallel computing jobs that require a high-bandwidth low-latency interconnect, but jobs that require a multitude of small independent tasks are also accepted. -

    The main architectural features are: -

      -
    • 580 compute nodes with two Xeon E5-2680v4 processors (2.4 GHz, 14 cores per processor, Broadwell architecture). 435 nodes are equipped with 128 GB RAM and 135 nodes with 256 GB. The total number of cores is 16,240, the total memory capacity is 90.6 TiB and the peak performance is more than 623 TFlops (Linpack result 548 TFlops).
      The Broadwell CPU supports the 256-bit AVX2 vector instructions with fused multiply-add operations. Each core can execute up to 16 double precision floating point operations per cycle (two 4-number FMAs), but to be able to use the AVX2 instructions, you need to recompile your program for the Haswell or Broadwell architecture.
      The CPU also uses what Intel calls the "Cluster-on-Die" approach, which means that each processor chip internally has two groups of 7 cores. For hybrid MPI/OpenMP programs (or in general distributed/shared memory programs), 4 MPI processes per node, each using 7 cores, might be a good choice.
    • -
    • EDR Infiniband interconnect with a fat tree topology (blocking factor 2:1)
    • -
    • A storage system with a net capacity of approximately 634 TB and a peak bandwidth of 20 GB/s, using the GPFS file system.
    • -
    • 2 login nodes with a similar configuration as the compute nodes
    • -

    Compute time on BrENIAC is only available upon approval of a project. Information on requesting projects is available in Dutch and in English.
    -

    Accessing your data

    BrENIAC supports the standard VSC directories. -

      -
    • $VSC_HOME points to your VSC home directory. It is your standard home directory which is accessed over the VSC network, and available as /user/<institution>/XXX/vscXXXYY, e.g., /user/antwerpen/201/vsc20001. So the quota on this directory is set by your home institution.
    • -
    • $VSC_DATA points to your standard VSC data directory, accessed over the VSC network. It is available as /data/<institution>/XXX/vscXXXYY. The quota on this directory is set by your home institution. The directory is mounted via NFS, which lacks some of the features of the parallel file system that may be available at your home institution. Certain programs using parallel I/O may fail when running from this directory, so you are strongly encouraged to only run programs from $VSC_SCRATCH.
    • -
    • $VSC_SCRATCH is a Tier-1 specific fast parallel file system using the GPFS file system. The default quota is 1 TiB but may be changed depending on your project request. The directory is also available as /scratch/leuven/XXX/vscXXXYY (and note \"leuven\" in the name, not your own institutions as this directory is physically located on the Tier-1 system at KU Leuven). The variable $VSC_SCRATCH_SITE points to the same directory.
    • -
    • $VSC_NODE_SCRATCH points to a small (roughly 70 GB) local scratch directory on the SSD of each node. It is also available as /node_scratch/<jobid>. Its contents are only accessible from a particular node and during the job.
    • -

    Running jobs and specifying node characteristics

    The cluster uses Torque/Moab as all other clusters at the VSC, so the generic documentation applies to BrENIAC also. -

      -
    • BrENIAC uses a single job per node policy. So if a user submits single core jobs, the nodes will usually be used very inefficiently and you will quickly run out of your compute time allocation. Users are strongly encouraged to use the Worker framework (e.g., module worker/1.6.7-intel-2016a) to group such single-core jobs. Worker makes the scheduler's task easier as it does not have to deal with too many jobs. It has a documentation page on this user portal and a more detailed external documentation site.
    • -
    • The maximum regular job duration is 3 days.
    • -
    • Take into account that each node has 28 cores. These are logically grouped in 2 sets of 14 (sockets) or 4 sets of 7 (NUMA-on-chip domains). Hence for hybrid MPI/OpenMP programs, 4 MPI processes per node with 7 threads each (or two with 14 threads each) may be a better choice than 1 MPI process per node with 28 threads (a minimal sketch of such a job script follows this list).
    • -

    Several \"MOAB features\" are defined to select nodes of a particular type on the cluster. You can specify them in your job scirpt using, e.g., -

    #PBS -l feature=mem256
    -

    to request only nodes with the mem256 feature. Some important features: -

    feature    explanation
    mem128     Select nodes with 128 GB of RAM (roughly 120 GB available to users).
    mem256     Select nodes with 256 GB of RAM (roughly 250 GB available to users).
    rXiY       Request nodes in a specific InfiniBand island. X ranges from 01 to 09; Y can be 01, 11 or 23. The rXi01 islands have 20 nodes each, the rXi11 and rXi23 islands with X = 01, 02, 03, 04, 06, 07, 08 or 09 have 24 nodes each, and the island r5i11 has 16 nodes. This may be helpful to make sure that nodes used by a job are as close to each other as possible, but in general it will increase the waiting time before your job starts.

    Compile and debug nodes

    8 nodes with 256 GB of RAM are set aside for compiling or debugging small jobs. You can run jobs on them by specifying

    #PBS -lqos=debugging

    in your job script.

    The following limitations apply:

    • Maximum 1 job per user at a time
    • Maximum 8 nodes for the job
    • Maximum accumulated wall time is 1 hour. e.g., a job using 1 node for 1 hour or a job using 4 nodes for 15 minutes.

    Credit system

    BrENIAC uses Moab Accounting Manager for accounting the compute time used by a user. Tier-1 users have a credit account for each granted Tier-1 project. When starting a job, you need to specify which credit account to use via

    #PBS -A lpt1_XXXX-YY

    where lpt1_XXXX-YY is the name of your project account. You can also specify the -A option on the qsub command line.

    Further information

    Software specifics

    BrENIAC uses the standard VSC toolchains. However, not all VSC toolchains are made available on BrENIAC. For now, only the 2016a toolchain is available. The Intel toolchain has slightly newer versions of the compilers, MKL library and MPI library than the standard VSC 2016a toolchain to be fully compatible with the machine hardware and software stack.

    Some history

    BrENIAC was installed during the spring of 2016, followed by several months of testing, first by the system staff and next by pilot users. The system was officially launched on October 17 of that year, and by the end of the month new Tier-1 projects started computing on the cluster. -

    We have a time lapse movie of the construction of BrENIAC: -


    Documentation

    " -713,"","

    (Test text) The Flemish Supercomputer Centre (VSC) is a virtual centre making supercomputer infrastructure available for both the academic and industrial world. This centre is managed by the Research Foundation - Flanders (FWO) in partnership with the five Flemish university associations.
    -

    " -715,"HPC for industry (testversie)","

    The collective expertise, training programs and infrastructure of VSC together with participating university associations have the potential to create significant added value to your business.

    " -717,"HPC for academics (testversie)","

    With HPC-technology you can refine your research and gain new insights to take your research to new heights.

    " -719,"What is supercomputing? (testversie)","

    Supercomputers have an immense impact on our daily lives. Their scope extends far beyond the weather forecast after the news.

    " -739,"","

    Basic job system use

    Advanced job system use

    Miscellaneous topics

      -
    • Monitoring memory and CPU usage of programs, which helps to find the right parameters to use in the job scripts.
    • -
    • The checkpointing framework can be used to run programs that take longer than the maximum time allowed by the queue. It can break a long job in shorter jobs, saving the state at the end to automatically start the next job from the point where the previous job was interrupted.
    • -
    " -741,"","

    Access

    qsub  -l partition=gpu,nodes=1:K20Xm <jobscript>
    -

    or -

    qsub  -l partition=gpu,nodes=1:K40c <jobscript>
    -

    depending on which GPU node you would like to use. If you don't care on which type of GPU node your job ends up, you can just submit it like this:

    qsub  -l partition=gpu <jobscript>
    -

    Submit to a Phi node:

    qsub -l partition=phi <jobscript>
    -
    " -745,"","

    The application

    The designated way to get access to the Tier-1 for research purposes is through a project application. -

    You have to submit a proposal to get compute time on the Tier-1 cluster Muk. -

    You should include a realistic estimate of the compute time needed in the project in your application. These estimations can best be endorsed by Tier-1 benchmarks. To be able to perform these tests for new codes, you can request a starting grant through a short and quick procedure. -

    You can submit proposals continuously, but they will be gathered, evaluated and resources allocated at a number of cut-off dates. There are 3 cut-off dates in 2016 : -

      -
    • February 1, 2016
    • -
    • June 6, 2016
    • -
    • October 3, 2016
    • -

    Proposals submitted since the last cut-off and before each of these dates are reviewed together. -

    The FWO appoints an evaluation commission to do this. -

    Because of the international composition of the evaluation commission, the preferred language for the proposals is English. If a proposal is in Dutch, you must also send an English translation. Please have a look at the documentation of standard terms like CPU, core, node-hour, memory and storage, and use these consistently in the proposal.

    For applications in 2014 or 2015, costs for resources used will be invoiced, with various discounts for Flemish-funded academic researchers. You should be aware that the investments and operational costs for the Tier-1 infrastructure are considerable. -

    You can submit your application via EasyChair using the application forms below.

    Relevant documents - 2016

    On October 26 the Board of Directors of the Hercules foundation decided to make a major adjustment to the regulations regarding applications to use the Flemish supercomputer. -

    For applications for computing time on the Tier-1 granted in 2016 and coming from researchers at universities, the Flemish SOCs and the Flemish public knowledge institutions, applicants will no longer have to pay a contribution to the cost of compute time and storage. Of course, the applications have to be of outstanding quality. The evaluation commission remains responsible for the review of the applications.

    For applications granted in 2015 the current pricing structure remains in place and contributions will be asked. -

    The adjusted Regulations for 2016 can be found in the links below. -

    From January 1, 2016 on the responsibility for the funding of HPC and the management of the Tier-1 has been transferred to the FWO, including all current decisions and ongoing contracts. -

    If you need help to fill out the application, please consult your local support team. -

    Relevant documents - 2015

    Pricing - applications in 2015

    When you receive compute time through a Tier-1 project application, we expect a contribution in the cost of compute time and storage. -

    Summary of rates (CPU per nodeday / private disk per TB per month):

    Universities, VIB and iMINDS: 0.68€ (5%) / 2€ (5%)
    Other SOCs and other Flemish public research institutes: 1.35€ (10%) / 4€ (10%)
    Flemish public research institutes - contract research with possibility of full cost accounting (*): 13.54€ / 46.80€
    Flemish public research institutes - European projects with possibility of full cost accounting (*): 13.54€ / 46.80€

    (*) The price for one nodeday is 13.54 euro (incl. overhead and support of the Tier-1 technical support team, but excl. advanced support by specialized staff). The price for 1 TB of storage per month is 46.80 euro (incl. overhead and support of the Tier-1 technical support team, but excl. advanced support by specialized staff). Approved Tier-1 projects get a default quota of 1 TB. Only storage requests higher than 1 TB will be charged, and only for the amount above 1 TB.

    EasyChair procedure

    You have to submit your proposal on EasyChair for the conference Tier12016. This requires the following steps: -

      -
    1. If you do not yet have an EasyChair account, you first have to create one: -
        -
      1. Complete the CAPTCHA
      2. -
      3. Provide first name, name, e-mail address
      4. -
      5. A confirmation e-mail will be sent, please follow the instructions in this e-mail (click the link)
      6. -
      7. Complete the required details.
      8. -
      9. When the account has been created, a link will appear to log in on the TIER1 submission page.
      10. -
    2. -
    3. Log in onto the EasyChair system.
    4. -
    5. Select ‘New submission’.
    6. -
    7. If asked, accept the EasyChair terms of service.
    8. -
    9. Add one or more authors; if they have an EasyChair account, they can follow up on and/or adjust the present application.
    10. -
    11. Complete the title and abstract.
    12. -
    13. You must specify at least three keywords: Include the institution of the promoter of the present project and the field of research.
    14. -
    15. As a paper, submit a PDF version of the completed Application form. You must submit the complete proposal, including the enclosures, as 1 single PDF file to the system.
    16. -
    17. Click \"Submit\".
    18. -
    19. EasyChair will send a confirmation e-mail to all listed authors.
    20. -
    " -747,"","

    From version 2017a of the Intel toolchains on, the setup at UAntwerp is different from the one on some other VSC clusters:

      -
    • A full install of all Intel tools for which we have a license at UAntwerp has been performed in a single directory tree as intended by Intel. There is a single module for the C/C++/Fortran compilers, Intel MPI, the libraries MKL (Math Kernel Library), IPP (Integrated Performance Primitives), TBB (Threading Building Blocks) and DAAL (Data Analytics Acceleration Library) and Intel-provided GDB-based debuggers. The Intel tools for code and performance analysis VTune Amplifier XE, Intel Trace Analyzer and Collector (ITAC), Intel Advisor XE and Intel Inspector XE are also installed, but these still have separate module files as they rely on overloading libraries in some cases.
    • -
    • There should be no need to run any of the configuration scripts provided by Intel; all variables should be set correctly by the module file. Contact user support if this is not the case. The configuration scripts should still work as intended by Intel, though, should you want to use the compilers without loading the module.
    • -
    • Several variables specific to the way software is set up at the VSC are defined as they would be if the Intel toolchain were installed in the standard VSC way through the module tree. As such, we expect that you should be able to use any Makefile developed for the standard VSC setup.
    • -
    • All compiler components needed to develop applications with offload to a Xeon Phi expansion board are also provided in anticipation of the installation of such a cluster node for testing purposes in Leibniz.
    • -

    Compilers

      -
    • The compilers work exactly in the way described on the regular Intel toolchain web page, including the MPI compiler wrappers (a brief example follows this list). All links to the documentation on that page are also relevant.
    • -
    • Man pages for all commands have also been installed.
    • -

    Debuggers

      -
    • Intel-adapted GDB debuggers have been installed -
        -
      • Debugging regular Intel64 applications: gdb-ia
      • -
      • Debugging applications with offload to Xeon Phi: gdb-mic
      • -
    • -
    • Manual pages and GNU info pages are available for both commands
    • -

    Libraries

    Math Kernel Library (MKL)

    MKL works exactly as in the regular VSC Intel toolchain. See the MKL section of the web page on the VSC Intel toolchain for more information.

    Integrated Performance Primitives (IPP)

    Threading Building Blocks (TBB)

    Data Analytics Acceleration Library (DAAL)

    Code and performance analysis

    VTune Amplifier XE

    ITAC - Intel Trace Analyzer and Collector

    Advisor

      -
    • What? Advisor is a code analysis tool that works with the compilers to give advise on vectorization and threading for both the Xeon and Xeon Phi processors.
    • -
    • How? Advisor uses output generated by the compiler when building a fully optimized release build, and as such expects that some additional options are specified when compiling the application. The resulting compiler output can then be analyzed using the advixe-gui command (a minimal sketch follows this list).
    • -
    • Module: Advisor/<toolchain version>, e.g., Advisor/2017a.
    • -
    • Documentation -
    • -

    Inspector

    " -749,"","

    The third VSC Users Day was held at the \"Paleis der Academiën\", the seat of the \"Royal Flemish Academy of Belgium for Science and the Arts\", in the Hertogstraat 1, 1000 Brussels, on June 2, 2017. -

    Program

      -
    • 9u50 : welcome
    • -
    • 10u00: dr. Achim Basermann, German Aerospace Center, High Performance Computing with Aeronautics and Space Applications [slides PDF - 4,9MB]
    • -
    • 11u00: coffee break
    • -
    • 11u30: workshop sessions – part 1 -
        -
      • VSC for starters (by VSC personnel) [slides PDF - 5.3MB]
      • -
      • Profiler and Debugger (by VSC personnel) [slides PDF - 2,2MB]
      • -
      • Programming GPUs (dr. Bart Goossens - dr. Vule Strbac)
      • -
    • -
    • 12u45: lunch
    • -
    • 14u00: dr. Ehsan Moravveji, KU Leuven A Success Story on Tier-1: A Grid of Stellar Models [slides - PDF 11,9MB]
    • -
    • 14u30: ‘1-minute’ poster presentations
    • -
    • 15u00: workshop sessions – part 2 -
        -
      • VSC for starters (by VSC personnel)
      • -
      • Profiler and Debugger (by VSC personnel)
      • -
      • Feedback from Tier-1 Evaluation Committee (dr. Walter Lioen, chairman) [slides - PDF 0.5MB]
      • -
    • -
    • 16u00: coffee and poster session
    • -
    • 17u00: drink
    • -

    Abstracts of workshops -

      -
    • VSC for starters [slides PDF - 5.3MB]
      The workshop provides a smooth introduction to supercomputing for new users. Starting from common concepts in personal computing the similarities and differences with supercomputing are highlighted and some essential terminology is introduced. It is explained what users can expect from supercomputing and what not, as well as what is expected from them as users
    • -
    • Profiler and Debugger [slides PDF - 2,2MB]
      Both profiling and debugging play an important role in the software development process, and are not always appreciated. In this session we will introduce profiling and debugging tools, but the emphasis is on methodology. We will discuss how to detect common performance bottlenecks, and suggest some approaches to tackle them. For debugging, the most important point is avoiding bugs as much as possible.
    • -
    • Programming GPUs -
        -
      • Quasar, a high-level language and a development environment to reduce the complexity of heterogeneous programming of CPUs and GPUs, Prof dr. Bart Goosens, UGent [slides PDF - 2,1MB]
        In this workshop we present Quasar, a new programming framework that takes care of many common challenges for GPU programming, e.g., parallelization, memory management, load balancing and scheduling. Quasar consists of a high-level programming language with a similar abstraction level as Python or Matlab, making it well suited for rapid prototyping. We highlight some of the automatic parallelization strategies of Quasar and show how high-level code can efficiently be compiled to parallel code that takes advantage of the available CPU and GPU cores, while offering a computational performance that is on a par with a manual low-level C++/CUDA implementation. We explain how multi-GPU systems can be programmed from Quasar and we demonstrate some recent image processing and computer vision results obtained with Quasar.
      • -
      • GPU programming opportunities and challenges: nonlinear finite element analysis, dr. Vule Strbac, KU Leuven [slides PDF - 2,1MB]
        From a computational perspective, finite element analysis manifests substantial internal parallelism. Exposing and exploiting this parallelism using GPUs can yield significant speedups against CPU execution. The details of the mapping between a requested FE scenario and the hardware capabilities of the GPU device greatly affect this resulting speedup. Factors such as: (1) the types of materials present (elasticity), (2) the local memory pool and (3) fp32/fp64 computation impact GPU solution times differently than their CPU counterparts.
        We present results of both simple and complex FE analyses scenarios on a multitude of GPUs and show an objective estimation of general performance. In doing so, we detail the overall opportunities, challenges as well as the limitations of the GPU FE approach. -
      • -
    • -

    Poster sessions -

    An overview of the posters that were presented during the poster session is available here. -

    " -751,"","


      -
    • By train: The closest railway station is Brussel-Centraal/Bruxelles Central. From there it is a ten-minute walk to the venue, or you can take the metro.
    • -
    • By metro (MIVB): Metro station Troon -
        -
      • From the Central Station: Line 1 or 5 till Kunst-Wet, then line 2 or 6.
      • -
      • From the North Station: metro Rogier, line 2 or 6 towards \"Koning Boudewijn\" or \"Simonis (Leopold II)\".
      • -
      • From the South Station: Line 2 or 6 towards Simonis (Elisabeth)
      • -
    • -
    • By car: -
        -
      • The \"Paleis der Academiën\" has two free parking areas at the side of the \"Kleine ring\". Access is via the Regentlaan which you should enter at the Belliardstraat.
      • -
      • There are limited non-free parking spots at the Regentlaan or the Paleizenplein
      • -
      • Two nearby parking garages are: -
          -
        • Parking 2 Portes: Waterloolaan 2a, 1000 Brussel
        • -
        • Parking Industrie: Industriestraat 26-38, 1040 Brussel
        • -
      • -
    • -
    " -753,"","

    Important changes

    The 2017a toolchain is the toolchain that will be carried forward to Leibniz and will be available after the operating system upgrade of Hopper. Hence it is meant to be as complete as possible. We will only make a limited number of programs available in the 2016b toolchain (basically those that show much better performance with the older compiler or that do not compile with the compilers in the 2017a toolchains).

    Important changes in the 2017a toolchain:

      -
    • The Intel compilers have been installed in a single directory tree, much the way Intel intends the install to be done. The intel/2017a module loads fewer submodules and instead sets all required variables. The install now also contains the Thread Building Blocks (TBB), Integrated Performance Primitives (IPP) and Data Analytics Acceleration Library (DAAL). All developer tools (debugger, Inspector, Advisor, VTune Amplifier, ITAC) are enabled by loading the inteldevtools/2017a module rather than independent modules for each tool (see the example commands after this list). More information is available on the documentation page on the Intel compilers @ UAntwerp.
    • The Python install now also contains a number of packages that were previously accessed via separate modules:
      • matplotlib, so there is no longer a separate module to load matplotlib.
      • lxml
    • The R install now also contains a selection of the Bioconductor routines, so no separate module is needed to enable the latter.
    • netCDF is now a single module containing all 4 interfaces rather than 4 separate modules that installed each interface in a different directory tree (three of which all relied on the module for the fourth). This should ease the installation of code that uses the netCDF Fortran or one of the C++ interfaces and expects all netCDF libraries to be installed in the same directory.
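    As a concrete illustration of the changes above, a session using the 2017a toolchain could look as follows (a minimal sketch; check with module av which versions are actually installed on your cluster):

        module load intel/2017a            # compilers, MPI and libraries, now including TBB, IPP and DAAL
        module load inteldevtools/2017a    # debugger, Inspector, Advisor, VTune Amplifier and ITAC
        module av netCDF                   # the single netCDF module now provides all four interfaces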

    We will skip the 2017b toolchain as defined by the VSC as we have already upgraded the 2017a toolchain to a more recent update of the Intel 2017 compilers to avoid problems with certain applications. -

    Available toolchains

    There are currently three major toolchains on the UAntwerp clusters:

      -
    • The Intel toolchain, which includes the Intel compilers and tools, matching versions of the GNU compilers, and all software compiled with them.
    • The FOSS toolchain, built out of open-source components. It is mostly used for programs that don’t install with the Intel compilers, or by users who want to do development with Open MPI and other open-source libraries.
      The FOSS toolchain has a number of subtoolchains: Gompi, GCC and GCCcore, and some programs are installed in these subtoolchains because they don’t use the additional components that FOSS offers.
    • The system toolchain (sl6 or centos7), containing programs that only use system libraries or other tools from this toolchain.

    The tables below list the last available module for a given software package and the corresponding version in the 2017a toolchain. Older versions can only be installed on demand with a very good motivation, as older versions of packages also often fail to take advantage of advances in supercomputer architecture and offer lower performance. Packages that have not been used recently will only be installed on demand.

    Several of the packages in the system toolchain are still listed as “on demand” since they require licenses and interaction with their users is needed before we can install them.

    " -755,"","

    Several of the software packages running on the UAntwerp cluster have restrictions in their licenses and cannot be used by all users. If a module does not load, it is very likely that you have no access to the package.

    Access to such packages is managed by UNIX groups. You can request membership to the group, but that membership will only be granted if you are eligible for use of the package.

    ANSYS

    CPMD

    CPMD can be used for free for non-commercial research in education institutions under the CPMD Free License.

    To get access:

    • Fill in the form for downloading CPMD on the CPMD website.
    • You'll see a \"Thank You\" page confirming your submission. Somewhat later - it may take up to a week but usually it is quite fast - you'll receive a mail with download instructions. Please forward that mail to hpc@uantwerpen.be. We then have to check with IBM for confirmation.
    • Apply for membership of the acpmd group via the VSC account management webpage.
    • As soon as we have confirmation from IBM that your license application has been accepted, your membership application for acpmd will be granted and you will be able to use CPMD.

    COMSOL

    FINE/Marine

    FINE/Marine is commercial CFD software from NUMECA International for simulation of flow around ships etc. The license has been granted to the Solar Boat Team as sponsorship from NUMECA and cannot be used by others.

    Gaussian

    To use Gaussian, you should work or study at the University of Antwerp and your research group should contribute to the cost of the license.

    Contact Wouter Herrebout for more information.

    Gurobi

    MATLAB

    We do not encourage the use of Matlab on the cluster as it is neither designed for use on HPC systems (despite a number of toolboxes that support parallel computing) nor efficient.

    Matlab on the UAntwerp clusters can be used by everybody who can legally use Matlab within the UAntwerp Campus Agreement with The Mathworks. You should have access to the modules if you are eligible. If you cannot load the Matlab modules yet think you are allowed to use Matlab under the UAntwerp license, please contact support.

    TurboMole

    VASP

    " -759,"","

    You may notice that leibniz is not always faster than hopper, and this is a trend that we expect to continue for the following clusters also. In the past five years, individual cores did not become much more efficient on an instructions-per-clock-cycle basis. Instead, faster chips were built by including more cores, though at a lower clock speed to stay within the power budget for a socket, and by adding new vector instructions.

    Compared to hopper,

    • The clock speed of each core is a bit lower (2.4 GHz base frequency instead of 2.8 GHz), and this is not compensated for all applications by the slightly higher instructions-per-clock,
    • But there are now 14 cores per socket rather than 10 (so 28 per node rather than 20),
    • And there are some new vector instructions that were not present on hopper (AVX2 with Fused Multiply-Add rather than AVX).

    For programs that manage to use all of this, the peak performance of a node is effectively about twice as high as for a node on hopper. But single core jobs with code that does not use vectorization may very well run slower.
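    To actually benefit from the wider vector units, code has to be recompiled with vectorization for the new instruction set enabled. A minimal sketch with the Intel compilers (assuming the intel/2017a module is loaded; my_prog.c is a hypothetical source file):

        # Explicitly target the AVX2 + FMA instruction set of leibniz ...
        icc -O2 -xCORE-AVX2 -o my_prog my_prog.c
        # ... or simply optimize for the architecture of the machine you compile on
        icc -O2 -xHost -o my_prog my_prog.c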

    Module system

    We use different software for managing modules on leibniz (Lmod instead of TCL-based modules). The new software supports the same commands as the old software, and more. -

      -
    • Lmod does not allow loading multiple modules with the same name, while in the TCL-based modules it was up to the writer of each module file to determine if this was possible or not. As a consequence, while on hopper one could in principle customize the module path by loading multiple hopper modules, this is not the case on leibniz. Instead, we are providing leibniz modules for loading a specific toolchain (e.g., leibniz/2017a), one to load only software compiled against OS libraries (leibniz/centos7) and one to load all supported modules (leibniz/all).
      As we are still experimenting with the setup of the module system, the safest thing to do is to always explicitly load the right version of the leibniz module, as in the example after this list.
    • -
    • An interesting new command is \"module spider Python\" which will show all modules named Python, or with Python as part of their name. Moreover, this search is not case-sensitive so it is a very good way to figure out which module name to use if you are not sure about the capitalization.
    • -
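    For example, setting up the environment on leibniz could look like this (a sketch; the Python module shown is only illustrative, use a version reported by module spider):

        module load leibniz/2017a                # only expose the supported 2017a toolchain modules
        module spider Python                     # case-insensitive search for all modules with Python in the name
        module load Python/3.6.1-intel-2017a     # illustrative module name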

    Job submission

    One important change is that the new version of the operating system (CentOS 7.3, based on Red Hat 7) combined with our job management software allows much better control of the amount of memory that a job uses. Hence we can better protect the cluster against jobs that use more memory than requested. This is particularly important since leibniz does not support swapping on the nodes. This choice was made deliberately, as swapping to hard disk slows down a node to a crawl while SSDs that are robust enough to be used for swapping also cost a lot of money (memory cells on cheap SSDs can only be written a few 100 times, sometimes as little as 150 times). Instead, we increased the amount of memory available to each core. The better protection of jobs against each other may also allow us to consider setting apart some nodes for jobs that cannot fill a node and then allow multiple users on that node, rather than have those nodes used very inefficiently while other users are waiting for resources as is now the case.

    • Torque distinguishes between two kinds of memory:
      • Resident memory, essentially RAM, which is requested through pmem and mem.
      • Virtual memory, which is the total amount of memory space requested, consisting of both resident memory and swap space, is requested through pvmem and vmem.
    • mem and vmem specify memory for the job as a whole. The Torque manual discourages using them for multi-node jobs, though in the current version this works fine in most but not all cases, and evenly distributes the requested memory pool across cores.
    • pmem and pvmem specify memory per core.
    • It is better not to mix pmem/pvmem with mem/vmem as this can lead to confusing situations, though it does work. Torque will generally use the least restrictive of (mem, pmem) and of (vmem, pvmem) respectively for resident and virtual memory.
    • Of course pvmem should not be smaller than pmem, and vmem should not be smaller than mem. Otherwise the job will be refused.
    • We will set small defaults for users who do not specify this (to protect the system), but we have experienced that qsub will hang if a user makes a request that conflicts with the defaults (e.g., if we set a default for pvmem and a user uses a value for pmem which is larger than this value but does not specify pvmem, qsub will hang without producing the error message that one would expect).
    • Our advice for now: use both pmem and pvmem and set both to the same value, as we do not support swapping anyway. An example job script header is given after this list.
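    A minimal sketch of a job script header that follows this advice (the resource values are purely illustrative; adjust them to your own job):

        #!/bin/bash
        #PBS -l nodes=1:ppn=28       # one full leibniz node (28 cores)
        #PBS -l pmem=4gb             # resident memory per core (illustrative value)
        #PBS -l pvmem=4gb            # virtual memory per core, set equal to pmem
        #PBS -l walltime=1:00:00
        cd $PBS_O_WORKDIR
        ./my_program                 # hypothetical executable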

    MPI jobs

    • We are still experiencing problems with Open MPI (FOSS toolchain).
    • With respect to Intel MPI: We are experimenting with a different way to let Intel MPI start programs (though through the same commands as before). The problem with Intel MPI on hopper was that processes on other nodes than the start node did not run under the control of the job management system. As a result, the CPU times and efficiencies computed by Torque and Moab were wrong, cleanup of failed jobs did not always fully work and resource use in general was not properly monitored. However, we are not sure yet that the new mechanism is robust enough, so let us know if large jobs do not start so that we can investigate what happened.
      Technical note: The basic idea is that we let mpirun start processes on other nodes through the Torque job management library and not through ssh, but this is all accomplished through a number of environment variables set in the intel modules.
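    For reference, a minimal Intel MPI job on leibniz could be submitted with a script along these lines (a sketch only; the program name is hypothetical):

        #!/bin/bash
        #PBS -l nodes=2:ppn=28       # two full nodes
        #PBS -l pmem=4gb,pvmem=4gb   # illustrative values, see the memory discussion above
        #PBS -l walltime=1:00:00
        cd $PBS_O_WORKDIR
        module load intel/2017a      # provides the Intel compilers and Intel MPI
        mpirun ./my_mpi_program      # add -np <number of processes> explicitly if needed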
    " -761,"","

    Poster sessions

    1. Computational study of the properties of defects at grain boundaries in CuInSe2
      R. Saniz, J. Bekaert, B. Partoens, and D. Lamoen
      CMT and EMAT groups, Dept. of Physics, U Antwerpen
    2. First-principles study of superconductivity in atomically thin MgB2
      J. Bekaert, B. Partoens, M. V. Milosevic, A. Aperis, P. M. Oppeneer
      CMT group, Dept. of Physics, U Antwerpen & Dept. of Physics and Astronomy, Uppsala University
    3. Molecular Spectroscopy : Where Theory Meets Experiment
      C. Mensch, E. Van De Vondel, Y. Geboes, J. Bogaerts, R. Sgammato, E. De Vos, F. Desmet, C. Johannessen, W. Herrebout
      Molecular Spectroscopy group, Dept. Chemistry, U Antwerpen
    4. Bridging time scales in atomistic simulations: from classical models to density functional theory
      Kristof M. Bal and Erik C. Neyts
      PLASMANT, Department of Chemistry, U Antwerpen
    5. Bimetallic nanoparticles: computational screening for chirality-selective carbon nanotube growth
      Charlotte Vets and Erik C. Neyts
      PLASMANT, Department of Chemistry, U Antwerpen
    6. Ab initio molecular dynamics of aromatic sulfonation with sulfur trioxide reveals its mechanism
      Samuel L.C. Moors, Xavier Deraet, Guy Van Assche, Paul Geerlings, Frank De Proft
      Quantum Chemistry Group, Department of Chemistry, VUB
    7. Acceleration of the Best First Search Algorithm by using predictive analytics
      J.L. Teunissen, F. De Vleeschouwer, F. De Proft
      Quantum Chemistry Group, VUB, Department of Chemistry, VUB
    8. Investigating molecular switching properties of octaphyrins using DFT
      Tatiana Woller, Paul Geerlings, Frank De Proft, Mercedes Alonso
      Quantum Chemistry Group, VUB, Department of Chemistry, VUB
    9. Using the Tier-1 infrastructure for high-resolution climate modelling over Europe and Central Asia
      Lesley De Cruz, Rozemien De Troch, Steven Caluwaerts, Piet Termonia, Olivier Giot, Daan Degrauwe, Geert Smet, Julie Berckmans, Alex Deckmyn, Pieter De Meutter, Luc Gerard, Rafiq Hamdi, Joris Van den Bergh, Michiel Van Ginderachter, Bert Van Schaeybroeck
      Department of Physics and Astronomy, U Gent
    10. Going where the wind blows – Fluid-structure interaction simulations of a wind turbine
      Gilberto Santo, Mathijs Peeters, Wim Van Paepegem, Joris Degroote
      Dept. of Flow, Heat and Combustion Mechanics, U Gent
    11. Towards Crash-Free Drones – A Large-Scale Computational Aerodynamic Optimization
      Jolan Wauters, Joris Degroote, Jan Vierendeels
      Dept. of Flow, Heat and Combustion Mechanics, U Gent
    12. Characterisation of fragment binding to TSLPR using molecular dynamics
      Dries Van Rompaey, Kenneth Verstraete, Frank Peelman, Savvas N. Savvides, Pieter Van Der Veken, Koen Augustyns, Hans De Winter
      Medicinal Chemistry, UAntwerpen and Center for Inflammation Research , VIB-UGent
    13. A hybridized DG method for unsteady flow problems
      Alexander Jaust, Jochen Schütz
      Computational Mathematics (CMAT) group, U Hasselt
    14. HPC-based materials research: From Metal-Organic Frameworks to diamond
      Danny E. P. Vanpoucke, Ken Haenen
      Institute for Materials Research (IMO), UHasselt & IMOMEC, IMEC
    15. Improvements to coupled regional climate model simulations over Antarctica
      Souverijns Niels, Gossart Alexandra, Demuzere Matthias, van Lipzig Nicole
      Dept. of Earth and Environmental Sciences, KU Leuven
    16. Climate modelling of Lake Victoria thunderstorms
      Wim Thiery, Edouard L. Davin, Sonia I. Seneviratne, Kristopher Bedka, Stef Lhermitte, Nicole van Lipzig
      Dept. of Earth and Environmental Sciences, KU Leuven
    17. Improved climate modeling in urban areas in sub Saharan Africa for malaria epidemiological studies
      Oscar Brousse, Nicole Van Lipzig, Matthias Demuzere, Hendrik Wouters, Wim Thiery
      Dept. of Earth and Environmental Sciences, KU Leuven
    18. Adaptive Strategies for Multi-Index Monte Carlo
      Dirk Nuyens, Pieterjan Robbe, Stefan Vandewalle
      NUMA group, Dept. of Computer Science, KU Leuven
    19. SP-Wind: A scalable large-eddy simulation code for simulation and optimization of wind-farm boundary layers
      Wim Munters, Athanasios Vitsas, Dries Allaerts, Ali Emre Yilmaz, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    20. Control Optimization of Wind Turbines and Wind Farms
      Ali Emre Yilmaz, Wim Munters, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    21. Simulations of large wind farms with varying atmospheric complexity using Tier-1 Infrastructure
      Dries Allaerts, Johan Meyers
      Turbulent Flow Simulation and Optimization (TFSO) group, Dept. of Mechanics, KU Leuven
    22. Stability of relativistic, two-component jets
      Dimitrios Millas, Rony Keppens, Zakaria Meliani
      Plasma-astrophysics, Dept. Mathematics, KU Leuven
    23. HPC in Theoretical and Computational Chemistry
      Jeremy Harvey, Eliot Boulanger, Andrea Darù, Milica Feldt, Carlos Martín-Fernández, Ana Sanz Matias, Ewa Szlapa
      Quantum Chemistry and Physical Chemistry Section, Dept. of Chemistry, KU Leuven
    " -765,"","

    The third VSC Users Day was held at the \"Paleis der Academiën\", the seat of the \"Royal Flemish Academy of Belgium for Science and the Arts\", at Hertogstraat 1, 1000 Brussels, on June 2, 2017.

    Program

      -
    • 9u50 : welcome
    • -
    • 10u00: dr. Achim Basermann, German Aerospace Center, High Performance Computing with Aeronautics and Space Applications [slides PDF - 4,9MB]
    • -
    • 11u00: coffee break
    • -
    • 11u30: workshop sessions – part 1 -
        -
      • VSC for starters (by VSC personnel) [slides PDF - 5.3MB]
      • -
      • Profiler and Debugger (by VSC personnel) [slides PDF - 2,2MB]
      • -
      • Programming GPUs (dr. Bart Goossens - dr. Vule Strbac)
      • -
    • -
    • 12u45: lunch
    • -
    • 14u00: dr. Ehsan Moravveji, KU Leuven A Success Story on Tier-1: A Grid of Stellar Models [slides - PDF 11,9MB]
    • -
    • 14u30: ‘1-minute’ poster presentations
    • -
    • 15u00: workshop sessions – part 2 -
        -
      • VSC for starters (by VSC personnel)
      • -
      • Profiler and Debugger (by VSC personnel)
      • -
      • Feedback from Tier-1 Evaluation Committee (dr. Walter Lioen, chairman) [slides - PDF 0.5MB]
      • -
    • -
    • 16u00: coffee and poster session
    • -
    • 17u00: drink
    • -

    Abstracts of workshops -

      -
    • VSC for starters [slides PDF - 5.3MB]
      The workshop provides a smooth introduction to supercomputing for new users. Starting from common concepts in personal computing, the similarities and differences with supercomputing are highlighted and some essential terminology is introduced. It is explained what users can and cannot expect from supercomputing, as well as what is expected from them as users.
    • -
    • Profiler and Debugger [slides PDF - 2,2MB]
      Both profiling and debugging play an important role in the software development process, and are not always appreciated. In this session we will introduce profiling and debugging tools, but the emphasis is on methodology. We will discuss how to detect common performance bottlenecks, and suggest some approaches to tackle them. For debugging, the most important point is avoiding bugs as much as possible.
    • -
    • Programming GPUs -
        -
      • Quasar, a high-level language and a development environment to reduce the complexity of heterogeneous programming of CPUs and GPUs, Prof. dr. Bart Goossens, UGent [slides PDF - 2,1MB]
        In this workshop we present Quasar, a new programming framework that takes care of many common challenges for GPU programming, e.g., parallelization, memory management, load balancing and scheduling. Quasar consists of a high-level programming language with a similar abstraction level as Python or Matlab, making it well suited for rapid prototyping. We highlight some of the automatic parallelization strategies of Quasar and show how high-level code can efficiently be compiled to parallel code that takes advantage of the available CPU and GPU cores, while offering a computational performance that is on a par with a manual low-level C++/CUDA implementation. We explain how multi-GPU systems can be programmed from Quasar and we demonstrate some recent image processing and computer vision results obtained with Quasar.
      • -
      • GPU programming opportunities and challenges: nonlinear finite element analysis, dr. Vule Strbac, KU Leuven [slides PDF - 2,1MB]
        From a computational perspective, finite element analysis manifests substantial internal parallelism. Exposing and exploiting this parallelism using GPUs can yield significant speedups against CPU execution. The details of the mapping between a requested FE scenario and the hardware capabilities of the GPU device greatly affect this resulting speedup. Factors such as: (1) the types of materials present (elasticity), (2) the local memory pool and (3) fp32/fp64 computation impact GPU solution times differently than their CPU counterparts.
        We present results of both simple and complex FE analyses scenarios on a multitude of GPUs and show an objective estimation of general performance. In doing so, we detail the overall opportunities, challenges as well as the limitations of the GPU FE approach. -
      • -
    • -

    Poster sessions -

    An overview of the posters that were presented during the poster session is available here. -

    " -769,"","

    Other pictures of the VSC User Day 2017.

    " -773,"","

    Below is a selection of photos from the user day 2017. A larger set of photos at a higher resolution can be downloaded as a zip file (23MB). -

    " -777,"","

    The UAntwerp clusters have limited features for remote visualization on the login nodes of hopper and the visualization node of leibniz, using a VNC-based remote display technology. On the regular login nodes of hopper there is no acceleration of 3D graphics, but the visualisation node of leibniz is equipped with an NVIDIA M5000 card that, when used properly, will offer accelerated rendering of OpenGL applications. The setup is similar to the setup of the visualization nodes at KU Leuven.

    Using VNC turns out to be more complicated than one would think and things sometimes go wrong. It is a good solution for those who absolutely need a GUI tool or a visualization tool on the cluster rather than on your local desktop; it is not a good solution for those who don't want to invest in learning Linux properly and are only looking for the ease-of-use of a PC. -

    The idea behind the setup

    2D and 3D graphics on Linux

    Graphics (local and remote) on Linux machines is based on the X Window System version 11, X11 for short. This technology is pretty old (1987) and not really up to the task anymore with today's powerful computers, yet so many applications support it that it is still the standard in practice (though there are efforts going on to replace it with Wayland on modern Linux systems).

    X11 applications talk to an X server which draws the commands on your screen. These commands can go over a network, so applications on a remote machine can draw on your local screen. Note also the somewhat confusing terminology: the server is the program that draws on the screen and thus runs on your local system (which for other applications will usually be called the client), while the application is called the client (and in this scenario runs on a computer which you will usually call the server). However, partly due to the way the X11 protocol works and partly because modern applications are very graphics-heavy, the network has become a bottleneck and graphics-heavy applications (e.g., the Matlab GUI) will feel sluggish on all but the fastest network connections.

    X11 is a protocol for 2D graphics only. However, it is extensible. Enter OpenGL, a standard cross-platform API for professional 3D graphics. Even though its importance on Windows and macOS platforms has decreased as Microsoft and Apple both promote their own APIs (DirectX and Metal respectively), it is still very popular for professional applications and in the Linux world. It is supported by X11 servers through the GLX extension (OpenGL for the X Window System). When set up properly, OpenGL commands can be passed to the X server and use any OpenGL graphics accelerator available on the computer running the X server. In principle, if you have an X server with the GLX extension on your desktop, you should be able to run OpenGL programs on the cluster and use the graphics accelerator of your desktop to display the graphics. In practice however, this works well when the application and X server run on the same machine, but the typical OpenGL command stream is too extensive to work well over a network connection and performance will be sluggish.

    Optimizing remote graphics

    The solution offered on the visualization node of leibniz (and in a reduced setting on the login nodes of hopper) consists of two elements to deal with the issues of network bandwidth and, more importantly, network latency. -

    VirtualGL is a technology that redirects OpenGL commands to a 3D graphics accelerator on the computer where the application is running, or to a software rendering library. It then pushes the rendered image to the X server. Instead of a stream of thousands or millions of OpenGL commands, one large image is now passed over the network to the X server, reducing the effect of latency. These images can be large though, but with an additional piece of software on your client, called the VGL client, VirtualGL can send the images in compressed form, which strongly reduces the bandwidth requirements. To use VirtualGL, you have to start the OpenGL application through the vglrun command. That command will set up the application to redirect OpenGL calls to the VirtualGL libraries.

    VirtualGL does not solve the issue of slow 2D rendering caused by network latency, and it also requires the user to set up a VGL client and an X server on the local desktop, which is cumbersome for less experienced users. We solve this problem through VNC (Virtual Network Computing). VNC consists of three components: a server on the computer where your application runs, a client on your desktop, and a standardized protocol for the communication between server and client. The server renders the graphics on the computer on which it runs and sends compressed images to the client. The client of course takes care of keyboard and mouse input and sends this to the server. A VNC server for X applications will in fact emulate an X server. Since the protocol between client and server is pretty standard, most clients will work with most servers, though some combinations of client and server will be more efficient because they may support a more efficient compression technology. Our choice of server is TurboVNC, which is maintained by the same group that also develops VirtualGL and has an advanced implementation of a compression algorithm very well suited for 3D graphics. TurboVNC has clients for Windows, macOS and Linux. However, our experience is that it also works with several other VNC clients (e.g., Apple Remote Desktop), though it may be a bit less efficient as it may not be able to use the best compression strategies.

    The concept of a Window Manager

    When working with Windows or macOS, we're used to seeing a title bar for most windows with buttons to maximize or hide the window, and borders that allow you to resize a window. You'd think this functionality is provided by the X server, but in true UNIX spirit of having separate components for every bit of functionality, this is not the case. On X11, this functionality is provided by the window manager, a separate software package that you start after starting the X server (or that may be started for you automatically by the startup script that is run when starting the X server). The basic window managers from the early days of X11 have evolved into feature-rich desktop environments that do not only offer a window manager, but also a task bar etc. Gnome and KDE are currently the most popular desktop environments (or Unity on Ubuntu, but future editions of Ubuntu will return to Gnome). However, these require a lot of resources and are difficult to install on top of TurboVNC. Examples of very basic old-style window managers are the Tab Window Manager (command twm) and the Motif Window Manager (command mwm). (Both are currently available on the login nodes of hopper.)

    For the remote visualization setup on the UAntwerp clusters, we have chosen to use the Xfce Desktop Environment which is definitely more user-friendly than the rather primitive Tab Window Manager and Motif Window Manager, yet requires less system resources and is easier to set up than the more advanced Gnome and KDE desktops. -

    Prerequisites

    You'll need an SSH client on your desktop that provides port forwarding functionality. We refer to the \"Access and data transfer\" section of the documentation on the user portal for information about SSH clients for various client operating systems. PuTTY (Windows) and OpenSSH (macOS, Linux, UNIX-compatibility environment on Windows) both provide all required functionality.

    Furthermore, you'll need a VNC client, preferably the TurboVNC client. -

    Windows

    We have tested the setup with three different clients: -

      -
    • The TurboVNC client can be downloaded by following the Download link on the TurboVNC web site (which at the moment of writing this documentation takes you to a SourceForge page). Binaries are available for both 32-bit and 64-bit Windows systems. This client is made by the same people as the server we use, so in theory one should expect the fewest problems with this setup.
    • -
    • TigerVNC is a client whose development is supported by the Swedish company Cendio, which makes a remote display server product (ThinLinc) based on TigerVNC. Binaries for 32-bit and 64-bit Windows (vncviewer-*.*.*.exe) can be downloaded by following the link on the GitHub Releases page. These binaries are ready-to-run.
    • -
    • TightVNC is also a popular free VNC implementation. 32-bit and 64-bit Windows installers can be downloaded from the download page on their website. When installing on your PC or laptop, make sure to choose the \"custom install\" and only install the TightVNC Viewer.
    • -

    All three viewers are quite fast and offer good performance, even when run from home over a typical broadband internet connection. TigerVNC seems to be a bit quicker than the other two, while TightVNC doesn't allow you to resize your window. With the other two implementations, when you resize your desktop window, the desktop is also properly resized. -

    macOS

    Here also there are several possible setups:

    • The TurboVNC client can be downloaded from the TurboVNC web site. The macOS client is Java-based. Packages are available for both Apple Java on older versions of OS X and Oracle Java (which you will need to install if it is not yet on your system). We advise using the Oracle Java version, as Java needs frequent security updates and Apple Java is no longer maintained.
    • TigerVNC, a client whose development is supported by the Swedish company Cendio, which makes a remote display server product (ThinLinc) based on TigerVNC, is a native macOS client. At the time of writing (version 1.9.0), it is still only distributed as a 32-bit binary, so you may get warnings on some versions of macOS. However, there already exist 64-bit pre-release builds, so future versions will certainly fully support future macOS versions. Some places report that this client is a lot slower than the TurboVNC one on macOS.
      Binaries are available. Look for the tigervnc-*.dmg files, which, contrary to those for Windows and Linux, only contain the viewer software.
    • A not-so-good alternative is to use the Apple Screen Sharing feature, which is available through the Finder (command-K key combination) or Safari (URL bar) by specifying the server as a URL starting with vnc://. This VNC client is considerably slower than the TurboVNC client, partly because it doesn't support some of the TurboVNC-specific compression algorithms.

    Linux

    RPM and Debian packages for TurboVNC can be downloaded from the TurboVNC web site and are available in some Linux distributions. You can also try another VNC client provided by your Linux distribution at your own risk, as we cannot guarantee that all VNC viewers (even recent ones) work efficiently with TurboVNC.

    How do I run an application with TurboVNC?

    Running an application with TurboVNC requires 3 steps: -

      -
    • Start the VNC server on the cluster
    • -
    • Start the VNC client on your desktop/laptop and connect to the server
    • -
    • Start your application
    • -

    Starting the server

      -
    1. Log on in the regular way to one of the login nodes of hopper or to the visualization node of Leibniz. Note that the latter should only be used for running demanding visualizations that benefit from the 3D acceleration. The node is not meant for those who just want to run some lightweight 2D GUI application, e.g., an editor with a GUI.
    2. -
    3. Load the module vsc-vnc:
      module load vsc-vnc
      This module does not only put the TurboVNC server in the path, but also provides wrapper scripts to start the VNC server with a supported window manager / desktop environment. Try module help vsc-vnc for more info about the specific wrappers.
    4. -
    5. Use your wrapper of choice to start the VNC server. We encourage you to use the one for the Xfce desktop environment:
      vnc-xfce
    6. -
    7. The first time you use VNC, it will ask you to create a password. For security reasons, please use a password that you don't use for anything else. If you have forgotten your password, it can easily be changed with the vncpasswd command and is stored in the file ~/.vnc/passwd in encrypted form. It will also ask you for a viewer-only password. If you don't know what this is, you don't need it.
    8. -
    9. Among other information, the VNC server will show a line similar to:
      Desktop 'TurboVNC: viz1.leibniz:2 (vsc20XXX)' started on display viz1.leibniz:2
      Note the number after TurboVNC:viz1.leibniz, in this case 2. This is the number of your VNC server, and it will in general be the same as the X display number which is the last number on the line. You'll need that number to connect to the VNC server.
    10. -
    11. It is in fact safe though not mandatory to log out now from your SSH session as the VNC server will continue running in the background.
    12. -

    The standard way of starting a VNC server as described in the TurboVNC documentation is by using the vncserver command. However, you should only use this command if you fully understand how it works and what it does. Also, please don't forget to kill the VNC server when you have finished using it as it will not be killed automatically when started through this command (or use the -autokill command line option at startup). The default startup script (xstartup.turbovnc) which will be put in the ~/.vnc directory on first use does not function properly on our systems. We know this and we have no intent to repair this as we prefer to install the vncserver command unmodified from the distribution and provide wrapper scripts instead that use working startup files. -
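    Putting the steps above together, a typical session to start the server looks roughly like this (a sketch using the wrapper scripts described above; the display number you get back will differ):

        ssh vsc20XXX@viz1-leibniz.uantwerpen.be    # or one of the hopper login nodes
        module load vsc-vnc
        vnc-xfce                                   # starts TurboVNC with the Xfce desktop
        # Note the display number in the output, e.g., viz1.leibniz:2.
        # You can now safely log out; the VNC server keeps running in the background.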

    Connecting to the server

      -
    1. In most cases, you'll not be able to connect directly to the TurboVNC server (which runs on port 5900 + the server number, 5902 in the above example) but you will need to create a SSH tunnel to forward traffic to the VNC server. The exact procedure is explained at length in the pages \"Creating a SSH tunnel using PuTTY\" (for Windows) and \"Creating a SSH tunnel using OpenSSH\" (for Linux and macOS).
      You'll need to tunnel port number (5900 + server number) (5902 in the example above) on your local machine to the same port number on the node on which the VNC server is running. You cannot use the generic login names (such as login.hpc.uantwerpen.be) for that, as you may be assigned a different login node than the one you were assigned just minutes ago. Instead, use the full names for the specific nodes, e.g., login1-hopper.uantwerpen.be, login2-leibniz.uantwerpen.be or viz1-leibniz.uantwerpen.be.
      -
        -
      1. In brief: with OpenSSH, your command will look like
        ssh -L 5902:viz1-leibniz.uantwerpen.be:5902 -N vsc20XXX@viz1-leibniz.uantwerpen.be
      2. -
      3. In PuTTY, select \"Connections - SSH - Tunnel\" in the left pane. As \"Source port\", use 5900 + the server number (5902 in our example) and as destination the full name of the node on which the VNC server is running, e.g., viz1-leibniz.uantwerpen.be.
      4. -
    2. -
    3. Once your tunnel is up-and-running, start your VNC client. The procedure depends on the precise client you are using. However in general, the client will ask for the VNC server. That server is localhost:x where x is the number of your VNC server, 2 in the above example. It will then ask you for the password that you have assigned when you first started VNC.
    4. -
    5. If all went well, you will now get a window with the desktop environment that you have chosen when starting the VNC server
    6. -
    7. Do not forget to close your tunnel when you log out from the VNC server. Otherwise the next user might not be able to connect.
    8. -

    Note that the first time that you start a Xfce session with TurboVNC, you'll see a panel \"Welcome to the first start of the panel\". Please select \"Use default config\" as otherwise you get a very empty desktop. -

    Starting an application

      -
    1. Open a terminal window (if one was not already created when you started your session).
      In the default Xfce environment, you can open a terminal by selecting \"Terminal Emulator\" in the \"Applications\" menu in the top left. The first time it will let you choose between selected terminal applications.
    2. -
    3. Load the modules that are required to start your application of choice.
    4. -
    5. 2D applications or applications that use a software renderer for 3D start as usual. However, to start an application using the hardware-accelerated OpenGL, you'll need to start it through vglrun. Usually adding vglrun at the start of the command line is sufficient.
      This however doesn't work with all applications. Some applications require a special setup. -
        -
      1. Matlab: start matlab with the -nosoftwareopengl option to enable accelerated OpenGL:
        vglrun matlab -nosoftwareopengl
        The Matlab command opengl info will then show that you are indeed using the GPU.
      2. -
    6. -
    7. When you've finished, don't forget to log out (when you use one of our wrapper scripts) or kill the VNC server otherwise (using vncserver -kill :x with x the number of the server).
    8. -

    Note: For a quick test of your setup, enter -

    vglrun glxinfo
    vglrun glxgears

    The first command will print some information about the OpenGL functionality that is supported. The second command will display a set of rotating gears. Don't be fooled if they appear to stand still but look at the \"frames per second\" printed in the terminal window. -

    Common problems

      -
    • Authentication fails when connecting to the server: This happens occasionally when switching between different versions of TurboVNC. The easiest solution is to simply kill the VNC server using vncserver -kill :x (with x the display number), set a new VNC password using vncpasswd and start over again.
    • -
    • Xfce doesn't show the task bar at the top of the screen: This too happens sometimes when switching between versions of Xfce4, or you may have screwed up your configuration in another way. Remove the .config/xfce-centos7 directory (rm -r .config/xfce-centos7) or the .config/xfce-sl6 directory, depending on whether you are working on a CentOS 7 system (leibniz currently) or a Scientific Linux 6 system (hopper currently), kill the VNC server and start again.
    • -

    Links

    Components used in the UAntwerp setup

    Related technologies

    " -779,"","

    Leibniz has one compute node equipped with a Xeon Phi coprocessor from the Knights Landing generation (the first generation with support for the AVX-512 instruction set). For cost reasons we have opted for the PCIe coprocessor model rather than an independent node based on that processor. The downside is the lower memory capacity directly available to the Xeon Phi processor though.

    The goals for the system are: -

      -
    • Having a test device for AVX-512 code as it was too early to purchase Sky Lake Xeon CPUs.
    • -
    • Assessing the performance of the Xeon Phi compared to regular compute nodes to determine whether it is interesting to further invest in this technology for a later cluster or cluster update.
    • -

    The system is set up in such a way that once you have access to the Xeon Phi node, you can also log on to the Xeon Phi card itself and use it as an independent system. Your regular VSC directories will be mounted (at least for UAntwerp users, others on request). As such you can also test code to run on independent Xeon Phi systems, the kind of setup that Intel is currently promoting.

    The module system is not yet implemented on the Xeon Phi coprocessor, but modules do work on the host. It does imply though that some setup may be required when running native programs on the Xeon Phi.

    Getting access

    Contact the UAntwerp support team to get access to the Xeon Phi node.

    Users of the Xeon Phi node are expected to report back on their experiences. We are most interested in users who can also compare with running on regular nodes as we will use this information for future purchase decisions.

    Currently the node is not yet in the job system; you can log on manually to the node but need to check that no one else is using it.

    Compiling for the Xeon Phi

    We currently support compiling code for the Xeon Phi with the Intel compilers included in the 2017a and later toolchains (i.e., Intel compiler version 17 and higher). -

    Compared to the earlier Knights Corner based Xeon Phi system installed in the Tier-2 infrastructure at the KU Leuven, there are a number of changes. All come down to the fact that the Knights Landing Xeon Phi has much more in common with the regular Intel CPUs than was the case for the earlier generation. -

      -
    • Don't use the -mmic compiler option to compile code for the Xeon Phi. This option generates code for the Knights Corner instruction set which is not compatible with the Knights Landing processors. Instead,
      • use -xMIC-AVX512 to compile code that runs natively on the Xeon Phi,
      • use -qoffload-arch:mic-avx512 (in combination with -xHost) for programs that run on the host but offload sections to the Xeon Phi.
      In most cases you'll also want to use -qopenmp to enable OpenMP, the primary programming model for the Xeon Phi. (Example compile commands are given after this list.)
    • Similarly, environment variables that start with MIC are for KNC only. KNL uses the same libraries as regular x86-64 code.
    • Mind the meaning of the __MIC__ preprocessor macro in old Xeon Phi code. It is set when compiling for the KNC generation cards, but some code may use it wrongly for conditional compilation of parts of offloaded code, which really should have been done through __TARGET_ARCH_MIC, which works for both KNC and KNL. For conditional compilation of code for KNL in both offload and native routines, one should use the __AVX512F__ feature macro.
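    As an illustration of the options above, compiling a hypothetical OpenMP source file my_prog.c with the Intel compilers from the 2017a toolchain could look like this (a sketch only):

        module load intel/2017a
        # Native execution on the Xeon Phi (Knights Landing):
        icc -qopenmp -xMIC-AVX512 -o my_prog.knl my_prog.c
        # Host executable that offloads marked sections to the Xeon Phi:
        icc -qopenmp -xHost -qoffload-arch:mic-avx512 -o my_prog.offload my_prog.c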

    Running applications on the Xeon Phi

    • Programs that use offloading are started in the same way as regular host programs. Nothing special needs to be done. The offloaded code runs on the coprocessor under the userid micuser.
    • Simple native programs can be started from the host using the micnativeloadex command followed by the name of the executable and other arguments. The micnativeloadex command will look up all shared libraries used by the executable and make sure that they are uploaded to the Xeon Phi. To find the libraries, it uses the environment variable SINK_LD_LIBRARY_PATH. For programs that only rely on a compiler module, our compiler modules take care of the proper definition of this variable. Your program will run on the coprocessor under a special userid, micuser, which also implies that you cannot access your own files! (See the example after this list.)
      According to the Xeon Phi manuals, certain requests are sent automatically to the host, but it is not clear at the moment what this implies.
    • The second way to start native programs is to log on to the Xeon Phi using ssh (ssh mic0) and work the way you would on a regular cluster node. You will see the same directories that you also see on the regular Xeon Phi node (minus the /small file system at the moment) and will have access to the same data in the same way.
      The module system has not yet been implemented on the Xeon Phi.
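    A quick sketch of both ways to launch a natively compiled binary (my_prog.knl is the hypothetical executable from the compile example above):

        # From the host, letting micnativeloadex upload the binary and its shared libraries:
        micnativeloadex ./my_prog.knl
        # Or log on to the coprocessor and run it as on a regular node (no module system there yet):
        ssh mic0
        ./my_prog.knl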
    " -781,"","

    Leibniz has two compute nodes each equipped with two NVIDIA Tesla P100 GPU compute cards, the most powerful cards available at the time of installation of the system. We run the regular NVIDIA software stack on those systems -

    The main goal of the system is to assess the performance of GPUs for applications used by our researchers. We want to learn for which applications GPU computing is economically viable. Users should realise that these nodes carry three times the cost of a regular compute node and might also be shorter-lived (in the past, some NVIDIA GPUs have shown to be pretty fragile). So these nodes are only interesting, and should only be used, for applications that run three times faster than a regular CPU-based equivalent.

    As such we offer precedence to users who want to work with us towards this goal and either develop high-quality GPU software or are willing to benchmark their application on GPU and regular CPUs. -

    Getting access

    Contact the UAntwerp support team to get access to the GPU compute nodes.

    Users of the GPU compute nodes are expected to report back on their experiences. We are most interested in users who can also compare with running on regular nodes as we will use this information for future purchase decisions. -

    Currently the nodes are not yet integrated in the job system; you can log on manually to a node but need to check that no one else is using it.

    Monitoring GPU nodes

    Monitoring of CPU use by jobs running on the GPU nodes can be done in the same way as for regular compute nodes. -

    One useful command to monitor the use of the GPUs is nvidia-smi. It will show information on both GPUs in the GPU node, and among others lets you easily verify if the GPUs are used by the job. -
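    For example, to check whether your job is actually using the GPUs, you could log on to the GPU node on which it runs and inspect the nvidia-smi output (a sketch; the node name is illustrative):

        ssh <gpu-node>            # the GPU node on which your job is running
        nvidia-smi                # one-off overview of both GPUs and the processes using them
        watch -n 5 nvidia-smi     # refresh that overview every 5 seconds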

    Software on the GPU

    Software is installed on demand. As these systems are new to us also, we do expect some collaboration of the user to get software running on the GPUs. -

    Package - Module - Description -
    CP2K - CP2K/5.1-intel-2017a-bare-GPU-noMPI - GPU-accelerated version of CP2K. The -GPU-noMPI-versions are ssmp binaries without support for MPI, so they can only be used on a single GPU node. The binaries are compiled with equivalent options to the corresponding -bare-multiver modules for CPU-only computations. -
    CUDA - CUDA/8.0.61
    CUDA/9.0.176
    CUDA/9.1.85 -
    Various versions of the CUDA development kit -
    cuDNN - cuDNN/6.0-CUDA-8.0.61
    cuDNN/7.0.5-CUDA-8.0.61
    cuDNN/7.0.5-CUDA-9.0.176
    cuDNN/7.0.5-CUDA-9.1.85
    -
    The CUDA Deep Neural Network library, versions 6.0 and 7.0, both installed from standard NVIDIA tarballs but in the directory structure of our module system. -
    GROMACS - GROMACS/2016.4-foss-2017a-GPU-noMPI
    GROMACS/2016.4-intel-2017a-GPU-noMPI
    -
    GROMACS with GPU acceleration. The -GPU-noMPI-versions are ssmp binaries without support for MPI, so they can only be used on a single GPU node. -
    Keras - Keras/2.1.3-intel-2017c-GPU-Python-3.6.3 - Keras with TensorFlow as the backend (1.4 for Keras 2.1.3), using the GPU-accelerated version of TensorFlow.
    For comparison purposes there is an identical version using the CPU-only version of TensorFlow 1.4. -
    NAMD - - Work in progress -
    TensorFlow - Tensorflow/1.3.0-intel-2017a-GPU-Python-3.6.1
    Tensorflow/1.4.0-intel-2017c-GPU-Python-3.6.3
    -
    GPU versions of Tensorflow 1.3 and 1.4. Google-provided binaries were used for the installation.
    There are CPU-only equivalents of those modules for comparison. The 1.3 version was installed from the standard PyPi wheel which is not well optimized for modern processors, the 1.4 version was installed from a Python wheel compiled by Intel engineers and should be well-optimized for all our systems. -
    " -783,"","

    HPC Tutorial

    -

    This is our standard introduction to the VSC HPC systems. It is complementary to the information in this user portal, the latter being more the reference manual. -

    -

    We have separate versions depending on your home institution and the operating system from which you access the cluster: -

    - Windows - macOS - Linux -
    UAntwerpen - [PDF] - [PDF] - [PDF] -
    VUB - [PDF] - [PDF] - [PDF] -
    UGent - [PDF] - [PDF] - [PDF] -
    KU Leuven/UHasselt - [PDF] - [PDF] - [PDF] -
    " -785,"","

    Important changes

    The 2017a toolchain is the toolchain that will be carried forward to Leibniz and will be available after the operating system upgrade of Hopper. Hence it is meant to be as complete as possible. We will only make a limited number of programs available in the 2016b toolchain (basically those that show much better performance with the older compiler or that do not compile with the compilers in the 2017a toolchains).

    Important changes in the 2017a toolchain:

      -
    • The Intel compilers have been installed in a single directory tree, much the way Intel intends the install to be done. The intel/2017a module loads fewer submodules and instead sets all required variables. The install now also contains the Thread Building Blocks (TBB), Integrated Performance Primitives (IPP) and Data Analytics Acceleration Library (DAAL). All developer tools (debugger, Inspector, Advisor, VTune Amplifier, ITAC) are enabled by loading the inteldevtools/2017a module rather than independent modules for each tool. More information is available on the documentation page on the Intel compilers @ UAntwerp.
    • The Python install now also contains a number of packages that were previously accessed via separate modules:
      • matplotlib, so there is no longer a separate module to load matplotlib.
      • lxml
    • The R install now also contains a selection of the Bioconductor routines, so no separate module is needed to enable the latter.
    • netCDF is now a single module containing all 4 interfaces rather than 4 separate modules that installed each interface in a different directory tree (three of which all relied on the module for the fourth). This should ease the installation of code that uses the netCDF Fortran or one of the C++ interfaces and expects all netCDF libraries to be installed in the same directory.

    We will skip the 2017b toolchain as defined by the VSC as we have already upgraded the 2017a toolchain to a more recent update of the Intel 2017 compilers to avoid problems with certain applications. -

    Available toolchains

    There are currently three major toolchains on the UAntwerp clusters:

      -
    • The Intel toolchain, which includes the Intel compilers and tools, matching versions of the GNU compilers, and all software compiled with them.
    • The FOSS toolchain, built out of open-source components. It is mostly used for programs that don’t install with the Intel compilers, or by users who want to do development with Open MPI and other open-source libraries.
      The FOSS toolchain has a number of subtoolchains: Gompi, GCC and GCCcore, and some programs are installed in these subtoolchains because they don’t use the additional components that FOSS offers.
    • The system toolchain (sl6 or centos7), containing programs that only use system libraries or other tools from this toolchain.

    The tables below list the last available module for a given software package and the corresponding version in the 2017a toolchain. Older versions can only be installed on demand with a very good motivation, as older versions of packages also often fail to take advantage of advances in supercomputer architecture and offer lower performance. Packages that have not been used recently will only be installed on demand.

    Several of the packages in the system toolchain are still listed as “on demand” since they require licenses and interaction with their users is needed before we can install them.

    Intel toolchain

- Latest pre-2017a - - 2017a - - Comments -
    - ABINIT/8.0.7-intel-2016a - - - Work in progress -
    - Advisor/2016_update4 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Not yet available on Leibniz due to compile problems. -
    - augustus/3.0.1-intel-2015a - / - - Installed on - demand - -
    - Autoconf/2.69-intel-2016b - - Autoconf/2.69 - - Moved to the system - toolchain -
    - AutoDock/1.1.2 - - AutoDock_Vina/1.1.2 - - Naming modified - to the standard naming used in our build tools -
    - Automake/1.15-intel-2016b - - Automake/1.15 - - Moved to the system - toolchain -
    - Autotools/20150215-intel-2016b - - Autotools/2016123 - - Moved to the system - toolchain -
    / - BAli-Phy/2.3.8-intel-2017a-OpenMP
    BAli-Phy/2.3.8-intel-2017a-MPI
    -
    By Ben Redelings, documentation on the software web site. This package supports either OpenMP or MPI, but not both together in a hybrid mode. -
    - beagle-lib/2.1.2-intel-2016b - - beagle-lib/2.1.2-intel-2017a - -
    - Beast/2.4.4-intel-2016b - - Beast/2.4.5-intel-2017a - - Version with beagle-lib -
    - Biopython/1.68-intel-2016b-Python-2.7.12 - - Biopython/1.68-intel-2017a-Python-2.7.13
    Biopython/1.68-intel-2017a-Python-3.6.1 -
- Builds for Python 2.7 and Python 3.6 -
    - bismark/0.13.1-intel-2015a - - Bismark/0.17.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - Bison/3.0.4-intel-2016b - - Bison/3.0.4-intel-2017a - -
    - BLAST+/2.6.0-intel-2016b-Python-2.7.12 - - BLAST+/2.6.0-intel-2017a-Python-2.7.13 - -
    - Boost/1.63.0-intel-2016b-Python-2.7.12 - - Boost/1.63.0-intel-2017a-Python-2.7.13 - -
    - Bowtie2/2.2.9-intel-2016a - - Bowtie2/2.2.9-intel-2017a - -
    - byacc/20160606-intel-2016b - - byacc/20170201 - - Moved to the system - toolchain -
    - bzip2/1.0.6-intel-2016b - - bzip2/1.0.6-intel-2017a - -
    - cairo/1.15.2-intel-2016b - - cairo/1.15.4-intel-2017a - -
    - CASINO/2.12.1-intel-2015a - / - - Installed - on demand - -
    - CASM/0.2.0-Python-2.7.12 - / - - Installed on demand, compiler problems. - -
    / - CGAL/4.9-intel-2017a-forOpenFOAM - Installed without the components that require Qt and/or OpenGL. -
    - CMake/3.5.2-intel-2016b - - CMake/3.7.2-intel-2017a - -
    - CP2K/4.1-intel-2016b - CP2K/4.1-intel-2017a-bare
    CP2K/4.1-intel-2017a -

    -
    - CPMD/4.1-intel-2016b - / - - Installed on - demand - -
    - cURL/7.49.1-intel-2016b - - cURL/7.53.1-intel-2017a - -
    / - DIAMOND/0.9.12-intel-2017a - -
    - DLCpar/1.0-intel-2016b-Python-2.7.12 - - DLCpar/1.0-intel-2017a-Python-2.7.13
    DLCpar/1.0-intel-2017a-Python-3.6.1
    -
- Installed for Python 2.7.13 and Python 3.6.1 -
    - Doxygen/1.8.11-intel-2016b - - Doxygen/1.8.13 - - Moved to the - system toolchain -
    - DSSP/2.2.1-intel-2016a - DSSP/2.2.1-intel-2017a -
    -
    - Eigen/3.2.9-intel-2016b - - Eigen/3.3.3-intel-2017a - -
    - elk/3.3.17-intel-2016a - - Elk/4.0.15-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - exonerate/2.2.0-intel-2015a - - Exonerate/2.4.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - expat/2.2.0-intel-2016b - - expat/2.2.0-intel-2017a - -
    / - FastME/2.1.5.1-intel-2017a - -
    - FFTW/3.3.4-intel-2015a - - FFTW/3.3.6-intel-2017a - - There is also a - FFTW-compatible interface in intel/2017a, but it does not work for all - packages. -
    - - file/5.30-intel-2017a - -
    - fixesproto/5.0-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - flex/2.6.0-intel-2016b - - flex/2.6.3-intel-2017a - -
    - fontconfig/2.12.1-intel-2016b - - fontconfig/2.12.1-intel-2017a - -
    - freeglut/3.0.0-intel-2016a - - freeglut/3.0.0-intel-2017a - - Not yet - operational on CentOS 7 - -
    - freetype/2.7-intel-2016b - - freetype/2.7.1-intel-2017a - -
    - FSL/5.0.9-intel-2016a - / - - Installed on - demand - -
    - GAMESS-US/20141205-R1-intel-2015a - / - - Installed on - demand - -
    - gc/7.4.4-intel-2016b - - gc/7.6.0-intel-2017a - - Installed on - demand - -
    - GDAL/2.1.0-intel-2016b - - GDAL/2.1.3-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - genometools/1.5.4-intel-2015a - - GenomeTools/1.5.9-intel-2017a - -
    - GEOS/3.5.0-intel-2015a-Python-2.7.9 - - GEOS/3.6.1-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - gettext/0.19.8-intel-2016b - - gettext/0.19.8.1-intel-2017a - -
    - GLib/2.48.1-intel-2016b - - GLib/2.49.7-intel-2017a - -
    - GMAP-GSNAP/2014-12-25-intel-2015a - - GMAP-GSNAP/2017-03-17-intel-2017a - -
    - GMP/6.1.1-intel-2016b - - GMP/6.1.2-intel-2017a - -
    - gnuplot/5.0.0-intel-2015a - - gnuplot/5.0.6-intel-2017a - -
    - GObject-Introspection/1.44.0-intel-2015a - - GObject-Introspection/1.49.2-intel-2017a - -
    - GROMACS/5.1.2-intel-2016a-hybrid - - GROMACS/5.1.2-intel-2017a-hybrid
    GROMACS/2016.3-intel-2017a
    -
    -
    - GSL/2.3-intel-2016b - - GSL/2.3-intel-2017a - -
    / - gtest/1.8.0-intel-2017a - Google C++ Testing Framework -
    - Guile/1.8.8-intel-2016b - - Guile/1.8.8-intel-2017a - -
    - Guile/2.0.11-intel-2016b - - Guile/2.2.0-intel-2017a - -
    - hanythingondemand/3.2.0-intel-2016b-Python-2.7.12 - - hanythingondemand/3.2.0-intel-2017a-Python-2.7.13 - -
    - / - - HarfBuzz/1.3.1-intel-2017a - -
    - HDF5/1.8.17-intel-2016b - - HDF5/1.8.18-intel-2017a
    HDF5/1.8.18-intel-2017a-noMPI -
    HDF5 with and without MPI-support. -
    / - HISAT2/2.0.5-intel-2017a - -
    - HTSeq/0.6.1p1-intel-2016a-Python-2.7.11 - - HTSeq/0.7.2-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - icc/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - iccifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - ifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - imkl/11.3.3.210-iimpi-2016b - - intel/2017a - - Intel compiler - components in a single module. -
    - impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - inputproto/2.3.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - Inspector/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - ipp/8.2.1.133 - - intel/2017a - - Intel compiler - components in a single module. -
    - itac/9.0.2.045 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - / - - JasPer/2.0.12-intel-2017a - -
    - Julia/0.6.0-intel-2017a-Python-2.7.13 - Julia, command line version (so without the Juno IDE). -
    - kbproto/1.0.7-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
- kwant/1.2.2-intel-2016a-Python-3.5.1 - kwant/1.2.2-intel-2017a-Python-3.6.1 - Built with single-threaded libraries as advised in the documentation, which implies that kwant is not exactly an HPC program. -
    - LAMMPS/14May16-intel-2016b - - LAMMPS/31Mar2017-intel-2017a - -
    - - libcerf/1.5-intel-2017a - -
    - libffi/3.2.1-intel-2016b - - libffi/3.2.1-intel-2017a - -
    - - libgd/2.2.4-intel-2017a - -
    - Libint/1.1.6-intel-2016b - - Libint/1.1.6-intel-2017a
    Libint/1.1.6-intel-2017a-CP2K -
    -
    - libint2/2.0.3-intel-2015a - / - - Installed on - demand. - -
    - libjpeg-turbo/1.5.0-intel-2016b - - libjpeg-turbo/1.5.1-intel-2017a - -
    - libmatheval/1.1.11-intel-2016b - - libmatheval/1.1.11-intel-2017a - -
    - libpng/1.6.26-intel-2016b - - libpng/1.6.28-intel-2017a - -
    - libpthread-stubs/0.3-intel-2016b - / - Installed on demand. -
    - libreadline/6.3-intel-2016b - - libreadline/7.0-intel-2017a - -
    - LibTIFF/4.0.6-intel-2016b - - LibTIFF/4.0.7-intel-2017a - -
    - libtool/2.4.6-intel-2016b - - libtool/2.4.6 - - Moved to the - system toolchain -
    - libunistring/0.9.6-intel-2016b - - libunistring/0.9.7-intel-2017a - -
    - libX11/1.6.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXau/1.0.8-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxc/2.2.3-intel-2016b - - libxc/3.0.0-intel-2017a - -
    - libxcb/1.12-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXdmcp/1.1.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXext/1.3.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXfixes/5.0.1-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXi/1.7.6-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxml2/2.9.4-intel-2016b - - libxml2/2.9.4-intel-2017a - -
    - libXrender/0.9.9-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxslt/1.1.28-intel-2016a-Python-3.5.1 - - libxslt/1.1.29-intel-2017a - -
    - libxsmm/1.6.4-intel-2016b - - libxsmm/1.7.1-intel-2017a
    libxsmm/1.8-intel-2017a -
    -
    - libyaml/0.1.6-intel-2016a - / - Installed on demand -
- LLVM/3.9.1-intel-2017a - LLVM compiler backend with libLLVM.so. -
    - lxml/3.5.0-intel-2016a-Python-3.5.1 - - Python/2.7.13-intel-2017a - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 2.7 and 3.6 modules. -
    - M4/1.4.17-intel-2016b - - M4/1.4.18 - - Moved to the - system toolchain -
    / - MAFFT/7.312-intel-2017a-with-extensions - -
    - MAKER-P/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - MAKER-P-mpi/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - matplotlib/1.5.3-intel-2016b-Python-2.7.12 - - Python/2.7.13-intel-2017a
    Python/3.6.1-intel-2017a
    -
    - Integrated in - the standard Python 2.7 and 3.6 modules -
    - MCL/14.137-intel-2016b - - MCL/14.137-intel-2017a - -
    - mdust/1.0-intel-2015a - - mdust/1.0-intel-2017a - -
    - METIS/5.1.0-intel-2016a - - METIS/5.1.0-intel-2017a - -
    - MITE_Hunter/11-2011-intel-2015a - - / - - Installed on - demand. - -
    - molmod/1.1-intel-2016b-Python-2.7.12 - molmod/1.1-intel-2017a-Python-2.7.13 - - Work - in progress, compile problems with newer compilers. - -
    - Mono/4.6.2.7-intel-2016b - - Mono/4.8.0.495-intel-2017a - -
    - Mothur/1.34.4-intel-2015a - / - Installed on demand -
    - MUMPS/5.0.1-intel-2016a-serial
    MUMPS/5.0.0-intel-2015a-parmetis
    -
    - MUMPS-5.1.1-intel-2017a-openmp-noMPI
    MUMPS-5.1.1-intel-2017a-openmp-MPI
    MUMPS-5.1.1-intel-2017a-noOpenMP-noMPI
    -
    -
    - MUSCLE/3.8.31-intel-2015a - - MUSCLE/3.8.31-intel-2017a - -
- NASM/2.12.02-intel-2016b - - NASM/2.12.02 - - Moved to the system toolchain -
    - - ncbi-vdb/2.8.2-intel-2017a - -
    - ncurses/6.0-intel-2016b - - ncurses/6.0-intel-2017a - -
    - NEURON/7.4-intel-2017a - Yale NEURON code -
    - netaddr/0.7.14-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - netCDF/4.4.1-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netCDF-Fortran/4.4.4-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netifaces/0.10.4-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - - NGS/1.3.0 - -
    - numpy/1.9.2-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - numpy/1.10.4-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module -
    - NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.10 - - NWChem/6.6.r27746-intel-2017a-Python-2.7.13 - - On demand on Hopper. -
/ - OpenFOAM/4.1-intel-2017a - Installed without the components that require OpenGL and/or Qt (which should only be needed for postprocessing). -
    - OpenMX/3.8.1-intel-2016b - - OpenMX/3.8.3-intel-2017a - -
    / - OrthoFinder/1.1.10-intel-2017a - -
    - / - - Pango/1.40.4-intel-2017a - -
    - ParMETIS/4.0.3-intel-2015b - - ParMETIS/4.0.3-intel-2017a - -
    - pbs-drmaa/1.0.18-intel-2015a - / - Installed on demand -
    - / - - pbs_PRISMS/1.0.1-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by CASM -
    - pbs_python/4.6.0-intel-2016b-Python-2.7.12 - - pbs_python/4.6.0-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by hanythingondemand -
    - PCRE/8.38-intel-2016b - - PCRE/8.40-intel-2017a - -
    - Perl/5.20.1-intel-2015a - - Perl/5.24.1-intel-2017a - -
    - pixman/0.34.0-intel-2016b - - pixman/0.34.0-intel-2017a - -
    - pkg-config/0.29.1-intel-2016b - - pkg-config/0.29.1 - - Moved to the - system toolchain -
    - PLUMED/2.3.0-intel-2016b - - PLUMED/2.3.0-intel-2017a - -
    - PROJ/4.9.2-intel-2016b - - PROJ/4.9.3-intel-2017a - -
    / - protobuf/3.4.0-intel-2017a - Google Protocol Buffers -
    - Pysam/0.9.1.4-intel-2016a-Python-2.7.11 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module. Also load SAMtools to use. -
    - Pysam/0.9.1.2-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. Also load SAMtools to use. -
    - Python/2.7.12-intel-2016b - - Python/2.7.13-intel-2017a - -
    - Python/3.5.1-intel-2016a - - Python/3.6.1-intel-2017a - -
    - QuantumESPRESSO/5.2.1-intel-2015b-hybrid - QuantumESPRESSO/6.1-intel-2017a - - Work in progress. -
    - R/3.3.1-intel-2016b - - R/3.3.3-intel-2017a - -
    - RAxML/8.2.9-intel-2016b-hybrid-avx - RAxML/8.2.10-intel-2017a-hybrid - We suggest users try RAxML-ng (still beta) which is supposedly much faster and better adapted to new architectures and can be installed on demand. -
    / - RAxML-NG/0.4.1-intel-2017a-pthreads
    - RAxML-NG/0.4.1-intel-2017a-hybrid -
    RAxML Next Generation beta, compiled for shared memory (pthreads) and hybrid -distributed-shared memory (hybrid, uses MPI and pthreads). -
    - R-bundle-Bioconductor/3.3-intel-2016b-R-3.3.1 - - R/3.3.3-intel-2017a - - Integrated in - the standard R module. -
    - renderproto/0.11.1-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - RepeatMasker/4.0.5-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - RMBlast/2.2.28-intel-2015a-Python-2.7.9 - / - - Installed - on demand; compiler problems. - -
    - SAMtools/0.1.19-intel-2015a - - SAMtools/1.4-intel-2017a - -
    - scikit-umfpack/0.2.1-intel-2015b-Python-2.7.10 - / - Installed on demand -
    - scikit-umfpack/0.2.1-intel-2016a-Python-3.5.1 - scikit-umfpack/0.2.3-intel-2017a-Python-3.6.1 - -
    - scipy/0.15.1-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - scipy/0.16.1-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. -
    - SCons/2.5.1-intel-2016b-Python-2.7.12 - - SCons/2.5.1-intel-2017a-Python-2.7.13 - - On demand on - CentOS 7; also in the system toolchain. - -
    - SCOTCH/6.0.4-intel-2016a - - SCOTCH/6.0.4-intel-2017a - -
    - Siesta/3.2-pl5-intel-2015a - - Siesta/4.0-intel-2017a - -
    - SNAP/2013-11-29-intel-2015a - / - - Installed on - demand - -
    - spglib/1.7.4-intel-2016a - / - Installed on demand -
    - SQLite/3.13.0-intel-2016b - - SQLite/3.17.0-intel-2017a - -
    - SuiteSparse/4.4.5-intel-2015b-ParMETIS-4.0.3 - SuiteSparse/4.5.5-intel-2015b-ParMETIS-4.0.3 - -
    - SuiteSparse/4.4.5-intel-2016a-METIS-5.1.0 - SuiteSparse/4.4.5-intel-2017a-METIS-5.1.0
    SuiteSparse/4.5.5-intel-2017a-METIS-5.1.0
    -
    Older version as it is known to be compatible with our Python packages. -
    - SWIG/3.0.7-intel-2015b-Python-2.7.10 - - SWIG/3.0.12-intel-2017a-Python-2.7.13 - -
    - SWIG/3.0.8-intel-2016a-Python-3.5.1 - - SWIG/3.0.12-intel-2017a-Python-3.6.1 - -
    - Szip/2.1-intel-2016b - - Szip/2.1.1-intel-2017a - -
    - tbb/4.3.2.135 - - intel/2017a - - Intel compiler - components in a single module. -
    - Tcl/8.6.5-intel-2016b - - Tcl/8.6.6-intel-2017a - -
    - TELEMAC/v7p2r0-intel-2016b - - Work in progress. -
    - TINKER/7.1.3-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - Tk/8.6.5-intel-2016b - - Tk/8.6.6-intel-2017a - -
- TopHat/2.1.1-intel-2016a - / - - TopHat is no longer developed; its developers advise switching to HISAT2, which is more accurate and more efficient. TopHat does not compile with the intel/2017a compilers. -
    VASP - VASP/5.4.4-intel-2016b
    VASP/5.4.4-intel-2016b-vtst-173 -
    VASP has not been installed in the 2017a toolchain due to performance regressions and occasional run time errors with the Intel 2017 compilers and hence has been made available in the intel/2016b toolchain. -
    - Voro++/0.4.6-intel-2016b - - Voro++/0.4.6-intel-2017a - -
    - vsc-base/2.5.1-intel-2016b-Python-2.7.12 - - / - -
    - vsc-install/0.10.11-intel-2016b-Python-2.7.12 - - vsc-install/0.10.25-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - vsc-mympirun/3.4.3-intel-2016b-Python-2.7.12 - - vsc-mympirun/3.4.3-intel-2017a-Python-2.7.13 - -
    - VTune/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - worker/1.5.1-intel-2015a - - worker-1.6.7-intel-2017a - -
    - X11/20160819-intel-2016b - - X11/20170129-intel-2017a - -
    - xcb-proto/1.12 - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xextproto/7.3.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xorg-macros/1.19.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xproto/7.0.29-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xtrans/1.3.5-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - XZ/5.2.2-intel-2016b - - XZ/5.2.3-intel-2017a - -
    - zlib/1.2.8-intel-2016b - - zlib/1.2.11-intel-2017a - -

    Foss toolchain

    - Latest pre-2017a - - 2017a - - Comments -
    - ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Moved to the Intel toolchain.
    -
    - ATLAS/3.10.2-foss-2015a-LAPACK-3.4.2 - - - Installed on - demand - -
    - CMake/3.5.2-foss-2016b - - CMake/3.7.2-foss-2017a - -
    - Cufflinks/2.2.1-foss-2015a - - - Installed - on demand - -
    - cURL/7.41.0-foss-2015a - - - Installed - on demand - -
    - Cython/0.22.1-foss-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated into - the standard Python module for the intel toolchains -
    - FFTW/3.3.4-gompi-2016b - - FFTW/3.3.6-gompi-2017a - -
    - GSL/2.1-foss-2015b - - - Installed - on demand - -
    - HDF5/1.8.14-foss-2015a - - - Installed - on demand - -
    - libpng/1.6.16-foss-2015a - - - Installed - on demand - -
    - libreadline/6.3-foss-2015a - - - Installed - on demand - -
    - makedepend/1.0.5-foss-2015a - - -
    - MaSuRCA/2.3.2-foss-2015a - - - Installed - on demand - -
    - ncurses/6.0-foss-2016b - - - Installed - on demand - -
    - pbs-drmaa/1.0.18-foss-2015a - - - Installed - on demand - -
    - Perl/5.20.1-foss-2015a - - - Installed - on demand - -
    - Python/2.7.9-foss-2015a - - - Python is - available in the Intel toolchain. -
    - SAMtools/0.1.19-foss-2015a - - - Newer versions - with intel toolchain -
    - SPAdes/3.10.1-foss-2016b - - SPAdes/3.10.1-foss-2017a - -
    - Szip/2.1-foss-2015a - - - Installed - on demand - -
    - zlib/1.2.8-foss-2016b - - zlib/1.2.11-foss-2017a - -

    Gompi

- Latest pre-GCC-6.3.0 (2017a) - - gompi-2017a - - Comments -
    - ScaLAPACK/2.0.2-gompi-2016b-OpenBLAS-0.2.18-LAPACK-3.6.1 - - ScaLAPACK/2.0.2-gompi-2017a-OpenBLAS-0.2.19-LAPACK-3.7.0 - -

    GCC

    - Latest pre-gompi-2017a - - GCC-6.3.0 (2017a) - - Comments -
    - OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1 - - OpenBLAS/0.2.19-GCC-6.3.0-2.27-LAPACK-3.7.0 - -
    - numactl/2.0.11-GCC-5.4.0-2.26 - - numactl/2.0.11-GCC-6.3.0-2.27 - -
    - OpenMPI/1.10.3-GCC-5.4.0-2.26 - - OpenMPI/2.0.2-GCC-6.3.0-2.27 - -
    - MPICH/3.1.4-GCC-4.9.2 - - / - -

    GCCcore

- Latest pre-GCCcore-6.3.0 (2017a) - - GCCcore-6.3.0 (2017a) - - Comments -
    - binutils/2.26-GCCcore-5.4.0 - - binutils/2.27-GCCcore-6.3.0 - -
    - flex/2.6.0-GCCcore-5.4.0 - - flex/2.6.3-GCCcore-6.3.0 - -
    - Lmod/7.0.5 - - - Default - module tool on CentOS 7 -

    System toolchain

    - Pre-2017 - - Latest module - - Comments -
    - ant/1.9.4-Java-8 - - ant/1.10.1-Java-8 - -
    - / - - Autoconf/2.69 - -
    - / - - AutoDock_Vina/1.1.2 - -
    - / - - Automake/1.15 - -
    - / - - Autotools/2016123 - -
/ - Bazel/0.5.3 - Google's software build tool. Not installed on the Scientific Linux 6 nodes of hopper. -
    - binutils/2.26 - - binutils/2.27 - -
    - Bison/3.0.4 - - Bison/3.0.4 - -
    - BRATNextGen/20150505 - - - Installed on - demand - -
    - / - - byacc/20170201 - -
    - / - - CMake/3.7.2 - -
    - - core-counter/1.1 - -
    - CPLEX/12.6.3 - - - Installed on - demand on Leibniz. - -
    - DFTB+/1.2.2 - - - Installed - on demand on Leibniz. - -
    - / - - Doxygen/1.8.13 - -
- EasyBuild/… - - EasyBuild/3.1.2 - -
    - FastQC/0.11.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - FINE-Marine/5.2 - - - Installed - on demand on Leibniz. - -
    - - flex/2.6.0
    flex/2.6.3
    -
    -
    - GATK/3.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - Gaussian16/g16_A3-AVX - - - Work in progress. -
    - Gurobi/6.5.1 - - - Installed - on demand on Leibniz. - -
    - Hadoop/2.6.0-cdh5.4.5-native - - - Installed - on demand on Leibniz. - -
    - - help2man/1.47.4 - -
    - Java/8 - - Java/8 - -
    - - JUnit/4.12-Java-8 - -
    - / - - libtool/2.4.6 - -
    - M4/1.4.17 - - M4/1.4.18 - -
    - MATLAB/R2016a - - MATLAB/R2017a - -
    - Maven/3.3.9 - - - Installed on - demand on Leibniz. - -
    - MGLTools/1.5.7rc1 - - - Installed on - demand on Leibniz. - -
    - MlxLibrary/1.0.0 - - - Lixoft Simulx -
    - MlxPlore/1.1.1 - - - Lixoft MLXPlore -
    - monitor/1.1.2 - - monitor/1.1.2 - -
    - Monolix/2016R1 - - - Installed on - demand on Leibniz. - -
    - / - - NASM/2.12.02 - -
    - Newbler/2.9 - - / - - On request, has - not been used recently. -
    - Novoalign/3.04.02 - - - Installed on - demand on Leibniz. - -
    - ORCA/3.0.3 - - - Installed on - demand on Leibniz. - -
    - p4vasp/0.3.29 - - - Installed on - demand on Leibniz. - -
    - parallel/20160622 - - parallel/20170322 - -
    - / - - pkg-config/0.29.1 - -
    - protobuf/2.5.0 - - protobuf/2.6.1 - -
    - Ruby/2.1.10 - - Ruby/2.4.0 - -
    - / - - SCons/2.5.1 - -
    - scripts/4.0.0 - - -

    Intel toolchain

- Latest pre-2017a - - 2017a - - Comments -
    - ABINIT/8.0.7-intel-2016a - - - Work in progress -
    - Advisor/2016_update4 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Not yet available on Leibniz due to compile problems. -
    - augustus/3.0.1-intel-2015a - / - - Installed on - demand - -
    - Autoconf/2.69-intel-2016b - - Autoconf/2.69 - - Moved to the system - toolchain -
    - AutoDock/1.1.2 - - AutoDock_Vina/1.1.2 - - Naming modified - to the standard naming used in our build tools -
    - Automake/1.15-intel-2016b - - Automake/1.15 - - Moved to the system - toolchain -
    - Autotools/20150215-intel-2016b - - Autotools/2016123 - - Moved to the system - toolchain -
    / - BAli-Phy/2.3.8-intel-2017a-OpenMP
    BAli-Phy/2.3.8-intel-2017a-MPI
    -
    By Ben Redelings, documentation on the software web site. This package supports either OpenMP or MPI, but not both together in a hybrid mode. -
    - beagle-lib/2.1.2-intel-2016b - - beagle-lib/2.1.2-intel-2017a - -
    - Beast/2.4.4-intel-2016b - - Beast/2.4.5-intel-2017a - - Version with beagle-lib -
    - Biopython/1.68-intel-2016b-Python-2.7.12 - - Biopython/1.68-intel-2017a-Python-2.7.13
    Biopython/1.68-intel-2017a-Python-3.6.1 -
- Builds for Python 2.7 and Python 3.6 -
    - bismark/0.13.1-intel-2015a - - Bismark/0.17.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - Bison/3.0.4-intel-2016b - - Bison/3.0.4-intel-2017a - -
    - BLAST+/2.6.0-intel-2016b-Python-2.7.12 - - BLAST+/2.6.0-intel-2017a-Python-2.7.13 - -
    - Boost/1.63.0-intel-2016b-Python-2.7.12 - - Boost/1.63.0-intel-2017a-Python-2.7.13 - -
    - Bowtie2/2.2.9-intel-2016a - - Bowtie2/2.2.9-intel-2017a - -
    - byacc/20160606-intel-2016b - - byacc/20170201 - - Moved to the system - toolchain -
    - bzip2/1.0.6-intel-2016b - - bzip2/1.0.6-intel-2017a - -
    - cairo/1.15.2-intel-2016b - - cairo/1.15.4-intel-2017a - -
    - CASINO/2.12.1-intel-2015a - / - - Installed - on demand - -
    - CASM/0.2.0-Python-2.7.12 - / - - Installed on demand, compiler problems. - -
    / - CGAL/4.9-intel-2017a-forOpenFOAM - Installed without the components that require Qt and/or OpenGL. -
    - CMake/3.5.2-intel-2016b - - CMake/3.7.2-intel-2017a - -
    - CP2K/4.1-intel-2016b - CP2K/4.1-intel-2017a-bare
    CP2K/4.1-intel-2017a-bare-multiver
    CP2K/5.1-intel-2017a-bare-multiver
    CP2K-5.1/intel-2017a-bare-GPU-noMPI
    -
    The multiver modules contain the sopt, popt, ssmp and psmp binaries.
The bare-GPU version only works on a single GPU node, as support for MPI was not included; it is an ssmp binary using GPU acceleration.
    - CPMD/4.1-intel-2016b - CPMD/4.1-intel-2017a - CPMD is licensed software. -
    - cURL/7.49.1-intel-2016b - - cURL/7.53.1-intel-2017a - -
    / - DIAMOND/0.9.12-intel-2017a - -
    - DLCpar/1.0-intel-2016b-Python-2.7.12 - - DLCpar/1.0-intel-2017a-Python-2.7.13
    DLCpar/1.0-intel-2017a-Python-3.6.1
    -
- Installed for Python 2.7.13 and Python 3.6.1 -
    - Doxygen/1.8.11-intel-2016b - - Doxygen/1.8.13 - - Moved to the - system toolchain -
    - DSSP/2.2.1-intel-2016a - DSSP/2.2.1-intel-2017a -
    -
    - Eigen/3.2.9-intel-2016b - - Eigen/3.3.3-intel-2017a - -
    - elk/3.3.17-intel-2016a - - Elk/4.0.15-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - exonerate/2.2.0-intel-2015a - - Exonerate/2.4.0-intel-2017a - - Naming modified - to the standard naming used in our build tools -
    - expat/2.2.0-intel-2016b - - expat/2.2.0-intel-2017a - -
    / - FastME/2.1.5.1-intel-2017a - -
    - FFTW/3.3.4-intel-2015a - - FFTW/3.3.6-intel-2017a - - There is also a - FFTW-compatible interface in intel/2017a, but it does not work for all - packages. -
    - - file/5.30-intel-2017a - -
    - fixesproto/5.0-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - flex/2.6.0-intel-2016b - - flex/2.6.3-intel-2017a - -
    - fontconfig/2.12.1-intel-2016b - - fontconfig/2.12.1-intel-2017a - -
    - freeglut/3.0.0-intel-2016a - - freeglut/3.0.0-intel-2017a - - Not yet - operational on CentOS 7 - -
    - freetype/2.7-intel-2016b - - freetype/2.7.1-intel-2017a - -
    - FSL/5.0.9-intel-2016a - / - - Installed on - demand - -
    - GAMESS-US/20141205-R1-intel-2015a - / - - Installed on - demand - -
    - gc/7.4.4-intel-2016b - - gc/7.6.0-intel-2017a -
    - GDAL/2.1.0-intel-2016b - - GDAL/2.1.3-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - genometools/1.5.4-intel-2015a - - GenomeTools/1.5.9-intel-2017a - -
    - GEOS/3.5.0-intel-2015a-Python-2.7.9 - - GEOS/3.6.1-intel-2017a-Python-2.7.13 - - Does - not support Python 3. -
    - gettext/0.19.8-intel-2016b - - gettext/0.19.8.1-intel-2017a - -
    - GLib/2.48.1-intel-2016b - - GLib/2.49.7-intel-2017a - -
    - GMAP-GSNAP/2014-12-25-intel-2015a - - GMAP-GSNAP/2017-03-17-intel-2017a - -
    - GMP/6.1.1-intel-2016b - - GMP/6.1.2-intel-2017a - -
    - gnuplot/5.0.0-intel-2015a - - gnuplot/5.0.6-intel-2017a - -
    - GObject-Introspection/1.44.0-intel-2015a - - GObject-Introspection/1.49.2-intel-2017a - -
    - GROMACS/5.1.2-intel-2016a-hybrid - - GROMACS/5.1.2-intel-2017a-hybrid
    GROMACS/2016.3-intel-2017a
    GROMACS/2016.4-intel-2017a-GPU-noMPI
    -
The GROMACS GPU-noMPI binary is built for the GPU nodes without support for MPI, so it can only be used on a single GPU node.
    - GSL/2.3-intel-2016b - - GSL/2.3-intel-2017a - -
    / - gtest/1.8.0-intel-2017a - Google C++ Testing Framework -
    - Guile/1.8.8-intel-2016b - - Guile/1.8.8-intel-2017a - -
    - Guile/2.0.11-intel-2016b - - Guile/2.2.0-intel-2017a - -
    - hanythingondemand/3.2.0-intel-2016b-Python-2.7.12 - - hanythingondemand/3.2.0-intel-2017a-Python-2.7.13 - -
    - / - - HarfBuzz/1.3.1-intel-2017a - -
    - HDF5/1.8.17-intel-2016b - - HDF5/1.8.18-intel-2017a
    HDF5/1.8.18-intel-2017a-noMPI -
    HDF5 with and without MPI-support. -
    / - HISAT2/2.0.5-intel-2017a - -
    - HTSeq/0.6.1p1-intel-2016a-Python-2.7.11 - - HTSeq/0.7.2-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - icc/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - iccifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - ifort/2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - imkl/11.3.3.210-iimpi-2016b - - intel/2017a - - Intel compiler - components in a single module. -
    - impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 - - intel/2017a - - Intel compiler - components in a single module. -
    - inputproto/2.3.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - Inspector/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - ipp/8.2.1.133 - - intel/2017a - - Intel compiler - components in a single module. -
    - itac/9.0.2.045 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - / - - JasPer/2.0.12-intel-2017a - -
    - Julia/0.6.0-intel-2017a-Python-2.7.13 - Julia, command line version (so without the Juno IDE). -
    - kbproto/1.0.7-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
- kwant/1.2.2-intel-2016a-Python-3.5.1 - kwant/1.2.2-intel-2017a-Python-3.6.1 - Built with single-threaded libraries as advised in the documentation, which implies that kwant is not exactly an HPC program. -
    - LAMMPS/14May16-intel-2016b - - LAMMPS/31Mar2017-intel-2017a - -
    - - libcerf/1.5-intel-2017a - -
    - libffi/3.2.1-intel-2016b - - libffi/3.2.1-intel-2017a - -
    - - libgd/2.2.4-intel-2017a - -
    - Libint/1.1.6-intel-2016b - - Libint/1.1.6-intel-2017a
    Libint/1.1.6-intel-2017a-CP2K -
    -
    - libint2/2.0.3-intel-2015a - / - - Installed on - demand. - -
    - libjpeg-turbo/1.5.0-intel-2016b - - libjpeg-turbo/1.5.1-intel-2017a - -
    - libmatheval/1.1.11-intel-2016b - - libmatheval/1.1.11-intel-2017a - -
    - libpng/1.6.26-intel-2016b - - libpng/1.6.28-intel-2017a - -
    - libpthread-stubs/0.3-intel-2016b - / - Installed on demand. -
    - libreadline/6.3-intel-2016b - - libreadline/7.0-intel-2017a - -
    - LibTIFF/4.0.6-intel-2016b - - LibTIFF/4.0.7-intel-2017a - -
    - libtool/2.4.6-intel-2016b - - libtool/2.4.6 - - Moved to the - system toolchain -
    - libunistring/0.9.6-intel-2016b - - libunistring/0.9.7-intel-2017a - -
    - libX11/1.6.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXau/1.0.8-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxc/2.2.3-intel-2016b - - libxc/3.0.0-intel-2017a - -
    - libxcb/1.12-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXdmcp/1.1.2-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXext/1.3.3-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXfixes/5.0.1-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libXi/1.7.6-intel-2016a - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxml2/2.9.4-intel-2016b - - libxml2/2.9.4-intel-2017a - -
    - libXrender/0.9.9-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - libxslt/1.1.28-intel-2016a-Python-3.5.1 - - libxslt/1.1.29-intel-2017a - -
    - libxsmm/1.6.4-intel-2016b - - libxsmm/1.7.1-intel-2017a
    libxsmm/1.8-intel-2017a -
    -
    - libyaml/0.1.6-intel-2016a - / - Installed on demand -
- LLVM/3.9.1-intel-2017a - LLVM compiler backend with libLLVM.so. -
    - lxml/3.5.0-intel-2016a-Python-3.5.1 - - Python/2.7.13-intel-2017a - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 2.7 and 3.6 modules. -
    - M4/1.4.17-intel-2016b - - M4/1.4.18 - - Moved to the - system toolchain -
    / - MAFFT/7.312-intel-2017a-with-extensions - -
    - MAKER-P/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - MAKER-P-mpi/2.31.8-intel-2015a - - / - - Installed on - demand. - -
    - matplotlib/1.5.3-intel-2016b-Python-2.7.12 - - Python/2.7.13-intel-2017a
    Python/3.6.1-intel-2017a
    -
    - Integrated in - the standard Python 2.7 and 3.6 modules -
    - MCL/14.137-intel-2016b - - MCL/14.137-intel-2017a - -
    - mdust/1.0-intel-2015a - - mdust/1.0-intel-2017a - -
    - METIS/5.1.0-intel-2016a - - METIS/5.1.0-intel-2017a - -
    - MITE_Hunter/11-2011-intel-2015a - - / - - Installed on - demand. - -
    - molmod/1.1-intel-2016b-Python-2.7.12 - molmod/1.1-intel-2017a-Python-2.7.13 - - Work - in progress, compile problems with newer compilers. - -
    - Mono/4.6.2.7-intel-2016b - - Mono/4.8.0.495-intel-2017a - -
    - Mothur/1.34.4-intel-2015a - / - Installed on demand -
    - MUMPS/5.0.1-intel-2016a-serial
    MUMPS/5.0.0-intel-2015a-parmetis
    -
    - MUMPS-5.1.1-intel-2017a-openmp-noMPI
    MUMPS-5.1.1-intel-2017a-openmp-MPI
    MUMPS-5.1.1-intel-2017a-noOpenMP-noMPI
    -
    -
    - MUSCLE/3.8.31-intel-2015a - - MUSCLE/3.8.31-intel-2017a - -
- NASM/2.12.02-intel-2016b - - NASM/2.12.02 - - Moved to the system toolchain -
    - - ncbi-vdb/2.8.2-intel-2017a - -
    - ncurses/6.0-intel-2016b - - ncurses/6.0-intel-2017a - -
    - NEURON/7.4-intel-2017a - Yale NEURON code -
    - netaddr/0.7.14-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - netCDF/4.4.1-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netCDF-Fortran/4.4.4-intel-2016b - - netCDF/4.4.1.1-intel-2017a - - All netCDF - interfaces integrated in a single module -
    - netifaces/0.10.4-intel-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - - NGS/1.3.0 - -
    - numpy/1.9.2-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - numpy/1.10.4-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module -
    - NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.10 - - NWChem/6.6.r27746-intel-2017a-Python-2.7.13 - - On demand on Hopper. -
/ - OpenFOAM/4.1-intel-2017a - Installed without the components that require OpenGL and/or Qt (which should only be needed for postprocessing). -
    - OpenMX/3.8.1-intel-2016b - - OpenMX/3.8.3-intel-2017a - -
    / - OrthoFinder/1.1.10-intel-2017a - -
    - / - - Pango/1.40.4-intel-2017a - -
    - ParMETIS/4.0.3-intel-2015b - - ParMETIS/4.0.3-intel-2017a - -
    - pbs-drmaa/1.0.18-intel-2015a - / - Installed on demand -
    - / - - pbs_PRISMS/1.0.1-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by CASM -
    - pbs_python/4.6.0-intel-2016b-Python-2.7.12 - - pbs_python/4.6.0-intel-2017a-Python-2.7.13 - - Python - interfaces for Torque/PBS used by hanythingondemand -
    - PCRE/8.38-intel-2016b - - PCRE/8.40-intel-2017a - -
    - Perl/5.20.1-intel-2015a - - Perl/5.24.1-intel-2017a - -
    - pixman/0.34.0-intel-2016b - - pixman/0.34.0-intel-2017a - -
    - pkg-config/0.29.1-intel-2016b - - pkg-config/0.29.1 - - Moved to the - system toolchain -
    - PLUMED/2.3.0-intel-2016b - - PLUMED/2.3.0-intel-2017a - -
    - PROJ/4.9.2-intel-2016b - - PROJ/4.9.3-intel-2017a - -
    / - protobuf/3.4.0-intel-2017a - Google Protocol Buffers -
    - Pysam/0.9.1.4-intel-2016a-Python-2.7.11 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module. Also load SAMtools to use. -
    - Pysam/0.9.1.2-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. Also load SAMtools to use. -
    - Python/2.7.12-intel-2016b - - Python/2.7.13-intel-2017a - -
    - Python/3.5.1-intel-2016a - - Python/3.6.1-intel-2017a - -
    - QuantumESPRESSO/5.2.1-intel-2015b-hybrid - QuantumESPRESSO/6.1-intel-2017a - - Work in progress. -
    - R/3.3.1-intel-2016b - - R/3.3.3-intel-2017a - -
    - RAxML/8.2.9-intel-2016b-hybrid-avx - RAxML/8.2.10-intel-2017a-hybrid - We suggest users try RAxML-ng (still beta) which is supposedly much faster and better adapted to new architectures and can be installed on demand. -
    / - RAxML-NG/0.4.1-intel-2017a-pthreads
    - RAxML-NG/0.4.1-intel-2017a-hybrid -
    RAxML Next Generation beta, compiled for shared memory (pthreads) and hybrid -distributed-shared memory (hybrid, uses MPI and pthreads). -
    - R-bundle-Bioconductor/3.3-intel-2016b-R-3.3.1 - - R/3.3.3-intel-2017a - - Integrated in - the standard R module. -
    - renderproto/0.11.1-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - RepeatMasker/4.0.5-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - RMBlast/2.2.28-intel-2015a-Python-2.7.9 - / - - Installed - on demand; compiler problems. - -
    - SAMtools/0.1.19-intel-2015a - - SAMtools/1.4-intel-2017a - -
    - scikit-umfpack/0.2.1-intel-2015b-Python-2.7.10 - / - Installed on demand -
    - scikit-umfpack/0.2.1-intel-2016a-Python-3.5.1 - scikit-umfpack/0.2.3-intel-2017a-Python-3.6.1 - -
    - scipy/0.15.1-intel-2015b-Python-2.7.10 - - Python/2.7.13-intel-2017a - - Integrated in - the standard Python 2.7 module -
    - scipy/0.16.1-intel-2016a-Python-3.5.1 - - Python/3.6.1-intel-2017a - - Integrated in - the standard Python 3.6 module. -
    - SCons/2.5.1-intel-2016b-Python-2.7.12 - - SCons/2.5.1-intel-2017a-Python-2.7.13 - - On demand on - CentOS 7; also in the system toolchain. - -
    - SCOTCH/6.0.4-intel-2016a - - SCOTCH/6.0.4-intel-2017a - -
    - Siesta/3.2-pl5-intel-2015a - - Siesta/4.0-intel-2017a - -
    - SNAP/2013-11-29-intel-2015a - / - - Installed on - demand - -
    - spglib/1.7.4-intel-2016a - / - Installed on demand -
    - SQLite/3.13.0-intel-2016b - - SQLite/3.17.0-intel-2017a - -
    - SuiteSparse/4.4.5-intel-2015b-ParMETIS-4.0.3 - SuiteSparse/4.5.5-intel-2015b-ParMETIS-4.0.3 - -
    - SuiteSparse/4.4.5-intel-2016a-METIS-5.1.0 - SuiteSparse/4.4.5-intel-2017a-METIS-5.1.0
    SuiteSparse/4.5.5-intel-2017a-METIS-5.1.0
    -
    Older version as it is known to be compatible with our Python packages. -
    - SWIG/3.0.7-intel-2015b-Python-2.7.10 - - SWIG/3.0.12-intel-2017a-Python-2.7.13 - -
    - SWIG/3.0.8-intel-2016a-Python-3.5.1 - - SWIG/3.0.12-intel-2017a-Python-3.6.1 - -
    - Szip/2.1-intel-2016b - - Szip/2.1.1-intel-2017a - -
    - tbb/4.3.2.135 - - intel/2017a - - Intel compiler - components in a single module. -
    - Tcl/8.6.5-intel-2016b - - Tcl/8.6.6-intel-2017a - -
    - TELEMAC/v7p2r0-intel-2016b - TELEMAC/v7p2r0-intel-2017a
    TELEMAC/v7p2r1-intel-2017a
    TELEMAC/v7p2r2-intel-2017a
    TELEMAC/v7p3r0-intel-2017a

    - TINKER/7.1.3-intel-2015a - / - - Installed - on demand; compiler problems. - -
    - Tk/8.6.5-intel-2016b - - Tk/8.6.6-intel-2017a - -
- TopHat/2.1.1-intel-2016a - / - - TopHat is no longer developed; its developers advise switching to HISAT2, which is more accurate and more efficient. TopHat does not compile with the intel/2017a compilers. -
    VASP - VASP/5.4.4-intel-2016b
    VASP/5.4.4-intel-2016b-vtst-173 -
    VASP has not been installed in the 2017a toolchain due to performance regressions and occasional run time errors with the Intel 2017 compilers and hence has been made available in the intel/2016b toolchain. -
    - Voro++/0.4.6-intel-2016b - - Voro++/0.4.6-intel-2017a - -
    - vsc-base/2.5.1-intel-2016b-Python-2.7.12 - - / - -
    - vsc-install/0.10.11-intel-2016b-Python-2.7.12 - - vsc-install/0.10.25-intel-2017a-Python-2.7.13 - - Does not support - Python 3. -
    - vsc-mympirun/3.4.3-intel-2016b-Python-2.7.12 - - vsc-mympirun/3.4.3-intel-2017a-Python-2.7.13 - -
    - VTune/2016_update3 - - inteldevtools/2017a - - Integrated in a - new module with the other Intel development tools -
    - worker/1.5.1-intel-2015a - - worker-1.6.7-intel-2017a - -
    - X11/20160819-intel-2016b - - X11/20170129-intel-2017a - -
    - xcb-proto/1.12 - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xextproto/7.3.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xorg-macros/1.19.0-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xproto/7.0.29-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - xtrans/1.3.5-intel-2016b - - X11/20170129-intel-2017a - - Integrated in - one large X11 module -
    - XZ/5.2.2-intel-2016b - - XZ/5.2.3-intel-2017a - -
    - zlib/1.2.8-intel-2016b - - zlib/1.2.11-intel-2017a - -
    " -789,"","

    Foss toolchain

    - Latest pre-2017a - - 2017a - - Comments -
    - ANTs/2.1.0-foss-2015a - ANTs/2.2.0-intel-2017a - Moved to the Intel toolchain.
    -
    - ATLAS/3.10.2-foss-2015a-LAPACK-3.4.2 - - - Installed on - demand - -
    - CMake/3.5.2-foss-2016b - - CMake/3.7.2-foss-2017a - -
    - Cufflinks/2.2.1-foss-2015a - - - Installed - on demand - -
    - cURL/7.41.0-foss-2015a - - - Installed - on demand - -
    - Cython/0.22.1-foss-2015a-Python-2.7.9 - - Python/2.7.13-intel-2017a - - Integrated into - the standard Python module for the intel toolchains -
    - FFTW/3.3.4-gompi-2016b - - FFTW/3.3.6-gompi-2017a - -
    - GSL/2.1-foss-2015b - - - Installed - on demand - -
    - HDF5/1.8.14-foss-2015a - - - Installed - on demand - -
    - libpng/1.6.16-foss-2015a - - - Installed - on demand - -
    - libreadline/6.3-foss-2015a - - - Installed - on demand - -
    - makedepend/1.0.5-foss-2015a - - -
    - MaSuRCA/2.3.2-foss-2015a - - - Installed - on demand - -
    - ncurses/6.0-foss-2016b - - - Installed - on demand - -
    - pbs-drmaa/1.0.18-foss-2015a - - - Installed - on demand - -
    - Perl/5.20.1-foss-2015a - - - Installed - on demand - -
    - Python/2.7.9-foss-2015a - - - Python is - available in the Intel toolchain. -
    - SAMtools/0.1.19-foss-2015a - - - Newer versions - with intel toolchain -
    - SPAdes/3.10.1-foss-2016b - - SPAdes/3.10.1-foss-2017a - -
    - Szip/2.1-foss-2015a - - - Installed - on demand - -
    - zlib/1.2.8-foss-2016b - - zlib/1.2.11-foss-2017a - -
    -

    Gompi

- Latest pre-GCC-6.3.0 (2017a) - - gompi-2017a - - Comments -
    - ScaLAPACK/2.0.2-gompi-2016b-OpenBLAS-0.2.18-LAPACK-3.6.1 - - ScaLAPACK/2.0.2-gompi-2017a-OpenBLAS-0.2.19-LAPACK-3.7.0 - -
    -

    GCC

    - Latest pre-gompi-2017a - - GCC-6.3.0 (2017a) - - Comments -
    - OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1 - - OpenBLAS/0.2.19-GCC-6.3.0-2.27-LAPACK-3.7.0 - -
    - numactl/2.0.11-GCC-5.4.0-2.26 - - numactl/2.0.11-GCC-6.3.0-2.27 - -
    - OpenMPI/1.10.3-GCC-5.4.0-2.26 - - OpenMPI/2.0.2-GCC-6.3.0-2.27 - -
    - MPICH/3.1.4-GCC-4.9.2 - - / - -
    -

    GCCcore

- Latest pre-GCCcore-6.3.0 (2017a) - - GCCcore-6.3.0 (2017a) - - Comments -
    - binutils/2.26-GCCcore-5.4.0 - - binutils/2.27-GCCcore-6.3.0 - -
    - flex/2.6.0-GCCcore-5.4.0 - - flex/2.6.3-GCCcore-6.3.0 - -
    - Lmod/7.0.5 - - - Default - module tool on CentOS 7 -
    " -791,""," - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    - Pre-2017 - - Latest module - - Comments -
    - ant/1.9.4-Java-8 - - ant/1.10.1-Java-8 - -
    - / - - Autoconf/2.69 - -
    - / - - AutoDock_Vina/1.1.2 - -
    - / - - Automake/1.15 - -
    - / - - Autotools/2016123 - -
/ - Bazel/0.5.3 - Google's software build tool. Not installed on the Scientific Linux 6 nodes of hopper. -
    - binutils/2.26 - - binutils/2.27 - -
    - Bison/3.0.4 - - Bison/3.0.4 - -
    - BRATNextGen/20150505 - - - Installed on - demand - -
    - / - - byacc/20170201 - -
    - / - - CMake/3.7.2 - -
    - - core-counter/1.1 - -
    - CPLEX/12.6.3 - - - Installed on - demand on Leibniz. - -
    - DFTB+/1.2.2 - - - Installed - on demand on Leibniz. - -
    - / - - Doxygen/1.8.13 - -
- EasyBuild/… - - EasyBuild/3.1.2 - -
    - FastQC/0.11.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - FINE-Marine/5.2 - - - Installed - on demand on Leibniz. - -
    - - flex/2.6.0
    flex/2.6.3
    -
    -
    - GATK/3.5-Java-8 - - - Installed - on demand on Leibniz. - -
    - Gaussian16/g16_A3-AVX - - - Work in progress. -
    - Gurobi/6.5.1 - - - Installed - on demand on Leibniz. - -
    - Hadoop/2.6.0-cdh5.4.5-native - - - Installed - on demand on Leibniz. - -
    - - help2man/1.47.4 - -
    - Java/8 - - Java/8 - -
    - - JUnit/4.12-Java-8 - -
    - / - - libtool/2.4.6 - -
    - M4/1.4.17 - - M4/1.4.18 - -
    - MATLAB/R2016a - - MATLAB/R2017a - -
    - Maven/3.3.9 - - - Installed on - demand on Leibniz. - -
    - MGLTools/1.5.7rc1 - - - Installed on - demand on Leibniz. - -
    - MlxLibrary/1.0.0 - - - Lixoft Simulx -
    - MlxPlore/1.1.1 - - - Lixoft MLXPlore -
    - monitor/1.1.2 - - monitor/1.1.2 - -
    - Monolix/2016R1 - - - Installed on - demand on Leibniz. - -
    - / - - NASM/2.12.02 - -
    - Newbler/2.9 - - / - - On request, has - not been used recently. -
    - Novoalign/3.04.02 - - - Installed on - demand on Leibniz. - -
    - ORCA/3.0.3 - - - Installed on - demand on Leibniz. - -
    - p4vasp/0.3.29 - - - Installed on - demand on Leibniz. - -
    - parallel/20160622 - - parallel/20170322 - -
    - / - - pkg-config/0.29.1 - -
    - protobuf/2.5.0 - - protobuf/2.6.1 - -
    - Ruby/2.1.10 - - Ruby/2.4.0 - -
    - / - - SCons/2.5.1 - -
    - scripts/4.0.0 - - On request, has not been used recently. -
- setuptools/1.4.2 - - - On request, has not been used recently. -
- Spark/2.0.2 - - - On request, has not been used recently. -
- TRF/4.07.b - - - On request, has not been used recently. -
- TRIQS/1.2.0 - - - On request, has not been used recently. -
- viral-ngs/1.4.2 - - - On request, has not been used recently. -
- vsc-base/2.5.1 - - - Used to be in the compiler toolchains. -
    " -793,"","

    Introduction

Much of the useful R functionality comes in the form of packages that can be installed separately. Some of those are part of the default installation on the VSC infrastructure. Given the astounding number of packages, it is not sustainable to install each and every one of them system-wide. Since it is very easy for users to install them just for themselves or for their research group, this is not a problem. Do not hesitate to contact support whenever you encounter trouble doing so.

    Installing your own packages using conda

    The easiest way to install and manage your own R environment is conda. -

    Installing Miniconda

If you have Miniconda already installed, you can skip ahead to the next section; if Miniconda is not installed, we start with that. Download the Bash script that will install it from conda.io using, e.g., wget:

    $ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    -

    Once downloaded, run the installation script: -

    $ bash Miniconda3-latest-Linux-x86_64.sh -b -p $VSC_DATA/miniconda3
    -

Optionally, you can add the path to the Miniconda installation to the PATH environment variable in your .bashrc file. This is convenient, but may lead to conflicts when working with the module system, so make sure that you know what you are doing in either case. The line to add to your .bashrc file would be:

export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"
    -

    Creating an environment

First, ensure that the Miniconda installation is in your PATH environment variable. The following command should return the full path to the conda command:

    $ which conda
    -

If the result is blank, or reports that conda cannot be found, modify the `PATH` environment variable appropriately by adding Miniconda's bin directory to PATH, as shown below.
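
    A minimal sketch, assuming Miniconda was installed into $VSC_DATA/miniconda3 as shown above:

    $ export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"
    $ which conda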

    Creating a new conda environment is straightforward: -

    $ conda create -n science -c r r-essentials r-rodbc
    -

This command creates a new conda environment called science, and installs a number of R packages that you will probably want to have handy in any case to preprocess, visualize, or postprocess your data. You can of course install more, depending on your requirements and personal taste.

    Working with the environment

    To work with an environment, you have to activate it. This is done with, e.g., -

    $ source activate science
    -

    Here, science is the name of the environment you want to work in. -
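
    As a quick sanity check (assuming the environment provides R via the r-essentials bundle installed above), you can verify that the R picked up by your shell now comes from the conda environment:

    $ which R
    $ R --version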

    Install an additional package

To install an additional package, e.g., `ggplot2`, first ensure that the environment you want to work in is activated.

    $ source activate science
    -

    Next, install the package: -

    $ conda install -c r r-ggplot2
    -

Note that conda will take care of all dependencies, including non-R libraries. This ensures that you work in a consistent environment.
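
    To inspect what is actually installed in the active environment (both the R packages and the underlying libraries), conda can list its contents:

    $ conda list        # packages installed in the active environment
    $ conda env list    # overview of all your conda environments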

    Updating/removing

    Using conda, it is easy to keep your packages up-to-date. Updating a single package (and its dependencies) can be done using: -

    $ conda update r-rodbc
    -

Updating all packages in the environment is trivial:

    $ conda update --all
    -

    Removing an installed package: -

    $ conda remove r-mass
    -

    Deactivating an environment

To deactivate a conda environment, i.e., return the shell to its original state, use the following command:

    $ source deactivate
    -

    More information

    Additional information about conda can be found on its documentation site. -

    Alternatives to conda -

Setting up your own R package library is straightforward.

1. Load the appropriate R module, i.e., the one you want the R package to be available for:
   $ module load R/3.2.1-foss-2014a-x11-tcl
2. Start R and install the package:
   > install.packages("DEoptim")
3. Alternatively, you can download the desired package:
   $ wget cran.r-project.org/src/contrib/Archive/DEoptim/DEoptim_2.0-0.tar.gz
   and install it from the command line into a library directory of your choice (see the sketch below this list):
   $ R CMD INSTALL DEoptim_2.0-0.tar.gz -l $VSC_HOME/R/
4. These packages might depend on the specific R version, so you may need to reinstall them for the other R version.
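
    When packages are installed into a non-default location such as $VSC_HOME/R, R has to be told where to find them. A minimal sketch, assuming that directory from the example above (R_LIBS_USER and .libPaths() are standard R mechanisms; the directory itself is just the example path):

    $ export R_LIBS_USER=$VSC_HOME/R     # e.g., add this to your .bashrc
    $ R
    > .libPaths()                        # the directory should now be listed
    > library(DEoptim)                   # load the package installed there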
    " -795,"","

The 4th VSC Users Day was held at the "Paleis der Academiën", the seat of the Royal Flemish Academy of Belgium for Science and the Arts, at Hertogstraat 1, 1000 Brussels, on May 22, 2018.

    Program

    The titles in the program link to slides or abstracts of the presentations. -

    Abstracts of workshops

    VSC for starters -

The workshop provides a smooth introduction to supercomputing for new users. Starting from common concepts in personal computing, the similarities and differences with supercomputing are highlighted and some essential terminology is introduced. It is explained what users can and cannot expect from supercomputing, as well as what is expected from them as users.

    Start to GPU -

GPUs have become an important source of computational power. They are extremely well suited for some workloads, e.g., machine learning frameworks, and application vendors are providing more and more GPU support, so it is important to keep track of what is happening in your research field. This workshop will provide you with an overview of the GPU power available within the VSC and will give you guidelines on how to start using it.

    Code debugging -

    All code contains bugs, and that is frustrating. Trying to identify and eliminate them is tedious work. The extra complexity in parallel code makes this even harder. However, using coding best practices can reduce the number of bugs in your code considerably, and using the right tools for debugging parallel code will simplify and streamline the process of fixing your code. Familiarizing yourself with best practices will give you an excellent return on investment. -

    Code optimization -

Performance is a key concern in HPC (High Performance Computing). As a developer, but also as an application user, you have to be aware of the impact of modern computer architecture on the efficiency of your code. Profilers can help you identify performance hotspots so that you can improve the performance of your code systematically. Profilers can also help you find the limiting factors when you run an application, so that you can adapt your workflow to overcome those as much as possible.

    Paying attention to efficiency will allow you to scale your research to higher accuracy and fidelity.

    " -797,"","
    1. Doping Diamond with Luminescent Centres: The Electronic Structure of Ge and Eu Defect Complexes
       Danny E. P. Vanpoucke, Shannon S. Nicley, Emilie Bourgeois, Milos Nesladek, Ken Haenen (U Hasselt)
    2. Impact of observed and future climate change on agriculture and forestry in Central Asia
       Rozemien De Troch, Steven Caluwaerts, Lesley De Cruz, Piet Termonia and Philippe De Maeyer (U Gent and Koninklijk Meteorologisch Instituut - Institut Royal Météorologique)
    3. Do droughts self-propagate and self-intensify?
       Jessica Keune, Hendrik Wouters (U Gent)
    4. Combining Multigrid and Multilevel Monte Carlo with Applications to Uncertainty Quantification
       Pieterjan Robbe, Dirk Nuyens, Stefan Vandewalle (KU Leuven)
    5. Reference Assisted Assembly and Annotation of the Octopus vulgaris Genome
       Koen Herten, Gregory Maes, Eve Seuntjes, Fiorito Graziano, Joris R Vermeesch (KU Leuven)
    6. Going where the wind blows – Aeroelastic simulations of a wind turbine with composite blades
       Gilberto Santo, Mathijs Peeters, Wim Van Paepegem, Joris Degroote (U Gent)
    7. Tailoring superconductivity in lithium-decorated graphene
       Annelinde Strobbe, Jonas Bekaert, Milorad Milošević (U Antwerpen)
    8. Calculating terrain parameters from Digital Elevation Models on multicore processors
       Grethell Castillo Reyes, Dirk Roose (UCI, Havana and KU Leuven)
    9. HPC4Business: Predicting Churn in Telco from Very Large Graphs using Representation Learning
       Sandra Mitrović, Jochen De Weedt (KU Leuven)
    10. Machine learning and materials science: from ab initio screening to microstructure analysis
       Michiel Larmuseau, Maarten Cools-Ceuppens, Michael Sluydts, Toon Verstraelen, Tom Dhaene, Stefaan Cottenier (U Gent and OCAS)
    11. Generating climate forcing for the Ecotron experiment using HPC
       Inne Vanderkelen, F. Rineau, E. Davin, L. Gudmundsson, J. Zscheischler, S. I. Seneviratne, W. Thiery (VUB, U Hasselt and ETH Zurich)
    12. A hybridized DG method for unsteady flow problems
       Alexander Jaust, Jochen Schütz (U Hasselt)
    13. Aromatic sulfonation with SO3: mechanistic and kinetic study
       Samuel Moors, Xavier Deraet, Guy Van Assche, Paul Geerlings, Frank De Proft (VUB)
    14. Understanding ambident nucleophilicity: a combined activation-strain and conceptual DFT analysis
       Tom Bettens, Trevor A. Hamlin, Mercedes Alonso, F. Matthias Bickelhaupt, Frank De Proft (VUB)
    15. Computational fluid dynamics-based study of novel technologies in the steam cracking process
       Stijn Vangaever, Jens N. Dedeyne, Pieter A. Reyniers, Guy B. Marin, Geraldine J. Heynderickx, Kevin M. Van Geem (U Gent)
    16. HPC for regional climate simulations over Antarctica
       Alexandra Gossart, Niels Souverijns, Matthias Demuzere, Sam Vanden Broucke, Nicole P.M. van Lipzig (KU Leuven)
    17. Materials microstructure simulation
       Yuri Coutinho, Nele Moelans (KU Leuven)
    18. SP-Wind: A scalable large-eddy simulation code for modeling and optimization of wind energy systems
       Wim Munters, Athanasios Vitsas, Thomas Haas, Johan Meyers (KU Leuven)
    19. Simulation of atmospheric flows and their interaction with classical and airborne wind energy systems
       Dries Allaerts, Thomas Haas, Johan Meyers (KU Leuven)
    20. LES based control of wind farms
       Pieter Bauweraerts, Wim Munters, Johan Meyers (KU Leuven)
    21. Surge resistance identification of inland vessels by Computational Fluid Dynamics
       Arne Eggers, Gerben Peeters (KU Leuven)
    22. Analyzing the epidemic size distributions of an individual-based influenza model
       Pieter Libin, Kristof Theys, Ann Nowé (VUB and KU Leuven)
    23. Studying the adaptation of complex biomolecular systems through mechanistic modeling and in silico evolution
       Jayson Gutiérrez, Steven Maere (VIB-UGent Center for Plant Systems Biology and U Gent)
    24. Fraction of virus individuals with beneficial alleles affects the trajectory of a selective sweep
       Abbas Jariani, Pieter Libin, Kristof Theys (VIB, VUB and KU Leuven)
    25. The ESA Virtual Space Weather Modeling Centre
       S. Poedts, A. Kochanov, A. Lani (KU Leuven), H. Deconinck (VKI), N. Mihalache, A. Lani & F. Diet (SAS), D. Heynderickx (DH Consultancy), J. De Keyser, E. De Donder, N.B. Crosby, M. Echim (BISA), L. Rodriguez, R. Vansina, F. Verstringe, B. Mampaey (ROB), R. Horne, S. Glauert, J. Isles (BAS), P. Jiggens, R. Keil, A. Glover, J.-P.
    26. Forecasting space weather with EUHFORIA in ESA’s Virtual Space Weather Modeling Centre
       S. Poedts, A. Lani, A. Kochanov, Ch. Verbeke, C. Scolini, A. Isavnin, N. Wijsen (CmPA / KU Leuven), J. Pomoell, E. Kilpua, E. Asvestari, E. Lumme (University of Helsinki, Helsinki, Finland)
    27. Cheese brines harbour both halophilic/halotolerant and cheese ingredient-associated microorganisms
       Louise Vermote, Marko Verce, Luc De Vuyst, Stefan Weckx (VUB)
    28. Multiscale Climate Modelling over Africa
       O. Brousse, J. Van de Walle, W. Thiery, H. Wouters, M. Demuzerre, N. P.M. van Lipzig (KU Leuven)
    29. Structural and electronic properties of Naples Yellow pigments
       R. Saniz, D. Lamoen, A. Martchetti, K. De Wael, B. Partoens (U Antwerpen)
    30. Hard X-ray sources in solar flares: K-H instability and turbulence
       Wenzhi Ruan, Chun Xia, Rony Keppens (KU Leuven)
    31. A Cost-Efficient Workflow for the Whole-Transcriptome Analysis of Xenograft-Derived Tissue
       Álvaro Cortés-Calabuig, Magali Verheecke, Vanessa Brys, Jeroen Van Houdt, Frederic Amant, Joris Vermeesch (KU Leuven)
    32. Validation of GenOme Resolution by Density-gradient Isopycnic ANalysis (GORDIAN) for sequence-based microbial community analysis
       Sofie Thijs, Nathan Bullen, Sarah Coyotzi, Jaco Vangronsveld, William Holben, Laura Hug, Josh Neufeld (U Hasselt, Waterloo Univ., Univ. Montana)
    33. The composition and functional potential of water kefir fermentation microbiota as revealed through shotgun metagenomics
       Marko Verce, Luc De Vuyst, and Stefan Weckx (VUB)
    "