2. MPSD HPC system#

Important

The operating system of the HPC system was upgraded in January 2025. This has the following implications:

  • The 24a software release is no longer available (as it is not compatible with the new operating system). Pre-compiled modules are now available under the 25a name, i.e. mpsd-modules 25a.

  • If you have compiled your own software on the old system you will need to recompile.

  • Python virtual environments based on the old system Python or any of the previously provided Python modules will no longer work. Please re-create those (see Python documentation). If you need any help please get in touch.

  • The toolchain metamodules have been deprecated and will be removed in the future. Please load compiler, mpi, and other required modules explicitly. For compiling Octopus the octopus-dependencies metamodules can be used to conveniently load all required modules (details in Loading a toolchain to compile Octopus).

  • A new Intel oneapi toolchain is now available. This toolchain is still in early development and currently contains a limited set of modules.

  • Intel classic toolchains are no longer available, please use GCC toolchains instead. If you still need Intel classic please get in touch.

  • The anaconda module is no longer available (due to license changes from Anaconda Inc.). Please use the miniforge3 module instead (details are documented in the Python section).

  • The default partition has changed, the new default partition is public2.

For the Raven, Viper and Ada machines, please check Overview computing services.

2.1. Login nodes#

Login nodes are mpsd-hpc-login1.desy.de and mpsd-hpc-login2.desy.de.

If you do not yet have access to the system and are a member of the Max Planck Institute for the Structure and Dynamics of Matter (MPSD), please request access to the MPSD HPC system by emailing MPSD IT at support[at]mpsd[dot]mpg[dot]de. Please include your DESY username and send the email from your Max Planck email account.

Note

During the first login your home directory will be created. This could take up to a minute. Please be patient.

2.2. Job submission#

Job submission is via Slurm.

Example Slurm submission scripts are available below (Example batch scripts).

2.2.1. Partitions#

The following partitions are available to all (partial output from sinfo):

PARTITION   TIMELIMIT  NODES NODELIST
bigmem     7-00:00:00      8 mpsd-hpc-hp-[001-008]
gpu        7-00:00:00      2 mpsd-hpc-gpu-[001-002]
gpu-ayyer  7-00:00:00      3 mpsd-hpc-gpu-[003-005]
public     7-00:00:00     49 mpsd-hpc-ibm-[001-030,035-036,043-049,053-062]
public2*   7-00:00:00     63 mpsd-hpc-pizza-[001-063]

Please use the machines in the gpu partition only if your code supports NVIDIA CUDA.

Hardware resources per node:

  • public

    • 16 physical cores (no hyperthreading, 16 CPUs in Slurm terminology)

    • 64GB RAM

    • at most 2 nodes can be used for multi-node jobs (as the 10 Gbit Ethernet used for MPI has a relatively high latency)

    • microarchitecture: sandybridge

  • public2

    • 40 physical cores (80 with hyperthreading, 80 CPUs in Slurm terminology)

    • at least 256GB RAM (some nodes have up to 768GB RAM)

    • only single-node jobs are permitted (as the 1 Gbit Ethernet is inefficient for MPI jobs across nodes)

    • microarchitecture: broadwell

  • bigmem

    • 96 physical cores (192 with hyperthreading, 192 CPUs in Slurm terminology)

    • 2TB RAM

    • fast FDR infiniband for MPI communication

    • microarchitecture: broadwell

  • gpu

    • 16 physical cores (32 with hyperthreading, 32 CPUs in Slurm terminology)

    • 1.5TB RAM

    • fast FDR infiniband for MPI communication

    • 8 Tesla V100 GPUs

    • microarchitecture: skylake_avx512

  • gpu-ayyer

    • 40 physical cores (80 with hyperthreading, 80 CPUs in Slurm terminology)

    • 372GB RAM

    • fast FDR infiniband for MPI communication

    • 4 Tesla V100 GPUs

    • microarchitecture: cascadelake

2.2.2. Slurm default values#

Slurm defaults are an execution time of 1 hour, one task, two CPUs, 5600MB of RAM, and the public2 partition.

The maximum runtime for any job is 7 days. Interactive jobs are limited to 12 hours.
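
For example, the defaults can be overridden explicitly on submission (a minimal sketch; job.sh is a hypothetical batch script and the values are only illustrative):

user@mpsd-hpc-login1:~$ sbatch --time=2-00:00:00 --ntasks=4 --mem=20000 --partition=public job.sh

This requests 2 days of runtime, 4 tasks, 20000 MB of RAM per node, and the public partition.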

A node is a physical multi-core shared-memory computer and can (by default) be shared between multiple users (i.e. use is not exclusive, unless requested).

Logging onto nodes via ssh is only possible once the nodes are allocated (either via sbatch when the job starts, or using salloc for interactive use, see Interactive use of HPC nodes). This avoids accidental over-use of resources and enables energy saving measures (such as switching compute nodes off automatically if they are not in use).

2.2.3. Slurm CPUs#

Whereas in usual language we would consider a “CPU” the entire processor package (i.e.: the device you attach to the motherboard socket), in Slurm terminology a “CPU” is a computational core (or a thread if hyperthreading is configured). This is what is sometimes called a “Logical Core”. A computing node that has an 8-core processor with Simultaneous Multithreading (Hyperthreading) technology, would appear to Slurm as a node with “16 CPUs”.

As this document refers to Slurm and its various commands, we use the Slurm terminology throughout.
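
For example, we can query how many Slurm CPUs (logical cores) each node of a partition reports (a small sketch; output not shown). On public2 nodes, which have 40 physical cores with hyperthreading, this reports 80 CPUs per node:

user@mpsd-hpc-login1:~$ sinfo -p public2 -o "%N %c"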

2.2.4. Interactive use of HPC nodes#

For production computation, we typically write a batch file (see Example batch scripts), and submit these using the sbatch command.

Sometimes it can be helpful to log in to an HPC node, for example to compile software or run interactive tests. The command to use in this case is salloc.

For example, requesting a job with all default settings:

user@mpsd-hpc-login1:~$ salloc
salloc: Granted job allocation 1272
user@mpsd-hpc-pizza-010:~$

We can see from the prompt (user@mpsd-hpc-pizza-010:~$) that the Slurm system has allocated the requested resources on node mpsd-hpc-pizza-010 to us.

We can use the mpsd-show-job-resources command to check some details of the allocation:

user@mpsd-hpc-pizza-010:~$ mpsd-show-job-resources
 345415 Nodes: mpsd-hpc-pizza-010
 345415 Local Node: mpsd-hpc-pizza-010

 345415 CPUSET: 0,40
 345415 MEMORY: 5600 M

Here we see (CPUSET: 0,40) that we have been allocated two CPUs (in Slurm terminology) and that these CPUs have the indices 0 and 40. If we had requested more CPUs, more indices would be displayed (see below).

We can finish our interactive session by typing exit:

user@mpsd-hpc-pizza-010:~$ exit
exit
salloc: Relinquishing job allocation 1272
user@mpsd-hpc-login1:~$

Using a tmux session while working interactively is advisable, as it allows you to get back to the terminal in case you lose connection to the session (e.g. due to network issues). A quick start for tmux can be found at tmux/tmux. Note: you need to start tmux on the login node before allocating resources.
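
A minimal sketch of this workflow (the session name hpc is arbitrary):

user@mpsd-hpc-login1:~$ tmux new -s hpc      # on the login node: start a named session
user@mpsd-hpc-login1:~$ salloc --time=120    # inside tmux: request resources (example)

Detach with Ctrl-b d; after a lost connection, log in to the same login node again and run tmux attach -t hpc to return to your session.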

If we desire exclusive use of a node (i.e. not shared with others), we can use salloc --exclusive (here we request a session time of 120 minutes):

user@mpsd-hpc-login2:~$ salloc --exclusive --time=120 --partition=public
salloc: Granted job allocation 1279
user@mpsd-hpc-ibm-061:~$ mpsd-show-job-resources
  65911 Nodes: mpsd-hpc-ibm-061
  65911 Local Node: mpsd-hpc-ibm-061

  65911 CPUSET: 0-15
  65911 MEMORY: 56000 M

We can see (in the output above) that all 16 CPUs of the node are allocated to us.

Assume we need 16 CPUs and 10GB of RAM for our interactive session (the 16 CPUs correspond to the number of OpenMP threads, see OpenMP):

user@mpsd-hpc-login1:~$ salloc --mem=10000 --cpus-per-task=16
salloc: Granted job allocation 1273
user@mpsd-hpc-pizza-058:~$ mpsd-show-job-resources
 345446 Nodes: mpsd-hpc-pizza-058
 345446 Local Node: mpsd-hpc-pizza-058

 345446 CPUSET: 0-8,40-48
 345446 MEMORY: 10000 M
user@mpsd-hpc-pizza-058:~$

If we execute MPI programs, we can specify the number of nodes (a node is a compute node, typically with one, two or four CPU sockets) and how many (MPI) tasks (= processes) we want to run on each node. Imagine we ask for two nodes and want to run 4 MPI processes on each:

user@mpsd-hpc-login1:~$ salloc --nodes=2 --tasks-per-node=4 --partition=public
salloc: Granted job allocation 1276
user@mpsd-hpc-ibm-058:~$ mpsd-show-job-resources
 345591 Nodes: mpsd-hpc-ibm-[058-059]
 345591 Local Node: mpsd-hpc-ibm-058

 345591 CPUSET: 0-3
 345591 MEMORY: 14000 M
user@mpsd-hpc-ibm-058:~$ srun hostname
mpsd-hpc-ibm-059
mpsd-hpc-ibm-059
mpsd-hpc-ibm-059
mpsd-hpc-ibm-059
mpsd-hpc-ibm-058
mpsd-hpc-ibm-058
mpsd-hpc-ibm-058
mpsd-hpc-ibm-058

The srun command starts the execution of our (MPI) tasks. We use the hostname command above and can see that 4 instances of it run on each node.

Jobs default to the public2 partition, but specifying -p followed by a partition name directs them to a different partition.

user@mpsd-hpc-login1:~$ salloc --mem=1000 -p bigmem --cpus-per-task=12
salloc: Granted job allocation 1277
salloc: Waiting for resource configuration
salloc: Nodes mpsd-hpc-hp-002 are ready for job
user@mpsd-hpc-hp-002:~$ mpsd-show-job-resources
  32114 Nodes: mpsd-hpc-hp-002
  32114 Local Node: mpsd-hpc-hp-002

  32114 CPUSET: 48-53,144-149
  32114 MEMORY: 1000 M

This allocates the requested resources on a node in the bigmem partition.

2.2.5. Finding out about my jobs#

There are multiple ways of finding out about your Slurm jobs:

  • squeue --me lists only your jobs (see below for output)

  • mpsd-show-job-resources can be used ‘inside’ the job (to verify hardware allocation is as desired)

  • scontrol show job JOBID provides a lot of detail

Example: we request 2 nodes with 4 tasks per node (and by default one CPU per task):

user@mpsd-hpc-login1:~$ squeue --me
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
user@mpsd-hpc-login1:~$ salloc --nodes=2 --tasks-per-node=4 --partition=public
salloc: Granted job allocation 1276
user@mpsd-hpc-ibm-058:~$ squeue --me
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
   1276    public interact  user  R      11:37      2 mpsd-hpc-ibm-[058-059]
user@mpsd-hpc-ibm-058:~$ mpsd-show-job-resources
 345591 Nodes: mpsd-hpc-ibm-[058-059]
 345591 Local Node: mpsd-hpc-ibm-058

 345591 CPUSET: 0-3
 345591 MEMORY: 14000 M
user@mpsd-hpc-ibm-058:~$ scontrol show job 1276
JobId=1276 JobName=interactive
   UserId=user(28479) GroupId=cfel(3512) MCS_label=N/A
   <...>
   RunTime=00:14:08 TimeLimit=01:00:00 TimeMin=N/A
   Partition=public AllocNode:Sid=mpsd-hpc-login1.desy.de:3116660
   NodeList=mpsd-hpc-ibm-[058-059]
   BatchHost=mpsd-hpc-ibm-058
   NumNodes=2 NumCPUs=8 NumTasks=8 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=8,mem=28000M,node=2,billing=8
   Socks/Node=* NtasksPerN:B:S:C=4:0:*:1 CoreSpec=*
   MinCPUsNode=4 MinMemoryCPU=4000M MinTmpDiskNode=0
   <...>

2.3. Storage and quotas#

The MPSD HPC system provides two file systems: /home and /scratch:

/home/$USER ($HOME)

  • home file system for code and scripts

  • user quota (storage limit): 100 GB

  • regular backups

  • users have access to the backup of their data under $HOME/.zfs/snapshots

/scratch/$USER

  • scratch file system for simulation output and other temporary data

  • there are no backups for /scratch: hardware error or human error can lead to data loss.

  • A per-user quota of (by default) 25TB is applied. This is in place to prevent jobs that (unintentionally) write arbitrary amounts of data to /scratch from filling up the file system and blocking it for everyone.

  • The following policy is applied to manage overall usage of /scratch:

    If /scratch fills up, the cluster becomes unusable. Should this happen, we will make space available through the following actions:

    1. purchase and installation of additional hardware to increase the storage available in /scratch (if funding and other constraints allow this)

    2. ask users to voluntarily reduce their usage of /scratch (by, for example, deleting some data, or archiving completed projects elsewhere)

    3. if 1. and 2. do not resolve the situation, a script will be started that deletes some of the files on /scratch (starting with the oldest files). Notice will be given of this procedure.

You can view your current file system usage using the mpsd-quota command. Example output:

user@mpsd-hpc-login2:~$ mpsd-quota
location                        used          avail            use%
/home/username               8.74 GB       98.63 GB           8.86%
/scratch/username          705.27 GB       25.00 TB           2.82%

Recommendation for usage of /home/$USER and /scratch/$USER:

Put small files and important data into /home/$USER: for example source code, scripts to compile your source, compiled software, scripts to submit jobs to Slurm, post-processing scripts, and perhaps also small data files.

Put simulation output (in particular if the files are large) into /scratch/$USER. All data in /scratch should be re-computable with reasonable effort (typically by running scripts stored somewhere in /home/$USER). This re-computation may be needed if data loss occurs on /scratch, the hardware is retired, or if data needs to be deleted because /scratch runs out of space.

Note

To facilitate the joint analysis of data, the /home and /scratch directories are set up such that all users can read all directories of all other users. If you want to keep your data in subfolder DIR private, you should run a command like chmod -R og-rx DIR.

The permissions on /home/$USER and /scratch/$USER are such that other users can enter your directory but not list its contents (i.e. they cannot see which files and directories you have). To share data with someone else you therefore need to tell them the full path to the relevant data. (By default this does not apply to subfolders you create yourself, so once others know the path to a subfolder they can find and read everything inside it.)
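
For illustration, a hedged sketch (the directory names are hypothetical):

user@mpsd-hpc-login1:~$ chmod -R og-rx /scratch/$USER/private-project   # keep this subfolder private
# to share other data, pass its full path on to your colleague, for example:
# /scratch/$USER/shared-project/run-42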

2.4. Software#

The software on the MPSD HPC system is provided via environment modules. This facilitates providing different versions of the same software. The software is organised in a hierarchical structure.

First, you need to decide which MPSD software environment version you need. These are named according to calendar years: the most recent one is 25a. We select that version using the mpsd-modules command, for example mpsd-modules 25a.

In order to use most modules we first have to load a base compiler and, where applicable, an MPI implementation. That way we can choose between different compilers and MPI implementations for a given software package. More details are given below.

From a high-level perspective, the required steps to use a particular module are (a short example session follows this list):

  1. Activate the MPSD software environment version of modules

  2. Search for the module to find available versions and required base modules

  3. Load required base modules (such as a compiler)

  4. Load the desired module
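
A minimal sketch of these four steps, using FFTW as an example (module versions as used later in this document):

user@mpsd-hpc-login1:~$ mpsd-modules 25a                       # 1. activate the 25a software environment
user@mpsd-hpc-login1:~$ module spider fftw                     # 2. find versions and required base modules
user@mpsd-hpc-login1:~$ module load gcc/11.3.0 openmpi/4.1.4   # 3. load the required base modules
user@mpsd-hpc-login1:~$ module load fftw/3.3.10                # 4. load the desired module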

2.4.1. TLDR#

  • Modules are organised in a hierarchical structure with compiler and MPI implementation as base modules.

  • Modules may be compiled with different feature sets. Use mpsd-modules for switching.

    • a generic feature set (runs on all nodes), activated by default

      user@mpsd-hpc-login1:~$ mpsd-modules 25a
      
    • architecture-dependent feature sets (depending on the CPU microarchitecture $MPSD_MICROARCH of the nodes, a suitable optimised set is automatically selected when using the option native)

      user@mpsd-hpc-login1:~$ mpsd-modules 25a native
      

      For more options refer to mpsd-modules --help.

  • To subsequently find and load modules:

    • module avail

    • module spider <module-name>

    • module load <module1> [<module2> ...]

  • octopus-dependencies modules are provided to simplify loading dependencies to compile octopus. They depend on compiler and mpi modules. Different variants are provided: min / min-cuda (required dependencies), full / full-cuda (required and optional dependencies)

  • Deprecated: toolchain modules replicate easybuild toolchains (and add a few generic packages such as cmake and ninja). If you used to use these modules please transition to loading compiler, MPI and other required modules explicitly.

  • Once a module X is loaded, the environment variable $MPSD_X_ROOT provides the location of the module’s files. For example:

    user@mpsd-hpc-login1:~$ module load gcc/11.3.0 gsl/2.7.1
    user@mpsd-hpc-login1:~$ echo $MPSD_GSL_ROOT
    /opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/gsl-2.7.1-4zajlwxv4rm2mjkjoouvujth6lorbcm6
    
  • Set the rpath for dependencies; do not use LD_LIBRARY_PATH. See Setting the rpath (finding libraries at runtime).

  • If you compile software with cmake you may run into problems with a missing rpath in the resulting binary. If you face that problem you need to unset CPATH, unset LIBRARY_PATH, and unset LDFLAGS.

2.4.2. Initial setup#

The MPSD HPC system consists of a heterogeneous set of compute nodes with different CPU features. This is reflected in the available software stack by providing both a generic set of modules that can be used on all nodes as well as specialised sets of modules for the different available (hardware) microarchitectures. The latter will only run on certain nodes.

A versioning scheme is used for the MPSD software environment to improve reproducibility. Currently, all software is available in the 25a release (i.e. the first release in 2025). Additional modules will be added to this environment as long as they do not break anything. Therefore, users should always specify the version of the modules they use (even if only a single version is available). A new release will be made if any addition/change would break backwards compatibility.

The heterogeneous setup makes it necessary to first add an additional path where module files can be found. To activate the different sets of modules we use the mpsd-modules command. It takes two arguments: the release number (of the MPSD software environment, mandatory) and the feature set (optional; the generic set is used by default). Calling mpsd-modules list lists all available releases, and mpsd-modules <release number> list lists the feature sets available for that release. Calling mpsd-modules --help shows help and lists all available options. The microarchitecture of each node is stored in the environment variable $MPSD_MICROARCH (and can also be obtained via archspec cpu).
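
For reference, these inspection commands look as follows (output not shown):

user@mpsd-hpc-login1:~$ mpsd-modules list         # list all available releases
user@mpsd-hpc-login1:~$ mpsd-modules 25a list     # list the feature sets available for the 25a release
user@mpsd-hpc-login1:~$ echo $MPSD_MICROARCH      # microarchitecture of the current node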

To demonstrate the use of mpsd-modules we activate the generic module set of the software environment 25a. These modules can be used on all HPC nodes.

user@mpsd-hpc-login1:~$ mpsd-modules 25a

Now, we can list available modules. At the time of writing this produces (truncated):

user@mpsd-hpc-login1:~$ module avail

------------------------ /opt_mpsd/linux-debian12/25a/sandybridge/lmod/Core ------------------------
  gcc/11.3.0
  gcc/12.3.0
  gcc/13.2.0
  gcc/14.2.0                              (D)
  intel-oneapi-compilers/2025.0.0         (D)
  miniforge3/24.11.2-1                    (D)

--------------------------------- /usr/share/lmod/lmod/modulefiles ---------------------------------
  Core/lmod    Core/settarg (D)

---------------------------------- /usr/share/modules/modulefiles ----------------------------------
  mathematica    mathematica12p2    matlab    matlab2021b

  Where:
  D:  Default Module

We can only see a small number of modules. The reason for this is the hierarchical structure mentioned before. The majority of modules are only visible once we load a compiler (and depending on the package an MPI implementation).

We can load a compiler and again list available modules. Now many more are available:

user@mpsd-hpc-login1:~$ module load gcc/13.2.0
user@mpsd-hpc-login1:~$ module avail

--------------------- /opt_mpsd/linux-debian12/25a/sandybridge/lmod/gcc/13.2.0 ---------------------
  autoconf-archive/2023.02.20           libxdmcp/1.1.4
  autoconf/2.72                         libxfont/1.5.4
  automake/1.16.5                       libxml2/2.10.3
  bdftopcf/1.1                          libyaml/0.2.5
  berkeley-db/18.1.40                   lz4/1.9.4
  berkeleygw/3.1.0                      m4/1.4.19
  bigdft-atlab/1.9.3                    metis/5.1.0
  bigdft-futile/1.9.3                   mkfontdir/1.0.7
  bigdft-psolver/1.9.3                  mkfontscale/1.2.2
  binutils/2.40                         mpfr/4.2.0
  bison/3.8.2                           nasm/2.15.05
  boost/1.83.0                          ncurses/6.4
  bzip2/1.0.8                           netcdf-c/4.9.2
  c-blosc/1.21.5                        netcdf-fortran/4.6.1
  ca-certificates-mozilla/2023-05-30    nfft/3.5.3
  cgal/5.6                              nghttp2/1.57.0
  check/0.15.2                          ninja/1.10.2
  cmake/3.27.9                          ninja/1.11.1                (D)
  curl/8.4.0                            nlopt/2.7.1
  dftbplus/23.1                         numactl/2.0.14
  diffutils/3.9                         octopus-dependencies/full
  eigen/3.4.0                           openblas/0.3.24
  etsf-io/1.0.4                         openmpi/4.1.6
  expat/2.5.0                           openssh/9.5p1
  fftw/3.3.10                           openssl/3.1.3
  findutils/4.9.0                       pcre2/10.42
  flex/2.6.3                            perl-yaml/1.30
  font-util/1.4.0                       perl/5.38.0
  fontconfig/2.14.2                     pigz/2.7
  fontsproto/2.1.3                      pkgconf/1.9.5
  freetype/2.11.1                       pmix/5.0.1
  gdbm/1.23                             py-cython/0.29.36
  gettext/0.22.3                        py-docutils/0.20.1
  gmake/4.4.1                           py-flit-core/3.9.0
  gmp/6.2.1                             py-h5py/3.8.0
  gperf/3.1                             py-numpy/1.26.1
  gsl/2.7.1                             py-packaging/23.1
  hdf5/1.14.3                           py-pip/23.1.2
  hwloc/2.9.1                           py-pkgconfig/1.5.5
  inputproto/2.3.2                      py-poetry-core/1.6.1
  kbproto/1.0.7                         py-pyproject-metadata/0.7.1
  knem/1.1.4                            py-pyyaml/6.0
  krb5/1.20.1                           py-setuptools/68.0.0
  libaec/1.0.6                          py-wheel/0.41.2
  libbsd/0.11.7                         python/3.11.7
  libedit/3.1-20210216                  rdma-core/41.0
  libevent/2.1.12                       re2c/2.2
  libffi/3.4.4                          readline/8.2
  libfontenc/1.1.7                      snappy/1.1.10
  libgd/2.3.3                           sparskit/develop
  libiconv/1.17                         spglib/2.1.0
  libjpeg-turbo/3.0.0                   sqlite/3.43.2
  libmd/1.0.4                           swig/4.1.1
  libnl/3.3.0                           tar/1.34
  libpciaccess/0.17                     texinfo/7.0.3
  libpng/1.6.39                         ucx/1.15.0
  libpspio/0.3.0                        util-linux-uuid/2.38.1
  libpthread-stubs/0.4                  util-macros/1.19.3
  libsigsegv/2.14                       valgrind/3.20.0
  libtiff/4.5.1                         xcb-proto/1.15.2
  libtool/2.4.7                         xextproto/7.3.0
  libvdwxc/0.4.0                        xproto/7.0.31
  libx11/1.8.4                          xtrans/1.4.0
  libxau/1.0.8                          xz/5.4.1
  libxc/6.2.2                           zlib-ng/2.1.4
  libxcb/1.14                           zlib/1.3
  libxcrypt/4.4.35                      zstd/1.5.5

------------------------ /opt_mpsd/linux-debian12/25a/sandybridge/lmod/Core ------------------------
  gcc/11.3.0
  gcc/12.3.0
  gcc/13.2.0
  gcc/14.2.0                              (D)
  intel-oneapi-compilers/2025.0.0         (D)
  miniforge3/24.11.2-1                    (D)

...

We now unload all loaded modules:

user@mpsd-hpc-login1:~$ module purge

2.4.3. Loading specific packages#

To find a specific package we can use the module spider command. Without extra arguments this would list all modules. We can search for a specific module by adding the module name. For example, let us find the miniforge3 package:

user@mpsd-hpc-login1:~$ module spider miniforge3

------------------------------------------------------------------------------------------------
  miniforge3: miniforge3/24.11.2-1
------------------------------------------------------------------------------------------------

    You can directly load this module: "module load miniforge3/24.11.2-1"

    Help:
     Miniforge3 is a minimal installer for conda and mamba specific to conda-
     forge.

We can directly load the miniforge3 module:

user@mpsd-hpc-login1:~$ module load miniforge3/24.11.2-1
user@mpsd-hpc-login1:~$ python --version
Python 3.12.7
user@mpsd-hpc-login1:~$ which python
/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-x86_64_v2/gcc-12.2.0/miniforge3-2024.11.2-1-w3dmolygyqx4w6teluo3p5bq2taxnouo/bin/python

Most modules cannot be loaded directly. Instead we first have to load a compiler and sometimes also an MPI implementation. As an example we search for FFTW in version 3.3.10 (which we happen to know is available):

user@mpsd-hpc-login1:~$ module spider fftw/3.3.10

----------------------------------------------------------------------------
  fftw: fftw/3.3.10
----------------------------------------------------------------------------

    You will need to load all module(s) on any one of the lines below before the "fftw/3.3.10" module is available to load.

      gcc/11.3.0
      gcc/11.3.0  openmpi/4.1.4
      ...

    Help:
      FFTW is a C subroutine library for computing the discrete Fourier
      ...

FFTW 3.3.10 is available in two different variants, with and without MPI support. We can load the version with MPI support by first loading gcc and openmpi:

user@mpsd-hpc-login1:~$ module load gcc/11.3.0 openmpi/4.1.4 fftw/3.3.10

Likewise, we can load the version without MPI support by just loading a compiler and FFTW:

user@mpsd-hpc-login1:~$ module purge
user@mpsd-hpc-login1:~$ module load gcc/11.3.0 fftw/3.3.10

If we need to know the location of the files associated with a module X, we can use the MPSD_X_ROOT environment variable. For example:

user@mpsd-hpc-login1:~$ echo $MPSD_FFTW_ROOT
/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/fftw-3.3.10-qra6ez6es3unvk2i56hmkpfnmd2oxy3b

To get more detailed information, we can use module show X:

user@mpsd-hpc-login1:~$ module show fftw
--------------------------------------------------------------------------------------------------
  /opt_mpsd/linux-debian12/25a/sandybridge/lmod/openmpi/4.1.4-7imdm7p/gcc/11.3.0/fftw/3.3.10.lua:
--------------------------------------------------------------------------------------------------
whatis("Name : fftw")
whatis("Version : 3.3.10")
whatis("Target : sandybridge")
whatis("Short description : FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST). We believe that FFTW, which is free software, should become the FFT library of choice for most applications.")
...
prepend_path("LIBRARY_PATH","/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/fftw-3.3.10-qra6ez6es3unvk2i56hmkpfnmd2oxy3b/lib")
prepend_path("CPATH","/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/fftw-3.3.10-qra6ez6es3unvk2i56hmkpfnmd2oxy3b/include")
prepend_path("PATH","/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/fftw-3.3.10-qra6ez6es3unvk2i56hmkpfnmd2oxy3b/bin")
...
prepend_path("CMAKE_PREFIX_PATH","/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/fftw-3.3.10-qra6ez6es3unvk2i56hmkpfnmd2oxy3b/.")
...

2.4.4. Octopus#

As a second example for loading pre-compiled packages let us search for octopus:

user@mpsd-hpc-login1:~$ mpsd-modules 25a
user@mpsd-hpc-login1:~$ module spider octopus
------------------------------------------------------------------------------------------------
  octopus:
------------------------------------------------------------------------------------------------
    Versions:
        octopus/14.1
        octopus/15.1
...

Multiple versions of octopus are available. We can specify a particular version in order to get more information on how to load the module:

user@mpsd-hpc-login1:~$ module spider octopus/15.1
------------------------------------------------------------------------------------------------
  octopus: octopus/15.1
------------------------------------------------------------------------------------------------

    You will need to load all module(s) on any one of the lines below before the "octopus/15.1" module is available to load.

      gcc/13.2.0  openmpi/4.1.6

...

We can see that we have to first load gcc/13.2.0 and openmpi/4.1.6 in order to be able to load and use octopus/15.1.

Note

Sometimes module spider offers two options: loading only a compiler, or loading a compiler plus an MPI implementation. In that case we generally want to also load the MPI implementation, as only that variant of the program supports MPI. Loading the MPI-enabled variant of the desired program is crucial when running a Slurm job on multiple nodes.

We load gcc/13.2.0, openmpi/4.1.6 and finally octopus/15.1. All of this can be done in one line as long as the packages are given in the correct order (as shown by module spider):

user@mpsd-hpc-login1:~$ module load gcc/13.2.0 openmpi/4.1.6 octopus/15.1

As a first simple check we display the version number of octopus:

user@mpsd-hpc-login1:~$ octopus --version
octopus 15.1 (git commit )

2.4.5. Octopus with CUDA support#

Octopus with CUDA support is not currently available on the local HPC. You can either compile Octopus yourself or use the MPCDF HPC resources.

2.4.6. Python#

To use Python we can load the miniforge3 module:

user@mpsd-hpc-login1:~$ module load miniforge3

We provide an environment with commonly required Python packages such as numpy, scipy, matplotlib, pandas, etc. You can use it by running:

user@mpsd-hpc-login1:~$ source activate python-3.12

We can execute a small demo program called hello-numpy.py. The file has the following content.

import numpy as np

print("Hello World")
print(f"numpy version: {np.__version__}")
x = np.arange(5)
y = x**2
print(y)

(python-3.12) user@mpsd-hpc-login1:~$ python3 hello-numpy.py
Hello World
numpy version: 2.1.2
[ 0  1  4  9 16]

If you require a package that is not available and you think that package would also be useful for others please get in touch and we can add it to the python-3.12 environment. Alternatively, you can create your own virtual environment or conda environment as explained below.

2.4.6.1. Custom virtual environment#

You can create your own Python virtual environments based either on Python provided in the miniforge3 module or one of the python modules.

user@mpsd-hpc-login1:~$ module load miniforge3
user@mpsd-hpc-login1:~$ source activate python-3.12
(python-3.12) user@mpsd-hpc-login1:~$ python3 -m venv venv
(python-3.12) user@mpsd-hpc-login1:~$ source venv/bin/activate
(venv) (python-3.12) user@mpsd-hpc-login1:~$ which python
/home/user/venv/bin/python
(venv) (python-3.12) user@mpsd-hpc-login1:~$ python --version
Python 3.12.7

You can now use pip to install the required Python packages, e.g.:

(venv) (python-3.12) user@mpsd-hpc-login1:~$ pip install numpy==2.1.2
...

2.4.6.2. Custom conda environment#

You can also create a separate conda environment if you need a specific Python version or other (non-Python) dependencies.

To use conda directly we can run source activate without specifying an environment.

user@mpsd-hpc-login1:~$ module load miniforge3
user@mpsd-hpc-login1:~$ source activate

In your home directory create a .condarc file with at least the following content (you are free to choose an arbitrary directory for your custom conda environments):

envs_dirs:
  - ~/conda_envs

As an example we now create a new environment, called test_env, with an older version of Python and a specific numpy version from the conda-forge channel.

(base) user@mpsd-hpc-login1:~$ conda create -n test_env python=3.9 numpy=1.23

Note

Due to licence restrictions from Anaconda you are not allowed to use the channels default, main, r, msys2 and anaconda from repo.anaconda.com or repo.anaconda.org.

These channels are therefore blocked and conda-forge is set as default channel. Explicitly adding -c conda-forge is not required.

Likewise, you are no longer allowed to use the anaconda/miniconda installer. Therefore, please always use the miniforge3 module when working with conda environments (instead of installing any ...conda distribution yourself).

We can now activate the environment and check the versions of Python and numpy.

(base) user@mpsd-hpc-login1:~$ conda activate test_env
(test_env) user@mpsd-hpc-login1:~$ python --version
Python 3.9.16
(test_env) user@mpsd-hpc-login1:~$ python -c "import numpy; print(numpy.__version__)"
1.23.5

We can deactivate and remove the environment using:

(test_env) user@mpsd-hpc-login1:~$ conda deactivate
(base) user@mpsd-hpc-login1:~$ conda env remove -n test_env

2.4.6.3. Combining Python environments and other software with internal Python dependency#

If you load the module of a software that depends on Python packages (not just on Python itself; such a dependency can also come in indirectly via a dependency of your software), the environment variable PYTHONPATH will be populated. This variable is required for the Python module to find the corresponding Python packages, which are installed as separate modules, so you should not unset it (unless you are certain that it is not required for your use case). The variable will, however, "conflict" with virtual environments or conda environments (including the one provided with the miniforge3 module). To avoid this conflict you can run Python in isolated mode.

To demonstrate this, we first load Octopus, which comes with internal Python dependencies (numpy among other packages).

user@mpsd-hpc-login1:~$ module load gcc/11.3.0 openmpi octopus
user@mpsd-hpc-login1:~$ which python
/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/python-3.9.5-5t26adderjvy3fyndxdauqiorzolrg5i/bin/python
user@mpsd-hpc-login1:~$ python --version
Python 3.9.5
user@mpsd-hpc-login1:~$ python -c "import numpy; print(numpy.__version__)"
1.26.1

Now we additionally load the virtual environment created in the previous section:

user@mpsd-hpc-login1:~$ source venv/bin/activate
(venv) user@mpsd-hpc-login1:~$ which python
/home/user/venv/bin/python
(venv) user@mpsd-hpc-login1:~$ python --version
Python 3.12.7

We can see that it uses a different Python version (the one that was used to create it, coming from miniforge3). If we try to print the numpy version again, our code crashes with an import error (we only show parts of the error and omit sections, as indicated with […]).

(venv) user@mpsd-hpc-login1:~$ python -c "import numpy; print(numpy.__version__)"
Traceback (most recent call last):
  File "/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-11.3.0/py-numpy-1.26.1-zq4s43rrbwjk5c5j56irnrlslbwfrztz/lib/python3.9/site-packages/numpy/core/__init__.py", line 24, in <module>
    from . import multiarray
[...]

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

[...]

The reason for this failure is that Python tries to import the numpy package found on PYTHONPATH instead of using the one from our virtual environment. That version of numpy is however compiled for a different Python version and the import fails.

We can tell Python to ignore all environment variables using the -E flag:

(venv) user@mpsd-hpc-login1:~$ python -E -c "import numpy; print(numpy.__version__)"
2.1.2

Now the command runs fine and we can see the numpy version that we requested when creating the virtual environment.

Tip

Use python -I instead of python -E to also disable imports from the current directory. Otherwise subdirectories can have surprising side effects as shown in the following.

We create a “fake numpy package” with an empty __init__.py, which Python will import. The subsequent version check fails because our package does not have __version__.

(venv) user@mpsd-hpc-login1:~$ mkdir numpy
(venv) user@mpsd-hpc-login1:~$ touch numpy/__init__.py
(venv) user@mpsd-hpc-login1:~$ python -Ec "import numpy; print(numpy.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: module 'numpy' has no attribute '__version__'

We avoid this problem when using python -I.

(venv) user@mpsd-hpc-login1:~$ python -Ic "import numpy; print(numpy.__version__)"
2.1.2

Running Python in isolated mode will not work if you have your own scripts that you import from the current directory. In that case you could consider converting your script into a small Python package that you install in your virtual environment. Having a package can provide several advantages:

  • It makes reusing the code in different directories much more convenient.

  • You can (and should) put your package under version control, see Version control. This benefits your software development and helps with reproducibility (you can record the commit hash of the package version used for your simulation or data analysis).

A good starting point for Python packaging is the official Python packaging guide. You do not need to upload your package to PyPI. Instead, you would typically want to use an editable installation, which allows you to continuously update your package while you are working on it.
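
As a minimal sketch, an editable installation into the virtual environment from the previous sections could look like this (~/my_analysis_package is a hypothetical directory containing your pyproject.toml):

(venv) user@mpsd-hpc-login1:~$ python -Im pip install -e ~/my_analysis_package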

2.4.6.4. mpi4py example#

mpi4py is a Python package that allows you to use MPI from Python. To be able to install it and use it, we need to load the openmpi module along with a python module. If you need a specific Python version that is not available, you can also create a conda environment with that Python version instead of loading a Python module.

user@mpsd-hpc-login1:~$ mpsd-modules 25a
user@mpsd-hpc-login1:~$ module load gcc/13.2.0 openmpi/4.1.6 python/3.11.7
(base) user@mpsd-hpc-login1:~$ echo $MPICC
/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-13.2.0/openmpi-4.1.6-vkfaiogxw2xf2nx2drbbivzzhfs7cblm/bin/mpicc

We echo the value of the MPICC environment variable to check that it is set; it should point to the same path as the result of which mpicc. This variable is required when installing mpi4py so that it compiles and links against the loaded MPI library.
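
As a quick sanity check (the printed path should match the value of $MPICC above):

(base) user@mpsd-hpc-login1:~$ which mpicc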

We now create a new virtual environment and install mpi4py. This will build mpi4py locally using the loaded openmpi. (As discussed in the previous section, we use python -I to ignore Python environment variables.)

user@mpsd-hpc-login1:~$ python -m venv mpi4py_venv
user@mpsd-hpc-login1:~$ source mpi4py_venv/bin/activate
(mpi4py_venv) user@mpsd-hpc-login1:~$ python -Im pip install mpi4py

Note: always use pip to install mpi4py; the pre-compiled versions on conda-forge are generally not compatible with our openmpi modules.

To quickly test the installation, we can run the hello world example provided as part of mpi4py:

(mpi4py_venv) user@mpsd-hpc-login1:~$ srun -n 5 python -Im mpi4py.bench helloworld
Hello, World! I am process 0 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 1 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 2 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 3 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 4 of 5 on mpsd-hpc-ibm-023.

Here is how we could replicate the same default hello world example in a Python script:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print(f"Hello, World! I am process {rank} of {size} on {MPI.Get_processor_name()}.")

The script can be run as previously mentioned using the srun command:

(mpi4py_venv) user@mpsd-hpc-login1:~$ srun -n 5 python hello_mpi4py.py
Hello, World! I am process 0 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 2 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 1 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 4 of 5 on mpsd-hpc-ibm-023.
Hello, World! I am process 3 of 5 on mpsd-hpc-ibm-023.

Recommendations

  • Keep track of the openmpi module used to create the environment (a simple bookkeeping sketch follows this list). If you need to use a different version of openmpi you need to create a new environment.

  • Always use the srun command to run MPI programs. This ensures that the MPI processes are started and managed by the Slurm scheduler.
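
A simple bookkeeping sketch for the first point (the file name is arbitrary; the 2>&1 is needed because module list typically writes to stderr):

(mpi4py_venv) user@mpsd-hpc-login1:~$ module list 2>&1 | tee mpi4py_venv/modules-used.txt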

2.4.7. Jupyter notebooks#

You can use a Jupyter notebook on a dedicated HPC node as follows:

  1. Ensure you are at MPSD or have the DESY VPN set up.

  2. Log in to a login node (for example mosh mpsd-hpc-login1.desy.de; mosh is recommended over ssh to avoid losing the session in case of short connection interruptions, e.g. on WiFi)

  3. Request a node for interactive use. For example, 1 node with 8 CPUs for 6 hours from the public partition:

    user@mpsd-hpc-login1:~$ salloc --nodes=1 --cpus-per-task=8 --time=6:00:00 -p public
    salloc: Granted job allocation 227596
    salloc: Waiting for resource configuration
    salloc: Nodes mpsd-hpc-ibm-021 are ready for job
    
  4. You can install Jupyter yourself, or you can activate an installed version with the following commands:

    user@mpsd-hpc-ibm-021:~$ mpsd-modules 25a
    user@mpsd-hpc-ibm-021:~$ module load miniforge3
    user@mpsd-hpc-ibm-021:~$ source activate python-3.12
    
  5. Limit numpy (and other libraries) to the available cores

    (python-3.12) user@mpsd-hpc-ibm-021:~$ export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
  6. Start the Jupyter lab server on that node with

    (python-3.12) user@mpsd-hpc-ibm-021:~$ jupyter lab --no-browser --ip=${HOSTNAME}.desy.de
    

    Watch the output displayed in your terminal. There is a line similar to this one:

    http://mpsd-hpc-ibm-055.desy.de:8888/?token=8814fea339b8fe7d3a52e7d03c2ce942a3f35c8c263ff0b8

    which you can paste as a URL into your browser (on your laptop/desktop), and you should be connected to the Notebook server on the compute node.

2.4.8. Matlab#

To use Matlab, we load the matlab module.

user@mpsd-hpc-login1:~$ module load matlab

We can execute a small demo program called hello_matlab.m. The file has the following content.

% hello_matlab.m
disp('Hello, MATLAB!');
a = [1 2 3; 4 5 6; 7 8 9];
b = ones(3, 3);
result = a + b;
disp('Matrix addition result:');
disp(result);

user@mpsd-hpc-login1:~$ matlab -nodisplay -r "run('hello_matlab');exit;"
Hello, MATLAB!
Matrix addition result:
   2     3     4
   5     6     7
   8     9    10

An interactive interface can be started by running matlab in the terminal.

user@mpsd-hpc-login1:~$ matlab

2.4.9. Loading a toolchain to compile Octopus#

To compile Octopus we need to load a compiler, optionally an MPI implementation, and all required (and, if desired, optional) dependencies. To simplify this, a metamodule octopus-dependencies is provided in the variants min and full (additionally min-cuda and full-cuda on GPU machines). The min metamodule contains all required Octopus dependencies, the full metamodule contains required and optional dependencies.

The metamodule depends on compiler and MPI. If we only load a compiler and the metamodule all loaded packages come without MPI support. If we load a compiler, an MPI implementation (e.g. openmpi), and the metamodule, we get packages with MPI support (where applicable) and additional dependencies that are only required when compiling Octopus with MPI. Depending on the version of the compiler different versions of the other packages are used. Use module list after loading the modules to get an overview of all modules and their versions.
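
As a sketch (using the module versions that appear later in this section), the two cases look like this:

user@mpsd-hpc-login1:~$ module load gcc/13.2.0 octopus-dependencies/full                  # dependencies without MPI support
user@mpsd-hpc-login1:~$ module list                                                       # inspect what was loaded
user@mpsd-hpc-login1:~$ module purge
user@mpsd-hpc-login1:~$ module load gcc/13.2.0 openmpi/4.1.6 octopus-dependencies/full    # MPI-enabled dependencies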

Here, we show two examples of how to compile Octopus: a serial and an MPI version. Following this guide is only recommended if you need to compile Octopus from source; we also provide pre-compiled modules for Octopus, as outlined in Loading specific packages above.

As mentioned before, different variants of (most) modules are available that support different CPU feature sets. So far we have mainly discussed the generic set that can be used on all nodes. In order to make use of all available features on a specific node we can instead load a more optimised set of modules. The CPU architecture is available in the environment variable $MPSD_MICROARCH.

First, we remove the generic module set and activate the optimised set for the current node (native automatically selects a suitable optimised module set).

user@mpsd-hpc-login1:~$ module purge
user@mpsd-hpc-login1:~$ mpsd-modules 25a native

2.4.9.1. Parallel version of octopus#

We load gcc/13.2.0, openmpi/4.1.6, and octopus-dependencies/min to compile octopus with MPI support and only the required dependencies:

user@mpsd-hpc-login1:~$ module load gcc/13.2.0 openmpi/4.1.6 octopus-dependencies/min

Next, we clone Octopus:

user@mpsd-hpc-login1:~$ git clone https://gitlab.com/octopus-code/octopus.git
user@mpsd-hpc-login1:~$ cd octopus

(If you intend to make changes in the octopus code, and push them back as a merge request later, you may want to use git clone git@gitlab.com:octopus-code/octopus.git to clone using ssh instead of https.)

We can now compile Octopus with cmake using a suitable preset. As mentioned before, we first unset a few environment variables:

user@mpsd-hpc-login1:~/octopus$ unset CPATH
user@mpsd-hpc-login1:~/octopus$ unset LIBRARY_PATH
user@mpsd-hpc-login1:~/octopus$ unset LDFLAGS
user@mpsd-hpc-login1:~/octopus$ cmake --preset ci-foss-min-mpi --install-prefix=$HOME/octopus_bin_mpi
user@mpsd-hpc-login1:~/octopus$ cmake --build ./cmake-build-ci-foss-min-mpi
user@mpsd-hpc-login1:~/octopus$ ctest --test-dir ./cmake-build-ci-foss-min-mpi -L short-run  # optional if you want to run the tests
user@mpsd-hpc-login1:~/octopus$ cmake --install ./cmake-build-ci-foss-min-mpi

We can get a list of all available presets with cmake --list-presets. The ci-... presets are well suited for compiling on the local HPC system, as their dependency lists are compatible with the sets of modules loaded by the different metamodule variants.

For more details and configuration options refer to the Octopus documentation, e.g. octopus-code/octopus/-/blob/main/cmake/README.md?ref_type=heads.

2.4.9.2. Serial version of octopus#

Compiling the serial version consists in principle of the same steps as for the parallel version, except that we must not load an MPI implementation. In this example we use a different GCC version and compile with optional dependencies.

user@mpsd-hpc-login1:~/octopus$ module purge
user@mpsd-hpc-login1:~/octopus$ module load gcc/12.3.0 octopus-dependencies/full
user@mpsd-hpc-login1:~/octopus$ unset CPATH
user@mpsd-hpc-login1:~/octopus$ unset LIBRARY_PATH
user@mpsd-hpc-login1:~/octopus$ unset LDFLAGS
user@mpsd-hpc-login1:~/octopus$ cmake --preset ci-foss-full --install-prefix=$HOME/octopus_bin_serial
user@mpsd-hpc-login1:~/octopus$ cmake --build ./cmake-build-ci-foss-full
user@mpsd-hpc-login1:~/octopus$ ctest --test-dir ./cmake-build-ci-foss-full -L short-run  # optional if you want to run the tests
user@mpsd-hpc-login1:~/octopus$ cmake --install ./cmake-build-ci-foss-full

2.4.10. Compiling custom code#

To compile other custom code we manually load all required modules. The same general notes about generic and optimised module sets explained in the previous section apply.

Here, we show two different examples. The sources are available under /opt_mpsd/linux-debian12/25a/examples/slurm-examples.

2.4.10.1. Serial “Hello world” in Fortran#

First, we want to compile the following “hello world” Fortran program using gcc. We assume it is saved in a file hello.f90. The source is available in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/serial-fortran.

program hello
  write(*,*) "Hello world!"
end program

We have to load gcc:

module load gcc/12.3.0

Then, we can compile and execute the program:

user@mpsd-hpc-login1:~$ gfortran -o hello hello.f90
user@mpsd-hpc-login1:~$ ./hello
 Hello world!

2.4.10.2. MPI-parallelised “Hello world” in C#

As a second example we compile an MPI-parallelised “Hello world” C program, again using gcc. We assume the source is saved in a file hello-mpi.c (source available under /opt_mpsd/linux-debian12/25a/examples/slurm-examples/mpi-c).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from rank %d out of %d on %s.\n",
           world_rank, world_size, processor_name);

    MPI_Finalize();
}

We have to load gcc and openmpi:

user@mpsd-hpc-login1:~$ module load gcc/12.3.0 openmpi/4.1.5

Now, we can compile and execute the test program:

user@mpsd-hpc-login1:~$ mpicc -o hello-mpi hello-mpi.c
user@mpsd-hpc-login1:~$ orterun -n 4 ./hello-mpi
Hello world from rank 2 out of 4 on mpsd-hpc-login1.
Hello world from rank 3 out of 4 on mpsd-hpc-login1.
Hello world from rank 1 out of 4 on mpsd-hpc-login1.
Hello world from rank 0 out of 4 on mpsd-hpc-login1.

Note

Inside a slurm job srun has to be used instead of orterun.

2.4.11. Setting the rpath (finding libraries at runtime)#

This section is relevant if you compile your own software and need to link to libraries provided on the MPSD HPC system.

2.4.11.1. Background#

At compile time (i.e. when compiling and building an executable), we need to tell the linker where to find external libraries. This happens via the -L flags and the environment variable LIBRARY_PATH which the compiler (for example gcc) passes on to the linker.

At runtime, the dynamic linker ld.so needs to find libraries with the same SONAME for our executable by searching through one or more given directories. These directories can be taken from (in decreasing order of priority):

  1. LD_LIBRARY_PATH, if set,

  2. one or more rpath entries set in the executable,

  3. if not found yet, the default search path defined in /etc/ld.so.conf.

2.4.11.2. Use rpath; do not set LD_LIBRARY_PATH#

When we compile software on HPC systems, we generally want to use the rpath option. That means:

  • (a) we must not set the LD_LIBRARY_PATH environment variable, and

  • (b) we must set the rpath in the executable. To embed /PATH/TO/LIBRARY as an rpath entry in the header of the executable, we append -Wl,-rpath=/PATH/TO/LIBRARY to the compiler call.

2.4.11.3. Example: Linking to FFTW#

Given this C program with name fftw_test.c:

#include <stdio.h>
#include <fftw3.h>

#define N 32

int main(int argc, char **argv) {
    fftw_complex *in, *out;
    fftw_plan p;
    in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
    out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
    p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p); /* repeat as needed */
    fftw_destroy_plan(p);
    fftw_free(in); fftw_free(out);
    printf("Done.\n");
    return 0;
}

we can compile it as follows:

$ mpsd-modules 25a
$ module load gcc/12.3.0 fftw
$ gcc -lfftw3 -L$MPSD_FFTW_ROOT/lib -Wl,-rpath=$MPSD_FFTW_ROOT/lib fftw_test.c -o fftw_test

In the compile (and link) line, we have to specify the path to the relevant file libfftw3.so. For every package, the MPSD HPC system provides the path to the package root in an environment variable of the form MPSD_<PACKAGE_NAME>_ROOT:

$ echo $MPSD_FFTW_ROOT
/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-12.3.0/fftw-3.3.10-av3adybtusz4beo3ygg4fhubiezgymgc

If we replace the variables in the compiler call, it would look as follows.

$ gcc -lfftw3 \
    -L/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-12.3.0/fftw-3.3.10-av3adybtusz4beo3ygg4fhubiezgymgc/lib \
    -Wl,-rpath=/opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-12.3.0/fftw-3.3.10-av3adybtusz4beo3ygg4fhubiezgymgc/lib \
    fftw_test.c -o fftw_test

Users are strongly advised to use these environment variables. They help ensure you are not pointing to incorrect or stale versions of the libraries used by outdated modules.

When loading modules using the module command, the MPSD HPC system also populates the variable LIBRARY_PATH, which the compiler will use as an argument for -L if the variable exists, and the variable LDFLAGS, which will be used by the linker for -rpath. We can thus omit the -L and the -Wl,-rpath in the call:

$ gcc -lfftw3 fftw_test.c -o fftw_test

We can use the ldd command to check which libraries the dynamic linker identifies:

$ ldd fftw_test
  linux-vdso.so.1 (0x00007ffed3cfe000)
  libfftw3.so.3 => /opt_mpsd/linux-debian12/25a/sandybridge/spack/opt/spack/linux-debian12-sandybridge/gcc-12.3.0/fftw-3.3.10-av3adybtusz4beo3ygg4fhubiezgymgc/lib/libfftw3.so.3 (0x00007fa4f1e4f000)
  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa4f1c67000)
  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fa4f1b23000)
  /lib64/ld-linux-x86-64.so.2 (0x00007fa4f2041000)

2.4.11.4. Remember to check whether your build system can help you#

Doing these sorts of calls by hand can be tedious and awkward, which in turn makes them error-prone. If you are using a modern build system, e.g. CMake, there is a good chance that it can manage the rpath for you. Consult the documentation of your build tool to check whether it supports setting the rpath and how to activate it.
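
For example, with CMake a hedged sketch could look like this (whether the resulting rpath is complete still depends on how the project defines its targets and install rules):

$ unset CPATH LIBRARY_PATH LDFLAGS                              # avoid conflicts, see the note below
$ cmake -S . -B build -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON    # embed linker search directories as rpath at install time
$ cmake --build build
$ cmake --install build --prefix $HOME/my-software              # hypothetical install location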

Note

You may have to unset some of the environment variables that are exported when loading modules to avoid conflicts.

E.g. when using CMake run unset LIBRARY_PATH, unset LDFLAGS, and unset CPATH after loading all modules to avoid unexpected side-effects (e.g. missing rpath in the resulting binary).

2.5. Example batch scripts#

Here, we show a number of example batch scripts for different types of jobs. All examples are available on the HPC system under /opt_mpsd/linux-debian12/25a/examples/slurm-examples together with the example programs. One can also get the latest copy of the scripts from the git repository here. We use the public partition and the generic module set for all examples.

To test an example on the HPC system we can copy the relevant directory into our scratch directory. If required we can compile the code using make and then submit the job using sbatch submission-script.sh.
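
A minimal sketch for the MPI example from the next subsection (the submission script name matches the one mentioned above):

user@mpsd-hpc-login1:~$ cp -r /opt_mpsd/linux-debian12/25a/examples/slurm-examples/mpi-c /scratch/$USER/
user@mpsd-hpc-login1:~$ cd /scratch/$USER/mpi-c
user@mpsd-hpc-login1:/scratch/user/mpi-c$ make                         # compile the example (if required)
user@mpsd-hpc-login1:/scratch/user/mpi-c$ sbatch submission-script.sh
user@mpsd-hpc-login1:/scratch/user/mpi-c$ squeue --me                  # monitor the job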

2.5.1. MPI#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/mpi-c.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public
#
# job name
#SBATCH -J MPI-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:10:00

. setup-env.sh

srun ./hello-mpi

2.5.2. MPI + OpenMP#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/mpi-openmp-c.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public
#
# job name
#SBATCH -J MPI-OpenMP-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00

. setup-env.sh

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# TODO from the MPCDF example, does the same apply here?
# For pinning threads correctly:
export OMP_PLACES=cores

srun ./hello-mpi-openmp

2.5.3. OpenMP#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/openmp-c.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public2
#
# job name
#SBATCH -J OpenMP-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --time=00:10:00

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# TODO from the MPCDF example, does the same apply here?
# For pinning threads correctly:
export OMP_PLACES=cores

srun ./hello-openmp

2.5.4. Python with numpy or multiprocessing#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/python-numpy.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public2
#
# job name
#SBATCH -J python-numpy-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --time=00:10:00

module purge

source venv/bin/activate

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun python3 ./hello-numpy.py

2.5.5. Single-core job#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/serial-fortran.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public
#
# job name
#SBATCH -J serial-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:10:00

srun ./hello

2.5.6. Serial Python#

The source code and submission script are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/python-serial.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p public2
#
# job name
#SBATCH -J python-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:10:00

module purge
mpsd-modules 25a

module load miniforge3/24.11.2-1
source activate python-3.12

export OMP_NUM_THREADS=1  # restrict numpy (and other libraries) to one core

srun python3 ./hello.py

2.5.7. GPU jobs#

For GPU jobs, we recommend specifying the desired hardware resources as follows. In parentheses we give the typical application (MPI, OpenMP) for guidance.

  • nodes - how many computers to use, for example --nodes=1

  • tasks-per-node - how many (MPI) processes to run per node: --tasks-per-node=4

  • gpus-per-task - how many GPUs per (MPI) process to use (often 1): --gpus-per-task=1

  • cpus-per-task - how many CPUs (OpenMP threads) to use: --cpus-per-task=4

Example:

user@mpsd-hpc-login1:~$ salloc --nodes=1 --tasks-per-node=4 --gpus-per-task=1 --cpus-per-task=4 --mem=128G -p gpu
user@mpsd-hpc-gpu-002:~$ mpsd-show-job-resources
   9352 Nodes: mpsd-hpc-gpu-002
   9352 Local Node: mpsd-hpc-gpu-002

   9352 CPUSET: 0-7,16-23
   9352 MEMORY: 131072 M

   9352 GPUs (Interconnects, CPU Affinity, NUMA Affinity):
   9352 GPU0     X  NV1 NV1 NV2 SYS 0-7,16-23   0-1
   9352 GPU1    NV1  X  NV2 NV1 SYS 0-7,16-23   0-1
   9352 GPU2    NV1 NV2  X  NV2 SYS 0-7,16-23   0-1
   9352 GPU3    NV2 NV1 NV2  X  SYS 0-7,16-23   0-1
user@mpsd-hpc-gpu-002:~$

We can see from the output that we have one node (mpsd-hpc-gpu-002), 16 CPUs (with ids 0 to 7 and 16 to 23), 128GB (=131072MiB), and 4 GPUs allocated (GPU0 to GPU3).

We can confirm the number of (MPI) tasks to be 4:

user@mpsd-hpc-gpu-002:~$ srun echo `hostname`
mpsd-hpc-gpu-002
mpsd-hpc-gpu-002
mpsd-hpc-gpu-002
mpsd-hpc-gpu-002

The source code and submission script for one CUDA example are in /opt_mpsd/linux-debian12/25a/examples/slurm-examples/cuda.

#!/bin/bash --login
#
# Standard output and error
#SBATCH -o ./out.%j
#SBATCH -e ./err.%j
#
# working directory
#SBATCH -D ./
#
# partition
#SBATCH -p gpu
#
# job name
#SBATCH -J CUDA-example
#
#SBATCH --mail-type=ALL
#
# job requirements
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --cpus-per-task=2
#SBATCH --time=00:02:00

. setup-env.sh

srun ./hello-cuda

2.5.8. Multiple tasks per GPU#

If multiple tasks (i.e. multiple MPI ranks) should be used per GPU, we recommend requesting resources from the perspective of a GPU:

  • nodes - how many computers to use, for example --nodes=1

  • gpus-per-node - how many GPUs in each node you want to use

  • cpus-per-gpu - how many CPUs per GPU you want to use

  • cpus-per-task - how many CPUs you want to use in each task

So in this case:

  • you set the total number of tasks only implicitly

  • but each task is running on CPUs with the fastest access to the allocated GPU

When we tried other setups for this scenario, Slurm complained and even put array jobs on hold.

Example:

user@mpsd-hpc-login1:~$ salloc --nodes=1 --gpus-per-node=1 --cpus-per-gpu=8 --cpus-per-task=4 --mem=128G -p gpu
user@mpsd-hpc-gpu-003:~$ mpsd-show-job-resources
 122314 Nodes: mpsd-hpc-gpu-003
 122314 Local Node: mpsd-hpc-gpu-003

 122314 CPUSET: 0,2,4,6,40,42,44,46
 122314 MEMORY: 65536 M

 122314 GPUs (Interconnects, CPU Affinity, NUMA Affinity):
 122314 GPU0     X  SYS 0,2,4,6,40  0-1
user@mpsd-hpc-gpu-003:~$

We expect 2 tasks because we requested 4 CPUs per task and have 8 CPUs in total. We can confirm the number of (MPI) tasks:

user@mpsd-hpc-gpu-003:~$ srun echo `hostname`
mpsd-hpc-gpu-003
mpsd-hpc-gpu-003