General Information

Hardware
The ÆGIR (AEGIR) and BYLGJA clusters provide 1296 CPU cores in total: 17 nodes with 16 cores each, 12 nodes with 32 cores each, 8 nodes with 48 cores each, and 4 nodes with 64 cores each, together with 4.93 TB of RAM and high-speed InfiniBand or RoCE internal networks.
Details:
- 16-core nodes (17 nodes): 2x Intel Xeon E5-2667v3 3.2GHz (8 cores per CPU), 64GB DDR4 RAM per node, Mellanox QDR InfiniBand interconnect
- 32-core nodes (12 nodes): 2x Intel Xeon E5-2683v4 2.1GHz (16 cores per CPU), 128GB DDR4 RAM per node, Mellanox QDR InfiniBand interconnect
- 48-core nodes (8 nodes): 2x Intel Xeon Gold 6248R 3.0GHz (24 cores per CPU), 192GB DDR4 RAM per node, RoCE v2 interconnect
- 64-core nodes (4 nodes): 1x AMD EPYC 9554P 3.75GHz (64 cores per CPU), 192GB DDR5 RAM per node, RoCE v2 interconnect
- SSD per node: 120 GB
- Mass storage: 134 TB
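The node types listed above are managed by the SLURM batch system (see the SLURM Workload Manager section), and you can inspect how they are exposed from a login node. The following is a minimal sketch: the output columns are standard SLURM, but partition and node names are site-specific and "node001" is only a placeholder.
# Show partitions with node count, CPUs per node, memory and features
sinfo -o "%P %D %c %m %f"
# Show everything SLURM records about one node ("node001" is a placeholder name)
scontrol show node node001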
Account Request
To get access to the DC3 systems you need to be either an HPC grant holder or a member of a group holding a current HPC grant.
To get an account, please go to the following web page: https://hpc.ku.dk
Connecting to DC3
In order to log in to the DC3 computational system, you must use the SSH protocol. This is provided by the "ssh" command on Unix-like systems (including Mac OS X) or by an SSH-compatible application (e.g. PuTTY on Microsoft Windows). We recommend that you "forward" X11 connections when initiating an SSH session to DC3, so that graphical applications running on DC3 can display on your local machine. For example, when using the ssh command on Unix-like systems, provide the "-Y" option:
ssh -Y jojo@fend01.hpc.ku.dk
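If you connect often, an entry in your SSH client configuration saves retyping the options. This is a minimal sketch; the alias "dc3" and the username "jojo" (taken from the example above) are placeholders to replace with your own values:
# ~/.ssh/config -- optional convenience entry (alias and username are placeholders)
Host dc3
    HostName fend01.hpc.ku.dk
    User jojo                 # your DC3 username
    ForwardX11 yes            # forward X11 connections
    ForwardX11Trusted yes     # together these correspond to the -Y option
With this entry in place, "ssh dc3" is equivalent to the full command above.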
To download/upload data from/to DC3, use the following command:
scp -pr user@host1:from_path_file1 user@host2:to_path_file2
For more information, use the man/info commands (man scp).
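For example, to upload a local directory to your DC3 home directory and download a result file back (the username "jojo", the directory my_project and the file results.dat are hypothetical placeholders):
# Recursively upload a local directory, preserving timestamps and modes
scp -pr ./my_project jojo@fend01.hpc.ku.dk:~/my_project
# Download a single file from DC3 into the current local directory
scp -p jojo@fend01.hpc.ku.dk:~/my_project/results.dat .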
There are 5 frontend/login nodes available at the moment: fend01.hpc.ku.dk - fend05.hpc.ku.dk
N.B. The login nodes are intended only for lightweight tasks such as source code editing, compiling, and managing files and directories. All computationally intensive tasks must be submitted and executed on compute nodes. You can find more details in the SLURM Workload Manager section.
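As a minimal sketch of how such a submission looks (the resource values and the application name are illustrative assumptions, not DC3 defaults; see the SLURM Workload Manager section for the actual options):
#!/bin/bash
#SBATCH --job-name=my_test          # job name shown in the queue
#SBATCH --nodes=1                   # number of compute nodes
#SBATCH --ntasks-per-node=16        # tasks/cores requested per node
#SBATCH --time=01:00:00             # wall-clock limit (hh:mm:ss)
#SBATCH --output=my_test.%j.out     # output file (%j expands to the job ID)

# The commands below run on a compute node, not on the login node
srun ./my_application
Save this as, for example, job.sh, submit it from a login node with "sbatch job.sh", and monitor it with "squeue -u $USER".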
Software
DC3 provides a rich set of HPC utilities, applications, compilers and programming libraries. If something you need is missing, send an email to nuterman@nbi.ku.dk with your request and we will evaluate it for appropriateness, cost, effort, and benefit to the community. More information about the available software and how to use it is included in the Available Software section.
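Many HPC systems make their software stack available through environment modules (Lmod or similar); whether DC3 does is not stated here, so the commands below are only a generic sketch and the module name "gcc" is a placeholder. Consult the Available Software section for the actual procedure.
# List software provided via environment modules, if DC3 uses them
module avail
# Load a module ("gcc" is a placeholder name)
module load gcc
# Show the modules currently loaded in this shell
module list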