Danish Center for Climate Computing (DC3)


Staff

Name             Title                 Phone
Jochum, Markus   Professor             +4535326921
Nuterman, Roman  Research Coordinator  +4535337743

Hardware

The ÆGIR (AEGIR), BYLGJA, HRONN and SKADI clusters are equipped with 1296 CPU cores in total: 17 nodes with 16 cores per node, 12 nodes with 32 cores per node, 8 nodes with 48 cores per node, and 4 nodes with 64 cores per node. Together the nodes provide 4.93 TB of RAM and are connected by high-speed InfiniBand or RoCE internal networks.

Details:

  • 17 nodes with 2 Intel Xeon E5-2667v3 CPUs per node (3.2 GHz, 8 cores per CPU)
    • RAM per node: 64 GB DDR4
    • Interconnect: Mellanox QDR InfiniBand
  • 12 nodes with 2 Intel Xeon E5-2683v4 CPUs per node (2.1 GHz, 16 cores per CPU)
    • RAM per node: 128 GB DDR4
    • Interconnect: Mellanox QDR InfiniBand
  • 8 nodes with 2 Intel Xeon Gold 6248R CPUs per node (3.0 GHz, 24 cores per CPU)
    • RAM per node: 192 GB DDR4
    • Interconnect: RoCE v2
  • 4 nodes with 1 AMD EPYC 9554P CPU per node (3.75 GHz, 64 cores per CPU)
    • RAM per node: 192 GB DDR5
    • Interconnect: RoCE v2
  • SSD per node: 120 GB
  • Mass storage: 134 TB

Account Request

To get access to the DC3 systems, you need to be either an HPC grant holder or a member of a group holding a current HPC grant.
To get an account, please go to the following web page: https://hpc.ku.dk


Connecting to DC3

In order to log in to the DC3 computational systems, you must use the SSH protocol. This is provided by the "ssh" command on Unix-like systems (including Mac OS X) or by an SSH-compatible application (e.g. PuTTY on Microsoft Windows). We recommend that you "forward" X11 connections when initiating an SSH session to DC3. For example, when using the ssh command on Unix-based systems, provide the "-Y" option:

ssh -Y jojo@fend01.hpc.ku.dk

In order to download/upload data from/to DC3, use the following command:

scp -pr user@host1:from_path_file1 user@host2:to_path_file2

For more information, use the man/info commands (man scp).
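Two concrete transfers, reusing the example username jojo from above (the file and directory names are purely illustrative):

scp -p model_output.nc jojo@fend01.hpc.ku.dk:~/data/
scp -pr jojo@fend01.hpc.ku.dk:~/runs/exp01 ./exp01

The first command uploads a single file to DC3, preserving its timestamps (-p); the second recursively (-r) downloads a whole run directory into the current local directory.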

There are 5 frontend/login nodes available at the moment: fend01.hpc.ku.dk - fend05.hpc.ku.dk

N.B. The login nodes are intended only for lightweight tasks such as source code editing, compiling, and managing files and directories. All computationally intensive tasks must be submitted and executed on compute nodes. You can find more details in the SLURM Workload Manager section below.   
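As a minimal illustration of what running on compute nodes means in practice, a SLURM batch job is described in a small script and handed to sbatch. The job name, task count, and time limit below are placeholders; the partitions and accounting options that apply on DC3 are covered in the SLURM Workload Manager section.

#!/bin/bash
#SBATCH --job-name=hello        # placeholder job name
#SBATCH --ntasks=32             # placeholder number of MPI tasks
#SBATCH --time=01:00:00         # placeholder wall-clock limit
srun ./hello.x                  # run the MPI executable on the allocated compute nodes

Submit the script from a login node with sbatch job.sh and monitor it with squeue.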


Software

DC3 provides a rich set of HPC utilities, applications, compilers and programming libraries. If something you need is missing, send an email with your request to nuterman@nbi.ku.dk, and we will evaluate it for appropriateness, cost, effort, and benefit to the community. See more information about available software and how to use it in the Available Software section below.
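As a quick orientation before that section, software on DC3 is typically discovered and activated with the module utility along these lines (hdf5-parallel is just the example module discussed later on this page):

module avail                # list the software installed on DC3
module load hdf5-parallel   # add a package to your environment
module list                 # show the currently loaded modules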

Source Code Compilation

Let's assume that we're compiling source code that will run as a parallel application using MPI for internode communication, and that the code is written in Fortran, C, or C++. In this case it's easy, because you will use a standard compiler wrapper script that brings in all the include-file and library paths and sets the linker options that you'll need. One should use the following wrappers: mpif90, mpicc, or mpic++ for Fortran, C, and C++, respectively.

To compile on DC3, execute on the command line: mpif90 -o hello.x hello.f90
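The C and C++ cases look the same, only with the wrapper swapped (the source file names are just examples):

mpicc -o hello.x hello.c
mpic++ -o hello.x hello.cpp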

If the compilation needs an extra library such as HDF5, you must first load it through the module utility. Even with the module loaded, however, the compiler does not automatically know where to find the HDF5 include files and libraries. One way to figure out what is needed is to look under the covers of the HDF5 module.

The ml show hdf5-parallel command reveals (most of) what the module actually does when you load it. You can see that it defines some environment variables, for example HDF5_INCLUDE, which you can use in your build script or Makefile. Look at the definitions of the HDF5_XXX environment variables; they contain all the include and link options.

Therefore, we can compile with: mpicc -o hd_copy.x hd_copy.c $HDF5_INCLUDE $HDF5_LIB
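Putting the steps together, a typical HDF5 build looks roughly like this (the module and variable names follow the hdf5-parallel example above; check the ml show output for the exact names on DC3):

ml load hdf5-parallel                                    # make HDF5 available in the environment
ml show hdf5-parallel                                    # inspect the HDF5_XXX variables the module defines
mpicc -o hd_copy.x hd_copy.c $HDF5_INCLUDE $HDF5_LIB     # compile and link against HDF5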

Compiler Optimizations

These are some common compiler optimizations and the types of code that they work best with.

Vectorization

The registers and arithmetic units on DC3 are capable of performing the same operation on several double precision operands simultaneously in a SIMD (Single Instruction Multiple Data) fashion. This is often referred to as vectorization because of its similarities to the much larger vector registers and processing units of the Cray systems of the pre-MPP era. Vector optimization is most useful for large loops in which each successive operation has no dependencies on the results of the previous operations. Loops can be vectorized by the compiler or by compiler directives in the source code.
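If you want to check whether the compiler actually vectorized a given loop, you can request a vectorization report. The flags below are for GCC/gfortran (Intel and PGI have analogous reporting options; see their man pages), and the source file name is illustrative:

gfortran -O2 -ftree-vectorize -fopt-info-vec -c model.f90          # report which loops were vectorized
gfortran -O2 -ftree-vectorize -fopt-info-vec-missed -c model.f90   # report loops that could not be vectorized and why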

Inter-procedural Optimization

This is defined as the compiler optimizing over subroutine, function, or other procedural boundaries. It can have many levels, ranging from inlining, the replacement of a function call with the corresponding source code at compile time, up to treating the entire program as one routine for the purpose of optimization.

This can be the most compute-intensive of all optimizations at compile time, particularly for large applications. It can increase compile time by an order of magnitude or more without any significant speedup, and can even cause the compilation to crash. For this reason, none of the DC3 recommended compiler optimization options include any significant inter-procedural optimizations. It is most suitable when there are function calls embedded within large loops.

Relaxation of IEEE Floating-point Precision

Full implementation of IEEE Floating-point precision is often very expensive. There are many floating-point optimization techniques that significantly speed up a code's performance by relaxing some of these requirements. Since most codes do not require an exact implementation of these rules, all of the DC3 recommended optimizations include relaxed floating-point techniques.

Optimization Arguments

This table shows how to invoke these optimizations for each compiler. Some of the options have numeric levels: the higher the number, the more extensive the optimizations, and a level of 0 turns the optimization off. For more information about these optimizations, see the compilers' on-line man pages.

Optimization        Intel          GCC/gfortran                   PGI
Vectorization       -vec           -ftree-vectorize               -Mvect
Interprocedural     -ipo           -finline-[opt], -fipa[-opt]    -Mipa
IEEE FP relaxation  -mno-ieee-fp   -ffast-math                    -Knoieee
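As a sketch of how the table translates into actual compile lines, assuming the MPI wrapper sits on top of the corresponding compiler and using an illustrative source file name:

mpif90 -O2 -vec -mno-ieee-fp -o model.x model.f90              # Intel: vectorization plus relaxed IEEE floating point
mpif90 -O2 -ftree-vectorize -ffast-math -o model.x model.f90   # GCC/gfortran: the equivalent options
mpif90 -O2 -Mvect -Knoieee -o model.x model.f90                # PGI: the equivalent options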

Projects

Student / Researcher              Project                                                                    Supervisor / PI
Marta Mrozowska (PostDoc)         Bayesian Optimization for Earth System Modelling (EU project ClimTip)      Markus Jochum
Qi-fan Wu (PhD)                   Machine Learning for Earth System Modelling (DFF-funded MadGod project)    Markus Jochum
Svenja Frey (MSc)                 Machine Learning for Earth System Modelling                                Markus Jochum
Aster Lei Stoustrup (MSc)         Bayesian Optimization                                                      Markus Jochum
Maria Friis Greibe (BSc)          Gullmarn Fjord Modelling                                                   Markus Jochum
Majbritt Eckert (PhD)             Greenland Ice Sheet Modelling (NN Foundation project PRECISE)              Christine Hvidberg
Leonie Röntgen (PhD)              Antarctic Ice Sheet Modelling                                              Christine Hvidberg
Isabel Schwermer (MSc)            Greenland Ice Sheet Modelling                                              Christine Hvidberg
Chenhan Di (MSc)                  Greenland Ice Sheet Modelling                                              Christine Hvidberg
Irina Thaler (PostDoc)            Impact of Aerosols on Climate throughout Earth’s History                   Christian J. Bjerrum
Miguel Garrido Zornoza (PostDoc)                                                                             Jan Olaf Härter

Publications

    1. Nuterman, R., & Jochum, M. (2024). Impact of marine carbon removal on atmospheric CO2. Environmental Research Letters, 19(3), 034011. https://doi.org/10.1088/1748-9326/ad26b7
    2. Vettoretti, G., Nuterman, R., & Jochum, M. (2024). Impacts of Parameterizing Estuary Mixing on the Large-Scale Circulations in the Community Earth System Model. Journal of Climate, 37(17), 4461-4479. https://doi.org/10.1175/JCLI-D-23-0365.1
    3. Jochum, M., Chase, Z., Nuterman, R., Pedro, J., Rasmussen, S., Vettoretti, G., & Zheng, P. (2022). Carbon Fluxes during Dansgaard-Oeschger Events as Simulated by an Earth System Model. Journal of Climate, 35(17), 5745-5758. https://doi.org/10.1175/JCLI-D-21-0713.1
    4. Lhardy, F., Bouttes, N., Roche, D. M., Abe-Ouchi, A., Chase, Z., Crichton, K. A., Ilyina, T., Ivanovic, R., Jochum, M., Kageyama, M., Kobayashi, H., Liu, B., Menviel, L., Muglia, J., Nuterman, R., Oka, A., Vettoretti, G., & Yamamoto, A. (2021). A First Intercomparison of the Simulated LGM Carbon Results Within PMIP-Carbon: Role of the Ocean Boundary Conditions. Paleoceanography and Paleoclimatology, 36(10), e2021PA004302. https://doi.org/10.1029/2021PA004302
    5. Keisling, B. A., Nielsen, L. T., Hvidberg, C. S., Nuterman, R., & DeConto, R. M. (2020). Pliocene–Pleistocene megafloods as a mechanism for Greenlandic megacanyon formation. Geology, 48. https://doi.org/10.1130/G47253.1
    6. Haerter, J. O., Meyer, B., & Nissen, S. B. (2020). Diurnal self-aggregation. npj Climate and Atmospheric Science, 3, 30. https://doi.org/10.1038/s41612-020-00132-z
    7. Poulsen, M. B., Jochum, M., Maddison, J. R., Marshall, D. P., & Nuterman, R. (2019). A Geometric Interpretation of Southern Ocean Eddy Form Stress. Journal of Physical Oceanography, 49(10), 2553-2570. https://doi.org/10.1175/JPO-D-18-0220.1
    8. Nielsen, S. B., Jochum, M., Pedro, J. B., Eden, C., & Nuterman, R. (2019). Two-time scale carbon cycle response to an AMOC collapse. Paleoceanography and Paleoclimatology, 34. https://doi.org/10.1029/2018PA003481
    9. Moseley, C., Henneberg, O., & Haerter, J. (2019). A statistical model for isolated convective precipitation events. Journal of Advances in Modeling Earth Systems, 11, 360-375. https://doi.org/10.1029/2018MS001383
    10. Zunino, A., & Mosegaard, K. (2019). An efficient method to solve large linearizable inverse problems under Gaussian and separability assumptions. Computers & Geosciences, 122, 77-86. https://doi.org/10.1016/j.cageo.2018.09.005
    11. Häfner, D., Jacobsen, R. L., Eden, C., Kristensen, M. R. B., Jochum, M., Nuterman, R., & Vinter, B. (2018). Veros v0.1 - a fast and versatile ocean simulator in pure Python. Geoscientific Model Development, 11(8), 3299-3312. https://doi.org/10.5194/gmd-11-3299-2018
    12. Nielsen, L., Adalgeirsdottir, G., Gkinis, V., Nuterman, R., & Hvidberg, C. (2018). The effect of a Holocene climatic optimum on the evolution of the Greenland ice sheet during the last 10 kyr. Journal of Glaciology, 64(245), 477-488. https://doi.org/10.1017/jog.2018.40
    13. Nielsen, S. B., Jochum, M., Eden, C., & Nuterman, R. (2018). An energetically consistent vertical mixing parameterization in CCSM4. Ocean Modelling, 127, 46-54. https://doi.org/10.1016/j.ocemod.2018.03.002
    14. Poulsen, M. B., Jochum, M., & Nuterman, R. (2018). Parameterized and resolved Southern Ocean eddy compensation. Ocean Modelling, 124, 1-15. https://doi.org/10.1016/j.ocemod.2018.01.008

Section secretary (web, communication, coordination, guests), pice@nbi.ku.dk