Mesabi
The Mesabi compute cluster provides hardware resources for running a wide variety of jobs.
IMPORTANT: Mesabi will be retired on June 5, 2024. See complete information and impacts for users on the Mesabi Retirement webpage.
Mesabi is an HP Linux distributed cluster featuring a large number of nodes with leading-edge Intel processors that are tightly integrated via a very high-speed communication network. In addition, it contains a significant number of nodes with very large memory (up to 1 TB per node), accelerator (GPU) nodes, and nodes with solid-state storage devices (SSDs) for ultra-high-performance input/output.
- Compute infrastructure: A total of 741 nodes of various configurations with a total of 17,784 compute cores provided by Intel Haswell E5-2680v3 processors. This system provides 711 Tflop/s of peak performance. 40 of these nodes include 2 NVIDIA Tesla K20X GPUs. The GPU subsystem provides 105 Tflop/s of additional peak performance.
- Memory: 616 nodes have 64 GB of RAM, 24 nodes feature 256 GB, and 16 nodes have 1 TB of RAM each. The 40 GPU nodes have 128 GB of RAM each. Hence the total memory of the system is 67 TB.
- SSD input/output nodes: 32 nodes have 480 GB solid-state drives (SSDs) for ultra-high-performance input/output. The total system SSD capacity is 15 TB.
The name Mesabi was selected by MSI staff from over 140 suggestions from MSI users and others at the U of M. The Mesabi Range is the largest of the four major iron deposits in northern Minnesota that collectively make up the area known as the Iron Range. It is the chief deposit of iron ore in the U.S. The name comes from an Ojibwe word meaning “immense mountain.” The name reflects Minnesota’s cultural heritage and natural resources and ties in to an informal term for supercomputers, “Big Iron.”
Mesabi features a heterogeneous architecture designed to support a diversity of job types. You can schedule one or many 24-core nodes as a parallel threaded or MPI job, or request a single processor on a single node for many days. Depending on the partition, your job may have full control of a node or share nodes with other jobs. Other partitions provide access to specialized hardware such as high-memory nodes (up to 1 TB of RAM) and K40 GPUs.
See also the documentation for job partitions.
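The following sketch illustrates how the job shapes described above might be requested. The resource values are placeholders and no partition is specified, so consult the job partitions documentation for the limits and partition names that apply on Mesabi. These directives go at the top of a Slurm batch script.

```bash
# (a) Whole-node job: all 24 cores of one Mesabi node for a threaded or MPI code.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --time=04:00:00
```

```bash
# (b) Long serial job: a single core on a (possibly shared) node for several days.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=96:00:00
```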
Mesabi is most efficiently accessed through a terminal environment. MSI users can follow the directions in the Connecting to HPC Resources quick start guide to access Mesabi and all other MSI HPC systems. Once connected to Mesabi, you can submit jobs to Mesabi's queues using scripts containing Slurm directives, also known as Slurm scripts.
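As a minimal sketch of what a Slurm script can look like (the module name, executable, and resource values below are illustrative placeholders, not Mesabi-specific settings):

```bash
#!/bin/bash -l
#SBATCH --job-name=mesabi_example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24     # request one full 24-core Mesabi node
#SBATCH --time=02:00:00          # wall-clock limit (hh:mm:ss)
#SBATCH --mem=60g                # memory per node; most Mesabi nodes have 64 GB

# Load your software environment (module name is a placeholder).
module load my_application

# Launch the program across the requested cores.
srun ./my_program
```

Save the script (for example as example_job.sh), then submit and monitor it from the login node:

```bash
sbatch example_job.sh   # submit the job script to the scheduler
squeue -u $USER         # check the status of your queued and running jobs
```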