The Juggernaut HPC cluster

WARNING
The Juggernaut cluster was retired in July 2024 and is no longer available. This page is retained for historical reference only.

The Juggernaut High-Performance Computing (HPC) cluster is a small but growing HPC cluster operated by Mid-Atlantic Crossroads (MAX), a unit of the Division of Information Technology (DIT) at the University of Maryland (UMD). Juggernaut provides compute and storage resources for MAXedge, an advanced edge-computing resource.

Hardware

The following table lists the hardware on the Juggernaut cluster:

| Description | Processor | Nodes | Cores/node | Total cores | Memory/node (GB) | Memory/core (GB) | Scratch space/node (GB) | GPUs/node | GPU type | Interconnect |
|---|---|---|---|---|---|---|---|---|---|---|
| Green partition, standard CPU nodes | Skylake (Xeon Gold 6148), 2.4 GHz | 13 | 40 | 520 | 384 | 9.6 | 850 | 0 | | EDR InfiniBand |
| Blue partition, standard CPU nodes | Broadwell EP (E5-2680 v4), 2.4 GHz | 4 | 28 | 112 | 256 | 9.1 | 400 | 0 | | |
| Green partition, GPU node | Cascade Lake (Xeon Gold 6248), 2.5 GHz | 1 | 80 | 80 | 1536 (1.5 TB) | 19 | 150 | 4 | Tesla Volta V100 | |
| Blue partition, GPU node | Skylake (Xeon Gold 6142), 2.6 GHz | 1 | 32 | 32 | 384 | 12 | 100 | 2 | Tesla Pascal P100 | |
| TOTALS | | 20 | | 744 | | | | | | |
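The per-core memory and total-core columns are derived from the per-node figures; as a worked example for the green-partition CPU nodes:

$$
\text{memory/core} = \frac{\text{memory/node}}{\text{cores/node}} = \frac{384\ \text{GB}}{40} = 9.6\ \text{GB},
\qquad
\text{total cores} = 13 \times 40 = 520.
$$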

A limited number of GPUs are available on the cluster. The node gpu-0 (blue partition) contains a pair of Tesla Pascal P100 GPUs (CUDA compute capability 6.0). The node gpu-10-0 (green partition) contains four Tesla Volta V100 GPUs (CUDA compute capability 7.0).
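As an illustration only (not part of the original cluster documentation), the following minimal CUDA sketch lists the visible devices and their compute capabilities, which is one way to confirm whether a job landed on the P100 (6.0) or V100 (7.0) node. It assumes only a working CUDA toolkit on the node; node names and scheduler configuration are outside its scope.

```c
// query_gpus.cu -- minimal sketch: list visible GPUs and their compute capability.
// Assumes the CUDA runtime is available on the node; no scheduler is assumed.
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On gpu-0 this would report compute capability 6.0 (P100);
        // on gpu-10-0 it would report 7.0 (V100).
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Such a check could be compiled with `nvcc query_gpus.cu -o query_gpus` and run interactively on a GPU node.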



