
HPC Hardware Specifications and Support

High-performance computing has become a vital requirement for research that involves large data sets and computationally intensive workloads. The Cal Poly Pomona HPC initiative supports research and instruction across the colleges that require access to such resources. Whether you need to analyze large data sets, perform complex calculations, or are just getting started, we're here to help.

Our HPC Hardware Specifications

The high-performance computing (HPC) cluster consists of multiple dedicated processor nodes connected by a specialized high-speed network and managed by job-scheduling software. The CPP HPC software management suite uses the Hewlett Packard Enterprise (HPE) HPC software stack, which includes the open-source Slurm job scheduler, HPE Insight Cluster Management Utility (CMU) for cluster management, and other HPE software for node deployment and configuration. The Anaconda package management system allows users to install and manage dedicated libraries and external software packages.
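As a quick illustration, a user might create and activate a personal Anaconda environment before running jobs. This is only a sketch: the environment name and package list below are hypothetical examples, not CPP defaults.

    # Create a personal environment with a specific Python version
    # ("myproject" and the package list are illustrative examples)
    conda create --name myproject python=3.10 numpy scipy

    # Activate the environment and confirm what is installed
    conda activate myproject
    conda list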

The Slurm scheduler manages the allocation, dispatching, and execution of jobs. Slurm is a well-documented resource manager used by many campus HPC systems, and it allows jobs to be dispatched in several ways, including interactive (real-time) and batch modes.
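A typical batch submission might look like the following sketch. The script name, task count, time limit, and Python workload are assumptions for illustration rather than CPP-specific requirements.

    #!/bin/bash
    #SBATCH --job-name=example_job        # illustrative job name
    #SBATCH --ntasks=4                    # number of parallel tasks (assumption)
    #SBATCH --time=01:00:00               # one-hour wall-clock limit
    #SBATCH --output=example_job.%j.out   # output file; %j expands to the job ID

    # Activate a personal Anaconda environment, then run the workload
    conda activate myproject
    srun python analyze.py

The script would be submitted in batch mode with "sbatch example_job.sh", while "srun --pty bash" requests an interactive (real-time) session on a compute node.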

Cluster nodes are grouped into partitions so that jobs are dispatched to nodes appropriate for the computational task. The "General Compute Partition" is used for general-purpose jobs that benefit from running multiple computing tasks in parallel, while the "GPU Partition" gives a task access to dedicated GPU processors when it would benefit from additional numerical processing capability, as shown in the example below.
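In practice, a job is steered to a partition with Slurm's --partition flag, and GPU jobs additionally request devices with --gres. The partition names used here ("general" and "gpu") are assumptions; the actual names on the CPP cluster may differ.

    # Send a CPU-bound job to the general compute partition (partition name is an assumption)
    sbatch --partition=general example_job.sh

    # Send a job to the GPU partition and request one GPU (names are assumptions)
    sbatch --partition=gpu --gres=gpu:1 gpu_job.sh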

The new CPP HPC cluster is based on the HPE ProLiant server platform and currently includes two DL360 management nodes, 20 DL160 compute nodes, and four GPU nodes with eight NVIDIA Tesla P100 GPUs. The cluster contains 3.3 TB of RAM and is connected through a dedicated internal 40 Gbit InfiniBand switching fabric and 10 Gbit external Ethernet connections. The overall system throughput is approximately 36.6 TFLOPS in double-precision mode or 149.6 TFLOPS in half-precision mode. This configuration is expected to grow as researchers identify collaborative research initiatives and develop future funding for the system's expansion through external grants and donations.
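Because the configuration will change over time, logged-in users can check the current partition and node layout with standard Slurm commands rather than relying on the figures above:

    # List partitions, their nodes, and current availability
    sinfo

    # Show detailed per-node configuration (CPUs, memory, generic resources such as GPUs)
    scontrol show nodes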


Help Documents for Students and Faculty

We’ve provided HPC documentation for faculty and students. Learn about the HPC hardware and specifications, getting started, setting up an environment, running a job, and more.