HPC at Harvey Mudd College

Harvey Mudd College offers a variety of High Performance Computing (HPC) resources to our faculty and students for research and teaching. Please contact the CIS Help Desk at helpdesk@hmc.edu with any questions about using these resources.

HPC Resources at HMC

  • Sivapithecus – A multi-core, large shared-memory (0.5 TB) HPC system
  • Kepler – A GPGPU system (1 TFLOPS)
  • LittleFe – A portable Beowulf mini-cluster for HPC education
  • XSEDE – Supercomputers via the Extreme Science and Engineering Discovery Environment

[Large Memory System]

Sivapithecus* (sivapithecus.cs.hmc.edu) is a multi-core, large shared-memory HPC system originally built for a research project by Prof. Eliot Bush in Biology. Prof. Bush has generously shared the machine with the HMC community, and it is now available to researchers whose applications especially need a large memory space. It has half a terabyte of RAM and 64 processor cores.

The system is currently managed by Tim Buchheim, the system administrator for the CS Department. For access, contact Tim (tcb@cs.hmc.edu) or the CIS Help Desk (helpdesk@hmc.edu).

  • Number of Cores: 64 (4 x 16-Core AMD Opteron 6276, 2.3GHz, 16MB L3 Cache)
  • RAM: 512 GB (Operating at 1066MHz Max)
  • OS: Linux (Ubuntu)
  • HDD: 1 TB Fixed Drive + 3 x 2 TB Hot-Swap Drives (3Gb/s, 7.2K RPM, 64MB Cache)
  • Physical Location: CS Server Room
  • Host Name: sivapithecus.cs.hmc.edu
  • Month and Year Built: July 2012
  • Manufacturer: Silicon Mechanics
  • Available Software: General Linux (Ubuntu) software, GCC

*Sivapithecus (pronounced she-va-PITH-eh-cus) is the name of a fossil ape from the Miocene of Pakistan. It is a likely ancestor to the orangutan.
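
For a sense of how the machine's shared memory and many cores can be used with the stock GCC toolchain, here is a minimal OpenMP sketch in C; the file name, array size, and thread count are purely illustrative, not site-specific settings.

  /* omp_sum.c -- minimal OpenMP example (illustrative only).
   * Compile:  gcc -std=c99 -fopenmp -O2 omp_sum.c -o omp_sum
   * Run with, e.g., 32 threads:  OMP_NUM_THREADS=32 ./omp_sum
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <omp.h>

  int main(void)
  {
      const long n = 100000000L;            /* 100 million doubles (~0.8 GB) */
      double *a = malloc(n * sizeof *a);
      if (a == NULL) return 1;

      double sum = 0.0;

      /* All threads share the array 'a'; the reduction clause combines
         each thread's partial sum at the end of the loop. */
      #pragma omp parallel for reduction(+:sum)
      for (long i = 0; i < n; i++) {
          a[i] = (double)i;
          sum += a[i];
      }

      printf("max threads: %d, sum = %.0f\n", omp_get_max_threads(), sum);
      free(a);
      return 0;
  }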

[GPGPU System]

Kepler (kepler.physics.hmc.edu) is a GPGPU system built jointly by CIS and Prof. Vatche Sahakian in Physics. It has two high-end Tesla GPUs donated by NVIDIA.

For access, contact the CIS Help Desk (helpdesk@hmc.edu) or Prof. Vatche Sahakian (Vatche_Sahakian@hmc.edu).

  • Number of Cores: 8 (2 x Intel Xeon E5606 Quad-Core 2.13GHz, 8MB Cache)
  • RAM: 24 GB (Operating at 1333MHz Max)
  • GPU: 2 x Tesla C2075
  • Number of CUDA Cores: 896 (2 x 448)
  • GPGPU Performance: 1 TFLOPS (515 GFLOPS peak double-precision floating point per GPU)
  • OS: Linux (Ubuntu 12.04.1 LTS)
  • HDD: 1TB Fixed Drive (3Gb/s, 7.2K RPM, 64MB Cache)
  • Physical Location: Chemistry Lab
  • Host Name: kepler.physics.hmc.edu
  • Month and Year Built: July 2012
  • Manufacturer: Silicon Mechanics
  • Available Software: CUBLAS and CUDA LAPACK libraries from NVIDIA, Mathematica 8 (see the example below)
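
As a rough sketch of how a program might call CUBLAS on a machine like this, the C example below performs a DAXPY (y = alpha*x + y) on the GPU. The compile command and include/library paths are assumptions about a typical CUDA toolkit installation, not documented site instructions.

  /* cublas_daxpy.c -- minimal CUBLAS sketch (illustrative only).
   * Compile (adjust paths to the local CUDA install), e.g.:
   *   gcc -std=c99 -I/usr/local/cuda/include cublas_daxpy.c \
   *       -o cublas_daxpy -L/usr/local/cuda/lib64 -lcublas -lcudart
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>
  #include <cublas_v2.h>

  int main(void)
  {
      const int n = 1 << 20;                    /* one million elements */
      double *h_x = malloc(n * sizeof(double));
      double *h_y = malloc(n * sizeof(double));
      for (int i = 0; i < n; i++) { h_x[i] = 1.0; h_y[i] = 2.0; }

      double *d_x, *d_y;
      cudaMalloc((void **)&d_x, n * sizeof(double));
      cudaMalloc((void **)&d_y, n * sizeof(double));

      cublasHandle_t handle;
      cublasCreate(&handle);

      /* Copy both vectors to the GPU, compute y = 3*x + y there, copy back. */
      cublasSetVector(n, sizeof(double), h_x, 1, d_x, 1);
      cublasSetVector(n, sizeof(double), h_y, 1, d_y, 1);
      const double alpha = 3.0;
      cublasDaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
      cublasGetVector(n, sizeof(double), d_y, 1, h_y, 1);

      printf("y[0] = %.1f (expected 5.0)\n", h_y[0]);

      cublasDestroy(handle);
      cudaFree(d_x); cudaFree(d_y);
      free(h_x); free(h_y);
      return 0;
  }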

[HPC Education System]

LittleFe (littlefe.cs.hmc.edu) is a portable Beowulf cluster built for HPC education. Harvey Mudd College won a LittleFe unit in September 2012 for the LittleFe Buildout event at SuperComputing 2012 (SC12). LittleFe comes with a pre-configured Linux environment that includes all the libraries needed to develop and test parallel applications using shared-memory parallel processing (OpenMP), distributed-memory parallel processing (MPI), and GPGPU parallel processing (CUDA). It can be used for parallel processing and computer architecture courses.

To use the LittleFe unit for your class, contact the CIS Help Desk (helpdesk@hmc.edu) or Tim Buchheim (tcb@cs.hmc.edu).

  • Number of Cores: 12 (6 x Dual-Core Intel Atom ION2 at 1.5 GHz)
  • RAM: 2GB DDR2 800 RAM per node
  • OS: Linux (BCCD 3.1.1)
  • HDD: 160 GB Fixed Drive (7.2K RPM, 2.5” SATA HDD)
  • Physical Location: CS Server Room (Portable)
  • Host Name: littlefe.cs.hmc.edu
  • Month and Year Built: August 2012
  • Manufacturer: LittleFe Project (http://littlefe.net)
  • Available Software: MPI, OpenMP, CUDA, and sample applications and benchmarks (GalaxSee, Life, HPL Benchmark, Parameter Space)
LittleFe at Harvey Mudd College
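
As a rough illustration of the kind of distributed-memory program the BCCD environment is set up to run, here is a minimal MPI hello-world in C; the compile and launch commands are the generic MPI ones, not LittleFe-specific settings.

  /* mpi_hello.c -- minimal MPI sketch (illustrative only).
   * Compile:  mpicc -std=c99 mpi_hello.c -o mpi_hello
   * Run on, e.g., 12 processes:  mpirun -np 12 ./mpi_hello
   */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size, name_len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id   */
      MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
      MPI_Get_processor_name(name, &name_len);

      printf("Hello from rank %d of %d on %s\n", rank, size, name);

      MPI_Finalize();
      return 0;
  }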

[Supercomputers]

Harvey Mudd College does not own any supercomputers on campus. Instead, we make use of the national cyberinfrastructure XSEDE (Extreme Science and Engineering Discovery Environment), the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. CIS has signed up for the XSEDE Campus Champion program and received almost 900,000 Service Units (SUs) from various national supercomputing facilities. These allocations are available to the HMC community free of charge to help test research applications on supercomputers such as Gordon (San Diego Supercomputer Center), Blacklight (Pittsburgh Supercomputing Center), and Lonestar (Texas Advanced Computing Center). For more information, please email the CIS Help Desk at helpdesk@hmc.edu.

TACC Lonestar Supercomputer