Climbing the Hardware Ladder: Research Computing at HMC
May 8, 2026
Welcome to the ARCS blog. This first post is a practical overview of the research computing resources available to the HMC community, along with a preview of a new on-campus service coming online this summer.
If you’ve run into long runtimes, memory limits, or hardware constraints on your laptop, this is meant to help you figure out what comes next.
A useful way to think about this is through the Hardware Ladder (credit to Paul Nerenberg at CMC). On the ladder, research computing resources span a set of “rungs,” from personal machines up to national supercomputers. Each step up provides more capability, but also introduces more complexity in how you access and use the system.
Working effectively means knowing which rung fits your workload, and how to move between them as your needs change. That’s where ARCS can help.
The Hardware Ladder at a Glance
Each rung increases available compute resources, but also changes how the system is accessed and used. The table below summarizes the structure.
| Rung | Resource | Scale & Capability | Access Model | Friction Level |
|---|---|---|---|---|
| 1 | Personal machine | Local CPU/GPU, limited memory | You already have one | None |
| 2 | Project Iris (single-node server / VM) | Hundreds of GB of RAM, 32+ cores, dedicated GPUs | Direct VM lease; students can get accounts without a faculty PI | Low |
| 3 | Hopper (CMC consortial cluster) | Multi-node, SLURM-scheduled | Account request; research use | Moderate |
| 4 | Laguna (USC CARC regional cluster) | Many nodes; browser-accessible via OnDemand | PI-based; compute-hour credits | Moderate–high |
| 5 | ACCESS (national supercomputers) | 50+ national systems, specialized AI/ML, enormous scale | Allocation application; credit-based | High |
Each rung is suited to a different class of workload. Moving up the ladder provides more capability, but typically requires more planning, coordination, and familiarity with the system.
Rung 1 — Your Laptop
Most projects begin on a personal machine. Laptops are well-suited for development, prototyping, small datasets, and early-stage analysis. The environment is fully controlled, iteration is fast, and there is no shared infrastructure to manage.
Common signs that a workload has outgrown a laptop include long runtimes, memory limitations, datasets that exceed local storage, or the need for hardware such as GPUs that are not available locally. When these constraints become routine, it is usually time to move to the next rung.
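If you want more than a gut feeling, a quick measurement helps. The sketch below is one rough way to check, assuming a Unix-like laptop (the `resource` module is not available on Windows); `run_analysis()` is a hypothetical stand-in for your own workload.

```python
# Rough check of whether a workload is approaching laptop limits:
# time it and record peak memory, then compare against your machine's specs.
import resource
import time

def run_analysis():
    # Hypothetical placeholder: replace with your actual workload.
    data = [i ** 2 for i in range(10_000_000)]
    return sum(data)

start = time.perf_counter()
result = run_analysis()
elapsed = time.perf_counter() - start

# ru_maxrss is reported in kilobytes on Linux and in bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(f"Elapsed time: {elapsed:.1f} s")
print(f"Peak memory (Linux): {peak / 1024:.1f} MB")
print(f"Peak memory (macOS): {peak / 1024 ** 2:.1f} MB")
```

If peak memory is a large fraction of your laptop's RAM, or the runtime makes iteration painful, that is a good signal to look at Rung 2.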
Rung 2 — A Single-Node Server or VM
Rung 2 provides a step up in resources without introducing the full complexity of a shared cluster. This typically includes more memory, more CPU cores, persistent environments, and remote access via SSH.
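The main payoff is that code written to use multiple cores scales almost transparently when moved to a bigger machine. Here is a minimal sketch using only the standard library; `simulate()` is a hypothetical stand-in for a per-task computation such as one parameter setting or one input file.

```python
# CPU-bound tasks fanned out across all available cores.
# On a laptop this might mean 4-8 workers; on a server VM, 32 or more.
import os
from concurrent.futures import ProcessPoolExecutor

def simulate(seed: int) -> float:
    # Hypothetical placeholder: replace with your real per-task work.
    total = 0.0
    for i in range(1, 1_000_000):
        total += (seed % 7 + 1) / i
    return total

if __name__ == "__main__":
    tasks = range(64)
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(simulate, tasks))
    print(f"Ran {len(results)} tasks across {os.cpu_count()} cores")
```

The same script runs unchanged on a laptop and on a Rung 2 server; it simply finishes faster where more cores are available.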
At HMC, this layer has historically consisted of departmental or faculty-managed systems (for example, Knuth and Teapot in CS, Hyper in Math, and Gandalf and Galadriel in Chemistry). While effective, these resources are not uniformly available across departments.
Project Iris is designed to provide a consistent, institution-wide version of this rung.
Project Iris: an institution-wide Rung 2
Project Iris is an on-campus HPC service being deployed by ARCS. It provides access to virtual machines on shared HMC-managed hardware for both research and instructional use.
System overview. The initial node includes two AMD EPYC 9965 CPUs (384 cores total), 3 TB of RAM, 60 TB of NVMe storage, and an NVIDIA RTX 4500 Blackwell GPU (32 GB). It will be housed in the McGregor Data Center, with capacity to expand to additional nodes over time.
A GPU-focused second node is planned but has been deferred due to current hardware pricing. The deployment strategy is phased: establish the service, support initial users, and expand based on demonstrated demand.
Why virtual machines? Iris uses Proxmox to deliver compute resources as virtual machines rather than shared user accounts on a single system. Each VM functions as an independent environment.
This model provides:
- Custom software environments. Each VM can be configured independently for specific research or instructional needs.
- Isolation between users. Changes within one VM do not affect others.
- Flexible resource allocation. CPU, memory, and storage are assigned per VM based on workload requirements.
Usage models. At launch, Iris supports three primary use cases:
- Student exploration VM. Students can request access to a shared VM without being part of a research group. This provides a low-friction entry point for learning and experimentation.
- Rivendell replacement VM. A modern environment for Chemistry workloads currently running on Gandalf and Galadriel, with a planned transition period.
- Faculty VM leasing. Dedicated VMs for research groups, with resources sized to the project. Faculty can also contribute hardware or funding to support expansion.
Iris is administered by ARCS, with support from central IT. A request form will be available soon; early interest is welcome.
Iris as a development environment. VMs provide a controlled space to develop and test workflows before scaling. Once a workload’s requirements are understood, it can be moved to larger systems such as Hopper, Laguna, or ACCESS as needed.
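Part of understanding a workload's requirements is simply confirming what the VM exposes. A minimal sketch follows, assuming PyTorch is installed in the VM's environment (swap in your framework of choice if not):

```python
# Quick inventory of what this VM provides before scaling a workflow.
import os
import torch

print(f"CPU cores visible to this VM: {os.cpu_count()}")

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024 ** 3:.0f} GB of memory")
else:
    print("No CUDA-capable GPU visible; run CPU-only or request a GPU-backed VM.")
```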
Rung 3 — Hopper (CMC Consortial Cluster)
Hopper is a SLURM-managed cluster hosted at Claremont McKenna College. It provides a traditional shared-cluster environment with job scheduling, environment modules, and support for parallel workloads.
Hopper is appropriate for workloads that exceed the capabilities of a single machine, including multi-node jobs, large-scale parallel processing, and GPU-based computation. Access is currently limited to research use and is coordinated through account requests.
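To give a flavor of what "SLURM-scheduled" means in practice, here is a minimal sketch of a batch job. Because sbatch reads the `#SBATCH` comment lines before the first executable statement, the script itself can be plain Python. The partition name and resource limits below are placeholders, not Hopper's actual configuration; check the cluster documentation or ask ARCS for the real values.

```python
#!/usr/bin/env python3
#SBATCH --job-name=example-job
#SBATCH --partition=general        # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=02:00:00
#SBATCH --output=example-%j.out    # %j expands to the SLURM job ID

import os

def main():
    # SLURM exports allocation details as environment variables;
    # fall back to sensible defaults when run outside the scheduler.
    n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))
    node = os.environ.get("SLURMD_NODENAME", "unknown")
    print(f"Running with {n_cpus} CPU(s) on node {node}")
    # ... the actual workload goes here ...

if __name__ == "__main__":
    main()
```

The job is submitted with `sbatch example_job.py` and runs when the scheduler finds free resources; `squeue -u $USER` shows where it sits in the queue.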
Rung 4 — Laguna (USC CARC Regional Cluster)
Laguna, operated by USC’s Center for Advanced Research Computing, is a regional cluster available to institutions across Southern California, including HMC. Access is PI-based and uses a compute-hour credit model.
A distinguishing feature of Laguna is its web-based interface (OnDemand), which provides browser-based access to shell environments, file systems, job submission, and interactive tools such as JupyterLab and RStudio.
Faculty can request access as PIs, and students typically access the system through a faculty allocation.
Rung 5 — ACCESS (National Supercomputers)
At the top of the ladder is ACCESS, the NSF-supported network of national computing resources. These systems provide large-scale compute, specialized hardware, and infrastructure for advanced research workloads.
ACCESS uses a credit-based allocation model. Projects apply for allocations, which can range from small exploratory requests to larger, proposal-based awards.
ARCS can assist with selecting appropriate resources, preparing allocation requests, and onboarding to specific systems.
A Natural Path Up the Ladder
A typical progression through these resources might look like:
- Start on a laptop. Develop and test the workflow.
- Move to an Iris VM. Address limitations in memory, runtime, or storage and refine the environment.
- Transition to larger systems. Use Hopper, Laguna, or ACCESS depending on the scale and requirements of the workload.
Not all projects follow this exact path, but the ladder provides a useful framework for making these decisions.
How ARCS Can Help
ARCS provides support across all stages of this process, including:
- Identifying appropriate resources for a given workload
- Assisting with account setup and access
- Supporting workflow development and scaling
- Providing guidance for students new to research computing
- Connecting users with documentation, training, and external resources
Get in Touch
For questions about Iris or research computing resources at HMC:
- Email ARCS: arcs-l@g.hmc.edu
- Submit a Help Desk ticket: helpdesk@hmc.edu
- View Nic’s website for helpful guides, tutorials, and tools.
Additional posts will cover specific systems, workflows, and updates as Iris becomes available.
Nicholas Dodds
Research Computing Specialist & NSF ACCESS Campus Champion
Academic & Research Computing Services • Computing & Information Services, Harvey Mudd College