ARCS Blog

Climbing the Hardware Ladder: Research Computing at HMC


Welcome to the ARCS blog. For our first post, I want to offer a field guide to the research computing resources available to the HMC community — and a preview of a new on-campus service we’re rolling out this summer.

If your laptop has ever started sounding like a jet engine halfway through a simulation, or if you’ve wondered whether your next project should live on something bigger than your MacBook, this one is for you.

Paul Nerenberg over at CMC has a useful framing for all of this that he calls the Hardware Ladder. The idea is simple: research computing resources form a series of rungs, from personal machines up to national supercomputers, and each step up brings more capability — but also more friction. Knowing which rung fits a given workload is half the battle. The other half is knowing how to climb between them, which is a big part of what ARCS is here for.

The Hardware Ladder at a Glance

Each rung adds power and scale, but also introduces a different access model, a different level of user support, and usually a steeper learning curve. Here’s the shape of it:

| Rung | Resource | Scale & Capability | Access Model | Friction |
|------|----------|--------------------|--------------|----------|
| 1 | Personal machine | Local CPU/GPU, limited memory | You already have one | None |
| 2 | Project Iris (single-node server / VM) | Hundreds of GB to multiple TB RAM, 100+ cores, dedicated GPUs | Direct VM lease; students can get accounts without a PI | Low |
| 3 | Hopper (CMC consortial cluster) | Multi-node, SLURM-scheduled | Account request; research use | Moderate (SLURM learning curve) |
| 4 | Laguna (USC CARC regional cluster) | Many nodes; browser-accessible via OnDemand | PI-based; compute-hour credits | Moderate–high |
| 5 | ACCESS (national supercomputers) | 50+ national systems, specialized AI/ML, enormous scale | Allocation application; credit-based | Highest |

Each jump up trades one thing for another. You might gain access to a thousand cores but lose direct, interactive control. You might get specialized GPUs but need a short proposal to get time on them. The rungs aren’t better or worse — they’re suited to different jobs.

Rung 1 — Your Laptop

Every research project starts here. Modern laptops are remarkably capable, especially for coding, prototyping, small datasets, and early-stage analysis. The environment is entirely yours, the feedback loop is instant, and there’s no queue to wait in.

The signs that you’ve outgrown Rung 1 are usually physical and annoying: jobs that run overnight (or longer), out-of-memory errors, datasets that don’t fit on disk, a need for GPUs you don’t have, or the realization that your experiment can’t continue unless your laptop lid stays open. When that starts happening routinely, it’s time to climb.

Rung 2 — A Single-Node Server or VM

Rung 2 is the sweet spot for workloads that have outgrown a laptop but don’t yet need a full shared cluster: more memory, more threads, persistent environments, and SSH access to a machine that doesn’t live in your bag.

Historically, Rung 2 at HMC has meant departmental servers — Knuth and Teapot in CS, Hyper in Math, Sivapithecus in Biology, Gandalf and Galadriel (the “Rivendell” system) in Chemistry, plus many faculty-owned machines. These have served real needs, but they’re unevenly distributed: if your department doesn’t have one, you’ve had to jump straight from laptop to consortial cluster.

That’s the gap Project Iris is designed to fill.

Project Iris: an institution-wide Rung 2

Project Iris is an on-campus, institution-wide HPC service rolling out from ARCS this summer. The idea is straightforward: give every faculty member and student on campus access to flexible, tailored virtual machines on shared HMC hardware, regardless of what department they’re in.

What the first node looks like. The initial compute node is a server with two AMD EPYC 9965 CPUs (384 cores total), 3 TB of RAM, 60 TB of NVMe storage, and an NVIDIA RTX 4500 Blackwell GPU (32 GB). It will live in the McGregor Data Center, which has space, power, and cooling capacity for at least four nodes as the service grows. A GPU-focused second node is on the roadmap, but has been deliberately postponed while hardware prices — especially for memory and GPUs — remain volatile. The phased approach is intentional: establish the service, get real researchers onto it, then expand prudently.

Why VMs? Iris runs on Proxmox, an open-source virtualization platform, and delivers compute to users as virtual machines rather than shared accounts on one big system. Each VM behaves like a dedicated server, but is isolated from every other VM on the host. That isolation matters for a few reasons:

  • Tailored software stacks. A Chemistry VM configured for Gaussian and WebMO doesn’t have to coexist on the same OS as a CS reinforcement learning pipeline. Each VM is its own environment.
  • No dependency conflicts. You get root (or close to it) on your own VM and can install whatever you need without breaking anyone else’s work.
  • Right-sized allocations. Cores, RAM, and storage are allocated per-VM, so a lab that needs 500 GB of RAM gets 500 GB of RAM, and a course that just needs a classroom Linux environment gets something modest.
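To make "right-sized allocations" concrete, here is a hedged sketch of how an administrator might carve out a research VM with Proxmox's `qm` CLI. This is not the actual Iris provisioning workflow — the VM ID, name, storage pool name (`local-nvme`), and sizes below are all illustrative.

```shell
# Illustrative only: provision a VM with 512 GB RAM (specified in MiB),
# 64 cores, a bridged network interface, and a ~2 TB disk on an
# NVMe-backed storage pool. VM ID 9001 and the names are placeholders.
qm create 9001 --name chem-lab-vm \
  --memory 524288 --cores 64 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-nvme:2000
```

The point of the sketch is the granularity: every resource is a per-VM knob, so allocations can track what a lab actually needs rather than what a shared box happens to have.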

Three ways to use Iris. At launch, the service has three tracks:

  1. Student Exploration VM. This is probably the most distinctive feature of Iris. Students will be able to request accounts on a shared exploration VM without being part of a faculty research group. If you’re a student curious about HPC, want to poke at CUDA, want to try running a local LLM, or just need more than your laptop can give you for a class project, this is your on-ramp.
  2. Rivendell Replacement VM. A modern home for the Chemistry / WebMO / Gaussian-style workloads currently living on the aging Gandalf and Galadriel systems. Rivendell and its Iris replacement will run in parallel through the transition so that no one’s work gets disrupted.
  3. Faculty VM Leasing. Semester-based VM leases for faculty research groups, with resources tailored to the project. Faculty can also contribute components — GPUs, storage drives — that attach directly to their own research VMs, and can contribute funding toward future node purchases as the service scales.

Project Iris is co-administered by Tiffany Fulton, our CIS Linux Administrator, and me. A VM request form is on the way; in the meantime, early interest is very welcome.

Iris as a stepping stone. One of the most useful things about a VM is that it’s a good place to develop. You can work out your software stack, test your pipeline end-to-end, and figure out whether your real constraint is CPU, memory, I/O, or GPU — all in an environment you control. When you’re ready to scale up, your workflow is already portable, and ARCS can help you figure out whether the next stop is Hopper, Laguna, or ACCESS.
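One concrete way to work out that resource profile on a Linux VM is GNU time's verbose mode, which reports wall time, CPU time, and peak memory for any command. The script name below is a placeholder for your own job, and `nvidia-smi` applies only if your VM has a GPU attached.

```shell
# Run your workload under GNU time's verbose mode (note: /usr/bin/time,
# not the shell built-in). `my_pipeline.py` is a placeholder.
/usr/bin/time -v python3 my_pipeline.py

# Lines worth reading in the report:
#   "Elapsed (wall clock) time" vs. "User time" + "System time"
#     -> a low CPU-to-wall ratio suggests an I/O or memory bottleneck,
#        not a CPU one
#   "Maximum resident set size (kbytes)"
#     -> peak RAM the job actually used

# If the VM has a GPU, watch live utilization while the job runs:
nvidia-smi dmon -s u
```

If peak RAM is the limiting number, you know to ask for a high-memory node upstream; if CPU time dominates, more cores will help; if neither does, look at storage and I/O first.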

Rung 3 — Hopper (CMC Consortial Cluster)

Hopper is a SLURM-scheduled cluster hosted at Claremont McKenna College, and it is gradually being opened to HMC faculty for research use. It’s a shared-cluster environment in the classical sense: a Rocky Linux base, the SLURM scheduler, job queues, environment modules, and all the conventions that come with cluster computing.

Hopper is the right next step when one machine is no longer enough — especially for parallel or GPU-intensive jobs, or work that needs high-memory nodes beyond a single VM. Access is currently research-only and is not credits-based. A new-user workshop is coming up on May 27, 2026, 9 AM–noon at the Freeberg Forum (Kravis LC62); reach out and I’ll help with account setup ahead of the May 20 deadline.
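The biggest practical shift at this rung is batch computing: instead of running a command and watching it, you describe a job to SLURM and the scheduler runs it when resources free up. A minimal batch script looks roughly like the sketch below — the resource numbers and module name are placeholders, and Hopper's actual partitions, limits, and modules should come from the cluster documentation.

```shell
#!/bin/bash
# Illustrative SLURM batch script; all values are assumptions, not
# Hopper-specific settings.
#SBATCH --job-name=my-sim
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G
#SBATCH --time=12:00:00
#SBATCH --output=my-sim-%j.out

# Load software via environment modules, then launch the job step.
module load python
srun python3 my_pipeline.py
```

You would submit this with `sbatch my_sim.sbatch` and check its place in the queue with `squeue -u $USER` — the scheduler handles the rest.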

Rung 4 — Laguna (USC CARC Regional Cluster)

Laguna, part of USC’s Center for Advanced Research Computing, is a regional resource serving more than a dozen Southern California institutions, including HMC. It uses a PI-based access model and a compute-hour credit system — projects request hours, and those hours get spent as jobs run.

The most distinctive thing about Laguna for newcomers is its browser-based front door: OnDemand provides shell access, file management, SLURM job submission, JupyterLab, RStudio Server, and even in-browser virtual desktops. If the idea of SSHing into a head node and writing batch scripts feels intimidating, Laguna is often a gentler on-ramp to large-scale cluster computing than Hopper.

Faculty interested in using Laguna should get in touch to begin the PI setup process. Students access Laguna through a faculty PI.

Rung 5 — ACCESS (National Supercomputers)

At the top of the ladder is ACCESS (formerly XSEDE), the NSF-funded coordinated network of more than fifty national systems — everything from extremely high-memory nodes to specialized AI/ML clusters (via NAIRR) to honest-to-goodness supercomputers.

ACCESS uses a two-layer credit system: you apply for allocation credits, then exchange those credits for compute hours on specific resources. Applications range from lightweight one-page Explore requests (quick turnaround, modest allocations) up to more detailed proposals for the larger Discover and Accelerate tiers. The good news: ACCESS is intentionally accessible. Supercomputing expertise isn’t a prerequisite, and starting small is actively encouraged.

As HMC’s ACCESS Campus Champion, I can help with the application, help you choose a sensible starting tier, and connect you with the right support staff once you’re on a specific resource. ACCESS allocations are also worth calling out in grant proposals — awarded compute is real infrastructure you can point to.

A Natural Path Up the Ladder

A workflow that ends up on ACCESS usually doesn’t start there. A reasonable pattern looks something like this:

  1. Start on your laptop. Prototype, iterate, get the idea working end-to-end.
  2. Move to an Iris VM when memory, CPU, runtime, or storage becomes the bottleneck. Solidify the software stack. Get a real sense of the resource profile of your workload.
  3. Evaluate Hopper, Laguna, or ACCESS based on what your workload actually needs — Hopper for parallel or GPU-intensive research, Laguna for managed regional-scale compute with a gentler interface, ACCESS when you need specialized or national-scale resources.

Each step up is a deliberate choice, not a forced migration. Plenty of research lives happily on Iris indefinitely; plenty of research belongs on ACCESS from day one. ARCS can help you figure out which.

How ARCS Can Help

A short list of things we’re happy to help with:

  • Figuring out which rung fits your workload
  • Getting set up with accounts on Iris, Hopper, Laguna, or ACCESS
  • Adapting and scaling workflows as you move between rungs
  • Supporting students exploring HPC for the first time
  • Connecting you with documentation, training, and the right people at other institutions

Get in Touch

For Iris interest, HPC questions, or help getting started on any of the resources above, get in touch with ARCS.

More posts on the way — workshop announcements, deeper dives on specific resources, and updates as Iris comes online.

Nicholas Dodds

Research Computing Specialist & NSF ACCESS Campus Champion

Academic & Research Computing Services • Computing & Information Services, Harvey Mudd College