CS REU Mentors and Projects

Arthi Padmanabhan

Arthi Padmanabhan is an assistant professor of computer science at Harvey Mudd College. Her research focuses on building systems that enable scalable machine learning on real-time video data. Specifically, the systems she builds target an improved tradeoff between performance (e.g., accuracy, latency) and resource usage (e.g., cost, memory, energy, bandwidth consumption) in large-scale video analytics systems. She obtained her PhD in CS from UCLA. Prior to UCLA, she worked at Microsoft for three years, and before that she completed her BA in CS at Pomona College.

Project: Optimizing Deep Learning Capabilities for Low-Power Devices

Traditionally, running deep neural networks (DNNs) on data from an edge device (e.g., low-power sensor, traffic camera) involved streaming the data to the cloud, where DNNs were run over the data. For example, during an Amber Alert, a camera would send all video frames to the cloud, where a computer vision model would run on each frame to find a particular car. However, sending data to the cloud adds delays and raises privacy concerns. To address these issues, recent efforts have targeted running such models on the edge device itself. However, these devices tend to be less well-provisioned than the cloud in terms of resources (e.g., compute, memory, energy), so recent work aims to lower the resource usage of running DNNs.

In this project, we focus on how DNNs use energy. Typically, DNNs drain energy quickly, making them challenging to run on low-power edge devices. We specifically focus on energy-harvesting devices (e.g., powered by solar panels) and seek to understand how such devices can be optimized for resource-intensive machine learning tasks.
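As a rough illustration of the kind of measurement involved (not the lab's actual toolchain), energy per inference is often approximated as average power draw multiplied by latency. In the sketch below, the workload, the helper names, and the 3 W power figure are all assumptions for demonstration; on a real device the power number would come from an external meter or onboard sensor:

```python
import time

def estimate_inference_energy(run_inference, avg_power_watts, num_runs=100):
    """Estimate energy per inference as (average device power) x (latency).

    `run_inference` is any callable performing one forward pass;
    `avg_power_watts` would come from an external power meter.
    """
    start = time.perf_counter()
    for _ in range(num_runs):
        run_inference()
    latency_s = (time.perf_counter() - start) / num_runs
    energy_joules = avg_power_watts * latency_s
    return latency_s, energy_joules

# Toy stand-in for a DNN forward pass: a small matrix-style computation.
def toy_model():
    a = [[i * j for j in range(32)] for i in range(32)]
    sum(sum(row) for row in a)

# Assumed 3 W average draw, roughly in range for a Raspberry Pi under load.
latency, energy = estimate_inference_energy(toy_model, avg_power_watts=3.0)
print(f"~{latency * 1e3:.3f} ms/inference, ~{energy * 1e3:.4f} mJ/inference")
```

Even this crude product of power and latency makes the core tradeoff visible: halving latency or halving average power both halve the energy budget each inference consumes.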

Who should apply?

We encourage students who are excited about looking at machine learning from a systems perspective and asking how we can make machine learning more efficient and usable. Students will work primarily with PyTorch and will set up and profile energy usage on low-power devices such as Raspberry Pis. Familiarity with Python is preferred; experience with PyTorch or with low-power devices is a plus but not required.

Relevant background: papers and videos

Xanda Schofield

Xanda Schofield is an Assistant Professor in Computer Science at Harvey Mudd College. Her work focuses on practical applications of unsupervised models of text, particularly topic models, to research in the humanities and social sciences. She completed her Ph.D. in 2019 at Cornell University advised by David Mimno, supported by an NDSEG fellowship. She enjoys baking and solving crosswords.

Project: User-Friendly Tuning of Unsupervised Models of Text

Topic models are a popular machine learning tool for exploring unstructured text across numerous domains, from data journalism and customer service to economics, history, and literature. However, people using these models to uncover patterns in a text collection often have specific themes in mind, but wish to see whether those themes arise naturally rather than via supervision. In this study, we will explore novel ways for investigators to encode modest hypotheses about what themes should arise into evaluation metrics, using a small collection of “highlights,” or instances of phenomena of interest. Using concepts from mutual information, semantic similarity, and probabilistic modeling, participants will devise new metrics that quantify how well the model separates themes from each other, and will rank candidate highlights to see whether they match human expectations about a theme. These metrics will provide guidance for several activities known to consume significant investigator time with topic models, including selecting the number of topics, choosing frequency thresholds for key terms, and deciding how to split longer texts into thematically unified passages.
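To make the mutual-information idea concrete, here is a minimal sketch (not the project's actual metrics) of normalized pointwise mutual information (NPMI), a standard building block for topic coherence scores; the toy corpus and function names are invented for illustration:

```python
import math
from itertools import combinations

def npmi(w1, w2, doc_sets, eps=1e-12):
    """Normalized pointwise mutual information of two words over documents."""
    n = len(doc_sets)
    p1 = sum(w1 in d for d in doc_sets) / n
    p2 = sum(w2 in d for d in doc_sets) / n
    p12 = sum(w1 in d and w2 in d for d in doc_sets) / n
    if p12 == 0:
        return -1.0  # words never co-occur: minimum NPMI
    return math.log(p12 / (p1 * p2)) / -math.log(p12 + eps)

def topic_coherence(top_words, docs):
    """Mean pairwise NPMI of a topic's top words: higher = more coherent."""
    doc_sets = [set(d.split()) for d in docs]
    pairs = list(combinations(top_words, 2))
    return sum(npmi(a, b, doc_sets) for a, b in pairs) / len(pairs)

docs = [
    "stars orbit the galaxy core",
    "telescope images of the galaxy",
    "stars seen through a telescope",
    "bread flour yeast and salt",
]
print(topic_coherence(["stars", "galaxy", "telescope"], docs))  # astronomy words
print(topic_coherence(["stars", "bread", "salt"], docs))        # mixed themes
```

The thematically unified word set scores higher than the mixed one; metrics of this flavor could similarly score how well a handful of investigator-supplied highlights cluster under a single learned topic.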

Who should apply?

We encourage applicants who are excited about applying machine learning tools outside of strictly scientific domains. Applicants must have prior coursework or comparable experience in at least data structures, discrete math, and probability, and should be comfortable coding in Python or JavaScript. Experience with web development, information theory, natural language processing, or machine learning is welcome but not required. Students should be ready not only to plot and visualize analyses of different metrics, but also to dig into the data “weeds” to read text and diagnose what might have caused unusual trends.

Relevant background: papers and videos

George Montañez

George Montañez is an assistant professor of computer science at Harvey Mudd College. He obtained his PhD in machine learning from Carnegie Mellon University, an MS in computer science from Baylor University, and a BS in computer science from the University of California, Riverside. Prof. Montañez previously worked in industry as a data scientist (Microsoft AI+R), a software engineer (Prestige Software), and a web developer (360 Hubs, Inc.). His current research explores why machine learning works from a search and dependence perspective, and identifies information constraints on general search processes. He is the director of the AMISTAD Lab.

Project: The Role of Bias in Machine Learning

The AMISTAD Lab (Artificial Machine Intelligence = Search Targets Awaiting Discovery) pursues foundational theoretical work in machine learning from a search and information theory perspective. This involves formalizing areas of machine learning as either searches or communication problems, and proving results about machine learning, information theory, and search within those frameworks. The lab focuses on the abstract underlying structure of learning and search problems. Our projects will center on forming new perspectives on learning processes so that we can exploit those insights for smarter learning algorithms and understand something new about reality.

You’ll have a lot of fun and solve tricky problems! Students typically end up with one or more publications from their projects, so summer research in our lab helps prepare you for graduate school!

Who should apply?

Underrepresented students are strongly encouraged to apply! Strong preference is given to students who are enthusiastic about the AMISTAD Lab and who have a keen interest in foundational issues in machine learning (e.g., what makes ML work). A good level of mathematical sophistication (the ability to write rigorous proofs, multivariable calculus, familiarity with probability theory and statistics) is helpful. Coding experience is useful, but theoretical projects typically require no coding. However, knowing LaTeX, or being willing to learn it, is a must.

Relevant background: papers and videos

Calden Wloka

Calden Wloka is an Assistant Professor of Computer Science at Harvey Mudd College where he runs the Laboratory for Cognition and Attention in Time and Space (Lab for CATS). His research focuses on visual cognition, both from the perspective of computer vision through the design and evaluation of computational vision models, and from the perspective of biological vision through psychophysics and eye tracking experiments to further our understanding of human vision. Calden completed his PhD in 2019 at York University advised by John Tsotsos.

Project: Attention and Feature Representation in Spatiotemporal Networks

Over the past decade, deep learning has become the predominant approach to computer vision problems. While deep neural networks have demonstrated highly impressive results in many areas, they still sometimes exhibit surprising brittleness. Often this brittleness appears to result from networks relying on unintended patterns and correlations in the training data, or optimizing over easier-to-learn but less reliable parts of the visual signal (e.g., Geirhos et al. (2019) demonstrated that deep networks tend to rely more heavily on local image texture than on shape). Network behavior in spatiotemporal processing is less well explored, however, with recent preliminary work pointing at a potential overreliance on static features (Kowal et al., 2022). Our work uses techniques developed to visualize the activity of deep networks (e.g., Selvaraju et al., 2017) to characterize and quantify how deep networks operating on videos rely on visual information to make their decisions. The specific aims of our work address two complementary threads: identifying which activity visualization techniques provide more useful and reliable insight into network behavior, and using these insights to develop novel training and data augmentation techniques that guide deep neural networks toward more robust spatiotemporal representations.

Who should apply?

We encourage students who are motivated by understanding the behavior and limits of current approaches to computer vision, and by thinking hard about ways to explore and quantify those aspects of vision models. Students will join an ongoing project with an established code base, so experience using Python and git is a major asset. Familiarity with computer vision or video processing and with deep learning (PyTorch in particular) will also be beneficial, though there will be time at the start of the summer to improve your understanding and gain familiarity with these tools.

Relevant background: papers and videos

Lucas Bang

Lucas Bang is an assistant professor of computer science at Harvey Mudd College. He obtained his PhD in computer science from The University of California, Santa Barbara.

Project: Formal Methods for Quantitative Program Analysis

My projects tackle problems of knowing what programs will do. These are challenging problems (uncomputable, in fact!). On top of that, we also want to know HOW MANY ways a program can do its thing.

Counting the ways that a program can do something requires analyzing the source code (static analysis), analyzing the running program (dynamic analysis), and performing combinatorial computations (via logic and a field called “model counting”). Doing so allows us to answer questions like “how safe is my code?”, “how many test inputs do I need in order to discover some interesting program behavior?”, “how complicated is my code?”, or “how can an autonomous agent optimize its interaction with my code?”.
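In its simplest form, model counting just asks: of all possible inputs, how many satisfy a condition? The sketch below is a brute-force illustration of that idea (real model counters built on SAT/SMT solvers do this symbolically rather than by enumeration); the branch condition and helper names are invented for the example:

```python
from itertools import product

def count_models(predicate, num_bits):
    """Count how many num_bits-wide inputs make `predicate` true.

    A brute-force 'model counter': enumerate every bit pattern and
    test it. Symbolic tools avoid this exponential enumeration.
    """
    return sum(1 for bits in product([0, 1], repeat=num_bits)
               if predicate(int("".join(map(str, bits)), 2)))

# How many 8-bit inputs x take the 'interesting' branch below?
def takes_branch(x):
    return (x % 3 == 0) and (x > 100)  # the branch condition under study

count = count_models(takes_branch, 8)
total = 2 ** 8
print(f"{count} of {total} inputs reach the branch")  # 52 of 256
```

Dividing the count by the total gives the probability a uniformly random input triggers the behavior, which is exactly the kind of quantity needed to answer “how many test inputs do I need?”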

Who should apply?

Excitement about programming languages, program analysis, combinatorics, logic, automated theorem proving, and/or abstract algebra is a real win. With this project you’ll emerge with even more of that excitement! Join in! Formal prior background in these areas is beneficial but definitely not necessary; we’ll learn together as we go.

Relevant Background: papers and videos