Computer Science Department Publications


The Harvey Mudd College Computer Science Department seems to have developed an algorithm for productivity. Four papers have recently been accepted for presentation and/or publication.

A paper by math major Bo Zhang ’17 and Assistant Professor of Computer Science Yi-Chieh (Jessica) Wu has been accepted to the International Symposium on Bioinformatics Research and Applications. “Co-estimation of gene trees and reconciliations under a duplication-loss-coalescence model” focuses on algorithms for reconciling gene trees with species trees. “One problem with existing reconciliation methods is that they do not account for error in the evolutionary tree for a gene family,” Wu explains. “Our work demonstrates that independent gene tree reconstruction followed by reconciliation degrades the accuracy of our evolutionary predictions. To address this challenge, we developed a probabilistic method for simultaneously reconstructing the gene tree and reconciling it with the species tree, and we demonstrate that this approach outperforms existing methods.”
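To make the reconciliation idea concrete, here is a minimal sketch of the classical least-common-ancestor (LCA) mapping of a gene tree onto a species tree, which labels each internal gene node as a speciation or a duplication. It is only an illustration of what “reconciling” two trees means, not the probabilistic duplication-loss-coalescence co-estimation method the paper develops; the tiny trees, node names and dictionary encoding are invented for the example.

    # Sketch: classical LCA reconciliation of a gene tree with a species tree.
    # Illustrative only; trees are encoded as dicts mapping a node to its children.
    SPECIES_TREE = {"root": ["AB", "C"], "AB": ["A", "B"]}
    GENE_TREE = {"g_root": ["g1", "g2"], "g1": ["a1", "b1"], "g2": ["a2", "c1"]}
    GENE_SPECIES = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}  # leaf -> species


    def postorder(tree, node):
        for child in tree.get(node, []):
            yield from postorder(tree, child)
        yield node


    def species_lca(species_tree, root, targets):
        """Return the lowest species-tree node whose subtree covers all of `targets`."""
        node = best = root
        while True:
            covering = [c for c in species_tree.get(node, [])
                        if targets <= set(postorder(species_tree, c))]
            if len(covering) != 1:
                return best
            node = best = covering[0]


    def lca_reconcile(gene_tree, gene_root, gene_species, species_tree, species_root):
        """Map each gene-tree node to a species-tree node and label internal nodes.

        A node mapped to the same species-tree node as one of its children is
        interpreted as a gene duplication; otherwise it is a speciation.
        """
        mapping, events = {}, {}
        for g in postorder(gene_tree, gene_root):
            children = gene_tree.get(g, [])
            if not children:                      # leaf: map to its own species
                mapping[g] = gene_species[g]
                continue
            leaves = {gene_species[x] for x in postorder(gene_tree, g)
                      if x in gene_species}
            mapping[g] = species_lca(species_tree, species_root, leaves)
            events[g] = ("duplication"
                         if any(mapping[c] == mapping[g] for c in children)
                         else "speciation")
        return mapping, events


    if __name__ == "__main__":
        mapping, events = lca_reconcile(GENE_TREE, "g_root", GENE_SPECIES,
                                        SPECIES_TREE, "root")
        print(mapping)  # g1 -> AB, g2 -> root, g_root -> root
        print(events)   # g_root is a duplication: it maps where its child g2 maps

Note that this sketch reconciles a fixed, already-reconstructed gene tree; the point of Wu and Zhang’s work is that reconstructing the gene tree and reconciling it should be done jointly, because reconstruction errors otherwise propagate into the reconciliation.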

Beginning in spring 2015, Zhang took the project from concept to publication. “He was responsible for deriving much of the math and implemented the majority of the algorithm,” Wu says.

A paper by Odaris Barios-Arciga ’18 (SCR), Noah Marcus ’17, Jasmine Zhu ’19, UC San Diego graduate student John Sarracino ’14, HMC Assistant Professor Ben Wiedermann and UCSD Professor Sorin Lerner has been accepted to the ACM CHI Conference on Human Factors in Computing Systems. The paper, “User-guided synthesis of interactive diagrams,” presents techniques that transform a static diagram into an interactive one without requiring the user to write code. The CHI conference takes place in Denver in May.

Alex Ozdemir ’17 will travel to Portugal in June to present “Clustering the space of maximum parsimony reconciliations in the duplication-transfer-loss model” at the Conference on Algorithms for Computational Biology. The paper is the result of several years of research and is co-authored by Michael Sheely ’17, Daniel Bork ’16, Ricson Cheng ’19 (Carnegie Mellon University), Reyna Hulett ’16, Jean Sung ’16, Jincheng Wang ’17 and computer science Professor Ran Libeskind-Hadas.

Like Wu and Zhang’s work, this team’s research deals with the problem of inferring how genes can have evolutionary histories that differ from those of the species in which they are found. A fundamental problem in computational biology is that of “reconciling” the evolutionary tree for a gene family with the evolutionary tree for a set of species to explain their differences. The number of “most plausible” reconciliations is often enormous, growing exponentially with the size of the trees. “There can be billions of good solutions,” Libeskind-Hadas says. “No biologist wants to sort through billions of good solutions.” The team developed fast algorithms for clustering the very large number of possible solutions into a small number of “best representative” solutions. The method uses what Libeskind-Hadas calls “some algorithmic trickery” to efficiently find these clusters without explicitly enumerating the exponentially large number of solutions.
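To make the clustering step concrete, the sketch below groups a handful of invented candidate reconciliations, each summarized here as a set of inferred events, into a few representative “medoid” solutions using a symmetric-difference distance. It enumerates the candidates explicitly, which is precisely what the team’s algorithms avoid doing when there are billions of solutions; the event-set encoding, the distance measure and the k-medoids loop are illustrative assumptions, not the paper’s method.

    # Sketch: picking a few "best representative" solutions from a candidate set.
    # The paper does this without enumerating the (exponentially many) solutions;
    # this toy version lists a handful of hypothetical candidates explicitly.
    import random

    # Each candidate reconciliation is summarized as a set of inferred events,
    # e.g. ("dup", "g1"); purely illustrative data.
    CANDIDATES = [
        frozenset({("dup", "g1"), ("loss", "s2")}),
        frozenset({("dup", "g1"), ("loss", "s3")}),
        frozenset({("transfer", "g2"), ("loss", "s2")}),
        frozenset({("transfer", "g2"), ("loss", "s4")}),
        frozenset({("dup", "g1"), ("transfer", "g2")}),
    ]


    def distance(a, b):
        """Symmetric-difference distance between two event sets."""
        return len(a ^ b)


    def assign(solutions, medoids):
        """Assign every solution to its nearest medoid."""
        clusters = {m: [] for m in medoids}
        for s in solutions:
            nearest = min(medoids, key=lambda m: distance(s, m))
            clusters[nearest].append(s)
        return clusters


    def k_medoids(solutions, k, iters=50, seed=0):
        """Pick k representative solutions (medoids) for the candidate set."""
        rng = random.Random(seed)
        medoids = rng.sample(solutions, k)
        for _ in range(iters):
            clusters = assign(solutions, medoids)
            # Re-pick each medoid as the member minimizing total in-cluster distance.
            new_medoids = [
                min(members, key=lambda c: sum(distance(c, s) for s in members))
                for members in clusters.values() if members
            ]
            if set(new_medoids) == set(medoids):
                break
            medoids = new_medoids
        return medoids, assign(solutions, medoids)


    if __name__ == "__main__":
        reps, clusters = k_medoids(CANDIDATES, k=2)
        for rep in reps:
            print(sorted(rep), "represents", len(clusters[rep]), "solutions")

Even in this toy setting, the output is the kind of summary a biologist could actually inspect: a couple of representative solutions, each standing in for a cluster of similar ones.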

Ozdemir, who worked on the research in the fall of 2015, is pursuing an Individual Program of Studies major in computational, mathematical and physical theory. After the trip to Portugal and a visit to Spain with family, Ozdemir plans to begin a PhD program in computer science.

A paper by Bork, Sung, Wang, Cheng and Libeskind-Hadas has been accepted to the journal Algorithms for Molecular Biology. Titled “On the computational complexity of the maximum parsimony reconciliation problem in the duplication-loss-coalescence model,” this work examines another phylogenetic tree reconciliation problem that was first studied by Wu and her collaborators at MIT. That pioneering work left open the question of the problem’s computational complexity. The HMC team proved that the problem is not only computationally hard, but that it is even computationally hard to find solutions that are approximately optimal. “That means that someone shouldn’t spend their PhD searching for optimal or even near-optimal algorithms for that specific problem,” Libeskind-Hadas says.