I am a first-year PhD student in CS at Cornell University, advised by Prof. Mohamed Abdelfattah. Previously, I completed my MSc in CS at ETH Zürich, working with Prof. Ryan Cotterell.
I care about efficient language modeling in the broad sense, aiming to make things go faster in terms of both compute efficiency and sample efficiency.
I am also a deep believer in the Bitter Lesson and in what has recently been called the Mismanaged Geniuses Hypothesis: that frontier models may be as much underused as underpowered, and that how we manage and decompose model calls matters as much as scaling the model itself. Much of my work is motivated by these views, aiming to build and scale long-context agentic capabilities for LMs.
Contact me at: ye52 [at] cornell [dot] edu
