About
I’m a Ph.D. candidate in the CILVR lab at NYU Courant, co-advised by Rob Fergus and Lerrel Pinto. My research is supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship.
I’m interested in generative models that can solve hard tasks in settings like code synthesis, decision-making, and open-ended/agentic interaction. Recently, I’ve been thinking about:
- What sorts of training recipes for LLMs and VLMs enable open-ended self-improvement?
- What does effective alignment look like for agents?
- How do the abstractions present in training data impact test-time scaling?
Currently, I’m a research scientist intern at Meta working on Llama 4 post-training for collaborative multi-agent tasks, supervised by Gregoire Mialon and Thomas Scialom. I will be on the job market in Fall 2025/Spring 2026!
Previously, I spent time improving small language model reasoners with the GenAI/AI Frontiers teams at Microsoft Research and studying ML-powered weather/climate simulators with the Applied Science team at Google Research.
I did my undergrad in mathematics and computer science at MIT, during which I was exceptionally lucky to be mentored by Kelsey R. Allen and Josh Tenenbaum.