About
I’m a Ph.D. candidate in the CILVR lab at NYU Courant, co-advised by Rob Fergus and Lerrel Pinto. My research is supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship.
I’m interested in generative models that can solve hard tasks in settings like code synthesis, decision-making, and open-ended/agentic interaction. Recently, I’ve been thinking about:
- How do the abstractions present in training data impact test-time scaling?
- What sorts of training recipes for LLMs and VLMs enable open-ended self-improvement?
My work touches on generative modeling and reinforcement learning across modalities (NLP, vision, simulators, etc.). I’m also broadly interested in scientific applications of deep learning, such as weather and climate modeling.
I will be joining the Llama Agents Team at Meta AI in Paris during Spring 2025. Previously, I spent time working on improving small language model reasoners with the GenAI/AI Frontiers Teams at Microsoft Research and studying ML-powered weather/climate simulators with the Applied Science Team at Google Research. I did my undergrad in mathematics and computer science at MIT, where I was exceptionally lucky to be mentored by Kelsey R. Allen and Josh Tenenbaum.