About
I’m a Ph.D. student in the CILVR lab at NYU Courant, co-advised by Rob Fergus and Lerrel Pinto. I’m fortunate to be supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship.
I’m interested in AI for open-ended interaction and problem-solving across domains like games, math, physics, and biology. Recently, I’ve been thinking about:
- How do we build models that can flexibly explore and interact with the world the way we (humans and animals) do?
- What sorts of structures, priors, and training, fine-tuning, and prompting paradigms can enable a large model to be an elegant proof solver, or to act as an effective surrogate for ODEs, PDEs, and non-linear dynamics?
My work touches on imitation and reinforcement learning across modalities (vision, NLP, simulators, etc.). I’m also broadly interested in differentiable computing.
I did my undergrad in math at MIT, where I was exceptionally lucky to be mentored by Kelsey R. Allen, Josh Tenenbaum, Gigliola Staffilani, Jörn Dunkel, and Raffaele Ferrari. I’ve spent summers at EPFL LCN and on the Applied Science Team at Google Research.