Hello, I am currently working with Professors Stefanie Tellex and George Konidaris at Brown University as a Research Assistant, focusing on embodied multimodal AI, robotics, and planning. I previously worked with Professors Andrew McCallum, Hao Zhang, and Hong Yu at the University of Massachusetts Amherst on NLP and robotics.

I'm a 2023 M.S. graduate from the University of Massachusetts Amherst, specializing in Computer Science with a focus on AI and Data Science. Before that, I graduated with a B.S. in Computer Science in 2022 from the same institution.

Throughout my Bachelor's, I focused on gaining industry experience through internships as a mobile developer and a data engineer. During my Master's program, I worked as a data science intern on time series data, honing my skills in predictive analytics and machine learning.

Research Interests

How can we bring generally intelligent social robots, like Rosey from The Jetsons, into society as assistants in our homes and workplaces? As people get busier and older, they need more personalized assistance. To help, robots must be general-purpose, able to handle a wide variety of tasks. My research ambition is to see a household robot that serves as both a capable helper and an engaging companion. Such a robot must be able to understand and execute complex open-vocabulary mobile manipulation tasks from verbal instructions while also fostering meaningful interactions. Two key directions I have explored so far are improving natural language task specification and leveraging common sense to reason about the world, both in the context of instruction-following mobile manipulation robots.

My research thus far has focused on using behavior cloning and classical symbolic planning to enable robots to understand and execute language-conditioned, long-horizon mobile manipulation tasks. Specifically, I am interested in neuro-symbolic and end-to-end approaches that process natural language commands together with incoming visual observations of the environment to either predict action trajectories, supervised by expert demonstrations, or formulate/learn abstract task plans that can be executed by off-the-shelf motion planners. Because end-to-end approaches require many expert demonstrations, another interest of mine is making these models more data efficient through additional modalities/capabilities and environmentally diverse data. Relatedly, I have recently become interested in using reinforcement learning to fine-tune large pretrained behavior cloning models, such as robot foundation models, since it can improve sample efficiency on specific tasks and domains.
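To make the end-to-end direction concrete, below is a minimal sketch of language-conditioned behavior cloning: an instruction embedding and a visual observation embedding are fused and regressed onto an expert action with a supervised loss. The network sizes, the 7-dimensional action (e.g., end-effector pose plus gripper), and the dummy demonstration data are illustrative assumptions, not code from my research projects.

```python
# Minimal sketch of language-conditioned behavior cloning (illustrative only).
# Assumes precomputed language/vision embeddings (e.g., from a frozen encoder);
# all dimensions and the dummy demonstrations below are placeholders.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, lang_dim=512, vis_dim=512, action_dim=7):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, 256)  # project instruction embedding
        self.vis_proj = nn.Linear(vis_dim, 256)    # project visual observation embedding
        self.head = nn.Sequential(                 # fuse and regress an action
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, action_dim),            # e.g., end-effector pose + gripper
        )

    def forward(self, lang_emb, vis_emb):
        fused = torch.cat([self.lang_proj(lang_emb), self.vis_proj(vis_emb)], dim=-1)
        return self.head(fused)

# Dummy "expert demonstrations": (language embedding, visual embedding, expert action).
demonstrations = [(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 7))
                  for _ in range(10)]

policy = LanguageConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Behavior cloning: minimize the L2 error between predicted and expert actions.
for lang_emb, vis_emb, expert_action in demonstrations:
    pred = policy(lang_emb, vis_emb)
    loss = nn.functional.mse_loss(pred, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the real systems I work with replace the dummy encoders and data with pretrained vision-language backbones and teleoperated demonstrations, but the supervised imitation objective is the same.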

News