Bio

Hi! I am currently a Research Assistant at Brown University, working with Professors Stefanie Tellex and George Konidaris on embodied multimodal AI, planning, and robotics. I previously worked with Professors Andrew McCallum, Hao Zhang, and Hong Yu at the University of Massachusetts Amherst on NLP and robotics.

I earned my Master of Science in Computer Science from the University of Massachusetts Amherst in 2023, with a focus on AI and Data Science. Before that, I completed my Bachelor of Science at the same institution in 2022.

Throughout my Bachelor's, I gained industry experience through internships as a mobile developer and a data engineer. During my Master's, I worked as a Data Science intern on time series data, honing my skills in predictive analytics and machine learning.


Research Interests

How can we bring generally intelligent social robots, like Rosey from The Jetsons, into society as assistants in our homes and workplaces? As people grow busier and older, they need more personalized assistance. To help, robots must be general-purpose, designed to handle a wide variety of tasks, and capable of zero-shot learning, i.e., able to handle novel tasks. For most people, natural language is the easiest way to communicate with machines. Thus, robots must be able to understand and ground (i.e., link) their perceptual data, such as audio (language and sounds), vision, and tactile sensations, in the physical world, then output actions to complete commanded tasks and communicate back what happened. That is the focal point of my research: developing cognitive, general-purpose assistive robots capable of reasoning and zero-shot learning by grounding multimodal sensory inputs and planning their actions.

In my research thus far, I integrate Natural Language Processing (NLP), Computer Vision (CV), Imitation Learning (Behavior Cloning), and classical planning to enable robots to understand and execute mobile manipulation tasks. Specifically, I am interested in neuro-symbolic and end-to-end approaches that leverage NLP to process natural language commands given to robots. To accomplish tasks in their environment, robots must use CV to interpret visual data. Conditioned on this textual and perceptual information, robots can either predict action trajectories, supervised by expert demonstrations, or formulate/learn abstract task plans that off-the-shelf motion planners can execute. However, end-to-end approaches require many expert demonstrations, so I am also interested in making models more sample-efficient through the use of additional modalities/capabilities and environmentally diverse data.

Master of Science 2023

Computer Science (AI & Data Science)

Bachelor of Science 2022

Computer Science

News