We study intelligent robot learning in dynamic environments, focusing on enabling robots to reason about their surroundings, predict future states, and adapt their behavior to perform complex tasks efficiently. Our research explores how robots can build world models to make informed decisions, use visual navigation for goal-directed movement, and leverage 3D Gaussian Splatting for richer spatial perception.
We are also interested in human-inspired learning, in which robots acquire skills through imitation and reinforcement learning and generalize that knowledge across diverse tasks. We further investigate how language models can enhance robot intelligence, enabling more natural human-robot interaction and task understanding. Our work integrates task and motion planning, semantic mapping, skill chaining, and multi-modal perception to build robots that operate robustly in real-world environments and collaborate effectively with humans.
[2025.03.10] NuRI Lab (Neural Robot Intelligence Lab) is now open!