Topic
Interactive Learning Toward In-hand Manipulation of Deformable Objects


In-hand manipulation of deformable objects offers unprecedented opportunities to address various real-world tasks, such as binding and taping. This project aims to develop a visuotactile in-hand manipulation framework that repositions and reorients deformable objects in the hand as desired. Along this line of research, we pursue three research thrusts: 1) a physics-informed reinforcement learning (RL) framework, 2) an interactive RL framework, and 3) a Sim2Real transfer learning method.

Keywords: (Inverse) Reinforcement learning, Deformable object manipulation, Sim2Real transfer learning
Selected paper: [CoRL19], [RA-L23]
Task-and-Motion Planning


We aim to introduce a task-and-motion planning (TAMP) framework that solves complex, long-horizon human tasks. To address completeness, optimality, and robustness issues, we are working on various task-planning and motion-planning approaches. We will demonstrate a generalizable TAMP framework under a human operator's cooperative or adversarial interventions.

Keywords: Temporal logic, Neuro-symbolic planning, Scene graph, Behavior tree, Collision avoidance
Selected paper: [ICRA21], [RA-L22], [ICRA24]
Language-guided Quadrupedal Robot Navigation & Manipulation


Natural language is a convenient means of delivering a user's high-level instructions. We introduce a language-guided manipulation framework that learns common-sense knowledge from natural language instructions and corresponding motion demonstrations. We apply these technologies to various quadrupedal robots, such as the Boston Dynamics Spot!

Keywords: Quadruped robot, Semantic SLAM, Natural language grounding
Selected paper: [IJRR20], [FR22], [AAAI24]
Machine Common Sense Learning for Robots


Interpreting underspecified instructions requires environmental context and background knowledge about how to accomplish complex tasks. We investigate how to incorporate human-like commonsense knowledge into natural language understanding and task execution.

Keywords: Large Language Models
Selected paper: [CoRL18]