Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Synthesizing Navigation Abstractions for Planning with Portable Manipulation Skills

Published in CoRL, 2023

We address the problem of efficiently learning high-level abstractions for task-level robot planning. Existing approaches require large amounts of data and fail to generalize learned abstractions to new environments. To address this, we propose to exploit the independence between spatial and non-spatial state variables in the preconditions of manipulation and navigation skills, mirroring the manipulation-navigation split in robotics research. Given a collection of portable manipulation abstractions (i.e., object-centric manipulation skills paired with matching symbolic representations), we derive an algorithm to automatically generate navigation abstractions that support mobile manipulation planning in a novel environment. We apply our approach in simulation (AI2Thor) and on real robot hardware with a coffee preparation task, efficiently generating plannable representations for mobile manipulators in just a few minutes of robot time, significantly outperforming state-of-the-art baselines.

Download here
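
To make the spatial/non-spatial split concrete, here is a minimal sketch of how a portable manipulation skill's precondition could be factored into an object-centric symbolic part that transfers across environments and a spatial part that must be re-grounded in each new one. All names and data structures here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: factoring a skill's precondition into a portable,
# object-centric symbolic part and an environment-specific spatial part.
# These names are illustrative only and do not come from the paper.
from dataclasses import dataclass


@dataclass(frozen=True)
class SymbolicPrecondition:
    """Non-spatial facts about objects; portable across environments."""
    predicates: frozenset[str]       # e.g. {"holding(mug)", "machine_on"}


@dataclass(frozen=True)
class SpatialPrecondition:
    """Where the robot must be for the skill to run; environment-specific."""
    region: str                      # e.g. "near(coffee_machine)"


@dataclass
class ManipulationSkill:
    name: str
    symbolic: SymbolicPrecondition   # reused as-is in a new environment
    spatial: SpatialPrecondition     # re-grounded in each new environment


def navigation_symbols(skills: list[ManipulationSkill]) -> set[str]:
    """Collect the regions the robot must be able to reach; these define the
    navigation abstractions needed to plan in a new environment."""
    return {skill.spatial.region for skill in skills}


if __name__ == "__main__":
    brew = ManipulationSkill(
        "brew_coffee",
        SymbolicPrecondition(frozenset({"holding(mug)", "machine_on"})),
        SpatialPrecondition("near(coffee_machine)"),
    )
    print(navigation_symbols([brew]))   # {'near(coffee_machine)'}
```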

Robot Task Planning under Local Observability

Published in ICRA, 2024

Real-world robot task planning is computationally intractable in part due to the complexity of dealing with partial observability. One common approach to reducing planning complexity is to introduce additional structure into the decision process, such as mixed observability, factored state representations, or temporally extended actions. We introduce a novel formulation, the locally observable Markov decision process, which models the case where a robot has access to subroutines for seeking and accurately observing objects using its sensors. The remaining partial observability stems from the fact that robot sensors are range-limited and line-of-sight: objects occluded or outside sensor range are unobserved, but objects that fall within view of its sensors can be fully observed using its observation subroutine. This model results in a three-stage planning process: first, the robot attempts to solve the task using only observed objects; if that fails, it generates a set of candidate objects that, if observed, could result in a feasible plan; finally, it plans to find those candidate objects by searching unobserved regions using its seek subroutine, replanning after each new object is observed. By combining this formulation with off-the-shelf Markov planners, we are able to outperform state-of-the-art solvers for both object-oriented POMDP and MDP analogues with the same task specification. We then demonstrate the usefulness of our formulation by solving a partially observable planning task on a mobile robot platform.

Download here
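
The three-stage planning process described in the abstract can be summarized as a simple loop: plan with what is observed, hypothesize what is missing, seek it, and replan. The sketch below assumes hypothetical helper callables (plan_with, candidate_objects, seek_and_observe) supplied by the caller; it illustrates the idea and is not the paper's code.

```python
from typing import Callable, Set


def locally_observable_planning(
    task,
    observed: Set[str],
    plan_with: Callable,          # plan_with(task, observed) -> plan or None
    candidate_objects: Callable,  # candidate_objects(task, observed) -> set of object names
    seek_and_observe: Callable,   # seek_and_observe(candidates) -> newly observed objects
):
    """Three-stage loop: plan with observed objects, hypothesize missing ones,
    seek them, and replan after each new observation. All helpers are
    placeholders supplied by the caller."""
    while True:
        # Stage 1: attempt to solve the task using only observed objects.
        plan = plan_with(task, observed)
        if plan is not None:
            return plan

        # Stage 2: generate candidate objects that, if observed,
        # could make the task feasible.
        candidates = candidate_objects(task, observed)
        if not candidates:
            return None  # no hypothesis can make the task feasible

        # Stage 3: search unobserved regions for the candidates, then replan.
        newly_seen = seek_and_observe(candidates)
        if not newly_seen:
            return None  # search exhausted without new observations
        observed |= newly_seen
```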

Talks

Teaching