Robot Task Planning under Local Observability
Published in ICRA, 2024
Real-world robot task planning is computationally intractable in part due to the complexity of dealing with partial observability. One common approach to reducing planning complexity is to introduce additional structure into the decision process, such as mixed observability, factored state representations, or temporally-extended actions. We introduce a novel formulation, the locally observable Markov decision process, which models the case where a robot has access to subroutines for seeking and accurately observing objects using its sensors. The remaining partial observability stems from the fact that robot sensors are range-limited and line-of-sight: objects that are occluded or outside sensor range remain unobserved, but objects that fall within view of its sensors can be fully observed using its observation subroutine. This model results in a three-stage planning process: first, the robot attempts to solve the task using only observed objects; if that fails, it generates a set of candidate objects that, if observed, could result in a feasible plan; finally, it plans to find those candidate objects by searching unobserved regions using its seek subroutine, replanning after each new object is observed. By combining this formulation with off-the-shelf Markov planners, we are able to outperform state-of-the-art solvers for both object-oriented POMDP and MDP analogues with the same task specification. We then demonstrate the usefulness of our formulation by solving a partially-observable planning task on a mobile robot platform.
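The three-stage process above can be summarized as a replanning loop. The following is a minimal Python sketch, not the paper's implementation: `solve`, `candidates_for`, and `seek` are hypothetical stand-ins for the planning, candidate-generation, and seek subroutines described in the abstract.

```python
from typing import Callable, List, Optional, Set

def plan_under_local_observability(
    solve: Callable[[Set[str]], Optional[List[str]]],      # stage 1: plan over observed objects
    candidates_for: Callable[[Set[str]], Set[str]],        # stage 2: objects that could enable a plan
    seek: Callable[[Set[str]], Set[str]],                  # stage 3: search unobserved regions
    observed: Set[str],                                    # objects currently within sensor view
) -> Optional[List[str]]:
    """Illustrative three-stage loop: plan, propose candidates, seek, replan."""
    while True:
        # Stage 1: attempt the task using only the observed objects.
        plan = solve(observed)
        if plan is not None:
            return plan

        # Stage 2: generate candidate objects that, if observed,
        # could result in a feasible plan.
        candidates = candidates_for(observed)
        if not candidates:
            return None  # no candidate object can make the task feasible

        # Stage 3: use the seek subroutine to search unobserved regions
        # for the candidates; replan after each new observation.
        newly_observed = seek(candidates)
        if not newly_observed:
            return None  # candidates exist but could not be found
        observed |= newly_observed  # loop back to stage 1 with updated state
```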
Download here