Information-driven Affordance Discovery
for Efficient Robotic Manipulation
Pietro Mazzaglia†
Qualcomm AI Research
Ghent University
Taco Cohen‡
Qualcomm AI Research
Daniel Dijkman
Qualcomm AI Research
Abstract
Robotic affordances, which describe what actions can be taken in a given situation, can aid robotic manipulation. However, learning affordances typically requires large, expensive annotated datasets of interactions or demonstrations. In this work, we argue that well-directed interactions with the environment can mitigate this problem, and we propose an information-based measure that augments the agent's objective to accelerate affordance discovery. We provide a theoretical justification for our approach and empirically validate it in both simulated and real-world tasks. Our method, which we dub IDA, enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, and opening drawers, strongly improving data efficiency in simulation, and it allows us to learn grasping affordances in a small number of interactions on a real-world setup with a UFACTORY XArm 6 robot arm.
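The idea of directing interactions via an information bonus can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's actual formulation: we model each candidate action's affordance as a Bernoulli success probability and add its binary entropy as an exploration bonus, so uncertain actions are probed more often.

```python
import math

def binary_entropy(p):
    # Entropy (in nats) of a Bernoulli(p) outcome; 0 when the outcome is certain.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def augmented_score(success_prob, beta=1.0):
    # Illustrative objective: exploitation term (predicted success)
    # plus an information bonus weighted by beta. High-entropy,
    # uncertain actions get a larger bonus, steering data collection
    # toward interactions that are informative about affordances.
    return success_prob + beta * binary_entropy(success_prob)
```

As a usage example, with `beta=1.0` a maximally uncertain action (`p=0.5`) can outscore a confidently successful one (`p=0.95`), since the information bonus outweighs the small gap in predicted success; as the affordance model becomes certain, the bonus vanishes and the agent acts greedily.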
Efficient visual affordance learning in simulation
Online visual affordance learning in the real world
Citation
@inproceedings{Mazzaglia2024IDA,
  title={Information-driven Affordance Discovery for Efficient Robotic Manipulation},
  author={Pietro Mazzaglia and Taco Cohen and Daniel Dijkman},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2024},
}