Imitation learning has demonstrated significant potential for high-precision manipulation tasks that rely on visual feedback from cameras. However, it is common practice in imitation learning to fix cameras in place, which leads to issues such as occlusion and a limited field of view. Furthermore, cameras are often placed in broad, general-purpose locations rather than at viewpoints tailored to the robot's task. In this work, we investigate the utility of active vision (AV) for imitation learning and manipulation, in which the robot learns, in addition to the manipulation policy, an AV policy from human demonstrations that dynamically adjusts the camera viewpoint to gather better information about the environment and the task at hand. We introduce AV-ALOHA, a new bimanual teleoperation robot system with AV that extends the ALOHA 2 robot system with an additional 7-DoF robot arm carrying only a stereo camera and solely tasked with finding the best viewpoint. This camera streams stereo video to an operator wearing a virtual reality (VR) headset, allowing the operator to control the camera pose with head and body movements. The system provides an immersive teleoperation experience with bimanual first-person control, enabling the operator to dynamically explore and search the scene while simultaneously interacting with the environment. We conduct imitation learning experiments with our system both in the real world and in simulation, across a variety of tasks that emphasize viewpoint planning. Our results demonstrate the effectiveness of human-guided AV for imitation learning, showing significant improvements over fixed cameras in tasks with limited visibility.
The AV-ALOHA system enables intuitive data collection using a VR headset for AV and either VR controllers or leader arms for manipulation. This captures full-body and head movements to teleoperate both our real and simulated systems, which record video from six different cameras and provide training data for our AV imitation learning policies, as sketched below.
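To make the teleoperation loop above concrete, here is a minimal sketch of how a streamed VR headset pose could drive the 7-DoF camera arm while the manipulation arms follow the VR controllers or leader arms. All function names and interfaces below (get_headset_pose, solve_ik, and so on) are hypothetical placeholders for illustration, not the actual AV-ALOHA API.

import numpy as np

# Hypothetical stand-ins for the real VR and robot interfaces; not the AV-ALOHA API.
def get_headset_pose():
    """Operator head pose as (position[3], quaternion[4]), assumed already expressed in the robot frame."""
    return np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0])

def get_manipulation_commands():
    """Per-arm commands from the VR controllers or leader arms (two arms)."""
    return [np.zeros(6), np.zeros(6)]

def solve_ik(position, quaternion, seed):
    """Inverse kinematics for the 7-DoF camera arm; returns a joint configuration near the seed."""
    return seed

def command_robot(camera_joints, arm_commands):
    """Send the latest commands to the camera arm and both manipulation arms."""

def capture_frame():
    """Grab a synchronized set of images from the six cameras plus robot proprioception."""
    return {}

def collect_demonstration(num_steps=500):
    episode = []
    camera_joints = np.zeros(7)
    for _ in range(num_steps):
        # The camera arm tracks the operator's head, providing first-person active vision.
        head_pos, head_quat = get_headset_pose()
        camera_joints = solve_ik(head_pos, head_quat, seed=camera_joints)
        # The manipulation arms follow the VR controllers or leader arms as usual.
        arm_commands = get_manipulation_commands()
        command_robot(camera_joints, arm_commands)
        # Each step stores images and robot state as training data for the AV and manipulation policies.
        episode.append(capture_frame())
    return episode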
We collect data and run experiments both in the real world and in simulation, across a variety of tasks that emphasize viewpoint planning. We evaluate the ACT imitation learning policy under different camera configurations, with and without active vision. Our results show that active vision can significantly improve the performance of imitation learning when there are occlusions or limited visibility.
@misc{chuang2024activevisionneedexploring,
  title={Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation},
  author={Ian Chuang and Andrew Lee and Dechen Gao and Iman Soltani},
  year={2024},
  eprint={2409.17435},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2409.17435},
}