Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation

¹University of California, Berkeley  ²University of California, Davis
*Indicates Equal Contribution


We introduce AV-ALOHA, a bimanual robot system with a 7-DoF active-vision camera arm, built as an extension of ALOHA 2. The system offers an immersive VR teleoperation experience and serves as a platform for evaluating active vision in imitation learning and manipulation.

Abstract

Imitation learning has demonstrated significant potential in performing high-precision manipulation tasks using visual feedback from cameras. However, it is common practice in imitation learning for cameras to be fixed in place, resulting in issues like occlusion and limited field of view. Furthermore, cameras are often placed in broad, general locations, without an effective viewpoint specific to the robot's task. In this work, we investigate the utility of active vision (AV) for imitation learning and manipulation, in which the robot learns, in addition to the manipulation policy, an AV policy from human demonstrations that dynamically changes the camera viewpoint to obtain better information about the environment and the given task. We introduce AV-ALOHA, a new bimanual teleoperation robot system with AV, an extension of the ALOHA 2 robot system, incorporating an additional 7-DoF robot arm that carries only a stereo camera and is solely tasked with finding the best viewpoint. This camera streams stereo video to an operator wearing a virtual reality (VR) headset, allowing the operator to control the camera pose using head and body movements. The system provides an immersive teleoperation experience with bimanual first-person control, enabling the operator to dynamically explore and search the scene while simultaneously interacting with the environment. We conduct imitation learning experiments with our system both in the real world and in simulation, across a variety of tasks that emphasize viewpoint planning. Our results demonstrate the effectiveness of human-guided AV for imitation learning, showing significant improvements over fixed cameras in tasks with limited visibility.
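
To make the learned-AV idea concrete, below is a minimal PyTorch sketch of an ACT-style policy whose predicted action chunk covers both manipulator arms and the camera arm in a single output. The 21-dim action layout (7 per manipulator arm including the gripper, plus 7 for the camera arm), the three-view input, and all class and parameter names are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a joint manipulation + active-vision policy.
# Assumed (hypothetical) action layout: 7 DoF per ALOHA arm (6 joints +
# gripper) for the two manipulator arms, plus 7 camera-arm joints -> 21 dims.
import torch
import torch.nn as nn

class JointAVPolicy(nn.Module):
    def __init__(self, num_cams=3, proprio_dim=21, action_dim=21, chunk=50):
        super().__init__()
        self.chunk, self.action_dim = chunk, action_dim
        # Shared CNN backbone applied to every camera view.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fuse = nn.Linear(64 * num_cams + proprio_dim, 256)
        # One forward pass predicts a whole action chunk (ACT-style),
        # covering BOTH the manipulator arms and the camera arm.
        self.head = nn.Linear(256, chunk * action_dim)

    def forward(self, images, proprio):
        # images: (B, num_cams, 3, H, W); proprio: (B, proprio_dim)
        feats = [self.backbone(images[:, i]) for i in range(images.shape[1])]
        z = torch.relu(self.fuse(torch.cat(feats + [proprio], dim=-1)))
        return self.head(z).view(-1, self.chunk, self.action_dim)

policy = JointAVPolicy()
acts = policy(torch.randn(1, 3, 3, 96, 96), torch.randn(1, 21))
print(acts.shape)  # torch.Size([1, 50, 21])

Because camera motion is just another slice of the action vector here, the same demonstration data that supervises manipulation also supervises viewpoint selection.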

AV-ALOHA



The AV-ALOHA system enables intuitive data collection: a VR headset controls the active-vision camera, while either VR controllers or leader arms control manipulation. This captures the operator's full body and head movements to teleoperate both our real and simulated systems, which record video from six different cameras and provide training data for our AV imitation learning policies.
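
As a rough illustration of what recording one such episode might look like, the sketch below writes six camera streams plus joint states into an ALOHA-style HDF5 file. The camera names, the 21-dim state, and the get_frame/get_qpos helpers are hypothetical stand-ins for the real teleoperation interface.

# Sketch of an episode-recording loop, assuming an ALOHA-style HDF5 layout.
import h5py
import numpy as np

CAMERAS = ["cam_av_left", "cam_av_right", "cam_high", "cam_low",
           "cam_left_wrist", "cam_right_wrist"]  # hypothetical names
T, H, W = 100, 240, 320

def get_frame(cam, t):   # stand-in for a real camera grab
    return np.zeros((H, W, 3), dtype=np.uint8)

def get_qpos(t):         # stand-in: 21-dim joint state (2 arms + camera arm)
    return np.zeros(21, dtype=np.float32)

with h5py.File("episode_0.hdf5", "w") as f:
    obs = f.create_group("observations")
    imgs = obs.create_group("images")
    for cam in CAMERAS:
        imgs.create_dataset(cam, (T, H, W, 3), dtype="uint8",
                            chunks=(1, H, W, 3), compression="gzip")
    qpos = obs.create_dataset("qpos", (T, 21), dtype="float32")
    action = f.create_dataset("action", (T, 21), dtype="float32")
    for t in range(T):
        for cam in CAMERAS:
            imgs[cam][t] = get_frame(cam, t)
        qpos[t] = get_qpos(t)
        action[t] = get_qpos(t)  # teleop targets; placeholder here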

Results



We collect data and run experiments both in the real world and in simulation across a variety of tasks that emphasize viewpoint planning. We evaluate the imitation learning policy ACT under different camera configurations, with and without active vision. Our results show that active vision significantly improves imitation learning performance when occlusions or limited visibility are present; a sketch of how such a comparison could be scripted follows.
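
In the sketch below, StubEnv, StubPolicy, the configuration names, and the env.step return signature are all illustrative assumptions rather than the paper's evaluation code; the point is simply that each policy only ever sees the views in its configuration.

# Sketch of comparing camera configurations at evaluation time.
import numpy as np

CONFIGS = {
    "fixed": ["cam_high", "cam_low"],
    "active_vision": ["cam_av_left", "cam_av_right"],
}

class StubEnv:
    """Stand-in for the real task environment."""
    def reset(self):
        return {"images": {c: np.zeros((240, 320, 3), np.uint8)
                           for c in sum(CONFIGS.values(), [])},
                "qpos": np.zeros(21, np.float32)}
    def step(self, action):
        # Returns (obs, done, success); a real env would simulate physics.
        return self.reset(), True, np.random.rand() < 0.5

class StubPolicy:
    """Stand-in for a trained ACT checkpoint."""
    def act(self, images, qpos):
        return np.zeros(21, np.float32)

def success_rate(policy, env, cams, episodes=20, horizon=400):
    wins = 0
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(horizon):
            images = np.stack([obs["images"][c] for c in cams])
            obs, done, success = env.step(policy.act(images, obs["qpos"]))
            if done:
                wins += int(success)
                break
    return wins / episodes

env, policy = StubEnv(), StubPolicy()
for name, cams in CONFIGS.items():
    print(name, success_rate(policy, env, cams))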

Autonomous Rollouts

Autonomous Rollouts (Active Vision Camera)

BibTeX

@misc{chuang2024activevisionneedexploring,
    title={Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation}, 
    author={Ian Chuang and Andrew Lee and Dechen Gao and Iman Soltani},
    year={2024},
    eprint={2409.17435},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2409.17435}, 
}