InterACT: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation

1University of California, Davis 2University of California, Berkeley

8th Conference on Robot Learning (CoRL 2024), Munich, Germany


Abstract

Bimanual manipulation presents unique challenges compared to unimanual tasks due to the complexity of coordinating two robotic arms. In this paper, we introduce InterACT: Inter-dependency aware Action Chunking with Hierarchical Attention Transformers, a novel imitation learning framework designed specifically for bimanual manipulation. InterACT leverages hierarchical attention mechanisms to effectively capture inter-dependencies between dual-arm joint states and visual inputs. The framework comprises a Hierarchical Attention Encoder, which processes multi-modal inputs through segment-wise and cross-segment attention mechanisms, and a Multi-arm Decoder, which generates each arm's action predictions in parallel while sharing information between the arms through synchronization blocks that provide the other arm's intermediate output as context. Our experiments, conducted on a variety of simulated and real-world bimanual manipulation tasks, demonstrate that InterACT outperforms existing methods. Detailed ablation studies further validate the significance of key components, including the impact of CLS tokens, cross-segment encoders, and synchronization blocks on task performance.

InterACT



The Hierarchical Attention Encoder consists of multiple blocks, each containing segment-wise encoders followed by a cross-segment encoder. The encoder output is passed to the Multi-arm Decoder, which consists of Arm1- and Arm2-specific decoders that process the input segments independently. A synchronization block allows information sharing between the two decoders.
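To make the data flow concrete, the sketch below walks through one encoder block (segment-wise attention with a prepended CLS token per segment, then cross-segment attention over the CLS tokens) and a synchronized two-arm decoding step. This is a minimal single-head NumPy illustration under assumed toy dimensions; all names, shapes, and the simplified attention layout are our own and not the authors' implementation.

```python
# Minimal sketch of InterACT's hierarchical attention flow.
# Assumptions (not from the paper): embedding dim D=16, single-head
# attention, no layer norm / feed-forward, toy segment sizes.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding dimension (assumed)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Single-head scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def segment_wise(segment, cls_tok):
    """Self-attention within one segment, with a prepended CLS token."""
    x = np.vstack([cls_tok, segment])
    out = attention(x, x, x)
    return out[:1], out[1:]  # updated CLS token, updated segment tokens

def cross_segment(cls_tokens):
    """Self-attention across the CLS tokens of all segments."""
    x = np.vstack(cls_tokens)
    return attention(x, x, x)

# Three input segments: arm-1 joints, arm-2 joints, image features.
segments = [rng.normal(size=(n, D)) for n in (7, 7, 12)]
cls = [rng.normal(size=(1, D)) for _ in segments]

# One encoder block: segment-wise attention, then cross-segment attention.
cls, segments = map(list, zip(*[segment_wise(s, c) for s, c in zip(segments, cls)]))
fused = cross_segment(cls)  # (3, D): inter-segment summary

# Multi-arm decoder: each arm cross-attends to the encoder memory,
# then a synchronization step cross-attends to the OTHER arm's
# intermediate output, mirroring the information sharing above.
chunk = 4  # action-chunk length (assumed)
q1, q2 = rng.normal(size=(chunk, D)), rng.normal(size=(chunk, D))
mem = np.vstack(segments + [fused])
h1, h2 = attention(q1, mem, mem), attention(q2, mem, mem)
a1 = attention(h1, h2, h2)  # arm 1 conditioned on arm 2's intermediate output
a2 = attention(h2, h1, h1)  # arm 2 conditioned on arm 1's intermediate output
print(a1.shape, a2.shape)  # (4, 16) (4, 16)
```

Each arm ultimately produces a chunk of future actions, so the decoder queries here stand in for the learned action-chunk queries; the synchronization step is what couples the two otherwise independent decoders.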

Results



Success rate (%) for tasks adapted from ACT (top) and our original tasks (bottom). For the simulation tasks, the training data came from human demonstrations, and results are averaged across 3 random seeds with 50 episodes each. The real-world tasks were also evaluated over 50 episodes.

Autonomous Rollouts

Attention weights of CLS tokens over time

BibTeX

@article{lee2024interact,
    title={InterACT: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation},
    author={Lee, Andrew and Chuang, Ian and Chen, Ling-Yuan and Soltani, Iman},
    journal={arXiv preprint arXiv:2409.07914},
    year={2024}
}