BFA: Best-Feature-Aware Fusion for Multi-View Fine-grained Manipulation

20 February 2025
Zihan Lan
Weixin Mao
Haosheng Li
Le Wang
Tiancai Wang
Haoqiang Fan
Osamu Yoshie
Abstract

In real-world scenarios, multi-view cameras are typically employed for fine-grained manipulation tasks. Existing approaches (e.g., ACT) tend to treat multi-view features equally and directly concatenate them for policy learning. However, this introduces redundant visual information and higher computational cost, leading to ineffective manipulation. A fine-grained manipulation task typically involves multiple stages, and the view that contributes most varies over the course of the task. In this paper, we propose a plug-and-play best-feature-aware (BFA) fusion strategy for multi-view manipulation tasks that is adaptable to various policies. Built upon the visual backbone of the policy network, we design a lightweight network to predict the importance score of each view. Based on the predicted importance scores, the reweighted multi-view features are fused and fed into the end-to-end policy network, enabling seamless integration. Notably, our method demonstrates outstanding performance on fine-grained manipulation. Experimental results show that our approach outperforms multiple baselines by 22-46% in success rate on different tasks. Our work provides new insights and inspiration for tackling key challenges in fine-grained manipulation.
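The description above lends itself to a short sketch. Below is a minimal PyTorch-style illustration of the view-reweighting idea, assuming per-view features from a shared visual backbone; the module name BFAFusion, the MLP scoring head, and the flatten-based fusion are illustrative assumptions, not the authors' exact implementation.

# Minimal PyTorch sketch of best-feature-aware view reweighting.
# Assumptions: per-view features of shape (batch, num_views, feat_dim)
# from a shared visual backbone; names and the fusion choice are illustrative.
import torch
import torch.nn as nn


class BFAFusion(nn.Module):
    """Predict an importance score per camera view and reweight its features."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Lightweight scoring head on top of the backbone features.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, feat_dim)
        scores = self.scorer(view_feats).squeeze(-1)     # (batch, num_views)
        weights = torch.softmax(scores, dim=-1)          # per-view importance
        reweighted = view_feats * weights.unsqueeze(-1)  # emphasize the best view(s)
        # Fuse by flattening the reweighted views before the policy network.
        return reweighted.flatten(start_dim=1)           # (batch, num_views * feat_dim)


# Example: fuse dummy features from 3 camera views.
if __name__ == "__main__":
    feats = torch.randn(2, 3, 512)
    fused = BFAFusion(feat_dim=512)(feats)
    print(fused.shape)  # torch.Size([2, 1536])

Because the scoring head operates on features the policy backbone already computes, such a module can be attached to policies like ACT with little overhead, which is consistent with the plug-and-play claim in the abstract.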

View on arXiv: https://arxiv.org/abs/2502.11161
@article{lan2025_2502.11161,
  title={BFA: Best-Feature-Aware Fusion for Multi-View Fine-grained Manipulation},
  author={Zihan Lan and Weixin Mao and Haosheng Li and Le Wang and Tiancai Wang and Haoqiang Fan and Osamu Yoshie},
  journal={arXiv preprint arXiv:2502.11161},
  year={2025}
}