ResearchTrend.AI

CAST: Counterfactual Labels Improve Instruction Following in Vision-Language-Action Models
arXiv:2508.13446

19 August 2025
Catherine Glossop, William Chen, Arjun Bhorkar, Dhruv Shah, Sergey Levine
LM&Ro

Papers citing "CAST: Counterfactual Labels Improve Instruction Following in Vision-Language-Action Models"

4 citing papers shown
Diagnose, Correct, and Learn from Manipulation Failures via Visual Symbols
Xianchao Zeng, Xinyu Zhou, Youcheng Li, Jiayou Shi, Tianle Li, L. Chen, Lei Ren, Y. Li
02 Dec 2025
VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable Navigation
Mateo Guaman Castro, Sidharth Rajagopal, Daniel Gorbatov, Matt Schmittle, R. Baijal, ..., Sidharth Talia, Emma Romig, Celso de Melo, Byron Boots, Abhishek Gupta
LM&Ro
23 Oct 2025
OmniVLA: An Omni-Modal Vision-Language-Action Model for Robot Navigation
Noriaki Hirose, Catherine Glossop, Dhruv Shah, Sergey Levine
LM&Ro
23 Sep 2025
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Dapeng Zhang, Jin Sun, Chenghui Hu, Xiaoyan Wu, Zhenlong Yuan, R. Zhou, Fei Shen, Qingguo Zhou
LM&Ro
23 Sep 2025