Bidirectional Action Sequence Learning for Long-term Action Anticipation with Large Language Models

Main: 4 pages · Appendix: 2 pages · Bibliography: 2 pages · 5 figures · 5 tables
Abstract
Video-based long-term action anticipation is crucial for early risk detection in areas such as automated driving and robotics. Conventional approaches extract features from past actions with an encoder and predict future actions with a decoder; this unidirectional design limits performance, and such methods struggle to capture semantically distinct sub-actions within a scene. The proposed method, BiAnt, addresses this limitation by combining forward prediction with backward prediction using a large language model. Experimental results on Ego4D demonstrate that BiAnt outperforms baseline methods in terms of edit distance.
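The abstract does not describe the implementation, but the core idea of combining forward and backward prediction can be illustrated with a toy sketch. Everything below is an assumption for illustration: the scoring functions are simple stand-ins for the LLM's forward likelihood and the backward (past-reconstruction) check, and `alpha` is a hypothetical weighting parameter.

```python
# Hypothetical sketch (not the authors' implementation): score a candidate
# future action sequence with a forward term plus a backward term, then
# pick the best-scoring candidate.

def forward_score(past, future):
    # Toy stand-in for a forward model P(future | past): reward future
    # actions that share the category of the last observed action.
    return sum(1.0 for a in future if a.split("_")[0] == past[-1].split("_")[0])

def backward_score(past, future):
    # Toy stand-in for backward prediction: reward futures whose first
    # action plausibly "explains" the observed past (same category here).
    if future and future[0].split("_")[0] == past[-1].split("_")[0]:
        return 1.0
    return 0.0

def bidirectional_score(past, future, alpha=0.5):
    # alpha (assumed) balances the forward and backward terms.
    return alpha * forward_score(past, future) + (1 - alpha) * backward_score(past, future)

def anticipate(past, candidates, alpha=0.5):
    # Return the candidate future sequence with the highest combined score.
    return max(candidates, key=lambda f: bidirectional_score(past, f, alpha))
```

For example, given `past = ["cut_tomato"]` and two candidate futures, the combined score favors the continuation that is consistent in both directions.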
