
Hopper: Multi-hop Transformer for Spatiotemporal Reasoning (arXiv:2103.10574)

19 March 2021
Authors: Honglu Zhou, Asim Kadav, Farley Lai, Alexandru Niculescu-Mizil, Martin Renqiang Min, Mubbasir Kapadia, H. Graf
Topics: LRM

Papers citing "Hopper: Multi-hop Transformer for Spatiotemporal Reasoning"

12 of 12 citing papers shown.

Does Visual Pretraining Help End-to-End Reasoning? (17 Jul 2023)
Chen Sun, Calvin Luo, Xingyi Zhou, Anurag Arnab, Cordelia Schmid
Topics: OCL, LRM, ViT

Simple Unsupervised Object-Centric Learning for Complex and Naturalistic Videos (27 May 2022)
Gautam Singh, Yi-Fu Wu, Sungjin Ahn
Topics: OCL

Learning What and Where: Disentangling Location and Identity Tracking Without Supervision (26 May 2022)
Manuel Traub, S. Otte, Tobias Menge, Matthias Karlbauer, Jannik Thummel, Martin Volker Butz

TFCNet: Temporal Fully Connected Networks for Static Unbiased Temporal Reasoning (11 Mar 2022)
Shiwen Zhang
Topics: AI4TS

Video Transformers: A Survey (16 Jan 2022)
Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés
Topics: ViT

Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language (28 Oct 2021)
Mingyu Ding, Zhenfang Chen, Tao Du, Ping Luo, J. Tenenbaum, Chuang Gan
Topics: VGen, PINN, OCL

Generative Video Transformer: Can Objects be the Words? (20 Jul 2021)
Yi-Fu Wu, Jaesik Yoon, Sungjin Ahn
Topics: ViT

Grounding Spatio-Temporal Language with Transformers (16 Jun 2021)
Tristan Karch, Laetitia Teodorescu, Katja Hofmann, Clément Moulin-Frier, Pierre-Yves Oudeyer
Topics: LM&Ro

Learning Object Permanence from Video (23 Mar 2020)
Aviv Shamsian, Ofri Kleinfeld, Amir Globerson, Gal Chechik
Topics: SSL

Answering Complex Open-domain Questions Through Iterative Query Generation (15 Oct 2019)
Peng Qi, Xiaowen Lin, L. Mehr, Zijian Wang, Christopher D. Manning
Topics: RALM, ReLM, LRM

Simple Online and Realtime Tracking with a Deep Association Metric (21 Mar 2017)
N. Wojke, Alex Bewley, Dietrich Paulus
Topics: VOT

Aggregated Residual Transformations for Deep Neural Networks (16 Nov 2016)
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He