Scaling Egocentric Vision: The EPIC-KITCHENS Dataset

8 April 2018
Dima Damen
Hazel Doughty
G. Farinella
Sanja Fidler
Antonino Furnari
Evangelos Kazakos
Davide Moltisanti
Jonathan Munro
Toby Perrett
Will Price
Michael Wray
EgoV

Papers citing "Scaling Egocentric Vision: The EPIC-KITCHENS Dataset"

50 / 164 papers shown
A Survey of Interactive Generative Video
Jiwen Yu
Yiran Qin
Haoxuan Che
Quande Liu
X. Wang
Pengfei Wan
Di Zhang
Kun Gai
Hao Chen
Xihui Liu
VGen
53
0
0
30 Apr 2025
Hierarchical and Multimodal Data for Daily Activity Understanding
Ghazal Kaviani
Yavuz Yarici
Seulgi Kim
M. Prabhushankar
Ghassan AlRegib
Mashhour Solh
Ameya Patil
52
0
0
24 Apr 2025
Chain-of-Modality: Learning Manipulation Programs from Multimodal Human Videos with Vision-Language-Models
Chen Wang
Fei Xia
Wenhao Yu
Tingnan Zhang
Ruohan Zhang
Ce Liu
Li Fei-Fei
Jie Tan
Jacky Liang
31
0
0
17 Apr 2025
How Do I Do That? Synthesizing 3D Hand Motion and Contacts for Everyday Interactions
Aditya Prakash
Benjamin Lundell
Dmitry Andreychuk
David Forsyth
Saurabh Gupta
H. Sawhney
31
0
0
16 Apr 2025
Ego4o: Egocentric Human Motion Capture and Understanding from Multi-Modal Input
Jian Wang
Rishabh Dabral
D. Luvizon
Zhe Cao
Lingjie Liu
Thabo Beeler
Christian Theobalt
EgoV
45
0
0
11 Apr 2025
F$^3$Set: Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos
Zhaoyu Liu
Kan Jiang
Murong Ma
Zhé Hóu
Yun Lin
J. Dong
35
0
0
11 Apr 2025
GR00T N1: An Open Foundation Model for Generalist Humanoid Robots
Nvidia
Johan Bjorck
Fernando Castañeda
Nikita Cherniadev
Xingye Da
...
Ao Zhang
Hao Zhang
Yizhou Zhao
Ruijie Zheng
Yuke Zhu
VLM
61
19
0
18 Mar 2025
ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos
Peiran Wu
Yunze Liu
Chonghan Liu
Miao Liu
VGen
LRM
57
1
0
16 Mar 2025
VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
Hanzhi Chen
Boyang Sun
Anran Zhang
Marc Pollefeys
Stefan Leutenegger
LM&Ro
65
0
0
10 Mar 2025
GAT-Grasp: Gesture-Driven Affordance Transfer for Task-Aware Robotic Grasping
Ruixiang Wang
Huayi Zhou
Xinyue Yao
Guiliang Liu
K. Jia
39
0
0
08 Mar 2025
Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning
Baoqi Pei
Y. Huang
Jilan Xu
Guo Chen
Yuping He
...
Yali Wang
Weidi Xie
Yu Qiao
Fei Wu
Limin Wang
41
0
0
02 Mar 2025
Magma: A Foundation Model for Multimodal AI Agents
Jianwei Yang
Reuben Tan
Qianhui Wu
Ruijie Zheng
Baolin Peng
...
Seonghyeon Ye
Joel Jang
Yuquan Deng
Lars Liden
Jianfeng Gao
VLM
AI4TS
107
9
0
18 Feb 2025
RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation
Kun Wu
Chengkai Hou
Jiaming Liu
Zhengping Che
Xiaozhu Ju
...
Zhenyu Wang
Pengju An
Siyuan Qian
S. Zhang
Jian Tang
LM&Ro
105
15
0
17 Feb 2025
Motion Tracks: A Unified Representation for Human-Robot Transfer in Few-Shot Imitation Learning
Juntao Ren
Priya Sundaresan
Dorsa Sadigh
Sanjiban Choudhury
Jeannette Bohg
37
14
0
13 Jan 2025
Audio-Language Datasets of Scenes and Events: A Survey
Gijs Wijngaard
Elia Formisano
Michele Esposito
M. Dumontier
79
2
0
10 Jan 2025
MS-Temba : Multi-Scale Temporal Mamba for Efficient Temporal Action Detection
Arkaprava Sinha
Monish Soundar Raj
Pu Wang
Ahmed Helmy
Srijan Das
Mamba
53
3
0
10 Jan 2025
Measuring Error Alignment for Decision-Making Systems
Binxia Xu
Antonis Bikakis
Daniel Onah
A. Vlachidis
Luke Dickens
34
0
0
03 Jan 2025
Interacted Object Grounding in Spatio-Temporal Human-Object Interactions
Xiaoyang Liu
Boran Wen
Xinpeng Liu
Zizheng Zhou
Hongwei Fan
Cewu Lu
Lizhuang Ma
Yulong Chen
Y. Li
51
2
0
27 Dec 2024
Interact with me: Joint Egocentric Forecasting of Intent to Interact, Attitude and Social Actions
Tongfei Bian
Yiming Ma
Mathieu Chollet
Victor Sanchez
T. Guha
EgoV
97
1
0
21 Dec 2024
Do Language Models Understand Time?
Xi Ding
Lei Wang
173
0
0
18 Dec 2024
Situational Scene Graph for Structured Human-centric Situation Understanding
Chinthani Sugandhika
Chen Li
Deepu Rajan
Basura Fernando
114
1
0
30 Oct 2024
Egocentric and Exocentric Methods: A Short Survey
Anirudh Thatipelli
Shao-Yuan Lo
Amit K. Roy-Chowdhury
EgoV
42
2
0
27 Oct 2024
On-Robot Reinforcement Learning with Goal-Contrastive Rewards
Ondrej Biza
Thomas Weng
Lingfeng Sun
Karl Schmeckpeper
Tarik Kelestemur
Yecheng Jason Ma
Robert C. Platt
Jan Willem van de Meent
Lawson L. S. Wong
OffRL
43
0
0
25 Oct 2024
MM-Ego: Towards Building Egocentric Multimodal LLMs for Video QA
Hanrong Ye
Haotian Zhang
Erik Daxberger
Lin Chen
Zongyu Lin
...
Haoxuan You
Dan Xu
Zhe Gan
Jiasen Lu
Yinfei Yang
EgoV
MLLM
73
12
0
09 Oct 2024
A Comprehensive Review of Few-shot Action Recognition
Yuyang Wanyan
Xiaoshan Yang
Weiming Dong
Changsheng Xu
VLM
61
3
0
20 Jul 2024
The Collection of a Human Robot Collaboration Dataset for Cooperative Assembly in Glovebox Environments
Shivansh Sharma
Mathew Huang
Sanat Nair
Alan Wen
Christina Petlowany
Juston Moore
Selma Wanna
Mitch Pryor
33
0
0
19 Jul 2024
Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba
Haoye Dong
Aviral Chharia
Wenbo Gou
Francisco Vicente Carrasco
Fernando De la Torre
Mamba
47
1
0
12 Jul 2024
CaRe-Ego: Contact-aware Relationship Modeling for Egocentric Interactive Hand-object Segmentation
Yuejiao Su
Yi Wang
Lap-Pui Chau
60
1
0
08 Jul 2024
EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting
Daiwei Zhang
Gengyan Li
Jiajie Li
Mickael Bressieux
Otmar Hilliges
Marc Pollefeys
Luc Van Gool
Xi Wang
35
9
0
28 Jun 2024
RMem: Restricted Memory Banks Improve Video Object Segmentation
Junbao Zhou
Ziqi Pang
Yu-xiong Wang
VOS
55
7
0
12 Jun 2024
Can't make an Omelette without Breaking some Eggs: Plausible Action Anticipation using Large Video-Language Models
Himangi Mittal
Nakul Agarwal
Shao-Yuan Lo
Kwonjoon Lee
30
13
0
30 May 2024
EMAG: Ego-motion Aware and Generalizable 2D Hand Forecasting from Egocentric Videos
Masashi Hatano
Ryo Hachiuma
Hideo Saito
EgoV
29
3
0
30 May 2024
Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics
Minttu Alakuijala
Reginald McLean
Isaac Woungang
Nariman Farsad
Samuel Kaski
Pekka Marttinen
Kai Yuan
LM&Ro
29
0
0
30 May 2024
A Survey on Vision-Language-Action Models for Embodied AI
Yueen Ma
Zixing Song
Yuzheng Zhuang
Jianye Hao
Irwin King
LM&Ro
67
41
0
23 May 2024
CinePile: A Long Video Question Answering Dataset and Benchmark
Ruchit Rawal
Khalid Saifullah
Ronen Basri
David Jacobs
Gowthami Somepalli
Tom Goldstein
38
39
0
14 May 2024
PhilHumans: Benchmarking Machine Learning for Personal Health
Vadim Liventsev
Vivek Kumar
Allmin Pradhap Singh Susaiyah
Zixiu "Alex" Wu
Ivan Rodin
...
Milan Petkovic
Diego Reforgiato Recupero
Ehud Reiter
Daniele Riboni
Raymond Sterling
AI4MH
LM&MA
34
0
0
04 May 2024
HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision
Siddhant Bansal
Michael Wray
Dima Damen
31
3
0
15 Apr 2024
PLOT-TAL -- Prompt Learning with Optimal Transport for Few-Shot Temporal Action Localization
Edward Fish
Jon Weinbren
Andrew Gilbert
34
1
0
27 Mar 2024
EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World
Yifei Huang
Guo Chen
Jilan Xu
Mingfang Zhang
Lijin Yang
...
Hongjie Zhang
Lu Dong
Yali Wang
Limin Wang
Yu Qiao
EgoV
54
36
0
24 Mar 2024
On the Utility of 3D Hand Poses for Action Recognition
Md Salman Shamil
Dibyadip Chatterjee
Fadime Sener
Shugao Ma
Angela Yao
32
5
0
14 Mar 2024
Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation
Ruicong Liu
Takehiko Ohkawa
Mingfang Zhang
Yoichi Sato
41
8
0
07 Mar 2024
Get a Grip: Reconstructing Hand-Object Stable Grasps in Egocentric Videos
Zhifan Zhu
Dima Damen
19
7
0
25 Dec 2023
Instance Tracking in 3D Scenes from Egocentric Videos
Yunhan Zhao
Haoyu Ma
Shu Kong
Charless C. Fowlkes
3DPC
23
4
0
07 Dec 2023
Action Scene Graphs for Long-Form Understanding of Egocentric Videos
Ivan Rodin
Antonino Furnari
Kyle Min
Subarna Tripathi
G. Farinella
EgoV
19
12
0
06 Dec 2023
Overcoming Label Noise for Source-free Unsupervised Video Domain Adaptation
A. Dasgupta
C. V. Jawahar
Karteek Alahari
TTA
VLM
11
10
0
30 Nov 2023
Spacewalk-18: A Benchmark for Multimodal and Long-form Procedural Video Understanding in Novel Domains
Rohan Myer Krishnan
Zitian Tang
Zhiqiu Yu
Chen Sun
51
1
0
30 Nov 2023
LEAP: LLM-Generation of Egocentric Action Programs
Eadom Dessalene
Michael Maynord
Cornelia Fermuller
Yiannis Aloimonos
18
3
0
29 Nov 2023
Object-based (yet Class-agnostic) Video Domain Adaptation
Dantong Niu
Amir Bar
Roei Herzig
Trevor Darrell
Anna Rohrbach
22
1
0
29 Nov 2023
End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames
Shuming Liu
Chen-Da Liu-Zhang
Chen Zhao
Bernard Ghanem
24
25
0
28 Nov 2023
On Bringing Robots Home
Nur Muhammad (Mahi) Shafiullah
Anant Rai
Haritheja Etukuru
Yiqian Liu
Ishan Misra
Soumith Chintala
Lerrel Pinto
27
76
0
27 Nov 2023