Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation
arXiv: 2406.03813
6 June 2024
Authors: Ning Cheng, Changhao Guan, Jing Gao, Weihao Wang, You Li, Fandong Meng, Jie Zhou, Bin Fang, Jinan Xu, Wenjuan Han
Community: VLM

Papers citing "Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation" (7 of 7 papers shown):

VTLA: Vision-Tactile-Language-Action Model with Preference Learning for Insertion Manipulation
Chaofan Zhang, Peng Hao, Xiaoge Cao, Xiaoshuai Hao, Shaowei Cui, Shuo Wang
14 May 2025

SToLa: Self-Adaptive Touch-Language Framework with Tactile Commonsense Reasoning in Open-Ended Scenarios
Ning Cheng, Jinan Xu, Jialing Chen, Wenjuan Han
Community: LRM
7 May 2025

AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors
Ruoxuan Feng, Jiangyu Hu, Wenke Xia, Tianci Gao, Ao Shen, Yuhao Sun, Bin Fang, Di Hu
15 Feb 2025

General In-Hand Object Rotation with Vision and Touch
Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Y. Ma, Roberto Calandra, Jitendra Malik
18 Sep 2023

Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features
J. Kerr, Huang Huang, Albert Wilcox, Ryan Hoque, Jeffrey Ichnowski, Roberto Calandra, Ken Goldberg
26 Sep 2022

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
Communities: VLM, CLIP
11 Feb 2021

Curriculum Learning: A Survey
Petru Soviany, Radu Tudor Ionescu, Paolo Rota, N. Sebe
Community: ODL
25 Jan 2021