arXiv:2101.08616 · Cited By
Learning rich touch representations through cross-modal self-supervision
21 January 2021
Martina Zambelli, Y. Aytar, Francesco Visin, Yuxiang Zhou, R. Hadsell
SSL
Papers citing "Learning rich touch representations through cross-modal self-supervision" (7 / 7 papers shown)
Sensor-Invariant Tactile Representation
Harsh Gupta, Yuchen Mo, Shengmiao Jin, Wenzhen Yuan
27 Feb 2025
TextToucher: Fine-Grained Text-to-Touch Generation
Jiahang Tu, Hao Fu, Fengyu Yang, Hanbin Zhao, Chao Zhang, Hui Qian
VLM, DiffM
10 Jan 2025
Canonical Representation and Force-Based Pretraining of 3D Tactile for Dexterous Visuo-Tactile Policy Learning
Tianhao Wu, Jinzhou Li, Jiyao Zhang, Mingdong Wu, Hao Dong
SSL
26 Sep 2024
Low Fidelity Visuo-Tactile Pretraining Improves Vision-Only Manipulation Performance
Selam Gano, Abraham George, A. Farimani
OnRL
21 Jun 2024
See to Touch: Learning Tactile Dexterity through Visual Incentives
Irmak Güzey, Yinlong Dai, Ben Evans, Soumith Chintala, Lerrel Pinto
21 Sep 2023
Dexterity from Touch: Self-Supervised Pre-Training of Tactile Representations with Robotic Play
Irmak Güzey, Ben Evans, Soumith Chintala, Lerrel Pinto
21 Mar 2023
Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance Action Space
Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, Sami Haddadin
19 Oct 2021