ResearchTrend.AI
Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine

26 December 2023
Kanta Kaneda, Shunya Nagashima, Ryosuke Korekata, Motonari Kambara, Komei Sugiura

Papers citing "Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine"

7 / 7 papers shown
Mobile Manipulation Instruction Generation from Multiple Images with Automatic Metric Enhancement
Kei Katsumata, Motonari Kambara, Daichi Yashima, Ryosuke Korekata, Komei Sugiura
56 · 0 · 0 · 28 Jan 2025
Open-Vocabulary Mobile Manipulation Based on Double Relaxed Contrastive Learning with Dense Labeling
Daichi Yashima, Ryosuke Korekata, Komei Sugiura
62 · 0 · 0 · 21 Dec 2024
DM2RM: Dual-Mode Multimodal Ranking for Target Objects and Receptacles Based on Open-Vocabulary Instructions
Ryosuke Korekata, Kanta Kaneda, Shunya Nagashima, Yuto Imai, Komei Sugiura
ObjD · LM&Ro
29 · 2 · 0 · 15 Aug 2024
Object Segmentation from Open-Vocabulary Manipulation Instructions Based on Optimal Transport Polygon Matching with Multimodal Foundation Models
Takayuki Nishimura, Katsuyuki Kuyo, Motonari Kambara, Komei Sugiura
DiffM
12 · 0 · 0 · 01 Jul 2024
Real-world Instance-specific Image Goal Navigation for Service Robots: Bridging the Domain Gap with Contrastive Learning
Taichi Sakaguchi, Akira Taniguchi, Y. Hagiwara, Lotfi El Hafi, Shoichi Hasegawa, T. Taniguchi
18 · 0 · 0 · 15 Apr 2024
Visual Language Maps for Robot Navigation
Chen Huang, Oier Mees, Andy Zeng, Wolfram Burgard
LM&Ro
140 · 337 · 0 · 11 Oct 2022
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
LM&Ro
136 · 430 · 0 · 10 Jul 2022