RAPTR: Radar-based 3D Pose Estimation using Transformer

11 November 2025
Sorachi Kato, Ryoma Yataka, Pu Perry Wang, Pedro Miraldo, T. Fujihashi, P. Boufounos
arXiv:2511.08387 (abs) · PDF · HTML · GitHub
Main: 9 pages · Appendix: 13 pages · Bibliography: 4 pages · 14 figures · 15 tables
Abstract

Radar-based indoor 3D human pose estimation has typically relied on fine-grained 3D keypoint labels, which are costly to obtain, especially in complex indoor settings involving clutter, occlusion, or multiple people. In this paper, we propose RAPTR (RAdar Pose esTimation using tRansformer) under weak supervision, using only 3D bounding-box (BBox) and 2D keypoint labels, which are considerably easier and more scalable to collect. RAPTR is characterized by a two-stage pose decoder architecture with pseudo-3D deformable attention that enhances (pose/joint) queries with multi-view radar features: a pose decoder estimates initial 3D poses with a 3D template loss designed to exploit the 3D BBox labels and mitigate depth ambiguity, and a joint decoder refines the initial poses using the 2D keypoint labels and a 3D gravity loss. Evaluated on two indoor radar datasets, RAPTR outperforms existing methods, reducing joint position error by 34.3% on HIBER and 76.9% on MMVR. Our implementation is available at this https URL.
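The abstract describes a two-stage query-based decoder: a pose decoder that attends to multi-view radar features to produce coarse 3D poses, followed by a joint decoder that refines each joint. Below is a minimal PyTorch sketch of how such a two-stage pipeline could be wired, based only on the abstract. All module names, dimensions, and shapes are assumptions, and plain multi-head cross-attention stands in for the paper's pseudo-3D deformable attention; see the authors' repository for the actual implementation.

```python
# Hypothetical two-stage decoder sketch inspired by the RAPTR abstract.
# NOT the authors' code: names, shapes, and the use of standard
# cross-attention in place of pseudo-3D deformable attention are assumptions.
import torch
import torch.nn as nn

class PoseDecoder(nn.Module):
    """Stage 1: turn learned pose queries into coarse 3D poses."""
    def __init__(self, dim=256, num_joints=14):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_joints * 3)  # (x, y, z) per joint

    def forward(self, pose_queries, radar_feats):
        # Enhance pose queries with multi-view radar features.
        q, _ = self.attn(pose_queries, radar_feats, radar_feats)
        b, n, _ = q.shape
        return self.head(q).view(b, n, -1, 3)  # initial 3D poses

class JointDecoder(nn.Module):
    """Stage 2: refine the initial poses with per-joint queries."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.refine = nn.Linear(dim, 3)  # per-joint 3D offset

    def forward(self, joint_queries, radar_feats, init_poses):
        q, _ = self.attn(joint_queries, radar_feats, radar_feats)
        b, n, j, _ = init_poses.shape
        offsets = self.refine(q).view(b, n, j, 3)
        return init_poses + offsets  # refined 3D poses

# Toy shapes: batch of 2, 4 pose queries, 14 joints, 100 radar tokens.
dim, num_poses, num_joints = 256, 4, 14
radar_feats = torch.randn(2, 100, dim)
pose_q = torch.randn(2, num_poses, dim)
joint_q = torch.randn(2, num_poses * num_joints, dim)

stage1, stage2 = PoseDecoder(dim, num_joints), JointDecoder(dim)
init_poses = stage1(pose_q, radar_feats)
refined = stage2(joint_q, radar_feats, init_poses)
print(refined.shape)  # torch.Size([2, 4, 14, 3])
```

In this reading, weak supervision would attach at the two stages separately: the 3D template loss on the stage-1 output (driven by 3D BBox labels) and the 2D keypoint and 3D gravity losses on the stage-2 output. The abstract does not define those losses, so they are omitted from the sketch.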
