Scalable Trajectory-User Linking with Dual-Stream Representation Networks

19 March 2025
Hao Zhang
Wei-Neng Chen
Xingyu Zhao
Jianpeng Qi
Guiyuan Jiang
Yanwei Yu
Abstract

Trajectory-user linking (TUL) aims to match anonymous trajectories to the most likely users who generated them, offering benefits for a wide range of real-world spatio-temporal applications. However, existing TUL methods are limited by high model complexity and poor learning of effective trajectory representations, rendering them ineffective at handling large-scale user trajectory data. In this work, we propose a novel Scalable Trajectory-User Linking method with dual-stream representation networks for the large-scale TUL problem, named ScaleTUL. Specifically, ScaleTUL generates two views using temporal and spatial augmentations and exploits a supervised contrastive learning framework to effectively capture the irregularities of trajectories. In each view, a dual-stream trajectory encoder, consisting of a long-term encoder and a short-term encoder, is designed to learn unified trajectory representations that fuse different temporal-spatial dependencies. A TUL layer then associates the trajectories with their corresponding users in the representation space using a two-stage training model. Experimental results on check-in mobility datasets from three real-world cities and the nationwide U.S. demonstrate the superiority of ScaleTUL over state-of-the-art baselines for large-scale TUL tasks.
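
For concreteness, a minimal sketch of the idea the abstract describes — a dual-stream trajectory encoder (a long-term and a short-term stream) trained with a supervised contrastive objective over augmented views — could look like the following. The module choices (a GRU long-term stream, a 1D-convolutional short-term stream), the dimensions, and the SupCon-style loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamTrajectoryEncoder(nn.Module):
    """Fuses a long-term (whole-sequence) stream and a short-term (local) stream."""

    def __init__(self, num_locations, emb_dim=64, hid_dim=128):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, emb_dim)
        # Long-term stream: recurrent encoder over the full check-in sequence (assumed GRU).
        self.long_term = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Short-term stream: 1D convolution over local transition patterns (assumed).
        self.short_term = nn.Conv1d(emb_dim, hid_dim, kernel_size=3, padding=1)
        self.fuse = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, loc_ids):
        x = self.loc_emb(loc_ids)                                  # (B, T, emb_dim)
        _, h_long = self.long_term(x)                              # (1, B, hid_dim)
        h_short = self.short_term(x.transpose(1, 2)).mean(dim=2)   # (B, hid_dim)
        z = self.fuse(torch.cat([h_long.squeeze(0), h_short], dim=1))
        return F.normalize(z, dim=1)                               # unit-norm trajectory embedding

def supervised_contrastive_loss(z, user_ids, temperature=0.1):
    # SupCon-style objective: trajectories of the same user attract, others repel.
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                         # drop self-similarities
    pos = ((user_ids.unsqueeze(0) == user_ids.unsqueeze(1)) & ~self_mask).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(dim=1).div(pos.sum(dim=1).clamp(min=1)).mean()

# Toy usage: two augmented views (temporal / spatial) per trajectory share one user label.
encoder = DualStreamTrajectoryEncoder(num_locations=1000)
view_a = torch.randint(0, 1000, (8, 20))   # batch of 8 trajectories, 20 check-ins each
view_b = torch.randint(0, 1000, (8, 20))
users = torch.arange(8)
z = encoder(torch.cat([view_a, view_b]))
loss = supervised_contrastive_loss(z, torch.cat([users, users]))
loss.backward()

In the two-stage setup the abstract mentions, an encoder pretrained this way would then feed a TUL layer that maps trajectory embeddings to user identities.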

@article{zhang2025_2503.15002,
  title={Scalable Trajectory-User Linking with Dual-Stream Representation Networks},
  author={Hao Zhang and Wei Chen and Xingyu Zhao and Jianpeng Qi and Guiyuan Jiang and Yanwei Yu},
  journal={arXiv preprint arXiv:2503.15002},
  year={2025}
}