Legilimens: Performant Video Analytics on the System-on-Chip Edge

Abstract

Continually retraining models has emerged as a primary technique for enabling high-accuracy video analytics on edge devices. Yet, existing systems perform such adaptation by relying on the spare compute resources that traditional (memory-constrained) edge servers afford. In contrast, mobile edge devices such as drones and dashcams offer a fundamentally different resource profile: weak(er) compute with abundant unified memory pools. We present Legilimens, a continuous learning system for the mobile edge's System-on-Chip GPUs. Our driving insight is that visually distinct scenes that require retraining exhibit substantial overlap in model embeddings; if this overlap is captured in a base model held in device memory, specializing to each new scene becomes lightweight, requiring very few samples. To practically realize this approach, Legilimens presents new, compute-efficient techniques to (1) select high-utility data samples for retraining specialized models, (2) update the base model without complete retraining, and (3) time-share compute resources between retraining and live inference for maximal accuracy. Across diverse workloads, Legilimens lowers retraining costs by 2.8-10x compared to existing systems, resulting in 18-45% higher accuracies.

@article{ramanujam2025_2504.21136,
  title={Legilimens: Performant Video Analytics on the System-on-Chip Edge},
  author={Murali Ramanujam and Yinwei Dai and Kyle Jamieson and Ravi Netravali},
  journal={arXiv preprint arXiv:2504.21136},
  year={2025}
}