Chorus: Harmonizing Context and Sensing Signals for Data-Free Model Customization in IoT
A key bottleneck toward scalable IoT sensing is how to efficiently adapt AI models to new deployment conditions. In real-world IoT systems, sensor data is collected under diverse contexts, such as sensor placements or ambient environments, which alter signal patterns and degrade downstream performance. Traditional domain adaptation and generalization methods often ignore such contextual information or incorporate it in overly simplistic ways, making them ineffective under unseen context shifts after deployment. In this paper, we propose Chorus, a context-aware, data-free model customization approach that adapts models to unseen deployment conditions without requiring target-domain data. The key idea is to learn context representations that capture how contextual factors influence sensor data, and then use these representations as structured priors for context-aware customization under unseen shifts. Specifically, Chorus learns a shared sensor-context latent space through bidirectional cross-modal reconstruction on unlabeled sensor-context pairs, and regularizes the context embedding space to obtain compact and generalizable context representations. Building on the aligned representations, Chorus trains a lightweight gated head with limited labeled data to exploit context priors during inference, and introduces a context-caching mechanism that reuses cached context representations when no context shift is detected, thereby reducing inference overhead on smartphones. Experiments on IMU, speech enhancement, and WiFi sensing tasks under diverse context shifts show that Chorus outperforms state-of-the-art baselines by up to 20.2% in unseen contexts, with cached inference latency close to sensor-only deployment, while maintaining stable performance under continuous context transitions. A video demonstration of Chorus's performance in the real world is available at this https URL.
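The abstract does not specify implementation details; the following is a minimal sketch of the gated-head inference with context caching it describes, under stated assumptions. The `GatedHead` class, the cosine-distance shift test in `context_shift`, the threshold value, and all dimensions are illustrative choices, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_shift(prev_ctx, new_ctx, thresh=0.5):
    # Hypothetical shift detector: cosine distance between context embeddings.
    cos = prev_ctx @ new_ctx / (np.linalg.norm(prev_ctx) * np.linalg.norm(new_ctx))
    return (1.0 - cos) > thresh

class GatedHead:
    """Lightweight head: sensor embedding modulated by a context prior (sketch)."""
    def __init__(self, dim, n_classes):
        self.Wg = rng.standard_normal((dim, dim)) * 0.1      # gate weights (context branch)
        self.Wo = rng.standard_normal((dim, n_classes)) * 0.1  # output projection

    def forward(self, sensor_emb, ctx_emb):
        gate = 1.0 / (1.0 + np.exp(-(ctx_emb @ self.Wg)))    # sigmoid gate from context
        return (sensor_emb * gate) @ self.Wo                 # gated logits

# Cached inference loop: the context branch is refreshed only when a shift
# is detected, so steady-state latency approaches sensor-only inference.
head = GatedHead(dim=16, n_classes=4)
cached_ctx = rng.standard_normal(16)
for _ in range(3):
    sensor_emb = rng.standard_normal(16)
    new_ctx = cached_ctx + 0.01 * rng.standard_normal(16)    # context nearly unchanged
    if context_shift(cached_ctx, new_ctx):
        cached_ctx = new_ctx                                 # refresh cache on shift
    logits = head.forward(sensor_emb, cached_ctx)            # reuse cached context
print(logits.shape)
```

The caching check costs one dot product per step, which is what makes reusing the context representation cheap relative to recomputing a full context encoder.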