
arXiv:2510.13848
On-device System of Compositional Multi-tasking in Large Language Models

11 October 2025
Ondrej Bohdal, Konstantinos Theodosiadis, Asterios Mpatziakas, Dimitris Filippidis, Iro Spyrou, Christos Zonios, Anastasios Drosou, Dimosthenis Ioannidis, Kyeng-Hun Lee, J. Moon, Hyeonmok Ko, Mete Ozay, Umberto Michieli
Main: 7 pages · 7 figures · Bibliography: 2 pages · 6 tables
Abstract

Large language models (LLMs) are commonly adapted to diverse downstream tasks via parameter-efficient fine-tuning techniques such as Low-Rank Adapters (LoRA). While adapters can be combined to handle multiple tasks separately, standard approaches struggle when tasks must be executed simultaneously, such as generating a translated summary of a long conversation. To address this challenge, we propose a novel approach tailored specifically to compositional multi-tasking scenarios involving summarization and translation. Our technique adds a learnable projection layer on top of the combined summarization and translation adapters. This design enables effective integration while keeping computational overhead low compared to alternative strategies that require extensive retraining or sequential processing. We demonstrate the practical viability of our method in an on-device environment by developing an Android app that executes compositional tasks seamlessly. Experimental results indicate that our solution is both accurate and fast in cloud-based and on-device implementations, highlighting the potential benefits of adopting our framework in real-world applications that demand high-speed operation under resource constraints.
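The core idea described in the abstract — combining two pre-trained LoRA adapters through a small learnable projection rather than retraining the full model — can be illustrated for a single linear layer. This is a minimal NumPy sketch under stated assumptions: the shapes, the random initialization, and the choice to fuse the two adapter outputs by concatenation followed by a projection are all hypothetical, since the paper's exact architecture is not reproduced here. Only the projection `P` would be trained; the base weight and both adapters stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (illustrative values)

# Frozen base weight of one linear layer.
W = rng.standard_normal((d, d)) * 0.02

# Two pre-trained LoRA adapters (summarization, translation);
# each contributes a low-rank update B @ A to the layer.
A_sum, B_sum = rng.standard_normal((r, d)), rng.standard_normal((d, r)) * 0.02
A_tr,  B_tr  = rng.standard_normal((r, d)), rng.standard_normal((d, r)) * 0.02

# Hypothetical learnable fusion: project the concatenated adapter
# outputs back to the hidden size. Only P would be trained.
P = rng.standard_normal((d, 2 * d)) * 0.02

def forward(x):
    """Base layer output plus a projected combination of both adapters."""
    base = x @ W.T
    h_sum = x @ (B_sum @ A_sum).T   # summarization adapter branch
    h_tr  = x @ (B_tr @ A_tr).T     # translation adapter branch
    fused = np.concatenate([h_sum, h_tr], axis=-1) @ P.T
    return base + fused

x = rng.standard_normal((2, d))  # batch of 2 hidden states
y = forward(x)
print(y.shape)  # (2, 16)
```

Because the fusion adds only one `d × 2d` matrix per adapted layer, the trainable parameter count stays far below that of retraining a joint adapter from scratch, which is consistent with the efficiency argument in the abstract.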
