Symbolic Representation for Any-to-Any Generative Tasks

24 April 2025
Jiaqi Chen, Xiaoye Zhu, Yue Wang, Tianyang Liu, Xinhui Chen, Ying Chen, Chak Tou Leong, Yifei Ke, Joseph Liu, Yiwen Yuan, Julian McAuley, Li-jia Li
Abstract

We propose a symbolic generative task description language and a corresponding inference engine capable of representing arbitrary multimodal tasks as structured symbolic flows. Unlike conventional generative models that rely on large-scale training and implicit neural representations to learn cross-modal mappings, often at high computational cost and with limited flexibility, our framework introduces an explicit symbolic representation comprising three core primitives: functions, parameters, and topological logic. Leveraging a pre-trained language model, our inference engine maps natural language instructions directly to symbolic workflows in a training-free manner. Our framework successfully performs over 12 diverse multimodal generative tasks, demonstrating strong performance and flexibility without the need for task-specific tuning. Experiments show that our method not only matches or outperforms existing state-of-the-art unified models in content quality, but also offers greater efficiency, editability, and interruptibility. We believe that symbolic task representations provide a cost-effective and extensible foundation for advancing the capabilities of generative AI.
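To make the abstract's three primitives concrete, here is a minimal Python sketch of what a symbolic workflow built from functions, parameters, and topological logic could look like. The class names, fields, and the two-step example task are illustrative assumptions for this page, not the paper's actual task description language or inference engine.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical primitive: a function node with its parameters.
@dataclass
class FunctionNode:
    name: str                                          # e.g. "text_to_image" (illustrative)
    parameters: dict[str, Any] = field(default_factory=dict)
    inputs: list[str] = field(default_factory=list)    # names of upstream nodes

# Hypothetical primitive: the topological logic connecting function
# nodes into a directed acyclic workflow graph.
@dataclass
class SymbolicWorkflow:
    nodes: dict[str, FunctionNode] = field(default_factory=dict)

    def add(self, node: FunctionNode) -> None:
        self.nodes[node.name] = node

    def topological_order(self) -> list[str]:
        # DFS-based topological sort: every node appears after its inputs.
        visited: set[str] = set()
        order: list[str] = []

        def visit(name: str) -> None:
            if name in visited:
                return
            visited.add(name)
            for dep in self.nodes[name].inputs:
                visit(dep)
            order.append(name)

        for name in self.nodes:
            visit(name)
        return order

# Illustrative instruction: "generate an image from a caption, then stylize it".
workflow = SymbolicWorkflow()
workflow.add(FunctionNode("text_to_image", {"prompt": "a cat on a skateboard"}))
workflow.add(FunctionNode("style_transfer", {"style": "watercolor"},
                          inputs=["text_to_image"]))
print(workflow.topological_order())  # ['text_to_image', 'style_transfer']
```

In the paper's framing, the pre-trained language model would emit a structured flow of this kind directly from a natural language instruction, and the inference engine would execute the nodes in topological order; the sketch above only mimics that shape with plain data classes.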

@article{chen2025_2504.17261,
  title={Symbolic Representation for Any-to-Any Generative Tasks},
  author={Jiaqi Chen and Xiaoye Zhu and Yue Wang and Tianyang Liu and Xinhui Chen and Ying Chen and Chak Tou Leong and Yifei Ke and Joseph Liu and Yiwen Yuan and Julian McAuley and Li-jia Li},
  journal={arXiv preprint arXiv:2504.17261},
  year={2025}
}