Recent advances in large language model (LLM)-powered agents have shown that collective intelligence can significantly outperform individual capability, a gain largely attributed to meticulously designed inter-agent communication topologies. Though impressive in performance, existing multi-agent pipelines inherently incur substantial token overhead and increased economic cost, which poses challenges for their large-scale deployment. In response, we propose an economical, simple, and robust multi-agent communication framework that integrates seamlessly into mainstream multi-agent systems and prunes redundant or even malicious communication messages. Technically, our framework is the first to identify and formally define the \textit{communication redundancy} issue in current LLM-based multi-agent pipelines, and it efficiently performs one-shot pruning on the spatial-temporal message-passing graph, yielding a token-economical and high-performing communication topology. Extensive experiments across six benchmarks demonstrate that our framework \textbf{(I)} achieves results comparable to state-of-the-art topologies at merely \$5.6 cost, \textbf{(II)} integrates seamlessly into existing multi-agent frameworks with token reduction, and \textbf{(III)} successfully defends against two types of agent-based adversarial attacks with a performance boost.
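The core idea of one-shot pruning on a message-passing graph can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual method: the edge weights stand in for learned message-importance scores, and the function name, the fixed keep ratio, and the top-k criterion are all illustrative choices of ours.

```python
# Hypothetical sketch of one-shot communication-graph pruning.
# Assumption: each directed edge (i, j) carries a precomputed importance
# score for the message from agent i to agent j; the paper's actual
# scoring and pruning criteria may differ.
import numpy as np

def one_shot_prune(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep only the highest-weight directed edges; zero out the rest."""
    w = weights.copy()
    np.fill_diagonal(w, 0.0)                 # agents send no self-messages
    flat = w[w > 0]                          # candidate edges
    k = max(1, int(round(keep_ratio * flat.size)))
    threshold = np.sort(flat)[-k]            # weight of the k-th strongest edge
    return np.where(w >= threshold, w, 0.0)  # prune everything below it, once

# Example: 4 agents with a dense random communication graph,
# pruned in one shot down to the strongest quarter of its edges.
rng = np.random.default_rng(0)
dense = rng.random((4, 4))
sparse = one_shot_prune(dense, keep_ratio=0.25)
```

After the single pruning pass, only the retained edges would carry messages in subsequent dialogue rounds, which is where the token savings come from.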