ResearchTrend.AI
v2 (latest)

Near-linear Time Dispersion of Mobile Agents

6 October 2023
Y. Sudo
Masahiro Shibata
Junya Nakamura
Yonghwan Kim
Toshimitsu Masuzawa
Abstract

Consider $k \le n$ agents in a simple, connected, and undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges. The goal of the dispersion problem is to move these $k$ agents to distinct nodes. Agents can communicate only when they are at the same node, and no other means of communication, such as whiteboards, are available. We assume that the agents operate synchronously. We consider two scenarios: when all agents are initially located at a single node (the rooted setting) and when they are initially distributed over one or more nodes (the general setting). Kshemkalyani and Sharma presented a dispersion algorithm for the general setting that uses $O(m_k)$ time and $\log(k+\delta)$ bits of memory per agent [OPODIS 2021], where $m_k$ is the maximum number of edges in any induced subgraph of $G$ with $k$ nodes, and $\delta$ is the maximum degree of $G$. This algorithm is the fastest in the literature, as no $o(m_k)$-time algorithm has been discovered even for the rooted setting. In this paper, we present faster algorithms for both the rooted and the general settings. First, we present an algorithm for the rooted setting that solves the dispersion problem in $O(k \log \min(k,\delta)) = O(k \log k)$ time using $O(\log \delta)$ bits of memory per agent. Next, we propose an algorithm for the general setting that achieves dispersion in $O(k (\log k) \cdot (\log \min(k,\delta))) = O(k \log^2 k)$ time using $O(\log(k+\delta))$ bits. Finally, for the rooted setting, we give a time-optimal, i.e., $O(k)$-time, algorithm with $O(\delta)$ bits of space per agent.
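To make the problem statement concrete, the following is a minimal simulation sketch of the rooted setting, not the paper's algorithm: all $k$ agents start at a common root, walk the graph together along a depth-first traversal, and one agent settles at each newly visited node until every agent occupies a distinct node. The function name `disperse_rooted` and the graph encoding are illustrative assumptions, not from the paper.

```python
# Illustrative sketch (NOT the paper's algorithm): rooted dispersion
# via a naive DFS walk. All k agents begin at one root node; the
# travelling group drops one agent at each node visited for the
# first time, so the k agents end up on k distinct nodes.

def disperse_rooted(adj, root, k):
    """Return {agent_id: node} after DFS-based dispersion.

    adj:  dict mapping node -> list of neighbours (undirected graph)
    root: common start node of all agents
    k:    number of agents (assumed k <= number of nodes)
    """
    placement = {}                 # agent_id -> node where it settled
    agents = list(range(k))        # agents still travelling together
    visited = set()
    stack = [root]
    while agents and stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        placement[agents.pop(0)] = v   # lowest-ID agent settles here
        # push unvisited neighbours to continue the DFS
        stack.extend(u for u in adj[v] if u not in visited)
    return placement

# Example: path graph 0-1-2-3 with 3 agents starting at node 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(disperse_rooted(adj, 0, 3))  # 3 agents on 3 distinct nodes
```

This naive walk already solves rooted dispersion correctly, but its running time is governed by the cost of the traversal; the point of the paper is achieving near-linear, $O(k \log \min(k,\delta))$, and even optimal $O(k)$ time bounds with small per-agent memory.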
