Sampling and Output Estimation in Distributed Algorithms and LCAs

Leonid Barenboim, Tzali Maimon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


We consider the distributed message-passing model and the Local Computational Algorithms (LCA) model. In both models a network is represented by an n-vertex graph G = (V, E). We focus on labeling problems, such as vertex-coloring, edge-coloring, maximal independent set (MIS) and maximal matching. In the distributed model the vertices of V perform computations in parallel, in order to compute their parts in the solution for G. In the LCA model, on the other hand, probes are performed on certain vertices in order to compute their labels in a solution to a given problem. We study the possibility of estimating a solution produced by an algorithm well before the algorithm terminates. This estimation not only allows for size estimation of a solution, but also for early detection of failure in randomized algorithms, so that a correcting procedure can be executed. To this end, we propose a sampling technique in which the labels in the sample are distributed proportionally to the distribution in the algorithm's output. However, the sampling running time is significantly smaller than that of the algorithm at hand. We achieve the following results, in terms of the maximum degree Δ and the arboricity a of the input graph. The running time of our procedures is O(log a + log log n), for sampling vertex-coloring, edge-coloring, maximal matching and MIS. This significantly improves upon previous sampling techniques, which incur an additional dependency on the maximum degree Δ, which can be much higher than the arboricity, as well as a more significant dependency on n. Our techniques for sampling in the distributed model provide a powerful and general tool for estimation in the LCA model. In this setting the goal is estimating the size of a solution to a given problem by making as few vertex probes as possible. For the above-mentioned problems, we achieve estimations with probe complexity d · O(log a + log log n), where d = min(Δ, a · poly(log n)).
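The core idea of probe-based size estimation can be illustrated with a minimal sketch: probe a small uniform sample of vertices for their label in a solution and scale the hit fraction by n. The greedy MIS below is a toy stand-in for a per-vertex probe oracle, not the paper's algorithm, and `estimate_mis_size` is a hypothetical helper name.

```python
import random

def greedy_mis(adj):
    """Toy global MIS used as a probe oracle: vertex v is in the MIS
    iff no lower-numbered neighbor is already in it."""
    in_mis = {}
    for v in sorted(adj):
        in_mis[v] = all(not in_mis.get(u, False) for u in adj[v] if u < v)
    return in_mis

def estimate_mis_size(adj, num_probes, seed=0):
    """Probe `num_probes` uniformly random vertices for their MIS label
    and scale the hit fraction by n to estimate |MIS|."""
    rng = random.Random(seed)
    labels = greedy_mis(adj)  # stands in for individual LCA probes
    vertices = list(adj)
    hits = sum(labels[rng.choice(vertices)] for _ in range(num_probes))
    return len(vertices) * hits / num_probes

# Example: a path on 6 vertices; the greedy MIS here is {0, 2, 4}, size 3.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(estimate_mis_size(path, num_probes=2000))
```

This naive estimator uses uniform probes only; the paper's contribution is keeping the per-probe cost low, i.e. O(log a + log log n), so the total probe complexity depends on the arboricity rather than the maximum degree.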

Original language: English
Title of host publication: ICDCN 2021 - Proceedings of the 2021 International Conference on Distributed Computing and Networking
Publisher: Association for Computing Machinery
Number of pages: 10
ISBN (Electronic): 9781450389334
State: Published - 5 Jan 2021
Event: 22nd International Conference on Distributed Computing and Networking, ICDCN 2021 - Virtual, Online, Japan
Duration: 5 Jan 2021 – 8 Jan 2021

Publication series

Name: ACM International Conference Proceeding Series


Conference: 22nd International Conference on Distributed Computing and Networking, ICDCN 2021
City: Virtual, Online

Bibliographical note

Publisher Copyright:
© 2021 ACM.


Keywords

  • Distributed Algorithms
  • Graph Coloring
  • LCA
  • Maximal Independent Set
  • Randomized Scheme

