The paper “Dynamic Parameter Allocation in Parameter Servers” [1], authored by Alexander Renz-Wieland, Rainer Gemulla, Steffen Zeuch, and Volker Markl, was accepted for publication in Proceedings of the VLDB Endowment, Vol. 13 [2]. The authors, from TU Berlin’s DIMA group [3] and DFKI’s IAM group [4], propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks.
References:
[1] “Dynamic Parameter Allocation in Parameter Servers”, Alexander Renz-Wieland, Rainer Gemulla, Steffen Zeuch, Volker Markl, https://bit.ly/2MZuXTU
[2] Proceedings of the VLDB Endowment, Volume 13, 2019-2020, http://www.vldb.org/pvldb/vol13.html
[3] TU Berlin Database Systems & Information Management Group, https://www.dima.tu-berlin.de/
[4] DFKI Intelligent Analytics for Massive Data Group, https://bit.ly/2LKoY4Y
THE PAPER IN DETAIL:
Authors: Alexander Renz-Wieland, Rainer Gemulla, Steffen Zeuch, Volker Markl
Abstract: To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management (a key concern in distributed training), but can induce severe communication overhead. To reduce communication overhead, distributed machine learning algorithms use techniques to increase parameter access locality (PAL), achieving up to linear speed-ups. We found that existing parameter servers provide only limited support for PAL techniques, however, and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near linear scaling and can be orders of magnitude faster than existing parameter servers.
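
To make the core idea of dynamic parameter allocation concrete, the following is a minimal, single-process Python sketch under assumed names (ToyParameterServer, ToyNode, localize are all hypothetical); it is not Lapse’s implementation or API. Each parameter is owned by exactly one node, pull/push are routed to the current owner, and a relocation primitive moves a parameter so that subsequent accesses become local.

# Toy illustration of dynamic parameter allocation (NOT Lapse's actual
# implementation): a "cluster" of nodes simulated in one process, where
# each parameter lives on exactly one node and can be relocated at runtime.

class ToyNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}  # parameters currently allocated on this node

class ToyParameterServer:
    """Single-process stand-in for a distributed parameter server."""

    def __init__(self, num_nodes, initial_params):
        self.nodes = [ToyNode(i) for i in range(num_nodes)]
        self.owner = {}  # key -> id of the node that currently holds the parameter
        # Static initial partitioning, e.g. by hashing the key.
        for key, value in initial_params.items():
            node_id = hash(key) % num_nodes
            self.nodes[node_id].store[key] = value
            self.owner[key] = node_id

    def pull(self, node_id, key):
        # Local read if the parameter is allocated on node_id,
        # otherwise a (simulated) remote read from the current owner.
        return self.nodes[self.owner[key]].store[key]

    def push(self, node_id, key, delta):
        # Updates are applied at the current owner (here: additively).
        self.nodes[self.owner[key]].store[key] += delta

    def localize(self, node_id, key):
        # Dynamic parameter allocation: move the parameter to the
        # requesting node so that subsequent pull/push are local.
        old_owner = self.owner[key]
        if old_owner != node_id:
            value = self.nodes[old_owner].store.pop(key)
            self.nodes[node_id].store[key] = value
            self.owner[key] = node_id

# Usage: a worker relocates the parameters it is about to access
# intensively, then trains on them with purely local accesses.
ps = ToyParameterServer(num_nodes=4, initial_params={"w1": 0.0, "w2": 0.0})
ps.localize(node_id=2, key="w1")
ps.push(node_id=2, key="w1", delta=0.5)
print(ps.pull(node_id=2, key="w1"))  # 0.5, served from node 2's local store

In this sketch, the relocation primitive is what distinguishes dynamic from static allocation: with a fixed hash partitioning, a worker that repeatedly accesses a remotely held parameter pays communication cost on every access, whereas relocating the parameter once makes all subsequent accesses local.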