Modern science is built on objectivity: experimental results should be repeatable by any scientist who uses the same experimental setup. Since 2008, the SIGMOD conference, the leading international conference on data management, has awarded a reproducibility badge to signify that a scientific work has been successfully reproduced by a third-party reviewer. In 2021, the paper “Pump Up the Volume: Processing Large Data on GPUs with Fast Interconnects” by BIFOLD researcher Clemens Lutz and his co-authors received this prestigious badge.
SIGMOD Reproducibility has three main goals: to highlight the impact of database research papers, to enable easy dissemination of research results, and to enable easy sharing of code and experimental set-ups. In computer science, reproducing results is inherently complex because many factors can inadvertently influence a researcher's test bed. SIGMOD Reproducibility therefore aims to build a culture in which sharing the results, code, and scripts of database research is the norm rather than the exception. The challenge is to do this in a time-efficient way, which means building technical expertise in creating repeatable and shareable research.
“Our paper explores the opportunities that a new technology, fast GPU interconnects, offers for database management systems. To reproduce our work, we faced the unique challenge that our results rely on very specific hardware. Fast GPU interconnects are not yet widely available, and thus a third-party reviewer is unlikely to have the appropriate equipment to repeat our measurements. Together with the reviewers and our system administrator, we overcame this hurdle by granting the reviewers access to our lab equipment”, explains first author Clemens Lutz.
In 2021, only 13 of the 143 full papers published at SIGMOD 2020 were awarded the reproducibility badge.
Authors:
Clemens Lutz, Sebastian Breß, Steffen Zeuch, Tilmann Rabl, Volker Markl
Abstract:
GPUs have long been discussed as accelerators for database query processing because of their high processing power and memory bandwidth. However, two main challenges limit the utility of GPUs for large-scale data processing: (1) the on-board memory capacity is too small to store large data sets, yet (2) the interconnect bandwidth to CPU main-memory is insufficient for ad hoc data transfers. As a result, GPU-based systems and algorithms run into a transfer bottleneck and do not scale to large data sets. In practice, CPUs process large-scale data faster than GPUs with current technology. In this paper, we investigate how a fast interconnect can resolve these scalability limitations using the example of NVLink 2.0. NVLink 2.0 is a new interconnect technology that links dedicated GPUs to a CPU. The high bandwidth of NVLink 2.0 enables us to overcome the transfer bottleneck and to efficiently process large data sets stored in main-memory on GPUs. We perform an in-depth analysis of NVLink 2.0 and show how we can scale a no-partitioning hash join beyond the limits of GPU memory. Our evaluation shows speed-ups of up to 18x over PCI-e 3.0 and up to 7.3x over an optimized CPU implementation. Fast GPU interconnects thus enable GPUs to efficiently accelerate query processing.
Publication:
Pump Up the Volume: Processing Large Data on GPUs with Fast Interconnects
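To illustrate the core idea described in the abstract, the following is a minimal, hypothetical CUDA sketch of a no-partitioning hash join in which the probe-side relation resides in pinned CPU main memory and is read by the GPU over the interconnect. The data sizes, hash-table layout, and kernel names are simplified assumptions for illustration only; this sketch does not reproduce the authors' actual implementation or any NVLink-specific optimizations.

// Illustrative sketch only (not the paper's implementation): a simplified
// no-partitioning hash join where the build side and hash table live in GPU
// memory, while the probe side stays in pinned CPU main memory and is read
// by the GPU over the interconnect (zero-copy mapping).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

const unsigned int EMPTY = 0xFFFFFFFFu;

// Build phase: insert build-side keys into an open-addressing hash table in GPU memory.
__global__ void build_kernel(const unsigned int *build_keys, int n,
                             unsigned int *table, int slots) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned int key  = build_keys[i];
    unsigned int slot = key % slots;
    // Linear probing with atomic compare-and-swap.
    while (atomicCAS(&table[slot], EMPTY, key) != EMPTY) {
        slot = (slot + 1) % slots;
    }
}

// Probe phase: probe-side keys are read directly from host memory, so the
// probed relation is not limited by GPU memory capacity.
__global__ void probe_kernel(const unsigned int *probe_keys, long n,
                             const unsigned int *table, int slots,
                             unsigned long long *matches) {
    long i = blockIdx.x * (long)blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned int key  = probe_keys[i];
    unsigned int slot = key % slots;
    while (table[slot] != EMPTY) {
        if (table[slot] == key) { atomicAdd(matches, 1ULL); break; }
        slot = (slot + 1) % slots;
    }
}

int main() {
    const int  n_build = 1 << 20;   // build side fits in GPU memory
    const long n_probe = 1L << 24;  // probe side stays in CPU main memory (small here for illustration)
    const int  slots   = 2 * n_build;

    cudaSetDeviceFlags(cudaDeviceMapHost);  // allow mapped (zero-copy) host memory

    // Build relation: generated on the host, copied into GPU memory.
    unsigned int *h_build = (unsigned int *)malloc(n_build * sizeof(unsigned int));
    for (int i = 0; i < n_build; ++i) h_build[i] = (unsigned int)i;

    unsigned int *d_build, *d_table;
    cudaMalloc(&d_build, n_build * sizeof(unsigned int));
    cudaMalloc(&d_table, slots * sizeof(unsigned int));
    cudaMemcpy(d_build, h_build, n_build * sizeof(unsigned int), cudaMemcpyHostToDevice);
    cudaMemset(d_table, 0xFF, slots * sizeof(unsigned int));  // initialize all slots to EMPTY

    // Probe relation: pinned, mapped host memory that the GPU reads over the interconnect.
    unsigned int *h_probe, *d_probe_view;
    cudaHostAlloc(&h_probe, n_probe * sizeof(unsigned int), cudaHostAllocMapped);
    for (long i = 0; i < n_probe; ++i) h_probe[i] = (unsigned int)(i % n_build);
    cudaHostGetDevicePointer((void **)&d_probe_view, h_probe, 0);

    unsigned long long *d_matches;
    cudaMalloc(&d_matches, sizeof(unsigned long long));
    cudaMemset(d_matches, 0, sizeof(unsigned long long));

    build_kernel<<<(n_build + 255) / 256, 256>>>(d_build, n_build, d_table, slots);
    probe_kernel<<<(int)((n_probe + 255) / 256), 256>>>(d_probe_view, n_probe,
                                                        d_table, slots, d_matches);

    unsigned long long matches = 0;
    cudaMemcpy(&matches, d_matches, sizeof(matches), cudaMemcpyDeviceToHost);
    printf("matches: %llu\n", matches);
    return 0;
}

In this sketch, the zero-copy mapping stands in for the fast interconnect: the GPU streams probe-side tuples directly from host memory, so the join is not bounded by GPU memory capacity. Over PCI-e 3.0 such transfers are typically the bottleneck the abstract describes; the higher bandwidth of NVLink 2.0 is what makes this approach practical for large data sets.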