Deca: a garbage collection optimizer for in-memory data processing
Shi, Xuanhua, Ke, Zhixiang, Zhou, Yongluan, Jin, Hai, Lu, Lu, Zhang, Xiong, He, Ligang, Hu, Zhenyu and Wang, Fei (2019) Deca: a garbage collection optimizer for in-memory data processing. ACM Transactions on Computer Systems, 36 (1). pp. 1-47. doi:10.1145/3310361 ISSN 0734-2071.
PDF: WRAP-Deca-garbage-collection-optimizer-in-memory-data-processing-He-2019.pdf (Accepted Version, 1439Kb)
Official URL: http://dx.doi.org/10.1145/3310361
Abstract
In-memory caching of intermediate data and active combining of data in shuffle buffers have been shown to be very effective in minimizing the recomputation and I/O cost in big data processing systems such as Spark and Flink. However, it has also been widely reported that these techniques create a large number of long-living data objects in the heap. These objects may quickly saturate the garbage collector, especially when handling a large dataset, and hence limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark, which transparently decomposes and groups objects with similar lifetimes into byte arrays and releases their space altogether when their lifetimes come to an end. When systems are processing very large data, Deca also provides field-oriented memory pages to ensure high compression efficiency. Extensive experimental studies using both synthetic and real datasets show that, compared with Spark, Deca is able to (1) reduce the garbage collection time by up to 99.9%, (2) reduce the memory consumption by up to 46.6% and the storage space by 23.4%, (3) achieve 1.2× to 22.7× speedup in terms of execution time in cases without data spilling and 16× to 41.6× speedup in cases with data spilling, and (4) provide similar performance compared to domain-specific systems.
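The core idea in the abstract, grouping objects with a common expected lifetime into a byte array and releasing the whole array at once, can be illustrated with a minimal, hypothetical sketch. This is not Deca's actual API; the `LifetimeRegion` class, its method names, and the fixed-width record layout are assumptions made purely for illustration:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of lifetime-based grouping (not Deca's real interface):
// records sharing an expected lifetime are serialized into one byte array
// ("region") and addressed by offset rather than by object reference, so the
// garbage collector sees one large array instead of many small objects. When
// the lifetime ends, the entire region is dropped in a single step.
final class LifetimeRegion {
    private ByteBuffer buf;

    LifetimeRegion(int capacityBytes) {
        this.buf = ByteBuffer.allocate(capacityBytes);
    }

    // Append a fixed-width record (a decomposed key/value pair: 8 + 8 bytes).
    int putRecord(long key, double value) {
        int offset = buf.position();
        buf.putLong(key).putDouble(value);
        return offset;                         // records are addressed by offset
    }

    long keyAt(int offset)     { return buf.getLong(offset); }
    double valueAt(int offset) { return buf.getDouble(offset + 8); }

    // Lifetime end: release every record in the region at once.
    void release() { buf = null; }

    public static void main(String[] args) {
        LifetimeRegion region = new LifetimeRegion(1024);
        int r0 = region.putRecord(42L, 3.5);
        int r1 = region.putRecord(7L, -1.0);
        System.out.println(region.keyAt(r0));   // 42
        System.out.println(region.valueAt(r1)); // -1.0
        region.release();                       // all records freed together
    }
}
```

The point of the offset-based layout is that no per-record Java object exists for the collector to trace; the paper's reported GC-time reductions come from eliminating exactly this kind of per-object tracing and reclamation work.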
Item Type: Journal Article
Divisions: Faculty of Science, Engineering and Medicine > Science > Computer Science
Journal or Publication Title: ACM Transactions on Computer Systems
Publisher: Association for Computing Machinery
ISSN: 0734-2071
Official Date: March 2019
Volume: 36
Number: 1
Page Range: pp. 1-47
DOI: 10.1145/3310361
Status: Peer Reviewed
Publication Status: Published
Reuse Statement (publisher, data, author rights): © ACM 2019. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Computer Systems, 36 (1). pp. 1-47. http://dx.doi.org/10.1145/3310361
Access rights to Published version: Open Access (Creative Commons)
Date of first compliant deposit: 2 August 2019
Date of first compliant Open Access: 2 August 2019