Say Goodbye to Off-heap Caches! On-heap Caches Using Memory-Mapped I/O

January 18, 2021

“Many analytics computations are dominated by iterative processing stages, executed until a convergence condition is met. To accelerate such workloads while keeping up with the exponential growth of data and the slow scaling of DRAM capacity, Spark employs off-memory caching of intermediate results. However, off-heap caching requires the serialization and deserialization (serdes) of data, which adds significant overhead, especially with growing datasets.

This paper proposes TeraCache, an extension of the Spark data cache that avoids the need for serdes by keeping all cached data on-heap but off-memory, using memory-mapped I/O (mmio). To achieve this, TeraCache extends the original JVM heap with a managed heap that resides on a memory-mapped fast storage device and is used exclusively for cached data. Preliminary results show that the TeraCache prototype can speed up Machine Learning (ML) workloads that cache intermediate results by up to 37% compared to the state-of-the-art serdes approach.”
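The core mechanism the abstract relies on, memory-mapped I/O, can be illustrated with a minimal sketch: mapping a file on a storage device into the address space lets a program store and load data structures with plain memory accesses, with no serialization or deserialization step. This is only a demonstration of the mmio primitive using POSIX `mmap`, not the TeraCache implementation; the `cache_entry` type and `mmio_round_trip` helper are hypothetical.

```c
// Sketch of memory-mapped I/O: back a region of the address space with a
// file so data placed there reaches the device without serdes.
// Illustrative only; not the TeraCache code.
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct { long key; double value; } cache_entry;  /* hypothetical */

/* Create/truncate a file, map it, store one entry through the mapping,
 * and return the entry read back (a round trip with no serdes). */
cache_entry mmio_round_trip(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    assert(fd >= 0);
    assert(ftruncate(fd, sizeof(cache_entry)) == 0);

    /* The mapping makes the file's bytes addressable as ordinary memory. */
    cache_entry *e = mmap(NULL, sizeof(cache_entry),
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    assert(e != MAP_FAILED);

    e->key = 42;      /* plain store: no serialization step   */
    e->value = 3.14;
    msync(e, sizeof(cache_entry), MS_SYNC);  /* flush to the device */

    cache_entry out = *e;  /* plain load: no deserialization step */
    munmap(e, sizeof(cache_entry));
    close(fd);
    return out;
}
```

In the same spirit, TeraCache places cached objects directly on a heap backed by such a mapping, so the garbage-collected runtime can reference them like any other on-heap object while the device holds the bytes.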

Citation: Kolokasis, I. G., Papagiannis, A., Pratikakis, P., Bilas, A., and Zakkak, F., 2020. Say Goodbye to Off-heap Caches! On-heap Caches Using Memory-Mapped I/O. In 12th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage '20).

Authors: Iacovos G. Kolokasis, Anastasios Papagiannis, Polyvios Pratikakis, and Angelos Bilas, Institute of Computer Science (ICS), Foundation for Research and Technology – Hellas (FORTH), Greece; Foivos Zakkak, Red Hat, Inc.
Appeared in HotStorage’20
Awarded Best Presentation!

Also in collaboration with ICS-FORTH (https://www.ics.forth.gr/)
