Alladi Ramakrishnan Hall
Algorithmic foundation of parallel paging
Rathish Das
University of Waterloo, Canada
Every modern computer, be it a desktop or a supercomputer, uses hierarchical memory. A key characteristic of any memory hierarchy is that a small, fast memory is backed by a large, slow memory (possibly followed by further levels). Data is served to the processor from the small, fast memory, which is a scarce resource; managing it efficiently is crucial to high performance.
Sharing the small, fast memory among multiple processors gives rise to many challenges. In this talk, I will give an overview of these challenges and of the recent progress that has been made in addressing them.
Classical problems such as paging have been well understood in the sequential setting for decades, yet the paging problem remained wide open in the parallel setting for more than two decades. In the parallel paging problem, p processors share a cache (the small, fast memory) of size k. The goal is to partition the cache among the processors over time so as to minimize their average completion time. I will present tight upper and lower bounds of \Theta(\log p) on the competitive ratio with O(1) resource augmentation. I will then extend the theory of parallel paging to a new kind of multilevel memory system, called high-bandwidth memory, which is common in new supercomputers.
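To make the model concrete, below is a minimal toy sketch in Python of the setting described above: p processors share a cache of k pages, each processor's share is managed with LRU, a hit is assumed to cost 1 time step and a miss a fixed MISS_COST steps, and the cache is split evenly as a simple static baseline. The function names, the static equal split, and the cost values are illustrative assumptions only, not the partitioning strategy or bounds from the talk.

from collections import OrderedDict

MISS_COST = 10  # assumed cost of fetching a page from slow memory (illustrative)

def lru_completion_time(requests, capacity):
    """Completion time of one processor running LRU on its cache partition."""
    cache = OrderedDict()  # keys = resident pages; insertion order tracks recency
    time = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh recency
            time += 1
        else:
            time += MISS_COST              # miss: fetch from slow memory
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used page
            cache[page] = True
    return time

def average_completion_time(sequences, k):
    """Average completion time with the k cache pages split equally (a naive baseline)."""
    p = len(sequences)
    share = max(1, k // p)  # each processor's static cache share (assumption)
    return sum(lru_completion_time(seq, share) for seq in sequences) / p

if __name__ == "__main__":
    # Two processors sharing a cache of 4 pages.
    seqs = [[1, 2, 3, 1, 2, 3], [7, 8, 7, 8, 9, 7]]
    print(average_completion_time(seqs, k=4))

The interesting algorithmic question, and the subject of the talk, is how to do better than such a static split by repartitioning the cache over time as the processors' demands change.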