Achieving high performance for sparse applications is challenging due to irregular access patterns and weak locality. To address these challenges, novel systems such as the Emu architecture have been proposed. The Emu design uses lightweight migratory threads, narrow memory, and near-memory processing capabilities to cope with weak locality and reduce the total load on the memory system. Because the Emu architecture is fundamentally different from cache-based hierarchical memory systems, it is crucial to understand the cost-benefit tradeoffs of standard sparse algorithm optimizations on Emu hardware.
In this work, we explore sparse matrix-vector multiplication (SpMV) on the Emu architecture. We investigate the effects of different sparse optimizations such as dense vector data layouts, work distributions, and matrix reorderings. Our study finds that distributing work evenly across the system at the outset is inadequate to maintain load balance over time, due to the migratory nature of Emu threads. We demonstrate that known matrix reordering techniques can improve SpMV performance on the Emu architecture by as much as 70% by encouraging more consistent load balancing, compared with a performance gain of no more than 16% on a cache-based memory system.
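To make the optimizations under discussion concrete, below is a minimal sketch in plain C (not Emu Cilk code) of a CSR SpMV whose rows are split into equal blocks across workers, i.e., the kind of even initial work distribution the abstract finds insufficient on its own. The structure and names (csr_matrix, spmv_even_blocks, nworkers) are illustrative assumptions, not taken from the talk; a matrix reordering would be applied as a row/column permutation before the CSR structure is built, and is not shown here.

/* Minimal CSR SpMV sketch: rows are split into equal-sized blocks, one block
 * per logical worker, mirroring an "even initial work distribution". All
 * names here are illustrative; this is not the Emu programming model. */
#include <stddef.h>

typedef struct {
    size_t nrows;
    const size_t *row_ptr;  /* length nrows + 1 */
    const size_t *col_idx;  /* length nnz */
    const double *vals;     /* length nnz */
} csr_matrix;

/* y[i] = sum of A->vals[k] * x[A->col_idx[k]] over the nonzeros of row i */
static void spmv_block(const csr_matrix *A, const double *x, double *y,
                       size_t row_begin, size_t row_end)
{
    for (size_t i = row_begin; i < row_end; i++) {
        double acc = 0.0;
        for (size_t k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
            acc += A->vals[k] * x[A->col_idx[k]];  /* gathers from x follow
                                                      the column pattern */
        y[i] = acc;
    }
}

/* Even row-block distribution across nworkers logical workers. On Emu, each
 * block would be handled by lightweight threads that migrate to the memory
 * holding the data they touch; in this plain-C sketch the blocks simply run
 * one after another. */
void spmv_even_blocks(const csr_matrix *A, const double *x, double *y,
                      size_t nworkers)
{
    size_t rows_per_worker = (A->nrows + nworkers - 1) / nworkers;
    for (size_t w = 0; w < nworkers; w++) {
        size_t begin = w * rows_per_worker;
        size_t end   = begin + rows_per_worker;
        if (begin > A->nrows) begin = A->nrows;
        if (end > A->nrows)   end   = A->nrows;
        spmv_block(A, x, y, begin, end);
    }
}

The sketch only captures the baseline even distribution; the point of the study is that where the referenced rows and vector entries live, and where threads consequently migrate, matters more over time than how evenly the rows were divided at the start.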
Thomas Rolinger is a researcher at the Laboratory for Physical Sciences and a second-year PhD student at the University of Maryland. He received a B.S. in Computer Science from the University of West Florida and an M.S. in Computer Science from Florida State University. His research interests include parallel and high-performance computing, evaluating novel architectures, and performance studies of irregular algorithms.