Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked an interest in approximating the self-attention computation to reduce them. In this work, we propose to approximate self-attention by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki is able to speed up the attention computation due to reduced data movement (load/store) and compute costs, while preserving model efficacy better than other popular approximation methods.
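To illustrate the core idea, the sketch below (not the authors' implementation) scores KV-cache tokens using keys projected into a low-dimensional subspace, keeps only the top-scoring tokens, and runs exact attention over that subset. The function name and the on-the-fly PCA projection are illustrative assumptions made for this example.

```python
# Minimal sketch of low-dimensional token selection for sparse attention.
# Assumption: the projection basis is computed here via PCA on the cached keys;
# this is only for illustration, not the method presented in the talk.
import torch

def low_rank_sparse_attention(q, K, V, r=16, top_k=64):
    """q: (d,) query; K, V: (n, d) cached keys/values; r: reduced key dimension."""
    n, d = K.shape
    # PCA basis of the keys: top-r right singular vectors, shape (d, r).
    _, _, Vh = torch.linalg.svd(K - K.mean(dim=0), full_matrices=False)
    P = Vh[:r].T

    # Approximate attention scores computed in the r-dimensional subspace.
    approx_scores = (K @ P) @ (P.T @ q) / d ** 0.5   # shape (n,)

    # Rank and select the highest-scoring tokens from the KV-cache.
    k = min(top_k, n)
    idx = torch.topk(approx_scores, k).indices

    # Exact attention restricted to the selected tokens.
    scores = (K[idx] @ q) / d ** 0.5
    weights = torch.softmax(scores, dim=0)
    return weights @ V[idx]                          # shape (d,)

# Example usage with random tensors.
q = torch.randn(128)
K, V = torch.randn(1024, 128), torch.randn(1024, 128)
out = low_rank_sparse_attention(q, K, V)
```

Because only k of the n cached key/value rows participate in the exact attention step, the data movement and compute for that step scale with k rather than the full sequence length.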
Slides will be available for download here after the presentation.
Prajwal Singhania is a second-year CS PhD student in the Parallel Systems and Software Group at UMD, advised by Abhinav Bhatele. He completed his Integrated Bachelor's and Master's degrees in Computer Science at the Indian Institute of Technology, Kharagpur, in 2020. His research interests lie at the intersection of High-Performance Computing and AI, with a particular focus on optimizing system-level performance for AI training and inference workloads in single- and multi-GPU settings.