SC24 Schedule: Speaker Siddharth Singh

A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training

1:45 PM Thursday, November 21

Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely activated expert blocks to a base model, increasing the number of parameters without impacting computational costs. However, current distributed deep learning frameworks are limited in their ability to train high-quality MoE models with large base models. In this work, we present DeepSpeed-TED, a novel, three-dimensional, hybrid parallel algorithm that combines data, tensor, and expert parallelism to enable the training of MoE models with 4–8× larger base models than the current state-of-the-art. We also describe memory optimizations in the optimizer step and communication optimizations that eliminate unnecessary data movement. We implement our approach in DeepSpeed and achieve a speedup of 26% over the baseline (i.e., without our communication optimizations) when training a 40 billion parameter MoE model (a 6.7 billion parameter base model with 16 experts) on 128 V100 GPUs.
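To make the three parallel dimensions concrete, the sketch below shows one way a pool of GPU ranks could be partitioned into tensor-, expert-, and data-parallel groups. This is a minimal illustration only, not the DeepSpeed-TED implementation; the function name build_3d_groups, the rank-to-grid ordering, and the example group sizes are assumptions made for this example.

# Minimal sketch (assumed layout, not DeepSpeed-TED code): view world_size
# ranks as a (data, expert, tensor) grid and read off the three kinds of
# process groups used by a hybrid tensor-expert-data parallel scheme.

def build_3d_groups(world_size, tensor_size, expert_size):
    """Partition ranks 0..world_size-1 into tensor, expert, and data groups."""
    assert world_size % (tensor_size * expert_size) == 0
    data_size = world_size // (tensor_size * expert_size)

    # Rank layout: rank = d * (expert_size * tensor_size) + e * tensor_size + t
    grid = [[[d * expert_size * tensor_size + e * tensor_size + t
              for t in range(tensor_size)]
             for e in range(expert_size)]
            for d in range(data_size)]

    # Tensor-parallel groups: ranks that split each layer's weights.
    tensor_groups = [grid[d][e]
                     for d in range(data_size) for e in range(expert_size)]
    # Expert-parallel groups: ranks that each hold a different expert.
    expert_groups = [[grid[d][e][t] for e in range(expert_size)]
                     for d in range(data_size) for t in range(tensor_size)]
    # Data-parallel groups: ranks that hold replicas of the same parameters.
    data_groups = [[grid[d][e][t] for d in range(data_size)]
                   for e in range(expert_size) for t in range(tensor_size)]
    return tensor_groups, expert_groups, data_groups

if __name__ == "__main__":
    # Example: 16 GPUs split as 2-way tensor x 4-way expert x 2-way data.
    t_groups, e_groups, d_groups = build_3d_groups(16, tensor_size=2, expert_size=4)
    print("tensor-parallel groups:", t_groups)
    print("expert-parallel groups:", e_groups)
    print("data-parallel groups:  ", d_groups)

In an actual framework, each rank list would be turned into a communicator (for example with torch.distributed.new_group), and every GPU would then run tensor-parallel collectives within its tensor group, expert routing (all-to-all) within its expert group, and gradient all-reduce within its data group.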

Slides will be available for download here after the presentation.

Speaker Bio - Siddharth Singh

Siddharth Singh is a Ph.D. candidate in Computer Science at the University of Maryland, College Park. He completed his B.Tech/M.Tech in Computer Science and Engineering at the Indian Institute of Technology, Kharagpur. His research focuses on the practical aspects of distributed training and inference for large neural networks and has been published at premier HPC venues such as IPDPS and ICS. He received the Outstanding Graduate Assistant Award for the 2023-24 academic year, and his work was selected as a finalist for the 2024 ACM Gordon Bell Prize.
