
CS Talk - Abhinav Bhatele

Department of Computer Science

Maryland Language Science Center | Computer Science
Friday, September 26, 2025, 11:00 am - 12:00 pm
Brendan Iribe Center, Room 0318, and Virtual

Training at Scale: GPU Systems Driving Advances in LLMs and GNNs

Abstract: Significant advances in computer architecture (the rise of accelerators such as GPGPUs) and parallel computing (scalable libraries for dense and sparse linear algebra) have contributed to the ongoing AI revolution. In particular, distributed LLM training relies on scalable matrix multiplication algorithms and efficient communication over high-speed interconnects. Pre-training and fine-tuning large language models (LLMs) with hundreds of billions to trillions of parameters, and graph neural networks (GNNs) on extremely large graphs, require hundreds to tens of thousands of GPUs. However, such training often suffers from significant scaling bottlenecks, such as high communication overheads and load imbalance.

In this talk, I will present several systems research directions that directly impact AI model training. First, I will describe my group's work on using a three-dimensional parallel algorithm for matrix multiplication in large-scale LLM training. We have implemented these techniques, along with additional performance optimizations, in a highly scalable, open-source framework called AxoNN. Second, I will demonstrate the application of the same algorithm to full-graph GNN training on extremely large graphs. Finally, I will discuss the need for scalable collective communication routines for large model training.
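As a rough intuition for the three-dimensional approach mentioned above (an illustrative sketch only, not the actual AxoNN implementation), the following serially simulates how C = A x B can be partitioned across a p x p x p grid of workers, with each worker computing one block product and partial sums along the depth dimension reduced into the result:

```python
import numpy as np

# Illustrative sketch: a serial simulation of a 3D (p x p x p) decomposition
# of C = A @ B. In a real distributed setting, each block would live on its
# own GPU and the partial-sum reduction along the depth dimension would be
# a collective (e.g., an all-reduce) over the interconnect.
p = 2                        # virtual process grid is p x p x p (8 "GPUs")
n = 8                        # matrix dimension, assumed divisible by p
blk = n // p
rng = np.random.default_rng(0)
A = rng.random((n, n))
B = rng.random((n, n))

C = np.zeros((n, n))
for i in range(p):           # row coordinate of the grid
    for j in range(p):       # column coordinate of the grid
        for k in range(p):   # depth coordinate: splits the inner dimension
            # "GPU" (i, j, k) owns one block of A and one block of B
            A_blk = A[i*blk:(i+1)*blk, k*blk:(k+1)*blk]
            B_blk = B[k*blk:(k+1)*blk, j*blk:(j+1)*blk]
            # partial products along k accumulate into block (i, j) of C
            C[i*blk:(i+1)*blk, j*blk:(j+1)*blk] += A_blk @ B_blk

assert np.allclose(C, A @ B)  # matches the direct product
```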

Bio: Abhinav Bhatele is an associate professor in the Department of Computer Science and director of the Parallel Software and Systems Group at the University of Maryland, College Park. His research interests are broadly in systems and AI, with a focus on parallel computing and distributed AI. He has published research on parallel programming models and runtimes, network design and simulation, applications of machine learning to parallel systems, parallel deep learning, and analyzing, visualizing, modeling, and optimizing the performance of parallel software and systems. Abhinav has received best paper awards at Euro-Par 2009, IPDPS 2013, IPDPS 2016, and PDP 2024, and a best poster award at SC 2023. He received the IEEE TCSC Award for Excellence in Scalable Computing (Early Career) in 2014, the LLNL Early and Mid-Career Recognition award in 2018, the NSF CAREER award in 2021, the IEEE TCSC Award for Excellence in Scalable Computing (Middle Career) in 2023, and the UIUC CS Early Career Academic Achievement Alumni Award in 2024.

Abhinav received a B.Tech. degree in Computer Science and Engineering from I.I.T. Kanpur, India, in May 2005, and M.S. and Ph.D. degrees in Computer Science from the University of Illinois at Urbana-Champaign in 2007 and 2010, respectively. He was a postdoc and later a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory from 2011 to 2019. He served as an associate editor of the IEEE Transactions on Parallel and Distributed Systems (TPDS) from 2022 to 2024.
