
arXiv: 2504.10013 · v2 · submitted 2025-04-14 · cs.DC


Training LLMs on HPC Systems: Best Practices from the OpenGPT-X Project

classification: cs.DC
keywords: training, LLMs, best practices, OpenGPT-X, project report, software
Abstract

The training of large language models (LLMs) requires substantial computational resources, complex software stacks, and carefully designed workflows to achieve scalability and efficiency. This report presents best practices and insights gained from the OpenGPT-X project, a German initiative focused on developing open, multilingual LLMs optimized for European languages. We detail the use of high-performance computing (HPC) systems, primarily JUWELS Booster at JSC, for training Teuken-7B, a 7-billion-parameter transformer model. The report covers system architecture, training infrastructure, software choices, profiling and benchmarking tools, as well as engineering and operational challenges. It includes throughput measurements for various 3D-parallelism configurations used during training, along with the measured impact of features such as flash attention.
