LLM x HPC

2024 International Workshop on Large Language Models (LLMs) and HPC

September 24th, Kobe, Japan. Co-hosted with CLUSTER 2024 conference.


Call for Papers

High-Performance Computing (HPC) systems have become critical for meeting the computational and data-intensive needs of training Large Language Models (LLMs). Simultaneously, in the domain of HPC research, LLMs are emerging as transformative tools to understand and improve HPC system productivity and efficiency. There are clear synergies between these areas and meaningful coordination of efforts holds great promise. This workshop brings together researchers and developers to explore the intersection of HPC and LLMs, offering a comprehensive look at how these two domains can mutually benefit and drive each other's advancement.

The workshop has two key focus areas: (i) co-design and deployment of HPC systems to support LLM training and (ii) using LLMs to understand and optimize/tune HPC systems. The program combines paper presentations, a panel discussion, and a keynote to highlight salient research and development activities, promote diverse perspectives and visions, and stimulate discussion in the community.

Topics to be covered in this workshop include, but are not limited to:

Important Dates

2024 June 29 (extended from June 19): Submission deadline
2024 July 19: Author notification
2024 August 02: CLUSTER author registration
2024 August 09: Camera-ready deadline

How to submit


The papers should be:

Guidelines for Artificial Intelligence (AI)-Generated Text

The use of content generated by artificial intelligence (AI) in a paper (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of any paper submitted to an IEEE publication. The AI system used shall be identified, and specific sections of the paper that use AI-generated content shall be identified and accompanied by a brief explanation regarding the level at which the AI system was used to generate the content.

The use of AI systems for editing and grammar enhancement is common practice and, as such, is generally outside the intent of the above policy. In this case, disclosure as noted above is recommended.


Full IEEE submission policies can be found here.

Submit your papers here

Please direct any inquiries to llmhpc-workshop@lists.anl.gov.

Accepted papers will be included in the IEEE Cluster 2024 proceedings and published in the IEEE Xplore digital library.

Program

Time Agenda
10:30-10:45 ☕ Coffee is available
10:45-10:50 Opening remarks
10:50-11:10 📄 Thibaut Tachon, Haoran Wang and Chong Li. RAPID: A Rapid Automatic Parallelizer for Immense Deep Neural Networks.
11:10-11:30 📄 Soratouch Pornmaneerattanatri, Keichi Takahashi, Yutaro Kashiwa, Kohei Ichikawa and Hajimu Iida. Automatic Parallelization with CodeT5+: A Model for Generating OpenMP Directives.
11:30-12:10 ⭐ Invited talk 1. Xiaoli Shen, Microsoft Corporation. Phi-3 family: highly capable multilingual multimodal open SLMs.
12:10-13:15 🍱 Lunch
13:15-13:35 📄 Matthew Dearing, Yiheng Tao, Xingfu Wu, Zhiling Lan and Valerie Taylor. LASSI: An LLM-based Automated Self-Correcting Pipeline for Translating Parallel Scientific Codes.
13:35-14:05 ⭐ Invited talk 2. Yasuhiro Ito, Tenstorrent Inc. Tenstorrent's Tensix Accelerator for scalable & reasonable LLM deployment.
14:05-14:35 All-hands discussion, closing
Invited talks are supported by KAKENHI project 22H03600, "Automated, Scalable, and Machine Learning-Driven Approach for Generating and Optimizing Scientific Application Codes".

Organizers

Steering Committee

Program Committee