There is growing interest in leveraging large language models (LLMs) for various software engineering tasks, including code generation, code translation, and code testing. However, their application in the domain of code and compiler optimization remains relatively unexplored. Moreover, training LLMs incurs significant computational and data costs, which raises the barrier to work in this area.
To address this, in a new paper, Meta Large Language Model Compiler: Foundation Models of Compiler Optimization, a Meta AI research team introduces the Meta Large Language Model Compiler (LLM Compiler), a suite of robust, openly available, pre-trained models specifically designed for code optimization tasks. The suite aims to provide a scalable, cost-effective foundation for further research and development in compiler optimization.

The LLM Compiler consists of foundation models trained to understand the semantics of compiler intermediate representations (IRs) and assembly code, effectively emulating the compiler. These models can be fine-tuned with minimal data for specific downstream compiler optimization tasks. The LLM Compiler models are specialized versions of Code Llama, trained on 546 billion tokens of compiler-centric data in two stages.

In the first stage, the models are trained predominantly on unlabeled compiler IRs and assembly code. In the second stage, they undergo instruction fine-tuning to predict the outcomes and effects of compiler optimizations. The LLM Compiler FTD models are then further fine-tuned on 164 billion tokens of downstream flag-tuning and disassembly task data, bringing the total to 710 billion training tokens. Throughout all four training stages (the two foundation stages plus the two downstream fine-tuning stages), 15% of the data from previous tasks is retained.
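To make the compiler-emulation objective concrete, the following sketch shows roughly what such an instruction-tuning example might look like: the model is given unoptimized LLVM-IR together with a list of optimization passes and is asked to predict the optimized IR and its size. The prompt wording, field names, and pass list below are illustrative assumptions, not the paper's actual template.

```python
# Illustrative compiler-emulation training pair (the format is an
# assumption, not the paper's actual prompt template).

unoptimized_ir = """\
define i32 @square(i32 %x) {
entry:
  %x.addr = alloca i32
  store i32 %x, ptr %x.addr
  %0 = load i32, ptr %x.addr
  %mul = mul nsw i32 %0, %0
  ret i32 %mul
}
"""

example = {
    # Input: IR before optimization plus the passes to emulate.
    "prompt": (
        "Apply the passes mem2reg,instcombine and give the optimized "
        "IR and its instruction count:\n" + unoptimized_ir
    ),
    # Target: what opt would actually produce, so the model learns to
    # predict the outcome of the optimization pipeline.
    "completion": (
        "define i32 @square(i32 %x) {\n"
        "entry:\n"
        "  %mul = mul nsw i32 %x, %x\n"
        "  ret i32 %mul\n"
        "}\n"
        "; instruction count: 2"
    ),
}
```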
The researchers adapt the model for two downstream compilation tasks: tuning compiler flags to minimize code size, and disassembling x86_64 and ARM assembly back into LLVM-IR. These LLM Compiler FTD models are released to the community under a bespoke commercial license.
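As a rough illustration of the flag-tuning workflow, the sketch below asks a model for a pass list for an LLVM-IR module, applies it with LLVM's opt, and compares a crude code-size proxy against the standard -Oz pipeline. The generate_pass_list helper, the module.ll input file, and the size metric are assumptions standing in for the paper's actual setup.

```python
import subprocess

def generate_pass_list(ir_text: str) -> str:
    """Placeholder for querying an LLM Compiler-style model.

    Hypothetical: a real setup would send the IR to an inference
    endpoint and get back a comma-separated opt pass list predicted
    to minimize code size. Hard-coded here to keep the sketch
    self-contained.
    """
    return "mem2reg,instcombine,simplifycfg,globaldce"

def optimize(input_ll: str, output_ll: str, passes: str) -> None:
    """Apply a pass pipeline with LLVM's opt (new pass manager syntax)."""
    subprocess.run(
        ["opt", f"-passes={passes}", "-S", input_ll, "-o", output_ll],
        check=True,
    )

def instruction_count(ll_path: str) -> int:
    """Very crude code-size proxy: count lines that look like IR instructions."""
    with open(ll_path) as f:
        return sum(
            1
            for line in f
            if " = " in line
            or line.strip().startswith(("ret", "br", "store", "call"))
        )

if __name__ == "__main__":
    with open("module.ll") as f:  # assumed input module
        ir = f.read()

    # Baseline: the standard size-optimization pipeline.
    optimize("module.ll", "baseline.ll", "default<Oz>")
    # Candidate: the pass list suggested by the model.
    optimize("module.ll", "tuned.ll", generate_pass_list(ir))

    print("Oz baseline instructions :", instruction_count("baseline.ll"))
    print("model-tuned instructions :", instruction_count("tuned.ll"))
```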

Compared to the autotuning technique on which it was trained, the LLM Compiler FTD achieves 77% of the optimization potential without requiring additional compilations. When disassembling assembly back into IR, it produces an exact-match round trip 14% of the time. On both tasks, the LLM Compiler FTD models significantly outperform comparable LLMs, such as Code Llama and GPT-4 Turbo.
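For context on how a disassembly can be judged correct, a round-trip check lowers the model-generated LLVM-IR back to assembly and compares it with the original input. The sketch below outlines one such check using LLVM's llc; the lift_to_ir callable is hypothetical and stands in for a query to an LLM Compiler FTD checkpoint.

```python
import subprocess
from typing import Callable

def lower_to_asm(ir_text: str) -> str:
    """Lower LLVM-IR back to assembly with llc (IR on stdin, asm on stdout).
    Target-selection flags are omitted for brevity."""
    result = subprocess.run(
        ["llc", "-o", "-", "-"],
        input=ir_text,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

def round_trips_exactly(original_asm: str, lift_to_ir: Callable[[str], str]) -> bool:
    """Exact-match round trip: the IR the model produces for the input
    assembly must lower back to identical assembly text.

    `lift_to_ir` is any callable mapping assembly text to LLVM-IR, for
    example a wrapper around an LLM Compiler FTD inference endpoint
    (hypothetical; not provided here)."""
    regenerated = lower_to_asm(lift_to_ir(original_asm))
    return regenerated.strip() == original_asm.strip()
```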
The researchers claim the LLM Compiler opens new possibilities for exploring the untapped potential of LLMs in the realm of code and compiler optimization.
The paper Meta Large Language Model Compiler: Foundation Models of Compiler Optimization is on arXiv.
Author: Hecate He | Editor: Chain Zhang
