
CompilerGym



CompilerGym is a toolkit for exposing compiler optimization problems for reinforcement learning. It allows machine learning researchers to experiment with program optimization techniques without requiring any experience in compilers, and provides a framework for compiler developers to expose new optimization problems for AI.


Getting Started

Starting with CompilerGym is simple. If you are not already familiar with the gym interface, refer to the getting started guide for an overview of the key concepts.

Installation

Install the latest CompilerGym release using:

pip install -U compiler_gym

The binary works on macOS and Linux (Ubuntu 18.04, Fedora 28, Debian 10, or newer equivalents).
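
To quickly check that the package installed correctly, import it and print its version (a minimal sanity check; this assumes the package exposes the usual __version__ attribute):

>>> import compiler_gym
>>> compiler_gym.__version__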

Building from Source

If you prefer, you may build from source. This requires a modern C++ toolchain and bazel.

macOS

On macOS the required dependencies can be installed using Homebrew:

brew install bazelisk zlib
export LDFLAGS="-L/usr/local/opt/zlib/lib"
export CPPFLAGS="-I/usr/local/opt/zlib/include"
export PKG_CONFIG_PATH="/usr/local/opt/zlib/lib/pkgconfig"

Now proceed to All platforms below.

Linux

On Debian-based Linux systems, install the required toolchain using:

sudo apt install clang libtinfo5 libjpeg-dev patchelf
wget https://github.com/bazelbuild/bazelisk/releases/download/v1.7.5/bazelisk-linux-amd64 -O bazel
chmod +x bazel && mkdir -p ~/.local/bin && mv -v bazel ~/.local/bin
export PATH="$HOME/.local/bin:$PATH"
export CC=clang
export CXX=clang++

All platforms

We recommend using conda to manage the remaining build dependencies. First create a conda environment with the required dependencies:

conda create -n compiler_gym python=3.9 cmake pandoc
conda activate compiler_gym

Then clone the CompilerGym source code using:

git clone https://github.com/facebookresearch/CompilerGym.git
cd CompilerGym

Install the python development dependencies using:

make init

The make init target only needs to be run once on initial setup, or when upgrading to a different CompilerGym release.

Run the test suite to confirm that everything is working:

make test

To build and install the compiler_gym python package, run:

make install

NOTE: To use the compiler_gym package that is installed by make install, you must leave the root directory of this repository. Attempting to import compiler_gym while in the root of this repository will cause import errors.

When you are finished, you can deactivate and delete the conda environment using:

conda deactivate
conda env remove -n compiler_gym

Trying it out

In Python, import compiler_gym to use the environments:

>>> import gym
>>> import compiler_gym                     # imports the CompilerGym environments
>>> env = gym.make("llvm-autophase-ic-v0")  # starts a new environment
>>> env.require_dataset("npb-v0")           # downloads a set of programs
>>> env.reset()                             # starts a new compilation session with a random program
>>> env.render()                            # prints the IR of the program
>>> env.step(env.action_space.sample())     # applies a random optimization, updates state/reward/actions
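
For a slightly longer example, the following standalone script sketch runs a short episode of random actions and accumulates the reward. It assumes the standard gym step() return tuple of (observation, reward, done, info); the environment and dataset names are the same as above.

import gym
import compiler_gym  # imports the CompilerGym environments

env = gym.make("llvm-autophase-ic-v0")  # starts a new environment
env.require_dataset("npb-v0")           # downloads a set of programs
env.reset()                             # starts a compilation session with a random program

episode_reward = 0
for _ in range(100):
    # Apply a randomly selected optimization and record the reward.
    observation, reward, done, info = env.step(env.action_space.sample())
    episode_reward += reward
    if done:
        break

print(f"Episode reward: {episode_reward:.3f}")
env.close()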

See the documentation website for tutorials, further details, and API reference.

Leaderboards

These leaderboards track the performance of user-submitted algorithms for CompilerGym tasks. To submit a result please see this document.

LLVM Instruction Count

LLVM is a popular open source compiler used widely in industry and research. The llvm-ic-v0 environment exposes LLVM's optimizing passes as a set of actions that can be applied to a particular program. The goal of the agent is to select the sequence of optimizations that leads to the greatest reduction in instruction count in the program being compiled. Reward is the reduction in instruction count, scaled by the reduction achieved by LLVM's built-in -Oz pipeline.
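
Concretely, one way to write this scaling (a sketch of the normalization; the environment documentation gives the authoritative definition) is:

$$
r_t = \frac{C_{t-1} - C_t}{C_0 - C_{\mathrm{Oz}}}
$$

where C_t is the IR instruction count after step t, C_0 is the instruction count of the unoptimized program, and C_Oz is the instruction count after LLVM's -Oz pipeline. Under this definition, a cumulative episode reward of 1.0 means the agent matched -Oz.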

This leaderboard tracks the results achieved by algorithms on the llvm-ic-v0 environment on the 23 benchmarks in the cBench-v1 dataset.

Author   | Algorithm               | Links             | Date    | Walltime (mean) | Codesize Reduction (geomean)
---------|-------------------------|-------------------|---------|-----------------|-----------------------------
Facebook | Random search (t=10800) | write-up, results | 2021-03 | 10,512.356s     | 1.062×
Facebook | Random search (t=3600)  | write-up, results | 2021-03 | 3,630.821s      | 1.061×
Facebook | Greedy search           | write-up, results | 2021-03 | 169.237s        | 1.055×
Facebook | Random search (t=60)    | write-up, results | 2021-03 | 91.215s         | 1.045×
Facebook | e-Greedy search (e=0.1) | write-up, results | 2021-03 | 152.579s        | 1.041×
Facebook | Random search (t=10)    | write-up, results | 2021-03 | 42.939s         | 1.031×

Contributing

We welcome contributions to CompilerGym. If you are interested in contributing please see this document.

Citation

If you use CompilerGym in any of your work, please cite:

@Misc{CompilerGym,
  author = {Cummins, Chris and Leather, Hugh and Steiner, Benoit and He, Horace and Chintala, Soumith},
  title = {{CompilerGym}: A Reinforcement Learning Toolkit for Compilers},
  howpublished = {\url{https://github.com/facebookresearch/CompilerGym/}},
  year = {2020}
}
