This repository is the home of the core Python rules -- `py_library`, `py_binary`, `py_test`, and related symbols that provide the basis for Python support in Bazel. It also contains packaging rules for integrating with PyPI (`pip`). Documentation lives in the `docs/` directory and in the Bazel Build Encyclopedia.
Currently the core rules are bundled with Bazel itself, and the symbols in this
repository are simple aliases. However, in the future the rules will be
migrated to Starlark and debundled from Bazel. Therefore, the future-proof way
to depend on Python rules is via this repository. See Migrating from the Bundled Rules
below.
The core rules are stable. Their implementation in Bazel is subject to Bazel's backward compatibility policy. Once they are fully migrated to rules_python, they may evolve at a different rate, but this repository will still follow semantic versioning.
The packaging rules (`pip_install`, etc.) are less stable. We may make breaking changes as they evolve.
This repository is maintained by the Bazel community. Neither Google nor the Bazel team provides support for the code. However, this repository is part of the test suite used to vet new Bazel releases. See the How to contribute page for information on our development workflow.
To import rules_python in your project, you first need to add it to your `WORKSPACE` file, using the snippet provided in the release you choose.
To depend on a particular unreleased version, you can do:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
rules_python_version = "740825b7f74930c62f44af95c9a4c1bd428d2c53" # Latest @ 2021-06-23
http_archive(
name = "rules_python",
sha256 = "3474c5815da4cb003ff22811a36a11894927eda1c2e64bf2dac63e914bfdf30f",
strip_prefix = "rules_python-{}".format(rules_python_version),
url = "https://github.com/bazelbuild/rules_python/archive/{}.zip".format(rules_python_version),
)
To register a hermetic Python toolchain rather than rely on a system-installed interpreter for runtime execution, you can add to the `WORKSPACE` file:
load("@rules_python//python:repositories.bzl", "python_register_toolchains")
python_register_toolchains(
name = "python3_9",
# Available versions are listed in @rules_python//python:versions.bzl.
# We recommend using the same version your team is already standardized on.
python_version = "3.9",
)
load("@python3_9//:defs.bzl", "interpreter")
load("@rules_python//python:pip.bzl", "pip_parse")
pip_parse(
...
python_interpreter_target = interpreter,
...
)
After registration, your Python targets will use the toolchain's interpreter during execution, but a system-installed interpreter is still used to 'bootstrap' Python targets (see bazelbuild#691). You may also run into some quirks with this toolchain; refer to the Quirks section of the python-build-standalone documentation for details.
Python toolchains can be used in other Bazel rules, such as `genrule()`, by adding the `toolchains = ["@rules_python//python:current_py_toolchain"]` attribute. The path to the Python interpreter can then be obtained with the `$(PYTHON2)` and `$(PYTHON3)` "Make" variables. See the `test_current_py_toolchain` target for an example.
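As a minimal sketch, a `genrule()` that invokes the toolchain's Python 3 interpreter via the `$(PYTHON3)` variable (the target name, output, and command below are illustrative, not part of rules_python):

```python
# Sketch: run the registered Python 3 interpreter from a genrule via the
# $(PYTHON3) "Make" variable exposed by current_py_toolchain.
genrule(
    name = "print_python_version",  # hypothetical target name
    outs = ["python_version.txt"],
    cmd = "$(PYTHON3) -c 'import sys; print(sys.version)' > $@",
    toolchains = ["@rules_python//python:current_py_toolchain"],
)
```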
Once you've imported the rule set into your `WORKSPACE` using any of these methods, you can then load the core rules in your `BUILD` files with:
load("@rules_python//python:defs.bzl", "py_binary")
py_binary(
name = "main",
srcs = ["main.py"],
)
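`py_library` and `py_test` are loaded the same way. For example, a minimal test target (the file name here is illustrative):

```python
load("@rules_python//python:defs.bzl", "py_test")

py_test(
    name = "main_test",
    srcs = ["main_test.py"],
)
```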
Usage of the packaging rules involves two main steps: installing third-party packages in your `WORKSPACE`, and consuming them as dependencies of Bazel targets in your `BUILD` files.
The packaging rules create two kinds of repositories: a central external repo that holds downloaded wheel files, and individual external repos for each wheel's extracted contents. Users only need to interact with the central external repo; the wheel repos are essentially an implementation detail. The central external repo provides a `WORKSPACE` macro to create the wheel repos, as well as a function, `requirement()`, for use in `BUILD` files that translates a pip package name into the label of a `py_library` target in the appropriate wheel repo.
To add pip dependencies to your `WORKSPACE`, load the `pip_install` function and call it to create the central external repo and individual wheel external repos.
load("@rules_python//python:pip.bzl", "pip_install")
# Create a central external repo, @my_deps, that contains Bazel targets for all the
# third-party packages specified in the requirements.txt file.
pip_install(
name = "my_deps",
requirements = "//path/to:requirements.txt",
)
Note that since `pip_install` is a repository rule and therefore executes pip at WORKSPACE-evaluation time, Bazel has no information about the Python toolchain and cannot enforce that the interpreter used to invoke pip matches the interpreter used to run `py_binary` targets. By default, `pip_install` uses the system command `"python3"`. This can be overridden by passing either the `python_interpreter` or `python_interpreter_target` attribute to `pip_install`.
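For example, a sketch that points `pip_install` at the hermetic toolchain registered earlier (this assumes the `@python3_9` repo from the toolchain example above):

```python
load("@python3_9//:defs.bzl", "interpreter")
load("@rules_python//python:pip.bzl", "pip_install")

pip_install(
    name = "my_deps",
    requirements = "//path/to:requirements.txt",
    # Run pip with the hermetic interpreter instead of the system "python3".
    python_interpreter_target = interpreter,
)
```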
You can have multiple `pip_install`s in the same workspace. This will create multiple external repos that have no relation to one another and may result in downloading the same wheels multiple times.
As with any repository rule, if you would like to ensure that `pip_install` is re-executed in order to pick up a non-hermetic change to your environment (e.g., updating your system `python` interpreter), you can force it to re-execute by running `bazel sync --only [pip_install name]`.
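For the `@my_deps` example above, that would be:

```shell
bazel sync --only my_deps
```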
One pain point with `pip_install` is the need to download all dependencies resolved by your requirements.txt before the Bazel analysis phase can start. For large Python monorepos this can take a long time, especially on slow connections.
`pip_parse` provides a solution to this problem. If you can provide a lock file of all your Python dependencies, `pip_parse` will translate each requirement into its own external repository. Bazel will only fetch/build wheels for the requirements in the subgraph of your build target.
There are API differences between `pip_parse` and `pip_install`:

1. `pip_parse` requires a fully resolved lock file of your Python dependencies. You can generate this by using the `compile_pip_requirements` rule (a sketch appears after the example below), running `pip-compile` directly, or using virtualenv and `pip freeze`. `pip_parse` uses a label argument called `requirements_lock` instead of `requirements` to make this distinction clear.
2. `pip_parse` translates your requirements into a starlark macro called `install_deps`. You must call this macro in your `WORKSPACE` to declare your dependencies.
load("@rules_python//python:pip.bzl", "pip_parse")
# Create a central repo that knows about the dependencies needed from
# requirements_lock.txt.
pip_parse(
name = "my_deps",
requirements_lock = "//path/to:requirements_lock.txt",
)
# Load the starlark macro which will define your dependencies.
load("@my_deps//:requirements.bzl", "install_deps")
# Call it to define repos for your requirements.
install_deps()
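As a sketch of generating the lock file with the `compile_pip_requirements` rule mentioned above, placed in a `BUILD` file (the file names are illustrative, and the exact attributes may differ between rules_python versions):

```python
load("@rules_python//python:pip.bzl", "compile_pip_requirements")

# Defines a :requirements.update target; running it with `bazel run`
# should regenerate requirements_lock.txt from requirements.in.
compile_pip_requirements(
    name = "requirements",
    requirements_in = "requirements.in",
    requirements_txt = "requirements_lock.txt",
)
```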
Each extracted wheel repo contains a `py_library` target representing the wheel's contents. There are two ways to access this library. The first is using the `requirement()` function defined in the central repo's `//:requirements.bzl` file. This function maps a pip package name to a label:
load("@my_deps//:requirements.bzl", "requirement")
py_library(
name = "mylib",
srcs = ["mylib.py"],
deps = [
":myotherlib",
requirement("some_pip_dep"),
requirement("another_pip_dep"),
]
)
The reason `requirement()` exists is that the pattern for the labels, while not expected to change frequently, is not guaranteed to be stable. Using `requirement()` ensures that you do not have to refactor your `BUILD` files if the pattern changes.
On the other hand, using `requirement()` has several drawbacks; see this issue for an enumeration. If you don't want to use `requirement()`, you can instead use the library labels directly. For `pip_parse` the labels are of the form:

```
@{name}_{package}//:pkg
```
Here `name` is the `name` attribute that was passed to `pip_parse` and `package` is the pip package name with characters that are illegal in Bazel label names (e.g. `-`, `.`) replaced with `_`. If you need to update `name` from "old" to "new", then you can run the following buildozer command:

```shell
buildozer 'substitute deps @old_([^/]+)//:pkg @new_${1}//:pkg' //...:*
```
For `pip_install` the labels are instead of the form:

```
@{name}//pypi__{package}
```
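As a sketch of using the labels directly with the `@my_deps` repo from the examples above (the pip package name here is illustrative):

```python
load("@rules_python//python:defs.bzl", "py_library")

py_library(
    name = "mylib",
    srcs = ["mylib.py"],
    deps = [
        # pip_parse-style label: dashes and dots in the pip name become underscores.
        "@my_deps_some_pip_dep//:pkg",
        # The equivalent pip_install-style label would be
        # "@my_deps//pypi__some_pip_dep".
    ],
)
```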
Any 'extras' specified in the requirements lock file are automatically added as transitive dependencies of the package, so you only reference the base package in your `BUILD` file, e.g. `requirement("useful_dep")`.
If you need to depend on the wheel dists themselves, for instance to pass them to some other packaging tool, you can get a handle to them with the `whl_requirement` macro. For example:

```python
# whl_requirement is defined alongside requirement() in the central repo's
# requirements.bzl file.
load("@my_deps//:requirements.bzl", "whl_requirement")

filegroup(
    name = "whl_files",
    data = [
        whl_requirement("boto3"),
    ],
)
```
The core rules are currently available in Bazel as built-in symbols, but this form is deprecated. Instead, you should depend on rules_python in your `WORKSPACE` file and load the Python rules from `@rules_python//python:defs.bzl`.
A buildifier fix is available to automatically migrate `BUILD` and `.bzl` files to add the appropriate `load()` statements and rewrite uses of `native.py_*`.
```shell
# Also consider using the -r flag to modify an entire workspace.
buildifier --lint=fix --warnings=native-py <files>
```
Currently the `WORKSPACE` file needs to be updated manually as per Getting started above.
Note that Starlark-defined bundled symbols underneath `@bazel_tools//tools/python` are also deprecated. These are not yet rewritten by buildifier.