From 8fe82a386e215150835fdd657dc03e23277f9b1c Mon Sep 17 00:00:00 2001
From: David Chen
+ To run Bazel, go to
+
+ your base workspace directory
+ or any of its subdirectories and type
+ The
+ The Bazel system is implemented as a long-lived server process.
+ This allows it to perform many optimizations not possible with a
+ batch-oriented implementation, such as caching of BUILD files,
+ dependency graphs, and other metadata from one build to the
+ next. This improves the speed of incremental builds, and allows
+ different commands, such as
+ When you run
+ For the most part, the fact that there is a server running is
+ invisible to the user, but sometimes it helps to bear this in mind.
+ For example, if you're running scripts that perform a lot of
+ automated builds in different directories, it's important to ensure
+ that you don't accumulate a lot of idle servers; you can do this by
+ explicitly shutting them down when you're finished with them, or by
+ specifying a short timeout period.
+
+ The name of a Bazel server process appears in the output of
+ This makes it easier to find out which server process belongs to a
+ given workspace. (Beware that with certain other options
+ to
+ You can also run Bazel in batch mode using the
+ When running
+ Bazel accepts many options. Typically, some of these are varied
+ frequently (e.g.
+ Bazel looks for an optional configuration file in the location
+ specified by the
+ The
+ The option
+ Aside from the configuration file described above, Bazel also looks
+ for a master configuration file next to the binary, in the workspace
+ at
+ Like all UNIX "rc" files, the
+ Startup options may be specified in the
+
+ Options specified in the command line always take precedence over
+ those from a configuration file. In configuration files, lines for a more specific command take
+ precedence over lines for a less specific command (e.g. the 'test' command inherits all the
+ options from the 'build' command, so a 'test --foo=bar' line takes precedence over a
+ 'build --foo=baz' line, regardless of which configuration files these two lines are in), and lines
+ that are equally specific for the command they apply to take precedence based on the configuration
+ file they are in, with the user-specific configuration file taking precedence over the master one.
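To illustrate these precedence rules, consider a hypothetical pair of configuration files (the flag `--foo` and its values are invented for the example):

```
# master bazelrc (e.g. tools/bazel.rc) -- hypothetical contents
build --foo=baz

# user ~/.bazelrc -- hypothetical contents
test --foo=bar
```

Running `bazel test` here yields `--foo=bar`: the 'test' line is more specific than the 'build' line, so it wins even though the two lines live in different files. If both files contained equally specific 'build' lines, the user file would win.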
+
+ Options may include words other than flags, such as the names of
+ build targets, and so on; these are always prepended to the explicit
+ argument list provided on the command-line, if any.
+
+ Common command options may be specified in the
+
+ In addition, commands may have
+ Note that some config sections are defined in the master bazelrc file.
+ To avoid conflicts, user-defined sections
+ should start with the '_' (underscore) character.
+
+ The command named
+ Here's an example
+ The most important function of Bazel is, of course, building code. Type
+
+ Bazel prints progress messages as it loads all the
+ packages in the transitive closure of dependencies of the requested
+ target, then analyzes them for correctness and creates the build actions,
+ and finally executes the compilers and other tools of the build.
+
+ Bazel prints progress messages during
+ the execution phase of the build, showing the
+ current build step (compiler, linker, etc.) that is being started,
+ and the number of completed actions out of the total number of build
+ actions. As the build starts, the total number of actions will often
+ increase as Bazel discovers the entire action graph, but the number
+ will usually stabilize within a few seconds.
+
+ At the end of the build Bazel
+ prints which targets were requested, whether or not they were
+ successfully built, and if so, where the output files can be found.
+ Scripts that run builds can reliably parse this output; see
+ Typing the same command again:
+
+ we see a "null" build: in this case, there are no packages to
+ re-load, since nothing has changed, and no build steps to execute.
+ (If something had changed in "foo" or some of its dependencies, resulting in the
+ reexecution of some build actions, we would call it an "incremental" build, not a
+ "null" build.)
+
+ Before you can start a build, you will need a Bazel workspace. This is
+ simply a directory tree that contains all the source files needed to build
+ your application.
+ Bazel allows you to perform a build from a completely read-only volume.
+
+ Bazel finds its packages by searching the package path. This is a
+ colon-separated, ordered list of bazel directories, each being the
+ root of a partial source tree.
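As a rough sketch of how such a colon-separated list decomposes into ordered roots (the roots shown are invented; Bazel parses the package path internally), assuming a bash shell:

```shell
# Split a hypothetical package path into its ordered roots, the same
# way a PATH-style colon-separated list is split.
PKG_PATH='%workspace%:/some/other/root'
IFS=':' read -ra ROOTS <<< "$PKG_PATH"
for root in "${ROOTS[@]}"; do
  echo "$root"
done
```

Earlier roots are searched first, which is what makes the list ordered.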
+
+ To specify a custom package path using the
+
+Package path elements may be specified in three formats:
+
+ If you use a non-default package path, we recommend that you specify
+ it in your Bazel configuration file for
+ convenience.
+
+ Bazel doesn't require any packages to be in the
+ current directory, so you can do a build from an empty bazel
+ workspace if all the necessary packages can be found somewhere else
+ on the package path.
+
+ Example: Building from an empty client
+
+ Bazel allows a number of ways to specify the targets to be built.
+ Collectively, these are known as target patterns.
+ The on-line help displays a summary of supported patterns:
+
+ Whereas labels are used
+ to specify individual targets, e.g. for declaring dependencies in
+ BUILD files, Bazel's target patterns are a syntax for specifying
+ multiple targets: they are a generalization of the label syntax
+ for sets of targets, using wildcards. In the simplest case,
+ any valid label is also a valid target pattern, identifying a set of
+ exactly one target.
+
+
+ In addition,
+ This implies that
+ In addition, Bazel allows a slash to be used instead of the colon
+ required by the label syntax; this is often convenient when using
+ Bash filename expansion. For example,
+ Many Bazel commands accept a list of target patterns as arguments,
+ and they all honor the prefix negation operator `
+ means "build all
+ targets beneath
+ means "build all targets beneath
+ Note, however, that subtracting targets this way does not guarantee
+ that they will not be built, since they may be dependencies of targets
+ that were not subtracted. For example, if there were a target
+
+ Targets with
+ By default, Bazel will download and symlink external dependencies during the
+ build. However, this can be undesirable, either because you'd like to know
+ when new external dependencies are added or because you'd like to
+ "prefetch" dependencies (say, before a flight where you'll be offline). If you
+ would like to prevent new dependencies from being added during builds, you
+ can specify the
+ If you disallow fetching during builds and Bazel finds new external
+ dependencies, your build will fail.
+
+ You can manually fetch dependencies by running A User's Guide to Bazel
+
+Bazel overview
+
+bazel
.
+
+ % bazel help
+ [Bazel release bazel-<version>]
+ Usage: bazel <command> <options> ...
+
+ Available commands:
+ analyze-profile Analyzes build profile data.
+ build Builds the specified targets.
+
+ canonicalize-flags Canonicalize Bazel flags.
+ clean Removes output files and optionally stops the server.
+
+ help Prints help for commands, or the index.
+
+ info Displays runtime info about the bazel server.
+
+ fetch Fetches all external dependencies of a target.
+ mobile-install Installs apps on mobile devices.
+
+ query Executes a dependency graph query.
+
+ run Runs the specified target.
+ shutdown Stops the Bazel server.
+ test Builds and runs the specified test targets.
+ version Prints version information for Bazel.
+
+ Getting more help:
+ bazel help <command>
+ Prints help and options for <command>.
+ bazel help startup_options
+ Options for the JVM hosting Bazel.
+ bazel help target-syntax
+ Explains the syntax for specifying targets.
+ bazel help info-keys
+ Displays a list of keys used by the info command.
+
+
+bazel
tool performs many functions, called
+ commands; users of CVS and Subversion will be familiar
+ with this "Swiss army knife" arrangement. The most commonly used one is of
+ course bazel build
. You can browse the online help
+ messages using bazel help
.
+Client/server implementation
+
+build
+ and query
to share the same cache of loaded packages,
+ making queries very fast.
+bazel
, you're running the client. The
+ client finds the server based on the path of the base workspace directory
+ and your userid, so if you build in multiple workspaces, you'll have
+ multiple Bazel server processes. Multiple users on the same
+ workstation can build concurrently in the same workspace. If the
+ client cannot find a running server instance, it starts a new one.
+ The server process will stop after a period of inactivity (3 hours,
+ by default).
+ps
+ x
or ps -e f
as
+ bazel(dirname)
, where dirname is the
+ basename of the directory enclosing the root of your workspace directory.
+ For example:
+
+ % ps -e f
+ 16143 ? Sl 3:00 bazel(src-jrluser2) -server -Djava.library.path=...
+
+ps
, Bazel server processes may be named just
+ java
.) Bazel servers can be stopped using
+ the shutdown command.
+--batch
+ startup flag. This will immediately shut down the process after the
+ command (build, test, etc.) has finished and not keep a server process
+ around.
+bazel
, the client first checks that the
+ server is the appropriate version; if not, the server is stopped and
+ a new one started. This ensures that the use of a long-running
+ server process doesn't interfere with proper versioning.
+
+
+.bazelrc
, the Bazel configuration file,
+the --bazelrc=file
option, and the
+--config=value
option--subcommands
) while others stay the
+ same across several builds (e.g. --package_path
).
+ To avoid having to specify these constant options every time you do
+ a build or run some other Bazel command, Bazel allows you to
+ specify options in a configuration file.
+--bazelrc=file
option. If
+ this option is not specified then, by default, Bazel looks for the
+ file called .bazelrc
in one of two directories: first,
+ in your base workspace directory, then in your home directory. If
+ it finds a file in the first (workspace-specific) location, it will
+ not look at the second (global) location.
+--bazelrc=file
option must
+ appear before the command name (e.g. build
).
+--bazelrc=/dev/null
effectively disables the
+ use of a configuration file. We strongly recommend that you use
+ this option when performing release builds, or automated tests that
+ invoke Bazel.
+tools/bazel.rc
or system-wide at
+ /etc/bazel.bazelrc
. These files are here to support
+ installation-wide options or options shared between users.
+.bazelrc
file is a text
+ file with a line-based grammar. Lines starting with #
are
+ considered comments and are ignored, as are blank lines. Each line
+ contains a sequence of words, which are tokenized according to the
+ same rules as the Bourne shell.
+ The first word on each line is the name of a Bazel command, such
+ as build
or query
. The remaining words
+ are the default options that apply to that command.
+ More than one line may be used for a command; the options are combined
+ as if they had appeared on a single line.
+ (Users of CVS, another tool with a "Swiss army knife" command-line
+ interface, will find the syntax similar to that of .cvsrc
.)
+.bazelrc
file using the command startup
.
+ These options are described in the interactive help
+ at bazel help startup_options
.
+.bazelrc
file using the command common
.
+:name
suffixes. These
+ options are ignored by default, but can be pulled in through the
+ --config=name
option, either on the command line or in
+ a .bazelrc
file. The intention is that these bundle command line
+ options that are commonly used together, for example
+ --config=memcheck
.
+import
is special: if Bazel encounters such
+ a line in a .bazelrc
file, it parses the contents of the file
+ referenced by the import statement, too. Options specified in an imported file
+ take precedence over ones specified before the import statement, options
+ specified after the import statement take precedence over the ones in the
+ imported file, and options in files imported later take precedence over files
+ imported earlier.
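To illustrate the import precedence just described, here is a hypothetical file layout (the flag values are invented):

```
# ~/.bazelrc -- hypothetical
build --jobs=100          # overridden by the imported file below
import /home/bobs_project/bazelrc
build --color=yes         # overrides anything in the imported file

# /home/bobs_project/bazelrc -- hypothetical
build --jobs=600
```

The effective 'build' options are `--jobs=600 --color=yes`: the imported file beats the line above the import statement, and the line below the import statement beats the imported file.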
+~/.bazelrc
file:
+
+ # Bob's Bazel option defaults
+
+ startup --batch --host_jvm_args=-XX:-UseParallelGC
+ import /home/bobs_project/bazelrc
+ build --show_timestamps --keep_going --jobs 600
+ build --color=yes
+ query --keep_going
+
+ build:memcheck --strip=never --test_timeout=3600
+
+
+Building programs with Bazel
+The
+
+build
commandbazel build
followed by the name of the
+ target you wish to build. Here's a typical
+ session:
+
+ % bazel build //foo
+ ____Loading package: foo
+ ____Loading package: bar
+ ____Loading package: baz
+ ____Loading complete. Analyzing...
+ ____Building 1 target...
+ ____[0 / 3] Executing Genrule //bar:helper_rule
+ ____[1 / 3] Executing Genrule //baz:another_helper_rule
+ ____[2 / 3] Building foo/foo.bin
+ Target //foo:foo up-to-date:
+ bazel-bin/foo/foo.bin
+ bazel-bin/foo/foo
+ ____Elapsed time: 9.905s
+
+--show_result
for more
+ details.
+
+ % bazel build //foo
+ ____Loading...
+ ____Found 1 target...
+ ____Building complete.
+ Target //foo:foo up-to-date:
+ bazel-bin/foo/foo.bin
+ bazel-bin/foo/foo
+ ____Elapsed time: 0.280s
+
+Setting up a
+--package_path
--package_path
option:
+
+ % bazel build --package_path %workspace%:/some/other/root
+
+
+
+/
, the path is absolute.
+ %workspace%
, the path is taken relative
+ to the nearest enclosing bazel directory.
+ For instance, if your working directory
+ is /home/bob/clients/bob_client/bazel/foo
, then the
+ string %workspace%
in the package-path is expanded
+ to /home/bob/clients/bob_client/bazel
.
+
This is usually not what you mean to do,
+ and may behave unexpectedly if you use Bazel from directories below the bazel workspace.
+ For instance, if you use the package-path element .
,
+ and then cd into the directory
+ /home/bob/clients/bob_client/bazel/foo
, packages
+ will be resolved from the
+ /home/bob/clients/bob_client/bazel/foo
directory.
+
+ % mkdir -p foo/bazel
+ % cd foo/bazel
+ % bazel build --package_path /some/other/path //foo
+
+Specifying targets to build
+
+% bazel help target-syntax
+
+Target pattern syntax
+=====================
+
+The BUILD file label syntax is used to specify a single target. Target
+patterns generalize this syntax to sets of targets, and also support
+working-directory-relative forms, recursion, subtraction and filtering.
+Examples:
+
+Specifying a single target:
+
+ //foo/bar:wiz The single target '//foo/bar:wiz'.
+ foo/bar/wiz Equivalent to:
+ '//foo/bar/wiz:wiz' if foo/bar/wiz is a package,
+ '//foo/bar:wiz' if foo/bar is a package,
+ '//foo:bar/wiz' otherwise.
+ //foo/bar Equivalent to '//foo/bar:bar'.
+
+Specifying all rules in a package:
+
+ //foo/bar:all Matches all rules in package 'foo/bar'.
+
+Specifying all rules recursively beneath a package:
+
+ //foo/...:all Matches all rules in all packages beneath directory 'foo'.
+ //foo/... (ditto)
+
+ By default, directory symlinks are followed when performing this recursive traversal, except
+ those that point to a location under the output base (for example, the convenience symlinks that
+ are created in the root directory of the workspace). But we understand that your workspace may intentionally
+ contain directories with unusual symlink structures that you don't want consumed. As such, if a
+ directory has a file named
+ 'DONT_FOLLOW_SYMLINKS_WHEN_TRAVERSING_THIS_DIRECTORY_VIA_A_RECURSIVE_TARGET_PATTERN' then symlinks
+ in that directory won't be followed when evaluating recursive target patterns.
+
+Working-directory relative forms: (assume cwd = 'workspace/foo')
+
+ Target patterns which do not begin with '//' are taken relative to
+ the working directory. Patterns which begin with '//' are always
+ absolute.
+
+ ...:all Equivalent to '//foo/...:all'.
+ ... (ditto)
+
+ bar/...:all Equivalent to '//foo/bar/...:all'.
+ bar/... (ditto)
+
+ bar:wiz Equivalent to '//foo/bar:wiz'.
+ :foo Equivalent to '//foo:foo'.
+
+ bar Equivalent to '//foo/bar:bar'.
+ foo/bar Equivalent to '//foo/foo/bar:bar'.
+
+ bar:all Equivalent to '//foo/bar:all'.
+ :all Equivalent to '//foo:all'.
+
+Summary of target wildcards:
+
+ :all, Match all rules in the specified packages.
+ :*, :all-targets Match all targets (rules and files) in the specified
+ packages, including ones not built by default, such
+ as _deploy.jar files.
+
+Subtractive patterns:
+
+ Target patterns may be preceded by '-', meaning they should be
+ subtracted from the set of targets accumulated by preceding
+ patterns. (Note that this means order matters.) For example:
+
+ % bazel build -- foo/... -foo/contrib/...
+
+ builds everything in 'foo', except 'contrib'. Note that if a target outside
+ 'contrib' depends on something under 'contrib', then in order to
+ build the former, bazel has to build the latter too. As usual, the '--' is
+ required to prevent '-f' from being interpreted as an option.
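The symlink opt-out described in the help text above is just a marker file; creating it in a directory is enough. A sketch, assuming a bash shell (the directory name is invented):

```shell
# Create the marker file that stops recursive target patterns
# (like //foo/...) from following symlinks inside this directory.
mkdir -p third_party/unusual_symlinks
touch third_party/unusual_symlinks/DONT_FOLLOW_SYMLINKS_WHEN_TRAVERSING_THIS_DIRECTORY_VIA_A_RECURSIVE_TARGET_PATTERN
```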
+
+foo/...
is a wildcard over packages,
+ indicating all packages recursively beneath
+ directory foo
(for all roots of the package
+ path). :all
is a wildcard
+ over targets, matching all rules within a package. These two may be
+ combined, as in foo/...:all
, and when both wildcards
+ are used, this may be abbreviated to foo/...
.
+:*
(or :all-targets
) is a
+ wildcard that matches every target in the matched packages,
+ including files that aren't normally built by any rule, such
+ as _deploy.jar
files associated
+ with java_binary
rules.
+:*
denotes a superset
+ of :all
; while potentially confusing, this syntax does
+ allow the familiar :all
wildcard to be used for
+ typical builds, where building targets like the _deploy.jar
+ is not desired.
+foo/bar/wiz
is
+ equivalent to //foo/bar:wiz
(if there is a
+ package foo/bar
) or to //foo:bar/wiz
(if
+ there is a package foo
).
+-
'.
+ This can be used to subtract a set of targets from the set specified
+ by the preceding arguments. (Note that this means order matters.)
+ For example,
+
+ bazel build foo/... bar/...
+
+foo
and all targets
+ beneath bar
", whereas
+
+ bazel build -- foo/... -foo/bar/...
+
+foo
except
+ those beneath foo/bar
".
+
+ (The --
argument is required to prevent the subsequent
+ arguments starting with -
from being interpreted as
+ additional options.)
+//foo:all-apis
that among others depended on
+ //foo/bar:api
, then the latter would be built as part of
+ building the former.
+tags=["manual"]
will not be included in wildcard target patterns (...,
+ :*, :all, etc.). You must specify such targets with explicit target patterns on the command
+ line if you want Bazel to build or test them.
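For example, a test that should run only when named explicitly might be declared like this (the rule name and source file are invented):

```
# BUILD -- hypothetical target excluded from wildcard patterns
sh_test(
    name = "slow_integration_test",
    srcs = ["slow_integration_test.sh"],
    tags = ["manual"],  # skipped by //..., :all and :*
)
```

With this tag, `bazel test //...` skips the target, while naming it explicitly, as in `bazel test //foo:slow_integration_test`, still runs it.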
+Fetching external dependencies
+
+--fetch=false
flag. Note that this flag only
+ applies to repository rules that do not point to a directory in the local
+ file system. Changes, for example, to local_repository
,
+ new_local_repository
and Android SDK and NDK repository rules
+ will always take effect regardless of the value of --fetch
.
+bazel fetch
. If
+ you disallow during-build fetching, you'll need to run bazel
+ fetch
:
+
+
+ Once it has been run, you should not need to run it again until the WORKSPACE
+ file changes.
+
+ fetch
takes a list of targets to fetch dependencies for. For
+ example, this would fetch dependencies needed to build //foo:bar
+ and //bar:baz
:
+
+ $ bazel fetch //foo:bar //bar:baz
+
+ To fetch all external dependencies for a workspace, run:
+
+ $ bazel fetch //...
+
+ You do not need to run bazel fetch at all if you have all of the tools you are
+ using (from library jars to the JDK itself) under your workspace root.
+ However, if you're using anything outside of the workspace directory then you
+ will need to run bazel fetch
before running
+ bazel build
.
+
+ All the inputs that specify the behavior and result of a given
+ build can be divided into two distinct categories.
+ The first kind is the intrinsic information stored in the BUILD
+ files of your project: the build rule, the values of its attributes,
+ and the complete set of its transitive dependencies.
+ The second kind is the external or environmental data, supplied by
+ the user or by the build tool: the choice of target architecture,
+ compilation and linking options, and other toolchain configuration
+ options. We refer to a complete set of environmental data as
+ a configuration.
+
+ In any given build, there may be more than one configuration.
+ Consider a cross-compile, in which you build
+ a //foo:bin
executable for a 64-bit architecture,
+ but your workstation is a 32-bit machine. Clearly, the build
+ will require building //foo:bin
using a toolchain
+ capable of creating 64-bit executables, but the build system must
+ also build various tools used during the build itself—for example
+ tools that are built from source, then subsequently used in, say, a
+ genrule—and these must be built to run on your workstation.
+ Thus we can identify two configurations: the host
+ configuration, which is used for building tools that run during
+ the build, and the target configuration (or request
+ configuration, but we say "target configuration" more often even
+ though that word already has many meanings), which is
+ used for building the binary you ultimately requested.
+
+ Typically, there are many libraries that are prerequisites of both
+ the requested build target (//foo:bin
) and one or more of
+ the host tools, for example some base libraries. Such libraries must be built
+ twice, once for the host configuration, and once for the target
+ configuration.
+ Bazel takes care of ensuring that both variants are built, and that
+ the derived files are kept separate to avoid interference; usually
+ such targets can be built concurrently, since they are independent
+ of each other. If you see progress messages indicating that a given
+ target is being built twice, this is most likely the explanation.
+
+ Bazel uses one of two ways to select the host configuration, based
+ on the --distinct_host_configuration
option. This
+ boolean option is somewhat subtle, and the setting may improve (or
+ worsen) the speed of your builds.
+
--distinct_host_configuration=false
+ When this option is false, the host and
+ request configurations are identical: all tools required during the
+ build will be built in exactly the same way as target programs.
+ This setting means that no libraries need to be built twice during a
+ single build, so it keeps builds short.
+ However, it does mean that any change to your request configuration
+ also affects your host configuration, causing all the tools to be
+ rebuilt, and then anything that depends on the tool output to be
+ rebuilt too. Thus, for example, simply changing a linker option
+ between builds might cause all tools to be re-linked, and then all
+ actions using them reexecuted, and so on, resulting in a very large rebuild.
+ Also, please note: if your host architecture is not capable of
+ running your target binaries, your build will not work.
+
+ If you frequently make changes to your request configuration, such
+ as alternating between -c opt
and -c dbg
+ builds, or between simple- and cross-compilation, we do not
+ recommend this option, as you will typically rebuild the majority of
+ your codebase each time you switch.
+
--distinct_host_configuration=true
(default)
+ If this option is true, then instead of using the same configuration
+ for the host and request, a completely distinct host configuration
+ is used. The host configuration is derived from the target
+ configuration as follows:
+--crosstool_top
) as specified in the request
+ configuration, unless --host_crosstool_top
is
+ specified.
+ --host_cpu
for
+ --cpu
+
+ (default: k8
).
+ --compiler
,
+ --thin_archives
,
+ --use_ijars
,
+ --java_toolchain
,
+ If --host_crosstool_top
is used, then the value of
+ --host_cpu
is used to look up a
+ default_toolchain
in the Crosstool
+ (ignoring --compiler
) for the host configuration.
+ -c opt
).
+ --copt=-g0
).
+ --strip=always
).
+ --embed_*
options).
+
+ There are many reasons why it might be preferable to select a
+ distinct host configuration from the request configuration.
+ Some are too esoteric to mention here, but two of them are worth
+ pointing out.
+
+ Firstly, by using stripped, optimized binaries, you reduce the time
+ spent linking and executing the tools, the disk space occupied by
+ the tools, and the network I/O time in distributed builds.
+
+ Secondly, by decoupling the host and request configurations in all
+ builds, you avoid very expensive rebuilds that would result from
+ minor changes to the request configuration (such as changing a
+ linker option), as described earlier.
+
+ That said, for certain builds, this option may be a hindrance. In
+ particular, builds in which changes of configuration are infrequent
+ (especially certain Java builds), and builds where the amount of code that
+ must be built in both host and target configurations is large, may
+ not benefit.
+
+ One of the primary goals of the Bazel project is to ensure correct
+ incremental rebuilds. Previous build tools, especially those based
+ on Make, make several unsound assumptions in their implementation of
+ incremental builds.
+
+ Firstly, that timestamps of files increase monotonically. While
+ this is the typical case, it is very easy to fall afoul of this
+ assumption; syncing to an earlier revision of a file causes that file's
+ modification time to decrease; Make-based systems will not rebuild.
+
+ More generally, while Make detects changes to files, it does
+ not detect changes to commands. If you alter the options passed to
+ the compiler in a given build step, Make will not re-run the
+ compiler, and it is necessary to manually discard the invalid
+ outputs of the previous build using make clean
.
+
+ Also, Make is not robust against the unsuccessful termination of one
+ of its subprocesses after that subprocess has started writing to
+ its output file. While the current execution of Make will fail, the
+ subsequent invocation of Make will blindly assume that the truncated
+ output file is valid (because it is newer than its inputs), and it
+ will not be rebuilt. Similarly, if the Make process is killed, a
+ similar situation can occur.
+
+ Bazel avoids these assumptions, and others. Bazel maintains a database
+ of all work previously done, and will only omit a build step if it
+ finds that the set of input files (and their timestamps) to that
+ build step, and the compilation command for that build step, exactly
+ match one in the database, and that the set of output files (and
+ their timestamps) for the database entry exactly match the
+ timestamps of the files on disk. Any change to the input files or
+ output files, or to the command itself, will cause re-execution of
+ the build step.
+
+ The benefit to users of correct incremental builds is: less time
+ wasted due to confusion. (Also, less time spent waiting for
+ rebuilds caused by use of make clean
, whether necessary
+ or pre-emptive.)
+
+ Formally, we define the state of a build as consistent when
+ all the expected output files exist, and their contents are correct,
+ as specified by the steps or rules required to create them. When
+ you edit a source file, the state of the build is said to
+ be inconsistent, and remains inconsistent until you next run
+ the build tool to successful completion. We describe this situation
+ as unstable inconsistency, because it is only temporary, and
+ consistency is restored by running the build tool.
+
+ There is another kind of inconsistency that is pernicious: stable
+ inconsistency. If the build reaches a stable inconsistent
+ state, then repeated successful invocation of the build tool does
+ not restore consistency: the build has gotten "stuck", and the
+ outputs remain incorrect. Stable inconsistent states are the main
+ reason why users of Make (and other build tools) type make
+ clean
. Discovering that the build tool has failed in this
+ manner (and then recovering from it) can be time consuming and very
+ frustrating.
+
+ Conceptually, the simplest way to achieve a consistent build is to
+ throw away all the previous build outputs and start again: make
+ every build a clean build. This approach is obviously too
+ time-consuming to be practical (except perhaps for release
+ engineers), and therefore to be useful, the build tool must be able
+ to perform incremental builds without compromising consistency.
+
+ Correct incremental dependency analysis is hard, and as described
+ above, many other build tools do a poor job of avoiding stable
+ inconsistent states during incremental builds. In contrast, Bazel
+ offers the following guarantee: after a successful invocation of the
+ build tool during which you made no edits, the build will be in a
+ consistent state. (If you edit your source files during a build,
+ Bazel makes no guarantee about the consistency of the result of the
+ current build. But it does guarantee that the results of
+ the next build will restore consistency.)
+
+ As with all guarantees, there comes some fine print: there are some
+ known ways of getting into a stable inconsistent state with Bazel.
+ We won't guarantee to investigate such problems arising from deliberate
+ attempts to find bugs in the incremental dependency analysis, but we
+ will investigate and do our best to fix all stable inconsistent
+ states arising from normal or "reasonable" use of the build tool.
+
+ If you ever detect a stable inconsistent state with Bazel, please report a bug.
+
+ Bazel uses sandboxes to guarantee that actions run hermetically [1] and correctly.
+ Bazel runs Spawns (loosely speaking: actions) in sandboxes that only contain the minimal
+ set of files the tool requires to do its job. Currently sandboxing works on Linux 3.12 or newer
+ with the CONFIG_USER_NS
option enabled.
+
+ Bazel will print a warning if your system does not support sandboxing to alert you to the fact
+ that builds are not guaranteed to be hermetic and might affect the host system in unknown ways.
+ To disable this warning you can pass the --ignore_unsupported_sandboxing
flag to
+ Bazel.
+
+ On some platforms such as Google Container
+ Engine cluster nodes or Debian, user namespaces are deactivated by default due to security
+ concerns. This can be checked by looking at the file
+ /proc/sys/kernel/unprivileged_userns_clone
: if it exists and contains a 0, then
+ user namespaces can be activated with sudo sysctl kernel.unprivileged_userns_clone=1
.
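The check described above can be sketched as a small bash function (the function name is invented; only the proc file path comes from the text):

```shell
# Report whether unprivileged user namespaces appear to be enabled,
# based on the proc file discussed above. The path is a parameter so
# the logic can be exercised against any file.
userns_status() {
  local f="$1"
  if [ ! -e "$f" ]; then
    echo "unknown"      # the kernel may not expose this knob at all
  elif [ "$(cat "$f")" = "1" ]; then
    echo "enabled"
  else
    echo "disabled"     # sudo sysctl kernel.unprivileged_userns_clone=1
  fi
}

userns_status /proc/sys/kernel/unprivileged_userns_clone
```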
+
+ In some cases, the Bazel sandbox fails to execute rules because of the system setup. The symptom
+ is generally a failure that outputs a message similar to
+ namespace-sandbox.c:633: execvp(argv[0], argv): No such file or directory
. In that
+ case, try to deactivate the sandbox for genrules with --genrule_strategy=standalone
+ and for other rules with --spawn_strategy=standalone
. Also please report a bug on our
+ issue tracker and mention which Linux distribution you're using so that we can investigate and
+ provide a fix in a subsequent release.
+
+ [1]: Hermeticity means that the action only uses its declared input files and no other
+ files in the filesystem, and it only produces its declared output files.
+
+The clean command
+ Bazel has a clean
command, analogous to that of Make.
+ It deletes the output directories for all build configurations performed
+ by this Bazel instance, or the entire working tree created by this
+ Bazel instance, and resets internal caches. If executed without any
+ command-line options, then the output directory for all configurations
+ will be cleaned.
+
Recall that each Bazel instance is associated with a single workspace, thus the
+ clean
command will delete all outputs from all builds you've done
+ with that Bazel instance in that workspace.
+
+ To completely remove the entire working tree created by a Bazel
+ instance, you can specify the --expunge
option. When
+ executed with --expunge
, the clean command simply
+ removes the entire output base tree which, in addition to the build
+ output, contains all temp files created by Bazel. It also
+ stops the Bazel server after the clean, equivalent to the shutdown
command. For example, to
+ clean up all disk and memory traces of a Bazel instance, you could
+ specify:
+
+ % bazel clean --expunge
+ Alternatively, you can expunge in the background by using
+ --expunge_async
. It is safe to invoke a Bazel command
+ in the same client while the asynchronous expunge continues to run.
+ Note, however, that this may introduce IO contention.
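+
+ For example, to reclaim disk space in the background while continuing to work:
+
+ % bazel clean --expunge_async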
+
+ The clean
command is provided primarily as a means of
+ reclaiming disk space for workspaces that are no longer needed.
+ However, we recognize that Bazel's incremental rebuilds might not be
+ perfect; clean
may be used to recover a consistent
+ state when problems arise.
+
+ Bazel's design is such that these problems are fixable; we consider
+ such bugs a high priority, and will do our best to fix them. If you
+ ever find an incorrect incremental build, please file a bug report.
+ We encourage developers to get out of the habit of
+ using clean
and into that of reporting bugs in the
+ tools.
+
+ In Bazel, a build occurs in three distinct phases; as a user,
+ understanding the difference between them provides insight into the
+ options which control a build (see below).
+
+ The first is loading, during which all the necessary BUILD
+ files for the initial targets, and their transitive closure of
+ dependencies, are loaded, parsed, evaluated and cached.
+
+ For the first build after a Bazel server is started, the loading
+ phase typically takes many seconds as many BUILD files are loaded
+ from the file system. In subsequent builds, especially if no BUILD
+ files have changed, loading occurs very quickly.
+
+ Errors reported during this phase include: package not found, target
+ not found, lexical and grammatical errors in a BUILD file,
+ and evaluation errors.
+
+ The second phase, analysis, involves the semantic analysis
+ and validation of each build rule, the construction of a build
+ dependency graph, and the determination of exactly what work is to
+ be done in each step of the build.
+
+ Like loading, analysis also takes several seconds when computed in
+ its entirety. However, Bazel caches the dependency graph from
+ one build to the next and only reanalyzes what it has to, which can
+ make incremental builds extremely fast in the case where the
+ packages haven't changed since the previous build.
+
+ Errors reported at this stage include: inappropriate dependencies,
+ invalid inputs to a rule, and all rule-specific error messages.
+
+ The loading and analysis phases are fast because
+ Bazel avoids unnecessary file I/O at this stage, reading only BUILD
+ files in order to determine the work to be done. This is by design,
+ and makes Bazel a good foundation for analysis tools, such as
+ Bazel's query command, which is implemented
+ atop the loading phase.
+
+ The third and final phase of the build is execution. This
+ phase ensures that the outputs of each step in the build are
+ consistent with its inputs, re-running compilation/linking/etc. tools as
+ necessary. This step is where the build spends the majority of
+ its time, ranging from a few seconds to over an hour for a large
+ build. Errors reported during this phase include: missing source
+ files, errors in a tool executed by some build action, or failure of a tool to
+ produce the expected set of outputs.
+
+ The following sections describe the options available during a
+ build. When --long
is used on a help command, the on-line
+ help messages provide summary information about the meaning, type and
+ default value for each option.
+
+ Most options can only be specified once. When specified multiple times, the
+ last instance wins. Options that can be specified multiple times are
+ identified in the on-line help with the text 'may be used multiple times'.
+ +
+ See also the --show_package_location
+ option.
+
--package_path
+ This option specifies the set of directories that are searched to
+ find the BUILD file for a given package.
+ +--deleted_packages
+ This option specifies a comma-separated list of packages which Bazel
+ should consider deleted, and not attempt to load from any directory
+ on the package path. This can be used to simulate the deletion of
+ packages without actually deleting them.
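+
+ For example, to build while treating two packages as deleted (package and
+ target names hypothetical):
+
+ % bazel build --deleted_packages=foo/old,bar/legacy //my:target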
+ ++ These options control Bazel's error-checking and/or warnings. +
+ +--check_constraint constraint
+ This option takes an argument that specifies which constraint + should be checked. +
++ Bazel performs special checks on each rule that is annotated with the + given constraint. +
++ The supported constraints and their checks are as follows: +
+public
: Verify that all java_libraries marked with
+ constraints = ['public']
only depend on java_libraries
+ that are marked as constraints = ['public']
too. If Bazel
+ finds a dependency that does not conform to this rule, Bazel will issue
+ an error.
+ --[no]check_visibility
+ If this option is set to false, visibility checks are demoted to warnings.
+ The default value of this option is true, so that by default, visibility
+ checking is done.
+ +--experimental_action_listener=label
+
+ The experimental_action_listener
option instructs Bazel to use
+ details from the action_listener
rule specified by label to
+ insert extra_actions
into the build graph.
+
--experimental_extra_action_filter=regex
+
+ The experimental_extra_action_filter
option instructs Bazel to
+ filter the set of targets to schedule extra_actions
for.
+
+ This flag is only applicable in combination with the
+ --experimental_action_listener
flag.
+
+ By default all extra_actions
in the transitive closure of the
+ requested targets-to-build get scheduled for execution.
+ --experimental_extra_action_filter
will restrict scheduling to
+ extra_actions
of which the owner's label matches the specified
+ regular expression.
+
+ The following example will limit scheduling of extra_actions
+ to only apply to actions of which the owner's label contains '/bar/':
+
% bazel build --experimental_action_listener=//test:al //foo/... \
+   --experimental_extra_action_filter=.*/bar/.*
--output_filter regex
+ The --output_filter
option will only show build and compilation
+ warnings for targets that match the regular expression. If a target does not
+ match the given regular expression and its execution succeeds, its standard
+ output and standard error are thrown away. This option is intended to be used
+ to help focus efforts on fixing warnings in packages under development. Here
+ are some typical values for this option:
+
--output_filter= |
+ Show all output. | +
--output_filter='^//(first/project|second/project):' |
+ Show the output for the specified packages. | +
--output_filter='^//((?!(first/bad_project|second/bad_project):).)*$' |
+ Don't show output for the specified packages. | +
--output_filter=DONT_MATCH_ANYTHING |
+ Don't show output. | +
--[no]analysis_warnings_as_errors
+ When this option is enabled, visible analysis warnings (as specified by + the output filter) are treated as errors, effectively preventing the build + phase from starting. This feature can be used to enable strict builds that + do not allow new warnings to creep into a project. +
+ ++ These options control which options Bazel will pass to other tools. +
+ +--copt gcc-option
+ This option takes an argument which is to be passed to gcc.
+ The argument will be passed to gcc whenever gcc is invoked
+ for preprocessing, compiling, and/or assembling C, C++, or
+ assembler code. It will not be passed when linking.
+
+ This option can be used multiple times.
+ For example:
+ % bazel build --copt="-g0" --copt="-fpic" //foo
+ will compile the foo
library without debug tables, generating
+ position-independent code.
+
+ Note that changing --copt
settings will force a recompilation
+ of all affected object files. Also note that copts values listed in specific
+ cc_library or cc_binary build rules will be placed on the gcc command line
+ after these options.
+
+ Warning: C++-specific options (such as -fno-implicit-templates
)
+ should be specified in --cxxopt
, not in
+ --copt
. Likewise, C-specific options (such as -Wstrict-prototypes)
+ should be specified in --conlyopt
, not in copt
.
+ Similarly, gcc options that only have an
+ effect at link time (such as -l
) should be specified in
+ --linkopt
, not in --copt
.
+
--host_copt gcc-option
+ This option takes an argument which is to be passed to gcc for source files
+ that are compiled in the host configuration. This is analogous to
+ the --copt
option, but applies only to the
+ host configuration.
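+
+ For example, to disable optimization only for tools compiled in the host
+ configuration (target label hypothetical):
+
+ % bazel build --host_copt="-O0" //foo:codegen_tool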
+
--conlyopt gcc-option
+ This option takes an argument which is to be passed to gcc when compiling C source files. +
+
+ This is similar to --copt
, but only applies to C compilation,
+ not to C++ compilation or linking. So you can pass C-specific options
+ (such as -Wno-pointer-sign
) using --conlyopt
.
+
+ Note that copts parameters listed in specific cc_library or cc_binary build rules + will be placed on the gcc command line after these options. +
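+
+ For example, to suppress a C-only warning without affecting C++ compilation
+ or linking (target label hypothetical):
+
+ % bazel build --conlyopt="-Wno-pointer-sign" //foo:legacy_c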
+ +--cxxopt gcc-option
+ This option takes an argument which is to be passed to gcc when compiling C++ source files. +
+
+ This is similar to --copt
, but only applies to C++ compilation,
+ not to C compilation or linking. So you can pass C++-specific options
+ (such as -fpermissive
or -fno-implicit-templates
) using --cxxopt
.
+ For example:
+
+ % bazel build --cxxopt="-fpermissive" --cxxopt="-Wno-error" //foo/cruddy_code
+ Note that copts parameters listed in specific cc_library or cc_binary build rules + will be placed on the gcc command line after these options. +
+ +--linkopt linker-option
+ This option takes an argument which is to be passed to gcc when linking. +
+
+ This is similar to --copt
, but only applies to linking,
+ not to compilation. So you can pass gcc options that only make sense
+ at link time (such as -lssp
or -Wl,--wrap,abort
)
+ using --linkopt
. For example:
+
+ % bazel build --copt="-fmudflap" --linkopt="-lmudflap" //foo/buggy_code
+ Build rules can also specify link options in their attributes. This option's + settings always take precedence. Also see + cc_library.linkopts. +
+ +--strip (always|never|sometimes)
+ This option determines whether Bazel will strip debugging information from
+ all binaries and shared libraries, by invoking the linker with the -Wl,--strip-debug
option.
+ --strip=always
means always strip debugging information.
+ --strip=never
means never strip debugging information.
+ The default value of --strip=sometimes
means strip iff the --compilation_mode
+ is fastbuild
.
+
+ % bazel build --strip=always //foo:bar
+ will compile the target while stripping debugging information from all generated + binaries. +
+
+ Note that if you want debugging information, it's not enough to disable stripping; you also need to make
+ sure that the debugging information was generated by the compiler, which you can do by using either
+ -c dbg
or --copt -g
.
+
+ Note also that Bazel's --strip
option corresponds with ld's --strip-debug
option:
+ it only strips debugging information. If for some reason you want to strip all symbols,
+ not just debug symbols, you would need to use ld's --strip-all
option,
+ which you can do by passing --linkopt=-Wl,--strip-all
to Bazel.
+
--stripopt strip-option
+ An additional option to pass to the strip
command when generating
+ a *.stripped
+ binary. The default is -S -p
. This option can be used
+ multiple times.
+
+ Note that --stripopt
does not apply to the stripping of the main
+ binary with --strip=(always|sometimes)
.
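+
+ For example, to pass an extra flag to strip when generating a
+ *.stripped binary (target label hypothetical):
+
+ % bazel build --stripopt=--strip-debug //foo:bar.stripped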
+
--fdo_instrument profile-output-dir
+ The --fdo_instrument
option enables the generation of
+ FDO (feedback directed optimization) profile output when the
+ built C/C++ binary is executed. For GCC, the argument provided is used as a
+ directory prefix for a per-object file directory tree of .gcda files
+ containing profile information for each .o file.
+
+ Once the profile data tree has been generated, the profile tree
+ should be zipped up, and provided to the
+ --fdo_optimize=profile-zip
+ Bazel option to enable the FDO optimized compilation.
+
+
+ For the LLVM compiler the argument is also the directory under which the raw LLVM profile
+ data file(s) is dumped, e.g.
+ --fdo_instrument=/path/to/rawprof/dir/
.
+
+ The options --fdo_instrument
and --fdo_optimize
+ cannot be used at the same time.
+
--fdo_optimize profile-zip
+ The --fdo_optimize
option enables the use of the
+ per-object file profile information to perform FDO (feedback
+ directed optimization) optimizations when compiling. For GCC, the argument
+ provided is the zip file containing the previously-generated file tree
+ of .gcda files containing profile information for each .o file.
+
+ Alternatively, the argument provided can point to an auto profile
+ identified by the extension .afdo.
+
+ Note that this option also accepts labels that resolve to source files. You
+ may need to add an exports_files
directive to the corresponding package to
+ make the file visible to Bazel.
+
+ For the LLVM compiler the argument provided should point to the indexed LLVM + profile output file prepared by the llvm-profdata tool, and should have a .profdata + extension. +
+
+ The options --fdo_instrument
and
+ --fdo_optimize
cannot be used at the same time.
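+
+ A typical GCC-based FDO workflow might look like this (paths and target
+ labels hypothetical):
+
+ % bazel build --fdo_instrument=/tmp/fdo //foo:bar
+ % bazel-bin/foo/bar        # run a representative workload to emit .gcda files
+ % (cd /tmp/fdo && zip -r /tmp/profile.zip .)
+ % bazel build --fdo_optimize=/tmp/profile.zip //foo:bar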
+
--lipo (off|binary)
+ The --lipo=binary
option enables
+
+ LIPO
+ (Lightweight Inter-Procedural Optimization). LIPO is an extended C/C++ optimization technique
+ that optimizes code across different object files. It involves compiling each C/C++ source
+ file differently for every binary. This is in contrast to normal compilation where compilation
+ outputs are reused. This means that LIPO is more expensive than normal compilation.
+
+ This option only has an effect when FDO is also enabled (see the
+ --fdo_instrument and
+ --fdo_optimize).
+ Currently LIPO is only supported when building a single cc_binary
rule.
+
Setting --lipo=binary
implicitly sets
+ --dynamic_mode=off
.
+
--lipo_context
+ context-binary
+ Specifies the label of a cc_binary
rule that was used to generate
+ the profile information for LIPO that was given to
+ the --fdo_optimize
option.
+
+ Specifying the context is mandatory when --lipo=binary
is set.
+ Using this option implicitly also sets
+ --linkopt=-Wl,--warn-unresolved-symbols
.
+
--[no]output_symbol_counts
+ If enabled, each gold-invoked link of a C++ executable binary will also output
+ a symbol counts file (via the --print-symbol-counts
gold
+ option) that logs the number of symbols from each .o input that were used in
+ the binary. This can be used to track unnecessary link dependencies. The
+ symbol counts file is written to the binary's output path with the name
+ [targetname].sc
.
+
+ This option is disabled by default. +
+ +--jvmopt jvm-option
+ This option allows option arguments to be passed to the Java VM. It can be used + with one big argument, or multiple times with individual arguments. For example: +
+ % bazel build --jvmopt="-server -Xms256m" java/com/example/common/foo:all
+ will use the server VM for launching all Java binaries and set the + startup heap size for the VM to 256 MB. +
+ +--javacopt javac-option
+ This option allows option arguments to be passed to javac. It can be used + with one big argument, or multiple times with individual arguments. For example: +
+ % bazel build --javacopt="-g:source,lines" //myprojects:prog
+ will rebuild a java_binary with the javac default debug info + (instead of the bazel default). +
++ The option is passed to javac after the Bazel built-in default options for + javac and before the per-rule options. The last specification of + any option to javac wins. The default options for javac are: +
+ -source 8 -target 8 -encoding UTF-8
+ Note that changing --javacopt
settings will force a recompilation
+ of all affected classes. Also note that javacopts parameters listed in
+ specific java_library or java_binary build rules will be placed on the javac
+ command line after these options.
+
-extra_checks[:(off|on)]
+ This javac option enables extra correctness checks. Any problems found will
+ be presented as errors.
+ Either -extra_checks
or -extra_checks:on
may be used
+ to force the checks to be turned on. -extra_checks:off
completely
+ disables the analysis.
+ When this option is not specified, the default behavior is used.
+
--strict_java_deps
+ (default|strict|off|warn|error)
+ This option controls whether javac checks for missing direct dependencies. + Java targets must explicitly declare all directly used targets as + dependencies. This flag instructs javac to determine the jars actually used + for type checking each java file, and warn/error if they are not the output + of a direct dependency of the current target. +
+ +off
means checking is disabled.
+ warn
means javac will generate standard java warnings of
+ type [strict]
for each missing direct dependency.
+ default
, strict
and error
all
+ mean javac will generate errors instead of warnings, causing the current
+ target to fail to build if any missing direct dependencies are found.
+ This is also the default behavior when the flag is unspecified.
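+
+ For example, to demote missing direct dependencies to warnings while you
+ clean them up (target label hypothetical):
+
+ % bazel build --strict_java_deps=warn //java/com/example:app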
+ --javawarn (all|cast|deprecation|empty|unchecked|fallthrough|path|rawtypes|serial|finally|overrides)
+ This option is used to enable Java warnings across an entire build. It takes + an argument which is a javac warning to be enabled, overriding any other Java + options that disable the given warning. The arguments to this option are + appended to the "-Xlint:" flag to javac, and must be exactly one of + the listed warnings. +
++ For example: +
+ % bazel build --javawarn="deprecation" --javawarn="unchecked" //java/...
+ Note that changing --javawarn
settings will force a recompilation
+ of all affected classes.
+
+ These options affect the build commands and/or the output file contents. +
+ +--compilation_mode (fastbuild|opt|dbg)
(-c)
+ This option takes an argument of fastbuild
, dbg
+ or opt
, and affects various C/C++ code-generation
+ options, such as the level of optimization and the completeness of
+ debug tables. Bazel uses a different output directory for each
+ different compilation mode, so you can switch between modes without
+ needing to do a full rebuild every time.
+
fastbuild
means build as fast as possible:
+ generate minimal debugging information (-gmlt
+ -Wl,-S
), and don't optimize. This is the
+ default. Note: -DNDEBUG
will not be set.
+ dbg
means build with debugging enabled (-g
),
+ so that you can use gdb (or another debugger).
+ opt
means build with optimization enabled and
+ with assert()
calls disabled (-O2 -DNDEBUG
).
+ Debugging information will not be generated in opt
mode
+ unless you also pass --copt -g
.
+ --cpu cpu
+ This option specifies the target CPU architecture to be used for
+ the compilation of binaries during the build.
+
+ Note that a particular combination of crosstool version, compiler version,
+ libc version, and target CPU is allowed only if it has been specified
+ in the currently used CROSSTOOL file.
+ +--host_cpu cpu
+ This option specifies the name of the CPU architecture that should be + used to build host tools. +
+ +--experimental_skip_static_outputs
+ The --experimental_skip_static_outputs
option causes all
+ statically-linked C++ binaries to not be output in any meaningful
+ way.
+
+
+ If you set this flag, you must also
+ set --distinct_host_configuration
.
+ It is also inherently incompatible with running tests; don't use it for
+ that. This option is experimental and may go away at any time.
+
--per_file_copt
+ [+-]regex[,[+-]regex]...@option[,option]...
+ When present, any C++ file with a label or an execution path matching one of the inclusion regex
+ expressions and not matching any of the exclusion expressions will be built
+ with the given options. The label matching uses the canonical form of the label
+ (i.e. //package
:label_name
).
+
+ The execution path is the relative path to your workspace directory including the base name
+ (including extension) of the C++ file. It also includes any platform dependent prefixes.
+ Note that if either the label or the execution path matches, the options will be used.
+
+ Notes:
+ To match the generated files (e.g. genrule outputs)
+ Bazel can only use the execution path. In this case the regexp shouldn't start with '//'
+ since that doesn't match any execution paths. Package names can be used like this:
+ --per_file_copt=base/.*\.pb\.cc@-g0
. This will match every
+ .pb.cc
file under a directory called base
.
+
+ This option can be used multiple times. +
+
+ The option is applied regardless of the compilation mode used. I.e. it is possible
+ to compile with --compilation_mode=opt
and selectively compile some
+ files with stronger optimization turned on, or with optimization disabled.
+
+ Caveat: If some files are selectively compiled with debug symbols the symbols
+ might be stripped during linking. This can be prevented by setting
+ --strip=never
.
+
+ Syntax: [+-]regex[,[+-]regex]...@option[,option]...
Where
+ regex
stands for a regular expression that can be prefixed with
+ a +
to identify include patterns and with -
to identify
+ exclude patterns. option
stands for an arbitrary option that is passed
+ to the C++ compiler. If an option contains a ,
it has to be quoted like so
+ \,
. Options can also contain @
, since only the first
+ @
is used to separate regular expressions from options.
+
+ Example:
+ --per_file_copt=//foo:.*\.cc,-//foo:file\.cc@-O0,-fprofile-arcs
+ adds the -O0
and the -fprofile-arcs
options to the command
+ line of the C++ compiler for all .cc
files in //foo/
except
+ file.cc
.
+
--dynamic_mode mode
+ Determines whether C++ binaries will be linked dynamically, interacting with + the linkstatic + attribute on build rules. +
+ ++ Modes: +
+auto
: Translates to a platform-dependent mode;
+ default
for linux and off
for cygwin.default
: Allows Bazel to choose whether to link dynamically.
+ See linkstatic for more
+ information.fully
: Links all targets dynamically. This will speed up
+ linking time, and reduce the size of the resulting binaries.
+
+ off
: Links all targets in
+ mostly static mode.
+ If -static
is set in linkopts, targets will change to fully
+ static.--fission (yes|no|[dbg][,opt][,fastbuild])
+ Enables + + Fission, + which writes C++ debug information to dedicated .dwo files instead of .o files, where it would + otherwise go. This substantially reduces the input size to links and can reduce link times. + +
+
+ When set to [dbg][,opt][,fastbuild]
(example:
+ --fission=dbg,fastbuild
), Fission is enabled
+ only for the specified set of compilation modes. This is useful for bazelrc
+ settings. When set to yes
, Fission is enabled
+ universally. When set to no
, Fission is disabled
+ universally. Default is dbg
.
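+
+ For example, to enable Fission for both debug and optimized builds (target
+ label hypothetical):
+
+ % bazel build --fission=dbg,opt //foo:bar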
+
--force_ignore_dash_static
+ If this flag is set, any -static
options in linkopts of
+ cc_*
rules BUILD files are ignored. This is only intended as a
+ workaround for C++ hardening builds.
+
--[no]force_pic
+ If enabled, all C++ compilations produce position-independent code ("-fPIC"), + links prefer PIC pre-built libraries over non-PIC libraries, and links produce + position-independent executables ("-pie"). Default is disabled. +
+
+ Note that dynamically linked binaries (i.e. --dynamic_mode fully
)
+ generate PIC code regardless of this flag's setting. So this flag is for cases
+ where users want PIC code explicitly generated for static links.
+
--custom_malloc malloc-library-target
+ When specified, always use the given malloc implementation, overriding all
+ malloc="target"
attributes, including in those targets that use the
+ default (by not specifying any malloc
).
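+
+ For example, to link every target against a specific malloc implementation
+ (both labels hypothetical):
+
+ % bazel build --custom_malloc=//third_party/tcmalloc //foo:bar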
+
--crosstool_top label
+ This option specifies the location of the crosstool compiler suite
+ to be used for all C++ compilation during a build. Bazel will look in that
+ location for a CROSSTOOL file and uses that to automatically determine
+ settings for
+
+ --compiler
.
+
--host_crosstool_top label
+ If not specified, bazel uses the value of --crosstool_top
to compile
+ code in the host configuration, i.e., tools run during the build. The main purpose of this flag
+ is to enable cross-compilation.
+
--compiler version
+ This option specifies the C/C++ compiler version (e.g. gcc-4.1.0
)
+ to be used for the compilation of binaries during the build. If you want to
+ build with a custom crosstool, you should use a CROSSTOOL file instead of
+ specifying this flag.
+
+ Note that only certain combinations of crosstool version, compiler version, + libc version, and target CPU are allowed. +
+ +--glibc version
+ This option specifies the version of glibc that the target should be linked + against. If you want to build with a custom crosstool, you should use a + CROSSTOOL file instead of specifying this flag. In that case, Bazel will use + the CROSSTOOL file and the following options where appropriate: +
--cpu
+ Note that only certain combinations of crosstool version, compiler version, + glibc version, and target CPU are allowed. +
+ +--java_toolchain label
+ This option specifies the label of the java_toolchain used to compile Java + source files. +
+ +--javabase (path|label)
+ This option sets the label or the path of the base Java installation to use
+ for running JavaBuilder and SingleJar; it is also used for bazel run and inside
+ Java binaries built by java_binary
rules. The various
+ "Make" variables for
+ Java (JAVABASE
, JAVA
, JAVAC
and
+ JAR
) are derived from this option.
+
+ This does not select the Java compiler that is used to compile Java
+ source files. The compiler can be selected by setting the
+ --java_toolchain
+ option.
+
+ These options affect how Bazel will execute the build.
+ They should not have any significant effect on the output files
+ generated by the build. Typically their main effect is on the
+ speed of the build.
+ +--spawn_strategy strategy
+ This option controls where and how commands are executed. +
+standalone
causes commands to be executed as local subprocesses.
+ sandboxed
causes commands to be executed inside a sandbox on the local machine.
+ This requires that all input files, data dependencies and tools are listed as direct
+ dependencies in the srcs
, data
and tools
attributes.
+ This is the default on systems that support sandboxed execution.
+ --genrule_strategy strategy
+ This option controls where and how genrules are executed. +
+standalone
causes genrules to run as local subprocesses.
+ sandboxed
causes genrules to run inside a sandbox on the local machine.
+ This requires that all input files are listed as direct dependencies in
+ the srcs
attribute, and the program(s) executed are listed
+ in the tools
attribute.
+ This is the default for Bazel on systems that support sandboxed execution.
+ --local_genrule_timeout_seconds seconds
Sets a timeout for local genrules, in the given number of seconds.
+ +--jobs n
(-j)
+ This option, which takes an integer argument, specifies a limit on
+ the number of jobs that should be executed concurrently during the
+ execution phase of the build. The default is 200.
+
+ Note that the number of concurrent jobs that Bazel will run
+ is determined not only by the --jobs
setting, but also
+ by Bazel's scheduler, which tries to avoid running concurrent jobs
+ that will use up more resources (RAM or CPU) than are available,
+ based on some (very crude) estimates of the resource consumption
+ of each job. The behavior of the scheduler can be controlled by
+ the --ram_utilization_factor
option.
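+
+ For example, to cap the build at 12 concurrent jobs (target label
+ hypothetical):
+
+ % bazel build --jobs 12 //foo:bar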
+
--progress_report_interval n
+
+ Bazel periodically prints a progress report on jobs that are not
+ finished yet (e.g. long running tests). This option sets the
+ reporting frequency, progress will be printed every n
+ seconds.
+
+ The default is 0, that means an incremental algorithm: the first + report will be printed after 10 seconds, then 30 seconds and after + that progress is reported once every minute. +
+ +--ram_utilization_factor
percentage
+ This option, which takes an integer argument, specifies what percentage
+ of the system's RAM Bazel should try to use for its subprocesses.
+ This option affects how many processes Bazel will try to run
+ in parallel. The default value is 67.
+ If you run several Bazel builds in parallel, using a lower
+ value for this option may avoid thrashing and thus improve overall
+ throughput. Using a value higher than the default is NOT recommended. Note
+ that Bazel's estimates are very coarse, so the actual RAM usage may be much
+ higher or much lower than specified. Note also that this option does not
+ affect the amount of memory that the Bazel server itself will use.
+ +--local_resources
availableRAM,availableCPU,availableIO
+ This option, which takes three comma-separated floating point arguments,
+ specifies the amount of local resources that Bazel can take into
+ consideration when scheduling build and test activities. The option expects
+ the amount of available RAM (in MB), the number of CPU cores (with 1.0
+ representing a single full core), and workstation I/O capability (with 1.0
+ representing an average workstation). By default, Bazel estimates the amount
+ of RAM and the number of CPU cores directly from the system configuration
+ and assumes an I/O resource of 1.0.
+
+ If this option is used, Bazel will ignore --ram_utilization_factor.
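+
+ For example, to tell Bazel it may use 4096 MB of RAM, two full CPU cores,
+ and average workstation I/O (target label hypothetical):
+
+ % bazel build --local_resources=4096,2.0,1.0 //foo:bar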
+ +--[no]build_runfile_links
+ This option, which is currently enabled by default, specifies
+ whether the runfiles symlinks for tests and
+ cc_binary
targets should be built in the output directory.
+ Using --nobuild_runfile_links
can be useful
+ to validate if all targets compile without incurring the overhead
+ for building the runfiles trees.
+
+ Within Bazel's output tree, the
+ runfiles symlink tree is typically rooted as a sibling of the corresponding
+ binary or test.
+
+ When tests (or applications) are executed, their
+ run-time data dependencies are gathered together in one place, and
+ may be accessed by the test using paths of the form
+ $TEST_SRCDIR/workspace/packagename/filename
.
+ The "runfiles" tree ensures that tests have access to all the files
+ upon which they have a declared dependence, and nothing more. By
+ default, the runfiles tree is implemented by constructing a set of
+ symbolic links to the required files. As the set of links grows, so
+ does the cost of this operation, and for some large builds it can
+ contribute significantly to overall build time, particularly because
+ each individual test (or application) requires its own runfiles tree.
+
+ The --build_runfile_links
flag controls the
+ construction of the tree of symbolic links (for C++ applications and
+ tests only). The reasons only C++ non-test rules are affected are numerous
+ and subtle: C++ builds are more likely to be slower due to runfiles;
+ no C++ host tools (tools that run during the build) need their runfiles,
+ so this option can be used by the host configuration; and other rules
+ (notably Python) need their runfiles for other purposes besides test
+ execution.
+
--[no]discard_analysis_cache
+ When this option is enabled, Bazel will discard the analysis cache
+ right before execution starts, thus freeing up additional memory
+ (around 10%) for the execution phase.
+ The drawback is that further incremental builds will be slower.
+ +--[no]keep_going
(-k)
+ As in GNU Make, the execution phase of a build stops when the first
+ error is encountered. Sometimes it is useful to try to build as
+ much as possible even in the face of errors. This option enables
+ that behavior, and when it is specified, the build will attempt to
+ build every target whose prerequisites were successfully built, but
+ will ignore errors.
+
+ While this option is usually associated with the execution phase of
+ a build, it also affects the analysis phase: if several targets are
+ specified in a build command, but only some of them can be
+ successfully analyzed, the build will stop with an error
+ unless --keep_going
is specified, in which case the
+ build will proceed to the execution phase, but only for the targets
+ that were successfully analyzed.
+
--[no]thin_archives
+ This option enables use of thin archives, an optimization which avoids
+ duplicating the content of object files when they are placed in archive
+ libraries; the archive library references the object file by name, and the
+ linker follows this reference as needed. This may give a speedup for C++
+ builds, especially when building a single large executable from clean.
+
+ This option is enabled by default;
+ use --nothin_archives
to disable.
+
--[no]use_ijars
+ This option changes the way java_library
targets are
+ compiled by Bazel. Instead of using the output of a
+ java_library
for compiling dependent
+ java_library
targets, Bazel will create interface jars
+ that contain only the signatures of non-private members (public,
+ protected, and default (package) access methods and fields) and use
+ the interface jars to compile the dependent targets. This makes it
+ possible to avoid recompilation when changes are only made to
+ method bodies or private members of a class.
+
+ Note that using --use_ijars
might give you a different
+ error message when you are accidentally referring to a non-visible
+ member of another class: instead of getting an error that the member
+ is not visible, you will get an error that the member does not exist.
+
+ Note that changing the --use_ijars
setting will force
+ a recompilation of all affected classes.
+
--[no]interface_shared_objects
+ This option enables interface shared objects, which makes binaries and
+ other shared libraries depend on the interface of a shared object,
+ rather than on its implementation. When only the implementation changes,
+ Bazel can avoid unnecessarily rebuilding targets that depend on the
+ changed shared library.
+
+ These options determine what to build or test.
+
+ --[no]build
+ This option causes the execution phase of the build to occur; it is
+ on by default. When it is switched off, the execution phase is
+ skipped, and only the first two phases, loading and analysis, occur.
+
+ This option can be useful for validating BUILD files and detecting
+ errors in the inputs, without actually building anything.
+
+ --[no]build_tests_only
+ If specified, Bazel will build only what is necessary to run the *_test
+ and test_suite rules that were not filtered due to their
+ size,
+ timeout,
+ tag, or
+ language.
+ If specified, Bazel will ignore other targets specified on the command line.
+ By default, this option is disabled and Bazel will build everything
+ requested, including *_test and test_suite rules that are filtered out from
+ testing. This is useful because running
+ bazel test --build_tests_only foo/...
may not detect all build
+ breakages in the foo
tree.
+
--[no]check_up_to_date
+ This option causes Bazel not to perform a build, but merely to check
+ whether all specified targets are up-to-date. If so, the build
+ completes successfully, as usual. However, if any files are out of
+ date, instead of being built, an error is reported and the build
+ fails. This option may be useful to determine whether a build has
+ been performed more recently than a source edit (e.g. for pre-submit
+ checks) without incurring the cost of a build.
+
+ See also --check_tests_up_to_date
.
+
--[no]compile_one_dependency
+ Compile a single dependency of the argument files. This is useful for
+ syntax checking source files in IDEs, for example, by rebuilding a single
+ target that depends on the source file to detect errors as early as
+ possible in the edit/build/test cycle. This argument affects the way all
+ non-flag arguments are interpreted: for each source filename, one
+ rule that depends on it will be built. For
+ C++ and Java
+ sources, rules in the same language space are preferentially chosen. For
+ multiple rules with the same preference, the one that appears first in the
+ BUILD file is chosen. An explicitly named target pattern which does not
+ reference a source file results in an error.
+
+ --save_temps
+ The --save_temps
option causes temporary outputs from gcc to be saved.
+ These include .s files (assembler code), .i (preprocessed C) and .ii
+ (preprocessed C++) files. These outputs are often useful for debugging. Temps will only be
+ generated for the set of targets specified on the command line.
+
+ Note that our implementation of --save_temps
does not use gcc's
+ -save-temps
flag. Instead, we do two passes, one with -S
+ and one with -E
. A consequence of this is that if your build fails,
+ Bazel may not yet have produced the ".i" or ".ii" and ".s" files.
+ If you're trying to use --save_temps
to debug a failed compilation,
+ you may need to also use --keep_going
so that Bazel will still try to
+ produce the preprocessed files after the compilation fails.
+
+ The --save_temps
flag currently works only for cc_* rules.
+
+ To ensure that Bazel prints the location of the additional output files, check that
+ your --show_result n
+ setting is high enough.
+
--test_size_filters size[,size]*
+ If specified, Bazel will test (or build if --build_tests_only
+ is also specified) only test targets with the given size. Test size filter
+ is specified as a comma-delimited list of allowed test size values (small,
+ medium, large or enormous), optionally preceded by a '-' sign to denote
+ excluded test sizes. For example,
+
+ % bazel test --test_size_filters=small,medium //foo:all
+ and
+ % bazel test --test_size_filters=-large,-enormous //foo:all
+
+ will test only small and medium tests inside //foo.
+
+ By default, test size filtering is not applied.
+
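The include/exclude semantics shared by the size, timeout, and tag filters can be sketched as follows. This is an illustrative Python sketch of the behavior described above, not Bazel's code; `parse_filters` and `matches` are hypothetical helpers.

```python
def parse_filters(spec):
    """Split a comma-delimited filter spec into (included, excluded) sets.

    A leading '-' marks an excluded value; an optional leading '+' on a
    required value is accepted and stripped, as with --test_tag_filters.
    """
    included, excluded = set(), set()
    for item in spec.split(","):
        if item.startswith("-"):
            excluded.add(item[1:])
        else:
            included.add(item.lstrip("+"))
    return included, excluded

def matches(value, included, excluded):
    # Excluded values always lose; an empty include list allows everything.
    if value in excluded:
        return False
    return not included or value in included

inc, exc = parse_filters("-large,-enormous")
print([s for s in ("small", "medium", "large", "enormous")
       if matches(s, inc, exc)])  # -> ['small', 'medium']
```

The same shape applies to `--test_timeout_filters` and `--test_tag_filters` below, with timeout or tag keywords in place of sizes.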
+
+ --test_timeout_filters timeout[,timeout]*
+ If specified, Bazel will test (or build if --build_tests_only
+ is also specified) only test targets with the given timeout. The test timeout
+ filter is specified as a comma-delimited list of allowed test timeout values
+ (short, moderate, long or eternal), optionally preceded by a '-' sign to denote
+ excluded test timeouts. See --test_size_filters
+ for example syntax.
+
+ By default, test timeout filtering is not applied.
+
+
+ --test_tag_filters tag[,tag]*
+ If specified, Bazel will test (or build if --build_tests_only
+ is also specified) only test targets that have at least one required tag
+ (if any of them are specified) and does not have any excluded tags. Test tag
+ filter is specified as a comma-delimited list of tag keywords, optionally
+ preceded by a '-' sign to denote excluded tags. Required tags may also
+ have a preceding '+' sign.
+
+ For example,
+
+ % bazel test --test_tag_filters=performance,stress,-flaky //myproject:all
+
+ will test targets that are tagged with either the performance
or
+ stress
tag but are not tagged with the flaky
+ tag.
+
+ By default, test tag filtering is not applied. Note that you can also filter
+ on test's size
and local
tags in
+ this manner.
+
--test_lang_filters lang[,lang]*
+ Specifies a comma-separated list of test languages for languages with an official *_test
rule
+ (see the build encyclopedia for a full list of these). Each
+ language can be optionally preceded with '-' to specify excluded
+ languages. The name used for each language should be the same as
+ the language prefix in the *_test
rule, for example,
+ cc
, java
or sh
.
+
+ If specified, Bazel will test (or build if --build_tests_only
+ is also specified) only test targets of the specified language(s).
+
+ For example,
+
+ % bazel test --test_lang_filters=cc,java foo/...
+
+ will test only the C/C++ and Java tests (defined using
+ cc_test
and java_test
rules, respectively)
+ in foo/...
, while
+
+ % bazel test --test_lang_filters=-sh,-java foo/...
+
+ will run all of the tests in foo/...
except for the
+ sh_test
and java_test
tests.
+
+ By default, test language filtering is not applied.
+
+ --test_filter=filter-expression
+ Specifies a filter that the test runner may use to pick a subset of tests for
+ running. All targets specified in the invocation are built, but depending on
+ the expression only some of them may be executed; in some cases, only certain
+ test methods are run.
+
+ The particular interpretation of filter-expression is up to
+ the test framework responsible for running the test. It may be a glob,
+ substring, or regexp. --test_filter
is a convenience
+ over passing different --test_arg
filter arguments,
+ but not all frameworks support it.
+
--explain logfile
+ This option, which requires a filename argument, causes the
+ dependency checker in bazel build
's execution phase to
+ explain, for each build step, either why it is being executed, or
+ that it is up-to-date. The explanation is written
+ to logfile.
+
+ If you are encountering unexpected rebuilds, this option can help to
+ understand the reason. Add it to your .bazelrc
so that
+ logging occurs for all subsequent builds, and then inspect the log
+ when you see an execution step executed unexpectedly. This option
+ may carry a small performance penalty, so you might want to remove
+ it when it is no longer needed.
+
--verbose_explanations
+ This option increases the verbosity of the explanations generated
+ when the --explain option is enabled.
+
+ In particular, if verbose explanations are enabled,
+ and an output file is rebuilt because the command used to
+ build it has changed, then the output in the explanation file will
+ include the full details of the new command (at least for most
+ commands).
+
+
+ Using this option may significantly increase the length of the
+ generated explanation file and the performance penalty of using
+ --explain
.
+
+ If --explain
is not enabled, then
+ --verbose_explanations
has no effect.
+
--profile file
+ This option, which takes a filename argument, causes Bazel to write
+ profiling data into a file. The data then can be analyzed or parsed using the
+ bazel analyze-profile
command. The build profile can be useful in
+ understanding where Bazel's build
command is spending its time.
+
--[no]show_loading_progress
+ This option causes Bazel to output package-loading progress
+ messages. If it is disabled, the messages won't be shown.
+
+ --[no]show_progress
+ This option causes progress messages to be displayed; it is on by
+ default. When disabled, progress messages are suppressed.
+
+ --show_progress_rate_limit
+ n
+ This option causes bazel to display only
+ one progress message per n
seconds, where n is a real number.
+ If n
is -1, all progress messages will be displayed. The default value for
+ this option is 0.03, meaning Bazel will limit the progress messages to one
+ every 0.03 seconds.
+
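The rate-limiting behavior just described can be sketched as a small throttle. This is an illustrative Python sketch of the flag's semantics, not Bazel's implementation; the `RateLimiter` class is hypothetical.

```python
class RateLimiter:
    """Emit at most one progress message per `period` seconds.

    A period of -1 disables limiting, matching the flag's description.
    """
    def __init__(self, period):
        self.period = period
        self.last = float("-inf")

    def should_show(self, now):
        if self.period == -1:
            return True
        if now - self.last >= self.period:
            self.last = now
            return True
        return False

rl = RateLimiter(0.03)
# Messages arriving 10 ms apart: only every third one is shown.
shown = [rl.should_show(t) for t in (0.00, 0.01, 0.02, 0.03, 0.04)]
print(shown)  # -> [True, False, False, True, False]
```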
--show_result n
+ This option controls the printing of result information at the end
+ of a bazel build
command. By default, if a single
+ build target was specified, Bazel prints a message stating whether
+ or not the target was successfully brought up-to-date, and if so,
+ the list of output files that the target created. If multiple
+ targets were specified, result information is not displayed.
+
+ While the result information may be useful for builds of a single
+ target or a few targets, for large builds (e.g. an entire top-level
+ project tree), this information can be overwhelming and distracting;
+ this option allows it to be controlled. --show_result
+ takes an integer argument, which is the maximum number of targets
+ for which full result information should be printed. By default,
+ the value is 1. Above this threshold, no result information is
+ shown for individual targets. Thus zero causes the result
+ information to be suppressed always, and a very large value causes
+ the result to be printed always.
+
+ Users may wish to choose a value in-between if they regularly
+ alternate between building a small group of targets (for example,
+ during the compile-edit-test cycle) and a large group of targets
+ (for example, when establishing a new workspace or running
+ regression tests). In the former case, the result information is
+ very useful whereas in the latter case it is less so. As with all
+ options, this can be specified implicitly via
+ the .bazelrc
file.
+
+ The files are printed so as to make it easy to copy and paste the
+ filename to the shell, to run built executables. The "up-to-date"
+ or "failed" messages for each target can be easily parsed by scripts
+ which drive a build.
+
+ --subcommands (-s)
+ This option causes Bazel's execution phase to print the full command line
+ for each command prior to executing it.
+
+ >>>>> # //examples/cpp:hello-world [action 'Linking examples/cpp/hello-world']
+ (cd /home/jrluser/.cache/bazel/_bazel_jrluser/4c084335afceb392cfbe7c31afee3a9f/bazel && \
+   exec env - \
+   /usr/bin/gcc -o bazel-out/local_linux-fastbuild/bin/examples/cpp/hello-world -B/usr/bin/ -Wl,-z,relro,-z,now -no-canonical-prefixes -pass-exit-codes '-Wl,--build-id=md5' '-Wl,--hash-style=gnu' -Wl,-S -Wl,@bazel-out/local_linux-fastbuild/bin/examples/cpp/hello-world-2.params)
+
+ Where possible, commands are printed in a Bourne shell compatible syntax,
+ so that they can be easily copied and pasted to a shell command prompt.
+ (The surrounding parentheses are provided to protect your shell from the
+ cd
and exec
calls; be sure to copy them!)
+ However some commands are implemented internally within Bazel, such as
+ creating symlink trees. For these there's no command line to display.
+
+
+ See also --verbose_failures, below.
+
+ --verbose_failures
+ This option causes Bazel's execution phase to print the full command line
+ for commands that failed. This can be invaluable for debugging a
+ failing build.
+
+ Failing commands are printed in a Bourne shell compatible syntax, suitable
+ for copying and pasting to a shell prompt.
+
+ --[no]stamp
+ This option controls whether stamping is enabled for
+ rule types that support it. For most of the supported rule types stamping is
+ enabled by default (e.g. cc_binary
).
+
+ By default, stamping is disabled for all tests. Specifying
+ --stamp
does not force affected targets to be rebuilt,
+ if their dependencies have not changed.
+
+ Stamping can be enabled or disabled explicitly in BUILD using
+ the stamp
attribute of certain rule types; please refer to
+ the build encyclopedia for details. For
+ rules that are neither explicitly nor implicitly configured as stamp =
+ 0
or stamp = 1
, the --[no]stamp
option
+ selects whether stamping is enabled. Bazel never stamps binaries that are
+ built for the host configuration, regardless of the stamp attribute.
+
--symlink_prefix string
+ Changes the prefix of the generated convenience symlinks. The
+ default value for the symlink prefix is bazel-
which
+ will create the symlinks bazel-bin
, bazel-testlogs
, and
+ bazel-genfiles
.
+
+ If the symbolic links cannot be created for any reason, a warning is
+ issued but the build is still considered a success. In particular,
+ this allows you to build in a read-only directory or one that you have no
+ permission to write into. Any paths printed in informational
+ messages at the conclusion of a build will only use the
+ symlink-relative short form if the symlinks point to the expected
+ location; in other words, you can rely on the correctness of those
+ paths, even if you cannot rely on the symlinks being created.
+
+ Some common values of this option:
+
+ Suppress symlink creation:
+ --symlink_prefix=/
will cause Bazel to not
+ create or update any symlinks, including the bazel-out
and
+
+ bazel-<workspace>
+ symlinks. Use this option to suppress symlink creation entirely.
+
Reduce clutter:
+ --symlink_prefix=.bazel/
will cause Bazel to create
+ symlinks called bin
(etc) inside a hidden directory .bazel
.
+
--platform_suffix string
+ Adds a suffix to the configuration short name, which is used to determine the
+ output directory. Setting this option to different values puts the files into
+ different directories, for example to improve cache hit rates for builds that
+ otherwise clobber each other's output files, or to keep the output files around
+ for comparisons.
+
+ --default_visibility=(private|public)
+ Temporary flag for testing bazel default visibility changes. Not intended for general use
+ but documented for completeness' sake.
+
+ Bazel is used both by software engineers during the development
+ cycle, and by release engineers when preparing binaries for deployment
+ to production. This section provides a list of tips for release
+ engineers using Bazel.
+
+ When using Bazel for release builds, the same issues arise as for
+ other scripts that perform a build, so you should read
+ the scripting section of this manual.
+ In particular, the following options are strongly recommended:
+
+ --bazelrc=/dev/null
+ --batch
+
+ These options (q.v.) are also important:
+ +--package_path
--symlink_prefix
:
+ for managing builds for multiple configurations,
+ it may be convenient to distinguish each build
+ with a distinct identifier, e.g. "64bit" vs. "32bit". This option
+ differentiates the bazel-bin
(etc.) symlinks.
+
+ To build and run tests with bazel, type bazel test
followed by
+ the name of the test targets.
+
+ By default, this command performs simultaneous build and test
+ activity, building all specified targets (including any non-test
+ targets specified on the command line) and testing
+ *_test
and test_suite
targets as soon as
+ their prerequisites are built, meaning that test execution is
+ interleaved with building. Doing so usually results in significant
+ speed gains.
+
+
bazel test
--cache_test_results=(yes|no|auto) (-t)
+ If this option is set to 'auto' (the default) then Bazel will only rerun a test if any of the
+ following conditions applies:
+
+ Bazel detects changes in the test or its dependencies
+ the test is marked as external
+ multiple test runs were requested with --runs_per_test
+ the test failed
+
+ If 'no', all tests will be executed unconditionally.
+
+ If 'yes', the caching behavior will be the same as auto
+ except that it may cache test failures and test runs with
+ --runs_per_test
.
+
+ Note that test results are always saved in Bazel's output tree,
+ regardless of whether this option is enabled, so
+ you needn't have used --cache_test_results
on the
+ prior run(s) of bazel test
in order to get cache hits.
+ The option only affects whether Bazel will use previously
+ saved results, not whether it will save results of the current run.
+
+ Users who have enabled this option by default in
+ their .bazelrc
file may find the
+ abbreviations -t
(on) or -t-
(off)
+ convenient for overriding the default on a particular run.
+
--check_tests_up_to_date
+ This option tells Bazel not to run the tests, but merely to check and report
+ the cached test results. If there are any tests which have not been
+ previously built and run, or whose test results are out-of-date (e.g. because
+ the source code or the build options have changed), then Bazel will report
+ an error message ("test result is not up-to-date"), will record the test's
+ status as "NO STATUS" (in red, if color output is enabled), and will return
+ a non-zero exit code.
+
+ This option also implies
+ --check_up_to_date
behavior.
+
+ This option may be useful for pre-submit checks.
+
+ --test_verbose_timeout_warnings
+ This option tells Bazel to explicitly warn the user if a test's timeout is
+ significantly longer than the test's actual execution time. While a test's
+ timeout should be set such that it is not flaky, a test that has a highly
+ over-generous timeout can hide real problems that crop up unexpectedly.
+
+ For instance, a test that normally executes in a minute or two should not have
+ a timeout of ETERNAL or LONG, as these are much, much too generous.
+
+ This option is useful to help users decide on a good timeout value or
+ sanity-check existing timeout values.
+
+ Note that each test shard is allotted the timeout of the entire
+ XX_test
+ target. Using this option does not affect a test's timeout
+ value; it merely warns if Bazel thinks the timeout could be restricted further.
+
--[no]test_keep_going
+ By default, all tests are run to completion. If this flag is disabled,
+ however, the build is aborted on any non-passing test. Subsequent build steps
+ and test invocations are not run, and in-flight invocations are canceled.
+ Do not specify both --notest_keep_going
and
+ --keep_going
.
+
--flaky_test_attempts attempts
+ This option specifies the maximum number of times a test should be attempted
+ if it fails for any reason. A test that initially fails but eventually
+ succeeds is reported as FLAKY
on the test summary. It is,
+ however, considered to be passed when it comes to identifying Bazel exit code
+ or total number of passed tests. Tests that fail all allowed attempts are
+ considered to be failed.
+
+ By default (when this option is not specified, or when it is set to
+ "default"), only a single attempt is allowed for regular tests, and
+ 3 for test rules with the flaky
attribute set. You can specify
+ an integer value to override the maximum limit of test attempts. Bazel allows
+ a maximum of 10 test attempts in order to prevent abuse of the system.
+
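The attempt-and-classify behavior of --flaky_test_attempts can be sketched as a small retry loop. This is an illustrative Python sketch of the semantics described above, not Bazel's code; `run_with_attempts` is a hypothetical helper.

```python
def run_with_attempts(test_fn, attempts):
    """Run `test_fn` up to `attempts` times and classify the result.

    Returns "PASSED", "FLAKY" (failed at least once but eventually
    passed), or "FAILED" (failed all allowed attempts).
    """
    for i in range(attempts):
        if test_fn():
            return "PASSED" if i == 0 else "FLAKY"
    return "FAILED"

# A hypothetical test that fails twice, then passes on the third attempt.
outcomes = iter([False, False, True])
print(run_with_attempts(lambda: next(outcomes), 3))  # -> FLAKY
```

As the text notes, a FLAKY result still counts as a pass for the purposes of the exit code and the passed-test count.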
--runs_per_test [regex@]number
+ This option specifies the number of times each test should be executed. All
+ test executions are treated as separate tests (e.g. fallback functionality
+ will apply to each of them independently).
+
+ The status of a target with failing runs depends on the value of the
+ --runs_per_test_detects_flakes
flag:
+
+ If a single number is specified, all tests will run that many times.
+ Alternatively, a regular expression may be specified using the syntax
+ regex@number. This constrains the effect of --runs_per_test to targets
+ which match the regex (e.g. "--runs_per_test=^//pizza:.*@4" runs all tests
+ under //pizza/ 4 times).
+ This form of --runs_per_test may be specified more than once.
+
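The regex@number syntax can be sketched as follows. This is an illustrative Python sketch, not Bazel's implementation; `runs_for_target` is hypothetical, and the last-spec-wins behavior here is an assumption, since the flag's actual precedence among repeated specifications is not spelled out above.

```python
import re

def runs_for_target(label, specs, default=1):
    """Resolve --runs_per_test-style values for a target label.

    Each spec is either a plain number (applies to every target) or
    "regex@number", constraining the count to labels matching the regex.
    """
    runs = default
    for spec in specs:
        if "@" in spec:
            pattern, count = spec.rsplit("@", 1)
            if re.search(pattern, label):
                runs = int(count)
        else:
            runs = int(spec)
    return runs

print(runs_for_target("//pizza:margherita", ["^//pizza:.*@4"]))  # -> 4
print(runs_for_target("//salad:caesar", ["^//pizza:.*@4"]))      # -> 1
```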
+ --[no]runs_per_test_detects_flakes
+ If this option is specified (by default it is not), Bazel will detect flaky
+ test shards through --runs_per_test. If one or more runs for a single shard
+ fail and one or more runs for the same shard pass, the target will be
+ considered flaky with the flag. If unspecified, the target will report a
+ failing status.
+
+ --test_summary output_style
+ Specifies how the test result summary should be displayed. +
+short
prints the results of each test along with the name of
+ the file containing the test output if the test failed. This is the default
+ value.
+ terse
like short
, but even shorter: only print
+ information about tests which did not pass.
+ detailed
prints each individual test case that failed, not
+ only each test. The names of test output files are omitted.
+ none
does not print any test summary.
+ --test_output output_style
+ Specifies how test output should be displayed: +
+summary
shows a summary of whether each test passed or
+ failed. Also shows the output log file name for failed tests. The summary
+ will be printed at the end of the build (during the build, one would see
+ just simple progress messages when tests start, pass or fail).
+ This is the default behavior.
+ errors
sends combined stdout/stderr output from failed tests
+ only into the stdout immediately after test is completed, ensuring that
+ test output from simultaneous tests is not interleaved with each other.
+ Prints a summary at the end of the build, as per the summary output above.
+ all
is similar to errors
but prints output for
+ all tests, including those which passed.
+ streamed
streams stdout/stderr output from each test in
+ real-time.
+
+ --java_debug
+ This option causes the Java virtual machine of a java test to wait for a connection from a
+ JDWP-compliant debugger before starting the test. This option implies --test_output=streamed.
+
+ --[no]verbose_test_summary
+ By default this option is enabled, causing test times and other additional
+ information (such as test attempts) to be printed to the test summary. If
+ --noverbose_test_summary
is specified, test summary will
+ include only test name, test status and cached test indicator and will
+ be formatted to stay within 80 characters when possible.
+
--test_tmpdir path
+ Specifies temporary directory for tests executed locally. Each test will be
+ executed in a separate subdirectory inside this directory. The directory will
+ be cleaned at the beginning of each bazel test
command.
+ By default, bazel will place this directory under the Bazel output base directory.
+ Note that this is a directory for running tests, not storing test results
+ (those are always stored under the bazel-out
directory).
+
--test_timeout
+ seconds
+ OR
+ --test_timeout
+ seconds,seconds,seconds,seconds
+
+ Overrides the timeout value for all tests by using the specified number of
+ seconds as a new timeout value. If only one value is provided, then it will
+ be used for all test timeout categories.
+
+ Alternatively, four comma-separated values may be provided, specifying
+ individual timeouts for short, moderate, long and eternal tests (in that
+ order).
+ In either form, zero or a negative value for any of the test sizes will
+ be substituted by the default timeout for the given timeout category as
+ defined by the page
+ Writing Tests.
+ By default, Bazel will use these timeouts for all tests by
+ inferring the timeout limit from the test's size, whether the size is
+ implicitly or explicitly set.
+
+ Tests which explicitly state their timeout category as distinct from their
+ size will receive the same value as if that timeout had been implicitly set by
+ the size tag. So a test of size 'small' which declares a 'long' timeout will
+ have the same effective timeout that a 'large' test has with no explicit
+ timeout.
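The one-value and four-value forms of --test_timeout can be sketched as follows. This is an illustrative Python sketch, not Bazel's implementation; `resolve_timeouts` is hypothetical, and the per-category default seconds are taken on the assumption that they match the values documented in Writing Tests.

```python
# Assumed per-category default timeouts (seconds), per Writing Tests.
DEFAULTS = {"short": 60, "moderate": 300, "long": 900, "eternal": 3600}
CATEGORIES = ("short", "moderate", "long", "eternal")

def resolve_timeouts(flag_value):
    """Interpret a --test_timeout-style value.

    One number applies to every category; four comma-separated numbers
    set them individually; zero or a negative number falls back to the
    category's default.
    """
    parts = [int(p) for p in flag_value.split(",")]
    if len(parts) == 1:
        parts = parts * 4
    return {cat: (v if v > 0 else DEFAULTS[cat])
            for cat, v in zip(CATEGORIES, parts)}

print(resolve_timeouts("120"))           # one value for all categories
print(resolve_timeouts("10,-1,0,7200"))  # -1 and 0 fall back to defaults
```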
+
+ --test_arg arg
+ Passes command-line options/flags/arguments to the test (not to the test runner). This
+ option can be used multiple times to pass several arguments, e.g.
+ --test_arg=--logtostderr --test_arg=--v=3
.
+
--test_env variable=value
+ OR
+ --test_env variable
+ Specifies additional variables that must be injected into the test
+ environment for each test. If value is not specified it will be
+ inherited from the shell environment used to start the bazel test
+ command.
+
+ The environment can be accessed from within a test by using
+ System.getenv("var")
(Java),
+ getenv("var")
(C or C++),
+
+
--run_under=command-prefix
+ This specifies a prefix that the test runner will insert in front
+ of the test command before running it. The
+ command-prefix is split into words using Bourne shell
+ tokenization rules, and then the list of words is prepended to the
+ command that will be executed.
+
+ If the first word is a fully qualified label (i.e. starts with
+ //
) it is built. Then the label is substituted by the
+ corresponding executable location that is prepended to the command
+ that will be executed along with the other words.
+
+ Some caveats apply:
+
+ the --run_under
+ command (the first word in command-prefix).
+ stdin
+ is not connected, so --run_under
+ can't be used for interactive commands.
+
+ Examples:
+
+ --run_under=/usr/bin/valgrind
+ --run_under=/usr/bin/strace
+ --run_under='/usr/bin/strace -c'
+ --run_under='/usr/bin/valgrind --quiet --num-callers=20'
+
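The Bourne-shell tokenization and prepending described above can be sketched with Python's shlex module. This is an illustrative sketch only; `apply_run_under` is hypothetical, and the label-resolution step (building a leading //... target and substituting its executable path) is deliberately omitted.

```python
import shlex

def apply_run_under(prefix, test_command):
    """Prepend a --run_under-style prefix to a test command.

    The prefix is split with Bourne-shell tokenization rules (shlex
    implements POSIX shell splitting), then prepended to the argv.
    """
    return shlex.split(prefix) + test_command

print(apply_run_under("/usr/bin/strace -c", ["./my_test", "--fast"]))
# -> ['/usr/bin/strace', '-c', './my_test', '--fast']
```

Note how quoting in the prefix survives tokenization, which is why forms like `--run_under='/usr/bin/strace -c'` work.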
+ As documented under Output selection options,
+ you can filter tests by size,
+ timeout,
+ tag, or
+ language. As a convenience,
+ a general name filter can forward particular
+ filter args to the test runner.
+ +bazel test
+ The syntax and the remaining options are exactly like
+ bazel build.
+
+ The bazel run
command is similar to bazel build
, except
+ it is used to build and run a single target. Here is a typical session:
+
+ % bazel run -- java/myapp:myapp --arg1 --arg2
+ Welcome to Bazel
+ INFO: Loading package: java/myapp
+ INFO: Loading package: foo/bar
+ INFO: Loading complete. Analyzing...
+ INFO: Found 1 target...
+ ...
+ Target //java/myapp:myapp up-to-date:
+   bazel-bin/java/myapp:myapp
+ INFO: Elapsed time: 0.638s, Critical Path: 0.34s
+
+ INFO: Running command line: bazel-bin/java/myapp:myapp --arg1 --arg2
+ Hello there
+ $EXEC_ROOT/java/myapp/myapp
+ --arg1
+ --arg2
+
+ Bazel closes stdin, so you can't use bazel run
+ if you want to start an interactive program or pipe data to it.
+
+ Note the use of the --
. This is needed so that Bazel
+ does not interpret --arg1
and --arg2
as
+ Bazel options, but rather as part of the command line for running the binary.
+ (The program being run simply says hello and prints out its args.)
+
bazel run
--run_under=command-prefix
+ This has the same effect as the --run_under
option for
+ bazel test
(see above),
+ except that it applies to the command being run by bazel
+ run
rather than to the tests being run by bazel test
+ and cannot run under label.
+
+ bazel run
can also execute test binaries, which has the effect of
+running the test, but without the setup documented on the page
+Writing Tests, so that the test runs
+in an environment closer to the current shell environment. Note that none of the
+--test_* arguments have an effect when running a test in this manner.
+
+ Bazel includes a query language for asking questions about the
+ dependency graph used during the build. The query tool is an
+ invaluable aid to many software engineering tasks.
+
+ The query language is based on the idea of
+ algebraic operations over graphs; it is documented in detail in
+ Bazel Query Reference.
+ Please refer to that document for reference, for
+ examples, and for query-specific command-line options.
+
+ The query tool accepts several command-line
+ options. --output
selects the output format.
+ --[no]keep_going
(disabled by default) causes the query
+ tool to continue to make progress upon errors; this behavior may be
+ disabled if an incomplete result is not acceptable in case of errors.
+
+ The --[no]host_deps
option,
+ enabled by default, causes dependencies on "host
+ configuration" targets to be included in the dependency graph over
+ which the query operates.
+
+
+ The --[no]implicit_deps
option, enabled by default, causes
+ implicit dependencies to be included in the dependency graph over which the query operates. An
+ implicit dependency is one that is not explicitly specified in the BUILD file
+ but added by bazel.
+
+ Example: "Show the locations of the definitions (in BUILD files) of
+ all genrules required to build all the tests in the PEBL tree."
+
+ bazel query --output location 'kind(genrule, deps(kind(".*_test rule", foo/bar/pebl/...)))'
+
help
command
+ The help
command provides on-line help. By default, it
+ shows a summary of available commands and help topics, as shown in
+ the Bazel overview section above.
+ Specifying an argument displays detailed help for a particular
+ topic. Most topics are Bazel commands, e.g. build
+ or query
, but there are some additional help topics
+ that do not correspond to commands.
+
--[no]long
(-l
)
+ By default, bazel help [topic]
prints only a
+ summary of the relevant options for a topic. If
+ the --long
option is specified, the type, default value
+ and full description of each option is also printed.
+
shutdown
command
+ Bazel server processes (see Client/server
+ implementation) may be stopped by using the shutdown
+ command. This command causes the Bazel server to exit as soon as it
+ becomes idle (i.e. after the completion of any builds or other
+ commands that are currently in progress).
+
+ Bazel servers stop themselves after an idle timeout, so this command
+ is rarely necessary; however, it can be useful in scripts when it is
+ known that no further builds will occur in a given workspace.
+
+ shutdown
accepts one
+ option, --iff_heap_size_greater_than n
, which
+ requires an integer argument (in MB). If specified, this makes the shutdown
+ conditional on the amount of memory already consumed. This is
+ useful for scripts that initiate a lot of builds, as any memory
+ leaks in the Bazel server could cause it to crash spuriously on
+ occasion; performing a conditional restart preempts this condition.
+
info
command
+ The info
command prints various values associated with
+ the Bazel server instance, or with a specific build configuration.
+ (These may be used by scripts that drive a build.)
+
+ The info
command also permits a single (optional)
+ argument, which is the name of one of the keys in the list below.
+ In this case, bazel info key
will print only
+ the value for that one key. (This is especially convenient when
+ scripting Bazel, as it avoids the need to pipe the result
+ through sed -ne /key:/s/key://p.)
+
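For instance, the difference between the two forms can be sketched in plain shell; the sample `bazel info` output below is illustrative, not real:

```shell
# Hypothetical output of a full "bazel info" run, captured for illustration:
info_output='release: development version
workspace: /home/user/myproject
server_pid: 1285'

# Without the single-key form, a script must filter the output itself:
printf '%s\n' "$info_output" | sed -ne 's/^server_pid: //p'
# prints: 1285
# With the single-key form, "bazel info server_pid" prints the value directly.
```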
release
: the release label for this Bazel
+ instance, or "development version" if this is not a released
+ binary.
workspace
: the absolute path to the base workspace
+ directory.
+ install_base
: the absolute path to the installation
+ directory used by this Bazel instance for the current user. Bazel
+ installs its internally required executables below this directory.
+
+ output_base
: the absolute path to the base output
+ directory used by this Bazel instance for the current user and
+ workspace combination. Bazel puts all of its scratch and build
+ output below this directory.
+ execution_root
: the absolute path to the execution
+ root directory under output_base. This directory is the root for all files
+ accessible to commands executed during the build, and is the working
+ directory for those commands. If the workspace directory is writable, a
+ symlink named
+
+ bazel-<workspace>
+ is placed there pointing to this directory.
+ output_path
: the absolute path to the output
+ directory beneath the execution root used for all files actually
+ generated as a result of build commands. If the workspace directory is
+ writable, a symlink named bazel-out
is placed there pointing
+ to this directory.
+ server_pid
: the process ID of the Bazel server
+ process.
command_log
: the absolute path to the command log file;
+ this contains the interleaved stdout and stderr streams of the most recent
+ Bazel command. Note that running bazel info
will overwrite the
+ contents of this file, since it then becomes the most recent Bazel command.
+ However, the location of the command log file will not change unless you
+ change the setting of the --output_base
or
+ --output_user_root
options.
+ used-heap-size
,
+ committed-size
,
+ max-heap-size
: reports various JVM heap size
+ parameters. Respectively: memory currently used, memory currently
+ guaranteed to be available to the JVM from the system, maximum
+ possible allocation.
+ gc-count
, gc-time
: The cumulative count of
+ garbage collections since the start of this Bazel server and the time spent
+ to perform them. Note that these values are not reset at the start of every
+ build.
+ package_path
: A colon-separated list of paths which would be
+ searched for packages by bazel. Has the same format as the
+ --package_path
build command line argument.
+
+ Example: the process ID of the Bazel server.
+
+ % bazel info server_pid
+ 1285
+ These data may be affected by the configuration options passed
+ to bazel info
, for
+ example --cpu
, --compilation_mode
,
+ etc. The info
command accepts all
+ the options that control dependency
+ analysis, since some of these determine the location of the
+ output directory of a build, the choice of compiler, etc.
+
bazel-bin
, bazel-testlogs
,
+ bazel-genfiles
: reports the absolute path to
+ the bazel-*
directories in which programs generated by the
+ build are located. This is usually, though not always, the same as
+ the bazel-*
symlinks created in the base workspace directory after a
+ successful build. However, if the workspace directory is read-only,
+ no bazel-*
symlinks can be created. Scripts that use
+ the value reported by bazel info
, instead of assuming the
+ existence of the symlink, will be more robust.
+ If the --show_make_env
flag is
+ specified, all variables in the current configuration's "Make" environment
+ are also displayed (e.g. CC
, GLIBC_VERSION
, etc).
+ These are the variables accessed using the $(CC)
+ or varref("CC")
syntax inside BUILD files.
+
+ Example: the C++ compiler for the current configuration.
+ This is the $(CC)
variable in the "Make" environment,
+ so the --show_make_env
flag is needed.
+
+ % bazel info --show_make_env -c opt BINMODE
+ -opt
+ Example: the bazel-bin
output directory for the current
+ configuration. This is guaranteed to be correct even in cases where
+ the bazel-bin
symlink cannot be created for some reason
+ (e.g. you are building from a read-only directory).
+
version
command
+ The version command prints version details about the built Bazel
+ binary, including the changelist at which it was built and the date.
+ These are particularly useful in determining if you have the latest
+ Bazel, or if you are reporting bugs. Some of the interesting values
+ are:
+changelist
: the changelist at which this version of
+ Bazel was released.
+ label
: the release label for this Bazel
+ instance, or "development version" if this is not a released
+ binary. Very useful when reporting bugs.
+ mobile-install
command
+ The mobile-install
command installs apps to mobile devices.
+ Currently only Android devices running ART are supported.
+
+ Note that this command does not install the same thing that
+ bazel build
produces: Bazel tweaks the app so that it can be
+ built, installed and re-installed quickly. This should, however, be mostly
+ transparent to the app.
+
+ The following options are supported: +
+--incremental
+ If set, Bazel tries to install the app incrementally, that is, only those
+ parts that have changed since the last build. This cannot update resources
+ referenced from AndroidManifest.xml
, native code or Java
+ resources (i.e. ones referenced by Class.getResource()
). If these
+ things change, this option must be omitted. Contrary to the spirit of Bazel
+ and due to limitations of the Android platform, it is the
+ responsibility of the user to know when this command is good enough and
+ when a full install is needed. We are working to come up with a better
+ solution.
+
--adb
+ Indicates the adb
binary to be used. When unspecified, the binary
+ in the repository is used.
+
--adb_arg
+ Extra arguments to adb
. These come before the subcommand in the
+ command line and are typically used to specify which device to install to.
+ Example:
+
+ % bazel mobile-install --adb_arg=-s --adb_arg=deadbeef
+
+ will invoke adb as
+
+ adb -s deadbeef install ...
analyze-profile
command
+ The analyze-profile
command analyzes data previously gathered
+ during the build using the --profile
option. It provides several
+ options to either perform analysis of the build execution or export data in
+ the specified format.
+
+
+ The following options are supported:
+
+ --dump=text displays all gathered data in a
+ human-readable format
+ --dump=raw displays all gathered data in a
+ script-friendly format
+ --html generates an HTML file visualizing the
+ actions and rules executed in the build, as well as summary statistics for the build
+ --html_details adds more fine-grained
+ information on actions and rules to the HTML visualization
+
+ See the section on Troubleshooting performance by profiling for
+ format details and usage help.
+ +canonicalize-flags
command
+ The canonicalize-flags
command takes a list of options for
+ a Bazel command and returns a list of options that has the same effect. The
+ new list of options is canonical, i.e., two lists of options with the same
+ effect are canonicalized to the same new list.
+
+ The --for_command
option can be used to select between different
+ commands. At this time, only build
and test
are
+ supported. Options that the given command does not support cause an error.
+
+ Note that a small number of options cannot be reordered, because Bazel cannot
+ ensure that the effect is identical.
+
+ The options described in this section affect the startup of the Java
+ virtual machine used by the Bazel server process, and they apply to all
+ subsequent commands handled by that server. If there is an already
+ running Bazel server and the startup options do not match, it will
+ be restarted.
+
+ All of the options described in this section must be specified using the
+ --key=value
or --key value
+ syntax. Also, these options must appear before the name of the Bazel
+ command.
+
--output_base=dir
+ This option requires a path argument, which must specify a
+ writable directory. Bazel will use this location to write all its
+ output. The output base is also the key by which the client locates
+ the Bazel server. By changing the output base, you change the server
+ which will handle the command.
+
+ By default, the output base is derived from the user's login name,
+ and the name of the workspace directory (actually, its MD5 digest),
+ so a typical value looks like:
+
+ /var/tmp/google/_bazel_jrluser/d41d8cd98f00b204e9800998ecf8427e
.
+ Note that the client uses the output base to find the Bazel server
+ instance, so if you specify a different output base in a Bazel
+ command, a different server will be found (or started) to handle the
+ request. It's possible to perform two concurrent builds in the same
+ workspace directory by varying the output base.
+
For example:
+
+ % bazel --output_base /tmp/1 build //foo & bazel --output_base /tmp/2 build //bar
+
+ In this command, the two Bazel commands run concurrently (because of
+ the shell &
operator), each using a different Bazel
+ server instance (because of the different output bases).
+ In contrast, if the default output base was used in both commands,
+ then both requests would be sent to the same server, which would
+ handle them sequentially: building //foo
first, followed
+ by an incremental build of //bar
.
+
+ We recommend you do not use NFS locations for the output base, as
+ the higher access latency of NFS will cause noticeably slower
+ builds.
+
+ --output_user_root=dir
+ By default, the output_base
value is chosen so as to
+ avoid conflicts between multiple users building in the same workspace directory.
+ In some situations, though, it is desirable to build from a directory
+ shared between multiple users; release engineers often do this. In
+ those cases it may be useful to deliberately override the default so
+ as to ensure "conflicts" (i.e., sharing) between multiple users.
+ Use the --output_user_root
option to achieve this: the
+ output base is placed in a subdirectory of the output user root,
+ with a unique name based on the workspace, so the result of using an
+ output user root that is not a function of $USER
is
+ sharing. Of course, it is important to ensure (via umask and group
+ membership) that all the cooperating users can read/write each
+ others' files.
+
+ If the --output_base
option is specified, it overrides
+ using --output_user_root
to calculate the output base.
+
+ The install base location is also calculated based on
+ --output_user_root
, plus the MD5 identity of the Bazel embedded
+ binaries.
+
+ You can also use the --output_user_root
option to choose an
+ alternate base location for all of Bazel's output (install base and output
+ base) if there is a better location in your filesystem layout.
+
--host_jvm_args=string
+ Specifies a startup option to be passed to the Java virtual machine in which Bazel itself
+ runs. This can be used to set the stack size, for example:
+
+ % bazel --host_jvm_args="-Xss256K" build //foo
+
+ This option can be used multiple times with individual arguments. Note that
+ setting this flag should rarely be needed. You can also pass a space-separated list of strings,
+ each of which will be interpreted as a separate JVM argument, but this feature will soon be
+ deprecated.
+
+ Note that this does not affect any JVMs used by
+ subprocesses of Bazel: applications, tests, tools, etc. To pass
+ JVM options to executable Java programs, whether run by bazel
+ run
or on the command-line, you should use
+ the --jvm_flags
argument which
+ all java_binary
and java_test
programs
+ support. Alternatively for tests, use bazel
+ test --test_arg=--jvm_flags=foo ...
.
+
--host_jvm_debug
+ This option causes the Java virtual machine to wait for a connection
+ from a JDWP-compliant debugger before
+ calling the main method of Bazel itself. This is primarily
+ intended for use by Bazel developers.
+
+ (Please note that this does not affect any JVMs used by
+ subprocesses of Bazel: applications, tests, tools, etc.)
+
+ --batch
+
+ This switch will cause Bazel to be run in batch mode, instead of the
+ standard client/server mode described above.
+ Doing so provides more predictable semantics with respect to signal handling,
+ job control, and environment variable inheritance, and is necessary for running
+ Bazel in a chroot jail.
+
+ Batch mode retains proper queueing semantics within the same output_base.
+ That is, simultaneous invocations will be processed in order, without overlap.
+ If a batch mode Bazel is run on a client with a running server, it first
+ kills the server before processing the command.
+
+ Bazel will run slower in batch mode, compared to client/server mode.
+ Among other things, the build file cache is memory-resident, so it is not
+ preserved between sequential batch invocations.
+ Therefore, using batch mode often makes more sense in cases where performance
+ is less critical, such as continuous builds.
+
+ --max_idle_secs n
+
+ This option specifies how long, in seconds, the Bazel server process
+ should wait after the last client request, before it exits. The
+ default value is 10800 (3 hours).
+
+ This option may be used by scripts that invoke Bazel to ensure that
+ they do not leave Bazel server processes on a user's machine when they
+ would not be running otherwise.
+ For example, a presubmit script might wish to
+ invoke bazel query
to ensure that a user's pending
+ change does not introduce unwanted dependencies. However, if the
+ user has not done a recent build in that workspace, it would be
+ undesirable for the presubmit script to start a Bazel server just
+ for it to remain idle for the rest of the day.
+ By specifying a small value of --max_idle_secs
in the
+ query request, the script can ensure that if it caused a new
+ server to start, that server will exit promptly, but if instead
+ there was already a server running, that server will continue to run
+ until it has been idle for the usual time. Of course, the existing
+ server's idle timer will be reset.
+
--[no]block_for_lock
+ If enabled, Bazel will wait for other Bazel commands holding the
+ server lock to complete before progressing. If disabled, Bazel will
+ exit in error if it cannot immediately acquire the lock and
+ proceed.
+
+ Developers might use this in presubmit checks to avoid long waits caused
+ by another Bazel command in the same client.
+
+ --io_nice_level n
+
+ Sets a level from 0-7 for best-effort IO scheduling. 0 is highest priority,
+ 7 is lowest. The anticipatory scheduler may only honor up to priority 4.
+ Negative values are ignored.
+ +--batch_cpu_scheduling
+ Use batch
CPU scheduling for Bazel. This policy is useful for
+ workloads that are non-interactive, but do not want to lower their nice value.
+ See 'man 2 sched_setscheduler'. This policy may provide for better system
+ interactivity at the expense of Bazel throughput.
+
--[no]announce_rc
+ Controls whether Bazel announces command options read from the bazelrc file when
+ starting up. (Startup options are unconditionally announced.)
+ +--color (yes|no|auto)
+ This option determines whether Bazel will use colors to highlight + its output on the screen. +
+
+ If this option is set to yes
, color output is enabled.
+ If this option is set to auto
, Bazel will use color output only if
+ the output is being sent to a terminal and the TERM environment variable
+ is set to a value other than dumb
, emacs
, or xterm-mono
.
+ If this option is set to no
, color output is disabled,
+ regardless of whether the output is going to a terminal and regardless
+ of the setting of the TERM environment variable.
+
--config name
+ Selects an additional config section from the rc files; for the current
+ command
, it also pulls in the options from
+ command:name
if such a section exists. Note that it is currently
+ only possible to provide these options on the command line, not in the rc
+ files. Can be specified multiple times to add flags from several
+ config sections.
+
--curses (yes|no|auto)
+ This option determines whether Bazel will use cursor controls
+ in its screen output. This results in less scrolling data, and a more
+ compact, easy-to-read stream of output from Bazel. This works well with
+ --color
.
+
+ If this option is set to yes
, use of cursor controls is enabled.
+ If this option is set to no
, use of cursor controls is disabled.
+ If this option is set to auto
, use of cursor controls will be
+ enabled under the same conditions as for --color=auto
.
+
--[no]show_timestamps
+ If specified, a timestamp is added to each message generated by
+ Bazel specifying the time at which the message was displayed.
+
+ Bazel can be called from scripts in order to perform a build, run
+ tests or query the dependency graph. Bazel has been designed to
+ enable effective scripting, but this section lists some details to
+ bear in mind to make your scripts more robust.
+ +
+ The --output_base
option controls where the Bazel process should
+ write the outputs of a build to, as well as various working files used
+ internally by Bazel, one of which is a lock that guards against
+ concurrent mutation of the output base by multiple Bazel processes.
+
+ Choosing the correct output base directory for your script depends
+ on several factors. If you need to put the build outputs in a
+ specific location, this will dictate the output base you need to
+ use. If you are making a "read only" call to Bazel
+ (e.g. bazel query
), the locking factors will be more important.
+ In particular, if you need to run multiple instances of your script
+ concurrently, you will need to give each one a different (or random) output
+ base.
+
+ If you use the default output base value, you will be contending for
+ the same lock used by the user's interactive Bazel commands. If the
+ user issues long-running commands such as builds, your script will
+ have to wait for those commands to complete before it can continue.
+ +
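As a sketch, a script that must run concurrently with other Bazel invocations might create its own throwaway output base; the path pattern and the query target here are illustrative assumptions, not Bazel conventions:

```shell
# Create a private output base so this invocation never contends for the
# lock guarding the user's default output base:
output_base=$(mktemp -d "${TMPDIR:-/tmp}/bazel_scratch.XXXXXX")
echo "using output base: $output_base"

# The script would then pass it as a startup option (sketch):
# bazel --output_base="$output_base" query 'deps(//foo)'
```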
+ By default, Bazel uses a long-running server process as an optimization; this
+ behavior can be disabled using the --batch
option. There's no hard and
+ fast rule about whether or not your script should use a server, but
+ in general, the trade-off is between performance and reliability.
+ The server mode makes a sequence of builds, especially incremental
+ builds, faster, but its behavior is more complex and prone to
+ failure. We recommend in most cases that you use batch mode unless
+ the performance advantage is critical.
+
+ If you do use the server, don't forget to call shutdown
+ when you're finished with it, or, specify
+ --max_idle_secs=5
so that idle servers shut themselves
+ down promptly.
+
+ Bazel attempts to differentiate failures due to the source code under
+ consideration from external errors that prevent Bazel from executing properly.
+ Bazel execution can result in the following exit codes:
+
+ Exit codes common to all commands:
+
+ 0 - Success
+ 2 - Command Line Problem, Bad or Illegal flags or command
+     combination, or Bad Environment Variables. Your command line must be
+     modified.
+ 8 - Build Interrupted but we terminated with an orderly shutdown.
+ 32 - External Environment Failure not on this machine.
+ 33 - OOM failure. You need to modify your command line.
+ 34 - Reserved for Google-internal use.
+ 35 - Reserved for Google-internal use.
+ 36 - Local Environmental Issue, suspected permanent.
+ 37 - Unhandled Exception / Internal Bazel Error.
+ 38 - Reserved for Google-internal use.
+ 40-44 - Reserved for errors in Bazel's command line launcher, bazel.cc,
+     that are not command line related. Typically these are related to the
+     Bazel server being unable to launch itself.
+
+ Exit codes for bazel build, bazel test:
+
+ 1 - Build failed.
+ 3 - Build OK, but some tests failed or timed out.
+ 4 - Build successful but no tests were found even though
+     testing was requested.
+
+ Exit codes for bazel run:
+
+ 1 - Build failed.
+ 6 - Run command failure. The executed subprocess returned a
+     non-zero exit code. The actual subprocess exit code is
+     given in stderr.
+
+ Exit codes for bazel query:
+
+ 3 - Partial success, but the query encountered 1 or more
+     errors in the input BUILD file set and therefore the
+     results of the operation are not 100% reliable.
+     This is likely due to a --keep_going option
+     on the command line.
+ 7 - Command failure.
+ Future Bazel versions may add additional exit codes, replacing generic failure
+ exit code 1
with a different non-zero value with a particular
+ meaning. However, all non-zero exit values will always constitute an error.
+
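A calling script can branch on these codes. Here is a minimal sketch in which the Bazel invocation is stubbed out (`run_build` stands in for an actual `bazel build` call, and is forced to return 3 purely for demonstration):

```shell
# Stub standing in for "bazel build //foo"; returns exit code 3
# ("Build OK, but some tests failed or timed out") for demonstration.
run_build() { return 3; }

run_build
case $? in
  0) echo "success" ;;
  1) echo "build failed" ;;
  3) echo "build OK, but some tests failed" ;;
  2|8|3[2-8]) echo "environment or usage problem" ;;
  *) echo "unexpected exit code" ;;
esac
# prints: build OK, but some tests failed
```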
+ By default, Bazel will read the .bazelrc
file from the base workspace
+ directory or the user's home directory. Whether or not this is
+ desirable is a choice for your script; if your script needs to be
+ perfectly hermetic (e.g. when doing release builds), you should
+ disable reading the .bazelrc file by using the option
+ --bazelrc=/dev/null
. If you want to perform a build
+ using the user's preferred settings, the default behavior is better.
+
+ The Bazel output is also available in a command log file which you can
+ find with the following command:
+
+ % bazel info command_log
+ The command log file contains the interleaved stdout and stderr streams
+ of the most recent Bazel command. Note that running bazel info
+ will overwrite the contents of this file, since it then becomes the most
+ recent Bazel command. However, the location of the command log file will
+ not change unless you change the setting of the --output_base
+ or --output_user_root
options.
+
+ The Bazel output is quite easy to parse for many purposes. Two
+ options that may be helpful for your script are
+ --noshow_progress
which suppresses progress messages,
+ and --show_result n
, which controls whether
+ or not "build up-to-date" messages are printed; these messages may
+ be parsed to discover which targets were successfully built, and the
+ location of the output files they created. Be sure to specify a
+ very large value of n if you rely on these messages.
+
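As an illustration, the up-to-date messages can be post-processed with standard tools; the sample build output below is made up for the example:

```shell
# Hypothetical tail of a "bazel build --show_result=1000 ..." run:
result='Target //foo:app up-to-date:
  bazel-bin/foo/app
Target //foo:lib up-to-date (nothing to build)'

# Pull out just the output-file lines (the indented ones):
printf '%s\n' "$result" | sed -ne 's/^  //p'
# prints: bazel-bin/foo/app
```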
+ The first step in analyzing the performance of your build is to profile your build with the
+ --profile
option.
+
+ The file generated by the --profile
+ command is a binary file. Once you have generated this binary profile, you can analyze it using
+ Bazel's analyze-profile
command. By default, it will
+ print out summary analysis information for each of the specified profile datafiles. This includes
+ cumulative statistics for different task types for each build phase and an analysis of the
+ critical execution path.
+
+ The first section of the default output describes an overview of the time spent on the different
+ build phases:
+
+ === PHASE SUMMARY INFORMATION ===
+
+ Total launch phase time          6.00 ms    0.01%
+ Total init phase time             864 ms    1.11%
+ Total loading phase time        21.841 s   28.05%
+ Total analysis phase time        5.444 s    6.99%
+ Total preparation phase time      155 ms    0.20%
+ Total execution phase time      49.473 s   63.54%
+ Total finish phase time          83.9 ms    0.11%
+ Total run time                  77.866 s  100.00%
+ The following sections show the execution time of different tasks happening during a particular
+ phase:
+
+ === INIT PHASE INFORMATION ===
+
+ Total init phase time 864 ms
+
+ Total time (across all threads) spent on:
+ Type                  Total    Count    Average
+ VFS_STAT              2.72%        1    23.5 ms
+ VFS_READLINK         32.19%        1     278 ms
+
+ === LOADING PHASE INFORMATION ===
+
+ Total loading phase time 21.841 s
+
+ Total time (across all threads) spent on:
+ Type                  Total    Count    Average
+ SPAWN                 3.26%      154     475 ms
+ VFS_STAT             10.81%    65416    3.71 ms
+ [...]
+ SKYLARK_BUILTIN_FN   13.12%    45138    6.52 ms
+
+ === ANALYSIS PHASE INFORMATION ===
+
+ Total analysis phase time 5.444 s
+
+ Total time (across all threads) spent on:
+ Type                  Total    Count    Average
+ SKYFRAME_EVAL         9.35%        1    4.782 s
+ SKYFUNCTION          89.36%    43332    1.06 ms
+
+ === EXECUTION PHASE INFORMATION ===
+
+ Total preparation time 155 ms
+ Total execution phase time 49.473 s
+ Total time finalizing build 83.9 ms
+
+ Action dependency map creation 0.00 ms
+ Actual execution time 49.473 s
+
+ Total time (across all threads) spent on:
+ Type                  Total     Count    Average
+ ACTION                2.25%     12229    10.2 ms
+ [...]
+ SKYFUNCTION           1.87%    236131    0.44 ms
+ The last section shows the critical path:
+
+ Critical path (32.078 s):
+        Id       Time    Percentage    Description
+   1109746    5.171 s        16.12%    Building [...]
+   1109745     164 ms         0.51%    Extracting interface [...]
+   1109744    4.615 s        14.39%    Building [...]
+   [...]
+   1109639    2.202 s         6.86%    Executing genrule [...]
+   1109637    2.00 ms         0.01%    Symlinking [...]
+   1109636     163 ms         0.51%    Executing genrule [...]
+              4.00 ms         0.01%    [3 middleman actions]
+ You can use the following options to display more detailed information:
+
+ --dump=text
+
+ This option prints all recorded tasks in the order they occurred. Nested tasks are indented
+ relative to the parent. For each task, output includes the following information:
+
+ [task type] [task description]
+ Thread: [thread id] Id: [task id] Parent: [parent task id or 0 for top-level tasks]
+ Start time: [time elapsed from the profiling session start] Duration: [task duration]
+ [aggregated statistic for nested tasks, including count and total duration for each nested task]
--dump=raw
+
+ This option is most useful for automated analysis with scripts. It outputs each task record on
+ a single line using '|' delimiter between fields. Fields are printed in the following order:
+
+ 1|1|0|0|0||PHASE|Launch Bazel
+ 1|2|0|6000000|0||PHASE|Initialize command
+ 1|3|0|168963053|278111411||VFS_READLINK|/[...]
+ 1|4|0|571055781|23495512||VFS_STAT|/[...]
+ 1|5|0|869955040|0||PHASE|Load packages
+ [...]
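For example, a script might split these records on the '|' delimiter. The record below is copied from the sample output, and the field interpretation (4th field start time, 5th field duration, 7th field task type) is an assumption inferred from the --dump=text field list:

```shell
record='1|3|0|168963053|278111411||VFS_READLINK|/some/path'

# Print the task type (7th field) and its duration in ns (5th field):
printf '%s\n' "$record" | awk -F'|' '{ print $7, $5 }'
# prints: VFS_READLINK 278111411
```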
--html
+
+ This option writes a file called <profile-file>.html
in the directory of the
+ profile file. Open it in your browser to see the visualization of the actions in your build.
+ Note that the file can be quite large and may push the capabilities of your browser –
+ please wait for the file to load.
+
+ In most cases, the HTML output from --html
is easier to
+ read than the --dump
output.
+ It includes a Gantt chart that displays time on the horizontal axis and
+ threads of execution along the vertical axis. If you click on the Statistics link in the top
+ right corner of the page, you will jump to a section that lists summary analysis information
+ from your build.
+
--html_details
+
+ Additionally passing this option will render a more detailed execution chart and additional
+ tables on the performance of built-in and user-defined Skylark functions. Beware that this
+ increases the file size and the load on the browser considerably.
+ If Bazel appears to be hung, you can hit Ctrl-\ or send
+ Bazel a SIGQUIT
signal (kill -3 $(bazel info server_pid)
) to get a
+ thread dump in the file $(bazel info output_base)/server/jvm.out
.
+
+ Since you may not be able to run bazel info
if bazel is hung, the
+ output_base
directory is usually the parent of the bazel-<workspace>
+ symlink in your workspace directory.
+
+ This document provides an overview of the source tree layout and the
+ terminology used in Bazel.
+
+ Bazel builds software from source code organized in a directory called
+ a workspace. Source files in the workspace are organized in a nested
+ hierarchy of packages, where each package is a directory that contains a set
+ of related source files and one BUILD file. The BUILD file specifies what
+ software outputs can be built from the source.
+A workspace is a directory on your filesystem that contains the
+ source files for the software you want to build, as well as symbolic links
+ to directories that contain the build outputs. Each workspace directory has
+ a text file named WORKSPACE
which may be empty, or may contain
+ references to external dependencies
+ required to build the outputs. See also the Workspace Rules section in the Build
+ Encyclopedia.
+
+ The primary unit of code organization in a workspace is
+ the package. A package is a collection of related files and a
+ specification of the dependencies among them.
+
+ A package is defined as a directory containing a file
+ named BUILD
, residing beneath the top-level directory in the
+ workspace. A package includes all files in its directory, plus all
+ subdirectories beneath it, except those which themselves contain a BUILD
+ file.
+
+ For example, in the following directory tree
+ there are two packages, my/app
,
+ and the subpackage my/app/tests
.
+ Note that my/app/data
is not a package, but a directory
+ belonging to package my/app
.
+
+ src/my/app/BUILD
+ src/my/app/app.cc
+ src/my/app/data/input.txt
+ src/my/app/tests/BUILD
+ src/my/app/tests/test.cc
+ A package is a container. The elements of a package are called
+ targets. Most targets are one of two principal kinds, files
+ and rules. Additionally, there is another kind of target,
+ package groups, but they are far less numerous.
+
+ Hierarchy of targets.
+
+ Files are further divided into two kinds.
+ Source files are usually written by the efforts of people,
+ and checked in to the repository.
+ Generated files, sometimes called derived files,
+ are not checked in, but are generated by the build tool from source
+ files according to specific rules.
+
+ The second kind of target is the rule. A rule specifies the
+ relationship between a set of input files and a set of output files,
+ including the necessary steps to derive the outputs from the inputs.
+ The outputs of a rule are always generated files. The inputs to a
+ rule may be source files, but they may be generated files also;
+ consequently, outputs of one rule may be the inputs to another,
+ allowing long chains of rules to be constructed.
+
+ Whether the input to a rule is a source file or a generated file is
+ in most cases immaterial; what matters is only the contents of that
+ file. This fact makes it easy to replace a complex source file with
+ a generated file produced by a rule, such as happens when the burden
+ of manually maintaining a highly structured file becomes too
+ tiresome, and someone writes a program to derive it. No change is
+ required to the consumers of that file. Conversely, a generated
+ file may easily be replaced by a source file with only local
+ changes.
+
+ The inputs to a rule may also include other rules. The
+ precise meaning of such relationships is often quite complex and
+ language- or rule-dependent, but intuitively it is simple: a C++
+ library rule A might have another C++ library rule B for an input.
+ The effect of this dependency is that B's header files are
+ available to A during compilation, B's symbols are available to A
+ during linking, and B's runtime data is available to A during
+ execution.
+
+ An invariant of all rules is that the files generated by a rule
+ always belong to the same package as the rule itself; it is not
+ possible to generate files into another package. It is not uncommon
+ for a rule's inputs to come from another package, though.
+ Package groups are sets of packages whose purpose is to limit accessibility
+ of certain rules. Package groups are defined by the
+ package_group
function. They have two properties: the list of
+ packages they contain and their name. The only allowed ways to refer to them
+ are from the visibility
attribute of rules or from the
+ default_visibility
attribute of the package
+ function; they do not generate or consume files. For more information, refer
+ to the appropriate section of the Build Encyclopedia.
+
+ All targets belong to exactly one package. The name of a target is
+ called its label, and a typical label in canonical form
+ looks like this:
+
+ //my/app/main:app_binary
+
+ Each label has two parts, a package name (my/app/main
)
+ and a target name (app_binary
). Every label uniquely
+ identifies a target. Labels sometimes appear in other forms; when
+ the colon is omitted, the target name is assumed to be the same as
+ the last component of the package name, so these two labels are
+ equivalent:
+
+//my/app +//my/app:app ++ +
+ Short-form labels such as //my/app
are not to
+ be confused with package names. Labels start with //
,
+ but package names never do, thus my/app
is the
+ package containing //my/app
.
+
+ (A common misconception is that //my/app
refers
+ to a package, or to all the targets in a package; neither
+ is true.)
+
+ Within a BUILD file, the package-name part of a label may be omitted,
+ and optionally the colon too. So within the BUILD file for package
+ my/app
(i.e. //my/app:BUILD
),
+ the following "relative" labels are all equivalent:
+
+//my/app:app +//my/app +:app +app ++ +
+ (It is a matter of convention that the colon is omitted for files
+ but retained for rules; the distinction is not otherwise significant.)
+
+ ++ Similarly, within a BUILD file, files belonging to the package may + be referenced by their unadorned name relative to the package + directory: +
+ + ++generate.cc +testdata/input.txt ++ +
+ But from other packages, or from the command-line, these file
+ targets must always be referred to by their complete label, e.g.
+ //my/app:generate.cc
.
+
+ Relative labels cannot be used to refer to targets in other
+ packages; the complete package name must always be specified in this
+ case. For example, suppose the source tree contains both the package
+ my/app
 and the package
+ my/app/testdata
 (i.e., each of these two
+ packages has its own BUILD file), and that the latter package contains a
+ file named testdepot.zip
. Here are two ways (one
+ wrong, one correct) to refer to this file within
+ //my/app:BUILD
:
+
+testdata/testdepot.zip # Wrong: testdata is a different package.
+//my/app/testdata:testdepot.zip # Right.
+
+
+
+ If, by mistake, you refer to testdepot.zip
by the wrong
+ label, such as //my/app:testdata/testdepot.zip
+ or //my:app/testdata/testdepot.zip
, you will get an
+ error from the build tool saying that the label "crosses a package
+ boundary". You should correct the label by putting the colon after
+ the directory containing the innermost enclosing BUILD file, i.e.,
+ //my/app/testdata:testdepot.zip
.
+
+ The syntax of labels is intentionally strict, so as to
+ forbid metacharacters that have special meaning to the shell. This
+ helps to avoid inadvertent quoting problems, and makes it easier to
+ construct tools and scripts that manipulate labels, such as the
+
+ Bazel Query Language.
+ All of the following are forbidden in labels: any sort of white
+ space, braces, brackets, or parentheses; wildcards such
+ as *
; shell metacharacters such
+ as >
, &
and |
; etc.
+ This list is not comprehensive; the precise details are below.
+
//...:target-name
target-name
is the name of the target within the package.
+ The name of a rule is the value of the name
+ parameter in the rule's declaration in a BUILD file; the name
+ of a file is its pathname relative to the directory containing
+ the BUILD file.
+ Target names must be composed entirely of
+ characters drawn from the set a
–z
,
+ A
–Z
, 0
–9
,
+ and the punctuation symbols _/.+-=,@~
.
+ Do not use ..
to refer to files in other packages; use
+ //packagename:filename
instead.
+ Filenames must be relative pathnames in normal form, which means
+ they must neither start nor end with a slash
+ (e.g. /foo
and foo/
are forbidden) nor
+ contain multiple consecutive slashes as path separators
+ (e.g. foo//bar
). Similarly, up-level references
+ (..
) and current-directory references
+ (./
) are forbidden. The sole exception to this
+ rule is that a target name may consist of exactly
+ '.
'.
+
While it is common to use /
in the name of a file
+ target, we recommend that you avoid the use of /
in the
+ names of rules. Especially when the shorthand form of a label is
+ used, it may confuse the reader. The
+ label //foo/bar/wiz
is always a shorthand
+ for //foo/bar/wiz:wiz
, even if there is no such package
+ foo/bar/wiz
; it never refers to //foo:bar/wiz
,
+ even if that target exists.
However, there are some situations where use of a slash is + convenient, or sometimes even necessary. For example, the name of + certain rules must match their principal source file, which may + reside in a subdirectory of the package.
+ +//package-name:...
+ The name of a package is the name of the directory containing its
+
+ BUILD file, relative to the top-level directory of the source tree.
+ For example: my/app
.
+ Package names must start with a lower-case ASCII letter
+ (a
–z
),
+ and must be composed entirely of characters drawn from the set
+ a
–z
, 0
–9
,
+ '_
', and '/
'.
+
+ For a language with a directory structure that is significant + to its module system (e.g. Java), it is important to choose directory names + that are valid identifiers in the language. +
+ +
+ Although Bazel allows a package at the build root (e.g. //:foo
), this
+ is not advised and projects should attempt to use more descriptively named
+ packages.
+
+ Package names may not contain the substring //
, nor
+ end with a slash.
+
+ A rule specifies the relationship between inputs and outputs, and the + steps to build the outputs. Rules can be of one of many different + kinds or classes, which produce compiled + executables and libraries, test executables and other supported + outputs as described in the + Build Encyclopedia. +
+ +
+ Every rule has a name, specified by the name
attribute,
+ of type string. The name must be a syntactically valid target name,
+ as specified above. In some cases, the name is
+ somewhat arbitrary, and more interesting are the names of the files
+ generated by the rule; this is true of genrules. In other
+ cases, the name is significant: for *_binary
+ and *_test
rules, for example, the rule name determines
+ the name of the executable produced by the build.
+
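+
+ For example, a hypothetical BUILD file in package my/app might
+ contain:
+
+cc_binary(
+    name = "app_binary",
+    srcs = ["main.cc"],
+)
+
+ Because this is a *_binary rule, building //my/app:app_binary
+ produces an executable named app_binary in the binary output tree.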
+ Every rule has a set of attributes; the applicable attributes + for a given rule, and the significance and semantics of each + attribute are a function of the rule's class; see + the Build + Encyclopedia for the full list of supported rules and their + corresponding attributes. Each attribute has a name and a + type. The full set of types that an attribute can have is: integer, + label, list of labels, string, list of strings, output label, + list of output labels. Not all attributes need to be specified in + every rule. Attributes thus form a dictionary from keys (names) to + optional, typed values. +
+ +
+ The srcs
attribute present in many rules has type "list
+ of label"; its value, if present, is a list of labels, each being
+ the name of a target that is an input to this rule.
+
+ The outs
attribute present in many rules has type "list
+ of output labels"; this is similar to the type of
+ the srcs
attribute, but differs in two significant
+ ways. Firstly, due to the invariant that the outputs of a rule
+ belong to the same package as the rule itself, output labels cannot
+ include a package component; they must be in one of the "relative"
+ forms shown above. Secondly, the relationship implied by an
+ (ordinary) label attribute is inverse to that implied by an output
+ label: a rule depends on its srcs
, whereas a rule is
+ depended on by its outs
. The two types of label attributes
+ thus assign direction to the edges between targets, giving rise to a
+ dependency graph.
+
+ The figure below represents an example fragment of the build + dependency graph, and illustrates: files (circles) and rules + (boxes); dependencies from generated files to rules; dependencies + from rules to files, and from rules to other rules. Conventionally, + dependency arrows are represented as pointing from a target towards + its prerequisites. +
+ +Source files, rules, and generated files.
++ This directed acyclic graph over targets is called the + "target graph" or "build dependency graph", and is the domain over + which the + + Bazel Query tool + operates. +
+ + ++ The previous section described packages, targets and labels, and the + build dependency graph abstractly. In this section, we'll look at + the concrete syntax used to define a package. +
+ ++ By definition, every package contains a BUILD file, which is a short + program written in the Build Language. Most BUILD files + appear to be little more than a series of declarations of build + rules; indeed, the declarative style is strongly encouraged when + writing BUILD files. +
+ +
+ However, the build language is in fact an imperative language, and
+ BUILD files are interpreted as a sequential list of statements.
+ Build rule functions, such as cc_library
, are procedures whose
+ side-effect is to create an abstract build rule inside the build tool.
+
+ The concrete syntax of BUILD files is a subset of Python.
+ Originally, the syntax was that of Python, but experience
+ showed that users rarely used more than a tiny subset of Python's
+ features, and when they did, it often resulted in complex and
+ fragile BUILD files. In many cases, the use of such features was
+ unnecessary, and the same result could be achieved by using an
+ external program, e.g. via a genrule
build rule.
+
+ Crucially, programs in the build language are unable to perform + arbitrary I/O (though many users try!). This invariant makes the + interpretation of BUILD files hermetic, i.e. dependent only on a + known set of inputs, which is essential for ensuring that builds are + reproducible. +
+ ++ Lexemes: the lexical syntax of the core language is a strict + subset of Python 2.6, and we refer the reader to the Python + specification for details. + Lexical features of Python that are not + supported include: floating-point literals, hexadecimal and Unicode + escapes within string literals. +
+ +
+ BUILD files should be written using only ASCII characters,
+ although technically they are interpreted using the Latin-1
+ character set. The use
+ of coding:
+ declarations is forbidden.
+
+ Grammar: the grammar of the core language is shown below, + using EBNF notation. Ambiguity is resolved using precedence, which + is defined as for Python. +
+ ++file_input ::= (simple_stmt? '\n')* + +simple_stmt ::= small_stmt (';' small_stmt)* ';'? + +small_stmt ::= expr + | assign_stmt + +assign_stmt ::= IDENTIFIER '=' expr + +expr ::= INTEGER + | STRING+ + | IDENTIFIER + | IDENTIFIER '(' arg_list? ')' + | expr '.' IDENTIFIER + | expr '.' IDENTIFIER '(' arg_list? ')' + | '[' expr_list? ']' + | '[' expr ('for' IDENTIFIER 'in' expr)+ ']' + | '(' expr_list? ')' + | '{' dict_entry_list? '}' + | '{' dict_entry ('for' IDENTIFIER 'in' expr)+ '}' + | expr '+' expr + | expr '-' expr + | expr '%' expr + | '-' expr + | expr '[' expr? ':' expr? ']' + | expr '[' expr ']' + +expr_list ::= (expr ',')* expr ','? + +dict_entry_list ::= (dict_entry ',')* dict_entry ','? + +dict_entry ::= expr ':' expr + +arg_list ::= (arg ',')* arg ','? + +arg ::= IDENTIFIER '=' expr + | expr ++ +
+ For each expression of the core language, the semantics are + identical to the corresponding Python semantics, except in the + following cases: +
+%
operator are not
+ supported. Only the int % int
and str %
+ tuple
forms are supported. Only the %s
+ and %d
format specifiers may be
+ used; %(var)s
is illegal.
+ Many Python features are missing: control-flow constructs (loops,
+ conditionals, exceptions), basic datatypes (floating-point numbers, big
+ integers), import
and the module system, support for
+ definition of classes, and some of Python's built-in functions. Function
+ definitions and for
statements are allowed only in
+ extension files (.bzl
).
+
+ Available functions are documented in
+
+ the library section.
+
+ The build language is an imperative language, so in general, order + does matter: variables must be defined before they are used, for + example. However, most BUILD files consist only of declarations of + build rules, and the relative order of these statements is + immaterial; all that matters is which rules were declared, + and with what values, by the time package evaluation completes. + + So, in simple BUILD files, rule declarations can be re-ordered + freely without changing the behavior. +
+ +
+ BUILD file authors are encouraged to use comments liberally to
+ document the role of each build target, whether it is intended for
+ public use, and anything else that would help users and future
+ maintainers, including a # Description:
comment at the
+ top, explaining the role of the package.
+
+ The Python comment syntax of #...
is supported.
+ Triple-quoted string literals may span multiple lines, and can be used
+ for multi-line comments.
+
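+
+ For example, the top of a BUILD file might look like this
+ (an illustrative sketch):
+
+# Description:
+#   String-processing utilities for the app server.
+
+cc_library(
+    name = "strutil",  # public: used by several other packages
+    srcs = ["strutil.cc"],
+    hdrs = ["strutil.h"],
+)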
+ The majority of build rules come in families, grouped together by
+ language. For
+ example, cc_binary
, cc_library
+ and cc_test
are the build rules for C++ binaries,
+ libraries, and tests, respectively. Other languages use the same
+ naming scheme, with a different prefix, e.g. java_*
for
+ Java. These functions are all documented in the
+ Build Encyclopedia.
+
*_binary
+ rules build executable programs in a given language. After a
+ build, the executable will reside in the build tool's binary
+ output tree at the corresponding name for the rule's label,
+ so //my:program
would appear at
+ (e.g.) $(BINDIR)/my/program
.
Such rules also create a runfiles directory
+
+ containing all the files mentioned in a data
+ attribute belonging to the rule, or any rule in its transitive
+ closure of dependencies; this set of files is gathered together in
+ one place for ease of deployment to production.
*_test
+ rules are a specialization of a *_binary
rule, used for automated
+ testing. Tests are simply programs that return zero on success.
+
+
+ Like binaries, tests also have runfiles trees, and the files
+ beneath it are the only files that a test may legitimately open
+ at runtime. For example, a program cc_test(name='x',
+ data=['//foo:bar'])
may open and
+
+ read $TEST_SRCDIR/workspace/foo/bar
during execution.
+ (Each programming language has its own utility function for
+ accessing the value of $TEST_SRCDIR
, but they are all
+ equivalent to using the environment variable directly.)
+ Failure to observe the rule will cause the test to fail when it is
+ executed on a remote testing host.
+
+
*_library
+ rules specify separately-compiled modules in the given
+ programming language. Libraries can depend on other libraries,
+ and binaries and tests can depend on libraries, with the expected
+ separate-compilation behavior.
+
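+
+ A typical package using one rule from each of these families might
+ look like this (a hypothetical sketch):
+
+cc_library(
+    name = "greet",
+    srcs = ["greet.cc"],
+    hdrs = ["greet.h"],
+)
+
+cc_binary(
+    name = "hello",
+    srcs = ["main.cc"],
+    deps = [":greet"],
+)
+
+cc_test(
+    name = "greet_test",
+    srcs = ["greet_test.cc"],
+    deps = [":greet"],
+)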
+ A target A
depends upon a target
+ B
if B
is needed by A
at
+ build or execution time. The depends upon relation induces a
+ directed acyclic graph (DAG) over targets, and we call this a
+ dependency graph.
+
+ A target's direct dependencies are those other targets
+ reachable by a path of length 1 in the dependency graph. A target's
+ transitive dependencies are those targets upon which it
+ depends via a path of any length through the graph.
+
+ In fact, in the context of builds, there are two dependency graphs, + the graph of actual dependencies and the graph of + declared dependencies. Most of the time, the two graphs + are so similar that this distinction need not be made, but it is + useful for the discussion below. +
+ +
+ A target X
is actually dependent on target
+ Y
iff Y
must be present, built and
+ up-to-date in order for X
to be built correctly.
+ "Built" could mean generated, processed, compiled, linked,
+ archived, compressed, executed, or any of the other kinds of tasks
+ that routinely occur during a build.
+
+ A target X
has a declared dependency on target
+ Y
iff there is a dependency edge from X
to
+ Y
in the package of X
.
+
+ For correct builds, the graph of actual dependencies A must
+ be a subgraph of the graph of declared dependencies D. That
+ is, every pair of directly-connected nodes x --> y
+ in A must also be directly connected in D. We say
+ D is an overapproximation of A.
+
+ It is important that it not be too much of an overapproximation, + though, since redundant declared dependencies can make builds slower and + binaries larger. +
+ ++ What this means for BUILD file writers is that every rule must + explicitly declare all of its actual direct dependencies to the + build system, and no more. + + Failure to observe this principle causes undefined behavior: the + build may fail, but worse, the build may depend on some prior + operations, or upon which transitive declared dependencies the target + happens to have. The build tool attempts aggressively to check for + missing dependencies and report errors, but it is not possible for + this checking to be complete in all cases. +
+ ++ + You need not (and should not) attempt to list everything indirectly imported, + even if it is "needed" by A at execution time. +
+ +
+ During a build of target X
, the build tool inspects the
+ entire transitive closure of dependencies of X
to ensure that
+ any changes in those targets are reflected in the final result,
+ rebuilding intermediates as needed.
+
+ The transitive nature of dependencies leads to a common mistake. + Through careless programming, code in one file may use code provided + by an indirect dependency, i.e. a transitive but not direct + edge in the declared dependency graph. Indirect dependencies do not + appear in the BUILD file. Since the rule doesn't + directly depend on the provider, there is no way to track changes, + as shown in the following example timeline: +
+ +1. At first, everything works
+ +The code in package a
uses code in package b
.
+The code in package b
uses code in package c
,
+and thus a
transitively depends on c
.
a/BUILD
+rule( + name = "a", + srcs = "a.in", + deps = "//b:b", +) ++
a/a.in
+import b; +b.foo(); ++
b/BUILD
+rule( + name = "b", + srcs = "b.in", + deps = "//c:c", +) ++
b/b.in
+import c; +function foo() { + c.bar(); +} ++
+Declared dependency graph: a --> b --> c + +Actual dependency graph: a --> b --> c ++The declared dependencies overapproximate the actual dependencies. +All is well. +
2. A latent hazard is introduced.
+
+ Someone carelessly adds code to a
that creates a direct
+ actual dependency on c
, but forgets to declare it.
+
a/a.in
+import b; +import c; +b.foo(); +c.garply(); ++
+Declared dependency graph: a --> b --> c
+
+Actual dependency graph:   a --> b --> c
+                           \_________/|
+
+The declared dependencies no longer overapproximate the actual
+dependencies. This may build ok, because the transitive closures of
+the two graphs are equal, but masks a problem:
a
has an
+actual but undeclared dependency on c
.
+3. The hazard is revealed
+
+ Someone refactors b
so that it no longer depends on
+ c
, inadvertently breaking a
through no
+ fault of their own.
+
b/BUILD
+rule( + name = "b", + srcs = "b.in", + deps = "//d:d", +) ++
b/b.in
+import d; +function foo() { + d.baz(); +} ++
+Declared dependency graph: a --> b      c
+
+Actual dependency graph:   a --> b      c
+                           \___________/|
+
+ The declared dependency graph is now an underapproximation of the
+ actual dependencies, even when transitively closed; the build is
+ likely to fail.
+
+ The problem could have been averted by ensuring that the actual
+ dependency from a
to c
introduced in Step
+ 2 was properly declared in the BUILD file.
+
+ Most build rules have three attributes for specifying different kinds
+ of generic dependencies: srcs
, deps
and
+ data
. These are explained below. See also
+ Attributes common
+ to all rules in the Build Encyclopedia.
+
+ Many rules also have additional attributes for rule-specific kinds
+ of dependency, e.g. compiler
, resources
,
+ etc. These are detailed in the Build Encyclopedia.
+
srcs
dependencies+ Files consumed directly by the rule or rules that output source files. +
+ +deps
dependencies
+ Rules pointing to separately-compiled modules providing header files,
+ symbols, libraries, data, etc.
+
+ +data
dependenciesA build target might need some data files to run correctly. These + data files aren't source code: they don't affect how the target is + built. For example, a unit test might compare a function's output + to the contents of a file. When we build the unit test, we + don't need the file; but we do need it when we run the test. The + same applies to tools that are launched during execution. + +
The build system runs tests in an isolated directory where only files + listed as "data" are available. Thus, if a binary/library/test + needs some files to run, specify them (or a build rule containing + them) in data. For example: +
+ ++# I need a config file from a directory named env: +java_binary( + name = "setenv", + ... + data = [":env/default_env.txt"], +) + +# I need test data from another directory +sh_test( + name = "regtest", + srcs = ["regtest.sh"], + data = [ + "//data:file1.txt", + "//data:file2.txt", + ... + ], +) ++ +
These files are available using the relative path
+path/to/data/file
. In tests, it is also possible to refer to
+them by joining the paths of the test's source directory and the workspace-relative
+path, e.g.
+
+${TEST_SRCDIR}/workspace/path/to/data/file
.
+
As you look over our BUILD
files, you might notice
+ that some data
labels refer to directories.
+ These labels end with /.
or /
like so:
+
+
+data = ["//data/regression:unittest/."] # don't use this
+
++or like so: +
+
+data = ["testdata/."] # don't use this
+
+
++or like so: +
+ +
+data = ["testdata/"] # don't use this
+
+ This seems convenient, particularly for tests (since it allows a test to + use all the data files in the directory). +
+ +But try not to do this. In order to ensure correct incremental rebuilds (and
+ re-execution of tests) after a change, the build system must be
+ aware of the complete set of files that are inputs to the build (or
+ test). When you specify a directory, the build system will perform
+ a rebuild only when the directory itself changes (due to addition or
+ deletion of files), but won't be able to detect edits to individual
+ files as those changes do not affect the enclosing directory.
+ Rather than specifying directories as inputs to the build system,
+ you should enumerate the set of files contained within them, either
+ explicitly or using the
+ glob()
function.
+ (Use **
to force the
+ glob()
to be recursive.)
+
+data = glob(["testdata/**"]) # use this instead ++ +
Unfortunately, there are some scenarios where directory labels must be used.
+ For example, if the testdata
directory contains files whose
+ names do not conform to the strict label syntax
+ (e.g. they contain certain punctuation symbols), then explicit
+ enumeration of files, or use of the
+ glob()
function will
+ produce an invalid labels error. You must use directory labels in this case,
+ but beware of the concomitant risk of incorrect rebuilds described above.
+
If you must use directory labels, keep in mind that you can't refer to the parent
+ package with a relative "../
" path; instead, use an absolute path like
+ "//data/regression:unittest/.
".
+
Note that directory labels are only valid for data dependencies. If you try to use
+ a directory as a label in an argument other than data
, it
+ will fail and you will get a (probably cryptic) error message.
+
+$ chmod +x bazel-version-installer-os.sh
+$ ./bazel-version-installer-os.sh --user
+
+
+The `--user` flag installs Bazel to the `$HOME/bin` directory on your
+system and sets the `.bazelrc` path to `$HOME/.bazelrc`. Use the `--help`
+command to see additional installation options.
+
+#### 5. Set up your environment
+
+If you ran the Bazel installer with the `--user` flag as above, the Bazel
+executable is installed in your `$HOME/bin` directory. It's a good idea to add
+this directory to your default paths, as follows:
+
+```bash
+$ export PATH="$PATH:$HOME/bin"
+```
+
+You can also add this command to your `~/.bashrc` file.
+
+
+
+## Mac OS X
+
+Install Bazel on Mac OS X using one of the following methods:
+
+  * [Using Homebrew](#install-on-mac-os-x-homebrew)
+  * [Using binary installer](#install-with-installer-mac-os-x)
+  * [Compiling Bazel from source](#compiling-from-source)
+
+
+### Using Homebrew
+
+#### 1. Install Homebrew on Mac OS X (one time setup)
+
+`$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
+
+#### 2. Install Bazel Homebrew Package
+
+`$ brew install bazel`
+
+Once installed, you can upgrade to a newer version of Bazel with:
+
+`$ brew upgrade bazel`
+
+
+### Install with installer
+
+We provide binary installers on our
+GitHub releases page.
+
+The installer contains only the Bazel binary; some additional libraries must be
+installed on the machine for Bazel to work.
+
+#### 1. Install JDK 8
+
+JDK 8 can be downloaded from
+[Oracle's JDK Page](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).
+Look for "Mac OS X x64" under "Java SE Development Kit". This will download a
+DMG image with an install wizard.
+
+#### 2. Install Xcode command line tools
+
+Xcode can be downloaded from the
+[Apple Developer Site](https://developer.apple.com/xcode/downloads/), which will
+redirect to the App Store.
+
+For `objc_*` and `ios_*` rule support, you must have Xcode 6.1 or later with
+iOS SDK 8.1 installed on your system.
+
+Once Xcode is installed, you can trigger signing the license with the following
+command:
+
+```
+$ sudo gcc --version
+```
+
+#### 3. Download Bazel
+
+Download the [Bazel installer](https://github.com/bazelbuild/bazel/releases) for
+your operating system.
+
+#### 4. Run the installer
+
+Run the installer:
+
+$ chmod +x bazel-version-installer-os.sh +$ ./bazel-version-installer-os.sh --user ++ +The `--user` flag installs Bazel to the `$HOME/bin` directory on your +system and sets the `.bazelrc` path to `$HOME/.bazelrc`. Use the `--help` +command to see additional installation options. + +#### 5. Set up your environment + +If you ran the Bazel installer with the `--user` flag as above, the Bazel +executable is installed in your `$HOME/bin` directory. It's a good idea to add +this directory to your default paths, as follows: + +```bash +$ export PATH="$PATH:$HOME/bin" +``` + +You can also add this command to your `~/.bashrc` file. + +## Compiling from source + +If you would like to build Bazel from source, clone the source from GitHub and +run `./compile.sh` to build it: + +``` +$ git clone https://github.com/bazelbuild/bazel.git +$ cd bazel +$ ./compile.sh +``` + +This will create a bazel binary in `bazel-bin/src/bazel`. This binary is +self-contained, so it can be copied to a directory on the PATH (e.g., +`/usr/local/bin`) or used in-place. + +Check our [continuous integration](http://ci.bazel.io) for the current status of +the build. + + +## Using Bazel with JDK 7 (deprecated) + +Bazel version _0.1.0_ runs without any change with JDK 7. However, future +version will stop supporting JDK 7 when our CI cannot build for it anymore. +The installer for JDK 7 for Bazel versions after _0.1.0_ is labeled +
+./bazel-version-jdk7-installer-os.sh ++If you wish to use JDK 7, follow the same steps as for JDK 8 but with the _jdk7_ installer or using a different APT repository as described [here](#1-add-bazel-distribution-uri-as-a-package-source-one-time-setup). + +## Getting bash completion + +Bazel comes with a bash completion script. To install it: + +1. Build it with Bazel: `bazel build //scripts:bazel-complete.bash`. +2. Copy the script `bazel-bin/scripts/bazel-complete.bash` to your + completion folder (`/etc/bash_completion.d` directory under Ubuntu). + If you don't have a completion folder, you can copy it wherever suits + you and simply insert `source /path/to/bazel-complete.bash` in your + `~/.bashrc` file (under OS X, put it in your `~/.bash_profile` file). + +## Getting zsh completion + +Bazel also comes with a zsh completion script. To install it: + +1. Add this script to a directory on your $fpath: + + ``` + fpath[1,0]=~/.zsh/completion/ + mkdir -p ~/.zsh/completion/ + cp scripts/zsh_completion/_bazel ~/.zsh/completion + ``` + +2. Optionally, add the following to your .zshrc. + + ``` + # This way the completion script does not have to parse Bazel's options + # repeatedly. The directory in cache-path must be created manually. + zstyle ':completion:*' use-cache on + zstyle ':completion:*' cache-path ~/.zsh/cache + ``` diff --git a/site/versions/master/docs/mobile-install.md b/site/versions/master/docs/mobile-install.md new file mode 100644 index 0000000000..f273871d0d --- /dev/null +++ b/site/versions/master/docs/mobile-install.md @@ -0,0 +1,220 @@ +--- +layout: documentation +title: mobile-install +--- + +# bazel mobile-install + +
Fast iterative development for Android
+ +## TL;DR + +To install small changes to an Android app very quickly, do the following: + + 1. Find the `android_binary` rule of the app you want to install. + 2. Disable Proguard by removing the `proguard_specs` attribute. + 3. Set the `multidex` attribute to `native`. + 4. Set the `dex_shards` attribute to `10`. + 5. Connect your device running ART (not Dalvik) over USB and enable USB + debugging on it. + 6. Run `bazel mobile-install :your_target`. App startup will be a little + slower than usual. + 7. Edit the code or Android resources. + 8. Run `bazel mobile-install --incremental :your_target`. + 9. Enjoy not having to wait a lot. + +Some command line options to Bazel that may be useful: + + - `--adb` tells Bazel which adb binary to use + - `--adb_arg` can be used to add extra arguments to the command line of `adb`. + One useful application of this is to select which device you want to install + to if you have multiple devices connected to your workstation: + `bazel mobile-install --adb_arg=-s --adb_arg=+<workspace-name>/ <== The workspace directory + bazel-my-project => <...my-project> <== Symlink to execRoot + bazel-out => <...bin> <== Convenience symlink to outputPath + bazel-bin => <...bin> <== Convenience symlink to most recent written bin dir $(BINDIR) + bazel-genfiles => <...genfiles> <== Convenience symlink to most recent written genfiles dir $(GENDIR) + +/home/user/.cache/bazel/ <== Root for all Bazel output on a machine: outputRoot + _bazel_$USER/ <== Top level directory for a given user depends on the user name: + outputUserRoot + install/ + fba9a2c87ee9589d72889caf082f1029/ <== Hash of the Bazel install manifest: installBase + _embedded_binaries/ <== Contains binaries and scripts unpacked from the data section of + the bazel executable on first run (e.g. helper scripts and the + main Java file BazelServer_deploy.jar) + 7ffd56a6e4cb724ea575aba15733d113/ <== Hash of the client's workspace directory (e.g. 
+ /home/some-user/src/my-project): outputBase + action_cache/ <== Action cache directory hierarchy + This contains the persistent record of the file metadata + (timestamps, and perhaps eventually also MD5 sums) used by the + FilesystemValueChecker. + action_outs/ <== Action output directory. This contains a file with the + stdout/stderr for every action from the most recent bazel run + that produced output. + command.log <== A copy of the stdout/stderr output from the most recent bazel + command. + external/ <== The directory that remote repositories are downloaded/symlinked + into. + server/ <== The Bazel server puts all server-related files (such as socket + file, logs, etc) here. + server.socket <== Socket file for the server. + server.log <== Server logs. + <workspace-name>/ <== Working tree for the Bazel build & root of symlink forest: execRoot + _bin/ <== Helper tools are linked from or copied to here. + + bazel-out/ <== All actual output of the build is under here: outputPath + local_linux-fastbuild/ <== one subdirectory per unique target BuildConfiguration instance; + this is currently encoded + bin/ <== Bazel outputs binaries for target configuration here: $(BINDIR) + foo/bar/_objs/baz/ <== Object files for a cc_* rule named //foo/bar:baz + foo/bar/baz1.o <== Object files from source //foo/bar:baz1.cc + other_package/other.o <== Object files from source //other_package:other.cc + foo/bar/baz <== foo/bar/baz might be the artifact generated by a cc_binary named + //foo/bar:baz + foo/bar/baz.runfiles/ <== The runfiles symlink farm for the //foo/bar:baz executable. + MANIFEST + <workspace-name>/ + ... + genfiles/ <== Bazel puts generated source for the target configuration here: + $(GENDIR) + foo/bar.h e.g. foo/bar.h might be a headerfile generated by //foo:bargen + testlogs/ <== Bazel internal test runner puts test log files here + foo/bartest.log e.g. 
foo/bar.log might be an output of the //foo:bartest test with + foo/bartest.status foo/bartest.status containing exit status of the test (e.g. + PASSED or FAILED (Exit 1), etc) + include/ <== a tree with include symlinks, generated as needed. The + bazel-include symlinks point to here. This is used for + linkstamp stuff, etc. + host/ <== BuildConfiguration for build host (user's workstation), for + building prerequisite tools, that will be used in later stages + of the build (ex: Protocol Compiler) + <packages>/ <== Packages referenced in the build appear as if under a regular workspace ++ +The layout of the *.runfiles directories is documented in more detail in the places pointed to by RunfilesSupport. + +## `bazel clean` + +`bazel clean` does an `rm -rf` on the `outputPath` and the `action_cache` +directory. It also removes the workspace symlinks. The `--partial` option to +`bazel clean` will clean a configuration-specific `outputDir`, and the +`--expunge` option will clean the entire outputBase. diff --git a/site/versions/master/docs/query-how-to.html b/site/versions/master/docs/query-how-to.html new file mode 100644 index 0000000000..8920843812 --- /dev/null +++ b/site/versions/master/docs/query-how-to.html @@ -0,0 +1,399 @@ +--- +layout: documentation +title: Query how-to +--- +
This is a quick tutorial to get you started using Bazel's query language to trace dependencies in your code.
+ +For language details and --output flag details, please see the reference manual, Bazel query reference. You can get help for Bazel query by typing bazel help query.
To execute a query while ignoring errors such as missing targets, use the --keep_going flag.
+
+ What packages exist beneath foo?
+ What rules are defined in the foo package?
+ What files are generated by rules in the foo package?
+ What's the set of BUILD files needed to build //foo?
+ What individual tests does a test_suite expand to?
+ What are the tests beneath foo that match a pattern?
+ What package contains the file src/main/java/com/example/cache/LRUCache.java?
+ What is the build label for src/main/java/com/example/cache/LRUCache.java?
+ What rule target(s) contain file src/main/java/com/example/cache/LRUCache.java as a source?
+To find the dependencies of //src/main/java/com/example/base:base, use the deps function in bazel query:
+
++ $ bazel query "deps(src/main/java/com/example/base:base)" + //resources:translation.xml + //src/main/java/com/example/base:AbstractPublishedUri.java + ... ++ + This is the set of all targets required to build
//src/main/java/com/example/base:base
.
+
+//third_party/zlib:zlibonly isn't in the BUILD file for //src/main/java/com/example/base, but it is an indirect dependency. How can we trace this dependency path? There are two useful functions here: allpaths and somepath.
+
++$ bazel query "somepath(src/main/java/com/example/base:base, third_party/zlib:zlibonly)" +//src/main/java/com/example/base:base +//translations/tools:translator +//translations/base:base +//third_party/py/MySQL:MySQL +//third_party/py/MySQL:_MySQL.so +//third_party/mysql:mysql +//third_party/zlib:zlibonly +$ bazel query "allpaths(src/main/java/com/example/common/base:base, third_party/...)" + ...many errors detected in BUILD files... +//src/main/java/com/example/common/base:base +//third_party/java/jsr166x:jsr166x +//third_party/java/sun_servlet:sun_servlet +//src/main/java/com/example/common/flags:flags +//src/main/java/com/example/common/flags:base +//translations/tools:translator +//translations/tools:aggregator +//translations/base:base +//tools/pkg:pex +//tools/pkg:pex_phase_one +//tools/pkg:pex_lib +//third_party/python:python_lib +//translations/tools:messages +//third_party/py/xml:xml +//third_party/py/xml:utils/boolean.so +//third_party/py/xml:parsers/sgmlop.so +//third_party/py/xml:parsers/pyexpat.so +//third_party/py/MySQL:MySQL +//third_party/py/MySQL:_MySQL.so +//third_party/mysql:mysql +//third_party/openssl:openssl +//third_party/zlib:zlibonly +//third_party/zlib:zlibonly_v1_2_3 +//third_party/python:headers +//third_party/openssl:crypto ++ +
src/main/java/com/example/common/base
never references //translations/tools:aggregator
. So, where's the direct dependency?
+
+Certain rules include implicit dependencies on additional libraries or tools. For example, to build a genproto
rule, you need first to build the Protocol Compiler, so every genproto
rule carries an implicit dependency on the protocol compiler. These dependencies are not mentioned in the build file, but are added by the build tool. (The full set of implicit dependencies is currently undocumented; to learn more, read the source code of RuleClassProvider.)
+Use rdeps(u, x) to find the reverse dependencies of the targets in x within the transitive closure of u.
+
+Unfortunately, invoking, e.g.,
+rdeps(..., daffie/annotations2:constants-lib)
+is not practical for a large tree, because it requires parsing every BUILD file and building a very large dependency graph (Bazel may run out of memory). If you would like to execute this query across a large repository, you may have to query subtrees and then combine the results.
+
+You can use bazel query to analyze many dependency relationships.
+
+What packages exist beneath foo? bazel query 'foo/...' --output package
+What rules are defined in the foo package? bazel query 'kind(rule, foo:all)' --output label_kind
+What files are generated by rules in the foo package? bazel query 'kind("generated file", //foo:*)'
+
+What's the set of BUILD files needed to build //foo? bazel query 'buildfiles(deps(//foo))' --output location | cut -f1 -d:
+What individual tests does a test_suite expand to? bazel query 'tests(//foo:smoke_tests)'
+Which of those are C++ tests? bazel query 'kind(cc_.*, tests(//foo:smoke_tests))'
+Which of those are small? bazel query 'attr(size, small, tests(//foo:smoke_tests))'
+
+ Medium? bazel query 'attr(size, medium, tests(//foo:smoke_tests))'
+
+ Large? bazel query 'attr(size, large, tests(//foo:smoke_tests))'
+What are the tests beneath foo that match a pattern? bazel query 'filter("pa?t", kind(".*_test rule", //foo/...))'
+The pattern is a regex and is applied to the full name of the rule. It's similar to doing
+ bazel query 'kind(".*_test rule", //foo/...)' | grep -E 'pa?t'
+
+What package contains the file src/main/java/com/example/cache/LRUCache.java? bazel query 'buildfiles(src/main/java/com/example/cache/LRUCache.java)' --output=package
+
+What is the build label for src/main/java/com/example/cache/LRUCache.java? bazel query src/main/java/com/example/cache/LRUCache.java
+
+What rule target(s) contain file src/main/java/com/example/cache/LRUCache.java as a source? +fullname=$(bazel query src/main/java/com/example/cache/LRUCache.java) +bazel query "attr('srcs', $fullname, ${fullname//:*/}:*)" ++
What packages does foo depend on? (What do I need to check out to build foo?) bazel query 'buildfiles(deps(//foo:foo))' --output package
+
+Note, buildfiles
is required in order to correctly obtain all files
+referenced by subinclude
; see the reference manual for details.
+What packages does the foo tree depend on, excluding foo/contrib? bazel query 'deps(foo/... except foo/contrib/...)' --output package
+What genproto rules does bar depend upon? bazel query 'kind(genproto, deps(bar/...))'
+Find the location of some cc_* library that a java_binary rule in the frontend tree transitively depends on: bazel query 'some(kind(cc_.*library, deps(kind(java_binary, src/main/java/com/example/frontend/...))))' --output location
+Which java_binary rules lie on a path to those cc_* libraries? bazel query 'let jbs = kind(java_binary, src/main/java/com/example/frontend/...) in + let cls = kind(cc_.*library, deps($jbs)) in + $jbs intersect allpaths($jbs, $cls)'+
What's the complete set of Java source files required to build qux?
 Source files: bazel query 'kind("source file", deps(src/main/java/com/example/qux/...))' | grep java$
+
+ Generated files: bazel query 'kind("generated file", deps(src/main/java/com/example/qux/...))' | grep java$
+What's the complete set of Java source files required to build qux's tests?
 Source files: bazel query 'kind("source file", deps(kind(".*_test rule", javatests/com/example/qux/...)))' | grep java$
+
+ Generated files: bazel query 'kind("generated file", deps(kind(".*_test rule", javatests/com/example/qux/...)))' | grep java$
+What does //foo depend on that //foo:foolib does not? bazel query 'deps(//foo) except deps(//foo:foolib)'
+What C++ libraries do the foo tests depend on that the //foo production binary does not depend on? bazel query 'kind("cc_library", deps(kind(".*test rule", foo/...)) except deps(//foo))'
+Why does the bar tree depend on groups2? bazel query 'somepath(bar/...,groups2/...:*)'
+
+ Once you have the results of this query, you will often find that a
+ single target stands out as being an unexpected and
+ undesirable dependency of bar. The query can then
+ be further refined to:
+Show me a path from docker/updater:updater_systest (a py_test) to some cc_library that it depends upon: bazel query 'let cc = kind(cc_library, deps(docker/updater:updater_systest)) in + somepath(docker/updater:updater_systest, $cc)'+
Why does //photos/frontend:lib depend on two variants of the same library, //third_party/jpeglib and //third_party/jpeg? This query boils down to: "find the subgraph of //photos/frontend:lib that depends on both libraries". When shown
+in topological order, the last element of the result is the most
+likely culprit.
+
++% bazel query 'allpaths(//photos/frontend:lib, //third_party/jpeglib) + intersect + allpaths(//photos/frontend:lib, //third_party/jpeg)' +//photos/frontend:lib +//photos/frontend:lib_impl +//photos/frontend:lib_dispatcher +//photos/frontend:icons +//photos/frontend/modules/gadgets:gadget_icon +//photos/thumbnailer:thumbnail_lib +//third_party/jpeg/img:renderer ++
What targets in bar depend on Y? bazel query 'bar/... intersect allpaths(bar/..., Y)'
+
+ Note: X intersect allpaths(X, Y)
is the general idiom for the query "which X depend on Y?"
+ If expression X is non-trivial, it may be convenient to bind a name to it using
+ let
to avoid duplication.
+What do I need to change to make bar no longer depend on X? To view the dependency graph as a png file:
+
++bazel query 'allpaths(bar/...,X)' --output graph | dot -Tpng > /tmp/dep.png ++
How long is the longest chain of dependencies in the ServletSmokeTests build? Use the maxrank output format:
+
+% bazel query 'deps(//src/test/java/com/example/servlet:ServletSmokeTests)' +--output maxrank | tail -1 +85 //third_party/zlib:zutil.c ++ +The result indicates that there exist paths of length 85 that must +occur in order in this build. +
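Conceptually, maxrank assigns each node the length of the longest dependency chain below it. A minimal Python sketch of that idea over a toy graph (the target names and graph here are hypothetical, and this is an illustration of the concept, not Bazel's implementation):

```python
from functools import lru_cache

# Hypothetical toy dependency graph: target -> direct dependencies.
DEPS = {
    "//test:smoke": ["//lib:a", "//lib:b"],
    "//lib:a": ["//lib:b"],
    "//lib:b": ["//third_party:z"],
    "//third_party:z": [],
}

@lru_cache(maxsize=None)
def maxrank(target):
    """Length of the longest dependency chain below `target` (0 for leaves)."""
    return max((maxrank(d) + 1 for d in DEPS[target]), default=0)

print(maxrank("//test:smoke"))  # longest chain: smoke -> a -> b -> z, i.e. 3
```

The `lru_cache` memoization keeps this linear in the size of the graph, which is what makes the computation cheap even on very large builds.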
+ When you use bazel query
to analyze build
+ dependencies, you use a little language, the Bazel Query
+ Language. This document is the reference manual for that
+ language. This document also describes the output
+ formats bazel query
supports.
+
+ How do people use bazel query
? Here are typical examples:
+
+ Why does the //foo
tree depend on //bar/baz
?
+ Show a path:
somepath(foo/..., //bar/baz:all)+ + +
+ What C++ libraries do all the foo
tests depend on that
+ the foo_bin
target does not?
kind("cc_library", deps(kind(".*test rule", foo/...)) except deps(//foo:foo_bin))+ + +
+ Expressions in the query language are composed of the following + tokens:
+
+ Keywords, such as somepath
or
+ let
. Keywords are the reserved words of the
+ language, and each of them is described below. The complete set
+ of keywords is:
+
+allpaths
+attr
+
+buildfiles
+
+deps
+except
+filter
+in
+intersect
+kind
+labels
+let
+loadfiles
+rdeps
+set
+some
+somepath
+tests
+union
+
+
+ Words, such as foo/...
or
+ ".*test rule"
or
+ //bar/baz:all
.
+ If a character sequence is "quoted" (begins and ends with a
+ single-quote '
, or begins and ends with a
+ double-quote "
), it is a word.
+ If a character sequence is not quoted, it may still be parsed as a word.
+ Unquoted words are sequences of characters drawn from
+ the set of alphabet characters, numerals, slash /
,
+ hyphen -
, underscore _
, star *
, and
+ period .
. Unquoted words may not start with a
+ hyphen or period.
+
We chose this syntax so that quote marks aren't needed in most cases.
+ The (unusual) ".*test rule"
example needs quotes: it
+ starts with a period and contains a space.
+ Quoting "cc_library"
is unnecessary but harmless.
+
+ Quoting is necessary when writing scripts that + construct Bazel query expressions from user-supplied values. + +
++ //foo:bar+wiz # WRONG: scanned as //foo:bar + wiz. + //foo:bar=wiz # WRONG: scanned as //foo:bar = wiz. + "//foo:bar+wiz" # ok. + "//foo:bar=wiz" # ok. ++
+ Note that this quoting is in addition to any quoting that may + be required by your shell. e.g. +
+% bazel query ' "//foo:bar=wiz" ' # single-quotes for shell, double-quotes for Bazel.+ +
+ Keywords, when quoted, are treated as ordinary words, thus
+ some
is a keyword but "some"
is a word.
+ Both foo
and "foo"
are words.
+
Punctuation, such as parentheses (), period (.) and comma (,), etc. Words containing
+ punctuation (other than the exceptions listed above) must be quoted.
+ + Whitespace characters outside of a quoted word are ignored. +
+ ++ The Bazel query language is a language of expressions. Every + expression evaluates to a partially-ordered set of targets, + or equivalently, a graph (DAG) of targets. This is the only + datatype. +
+ In some expressions, the partial order of the graph is + not interesting; in this case, we call the values + "sets". In cases where the partial order of elements + is significant, we call the values "graphs". Note + that both terms refer to the same datatype, but merely emphasize + different aspects of it.
+ ++ Build dependency graphs should be acyclic. + + The algorithms used by the query language are intended for use in + acyclic graphs, but are robust against cycles. The details of how + cycles are treated are not specified and should not be relied upon. +
+ +
+ In addition to build dependencies that are defined explicitly in BUILD files,
+ Bazel adds additional implicit dependencies to rules. For example,
+ every Java rule implicitly depends on the JavaBuilder. Implicit dependencies
+ are established using attributes that start with $
and they
+ cannot be overridden in BUILD files.
+
+
+ By default, bazel query
takes implicit dependencies into account
+ when computing the query result. This behavior can be changed with
+ the --[no]implicit_deps
option.
+
+ Bazel query language expressions operate over the build + dependency graph, which is the graph implicitly defined by all + rule declarations in all BUILD files. It is important to understand + that this graph is somewhat abstract, and does not constitute a + complete description of how to perform all the steps of a build. In + order to perform a build, a configuration is required too; + see the configurations + section of the User's Guide for more detail. +
+ ++ The result of evaluating an expression in the Bazel query language + is true for all configurations, which means that it may be + a conservative over-approximation, and not exactly precise. If you + use the query tool to compute the set of all source files needed + during a build, it may report more than are actually necessary + because, for example, the query tool will include all the files + needed to support message translation, even though you don't intend + to use that feature in your build. +
+ +
+ Operations preserve any ordering
+ constraints inherited from their subexpressions. You can think of
+ this as "the law of conservation of partial order". Consider an
+ example: if you issue a query to determine the transitive closure of
+ dependencies of a particular target, the resulting set is ordered
+ according to the dependency graph. If you filter that set to
+ include only the targets of file
kind, the same
+ transitive partial ordering relation holds between every
+ pair of targets in the resulting subset—even though none of
+ these pairs is actually directly connected in the original graph.
+ (There are no file–file edges in the build dependency graph).
+
+ However, while all operators preserve order, some + operations, such as the set operations, + don't introduce any ordering constraints of their own. + Consider this expression:
+ +deps(x) union y+ +
+ The order of the final result set is guaranteed to preserve all the
+ ordering constraints of its subexpressions, namely, that all the
+ transitive dependencies of x
are correctly ordered with
+ respect to each other. However, the query guarantees nothing about
+ the ordering of the targets in y
, nor about the
+ ordering of the targets in deps(x)
relative to those in
+ y
(except for those targets in
+ y
that also happen to be in deps(x)
).
+
+ Operators that introduce ordering constraints include:
+ allpaths
,
+ deps
,
+ rdeps
,
+ somepath
,
+ and the target pattern wildcards
+ package:*
,
+ dir/...
, etc.
+
+ This is the grammar of the Bazel query language, expressed in EBNF + notation: +
+ + +expr ::= word + | let name = expr in expr + | (expr) + | expr intersect expr + | expr ^ expr + | expr union expr + | expr + expr + | expr except expr + | expr - expr + | deps(expr) + | deps(expr, depth) + | rdeps(expr, expr) + | rdeps(expr, expr, depth) + | some(expr) + | somepath(expr, expr) + | allpaths(expr, expr) + | kind(word, expr) + | labels(word, expr) + | filter(word, expr) + | set(word *) + | attr(word, word, expr) ++ +
+ We will examine each of the productions of this grammar in order. +
+ +expr ::= word+
+ Syntactically, a target pattern is just a word. It
+ is interpreted as an (unordered) set of targets. The simplest
+ target pattern is a label,
+ which identifies a single target (file or rule). For example, the
+ target pattern //foo:bar
evaluates to a set
+ containing one element, the target, the bar
+ rule.
+
+ Target patterns generalize labels to include wildcards over packages
+ and targets. For example, foo/...:all
(or
+ just foo/...
) is a target pattern that evaluates to a
+ set containing all rules in every package recursively
+ beneath the foo
directory;
+ bar/baz:all
is a target pattern that
+ evaluates to a set containing all the rules in the
+ bar/baz
package, but not its subpackages.
+
+ Similarly, foo/...:*
is a target pattern that evaluates
+ to a set containing all targets (rules and files) in
+ every package recursively beneath the foo
directory;
+ bar/baz:*
evaluates to a set containing
+ all the targets in the
+ bar/baz
package, but not its subpackages.
+
+ Because the :*
wildcard matches files as well as rules,
+ it is often more useful than :all
for queries.
+ Conversely, the :all
wildcard (implicit in target
+ patterns like foo/...
) is typically more useful for
+ builds.
+
+ bazel query
target patterns work the same as
+ bazel build
build targets do;
+ refer to Target Patterns
+ in the Bazel User Manual for further details, or type bazel
+ help target-syntax
.
+
+
+ Target patterns may evaluate to a singleton set (in the case of a
+ label), to a set containing many elements (as in the case of
+ foo/...
, which has thousands of elements) or to the
+ empty set, if the target pattern matches no targets.
+
+ All nodes in the result of a target pattern expression are correctly
+ ordered relative to each other according to the dependency relation.
+ So, the result of foo:*
is not just the set of targets
+ in package foo
, it is also the graph over
+ those targets. (No guarantees are made about the relative ordering
+ of the result nodes against other nodes.) See the section
+ on graph order for more details.
+
expr ::= let name = expr1 in expr2 + | $name+
+ The Bazel query language allows definitions of and references to
+ variables. The
+ result of evaluation of a let
expression is the same as
+ that of expr2, with all free occurrences of
+ variable name replaced by the value of
+ expr1.
+
+ For example, let v = foo/... in allpaths($v, //common)
+ intersect $v
is equivalent to the allpaths(foo/...,
+ //common) intersect foo/...
.
+
+ An occurrence of a variable reference name
other than in
+ an enclosing let name = ...
expression is an
+ error. In other words, toplevel query expressions cannot have free
+ variables.
+
+ In the above grammar productions, name
is like
+ word, but with the additional constraint that it be a legal
+ identifier in the C programming language. References to the variable
+ must be prepended with the "$" character.
+
+ Each let
expression defines only a single variable,
+ but you can nest them.
+
+ (Both target patterns and variable references + consist of just a single token, a word, creating a syntactic + ambiguity. However, there is no semantic ambiguity, because the + subset of words that are legal variable names is disjoint from the + subset of words that are legal target patterns.) +
+ +
+ (Technically speaking, let
expressions do not increase
+ the expressiveness of the query language: any query expressible in
+ the language can also be expressed without them. However, they
+ improve the conciseness of many queries, and may also lead to more
+ efficient query evaluation.)
+
expr ::= (expr)+ +
+ Parentheses associate subexpressions to force an + order of evaluation. + A parenthesized expression evaluates + to the value of its argument. +
+ +expr ::= expr intersect expr + | expr ^ expr + | expr union expr + | expr + expr + | expr except expr + | expr - expr ++ +
+ These three operators compute the usual set operations over their
+ arguments. Each operator has two forms, a nominal form such
+ as intersect
and a symbolic form such
+ as ^
. Both forms are equivalent;
+ the symbolic forms are quicker to type. (For clarity, the rest of
+ this manual uses the nominal forms.) For example,
+
foo/... except foo/bar/...+ + evaluates to the set of targets that match +
foo/...
but not
+ foo/bar/...
. Equivalently:
+
+foo/... - foo/bar/...+ + The
intersect
(^
)
+ and union
(+
) operations are commutative
+ (symmetric); except
(-
) is
+ asymmetric. The parser treats all three operators as
+ left-associative and of equal precedence, so you might want parentheses.
+ For example, the first two of these expressions are
+ equivalent, but the third is not:
+
+x intersect y union z +(x intersect y) union z +x intersect (y union z)+ +
+ (We strongly recommend that you use parentheses where there is + any danger of ambiguity in reading a query expression.) +
+ +expr ::= set(word *)+
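The associativity pitfall can be mimicked with ordinary Python sets; this only illustrates the algebra described above, not Bazel itself:

```python
x = {1, 2, 3}
y = {2, 3, 4}
z = {5}

# The query parser is left-associative, so "x intersect y union z"
# groups as (x ^ y) + z:
left_assoc = (x & y) | z
# ...which generally differs from grouping the union first:
other_grouping = x & (y | z)

print(left_assoc)      # {2, 3, 5}
print(other_grouping)  # {2, 3}
```

The two results differ, which is exactly why explicit parentheses are recommended in ambiguous query expressions.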
+ The set(a b c ...)
+ operator computes the union of a set of zero or
+ more target patterns, separated by
+ whitespace (no commas).
+
+ In conjunction with the Bourne shell's $(...)
+ feature, set()
provides a means of saving the results
+ of one query in a regular text file, manipulating that text file
+ using other programs (e.g. standard UNIX shell tools), and then
+ introducing the result back into the query tool as a value for
+ further processing. For example:
+
+ % bazel query deps(//my:target) --output=label | grep ... | sed ... | awk ... > foo + % bazel query "kind(cc_binary, set($(<foo)))" ++
+ In the next example, kind(cc_library,
+ deps(//some_dir/foo:main, 5))
is effectively computed
+ by filtering on the maxrank
values using
+ an awk
program.
+
+ % bazel query 'deps(//some_dir/foo:main)' --output maxrank | + awk '($1 < 5) { print $2;} ' > foo + % bazel query "kind(cc_library, set($(<foo)))" ++
+ In these examples, $(<foo)
is a shorthand
+ for $(cat foo)
, but shell commands other
+ than cat
may be used too—such as
+ the previous awk
command.
+
+ Note, set()
introduces no graph ordering constraints,
+ so path information may be lost when saving and reloading sets of
+ nodes using it. See the graph order
+ section below for more detail.
+
expr ::= deps(expr) + | deps(expr, depth)+
+ The deps(x)
operator evaluates to the graph
+ formed by the transitive closure of dependencies of its argument set
+ x. For example, the value of deps(//foo)
is
+ the dependency graph rooted at the single node foo
,
+ including all its dependencies. The value of
+ deps(foo/...)
is the dependency graphs whose roots are
+ all rules in every package beneath the foo
directory.
+ Please note that 'dependencies' means only rule and file targets
+ in this context, therefore the BUILD,
+
+ and Skylark files needed to
+ create these targets are not included here. For that you should use the
+ buildfiles
operator.
+
+ The resulting graph is ordered according to the dependency relation. + See the section on graph order for more + details. +
+ +
+ The deps
operator accepts an optional second argument,
+ which is an integer literal specifying an upper bound on the depth
+ of the search. So deps(foo:*, 1)
evaluates to all the
+ direct prerequisites of any target in the foo
package,
+ and deps(foo:*, 2)
further includes the nodes directly
+ reachable from the nodes in deps(foo:*, 1)
, and so on.
+ (These numbers correspond to the ranks shown in
+ the minrank
output
+ format.) If the depth parameter is omitted, the search
+ is unbounded, i.e. it computes the reflexive transitive closure of
+ prerequisites.
+
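The depth-bounded semantics of deps can be sketched as a breadth-first traversal over a toy graph (all target names here are hypothetical; this is a conceptual sketch, not Bazel's implementation):

```python
from collections import deque

# Hypothetical toy dependency graph: target -> direct dependencies.
DEPS = {
    "//foo:bin": ["//foo:lib"],
    "//foo:lib": ["//base:base"],
    "//base:base": [],
}

def deps(targets, depth=None):
    """Reflexive transitive closure of dependencies, optionally depth-bounded."""
    seen = set(targets)
    frontier = deque((t, 0) for t in targets)
    while frontier:
        target, rank = frontier.popleft()
        if depth is not None and rank == depth:
            continue  # depth bound reached; do not expand further
        for d in DEPS[target]:
            if d not in seen:
                seen.add(d)
                frontier.append((d, rank + 1))
    return seen

print(sorted(deps({"//foo:bin"}, 1)))  # root plus its direct prerequisites
print(sorted(deps({"//foo:bin"})))     # unbounded: the full closure
```

Note that the result is reflexive: the argument targets are always included, just as deps(x) always contains x.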
expr ::= rdeps(expr, expr) + | rdeps(expr, expr, depth)+
+ The rdeps(u, x)
operator evaluates
+ to the reverse dependencies of the argument set x within the
+ transitive closure of the universe set u.
+
+ The resulting graph is ordered according to the dependency relation. See the + section on graph order for more details. +
+ +
+ The rdeps
operator accepts an optional third argument,
+ which is an integer literal specifying an upper bound on the depth of the
+ search. The resulting graph will only include nodes within a distance of the
+ specified depth from any node in the argument set. So
+ rdeps(//foo, //common, 1)
evaluates to all nodes in the
+ transitive closure of //foo
that directly depend on
+ //common
. (These numbers correspond to the ranks shown in the
+ minrank
output format.) If the
+ depth parameter is omitted, the search is unbounded.
+
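rdeps can be sketched the same way: restrict attention to the universe's transitive closure, invert the edges, then walk outward from x with an optional depth bound (toy graph with hypothetical names; a conceptual sketch only):

```python
# Hypothetical toy dependency graph: target -> direct dependencies.
DEPS = {
    "//foo": ["//mid"],
    "//mid": ["//common"],
    "//common": [],
}

def closure(target):
    """Transitive closure of a single target, including the target itself."""
    out, stack = {target}, [target]
    while stack:
        for d in DEPS[stack.pop()]:
            if d not in out:
                out.add(d)
                stack.append(d)
    return out

def rdeps(universe, targets, depth=None):
    """Nodes in closure(universe) that reach `targets`, optionally bounded."""
    nodes = closure(universe)
    # Invert the edges, restricted to the universe's closure.
    rev = {n: [m for m in nodes if n in DEPS[m]] for n in nodes}
    seen, frontier = set(targets), [(t, 0) for t in targets]
    while frontier:
        node, rank = frontier.pop()
        if depth is not None and rank == depth:
            continue  # depth bound reached
        for r in rev[node]:
            if r not in seen:
                seen.add(r)
                frontier.append((r, rank + 1))
    return seen

print(sorted(rdeps("//foo", {"//common"}, 1)))  # only direct dependers
```

With depth 1 only //mid (the direct depender) is added; without the bound, //foo joins the result as well.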
expr ::= some(expr)+
+ The some(x)
operator selects one target
+ arbitrarily from its argument set x, and evaluates to a
+ singleton set containing only that target. For example, the
+ expression some(//foo:main union //bar:baz)
+ evaluates to a set containing either //foo:main
or
+ //bar:baz
—though which one is not defined.
+
+ If the argument is a singleton, then some
+ computes the identity function: some(//foo:main)
is
+ equivalent to //foo:main
.
+
+ It is an error if the specified argument set is empty, as in the
+ expression some(//foo:main intersect //bar:baz)
.
+
expr ::= somepath(expr, expr) + | allpaths(expr, expr)+
+ The somepath(S, E)
and
+ allpaths(S, E)
operators compute
+ paths between two sets of targets. Both queries accept two
+ arguments, a set S of starting points and a set
+ E of ending points. somepath
returns the
+ graph of nodes on some arbitrary path from a target in
+ S to a target in E; allpaths
+ returns the graph of nodes on all paths from any target in
+ S to any target in E.
+
+ The resulting graphs are ordered according to the dependency relation. + See the section on graph order for more + details. +
+ +
expr ::= kind(word, expr)+
+ The kind(pattern, input)
operator
+ applies a filter to a set of targets, and discards those targets
+ that are not of the expected kind. The pattern parameter specifies
+ what kind of target to match.
+
source file
+ generated file
+ ruletype rule
+ package group
+
+ For example, the kinds for the four targets defined by the BUILD file
+ (for package p
) shown below are illustrated in the
+ table:
+
++genrule( + name = "a", + srcs = ["a.in"], + outs = ["a.out"], + cmd = "...", +) ++
+ Here //p:a is a genrule rule, //p:a.in is a source file, //p:a.out is a generated file, and //p:BUILD is a source file.
+ Thus, kind("cc_.* rule", foo/...)
evaluates to the set
+ of all cc_library
, cc_binary
, etc,
+ rule targets beneath
+ foo
, and kind("source file", deps(//foo))
+ evaluates to the set of all source files in the transitive closure
+ of dependencies of the //foo
target.
+
+ Quotation of the pattern argument is often required
+ because without it, many regular expressions, such as source
+ file
and .*_test
, are not considered words by
+ the parser.
+
+ When matching for package group
, targets ending in
+ :all
may not yield any results.
+ Use :all-targets
instead.
+
expr ::= filter(word, expr)+
+ The filter(pattern, input)
operator
+ applies a filter to a set of targets, and discards targets whose
+ labels (in absolute form) do not match the pattern; it
+ evaluates to a subset of its input.
+
+ The first argument, pattern is a word containing a
+ regular expression over target names. A filter
expression
+ evaluates to the set containing all targets x such that
+ x is a member of the set input and the
+ label (in absolute form, e.g. //foo:bar
)
+ of x contains an (unanchored) match
+ for the regular expression pattern. Since all
+ target names start with //
, it may be used as an alternative
+ to the ^
regular expression anchor.
+
+ This operator often provides a much faster and more robust alternative to the
+ intersect
operator. For example, in order to see all
+ bar
dependencies of the //foo:foo
target, one could
+ evaluate
+
deps(//foo) intersect //bar/...+
+ This statement, however, will require parsing of all BUILD files in the
+ bar
tree, which will be slow and prone to errors in
+ irrelevant BUILD files. An alternative would be:
+
filter(//bar, deps(//foo))+
+ which would first calculate the set of //foo
dependencies and
+ then would filter only targets matching the provided pattern—in other
+ words, targets with names containing //bar
as a
+ substring.
+
+ Another common use of the filter(pattern,
+ expr)
operator is to filter specific files by their
+ name or extension. For example,
+
filter("\.cc$", deps(//foo))+
+ will provide a list of all .cc
files used to build
+ //foo
.
+
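The unanchored-match semantics described above can be sketched in a few lines of Python (toy labels; this illustrates the matching rule, not Bazel's implementation):

```python
import re

def query_filter(pattern, targets):
    """Keep targets whose absolute label contains an unanchored match."""
    regex = re.compile(pattern)
    return {t for t in targets if regex.search(t)}

deps_of_foo = {"//foo:foo", "//bar:lib", "//third_party/bar:bar"}

# "//" effectively anchors at a label's start, since every label begins with it;
# note that //third_party/bar:bar does NOT contain the substring "//bar".
print(sorted(query_filter("//bar", deps_of_foo)))                   # ['//bar:lib']
print(sorted(query_filter(r"\.cc$", {"//foo:a.cc", "//foo:a.h"})))  # ['//foo:a.cc']
```

This is why filter("//bar", ...) picks out only targets under //bar itself, and why "\.cc$" selects files by extension.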
expr ::= attr(word, word, expr)+
+ The attr(name, pattern, input)
+ operator applies a filter to a set of targets, and discards targets that
+ are not rules, rule targets that do not have attribute name
+ defined or rule targets where the attribute value does not match the provided
+ regular expression pattern; it evaluates to a subset of its input.
+
+ The first argument, name is the name of the rule attribute that
+ should be matched against the provided regular expression pattern. The second
+ argument, pattern is a regular expression over the attribute
+ values. An attr
expression evaluates to the set containing all
+ targets x such that x is a member of the set
+ input, is a rule with the defined attribute name and
+ the attribute value contains an (unanchored) match for the regular expression
+ pattern. Please note, that if name is an optional
+ attribute and rule does not specify it explicitly then default attribute
+ value will be used for comparison. For example,
+
attr(linkshared, 0, deps(//foo))+
+ will select all //foo
dependencies that are allowed to have a
+ linkshared attribute (e.g., cc_binary
rule) and have it either
+ explicitly set to 0 or do not set it at all but default value is 0 (e.g. for
+ cc_binary
rules).
+
+ List-type attributes (such as srcs
, data
, etc) are
+ converted to strings of the form [value1, ..., valuen]
,
+ starting with a [
bracket, ending with a ]
bracket
+ and using ",
" (comma, space) to delimit multiple values.
+ Labels are converted to strings by using the absolute form of the
+ label. For example, an attribute deps=[":foo",
+ "//otherpkg:bar", "wiz"]
would be converted to the
+ string [//thispkg:foo, //otherpkg:bar, //thispkg:wiz]
.
+ Brackets
+ are always present, so the empty list would use string value []
+ for matching purposes. For example,
+
attr("srcs", "\[\]", deps(//foo))+
+ will select all rules among //foo
dependencies that have an
+ empty srcs
attribute, while
+
attr("data", ".{3,}", deps(//foo))+
+ will select all rules among //foo
dependencies that specify at
+ least one value in the data
attribute (every label is at least
+ 3 characters long due to the //
and :
).
+
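The list-attribute stringification rules above can be sketched directly (toy rule data; the labels and attribute values are hypothetical, and this only mirrors the described comparison, not Bazel's code):

```python
import re

def attr_to_string(value):
    """Render a list-type attribute the way attr compares it: [v1, v2, ...]."""
    if isinstance(value, list):
        return "[" + ", ".join(value) + "]"
    return str(value)

# Hypothetical rule targets and their (already label-resolved) attributes.
rules = {
    "//pkg:empty": {"srcs": []},
    "//pkg:full": {"srcs": ["//pkg:a.cc", "//pkg:b.cc"]},
}

def attr_filter(name, pattern, targets):
    """Keep rule targets whose stringified attribute matches the pattern."""
    regex = re.compile(pattern)
    return {t for t in targets
            if name in rules[t] and regex.search(attr_to_string(rules[t][name]))}

print(sorted(attr_filter("srcs", r"\[\]", rules)))  # rules with an empty srcs
print(sorted(attr_filter("srcs", r"\.cc", rules)))  # rules with .cc sources
```

Since an empty list renders as `[]`, the pattern `\[\]` matches exactly the rules with an empty attribute, mirroring the attr("srcs", "\[\]", ...) example above.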
expr ::= visible(expr, expr)+
+ The visible(predicate, input)
operator
+ applies a filter to a set of targets, and discards targets without the
+ required visibility.
+
+ The first argument, predicate, is a set of targets that all targets + in the output must be visible to. A visible expression + evaluates to the set containing all targets x such that x + is a member of the set input, and for all targets y in + predicate x is visible to y. For example: +
+visible(//foo, //bar:*)+
+ will select all targets in the package //bar
that //foo
+ can depend on without violating visibility restrictions.
+
expr ::= labels(word, expr)+
+ The labels(attr_name, inputs)
+ operator returns the set of targets specified in the
+ attribute attr_name of type "label" or "list of label" in
+ some rule in set inputs.
+
+ For example, labels(srcs, //foo)
returns the set of
+ targets appearing in the srcs
attribute of
+ the //foo
rule. If there are multiple rules
+ with srcs
attributes in the inputs set, the
+ union of their srcs
is returned.
+
+ Please note, deps
is a reserved word in the query
+ language, so you must quote it if you wish to query the rule
+ attribute of that name in a labels
expression:
+ labels("deps", //foo)
.
+
expr ::= tests(expr)
+ The tests(x) operator returns the set of all test rules in set x,
+ expanding any test_suite rules into the set of individual tests
+ that they refer to, and applying filtering by tag and size.
+
+ By default, query evaluation ignores any non-test targets in all
+ test_suite rules. This can be changed to errors with the
+ --strict_test_suite option.
+
+ For example, the query kind(test, foo:*) lists all the *_test and
+ test_suite rules in the foo package. All the results are (by
+ definition) members of the foo package. In contrast, the query
+ tests(foo:*) will return all of the individual tests that would be
+ executed by bazel test foo:*: this may include tests belonging to
+ other packages, that are referenced directly or indirectly via
+ test_suite rules.
+
expr ::= buildfiles(expr)
+ The buildfiles(x) operator returns the set of files that define
+ the packages of each target in set x; in other words, for each
+ package, its BUILD file, plus any files it references via load.
+ Note that this also returns the BUILD files of the packages
+ containing these loaded files.
+
+ This operator is typically used when determining what files or
+ packages are required to build a specified target, often in
+ conjunction with the --output package option, below. For example,
+ bazel query 'buildfiles(deps(//foo))' --output package
+ returns the set of all packages on which //foo transitively
+ depends.
+
+ (Note: a naive attempt at the above query would omit the
+ buildfiles operator and use only deps, but this yields an
+ incorrect result: while the result contains the majority of
+ needed packages, those packages that contain only files that are
+ load()'ed will be missing.)
+
expr ::= loadfiles(expr)
+ The loadfiles(x) operator returns the set of Skylark files that
+ are needed to load the packages of each target in set x. In other
+ words, for each package, it returns the .bzl files that are
+ referenced from its BUILD files.
+
+ bazel query generates a graph. You specify the content, format,
+ and ordering by which bazel query presents this graph by means of
+ the --output command-line option.
+
+ Some of the output formats accept additional options. The name of
+ each output option is prefixed with the output format to which it
+ applies, so --graph:factored applies only when --output=graph is
+ being used; it has no effect if an output format other than graph
+ is used. Similarly, --xml:line_numbers applies only when
+ --output=xml is being used.
+
+ Although query expressions always follow the "law of conservation
+ of graph order", presenting the results may be done in either a
+ dependency-ordered or unordered manner. This does not influence
+ the targets in the result set or how the query is computed. It
+ only affects how the results are printed to stdout. Moreover,
+ nodes that are equivalent in the dependency order may or may not
+ be ordered alphabetically. The --order_output flag can be used to
+ control this behavior. (The --[no]order_results flag has a subset
+ of the functionality of the --order_output flag and is
+ deprecated.)
+
+ The default value of this flag is auto, which is equivalent to
+ full for every output format except for proto, graph, minrank,
+ and maxrank, for which it is equivalent to deps.
+
+ When this flag is no and --output is one of build, label,
+ label_kind, location, package, proto, record or xml, the outputs
+ will be printed in arbitrary order. This is generally the fastest
+ option. It is not supported, though, when --output is one of
+ graph, minrank or maxrank: with these formats, Bazel will always
+ print results ordered by the dependency order or rank.
+
+ When this flag is deps, Bazel will print results ordered by the
+ dependency order. However, nodes that are unordered by the
+ dependency order (because there is no path from either one to the
+ other) may be printed in any order.
+
+ When this flag is full, Bazel will print results ordered by the
+ dependency order, with unordered nodes ordered alphabetically or
+ reverse alphabetically, depending on the output format. This may
+ be slower than the other options, and so should only be used when
+ deterministic results are important: with this option, running
+ the same query multiple times is guaranteed to always produce
+ the same output.
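+ As an illustration of why full output is deterministic, here is a plain-Python sketch (not Bazel's implementation) of a topological ordering with alphabetical tie-breaking; nodes that the dependency relation leaves unordered always come out in the same order:

```python
import heapq

def ordered_output(deps):
    """Topological order with alphabetical tie-breaking: Kahn's
    algorithm driven by a heap, so nodes that are unordered by the
    dependency relation are emitted alphabetically."""
    indegree = {n: 0 for n in deps}
    for n in deps:
        for d in deps[n]:
            indegree[d] += 1
    heap = [n for n, deg in indegree.items() if deg == 0]
    heapq.heapify(heap)
    out = []
    while heap:
        n = heapq.heappop(heap)
        out.append(n)
        for d in deps[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                heapq.heappush(heap, d)
    return out

# //a and //b are unordered by the dependency relation, so a plain
# dependency order could print them either way; with tie-breaking
# the result is always the same.
graph = {"//top": ["//b", "//a"], "//a": ["//base"],
         "//b": ["//base"], "//base": []}
print(ordered_output(graph))
# -> ['//top', '//a', '//b', '//base']
```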
+
--output build
+ With this option, the representation of each target is as if it
+ were hand-written in the BUILD language. All variables and
+ function calls (e.g. glob, macros) are expanded, which is useful
+ for seeing the effect of Skylark macros. Additionally, each
+ effective rule is annotated with the name of the macro (if any;
+ see generator_name and generator_function) that produced it.
+
+ Although the output uses the same syntax as BUILD files, it is
+ not guaranteed to produce a valid BUILD file.
+ --output label
+ With this option, the set of names (or labels) of each target in
+ the resulting graph is printed, one label per line, in topological
+ order (unless --noorder_results is specified; see the notes on the
+ ordering of results). (A topological ordering is one in which a
+ graph node appears earlier than all of its successors.) Of course
+ there are many possible topological orderings of a graph (reverse
+ postorder is just one); which one is chosen is not specified.
+
+ When printing the output of a somepath query, the order in which
+ the nodes are printed is the order of the path.
+
+ Caveat: in some corner cases, there may be two distinct targets
+ with the same label; for example, a sh_binary rule and its sole
+ (implicit) srcs file may both be called foo.sh. If the result of
+ a query contains both of these targets, the output (in label
+ format) will appear to contain a duplicate. When using the
+ label_kind format (see below), the distinction becomes clear: the
+ two targets have the same name, but one has kind sh_binary rule
+ and the other kind source file.
+
--output label_kind
+ Like label, this output format prints the labels of each target
+ in the resulting graph, in topological order, but it additionally
+ precedes each label with the kind of the target.
+
--output minrank
--output maxrank
+ Like label, the minrank and maxrank output formats print the
+ labels of each target in the resulting graph, but instead of
+ appearing in topological order, they appear in rank order,
+ preceded by their rank number. These are unaffected by the
+ --[no]order_results flag (see the notes on the ordering of
+ results).
+
+ There are two variants of this format: minrank ranks each node by
+ the length of the shortest path from a root node to it. "Root"
+ nodes (those which have no incoming edges) are of rank 0, their
+ successors are of rank 1, etc. (As always, edges point from a
+ target to its prerequisites: the targets it depends upon.)
+
+ maxrank ranks each node by the length of the longest path from a
+ root node to it. Again, "roots" have rank 0; all other nodes have
+ a rank which is one greater than the maximum rank of all their
+ predecessors.
+
+ All nodes in a cycle are considered of equal rank. (Most graphs
+ are acyclic, but cycles do occur, simply because BUILD files
+ contain erroneous cycles.)
+ These output formats are useful for discovering how deep a graph
+ is. If used for the result of a deps(x), rdeps(x), or allpaths
+ query, then the rank number is equal to the length of the
+ shortest (with minrank) or longest (with maxrank) path from x to
+ a node in that rank. maxrank can be used to determine the longest
+ sequence of build steps required to build a target.
+
+ Please note, the ranked output of a somepath query is basically
+ meaningless because somepath does not guarantee to return either
+ a shortest or a longest path, and it may include "transitive"
+ edges from one path node to another that are not direct edges in
+ the original graph.
+
+ For example, the graph on the left yields the outputs on the
+ right when --output minrank and --output maxrank are specified,
+ respectively.
+
+ minrank          maxrank
+ 0 //c:c          0 //c:c
+ 1 //b:b          1 //b:b
+ 1 //a:a          2 //a:a
+ 2 //b:b.cc       2 //b:b.cc
+ 2 //a:a.cc       3 //a:a.cc
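+ The two rankings can be reproduced with a short plain-Python sketch over a hand-written dependency graph matching the tables above (edges run from a target to its prerequisites; this is an illustration, not Bazel's implementation):

```python
def ranks(deps, longest):
    """Rank nodes by shortest (minrank) or longest (maxrank) path
    from a root, where a root has no incoming edges.
    Assumes an acyclic graph."""
    preds = {n: [] for n in deps}
    for n, ds in deps.items():
        for d in ds:
            preds[d].append(n)
    rank = {}
    def rank_of(n):
        if n not in rank:
            choose = max if longest else min
            rank[n] = (0 if not preds[n]
                       else 1 + choose(rank_of(p) for p in preds[n]))
        return rank[n]
    for n in deps:
        rank_of(n)
    return rank

# //c:c depends on //a:a and //b:b; //b:b also depends on //a:a;
# each library depends on its source file.
graph = {
    "//c:c": ["//a:a", "//b:b"],
    "//b:b": ["//a:a", "//b:b.cc"],
    "//a:a": ["//a:a.cc"],
    "//a:a.cc": [],
    "//b:b.cc": [],
}
print(ranks(graph, longest=False))  # minrank
print(ranks(graph, longest=True))   # maxrank
```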
--output location
+ Like label_kind, this option prints out, for each target in the
+ result, the target's kind and label, but it is prefixed by a
+ string describing the location of that target, as a filename and
+ line number. The format resembles the output of grep. Thus, tools
+ that can parse the latter (such as Emacs or vi) can also use the
+ query output to step through a series of matches, allowing the
+ Bazel query tool to be used as a dependency-graph-aware "grep for
+ BUILD files".
+
+ The location information varies by target kind (see the kind operator). For rules, the + location of the rule's declaration within the BUILD file is printed. + For source files, the location of line 1 of the actual file is + printed. For a generated file, the location of the rule that + generates it is printed. (The query tool does not have sufficient + information to find the actual location of the generated file, and + in any case, it might not exist if a build has not yet been + performed.) +
+ --output package
+ This option prints the name of all packages to which + some target in the result set belongs. The names are printed in + lexicographical order; duplicates are excluded. Formally, this + is a projection from the set of labels (package, target) onto + packages. +
+ +
+ In conjunction with the deps(...) query, this output option can
+ be used to find the set of packages that must be checked out in
+ order to build a given set of targets.
+
--output graph
+ This option causes the query result to be printed as a directed
+ graph in the popular AT&T GraphViz format. Typically the result
+ is saved to a file and then rendered with the dot tool to an
+ image format such as .png or .svg. (If the dot program is not
+ installed on your workstation, you can install it using the
+ command sudo apt-get install graphviz.) See the example section
+ below for a sample invocation.
+
+ This output format is particularly useful for allpaths, deps, or
+ rdeps queries, where the result includes a set of paths that
+ cannot be easily visualized when rendered in a linear form, such
+ as with --output label.
+
+ By default, the graph is rendered in a factored form. That is,
+ topologically-equivalent nodes are merged together into a single
+ node with multiple labels. This makes the graph more compact and
+ readable, because typical result graphs contain highly repetitive
+ patterns. For example, a java_library rule may depend on hundreds
+ of Java source files all generated by the same genrule; in the
+ factored graph, all these files are represented by a single node.
+ This behavior may be disabled with the --nograph:factored option.
+
--graph:node_limit n
+ The option specifies the maximum length of the label string for a
+ graph node in the output. Longer labels will be truncated; -1
+ disables truncation. Due to the factored form in which graphs are
+ usually printed, the node labels may be very long. GraphViz
+ cannot handle labels exceeding 1024 characters, which is the
+ default value of this option. This option has no effect unless
+ --output=graph is being used.
+
--[no]graph:factored
+ By default, graphs are displayed in factored form, as explained
+ above. When --nograph:factored is specified, graphs are printed
+ without factoring. This makes visualization using GraphViz
+ impractical, but the simpler format may ease processing by other
+ tools (e.g. grep). This option has no effect unless
+ --output=graph is being used.
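+ The factoring idea can be sketched in plain Python; here "topologically equivalent" is taken to mean identical predecessor and successor sets, an assumption made for illustration rather than Bazel's exact criterion:

```python
from collections import defaultdict

def factor(deps):
    """Merge nodes whose predecessor and successor sets are
    identical into a single multi-label node: the idea behind
    factored graph output (a sketch, not Bazel's algorithm)."""
    pred_map = {n: set() for n in deps}
    for n, ds in deps.items():
        for d in ds:
            pred_map[d].add(n)
    groups = defaultdict(list)
    for n in deps:
        key = (frozenset(deps[n]), frozenset(pred_map[n]))
        groups[key].append(n)
    # Each inner list is one factored node carrying several labels.
    return sorted(sorted(g) for g in groups.values())

# A rule's generated sources all share the same single consumer
# and have no prerequisites, so they collapse into one node.
graph = {"//x:lib": ["//x:gen1.java", "//x:gen2.java"],
         "//x:gen1.java": [], "//x:gen2.java": []}
print(factor(graph))
# -> [['//x:gen1.java', '//x:gen2.java'], ['//x:lib']]
```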
+
--output xml
+ This option causes the resulting targets to be printed in an XML
+ form. The output starts with an XML header such as this:
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <query version="2">
+
+ and then continues with an XML element for each target in the
+ result graph, in topological order (unless unordered results are
+ requested), and then finishes with a terminating
+
+ </query>
+
+ Simple entries are emitted for targets of file kind:
+ <source-file name='//foo:foo_main.cc' .../> + <generated-file name='//foo:libfoo.so' .../> ++
+ But for rules, the XML is structured and contains definitions of all + the attributes of the rule, including those whose value was not + explicitly specified in the rule's BUILD file. +
+
+ Additionally, the result includes rule-input and rule-output
+ elements so that the topology of the dependency graph can be
+ reconstructed without having to know that, for example, the
+ elements of the srcs attribute are forward dependencies
+ (prerequisites) and the contents of the outs attribute are
+ backward dependencies (consumers).
+
+ rule-input elements for implicit dependencies are suppressed if
+ --noimplicit_deps is specified.
+
+ <rule class='cc_binary rule' name='//foo:foo' ...> + <list name='srcs'> + <label value='//foo:foo_main.cc'/> + <label value='//foo:bar.cc'/> + ... + </list> + <list name='deps'> + <label value='//common:common'/> + <label value='//collections:collections'/> + ... + </list> + <list name='data'> + ... + </list> + <int name='linkstatic' value='0'/> + <int name='linkshared' value='0'/> + <list name='licenses'/> + <list name='distribs'> + <distribution value="INTERNAL" /> + </list> + <rule-input name="//common:common" /> + <rule-input name="//collections:collections" /> + <rule-input name="//foo:foo_main.cc" /> + <rule-input name="//foo:bar.cc" /> + ... + </rule> ++ +
+ Every XML element for a target contains a name attribute, whose
+ value is the target's label, and a location attribute, whose
+ value is the target's location as printed by --output location.
+
--[no]xml:line_numbers
+ By default, the locations displayed in the XML output contain line numbers.
+ When --noxml:line_numbers
is specified, line numbers are not
+ printed.
+
--[no]xml:default_values
+ By default, XML output does not include rule attributes whose
+ value is the default value for that kind of attribute (e.g.
+ because it was not specified in the BUILD file, or because the
+ default value was provided explicitly). This option causes such
+ attribute values to be included in the XML output.
+ + +
+ If the build depends on rules from external repositories (defined
+ in the WORKSPACE file) then query results will include these
+ dependencies. For example, if //foo:bar depends on
+ //external:some-lib, and //external:some-lib is bound to
+ @other-repo//baz:lib, then bazel query 'deps(//foo:bar)' will
+ list both @other-repo//baz:lib and //external:some-lib as
+ dependencies.
+
+ External repositories themselves are not dependencies of a build.
+ That is, in the example above, //external:other-repo is not a
+ dependency. It can be queried for as a member of the //external
+ package, though, for example:
+
+ +
+ $ # Querying over all members of //external returns the repository. + $ bazel query 'kind(maven_jar, //external:*)' + //external:other-repo + + $ # ...but the repository is not a dependency. + $ bazel query 'kind(maven_jar, deps(//foo:bar))' + INFO: Empty results +diff --git a/site/versions/master/docs/rule-challenges.md b/site/versions/master/docs/rule-challenges.md new file mode 100644 index 0000000000..7b92655b91 --- /dev/null +++ b/site/versions/master/docs/rule-challenges.md @@ -0,0 +1,214 @@ +--- +layout: documentation +title: Challenges of Writing Rules. +--- + +# Challenges of Writing Rules. + +We have heard feedback from various people that they have +difficulty to write efficient Bazel rules. There is no single root cause, but +it’s due to a combination of historical circumstances and intrinsic complexity +in the problem domain. This document attempts to give a high level overview of +the specific issues that we believe to be the main contributors. + +* Assumption: Aim for Correctness, Throughput, Ease of Use & Latency +* Assumption: Large Scale Repositories +* Assumption: BUILD-like Description Language +* Intrinsic: Remote Execution and Caching are Hard +* Historic: Hard Separation between Loading, Analysis, and Execution is + Outdated, but still affects the API +* Intrinsic: Using Change Information for Correct and Fast Incremental Builds + requires Unusual Coding Patterns +* Intrinsic: Avoiding Quadratic Time and Memory Consumption is Hard + +## Assumption: Aim for Correctness, Throughput, Ease of Use & Latency + +We assume that the build system needs to be first and foremost correct with +respect to incremental builds, i.e., for a given source tree, the output of the +same build should always be the same, regardless of what the output tree looks +like. In the first approximation, this means Bazel needs to know every single +input that goes into a given build step, such that it can rerun that step if any +of the inputs change. 
There are limits to how correct Bazel can get, as it leaks +some information such as date / time of the build, and ignores certain types of +changes such as changes to file attributes. Sandboxing helps ensure correctness +by preventing reads to undeclared input files. Besides the intrinsic limits of +the system, there are a few known correctness issues, most of which are related +to Fileset or the C++ rules, which are both hard problems. We have long-term +efforts to fix these. + +The second goal of the build system is to have high throughput; we are +permanently pushing the boundaries of what can be done within the current +machine allocation for a remote execution service. If the remote execution +service gets overloaded, nobody can get work done. + +Ease of use comes next, i.e., of multiple correct approaches with the same (or +similar) footprint of the remote execution service, we choose the one that is +easier to use. + +For the purpose of this document, latency denotes the time it takes from +starting a build to getting the intended result, whether that is a test log from +a passing or failing test, or an error message that a BUILD file has a +typo. + +Note that these goals often overlap; latency is as much a function of throughput +of the remote execution service as is correctness relevant for ease of use. + + +## Assumption: Large Scale Repositories + +The build system needs to operate at the scale of large repositories where large +scale means that it does not fit on a single hard drive, so it is impossible to +do a full checkout on virtually all developer machines. A medium-sized build +will need to read and parse tens of thousands of BUILD files, and evaluate +hundreds of thousands of globs. While it is theoretically possible to read all +BUILD files on a single machine, we have not yet been able to do so within a +reasonable amount of time and memory. As such, it is critical that BUILD files +can be loaded and parsed independently. 
+ + +## Assumption: BUILD-like Description Language + +For the purpose of this document, we assume a configuration language that is +roughly similar to BUILD files, i.e., declaration of library and binary rules +and their interdependencies. BUILD files can be read and parsed independently, +and we avoid even looking at source files whenever we can (except for +existence). + + +## Intrinsic: Remote Execution and Caching are Hard + +Remote execution and caching improve build times in large repositories by +roughly two orders of magnitude compared to running the build on a single +machine. However, the scale at which it needs to perform is staggering: Google's +remote execution service is designed to handle a huge number of requests per +second, and the protocol carefully avoids unnecessary roundtrips as well as +unnecessary work on the service side. + +At this time, the protocol requires that the build system knows all inputs to a +given action ahead of time; the build system then computes a unique action +fingerprint, and asks the scheduler for a cache hit. If a cache hit is found, +the scheduler replies with the digests of the output files; the files itself are +addressed by digest later on. However, this imposes restrictions on the Bazel +rules, which need to declare all input files ahead of time. + + +## Historic: Hard Separation between Loading, Analysis, and Execution is Outdated, but still affects the API + +Technically, it is sufficient for a rule to know the input and output files of +an action just before the action is sent to remote execution. However, the +original Bazel code base had a strict separation of loading packages, then +analyzing rules using a configuration (command-line flags, essentially), and +only then running any actions. This distinction is still part of the rules API +today, even though the core of Bazel no longer requires it (more details below). 
+ +That means that the rules API requires a declarative description of the rule +interface (what attributes it has, types of attributes). There are some +exceptions where the API allows custom code to run during the loading phase to +compute implicit names of output files and implicit values of attributes. For +example, a java_library rule named ‘foo’ implicitly generates an output named +‘libfoo.jar’, which can be referenced from other rules in the build graph. + +Furthermore, the analysis of a rule cannot read any source files or inspect the +output of an action; instead, it needs to generate a partial directed bipartite +graph of build steps and output file names that is only determined from the rule +itself and its dependencies. + + +## Intrinsic: Using Change Information for Correct and Fast Incremental Builds requires Unusual Coding Patterns + +Above, we argued that in order to be correct, Bazel needs to know all the input +files that go into a build step in order to detect whether that build step is +still up-to-date. The same is true for package loading and rule analysis, and we +have designed [Skyframe] (http://www.bazel.io/docs/skyframe.html) to handle this +in general. Skyframe is a graph library and evaluation framework that takes a +goal node (such as ‘build //foo with these options’), and breaks it down into +its constituent parts, which are then evaluated and combined to yield this +result. As part of this process, Skyframe reads packages, analyzes rules, and +executes actions. + +At each node, Skyframe tracks exactly which nodes any given node used to compute +its own output, all the way from the goal node down to the input files (which +are also Skyframe nodes). Having this graph explicitly represented in memory +allows the build system to identify exactly which nodes are affected by a given +change to an input file (including creation or deletion of an input file), doing +the minimal amount of work to restore the output tree to its intended state. 
+ +As part of this, each node performs a dependency discovery process; i.e., each +node can declare dependencies, and then use the contents of those dependencies +to declare even further dependencies. In principle, this maps well to a +thread-per-node model. However, medium-sized builds contain hundreds of +thousands of Skyframe nodes, which isn’t easily possible with current Java +technology (and for historical reasons, we’re currently tied to using Java, so +no lightweight threads and no continuations). + +Instead, Bazel uses a fixed-size thread pool. However, that means that if a node +declares a dependency that isn’t available yet, we may have to abort that +evaluation and restart it (possibly in another thread), when the dependency is +available. This, in turn, means that nodes should not do this excessively; a +node that declares N dependencies serially can potentially be restarted N times, +costing O(N^2) time. Instead, we aim for up-front bulk declaration of +dependencies, which sometimes requires reorganizing the code, or even splitting +a node into multiple nodes to limit the number of restarts. + +Note that this technology isn’t currently available in the rules API; instead, +the rules API is still defined using the legacy concepts of loading, analysis, +and execution phases. However, a fundamental restriction is that all accesses to +other nodes have to go through the framework so that it can track the +corresponding dependencies. Regardless of the language in which the build system +is implemented or in which the rules are written (they don’t have to be the +same), rule authors must not use standard libraries or patterns that bypass +Skyframe. For Java, that means avoiding java.io.File as well as any form of +reflection, and any library that does either. Libraries that support dependency +injection of these low-level interfaces still need to be setup correctly for +Skyframe. 
+ +This strongly suggests to avoid exposing rule authors to a full language runtime +in the first place. The danger of accidental use of such APIs is just too big - +several Bazel bugs in the past were caused by rules using unsafe APIs, even +though the rules were written by the Bazel team, i.e., by the domain experts. + + +## Intrinsic: Avoiding Quadratic Time and Memory Consumption is Hard + +To make matters worse, apart from the requirements imposed by Skyframe, the +historical constraints of using Java, and the outdatedness of the rules API, +accidentally introducing quadratic time or memory consumption is a fundamental +problem in any build system based on library and binary rules. There are two +very common patterns that introduce quadratic memory consumption (and therefore +quadratic time consumption). + +1. Chains of Library Rules +Consider the case of a chain of library rules A depends on B, depends on C, and +so on. Then, we want to compute some property over the transitive closure of +these rules, such as the Java runtime classpath, or the C++ linker command for +each library. Naively, we might take a standard list implementation; however, +this already introduces quadratic memory consumption: the first library +contains one entry on the classpath, the second two, the third three, and so +on, for a total of 1+2+3+...+N = O(N^2) entries. + +2. Binary Rules Depending on the Same Library Rules +Consider the case where a set of binaries that depend on the same library +rules; for example, you might have a number of test rules that test the same +library code. Let’s say out of N rules, half the rules are binary rules, and +the other half library rules. Now consider that each binary makes a copy of +some property computed over the transitive closure of library rules, such as +the Java runtime classpath, or the C++ linker command line. For example, it +could expand the command line string representation of the C++ link action. 
N/2 +copies of N/2 elements is O(N^2) memory. + + +### Custom Collections Classes to Avoid Quadratic Complexity + +Bazel is heavily affected by both of these scenarios, so we introduced a set of +custom collection classes that effectively compress the information in memory by +avoiding the copy at each step. Almost all of these data structures have set +semantics, so we called the class NestedSet. The majority of changes to reduce +Bazel’s memory consumption over the past several years were changes to use +NestedSet instead of whatever was previously used. + +Unfortunately, usage of NestedSet does not automatically solve all the issues; +in particular, even just iterating over a NestedSet in each rule re-introduces +quadratic time consumption. NestedSet also has some helper methods to facilitate +interoperability with normal collections classes; unfortunately, accidentally +passing a NestedSet to one of these methods leads to copying behavior, and +reintroduces quadratic memory consumption. diff --git a/site/versions/master/docs/skylark/aspects.md b/site/versions/master/docs/skylark/aspects.md new file mode 100644 index 0000000000..6bafa20b65 --- /dev/null +++ b/site/versions/master/docs/skylark/aspects.md @@ -0,0 +1,191 @@ +--- +layout: documentation +title: Aspects +--- +# Aspects + +**Status: Experimental**. We may make breaking changes to the API, but we will + help you update your code. + +Aspects allow augmenting build dependency graphs with additional information +and actions. Some typical scenarios when aspects can be useful: + +* IDEs that integrate Bazel can use aspects to collect information about the + project +* Code generation tools can leverage aspects to execute on their inputs in + "target-agnostic" manner. 
As an example, BUILD files can specify a hierarchy + of [protobuf](https://developers.google.com/protocol-buffers/) library + definitions, and language-specific rules can use aspects to attach + actions generating protobuf support code for a particular language + +## Aspect basics + +Bazel BUILD files provide a description of a project’s source code: what source +files are part of the project, what artifacts (_targets_) should be built from +those files, what the dependencies between those files are, etc. Bazel uses +this information to perform a build, that is, it figures out the set of actions +needed to produce the artifacts (such as running compiler or linker) and +executes those actions. Bazel accomplishes this by constructing a _dependency +graph_ between targets and visiting this graph to collect those actions. + +Consider the following BUILD file: + +```python +java_library(name = 'W', ...) +java_library(name = 'Y', deps = [':W'], ...) +java_library(name = 'Z', deps = [':W'], ...) +java_library(name = 'Q', ...) +java_library(name = 'T', deps = [':Q'], ...) +java_library(name = 'X', deps = [':Y',':Z'], runtime_deps = [':T'], ...) +``` + +This BUILD file defines a dependency graph shown in Fig 1. + +
load
). They are
+accessed using the native module.
+
+`extension.bzl`:
+
+```python
+def macro(name, visibility=None):
+ # Creating a native genrule.
+ native.genrule(
+ name = name,
+ outs = [name + '.txt'],
+ cmd = 'echo hello > $@',
+ visibility = visibility,
+ )
+```
+
+`BUILD`:
+
+```python
+load("//pkg:extension.bzl", "macro")
+
+macro(name = "myrule")
+```
+
+## Macro creating multiple rules
+
+There's currently no easy way to create a rule that directly uses the
+action of a native rule. You can work around this using macros:
+
+```python
+def cc_and_something_else_binary(name, srcs, deps, csrcs, cdeps):
+ cc_binary_name = "%s.cc_binary" % name
+
+ native.cc_binary(
+ name = cc_binary_name,
+ srcs = csrcs,
+ deps = cdeps,
+ visibility = ["//visibility:private"]
+ )
+
+ _cc_and_something_else_binary(
+ name = name,
+ srcs = srcs,
+ deps = deps,
+ # A label attribute so that this depends on the internal rule.
+ cc_binary = cc_binary_name,
+ # Redundant labels attributes so that the rule with this target name knows
+ # about everything it would know about if cc_and_something_else_binary
+ # were an actual rule instead of a macro.
+ csrcs = csrcs,
+ cdeps = cdeps)
+
+def _impl(ctx):
+  return struct(
+      # [...]
+      # When instrumenting this rule, again hide implementation from
+      # users.
+      instrumented_files = struct(
+          source_attributes = ["srcs", "csrcs"],
+          dependency_attributes = ["deps", "cdeps"]))
+
+_cc_and_something_else_binary = rule(implementation=_impl)
+```
+
+
+## Conditional instantiation
+
+Macros can look at previously instantiated rules. This is done with
+`native.existing_rule`, which returns information on a single rule defined in the same
+`BUILD` file, e.g.,
+
+```python
+native.existing_rule("descriptor_proto")
+```
+
+This is useful to avoid instantiating the same rule twice, which is an
+error. For example, the following macro will simulate a test suite,
+instantiating tests for diverse flavors of the same test.
+
+`extension.bzl`:
+
+```python
+def system_test(name, test_file, flavor):
+ n = "system_test_%s_%s_test" % (test_file, flavor)
+ if native.existing_rule(n) == None:
+ native.py_test(
+ name = n,
+ srcs = [ "test_driver.py", test_file ],
+ args = [ "--flavor=" + flavor])
+ return n
+
+def system_test_suite(name, test_files, flavors=["default"]):
+ ts = []
+ for flavor in flavors:
+ for test in test_files:
+ ts.append(system_test(name, test, flavor))
+ native.test_suite(name = name, tests = ts)
+```
+
+In the following BUILD file, note how `(basic_test.py, fast)` is emitted for
+both the `smoke` test suite and the `thorough` test suite.
+
+```python
+load("//pkg:extension.bzl", "system_test_suite")
+
+# Run all files through the 'fast' flavor.
+system_test_suite("smoke", flavors=["fast"], test_files=glob(["*_test.py"]))
+
+# Run the basic test through all flavors.
+system_test_suite("thorough", flavors=["fast", "debug", "opt"], test_files=["basic_test.py"])
+```
+
+
+## Aggregating over the BUILD file
+
+Macros can collect information from the BUILD file as processed so far. We call
+this aggregation. The typical example is collecting data from all rules of a
+certain kind. This is done by calling `native.existing_rules`, which
+returns a dictionary representing all rules defined so far in the current BUILD
+file. The dictionary has entries of the form `name` => `rule`, with the values
+using the same format as `native.existing_rule`.
+
+```python
+def archive_cc_src_files(tag):
+  """Create an archive of all C++ sources that have the given tag."""
+  all_src = []
+  for r in native.existing_rules().values():
+    if tag in r["tags"] and r["kind"] == "cc_library":
+      all_src.extend(r["srcs"])
+  native.genrule(name = "archive_" + tag, cmd = "zip $@ $(SRCS)", srcs = all_src, outs = ["out.zip"])
+```
+
+Since `native.existing_rules` constructs a potentially large dictionary, you should avoid
+calling it repeatedly within a BUILD file.
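+Since `native.existing_rules` is only available inside Bazel, the caching
+pattern can be sketched in plain Python with a hypothetical stub standing in
+for it (the stub and the rule data below are illustrative, not Bazel APIs):
+
+```python
+# Hypothetical stub standing in for native.existing_rules(); inside a real
+# .bzl file you would call the native function instead.
+def existing_rules():
+    return {
+        "a": {"kind": "cc_library", "tags": ["archive"], "srcs": ["a.cc"]},
+        "b": {"kind": "cc_library", "tags": [], "srcs": ["b.cc"]},
+        "c": {"kind": "py_library", "tags": ["archive"], "srcs": ["c.py"]},
+    }
+
+def archive_srcs(tags):
+    # Build the (potentially large) dictionary once, outside the loop...
+    rules = existing_rules()
+    all_src = []
+    for tag in tags:
+        # ...and reuse it for every tag instead of calling the function again.
+        for r in rules.values():
+            if tag in r["tags"] and r["kind"] == "cc_library":
+                all_src.extend(r["srcs"])
+    return all_src
+
+print(archive_srcs(["archive"]))  # → ['a.cc']
+```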
+
+## Empty rule
+
+Minimalist example of a rule that does nothing. If you build it, the target will
+succeed (with no generated file).
+
+`empty.bzl`:
+
+```python
+def _impl(ctx):
+ # You may use print for debugging.
+ print("This rule does nothing")
+
+empty = rule(implementation=_impl)
+```
+
+`BUILD`:
+
+```python
+load("//pkg:empty.bzl", "empty")
+
+empty(name = "nothing")
+```
+
+## Rule with attributes
+
+Example of a rule that shows how to declare attributes and access them.
+
+`printer.bzl`:
+
+```python
+def _impl(ctx):
+ # You may use print for debugging.
+ print("Rule name = %s, package = %s" % (ctx.label.name, ctx.label.package))
+
+ # This prints the labels of the deps attribute.
+ print("There are %d deps" % len(ctx.attr.deps))
+ for i in ctx.attr.deps:
+ print("- %s" % i.label)
+ # A label can represent any number of files (possibly 0).
+ print(" files = %s" % [f.path for f in i.files])
+
+printer = rule(
+ implementation=_impl,
+ attrs={
+ # Do not declare "name": It is added automatically.
+ "number": attr.int(default = 1),
+ "deps": attr.label_list(allow_files=True),
+ })
+```
+
+`BUILD`:
+
+```python
+load("//pkg:printer.bzl", "printer")
+
+printer(
+ name = "nothing",
+ deps = [
+ "BUILD",
+ ":other",
+ ],
+)
+
+printer(name = "other")
+```
+
+When you build these targets, the rule prints the information as warnings. No
+file is generated.
+
+## Simple shell command
+
+Example of a rule that runs a shell command on an input file specified by
+the user. The output has the same name as the rule, with a `.size` suffix.
+
+While convenient, shell commands should be used carefully. Generating the
+command line can lead to escaping and injection issues. It can also create
+portability problems. It is often better to declare a binary target in a
+BUILD file and execute it. See the example [executing a binary](#execute-bin).
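+To see what is at stake, Python's standard `shlex.quote` (used here only as an
+illustration outside Bazel; it is not part of the rule API) shows the escaping
+a generated command line needs when a file name contains shell metacharacters:
+
+```python
+import shlex
+
+# A file name containing a space and a shell metacharacter.
+filename = "notes; rm -rf tmp.txt"
+
+# Naive interpolation: the shell would split the name into several words
+# and treat ';' as a command separator.
+unsafe = "stat -L -c%s " + filename
+
+# shlex.quote wraps the name in single quotes so the shell sees one word.
+safe = "stat -L -c%s " + shlex.quote(filename)
+
+print(safe)  # → stat -L -c%s 'notes; rm -rf tmp.txt'
+```
+
+Passing an `arguments` list together with `executable` to `ctx.action`, as in
+the [executing a binary](#execute-bin) example, sidesteps the problem entirely
+because no shell ever parses the file names.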
+
+`size.bzl`:
+
+```python
+def _impl(ctx):
+ output = ctx.outputs.out
+ input = ctx.file.file
+ # The command may only access files declared in inputs.
+ ctx.action(
+ inputs=[input],
+ outputs=[output],
+ progress_message="Getting size of %s" % input.short_path,
+ command="stat -L -c%%s %s > %s" % (input.path, output.path))
+
+size = rule(
+ implementation=_impl,
+ attrs={"file": attr.label(mandatory=True, allow_files=True, single_file=True)},
+ outputs={"out": "%{name}.size"},
+)
+```
+
+`foo.txt`:
+
+```
+Hello
+```
+
+`BUILD`:
+
+```python
+load("//pkg:size.bzl", "size")
+
+size(
+ name = "foo_size",
+ file = "foo.txt",
+)
+```
+
+## Write string to a file
+
+Example of a rule that writes a string to a file.
+
+`file.bzl`:
+
+```python
+def _impl(ctx):
+ output = ctx.outputs.out
+ ctx.file_action(output=output, content=ctx.attr.content)
+
+file = rule(
+ implementation=_impl,
+ attrs={"content": attr.string()},
+ outputs={"out": "%{name}.txt"},
+)
+```
+
+`BUILD`:
+
+```python
+load("//pkg:file.bzl", "file")
+
+file(
+ name = "hello",
+ content = "Hello world",
+)
+```
+
+
+## Execute a binary
+
+This rule executes an existing binary. In this particular example, the
+binary is a tool that merges files. During the analysis phase, we cannot
+access any arbitrary label: the dependency must have been previously
+declared. To do so, the rule needs a label attribute. In this example, we
+will give the label a default value and make it private (so that it is not
+visible to end users). Keeping the label private can simplify maintenance,
+since you can easily change the arguments and flags you pass to the tool.
+
+`execute.bzl`:
+
+```python
+def _impl(ctx):
+ # The list of arguments we pass to the script.
+ args = [ctx.outputs.out.path] + [f.path for f in ctx.files.srcs]
+ # Action to call the script.
+ ctx.action(
+ inputs=ctx.files.srcs,
+ outputs=[ctx.outputs.out],
+ arguments=args,
+ progress_message="Merging into %s" % ctx.outputs.out.short_path,
+ executable=ctx.executable._merge_tool)
+
+concat = rule(
+ implementation=_impl,
+ attrs={
+ "srcs": attr.label_list(allow_files=True),
+ "out": attr.output(mandatory=True),
+ "_merge_tool": attr.label(executable=True, allow_files=True,
+ default=Label("//pkg:merge"))
+ }
+)
+```
+
+Any executable target can be used. In this example, we will use a
+`sh_binary` rule that concatenates all the inputs.
+
+`BUILD`:
+
+```python
+load("//pkg:execute.bzl", "concat")
+
+concat(
+ name = "sh",
+ srcs = [
+ "header.html",
+ "body.html",
+ "footer.html",
+ ],
+ out = "page.html",
+)
+
+# This target is used by the concat rule.
+sh_binary(
+ name = "merge",
+ srcs = ["merge.sh"],
+)
+```
+
+`merge.sh`:
+
+```bash
+#!/bin/bash
+
+# Quote "$@" and "$out" so file names containing spaces survive word splitting.
+out="$1"
+shift
+cat "$@" > "$out"
+```
+
+`header.html`:
+
+```
+
+```
+
+`body.html`:
+
+```
+content
+```
+
+`footer.html`:
+
+```
+
+```
+
+## Execute an input binary
+
+This rule has a mandatory `binary` attribute. It is a label that can refer
+only to executable rules or files.
+
+`execute.bzl`:
+
+```python
+def _impl(ctx):
+ # ctx.new_file is used for temporary files.
+ # If it should be visible to the user, declare it in rule.outputs instead.
+ f = ctx.new_file(ctx.configuration.bin_dir, "hello")
+ # As with outputs, each time you declare a file,
+ # you need an action to generate it.
+ ctx.file_action(output=f, content=ctx.attr.input_content)
+
+ ctx.action(
+ inputs=[f],
+ outputs=[ctx.outputs.out],
+ executable=ctx.executable.binary,
+ progress_message="Executing %s" % ctx.executable.binary.short_path,
+ arguments=[
+ f.path,
+ ctx.outputs.out.path, # Access the output file using
+ # ctx.outputs.