| Commit message | Author | Age |
| |
Move the message-digest cloning out of Fingerprint and into DigestHashFunction, to make it possible to configure Fingerprint to use different hash functions. We keep the default MD5 for now; we'd like it to use the global default, but we want to isolate the configuration change from any change that adds potential contention.
RELNOTES: None.
PiperOrigin-RevId: 208502993
| |
These show up as directories. Filter them out and return null from the path converter, which should cause those files to be omitted from any build events.
RELNOTES: None
PiperOrigin-RevId: 208244910
| |
PiperOrigin-RevId: 208009857
| |
* Refactor the Chunker constructor into a builder, to reduce constructor overloading.
* Pass the digest into the Chunker where we already have it.
* Rework ensureInputsPresent to not lose the missing digests during processing, so we can pass them to the Chunker.
RELNOTES: None
PiperOrigin-RevId: 207297915
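The builder refactor described above can be sketched as follows. This is an illustrative standalone class, not Bazel's actual Chunker; the field names, the String digest type, and the default chunk size are assumptions:

```java
// Hypothetical sketch of a constructor-to-builder refactor like the one above.
public class Chunker {
    private final byte[] data;
    private final int chunkSize;
    private final String digest; // may be precomputed upstream and passed in

    private Chunker(byte[] data, int chunkSize, String digest) {
        this.data = data;
        this.chunkSize = chunkSize;
        this.digest = digest;
    }

    public int chunkSize() { return chunkSize; }
    public String digest() { return digest; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private byte[] data = new byte[0];
        private int chunkSize = 16 * 1024; // assumed default
        private String digest;

        public Builder setData(byte[] data) { this.data = data; return this; }
        public Builder setChunkSize(int chunkSize) { this.chunkSize = chunkSize; return this; }
        // Passing a known digest avoids recomputing it at upload time.
        public Builder setDigest(String digest) { this.digest = digest; return this; }

        public Chunker build() { return new Chunker(data, chunkSize, digest); }
    }
}
```

A builder avoids the combinatorial growth of overloads when optional parameters (like a precomputed digest) are added.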
| |
RELNOTES: None
PiperOrigin-RevId: 207137932
| |
Attempt to fix #5711, "java.util.concurrent.RejectedExecutionException: event executor terminated", by having `HttpBlobStore.close()` no-op when called more than once.
I'm rolling a patched version of bazel for us internally based on 0.15.2. Should be able to say definitively in a couple days whether or not this addresses the issue, but it seems like it should (and @buchgr agrees).
Closes #5725.
PiperOrigin-RevId: 207089681
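The idempotent close() described above can be sketched with an AtomicBoolean guard; the class and counter below are hypothetical stand-ins for HttpBlobStore's shutdown logic:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch: make close() a no-op when called more than once.
public class CloseOnce implements AutoCloseable {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private int closeCount = 0; // counts how often shutdown logic actually ran

    @Override
    public void close() {
        // compareAndSet lets only the first caller run the shutdown logic,
        // so a second close() cannot touch an already-terminated executor.
        if (!closed.compareAndSet(false, true)) {
            return;
        }
        closeCount++; // the real code would shut down the event executor here
    }

    public int closeCount() { return closeCount; }
}
```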
| |
RELNOTES: When Bazel's remote execution feature has to fall back to
local execution for an action, Bazel previously always used
non-sandboxed local execution. From this release on, you can use the new
flag --remote_local_fallback_strategy=<strategy> to tell Bazel which
strategy to use in that case.
PiperOrigin-RevId: 206566380
| |
For downloading output files / directories we trigger all
downloads concurrently and asynchronously in the background
and after that wait for all downloads to finish. However, if
a download failed we did not wait for the remaining downloads
to finish but immediately started deleting partial downloads
and continued with local execution of the action.
That leads to two interesting bugs:
* The cleanup procedure races with the downloads that are still
in progress. As it tries to delete files and directories, new
files and directories are created and that will often
lead to "Directory not empty" errors as seen in #5047.
* The cleanup procedure does not detect the race, succeeds, and
subsequent local execution then fails because not all files have
been deleted.
The solution is to always wait for all downloads to complete
before entering the cleanup routine. Ideally we would also
cancel all outstanding downloads; however, that's not as
straightforward as it seems: the j.u.c.Future API does
not provide a way to cancel a computation and also wait for
that computation to actually terminate. So we'd need
to introduce a separate mechanism to cancel downloads.
RELNOTES: None
PiperOrigin-RevId: 205980446
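The fix above, settling every download before cleanup runs, can be sketched with java.util.concurrent.CompletableFuture. Bazel's actual code uses its own future types; the helper below is illustrative:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Sketch: wait for *all* download futures to settle before cleanup,
// instead of starting cleanup on the first failure.
public class Downloads {
    public static Throwable awaitAll(List<CompletableFuture<Void>> downloads) {
        try {
            // allOf completes only after every component future has settled,
            // so no download is still writing files when this returns.
            CompletableFuture.allOf(downloads.toArray(new CompletableFuture[0])).join();
            return null;
        } catch (CompletionException e) {
            // A failure is surfaced only after all downloads have settled,
            // so the caller may now safely delete partial outputs.
            return e.getCause();
        }
    }
}
```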
| |
This change allows local files referenced by the BEP/BES protocol
to be uploaded to a ByteStream gRPC service.
The ByteStreamUploader is now implicitly also used by the BES
module which has a different lifecycle than the remote module.
We introduce reference counting to ensure that the channel is
closed after it's no longer needed. This also fixes a bug where
we currently leak one socket per remote build until the Bazel
server is shut down.
RELNOTES: None
PiperOrigin-RevId: 204275316
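A minimal sketch of the reference counting described above; the class is a hypothetical stand-in for the shared channel wrapper, releasing the resource only when the last holder lets go:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative reference-counted resource: created with one reference,
// retained by each additional holder, closed on the final release.
public class RefCounted {
    private final AtomicInteger refs = new AtomicInteger(1);
    private boolean released = false;

    public RefCounted retain() {
        refs.incrementAndGet();
        return this;
    }

    public void release() {
        if (refs.decrementAndGet() == 0) {
            released = true; // the real code would close the gRPC channel here
        }
    }

    public boolean isReleased() { return released; }
}
```

With modules of different lifecycles each retaining the channel, neither module can close it out from under the other, and nothing leaks once both release.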
| |
The artifact uploaders may need command-level options.
RELNOTES: None
PiperOrigin-RevId: 204151808
| |
This adds support for Unix sockets to Bazel for the
remote http cache. See corresponding issue #5098
for discussion.
RELNOTES: Introduce the --remote_cache_proxy flag,
which allows for remote http caching to connect
via a unix domain socket.
PiperOrigin-RevId: 204111667
| |
This observably removes any ill effect of CAS transience.
Closes #5229.
PiperOrigin-RevId: 204010317
| |
There can be multiple BuildEventTransports active at
the same time and we need to ensure that each transport
gets its own BuildEventArtifactUploader as these transports
might have different lifecycles.
We do that by introducing another level of indirection via
the BuildEventArtifactUploaderFactory. BlazeModules now
register a factory object instead of an uploader.
In addition, the BuildEventArtifactUploader gets a shutdown()
method that allows it to free any resources associated with it.
PiperOrigin-RevId: 203752092
| |
Instead of just a path, events now include information about the type of file (output, source file, stdout/stderr, test logs, etc.). This information can be used by the uploaders to determine a) whether to upload, b) what kind of lease to give the files.
RELNOTES: None
PiperOrigin-RevId: 203285549
| |
This change limits the number of open TCP connections
for remote caching to 100 by default. We have had error
reports where, in some use cases, Bazel would open so many
TCP connections that it crashed or ran out of sockets. The
maximum number of TCP connections can still be adjusted by
specifying --remote_max_connections.
See also #5047.
RELNOTES: In remote caching we limit the number of open
TCP connections to 100 by default. The number can be adjusted
by specifying the --remote_max_connections flag.
PiperOrigin-RevId: 202958838
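The connection cap described above can be sketched with a counting semaphore; this is an assumed mechanism for illustration, not Bazel's actual implementation:

```java
import java.util.concurrent.Semaphore;

// Sketch: a pool of permits bounds the number of concurrently open
// connections; --remote_max_connections would set the permit count.
public class ConnectionLimiter {
    private final Semaphore permits;

    public ConnectionLimiter(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Returns true if a connection may be opened; false once the cap is hit.
    public boolean tryAcquire() { return permits.tryAcquire(); }

    // Called when a connection is closed, freeing a slot for another one.
    public void release() { permits.release(); }

    public int available() { return permits.availablePermits(); }
}
```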
| |
This changes the BuildEventArtifactUploader to an async interface,
thereby no longer potentially delaying event delivery over the
eventbus. Additionally, the BES transport is changed to start
uploading local files immediately as the events are delivered.
RELNOTES: None
PiperOrigin-RevId: 202694121
| |
non-empty set of output files. This would catch a degenerate case where for some
reason an empty set was returned.
RELNOTES: None.
PiperOrigin-RevId: 202672729
| |
enum.
Now that we aren't using enum names for the hash functions, we also accept the standard names, such as SHA-256.
RELNOTES: None.
PiperOrigin-RevId: 201624286
| |
When set, any action parameter files are written locally upon action execution, even when the action is executed remotely. This is mainly useful for debugging.
This option is effectively implied by --subcommands and --verbose_failures, as it is likely that the user is debugging actions when using these flags.
RELNOTES: Add --materialize_param_files flag to write parameter files even when actions are executed remotely.
PiperOrigin-RevId: 201225566
| |
This should be a no-op, mostly replacing PathConverter with
BuildEventArtifactUploader, since none of the implementations perform any
upload yet.
PiperOrigin-RevId: 200685325
| |
PiperOrigin-RevId: 199732415
| |
This fixes a regression from v0.13. When the local disk cache flags were
unified into `--disk_cache`, it became impossible to override a default
cache location such that the cache became disabled. This prevents
canarying of remote execution in the presence of a default bazelrc that
enables the disk cache.
Fixes #5308
Closes #5338.
PiperOrigin-RevId: 199613922
| |
(minor) ActionFS now implements MetadataProvider.getInput
PiperOrigin-RevId: 199575194
| |
This change introduces concurrent downloads of action outputs
for remote caching/execution. So far, for an action we would
download one output after the other which isn't as bad as it
sounds as we would typically run dozens or hundreds of actions
in parallel. However, for actions with a lot of outputs or graphs
that allow limited parallelism we expect this change to positively
impact performance.
Note that with this change the AbstractRemoteActionCache will
always attempt to download all outputs concurrently. The actual
parallelism is controlled by the underlying network transport.
The gRPC transport currently enforces no limits on concurrent
calls, which should be fine given that all calls are multiplexed
over a single network connection. The HTTP/1.1 transport also
enforces no parallelism by default, but I have added the
--remote_max_connections=INT flag, which allows specifying an upper
bound on the number of concurrently open network connections.
I have introduced this flag as a defensive mechanism for users
whose environment might enforce an upper bound on the number of open
connections, as with this change it's possible for the number of
concurrently open connections to increase dramatically (from
NumParallelActions to NumParallelActions * SumParallelActionOutputs).
A side effect of this change is that it puts the infrastructure
for retries and circuit breaking for the HttpBlobStore in place.
RELNOTES: None
PiperOrigin-RevId: 199005510
| |
Actual class to be removed in a later change.
PiperOrigin-RevId: 198937695
| |
- It is now an error to specify the gRPC remote execution backend in
combination with a local disk or HTTP-based cache.
- It is now an error to specify both local disk and HTTP-based caches.
Note that before this CL, enabling the local disk cache silently disabled
remote execution - we now give an error in that case.
With these combinations no longer accepted, enabling remote execution
now means that we only create a RemoteSpawnRunner and don't provide a
SpawnCache. This is not a semantic change - we never created both.
In principle, it should be possible for users to combine local execution with
remote caching for actions that are marked local or no-remote, and still use
remote execution otherwise. However, Bazel cannot currently express this
combination of execution strategies.
RELNOTES: The --experimental_remote_spawn_cache flag is now enabled by default, and remote caching no longer needs --*_strategy=remote flags (it will fail if they are specified).
PiperOrigin-RevId: 198280398
| |
Netty's HttpClientCodec always emits a LastHttpContent event and so we don't need to track the received bytes manually to know when we are done reading. The HttpBlobStore compares the hashes of the received bytes to give us confidence that what we received is correct.
Closes #5244.
PiperOrigin-RevId: 197887877
| |
This constructor was creating an Exception with a null message, leading to possible
NullPointerExceptions in a few places in our codebase. The call sites have
been replaced with calls to AbruptException(String message, ExitCode exitCode) with
a meaningful message.
PiperOrigin-RevId: 196973540
| |
The main motivation for this change is to act as a workaround for #4751. Additionally,
this reduces the number of stat() system calls significantly e.g. by 50% when building Bazel
(600K vs 1.2M).
cc: @werkt @ola-rozenfeld
Closes #5204.
PiperOrigin-RevId: 196878093
| |
Fixes #5047
Closes #5209.
PiperOrigin-RevId: 196832678
| |
Only the last commit needs to be reviewed, as it's rebased on https://github.com/bazelbuild/bazel/pull/5101
Closes #5117.
PiperOrigin-RevId: 195649921
| |
directory structures.
When building a parent node from action inputs, the paths to the files are
sorted. These paths are then broken down into segments and a tree structure
is created from the segments.
The problem is that the segments at each level of the tree structure are not
sorted before they are added to the parent node. This can result in an
unordered directory tree.
For example, the sort order of this list of files
```
/foo/bar-client/bar-client_ijar.jar
/foo/bar/bar_ijar.jar
```
is maintained when it becomes a tree structure
```
foo ->
bar-client ->
bar-client_ijar.jar
bar
bar_ijar.jar
```
which is out of order.
Resolves: #5109
Closes #5110.
PiperOrigin-RevId: 195649710
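The ordering problem above can be reproduced with the two example paths: comparing whole path strings puts bar-client first (since '-' sorts before '/' in ASCII), while comparing directory segment by segment puts bar first. The helper class below is illustrative, not Bazel's tree-building code:

```java
// Demonstrates that sorting whole path strings is not the same as sorting
// the path *segments* at each tree level.
public class SegmentSort {
    // Whole-string comparison, as used before the fix described above.
    public static String firstByPath(String a, String b) {
        return a.compareTo(b) < 0 ? a : b;
    }

    // Segment-by-segment comparison, matching the per-directory order
    // a sorted tree structure actually needs.
    public static String firstBySegment(String a, String b) {
        String[] as = a.split("/");
        String[] bs = b.split("/");
        int n = Math.min(as.length, bs.length);
        for (int i = 0; i < n; i++) {
            int c = as[i].compareTo(bs[i]);
            if (c != 0) {
                return c < 0 ? a : b;
            }
        }
        return as.length <= bs.length ? a : b;
    }
}
```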
| |
This is mostly a roll-forward of 4465dae23de989f1452e93d0a88ac2a289103dd9, which
was reverted by fa36d2f48965b127e8fd397348d16e991135bfb6. The main difference is
that the new behavior is now gated behind the --noremote_allow_symlink_upload
flag.
https://docs.google.com/document/d/1gnOYszitgrLVet3sQk-TKGqIcpkkDsc6aw-izoo-d64
is a design proposal to support symlinks in the remote cache, which would render
this change moot. I'd like to be able to prevent incorrect cache behavior until
that change is implemented, though.
This fixes https://github.com/bazelbuild/bazel/issues/4840 (again).
Closes #5122.
Change-Id: I2136cfe82c2e1a8a9f5856e12a37d42cabd0e299
PiperOrigin-RevId: 195261827
| |
Post ProgressStatus.CHECKING_CACHE if RemoteSpawnCache is checking the cache.
The UI sees CHECKING_CACHE exactly the same as EXECUTING because no UIs
currently have any special behavior for actions in cache-lookup state. This is
still a UX improvement with --experimental_spawn_cache because EXECUTING is
generally more correct than the old action state, which varies from harmless but
unhelpful (no known state) to just wrong (C++ compile actions claimed they were
doing include scanning during cache lookups).
Closes #5130.
Change-Id: I77421c3667c180875216f937fe0713f0e9415a7a
PiperOrigin-RevId: 195233123
| |
Consolidate the --experimental_local_disk_cache and --experimental_local_disk_cache_path
flags into a single --disk_cache= flag. Also, create the cache directory
if it doesn't exist.
RELNOTES: We replaced the --experimental_local_disk_cache and
--experimental_local_disk_cache_path flags with a single --disk_cache
flag. Additionally, Bazel now tries to create the disk cache directory
if it doesn't exist.
Closes #5119.
PiperOrigin-RevId: 195070550
| |
When the HTTP response has a status other than 200 and
also has a Content-Length header set, wait until
all content has been received before completing the user
promise.
In case of any errors, close the channel to make
sure it's not reused, as we don't know what data is left
on the wire.
Closes #5101.
PiperOrigin-RevId: 194787393
| |
There is no need for the cache to be on disk. Originally, there was a
desire to share this cache with other tools... but this never happened.
And, actually, because Bazel is in control of what it runs, it can just
inject the "cached" values into those tools via flags.
Instead, just store the cache in-memory. This avoids having to open and
read the cache on every single action that is locally executed on a Mac.
Results when building a large iOS app from a clean slate show up to a
1% wall time improvement on my Mac Pro 2013 and a reduction in the
variance of the measurements.
This change also gets rid of the OS check from the action execution's
critical path. There is not much use in checking this: if we instantiate
this by mistake, the actual calls will fail. But sometimes we want to
actually run this code on non-macOS systems (e.g. for unit-testing with
mocked tools), so we should allow that. And this change also ensures
that XcodeLocalEnvProviderTest builds and runs...
RELNOTES: None.
PiperOrigin-RevId: 194681802
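The move to an in-memory cache described above amounts to memoizing the expensive lookup; a generic sketch (not the actual cache in Bazel's Xcode support) might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative in-memory cache: the expensive lookup runs once per key and
// every later action reuses the result, with no disk I/O on the hot path.
public class InMemoryCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private int misses = 0; // how many times the expensive lookup actually ran

    public V get(K key, Function<K, V> compute) {
        return cache.computeIfAbsent(key, k -> {
            misses++;
            return compute.apply(k);
        });
    }

    public int misses() { return misses; }
}
```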
| |
Remove the stack trace unless --verbose_failures is on.
TESTED=unit test
PiperOrigin-RevId: 194060440
| |
PiperOrigin-RevId: 193937177
| |
This class will be used to tie a Spawn to a SpawnRunner, and isn't really a policy object. It will carry state such as the expanded inputs and expanded command line.
Currently a context can be passed between different SpawnRunners. This will be addressed independently, so that a context is tied to a particular SpawnRunner.
PiperOrigin-RevId: 193501918
| |
*** Reason for rollback ***
The no-cache tag is not respected (see b/77857812) and thus this breaks remote caching for all projects with symlink outputs.
*** Original change description ***
Only allow regular files and directories spawn outputs to be uploaded to a remote cache.
The remote cache protocol only knows about regular files and
directories. Currently, during action output upload, symlinks are
resolved into regular files. This means cached "executions" of an
action may have different output file types than the original
execution, which can be a footgun. This CL bans symlinks from cachable
spawn outputs and fixes http...
***
PiperOrigin-RevId: 193338629
| |
Write so that they are logged. I'm open to suggestions for the logging format for these calls, since we don't want to log the actual contents of reads/writes because of their size.
PiperOrigin-RevId: 193047886
| |
FindMissingBlobs, GetActionResult so that they are logged. unknown commit must be submitted before this for Watch calls to be logged properly.
PiperOrigin-RevId: 192794535
| |
Fixes #4647
Closes #5003.
PiperOrigin-RevId: 192576694
| |
Fixes #4976, #4935
Closes #4991.
PiperOrigin-RevId: 192269206
| |
necessary for the call to close correctly (e.g. for listeners to receive Status/trailers).
RELNOTES:
PiperOrigin-RevId: 192185329
| |
Second attempt of https://github.com/bazelbuild/bazel/commit/0654620304728a5aecadd58138e96c41135d24e7, which I am rolling back. The problem is that FilterOutputStream.write is just plain wrong and we shouldn't inherit FilterOutputStream at all, but instead do it manually (which actually requires less code).
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixed #4944.
PiperOrigin-RevId: 191215696
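The FilterOutputStream pitfall this commit works around is that the inherited write(byte[], int, int) forwards one byte at a time through write(int). A small sketch (the class names here are made up) shows why a delegating stream must override the bulk method itself:

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Counts how many single-byte writes reach the wrapped stream. Without a
// bulk-write override, FilterOutputStream degrades every array write to
// one write(int) call per byte.
public class CountingStream extends FilterOutputStream {
    private int calls = 0;

    public CountingStream(OutputStream out) { super(out); }

    @Override
    public void write(int b) throws IOException {
        calls++; // one call per byte when the inherited bulk write is used
        out.write(b);
    }

    public int singleByteCalls() { return calls; }

    // Overriding the bulk method forwards the whole array in one call,
    // bypassing the per-byte path entirely.
    public static class BulkCountingStream extends CountingStream {
        public BulkCountingStream(OutputStream out) { super(out); }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            out.write(b, off, len);
        }
    }
}
```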
| |
*** Reason for rollback ***
Not a proper fix.
*** Original change description ***
Enable bulk writes in the HttpBlobStore
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixed #4944.
PiperOrigin-RevId: 191133416
| |
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixed #4944.
PiperOrigin-RevId: 191109352
| |
experimental flag. It also adds a logging handler for Execute calls so that they are logged.
PiperOrigin-RevId: 190991493