| Commit message | Author | Age |
| |
These show up as directories. Filter these out and return null from the path converter, which should cause omission of those files from any build events.
RELNOTES: None
PiperOrigin-RevId: 208244910
| |
* Refactor the Chunker constructor into a builder to reduce constructor overloading.
* Pass the digest in where we already have it.
* Rework ensureInputsPresent so that the missing digests are not lost during processing and can be passed to the Chunker's builder.
RELNOTES: None
PiperOrigin-RevId: 207297915
| |
RELNOTES: None
PiperOrigin-RevId: 207137932
| |
RELNOTES: When using Bazel's remote execution feature and Bazel has to
fall back to local execution for an action, Bazel used non-sandboxed
local execution until now. From this release on, you can use the new
flag --remote_local_fallback_strategy=<strategy> to tell Bazel which
strategy to use in that case.
PiperOrigin-RevId: 206566380
| |
For downloading output files / directories we trigger all
downloads concurrently and asynchronously in the background
and then wait for all downloads to finish. However, if
a download failed we did not wait for the remaining downloads
to finish but immediately started deleting partial downloads
and continued with local execution of the action.
That leads to two interesting bugs:
* The cleanup procedure races with the downloads that are still
in progress. As it tries to delete files and directories, new
files and directories are created, which often leads to
"Directory not empty" errors as seen in #5047.
* The cleanup procedure does not detect the race, succeeds, and
subsequent local execution fails because not all files have
been deleted.
The solution is to always wait for all downloads to complete
before entering the cleanup routine. Ideally we would also
cancel all outstanding downloads; however, that's not as
straightforward as it seems. That is, the j.u.c.Future API does
not provide a way to cancel a computation and also wait for
that computation to have actually terminated. So we'd need
to introduce a separate mechanism to cancel downloads.
RELNOTES: None
PiperOrigin-RevId: 205980446
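The "wait for every download to settle before cleanup" pattern described above can be sketched in a few lines. This is a simplified illustration using j.u.c.CompletableFuture rather than Bazel's actual future types; DownloadBarrier and awaitAll are invented names:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: wait for every download future to settle
// (successfully or not) before any cleanup of partial outputs begins.
public class DownloadBarrier {
  /** Waits for all futures to complete; returns true iff every one succeeded. */
  public static boolean awaitAll(List<CompletableFuture<Void>> downloads) {
    boolean allOk = true;
    for (CompletableFuture<Void> d : downloads) {
      try {
        d.join(); // block until this download settles, even if another failed
      } catch (RuntimeException e) {
        allOk = false; // remember the failure, but keep waiting for the rest
      }
    }
    return allOk; // only now is it safe to delete partial downloads
  }
}
```

The key point is that a failure does not short-circuit the loop, so no delete races with an in-flight download.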
| |
This change allows local files referenced by the BEP/BES protocol
to be uploaded to a ByteStream gRPC service.
The ByteStreamUploader is now implicitly also used by the BES
module, which has a different lifecycle than the remote module.
We introduce reference counting to ensure that the channel is
closed once it's no longer needed. This also fixes a bug where
we leaked one socket per remote build until the Bazel
server was shut down.
RELNOTES: None
PiperOrigin-RevId: 204275316
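The reference-counting idea can be sketched as follows. This is a minimal stand-in, not Bazel's code; in the real change the counted resource is a shared gRPC channel, and RefCounted is an invented name:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of reference counting a shared resource: each module
// (remote, BES) retains the channel and releases it when done; the last
// release actually closes it.
public class RefCounted {
  private final AtomicInteger refs = new AtomicInteger(0);
  private volatile boolean closed = false;

  public void retain() { refs.incrementAndGet(); }

  /** Releases one reference; closes the resource when the count hits zero. */
  public void release() {
    if (refs.decrementAndGet() == 0) {
      closed = true; // stand-in for channel.shutdown()
    }
  }

  public boolean isClosed() { return closed; }
}
```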
| |
This adds support for Unix domain sockets to Bazel for the
remote HTTP cache. See the corresponding issue #5098
for discussion.
RELNOTES: Introduce the --remote_cache_proxy flag,
which allows remote HTTP caching to connect
via a Unix domain socket.
PiperOrigin-RevId: 204111667
| |
This observably removes any ill effect of CAS transience.
Closes #5229.
PiperOrigin-RevId: 204010317
| |
Use try-with-resources to ensure InputStreams that
we open via FileSystem.InputStream(path) are
closed.
Eagerly closing InputStreams avoids hanging on to
file handles until the garbage collector finalizes
the InputStream, meaning Bazel on Windows (and
other processes) can delete or mutate these files.
Hopefully this avoids intermittent file deletion
errors that sometimes occur on Windows.
See https://github.com/bazelbuild/bazel/issues/5512
RELNOTES: none
PiperOrigin-RevId: 203338148
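The pattern this commit applies is plain try-with-resources; a minimal illustration (EagerClose and countBytes are invented names, not the Bazel code):

```java
import java.io.IOException;
import java.io.InputStream;

// try-with-resources guarantees the stream is closed when the block exits,
// instead of holding the file handle until the garbage collector runs.
public class EagerClose {
  /** Reads every byte, closing the stream deterministically on exit. */
  public static int countBytes(InputStream in) throws IOException {
    int n = 0;
    try (InputStream s = in) {
      while (s.read() != -1) {
        n++;
      }
    } // s.close() has run here, releasing the OS file handle
    return n;
  }
}
```

Because the handle is released at the closing brace, another process (or Bazel itself on Windows) can immediately delete or rewrite the file.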
| |
non-empty set of output files. This would catch a degenerate case where, for some
reason, an empty set was returned.
RELNOTES: None.
PiperOrigin-RevId: 202672729
| |
the tests had been disabled for flakiness due to
timeouts. I think the right solution is to remove the
individual test timeouts, as fine-grained timeouts
typically don't make much sense on a highly loaded
machine.
After removing the individual test timeouts, no more
flakiness was found in 1000 runs.
Closes #5465.
PiperOrigin-RevId: 202113697
| |
enum.
Now that we aren't using enum names for the hash functions, we also accept the standard names, such as SHA-256.
RELNOTES: None.
PiperOrigin-RevId: 201624286
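Accepting both the legacy enum-style names and the standard JCA names could look something like this. A hypothetical sketch, not the actual Bazel code; HashNames and its methods are invented, and only the SHA-256 alias is shown:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Locale;

// Hypothetical normalizer: accept both the enum-style name ("SHA256") and
// the standard JCA name ("SHA-256") and resolve them to the same digest.
public class HashNames {
  public static MessageDigest forName(String name) {
    String jcaName = name.toUpperCase(Locale.ROOT);
    if (jcaName.equals("SHA256")) {
      jcaName = "SHA-256"; // map the legacy enum name to the JCA standard name
    }
    try {
      return MessageDigest.getInstance(jcaName);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalArgumentException("unknown hash function: " + name, e);
    }
  }

  /** Hex-encoded digest of data under the named hash function. */
  public static String hexDigest(String name, byte[] data) {
    StringBuilder sb = new StringBuilder();
    for (byte b : forName(name).digest(data)) {
      sb.append(String.format("%02x", b & 0xff));
    }
    return sb.toString();
  }
}
```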
| |
When set, any action parameter files are written locally upon action execution, even when the action is executed remotely. This is mainly useful for debugging.
This option is effectively implied by --subcommands and --verbose_failures, as it is likely that the user is debugging actions when using these flags.
RELNOTES: Add --materialize_param_files flag to write parameter files even when actions are executed remotely.
PiperOrigin-RevId: 201225566
| |
This should be a no-op, mostly replacing PathConverter with
BuildEventArtifactUploader, since none of the implementations perform any
upload yet.
PiperOrigin-RevId: 200685325
| |
Onto #5328
PiperOrigin-RevId: 200410170
| |
Temporary workaround for #5328.
PiperOrigin-RevId: 200224317
| |
Small misc cleanups.
PiperOrigin-RevId: 199797948
| |
PiperOrigin-RevId: 199732415
| |
(minor) ActionFS now implements MetadataProvider.getInput
PiperOrigin-RevId: 199575194
| |
This change introduces concurrent downloads of action outputs
for remote caching/execution. So far, for an action we would
download one output after the other, which isn't as bad as it
sounds since we typically run dozens or hundreds of actions
in parallel. However, for actions with a lot of outputs or graphs
that allow limited parallelism, we expect this change to positively
impact performance.
Note that with this change the AbstractRemoteActionCache will
always attempt to download all outputs concurrently. The actual
parallelism is controlled by the underlying network transport.
The gRPC transport currently enforces no limits on concurrent
calls, which should be fine given that all calls are multiplexed
on a single network connection. The HTTP/1.1 transport also
enforces no parallelism by default, but I have added the
--remote_max_connections=INT flag, which allows specifying an upper
bound on the number of network connections to be open concurrently.
I have introduced this flag as a defensive mechanism for users
whose environment might enforce an upper bound on the number of open
connections, as with this change it's possible for the number of
concurrently open connections to increase dramatically (from
NumParallelActions to NumParallelActions * SumParallelActionOutputs).
A side effect of this change is that it puts the infrastructure
for retries and circuit breaking for the HttpBlobStore in place.
RELNOTES: None
PiperOrigin-RevId: 199005510
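The effect of a connection cap like --remote_max_connections can be sketched with a counting semaphore. This is an illustrative stand-in, not the transport code; ConnectionLimiter is an invented name:

```java
import java.util.concurrent.Semaphore;

// Sketch of the idea behind --remote_max_connections: a counting semaphore
// bounds how many transfers may hold a network connection at once.
public class ConnectionLimiter {
  private final Semaphore permits;

  public ConnectionLimiter(int maxConnections) {
    this.permits = new Semaphore(maxConnections);
  }

  /** Runs one transfer while holding a connection permit. */
  public void download(Runnable transfer) {
    permits.acquireUninterruptibly(); // blocks when maxConnections are in flight
    try {
      transfer.run();
    } finally {
      permits.release(); // hand the connection slot to the next transfer
    }
  }

  public int availableConnections() {
    return permits.availablePermits();
  }
}
```

With no limiter, the number of simultaneous transfers is bounded only by how many action outputs are being downloaded at once.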
| |
Actual class to be removed in a later change.
PiperOrigin-RevId: 198937695
| |
Netty's HttpClientCodec always emits a LastHttpContent event and so we don't need to track the received bytes manually to know when we are done reading. The HttpBlobStore compares the hashes of the received bytes to give us confidence that what we received is correct.
Closes #5244.
PiperOrigin-RevId: 197887877
| |
Only the last commit needs to be reviewed, as it's rebased on https://github.com/bazelbuild/bazel/pull/5101
Closes #5117.
PiperOrigin-RevId: 195649921
| |
directory structures.
When building a parent node from action inputs, the paths to the files are
sorted. These paths are then broken down into segments and a tree structure
is created from the segments.
The problem is that the segments at each level of the tree structure are not
sorted before they are added to the parent node. This can result in an
unordered directory tree.
For example, the sort order of this list of files
```
/foo/bar-client/bar-client_ijar.jar
/foo/bar/bar_ijar.jar
```
is maintained when it becomes a tree structure
```
foo ->
  bar-client ->
    bar-client_ijar.jar
  bar ->
    bar_ijar.jar
```
which is out of order.
Resolves: #5109
Closes #5110.
PiperOrigin-RevId: 195649710
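The ordering problem above can be reproduced in a few lines: '-' sorts before '/', so sorting full path strings puts "bar-client" ahead of "bar" even though "bar" is the smaller segment. A standalone demonstration (TreeSortDemo is an invented name, not Bazel code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Demonstrates the bug: sorting full paths does not yield sorted segments
// at each tree level, because '-' (0x2d) orders before '/' (0x2f).
public class TreeSortDemo {
  /** Returns the first-level child segments under "/foo/", in path-sorted order. */
  public static List<String> topLevelOrder(List<String> paths) {
    List<String> sorted = new ArrayList<>(paths);
    Collections.sort(sorted); // whole-path sort, as before the fix
    List<String> segments = new ArrayList<>();
    for (String p : sorted) {
      String seg = p.split("/")[2]; // second segment, under "/foo/"
      if (!segments.contains(seg)) {
        segments.add(seg);
      }
    }
    return segments;
  }
}
```

The fix is to sort the segments at each level of the tree rather than relying on the order of the sorted full paths.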
| |
RELNOTES: None.
PiperOrigin-RevId: 195586974
| |
RELNOTES: None.
PiperOrigin-RevId: 195486038
| |
This is mostly a roll-forward of 4465dae23de989f1452e93d0a88ac2a289103dd9, which
was reverted by fa36d2f48965b127e8fd397348d16e991135bfb6. The main difference is
that the new behavior is now gated behind the --noremote_allow_symlink_upload
flag.
https://docs.google.com/document/d/1gnOYszitgrLVet3sQk-TKGqIcpkkDsc6aw-izoo-d64
is a design proposal to support symlinks in the remote cache, which would render
this change moot. I'd like to be able to prevent incorrect cache behavior until
that change is implemented, though.
This fixes https://github.com/bazelbuild/bazel/issues/4840 (again).
Closes #5122.
Change-Id: I2136cfe82c2e1a8a9f5856e12a37d42cabd0e299
PiperOrigin-RevId: 195261827
| |
Post ProgressStatus.CHECKING_CACHE if RemoteSpawnCache is checking the cache.
The UI sees CHECKING_CACHE exactly the same as EXECUTING because no UIs
currently have any special behavior for actions in cache-lookup state. This is
still a UX improvement with --experimental_spawn_cache because EXECUTING is
generally more correct than the old action state, which varies from harmless but
unhelpful (no known state) to just wrong (C++ compile actions claimed they were
doing include scanning during cache lookups).
Closes #5130.
Change-Id: I77421c3667c180875216f937fe0713f0e9415a7a
PiperOrigin-RevId: 195233123
| |
instead of the manifest files.
RELNOTES: None
PiperOrigin-RevId: 195149880
| |
When the HTTP response has a status other than 200 and
also has a Content-Length header set, wait until
all content has been received before completing the user
promise.
In case of any errors, close the channel to make
sure it's not reused, as we don't know what data is left
on the wire.
Closes #5101.
PiperOrigin-RevId: 194787393
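The drain step can be sketched transport-agnostically (the real code is Netty-based; ErrorBodyDrainer is an invented name): consume exactly Content-Length bytes of the error body so no stray bytes are left on the wire for a reused connection.

```java
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch: on a non-200 response with a Content-Length header,
// consume the full body before completing the caller's promise.
public class ErrorBodyDrainer {
  /** Reads up to contentLength bytes; returns how many were actually read. */
  public static int drain(InputStream body, int contentLength) throws IOException {
    int total = 0;
    byte[] buf = new byte[1024];
    while (total < contentLength) {
      int n = body.read(buf, 0, Math.min(buf.length, contentLength - total));
      if (n == -1) {
        break; // truncated body: caller must close the connection, not reuse it
      }
      total += n;
    }
    return total;
  }
}
```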
| |
PiperOrigin-RevId: 193937177
| |
This class will be used to tie a Spawn to a SpawnRunner, and isn't really a policy object. It will carry state such as the expanded inputs and the expanded command line.
Currently a context can be passed between different SpawnRunners. This will be addressed independently, so that a context is tied to a particular spawn runner.
PiperOrigin-RevId: 193501918
| |
*** Reason for rollback ***
The no-cache tag is not respected (see b/77857812) and thus this breaks remote caching for all projects with symlink outputs.
*** Original change description ***
Only allow regular files and directories spawn outputs to be uploaded to a remote cache.
The remote cache protocol only knows about regular files and
directories. Currently, during action output upload, symlinks are
resolved into regular files. This means cached "executions" of an
action may have different output file types than the original
execution, which can be a footgun. This CL bans symlinks from cachable
spawn outputs and fixes http...
***
PiperOrigin-RevId: 193338629
| |
Write so that they are logged. I'm open to suggestions for the logging format for these calls, since we don't want to log the actual contents of reads/writes because of their size.
PiperOrigin-RevId: 193047886
| |
FindMissingBlobs, GetActionResult so that they are logged. unknown commit must be submitted before this for Watch calls to be logged properly.
PiperOrigin-RevId: 192794535
| |
experimental flag. It also adds a logging handler for Execute calls so that they are logged.
PiperOrigin-RevId: 190991493
| |
RELNOTES: None.
PiperOrigin-RevId: 190617155
| |
remote cache.
The remote cache protocol only knows about regular files and
directories. Currently, during action output upload, symlinks are
resolved into regular files. This means cached "executions" of an
action may have different output file types than the original
execution, which can be a footgun. This CL bans symlinks from cachable
spawn outputs and fixes https://github.com/bazelbuild/bazel/issues/4840.
The interface of SpawnCache.CacheHandle.store is refactored:
1. The outputs parameter is removed, since that can be retrieved from the underlying Spawn.
2. It can now throw ExecException in order to fail actions.
Closes #4902.
Change-Id: I0d1d94d48779b970bb5d0840c66a14c189ab0091
PiperOrigin-RevId: 190608852
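The symlink check itself is simple; a sketch of the idea (OutputChecker is an invented name — the real change rejects the output via SpawnCache.CacheHandle.store throwing ExecException):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: reject symlinks among spawn outputs before upload, since the
// remote cache protocol only models regular files and directories.
public class OutputChecker {
  /** Returns true iff the output may be uploaded (not a symlink). */
  public static boolean isUploadable(Path output) {
    return !Files.isSymbolicLink(output);
  }

  /** Demo helper: creates a regular file plus a symlink pointing at it. */
  public static Path[] makeFileAndLink() {
    try {
      Path dir = Files.createTempDirectory("outputs");
      Path file = Files.createFile(dir.resolve("a.txt"));
      Path link = Files.createSymbolicLink(dir.resolve("a.lnk"), file);
      return new Path[] {file, link};
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```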
| |
Also, remove unused SO_TIMEOUT. Fixes #4890
cc @benjaminp
Closes #4895.
PiperOrigin-RevId: 190051030
| |
WANT_LGTM=all
TESTED=RBE, unit tests
RELNOTES: None
PiperOrigin-RevId: 189938345
| |
timeouts.
The refactoring to have an Exception that contains partial results will also be used in the next CL, in order to propagate and save remote server logs.
RELNOTES: None
PiperOrigin-RevId: 189344465
| |
Increase the connect timeout to 30 seconds, as Windows
sometimes seems to need more time.
RELNOTES: None
PiperOrigin-RevId: 188702864
| |
Closes #4622.
PiperOrigin-RevId: 188595430
| |
@buchgr
Closes #4790.
PiperOrigin-RevId: 188332795
| |
This provides an io.grpc.ClientInterceptor implementation that can be used to log gRPC call information. The interceptor can select a logging handler based on the gRPC method being called (Watch, Execute, Write, etc.) to build a LogEntry, which can then be logged after the call has finished. Unit tests for the interceptor are included.
In this change, the interceptor is never invoked, nor are there any handlers implemented for any gRPC methods. The interceptor also never tries to log any entries.
To avoid circular dependency issues (the remote library will depend on the logger, which depends on the remote library for utils), I've factored the utility classes out of the remote library into their own directory/package as part of this change.
PiperOrigin-RevId: 187926516
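The per-method dispatch idea can be shown without gRPC at all. A simplified, gRPC-free sketch (LoggingDispatch and its methods are invented; the real code implements io.grpc.ClientInterceptor and selects handlers from the MethodDescriptor):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: pick a logging handler by method name and use it to build a
// log entry once the call finishes.
public class LoggingDispatch {
  private final Map<String, Function<String, String>> handlers = new HashMap<>();

  public void register(String methodName, Function<String, String> handler) {
    handlers.put(methodName, handler);
  }

  /** Builds a log entry for the call, or a generic one if no handler matches. */
  public String logEntryFor(String methodName, String callSummary) {
    return handlers
        .getOrDefault(methodName, s -> "unlogged call: " + s)
        .apply(callSummary);
  }
}
```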
| |
The current behavior is already correct, just adding a test to make sure we retry reads as we should.
TESTED=the unit test
RELNOTES: None
PiperOrigin-RevId: 187398578
| |
So far, nobody uses it, but I want to start using this field soon.
TESTED=unit test
RELNOTES: None
PiperOrigin-RevId: 186290375
| |
Closes #4609.
PiperOrigin-RevId: 185032751
| |
This is to prevent this error:
SEVERE: *~*~*~ Channel io.grpc.internal.ManagedChannelImpl-56 for target directaddress:///io.grpc.inprocess.InProcessSocketAddress@3ecbfba1 was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and awaitTermination().
TESTED=ran tests
RELNOTES: None
PiperOrigin-RevId: 185020683
| |
I moved it into DigestUtil preemptively in case we switch to binary instead of hex representation.
TESTED=manually
RELNOTES: None
PiperOrigin-RevId: 185007558