Attempt to fix #5711, "java.util.concurrent.RejectedExecutionException: event executor terminated", by making `HttpBlobStore.close()` a no-op when called more than once.
I'm rolling out a patched version of Bazel internally, based on 0.15.2. I should be able to say definitively in a couple of days whether this addresses the issue, but it seems like it should (and @buchgr agrees).
Closes #5725.
PiperOrigin-RevId: 207089681
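The guard described above can be sketched with an atomic flag; the class and counter below are hypothetical stand-ins for illustration, not Bazel's actual code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class BlobStoreCloseSketch {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    int shutdownCalls = 0; // counts real shutdowns, for the demo only

    void close() {
        if (!closed.compareAndSet(false, true)) {
            return; // second and later calls are no-ops
        }
        shutdownCalls++; // stands in for shutting down the event loop
    }
}
```

With this guard, a second `close()` returns immediately instead of asking an already-terminated event executor to shut down again.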
This adds support for Unix domain sockets to Bazel's
remote HTTP cache. See the corresponding issue #5098
for discussion.
RELNOTES: Introduce the --remote_cache_proxy flag,
which allows the remote HTTP cache to connect
via a Unix domain socket.
PiperOrigin-RevId: 204111667
This change introduces concurrent downloads of action outputs
for remote caching/execution. So far, for an action we would
download one output after the other, which isn't as bad as it
sounds, as we typically run dozens or hundreds of actions
in parallel. However, for actions with many outputs, or for
graphs that allow only limited parallelism, we expect this
change to improve performance.
Note that with this change the AbstractRemoteActionCache
always attempts to download all outputs concurrently. The actual
parallelism is controlled by the underlying network transport.
The gRPC transport currently enforces no limit on concurrent
calls, which should be fine given that all calls are multiplexed
on a single network connection. The HTTP/1.1 transport also
enforces no limit by default, but I have added the
--remote_max_connections=INT flag, which allows specifying an
upper bound on the number of concurrently open network connections.
I introduced this flag as a defensive mechanism for users
whose environments might enforce an upper bound on the number of
open connections, as with this change it's possible for the number
of concurrently open connections to increase dramatically (from
NumParallelActions to NumParallelActions * SumParallelActionOutputs).
A side effect of this change is that it puts in place the
infrastructure for retries and circuit breaking for the HttpBlobStore.
RELNOTES: None
PiperOrigin-RevId: 199005510
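The download pattern described above can be sketched as follows. Everything here is a hypothetical stand-in (class, method, and the fake fetch), with a `Semaphore` playing the role of the bound that --remote_max_connections imposes on the HTTP transport:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

class ConcurrentDownloadSketch {
    // Kick off every output download at once; the semaphore bounds
    // how many "connections" are open concurrently.
    static List<String> downloadAll(List<String> digests, int maxConnections)
            throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        Semaphore connections = new Semaphore(maxConnections);
        List<Future<String>> pending = new ArrayList<>();
        for (String digest : digests) {
            pending.add(pool.submit(() -> {
                connections.acquire(); // blocks once the bound is reached
                try {
                    return "blob:" + digest; // placeholder for the real fetch
                } finally {
                    connections.release();
                }
            }));
        }
        List<String> outputs = new ArrayList<>();
        for (Future<String> f : pending) {
            outputs.add(f.get()); // propagates the first failure, if any
        }
        pool.shutdown();
        return outputs;
    }
}
```

Without the semaphore (or with it sized very large), the actual parallelism falls entirely to the transport, matching the gRPC behavior described above.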
Netty's HttpClientCodec always emits a LastHttpContent event and so we don't need to track the received bytes manually to know when we are done reading. The HttpBlobStore compares the hashes of the received bytes to give us confidence that what we received is correct.
Closes #5244.
PiperOrigin-RevId: 197887877
Only the last commit needs to be reviewed, as it's rebased on https://github.com/bazelbuild/bazel/pull/5101
Closes #5117.
PiperOrigin-RevId: 195649921
When the HTTP response has a status other than 200 and
also has a Content-Length header set, wait until
all content has been received before completing the user
promise.
In case of any errors, close the channel to make sure
it isn't reused, as we don't know what data is left
on the wire.
Closes #5101.
PiperOrigin-RevId: 194787393
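The waiting logic amounts to a small byte counter; the class below is a hypothetical stand-in for the state the HTTP handler would keep, not the actual Netty handler:

```java
// Tracks a non-200 response body: the user promise should be
// completed (exceptionally) only after Content-Length bytes have
// arrived, so the error body is fully read off the wire.
class ErrorResponseDrainSketch {
    private final long expected;
    private long received;

    ErrorResponseDrainSketch(long contentLength) {
        this.expected = contentLength;
    }

    // Returns true once the full error body has been received and
    // the promise may be completed.
    boolean onContent(int bytes) {
        received += bytes;
        return received >= expected;
    }
}
```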
Fixes #4976, #4935
Closes #4991.
PiperOrigin-RevId: 192269206
Second attempt at https://github.com/bazelbuild/bazel/commit/0654620304728a5aecadd58138e96c41135d24e7, which I am rolling back. The problem is that FilterOutputStream.write is just plain wrong, and we shouldn't inherit from FilterOutputStream at all but instead do it manually (which actually requires less code).
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixes #4944.
PiperOrigin-RevId: 191215696
*** Reason for rollback ***
Not a proper fix.
*** Original change description ***
Enable bulk writes in the HttpBlobStore
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixed #4944.
PiperOrigin-RevId: 191133416
This was a performance regression in https://github.com/bazelbuild/bazel/commit/deccc485603c004daad959fd747f1c0c9efc4f00.
Fixed #4944.
PiperOrigin-RevId: 191109352
Under Windows, the default permissions used when writing to the local disk cache prevent the files from being overwritten.
This CR adds a check: if the target file already exists, return early. This is a performance improvement and fixes the error described above, as existing files no longer need to be overwritten. Similar features have been implemented in the remote gRPC cache implementations; see bazel issue #4789.
Closes #4886.
PiperOrigin-RevId: 190060233
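A minimal sketch of the early return, assuming a content-addressed layout where a file named by its key never changes once written (class and method names are hypothetical, not Bazel's actual code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class DiskCachePutSketch {
    // Returns true if the blob was written, false if it already existed.
    static boolean put(Path root, String key, byte[] data) throws IOException {
        Path target = root.resolve(key);
        if (Files.exists(target)) {
            return false; // entry is immutable: skip the redundant write
        }
        // Write to a temp file first, then move into place atomically.
        Path tmp = Files.createTempFile(root, key, ".tmp");
        Files.write(tmp, data);
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        return true;
    }
}
```

Because the check happens before any file handle is opened, the Windows permission error on overwrite never triggers for cache hits.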
Also, remove unused SO_TIMEOUT. Fixes #4890
cc @benjaminp
Closes #4895.
PiperOrigin-RevId: 190051030
Closes #4622.
PiperOrigin-RevId: 188595430
Closes #4609.
PiperOrigin-RevId: 185032751
* This puts in place the foundation of HTTP/2 support for remote caching.
* Allows us to remove the Apache HTTP library as a dependency, reducing
the Bazel binary size by 1MiB.
On fast networks (i.e. GCE to GCS) we see a >2x speed improvement in TLS
throughput. Even from my workstation to GCS I get significant build-time
improvements when using Netty's TLS (18s vs. 12s).
Closes #4481.
PiperOrigin-RevId: 183411787
Change-Id: Iacbba3eced0abc0dcfd7311a0f07da48cbaba6e4
PiperOrigin-RevId: 179664071
There's no point in having this option. We'll use as many TCP
connections as we'll need. Fewer options FTW.
Change-Id: I502eadd6a3a35040c7eda05ef49320b273ac26ad
PiperOrigin-RevId: 179533022
The action cache prefix is "ac", not "actioncache".
Change-Id: I841a026133ab3b4ec5b58a0cf29252dae49434fe
PiperOrigin-RevId: 179437730
Add support for Google Cloud Storage (GCS) as an HTTP caching backend.
This commit mainly adds the infrastructure necessary to authenticate
to GCS.
Using GCS as a caching backend works as follows:
1) Create a new GCS bucket.
2) Create a service account that can read and write to the GCS bucket,
and generate a JSON credentials file for it.
3) Invoke Bazel as follows:
bazel build
--remote_rest_cache=https://storage.googleapis.com/<bucket>
--auth_enabled
--auth_scope=https://www.googleapis.com/auth/devstorage.read_write
--auth_credentials=</path/to/creds.json>
I'll add simplifications and docs in a subsequent commit.
Change-Id: Ie827d7946a2193b97ea7d9aa72eb15f09de2164d
PiperOrigin-RevId: 179406380
RELNOTES: The --remote_rest_cache flag now respects --remote_timeout.
PiperOrigin-RevId: 177926523
According to the SimpleBlobStore interface `containsKey` should look up blobs in the CAS. However, the URL it requests is missing `CAS_PREFIX`,
so it will never find anything.
Closes #4118.
PiperOrigin-RevId: 177313658
Blaze had its own class to avoid garbage creation from varargs arrays on the precondition happy path. Guava now (mostly) implements these checks, making it unnecessary to maintain our own.
This change was almost entirely automated by search-and-replace. A few BUILD files needed fixing up since I removed an export of preconditions from lib:util, which was all done by add_deps. There was one incorrect usage of Preconditions, caught by Error Prone (which checks Guava's version of Preconditions), that I had to change manually.
PiperOrigin-RevId: 175033526
This means we no longer keep large action inputs or outputs in memory
during upload.
I adjusted the SimpleBlobStore interface to require clients to pass in
the InputStream's length. That allows us to always set Content-Length on
uploads. It's polite to do so, so that the server may, e.g.,
preallocate space for the blob.
Fixes https://github.com/bazelbuild/bazel/issues/3250.
Change-Id: I944c9dbc35fa2fa80dce523b0133ea9757bb3973
PiperOrigin-RevId: 172433522
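A sketch of the streaming copy, with the caller-supplied length standing in for what the adjusted interface requires; the class, method, and error handling here are hypothetical, not the actual Bazel code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class StreamingUploadSketch {
    // Copies the blob to the wire in fixed-size chunks; `length` is
    // what an HTTP client would place in the Content-Length header.
    static long upload(InputStream in, long length, OutputStream out)
            throws IOException {
        byte[] buf = new byte[8192]; // bounded buffer, not the whole blob
        long sent = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            sent += n;
        }
        if (sent != length) {
            throw new IOException("declared " + length + " bytes, sent " + sent);
        }
        return sent;
    }
}
```

Memory use stays constant regardless of blob size, which is the point of the change.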
The remote HTTP caching client now prefixes the URL of an action cache
request with 'ac' and a CAS request with 'cas'. This is useful
as servers might want to distinguish between the action cache and the
CAS.
Before this change:
(GET|PUT) /f141ae2d23d0f976599e678da1c51d82fedaf8b1
After this change:
(GET|PUT) /ac/f141ae2d23d0f976599e678da1c51d82fedaf8b1
(GET|PUT) /cas/f141ae2d23d0f976599e678da1c51d82fedaf8b1
This change will likely break any HTTP caching service that assumed the
old URL scheme.
TESTED: Manually using https://github.com/buchgr/bazel-remote
RELNOTES: The remote HTTP/1.1 caching client (--remote_rest_cache) now
distinguishes between the action cache and the CAS. The request URL for the
action cache is prefixed with 'ac' and the URL for the CAS
is prefixed with 'cas'.
PiperOrigin-RevId: 166831997
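The scheme above amounts to a one-line path builder; this sketch is purely illustrative, not Bazel's actual code:

```java
class CacheUrlSketch {
    // Builds the request path under the prefixed URL scheme.
    static String requestPath(boolean actionCache, String hash) {
        return (actionCache ? "/ac/" : "/cas/") + hash;
    }
}
```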
- Move flag handling into RemoteModule to fail as early as possible.
- Make error messages from flag handling human readable.
- Fix a bug where remote execution would only support TLS with a root
certificate being specified.
- If a remote executor without a remote cache is specified, assume the
remote cache to be the same as the executor.
PiperOrigin-RevId: 161946029
This avoids having to load all the data into memory at once in most cases,
especially for large files.
PiperOrigin-RevId: 160954187
I'm planning to switch the remote worker over to the on-disk storage system,
and delete the in-memory option. Currently, running a Bazel build with the
in-memory storage system uses up over 50% of the total available memory on my
machine (I only have 32 GB total), and grinds it to a complete halt (unless I
close most of my apps) on a simple build of //src:bazel.
PiperOrigin-RevId: 160877546
Also extend the API to throw exceptions rather than having to wrap or swallow them.
This is in preparation for adding yet another implementation using an on-disk
cache. Having the RemoteWorker keep everything in memory is not good for my
sanity.
PiperOrigin-RevId: 160871586