serialization.
PiperOrigin-RevId: 193536486
* Change builder return type to Iterable<T> instead of IterableChain<T>. It is over-specified and unnecessary to state the return type so precisely.
* Optimize builder for cases where we add 0 or 1 iterables to the chain. In this case, we can simply return the underlying iterables without adding wrappers.
* Extract DedupingIterable; it has nothing to do with IterablesChain and is only used in one place.
RELNOTES: None
PiperOrigin-RevId: 193363048
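The builder optimization in the bullets above can be sketched roughly as follows. This is a hypothetical `ChainBuilder`, not Bazel's actual `IterablesChain` code; only the 0/1-element shortcut and the `Iterable<T>` return type mirror the change described.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.StreamSupport;

// Hypothetical sketch of the builder described above: when zero or one
// iterables were added, return the underlying iterable with no wrapper.
public class ChainBuilder<T> {
  private final List<Iterable<T>> iterables = new ArrayList<>();

  public ChainBuilder<T> add(Iterable<T> iterable) {
    iterables.add(iterable);
    return this;
  }

  // Declared as Iterable<T>, not a concrete chain type, so callers do not
  // depend on the wrapper class.
  public Iterable<T> build() {
    if (iterables.isEmpty()) {
      return Collections.emptyList();   // no wrapper needed
    }
    if (iterables.size() == 1) {
      return iterables.get(0);          // return the underlying iterable as-is
    }
    // General case: lazily concatenate all added iterables.
    return () -> iterables.stream()
        .flatMap(i -> StreamSupport.stream(i.spliterator(), false))
        .iterator();
  }
}
```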
*** Reason for rollback ***
Breaks //third_party/java_src/copybara/java/com/google/copybara/config:parser:
[]
*** Original change description ***
PiperOrigin-RevId: 193292991
process-global bimap of fingerprints to NestedSet contents.
PiperOrigin-RevId: 193063717
constants. A lot of care is needed here because we're using reference equality. I plan to add value-equality constants in a follow-up.
Add ImmutableSortedSet marshaller because I think it might have been needed, and hey, why not.
PiperOrigin-RevId: 192326359
PiperOrigin-RevId: 191797413
CodedOutputStream. It creates immense amounts of garbage and we don't ever use the result: it's only used for Object[] children anyway.
We can consider removing the child CodedOutputStream entirely and relying on normal serialization memoization, but for now, let's just do the simple thing.
Also fix a weird code-only bug that had been there since NestedSetCodec was written (I think): NestedSet.EMPTY_CHILDREN is an Object[], and therefore we never took the fast path of just writing 0 and moving on. While the code as written was misleading, the bits written to the output stream were the same, until this change, when there was a divergence.
PiperOrigin-RevId: 191520712
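The EMPTY_CHILDREN bug above hinges on checking the sentinel by identity before the general `Object[]` branch. Here is an illustrative serializer (not the real NestedSetCodec; the child encoding is a stand-in) showing why the fix leaves the written bits unchanged: the slow path writes the same zero count for an empty array.

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch of the fast path described above. The identity check
// (==) must come first, because EMPTY_CHILDREN is itself an Object[] and
// would otherwise always fall through to the general branch.
public class EmptyChildrenDemo {
  static final Object[] EMPTY_CHILDREN = new Object[0];

  static void writeChildren(DataOutputStream out, Object children) throws IOException {
    if (children == EMPTY_CHILDREN) {
      out.writeInt(0);                   // fast path: just a zero count
      return;
    }
    if (children instanceof Object[]) {
      Object[] arr = (Object[]) children;
      out.writeInt(arr.length);          // for an empty array this also writes 0
      for (Object c : arr) {
        out.writeUTF(String.valueOf(c)); // stand-in for real per-child encoding
      }
    } else {
      out.writeInt(1);                   // leaf: a single member
      out.writeUTF(String.valueOf(children));
    }
  }
}
```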
sessions. This is incorrect in the presence of memoization: a single element may be serialized as just a pair of integers (type + memoization index). Lots of different nested sets may contain elements that are serialized this way, so they will have the same digests. We could consider doing a parallel hash computation, but for now just disable.
This is not a full rollback of https://github.com/bazelbuild/bazel/commit/39cef6d6a4a9e3ae80b11a9ccc0f35325852777c since there was a refactoring in it that it doesn't seem worth it to roll back.
PiperOrigin-RevId: 191509089
the global digestToChild map. Since digestToChild contains weak references,
this is required to ensure the children are not GCed.
PiperOrigin-RevId: 190476243
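The retention pattern described above can be sketched as follows. This is not Bazel's actual code; the names (`digestToChild`, `pinned`) are illustrative. The point is that a map of weak references keeps nothing alive by itself, so a serialization session must hold its own strong references.

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch: the global map holds only weak references, so values
// must also be pinned with strong references for as long as they are needed.
public class WeakMapDemo {
  public static final Map<String, WeakReference<Object[]>> digestToChild =
      new ConcurrentHashMap<>();
  // Strong references kept for the duration of the session.
  public static final ConcurrentLinkedQueue<Object[]> pinned =
      new ConcurrentLinkedQueue<>();

  public static void put(String digest, Object[] children) {
    digestToChild.put(digest, new WeakReference<>(children));
    pinned.add(children); // without this, GC may clear the entry at any time
  }

  public static void main(String[] args) {
    put("d1", new Object[] {"a", "b"});
    System.gc();
    // Still reachable because `pinned` holds a strong reference.
    assert digestToChild.get("d1").get() != null;
    System.out.println("entry retained");
  }
}
```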
part of primary codepath somewhere around https://github.com/bazelbuild/bazel/commit/bde43ec8a96a62b8fbf67ad60d5154cf121647d9 (and killed entirely in an unknown commit).
Force copying of a ByteString read from CodedInputStream in NestedSetCodec and persisted, since otherwise we might hang onto the entire buffer indefinitely.
PiperOrigin-RevId: 190256337
contains before invoking the heavier add. This reduces CPU work and contention.
When blaze is invoked on a large fileset and the build is already up-to-date,
this makes things about 1.9s (33%) faster.
PiperOrigin-RevId: 190125803
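The check-then-add pattern above can be sketched in a few lines (names are illustrative, not the actual Bazel code). A `contains` on a concurrent set is a cheap read, while `add` may contend on a bin lock, so skipping the add for already-seen members pays off on hot, mostly-duplicate workloads.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the cheap-read-before-write pattern described above.
public class DedupDemo {
  private final Set<String> seen = ConcurrentHashMap.newKeySet();

  public boolean addIfAbsent(String member) {
    if (seen.contains(member)) {
      return false;          // fast path: read-only, no write contention
    }
    return seen.add(member); // add() is itself atomic, so racing threads are safe
  }
}
```

Note the race is benign: two threads may both miss `contains` and call `add`, but `add` is atomic and simply returns false for the loser.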
This avoids redundancy in memory when multiple NestedSets share a member
PiperOrigin-RevId: 190085907
PiperOrigin-RevId: 189906038
*** Reason for rollback ***
See linked bug.
*** Original change description ***
Add behavior to NestedSetCodec to prevent it from running during testing.
PiperOrigin-RevId: 189663863
PiperOrigin-RevId: 189602622
Since autocodec library is now a dependency of lib/collect, properly annotate ImmutableSharedKeyMap to boot.
PiperOrigin-RevId: 189432552
Makes NestedSetCodec into a runtime codec instead of a Marshaller.
PiperOrigin-RevId: 189110883
RELNOTES: None
PiperOrigin-RevId: 187370833
serialize offset table at the cost of some overhead reconstructing the table. Also fewer code changes, although there is a serialization-only method added as a hack.
PiperOrigin-RevId: 186808832
PiperOrigin-RevId: 186789569
* AutoCodec now delegates to the registry.
* Adds getSuperclass logic for resolving a codec.
* Small cleanups for classes that break the registry.
TODO after this change:
* Explicit CODEC definitions are no longer needed and existing ones should be cleaned up.
* POLYMORPHIC is no longer needed and should be cleaned up.
PiperOrigin-RevId: 186351845
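The getSuperclass logic mentioned above can be sketched as a walk up the class hierarchy until a registered codec is found. The registry and codec types here are hypothetical stand-ins, not the actual ObjectCodecRegistry API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of superclass-based codec resolution: if no codec is
// registered for a class, try its superclasses in order.
public class CodecRegistrySketch {
  private final Map<Class<?>, String> codecs = new ConcurrentHashMap<>();

  public void register(Class<?> clazz, String codecName) {
    codecs.put(clazz, codecName);
  }

  public String resolve(Class<?> clazz) {
    for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
      String codec = codecs.get(c);
      if (codec != null) {
        return codec; // nearest registered ancestor wins
      }
    }
    throw new IllegalArgumentException("No codec registered for " + clazz.getName());
  }
}
```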
lib.analysis.actions -> lib.actions.
These are fundamental types that want to sit alongside types like Spawn.
RELNOTES: None
PiperOrigin-RevId: 185887971
PiperOrigin-RevId: 185733313
(Des|S)erializationContext.
PiperOrigin-RevId: 185547740
Context implementations are currently empty, just doing the plumbing in this
change. Once this is in we can start passing along the ObjectCodecRegistry, which
will allow runtime codec resolution for classes not known at compile time.
We'll also inevitably add some memoization helpers, allowing us to optimize the
serialization process further.
PiperOrigin-RevId: 185305674
InjectingObjectCodec.
PiperOrigin-RevId: 184566571
This is needed to migrate JavaCompileAction away from CustomMultiArgv.
PiperOrigin-RevId: 184136486
of the same mapFn class.
This code tries to add protection against the user creating new mapFn instances per-rule. This would cause the nested set cache to be computed per-rule instead of shared across rule instances, causing memory bloat and slowdowns.
Since this can only happen in native code, we can get away with detecting this and crashing blaze. I think this is a better choice than silently allowing it / falling back to slow computations.
The user can override this behaviour by inheriting from CommandLineItem.CapturingMapFn, in which case the user is explicitly saying they assume responsibility for the number of instances of the mapFn the application will use.
PiperOrigin-RevId: 184061642
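The crash-on-duplicate-instance safeguard above can be sketched like this. The registry class and method names are hypothetical, not Bazel's code; what matters is the deliberate reference-equality check per mapFn class.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the safeguard described above: registering two
// distinct instances of the same mapFn class would defeat cache sharing
// across rules, so we detect it with reference equality and fail fast.
public class MapFnRegistry {
  private static final Map<Class<?>, Function<Object, String>> instances =
      new ConcurrentHashMap<>();

  static void register(Function<Object, String> mapFn) {
    Function<Object, String> prev = instances.putIfAbsent(mapFn.getClass(), mapFn);
    if (prev != null && prev != mapFn) { // reference equality, on purpose
      throw new IllegalStateException(
          "Detected multiple instances of mapFn class " + mapFn.getClass().getName());
    }
  }

  static class ToStringFn implements Function<Object, String> {
    @Override public String apply(Object o) { return String.valueOf(o); }
  }

  public static void main(String[] args) {
    ToStringFn fn = new ToStringFn();
    register(fn);
    register(fn); // same instance again: fine
    boolean threw = false;
    try {
      register(new ToStringFn()); // second instance of the same class: crash
    } catch (IllegalStateException e) {
      threw = true;
    }
    System.out.println(threw ? "duplicate instance detected" : "no detection");
  }
}
```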
Instead of using ConcurrentHashMap, we use a dead-simple open-addressed hash table with a giant byte array with 16-byte slots. We then read or write fingerprints straight into and out of the array, obviating the need to generate intermediate garbage.
The locking mechanism is a read-write lock. This should be faster than full synchronisation for read-heavy loads.
RELNOTES: None
PiperOrigin-RevId: 184019301
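A toy version of the design described above: an open-addressed table that stores 16-byte fingerprints directly in a single `byte[]` and probes linearly, guarded by a read-write lock. Sizing, hashing, and the treatment of all-zero fingerprints (which collide with the empty-slot encoding here) are simplifications; this is a sketch, not the real implementation.

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy open-addressed fingerprint table: one flat byte array, 16-byte slots,
// linear probing, read-write locking. Assumes fingerprints are never all-zero.
public class FingerprintTable {
  private static final int SLOT = 16;
  private final byte[] slots;
  private final int numSlots;
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  FingerprintTable(int numSlots) {
    this.numSlots = numSlots;
    this.slots = new byte[numSlots * SLOT]; // an all-zero slot means "empty"
  }

  private int indexFor(byte[] fp) {
    return Math.floorMod(Arrays.hashCode(fp), numSlots);
  }

  private boolean slotMatches(int slot, byte[] fp) {
    return Arrays.equals(slots, slot * SLOT, slot * SLOT + SLOT, fp, 0, SLOT);
  }

  private boolean slotEmpty(int slot) {
    for (int i = slot * SLOT; i < (slot + 1) * SLOT; i++) {
      if (slots[i] != 0) return false;
    }
    return true;
  }

  boolean contains(byte[] fp) {
    lock.readLock().lock();
    try {
      for (int i = 0; i < numSlots; i++) {   // linear probe from the home slot
        int slot = (indexFor(fp) + i) % numSlots;
        if (slotMatches(slot, fp)) return true;
        if (slotEmpty(slot)) return false;   // hit a gap: not present
      }
      return false;
    } finally {
      lock.readLock().unlock();
    }
  }

  void add(byte[] fp) {
    lock.writeLock().lock();
    try {
      for (int i = 0; i < numSlots; i++) {
        int slot = (indexFor(fp) + i) % numSlots;
        if (slotEmpty(slot) || slotMatches(slot, fp)) {
          System.arraycopy(fp, 0, slots, slot * SLOT, SLOT); // write in place
          return;
        }
      }
      throw new IllegalStateException("table full");
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

No per-entry objects are allocated on either path, which is the garbage-avoidance point the commit makes.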
RELNOTES: None
PiperOrigin-RevId: 183727976
Adds some logging to test helpers for size of serialized data.
Jan 25, 2018 7:16:25 AM com.google.devtools.build.lib.skyframe.serialization.testutils.SerializerTester testSerializeDeserialize
INFO: total serialized bytes = 70
Jan 25, 2018 7:16:25 AM com.google.devtools.build.lib.skyframe.serialization.testutils.ObjectCodecTester testSerializeDeserialize
INFO: total serialized bytes = 208
Kryo output is significantly smaller.
PiperOrigin-RevId: 183300353
This interface makes it clearer in the type system exactly how items that go into a CustomCommandLine are turned into strings.
It is a preparatory change to allow command line fingerprints to be more cheaply calculated, but it is valuable in itself from a code quality standpoint.
PiperOrigin-RevId: 183274022
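The interface idea above can be sketched minimally as follows. The names here are illustrative, not Bazel's exact `CommandLineItem` API: the point is that the type system now says explicitly how an item is turned into a command-line string, with a `toString()` fallback for everything else.

```java
// Minimal sketch: items implementing the interface control their own
// expansion; arbitrary objects fall back to toString().
public interface CommandLineItemSketch {
  String expandToCommandLine();

  static String expand(Object o) {
    if (o instanceof CommandLineItemSketch) {
      return ((CommandLineItemSketch) o).expandToCommandLine();
    }
    return String.valueOf(o);
  }
}
```

Because the expansion is an interface method, a fingerprinting pass can call it directly instead of materializing full argument lists, which is the cheaper-fingerprint groundwork the commit mentions.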
MiddlemanFactory wants to check if the middleman inputs set is empty
or singleton. A NestedSet can answer both queries efficiently if the
right APIs are used.
My ultimate goal here is to avoid the expansion of runfiles artifacts
nested sets until execution.
Change-Id: I29a269df757ef41b1410bbb492cf24c926df6114
PiperOrigin-RevId: 178600943
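The emptiness and singleton queries above can be answered without flattening, as this sketch shows. The representation (direct members plus transitive sets) is illustrative, not NestedSet's actual internals, and the singleton check is deliberately conservative.

```java
import java.util.List;

// Toy nested structure: answers isEmpty/isSingleton without expanding members.
public class NestedDemo<T> {
  private final List<T> direct;
  private final List<NestedDemo<T>> transitive;

  NestedDemo(List<T> direct, List<NestedDemo<T>> transitive) {
    this.direct = direct;
    this.transitive = transitive;
  }

  // Walks the structure but never materializes a flattened member list.
  boolean isEmpty() {
    return direct.isEmpty() && transitive.stream().allMatch(NestedDemo::isEmpty);
  }

  // Cheap structural check: exactly one direct member and no transitive sets.
  // Conservatively false for singletons hidden inside nesting.
  boolean isSingleton() {
    return direct.size() == 1 && transitive.isEmpty();
  }
}
```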
This key context can be used by actions to share partial key computations, for instance when computing MD5s for nested sets.
RELNOTES: None
PiperOrigin-RevId: 177359607
PiperOrigin-RevId: 177032673
The `set` constructor used to be deprecated, but it was still possible to use
it by providing --incompatible_disallow_set_constructor=false.
It's still allowed to have `set` in parts of the code that are not executed; this will be deprecated later.
RELNOTES[INC]: The deprecated `set` constructor is removed, along with the
migration flag --incompatible_disallow_set_constructor. It is still temporarily
allowed to refer to `set` from within unexecuted code.
PiperOrigin-RevId: 176375859
Blaze had its own class to avoid GC from varargs array creation for the precondition happy path. Guava now (mostly) implements these, making it unnecessary to maintain our own.
This change was almost entirely automated by search-and-replace. A few BUILD files needed fixing up since I removed an export of preconditions from lib:util, which was all done by add_deps. There was one incorrect usage of Preconditions that was caught by error prone (which checks Guava's version of Preconditions) that I had to change manually.
PiperOrigin-RevId: 175033526
*** Reason for rollback ***
If the 'set' function was used in a .bzl file but not called, --incompatible_disallow_set_constructor=True would allow the load of that .bzl file without error, but this change removes the 'set' function so loading that bzl file is an error.
Example failure: https://ci.bazel.io/blue/organizations/jenkins/Global%2FTensorFlow/detail/TensorFlow/245/pipeline/
*** Original change description ***
Remove the deprecated set constructor from Skylark
The `set` constructor used to be deprecated, but it was still possible to use
it by providing --incompatible_disallow_set_constructor=false.
RELNOTES[INC]: The flag --incompatible_disallow_set_constructor is no longer
available, the deprecated `set` constructor is not available anymore.
PiperOrigin-RevId: 173115983
The `set` constructor used to be deprecated, but it was still possible to use
it by providing --incompatible_disallow_set_constructor=false.
RELNOTES[INC]: The flag --incompatible_disallow_set_constructor is no longer
available; the deprecated `set` constructor is not available anymore.
PiperOrigin-RevId: 171962361
Split collect, concurrent, vfs, windows into package-level BUILD files.
Move clock classes out of "util", into their own Java package.
Move CompactHashSet into its own Java package to break a dependency cycle.
Give nestedset and inmemoryfs their own package-level BUILD files.
PiperOrigin-RevId: 167702127
RELNOTES: None.
PiperOrigin-RevId: 167574104
This is a follow-on to https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/bazel-dev/Q2owiR-e86s/ugrVUhn7AwAJ to introduce more usages of Java 8 idioms and other "cleanups", with the intention of making the code base easier to maintain.
Closes #3623.
PiperOrigin-RevId: 167566256
Old ordering names ("stable", "compile", "naive_link", "link") are deprecated
and won't be available if --incompatible_disallow_set_constructor=true is set.
Use "default", "postorder", "preorder", and "topological" correspondingly
instead.
PiperOrigin-RevId: 164728439
With a few manual fixes for readability.
RELNOTES: None.
PiperOrigin-RevId: 160582556
PiperOrigin-RevId: 159498323
RELNOTES: None
PiperOrigin-RevId: 154877525
PiperOrigin-RevId: 152654844
When a large number of related nested sets needs to be serialized efficiently,
it is necessary to access the internal structure of a nested set to deduplicate
shared content. Add a new class that provides such a view on a nested set.
Note: part of this change (in particular, the addition of the NestedSetView class)
was accidentally committed as part of 617bb896dc5d2b815449459e991c577237d7a7fc.
Change-Id: I03660a228a66bbd6d3df2d3e78e8349be2d55f41
PiperOrigin-RevId: 152362816