| Commit message | Author | Age |
node having the same priority, later enqueueings having higher priority, re-enqueued nodes having highest priority, and new root nodes having lowest priority. Experimentally, this can save significant RAM (1.4G in some builds!) while not affecting speed.
Also, as a semi-drive-by, delete the ExecutorFactory parameter to AbstractQueueVisitor, since it was always AbstractQueueVisitor.EXECUTOR_FACTORY.
PiperOrigin-RevId: 208560889
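The ordering described above can be sketched with a comparator over (kind, sequence number). This is an illustrative reconstruction, not Bazel's actual class; the names PrioritizedTask, Kind, and drainOrder are made up for the example.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: re-enqueued work first, later enqueueings next, new roots last. */
public class PrioritizedTask implements Comparable<PrioritizedTask> {
  private static final AtomicLong SEQ = new AtomicLong();

  enum Kind { NEW_ROOT, ENQUEUED, RE_ENQUEUED } // ascending priority

  final Kind kind;
  final long seq = SEQ.incrementAndGet();

  PrioritizedTask(Kind kind) {
    this.kind = kind;
  }

  @Override
  public int compareTo(PrioritizedTask o) {
    if (kind != o.kind) {
      return o.kind.ordinal() - kind.ordinal(); // higher-priority kind drains first
    }
    return Long.compare(o.seq, seq); // within a kind, later enqueueings drain first
  }

  /** Drains a small demo queue and reports the order in which kinds come out. */
  public static String drainOrder() {
    PriorityBlockingQueue<PrioritizedTask> queue = new PriorityBlockingQueue<>();
    queue.add(new PrioritizedTask(Kind.NEW_ROOT));
    queue.add(new PrioritizedTask(Kind.ENQUEUED));
    queue.add(new PrioritizedTask(Kind.RE_ENQUEUED));
    StringBuilder order = new StringBuilder();
    while (!queue.isEmpty()) {
      order.append(queue.poll().kind).append(' ');
    }
    return order.toString().trim();
  }
}
```

The re-enqueued task drains first and the new root last; within a kind, a larger sequence number (a later enqueueing) wins.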
We use a fixed version of the previous algorithm. See the comments for details.
Fancier algorithms exist. I thought of a cool one that makes use of BatchKeyedLocker (would give me an excuse to revive it, heh), but fancy algorithms would be overkill. As noted in the initial commit of NaiveMultisetSemaphore, performance isn't critical.
RELNOTES: None
PiperOrigin-RevId: 208560559
use Semaphore#availablePermits. I have no idea why I didn't do this initially.
RELNOTES: None
PiperOrigin-RevId: 207883650
aren't used anymore.
RELNOTES: None.
PiperOrigin-RevId: 205984908
PiperOrigin-RevId: 195141891
clients have a particular super-hot key and want to avoid the CPU cost of doing a map lookup.
RELNOTES: None
PiperOrigin-RevId: 190848508
counters, optimized for the use-case of hot keys.
RELNOTES: None
PiperOrigin-RevId: 190678987
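A minimal sketch of the idea in the two commits above (HotKeyMultiset is a hypothetical name, not Bazel's actual API): back the counters with a ConcurrentHashMap of AtomicLongs, and cache the entry for one known-hot key so the common path skips the map lookup entirely.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative thread-safe multiset of counters with a cached hot-key entry. */
public class HotKeyMultiset<T> {
  private final ConcurrentHashMap<T, AtomicLong> counts = new ConcurrentHashMap<>();
  private final T hotKey;
  private final AtomicLong hotCount;

  public HotKeyMultiset(T hotKey) {
    this.hotKey = hotKey;
    this.hotCount = counts.computeIfAbsent(hotKey, k -> new AtomicLong());
  }

  /** Increments the count for {@code key} and returns the new count. */
  public long add(T key) {
    AtomicLong c = key.equals(hotKey)
        ? hotCount // hot path: no map lookup
        : counts.computeIfAbsent(key, k -> new AtomicLong());
    return c.incrementAndGet();
  }

  public long count(T key) {
    AtomicLong c = counts.get(key);
    return c == null ? 0 : c.get();
  }
}
```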
This class generates tons of garbage. It's better to manually use a ConcurrentHashMap + AtomicLongs.
RELNOTES: None
PiperOrigin-RevId: 180053164
Blaze had its own class to avoid GC from varargs array creation for the precondition happy path. Guava now (mostly) implements these, making it unnecessary to maintain our own.
This change was almost entirely automated by search-and-replace. A few BUILD files needed fixing up since I removed an export of preconditions from lib:util, which was all done by add_deps. There was one incorrect usage of Preconditions that was caught by error prone (which checks Guava's version of Preconditions) that I had to change manually.
PiperOrigin-RevId: 175033526
RELNOTES: None
PiperOrigin-RevId: 174062560
Split collect, concurrent, vfs, windows into package-level BUILD files.
Move clock classes out of "util", into their own Java package.
Move CompactHashSet into its own Java package to break a dependency cycle.
Give nestedset and inmemoryfs their own package-level BUILD files.
PiperOrigin-RevId: 167702127
RELNOTES: None.
PiperOrigin-RevId: 167505493
With a few manual fixes for readability.
RELNOTES: None.
PiperOrigin-RevId: 160582556
interrupted eagerly
The original code swallows the InterruptedException, sets the interrupt bit, and
stops new tasks from being submitted. However, it does not actively send an
interrupt signal to the running and pending tasks already in the pool. This is
partly due to the misleading semantics of awaitTermination, which actually quietly
waits for all tasks to exit even if interruptWorkers is set to true. Both
errors are fixed in this change.
RELNOTES: None
PiperOrigin-RevId: 157762029
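A sketch of the fixed shutdown behavior described above, using plain java.util.concurrent (class and method names here are illustrative): on interrupt, the pool must call shutdownNow() to actively interrupt worker threads, since shutdown() plus awaitTermination() would only wait for running tasks to finish on their own.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Demonstrates that shutdownNow() delivers an interrupt to a running task. */
public class InterruptibleShutdown {
  public static boolean runAndInterrupt() {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    CountDownLatch started = new CountDownLatch(1);
    CountDownLatch sawInterrupt = new CountDownLatch(1);
    pool.execute(() -> {
      started.countDown();
      try {
        Thread.sleep(60_000); // stands in for a long-running task
      } catch (InterruptedException e) {
        sawInterrupt.countDown(); // the task observed the interrupt signal
      }
    });
    try {
      started.await();
      // shutdown() alone would let the sleeping task run to completion;
      // shutdownNow() actively interrupts workers and drains the queue.
      pool.shutdownNow();
      boolean interrupted = sawInterrupt.await(5, TimeUnit.SECONDS);
      pool.awaitTermination(5, TimeUnit.SECONDS);
      return interrupted;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return false;
    }
  }
}
```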
The "concurrent" bit was supposedly around for testing purposes, but who knows if it even works anymore. Making other callsites explicitly state their ErrorClassifier gets us down to two constructors, one of which can delegate to the other.
I think having both these constructors is useful because there's a linkage between creating a new executor service and specifying that the AQV should shut down the service at the end of the visitation. And using a static create() method doesn't work because of AQV's inheritance model.
PiperOrigin-RevId: 155406771
RELNOTES: None.
PiperOrigin-RevId: 155382994
prevent thread starvation
This change gets rid of the additional thread needed for task scheduling during
BFS visitation, which eliminates the possibility of thread starvation while a
single thread pool is used for multiple concurrent evaluations.
--
PiperOrigin-RevId: 148911346
MOS_MIGRATED_REVID=148911346
a boring naive synchronized implementation.
There's a race condition in my design of the lockless data structure, and I haven't been able to come up with a lockless algorithm that actually works. In Blaze's usage, performance isn't critical, so the naive implementation is fine.
Consider three threads and a MultisetSemaphore with 2 max unique values.
T1: acquireAll({a, b})
T2: acquireAll({a, c})
T3: acquireAll({a, d})
For the for-loop before the 'acquire' call, suppose:
-T1 wins the race to acquire 'a' [1] and also wants to acquire 'b' [1]
-T2 loses the race to acquire 'a' [2] and also wants to acquire 'c' [1]
-T3 loses the race to acquire 'a' [2] and also wants to acquire 'd' [1]
So then in [3] we have:
-T1 tries to acquire 2 permits
-T2 tries to acquire 1 permit
-T3 tries to acquire 1 permit
Suppose the execution order in [3] is T2, T3, T1. So that means we then have T1 still at [3] and both T2 and T3 at [4], which is a deadlock.
[1] https://github.com/bazelbuild/bazel/blob/fa96b04f6c8ca6b6b3464727672267133e852959/src/main/java/com/google/devtools/build/lib/concurrent/MultisetSemaphore.java#L184
[2] https://github.com/bazelbuild/bazel/blob/fa96b04f6c8ca6b6b3464727672267133e852959/src/main/java/com/google/devtools/build/lib/concurrent/MultisetSemaphore.java#L191
[3] https://github.com/bazelbuild/bazel/blob/fa96b04f6c8ca6b6b3464727672267133e852959/src/main/java/com/google/devtools/build/lib/concurrent/MultisetSemaphore.java#L199
[4] https://github.com/bazelbuild/bazel/blob/fa96b04f6c8ca6b6b3464727672267133e852959/src/main/java/com/google/devtools/build/lib/concurrent/MultisetSemaphore.java#L210
--
PiperOrigin-RevId: 148272171
MOS_MIGRATED_REVID=148272171
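A sketch of what a "boring naive synchronized implementation" of a MultisetSemaphore can look like (illustrative only, not Bazel's actual code): because acquireAll waits for and takes all of its values in one atomic step under the monitor, the multi-step race described above cannot occur.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Naive synchronized multiset semaphore admitting at most maxUniqueValues distinct values. */
public class NaiveMultisetSemaphoreSketch<T> {
  private final int maxUniqueValues;
  private final Map<T, Integer> counts = new HashMap<>();

  public NaiveMultisetSemaphoreSketch(int maxUniqueValues) {
    this.maxUniqueValues = maxUniqueValues;
  }

  private int uniqueValuesNeeded(Set<T> values) {
    int needed = 0;
    for (T v : values) {
      if (!counts.containsKey(v)) {
        needed++;
      }
    }
    return needed;
  }

  /** Blocks until all not-yet-admitted values fit under the cap, then takes them atomically. */
  public synchronized void acquireAll(Set<T> values) throws InterruptedException {
    while (counts.size() + uniqueValuesNeeded(values) > maxUniqueValues) {
      wait();
    }
    for (T v : values) {
      counts.merge(v, 1, Integer::sum);
    }
  }

  public synchronized void releaseAll(Set<T> values) {
    for (T v : values) {
      int c = counts.get(v);
      if (c == 1) {
        counts.remove(v);
      } else {
        counts.put(v, c - 1);
      }
    }
    notifyAll(); // wake waiters; they re-check the cap under the monitor
  }

  public synchronized int uniqueValueCount() {
    return counts.size();
  }

  /** Single-threaded demo; returns true if the scenario behaves as expected. */
  public static boolean demo() {
    try {
      NaiveMultisetSemaphoreSketch<String> sem = new NaiveMultisetSemaphoreSketch<>(2);
      sem.acquireAll(Set.of("a", "b"));
      sem.acquireAll(Set.of("a")); // "a" is already admitted: no new unique value
      boolean ok = sem.uniqueValueCount() == 2;
      sem.releaseAll(Set.of("a", "b"));
      return ok && sem.uniqueValueCount() == 1; // one "a" acquisition still outstanding
    } catch (InterruptedException e) {
      return false;
    }
  }
}
```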
--
PiperOrigin-RevId: 146005461
MOS_MIGRATED_REVID=146005461
--
PiperOrigin-RevId: 145473478
MOS_MIGRATED_REVID=145473478
for concurrent visitations.
During BFS visitation of rdeps and rbuildfiles, it uses a centralized pool
(backed by a LinkedBlockingQueue) to store all pending visits, and a
periodically running scheduler to schedule tasks for each pending visit.
--
MOS_MIGRATED_REVID=140398162
internal ConcurrentMap.
--
MOS_MIGRATED_REVID=140253038
BLAZE_INTERNER_CONCURRENCY_LEVEL environment variable.
--
MOS_MIGRATED_REVID=140070022
that makes an appropriate call to Interners.InternerBuilder#concurrencyLevel.
For current readers of this CL, I used this class everywhere in the Blaze codebase.
For future readers of this CL, this class should be used to create an Interner in the Blaze codebase.
--
MOS_MIGRATED_REVID=140063271
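The real change wraps Guava's Interners.newBuilder()....concurrencyLevel(...).build() in one factory class. As a dependency-free illustration of what interning itself does, here is a minimal ConcurrentHashMap-based interner (SimpleInterner is a made-up name, not the class this commit adds):

```java
import java.util.concurrent.ConcurrentHashMap;

/** Canonicalizes equal values to a single shared instance. */
public class SimpleInterner<T> {
  private final ConcurrentHashMap<T, T> pool = new ConcurrentHashMap<>();

  /** Returns the canonical instance equal to {@code sample}, registering it if new. */
  public T intern(T sample) {
    T existing = pool.putIfAbsent(sample, sample);
    return existing == null ? sample : existing;
  }
}
```

Guava's builder additionally lets callers pick weak references and tune the concurrency level of the backing map, which is exactly what a central factory class makes uniform across a codebase.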
--
MOS_MIGRATED_REVID=139508838
Costs CPU for what should be catchable by tests.
--
MOS_MIGRATED_REVID=139097279
at most K unique things at once.
--
MOS_MIGRATED_REVID=138684040
places we wait for tasks (plural!) submitted to a ForkJoinPool to finish, since we actually want to do so interruptibly.
As was to be expected, testing this was tricky :)
--
MOS_MIGRATED_REVID=134128019
--
MOS_MIGRATED_REVID=133714902
--
MOS_MIGRATED_REVID=133624676
The only place we now don't handle InterruptedException is in the action graph created after analysis, since I'm not sure that will be around for that much longer.
--
MOS_MIGRATED_REVID=130327770
Make sure that ParallelEvaluator treats SchedulerExceptions as less severe than other RuntimeExceptions (it already did, but add a comment emphasizing this).
The combination of the above means that we don't ignore hypothetical crashes in ParallelEvaluator worker threads _after_ a SchedulerException (e.g. due to an error in nokeep_going evaluation).
--
MOS_MIGRATED_REVID=130044230
--
MOS_MIGRATED_REVID=128476121
functionality was only used in tests.
--
MOS_MIGRATED_REVID=124841573
--
MOS_MIGRATED_REVID=123347295
count and possibly use that value to make dynamic decisions around scheduling.
--
MOS_MIGRATED_REVID=116351222
Reduces garbage.
--
MOS_MIGRATED_REVID=109914243
--
MOS_MIGRATED_REVID=109835697
Reduces garbage.
--
MOS_MIGRATED_REVID=108707405
Reduces garbage.
--
MOS_MIGRATED_REVID=108641543
--
MOS_MIGRATED_REVID=107806099
This CL introduces a QuiescingExecutor implementation specialized for
ForkJoinPools with the same interrupt handling, error propagation, and
task completion semantics as AbstractQueueVisitor. Currently it does
this by largely sharing its implementation with AQV.
Future refactoring could let it rely more on ForkJoinPool's own
awaitQuiescence implementation to avoid the overhead of AQV's
remainingTasks counter maintenance.
Subtasks spawned by tasks executing in ForkJoinQuiescingExecutor will
rely on ForkJoinPool's thread-local task deques for low contention
and (mostly) LIFO ordering.
--
MOS_MIGRATED_REVID=106864395
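The "future refactoring" direction above can be sketched with ForkJoinPool's built-in quiescence support (illustrative code, not the actual ForkJoinQuiescingExecutor): tasks may spawn subtasks into the same pool, and awaitQuiescence returns once everything has drained, with no explicit remaining-task counter.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Counts tasks executed by a ForkJoinPool, waiting via awaitQuiescence. */
public class ForkJoinQuiescenceSketch {
  public static int countWithSubtasks(int roots, int childrenPerRoot) {
    ForkJoinPool pool = new ForkJoinPool();
    AtomicInteger executed = new AtomicInteger();
    for (int i = 0; i < roots; i++) {
      pool.execute(() -> {
        executed.incrementAndGet();
        for (int j = 0; j < childrenPerRoot; j++) {
          pool.execute(executed::incrementAndGet); // subtask spawned from a task
        }
      });
    }
    // Returns once no tasks are running or queued (or the timeout elapses);
    // subtasks submitted by running tasks are covered, so no counter is needed.
    pool.awaitQuiescence(30, TimeUnit.SECONDS);
    pool.shutdown();
    return executed.get();
  }
}
```

Note that awaitQuiescence alone provides none of AQV's interrupt handling or error propagation, which is why the commit above keeps the AQV-shared implementation for now.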
This fixes the following synchronization issue with
AbstractQueueVisitor's jobsMustBeStoppedField: it was read in
awaitTermination in a block synchronized on zeroRemainingTasks, but
in markToStopAllJobsIfNeeded it was read and written in a block
synchronized on the AQV instance. Now, it is always read or written
in a block synchronized on zeroRemainingTasks, because it is used in
the condition represented by that object.
This also thoroughly cleans up obsolete and irregular documentation in
the class.
--
MOS_MIGRATED_REVID=106849236
Raises the level of abstraction of ValueVisitor's dependence on
AbstractQueueVisitor. Except for the "ForTestingOnly" methods now
available on the QuiescingExecutor interface, ValueVisitor is
agnostic to the implementation of its executor.
This also cleans up the full spectrum of visibility modifiers on
ValueVisitor methods, all of which ought to be private.
--
MOS_MIGRATED_REVID=106847453
Changes the AbstractQueueVisitor strategy for varying its response to
unhandled exceptions from inheritance to composition. This will help
with a forthcoming switch from inheritance to delegation for
ValueVisitor's use of AbstractQueueVisitor.
--
MOS_MIGRATED_REVID=106730708
This interface (mostly) encapsulates what the ValueVisitor expects
from the AbstractQueueVisitor class it currently inherits from. This
makes it easier for a future CL to change ValueVisitor's strategy of
code reuse from inheritance to composition.
RELNOTES:
--
MOS_MIGRATED_REVID=106728863
order. In general, there's no advantage in Blaze to FIFO, and it means that we effectively do breadth-first graph traversal. When we must hold state for incomplete nodes (as we do with action execution, or more generally, as we do in Skyframe), this increases our memory footprint.
LIFO is not exactly depth-first traversal, since we are multithreaded, but to a first approximation, it looks like a depth-first traversal with "width" the number of threads (at each level of the graph, #(threads) nodes are visited).
--
MOS_MIGRATED_REVID=105995014
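A single-threaded sketch of why LIFO ordering approximates depth-first traversal (the graph and names are illustrative): the most recently pushed children are popped first, so one branch is driven toward completion before its siblings, keeping fewer incomplete nodes in flight.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

/** Visits a DAG using the work queue as a LIFO stack. */
public class LifoTraversal {
  public static List<String> traverse(Map<String, List<String>> graph, String root) {
    Deque<String> queue = new ArrayDeque<>();
    List<String> visitOrder = new ArrayList<>();
    queue.addFirst(root);
    while (!queue.isEmpty()) {
      String node = queue.pollFirst(); // LIFO: take the most recently added node
      visitOrder.add(node);
      for (String child : graph.getOrDefault(node, List.of())) {
        queue.addFirst(child);
      }
    }
    return visitOrder;
  }

  /** Demo graph: a -> {b, c}, b -> {d}, c -> {e}. */
  public static List<String> demoOrder() {
    Map<String, List<String>> graph = Map.of(
        "a", List.of("b", "c"),
        "b", List.of("d"),
        "c", List.of("e"));
    return traverse(graph, "a");
  }
}
```

The demo visits a, c, e, b, d: each branch finishes before a sibling starts, whereas a FIFO queue would interleave levels breadth-first.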
RELNOTES: None
--
MOS_MIGRATED_REVID=105787681
Uses an AtomicLong to count remaining tasks. Only obtains the
zeroRemainingTasks lock when remaining tasks have gone to zero or
the codepath needs to wait on that condition.
--
MOS_MIGRATED_REVID=105348523
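The counting scheme described above can be sketched as follows (illustrative names, not the actual AbstractQueueVisitor code): an AtomicLong handles the hot increment/decrement path, and the zeroRemainingTasks monitor is only entered at the zero transition or by a blocked waiter.

```java
import java.util.concurrent.atomic.AtomicLong;

/** Lock-free task counting; the monitor is only touched at the zero transition. */
public class RemainingTasksLatch {
  private final AtomicLong remainingTasks = new AtomicLong();
  private final Object zeroRemainingTasks = new Object();

  public void taskAdded() {
    remainingTasks.incrementAndGet(); // uncontended fast path: no lock
  }

  public void taskDone() {
    if (remainingTasks.decrementAndGet() == 0) {
      synchronized (zeroRemainingTasks) {
        zeroRemainingTasks.notifyAll(); // only lock when the count hits zero
      }
    }
  }

  /** Blocks until the remaining-task count reaches zero. */
  public void awaitQuiescence() throws InterruptedException {
    synchronized (zeroRemainingTasks) {
      while (remainingTasks.get() != 0) {
        zeroRemainingTasks.wait();
      }
    }
  }

  public long remaining() {
    return remainingTasks.get();
  }
}
```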
Removes unnecessary final keyword on private methods, inlines the
un-overridden protected method getWorkQueue, and restructures internal
constructors to be flatter (i.e. every constructor implementation calls
at most one other constructor).
--
MOS_MIGRATED_REVID=105344413