unsigned integer overflow.
PiperOrigin-RevId: 648730502
Change-Id: I662c365c59be9e51f565fd215d284a96b7bd8743

PiperOrigin-RevId: 645054874
Change-Id: Ic4a820b47edfa71bd3e1f149d54f00ac3c1d16a6

continued flakiness.
PiperOrigin-RevId: 643372086
Change-Id: I8fb2acc0e5ad35113e865bf008a531f3442a9295

the open source release. This was only used in tests that never ran
as part of the open source release.
PiperOrigin-RevId: 636167506
Change-Id: Iafc33bd768307fe9ee77b181369635012abf2245

PiperOrigin-RevId: 628091370
Change-Id: I2dd20b7f33ab99e78d63688832ab475a513aa3fd

This often indicates a bug from adding synchronization logic but not using it.
PiperOrigin-RevId: 621921486
Change-Id: Iec49134c5e4bb50d9fc728c1f8a4fd2e86856782

PiperOrigin-RevId: 616951235
Change-Id: I2d6e95a432285c3f79ef8484848e88e06973f51f

PiperOrigin-RevId: 613326708
Change-Id: I6e5ca195f208b8da0d21d70b5a035bfdc64f866d

PiperOrigin-RevId: 612509928
Change-Id: I90de2e6bd229bf5cf71a27e9c491bc2794e9265f

PiperOrigin-RevId: 603816996
Change-Id: Ifc7dc6299e65043697b4a0c6e9e8eef869297ce3

https://bazel.build/build/style-guide#other-conventions
PiperOrigin-RevId: 603084345
Change-Id: Ibd7c9573d820f88059d12c46ff82d7d322d002ae

The current support policy is `_MSC_VER >= 1920`.
PiperOrigin-RevId: 599833619
Change-Id: I9cf7393a5b659d1680765e37e0328539ccb870fa

Imported from GitHub PR https://github.com/abseil/abseil-cpp/pull/1589
It makes sense because even if it fails spuriously, we can just try again since we have to check for other readers anyway.
Merge 0b1780299b9e43205202d6b25f6e57759722d063 into 6a19ff47352a2112e953f4ab813d820e0ecfe1e3
Merging this change closes #1589
COPYBARA_INTEGRATE_REVIEW=https://github.com/abseil/abseil-cpp/pull/1589 from AtariDreams:atomics 0b1780299b9e43205202d6b25f6e57759722d063
PiperOrigin-RevId: 595149382
Change-Id: I24f678f6bf95c6a37b2ed541a2b6668a58a67702

* Also does this for `absl::internal::identity_t`, which is now `absl::internal::type_identity_t`
* This naming is clearer because it is a backfill of `std::type_identity` (the identity type), not `std::identity` (the identity function)
PiperOrigin-RevId: 594316002
Change-Id: I5fb8cf7e3d07c1bc736cbecd202e7d556b6ea33e

benchmarks.
PiperOrigin-RevId: 593918110
Change-Id: Ide100c69b10e28011af17c7f82bb10eea072cad4

The added test exposes a false TSan race report in
EnableInvariantDebugging/EnableDebugLog related to SynchEvent reuse.
We ignore most of what happens inside the Mutex code,
but not the code inside EnableInvariantDebugging/EnableDebugLog,
so these can cause occasional false reports on SynchEvent bankruptcy.
Also ignore accesses in EnableInvariantDebugging/EnableDebugLog.
PiperOrigin-RevId: 592226791
Change-Id: I066edb1ef5661ba6cf86a195f91a9d5328b93d10

Previously, `absl::Condition` incorrectly used the same (non-`const`)
pointer-to-method type when wrapping both `const` and non-`const` methods.
Unfortunately, this is undefined behavior according to `[expr.reinterpret.cast]`
in the C++ standard:
> The effect of calling a function through a pointer to a function type that is
> not the same as the type used in the definition of the function is undefined.
This fixes the UB.
PiperOrigin-RevId: 591981682
Change-Id: Iaca955346699417232383d3a1800ea9b82ea5761

and use StdcppWaiter instead.
There are various flavors of MinGW, some of which support pthread,
and some of which support Win32. Instead of figuring out which
platform is being used, just use the generic implementation.
PiperOrigin-RevId: 580565507
Change-Id: Ia85fd7496f1e6ebdeadb95202f0039e844826118

The Mutex destructor is needed only to clean up debug logging
and invariant checking synch events. These are not supposed
to be used in production, but the non-empty destructor has
costs for production builds.
Instead of removing synch events in the destructor,
drop all of them if we have accumulated too many.
For tests it should not matter (we may only consume
a bit more memory). Production builds should be either unaffected
(if they don't use debug logging) or use a periodic reset of all synch events.
PiperOrigin-RevId: 578123259
Change-Id: I0ec59183a5f63ea0a6b7fc50f0a77974e7f677be

Currently Mutex::Lock contains a non-inlined, non-tail call chain:
TryAcquireWithSpinning -> GetMutexGlobals -> LowLevelCallOnce -> init closure
This turns the function into a non-leaf function with stack frame allocation
and additional register use. Remove this non-tail call to make the function a leaf.
Move spin iteration initialization to LockSlow.
Current Lock happy path:
00000000001edc20 <absl::Mutex::Lock()>:
1edc20: 55 push %rbp
1edc21: 48 89 e5 mov %rsp,%rbp
1edc24: 53 push %rbx
1edc25: 50 push %rax
1edc26: 48 89 fb mov %rdi,%rbx
1edc29: 48 8b 07 mov (%rdi),%rax
1edc2c: a8 19 test $0x19,%al
1edc2e: 75 0e jne 1edc3e <absl::Mutex::Lock()+0x1e>
1edc30: 48 89 c1 mov %rax,%rcx
1edc33: 48 83 c9 08 or $0x8,%rcx
1edc37: f0 48 0f b1 0b lock cmpxchg %rcx,(%rbx)
1edc3c: 74 42 je 1edc80 <absl::Mutex::Lock()+0x60>
... unhappy path ...
1edc80: 48 83 c4 08 add $0x8,%rsp
1edc84: 5b pop %rbx
1edc85: 5d pop %rbp
1edc86: c3 ret
New Lock happy path:
00000000001eea80 <absl::Mutex::Lock()>:
1eea80: 48 8b 07 mov (%rdi),%rax
1eea83: a8 19 test $0x19,%al
1eea85: 75 0f jne 1eea96 <absl::Mutex::Lock()+0x16>
1eea87: 48 89 c1 mov %rax,%rcx
1eea8a: 48 83 c9 08 or $0x8,%rcx
1eea8e: f0 48 0f b1 0f lock cmpxchg %rcx,(%rdi)
1eea93: 75 01 jne 1eea96 <absl::Mutex::Lock()+0x16>
1eea95: c3 ret
... unhappy path ...
PiperOrigin-RevId: 577790105
Change-Id: I20793534050302ff9f7a20aed93791c088d98562

PiperOrigin-RevId: 577180526
Change-Id: Iec53709456805ca8dc5327669cc0f6c95825d0e9

The Mutex destructor is needed only to clean up debug logging
and invariant checking synch events. These are not supposed
to be used in production, but the non-empty destructor has
costs for production builds.
Instead of removing synch events in the destructor,
drop all of them if we have accumulated too many.
For tests it should not matter (we may only consume
a bit more memory). Production builds should be either unaffected
(if they don't use debug logging) or use a periodic reset of all synch events.
PiperOrigin-RevId: 577106805
Change-Id: Icaaf7166b99afcd5dce92b4acd1be661fb72f10b

Currently, if a thread has already blocked on a Mutex
but then failed to acquire it, we queue it in FIFO order again.
As a result, unlucky threads can suffer bad latency
if they are requeued several times.
The least we can do for them is to queue them in LIFO order after blocking.
PiperOrigin-RevId: 576174725
Change-Id: I9e2a329d34279a26bd1075b42e3217a5dc065f0a

PiperOrigin-RevId: 572575394
Change-Id: Ic1c5ac2423b1634e50c43bad6daa14e82a8f3e2c

The layering_check feature ensures that rules that include a header
explicitly depend on a rule that exports that header. Compiler support
is required; currently only Clang 16+ diagnoses
layering_check failures.
The parse_headers feature ensures headers are self-contained by
compiling them with -fsyntax-only on supported compilers.
PiperOrigin-RevId: 572350144
Change-Id: I37297f761566d686d9dd58d318979d688b7e36d1

PiperOrigin-RevId: 567415671
Change-Id: I59bfcb5ac9fbde227a4cdb3b497b0bd5969b0770

There are some regressions reported.
PiperOrigin-RevId: 567181925
Change-Id: I4ee8a61afd336de7ecb22ec307adb2068932bc8b

Tidy up Mutex::[Reader]TryLock codegen by outlining the slow path
and the non-tail function call, and un-unrolling the loop.
Current codegen:
https://gist.githubusercontent.com/dvyukov/a4d353fd71ac873af9332c1340675b60/raw/226537ffa305b25a79ef3a85277fa870fee5191d/gistfile1.txt
New codegen:
https://gist.githubusercontent.com/dvyukov/686a094c5aa357025689764f155e5a29/raw/e3125c1cdb5669fac60faf336e2f60395e29d888/gistfile1.txt
name old cpu/op new cpu/op delta
BM_TryLock 18.0ns ± 0% 17.7ns ± 0% -1.64% (p=0.016 n=4+5)
BM_ReaderTryLock/real_time/threads:1 17.9ns ± 0% 17.9ns ± 0% -0.10% (p=0.016 n=5+5)
BM_ReaderTryLock/real_time/threads:72 9.61µs ± 8% 8.42µs ± 7% -12.37% (p=0.008 n=5+5)
PiperOrigin-RevId: 567006472
Change-Id: Iea0747e71bbf2dc1f00c70a4235203071d795b99

PiperOrigin-RevId: 566991965
Change-Id: I6c4d64de79d303e69b18330bda04fdc84d40893d

Currently ReaderLock/Unlock tries the CAS only once.
Even with moderate contention from other readers only,
ReaderLock/Unlock goes onto the slow path, which does lots of additional work
before retrying the CAS (since there are only readers, the slow-path
logic is not really needed for anything).
Retry the CAS while there are only readers.
name old cpu/op new cpu/op delta
BM_ReaderLock/real_time/threads:1 17.9ns ± 0% 17.9ns ± 0% ~ (p=0.071 n=5+5)
BM_ReaderLock/real_time/threads:72 11.4µs ± 3% 8.4µs ± 4% -26.24% (p=0.008 n=5+5)
PiperOrigin-RevId: 566981511
Change-Id: I432a3c1d85b84943d0ad4776a34fa5bfcf5b3b8e

PiperOrigin-RevId: 566961701
Change-Id: Id04e4c5a598f508a0fe7532ae8f084c583865f2d

Currently Mutex::Lock contains a non-inlined, non-tail call chain:
TryAcquireWithSpinning -> GetMutexGlobals -> LowLevelCallOnce -> init closure
This turns the function into a non-leaf function with stack frame allocation
and additional register use. Remove this non-tail call to make the function a leaf.
Move spin iteration initialization to LockSlow.
Current Lock happy path:
00000000001edc20 <absl::Mutex::Lock()>:
1edc20: 55 push %rbp
1edc21: 48 89 e5 mov %rsp,%rbp
1edc24: 53 push %rbx
1edc25: 50 push %rax
1edc26: 48 89 fb mov %rdi,%rbx
1edc29: 48 8b 07 mov (%rdi),%rax
1edc2c: a8 19 test $0x19,%al
1edc2e: 75 0e jne 1edc3e <absl::Mutex::Lock()+0x1e>
1edc30: 48 89 c1 mov %rax,%rcx
1edc33: 48 83 c9 08 or $0x8,%rcx
1edc37: f0 48 0f b1 0b lock cmpxchg %rcx,(%rbx)
1edc3c: 74 42 je 1edc80 <absl::Mutex::Lock()+0x60>
... unhappy path ...
1edc80: 48 83 c4 08 add $0x8,%rsp
1edc84: 5b pop %rbx
1edc85: 5d pop %rbp
1edc86: c3 ret
New Lock happy path:
00000000001eea80 <absl::Mutex::Lock()>:
1eea80: 48 8b 07 mov (%rdi),%rax
1eea83: a8 19 test $0x19,%al
1eea85: 75 0f jne 1eea96 <absl::Mutex::Lock()+0x16>
1eea87: 48 89 c1 mov %rax,%rcx
1eea8a: 48 83 c9 08 or $0x8,%rcx
1eea8e: f0 48 0f b1 0f lock cmpxchg %rcx,(%rdi)
1eea93: 75 01 jne 1eea96 <absl::Mutex::Lock()+0x16>
1eea95: c3 ret
... unhappy path ...
PiperOrigin-RevId: 566488042
Change-Id: I62f854b82a322cfb1d42c34f8ed01b4677693fca
Currently, if a thread has already blocked on a Mutex
but then failed to acquire it, we queue it in FIFO order again.
As a result, unlucky threads can suffer bad latency
if they are requeued several times.
The least we can do for them is to queue them in LIFO order after blocking.
PiperOrigin-RevId: 566478783
Change-Id: I8bac08325f20ff6ccc2658e04e1847fd4614c653

CondVar wait morphing has a special case for timed waits.
The code goes back to 2006; there might have
been reasons to do this back then,
but now it does not seem to be necessary.
Wait morphing should work just fine after timed CondVar waits.
Remove the special case and simplify the code.
PiperOrigin-RevId: 565798838
Change-Id: I4e4d61ae7ebd521f5c32dfc673e57a0c245e7cfb

1. Remove special handling of Condition::kTrue.
Condition::kTrue is used very rarely (frequently its uses even indicate
confusion and bugs), but we pay a few additional branches for kTrue
on all Condition operations.
Remove that special handling and simplify the logic.
2. Remove the known_false condition in the Mutex code.
Checking the known_false condition only causes a slowdown because:
1. We already built a skip list of equivalent conditions
(and keep improving it on every Skip call). And when we built
the skip list, we used the more capable GuaranteedEqual function
(it does not just check for equality of pointers,
but also for equality of function/arg).
2. Condition pointers are rarely equal even for equivalent conditions
because temp Condition objects are usually created on the stack.
We could call GuaranteedEqual(cond, known_false) instead of cond == known_false,
but that slows things down even more (see point 1).
So remove the known_false optimization.
Benchmark results for this and the previous change:
name old cpu/op new cpu/op delta
BM_ConditionWaiters/0/1 36.0ns ± 0% 34.9ns ± 0% -3.02% (p=0.008 n=5+5)
BM_ConditionWaiters/1/1 36.0ns ± 0% 34.9ns ± 0% -2.98% (p=0.008 n=5+5)
BM_ConditionWaiters/2/1 35.9ns ± 0% 34.9ns ± 0% -3.03% (p=0.016 n=5+4)
BM_ConditionWaiters/0/8 55.5ns ± 5% 49.8ns ± 3% -10.33% (p=0.008 n=5+5)
BM_ConditionWaiters/1/8 36.2ns ± 0% 35.2ns ± 0% -2.90% (p=0.016 n=5+4)
BM_ConditionWaiters/2/8 53.2ns ± 7% 48.3ns ± 7% ~ (p=0.056 n=5+5)
BM_ConditionWaiters/0/64 295ns ± 1% 254ns ± 2% -13.73% (p=0.008 n=5+5)
BM_ConditionWaiters/1/64 36.2ns ± 0% 35.2ns ± 0% -2.85% (p=0.008 n=5+5)
BM_ConditionWaiters/2/64 290ns ± 6% 250ns ± 4% -13.68% (p=0.008 n=5+5)
BM_ConditionWaiters/0/512 5.50µs ±12% 4.99µs ± 8% ~ (p=0.056 n=5+5)
BM_ConditionWaiters/1/512 36.7ns ± 3% 35.2ns ± 0% -4.10% (p=0.008 n=5+5)
BM_ConditionWaiters/2/512 4.44µs ±13% 4.01µs ± 3% -9.74% (p=0.008 n=5+5)
BM_ConditionWaiters/0/4096 104µs ± 6% 101µs ± 3% ~ (p=0.548 n=5+5)
BM_ConditionWaiters/1/4096 36.2ns ± 0% 35.1ns ± 0% -3.03% (p=0.008 n=5+5)
BM_ConditionWaiters/2/4096 90.4µs ± 5% 85.3µs ± 7% ~ (p=0.222 n=5+5)
BM_ConditionWaiters/0/8192 384µs ± 5% 367µs ± 7% ~ (p=0.222 n=5+5)
BM_ConditionWaiters/1/8192 36.2ns ± 0% 35.2ns ± 0% -2.84% (p=0.008 n=5+5)
BM_ConditionWaiters/2/8192 363µs ± 3% 316µs ± 7% -12.84% (p=0.008 n=5+5)
PiperOrigin-RevId: 565669535
Change-Id: I5180c4a787933d2ce477b004a111853753304684

absl: remove special handling of Condition::kTrue
absl: remove known_false condition in Mutex code
There are some test breakages.
PiperOrigin-RevId: 563751370
Change-Id: Ie14dc799e0a0d286a7e1b47f0a9bbe59dfb23f70

When CondVar accepted generic non-Mutex mutexes,
the Mutex pointer could be nullptr. That support has since been removed,
but we still have some lingering checks for Mutex* == nullptr.
Remove them.
PiperOrigin-RevId: 563740239
Change-Id: Ib744e0b991f411dd8dba1b0da6477c13832e0f65

Mutex::Await/LockWhen/CondVar::Wait duplicate code and cause additional
calls at runtime and code bloat.
Inline the thin wrappers that just convert argument types, and
add a single de-duplicated implementation of these methods.
This reduces code size, shaves 55K off mutex_test in a release build,
and should make things marginally faster.
$ nm -nS mutex_test | egrep "(_ZN4absl5Mutex.*(Await|LockWhen))|(_ZN4absl7CondVar.*Wait)"
before:
00000000000912c0 00000000000001a8 T _ZN4absl7CondVar4WaitEPNS_5MutexE
00000000000988c0 0000000000000c36 T _ZN4absl7CondVar16WaitWithDeadlineEPNS_5MutexENS_4TimeE
000000000009a6e0 0000000000000041 T _ZN4absl5Mutex19LockWhenWithTimeoutERKNS_9ConditionENS_8DurationE
00000000000a28c0 0000000000000779 T _ZN4absl5Mutex17AwaitWithDeadlineERKNS_9ConditionENS_4TimeE
00000000000cf4e0 0000000000000011 T _ZN4absl5Mutex8LockWhenERKNS_9ConditionE
00000000000cf500 0000000000000041 T _ZN4absl5Mutex20LockWhenWithDeadlineERKNS_9ConditionENS_4TimeE
00000000000cf560 0000000000000011 T _ZN4absl5Mutex14ReaderLockWhenERKNS_9ConditionE
00000000000cf580 0000000000000041 T _ZN4absl5Mutex26ReaderLockWhenWithDeadlineERKNS_9ConditionENS_4TimeE
00000000000cf5e0 0000000000000766 T _ZN4absl5Mutex5AwaitERKNS_9ConditionE
00000000000cfd60 00000000000007b5 T _ZN4absl5Mutex16AwaitWithTimeoutERKNS_9ConditionENS_8DurationE
00000000000d0700 00000000000003cf T _ZN4absl7CondVar15WaitWithTimeoutEPNS_5MutexENS_8DurationE
000000000011c280 0000000000000041 T _ZN4absl5Mutex25ReaderLockWhenWithTimeoutERKNS_9ConditionENS_8DurationE
after:
000000000009c300 00000000000007ed T _ZN4absl7CondVar10WaitCommonEPNS_5MutexENS_24synchronization_internal13KernelTimeoutE
00000000000a03c0 00000000000006fe T _ZN4absl5Mutex11AwaitCommonERKNS_9ConditionENS_24synchronization_internal13KernelTimeoutE
000000000011ae00 0000000000000025 T _ZN4absl5Mutex14LockWhenCommonERKNS_9ConditionENS_24synchronization_internal13KernelTimeoutEb
PiperOrigin-RevId: 563729364
Change-Id: Ic6b43761f76719c01e03d43cc0e0c419e41a85c1

Checking the known_false condition only causes a slowdown because:
1. We already built a skip list of equivalent conditions
(and keep improving it on every Skip call). And when we built
the skip list, we used the more capable GuaranteedEqual function
(it does not just check for equality of pointers,
but also for equality of function/arg).
2. Condition pointers are rarely equal even for equivalent conditions
because temp Condition objects are usually created on the stack.
We could call GuaranteedEqual(cond, known_false) instead of cond == known_false,
but that slows things down even more (see point 1).
So remove the known_false optimization.
Benchmark results for this and the previous change:
name old cpu/op new cpu/op delta
BM_ConditionWaiters/0/1 36.0ns ± 0% 34.9ns ± 0% -3.02% (p=0.008 n=5+5)
BM_ConditionWaiters/1/1 36.0ns ± 0% 34.9ns ± 0% -2.98% (p=0.008 n=5+5)
BM_ConditionWaiters/2/1 35.9ns ± 0% 34.9ns ± 0% -3.03% (p=0.016 n=5+4)
BM_ConditionWaiters/0/8 55.5ns ± 5% 49.8ns ± 3% -10.33% (p=0.008 n=5+5)
BM_ConditionWaiters/1/8 36.2ns ± 0% 35.2ns ± 0% -2.90% (p=0.016 n=5+4)
BM_ConditionWaiters/2/8 53.2ns ± 7% 48.3ns ± 7% ~ (p=0.056 n=5+5)
BM_ConditionWaiters/0/64 295ns ± 1% 254ns ± 2% -13.73% (p=0.008 n=5+5)
BM_ConditionWaiters/1/64 36.2ns ± 0% 35.2ns ± 0% -2.85% (p=0.008 n=5+5)
BM_ConditionWaiters/2/64 290ns ± 6% 250ns ± 4% -13.68% (p=0.008 n=5+5)
BM_ConditionWaiters/0/512 5.50µs ±12% 4.99µs ± 8% ~ (p=0.056 n=5+5)
BM_ConditionWaiters/1/512 36.7ns ± 3% 35.2ns ± 0% -4.10% (p=0.008 n=5+5)
BM_ConditionWaiters/2/512 4.44µs ±13% 4.01µs ± 3% -9.74% (p=0.008 n=5+5)
BM_ConditionWaiters/0/4096 104µs ± 6% 101µs ± 3% ~ (p=0.548 n=5+5)
BM_ConditionWaiters/1/4096 36.2ns ± 0% 35.1ns ± 0% -3.03% (p=0.008 n=5+5)
BM_ConditionWaiters/2/4096 90.4µs ± 5% 85.3µs ± 7% ~ (p=0.222 n=5+5)
BM_ConditionWaiters/0/8192 384µs ± 5% 367µs ± 7% ~ (p=0.222 n=5+5)
BM_ConditionWaiters/1/8192 36.2ns ± 0% 35.2ns ± 0% -2.84% (p=0.008 n=5+5)
BM_ConditionWaiters/2/8192 363µs ± 3% 316µs ± 7% -12.84% (p=0.008 n=5+5)
PiperOrigin-RevId: 563717887
Change-Id: I9a62670628510d764a4f2f88a047abb8f85009e2

Condition::kTrue is used very rarely (frequently its uses even indicate
confusion and bugs), but we pay a few additional branches for kTrue
on all Condition operations.
Remove that special handling and simplify the logic.
PiperOrigin-RevId: 563691160
Change-Id: I76125adde4872489da069dd9c894ed73a65d1d83

Enqueue updates the priority of the queued thread.
It was assumed that the queued thread is the current thread,
but that is not the case in CondVar wait morphing,
where we requeue an existing CondVar waiter on the Mutex.
As a result, one thread could falsely get the priority of another thread.
Fix this by not updating the priority in this case,
and make the assumption explicit and checked.
PiperOrigin-RevId: 561249402
Change-Id: I9476c047757090b893a88a2839b795b85fe220ad

it runs on non-dedicated Kokoro
PiperOrigin-RevId: 558874605
Change-Id: Iba35f558ab8c967f98a3176af056e76341fb67c3

Since ABSL_INTERNAL_HAVE_STDCPP_WAITER is defined on all systems,
it is effectively a fallback. I left the condition there in case
we have to disable it on some platform in the future.
PiperOrigin-RevId: 555629066
Change-Id: I76ca78c7f36d1d02dc4950a44c66903a2aaf2a52

PiperOrigin-RevId: 548709037
Change-Id: I6eb03553299265660aa0abc180ae0f197a416ba4

PiperOrigin-RevId: 546914671
Change-Id: I6f0419103efdd8125e4027e7d5eec124ca604156

Explain when kTrue may be useful.
Note that Mutex::Await/LockWhen with a kTrue condition
and a timeout do not return when the timeout is reached.
PiperOrigin-RevId: 544846222
Change-Id: I7a14ae5a9314b2e500919f0c7b3a907d4d97c127

Check various corner cases of the Await/LockWhen return value
with always-true/always-false conditions.
I don't see this explicitly tested anywhere else.
PiperOrigin-RevId: 542141533
Change-Id: Ia116c6dc199de606ad446c205951169ec5e2abe1

Currently the linter warns on all changes:
missing #include <cstdlib> for 'std::atexit'
and
single-argument constructors must be marked explicit to avoid unintentional implicit conversions
Fix that.
PiperOrigin-RevId: 542135136
Change-Id: Ic86649de6baef7f2de71f45875bb66bd730bf6e1

A few purely cosmetic changes:
- remove unused headers
- add a using-declaration for CycleClock since it's used multiple times
- restructure GetMutexGlobals to be more consistent
PiperOrigin-RevId: 542002120
Change-Id: I117faae05cb8224041f7e3771999f3a35bdf4aef

Reformat the Mutex-related files so that incremental formatting changes
don't distract during review of logical changes.
These files are subtle, and any unnecessary diffs make reviews harder.
No changes besides running clang-format.
PiperOrigin-RevId: 541981737
Change-Id: I41cccb7a97158c78d17adaff6fe553c2c9c2b9ed