dereference of iterator returned by `try_emplace`.
PiperOrigin-RevId: 597920257
Change-Id: I1b2e8f10a2f1efa763a6f0760294beafdb6fd9c0
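
For context, a minimal sketch of the pattern the (truncated) message refers to, assuming an absl::flat_hash_map; the function and key are illustrative, not from the change itself:

```cpp
#include <string>

#include "absl/container/flat_hash_map.h"

// try_emplace returns std::pair<iterator, bool>; dereferencing the returned
// iterator right away is the operation the message above refers to.
int GetOrInsert(absl::flat_hash_map<std::string, int>& m) {
  auto result = m.try_emplace("key", 0);  // inserts {"key", 0} if absent
  return result.first->second;            // dereference of the iterator
}
```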
with ABSL_HAVE_HWADDRESS_SANITIZER.
It will detect bugs similar to ASan.
Also updated the related tests to pass with HWASAN.
They are still flaky because of the nature of the HWASAN algorithm,
but the tests can be updated to avoid the flakiness, which I will do
in follow-up patches.
PiperOrigin-RevId: 597613798
Change-Id: Ic8af36a268ca041c002eb561b946aa2d9b93996a
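
A minimal sketch of the guard this macro enables, assuming only absl/base/config.h; illustrative, not code from this change:

```cpp
#include "absl/base/config.h"

// ABSL_HAVE_HWADDRESS_SANITIZER is defined by absl/base/config.h when the
// build runs under HWASAN; tests and internal checks can branch on it.
bool RunningUnderHwasan() {
#ifdef ABSL_HAVE_HWADDRESS_SANITIZER
  return true;
#else
  return false;
#endif
}
```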
removing the iterator invalidation check from the comparison that `contains` performs.
PiperOrigin-RevId: 595460301
Change-Id: I9a5d6c81385e38184f4848c58209adc5d32bb7be
benchmarks.
PiperOrigin-RevId: 593918110
Change-Id: Ide100c69b10e28011af17c7f82bb10eea072cad4
We currently have a workaround for MSVC, which has constexpr pointer arithmetic bugs. The bug seems to still exist, and the existing code for non-MSVC compilers doesn't build.
This alternative constexpr constructor avoids pointer arithmetic and seems to work for all compilers, including MSVC.
PiperOrigin-RevId: 592586957
Change-Id: Ic585693c3a7abaab5fbbc0954b8ee924994f8dbf
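
The class involved is not visible in this log entry; the sketch below only illustrates the general idea of avoiding constexpr pointer arithmetic, using a hypothetical type:

```cpp
#include <cstddef>

// Hypothetical type, for illustration only.
struct CharRange {
  const char* data;
  std::size_t size;

  // Computing `last - first` here would be constexpr pointer arithmetic,
  // which some MSVC versions handle incorrectly at compile time:
  //   constexpr CharRange(const char* first, const char* last)
  //       : data(first), size(static_cast<std::size_t>(last - first)) {}
  // Taking the length directly sidesteps the arithmetic entirely.
  constexpr CharRange(const char* d, std::size_t n) : data(d), size(n) {}
};
```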
benchmarks.
PiperOrigin-RevId: 592483250
Change-Id: I55fa9982c4dbc723b30957cb31da95251e368707
PiperOrigin-RevId: 592301543
Change-Id: I97e4df805c7313896228430a50a7f991127f3e30
PiperOrigin-RevId: 592272653
Change-Id: I895c5786555227bdc88ab0a4cce8cf5ba65222a1
same element.
PiperOrigin-RevId: 591987002
Change-Id: Ic1ed2063aeb95a6e814eefcbed024e1a5a1d8d2f
PiperOrigin-RevId: 590337123
Change-Id: Ib98bbb5a5dadbce5e891567038e016f4da2efc0b
It is not used at the moment. Usage is planned to be submitted separately.
It is useful for faster iteration over all slots in the internal functions.
PiperOrigin-RevId: 590300049
Change-Id: I081f33113268761db868771d29796d94c24e4e7a
Details:
- In case the table fits entirely into a single group (`capacity <= Group::kWidth`), the order of elements is not important.
- For growth up to Group::kWidth, we rotate control bytes and slots deterministically (using memcpy); a simplified sketch follows this entry.
- We also avoid a second find_first_non_full right after a resize for small growth.
PiperOrigin-RevId: 588825966
Change-Id: I09bd7fd489e3868dcf56c36b436805d08dae7ab5
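
A heavily simplified sketch of the memcpy-based copy mentioned above; the function and layout are hypothetical, not the actual raw_hash_set internals:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Because element order inside a single group does not matter, growth up to
// Group::kWidth can place the old control bytes (and, analogously, slots)
// at a deterministic offset in the new array with one memcpy, instead of
// re-probing every element.
void CopyControlBytes(const uint8_t* old_ctrl, std::size_t old_capacity,
                      uint8_t* new_ctrl, std::size_t offset) {
  std::memcpy(new_ctrl + offset, old_ctrl, old_capacity);
}
```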
PiperOrigin-RevId: 585892739
Change-Id: I30490dac5826bff2a33d9872f71154d360f9cc0d
This reduces the produced binary size and can enable further optimizations in the future.
PiperOrigin-RevId: 584136517
Change-Id: I3854833799f88f28b755ec53132925f0c3d468ab
disabled. This will make it easier to roll back and forward in the future (if needed) without causing merge conflicts in unrelated code.
PiperOrigin-RevId: 582059046
Change-Id: I66dc6527e7a0b351367b7a391c2d653fe793143f
PiperOrigin-RevId: 580726428
Change-Id: I12b0f22c2084aef90bcca67536220a6bb550b57e
This allows avoiding, e.g., `it.inner_.slot_` on const iterators.
Also, refactor HashtableDebugAccess::AllocatedByteSize to use regular iteration instead of looking directly at control/slot_array of the table in order to support small object optimization.
PiperOrigin-RevId: 580194253
Change-Id: I64cd69287834ee5c7a8daf867c532258806bfb7b
cause pointers to elements to be invalidated.
PiperOrigin-RevId: 578920671
Change-Id: Ica40db48d5565b606e5e5f501c1305612b193d4d
that may have been invalidated by a container move.
PiperOrigin-RevId: 578649798
Change-Id: Icfee98d3a0399b545ec6ec59c5b52ae5e006218b
constructors/destructors don't make reentrant calls to raw_hash_set member functions.
PiperOrigin-RevId: 577877803
Change-Id: I254c589b00cadec6ff95dfd60a8a38ab303c1af5
don't make reentrant calls to raw_hash_set member functions.
PiperOrigin-RevId: 574232718
Change-Id: I8ef25fec00b76ee5fb9424e7614ca55edd6ba81b
reentrant calls to raw_hash_set member functions.
PiperOrigin-RevId: 573897598
Change-Id: If40c23ac3cd9fff315ee18774e27c480cbca3a81
Motivation: once we enable small object optimization in swisstable, iterators can be invalidated when the table is moved.
PiperOrigin-RevId: 573884505
Change-Id: I4278129829143d3747dfd0ef0ff92f321c2633dc
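
A hedged illustration of the reentrancy these assertions target; the functor below is a deliberately broken example, not code from the change:

```cpp
#include <cstddef>

#include "absl/container/flat_hash_set.h"

// Deliberately broken: a hash functor that calls back into the container
// that is currently invoking it. The new assertions flag this pattern.
struct ReentrantHash {
  const absl::flat_hash_set<int, ReentrantHash>* self = nullptr;
  std::size_t operator()(int x) const {
    // Reentrant call into the table from inside its own hash computation.
    return static_cast<std::size_t>(x) + (self ? self->size() : 0);
  }
};
```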
PiperOrigin-RevId: 572979536
Change-Id: I4fceee0c52e4e6877e639860327462c7874719e7
sign extension after doing the control byte comparison. Changing the bitmask object
to explicitly track only 16 bits (instead of 32) eliminates this, saving an instruction/cycle. This speeds up hit checking by up to 6% on Milan and up to 15% on CLX.
PiperOrigin-RevId: 572965182
Change-Id: Ifda0e3250d409266d6dcef89cba6ada91d879291
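
A simplified sketch of the idea with SSE2 intrinsics (not the exact Abseil code): _mm_movemask_epi8 produces only 16 meaningful bits, so keeping the mask in a 16-bit type lets the compiler drop the sign-extension work:

```cpp
#include <cstdint>
#include <emmintrin.h>  // SSE2

// Returns one bit per control byte that equals the 7-bit hash fragment h2.
// Storing the mask as uint16_t (only 16 lanes exist) avoids the extra
// sign-extension instruction a 32-bit representation required.
inline uint16_t MatchH2(__m128i ctrl, uint8_t h2) {
  const __m128i match = _mm_set1_epi8(static_cast<char>(h2));
  return static_cast<uint16_t>(
      _mm_movemask_epi8(_mm_cmpeq_epi8(match, ctrl)));
}
```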
PiperOrigin-RevId: 572575394
Change-Id: Ic1c5ac2423b1634e50c43bad6daa14e82a8f3e2c
The layering_check feature ensures that rules that include a header
explicitly depend on a rule that exports that header. Compiler support
is required; currently only Clang 16+ diagnoses layering_check failures.
The parse_headers feature ensures headers are self-contained by
compiling them with -fsyntax-only on supported compilers.
PiperOrigin-RevId: 572350144
Change-Id: I37297f761566d686d9dd58d318979d688b7e36d1
PiperOrigin-RevId: 571322393
Change-Id: I0e227b0075d3133ee28c8f766a1be7872c101176
PiperOrigin-RevId: 571084409
Change-Id: I4e6c98ac11f4cb40b65cc9484188faa6168718b4
including sanitizer mode checks.
Sanitizer mode can be used for canaries, so performance is still relevant. This change also makes the code more uniform.
PiperOrigin-RevId: 570438923
Change-Id: I62859160eb9323e6420680a43fd23e97e8a62389
propagation and improve performance.
Correctness:
- We use swap to implement copy assignment and move assignment, which means that allocator propagation in copy/move assignment depends on `propagate_on_container_swap` in addition to `propagate_on_container_copy_assignment`/`propagate_on_container_move_assignment` (a sketch of the resulting propagation behavior follows this entry).
- In swap, if `propagate_on_container_swap` is `false` and `get_allocator() != other.get_allocator()`, the behavior is undefined (https://en.cppreference.com/w/cpp/container/unordered_set/swap) - we should assert that this UB case isn't happening. For this reason, we also delete the NoPropagateOn_Swap test case in raw_hash_set_allocator_test.
Performance:
- Don't rely on swap so we don't have to do unnecessary copying into the moved-from sets.
- Don't use temp sets in move assignment.
- Default the move constructor of CommonFields.
- Avoid using exchange in raw_hash_set move constructor.
- In `raw_hash_set(raw_hash_set&& that, const allocator_type& a)` with unequal allocators and in move assignment with non-propagating unequal allocators, move set keys instead of copying them.
PiperOrigin-RevId: 570419290
Change-Id: I499e54f17d9cb0b0836601f5c06187d1f269a5b8
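
A self-contained sketch of the move-assignment propagation described above; the allocator is hypothetical, and only the trait names come from the standard:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <type_traits>
#include <utility>

#include "absl/container/flat_hash_set.h"
#include "absl/hash/hash.h"

// Minimal stateful allocator that opts in to move-assignment propagation.
template <typename T>
struct TaggedAlloc {
  using value_type = T;
  using propagate_on_container_move_assignment = std::true_type;

  int tag = 0;
  TaggedAlloc() = default;
  explicit TaggedAlloc(int t) : tag(t) {}
  template <typename U>
  TaggedAlloc(const TaggedAlloc<U>& other) : tag(other.tag) {}

  T* allocate(std::size_t n) { return std::allocator<T>().allocate(n); }
  void deallocate(T* p, std::size_t n) {
    std::allocator<T>().deallocate(p, n);
  }
  friend bool operator==(TaggedAlloc a, TaggedAlloc b) {
    return a.tag == b.tag;
  }
  friend bool operator!=(TaggedAlloc a, TaggedAlloc b) {
    return a.tag != b.tag;
  }
};

using Set = absl::flat_hash_set<int, absl::Hash<int>, std::equal_to<int>,
                                TaggedAlloc<int>>;

void Example() {
  Set a(0, absl::Hash<int>(), std::equal_to<int>(), TaggedAlloc<int>(1));
  Set b(0, absl::Hash<int>(), std::equal_to<int>(), TaggedAlloc<int>(2));
  a.insert(42);
  // Move assignment now consults propagate_on_container_move_assignment
  // directly (rather than routing through swap), so b adopts a's
  // allocator: b.get_allocator().tag == 1 afterwards.
  b = std::move(a);
}
```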
PiperOrigin-RevId: 570180405
Change-Id: If14b21a4d0df19546a47923a1f2a359b38fe6f93
We test for `ABSL_INTERNAL_HAS_RTTI` in `absl::container_internal::TypeName` before calling `typeid`.
PiperOrigin-RevId: 570101013
Change-Id: I1f2f9b2f475a6beae50d0b88718b17b296311155
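
A minimal sketch of the guard pattern, assuming only the macro from absl/base/config.h; not the actual TypeName implementation:

```cpp
#include <string>
#include <typeinfo>

#include "absl/base/config.h"

// Only touch typeid when the toolchain actually provides RTTI.
template <typename T>
std::string TypeNameOrFallback() {
#ifdef ABSL_INTERNAL_HAS_RTTI
  return typeid(T).name();  // Mangled name; fine for diagnostics.
#else
  return "<type name unavailable: RTTI disabled>";
#endif
}
```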
node_handle
PiperOrigin-RevId: 568997790
Change-Id: I9899ccc95eeb9c8b92d0dceec7e2fc4a2b1102c0
PiperOrigin-RevId: 568665135
Change-Id: I42ec9bc6cfe923777f7b60ea032c7b64428493c9
PiperOrigin-RevId: 568652465
Change-Id: I9f72a11cb514eaf694dae589a19dc139891e7af2
propagation (defined in test_allocator.h) and minimal alignment.
Also remove some extraneous value_types from typed tests. The motivation is to reduce btree_test compile time.
PiperOrigin-RevId: 567376572
Change-Id: I6ac6130b99faeadaedab8c2c7b05d5e23e77cc1e
raw_hash_set.
`SwisstableDebugEnabled()` is also true for release builds with hardening
enabled. To minimize the impact of these checks in those builds:
- use `ABSL_PREDICT_FALSE()` to provide a compiler hint for code layout
- use `ABSL_RAW_LOG()` with a format string to reduce code size and improve
the chances that the hot paths will be inlined (both techniques are shown in the sketch after this entry).
PiperOrigin-RevId: 567102494
Change-Id: I6734bd491d7b2e1fb9df0e86f4e29e6ad0a03102
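
A sketch of the pattern the two bullets describe, using the real macros but a hypothetical check:

```cpp
#include "absl/base/internal/raw_logging.h"  // ABSL_RAW_LOG
#include "absl/base/optimization.h"          // ABSL_PREDICT_FALSE

// Hypothetical hardened check, illustrating the code-layout and code-size
// tricks: the branch is marked cold, and the message is passed through a
// format string so the call site stays small enough to inline.
inline void AssertNonNull(const void* p) {
  if (ABSL_PREDICT_FALSE(p == nullptr)) {
    ABSL_RAW_LOG(FATAL, "invariant violated: %s", "unexpected null pointer");
  }
}
```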
move assignment.
PiperOrigin-RevId: 565730754
Change-Id: Id828847d32c812736669803c179351433dda4aa6
that can be shared between different container tests.
PiperOrigin-RevId: 565693736
Change-Id: I59af987e30da03a805ce59ff0fb7eeae3fc08293
This should enable binary size savings now, and further efficiency improvements once small buffer optimization lands.
PiperOrigin-RevId: 564741270
Change-Id: Icf204d88256243eb60464439a52dd589d7a559cb
read/write from value_types when we aren't allowed to memcpy them.
The motivation is to prevent a bug in small buffer optimization for swisstables.
PiperOrigin-RevId: 564726325
Change-Id: Id0df5d28d65c7586428001fcb266886988cd481e
This change makes b-tree sets conform to the std::set API of having const access through `iterator`s as well as `const_iterator`s.
This change can cause breakages for user code that depends on having mutable access to keys. If your code breaks, then there are a couple of options to fix the issue:
- If you are mutating a part of the key that can impact the sorted order, then this is a bug: you need to extract the key, mutate it, and then re-insert the mutated key.
- If you are mutating a part of the key that can't impact the sorted order, then you can potentially (a) refactor to use btree_map instead of btree_set and make the part of the key that doesn't impact the ordering be the mapped_type (a sketch of this option follows this entry), (b) change the part of the key that doesn't impact the ordering to be a `mutable` member of the key_type, or (c) refactor from btree_set<K> to btree_set<K*> (but note that this also permits mutable access to the part of the key that determines the ordering).
PiperOrigin-RevId: 563118156
Change-Id: I243558e74c43aa6655290099494b411d15298f4c
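
A sketch of option (a), with hypothetical names: the ordering-relevant part stays the key, and the freely mutable part becomes the mapped_type, which iterators still expose as non-const:

```cpp
#include <string>

#include "absl/container/btree_map.h"

// Before: a btree_set element's `note` field poked through the iterator.
// After this change that no longer compiles, since *it is const.
// Option (a): key on the ordering-relevant id; keep the note as the value.
void UpdateNote(absl::btree_map<int, std::string>& notes_by_id) {
  auto it = notes_by_id.find(42);
  if (it != notes_by_id.end()) {
    it->second = "updated";  // OK: mapped values can't affect sort order.
  }
}
```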
PiperOrigin-RevId: 562832827
Change-Id: If37f83e67b3b2ea350f74dd6bffae51ea5508f12
PiperOrigin-RevId: 561954737
Change-Id: I0e29a4f4523e4b3eafbd26d1d96f6e1c8deed957
There are a few cycles of overhead when transferring between GPR and Neon registers. We pay this cost for GroupAarch64Impl, largely because the speedup we get in Match() makes it profitable. After a Match call, if we do subsequent Group operations, we don't have to pay the full GPR <-> Neon cost, so it makes sense to do them with Neon instructions as well.
However, in iteration and find_first_non_full(), we do not do a prior Match(), so the Mask/Count EmptyOrDeleted calls pay the full GPR <-> Neon cost. We can avoid this by instead using the GPR versions of these functions from the portable implementation of Group. We slightly change the order of operations in these functions (which should be functionally a nop) in order to take advantage of Arm's free flexible second operand shifts with logical operations.
Iteration and Resize are roughly 8% and 12.6% faster respectively.
This is not profitable on x86 because there is much lower GPR <-> xmm register latency and we use a 16-bit wide Group size.
PiperOrigin-RevId: 561415183
Change-Id: I660b5bb84afedb05a12dcdf04d5b2e1514902760
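
A simplified rendition of the portable (GPR-based) trick referred to above, using the published swisstable control-byte encoding (empty = 0b10000000, deleted = 0b11111110, sentinel = 0b11111111); not the exact Abseil code:

```cpp
#include <cstdint>

// A byte is empty-or-deleted iff its high bit is set and its low bit is
// clear (the sentinel is the only special byte with the low bit set).
// `ctrl << 7` lifts each byte's low bit into that byte's high bit, so
// `ctrl & ~(ctrl << 7)` vetoes sentinels; on AArch64 the shift folds into
// the logical instruction as a free flexible second operand.
inline uint64_t MaskEmptyOrDeleted(uint64_t ctrl /* 8 control bytes */) {
  constexpr uint64_t kMsbs = 0x8080808080808080ULL;
  return (ctrl & ~(ctrl << 7)) & kMsbs;
}
```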
a public member of the API.
PiperOrigin-RevId: 561061460
Change-Id: Ib804d3d3cf427ebfc9e622db9915287eb8045e26
PiperOrigin-RevId: 557920808
Change-Id: I1eb0ca1ea9e9de542321fbc23d82218c5d449fbf
PiperOrigin-RevId: 557187112
Change-Id: I7c3e4be0ab5a7451173da7a0ded53a3d227bb093
PiperOrigin-RevId: 556065631
Change-Id: I7e69b1495946c42fab185a1bc23e9564bfbd5e41