path: root/absl/container
Commit message | Author | Date
* Make `FlatHashMapPolicy` return `std::true_type` for relocatable objects. (Abseil Team, 2023-11-20)
  This reduces produced binary size and can trigger even more optimizations in the future.
  PiperOrigin-RevId: 584136517
  Change-Id: I3854833799f88f28b755ec53132925f0c3d468ab
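The mechanism can be sketched as follows. This is a hypothetical policy, not the actual `FlatHashMapPolicy`; the idea is that `transfer()` reports, via its return type, whether it relocated the slot with `memcpy`, so the table can select smaller memcpy-based code paths at compile time.

```cpp
#include <cstring>
#include <new>
#include <type_traits>
#include <utility>

// Hypothetical policy sketch: transfer() returns std::true_type when the slot
// was relocated with memcpy, std::false_type otherwise. The hash table can
// branch on this type at compile time.
template <class Slot>
struct SketchPolicy {
  static auto transfer(Slot* dst, Slot* src) {
    if constexpr (std::is_trivially_copyable_v<Slot>) {
      std::memcpy(static_cast<void*>(dst), src, sizeof(Slot));
      return std::true_type{};
    } else {
      ::new (static_cast<void*>(dst)) Slot(std::move(*src));
      src->~Slot();
      return std::false_type{};
    }
  }
};
```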
* Partial roll forward of reentrant validation with the validation itself disabled. (Evan Brown, 2023-11-13)
  This will make it easier to roll back and forwards in the future (if needed) without causing merge conflicts in unrelated code.
  PiperOrigin-RevId: 582059046
  Change-Id: I66dc6527e7a0b351367b7a391c2d653fe793143f
* Roll back due to leak sanitizer reports. (Aaron Jacobs, 2023-11-08)
  PiperOrigin-RevId: 580726428
  Change-Id: I12b0f22c2084aef90bcca67536220a6bb550b57e
* Add control()/slot() functions to iterator/const_iterator. (Evan Brown, 2023-11-07)
  This allows for avoiding e.g. it.inner_.slot_ on const iterators. Also, refactor HashtableDebugAccess::AllocatedByteSize to use regular iteration instead of looking directly at control/slot_array of the table in order to support small object optimization.
  PiperOrigin-RevId: 580194253
  Change-Id: I64cd69287834ee5c7a8daf867c532258806bfb7b
* Update comments to make it explicit that moving a flat_hash_{set,map} can cause pointers to elements to be invalidated. (Evan Brown, 2023-11-02)
  PiperOrigin-RevId: 578920671
  Change-Id: Ica40db48d5565b606e5e5f501c1305612b193d4d
* Add sanitizer mode validation for use of references to swisstable elements that may have been invalidated by a container move. (Evan Brown, 2023-11-01)
  PiperOrigin-RevId: 578649798
  Change-Id: Icfee98d3a0399b545ec6ec59c5b52ae5e006218b
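A minimal usage sketch of the misuse that this validation (and the comment update above) targets; the container contents are made up for illustration.

```cpp
#include <utility>

#include "absl/container/flat_hash_set.h"

void ReferenceAfterMove() {
  absl::flat_hash_set<int> a = {1, 2, 3};
  const int& ref = *a.find(2);           // Reference into a's backing storage.
  absl::flat_hash_set<int> b = std::move(a);
  // `ref` may now be invalid: moving the table is allowed to invalidate
  // pointers and references to elements. Look the element up in `b` instead.
  const int& ok = *b.find(2);
  (void)ref;
  (void)ok;
}
```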
* Roll forward: Add sanitizer mode checks that element constructors/destructors don't make reentrant calls to raw_hash_set member functions. (Evan Brown, 2023-10-30)
  PiperOrigin-RevId: 577877803
  Change-Id: I254c589b00cadec6ff95dfd60a8a38ab303c1af5
* Rollback: Add sanitizer mode checks that element constructors/destructors don't make reentrant calls to raw_hash_set member functions. (Evan Brown, 2023-10-17)
  PiperOrigin-RevId: 574232718
  Change-Id: I8ef25fec00b76ee5fb9424e7614ca55edd6ba81b
* Add sanitizer mode checks that element constructors/destructors don't make reentrant calls to raw_hash_set member functions. (Evan Brown, 2023-10-16)
  PiperOrigin-RevId: 573897598
  Change-Id: If40c23ac3cd9fff315ee18774e27c480cbca3a81
* Add iterator invalidation checking for when the hashtable is moved. (Evan Brown, 2023-10-16)
  Motivation: once we enable small object optimization in swisstable, iterators can be invalidated when the table is moved.
  PiperOrigin-RevId: 573884505
  Change-Id: I4278129829143d3747dfd0ef0ff92f321c2633dc
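For illustration, the kind of misuse this check is meant to flag in sanitizer builds (example code, not from the test suite):

```cpp
#include <utility>

#include "absl/container/flat_hash_map.h"

void IteratorAfterMove() {
  absl::flat_hash_map<int, int> a = {{1, 10}, {2, 20}};
  auto it = a.begin();                    // Iterator into `a`.
  absl::flat_hash_map<int, int> b = std::move(a);
  // Using `it` here is invalid once the table's slots can be relocated by a
  // move (e.g. with small object optimization); re-acquire b.begin() instead.
  (void)it;
  (void)b;
}
```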
* Add missing headers in raw_hash_map.h. (Evan Brown, 2023-10-12)
  PiperOrigin-RevId: 572979536
  Change-Id: I4fceee0c52e4e6877e639860327462c7874719e7
* The current implementation of control byte checking on x86 has an unnecessary sign extension after doing the control byte comparison. (Abseil Team, 2023-10-12)
  Changing the bitmask object to explicitly track only 16 bits (instead of 32) eliminates this, saving an instruction/cycle. This speeds up hit checking by up to 6% on Milan and up to 15% on CLX.
  PiperOrigin-RevId: 572965182
  Change-Id: Ifda0e3250d409266d6dcef89cba6ada91d879291
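As a rough illustration (not the actual Abseil code): the match mask on x86 comes from `_mm_movemask_epi8`, which only populates 16 bits, so keeping the result in a 16-bit mask type lets the compiler drop the widening before later bit operations.

```cpp
#include <cstdint>
#include <emmintrin.h>  // SSE2 intrinsics.

// Illustrative sketch: one mask bit per control byte that matches the hash.
inline uint16_t MatchMask(__m128i ctrl, uint8_t h2) {
  const __m128i needle = _mm_set1_epi8(static_cast<char>(h2));
  // _mm_movemask_epi8 returns an int whose low 16 bits hold the mask; storing
  // it as uint16_t avoids carrying a sign-extended 32-bit value around.
  return static_cast<uint16_t>(_mm_movemask_epi8(_mm_cmpeq_epi8(needle, ctrl)));
}
```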
* Bazel: Enable the header_modules feature (Derek Mauro, 2023-10-11)
  PiperOrigin-RevId: 572575394
  Change-Id: Ic1c5ac2423b1634e50c43bad6daa14e82a8f3e2c
* Bazel: Support layering_check and parse_headers (Derek Mauro, 2023-10-10)
  The layering_check feature ensures that rules that include a header explicitly depend on a rule that exports that header. Compiler support is required, and currently only Clang 16+ diagnoses layering_check failures.
  The parse_headers feature ensures headers are self-contained by compiling them with -fsyntax-only on supported compilers.
  PiperOrigin-RevId: 572350144
  Change-Id: I37297f761566d686d9dd58d318979d688b7e36d1
* Correct the grammar of an IWYU pragma. (Abseil Team, 2023-10-06)
  PiperOrigin-RevId: 571322393
  Change-Id: I0e227b0075d3133ee28c8f766a1be7872c101176
* Fix a small typo in the docs. (Abseil Team, 2023-10-05)
  PiperOrigin-RevId: 571084409
  Change-Id: I4e6c98ac11f4cb40b65cc9484188faa6168718b4
* Use ABSL_RAW_LOG and ABSL_PREDICT_* for all debug checks in swisstable, including sanitizer mode checks. (Evan Brown, 2023-10-03)
  Sanitizer mode can be used for canaries, so performance is still relevant. This change also makes the code more uniform.
  PiperOrigin-RevId: 570438923
  Change-Id: I62859160eb9323e6420680a43fd23e97e8a62389
* Refactor swisstable copy/move assignment to fix issues with allocator propagation and improve performance. (Evan Brown, 2023-10-03)
  Correctness:
  - We use swap to implement copy assignment and move assignment, which means that allocator propagation in copy/move assignment depends on `propagate_on_container_swap` in addition to `propagate_on_container_copy_assignment`/`propagate_on_container_move_assignment`.
  - In swap, if `propagate_on_container_swap` is `false` and `get_allocator() != other.get_allocator()`, the behavior is undefined (https://en.cppreference.com/w/cpp/container/unordered_set/swap) - we should assert that this UB case isn't happening. For this reason, we also delete the NoPropagateOn_Swap test case in raw_hash_set_allocator_test.
  Performance:
  - Don't rely on swap so we don't have to do unnecessary copying into the moved-from sets.
  - Don't use temp sets in move assignment.
  - Default the move constructor of CommonFields.
  - Avoid using exchange in the raw_hash_set move constructor.
  - In `raw_hash_set(raw_hash_set&& that, const allocator_type& a)` with unequal allocators, and in move assignment with non-propagating unequal allocators, move set keys instead of copying them.
  PiperOrigin-RevId: 570419290
  Change-Id: I499e54f17d9cb0b0836601f5c06187d1f269a5b8
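The undefined-behavior case called out above can be illustrated with a small helper (a sketch, not Abseil code) that asserts the swap precondition from the standard:

```cpp
#include <cassert>
#include <memory>

// Sketch: if the allocator does not propagate on swap and the two containers'
// allocators compare unequal, swapping them is undefined behavior, so assert.
template <class Container>
void CheckedSwap(Container& a, Container& b) {
  using Alloc = typename Container::allocator_type;
  if (!std::allocator_traits<Alloc>::propagate_on_container_swap::value) {
    assert(a.get_allocator() == b.get_allocator() &&
           "swap with unequal, non-propagating allocators is UB");
  }
  a.swap(b);
}
```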
* Minor build rule changes. (Evan Brown, 2023-10-02)
  PiperOrigin-RevId: 570180405
  Change-Id: If14b21a4d0df19546a47923a1f2a359b38fe6f93
* Re-submit with a fix for platforms without RTTI. (Abseil Team, 2023-10-02)
  We test for `ABSL_INTERNAL_HAS_RTTI` in `absl::container_internal::TypeName` before calling `typeid`.
  PiperOrigin-RevId: 570101013
  Change-Id: I1f2f9b2f475a6beae50d0b88718b17b296311155
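A hedged sketch of the guard described above; the macro name follows the commit message, but the function body is illustrative rather than the actual `absl::container_internal::TypeName`:

```cpp
#include <string>
#include <typeinfo>

template <typename T>
std::string TypeNameSketch() {
#ifdef ABSL_INTERNAL_HAS_RTTI
  return typeid(T).name();  // Mangled name on most toolchains.
#else
  return "<unknown>";       // No RTTI on this platform: degrade gracefully.
#endif
}
```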
* Export common.h from raw_hash_set.h to prevent IWYU from linting when using node_handle. (Abseil Team, 2023-09-27)
  PiperOrigin-RevId: 568997790
  Change-Id: I9899ccc95eeb9c8b92d0dceec7e2fc4a2b1102c0
* Add an internal wrapper for `abi::__cxa_demangle()`. (Abseil Team, 2023-09-26)
  PiperOrigin-RevId: 568665135
  Change-Id: I42ec9bc6cfe923777f7b60ea032c7b64428493c9
* Add an internal wrapper for `abi::__cxa_demangle()`. (Abseil Team, 2023-09-26)
  PiperOrigin-RevId: 568652465
  Change-Id: I9f72a11cb514eaf694dae589a19dc139891e7af2
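For context, a minimal wrapper around `abi::__cxa_demangle()` looks roughly like this (illustrative, not the Abseil-internal wrapper); the function allocates the result with `malloc` and reports failure through `status`:

```cpp
#include <cxxabi.h>

#include <cstdlib>
#include <string>

std::string DemangleSketch(const char* mangled) {
  int status = 0;
  char* demangled = abi::__cxa_demangle(mangled, /*output_buffer=*/nullptr,
                                        /*length=*/nullptr, &status);
  if (status != 0 || demangled == nullptr) return mangled;  // Fall back.
  std::string result = demangled;
  std::free(demangled);  // The caller owns the malloc'd buffer.
  return result;
}
```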
* Replace BtreeAllocatorTest with individual test cases for copy/move/swap propagation (defined in test_allocator.h) and minimal alignment. (Evan Brown, 2023-09-21)
  Also remove some extraneous value_types from typed tests. The motivation is to reduce btree_test compile time.
  PiperOrigin-RevId: 567376572
  Change-Id: I6ac6130b99faeadaedab8c2c7b05d5e23e77cc1e
* Use ABSL_PREDICT_FALSE and ABSL_RAW_LOG for shared safety checks in raw_hash_set. (Daniel Cheng, 2023-09-20)
  `SwisstableDebugEnabled()` is also true for release builds with hardening enabled. To minimize their impact in those builds:
  - use `ABSL_PREDICT_FALSE()` to provide a compiler hint for code layout
  - use `ABSL_RAW_LOG()` with a format string to reduce code size and improve the chances that the hot paths will be inlined.
  PiperOrigin-RevId: 567102494
  Change-Id: I6734bd491d7b2e1fb9df0e86f4e29e6ad0a03102
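The pattern being applied looks roughly like this; the helper below is hypothetical, and only the two macros are real Abseil APIs:

```cpp
#include "absl/base/internal/raw_logging.h"
#include "absl/base/optimization.h"

// Hypothetical check helper illustrating the pattern: the failure branch is
// marked unlikely so the hot path stays compact, and the message is a format
// string so the call site stays small.
inline void CheckNonNull(const void* p, const char* what) {
  if (ABSL_PREDICT_FALSE(p == nullptr)) {
    ABSL_RAW_LOG(FATAL, "%s must not be null", what);
  }
}
```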
* Fix a bug in which we used propagate_on_container_copy_assignment in btree move assignment. (Evan Brown, 2023-09-15)
  PiperOrigin-RevId: 565730754
  Change-Id: Id828847d32c812736669803c179351433dda4aa6
* Move CountingAllocator into test_allocator.h and add some other allocators that can be shared between different container tests. (Evan Brown, 2023-09-15)
  PiperOrigin-RevId: 565693736
  Change-Id: I59af987e30da03a805ce59ff0fb7eeae3fc08293
* Make PolicyTraits::transfer_uses_memcpy() true for node_hash_* tables. (Evan Brown, 2023-09-12)
  This should enable binary size savings for now and more efficiency improvements with small buffer optimization.
  PiperOrigin-RevId: 564741270
  Change-Id: Icf204d88256243eb60464439a52dd589d7a559cb
* Add a flat_hash_set_test verifying that we use value_type member functions to read/write value_types when we aren't allowed to memcpy them. (Evan Brown, 2023-09-12)
  The motivation is to prevent a bug in small buffer optimization for swisstables.
  PiperOrigin-RevId: 564726325
  Change-Id: Id0df5d28d65c7586428001fcb266886988cd481e
* Fix an issue in which b-tree set iterators allow for mutable access to keys. (Evan Brown, 2023-09-06)
  This change makes b-tree sets conform to the std::set API of having const access through `iterator`s as well as `const_iterator`s. This change can cause breakages for user code that depends on having mutable access to keys. If your code breaks, then there are a couple of options to fix the issue:
  - If you are mutating a part of the key that can impact the sorted order, then this is a bug and you need to extract the key, mutate it, and then re-insert the mutated key.
  - If you are mutating a part of the key that can't impact the sorted order, then you can potentially (a) refactor to use btree_map instead of btree_set and make the part of the key that doesn't impact the ordering be the mapped_type, (b) change the part of the key that doesn't impact the ordering to be a `mutable` member of the key_type, or (c) refactor from btree_set<K> to btree_set<K*> (but this also permits mutable access to the part of the key that determines the ordering).
  PiperOrigin-RevId: 563118156
  Change-Id: I243558e74c43aa6655290099494b411d15298f4c
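The user-visible effect, and option (a) from the list above, in a short example (illustrative code, not from the commit):

```cpp
#include "absl/container/btree_map.h"
#include "absl/container/btree_set.h"

void BtreeSetConstKeys() {
  absl::btree_set<int> s = {1, 2, 3};
  // *s.begin() = 4;  // No longer compiles: iterator now gives const access.

  // Option (a): if part of the "key" is mutable state that does not affect
  // ordering, move it into the mapped_type of a btree_map instead.
  absl::btree_map<int, int> m = {{1, 10}};
  m.begin()->second = 20;  // Mapped values stay mutable.
}
```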
* Remove the unused LowerBoundAllocatedByteSize function. (Evan Brown, 2023-09-05)
  PiperOrigin-RevId: 562832827
  Change-Id: If37f83e67b3b2ea350f74dd6bffae51ea5508f12
* Add a comment about a more efficient implementation of btree range erase. (Evan Brown, 2023-09-01)
  PiperOrigin-RevId: 561954737
  Change-Id: I0e29a4f4523e4b3eafbd26d1d96f6e1c8deed957
* Optimize Resize and Iteration on Arm. (Connal de Souza, 2023-08-30)
  There are a few cycles of overhead when transferring between GPR and Neon registers. We pay this cost for GroupAarch64Impl, largely because the speedup we get in Match() makes it profitable. After a Match call, if we do subsequent Group operations, we don't have to pay the full GPR <-> Neon cost, so it makes sense to do them with Neon instructions as well.
  However, in iteration and find_first_non_full(), we do not do a prior Match(), so the Mask/Count EmptyOrDeleted calls pay the full GPR <-> Neon cost. We can avoid this by using the GPR versions of the functions in the portable implementation of Group instead. We slightly change the order of operations in these functions (should be functionally a nop) in order to take advantage of Arm's free flexible second-operand shifts with logical operations.
  Iteration and Resize are roughly 8% and 12.6% faster, respectively. This is not profitable on x86 because there is much lower GPR <-> xmm register latency and we use a 16-bit wide Group size.
  PiperOrigin-RevId: 561415183
  Change-Id: I660b5bb84afedb05a12dcdf04d5b2e1514902760
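To illustrate what a "GPR version" of such a group operation looks like, here is a sketch of finding empty-or-deleted control bytes with plain 64-bit integer arithmetic. The constants follow the usual swisstable control-byte encoding, but the exact formula in Abseil's portable Group may differ.

```cpp
#include <cstdint>

// Sketch: 8 control bytes packed little-endian into one uint64_t.
// Encoding assumed: empty = 0x80, deleted = 0xFE, sentinel = 0xFF,
// and full slots have the high bit clear.
inline uint64_t MaskEmptyOrDeleted(uint64_t ctrl) {
  constexpr uint64_t kMsbs = 0x8080808080808080ULL;
  // Empty and deleted have the high bit set and the low bit clear; shifting
  // the low bit up by 7 and masking with the MSBs excludes the sentinel.
  return ctrl & ~(ctrl << 7) & kMsbs;
}
```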
* Make raw_hash_set::destroy_slots no longer public. (Evan Brown, 2023-08-29)
  It was never meant to be a public member of the API.
  PiperOrigin-RevId: 561061460
  Change-Id: Ib804d3d3cf427ebfc9e622db9915287eb8045e26
* Remove the has_element function and use FindElement instead. (Evan Brown, 2023-08-17)
  PiperOrigin-RevId: 557920808
  Change-Id: I1eb0ca1ea9e9de542321fbc23d82218c5d449fbf
* Update an old comment that refers to obsolete types. (Evan Brown, 2023-08-15)
  PiperOrigin-RevId: 557187112
  Change-Id: I7c3e4be0ab5a7451173da7a0ded53a3d227bb093
* Add missing includes in raw_hash_set.h. (Evan Brown, 2023-08-11)
  PiperOrigin-RevId: 556065631
  Change-Id: I7e69b1495946c42fab185a1bc23e9564bfbd5e41
* Include what you spell (Dmitri Gribenko, 2023-08-11)
  PiperOrigin-RevId: 555894810
  Change-Id: I349c94e7c6e7ba1dbd817aa8e4340c1dada84654
* Store infoz on the heap instead of inline and store it only when we are sampling the current allocation. (Evan Brown, 2023-08-04)
  PiperOrigin-RevId: 553847957
  Change-Id: Idd131d0362bf36bd22d9bd20df54bd9ae50f0e28
* Optimize Swissmap Match on Arm. (Connal de Souza, 2023-08-04)
  Currently we require only a single bit to be set in each abstract bit for iterable bitmasks. However, in most cases, where we have a single match or no matches in a group, iteration is not needed. We move the masking to the iteration function instead of having it as a requirement for iterable Bitmask construction. This is 4-8% faster for Find and Insert operations.
  This can hurt performance if we need to iterate many times (there are many matches in the same Group), however this is unlikely, even if we assume the table is completely full:
  - If there are 0 or 1 matches in a group, or the first match is the correct item we are looking for, we save 1 instruction/cycle (most cases).
  - If there are 2 matches in a group, and the first is a false positive, this is neutral (< 3%).
  - If there are more than 2 matches in a group and the first two are false positives, this can be slower by 1 cycle/instruction per additional iteration (< 0.1%).
  No change to x86.
  PiperOrigin-RevId: 553831814
  Change-Id: I08620899847eaf0086da989d829a1029ea24173a
* Update the comment for capacity_ to mention recent experiments to compress the field and store it together with size_. (Evan Brown, 2023-08-03)
  PiperOrigin-RevId: 553499768
  Change-Id: Ia6eec6d580475a2b76a2415bfb35bcc08131ae34
* raw_hash_set_test: Expect tsan to catch heap-use-after-free on iterators invalidated by rehashing. (Dino Radakovic, 2023-08-01)
  PiperOrigin-RevId: 552901078
  Change-Id: I137d01fe87b1bbf591b400305f6f7919982fc1c9
* InlinedVector: Disable CFI checking on the const GetInlinedData() path as well. (Derek Mauro, 2023-08-01)
  Some empty cases can trigger this. See also: https://clang.llvm.org/docs/ControlFlowIntegrity.html#bad-cast-checking.
  PiperOrigin-RevId: 552846765
  Change-Id: I6adb3c0c73efec841ffe8fdac4342f641c68ddbe
* raw_hash_set_test: Match lowercase "invalid iterator" in death tests (Dino Radakovic, 2023-07-31)
  PiperOrigin-RevId: 552520562
  Change-Id: I5d311871afbc2906894c3b754a503a6abace8ceb
* Refactor raw_hash_set deallocation to pass CommonFields instead of passing the results of a bunch of accessors of CommonFields. (Evan Brown, 2023-07-27)
  Motivation: this makes it easier to refactor CommonFields to be smaller.
  PiperOrigin-RevId: 551616928
  Change-Id: I3710443fb156537d716944584bea02f945559e99
* InlinedVector: Disable CFI checking during the reinterpret_cast on the heap allocation path. (Derek Mauro, 2023-07-27)
  The cast occurs before the memory is initialized. See also: https://clang.llvm.org/docs/ControlFlowIntegrity.html#bad-cast-checking.
  PiperOrigin-RevId: 551542366
  Change-Id: Id5834892c36a5cb8ec095bcfee3e9e31f20c48ae
* Change the API constraints of erase(const_iterator, const_iterator) so that calling erase(begin(), end()) resets reserved growth. (Evan Brown, 2023-07-26)
  PiperOrigin-RevId: 551248712
  Change-Id: I34755c63e3ee40da4ba7047e0d24eec567d28173
* InlinedVector: Disable CFI checking on GetInlinedData() (Derek Mauro, 2023-07-26)
  GetInlinedDataUninitialized() is removed. Just use GetInlinedData() in all cases instead. GetInlinedData() is sometimes used to return uninitialized memory. In these cases it is immediately constructed. This is a followup to 511ad64. See also: https://clang.llvm.org/docs/ControlFlowIntegrity.html#bad-cast-checking.
  PiperOrigin-RevId: 551205766
  Change-Id: I4ddb45e29a723ccf6fc7dc203e762f4ad559fc83
* InlinedVector: Fix control-flow-integrity warning when using a class with a vtable. (Derek Mauro, 2023-07-25)
  The code is getting the pointer, then constructing it on the next line. Using reinterpret_cast on this pointer is legal according to https://clang.llvm.org/docs/ControlFlowIntegrity.html#bad-cast-checking, but it flags it anyway. The docs say it might be necessary for `allocate()`-type APIs, and recommend adding them to an ignorelist.
  Also note that std::addressof is removed. It is unnecessary since inlined_data is a char array.
  PiperOrigin-RevId: 550972834
  Change-Id: Ib224cec330bb6bcb770296de6c91881f404ef531
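These InlinedVector changes all follow the same shape: a cast or accessor that is legal but runs before the object is constructed gets excluded from CFI instrumentation. A hedged sketch using Clang's attribute spelling (Abseil's actual macro and ignorelist setup may differ, and the storage class below is hypothetical):

```cpp
#include <cstddef>

#if defined(__clang__)
#define NO_SANITIZE_CFI __attribute__((no_sanitize("cfi")))
#else
#define NO_SANITIZE_CFI
#endif

// Hypothetical inlined-storage class for illustration only.
template <class T, std::size_t N>
class InlinedStorageSketch {
 public:
  // The cast may happen before any T has been constructed in the buffer, so
  // CFI's bad-cast check is suppressed for this accessor.
  NO_SANITIZE_CFI T* GetInlinedData() {
    return reinterpret_cast<T*>(inlined_);
  }

 private:
  alignas(T) unsigned char inlined_[sizeof(T) * N];
};
```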
* Add a special case for erase(begin(), end()) to reset the control bytes. (Evan Brown, 2023-07-20)
  The motivation is to avoid potentially expanding the table unnecessarily later. Note: I prefer doing this as a special case in erase(iterator, iterator) rather than special casing erase(iterator) for size==1 because IIUC that changes the time complexity of erase(iterator) from O(1) to O(N) and in pathological cases, it could change loops from O(N) to O(N^2).
  PiperOrigin-RevId: 549661855
  Change-Id: I8603324260f51a98809db32f840ff09f25cf2481
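A usage sketch of why the special case matters (illustrative numbers): without it, a full-range erase leaves tombstones behind, which can push later inserts into an unnecessary rehash or growth.

```cpp
#include "absl/container/flat_hash_map.h"

void EraseAllThenReuse() {
  absl::flat_hash_map<int, int> m;
  for (int i = 0; i < 1000; ++i) m[i] = i;

  // Full-range erase is special-cased: control bytes are reset (and reserved
  // growth too, per the 2023-07-26 change above), so the table can be reused
  // without the tombstone bookkeeping that per-element erasure leaves behind.
  m.erase(m.begin(), m.end());

  for (int i = 0; i < 1000; ++i) m[i] = -i;  // No unnecessary growth expected.
}
```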