path: root/absl/container/internal/raw_hash_set_test.cc
author Abseil Team <absl-team@google.com> 2021-03-10 07:07:23 -0800
committer Gennadiy Rozental <rogeeff@google.com> 2021-03-10 11:40:10 -0500
commit 2e9532cc6c701a8323d0cffb468999ab804095ab (patch)
tree 0e21cbaee13fc2af63971d541224a6a6df5fe20a /absl/container/internal/raw_hash_set_test.cc
parent ab21820d47e4f83875dda008b600514d3520fd35 (diff)
Export of internal Abseil changes
--
5ed5dc9e17c66c298ee31cefc941a46348d8ad34 by Abseil Team <absl-team@google.com>:

Fix typo.

PiperOrigin-RevId: 362040582

--
ac704b53a49becc42f77e4529d3952f8e7d18ce4 by Abseil Team <absl-team@google.com>:

Fix a typo in a comment.

PiperOrigin-RevId: 361576641

--
d20ccb27b7e9b53481e9192c1aae5202c06bfcb1 by Derek Mauro <dmauro@google.com>:

Remove the inline keyword from functions that aren't defined in the header.
This may fix #910.

PiperOrigin-RevId: 361551300

--
aed9ae1dffa7b228dcb6ffbeb2fe06a13970c72b by Laramie Leavitt <lar@google.com>:

Propagate nice/strict/naggy state on absl::MockingBitGen.

Allowing NiceMocks reduces the log spam for un-mocked calls, and it enables nicer setup with ON_CALL, so it is desirable to support it in absl::MockingBitGen.

Internally, gmock tracks object "strictness" levels using an internal API; in order to achieve the same results we detect when the MockingBitGen is wrapped in a Nice/Naggy/Strict wrapper and wrap the internal implementation MockFunction in the same type. This is achieved by providing overloads to the Call() function and passing the mock object type down into its own RegisterMock call, where a compile-time check verifies the state and creates the appropriate mock function.

PiperOrigin-RevId: 361233484

--
96186023fabd13d01d32d60d9c7ac4ead1aeb989 by Abseil Team <absl-team@google.com>:

Ensure that trivial types are passed by value rather than by reference.

PiperOrigin-RevId: 361217450

--
e1135944835d27f77e8119b8166d8fb6aa25f906 by Evan Brown <ezb@google.com>:

Internal change.

PiperOrigin-RevId: 361215882

--
583fe6c94c1c2ef757ef6e78292a15fbe4030e35 by Evan Brown <ezb@google.com>:

Increase the minimum number of slots per node from 3 to 4.

We also rename kNodeValues (and related names) to kNodeSlots to make it clear that they are about the number of slots per node rather than the number of values per node. kMinNodeValues keeps the same name because it's actually about the number of values rather than the number of slots.

Motivation: I think the expected number of values per node, assuming random insertion order, is the average of the maximum and minimum numbers of values per node (kNodeSlots and kMinNodeValues). For large and/or even kNodeSlots, this is ~75% of kNodeSlots, but for kNodeSlots=3, this is ~67% of kNodeSlots. kMinNodeValues (which corresponds to worst-case occupancy) is ~33% of kNodeSlots when kNodeSlots=3, compared to 50% for even kNodeSlots. This results in higher memory overhead per value, and since this case (kNodeSlots=3) is used when values are large, it seems worth fixing.

PiperOrigin-RevId: 361171495
GitOrigin-RevId: 5ed5dc9e17c66c298ee31cefc941a46348d8ad34
Change-Id: I8e33b5df1f987a77112093821085c410185ab51a
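The "remove the inline keyword" change above relies on a general C++ rule: an inline function must be defined in every translation unit that uses it. A minimal sketch of the failure mode being avoided, using hypothetical file and function names rather than anything from the commit:

    // widget.h (hypothetical)
    inline int WidgetCount();          // declared inline, but defined elsewhere

    // widget.cc (hypothetical)
    int WidgetCount() { return 3; }    // only definition; because the function is
                                       // inline, the compiler may not emit an
                                       // out-of-line symbol for other TUs

    // user.cc (hypothetical)
    #include "widget.h"
    int n = WidgetCount();             // no definition visible in this TU: ill-formed
                                       // for an inline function, and typically an
                                       // unresolved reference at link time

Dropping inline from declarations whose definitions stay in a .cc file (or moving the definition into the header) avoids this.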
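The NiceMock support described above can be exercised roughly as follows. This is a minimal usage sketch, assuming the headers absl/random/mocking_bit_gen.h and absl/random/mock_distributions.h and the MockUniform/Call pattern from Abseil's random-mocking documentation; it is not a test taken from the commit.

    #include "gmock/gmock.h"
    #include "gtest/gtest.h"
    #include "absl/random/mock_distributions.h"
    #include "absl/random/mocking_bit_gen.h"
    #include "absl/random/random.h"

    TEST(MockingBitGenTest, NiceMockAllowsOnCallSetup) {
      // Wrapping the bit generator in NiceMock is what this change enables:
      // un-mocked distribution calls no longer produce "uninteresting mock
      // function call" log spam.
      ::testing::NiceMock<absl::MockingBitGen> gen;

      // ON_CALL-style setup: absl::Uniform<int>(gen, 1, 1000) returns 20.
      ON_CALL(absl::MockUniform<int>(), Call(gen, 1, 1000))
          .WillByDefault(::testing::Return(20));

      EXPECT_EQ(absl::Uniform<int>(gen, 1, 1000), 20);

      // A distribution that was never mocked still produces a value, just
      // without a warning from gmock.
      double d = absl::Uniform<double>(gen, 0.0, 1.0);
      EXPECT_GE(d, 0.0);
      EXPECT_LT(d, 1.0);
    }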
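The occupancy argument in the kNodeSlots change above can be checked with a few lines of arithmetic. This is a back-of-the-envelope sketch that assumes kMinNodeValues is kNodeSlots / 2 rounded down (which is what the quoted 33%/50% figures imply); it is not code from the b-tree implementation.

    #include <cstdio>

    int main() {
      const int slot_counts[] = {3, 4};  // old minimum vs. new minimum slots per node
      for (int kNodeSlots : slot_counts) {
        // Worst-case (minimum) values per node, matching the percentages quoted above.
        const int kMinNodeValues = kNodeSlots / 2;
        // Expected values per node under random insertion order: the average of the
        // best case (kNodeSlots) and the worst case (kMinNodeValues).
        const double expected = (kNodeSlots + kMinNodeValues) / 2.0;
        std::printf("kNodeSlots=%d: min=%d (%.0f%%), expected=%.1f (%.0f%%)\n",
                    kNodeSlots, kMinNodeValues,
                    100.0 * kMinNodeValues / kNodeSlots,
                    expected, 100.0 * expected / kNodeSlots);
      }
      // Prints:
      //   kNodeSlots=3: min=1 (33%), expected=2.0 (67%)
      //   kNodeSlots=4: min=2 (50%), expected=3.0 (75%)
      // i.e. 3-slot nodes sit at ~67% expected occupancy versus ~75% for 4-slot
      // nodes, which is the per-value memory overhead gap the commit closes.
      return 0;
    }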
Diffstat (limited to 'absl/container/internal/raw_hash_set_test.cc')
0 files changed, 0 insertions, 0 deletions