| field | value | date |
|---|---|---|
| author | Abseil Team <absl-team@google.com> | 2020-03-03 11:22:10 -0800 |
| committer | Andy Soffer <asoffer@google.com> | 2020-03-03 17:32:55 -0500 |
| commit | b19ba96766db08b1f32605cb4424a0e7ea0c7584 (patch) | |
| tree | c4ba295b067b000b9d84410ec81e0095715641a5 /absl/debugging | |
| parent | 06f0e767d13d4d68071c4fc51e25724e0fc8bc74 (diff) | |
Export of internal Abseil changes
--
a3e58c1870a9626039f4d178d2d599319bd9f8a8 by Matt Kulukundis <kfm@google.com>:
Allow MakeCordFromExternal to take a zero arg releaser.
PiperOrigin-RevId: 298650274
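For context, a minimal sketch of what this enables (the wrapper function and buffer-ownership scheme are illustrative, not from the commit): the releaser passed to absl::MakeCordFromExternal may now be a zero-argument callable instead of one taking the data as an absl::string_view.

```cpp
#include "absl/strings/cord.h"
#include "absl/strings/string_view.h"

// Wrap an externally owned buffer in a Cord without copying it. The
// zero-arg lambda runs once the Cord no longer references the data.
absl::Cord WrapBuffer(char* data, size_t size) {
  return absl::MakeCordFromExternal(
      absl::string_view(data, size),
      [data] { delete[] data; });  // zero-arg releaser, allowed by this change
}
```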
--
01897c4a9bb99f3dc329a794019498ad345ddebd by Samuel Benzaquen <sbenza@google.com>:
Reduce library bloat for absl::Flag by moving the definition of base virtual functions to a .cc file.
This removes duplicate symbols from user translation units and, as a side effect, moves the vtable definition as well (see "key function").
PiperOrigin-RevId: 298617920
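As a rough illustration of the technique (the class and file names below are invented for the sketch and are not the actual absl::Flag internals): defining one virtual member out of line in a single .cc file gives the class a key function, so the vtable and related definitions are emitted once there instead of weakly in every translation unit that includes the header.

```cpp
// flag_base.h
class FlagBase {
 public:
  virtual ~FlagBase();                       // declared here, defined in the .cc
  virtual void Parse(const char* text) = 0;  // pure virtual interface
};

// flag_base.cc
FlagBase::~FlagBase() = default;             // key function: vtable emitted once, here
```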
--
190f0d3782c63aed01046886d7fbc1be5bca2de9 by Derek Mauro <dmauro@google.com>:
Import GitHub #596: Unbreak stacktrace code for UWP apps
PiperOrigin-RevId: 298600834
--
cd5cf6f8c87b35b85a9584e94da2a99057345b73 by Gennadiy Rozental <rogeeff@google.com>:
Use a union of a heap-allocated pointer, a one-word atomic, and a two-word atomic to represent flag values.
Any type T that is trivially copyable with sizeof(T) <= 8 will be stored in an atomic int64_t.
Any type T that is trivially copyable with 8 < sizeof(T) <= 16 will be stored in an atomic AlignedTwoWords.
We also introduce a value storage type to distinguish these cases.
PiperOrigin-RevId: 298497200
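A minimal sketch of the selection rule described above, with invented names (the real code lives in absl/flags/internal and differs in detail):

```cpp
#include <cstdint>
#include <type_traits>

enum class FlagValueStorageKind { kHeapAllocated, kOneWordAtomic, kTwoWordsAtomic };

// Pick the storage kind for a flag value type T, following the size rules
// quoted in the change description.
template <typename T>
constexpr FlagValueStorageKind StorageKind() {
  return !std::is_trivially_copyable<T>::value ? FlagValueStorageKind::kHeapAllocated
         : sizeof(T) <= 8                      ? FlagValueStorageKind::kOneWordAtomic
         : sizeof(T) <= 16                     ? FlagValueStorageKind::kTwoWordsAtomic
                                               : FlagValueStorageKind::kHeapAllocated;
}

static_assert(StorageKind<int32_t>() == FlagValueStorageKind::kOneWordAtomic, "");
static_assert(StorageKind<double>() == FlagValueStorageKind::kOneWordAtomic, "");
```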
--
f8fe7bd53bfed601f002f521e34ab4bc083fc28b by Matthew Brown <matthewbr@google.com>:
Ensure a deep copy and proper equality on absl::Status::ErasePayload
PiperOrigin-RevId: 298482742
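A small usage sketch of the behavior being fixed (the type URL string is arbitrary): erasing a payload should restore equality with a status that never carried it.

```cpp
#include "absl/status/status.h"
#include "absl/strings/cord.h"

void ErasePayloadExample() {
  absl::Status plain = absl::InternalError("boom");
  absl::Status annotated = plain;
  annotated.SetPayload("type.googleapis.com/example.Detail",
                       absl::Cord("extra context"));
  annotated.ErasePayload("type.googleapis.com/example.Detail");
  // With this change, annotated == plain compares equal again.
}
```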
--
a5c9ccddf4b04f444e3f7e27dbc14faf1fcb5373 by Gennadiy Rozental <rogeeff@google.com>:
Change the ChunkIterator implementation to use a fixed-capacity collection of CordRep*, since we can now assume that depth never exceeds 91. This makes the comparison operator exception safe.
I've verified that with this CL we observe no overhead from chunk_end; the compiler optimized this iterator away completely.
PiperOrigin-RevId: 298458472
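To illustrate the idea (names and layout are invented for the sketch, not the actual Cord internals): with a hard bound on tree depth, the iterator's traversal stack can live inline, so copying and comparing iterators never allocates and therefore cannot throw.

```cpp
#include <array>
#include <cstddef>

struct CordRep;  // opaque tree node, stands in for Cord's internal type

class ChunkIteratorSketch {
 public:
  bool operator==(const ChunkIteratorSketch& other) const noexcept {
    // Comparing an inline array of pointers cannot allocate or throw.
    return size_ == other.size_ && stack_ == other.stack_;
  }

 private:
  static constexpr std::size_t kMaxDepth = 91;  // depth bound quoted above
  std::array<CordRep*, kMaxDepth> stack_{};     // fixed capacity, no heap
  std::size_t size_ = 0;
};
```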
--
327ea5e8910bc388b03389c730763f9823abfce5 by Abseil Team <absl-team@google.com>:
Minor cleanups in b-tree code:
- Rename some variables: fix issues of different param names between definition/declaration, move away from `x` as a default meaningless variable name.
- Make init_leaf/init_internal non-static methods (they already take the node as the first parameter).
- In internal_emplace/try_shrink, update root/rightmost the same way as in insert_unique/insert_multi.
- Replace a TODO with a comment.
PiperOrigin-RevId: 298432836
--
8020ce9ec8558ee712d9733ae3d660ac1d3ffe1a by Abseil Team <absl-team@google.com>:
Guard against an unnecessary copy in case the buffer is empty. This is important in cases where the user is explicitly tuning their chunks to match PiecewiseChunkSize().
PiperOrigin-RevId: 298366044
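A minimal sketch of the guard described above (function and variable names are hypothetical; the actual code is in Abseil's hashing internals): skip the copy entirely when nothing is buffered, so callers who feed chunks sized exactly to PiecewiseChunkSize() never pay for an empty memcpy.

```cpp
#include <cstddef>
#include <cstring>

// When the staging buffer is empty there is nothing to carry over, so the
// copy (and the surrounding work) can be skipped entirely.
inline void FlushInto(char* dst, const char* buffer, std::size_t buffered) {
  if (buffered == 0) return;  // guard: empty buffer, no copy needed
  std::memcpy(dst, buffer, buffered);
}
```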
--
89324441d1c0c697c90ba7d8fc63639805fcaa9d by Abseil Team <absl-team@google.com>:
Internal change
PiperOrigin-RevId: 298219363
GitOrigin-RevId: a3e58c1870a9626039f4d178d2d599319bd9f8a8
Change-Id: I28dffc684b6fd0292b94807b88ec6664d5d0e183
Diffstat (limited to 'absl/debugging')
-rw-r--r--  absl/debugging/internal/stacktrace_win32-inl.inc | 12
1 file changed, 6 insertions, 6 deletions
```diff
diff --git a/absl/debugging/internal/stacktrace_win32-inl.inc b/absl/debugging/internal/stacktrace_win32-inl.inc
index af4578a5..1c666c8b 100644
--- a/absl/debugging/internal/stacktrace_win32-inl.inc
+++ b/absl/debugging/internal/stacktrace_win32-inl.inc
@@ -46,9 +46,9 @@ typedef USHORT NTAPI RtlCaptureStackBackTrace_Function(
     OUT PVOID *backtrace,
     OUT PULONG backtrace_hash);
 
-// It is not possible to load RtlCaptureStackBackTrace at static init time in
-// UWP. CaptureStackBackTrace is the public version of RtlCaptureStackBackTrace
-#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP) && \
+// It is not possible to load RtlCaptureStackBackTrace at static init time in
+// UWP. CaptureStackBackTrace is the public version of RtlCaptureStackBackTrace
+#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP) && \
     !WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
 static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn =
     &::CaptureStackBackTrace;
@@ -56,9 +56,9 @@ static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn =
 // Load the function we need at static init time, where we don't have
 // to worry about someone else holding the loader's lock.
 static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn =
-    (RtlCaptureStackBackTrace_Function*)
-    GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlCaptureStackBackTrace");
-#endif  // WINAPI_PARTITION_APP && !WINAPI_PARTITION_DESKTOP
+    (RtlCaptureStackBackTrace_Function*)GetProcAddress(
+        GetModuleHandleA("ntdll.dll"), "RtlCaptureStackBackTrace");
+#endif  // WINAPI_PARTITION_APP && !WINAPI_PARTITION_DESKTOP
 
 template <bool IS_STACK_FRAMES, bool IS_WITH_CONTEXT>
 static int UnwindImpl(void** result, int* sizes, int max_depth, int skip_count,
```
+// It is not possible to load RtlCaptureStackBackTrace at static init time in +// UWP. CaptureStackBackTrace is the public version of RtlCaptureStackBackTrace +#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP) && \ !WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP) static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn = &::CaptureStackBackTrace; @@ -56,9 +56,9 @@ static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn = // Load the function we need at static init time, where we don't have // to worry about someone else holding the loader's lock. static RtlCaptureStackBackTrace_Function* const RtlCaptureStackBackTrace_fn = - (RtlCaptureStackBackTrace_Function*) - GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlCaptureStackBackTrace"); -#endif // WINAPI_PARTITION_APP && !WINAPI_PARTITION_DESKTOP + (RtlCaptureStackBackTrace_Function*)GetProcAddress( + GetModuleHandleA("ntdll.dll"), "RtlCaptureStackBackTrace"); +#endif // WINAPI_PARTITION_APP && !WINAPI_PARTITION_DESKTOP template <bool IS_STACK_FRAMES, bool IS_WITH_CONTEXT> static int UnwindImpl(void** result, int* sizes, int max_depth, int skip_count, |