PiperOrigin-RevId: 199298594
PiperOrigin-RevId: 199296333
PiperOrigin-RevId: 199293694
PiperOrigin-RevId: 199274329
PiperOrigin-RevId: 199262414
- Removed workaround for https://github.com/bazelbuild/bazel/issues/2182 since it's fixed.
- Removed setting of CUDA-related environment variables; assume they are already set. If not, configure.py will set default values for them.
- Removed obsolete variables for cc_test targets.
PiperOrigin-RevId: 199256482
created modules, to verify at TearDown.
PiperOrigin-RevId: 199244092
PiperOrigin-RevId: 199241723
PiperOrigin-RevId: 199230907
Surprisingly, drawing a subgraph twice mostly worked. But it broke the rollover edge highlighting, and it also drew all the edges in the subgraph twice.
PiperOrigin-RevId: 199221368
PiperOrigin-RevId: 199220422
PiperOrigin-RevId: 199216721
Fix numerical inaccuracy in precision_recall_at_equal_thresholds caused by accumulating the tp/fp/tn/fn values in float32, which becomes highly inaccurate as the number of values increases.
In the common case, the method adds 1.0f to the tp/fp/tn/fn bucket for every value in the predictions tensor. If the tensor is large (say, it represents an image and we have one tp/fp/tn/fn value per pixel), then we are essentially adding many 1.0f's together, across the entire batch and also across all the batches. In float32, the sum starts losing accuracy at around 16M, which is very small. In practice, we see a deviation of 100x when the total reaches about 3e10 (the previous code reports a number around 1e8 when the actual value should be 3e10).
We avoid all these issues by always accumulating in float64.
Also fix a bug where the method could not be called with a predictions dtype other than float32; previously it would crash in the eps code near the end.
Added tests for float64 and float16.
PiperOrigin-RevId: 199216173
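The float32 saturation described above can be demonstrated directly. This short sketch (using NumPy, whose float32/float64 follow IEEE-754 single and double precision) shows that adding 1.0f to a float32 accumulator stops having any effect once the running total reaches 2^24 ≈ 16M, while a float64 accumulator keeps counting:

```python
import numpy as np

# float32 has a 24-bit significand, so integers are exact only up to 2**24.
acc32 = np.float32(2 ** 24)   # 16777216.0f, the point where counting breaks
acc64 = np.float64(2 ** 24)

# Adding 1.0f to a saturated float32 accumulator rounds back to the same
# value, so a tp/fp/tn/fn counter kept in float32 silently stops increasing.
assert acc32 + np.float32(1.0) == acc32

# The same update in float64 still advances the count.
assert acc64 + np.float64(1.0) == 2 ** 24 + 1
```

This is why the deviation grows with the total: every 1.0f contribution past ~16M is lost entirely in float32, while float64 remains exact for counts far beyond 3e10.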
The token type will be threaded through side-effecting ops to order them. Subsequent CLs will add new opcodes and change side-effecting operations to support this ordering.
This CL also does some cleanup in shape_util and layout_util, where we had assumed that shapes are either arrays or tuples.
PiperOrigin-RevId: 199215963
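The ordering idea can be illustrated outside XLA with a hypothetical sketch: each side-effecting op consumes the token as an operand and produces a new one, so the data dependency alone forces the ops to run in program order, even in a graph with no other edges between them. The `Token` class and op names below are illustrative stand-ins, not the actual XLA opcodes:

```python
# Hypothetical sketch: thread a token through side-effecting ops so that
# data dependencies, not program text, determine their execution order.
class Token:
    """Opaque ordering value; carries no data, only a dependency edge."""

trace = []

def infeed(token):
    # Perform the side effect, then hand back a fresh token for the next op.
    trace.append("infeed")
    return "data", Token()

def outfeed(token, value):
    trace.append(f"outfeed:{value}")
    return Token()

t = Token()             # initial token (XLA: produced by a start op)
data, t = infeed(t)     # consumes t, returns a new token
t = outfeed(t, data)    # must run after infeed: it needs infeed's token

assert trace == ["infeed", "outfeed:data"]
```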
PiperOrigin-RevId: 199208527
I'm not sure why our existing tests didn't catch this...
PiperOrigin-RevId: 199206183
PiperOrigin-RevId: 199205459
PiperOrigin-RevId: 199203634
supported in contrib.
PiperOrigin-RevId: 199200258
PiperOrigin-RevId: 199200246
preparation to split HloInstruction into subclasses. This initial implementation uses C++ dynamic_cast, so it also adds a vtable to HloInstruction.
PiperOrigin-RevId: 199199109
PiperOrigin-RevId: 199198413
PiperOrigin-RevId: 199198086
Don't use --distinct_host_configuration=false by default, because it would break cross-compiling, such as Android and Raspberry Pi builds.
Instead, we add it only for builds that we know have the same host and target platforms.
PiperOrigin-RevId: 199194260
PiperOrigin-RevId: 199193181
PiperOrigin-RevId: 199186109
std::shared_ptr.
PiperOrigin-RevId: 199179607
PiperOrigin-RevId: 199179067
PiperOrigin-RevId: 199177029
PiperOrigin-RevId: 199173022
PiperOrigin-RevId: 199171845
PiperOrigin-RevId: 199171316
PiperOrigin-RevId: 199168290
the list stack operation.
PiperOrigin-RevId: 199167953
PiperOrigin-RevId: 199164433
PiperOrigin-RevId: 199161696
determinant.
This is useful for testing the LKJ distribution on correlation matrices.
PiperOrigin-RevId: 199153115
convolution by swapping the kernel input and output feature dimension.
PiperOrigin-RevId: 199153010
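Swapping the kernel's input and output feature dimensions is a transpose of those two axes. A minimal sketch, assuming an HWIO kernel layout ([height, width, in_features, out_features]) — the layout is an assumption here, not something stated in the log:

```python
import numpy as np

# Assumed HWIO layout: [height, width, in_features, out_features].
kernel = np.arange(3 * 3 * 8 * 16).reshape(3, 3, 8, 16)

# Swap the input-feature and output-feature dimensions (axes 2 and 3).
swapped = np.transpose(kernel, (0, 1, 3, 2))

assert swapped.shape == (3, 3, 16, 8)
# Every element moves with its axes: swapped[h, w, o, i] == kernel[h, w, i, o].
assert swapped[0, 0, 5, 2] == kernel[0, 0, 2, 5]
```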
PiperOrigin-RevId: 199148136
--config=opt will enable the /arch:AVX compiler option on Windows.
-c opt is already specified in tools/bazel.rc, so it's OK to remove it here.
PiperOrigin-RevId: 199145562
PiperOrigin-RevId: 199142338
PiperOrigin-RevId: 199141605
PiperOrigin-RevId: 199140124
tensorflow/contrib/distribute/python:minimize_loss_test_gpu from continuous builds.
PiperOrigin-RevId: 199140117
PiperOrigin-RevId: 199134753
multiple reduce outputs.
PiperOrigin-RevId: 199132442
The fix is either of: (a) dropping support for tracking specific slices of a symbol, or (b) tracking slices along with the symbols on which they depend.
Background:
So far we tracked symbols like `a[b]` and allowed conversions of the kind `if <cond>: a[b] = c` -> `a[b] = ag__.if_stmt(<cond>, lambda: c, lambda: a[b])`. That construct allowed `a` to be anything, including e.g. Python lists, objects, etc.
This is incomplete and will become obsolete in the future as we override the slice operator. In effect, the statement above will be converted to `a = ag__.if_stmt(<cond>, lambda: ag__.set_item(a, b, c), lambda: a)`. However, this latter form does not support objects, so there is a tradeoff.
PiperOrigin-RevId: 199131573
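The functional rewrite described above can be sketched with stand-in implementations. `if_stmt` and `set_item` here are hypothetical minimal versions of the `ag__` helpers, handling lists only (the case the log says the new form supports):

```python
def set_item(target, index, value):
    # Functional update: return a modified copy instead of mutating in place.
    result = list(target)
    result[index] = value
    return result

def if_stmt(cond, true_fn, false_fn):
    # Both branches are deferred; only the taken one actually runs.
    return true_fn() if cond else false_fn()

# Original program:  if cond: a[b] = c
a, b, c, cond = [10, 20, 30], 1, 99, True
a = if_stmt(cond, lambda: set_item(a, b, c), lambda: a)
assert a == [10, 99, 30]

# With the condition False, `a` is passed through unchanged.
a = if_stmt(False, lambda: set_item(a, 0, -1), lambda: a)
assert a == [10, 99, 30]
```

The key property is that both forms compute the same result, but the functional form rebinds `a` instead of mutating it, which is why it cannot express in-place updates on arbitrary objects.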
PiperOrigin-RevId: 199125920
PiperOrigin-RevId: 199119904
We can just pass along the original ArraySlice.
PiperOrigin-RevId: 199109815