| Commit message | Author | Age |
individual stages.
Get rid of graph annotation and use GraphProperties directly.
PiperOrigin-RevId: 190801044
Prior to this, saving and restoring a graph with a resource variable global_step
would cause the global_step collection of the reimported graph to contain
a resource tensor (the object underlying the ResourceVariable); the actual
metadata associated with it would be serialized.
PiperOrigin-RevId: 190791443
PiperOrigin-RevId: 190789794
functions.
PiperOrigin-RevId: 190789781
PiperOrigin-RevId: 190787954
Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it is allowed is for subclassed Models which do not have an "inputs" argument to their call() method.
This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.
Includes errors for cases where it is ambiguous whether an argument is an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so that cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments if necessary.
PiperOrigin-RevId: 190787899
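A minimal Python sketch of the dispatch rule this commit describes (illustrative only, not the real tf.keras code): extra positional arguments to __call__ are allowed only when the subclass's call() signature has no "inputs" parameter.

```python
import inspect

class Layer:
    def __call__(self, inputs, *args, **kwargs):
        # Extra positional args are an error unless call() has no
        # "inputs" parameter (the subclassed-Model case).
        params = inspect.signature(self.call).parameters
        if args and "inputs" in params:
            raise TypeError(
                "positional arguments other than 'inputs' are not allowed")
        return self.call(inputs, *args, **kwargs)

class SubclassedModel(Layer):
    # call() takes several tensor arguments, none named "inputs", so all
    # of them may now be passed positionally.
    def call(self, x, y):
        return x + y

class ClassicLayer(Layer):
    def call(self, inputs, scale=1):
        return inputs * scale

print(SubclassedModel()(1, 2))  # extra positional arg is fine here
```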
PiperOrigin-RevId: 190775681
PiperOrigin-RevId: 190735724
PiperOrigin-RevId: 190728742
PiperOrigin-RevId: 190721153
Add test code for this purpose.
PiperOrigin-RevId: 190719729
PiperOrigin-RevId: 190715033
PiperOrigin-RevId: 190713919
PiperOrigin-RevId: 190712404
DfsHloVisitorWithDefault incorrectly included some overrides for handling
several elementwise binary and unary opcodes. These overrides explicitly
called DefaultAction which meant that these opcodes were not handled by
HandleElementwiseUnary/Binary. This CL removes these overrides and adds a
comment describing the potential problem. Unfortunately, I don't see a way
of automatically catching these issues when new opcodes are added, so the
comment will have to do.
PiperOrigin-RevId: 190708245
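An illustrative Python sketch of the C++ pitfall this commit removes (class and method names are simplified stand-ins for the real DfsHloVisitorWithDefault): a per-opcode override that just forwards to the default action prevents the shared elementwise handler from ever seeing that opcode.

```python
ELEMENTWISE_UNARY = {"negate", "abs", "exp"}

class VisitorWithDefault:
    def default_action(self, opcode):
        return "default"

    def handle_elementwise_unary(self, opcode):
        return "elementwise"

    def visit(self, opcode):
        # A per-opcode override, if present, wins over the generic handler.
        handler = getattr(self, "handle_" + opcode, None)
        if handler is not None:
            return handler(opcode)
        if opcode in ELEMENTWISE_UNARY:
            return self.handle_elementwise_unary(opcode)
        return self.default_action(opcode)

class BuggyVisitor(VisitorWithDefault):
    def handle_negate(self, opcode):
        # The overrides removed by this commit looked like this: "negate"
        # never reaches handle_elementwise_unary.
        return self.default_action(opcode)

print(VisitorWithDefault().visit("negate"))  # elementwise
print(BuggyVisitor().visit("negate"))        # default
```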
PiperOrigin-RevId: 190707017
not/rarely used.
PiperOrigin-RevId: 190706088
TF graph optimizations.
PiperOrigin-RevId: 190705686
PiperOrigin-RevId: 190702442
We incorrectly counted FLOPs when the output and kernel line up to access the
padding or the dilated area. These should not be accounted as contributing to
the FLOP count.
PiperOrigin-RevId: 190702384
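A 1-D illustrative sketch of the corrected count (not TF's actual profiler code): a multiply-add is tallied only when the possibly dilated kernel tap lands inside the real input, never on the padding or in the dilation gaps.

```python
def conv1d_flops(input_len, kernel_size, padding, dilation):
    # Stride-1 output length for a padded, dilated 1-D convolution.
    out_len = input_len + 2 * padding - dilation * (kernel_size - 1)
    flops = 0
    for out in range(out_len):
        for k in range(kernel_size):
            pos = out + k * dilation - padding
            if 0 <= pos < input_len:  # skip taps that hit the padding
                flops += 2  # one multiply plus one add
    return flops

# With padding=1 the two border outputs touch padding, so they contribute
# fewer FLOPs than the interior outputs.
print(conv1d_flops(input_len=5, kernel_size=3, padding=1, dilation=1))
```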
pointer.
In the past, we allowed sub-buffers to be null if the top-level tuple
was non-null.
This doesn't actually work well on the GPU: For ops that are implemented
using cudnn or cublas, we have to have a pointer to the sub-buffer on
the host in order to make the call. Retrieving it from the GPU in an
efficient manner is complicated, and the best we can come up with isn't
all that efficient (fundamentally having to pull data down from the GPU
blocks the ability of the CPU to "run ahead" of the GPU).
Since TF wasn't making use of our flexibility *anyway*, we add the
requirement that XLA be given non-null pointers to all sub-buffers.
Changes to the XLA:GPU backend to take advantage of this will come
separately.
PiperOrigin-RevId: 190700021
Make DistributionStrategy.colocate_vars_with() match the existing
behavior of ops.colocate_with() by default, for compatibility.
PiperOrigin-RevId: 190699882
Just numbers Layers like "layer-N". It may also make sense to track them by
"ClassName-M", but that's a backwards-compatible change.
Special-cases all of the dependency collection, since Layers can be added and
removed from Sequential.
PiperOrigin-RevId: 190699818
PiperOrigin-RevId: 190699635
PiperOrigin-RevId: 190698245
of values.
PiperOrigin-RevId: 190696953
Update GrapplerTest::EvaluateNodes to take feeds as an argument, to make it easier to write tests with placeholders.
PiperOrigin-RevId: 190696386
PiperOrigin-RevId: 190695737
PiperOrigin-RevId: 190693455
It now correctly broadcasts start state across whatever batch dimension it is
passed rather than squishing it down to a batch dimension of 1.
PiperOrigin-RevId: 190688855
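A hypothetical NumPy sketch of the fix (function and argument names are made up, not the real API): a start state with batch size 1 is broadcast across the incoming batch dimension instead of squishing it down to 1.

```python
import numpy as np

def broadcast_start_state(state, batch_size):
    # state: [1, state_size] -> [batch_size, state_size], no copies made.
    return np.broadcast_to(state, (batch_size, state.shape[-1]))

state = np.zeros((1, 4))
print(broadcast_start_state(state, 8).shape)  # (8, 4)
```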
to in-place ops.
PiperOrigin-RevId: 190687820
ReduceWindow operations are done in higher precision to avoid accumulation
error. Convert operations can find their way between a ReduceWindow and a Pad
which can prevent a Pad from combining with a ReduceWindow.
Fix this by looking past the Convert, while also checking that the converted
Pad's init value is identical to the reduce-window value.
PiperOrigin-RevId: 190686175
PiperOrigin-RevId: 190681610
This change makes _set_shapes_for_outputs_c_api fetch and set
Tensor._handle_data. This is necessary for running the
Python shape inference code on resource tensors.
PiperOrigin-RevId: 190681459
`tf.contrib.data.group_by_window()`.
PiperOrigin-RevId: 190673466
quantization, but have a model that has FAKE_QUANT operations.
PiperOrigin-RevId: 190672414
PiperOrigin-RevId: 190671867
See:
https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/compiler/xla/client/computation_builder.h#L668
PiperOrigin-RevId: 190671530
FastPathExecute function also allows inputs to be sequences instead of just lists.
PiperOrigin-RevId: 190670587
HloModule::CreateModuleConfigFromProto
Otherwise it's easy to forget that you likely want the DebugOptions to be `legacy_flags::GetDebugOptionsFromFlags()`.
PiperOrigin-RevId: 190659046
checks whether the output tensors produced by them are the same.
PiperOrigin-RevId: 190655831
PiperOrigin-RevId: 190651873
tensorflow::str_util equivalents.
This will allow the deprecated methods to be removed.
PiperOrigin-RevId: 190650553
The Python extensions aren't part of the official C API.
PiperOrigin-RevId: 190649576
PiperOrigin-RevId: 190644837
PiperOrigin-RevId: 190641841
PiperOrigin-RevId: 190633067
PiperOrigin-RevId: 190630641
Everything in contrib/learn/python/learn/datasets/base.py has been deprecated. One of the functions there is a decorator, retry. Because another function in that file is decorated with retry, the decorator is invoked upon import, which prints a warning.
I have fixed this by adding a private function, _internal_retry, which is used internally, and redefining retry to simply call it. That way, using retry in user code will still print the deprecation warning, but it is not printed upon every import.
I also cleaned up the docstrings slightly.
PiperOrigin-RevId: 190626717
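A sketch of the pattern this commit describes. The real retry in contrib/learn takes delay/backoff parameters; the max_attempts signature here is a simplified, made-up stand-in.

```python
import functools
import warnings

def _internal_retry(max_attempts):
    """Private implementation; importing and applying it emits no warning."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
        return wrapped
    return decorator

def retry(max_attempts):
    """Public, deprecated entry point: warns at use, then delegates."""
    warnings.warn("retry is deprecated", DeprecationWarning)
    return _internal_retry(max_attempts)

calls = []

@_internal_retry(max_attempts=3)  # internal use: silent at import time
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise ValueError("transient failure")
    return "ok"

print(flaky())  # "ok" after one retry
```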
PiperOrigin-RevId: 190624708