* Make ArithmeticOptimizer robust to failures of shape inference and individual stages. (A. Unique TensorFlower, 2018-03-28)
  Get rid of graph annotation and use GraphProperties directly.
  PiperOrigin-RevId: 190801044
* Properly serialize ResourceVariable global_step into the metagraph. (Eugene Brevdo, 2018-03-28)
  Prior to this, saving and restoring a graph with a resource variable global_step would cause the global_step collection of the reimported graph to contain a resource tensor (the object underlying the ResourceVariable); the actual metadata associated with it would be serialized.
  PiperOrigin-RevId: 190791443
* internal change (A. Unique TensorFlower, 2018-03-28)
  PiperOrigin-RevId: 190789794
* Avoid overwriting existing namespace items that might replace the converted functions. (A. Unique TensorFlower, 2018-03-28)
  PiperOrigin-RevId: 190789781
* Enable the Grappler arithmetic optimizer by default in Python tests. (A. Unique TensorFlower, 2018-03-28)
  PiperOrigin-RevId: 190787954
* Allow positional arguments in tf.keras.Model subclasses (Allen Lavoie, 2018-03-28)
  Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it is allowed is subclassed Models whose call() method does not have an "inputs" argument. This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.
  Includes errors for cases where it is ambiguous whether an argument indicates an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so that cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments if necessary.
  PiperOrigin-RevId: 190787899
* Move ExecuteNode and CopyToDevice_Internal (Alexandre Passos, 2018-03-28)
  PiperOrigin-RevId: 190775681
* Internal change (A. Unique TensorFlower, 2018-03-28)
  PiperOrigin-RevId: 190735724
* Have TensorFlow Distributions share name scopes across method calls. (Dustin Tran, 2018-03-27)
  PiperOrigin-RevId: 190728742
* Speed up statistical_testing_test by consolidating sess.run calls. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190721153
* Fix non-uniformity of orthogonal matrices. (A. Unique TensorFlower, 2018-03-27)
  Add test code for this purpose.
  PiperOrigin-RevId: 190719729
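  The commit's own code is not shown here, but the underlying pitfall is well known and can be sketched in NumPy (the function name and shapes below are illustrative, not taken from the commit): QR-factorizing a Gaussian matrix yields an orthogonal Q, but unless the signs of R's diagonal are fixed, the resulting matrices are not uniformly (Haar) distributed.

  ```python
  import numpy as np

  def sample_orthogonal(n, rng):
      """Sample an n x n orthogonal matrix Haar-uniformly (Mezzadri, 2007)."""
      a = rng.standard_normal((n, n))
      q, r = np.linalg.qr(a)
      # Plain QR is NOT Haar-uniform: NumPy makes no sign guarantee for
      # diag(R). Multiplying each column of Q by sign(diag(R)) fixes this.
      d = np.sign(np.diag(r))
      d[d == 0] = 1.0
      return q * d

  q = sample_orthogonal(4, np.random.default_rng(0))
  assert np.allclose(q @ q.T, np.eye(4), atol=1e-10)  # still orthogonal
  ```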
* Fix _force_data_dependency for scalar inputs (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190715033
* Implement strip assert in DebugStripper. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190713919
* Fixed the interaction between virtual cluster and measuring cost estimator. (Benoit Steiner, 2018-03-27)
  PiperOrigin-RevId: 190712404
* Fix problem with HandleElementwiseUnary/Binary in DfsHloVisitorWithDefault. (Mark Heffernan, 2018-03-27)
  DfsHloVisitorWithDefault incorrectly included some overrides for handling several elementwise binary and unary opcodes. These overrides explicitly called DefaultAction, which meant that these opcodes were not handled by HandleElementwiseUnary/Binary. This CL removes these overrides and adds a comment describing the potential problem. Unfortunately, I don't see a way of automatically catching these issues when new opcodes are added, so the comment will have to do.
  PiperOrigin-RevId: 190708245
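  The pitfall this entry describes generalizes beyond XLA. A minimal Python sketch (class and method names are hypothetical, not the actual DfsHloVisitorWithDefault API) shows how a per-opcode override that calls the default action silently bypasses the shared elementwise hook:

  ```python
  class VisitorWithDefault:
      """Base visitor: unhandled opcodes fall through to default_action."""
      def default_action(self, op):
          return "default"

      def handle_elementwise_unary(self, op):
          # Shared hook intended to cover all elementwise unary opcodes.
          return self.default_action(op)

      # BUG (the kind this commit removes): an explicit per-opcode override
      # that goes straight to default_action, so subclasses that override
      # handle_elementwise_unary never see "negate".
      def handle_negate(self, op):
          return self.default_action(op)

  class CountingVisitor(VisitorWithDefault):
      def __init__(self):
          self.unary_ops_seen = 0

      def handle_elementwise_unary(self, op):
          self.unary_ops_seen += 1
          return "unary"

  v = CountingVisitor()
  v.handle_negate("negate")
  assert v.unary_ops_seen == 0  # the shared hook was silently bypassed
  ```

  Deleting the `handle_negate` override (the commit's fix, in miniature) would route "negate" through `handle_elementwise_unary` as intended.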
* Pass options to TFE_ContextOptionsSetAsync (Akshay Modi, 2018-03-27)
  PiperOrigin-RevId: 190707017
* [XLA] Remove CheckShape and CheckSameShape in ComputationBuilder; they are not (or only rarely) used. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190706088
* Disable new Gather/Slice estimators for now to fix the crashes during some TF graph optimizations. (Max Galkin, 2018-03-27)
  PiperOrigin-RevId: 190705686
* Support GatherV2 (using Gather) (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190702442
* [XLA] Accurately measure FLOPs for base-dilated convolutions (David Majnemer, 2018-03-27)
  We incorrectly counted FLOPs when the output and kernel line up to access the padding or the dilated area. These should not be counted as contributing to the FLOP count.
  PiperOrigin-RevId: 190702384
* [XLA] Assert that all buffers and sub-buffers passed to XLA have an explicit pointer. (Justin Lebar, 2018-03-27)
  In the past, we allowed sub-buffers to be null if the top-level tuple was non-null. This doesn't actually work well on the GPU: for ops that are implemented using cudnn or cublas, we have to have a pointer to the sub-buffer on the host in order to make the call. Retrieving it from the GPU in an efficient manner is complicated, and the best we can come up with isn't all that efficient (fundamentally, having to pull data down from the GPU blocks the ability of the CPU to "run ahead" of the GPU).
  Since TF wasn't making use of our flexibility *anyway*, we add the requirement that XLA be given non-null pointers to all sub-buffers. Changes to the XLA:GPU backend to take advantage of this will come separately.
  PiperOrigin-RevId: 190700021
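  A rough sketch of the new invariant (the function name and the integer-address representation are illustrative only, not XLA's actual ShapedBuffer API): every leaf of a nested buffer tuple must carry a real device pointer, and a null leaf is rejected even when its parent tuple is non-null.

  ```python
  def check_all_subbuffers_assigned(buffer, path=()):
      """Walk a nested tuple of device addresses; reject any null leaf."""
      if isinstance(buffer, tuple):
          for i, sub in enumerate(buffer):
              check_all_subbuffers_assigned(sub, path + (i,))
      elif buffer is None or buffer == 0:
          raise ValueError(f"sub-buffer at index {path} has no pointer")

  check_all_subbuffers_assigned((0x1000, (0x2000, 0x3000)))  # all leaves set: ok
  try:
      # Previously tolerated: non-null top-level tuple with a null leaf.
      check_all_subbuffers_assigned((0x1000, (None, 0x3000)))
  except ValueError as e:
      print(e)
  ```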
* Make slot_creator use DistributionStrategy for co-locating variables. (A. Unique TensorFlower, 2018-03-27)
  Make DistributionStrategy.colocate_vars_with() match the existing behavior of ops.colocate_with() by default, for compatibility.
  PiperOrigin-RevId: 190699882
* Make tf.keras.Sequential (properly) Checkpointable (Allen Lavoie, 2018-03-27)
  Just numbers Layers like "layer-N". It may also make sense to track them by "ClassName-M", but that's a backwards-compatible change. Special-cases all of the dependency collection, since Layers can be added and removed from Sequential.
  PiperOrigin-RevId: 190699818
* K-FAC: Bugfixes for TPU compatibility with covariance update ops. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190699635
* [XLA] Redesign: implement Tuple and GetTupleElement. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190698245
* Change the host-op result per TPU step from a single value to a collection of values. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190696953
* Improve support for DT_HALF and DT_BFLOAT16 in Grappler graph optimizations. (A. Unique TensorFlower, 2018-03-27)
  Update GrapplerTest::EvaluateNodes to take feeds as an argument, to make it easier to write tests with placeholders.
  PiperOrigin-RevId: 190696386
* Fixed a bug in ConvKFCBasicMultiIndepFB introduced in the last CL (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190695737
* Test all TFLite kernel implementations for fully connected. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190693455
* TFTS: Fix a bug in the SavedModel cold-start export (Allen Lavoie, 2018-03-27)
  It now correctly broadcasts start state across whatever batch dimension it is passed, rather than squishing it down to a batch dimension of 1.
  PiperOrigin-RevId: 190688855
* Add node types for DFS traversal to catch more issues with deduping inputs to in-place ops. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190687820
* [XLA] Fold reduce-window(convert(pad(X))) into reduce-window(convert(X)) (David Majnemer, 2018-03-27)
  ReduceWindow operations are done in higher precision to avoid accumulation error. Convert operations can find their way between a ReduceWindow and a Pad, which can prevent a Pad from combining with a ReduceWindow. Fix this by looking past the Convert while also checking that the Convert'd Pad's init value is identical to the reduce-window value.
  PiperOrigin-RevId: 190686175
* Moves Execute() from c_api.cc (Alexandre Passos, 2018-03-27)
  PiperOrigin-RevId: 190681610
* Make _USE_C_API = True and _USE_C_SHAPES = False work with handle data, take 2. (Skye Wanderman-Milne, 2018-03-27)
  This change makes _set_shapes_for_outputs_c_api fetch and set Tensor._handle_data. This is necessary for running the Python shape inference code on resource tensors.
  PiperOrigin-RevId: 190681459
* [tf.data] Raise error when window size is 0 in `tf.contrib.data.group_by_window()`. (Derek Murray, 2018-03-27)
  PiperOrigin-RevId: 190673466
* Improve error message when users forget to pass toco cmdline args for quantization, but have a model that has FAKE_QUANT operations. (Suharsh Sivakumar, 2018-03-27)
  PiperOrigin-RevId: 190672414
* Add "serve" as a default value for savedmodel_tagset. (Nupur Garg, 2018-03-27)
  PiperOrigin-RevId: 190671867
* Fix documentation of Clamp; it does not take a computation at all. (Dimitris Vardoulakis, 2018-03-27)
  See: https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/compiler/xla/client/computation_builder.h#L668
  PiperOrigin-RevId: 190671530
* Fast path for calling pack when the list is full of eager tensors. (Akshay Modi, 2018-03-27)
  The FastPathExecute function also allows inputs to be sequences instead of just lists.
  PiperOrigin-RevId: 190670587
* [TF:XLA] Force DebugOptions to be specified when calling HloModule::CreateModuleConfigFromProto (Nick Desaulniers, 2018-03-27)
  Otherwise it's easy to forget that you likely want the DebugOptions to be `legacy_flags::GetDebugOptionsFromFlags()`.
  PiperOrigin-RevId: 190659046
* Updating test so that it evaluates the optimized and original graph and checks whether the output tensors produced by them are the same. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190655831
* Improved shape inference for reshape (Benoit Steiner, 2018-03-27)
  PiperOrigin-RevId: 190651873
* Replaced calls to deprecated tensorflow::StringPiece methods with their tensorflow::str_util equivalents. (A. Unique TensorFlower, 2018-03-27)
  This will allow the deprecated methods to be removed.
  PiperOrigin-RevId: 190650553
* Exclude Python C extension from tensorflow/c:srcs target. (Skye Wanderman-Milne, 2018-03-27)
  The Python extensions aren't part of the official C API.
  PiperOrigin-RevId: 190649576
* Fix: Clamp takes three arguments after computation, not arbitrarily many. (Dimitris Vardoulakis, 2018-03-27)
  PiperOrigin-RevId: 190644837
* Match behavior of py_func in graph and eager. (Alexandre Passos, 2018-03-27)
  PiperOrigin-RevId: 190641841
* Internal cleanup. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190633067
* Import TPU profiler analysis gRPC Python stub to TensorFlow. (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190630641
* Prevent warning every time someone imports contrib.learn.datasets.base (James Keeling, 2018-03-27)
  Everything in contrib/learn/python/learn/datasets/base.py has been deprecated. One of the functions in there is a decorator, retry. Because another function in that file is decorated with retry, the decorator is invoked upon import, which prints a warning.
  I have fixed this by adding a private function, _internal_retry, which is used internally, and redefining retry to simply call this. That way, using retry in user code will still print the deprecation warning, but it is not printed upon every import. I also cleaned up the docstrings slightly.
  PiperOrigin-RevId: 190626717
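  The pattern described can be sketched in plain Python (the names below mirror the description; the details of the actual base.py implementation may differ): module-internal uses go through the private decorator, so only user code calling the public retry triggers the deprecation warning.

  ```python
  import functools
  import warnings

  def _internal_retry(max_attempts):
      """Private retry decorator: no deprecation warning, safe at import time."""
      def decorator(fn):
          @functools.wraps(fn)
          def wrapped(*args, **kwargs):
              for attempt in range(max_attempts):
                  try:
                      return fn(*args, **kwargs)
                  except Exception:
                      if attempt == max_attempts - 1:
                          raise
          return wrapped
      return decorator

  def retry(max_attempts):
      """Public, deprecated entry point: warns, then delegates."""
      warnings.warn("retry is deprecated", DeprecationWarning, stacklevel=2)
      return _internal_retry(max_attempts)

  @_internal_retry(max_attempts=3)  # decorating at import time: no warning
  def _flaky():
      _flaky.calls += 1
      if _flaky.calls < 3:
          raise IOError("transient failure")
      return "ok"

  _flaky.calls = 0
  assert _flaky() == "ok" and _flaky.calls == 3  # retried until success
  ```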
* Flush the output of print (fixes out-of-order prints in public colab) (A. Unique TensorFlower, 2018-03-27)
  PiperOrigin-RevId: 190624708
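  For reference, the Python-level behavior this entry relies on (a minimal sketch of the language feature, not the actual notebook plumbing): buffered stdout can interleave out of order with other streams, and print(..., flush=True) or an explicit sys.stdout.flush() forces the text out immediately.

  ```python
  import io
  import sys

  # stdout is buffered by default; flush=True pushes the text out right away,
  # which keeps output ordered when another stream writes in between.
  print("step 1 finished", flush=True)
  sys.stdout.flush()  # equivalent explicit flush for already-printed text

  # The keyword also works with any writable stream:
  buf = io.StringIO()
  print("step 2 finished", file=buf, flush=True)
  assert buf.getvalue() == "step 2 finished\n"
  ```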