* Replace Keras clip-by-value and clip-by-norm in Keras Optimizers with native TF clip_ops; also add a user input check that clipnorm and clipvalue are >= 0 if set. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202516320
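For context, the two clipping modes these optimizer arguments select can be sketched in plain NumPy. These are illustrative stand-ins, not the tf.clip_ops implementations; they also show why a negative clipnorm or clipvalue, which the new input check rejects, would be nonsensical:

```python
import numpy as np

def clip_by_value(g, clip_value):
    # Element-wise clamp into [-clip_value, clip_value].
    return np.clip(g, -clip_value, clip_value)

def clip_by_norm(g, clip_norm):
    # Rescale the whole gradient only if its L2 norm exceeds clip_norm.
    norm = np.linalg.norm(g)
    return g if norm <= clip_norm else g * (clip_norm / norm)

g = np.array([3.0, 4.0])           # L2 norm is 5.0
print(clip_by_value(g, 3.5))       # [3.  3.5]
print(clip_by_norm(g, 1.0))        # [0.6 0.8]
```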
* _UnaryOpsComposition kernel: compose multiple shape- and type-preserving unary ops at runtime. (Eugene Zhulenev, 2018-06-28)
  PiperOrigin-RevId: 202514848
* Include eager/graph mode in the cache key so that one type of tensor doesn't spill into the other. (Akshay Modi, 2018-06-28)
  PiperOrigin-RevId: 202513508
* Add helper for creating error tags. (James Keeling, 2018-06-28)
  error_format_tag is a helper for building interpolatable strings as part of a project to improve Python error messages in TensorFlow.
  PiperOrigin-RevId: 202509392
* Avoid overflow in flops calculations in nn_ops.py by forcing np.prod() to use np.int64 in a few places. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202505308
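The overflow being avoided is easy to reproduce: a shape whose element count exceeds 2**31 silently wraps in 32-bit integer arithmetic. A minimal illustration, not the nn_ops.py code itself (the dtype is forced to int32 here because on 64-bit Linux np.prod would otherwise default to the platform integer; on platforms where that default is 32-bit, plain np.prod(shape) overflows the same way):

```python
import numpy as np

shape = [1024, 1024, 1024, 4]  # 2**32 elements in total

# With a 32-bit accumulator the product silently wraps around to 0.
wrapped = int(np.prod(shape, dtype=np.int32))

# Forcing a 64-bit accumulator gives the true element count.
exact = int(np.prod(shape, dtype=np.int64))

print(wrapped, exact)  # 0 4294967296
```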
* [XLA] Change code in the TF/XLA bridge that uses XlaBuilder:: methods to build ops to use the corresponding free functions in namespace xla:: instead. (Peter Hawkins, 2018-06-28)
  PiperOrigin-RevId: 202505306
* tf.keras sync 2.2.0. (Anjali Sridhar, 2018-06-28)
  PiperOrigin-RevId: 202505228
* [XLA] VerifyShape() should verify that max_sparse_elements is non-negative. (Michael Kuperstein, 2018-06-28)
  PiperOrigin-RevId: 202504925
* Add `--input_examples` option to `run` syntax and update `--input_examples` and `--input_exprs` headings to match option names. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202504009
* Jump to version 1.9 to sync with TensorFlow versions. (A. Unique TensorFlower, 2018-06-28)
  Also enable the Cloud TPU profiler to detect the TF version for better version compatibility.
  PiperOrigin-RevId: 202503162
* Make identify_lstm independent of rnn_states. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202501055
* More un-fused quantized LSTM support in the TFLite interpreter. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202496488
* Run setUp in the same mode each time. (Tom Hennigan, 2018-06-28)
  PiperOrigin-RevId: 202489637
* Use a named tag for the nsync version, instead of a git hash. (A. Unique TensorFlower, 2018-06-28)
  Only the name of the version is changing here; the version itself is unchanged.
  PiperOrigin-RevId: 202486284
* Support more quantized unfused LSTMs in the TFLite interpreter. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202472329
* [tf2xla] Return zero-element tensors as tokens from FusedBatchNormGrad. (A. Unique TensorFlower, 2018-06-28)
  We returned one-element tensors with uninitialized content, which msan didn't like.
  PiperOrigin-RevId: 202463090
* Improve the performance of ParseShapeStringInternal. (A. Unique TensorFlower, 2018-06-28)
  The previous implementation recompiled the shape regex at every call, which is an expensive operation. The new implementation improves HLO text parsing time for very large models by up to 9x by eliminating this overhead.
  PiperOrigin-RevId: 202454354
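The pattern behind this fix is hoisting regex compilation out of the hot path. A Python analogue (the actual change is in XLA's C++ parser, and the shape pattern below is illustrative, not XLA's real grammar):

```python
import re

# Pattern for shape strings like "f32[128,256,512]" (illustrative only).
SHAPE_PATTERN = r"(\w+)\[([\d,]*)\]"

def parse_uncached(s):
    # Builds the regex object on every call. (Python's re module has an
    # internal cache that softens this; a C++ regex object does not, which
    # is why per-call construction was the hot spot being fixed.)
    return re.compile(SHAPE_PATTERN).match(s).groups()

_SHAPE_RE = re.compile(SHAPE_PATTERN)

def parse_cached(s):
    # Compile once at module load and reuse the compiled object.
    return _SHAPE_RE.match(s).groups()

print(parse_cached("f32[128,256,512]"))  # ('f32', '128,256,512')
```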
* Fixed ShardingMetadata dump of null sharding from None to {}, to make it compatible with HLO string syntax. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202445509
* [XLA] Handle domain instructions in dataflow analysis. (A. Unique TensorFlower, 2018-06-28)
  Without domain propagation in dataflow analysis we end up with inconsistent domain instructions that have BF16 as output and F32 as input. In the case of tuple shapes these are not fixed by bfloat16_normalization, and later on they cause asserts once the domain instructions are removed.
  PiperOrigin-RevId: 202442786
* Exclude test sources from stream executor builds. (Gunhan Gulsoy, 2018-06-28)
  PiperOrigin-RevId: 202423156
* Create a constant from the feature column group index. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202419595
* Expose SRUCell via tf.contrib.rnn. (RJ Ryan, 2018-06-28)
  PiperOrigin-RevId: 202415942
* Update LinearOperator tests to use array_ops.placeholder_with_default. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202412660
* Improve export_tensorflow readability by explicitly using the type instead of auto. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202409729
* Add complex64 support to the tf.lite runtime. (RJ Ryan, 2018-06-28)
  PiperOrigin-RevId: 202403235
* [XLA] Add test case for TOKEN constants; make the test case pass. (Peter Hawkins, 2018-06-28)
  PiperOrigin-RevId: 202401460
* Minor changes in SpaceToBatchND / BatchToSpaceND. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202401380
* Use a persistent tensor for Convolution HWCN weights. (Yu-Cheng Ling, 2018-06-28)
  PiperOrigin-RevId: 202400843
* [TF:XLA] Bump open source llvm revision to r335708. (Sanjoy Das, 2018-06-28)
  PiperOrigin-RevId: 202399218
* Ignore stop indices when shrink_axis_mask is set in the tf.lite StridedSlice implementation. (RJ Ryan, 2018-06-28)
  Due to an issue with negative StridedSlice indices in TensorFlow, the end indices can specify degenerate slices when negative indices are used to shrink an axis (e.g. for tf.range(4)[-1], start is -1, end is 0, and stride is 1). This fix works around the issue by ignoring stop indices entirely when an axis is shrinking, since an axis being shrunk has, by definition, length 1. Fixes Issue #19260.
  PiperOrigin-RevId: 202398678
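The degenerate-slice case is visible in plain Python indexing: for a length-4 range, x[-1] corresponds to start=-1 and a computed end of 0, which as a slice is empty. A minimal sketch of why the stop index must be ignored when an axis is being shrunk (shrink_axis_slice is a hypothetical helper, not the TFLite kernel):

```python
def shrink_axis_slice(data, start):
    # Emulate StridedSlice with shrink_axis_mask set on one axis: the stop
    # index is ignored because a shrunk axis has length 1 by definition.
    if start < 0:
        start += len(data)          # normalize a negative start index
    return data[start:start + 1][0]

data = list(range(4))               # like tf.range(4)
print(data[-1:0])                   # []  (start=-1, computed end=0: degenerate)
print(shrink_axis_slice(data, -1))  # 3   (ignoring the stop index recovers it)
```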
* Fix bug in runtime code for FullyConnected. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202397475
* [TF:XLA] Implement QuantizeAndDequantizeV3. (Peter Hawkins, 2018-06-28)
  Change the XLA lowering of QuantizeAndDequantizeV2/V3 to match the TF kernel much more closely. The main exception is that the min_quantized and max_quantized values are calculated as floats to avoid the need for 64-bit integer math, which is not present on all accelerators. Reformats unary_ops_test.py in passing, but on the whole I don't mind the reformatting.
  PiperOrigin-RevId: 202395114
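The quantize-then-dequantize round trip can be sketched in NumPy. This is only a sketch of the usual fake-quantization arithmetic, not the XLA lowering, and it omits the op's real attributes (signed_input, range_given, and so on); the point it illustrates is keeping the quantized-range endpoints as floats so no 64-bit integer math is needed:

```python
import numpy as np

def quantize_dequantize(x, input_min, input_max, num_bits=8):
    # Endpoints of the signed quantized range as Python floats
    # (e.g. -128.0 and 127.0 for 8 bits), avoiding 64-bit integer math.
    min_q = -float(2 ** (num_bits - 1))
    max_q = float(2 ** (num_bits - 1) - 1)
    scale = (input_max - input_min) / (max_q - min_q)
    # Quantize onto the integer grid, clamp, then map back to floats.
    q = np.clip(np.round((x - input_min) / scale + min_q), min_q, max_q)
    return (q - min_q) * scale + input_min

x = np.array([0.0, 0.5, 1.0])
# Endpoints round-trip exactly; interior values snap to the 255-step grid.
print(quantize_dequantize(x, 0.0, 1.0))
```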
* Internal-only change. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202393642
* Automated g4 rollback of changelist 202347723. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202392792
* collective_ops.py: Correct (0) to (0,) as the subdiv_offsets default argument. (A. Unique TensorFlower, 2018-06-28)
  Without the trailing ',', (0) does not evaluate to a tuple.
  PiperOrigin-RevId: 202390939
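The bug class here is a Python pitfall worth spelling out: parentheses alone don't make a tuple; the comma does. The function below is illustrative, not the real collective_ops API:

```python
# (0) is just the integer 0 wrapped in parentheses; the trailing comma
# is what makes (0,) a one-element tuple.
not_a_tuple = (0)
a_tuple = (0,)

print(type(not_a_tuple).__name__)  # int
print(type(a_tuple).__name__)      # tuple

# The difference matters to any code that iterates over the default
# value or takes its length:
def f(subdiv_offsets=(0,)):
    return len(subdiv_offsets)

print(f())  # 1; with a default of (0), len() would raise TypeError
```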
* Internal change. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202388653
* Change the ScopedAllocatorOptimizer() constructor to also take the corresponding RewriteConfig::Toggle as an argument. (A. Unique TensorFlower, 2018-06-28)
  This fixes a read-of-uninitialized-variable error.
  PiperOrigin-RevId: 202385487
* Add support for using losses.Reduction.SUM as the loss reduction. (A. Unique TensorFlower, 2018-06-28)
  Useful when the minibatches don't all have the same size.
  PiperOrigin-RevId: 202381046
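Why SUM helps with unequal minibatch sizes: averaging a per-batch mean loss weights a small batch as heavily as a large one, while summing per-example losses and dividing by the total example count weights every example equally. A NumPy sketch of the difference (toy numbers, not TF's losses code):

```python
import numpy as np

batches = [np.array([1.0, 1.0, 1.0, 1.0]),  # 4 examples, loss 1.0 each
           np.array([3.0])]                 # 1 example, loss 3.0

# MEAN per batch, averaged across batches: the lone example in the
# small batch counts as much as all four examples combined.
mean_of_means = np.mean([b.mean() for b in batches])   # (1.0 + 3.0) / 2 = 2.0

# SUM per batch, normalized by the global example count: every example
# is weighted equally.
total = sum(b.sum() for b in batches)                  # 4.0 + 3.0 = 7.0
per_example = total / sum(len(b) for b in batches)     # 7.0 / 5 = 1.4

print(mean_of_means, per_example)  # 2.0 1.4
```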
* [XLA] Change code in tensorflow/compiler/xla that uses XlaBuilder:: methods to build ops to use the corresponding free functions in namespace xla:: instead. (Peter Hawkins, 2018-06-28)
  PiperOrigin-RevId: 202377457
* [XLA] Use subshape pointers as map keys in BFloat16Propagation. (Yuanzhong Xu, 2018-06-28)
  Using simple keys is more efficient.
  PiperOrigin-RevId: 202377039
* Allow either a string device or a DeviceSpec for FakeOp. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202371689
* Do not capture variables that may be destroyed before the callback finishes. (Ayush Dubey, 2018-06-28)
  PiperOrigin-RevId: 202370201
* Have EnsureBiasVector create bias vectors that already have the constant value 0 and already have their shape set from the output activations shape, instead of having it create dummy placeholders and relying on PropagateFixedSizes to create the constant array. (A. Unique TensorFlower, 2018-06-28)
  Rationale: it wasn't PropagateFixedSizes's job to create constant arrays, and that broke down in a case where the bias vectors not being constant prevented FuseBinaryIntoPrecedingAffine from running.
  PiperOrigin-RevId: 202365030
* Fix the left nav for tutorials/. (Mark Daoust, 2018-06-28)
  PiperOrigin-RevId: 202363774
* Fix missing dependency. (Russell Power, 2018-06-28)
  PiperOrigin-RevId: 202357498
* Add GPUOptions::num_dev_to_dev_copy_streams to allow creation of more than one device-to-device copy stream per GPU device. (A. Unique TensorFlower, 2018-06-28)
  This is an experimental feature that will have no effect unless copy operations explicitly request a stream other than 0, which currently does not occur anywhere in a standard build. Eventually it may be of benefit in the presence of multiple bidirectional concurrent data copies.
  PiperOrigin-RevId: 202354513
* Support quantizing atrous convolutions. (Suharsh Sivakumar, 2018-06-28)
  Atrous convolutions are often DepthwiseConv2d operations preceded by SpaceToBatchND and followed by BatchToSpaceND operations. This change makes fold_batch_norms.py and quantize.py support this pattern.
  PiperOrigin-RevId: 202353838
* [TF:XLA] Refactor TF/XLA code to use free functions in the xla:: namespace to build XlaOps, rather than calling XlaBuilder methods. (Peter Hawkins, 2018-06-28)
  PiperOrigin-RevId: 202348891
* [SE] Re-enable acquiring the real CPU frequency. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202347723
* Add a type operation for FakeOp so that we can do type checks (without needing to check for None) while doing graph processing. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202346371