Commit message (Author, Date)
* Fix latent bug in dependency optimizer. (A. Unique TensorFlower, 2018-02-02)
  PiperOrigin-RevId: 184291701
* Disable graph optimizations (CSE) in test so that constant nodes are not deduped. (Yao Zhang, 2018-02-02)
  PiperOrigin-RevId: 184289685
* [comment-only change]: Fix grammar error. (A. Unique TensorFlower, 2018-02-02)
  PiperOrigin-RevId: 184285125
* Fix a bug in function inlining when the argument is an implicitly dereferenced ref tensor. (Derek Murray, 2018-02-02)
  Previously the inliner would add an Identity node with an invalid ref-type attr when the actual parameter had ref type. The changed version removes the reference.
  PiperOrigin-RevId: 184285084
* Enabling partitioned variables to work with TPU. (A. Unique TensorFlower, 2018-02-02)
  When partitioned variables are used in a TPU training loop, concat gradient operations are generated, for which XLA requires the concat dimension argument to be a constant (or foldable to a constant). However, since that constant is defined outside the training while-loop context, an Enter node is generated to pass it in. The fix detects this case and duplicates the (scalar) constant inside the while context, so that XLA can successfully process the resulting graph.
  PiperOrigin-RevId: 184273245
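The rewrite described in that entry can be illustrated with a toy graph representation. This is a hedged sketch only, not TensorFlow's actual C++ rewrite: the dict-based node format and the function name are invented for illustration. The idea is that when a Concat's axis input arrives through an Enter node from a constant defined outside the loop frame, a copy of the constant is created inside the frame so the axis is locally foldable.

```python
# Illustrative sketch (hypothetical node format, not TensorFlow's rewrite):
# duplicate a scalar constant into a while-loop frame when a ConcatV2's
# axis input reaches it through an Enter node.

def duplicate_loop_constants(nodes):
    """nodes: name -> {'op': str, 'inputs': [names], 'frame': str, 'value': any}"""
    new_nodes = dict(nodes)
    for name, node in nodes.items():
        if node['op'] != 'ConcatV2':
            continue
        axis_name = node['inputs'][-1]  # concat dimension is the last input
        axis = nodes[axis_name]
        if axis['op'] == 'Enter' and nodes[axis['inputs'][0]]['op'] == 'Const':
            const = nodes[axis['inputs'][0]]
            dup_name = axis['inputs'][0] + '/dup_' + node['frame']
            # Place the duplicated constant inside the loop's frame.
            new_nodes[dup_name] = {'op': 'Const', 'inputs': [],
                                   'frame': node['frame'],
                                   'value': const['value']}
            # Rewire the concat's axis input to the duplicated constant.
            new_nodes[name] = dict(node)
            new_nodes[name]['inputs'] = node['inputs'][:-1] + [dup_name]
    return new_nodes
```

After the rewrite, the concat's axis input is a Const node living in the same frame as the concat, which is what a constant-folding pass (or XLA) needs to see.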
* Fix some tf-lite tests. (Gunhan Gulsoy, 2018-02-02)
  PiperOrigin-RevId: 184247187
* Fix overly tight tolerance in Wasserstein distance test. (A. Unique TensorFlower, 2018-02-02)
  PiperOrigin-RevId: 184240222
* Internal change. (Justin Lebar, 2018-02-02)
  PiperOrigin-RevId: 184239740
* Automated g4 rollback of changelist 183874527. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184236409
* Supporting new saving op structure. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184233513
* Add darwin_x86_64 config in TF Lite BUILD file. (Yu-Cheng Ling, 2018-02-01)
  PiperOrigin-RevId: 184227786
* Fixed the description of the fake GPU device to avoid a division by 0. (Benoit Steiner, 2018-02-01)
  PiperOrigin-RevId: 184225409
* Fix tests. (Brennan Saeta, 2018-02-01)
  PiperOrigin-RevId: 184220615
* Skip unknown devices, since we can't optimize for them. (Benoit Steiner, 2018-02-01)
  PiperOrigin-RevId: 184220515
* Allow reordering of the execution order of nodes via an indirect execution_plan. (Andrew Selle, 2018-02-01)
  Whenever we want to operate in dependency order, we now use execution_plan. It begins as the identity map (0, ..., nodes_size()) but can be changed in the future. This is the basis for more pluggable delegation.
  PiperOrigin-RevId: 184216885
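The execution_plan mechanism in that entry can be sketched in a few lines. This is a hypothetical Python analogue of the idea, not the real TF Lite C++ interpreter: nodes are invoked in the order given by an index list that starts as the identity permutation, so a delegate can later reorder or subset execution without moving the nodes themselves.

```python
# Minimal sketch of an indirect execution plan (hypothetical analogue of
# TF Lite's execution_plan, not the actual interpreter).

class Interpreter:
    def __init__(self, nodes):
        self.nodes = nodes  # list of callables, one per node
        # Identity plan by default: invoke node 0, 1, ..., n-1 in order.
        self.execution_plan = list(range(len(nodes)))

    def invoke(self):
        # Execution order is defined entirely by the plan, not by node index.
        return [self.nodes[i]() for i in self.execution_plan]
```

Replacing `execution_plan` with a reordered or filtered index list changes what runs and in what order, which is the hook a delegation mechanism needs.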
* Automated g4 rollback of changelist 184188816. (Jacques Pienaar, 2018-02-01)
  PiperOrigin-RevId: 184213576
* GCS throttle: 1 token == 1 KB. (Brennan Saeta, 2018-02-01)
  Previously, 1 token was approximately 256 bytes, which is less intuitive than 1 KB.
  PiperOrigin-RevId: 184212503
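The new accounting amounts to charging one token per started kilobyte. The helper below is a hedged illustration of that arithmetic only; the function name and signature are invented and do not come from the GCS filesystem code.

```python
# Illustrative token accounting at 1 token == 1 KB (hypothetical helper,
# not the actual GCS throttle implementation).

def tokens_for_bytes(num_bytes, bytes_per_token=1024):
    # Ceiling division: even a 1-byte transfer costs a whole token.
    return -(-num_bytes // bytes_per_token)
```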
* Add functionality to fold batch norm (supporting both fused and unfused batch norm) to support quantized training. (Raghuraman Krishnamoorthi, 2018-02-01)
  The weights are now always scaled by gamma/sigma prior to quantization, where sigma is the moving standard deviation, for stability. For improved performance, the moving means and variances are frozen and the training graph is modified accordingly. An additional parameter, freeze_batch_norm_delay, is added to the fold-batch-norm function to set the delay at which training switches from regular batch norm to frozen means and variances. Placement options are removed from FoldBatchNorm, as they caused folded training to place all ops on a single GPU; this modification significantly speeds up distributed training. The tests for folding batch norms are updated to reflect the additional topological changes to the graph.
  PiperOrigin-RevId: 184211434
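The gamma/sigma scaling described in that entry follows from collapsing a frozen batch-norm layer into the preceding affine layer. With frozen moving mean mu and variance var, y = gamma * (x @ w - mu) / sqrt(var + eps) + beta is itself affine in x, so it folds into a single matmul with rescaled weights and an adjusted bias. The NumPy sketch below shows the algebra only; it is not the graph rewrite from the commit.

```python
# Sketch of batch-norm folding with frozen statistics (NumPy illustration,
# not TensorFlow's actual rewrite).
import numpy as np

def fold_batch_norm(w, gamma, beta, mu, var, eps=1e-3):
    sigma = np.sqrt(var + eps)          # moving standard deviation
    w_fold = w * (gamma / sigma)        # scale each output channel
    b_fold = beta - gamma * mu / sigma  # absorb the centering into the bias
    return w_fold, b_fold
```

Because `w_fold` and `b_fold` are plain tensors, quantization can be applied to them directly, which is the point of performing the fold before quantized training.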
* Add iterate_batches arg to Estimator.predict. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184205196
* Make jit_test.py work with the C API enabled. (Skye Wanderman-Milne, 2018-02-01)
  PiperOrigin-RevId: 184202470
* [XLA] Add DotGeneral to the local Python XLA client. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184202425
* Revert TensorBoard entry point back to run_main. (Nick Felt, 2018-02-01)
  PiperOrigin-RevId: 184201506
* Internal change. (Anna R, 2018-02-01)
  PiperOrigin-RevId: 184194895
* Throw an exception when the user's batch size isn't divisible by the number of GPUs. (Igor Saprykin, 2018-02-01)
  The alternative is an adaptive approach that would unevenly split the input into per-tower batches. The concern with that was that all towers would be as slow as the one with the most input, reducing performance. Batch size is commonly tailored to the available hardware.
  PiperOrigin-RevId: 184192793
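The validation in that entry can be sketched as a simple divisibility check. The helper name and signature below are hypothetical, not the Estimator's actual code: the point is only that an even per-tower split is required up front rather than silently producing uneven batches.

```python
# Hypothetical sketch of the batch-size validation described above
# (not the actual Estimator/replication code).

def per_tower_batch_size(batch_size, num_gpus):
    if num_gpus > 0 and batch_size % num_gpus != 0:
        raise ValueError(
            'Batch size %d is not divisible by the number of GPUs (%d).'
            % (batch_size, num_gpus))
    return batch_size // num_gpus if num_gpus else batch_size
```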
* Return an error instead of asserting when processing an ill-formed graph or an invalid set of fetch nodes. (Benoit Steiner, 2018-02-01)
  PiperOrigin-RevId: 184192790
* [TFXLA] Use data flow to determine switch grouping. (Jacques Pienaar, 2018-02-01)
  * Change how switch grouping works; this is an intermediate step, next is combining DetermineBranchMapAndFrontier into one traversal.
  * Homogenize the naming (switch_nodes -> switches).
  * Change graph dumping to be controlled by a class member; it is currently still performed only when the vlog level is sufficiently high.
  * Pass in the correct library when dumping graphs.
  PiperOrigin-RevId: 184188816
* Add documentation on how to load & serve a model with the TensorFlow Serving Model Server. (Noah Fiedel, 2018-02-01)
  PiperOrigin-RevId: 184188752
* Fix a type conversion bug in losses.compute_weighted_loss for reduction=SUM_OVER_BATCH_SIZE. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184186573
* Fix segfault when Softmax is first in the graph. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184183730
* Verify tensor contents of tflite model. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184183725
* Made the AddN optimization aware of the graph topology. (Benoit Steiner, 2018-02-01)
  PiperOrigin-RevId: 184179246
* Add a utility module that contains helper functions usable from within generated code, and a helper for the control-dependencies context manager. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184176409
* Internal change. (Anna R, 2018-02-01)
  PiperOrigin-RevId: 184174800
* Update deprecated API use. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184173047
* [tf.data] Fix bug where captured resources in shared iterators were invisible. (Derek Murray, 2018-02-01)
  This change ensures that a shared iterator (which requires a private FunctionLibraryRuntime that outlasts the calling op's runtime, because it can outlive a single session) uses the same Device as a non-shared iterator, so capturing resources from the creating graph works as intended. Fixes #16481.
  PiperOrigin-RevId: 184172498
* Added a utility to traverse the graph in reverse DFS order, identifying loops in the process. (Benoit Steiner, 2018-02-01)
  PiperOrigin-RevId: 184172483
* Automated g4 rollback of changelist 184153187. (Anna R, 2018-02-01)
  PiperOrigin-RevId: 184169668
* Internal change. (Anna R, 2018-02-01)
  PiperOrigin-RevId: 184165180
* Add function paths to their signatures. (Mark Daoust, 2018-02-01)
  Fixes #16167.
  PiperOrigin-RevId: 184160925
* Fix nest bug with different dictionary key orderings. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184160009
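The dictionary-ordering bug in the last entry above comes from flattening nested structures in insertion order, so two dicts with the same keys can flatten differently. A minimal sketch of the fix's idea (not tf.nest's implementation) is to flatten by sorted key:

```python
# Sketch of order-insensitive flattening (illustrative, not tf.nest itself):
# dicts flatten by sorted key, so key insertion order cannot change the result.

def flatten(structure):
    if isinstance(structure, dict):
        out = []
        for key in sorted(structure):
            out.extend(flatten(structure[key]))
        return out
    if isinstance(structure, (list, tuple)):
        out = []
        for item in structure:
            out.extend(flatten(item))
        return out
    return [structure]
```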
* Add shape inference for the outside_compilation graph rewrite. (A. Unique TensorFlower, 2018-02-01)
  Pull out enough of the graph to enable inference of the shape of a SendFromHost op once the shapes of the corresponding RecvAtHost ops are known.
  PiperOrigin-RevId: 184153187
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184141875
* [TF:XLA] Implement MatrixSetDiag and MatrixBandPart. (Peter Hawkins, 2018-02-01)
  Add support for int32 indices to the MatrixBandPart operator.
  PiperOrigin-RevId: 184133343
* [TF:XLA] Fix tfcompile OSS build. (Sanjoy Das, 2018-02-01)
  * The @org_tensorflow package designation is unnecessary and breaks the build when building without a sandbox.
  * The generated tests must use tf_cc_test, not cc_test; see the note in tensorflow/core/BUILD.
  Partially addresses #15338.
  PiperOrigin-RevId: 184095571
* Internal change. (Yu-Cheng Ling, 2018-02-01)
  PiperOrigin-RevId: 184088913
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184086955
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-02-01)
  PiperOrigin-RevId: 184085402
* Add a new Dataset: PrependFromQueueAndPaddedBatchDataset. (Eugene Brevdo, 2018-02-01)
  PiperOrigin-RevId: 184078894