* Correct a couple of format strings [HEAD, master] (Benjamin Barenblat, 2018-10-10)
    Change a couple of fscanf-style format strings to use the format macro constants defined in cinttypes. This quashes -Wformat.
    PiperOrigin-RevId: 216545604
* Remove debug statements (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216536298
* [Grappler] Add RemoveStackStridedSliceSameAxis optimizer. (Eugene Brevdo, 2018-10-10)
    Replace operations of the form
        x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i,...]
    with
        a_i
    when the strided slice index `i` is applied in the k'th axis.
    Similarly, replace operations of the form
        x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i:i+1,...]
    with
        expand_dims(a_i, axis=k)
    PiperOrigin-RevId: 216535346
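    For context, a minimal sketch (not part of the commit; assumes the TF 1.x Python API) of the graph pattern this rewrite targets:

        import tensorflow as tf

        a0 = tf.constant([1.0, 2.0])
        a1 = tf.constant([3.0, 4.0])
        stacked = tf.stack([a0, a1], axis=0)  # Pack op, shape [2, 2]
        sliced = stacked[1]                   # StridedSlice selecting index 1 on axis 0
        # The optimizer can replace `sliced` with `a1` directly, removing the
        # Pack/StridedSlice pair from the graph.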
* cond_v2: raise an error if pred is a Python bool. (Skye Wanderman-Milne, 2018-10-10)
    This is to match the existing behavior of tf.cond.
    PiperOrigin-RevId: 216534084
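    A hedged illustration (not from the commit) of the tf.cond behavior being matched:

        import tensorflow as tf

        x = tf.constant(1.0)
        pred = tf.constant(True)
        y = tf.cond(pred, lambda: x + 1.0, lambda: x - 1.0)  # tensor predicate: supported
        # Passing a plain Python bool raises a TypeError in tf.cond, and with this
        # change cond_v2 rejects it the same way:
        # tf.cond(True, lambda: x + 1.0, lambda: x - 1.0)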
* Use lambdas when converting ifexps, since they are now supported. (Dan Moldovan, 2018-10-10)
    PiperOrigin-RevId: 216533613
* Allow the executor type for a function to be specified as an attr on a function. (Derek Murray, 2018-10-10)
    This change complements the existing `InstantiateOptions::executor_type` option, which takes precedence over the attr if both are provided. It enables the choice of executor to be separated from both the calling op implementation and the function definition, which simplifies the use of custom executors in operations that take a function as an attr (e.g. `tf.data` and the functional control-flow ops).
    PiperOrigin-RevId: 216532778
* [tf.data] `Dataset.make_one_shot_iterator()` inherits the random seed from the calling graph. (Derek Murray, 2018-10-10)
    This change makes a subtle difference to the behavior of existing programs that create multiple iterators. Previously, one-shot iterators would not inherit the graph seed, and so their values would be non-deterministic (unless explicit seeds were set). After this change, an iterator will inherit its seed from the outer graph. Multiple one-shot iterators created from the same dataset will inherit different seeds, matching the semantics of creating multiple ops with the same graph seed.
    PiperOrigin-RevId: 216532256
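    A minimal sketch (not from the commit; assumes the TF 1.x graph-mode API) of the behavior described above:

        import tensorflow as tf

        tf.set_random_seed(42)  # graph-level seed
        dataset = tf.data.Dataset.range(10).shuffle(buffer_size=10)
        # Each one-shot iterator now inherits its seed from the graph seed, so the
        # two iterators produce different but graph-seed-derived shuffle orders.
        it1 = dataset.make_one_shot_iterator()
        it2 = dataset.make_one_shot_iterator()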
* Fix number of outputs when importing tensorflow GraphDef. (A. Unique TensorFlower, 2018-10-10)
    Sometimes the actual number of outputs is dictated by one of the attributes of the NodeDef.
    PiperOrigin-RevId: 216530696
* Use overloaded operators for the assert statement. This should remove the reliance on importing tensorflow in the generated code. (Dan Moldovan, 2018-10-10)
    PiperOrigin-RevId: 216528047
* Support kDomain instructions in the HloMatcher framework (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216525613
* Support removing side-effecting instructions with RemoveInstructionAndUnusedOperands (A. Unique TensorFlower, 2018-10-10)
    If the caller explicitly asks to remove a side-effecting instruction (e.g. all-reduce), then we should respect it instead of silently ignoring the request.
    PiperOrigin-RevId: 216505133
* Automated rollback of commit 950cf87104bfee28e2165fe368f66337b8a1336d (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216500702
* Change user_set to an absl::flat_hash_set in HloInstruction. (A. Unique TensorFlower, 2018-10-10)
    absl::flat_hash_set has better performance than std::unordered_set, which can improve overall compile time.
    PiperOrigin-RevId: 216498767
* Emit xla::Or in TensorArrayScatterV3 for PRED types instead of xla::Add (A. Unique TensorFlower, 2018-10-10)
    Previously we emitted xla::Add, which isn't supported by some XLA backends for PRED types.
    PiperOrigin-RevId: 216497939
* compat: Update forward compatibility horizon to 2018-10-10 (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216495091
* Delete dead code in batch_scatter_ops_test. (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216483746
* Run while loop test that was not being run before. (A. Unique TensorFlower, 2018-10-10)
    PiperOrigin-RevId: 216483744
* Remove python shebang line from gen_git_source. (Gunhan Gulsoy, 2018-10-09)
    PiperOrigin-RevId: 216479972
* Fix lstm_test & layer_norm_lstm_test with Clang 8.0.0 (Yu-Cheng Ling, 2018-10-09)
    PiperOrigin-RevId: 216475683
* Add a more verbose error message. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216471178
* Use OpHints to support TfLite UnidirectionalSequenceLstm and add an e2e test. (A. Unique TensorFlower, 2018-10-09)
    Support peephole and num_proj as well.
    PiperOrigin-RevId: 216467578
* [XLA] Add documentation and HLO-level support for multi-value sort. (Michael Kuperstein, 2018-10-09)
    No support in any of the backends, and not yet exposed through XlaBuilder.
    PiperOrigin-RevId: 216465753
* Automated rollback of commit 9bd459e4ceba14f9bb1af98d52a109325de952e8 (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216463491
* Automated rollback of commit d78c747e9177fc93d43a580acef2b62eb1420859 (Smit Hinsu, 2018-10-09)
    PiperOrigin-RevId: 216463443
* Update model in keras dist strat learning phase test to return consistent values. (Pavithra Vijay, 2018-10-09)
    PiperOrigin-RevId: 216461637
* Enable support for lambda functions in static analyses. (Dan Moldovan, 2018-10-09)
    The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not being tracked. Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
    PiperOrigin-RevId: 216456682
* Fix lite/kernels:add_test for Clang 8.0.0 (Yu-Cheng Ling, 2018-10-09)
    PiperOrigin-RevId: 216455772
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216455250
* Add support for modeling fast memory close to the processor/GPU (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216453979
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216452496
* Move tflite_convert g3docs, so they will be pulled into the site. (Mark Daoust, 2018-10-09)
    PiperOrigin-RevId: 216452447
* [XLA:GPU] Use CudnnConvKind in more places. (Justin Lebar, 2018-10-09)
    No functional change.
    PiperOrigin-RevId: 216451881
* Adds an Objective-C API to TensorFlow Lite experimental. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216451263
* [XLA] Cleanup: Make AllocationTracker::Resolve const. (A. Unique TensorFlower, 2018-10-09)
    So that when resolving some global data, we don't have to worry whether "Resolve" is going to mutate the real data.
    PiperOrigin-RevId: 216448145
* [XLA:GPU] Elide the SequentialThunk when emitting scatter with no copy (Benjamin Kramer, 2018-10-09)
    We have a 1-element thunk sequence if we're not copying. That's still two thunks, and HLO profiling gets confused if it sees two thunks for the same instruction and one of them claims to be the whole instruction.
    PiperOrigin-RevId: 216448063
* [XLA] Added xla::CreateModuleFromProto(...), combining loading a module from proto and verifying it with HloVerifier. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216447947
* Internal change. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216447412
* Remove the deprecated `created` and `IS_LOCAL` abstractions from activity analysis. (Dan Moldovan, 2018-10-09)
    PiperOrigin-RevId: 216446750
* Make lite_test.py run in open source. (Nupur Garg, 2018-10-09)
    PiperOrigin-RevId: 216445964
* Add 'remove' operation to MutableHashTable and MutableDenseHashTable. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216443201
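    A hypothetical usage sketch (not from the commit; the exact Python method name and signature for the new 'remove' operation are assumed here, not confirmed by this log):

        import tensorflow as tf

        table = tf.contrib.lookup.MutableHashTable(
            key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)
        keys = tf.constant(["a", "b"])
        insert_op = table.insert(keys, tf.constant([1, 2], dtype=tf.int64))
        remove_op = table.remove(keys)  # assumed surface for the new 'remove' op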
* [TF:XLA] Bump open source abseil revision to 445998d7ac4e5d3c50411d377e3b50e960d2d6c2 (Sanjoy Das, 2018-10-09)
    PiperOrigin-RevId: 216442983
* Internal change (Jared Duke, 2018-10-09)
    PiperOrigin-RevId: 216442906
* Part 2/3 of the update of tf.keras to the Keras 2.2.4 API. (Francois Chollet, 2018-10-09)
    PiperOrigin-RevId: 216442569
* [XLA] Allow scatter to share the operand buffer with the output (Benjamin Kramer, 2018-10-09)
    This avoids a copy.
    PiperOrigin-RevId: 216437329
* Raises an appropriate error if `add_weight` is called on a Keras network. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216432358
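    A hedged illustration (not from the commit) of the situation this guards against, assuming the tf.keras functional API:

        import tensorflow as tf

        inputs = tf.keras.layers.Input(shape=(4,))
        outputs = tf.keras.layers.Dense(2)(inputs)
        model = tf.keras.Model(inputs, outputs)  # a Keras network, not a Layer
        # `add_weight` belongs on Layer subclasses; calling it on a network now
        # raises an appropriate error instead of failing obscurely:
        # model.add_weight(name="w", shape=(2,))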
* Make defun work under distributed strategies. (Igor Ganichev, 2018-10-09)
    The core of the change is to have the gradient tape capture distributed variables instead of plain ResourceVariables. In other words, we move the distribution awareness from defun down to tape and rely on distributed variable magic to provide us with the right variable at runtime.
    In tower context, we always watch the container (e.g. MirroredVariable). In cross tower context, we always watch all the components.
    PiperOrigin-RevId: 216430530
* In TPUMirroredVariable, when setting _initializer_op and _initial_value attributes, set the attributes of all the contained variables. (Ruoxin Sang, 2018-10-09)
    This fixes a bug where tf.train.init_from_checkpoint doesn't overwrite the initialization values correctly for TPUMirroredVariable.
    PiperOrigin-RevId: 216429476
* Avoid creating sparse tensor objects before library is initialized. (Gunhan Gulsoy, 2018-10-09)
    PiperOrigin-RevId: 216425002
* [tf.data vectorization] Add vectorizer for `Add` op (Rachel Lim, 2018-10-09)
    PiperOrigin-RevId: 216424512
* Export feature importance for oblivious tree nodes. (A. Unique TensorFlower, 2018-10-09)
    PiperOrigin-RevId: 216422334