Commit log (commit message, author, date):
- **Fix number of outputs when importing tensorflow GraphDef.** (A. Unique TensorFlower, 2018-10-10)
  Sometimes the actual number of outputs is dictated by one of the attributes of the NodeDef.
  PiperOrigin-RevId: 216530696
- **Use overloaded operators for the assert statement.** (Dan Moldovan, 2018-10-10)
  This should remove the reliance on importing tensorflow in the generated code.
  PiperOrigin-RevId: 216528047
- **Support kDomain instructions in the HloMatcher framework** (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216525613
- **Support removing side-effecting instructions with RemoveInstructionAndUnusedOperands** (A. Unique TensorFlower, 2018-10-10)
  If the caller explicitly asks to remove a side-effecting instruction (e.g. all-reduce), then we should respect the request instead of silently ignoring it.
  PiperOrigin-RevId: 216505133
- **Automated rollback of commit 950cf87104bfee28e2165fe368f66337b8a1336d** (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216500702
- **Change user_set to an absl::flat_hash_set in HloInstruction.** (A. Unique TensorFlower, 2018-10-10)
  absl::flat_hash_set has better performance than std::unordered_set, which can improve overall compile time.
  PiperOrigin-RevId: 216498767
- **Emit xla::Or in TensorArrayScatterV3 for PRED types instead of xla::Add** (A. Unique TensorFlower, 2018-10-10)
  Previously we emitted xla::Add, which isn't supported by some XLA backends for PRED types.
  PiperOrigin-RevId: 216497939
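The distinction matters because addition is not closed over the boolean (PRED) domain, while logical OR is. A minimal Python sketch of the difference, with plain bools standing in for PRED values (this is an illustration only, not XLA code):

```python
# Accumulating boolean (PRED) values with OR stays in {False, True};
# accumulating with ADD escapes the boolean domain as soon as two
# True values meet.
preds = [True, False, True]

or_result = False
add_result = 0
for p in preds:
    or_result = or_result or p   # always a valid PRED value
    add_result = add_result + p  # True + True == 2, no longer a PRED

print(or_result)   # True
print(add_result)  # 2
```

This is why OR is the natural accumulation operator for PRED scatters: the result type never leaves the boolean domain.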
- **compat: Update forward compatibility horizon to 2018-10-10** (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216495091
- **Delete dead code in batch_scatter_ops_test.** (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216483746
- **Run while loop test that was not being run before.** (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216483744
- **Remove python shebang line from gen_git_source.** (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216479972
- **Fix lstm_test & layer_norm_lstm_test w/ Clang 8.0.0** (Yu-Cheng Ling, 2018-10-09)
  PiperOrigin-RevId: 216475683
- **Add a more verbose error message.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216471178
- **Use OpHints to support TfLite UnidirectionalSequenceLstm and add an e2e test.** (A. Unique TensorFlower, 2018-10-09)
  Support peephole and num_proj as well.
  PiperOrigin-RevId: 216467578
- **[XLA] Add documentation and HLO-level support for multi-value sort.** (Michael Kuperstein, 2018-10-09)
  No support in any of the backends, and not yet exposed through XlaBuilder.
  PiperOrigin-RevId: 216465753
- **Automated rollback of commit 9bd459e4ceba14f9bb1af98d52a109325de952e8** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216463491
- **Automated rollback of commit d78c747e9177fc93d43a580acef2b62eb1420859** (Smit Hinsu, 2018-10-09)
  PiperOrigin-RevId: 216463443
- **Update model in keras dist strat learning phase test to return consistent values.** (Pavithra Vijay, 2018-10-09)
  PiperOrigin-RevId: 216461637
- **Enable support for lambda functions in static analyses.** (Dan Moldovan, 2018-10-09)
  The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not tracked. Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
  PiperOrigin-RevId: 216456682
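The masking rule described above can be illustrated with plain Python (a hypothetical sketch, not AutoGraph's actual analysis code): a name bound by a lambda's parameter list shadows the outer name, so an analysis that kept tracking it as the outer variable would report spurious reads and writes.

```python
# The lambda's parameter `x` masks the outer `x`: evaluating the lambda
# neither reads nor modifies the outer binding, which is why an activity
# analysis must stop tracking names bound by lambda arguments.
x = 10
f = lambda x: x + 1

result = f(1)
print(result)  # 2
print(x)       # 10: the outer x is untouched
```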
- **Fix lite/kernels:add_test for Clang 8.0.0** (Yu-Cheng Ling, 2018-10-09)
  PiperOrigin-RevId: 216455772
- **Go: Update generated wrapper functions for TensorFlow ops.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216455250
- **Add support for modeling fast memory close to the processor/GPU** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216453979
- **Update ops-related pbtxt files.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216452496
- **Move tflite_convert g3docs, so they will be pulled into the site.** (Mark Daoust, 2018-10-09)
  PiperOrigin-RevId: 216452447
- **[XLA:GPU] Use CudnnConvKind in more places.** (Justin Lebar, 2018-10-09)
  No functional change.
  PiperOrigin-RevId: 216451881
- **Adds an Objective-C API to TensorFlow Lite experimental.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216451263
- **[XLA] Cleanup: Make AllocationTracker::Resolve const.** (A. Unique TensorFlower, 2018-10-09)
  So that when resolving some global data, we don't have to worry about whether "Resolve" is going to mutate the real data.
  PiperOrigin-RevId: 216448145
- **[XLA:GPU] Elide the SequentialThunk when emitting scatter with no copy** (Benjamin Kramer, 2018-10-09)
  We have a 1-element thunk sequence if we're not copying. That's still two thunks, and HLO profiling gets confused if it sees two thunks for the same instruction and one of them claims to be the whole instruction.
  PiperOrigin-RevId: 216448063
- **[XLA] Added xla::CreateModuleFromProto(...), combining loading a module from proto and verifying it with HloVerifier.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216447947
- **Internal change.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216447412
- **Remove the deprecated created and IS_LOCAL abstractions from activity analysis.** (Dan Moldovan, 2018-10-09)
  PiperOrigin-RevId: 216446750
- **Make lite_test.py run in open source.** (Nupur Garg, 2018-10-09)
  PiperOrigin-RevId: 216445964
- **Add 'remove' operation to MutableHashTable and MutableDenseHashTable.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216443201
- **[TF:XLA] Bump open source abseil revision to 445998d7ac4e5d3c50411d377e3b50e960d2d6c2** (Sanjoy Das, 2018-10-09)
  PiperOrigin-RevId: 216442983
- **Internal change** (Jared Duke, 2018-10-09)
  PiperOrigin-RevId: 216442906
- **Part 2/3 of the update of tf.keras to the Keras 2.2.4 API.** (Francois Chollet, 2018-10-09)
  PiperOrigin-RevId: 216442569
- **[XLA] Allow scatter to share the operand buffer with the output** (Benjamin Kramer, 2018-10-09)
  This avoids a copy.
  PiperOrigin-RevId: 216437329
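The buffer-sharing idea can be sketched with plain Python lists (an analogy only; XLA's scatter and buffer assignment are C++ internals): when the output is allowed to alias the operand, the updates are written directly into the operand buffer instead of into a fresh copy.

```python
# In-place scatter: the operand buffer doubles as the output buffer,
# so no copy of the array is made before applying the updates.
operand = [0.0] * 5
indices = [1, 3]
updates = [7.0, 9.0]

for i, u in zip(indices, updates):
    operand[i] = u  # write straight into the operand's storage

print(operand)  # [0.0, 7.0, 0.0, 9.0, 0.0]
```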
- **Raises an appropriate error if `add_weight` is called on a Keras network.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216432358
- **Make defun work under distributed strategies.** (Igor Ganichev, 2018-10-09)
  The core of the change is to have the gradient tape capture distributed variables instead of plain ResourceVariables. In other words, we move the distribution awareness from defun down to the tape and rely on distributed-variable magic to provide us with the right variable at runtime. In tower context, we always watch the container (e.g. MirroredVariable). In cross-tower context, we always watch all the components.
  PiperOrigin-RevId: 216430530
- **In TPUMirroredVariable, when setting the _initializer_op and _initial_value attributes, set the attributes of all the contained variables.** (Ruoxin Sang, 2018-10-09)
  This fixes a bug where tf.train.init_from_checkpoint didn't overwrite the initialization values correctly for TPUMirroredVariable.
  PiperOrigin-RevId: 216429476
- **Avoid creating sparse tensor objects before the library is initialized.** (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216425002
- **[tf.data vectorization] Add vectorizer for `Add` op** (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216424512
- **Export feature importance for oblivious tree nodes.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216422334
- **[XLA:GPU] Pattern match atomic "apply" into an atomic store** (Benjamin Kramer, 2018-10-09)
  Otherwise we'd emit a CAS loop.
  PiperOrigin-RevId: 216421161
- **Add support for time-major input in the bidirectional RNN Op.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216419983
- **[tf.data] Lift parameterized test parameters into lambdas if they create TF ops.** (Derek Murray, 2018-10-09)
  The existing code triggers parts of the TensorFlow runtime that may not have been fully initialized at the time the parameters are evaluated. Lifting into a lambda and invoking the lambda inside the test method will achieve the proper order.
  PiperOrigin-RevId: 216419757
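The lifting pattern looks roughly like this (a generic sketch with placeholder names such as `expensive_resource`, not the actual tf.data test code): wrapping the parameter value in a zero-argument lambda defers its evaluation from parameter-collection time, which happens at module import, to test-run time, when the runtime is fully initialized.

```python
import unittest

def expensive_resource(n):
    # Stand-in for code that creates TF ops and needs an initialized runtime.
    return list(range(n))

# Before the fix, expensive_resource() would run while the parameter list is
# built, possibly too early. The lambda defers the call until the test body.
PARAMS = [
    ("small", lambda: expensive_resource(2)),
    ("large", lambda: expensive_resource(5)),
]

class LiftedParamsTest(unittest.TestCase):
    def test_sizes(self):
        for name, make_resource in PARAMS:
            resource = make_resource()  # evaluated inside the test method
            self.assertGreater(len(resource), 0)
```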
- **Internal Change** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216419037
- **Improve the control flow conversion for loops by using dataflow analysis to construct the state.** (Dan Moldovan, 2018-10-09)
  This is part of a larger refactoring which removes the reliance on the deprecated Scope.created field.
  PiperOrigin-RevId: 216418556
- **Do not create a graph as a global variable in tests.** (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216418324
- **Go: Update generated wrapper functions for TensorFlow ops.** (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216416117