path: root/tensorflow
Commit message (Author, Date)
* Fix lstm_test & layer_norm_lstm_test w/ Clang 8.0.0 (Yu-Cheng Ling, 2018-10-09)
  PiperOrigin-RevId: 216475683
* Add a more verbose error message. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216471178
* Use OpHints to support TfLite UnidirectionalSequenceLstm and add an e2e test. (A. Unique TensorFlower, 2018-10-09)
  Supports peephole and num_proj as well.
  PiperOrigin-RevId: 216467578
* [XLA] Add documentation and HLO-level support for multi-value sort. (Michael Kuperstein, 2018-10-09)
  No support in any of the backends, and not yet exposed through XlaBuilder.
  PiperOrigin-RevId: 216465753
* Automated rollback of commit 9bd459e4ceba14f9bb1af98d52a109325de952e8 (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216463491
* Automated rollback of commit d78c747e9177fc93d43a580acef2b62eb1420859 (Smit Hinsu, 2018-10-09)
  PiperOrigin-RevId: 216463443
* Update model in keras dist strat learning phase test to return consistent values. (Pavithra Vijay, 2018-10-09)
  PiperOrigin-RevId: 216461637
* Enable support for lambda functions in static analyses. (Dan Moldovan, 2018-10-09)
  The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not being tracked. Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
  PiperOrigin-RevId: 216456682
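  A minimal pure-Python sketch (variable names are illustrative, not from the commit) of the distinction the activity analysis draws: a variable masked by the lambda's argument list is local to the lambda, while a free variable still refers to the enclosing scope and keeps being tracked.

      x = 1
      y = 2

      # 'x' is masked by the lambda's argument, so reads of 'x' inside the
      # lambda do not refer to the outer 'x'; 'y' is free and does.
      f = lambda x: x + y

      print(f(10))  # 12: lambda-local 'x' (10) plus the outer 'y' (2)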
* Fix lite/kernels:add_test for Clang 8.0.0 (Yu-Cheng Ling, 2018-10-09)
  PiperOrigin-RevId: 216455772
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216455250
* Add support for modeling fast memory close to the processor/gpu. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216453979
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216452496
* Move tflite_convert g3docs, so they will be pulled into the site. (Mark Daoust, 2018-10-09)
  PiperOrigin-RevId: 216452447
* [XLA:GPU] Use CudnnConvKind in more places. (Justin Lebar, 2018-10-09)
  No functional change.
  PiperOrigin-RevId: 216451881
* Adds an Objective-C API to TensorFlow Lite experimental. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216451263
* [XLA] Cleanup: Make AllocationTracker::Resolve const. (A. Unique TensorFlower, 2018-10-09)
  So that when resolving some global data, we don't have to worry whether "Resolve" is going to mutate the real data.
  PiperOrigin-RevId: 216448145
* [XLA:GPU] Elide the SequentialThunk when emitting scatter with no copy. (Benjamin Kramer, 2018-10-09)
  If we're not copying, we have a 1-element thunk sequence. That's still two thunks (the SequentialThunk plus its element), and HLO profiling gets confused if it sees two thunks for the same instruction and one of them claims to be the whole instruction.
  PiperOrigin-RevId: 216448063
* [XLA] Added xla::CreateModuleFromProto(...), combining loading a module from a proto and verifying it with HloVerifier. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216447947
* Internal change. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216447412
* Remove the deprecated created and IS_LOCAL abstractions from activity analysis. (Dan Moldovan, 2018-10-09)
  PiperOrigin-RevId: 216446750
* Make lite_test.py run in open source. (Nupur Garg, 2018-10-09)
  PiperOrigin-RevId: 216445964
* Add 'remove' operation to MutableHashTable and MutableDenseHashTable. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216443201
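  A hedged sketch of the table workflow this enables, written against the current public tf.lookup.experimental.MutableHashTable API for illustration (at the time of this commit the table lived under a contrib path, so the exact symbol may differ):

      import tensorflow as tf

      # Assumption: TF 2.x public API names.
      table = tf.lookup.experimental.MutableHashTable(
          key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)

      table.insert(tf.constant(["a", "b"]), tf.constant([1, 2], dtype=tf.int64))
      table.remove(tf.constant(["a"]))  # the new 'remove' operation
      print(table.lookup(tf.constant(["a", "b"])).numpy())  # [-1  2]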
* [TF:XLA] Bump open source abseil revision to 445998d7ac4e5d3c50411d377e3b50e960d2d6c2 (Sanjoy Das, 2018-10-09)
  PiperOrigin-RevId: 216442983
* Internal change (Jared Duke, 2018-10-09)
  PiperOrigin-RevId: 216442906
* Part 2/3 of the update of tf.keras to the Keras 2.2.4 API. (Francois Chollet, 2018-10-09)
  PiperOrigin-RevId: 216442569
* [XLA] Allow scatter to share the operand buffer with the output. (Benjamin Kramer, 2018-10-09)
  This avoids a copy.
  PiperOrigin-RevId: 216437329
* Raises an appropriate error if `add_weight` is called on a Keras network. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216432358
* Make defun work under distributed strategies. (Igor Ganichev, 2018-10-09)
  The core of the change is to have the gradient tape capture distributed variables instead of plain ResourceVariables. In other words, we move the distribution awareness from defun down to the tape and rely on distributed variable magic to provide us with the right variable at runtime. In tower context, we always watch the container (e.g. MirroredVariable). In cross-tower context, we always watch all the components.
  PiperOrigin-RevId: 216430530
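  A hedged sketch of the end-to-end pattern this enables, written with current TF 2.x-style names (tf.distribute.MirroredStrategy, tf.function, strategy.run) rather than the contrib-era defun symbols of the time; the point is that the tape watches the mirrored container and the right per-replica component is resolved at runtime.

      import tensorflow as tf

      strategy = tf.distribute.MirroredStrategy()
      with strategy.scope():
          v = tf.Variable(2.0)  # becomes a distributed (mirrored) variable

      @tf.function  # the contrib-era equivalent was `defun`
      def train_step(x):
          with tf.GradientTape() as tape:
              loss = v * x
          # The tape watched the mirrored variable itself, so inside the
          # replica context this resolves to the per-replica component.
          return tape.gradient(loss, v)

      # strategy.run is experimental_run_v2 in older 2.x releases.
      per_replica_grads = strategy.run(train_step, args=(tf.constant(3.0),))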
* In TPUMirroredVariable, when setting the _initializer_op and _initial_value attributes, set the attributes of all the contained variables. (Ruoxin Sang, 2018-10-09)
  This fixes a bug where tf.train.init_from_checkpoint doesn't overwrite the initialization values correctly for TPUMirroredVariable.
  PiperOrigin-RevId: 216429476
* Avoid creating sparse tensor objects before library is initialized. (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216425002
* [tf.data vectorization] Add vectorizer for `Add` op. (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216424512
* Export feature importance for oblivious tree nodes. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216422334
* [XLA:GPU] Pattern match atomic "apply" into an atomic store. (Benjamin Kramer, 2018-10-09)
  Otherwise we'd emit a CAS loop.
  PiperOrigin-RevId: 216421161
* Add support for time-major input in the bidirectional RNN Op. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216419983
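  For reference, a small sketch of what "time-major" means for the input layout (shapes are illustrative): batch-major input is [batch, max_time, depth], time-major input is [max_time, batch, depth], and converting between them is a single transpose.

      import numpy as np

      batch, max_time, depth = 4, 10, 8
      batch_major = np.zeros((batch, max_time, depth), dtype=np.float32)

      # Time-major puts the time dimension first, so an RNN can step over
      # contiguous per-timestep slices.
      time_major = np.transpose(batch_major, (1, 0, 2))
      print(time_major.shape)  # (10, 4, 8)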
* [tf.data] Lift parameterized test parameters into lambdas if they create TF ops. (Derek Murray, 2018-10-09)
  The existing code triggers parts of the TensorFlow runtime that may not have been fully initialized at the time the parameters are evaluated. Lifting into a lambda and invoking the lambda inside the test method will achieve the proper order.
  PiperOrigin-RevId: 216419757
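  A minimal sketch of the pattern being described, using absl.testing.parameterized (test and parameter names are illustrative): because the parameter is a lambda, the TF op is only created when the test method runs, after the runtime is fully initialized.

      from absl.testing import parameterized
      import tensorflow as tf


      class ExampleDatasetTest(tf.test.TestCase, parameterized.TestCase):

        @parameterized.parameters(
            # Lifted into lambdas: no TF op is created at decoration time.
            (lambda: tf.data.Dataset.range(5),),
            (lambda: tf.data.Dataset.from_tensor_slices([1, 2, 3]),),
        )
        def testMakesDataset(self, make_dataset):
          dataset = make_dataset()  # op creation happens here, inside the test
          self.assertIsInstance(dataset, tf.data.Dataset)


      if __name__ == "__main__":
        tf.test.main()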
* Internal Change (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216419037
* Improve the control flow conversion for loops by using dataflow analysis to construct the state. (Dan Moldovan, 2018-10-09)
  This is part of a larger refactoring which removes the reliance on the deprecated Scope.created field.
  PiperOrigin-RevId: 216418556
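  A rough illustration of what "constructing the state" of a loop means (a generic functional-form sketch, not the converter's actual output): dataflow analysis identifies the variables modified in the loop and live afterwards, and those become the explicit loop variables of the functional loop.

      import tensorflow as tf

      n = tf.constant(5)

      # Python form:
      #   i, total = 0, 0
      #   while i < n:
      #       total += i
      #       i += 1
      # The loop state is (i, total): both are written in the body and used
      # after the loop, so they become explicit loop variables.
      i, total = tf.while_loop(
          cond=lambda i, total: i < n,
          body=lambda i, total: (i + 1, total + i),
          loop_vars=(tf.constant(0), tf.constant(0)))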
* Do not create a graph as a global variable in tests. (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216418324
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216416117
* [XLA:GPU] Add an implementation of scatter for GPU. (Benjamin Kramer, 2018-10-09)
  This simply emits a kernel that runs on every element of the updates tensor, figures out the right indices to perform the update, and applies it with an atomic operation. Currently we emit a CAS for plain (i.e. non-add) updates, which is inefficient. Also, TuplePointsToAnalysis doesn't know that it should alias the operand and output buffers of a scatter, which would avoid a copy.
  PiperOrigin-RevId: 216412467
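  A NumPy reference for the semantics the kernel implements (not the GPU code itself): one logical thread per updates element, compute the destination index, and combine in place; on GPU the combine must be atomic because several updates may target the same location.

      import numpy as np

      def scatter_add_reference(operand, indices, updates):
          # Reference semantics for a 1-D scatter-add; the GPU kernel does this
          # with one thread per update and an atomic add (or a CAS loop for
          # other combiners).
          out = operand.copy()
          for i, idx in enumerate(indices):
              out[idx] += updates[i]
          return out

      print(scatter_add_reference(np.zeros(4), [1, 1, 3], np.array([2.0, 3.0, 5.0])))
      # [0. 5. 0. 5.]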
* Small cleanup in function_test. (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216412380
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216410913
* Add RaggedTensors to tf.core. Moving the RaggedGather op kernel. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216400726
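  For context, a small plain-Python sketch of the ragged-gather semantics the moved kernel implements (the public symbols were still settling at the time of this commit; in today's API this is roughly tf.gather on a tf.ragged.constant):

      # A ragged "tensor" is a list of rows with varying lengths; RaggedGather
      # selects whole rows by index, preserving each row's length.
      rows = [[1, 2], [], [3, 4, 5]]
      indices = [2, 0, 0]
      gathered = [rows[i] for i in indices]
      print(gathered)  # [[3, 4, 5], [1, 2], [1, 2]]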
* Silence tf.distributions deprecation messages caused by internal global function calls. (Pavel Sountsov, 2018-10-09)
  E.g. register_kl calls would trigger such warnings. This spam was exacerbated by the fact that it happens before logging is initialized, so it is dumped prominently to STDERR. Worse yet, it also happened no matter whether the user imported any symbols from tf.distributions or not, as the relevant code is executed when you import TensorFlow.
  PiperOrigin-RevId: 216396036
* [tf.data] NUMA-aware MapAndBatch dataset. (Brennan Saeta, 2018-10-09)
  PiperOrigin-RevId: 216395709
* Return ::tensorflow::Status in Toco Graph Transformations. (Yu-Cheng Ling, 2018-10-09)
  PiperOrigin-RevId: 216392908
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216392772
* Improves tf.function prototype. (Alexandre Passos, 2018-10-09)
  Specifically:
  - renames from def_function
  - returns an object with well-defined methods
  - doesn't force-retrace twice
  - uses the Python descriptor API (https://docs.python.org/3/howto/descriptor.html) to remove the need for a tf.method
  PiperOrigin-RevId: 216388957
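  A minimal pure-Python sketch of the descriptor trick mentioned in the last bullet (class and method names are illustrative, not the actual implementation): defining __get__ on the wrapper makes a decorated function bind to instances like an ordinary method, so no separate tf.method decorator is needed.

      import functools

      class Function:
          # Illustrative callable wrapper, standing in for the object returned
          # by the tf.function decorator.

          def __init__(self, python_function):
              self._python_function = python_function

          def __call__(self, *args, **kwargs):
              # The real implementation traces and caches a concrete function here.
              return self._python_function(*args, **kwargs)

          def __get__(self, instance, owner):
              # Descriptor protocol: when accessed through an instance, return a
              # bound version, just like a plain Python method would.
              if instance is None:
                  return self
              return functools.partial(self.__call__, instance)


      class Model:
          @Function
          def double(self, x):
              return 2 * x

      print(Model().double(3))  # 6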
* Update TFLite Converter documentation. (Nupur Garg, 2018-10-09)
  PiperOrigin-RevId: 216386450
* Internal change. (Nupur Garg, 2018-10-09)
  PiperOrigin-RevId: 216385202