Commit message | Author | Date
* Internal Change. | Michael Case | 2018-06-29
    PiperOrigin-RevId: 202706517
* Allow gradients() calls from inside a function wrt captured tensors. | Skye Wanderman-Milne | 2018-06-29
    The overall approach is to teach the gradients code how to traverse the implicit edges between captured external tensors and ops inside the function body.
    PiperOrigin-RevId: 202705929
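    A minimal Python sketch of what this enables (the defun decorator and values are mine, for illustration only): a gradient taken inside the function body with respect to a tensor captured from the enclosing graph.

        import tensorflow as tf

        x = tf.constant(3.0)  # defined outside, captured by the function below

        @tf.contrib.eager.defun
        def f():
            y = x * x
            # Gradient of an op inside the function body w.r.t. the captured tensor x.
            return tf.gradients(y, [x])[0]

        with tf.Session() as sess:
            print(sess.run(f()))  # expect 6.0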
* Optimized TransposeConv implementation. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202705179
* UnaryOpsComposition arithmetic optimizer. | Eugene Zhulenev | 2018-06-29
    PiperOrigin-RevId: 202703970
* Auto tracking for Python lists assigned to attributes of Model/Checkpointable | Allen Lavoie | 2018-06-29
    Conceptually lists just get replaced with a list-like wrapper. A shallow copy is maintained for error checking (since appends to it aren't monitored, we can't do restore-on-create for variables unless it's being modified through the wrapper).
    There are lots of other details. I gave up on generalizing our isinstance(obj, (list, tuple)) checks and just subclassed list. Behaving like a list means the type should be unhashable, which requires some workarounds when we're collecting objects (object-identity collections, and object-identity versions of weak reference containers).
    Adds a decorator for exempting whole methods from automatic dependency tracking so we don't need to track down every last self.inputs = [] statement to avoid polluting dependencies.
    There's a TODO for tuples and dictionaries.
    PiperOrigin-RevId: 202703271
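    An illustrative sketch (mine, not from the commit) of what the change means in user code: layers stored in a plain Python list attribute of a tf.keras.Model are wrapped so their variables become checkpoint dependencies.

        import tensorflow as tf

        class MLP(tf.keras.Model):
            def __init__(self):
                super(MLP, self).__init__()
                # A plain Python list assigned to an attribute; it is replaced by a
                # list-like wrapper so the layers below are tracked for checkpointing.
                self.blocks = [tf.keras.layers.Dense(16, activation='relu'),
                               tf.keras.layers.Dense(1)]

            def call(self, inputs):
                for layer in self.blocks:
                    inputs = layer(inputs)
                return inputs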
* Initialize result_handle to nullptr so we don't try to unref when not required. | Akshay Modi | 2018-06-29
    PiperOrigin-RevId: 202701234
* Internal change | Michael Kuperstein | 2018-06-29
    PiperOrigin-RevId: 202698606
* Use the same convention for the scale parameter in hybrid ops as well. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202698287
* Simplify backward updates and add tests regarding gradients angle. | Xuechen Li | 2018-06-29
    PiperOrigin-RevId: 202696277
* Update Tensorboard callback to run histogram summaries within Keras test_function, and to write histogram outputs with a batch-level global step. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202696047
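    A small self-contained Keras sketch of the call site this change affects; histogram_freq > 0 is what requests the histogram summaries, and the model, shapes, and data below are arbitrary placeholders of mine.

        import numpy as np
        import tensorflow as tf

        model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,)),
                                     tf.keras.layers.Dense(1)])
        model.compile(optimizer='sgd', loss='mse')

        x = np.random.rand(32, 8).astype('float32')
        y = np.random.rand(32, 1).astype('float32')

        # histogram_freq=1 asks the callback for weight histograms; validation data
        # is required, and (after this change) the summaries run inside the Keras
        # test_function and are written against a batch-level global step.
        model.fit(x, y, epochs=2, validation_data=(x, y),
                  callbacks=[tf.keras.callbacks.TensorBoard(log_dir='/tmp/tb_logs',
                                                            histogram_freq=1)])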
* Adds profiling label for optimized depthwise 3x3 | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202695310
* [tf.data] Add examples of `map_func` signatures to the `Dataset.map()` documentation. Fixes #20265. | Derek Murray | 2018-06-29
    PiperOrigin-RevId: 202695249
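    The snippets below are my own minimal illustration of the kind of `map_func` signatures such documentation covers (they are not the examples added in the commit); the element structure of the dataset determines the arguments `map_func` receives.

        import tensorflow as tf

        # Single-component elements: map_func takes one argument.
        ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
        ds = ds.map(lambda x: x + 1)

        # Tuple elements: the components are unpacked into separate arguments.
        pairs = tf.data.Dataset.from_tensor_slices(([1, 2], [10, 20]))
        sums = pairs.map(lambda a, b: a + b)

        # Dict elements: map_func receives a single dict argument.
        records = tf.data.Dataset.from_tensor_slices({'feature': [1.0, 2.0]})
        scaled = records.map(lambda d: {'feature': d['feature'] * 2.0})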
* Allow transposition of the weights in fully connected ops. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202693036
* [tf.data] In `Dataset.padded_batch()` test, handle randomly created empty batches correctly. | Derek Murray | 2018-06-29
    PiperOrigin-RevId: 202688283
* Add a method that calls the python function backing a _PolymorphicFunction. | Akshay Agrawal | 2018-06-29
    This is at the least useful for testing behavioral differences between a wrapped Python function and the corresponding graph functions. Prior to this change, decorating a Python function with `@function.defun` would render the Python function inaccessible.
    PiperOrigin-RevId: 202685407
* Move `DeserializeSparseOp<string>` into its own file, mirroring the Variant version. | Derek Murray | 2018-06-29
    PiperOrigin-RevId: 202683951
* More support for fused quantized LSTM in TFLite interpreter | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202682712
* [TF:XLA] A more generic TopK. | Michael Kuperstein | 2018-06-29
    Use Sort to implement R1 TopK for an arbitrary dimension size, and more types.
    PiperOrigin-RevId: 202681175
* TFE notebooks | Alexandre Passos | 2018-06-29
    PiperOrigin-RevId: 202681043
* Convert strings to binary strings to clear python3 failures. | Gunhan Gulsoy | 2018-06-29
    PiperOrigin-RevId: 202679902
* [contrib.bigtable] Clean up builds | Brennan Saeta | 2018-06-29
    PiperOrigin-RevId: 202673820
* [XLA] Add key-value version of Sort HLO. | Michael Kuperstein | 2018-06-29
    This is only currently implemented in the evaluator backend, and even that implementation is partial - the key and value type must match.
    PiperOrigin-RevId: 202673122
* Internal Change. | Michael Case | 2018-06-29
    PiperOrigin-RevId: 202671299
* Added std::move on the caller side of interpreter->SetVariables. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202670638
* [tf.data] Optimize the implementation of DeserializeSparseOp<Variant>. | Derek Murray | 2018-06-29
    The most expensive part of this kernel is the index construction. The optimized implementation builds the new index matrix at most once, rather than performing up to 3 passes (adding a leading dimension, `SparseTensor::Concat()` and `Reshape()`), and adds a specialized codepath for the common case of stacking together rank-1 SparseTensors.
    PiperOrigin-RevId: 202669432
* Add quantized ReluX kernels. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202668511
* Add more helpful error messages when restoring from checkpoint fails. | Karmel Allison | 2018-06-29
    PiperOrigin-RevId: 202668227
* [data-stats] Adds support for gathering statistics as metrics with `stats_aggregator`. | Shivani Agrawal | 2018-06-29
    Also collects metrics for `examples_count`, `features_count`, `feature_values_count`, `feature_lists_count` and `sequence_examples_count` when `feature_stats()` transformation is applied to the dataset.
    PiperOrigin-RevId: 202667632
* Add README.md for RevNet example. | Xuechen Li | 2018-06-29
    PiperOrigin-RevId: 202667025
* [tf.data / Bigtable] Initial tf.data Bigtable integration | Brennan Saeta | 2018-06-29
    This change allows TensorFlow users to stream data directly from Cloud Bigtable into the TensorFlow runtime using tf.data.
    PiperOrigin-RevId: 202664219
* removing unnecessary test fixtures | Jiri Simsa | 2018-06-29
    PiperOrigin-RevId: 202663814
* Broad refactor (part 4): Split the CFG construction part into a component separate from the dataflow analysis. | Dan Moldovan | 2018-06-29
    Extend it to cover return statements, nested functions and finally blocks. Note: AutoGraph doesn't support exceptions and will reject try/finally constructs, but they were easy enough to add. This is not used yet.
    PiperOrigin-RevId: 202661509
* [TF:XLA] Remove StatusOr<> return values from linear algebra functions and their dependencies. | Peter Hawkins | 2018-06-29
    Use the monadic structure of XlaOp instead. Remove XlaBuilder* arguments to many utility functions. Various small cleanups. Rename PrependMajorDims to ConcatVectors to better reflect what it does. No functional changes intended.
    PiperOrigin-RevId: 202655690
* tfe.defun shouldn't leak information across graphs (or across eager and graphs) | Alexandre Passos | 2018-06-29
    PiperOrigin-RevId: 202645535
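    A sketch (mine) of the situation this guards against: the same tfe.defun-decorated function called under two different graphs should be traced independently, with nothing cached from one graph leaking into the other.

        import tensorflow as tf

        tfe = tf.contrib.eager

        @tfe.defun
        def square(x):
            return x * x

        g1, g2 = tf.Graph(), tf.Graph()
        with g1.as_default():
            y1 = square(tf.constant(2.0))  # traced and instantiated in g1
        with g2.as_default():
            y2 = square(tf.constant(3.0))  # re-instantiated in g2; no state shared with g1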
* Handle nested tuples in GpuTransferManager. | Adrian Kuegel | 2018-06-29
    This became necessary when the TOKEN primitive type was added. In some models, an existing tuple T is extended to (T, token[]). Also add the TOKEN case to a switch statement where it was missing.
    PiperOrigin-RevId: 202643759
* [XLA] Add new helper xla::Iota(). | Peter Hawkins | 2018-06-29
    Start a new client library "numeric", after the C++ <numeric> header where std::iota lives.
    [TF:XLA] Replace uses of XlaHelpers::Iota() with xla::Iota(). Add a helper to get the XLA type of an operator input.
    PiperOrigin-RevId: 202636221
* [XLA] Make XlaBuilder op construction methods private. Change remaining users to use the free functions (or the operator overloads) in namespace xla:: instead. | Peter Hawkins | 2018-06-29
    PiperOrigin-RevId: 202631789
* [XLA:GPU] Stop creating nested fusion nodes in multi-output fusion. | Thomas Joerg | 2018-06-29
    PiperOrigin-RevId: 202624150
* Python: Add a compat.py with a constant to help with maintaining forward compatibility of Python API calls. | Asim Shankar | 2018-06-29
    PiperOrigin-RevId: 202618021
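    A hedged sketch of how such a horizon constant is typically consumed, assuming the forward_compatible() helper that lives alongside it in tensorflow.python.compat.compat (whether it appeared in this exact commit or a follow-up, I am not certain); the date and the two code paths are placeholders of mine.

        import tensorflow as tf
        from tensorflow.python.compat import compat

        x = tf.constant([1.0e-6, 2.0e-6])

        # Only emit the newer graph construction once every binary that might
        # execute this GraphDef is expected to understand it.
        if compat.forward_compatible(2018, 8, 1):
            y = tf.expm1(x)        # stand-in for "new behavior"
        else:
            y = tf.exp(x) - 1.0    # stand-in for "old, widely supported behavior"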
* Checking that the NCCL 2 license file is found, see GitHub issue 19679. | A. Unique TensorFlower | 2018-06-29
    PiperOrigin-RevId: 202613754
* Convert exp(x) - 1 into expm1(x). | A. Unique TensorFlower | 2018-06-28
    PiperOrigin-RevId: 202598404
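    For context, a tiny numerical example (mine) of why this rewrite helps, using the identity expm1(x) = exp(x) - 1; the optimizer's actual pattern matching is not shown here.

        import tensorflow as tf

        x = tf.constant([1e-10, 1e-8], dtype=tf.float64)
        naive = tf.exp(x) - 1.0  # suffers catastrophic cancellation for tiny x
        fused = tf.expm1(x)      # numerically stable equivalent the optimizer substitutes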
* [TF:XLA] Add partial implementation of tf.FIFOQueue for XLA devices (e.g., TPU). | Peter Hawkins | 2018-06-28
    The idea is to have a host-side queue of device tensors. Operators dequeue_many, enqueue_many, and dequeue_up_to are not yet implemented because they require splitting/concatenating tensors, which will require calling into a compiled XLA computation.
    Refactor queue operator implementations into libraries separate from the kernel registrations. Add support for ResourceOpKernels that are placed on non-CPU devices. Add support for allocating host-memory tensors during OpKernel construction.
    PiperOrigin-RevId: 202590292
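    A minimal Python sketch of the queue pattern this targets, sticking to the single-element enqueue/dequeue ops the commit says are implemented; placing the queue on an XLA device (e.g. a TPU) is the new part and is not shown, since device availability depends on the build.

        import tensorflow as tf

        # Plain FIFOQueue usage; with this change the same pattern can also be placed
        # on an XLA device, with enqueue_many/dequeue_many/dequeue_up_to still unsupported.
        q = tf.FIFOQueue(capacity=4, dtypes=[tf.float32])
        enqueue = q.enqueue([tf.constant(1.0)])
        dequeue = q.dequeue()

        with tf.Session() as sess:
            sess.run(enqueue)
            print(sess.run(dequeue))  # 1.0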
* Make the HLO evaluator correctly handle the lhs contracting dim for Dots | Sanjoy Das | 2018-06-28
    This CL fixes the bug by rewriting how we map the result index to the lhs/rhs index in the dot evaluator.
    PiperOrigin-RevId: 202588171
* Go: Update generated wrapper functions for TensorFlow ops. | A. Unique TensorFlower | 2018-06-28
    PiperOrigin-RevId: 202587352
* Merge changes from github. | Mingxing Tan | 2018-06-28
    PiperOrigin-RevId: 202585094
* Remove some obsolete statements from variable_ops.cc. | A. Unique TensorFlower | 2018-06-28
    PiperOrigin-RevId: 202581793
* Updates TOCO documentation. | Nupur Garg | 2018-06-28
    PiperOrigin-RevId: 202577530
* In constant_op.cc get the CPU RAM allocator through the OpKernelContext instead of calling cpu_allocator() directly. This will ensure that NUMA node specific constant ops get node-local memory. | A. Unique TensorFlower | 2018-06-28
    PiperOrigin-RevId: 202576292
* tf.keras sync 2.2.0 | Anjali Sridhar | 2018-06-28
    PiperOrigin-RevId: 202575679
* Registered kernel for tensor array gather for type `tf.variant` | Rachel Lim | 2018-06-28
    PiperOrigin-RevId: 202574948