Commit message (Author, Date)
* Automated g4 rollback of changelist 197741984 (Anna R, 2018-05-23)
  PiperOrigin-RevId: 197769770
* Introduce Encoder and Decoder classes so that platform/*coding* doesn't have to depend on framework/resource_handler and framework/variant. (A. Unique TensorFlower, 2018-05-23)
  PiperOrigin-RevId: 197768387
* [TF:XLA] Register a real implementation of ControlTrigger on XLA devices. (Peter Hawkins, 2018-05-23)
  PiperOrigin-RevId: 197759239
* Add a checkpointable list data structure (Allen Lavoie, 2018-05-23)
  Allows tracking of Layers and other checkpointable objects by number. Fixes #19250.
  PiperOrigin-RevId: 197749961
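The checkpointable-list idea above amounts to a container that names each element by its position, so a checkpoint can save and restore elements stably by index. A minimal sketch, using a hypothetical TrackableList class (not the actual TensorFlow API):

```python
class TrackableList:
    """Minimal sketch of a list whose elements are tracked by position,
    so a checkpoint can address each element under a stable name."""

    def __init__(self, items=None):
        self._items = list(items or [])

    def append(self, item):
        self._items.append(item)

    def __getitem__(self, i):
        return self._items[i]

    def __len__(self):
        return len(self._items)

    def named_dependencies(self):
        # Each element is tracked under its position: "0", "1", "2", ...
        return {str(i): item for i, item in enumerate(self._items)}
```

Because names are positional, restoring works as long as elements are re-added in the same order.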
* Update build visibility of //third_party/tensorflow/contrib/signal (Peter Hawkins, 2018-05-23)
  PiperOrigin-RevId: 197747430
* Combine op-profiles collected from individual TPUs. (A. Unique TensorFlower, 2018-05-23)
  PiperOrigin-RevId: 197743291
* Keep column order in make_csv_dataset. (Mark Daoust, 2018-05-23)
  PiperOrigin-RevId: 197742412
* Add a "--no_search_hints" flag to the api-docs generator. (Mark Daoust, 2018-05-23)
  PiperOrigin-RevId: 197742114
* PiperOrigin-RevId: 197741984 (A. Unique TensorFlower, 2018-05-23)
* Fix typo in error message. (Patrick Nguyen, 2018-05-23)
  PiperOrigin-RevId: 197741341
* Quick fix for Kokoro breakage. (Bjarke Hammersholt Roune, 2018-05-23)
  PiperOrigin-RevId: 197739982
* Add 'platform_' libraries in core/BUILD. (A. Unique TensorFlower, 2018-05-23)
  PiperOrigin-RevId: 197736600
* Support batch size > 1 in L2Normalization 8-bit quantized implementations. (A. Unique TensorFlower, 2018-05-23)
  PiperOrigin-RevId: 197736184
* Add a method XlaTensor::ReleaseShapedBuffer() to relinquish the shaped buffer owned by an XlaTensor. Add an equality operator for xla::ShapeIndexView. (Peter Hawkins, 2018-05-23)
  PiperOrigin-RevId: 197716313
* [TF:XLA:GPU] Relax test tolerance due to flakiness. (Peter Hawkins, 2018-05-23)
  PiperOrigin-RevId: 197708758
* Use the right attributes when creating placeholder nodes. (Benoit Steiner, 2018-05-22)
  PiperOrigin-RevId: 197673355
* Internal Change (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197661636
* Add interfaces to Compiler that are sufficient to implement a backend-independent offline auto-tuner for backend configurations of ops, as well as automatic testing across candidate configurations. (Bjarke Hammersholt Roune, 2018-05-22)
  Also add a simple Scanner class that is handy for parsing things.
  PiperOrigin-RevId: 197657512
* Fix an issue when mixing sparse and dense features in the same model. (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197650140
* Add convolution with NHWC layout to stream executor. (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197650067
* [TF:XLA] Bump open source llvm revision to r333002 (Sanjoy Das, 2018-05-22)
  PiperOrigin-RevId: 197644290
* Fix the LSTM test in TFLite. (Yu-Cheng Ling, 2018-05-22)
  PiperOrigin-RevId: 197643581
* Expose the new collective reduce and broadcast ops as non-public Python interface functions. (A. Unique TensorFlower, 2018-05-22)
  Note that they are not yet fully implemented; this change is to facilitate further development.
  PiperOrigin-RevId: 197639372
* Always append the trailing slash when looking up or inserting a directory path in the stat cache. (Ruoxin Sang, 2018-05-22)
  PiperOrigin-RevId: 197637482
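The trailing-slash rule above amounts to normalizing directory keys before every cache lookup or insert, so "dir" and "dir/" can never produce two distinct cache entries. A minimal sketch with hypothetical dir_cache_key and StatCache names (not the actual GCS filesystem code):

```python
def dir_cache_key(path):
    """Normalize a directory path for the stat cache: always keyed with
    a trailing slash (hypothetical helper)."""
    return path if path.endswith("/") else path + "/"

class StatCache:
    """Toy stat cache that applies the normalization on both insert and
    lookup, so the two can never disagree on the key."""

    def __init__(self):
        self._entries = {}

    def insert_dir(self, path, stat):
        self._entries[dir_cache_key(path)] = stat

    def lookup_dir(self, path):
        return self._entries.get(dir_cache_key(path))
```

An entry inserted as "gs://bucket/dir" is then found by a later lookup of "gs://bucket/dir/", and vice versa.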
* Remove reservoir sampling from SummaryDbWriter (Justine Tunney, 2018-05-22)
  PiperOrigin-RevId: 197634162
* Adds a kernel that checks whether a vector is zero or not. (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197633182
* [TF:XLA] Add clarification to the DFS scheduler. (Dimitris Vardoulakis, 2018-05-22)
  PiperOrigin-RevId: 197629355
* Extract out common code and make things safer; NFC (Sanjoy Das, 2018-05-22)
  RowMajorMatrixVectorProductEmitter and ColumnMajorMatrixVectorProductEmitter both cache* the generated LLVM IR by keying off the dimensions of the operation, the primitive type, etc. Before this CL, the code computing the cache key lived separately from the GEMV emitters. This pattern introduces a risk that the GEMV emitters will end up with some state not modeled in the cache key, resulting in a subtle bug. This CL reduces the risk by encapsulating the cache key generation and the input configuration to the GEMV emitters in a single class.
  (* In the sense that two different dot operations with the same M, K, N will share a single LLVM IR function body.)
  PiperOrigin-RevId: 197628423
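The single-class idea in this CL can be illustrated as one immutable config object that both parameterizes the emitter and derives its cache key, so the key cannot drift out of sync with the emitter's inputs. A rough Python sketch with hypothetical GemvConfig and get_or_emit names (the real code is C++ LLVM IR emission):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GemvConfig:
    """Holds every input that affects the emitted GEMV body. Because the
    cache key is derived from this same object, any new field automatically
    becomes part of the key."""
    primitive_type: str
    m: int
    k: int
    tile_rows: int

    def cache_key(self):
        return ("gemv", self.primitive_type, self.m, self.k, self.tile_rows)

_emitted = {}  # cache_key -> emitted function body

def get_or_emit(config, emit_fn):
    """Two dot operations with the same config share one emitted body."""
    key = config.cache_key()
    if key not in _emitted:
        _emitted[key] = emit_fn(config)
    return _emitted[key]
```

The safety property is structural: an emitter parameter that is not a GemvConfig field simply cannot reach the emitter, so it cannot be silently missing from the key.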
* [TF:XLA] Add a helper to update HLO reachability. (A. Unique TensorFlower, 2018-05-22)
  This can be used if the user does not care whether reachability changed after an update.
  PiperOrigin-RevId: 197628007
* [TF:XLA] Roll back the functionality change of cl/197458260 to unbreak test. (Dimitris Vardoulakis, 2018-05-22)
  PiperOrigin-RevId: 197625888
* [TF:XLA] Make miscomparison error messages more readable (Nick Desaulniers, 2018-05-22)
  PiperOrigin-RevId: 197620560
* [XLA] Skip BF16 output conversion folding when CRS is the root. (Yuanzhong Xu, 2018-05-22)
  PiperOrigin-RevId: 197618934
* Collective Ops Part 7 (A. Unique TensorFlower, 2018-05-22)
  Complete just enough of the core implementation to run multi-device collectives locally within a single process. Interfaces are still private and not available for general use.
  PiperOrigin-RevId: 197617132
* Move executor_test.cc to tensorflow/core/common_runtime/. (Derek Murray, 2018-05-22)
  PiperOrigin-RevId: 197611583
* Fix memory leak when going from the fast path to the slow path in eager (Akshay Modi, 2018-05-22)
  Fixes #19385
  PiperOrigin-RevId: 197607384
* Detect unknown batch size in predictions dict (Jianwei Xie, 2018-05-22)
  PiperOrigin-RevId: 197606059
* [XLA:GPU] Emit fused reduces from batchnorm expander (Benjamin Kramer, 2018-05-22)
  This is an intermediate step until we have working multi-output fusion. Once we have it, this change should be reverted, as it might interfere with fusion.
  PiperOrigin-RevId: 197605814
* [XLA:GPU] Add lowering for input fusions with multiple reduce outputs (Benjamin Kramer, 2018-05-22)
  This is limited to reduces that have the same shapes and reduced dimensions. Most of the code is making the individual emission code emit multiple reductions in the same loop. This requires multi-output fusion to provide a speedup.
  PiperOrigin-RevId: 197599248
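Emitting multiple reductions in the same loop can be pictured as one pass over the input that updates several accumulators at once, instead of one pass per reduce. A toy sketch using a hypothetical fused_reduces helper (the real change operates on XLA's GPU IR emitter, not Python lists):

```python
def fused_reduces(values, reducers):
    """Compute several reductions over the same input in a single pass.

    reducers is a list of (init, fn) pairs sharing the same reduced
    dimension, mirroring the CL's restriction that the fused reduces
    have identical shapes and reduced dimensions.
    """
    accs = [init for init, _ in reducers]
    for v in values:          # one loop over the input...
        for i, (_, fn) in enumerate(reducers):
            accs[i] = fn(accs[i], v)   # ...feeds every accumulator
    return accs
```

With a single loop, the input is read once rather than once per reduction, which is where the speedup would come from once multi-output fusion feeds such groups to this lowering.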
* Actually return the value from train_and_evaluate. (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197597953
* Remove the bias centering graph if it is turned off. (A. Unique TensorFlower, 2018-05-22)
  * Create consts once. Otherwise, each time the constant is passed to an op, a new Const op is created.
  * Speed up graph construction by using a function to build splits.
  PiperOrigin-RevId: 197590220
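The "create consts once" point above is constant memoization: repeated uses of the same value share one node instead of minting a new Const op each time. A hypothetical ConstCache sketch (not the actual boosted-trees code):

```python
class ConstCache:
    """Memoizes constant creation so that repeated requests for the same
    value return the same node instead of creating a new Const op."""

    def __init__(self):
        self._consts = {}
        self.ops_created = 0

    def const(self, value):
        if value not in self._consts:
            self.ops_created += 1
            # Stand-in for building a real Const op in the graph.
            self._consts[value] = ("Const", value)
        return self._consts[value]
```

Passing `cache.const(0.5)` to many ops then adds one Const node to the graph rather than one per use.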
* Adding stop request capability to CheckpointSaverListener. (Mustafa Ispir, 2018-05-22)
  An example usage is stopping training based on evaluation metrics:

      my_estimator = tf.estimator.DNNClassifier(...)
      stopper = StopTrainingBasedOnEvaluateMetrics(my_estimator)
      my_estimator.train(..., saving_listeners=[stopper])

  where:

      class StopTrainingBasedOnEvaluateMetrics(tf.train.CheckpointSaverListener):
        """A saver listener to run evaluate with every checkpoint."""

        def __init__(self, estimator):
          self._estimator = estimator

        def after_save(self, session, global_step_value):
          eval_results = self._estimator.evaluate(...)
          if stop_if_started_overfitting(eval_results):
            return True

  PiperOrigin-RevId: 197586515
* Make init_scope preserve the inner device stack when lifting into a graph. (Akshay Agrawal, 2018-05-22)
  Eager execution doesn't implement device stacks, and in particular it doesn't support device functions (which determine the device on a per-op basis), so in general it's not possible to do the same when lifting into the eager context.
  PiperOrigin-RevId: 197583446
* Special-case the 'dict' call, which trips other mechanisms for built-ins. (Dan Moldovan, 2018-05-22)
  PiperOrigin-RevId: 197576297
* [TF:XLA] Fix xla_interpreter_device build (Benjamin Kramer, 2018-05-22)
  PiperOrigin-RevId: 197571618
* Contributing guidelines, style guide and README updates (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197564905
* Update calls to addPassesToEmitFile (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197564506
* Fix a couple of broken links in the Swift for TensorFlow page. (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197564254
* Automated g4 rollback of changelist 197527651 (A. Unique TensorFlower, 2018-05-22)
  PiperOrigin-RevId: 197562826
* [XLA:TF] Run buildifier on llvm.BUILD (Benjamin Kramer, 2018-05-22)
  Buildifier recently started sorting load args (https://github.com/bazelbuild/buildtools/commit/3ac5f85b22bc44820c041d0cacd3bc2ed54e7742), which causes diffs in the output.
  PiperOrigin-RevId: 197556554
* [XLA] Optimize ShapeTree<T> (A. Unique TensorFlower, 2018-05-22)
  This optimizes ShapeTree quite significantly. In particular, this optimizes for the common case of querying/iterating, copying, and moving ShapeTrees.
  * Allocate all ShapeTreeNodes inside a single, owned vector. This reduces the number of memory allocations and improves cache performance.
  * Instead of storing children nodes as unique_ptrs, store them as indices into the owning container's vector. This allows cheap copy-construction (a std::vector POD copy) and doesn't change the fast path (dereferencing a pointer is just as fast as dereferencing a base + offset).
  * Instead of a unique_ptr<Shape>, use a shared_ptr<Shape>. This removes a load of copy-construction overhead at the cost of a shared_ptr over a unique_ptr (one extra allocation).
  * Instead of computing ShapeIndexes on-demand in the iterators/ForEach*, precompute them during construction time. This adds a few more bytes per ShapeTree, but now we can...
  * ... store a std::pair<ShapeIndex, T> as the ShapeTreeNode's data element. This allows us to provide a std::pair<K,V>&, STL-like interface from iterators without going through any of the previous unique_ptr hacks around storage lifetimes.
  * Because we no longer need to iterate from the beginning to build up the ShapeIndex, we can now offer a ::find() function to return an iterator for a ShapeIndex in O(K) time. As the iteration order is guaranteed to be pre-order, this can be used (and will be, later) to speed up the fast path of mutating a subtree of a ShapeTree from tf2xla::ExtractSubBuffers.
  * Similarly, because we now have a very standard, cheap STL interface with no performance cliffs, we can hopefully improve ShapedBuffer's copy and move constructors to be cheaper.
  PiperOrigin-RevId: 197548717
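The flat-storage layout described above can be pictured as a tree whose nodes all live in one list, with children stored as indices rather than pointers and each node carrying its precomputed ShapeIndex. A much-simplified Python sketch using a hypothetical FlatShapeTree class (the real ShapeTree is a C++ template):

```python
class FlatShapeTree:
    """All nodes live in one list (one allocation arena); children are
    list indices, not pointers, so copying the tree is just copying the
    list; each node stores its precomputed shape index."""

    def __init__(self):
        self._nodes = []  # entries: (shape_index_tuple, data, child_indices)

    def add_node(self, shape_index, data, parent=None):
        idx = len(self._nodes)
        self._nodes.append((tuple(shape_index), data, []))
        if parent is not None:
            self._nodes[parent][2].append(idx)
        return idx

    def find(self, shape_index):
        """Return the data stored at shape_index, or None.

        This toy version scans linearly; the real structure walks child
        indices to get O(K) in the length of the shape index."""
        target = tuple(shape_index)
        for si, data, _ in self._nodes:
            if si == target:
                return data
        return None

    def __iter__(self):
        # Nodes were appended in pre-order, so iteration is a flat scan
        # with no pointer chasing and no on-demand index computation.
        return iter((si, data) for si, data, _ in self._nodes)
```

Copying this structure copies one list of tuples, which mirrors the CL's point that index-based children make copy-construction a plain vector copy.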
| | | | | | | | | | | | | | This optimizes ShapeTree quite significantly. In particular this optimizes for the common case of querying/iterating, copying and moving ShapeTrees. * Allocate all ShapeTreeNodes inside a single, owned, vector. This reduces the number of memory allocations and improves cache performance. * Instead of storing children nodes as unique_ptrs, store them as indices into the owning container's vector. This allows cheap copy-construction (a std::vector POD copy) and doesn't change the fast path (dereferencing a pointer is just as fast as dereferencing a base + offset). * Instead of a unique_ptr<Shape>, use a shared_ptr<Shape>. This removes a load of copy-construction overhead at the cost of a shared_ptr over a unique_ptr (one extra allocation). * Instead of computing ShapeIndexes on-demand in the iterators/ForEach*, precompute them during construction time. This adds a few more bytes per ShapeTree, but now we can... * ... store a std::pair<ShapeIndex, T> as the ShapeTreeNode's data element. This allows us to provide a std::pair<K,V>&, STL-like interface from iterators without going through any of the previous unique_ptr hacks around storage lifetimes. * Because we no longer need to iterate from the beginning to build up the ShapeIndex, we can now offer a ::find() function to return an iterator for a ShapeIndex in O(K) time. As the iteration order is guaranteed to be pre-order, this can be used (and will be, later) to speed up the fast-path of mutating a subtree of a ShapeTree from tf2xla::ExtractSubBuffers. * Similarly because we now have a very standard, cheap STL interface with no performance cliffs, we can hopefully improve ShapedBuffer's copy and move constructors to be cheaper. PiperOrigin-RevId: 197548717