path: root/tensorflow/core/grappler/optimizers
* [Grappler] Add RemoveStackStridedSliceSameAxis optimizer. (Eugene Brevdo, 2018-10-10)
  Replace operations of the form:
    x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i,...]
  with
    a_i
  when the strided slice index `i` is applied in the k'th axis.
  Similarly, replace operations of the form:
    x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i:i+1,...]
  with
    expand_dims(a_i, axis=k)
  PiperOrigin-RevId: 216535346
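  A rough TF 1.x-style sketch of the equivalence this rewrite exploits (the tensors and shapes below are made up for illustration, not taken from the commit):

      import numpy as np
      import tensorflow as tf

      a0 = tf.constant(np.random.rand(4, 5))
      a1 = tf.constant(np.random.rand(4, 5))
      a2 = tf.constant(np.random.rand(4, 5))

      stacked = tf.stack([a0, a1, a2], axis=1)   # shape [4, 3, 5]

      # Slicing index 1 out of the stacked axis is just a1 ...
      sliced = stacked[:, 1]                     # same values as a1
      # ... and keeping the axis (i:i+1) is expand_dims(a1, axis=1).
      sliced_keepdim = stacked[:, 1:2]           # same as tf.expand_dims(a1, axis=1)

      with tf.Session() as sess:
          np.testing.assert_allclose(*sess.run([sliced, a1]))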
* Automated rollback of commit 950cf87104bfee28e2165fe368f66337b8a1336d (A. Unique TensorFlower, 2018-10-10)
  PiperOrigin-RevId: 216500702
* [tf.data vectorization] Add vectorizer for `Add` op (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216424512
* [tf.data] NUMA-aware MapAndBatch dataset. (Brennan Saeta, 2018-10-09)
  PiperOrigin-RevId: 216395709
* [tf.data vectorization] Handle captured inputs in MapVectorization optimization (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216381943
* Automated rollback of commit 5f308cb408eb46ec9af0546be6b9ae1d5166b185 (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216309111
* Refactor CalculateOutputSize() from VirtualScheduler protected member function to utils; refactor EstimateSize() from memory_optimizer.cc to utils; some small changes for readability improvement. (Peter Ma, 2018-10-08)
  PiperOrigin-RevId: 216307257
* Automated rollback of commit 07df147ab20c4a5329148e5fb5f7f6b187cb73a4 (Reed Wanderman-Milne, 2018-10-08)
  PiperOrigin-RevId: 216299809
* Add timeout mechanism to Grappler meta optimizer. (A. Unique TensorFlower, 2018-10-08)
  This is only a best-effort mechanism, since the meta optimizer only checks if it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
  PiperOrigin-RevId: 216234262
* Enable PinToHostOptimizer. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216201732
* Optimize PinToHostOptimizer by adding a cache; also add PinToHostOptimizer to benchmarks. (A. Unique TensorFlower, 2018-10-08)
  Original runtime: 4.83492736816 secs; with cache: 2.19033999443 secs.
  PiperOrigin-RevId: 216195286
* [tf.data vectorization] Feed inputs to vectorizers with a notion of stackedness (Rachel Lim, 2018-10-05)
  PiperOrigin-RevId: 215989259
* Fix bug in Grappler constant folding: the logic detecting full reductions was flawed. (A. Unique TensorFlower, 2018-10-04)
  Added better test coverage. Also added an extra test for a related symbolic shape inference operation that I first suspected to be broken.
  PiperOrigin-RevId: 215812753
* [tf.data] Add a notion of `captured args` to MapDefun (Rachel Lim, 2018-10-04)
  PiperOrigin-RevId: 215788485
* Add ability to vectorize nodes that do not derive from function arguments. (Rachel Lim, 2018-10-04)
  (This indirectly handles "Const" outputs automagically, since they are always unstacked.)
  PiperOrigin-RevId: 215749824
* PinToHostOptimizer: Refactored code. Update blacklist. Added recursive lookback for Identity op. (A. Unique TensorFlower, 2018-10-03)
  This fixes many performance regressions.
  PiperOrigin-RevId: 215662393
* [tf.data] Add utility to deduplicate graph node names (after vectorization) (Rachel Lim, 2018-10-03)
  PiperOrigin-RevId: 215595078
* Automated rollback of commit cb98ceba9cff8c10ee3c7e89dc8925c88b28118e (A. Unique TensorFlower, 2018-10-01)
  PiperOrigin-RevId: 215254762
* Add allowed optimizations to GrapplerItem. (Eugene Zhulenev, 2018-10-01)
  (1) Skip UnaryOpComposition rewrite if the optimized graph needs to have a gradient registered for all nodes.
  PiperOrigin-RevId: 215188461
* Disable PinToHostOptimizer for NoOp. (A. Unique TensorFlower, 2018-09-29)
  PiperOrigin-RevId: 215079134
* Add a rewrite_config option to disable meta_optimizer. (A. Unique TensorFlower, 2018-09-28)
  PiperOrigin-RevId: 215014737
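  For context, a minimal sketch of how such an option is used from a TF 1.x session config; the field name disable_meta_optimizer is assumed from the public RewriterConfig proto rather than quoted from this commit:

      import tensorflow as tf

      config = tf.ConfigProto()
      # Ask Grappler to skip the meta optimizer (and thus all sub-optimizers).
      config.graph_options.rewrite_options.disable_meta_optimizer = True

      with tf.Session(config=config) as sess:
          pass  # build and run the graph; Grappler will not rewrite it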
* [tf.data] Use Graph instead of GraphDef/FunctionDef for vectorization transforms (Rachel Lim, 2018-09-28)
  PiperOrigin-RevId: 215011835
* Optimize ParseNodeNameAsStringPiece and related functions, since they are the most costly functions in Grappler. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214853009
* Fix support for custom optimizers in explicit schedule (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214794973
* Misc. micro-optimizations in Grappler optimizers. (A. Unique TensorFlower, 2018-09-26)
  Make shape inference lazy in optimizers that may not trigger.
  PiperOrigin-RevId: 214669034
* [tf.data] Small utils cleanup to expose generic function (Rachel Lim, 2018-09-26)
  PiperOrigin-RevId: 214659488
* Hoisting RandomUniform out of functions (Piotr Padlewski, 2018-09-26)
  This patch introduces an optimization that hoists RandomUniform out of map functions. By doing so, we make the function stateless, which is crucial for parallelization and vectorization.
  PiperOrigin-RevId: 214623178
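  The pass performs this rewrite on the function graph itself; the hand-written tf.data sketch below shows the equivalent transformation, using newer public API names (tf.data.experimental.RandomDataset and tf.random.stateless_uniform) that are assumptions here, not part of this commit:

      import tensorflow as tf

      ds = tf.data.Dataset.range(100)

      # Stateful version: RandomUniform runs inside the map function.
      noisy = ds.map(lambda x: tf.cast(x, tf.float32) + tf.random.uniform([]))

      # Hoisted version: randomness comes from a separate seed dataset, and the
      # remaining map function uses only stateless ops.
      seeds = tf.data.experimental.RandomDataset(seed=42).batch(2)
      noisy_hoisted = tf.data.Dataset.zip((ds, seeds)).map(
          lambda x, seed: tf.cast(x, tf.float32)
                          + tf.random.stateless_uniform([], seed=seed))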
* Fix a bug in debug_stripper. (Jingyue Wu, 2018-09-25)
  AsControlDependency accepts a node name, not a tensor name.
  PiperOrigin-RevId: 214451885
* Swap Const ops back to GPU greedily. (A. Unique TensorFlower, 2018-09-25)
  PiperOrigin-RevId: 214415906
* Use less memory by only storing pointers to ops that feed inplace ops. (A. Unique TensorFlower, 2018-09-25)
  Handle empty strings in NodePositionIfSameNode.
  PiperOrigin-RevId: 214393567
* Disable PinToHostOptimizer for any TPU graphs. (A. Unique TensorFlower, 2018-09-24)
  PiperOrigin-RevId: 214338297
* Speed up DedupComputation in arithmetic optimizer. (A. Unique TensorFlower, 2018-09-24)
  PiperOrigin-RevId: 214338100
* Turn on PinToHostOptimizer by default. (A. Unique TensorFlower, 2018-09-24)
  PiperOrigin-RevId: 214275960
* Fix noop elimination optimization. (Piotr Padlewski, 2018-09-23)
  Fix for b/116169724. Only remove noops if they refer to const nodes.
  PiperOrigin-RevId: 214199200
* Add blacklist ops to PinToHostOptimizer. Fix test. (A. Unique TensorFlower, 2018-09-23)
  PiperOrigin-RevId: 214195020
* Merge pull request #22453 from samikama:custom_optimizer_ordering (TensorFlower Gardener, 2018-09-22)
  PiperOrigin-RevId: 214132703
* Add PinToHostOptimizer to grappler: force small ops to happen on CPU (instead of GPU). (A. Unique TensorFlower, 2018-09-22)
  This avoids many unnecessary CPU<->GPU memcpy and syncs.
  PiperOrigin-RevId: 214108484
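  A hand-written illustration of the placement problem this pass automates (device strings and shapes below are illustrative only):

      import tensorflow as tf

      with tf.device('/gpu:0'):
          images = tf.ones([32, 224, 224, 3])

      # A tiny integer op like a shape computation is cheap to run, but placing
      # it on the GPU typically forces its result to be copied back to the host.
      # Pinning it to the CPU avoids that round trip; PinToHostOptimizer makes
      # this placement decision automatically for small, host-friendly ops.
      with tf.device('/cpu:0'):
          batch_size = tf.shape(images)[0]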
* Don't crash on Pack nodes with no axis argument set. (A. Unique TensorFlower, 2018-09-21)
  PiperOrigin-RevId: 214035048
* [tf.data] Add a ConverterRegistry for vectorization converters (Rachel Lim, 2018-09-21)
  PiperOrigin-RevId: 214027910
* Minor style fix. (drpngx, 2018-09-21)
* Add possibility to include default optimizers in custom optimizer list (Sami Kama, 2018-09-21)
* Merge pull request #22402 from kitstar:master (TensorFlower Gardener, 2018-09-20)
  PiperOrigin-RevId: 213912651
* [tf.data] Some vectorization cleanup (Rachel Lim, 2018-09-20)
  PiperOrigin-RevId: 213886813
* Fix bug in Pow optimizer rule when broadcasting is involved. (A. Unique TensorFlower, 2018-09-20)
  Minor cleanup by moving the helper function ShapesEqual to GraphProperties and adding unit tests for it.
  PiperOrigin-RevId: 213876779
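  The commit does not spell out the failing case, but assuming the rule in question rewrites pow(x, 1) to x, a plausible illustration of why broadcasting matters:

      import tensorflow as tf

      x = tf.constant([1.0, 2.0, 3.0])   # shape [3]
      ones = tf.ones([2, 3])             # exponent of all 1s, shape [2, 3]
      y = tf.pow(x, ones)                # broadcasts to shape [2, 3]

      # Rewriting tf.pow(x, ones) to just `x` would change the output shape from
      # [2, 3] to [3], so the optimizer has to verify ShapesEqual(x, y) before
      # applying the simplification.
      with tf.Session() as sess:
          print(sess.run(tf.shape(y)))   # prints [2 3]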
* [tf.data] Use vectorization_utils::VectorizeMapDefun in MapVectorization optimization (Rachel Lim, 2018-09-20)
  PiperOrigin-RevId: 213840320
* Fix typo in grappler remapper optimizer. (Cheng CHEN, 2018-09-20)
* Remove LOG(INFO) in MetaOptimizer::Optimize. (A. Unique TensorFlower, 2018-09-19)
  This currently produces a large number of debugging outputs in the INFO log that look like:
    I0917 16:20:11.073992 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.079458 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.084827 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.089359 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  After this change those lines will simply no longer appear.
  RELNOTES: n/a
  PiperOrigin-RevId: 213690759
* [tf.data] MapVectorization optimization: C++ conversion framework to vectorize a MapDefun function. (Rachel Lim, 2018-09-19)
  Also implements conversion for two ops: Cast and Unpack.
  PiperOrigin-RevId: 213686720
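  Conceptually, vectorizing a map function means running it once per batch instead of once per element; for an elementwise op like Cast the two pipelines below compute the same values (an illustrative sketch, not code from the commit):

      import tensorflow as tf

      ds = tf.data.Dataset.range(1024)

      # Original pipeline: cast each element individually, then batch.
      per_element = ds.map(lambda x: tf.cast(x, tf.float32)).batch(32)

      # What MapVectorization aims for: batch first, then apply a vectorized
      # version of the map function to whole batches. For Cast, the vectorized
      # function is simply the same op applied to the batched tensor.
      batched = ds.batch(32).map(lambda x: tf.cast(x, tf.float32))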
* Update the grappler plugin to support the @defun generated function and ops. (Scott Zhu, 2018-09-18)
  PiperOrigin-RevId: 213554813
* Clean up remove_negation pass in Grappler. (A. Unique TensorFlower, 2018-09-18)
  PiperOrigin-RevId: 213520177