path: root/tensorflow/core/grappler
Commit log (message, author, date)
...
* Fix bug in Pow optimizer rule when broadcasting is involved. (A. Unique TensorFlower, 2018-09-20)
  Minor cleanup by moving the helper function ShapesEqual to GraphProperties and adding unit tests for it.
  PiperOrigin-RevId: 213876779
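  For context, a minimal self-contained sketch of the idea behind such a shape-equality helper (this is not the actual GraphProperties::ShapesEqual code; shapes are modeled here as plain vectors, with -1 marking an unknown dimension). The point it illustrates: a rewrite like pow(x, 1) -> x is only safe when x's shape provably equals the output shape, which broadcasting against the exponent can change.

```cpp
// Hypothetical sketch, not the TensorFlow implementation: two shapes are only
// known to be equal when the ranks match and every dimension is known and
// identical -- otherwise a potentially shape-changing rewrite must be skipped.
#include <cstddef>
#include <vector>

bool ShapesDefinitelyEqual(const std::vector<long long>& a,
                           const std::vector<long long>& b) {
  if (a.size() != b.size()) return false;
  for (std::size_t i = 0; i < a.size(); ++i) {
    if (a[i] < 0 || b[i] < 0 || a[i] != b[i]) return false;
  }
  return true;
}

int main() {
  // If x has shape [2, 1] and the exponent broadcasts the output to [2, 3],
  // pow(x, 1) cannot simply be replaced by x, since the shapes differ.
  std::vector<long long> x_shape = {2, 1};
  std::vector<long long> out_shape = {2, 3};
  return ShapesDefinitelyEqual(x_shape, out_shape) ? 1 : 0;
}
```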
* [tf.data] Use vectorization_utils::VectorizeMapDefun in MapVectorization optimization. (Rachel Lim, 2018-09-20)
  PiperOrigin-RevId: 213840320
* Fix typo in grappler remapper optimizer. (Cheng CHEN, 2018-09-20)
* Remove LOG(INFO) in MetaOptimizer::Optimize. (A. Unique TensorFlower, 2018-09-19)
  This call currently produces a large number of debugging outputs in the INFO log that look like:
    I0917 16:20:11.073992 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.079458 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.084827 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
    I0917 16:20:11.089359 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  After this change those lines no longer appear.
  RELNOTES: n/a
  PiperOrigin-RevId: 213690759
* [tf.data] MapVectorization optimization: C++ conversion framework to vectorize a MapDefun function. (Rachel Lim, 2018-09-19)
  Also implements conversion for two ops: Cast and Unpack.
  PiperOrigin-RevId: 213686720
* Merge pull request #21000 from ROCmSoftwarePlatform:upstream-staging-gpu-common-runtime-1 (TensorFlower Gardener, 2018-09-19)
  PiperOrigin-RevId: 213653830
* Update the grappler plugin to support @defun-generated functions and ops. (Scott Zhu, 2018-09-18)
  PiperOrigin-RevId: 213554813
* Clean up remove_negation pass in Grappler. (A. Unique TensorFlower, 2018-09-18)
  PiperOrigin-RevId: 213520177
* [tf.data] Introducing an optimization that parallelizes map transformations. (Piotr Padlewski, 2018-09-14)
  Stateless MapDatasets can be parallelized by switching to ParallelMapDataset. We set `num_parallel_calls` to 2 for now, but in the future a special value will be used that causes the optimal value to be selected dynamically at runtime. This patch also exposed a memory leak, which has been fixed.
  PiperOrigin-RevId: 213015223
* [Grappler] s/std::string/string/ (James Keeling, 2018-09-14)
  string and std::string are not necessarily the same thing in TF, but this code assumed that they are.
  PiperOrigin-RevId: 212952877
* Automated rollback of commit ac60b46e2c5962fd8099a4406c1788d826ad3c0d (A. Unique TensorFlower, 2018-09-13)
  PiperOrigin-RevId: 212896336
* Improve static shape inference in Grappler by propagating tensors_as_shapes better. (Doe Hyun Yoon, 2018-09-12)
  Currently, static shape inference propagates shapes of tensors, but in some cases we also need values; for this we use input_tensors (from Const input tensors) as well as input_tensors_as_shapes and output_tensors_as_shapes (these are in ShapeHandle format but carry values, currently only for 1-D vectors). This CL enhances propagation of input_tensors_as_shapes and output_tensors_as_shapes to improve static shape inference:
  (1) forward scalar Consts as input_tensors_as_shapes (currently only 1-D vectors);
  (2) export input_tensors_as_shapes, output Const tensors, and output_tensors_as_shapes as the values of the inferred input/output TensorProperties (currently only input Const tensors are exported as values);
  (3) use input_tensors_as_shapes as Const tensors for function inputs (currently only Const tensors);
  (4) forward input_tensors_as_shapes to output_tensors_as_shapes for the Identity op;
  (5) when the Pack op concatenates scalar values to form output_tensors_as_shapes, it currently uses only input_tensors (from Const input tensors); this CL changes Pack to use input_tensors_as_shapes as well.
  PiperOrigin-RevId: 212696959
* Correct argument name in declaration of StronglyConnectedComponents. (James Keeling, 2018-09-12)
  This now matches the definition. I fixed it here rather than in the definition as it seems every call to this function names the variable "num_components". I also tidied up the comment a little.
  PiperOrigin-RevId: 212668416
* Add a printout at the start of MetaOptimizer::Optimize() to make it easier to see the total cost of running Grappler in logs. (A. Unique TensorFlower, 2018-09-11)
  Also add a couple of VLOG(1) statements to see the breakdown between the main graph and function optimization.
  PiperOrigin-RevId: 212531430
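  As a side note, a minimal sketch of the LOG(INFO) vs. VLOG(1) distinction this entry relies on, written against the open-source glog macros (which TensorFlow's logging mirrors; this is not the MetaOptimizer code itself): VLOG output only appears when a higher verbosity level is requested, so it suits per-optimizer breakdowns that would otherwise spam the default INFO log.

```cpp
// Sketch using glog; TensorFlow's LOG/VLOG macros behave analogously.
#include <glog/logging.h>

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);
  FLAGS_logtostderr = true;

  LOG(INFO) << "always visible at INFO level";
  // Only printed when the verbosity level is >= 1 (e.g. FLAGS_v = 1), so
  // detailed per-optimizer timing stays out of the default INFO log.
  VLOG(1) << "breakdown: function optimization took 12ms";
  return 0;
}
```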
* Automated rollback of commit 45965cfd8b54fb113275ffdaced5366e28aa3553 (Yanan Cao, 2018-09-11)
  PiperOrigin-RevId: 212465918
* Graph optimization pass that creates XlaLaunch ops for the computations that have been explicitly marked to be compiled via xla.compile(). (A. Unique TensorFlower, 2018-09-11)
  PiperOrigin-RevId: 212407112
* Add an experimental grappler plugin for selecting function implementations at run time. (Scott Zhu, 2018-09-10)
  PiperOrigin-RevId: 212321238
* Re-enable the identity-transpose-removal-across-chains optimization in Grappler. (A. Unique TensorFlower, 2018-09-07)
  PiperOrigin-RevId: 211989327
* Set meta_optimizer to use custom graph optimizers for both toggling optimizers and setting optimizer names. (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211900252
* Replace Placeholder with Const in GrapplerFunctionItem for function shape inference when possible. (Doe Hyun Yoon, 2018-09-06)
  PiperOrigin-RevId: 211821596
* libc++ fix: make comparison functors const. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211661670
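  For illustration, a minimal example (not the Grappler code in question) of why libc++ forces this: libc++ requires associative-container comparators to be callable on a const object, so a functor whose operator() is not const-qualified can fail to compile there even though it may slip through with libstdc++.

```cpp
#include <map>
#include <string>

struct CompareByLength {
  // The trailing 'const' is the fix: without it, libc++ rejects the comparator
  // because container operations invoke it through a const-qualified object.
  bool operator()(const std::string& a, const std::string& b) const {
    return a.size() < b.size();
  }
};

int main() {
  std::map<std::string, int, CompareByLength> lengths;
  lengths["grappler"] = 8;

  const auto& const_view = lengths;  // const access path exercises const call
  return const_view.find("optimizer") == const_view.end() ? 0 : 1;
}
```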
* Move GrapplerFunctionItem arguments. (Piotr Padlewski, 2018-09-04)
  This patch uses the take-by-value-and-move idiom to avoid copying constructor arguments.
  PiperOrigin-RevId: 211553877
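  A short sketch of the take-by-value-and-move idiom the entry refers to (the class and member names here are hypothetical, not the actual GrapplerFunctionItem constructor):

```cpp
#include <string>
#include <utility>
#include <vector>

class FunctionItem {
 public:
  // Taking the arguments by value lets callers pass either lvalues (copied
  // once into the parameter) or rvalues (moved, no copy); the parameters are
  // then move-constructed into the members.
  FunctionItem(std::string name, std::vector<int> arg_ids)
      : name_(std::move(name)), arg_ids_(std::move(arg_ids)) {}

 private:
  std::string name_;
  std::vector<int> arg_ids_;
};

int main() {
  std::vector<int> ids = {0, 1, 2};
  FunctionItem a("from_lvalue", ids);             // 'ids' copied, then moved
  FunctionItem b("from_rvalue", std::move(ids));  // 'ids' moved twice, no copy
  return 0;
}
```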
* Extend hoisting monotonic functions out of min/max reductions to all monotonic unary functions. (A. Unique TensorFlower, 2018-09-04)
  Add the ability to flip Max <-> Min if the function is non-increasing, e.g. Max(Neg(x)) => Neg(Min(x)).
  PiperOrigin-RevId: 211490436
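  A small numerical illustration of the identity behind the rewrite (plain C++, not the Grappler pass): for a non-increasing unary f, the maximum of f over the elements equals f of the minimum element, so Max(Neg(x)) can become Neg(Min(x)) and f is applied once instead of elementwise.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
  std::vector<double> x = {3.0, -1.5, 7.25, 0.0};

  // Elementwise f(x) = -x, then reduce with max.
  std::vector<double> neg(x.size());
  std::transform(x.begin(), x.end(), neg.begin(), [](double v) { return -v; });
  double max_of_neg = *std::max_element(neg.begin(), neg.end());

  // Rewritten form: reduce with min first, apply f once.
  double neg_of_min = -*std::min_element(x.begin(), x.end());

  assert(max_of_neg == neg_of_min);  // both equal 1.5
  return 0;
}
```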
* Rollforward of rollback: (A. Unique TensorFlower, 2018-09-02)
  Reinstate the use of the integral-exponent power function MathUtil::IPow, but make sure to use a floating-point base, so as to compute the result using floating-point arithmetic. This behaviour is equivalent to, but faster than, std::pow. Note that care must be taken to convert the base to double, which we effect by providing an explicit template type argument for MathUtil::IPow.
  PiperOrigin-RevId: 211290304
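  MathUtil::IPow is TensorFlow-internal, so the following is only a hedged sketch of the underlying idea: an integral-exponent power computed by exponentiation by squaring over a double base, which is what keeping the base floating-point buys relative to doing the arithmetic in integers.

```cpp
#include <cmath>
#include <cstdio>

// Exponentiation by squaring with a floating-point base: the exponent stays
// integral, but every multiply happens in double precision.
double IPowSketch(double base, unsigned int exp) {
  double result = 1.0;
  while (exp > 0) {
    if (exp & 1u) result *= base;
    base *= base;
    exp >>= 1u;
  }
  return result;
}

int main() {
  // Agrees with std::pow for this example: both print 3.375.
  std::printf("%g %g\n", IPowSketch(1.5, 3), std::pow(1.5, 3));
  return 0;
}
```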
* Automated rollback of commit 9e2ce8f4c483e68309a60dc89739bb1b79b4a12e (A. Unique TensorFlower, 2018-09-01)
  PiperOrigin-RevId: 211204708
* Re-enable hoisting of coeff-wise unary chains out of Split and into Concat. (A. Unique TensorFlower, 2018-08-31)
  PiperOrigin-RevId: 211162510
* Merge multiple Concat ops into one. (A. Unique TensorFlower, 2018-08-31)
  PiperOrigin-RevId: 211161172
* Automated rollback of commit 7805e23c8416fe4ccccb48c37199a5631bee6d51 (Guangda Lai, 2018-08-31)
  PiperOrigin-RevId: 211137964
* Rename CUDA GPU ID to platform GPU ID. (Wen-Heng (Jack) Chung, 2018-08-31)
  Rename CUDA GPU ID to platform GPU ID so the notion is applicable on both the CUDA and ROCm platforms.
* Fix bug in hoisting monotonic functions out of reductions: do not change the value of nodes in the preserve set, e.g. fetch nodes. (A. Unique TensorFlower, 2018-08-30)
  I simplified the rewiring logic a tad.
  PiperOrigin-RevId: 211017989
* Add shape and remapping optimization to the disabled optimizer list. (Yao Zhang, 2018-08-28)
  PiperOrigin-RevId: 210602483
* Removed redundant std::string -> string conversions. (A. Unique TensorFlower, 2018-08-28)
  PiperOrigin-RevId: 210596417
* Removed ToString method from tensorflow::StringPiece. (A. Unique TensorFlower, 2018-08-28)
  This will make it easier to replace tensorflow::StringPiece with absl::string_view, as absl::string_view does not contain a ToString method.
  PiperOrigin-RevId: 210550029
* Open source graph analyzer. (Yao Zhang, 2018-08-27)
  PiperOrigin-RevId: 210439649
* [tf.data] Removing test for obsolete functionality. (Jiri Simsa, 2018-08-27)
  PiperOrigin-RevId: 210404649
* [tf.data] Minor cleanup. (Jiri Simsa, 2018-08-27)
  PiperOrigin-RevId: 210402159
* Replaced calls to tensorflow::StringPiece::ToString with std::string conversions. (A. Unique TensorFlower, 2018-08-27)
  That is, instances of sp.ToString() are replaced with string(sp). This will allow tensorflow::StringPiece::ToString to be removed, which is necessary before it can be replaced with absl::string_view.
  PiperOrigin-RevId: 210394878
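  A tiny illustration of the replacement pattern, using std::string_view as a stand-in for tensorflow::StringPiece / absl::string_view (the TF types are not used here): the explicit std::string constructor works for all of them, whereas ToString() exists only on the TensorFlow type being phased out.

```cpp
#include <string>
#include <string_view>

int main() {
  std::string_view sp = "grappler";
  // Before: std::string s = sp.ToString();  // StringPiece-only API
  std::string s(sp);  // After: explicit conversion, same result (C++17)
  return s == "grappler" ? 0 : 1;
}
```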
* Filter fusion (Piotr Padlewski, 2018-08-24)
  This patch introduces the FilterFusion optimization, which can fuse multiple FilterDataset operations.
  PiperOrigin-RevId: 210189643
* Support shape [1 C 1 1] for associative operator optimization with Conv2D. (A. Unique TensorFlower, 2018-08-24)
  PiperOrigin-RevId: 210187033
* Merge pull request #20239 from pengwa:mem-fraction-on-master (TensorFlower Gardener, 2018-08-24)
  PiperOrigin-RevId: 210127837
* Directly import tensor.proto.h (the transitive import will be removed from tensor.h soon). (Eugene Brevdo, 2018-08-23)
  We plan to remove the import of variant.h from tensor.h; variant.h brings in a lot of transitive imports (including protos like tensor.proto.h). To prepare, we're updating code that this will break.
  PiperOrigin-RevId: 210043667
* Sorted the per-device summary printout by device name to improve readability. (A. Unique TensorFlower, 2018-08-23)
  PiperOrigin-RevId: 210007888
* Add graphdef version number to GrapplerFunctionItem. (Doe Hyun Yoon, 2018-08-23)
  This solves the problem when passing a scalar tensor to a function op input, as Placeholder shape inference outputs an unknown shape for scalars if the graphdef version is < 24.
  PiperOrigin-RevId: 210007276
* Also set FTZ/rounding modes in ConstantFold(). (Benjamin Kramer, 2018-08-23)
  Otherwise executing the op behaves differently from constant folding it.
  PiperOrigin-RevId: 209949852
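  A hedged sketch of what flush-to-zero / denormals-are-zero modes look like, written with x86 SSE intrinsics (this is not the ConstantFold() code, which uses TensorFlow's own platform helpers): if kernels execute with these modes enabled, folding the same op at graph-optimization time with them disabled can yield different results for denormal values.

```cpp
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE

int main() {
  // Per-thread SSE control-register modes: flush denormal results to zero
  // and treat denormal inputs as zero.
  _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
  _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

  volatile float tiny = 1e-42f;         // denormal in single precision
  volatile float result = tiny * 1.0f;  // becomes 0.0f under FTZ/DAZ
  return result == 0.0f ? 0 : 1;
}
```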
* Set memory_size for virtual_cluster to make user-specified per_process_gpu_memory_fraction take effect. (pengwa, 2018-08-23)
* [tf.data] Cosmetic changes to tf.data optimizers: (Rachel Lim, 2018-08-22)
  - Use graph_utils::GetInputNode in optimizers.
  - Moved Python optimization tests into their own files from optimize_dataset_op_test.
  PiperOrigin-RevId: 209819734
* Replaced calls to tensorflow::StringPiece::ToString with string conversions. (A. Unique TensorFlower, 2018-08-22)
  That is, instances of sp.ToString() are replaced with string(sp). This will allow tensorflow::StringPiece::ToString to be removed, which is necessary before it can be replaced with absl::string_view.
  PiperOrigin-RevId: 209806694
* [tf.data] Add an optimization that vectorizes map functions and swaps the order of Map->Batch dataset transformations to Batch->Map. (Rachel Lim, 2018-08-21)
  PiperOrigin-RevId: 209674669
* Small fix to MaybeGetMinimumShape() in op_level_cost_estimator. (Doe Hyun Yoon, 2018-08-15)
  (1) Previously it set the unknown-shape flag for scalar input; now it returns a TensorShapeProto with rank equal to the expected rank and all dims set to 1, and the unknown-shape flag is not set.
  (2) Also fixed a bug: when the rank is known but dim_size() < rank (note that dim_size() may be non-zero), we previously called add_dim() with dim 1 rank times, which increments dim_size() by rank, whereas we expect dim_size() to equal rank.
  (3) Added a test for MaybeGetMinimumShape().
  PiperOrigin-RevId: 208845501
* Add two counters to the Costs struct: number of ops processed/predicted in total, and number of ops predicted with unknown shapes. (Peter Ma, 2018-08-10)
  PiperOrigin-RevId: 208274158