path: root/tensorflow/core/protobuf/rewriter_config.proto
* Add timeout mechanism to Grappler meta optimizer. (A. Unique TensorFlower, 2018-10-08)
  This is only a best-effort mechanism, since the meta optimizer only checks if it has been
  cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer
  if necessary.
  PiperOrigin-RevId: 216234262
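  A minimal sketch of setting this budget from a session config, assuming the timeout landed
  as a `meta_optimizer_timeout_ms` field on RewriterConfig (field name is an assumption):

  ```python
  from tensorflow.core.protobuf import config_pb2

  config = config_pb2.ConfigProto()
  rewrite = config.graph_options.rewrite_options
  # Assumed field: wall-clock budget for the whole meta-optimizer run.
  # Best-effort only: cancellation is checked between sub-optimizers, so a
  # long-running sub-optimizer can still overshoot the budget.
  rewrite.meta_optimizer_timeout_ms = 60 * 1000  # one minute
  ```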
* Automated rollback of commit cb98ceba9cff8c10ee3c7e89dc8925c88b28118e. (A. Unique TensorFlower, 2018-10-01)
  PiperOrigin-RevId: 215254762
* Add a rewrite_config option to disable the meta_optimizer. (A. Unique TensorFlower, 2018-09-28)
  PiperOrigin-RevId: 215014737
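  A hedged sketch of the opt-out, assuming the option landed as a boolean
  `disable_meta_optimizer` field:

  ```python
  from tensorflow.core.protobuf import config_pb2

  config = config_pb2.ConfigProto()
  # Assumed flag from this commit: skip all Grappler rewrites for this session.
  config.graph_options.rewrite_options.disable_meta_optimizer = True
  ```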
* Fix support for custom optimizers in explicit schedule. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214794973
* Turn on PinToHostOptimizer by default. (A. Unique TensorFlower, 2018-09-24)
  PiperOrigin-RevId: 214275960
* Add PinToHostOptimizer to grappler: force small ops to happen on CPU (instead of GPU).
  (A. Unique TensorFlower, 2018-09-22)
  This avoids many unnecessary CPU<->GPU memcpys and syncs.
  PiperOrigin-RevId: 214108484
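  Since the commit above turns the optimizer on by default, the interesting knob is turning
  it off. A sketch, assuming the Toggle is exposed as `pin_to_host_optimization`:

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  # Assumed Toggle: OFF restores the old placement if pinning small ops to the
  # host CPU regresses a particular model.
  config.graph_options.rewrite_options.pin_to_host_optimization = (
      rewriter_config_pb2.RewriterConfig.OFF)
  ```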
* Reduce Grappler overhead by skipping optimizers when the graph is tiny. (A. Unique TensorFlower, 2018-06-18)
  PiperOrigin-RevId: 201095811
* Add ScopedAllocatorOptimizer in support of CollectiveReduce. (A. Unique TensorFlower, 2018-05-25)
  The efficiency of CollectiveReduce is greatly improved by merging multiple parallel
  reductions over smaller tensors into a single reduction over a larger tensor that is the
  concatenation of the smaller tensors. Because CollectiveReduce is essentially an
  element-wise array operation which operates on a 1-D reshape of the input tensor, it is
  eligible for a ScopedAllocation optimization.

  The optimization works by looking for serially independent instances of CollectiveReduce
  that lie within the same name-scope tier and have the same control-flow (e.g. loop)
  embedding structure. Where two or more such nodes are found, the upstream nodes that
  generate their inputs are modified to write their outputs into consecutive regions of a
  single tensor buffer maintained by a ScopedAllocator. The multiple CollectiveReduce nodes
  are then replaced by a single CollectiveReduce that operates in-place on the backing
  buffer.

  The effectiveness of the optimization depends on there being candidate CollectiveReduce
  nodes with these characteristics that become eligible for execution at close to the same
  time. If the name scope is too large, and includes nodes that become execution-eligible
  at very different times, this graph rewrite could result in a slowdown.

  Note that this optimization is experimental: it is not guaranteed to work, especially for
  ops other than CollectiveReduce.
  PiperOrigin-RevId: 198089642
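  A hedged sketch of opting in, assuming a `scoped_allocator_optimization` Toggle plus a
  `scoped_allocator_opts.enable_op` list for restricting the rewrite (both assumptions):

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  rewrite = config.graph_options.rewrite_options
  # Assumed fields: enable the experimental rewrite, and limit it to
  # CollectiveReduce, the only op the commit message calls out as supported.
  rewrite.scoped_allocator_optimization = rewriter_config_pb2.RewriterConfig.ON
  rewrite.scoped_allocator_opts.enable_op.append("CollectiveReduce")
  ```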
* Merge changes from github. (Yifei Feng, 2018-05-24)
  Revert #18413: too many internal test failures due to the name scope change caused by this
  change. Revert #18192: cannot use re2::StringPiece internally; need an alternative for the
  set call. Will pull and clean this up in a separate change.
  PiperOrigin-RevId: 197991247
* Turn on dead branch elimination, shape optimization, and remapping by default. (Benoit Steiner, 2018-05-21)
  PiperOrigin-RevId: 197439191
* Optimize batch normalization when possible. (Benoit Steiner, 2018-05-15)
  PiperOrigin-RevId: 196762618
* Started work on a shape optimizer. (Benoit Steiner, 2018-05-10)
  PiperOrigin-RevId: 196170800
* Merge changes from github. (Yifei Feng, 2018-04-23)
  PiperOrigin-RevId: 194031845
* Add a config option to run Grappler optimizers more than once. (A. Unique TensorFlower, 2018-04-02)
  Don't crash in the layout optimizer if no cluster is given. Clean up
  Cluster::DisableOptimizer() so it actually turns all current optimizers off.
  PiperOrigin-RevId: 191368433
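  A sketch of the multi-pass option, assuming it landed as a `meta_optimizer_iterations`
  enum on RewriterConfig:

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  # Assumed enum: run the whole optimizer pipeline twice, so simplifications
  # exposed by the first pass can be picked up by the second.
  config.graph_options.rewrite_options.meta_optimizer_iterations = (
      rewriter_config_pb2.RewriterConfig.TWO)
  ```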
* Add skeleton code for DebugStripper. (A. Unique TensorFlower, 2018-03-25)
  PiperOrigin-RevId: 190391193
* Enable stack push removal optimization by default. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189641729
* Turn on function optimization by default. (Benoit Steiner, 2018-03-12)
  PiperOrigin-RevId: 188722505
* Automated g4 rollback of changelist 187582263. (A. Unique TensorFlower, 2018-03-02)
  PiperOrigin-RevId: 187657654
* Automated g4 rollback of changelist 187563544. (Gunhan Gulsoy, 2018-03-01)
  PiperOrigin-RevId: 187582263
* Grappler: Change memory optimizer recomputation name prefix into a regexp. (A. Unique TensorFlower, 2018-03-01)
  This allows us to match any node names, especially those under different scopes. This
  still performs a prefix regexp match, so it is basically backwards compatible.
  PiperOrigin-RevId: 187563544
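  A hedged sketch of what the regexp form enables, assuming the field is
  `memory_optimizer_target_node_name_scope` and that, as the commit says, the pattern is
  anchored as a prefix match (field name and pattern are assumptions):

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  rewrite = config.graph_options.rewrite_options
  rewrite.memory_optimization = rewriter_config_pb2.RewriterConfig.MANUAL
  # Assumed pattern: a prefix regexp now matches gradient nodes under every
  # replica scope (tower_0/, tower_1/, ...), not just one literal prefix.
  rewrite.memory_optimizer_target_node_name_scope = "tower_[0-9]+/gradients/"
  ```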
* Register the function optimizer in the meta optimizer. (Benoit Steiner, 2018-02-27)
  Made sure it's turned OFF by default until more validation is done.
  PiperOrigin-RevId: 187211957
* Add documentation to Grappler RewriterConfig giving a short description of what each
  optimizer does. (A. Unique TensorFlower, 2018-02-27)
  PiperOrigin-RevId: 187143156
* Add custom registered graph optimizers run by MetaOptimizer. (Patrick Nguyen, 2018-02-23)
  PiperOrigin-RevId: 186837828
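  A sketch of hooking in such an optimizer, assuming a repeated `custom_optimizers` message
  on RewriterConfig; "MyOptimizer" is a hypothetical name that would have to match a
  C++-side registration:

  ```python
  from tensorflow.core.protobuf import config_pb2

  config = config_pb2.ConfigProto()
  custom = config.graph_options.rewrite_options.custom_optimizers.add()
  # Hypothetical name: must match an optimizer registered with the meta
  # optimizer's custom graph optimizer registry on the C++ side.
  custom.name = "MyOptimizer"
  ```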
* Turn on swapping heuristic by default to better manage memory usage on GPU. (Benoit Steiner, 2018-02-20)
  PiperOrigin-RevId: 186356358
* Add empty scaffolding for loop optimizers in Grappler. (A. Unique TensorFlower, 2018-02-13)
  PiperOrigin-RevId: 185554126
* Enable the use of scheduling heuristics to reduce peak memory usage by default. (Benoit Steiner, 2018-02-12)
  PiperOrigin-RevId: 185413855
* Enable layout optimizer by default. (Yao Zhang, 2018-02-06)
  PiperOrigin-RevId: 184707084
* Implemented heuristic to decrease memory utilization of AddN nodes. (Benoit Steiner, 2018-01-11)
  PiperOrigin-RevId: 181644649
* Enable dependency optimizer in Grappler by default. (A. Unique TensorFlower, 2017-12-04)
  PiperOrigin-RevId: 177835459
* Don't enable dependency optimizer by default. (A. Unique TensorFlower, 2017-11-15)
  PiperOrigin-RevId: 175857095
* Add a control dependency optimizer to Grappler. (A. Unique TensorFlower, 2017-11-14)
  The first two rewrites implemented are:
  1. Turn nodes with only control outputs into NoOps, if we know that they are safe to
     remove. Such nodes can be produced, e.g., by rewrite rules in the arithmetic optimizer.
  2. Completely disconnect NoOp nodes with at most 1 input or at most 1 output by rerouting
     their inputs to their outputs. The restriction on fan-in/fan-out guarantees that we
     reduce the number of control dependencies in the graph.
  The two (slightly) non-trivial cases are:

      // Case a)
      // x --^> +------+               x --^> +---+
      // y --^> | NoOp | --^> a   ==>  y --^> | a |
      // ...    |      |               ...    |   |
      // z --^> +------+               z --^> +---+
      //
      // Case b)
      //        +------+ --^> a        +---+ --^> a
      // x --^> | NoOp | --^> b   ==>  | x | --^> b
      //        |      | ...           |   | ...
      //        +------+ --^> c        +---+ --^> c

  PiperOrigin-RevId: 175780178
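  Both rewrites run under the dependency optimizer's switch. A hedged sketch of disabling
  it, assuming the `dependency_optimization` Toggle on RewriterConfig:

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  # Assumed Toggle: OFF keeps NoOps and redundant control edges in place,
  # e.g. when debugging graphs whose structure should not be rewritten.
  config.graph_options.rewrite_options.dependency_optimization = (
      rewriter_config_pb2.RewriterConfig.OFF)
  ```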
* Use Toggle instead of bool to make the layout optimizer name and usage consistent with
  other optimizers. (Yao Zhang, 2017-11-14)
  PiperOrigin-RevId: 175743440
* Add heuristics to trigger swapping. (Benoit Steiner, 2017-11-02)
  PiperOrigin-RevId: 174376046
* [Grappler] Remove reshapes whose source shape and destination shape are equal. (Jingyue Wu, 2017-10-13)
  Also makes ArithmeticOptimizer::Optimize run shape inference at the beginning, and clear
  _output_shapes at the end.
  PiperOrigin-RevId: 172133948
* Fixed outdated comment. (Benoit Steiner, 2017-09-27)
  PiperOrigin-RevId: 170276755
* Added preliminary support for arithmetic simplifications. (Benoit Steiner, 2017-08-16)
  PiperOrigin-RevId: 165476236
* Merge changes from github. (A. Unique TensorFlower, 2017-08-15)
  END_PUBLIC
  --- Commit 9f81374c3 authored by raymondxyang<zihao.yang@microsoft.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Add option for build more python tests in Cmake (#11853)
      * Ignore Windows built project
      * Fix deprecated methods in tf.contrib.python
      * Fix regex match for Windows build in contrib.keras
      * Fix Regex match for Windows build in session_bundle
      * Fix deprecated methods
      * Fix regex match for Windows
      * Fix compatibility issue with Python 3.x
      * Add missing ops into Windows build for test
      * Enabled more testcases for Windows build
      * Clean code and fix typo
      * Add conditional cmake mode for enabling more unit testcase
      * Add Cmake mode for major Contrib packages
      * Add supplementary info in RAEDME for new cmake option
      * Update tf_tests after testing with TF 1.3
      * Clean code and resolve conflicts
      * Fix unsafe regex matches and format code
      * Update exclude list after testing with latest master branch
      * Fix missing module
  --- Commit 98f0e1efe authored by Yong Tang<yong.tang.github@outlook.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Dynamic ksize and strides with MaxPool (#11875)
      * Dynamic ksize with max_pool
        This fix tries to fix the issue raised in 4746 where ksize is static (attr) with
        max_pool. This fix changes ksize to input tensor so that it is dynamic now. This fix
        fixes 4746.
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
      * Add dynamic ksize to MaxPoolGrad and MaxPoolGradGrad
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
      * Add test cases for max_pool_v2
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
      * Fix GPU Jenkins issue.
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
      * Enable MaxPoolV2 in GPU
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
      * Hide MaxPoolV2 and other fixes.
        Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
  --- Commit 02d6bc185 authored by Bairen Yi<byronyi@users.noreply.github.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: remove useless variable (#12212)
  --- Commit ed6b0d905 authored by namrata-ibm<bhavenamrata@gmail.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Adding support for s390x in calculation of cpu_frequency (#12201)
  --- Commit 627dfc9dd authored by Taehoon Lee<taehoonlee@snu.ac.kr>, committed by Taehoon Lee<taehoonlee@snu.ac.kr>: Fix typos
  --- Commit c0f9b0a91 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: In fast-math mode emit a tanh that has a faster min/max.
      PiperOrigin-RevId: 164943597
  --- Commit 87605f3d6 authored by Kay Zhu<kayzhu@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Use HloEvaluator for ComputeConstant, remove the need of a dedicated compute constant backend.
      PiperOrigin-RevId: 164940970
  --- Commit 881de45c2 authored by Taehoon Lee<me@taehoonlee.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Add bool type supports for GPU kernels (#11927)
      * Add bool type supports for GPU kernels
      * Add bool type test codes for GPU kernels
  --- Commit eeacdcdb1 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Add missing "CPU" suffix in registrations.
      PiperOrigin-RevId: 164939527
  --- Commit de01be952 authored by namrata-ibm<bhavenamrata@gmail.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Adding support for Big Endian in graph_constructor_test and wav_io (#12179)
  --- Commit 26719d29f authored by QingYing Chen<pkudysj@126.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Implement CRF decode (Viterbi decode) for tensor (#12056)
      * Implement CRF decoding for tensors
      * add test code for tensor version's CRF decoding
      * made modifications according to pylint
      * add some comments for crf decode
      * remove useless code
      * add comments at the top comment of crf module and add more comments in crf_test
      * capitalize first char of first word in comments
      * replace crf_decode test code with a deterministic example
  --- Commit f9a81ca2f authored by Pete Warden<pete@petewarden.com>, committed by gunan<gunan@google.com>: Create CI build script for Raspberry Pi (#12190)
      * Create CI build script for Raspberry Pi
      * Moved location of Pi build script
  --- Commit e2a163a90 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Merge code from PR #11940 with internal changes from cl/164796436, and update Python tests to also run on GPU.
      PiperOrigin-RevId: 164929133
  --- Commit 08bbfa187 authored by Taehoon Lee<me@taehoonlee.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Fix typos (#12195)
  --- Commit ab96f41fb authored by Luke Iwanski<luke@codeplay.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: [OpenCL] Extends matmul_benchmark.py to cover SYCL (#11697)
      * [OpenCL] Extends matmul_benchmark.py to cover SYCL
      * Fixed typo
      * /gpu:0 -> /device:GPU:0
      * Fixes control_flow_ops_py_test
      * /gpu: -> /device:GPU:
      * Fixes //tensorflow/python/profiler/internal:run_metadata_test
      * gpu: -> GPU:
      * Fixes tfprof_node
      * [OpenCL] Fixes device path to name with many colons (#123)
        The device path is constructed from a device name by replacing all colons with
        underscores. Some device names contain more than one colon, for example
        'device:SYCL:0' which gives a path 'device_SYCL_0'. The previous code would not
        convert this back to the original device name, but rather to 'device:SYCL_0'. An
        alternative fix would be to convert all underscores to colons in the device name
        (i.e. remove the restriction inside `replace("_", ":", 1)`), however I'm not sure if
        there are any device names which contain underscores.
      * If no gpu device aviable fake one
      * gpu: -> device:GPU
      * Fixes profiler test
      * /gpu:x -> /device:GPU:x
      * Fixes debug_io_utils_test.cc test
      * Fixes device_name_utils_test.cc
  --- Commit 35e7a3665 authored by Yong Tang<yong.tang.github@outlook.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: Remove unneeded casting of int64 for reverse_sequence (#12192)
      This fix remove unneeded cast of int64 for reverse_sequence:
      ```
      lengths = math_ops.to_int64(lengths)
      ```
      as int32 has already been enabled for reverse_sequence.
      Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
  --- Commit 9fba8c185 authored by Anna R<annarev@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Add benchmark dashboard link to benchmarks doc. Also, I added a link and description for Benchmarks page to Community index page.
      PiperOrigin-RevId: 164924906
  --- Commit bb6f32fa7 authored by Mark Heffernan<meheff@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Make HloAliasAnalysis updatable after changes to the HLO graph. As part of this change make HloAliasAnalysis a thinner layer which basically only holds a map from HloValue to HloBuffer and vice versa.
      PiperOrigin-RevId: 164923041
  --- Commit 9103096c1 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by Thomas Köppe<tkoeppe@google.com>: Merged commit includes the following changes: 164923041 by meheff: Make HloAliasAnalysis updatable after changes to the HLO graph. As part of this change make HloAliasAnalysis a thinner layer which basically only holds a map from HloValue to HloBuffer and vice versa.
      PiperOrigin-RevId: 164923041
  --- Commit 822603aed authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Merging sibling fusion instruction using multi_output_fusion
      PiperOrigin-RevId: 164920220
  --- Commit c035aa2a8 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Go: Update generated wrapper functions for TensorFlow ops.
      PiperOrigin-RevId: 164917891
  --- Commit e1e81d9ba authored by Luke Iwanski<luke@codeplay.com>, committed by Rasmus Munk Larsen<rmlarsen@google.com>: [OpenCL] Fixes double memcpy bug (#151) (#12173)
      * [OpenCL] Fixes double memcpy bug (#151)
        As the debg CopyOp is called on a Tensor without type, we need to use the DataType
        enum to get type information, and use this to pass the type on to Eigen. This is a
        workaround Eigen's need to have a type when calling memcpy. If the Eigen memcpy can
        be provided without a type requirement, then the memcpy in sycl_util is unnecessary.
      * Acts on feedback from: #12173/files/32cb12a9001b672425867b5a3110fd98e737a20b#r132496277
  --- Commit d9ca2d86d authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal change
      PiperOrigin-RevId: 164916465
  --- Commit b8d13d218 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove more parts of DCASGD missed in the first pass. (47949b)
      PiperOrigin-RevId: 164914552
  --- Commit 73b3d52c7 authored by Alexandre Passos<apassos@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: cmake fix
      PiperOrigin-RevId: 164911656
  --- Commit 2173b5b0a authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Allow TFE_TensorHandleCopyToDevice to have the same device as src and destination. It will reuse the same underlying buffer in those cases.
      PiperOrigin-RevId: 164909906
  --- Commit 13eb3b90e authored by Alexandre Passos<apassos@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Experimental C and Python APIs to invoke TensorFlow kernels on concrete values.
      PiperOrigin-RevId: 164902588
  --- Commit 7dfabcc01 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Initialize ExecutionOptions in ComputeConstant to default values.
      PiperOrigin-RevId: 164894867
  --- Commit c8897e9bc authored by Benoit Steiner<bsteiner@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Static required time computation
      PiperOrigin-RevId: 164894645
  --- Commit 076158f9b authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Enable implicit->explicit conversion by default.
      PiperOrigin-RevId: 164890915
  --- Commit 58c4a4cb1 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Bugfix: number of input channels is not necessarily in the last dimension, after introduction of data_format param.
      PiperOrigin-RevId: 164889729
  --- Commit 8f9b1af8a authored by Igor Saprykin<isaprykin@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Recover MonitoredSession when the Coordinator is requested to stop with one of the _PREEMPTION_ERRORS.
      When SyncReplicasOptimizer is used, a preemption in the Coordinator may result in two
      cases: Case 1) the session gets silently marked as complete. Case 2) the session gets
      stuck. This CL aims to solve and verify solutions for both of these problems. Fix 1
      changes the should_stop logic. Fix 2 changes the CoordinatedSession.run() logic.
      SyncReplicasOptimizer runs a separate set of threads using a Coordinator instance.
      Those threads do FIFOQueue.enqueue; the main thread does a blocking FIFOQueue.dequeue.
      `sync_token_q` FIFOQueue is on parameter-servers. When one of the PS instances gets
      preempted, an AbortedError causes the Coordinator to stop via request_stop(ex). That by
      itself changes the state of MonitoredSession.should_stop() to True (Fix 1). Results of
      the blocking Dequeue operation are sent to the chief worker via Recv. What happens next
      depends on the amount of tokens in `sync_token_q`. If there are enough for the next
      call to Dequeue to return, then the low-level "tf session run() call" returns. The next
      iteration of the `while not MonitoredSession.should_stop()` loop decides that the
      training is complete (Case 1). If there are not enough tokens in `sync_token_q`, then
      the blocking Dequeue is going to keep waiting for them. This results in the graph
      execution getting stuck and the whole session getting garbage collected after 10
      minutes (Case 2). We decided to fix that by re-creating a session after it gets garbage
      collected (Fix 2). An alternative was to try to cancel the pending Dequeue operation,
      but it's not clear that it is the right thing to do and it is also not easy.
      PiperOrigin-RevId: 164888390
  --- Commit 46e4de6e5 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>: Undo loop fusion changes for now as they seem to be altering a few results.
  END_PUBLIC
  RELNOTES: n/a
  BEGIN_PUBLIC
  Automated g4 rollback of changelist 164825735
  PiperOrigin-RevId: 165340331
* Grappler memory optimization: allow inputs to gradients with non-standard names to be
  recomputed. (Allen Lavoie, 2017-07-31)
  Includes Python tests for name-scoped gradients.
  PiperOrigin-RevId: 163720208
* Updated the memory optimization config to introduce an explicit default value. (Benoit Steiner, 2017-07-26)
  This will make it possible to change the default behavior in the future by updating the
  meta optimizer code to interpret that default value differently (e.g. we could assume
  default means heuristics). The default value remains OFF.
  PiperOrigin-RevId: 163239483
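  A sketch of the three-way distinction this explicit default makes possible, assuming the
  memory optimization enum values on RewriterConfig:

  ```python
  from tensorflow.core.protobuf import config_pb2, rewriter_config_pb2

  config = config_pb2.ConfigProto()
  rewrite = config.graph_options.rewrite_options
  # Leaving the field unset keeps the assumed DEFAULT_MEM_OPT value (0), which
  # delegates the choice to the meta optimizer -- the point of this commit.
  # An explicit value opts out (or in) regardless of the default:
  rewrite.memory_optimization = rewriter_config_pb2.RewriterConfig.NO_MEM_OPT
  ```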
* Introduced a default setting for constant folding, currently set to OFF. Will be turned
  on later. (Benoit Steiner, 2017-07-26)
  PiperOrigin-RevId: 163233994
* Add a heuristic to Grappler's memory optimizer to recompute elementwise ops. (A. Unique TensorFlower, 2017-06-14)
  The current heuristic saves memory in simple conv->BN->relu->conv setups. It wastes
  computation and does not save memory for ResNet-like architectures (everything gets
  grouped together and recomputed just before gradients are executed). It's also using a
  very simple list of ops to recompute. At the moment there is no advantage to this over
  just wrapping each layer in a Defun.

  However, there is a bit of infrastructure which will be re-used once smarter heuristics
  come around (namely finding trigger control dependencies and doing the re-writing). And
  in the short term, even a few dumb heuristics should make things better for many networks
  (I just don't want to make this CL any more complicated than it already is).
  PiperOrigin-RevId: 159026716
* Remove tf.RewriterConfig from the TensorFlow Python API. (A. Unique TensorFlower, 2017-06-14)
  This is an API-only change. Rewriter behavior is unaffected. tf.RewriterConfig has been
  excluded from the 1.2 release, and was not in previous releases, and so is not subject to
  semantic versioning. It needs a bit of work before we fix an API. Graph rewriting is
  still available, just not as tf.RewriterConfig. Instead add an explicit import:

      from tensorflow.core.protobuf import rewriter_config_pb2

  Then switch tf.RewriterConfig to rewriter_config_pb2.RewriterConfig (likewise for
  tf.AutoParallelOptions). Graph rewriting is subject to change, and has no API stability
  guarantee.
  PiperOrigin-RevId: 158991934
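  The migration the commit prescribes, spelled out (the import and class names are taken
  verbatim from the commit message):

  ```python
  # Before (no longer available): tf.RewriterConfig, tf.AutoParallelOptions
  from tensorflow.core.protobuf import rewriter_config_pb2

  rewriter_config = rewriter_config_pb2.RewriterConfig()
  auto_parallel = rewriter_config_pb2.AutoParallelOptions()
  ```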
* Add auto parallelization to meta optimizer. Enable MetaOptimizer if any one of the
  optimizers is on. (Yao Zhang, 2017-04-08)
  Change: 152598517
* Added the memory optimizer to the meta optimizer. (Benoit Steiner, 2017-04-05)
  Change: 152323689
* Add a way to specify the optimization order; refactor and add constant folding to meta
  optimizer. (Yao Zhang, 2017-04-04)
  Change: 152193646
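  A hedged sketch of an explicit schedule, assuming the order is expressed as a repeated
  `optimizers` name list and that "constfold" and "memory" are the registered optimizer
  names (both assumptions):

  ```python
  from tensorflow.core.protobuf import config_pb2

  config = config_pb2.ConfigProto()
  rewrite = config.graph_options.rewrite_options
  # Assumed behavior: a non-empty list replaces the default pipeline and fixes
  # the order in which the named optimizers run.
  rewrite.optimizers.extend(["constfold", "memory"])
  ```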
* Added a config option to control model pruning. (Benoit Steiner, 2017-03-24)
  Change: 151130707
* Adds java package and outer class name to rewriter_config.proto. (A. Unique TensorFlower, 2017-03-20)
  Change: 150664674
* Created a proto to configure the amount of graph rewriting taking place. (Benoit Steiner, 2017-03-20)
  Change: 150648084