author    A. Unique TensorFlower <gardener@tensorflow.org>  2017-06-14 14:33:41 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>   2017-06-14 14:40:52 -0700
commit    f0a8bd95c7f537788e75b214b25d5ed542c2354f (patch)
tree      5c22fb46e550a1ef45e1dd32b7785ea14b9e2f20 /tensorflow/core/protobuf
parent    a7c36173cabcc1289a836e8143accb5f0914b19a (diff)
Add a heuristic to Grappler's memory optimizer to recompute elementwise ops
The current heuristic saves memory in simple conv->BN->relu->conv setups. It wastes computation and does not save memory for ResNet-like architectures (everything gets grouped together and recomputed just before gradients are executed). It also uses a very simple list of ops to recompute. At the moment there is no advantage to this over just wrapping each layer in a Defun.

However, there is a bit of infrastructure which will be re-used once smarter heuristics come around (namely finding trigger control dependencies and doing the re-writing). And in the short term, even a few dumb heuristics should make things better for many networks (I just don't want to make this CL any more complicated than it already is).

PiperOrigin-RevId: 159026716
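For context, a minimal sketch of how the new HEURISTICS mode could be switched on from Python once this proto lands, assuming the TF 1.x ConfigProto/GraphOptions plumbing that exposes rewrite_options; everything outside rewriter_config.proto here is illustrative:

```python
# Sketch: enable the experimental recomputation heuristic via RewriterConfig.
# Assumes the TF 1.x Session/ConfigProto API; the session usage is illustrative.
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

config = tf.ConfigProto()
config.graph_options.rewrite_options.memory_optimization = (
    rewriter_config_pb2.RewriterConfig.HEURISTICS)

with tf.Session(config=config) as sess:
    # Grappler's meta-optimizer runs over the graph at session setup; with
    # HEURISTICS it may rewrite cheap elementwise ops to be recomputed just
    # before the gradient computation instead of keeping them live.
    ...
```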
Diffstat (limited to 'tensorflow/core/protobuf')
-rw-r--r--  tensorflow/core/protobuf/rewriter_config.proto  25
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/tensorflow/core/protobuf/rewriter_config.proto b/tensorflow/core/protobuf/rewriter_config.proto
index b21d42f4fe..5480bdaad8 100644
--- a/tensorflow/core/protobuf/rewriter_config.proto
+++ b/tensorflow/core/protobuf/rewriter_config.proto
@@ -15,21 +15,40 @@ message RewriterConfig {
// Graph rewriting is experimental and subject to change, not covered by any
// API stability guarantees.
+ // Configuration options for the meta-optimizer. Unless otherwise noted, these
+ // configuration options do not apply to explicitly triggered optimization
+ // passes in the optimizers field.
+
bool optimize_tensor_layout = 1;
bool disable_model_pruning = 2;
bool constant_folding = 3;
enum MemOptType {
- // Fully disabled
+ // Disabled in the meta-optimizer.
NO_MEM_OPT = 0;
- // Driven by manual annotations
+ // Driven by manual op-level annotations.
MANUAL = 1;
+ // Driven by heuristics. The behavior of these heuristics is subject to
+ // change. Currently includes an experimental recomputation heuristic.
+ HEURISTICS = 2;
}
+ // Configures memory optimization passes through the meta-optimizer. Has no
+ // effect on manually requested memory optimization passes in the optimizers
+ // field.
MemOptType memory_optimization = 4;
+ // Configures AutoParallel optimization passes either through the
+ // meta-optimizer or when manually specified through the optimizers field.
AutoParallelOptions auto_parallel = 5;
// If non-empty, will use this as an alternative way to specify a list of
- // optimizations to turn on and the order of the optimizations.
+ // optimizations to turn on and the order of the optimizations (replacing the
+ // meta-optimizer).
+ //
+ // Of the RewriterConfig options, only the AutoParallel configuration options
+ // (the auto_parallel field) apply to manually requested optimization passes
+ // ("autoparallel"). Memory optimization passes ("memory") invoked here are
+ // not configurable (in contrast to memory optimization passes through the
+ // meta-optimizer) and act only on manual op annotations.
repeated string optimizers = 100;
}
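To illustrate the distinction drawn in the new comments, here is a hedged sketch of the alternative, explicitly ordered path through the optimizers field; the pass names ("autoparallel", "memory") come from the comment above, and the AutoParallelOptions field names are assumed from the TF 1.x proto:

```python
# Sketch: bypass the meta-optimizer and request passes explicitly by name.
# Per the comment above, only the auto_parallel options carry over to the
# manually requested "autoparallel" pass, and a "memory" pass requested this
# way acts only on manual op annotations (HEURISTICS does not apply here).
import tensorflow as tf

config = tf.ConfigProto()
rewrite_options = config.graph_options.rewrite_options
rewrite_options.optimizers.extend(["autoparallel", "memory"])
# Assumed AutoParallelOptions fields (enable, num_replicas); honored by
# the "autoparallel" pass.
rewrite_options.auto_parallel.enable = True
rewrite_options.auto_parallel.num_replicas = 2
```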