author | Asim Shankar <ashankar@google.com> | 2018-03-07 12:03:56 -0800 |
---|---|---|
committer | TensorFlower Gardener <gardener@tensorflow.org> | 2018-03-07 12:10:42 -0800 |
commit | 37cef895bfe06913477b87917cbee7284aefa7cd (patch) | |
tree | 4f05a013578c0459a52fc5e6448bb3dfc2d04971 /tensorflow/python/training/adam.py | |
parent | 808b569e85df8d63590740f05bc14d964efc4801 (diff) | |
eager: Rename in_eager_mode to executing_eagerly and get rid of in_graph_mode.
This is in preparation for introducing one public, stable symbol: tf.executing_eagerly()
(i.e., part of moving APIs related to eager execution from "contrib" to a namespace
where we provide API stability guarantees).
PiperOrigin-RevId: 188212646
Diffstat (limited to 'tensorflow/python/training/adam.py')
-rw-r--r-- | tensorflow/python/training/adam.py | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/tensorflow/python/training/adam.py b/tensorflow/python/training/adam.py
index c92f6fc301..006e360389 100644
--- a/tensorflow/python/training/adam.py
+++ b/tensorflow/python/training/adam.py
@@ -106,10 +106,10 @@ class AdamOptimizer(optimizer.Optimizer):
     self._updated_lr = None
 
   def _get_beta_accumulators(self):
-    if context.in_graph_mode():
-      graph = ops.get_default_graph()
-    else:
+    if context.executing_eagerly():
       graph = None
+    else:
+      graph = ops.get_default_graph()
     return (self._get_non_slot_variable("beta1_power", graph=graph),
             self._get_non_slot_variable("beta2_power", graph=graph))
```
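The hunk inverts the branch: rather than asking whether we are in graph mode, the code asks whether eager execution is active and falls through to the graph-mode path otherwise. A minimal sketch of the same control flow, using a hypothetical `FakeContext` stand-in instead of TensorFlow's real `context` module (which we do not import here):

```python
class FakeContext:
    """Stand-in for the eager-execution check; the real TensorFlow context
    module tracks whether eager execution is enabled for the process."""

    def __init__(self, eager):
        self._eager = eager

    def executing_eagerly(self):
        return self._eager


def pick_graph(ctx, default_graph="default_graph"):
    # Mirrors the refactored _get_beta_accumulators: under eager execution
    # there is no graph to key the non-slot variables on (None), while
    # graph mode falls back to the default graph.
    if ctx.executing_eagerly():
        graph = None
    else:
        graph = default_graph
    return graph


print(pick_graph(FakeContext(eager=True)))   # None
print(pick_graph(FakeContext(eager=False)))  # default_graph
```

Putting the eager case first matches the direction of the rename: `executing_eagerly()` is the one predicate being promoted to a stable public API, so call sites test it directly instead of its graph-mode negation.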