author Mustafa Ispir <ispir@google.com> 2018-08-10 10:56:50 -0700
committer TensorFlower Gardener <gardener@tensorflow.org> 2018-08-10 11:01:06 -0700
commit 729caa48a34683cd38bb14c48bb63e8cddc88d60 (patch)
tree 90a567ff5e565b36afcc335e16d96e4ae56b8b2e
parent cae772e4436dce73e82256150029a9d250b800a1 (diff)
Update the docs to clarify re-creation of the evaluation graph.

Fixes #19062
PiperOrigin-RevId: 208235214
-rw-r--r-- tensorflow/python/estimator/training.py | 4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/tensorflow/python/estimator/training.py b/tensorflow/python/estimator/training.py
index a01b2300dd..bb1305767f 100644
--- a/tensorflow/python/estimator/training.py
+++ b/tensorflow/python/estimator/training.py
@@ -323,6 +323,10 @@ def train_and_evaluate(estimator, train_spec, eval_spec):
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
+ Note that in the current implementation `estimator.evaluate` will be called
+ multiple times. This means that the evaluation graph (including the
+ `eval_input_fn`) will be re-created for each `evaluate` call.
+ `estimator.train` will be called only once.
Example of distributed training:
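The call pattern the added note describes can be illustrated without TensorFlow. The sketch below is a minimal pure-Python stand-in (the function names `train`, `evaluate`, and `train_and_evaluate_sketch` are illustrative, not the real Estimator implementation): `train` runs once, while `evaluate` — and therefore `eval_input_fn` — runs once per evaluation, which is why the evaluation input pipeline is rebuilt on every call.

```python
# Illustrative sketch only: mimics the call pattern described in the docstring
# note (train once; evaluate many times; eval_input_fn re-invoked per evaluate).
call_log = []

def eval_input_fn():
    # In a real Estimator this would rebuild the evaluation input pipeline;
    # here we just record that it was invoked again.
    call_log.append("eval_input_fn")
    return [1, 2, 3]

def train():
    call_log.append("train")

def evaluate(input_fn):
    # The evaluation graph (including input_fn) is re-created on each call.
    input_fn()
    call_log.append("evaluate")

def train_and_evaluate_sketch(num_evals=3):
    train()                          # called only once
    for _ in range(num_evals):
        evaluate(eval_input_fn)      # called multiple times
    return call_log
```

Running `train_and_evaluate_sketch()` logs one `train` entry and `num_evals` pairs of `eval_input_fn`/`evaluate` entries, matching the behavior the docstring note warns about.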