author Josh Levenberg <josh11b@tensorflow.org> 2016-03-28 09:24:10 -0800
committer TensorFlower Gardener <gardener@tensorflow.org> 2016-03-28 10:33:42 -0700
commit d95fa1bb158a6a23344ec5319d97d38da4ae5bdd
tree e063a7bc49bdf622497daa16094ec1aff6f5d5e2
parent e745b2b0aa292fd4f41c0365b2471c1ee7400322
Update generated Python Op docs.
Change: 118369028
 tensorflow/g3doc/api_docs/python/io_ops.md | 42 ++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 10 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index 7af5dee41b..7e608450dc 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -1971,7 +1971,7 @@ want them run by *N* threads.
- - -
-### `tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#batch}
+### `tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch}
Creates batches of tensors in `tensor_list`.
@@ -1995,10 +1995,18 @@ operation is feeding another input queue, its queue runner will catch
this exception. However, if this operation is used in your main thread,
you are responsible for catching it yourself.
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensor_list` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
+*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
+(i) the `shapes` argument is passed, or (ii) all of the tensors in
+`tensor_list` must have fully-defined shapes. `ValueError` will be
+raised if neither of these conditions holds.
+
+If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
+tensors is known, but individual dimensions may have shape `None`.
+In this case, for each enqueue the dimensions with value `None`
+may have a variable length; upon dequeue, the output tensors will be padded
+on the right to the maximum shape of the tensors in the current minibatch.
+For numbers, this padding takes value 0. For strings, this padding is
+the empty string. See `PaddingFIFOQueue` for more info.
##### Args:
@@ -2010,6 +2018,9 @@ these conditions holds.
* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensor_list`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
* <b>`name`</b>: (Optional) A name for the operations.
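The padding behavior described above can be sketched in plain Python. This is an illustrative model of the dequeue-time semantics only, not TensorFlow code: each variable-length example is padded on the right to the longest example in the minibatch, with 0 for numbers and the empty string for strings, mirroring what `PaddingFIFOQueue` does.

```python
# Illustrative sketch of dynamic_pad semantics for 1-D examples.
# This models what PaddingFIFOQueue does on dequeue; it is not TF code.

def pad_batch(sequences):
    """Right-pad variable-length sequences to a common length.

    Numbers are padded with 0, strings with the empty string,
    matching the padding values described above.
    """
    max_len = max(len(s) for s in sequences)
    padded = []
    for seq in sequences:
        # Choose the fill value based on the element type.
        fill = "" if seq and isinstance(seq[0], str) else 0
        padded.append(list(seq) + [fill] * (max_len - len(seq)))
    return padded

batch = pad_batch([[1, 2, 3], [4], [5, 6]])
# Every row now has length 3; shorter rows are zero-padded on the right.
```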
@@ -2027,7 +2038,7 @@ these conditions holds.
- - -
-### `tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#batch_join}
+### `tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch_join}
Runs a list of tensors to fill a queue to create batches of examples.
@@ -2061,10 +2072,18 @@ operation is feeding another input queue, its queue runner will catch
this exception. However, if this operation is used in your main thread,
you are responsible for catching it yourself.
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensor_list_list` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
+*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
+(i) the `shapes` argument is passed, or (ii) all of the tensors in
+`tensor_list_list` must have fully-defined shapes. `ValueError` will be
+raised if neither of these conditions holds.
+
+If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
+tensors is known, but individual dimensions may have shape `None`.
+In this case, for each enqueue the dimensions with value `None`
+may have a variable length; upon dequeue, the output tensors will be padded
+on the right to the maximum shape of the tensors in the current minibatch.
+For numbers, this padding takes value 0. For strings, this padding is
+the empty string. See `PaddingFIFOQueue` for more info.
##### Args:
@@ -2076,6 +2095,9 @@ these conditions holds.
example.
* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensor_list_list[i]`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
* <b>`name`</b>: (Optional) A name for the operations.
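To make the shape requirement in the N.B. concrete, here is a small plain-Python model of the check (a hypothetical helper, not the actual TensorFlow implementation): with `dynamic_pad=False` every dimension must be fully defined, while with `dynamic_pad=True` only the rank must be known.

```python
# Hypothetical model of the shape check described above; the real
# implementation lives inside TensorFlow's input pipeline, not here.
# A shape is a tuple of ints, with None marking an unknown dimension;
# an entirely unknown shape (unknown rank) is represented as None.

def check_shape(shape, dynamic_pad):
    """Raise ValueError if `shape` is not acceptable for batching."""
    if shape is None:
        # Unknown rank is never acceptable, even with dynamic_pad.
        raise ValueError("tensor rank must be known")
    if not dynamic_pad and any(dim is None for dim in shape):
        # Without dynamic padding, every dimension must be defined.
        raise ValueError(
            "all dimensions must be fully defined unless dynamic_pad=True")

check_shape((32, None), dynamic_pad=True)   # OK: rank known, dim is padded
check_shape((32, 10), dynamic_pad=False)    # OK: fully defined
```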