author    A. Unique TensorFlower <gardener@tensorflow.org>  2016-12-01 15:06:45 -0800
committer TensorFlower Gardener <gardener@tensorflow.org>  2016-12-01 15:34:15 -0800
commit 6209ae88ca436b13c5807df3bb237a5613d42215 (patch)
tree   c8606c3f3143d870876382dc3b684a6f03ea6d00
parent 8c2442d0bd66126fb066362912f6a4dce8ff2d33 (diff)
Update generated Python Op docs.
Change: 140783483
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md | 39
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md | 40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md | 40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md | 39
-rw-r--r--  tensorflow/g3doc/api_docs/python/index.md | 4
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md | 172
6 files changed, 333 insertions, 1 deletion
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md
new file mode 100644
index 0000000000..610d5badcb
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md
@@ -0,0 +1,39 @@
+### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch}
+
+Conditionally creates batches of tensors based on `keep_input`.
+
+See docstring in `batch` for more details.
+
+##### Args:
+
+
+* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+ is added to the queue or not. If it evaluates `True`, then `tensors` are
+ added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`batch_size`</b>: The new batch size pulled from the queue.
+* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+ inferred shapes for `tensors`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same types as `tensors`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors`.
+
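The `keep_input` filtering documented above can be sketched in framework-free Python (a hypothetical analogy of the described semantics, not TensorFlow's queue-based implementation; the function name and the per-example `keep_input` list, which models one evaluation of the scalar Tensor per enqueue, are illustrative assumptions):

```python
def maybe_batch(examples, keep_input, batch_size,
                allow_smaller_final_batch=False):
    """Sketch of conditional batching: drop filtered items, then batch.

    Illustration only: `keep_input` is modeled as one bool per example,
    standing in for repeated evaluations of the scalar `keep_input`
    Tensor as each example reaches the queue.
    """
    # Items whose keep_input is False never enter the "queue".
    kept = [x for x, keep in zip(examples, keep_input) if keep]
    batches = [kept[i:i + batch_size]
               for i in range(0, len(kept), batch_size)]
    # Mirror allow_smaller_final_batch=False: drop a short final batch.
    if not allow_smaller_final_batch and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

# Only items whose keep_input entry is True reach a batch.
print(maybe_batch([1, 2, 3, 4, 5], [True, False, True, True, True], 2))
# -> [[1, 3], [4, 5]]
```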
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md
new file mode 100644
index 0000000000..ab5543d378
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md
@@ -0,0 +1,40 @@
+### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join}
+
+Create batches by randomly shuffling conditionally-enqueued tensors.
+
+See docstring in `shuffle_batch_join` for more details.
+
+##### Args:
+
+
+* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
+* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the
+  input is added to the queue or not. If it evaluates `True`, then
+  `tensors_list` are added to the queue; otherwise they are dropped. This
+  tensor essentially acts as a filtering mechanism.
+* <b>`seed`</b>: Seed for the random shuffling within the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
+ example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+ inferred shapes for `tensors_list[i]`.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same number and types as
+ `tensors_list[i]`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors_list`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md
new file mode 100644
index 0000000000..96f605b432
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md
@@ -0,0 +1,40 @@
+### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join}
+
+Runs a list of tensors to conditionally fill a queue to create batches.
+
+See docstring in `batch_join` for more details.
+
+##### Args:
+
+
+* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+  is added to the queue or not. If it evaluates `True`, then `tensors_list`
+  are added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
+ example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+  inferred shapes for `tensors_list[i]`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same number and types as
+ `tensors_list[i]`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+  inferred from the elements of `tensors_list`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md
new file mode 100644
index 0000000000..d85bded6c8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md
@@ -0,0 +1,39 @@
+### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch}
+
+Creates batches by randomly shuffling conditionally-enqueued tensors.
+
+See docstring in `shuffle_batch` for more details.
+
+##### Args:
+
+
+* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
+* <b>`batch_size`</b>: The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+ is added to the queue or not. If it evaluates `True`, then `tensors` are
+ added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
+* <b>`seed`</b>: Seed for the random shuffling within the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+  inferred shapes for `tensors`.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+  A list or dictionary of tensors with the same types as `tensors`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors`.
+
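The shuffling variant can likewise be sketched in plain Python (an illustrative analogy only; the real op mixes elements incrementally through a `RandomShuffleQueue`, where `min_after_dequeue` bounds how well-mixed dequeues are, rather than shuffling the whole dataset at once):

```python
import random

def maybe_shuffle_batch(examples, keep_input, batch_size, seed=None):
    """Sketch: filter by keep_input, shuffle (seeded), then batch.

    Illustration of the documented semantics with a whole-dataset
    shuffle; `keep_input` is again modeled as one bool per example.
    """
    kept = [x for x, keep in zip(examples, keep_input) if keep]
    random.Random(seed).shuffle(kept)  # `seed` gives reproducible mixing
    return [kept[i:i + batch_size] for i in range(0, len(kept), batch_size)]
```

With a fixed `seed`, repeated calls produce the same batches, mirroring the role of the `seed` argument above.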
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
index bdb3b94038..ae1f47ec43 100644
--- a/tensorflow/g3doc/api_docs/python/index.md
+++ b/tensorflow/g3doc/api_docs/python/index.md
@@ -455,6 +455,10 @@
* [`limit_epochs`](../../api_docs/python/io_ops.md#limit_epochs)
* [`match_filenames_once`](../../api_docs/python/io_ops.md#match_filenames_once)
* [`matching_files`](../../api_docs/python/io_ops.md#matching_files)
+ * [`maybe_batch`](../../api_docs/python/io_ops.md#maybe_batch)
+ * [`maybe_batch_join`](../../api_docs/python/io_ops.md#maybe_batch_join)
+ * [`maybe_shuffle_batch`](../../api_docs/python/io_ops.md#maybe_shuffle_batch)
+ * [`maybe_shuffle_batch_join`](../../api_docs/python/io_ops.md#maybe_shuffle_batch_join)
* [`PaddingFIFOQueue`](../../api_docs/python/io_ops.md#PaddingFIFOQueue)
* [`parse_example`](../../api_docs/python/io_ops.md#parse_example)
* [`parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example)
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index 8cebcff858..adea830285 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -3097,7 +3097,7 @@ single subgraph producing examples but you want to run it in *N* threads
(where you increase *N* until it can keep the queue full). Use
[`batch_join`](#batch_join) or [`shuffle_batch_join`](#shuffle_batch_join)
if you have *N* different subgraphs producing examples to batch and you
-want them run by *N* threads.
+want them run by *N* threads. Use `maybe_*` to enqueue conditionally.
- - -
@@ -3184,6 +3184,48 @@ Note: if `num_epochs` is not `None`, this function creates local counter
- - -
+### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch}
+
+Conditionally creates batches of tensors based on `keep_input`.
+
+See docstring in `batch` for more details.
+
+##### Args:
+
+
+* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+ is added to the queue or not. If it evaluates `True`, then `tensors` are
+ added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`batch_size`</b>: The new batch size pulled from the queue.
+* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+ inferred shapes for `tensors`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same types as `tensors`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors`.
+
+
+- - -
+
### `tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#batch_join}
Runs a list of tensors to fill a queue to create batches of examples.
@@ -3275,6 +3317,49 @@ operations that depend on fixed batch_size would fail.
- - -
+### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join}
+
+Runs a list of tensors to conditionally fill a queue to create batches.
+
+See docstring in `batch_join` for more details.
+
+##### Args:
+
+
+* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+  is added to the queue or not. If it evaluates `True`, then `tensors_list`
+  are added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
+ example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+  inferred shapes for `tensors_list[i]`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+ The given dimensions are padded upon dequeue so that tensors within a
+ batch have the same shapes.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same number and types as
+ `tensors_list[i]`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+  inferred from the elements of `tensors_list`.
+
+
+- - -
+
### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch}
Creates batches by randomly shuffling tensors.
@@ -3364,6 +3449,48 @@ Note: if `num_epochs` is not `None`, this function creates local counter
- - -
+### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch}
+
+Creates batches by randomly shuffling conditionally-enqueued tensors.
+
+See docstring in `shuffle_batch` for more details.
+
+##### Args:
+
+
+* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
+* <b>`batch_size`</b>: The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the input
+ is added to the queue or not. If it evaluates `True`, then `tensors` are
+ added to the queue; otherwise they are dropped. This tensor essentially
+ acts as a filtering mechanism.
+* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
+* <b>`seed`</b>: Seed for the random shuffling within the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+  inferred shapes for `tensors`.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+  A list or dictionary of tensors with the same types as `tensors`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors`.
+
+
+- - -
+
### `tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch_join}
Create batches by randomly shuffling tensors.
@@ -3442,3 +3569,46 @@ operations that depend on fixed batch_size would fail.
inferred from the elements of `tensors_list`.
+- - -
+
+### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join}
+
+Create batches by randomly shuffling conditionally-enqueued tensors.
+
+See docstring in `shuffle_batch_join` for more details.
+
+##### Args:
+
+
+* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
+* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
+* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
+* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>`keep_input`</b>: A `bool` scalar Tensor. This tensor controls whether the
+  input is added to the queue or not. If it evaluates `True`, then
+  `tensors_list` are added to the queue; otherwise they are dropped. This
+  tensor essentially acts as a filtering mechanism.
+* <b>`seed`</b>: Seed for the random shuffling within the queue.
+* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
+ example.
+* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
+ inferred shapes for `tensors_list[i]`.
+* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
+ batch to be smaller if there are insufficient items left in the queue.
+* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
+ name across multiple sessions.
+* <b>`name`</b>: (Optional) A name for the operations.
+
+##### Returns:
+
+ A list or dictionary of tensors with the same number and types as
+ `tensors_list[i]`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
+ inferred from the elements of `tensors_list`.
+
+