From 6209ae88ca436b13c5807df3bb237a5613d42215 Mon Sep 17 00:00:00 2001 From: "A. Unique TensorFlower" Date: Thu, 1 Dec 2016 15:06:45 -0800 Subject: Update generated Python Op docs. Change: 140783483 --- .../shard1/tf.train.maybe_batch.md | 39 +++++ .../shard2/tf.train.maybe_shuffle_batch_join.md | 40 +++++ .../shard4/tf.train.maybe_batch_join.md | 40 +++++ .../shard5/tf.train.maybe_shuffle_batch.md | 39 +++++ tensorflow/g3doc/api_docs/python/index.md | 4 + tensorflow/g3doc/api_docs/python/io_ops.md | 172 ++++++++++++++++++++- 6 files changed, 333 insertions(+), 1 deletion(-) create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md new file mode 100644 index 0000000000..610d5badcb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md @@ -0,0 +1,39 @@ +### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch} + +Conditionally creates batches of tensors based on `keep_input`. + +See docstring in `batch` for more details. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. 
If it evaluates `True`, then `tensors` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `batch_size`: The new batch size pulled from the queue. +* `num_threads`: The number of threads enqueuing `tensors`. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensors` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md new file mode 100644 index 0000000000..ab5543d378 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md @@ -0,0 +1,40 @@ +### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join} + +Creates batches by randomly shuffling conditionally-enqueued tensors. + +See docstring in `shuffle_batch_join` for more details.
+ +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `keep_input`: A `bool` scalar Tensor. If provided, this tensor controls + whether the input is added to the queue or not. If it evaluates `True`, + then `tensors_list` are added to the queue; otherwise they are dropped. + This tensor essentially acts as a filtering mechanism. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors_list`.
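The `keep_input` contract these `maybe_*` functions share can be illustrated with a small pure-Python sketch. This is a toy stand-in using the standard `queue` module rather than TensorFlow's queue runners, and `maybe_enqueue` is a hypothetical helper name, not a TensorFlow API:

```python
import queue

def maybe_enqueue(q, example, keep_input):
    # `keep_input` plays the role of the scalar bool Tensor: when it
    # evaluates True the example is enqueued, otherwise it is silently
    # dropped, so the predicate acts as a filter on the input stream.
    if keep_input:
        q.put(example)

q = queue.Queue(maxsize=32)  # analogue of the `capacity` argument
for x in range(10):
    maybe_enqueue(q, x, keep_input=(x % 2 == 0))  # keep even examples only

batch = [q.get() for _ in range(q.qsize())]
print(batch)  # only examples whose predicate was True remain
```

With `keep_input=(x % 2 == 0)`, only the even examples survive, mirroring how a `False` `keep_input` drops an input instead of adding it to the queue.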
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md new file mode 100644 index 0000000000..96f605b432 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md @@ -0,0 +1,40 @@ +### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join} + +Runs a list of tensors to conditionally fill a queue to create batches. + +See docstring in `batch_join` for more details. + +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. If it evaluates `True`, then `tensors_list` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations.
+ +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors_list`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md new file mode 100644 index 0000000000..d85bded6c8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md @@ -0,0 +1,39 @@ +### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch} + +Creates batches by randomly shuffling conditionally-enqueued tensors. + +See docstring in `shuffle_batch` for more details. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `batch_size`: The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. If it evaluates `True`, then `tensors` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `num_threads`: The number of threads enqueuing `tensors`. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors`. +* `allow_smaller_final_batch`: (Optional) Boolean.
If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. + diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md index bdb3b94038..ae1f47ec43 100644 --- a/tensorflow/g3doc/api_docs/python/index.md +++ b/tensorflow/g3doc/api_docs/python/index.md @@ -455,6 +455,10 @@ * [`limit_epochs`](../../api_docs/python/io_ops.md#limit_epochs) * [`match_filenames_once`](../../api_docs/python/io_ops.md#match_filenames_once) * [`matching_files`](../../api_docs/python/io_ops.md#matching_files) + * [`maybe_batch`](../../api_docs/python/io_ops.md#maybe_batch) + * [`maybe_batch_join`](../../api_docs/python/io_ops.md#maybe_batch_join) + * [`maybe_shuffle_batch`](../../api_docs/python/io_ops.md#maybe_shuffle_batch) + * [`maybe_shuffle_batch_join`](../../api_docs/python/io_ops.md#maybe_shuffle_batch_join) * [`PaddingFIFOQueue`](../../api_docs/python/io_ops.md#PaddingFIFOQueue) * [`parse_example`](../../api_docs/python/io_ops.md#parse_example) * [`parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example) diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md index 8cebcff858..adea830285 100644 --- a/tensorflow/g3doc/api_docs/python/io_ops.md +++ b/tensorflow/g3doc/api_docs/python/io_ops.md @@ -3097,7 +3097,7 @@ single subgraph producing examples but you want to run it in *N* threads (where you increase *N* until it can keep the queue full).
Use [`batch_join`](#batch_join) or [`shuffle_batch_join`](#shuffle_batch_join) if you have *N* different subgraphs producing examples to batch and you -want them run by *N* threads. +want them run by *N* threads. Use `maybe_*` to enqueue conditionally. - - - @@ -3178,6 +3178,48 @@ Note: if `num_epochs` is not `None`, this function creates local counter ##### Raises: +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. + + +- - - + +### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch} + +Conditionally creates batches of tensors based on `keep_input`. + +See docstring in `batch` for more details. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. If it evaluates `True`, then `tensors` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `batch_size`: The new batch size pulled from the queue. +* `num_threads`: The number of threads enqueuing `tensors`. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensors` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional). If set, this queue will be shared under the given + name across multiple sessions. 
+* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + * `ValueError`: If the `shapes` are not specified, and cannot be inferred from the elements of `tensors`. @@ -3269,6 +3311,49 @@ operations that depend on fixed batch_size would fail. ##### Raises: +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensor_list_list`. + + +- - - + +### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join} + +Runs a list of tensors to conditionally fill a queue to create batches. + +See docstring in `batch_join` for more details. + +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. If it evaluates `True`, then `tensors_list` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions.
+* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + * `ValueError`: If the `shapes` are not specified, and cannot be inferred from the elements of `tensors_list`. @@ -3358,6 +3443,48 @@ Note: if `num_epochs` is not `None`, this function creates local counter ##### Raises: +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. + + +- - - + +### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch} + +Creates batches by randomly shuffling conditionally-enqueued tensors. + +See docstring in `shuffle_batch` for more details. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `batch_size`: The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input + is added to the queue or not. If it evaluates `True`, then `tensors` are + added to the queue; otherwise they are dropped. This tensor essentially + acts as a filtering mechanism. +* `num_threads`: The number of threads enqueuing `tensors`. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors`. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue.
+* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. @@ -3442,3 +3569,46 @@ operations that depend on fixed batch_size would fail. inferred from the elements of `tensors_list`. + +- - - + +### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join} + +Creates batches by randomly shuffling conditionally-enqueued tensors. + +See docstring in `shuffle_batch_join` for more details. + +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `keep_input`: A `bool` scalar Tensor. If provided, this tensor controls + whether the input is added to the queue or not. If it evaluates `True`, + then `tensors_list` are added to the queue; otherwise they are dropped. + This tensor essentially acts as a filtering mechanism. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final + batch to be smaller if there are insufficient items left in the queue. +* `shared_name`: (Optional).
If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors_list`. + + -- cgit v1.2.3
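As a rough mental model of what the `maybe_shuffle_batch*` functions layer on top of plain batching — conditional enqueue via `keep_input`, seeded shuffling, and the `allow_smaller_final_batch` escape hatch — here is a hedged pure-Python sketch. All names are illustrative; the real implementation is queue-based and multi-threaded:

```python
import random

def maybe_shuffle_batch(examples, keep_input, batch_size, seed=None,
                        allow_smaller_final_batch=False):
    # `keep_input` acts as a per-example filter: only examples whose
    # predicate is True ever reach the (simulated) queue.
    kept = [x for x, keep in zip(examples, keep_input) if keep]
    # `seed` makes the shuffle reproducible, as documented for the
    # shuffling queue variants.
    rng = random.Random(seed)
    rng.shuffle(kept)
    batches = [kept[i:i + batch_size]
               for i in range(0, len(kept), batch_size)]
    # By default a short final batch is dropped; `allow_smaller_final_batch`
    # keeps it, mirroring the documented flag.
    if batches and len(batches[-1]) < batch_size and not allow_smaller_final_batch:
        batches.pop()
    return batches

data = list(range(7))
keep = [x % 2 == 0 for x in data]   # keep 0, 2, 4, 6
print(maybe_shuffle_batch(data, keep, batch_size=3, seed=0))
print(maybe_shuffle_batch(data, keep, batch_size=3, seed=0,
                          allow_smaller_final_batch=True))
```

With four surviving examples and `batch_size=3`, the first call yields a single full batch (the one-element remainder is dropped), while the second also returns the smaller final batch — the same trade-off the `allow_smaller_final_batch` argument describes, where downstream ops that rely on a fixed batch size would fail on the partial batch.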