Diffstat (limited to 'tensorflow/g3doc/api_docs/python/io_ops.md')
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md  1956
1 file changed, 1956 insertions, 0 deletions
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
new file mode 100644
index 0000000000..ab8c4aa146
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -0,0 +1,1956 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Inputs and Readers
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Placeholders](#AUTOGENERATED-placeholders)
+ * [tf.placeholder(dtype, shape=None, name=None)](#placeholder)
+* [Readers](#AUTOGENERATED-readers)
+ * [class tf.ReaderBase](#ReaderBase)
+ * [class tf.TextLineReader](#TextLineReader)
+ * [class tf.WholeFileReader](#WholeFileReader)
+ * [class tf.IdentityReader](#IdentityReader)
+ * [class tf.TFRecordReader](#TFRecordReader)
+ * [class tf.FixedLengthRecordReader](#FixedLengthRecordReader)
+* [Converting](#AUTOGENERATED-converting)
+ * [tf.decode_csv(records, record_defaults, field_delim=None, name=None)](#decode_csv)
+ * [tf.decode_raw(bytes, out_type, little_endian=None, name=None)](#decode_raw)
+ * [tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample')](#parse_example)
+ * [tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample')](#parse_single_example)
+* [Queues](#AUTOGENERATED-queues)
+ * [class tf.QueueBase](#QueueBase)
+ * [class tf.FIFOQueue](#FIFOQueue)
+ * [class tf.RandomShuffleQueue](#RandomShuffleQueue)
+* [Dealing with the filesystem](#AUTOGENERATED-dealing-with-the-filesystem)
+ * [tf.matching_files(pattern, name=None)](#matching_files)
+ * [tf.read_file(filename, name=None)](#read_file)
+* [Input pipeline](#AUTOGENERATED-input-pipeline)
+ * [Beginning of an input pipeline](#AUTOGENERATED-beginning-of-an-input-pipeline)
+ * [tf.train.match_filenames_once(pattern, name=None)](#match_filenames_once)
+ * [tf.train.limit_epochs(tensor, num_epochs=None, name=None)](#limit_epochs)
+ * [tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#range_input_producer)
+ * [tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#slice_input_producer)
+ * [tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#string_input_producer)
+ * [Batching at the end of an input pipeline](#AUTOGENERATED-batching-at-the-end-of-an-input-pipeline)
+ * [tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None)](#batch)
+ * [tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None)](#batch_join)
+ * [tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None)](#shuffle_batch)
+ * [tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None)](#shuffle_batch_join)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Placeholders <div class="md-anchor" id="AUTOGENERATED-placeholders">{#AUTOGENERATED-placeholders}</div>
+
+TensorFlow provides a placeholder operation that must be fed with data
+on execution. For more info, see the section on [Feeding
+data](../../how_tos/reading_data/index.md#feeding).
+
+- - -
+
+### tf.placeholder(dtype, shape=None, name=None) <div class="md-anchor" id="placeholder">{#placeholder}</div>
+
+Inserts a placeholder for a tensor that will be always fed.
+
+**Important**: This tensor will produce an error if evaluated. Its value must
+be fed using the `feed_dict` optional argument to `Session.run()`,
+`Tensor.eval()`, or `Operation.run()`.
+
+For example:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32, shape=(1024, 1024))
+y = tf.matmul(x, x)
+
+with tf.Session() as sess:
+  print(sess.run(y))  # ERROR: will fail because x was not fed.
+
+  rand_array = np.random.rand(1024, 1024)
+  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
+```
+
+##### Args:
+
+
+* <b>dtype</b>: The type of elements in the tensor to be fed.
+* <b>shape</b>: The shape of the tensor to be fed (optional). If the shape is not
+ specified, you can feed a tensor of any shape.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` that may be used as a handle for feeding a value, but not
+ evaluated directly.
+
+
+
+## Readers <div class="md-anchor" id="AUTOGENERATED-readers">{#AUTOGENERATED-readers}</div>
+
+TensorFlow provides a set of Reader classes for reading data formats.
+For more information on inputs and readers, see [Reading
+data](../../how_tos/reading_data/index.md).
+
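+For example, here is a minimal sketch of the Reader pattern using an
+`IdentityReader`, which needs no files: string work items are enqueued
+into a queue, and each `read()` call returns the next item as both key
+and value (the work strings below are only illustrative):
+
+```python
+import tensorflow as tf
+
+# A queue of string work items for the reader to consume.
+work_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.string])
+enqueue_work = work_queue.enqueue_many((["unit0", "unit1"],))
+
+reader = tf.IdentityReader()
+key, value = reader.read(work_queue)
+
+with tf.Session() as sess:
+  sess.run(enqueue_work)
+  print(sess.run([key, value]))  # ['unit0', 'unit0']
+  print(sess.run([key, value]))  # ['unit1', 'unit1']
+```
+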
+- - -
+
+### class tf.ReaderBase <div class="md-anchor" id="ReaderBase">{#ReaderBase}</div>
+
+Base class for different Reader types that produce a record every step.
+
+Conceptually, Readers convert string 'work units' into records (key,
+value pairs). Typically the 'work units' are filenames and the
+records are extracted from the contents of those files. We want a
+single record produced per step, but a work unit can correspond to
+many records.
+
+Therefore we introduce some decoupling using a queue. The queue
+contains the work units, and the Reader dequeues from the queue when
+it is asked to produce a record (via Read()) but has already finished
+the last work unit.
+- - -
+
+#### tf.ReaderBase.__init__(reader_ref, supports_serialize=False) {#ReaderBase.__init__}
+
+Creates a new ReaderBase.
+
+##### Args:
+
+
+* <b>reader_ref</b>: The operation that implements the reader.
+* <b>supports_serialize</b>: True if the reader implementation can
+ serialize its state.
+
+
+- - -
+
+#### tf.ReaderBase.num_records_produced(name=None) {#ReaderBase.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.num_work_units_completed(name=None) {#ReaderBase.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.read(queue, name=None) {#ReaderBase.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.reader_ref {#ReaderBase.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.ReaderBase.reset(name=None) {#ReaderBase.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.ReaderBase.restore_state(state, name=None) {#ReaderBase.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.ReaderBase.serialize_state(name=None) {#ReaderBase.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.supports_serialize {#ReaderBase.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.TextLineReader <div class="md-anchor" id="TextLineReader">{#TextLineReader}</div>
+
+A Reader that outputs the lines of a file delimited by newlines.
+
+Newlines are stripped from the output.
+See ReaderBase for supported methods.
+- - -
+
+#### tf.TextLineReader.__init__(skip_header_lines=None, name=None) {#TextLineReader.__init__}
+
+Create a TextLineReader.
+
+##### Args:
+
+
+* <b>skip_header_lines</b>: An optional int. Defaults to 0. Number of lines
+ to skip from the beginning of every file.
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.TextLineReader.num_records_produced(name=None) {#TextLineReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.num_work_units_completed(name=None) {#TextLineReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.read(queue, name=None) {#TextLineReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.reader_ref {#TextLineReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.TextLineReader.reset(name=None) {#TextLineReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TextLineReader.restore_state(state, name=None) {#TextLineReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TextLineReader.serialize_state(name=None) {#TextLineReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.supports_serialize {#TextLineReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.WholeFileReader <div class="md-anchor" id="WholeFileReader">{#WholeFileReader}</div>
+
+A Reader that outputs the entire contents of a file as a value.
+
+To use, enqueue filenames in a Queue. The output of Read will
+be a filename (key) and the contents of that file (value).
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.WholeFileReader.__init__(name=None) {#WholeFileReader.__init__}
+
+Create a WholeFileReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.WholeFileReader.num_records_produced(name=None) {#WholeFileReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.num_work_units_completed(name=None) {#WholeFileReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.read(queue, name=None) {#WholeFileReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.reader_ref {#WholeFileReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.WholeFileReader.reset(name=None) {#WholeFileReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.WholeFileReader.restore_state(state, name=None) {#WholeFileReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.WholeFileReader.serialize_state(name=None) {#WholeFileReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.supports_serialize {#WholeFileReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.IdentityReader <div class="md-anchor" id="IdentityReader">{#IdentityReader}</div>
+
+A Reader that outputs the queued work as both the key and value.
+
+To use, enqueue strings in a Queue. Read will take the front
+work string and output (work, work).
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.IdentityReader.__init__(name=None) {#IdentityReader.__init__}
+
+Create an IdentityReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.IdentityReader.num_records_produced(name=None) {#IdentityReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.num_work_units_completed(name=None) {#IdentityReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.read(queue, name=None) {#IdentityReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.reader_ref {#IdentityReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.IdentityReader.reset(name=None) {#IdentityReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.IdentityReader.restore_state(state, name=None) {#IdentityReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.IdentityReader.serialize_state(name=None) {#IdentityReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.supports_serialize {#IdentityReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.TFRecordReader <div class="md-anchor" id="TFRecordReader">{#TFRecordReader}</div>
+
+A Reader that outputs the records from a TFRecords file.
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.TFRecordReader.__init__(name=None) {#TFRecordReader.__init__}
+
+Create a TFRecordReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.TFRecordReader.num_records_produced(name=None) {#TFRecordReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.num_work_units_completed(name=None) {#TFRecordReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.read(queue, name=None) {#TFRecordReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.reader_ref {#TFRecordReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.TFRecordReader.reset(name=None) {#TFRecordReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TFRecordReader.restore_state(state, name=None) {#TFRecordReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TFRecordReader.serialize_state(name=None) {#TFRecordReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.supports_serialize {#TFRecordReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.FixedLengthRecordReader <div class="md-anchor" id="FixedLengthRecordReader">{#FixedLengthRecordReader}</div>
+
+A Reader that outputs fixed-length records from a file.
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None) {#FixedLengthRecordReader.__init__}
+
+Create a FixedLengthRecordReader.
+
+##### Args:
+
+
+* <b>record_bytes</b>: An int.
+* <b>header_bytes</b>: An optional int. Defaults to 0.
+* <b>footer_bytes</b>: An optional int. Defaults to 0.
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.num_records_produced(name=None) {#FixedLengthRecordReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.num_work_units_completed(name=None) {#FixedLengthRecordReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.read(queue, name=None) {#FixedLengthRecordReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from the queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.reader_ref {#FixedLengthRecordReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.FixedLengthRecordReader.reset(name=None) {#FixedLengthRecordReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.restore_state(state, name=None) {#FixedLengthRecordReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.serialize_state(name=None) {#FixedLengthRecordReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.supports_serialize {#FixedLengthRecordReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+
+## Converting <div class="md-anchor" id="AUTOGENERATED-converting">{#AUTOGENERATED-converting}</div>
+
+TensorFlow provides several operations that you can use to convert various data
+formats into tensors.
+
+- - -
+
+### tf.decode_csv(records, record_defaults, field_delim=None, name=None) <div class="md-anchor" id="decode_csv">{#decode_csv}</div>
+
+Convert CSV records to tensors. Each column maps to one tensor.
+
+RFC 4180 format is expected for the CSV records.
+(https://tools.ietf.org/html/rfc4180)
+Note that we allow leading and trailing spaces in int and float fields.
+
+##### Args:
+
+
+* <b>records</b>: A `Tensor` of type `string`.
+ Each string is a record/row in the csv and all records should have
+ the same format.
+* <b>record_defaults</b>: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`.
+ One tensor per column of the input record, with either a
+ scalar default value for that column or empty if the column is required.
+* <b>field_delim</b>: An optional `string`. Defaults to `","`.
+ Delimiter to separate fields in a record.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A list of `Tensor` objects. Has the same type as `record_defaults`.
+ Each tensor will have the same shape as records.
+
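+For example, a minimal sketch that decodes two in-graph CSV records (the
+literal records are illustrative; in a real pipeline `records` would
+typically be the value produced by a `TextLineReader`):
+
+```python
+import tensorflow as tf
+
+# Two records, each with a float column, a float column, and a string column.
+records = tf.constant(["1.0,2.0,cat", "3.5,4.5,dog"])
+record_defaults = [[0.0], [0.0], [""]]  # one default per column
+col1, col2, label = tf.decode_csv(records, record_defaults=record_defaults)
+
+with tf.Session() as sess:
+  print(sess.run([col1, col2, label]))
+  # -> [1.0, 3.5], [2.0, 4.5], ['cat', 'dog']
+```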
+
+- - -
+
+### tf.decode_raw(bytes, out_type, little_endian=None, name=None) <div class="md-anchor" id="decode_raw">{#decode_raw}</div>
+
+Reinterpret the bytes of a string as a vector of numbers.
+
+##### Args:
+
+
+* <b>bytes</b>: A `Tensor` of type `string`.
+ All the elements must have the same length.
+* <b>out_type</b>: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
+* <b>little_endian</b>: An optional `bool`. Defaults to `True`.
+ Whether the input bytes are in little-endian order.
+ Ignored for out_types that are stored in a single byte like uint8.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `out_type`.
+ A Tensor with one more dimension than the input bytes. The
+ added dimension will have size equal to the length of the elements
+ of bytes divided by the number of bytes used to represent out_type.
+
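+For example, a minimal sketch that reinterprets a 4-byte string as a
+vector of `uint8` values (the literal bytes are illustrative):
+
+```python
+import tensorflow as tf
+
+raw = tf.constant(["\x01\x02\x03\x04"])   # one 4-byte string
+as_uint8 = tf.decode_raw(raw, tf.uint8)   # adds a trailing dimension of 4
+
+with tf.Session() as sess:
+  print(sess.run(as_uint8))  # [[1 2 3 4]], shape (1, 4)
+```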
+
+- - -
+
+### tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample') <div class="md-anchor" id="parse_example">{#parse_example}</div>
+
+Parse Example protos.
+
+##### Args:
+
+
+* <b>serialized</b>: string vector, a batch of binary serialized Example protos.
+* <b>names</b>: A string vector, the names of the serialized protos.
+ "names" may contain, e.g., table key (descriptive) names for the
+ corresponding serialized protos. These are purely useful for debugging
+ purposes, and the presence of values here has no effect on the output.
+ "names" may be an empty vector, if no names are available.
+ If non-empty, this vector must be the same length as "serialized".
+* <b>sparse_keys</b>: A string list of keys in the Examples' features.
+ These keys are associated with sparse values.
+* <b>sparse_types</b>: A list of DTypes.
+ This list's length must match that of sparse_keys. Currently
+ parse_example supports tf.float32 (FloatList), tf.int64 (Int64List),
+ and tf.string (BytesList).
+* <b>dense_keys</b>: A string list of keys in the Examples' features.
+ These keys are associated with dense values.
+* <b>dense_types</b>: A list of DTypes.
+ This list's length must match that of dense_keys. Currently
+ parse_example supports tf.float32 (FloatList), tf.int64 (Int64List),
+ and tf.string (BytesList).
+* <b>dense_defaults</b>: A dict of {key:Tensor} (some may be missing).
+ The keys of the dict must match the dense_keys of the feature.
+ If a key is not present in this dictionary, the corresponding dense
+ Feature is required in all elements of serialized.
+* <b>dense_shapes</b>: A list of tuples.
+ Entries provide the shape of data in each dense Feature in features.
+ The length of dense_shapes must be the same as the length of dense_keys.
+ The number of elements in the Feature corresponding to dense_key[j]
+ must always have np.prod(dense_shapes[j]) entries.
+ If dense_shapes[j] == (D0, D1, ..., DN) then the shape of the output
+ Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):
+ The dense outputs are just the inputs row-stacked by batch.
+* <b>name</b>: (Optional) Name of Op in the graph.
+
+##### Returns:
+
+ A dictionary mapping keys to Tensors and SparseTensors.
+
+ The key dense_keys[j] is mapped to a tensor of type dense_types[j] and
+ of shape (serialized.size(),) + dense_shapes[j] (i.e., the dense outputs are
+ inputs, reshaped in row-major format and then row-stacked by batch).
+
+ The key sparse_keys[j] is mapped to a SparseTensor of type sparse_types[j].
+ The SparseTensor represents a ragged matrix. Its indices are [batch, index]
+ where "batch" is is the batch entry the value is from, and "index" is the
+ value's index in the list of values associated with that feature
+ and example. For example, if one expects a tf.float32 sparse feature "ft"
+ and three serialized examples are provided:
+
+      serialized = [
+        features:
+          { feature: [ key: { "ft" value: float_list: { value: [1.0, 2.0] } } ] },
+        features:
+          { feature: [] },
+        features:
+          { feature: [ key: { "ft" value: float_list: { value: [3.0] } } ] }
+      ]
+
+  then the output will look like:
+
+      {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
+                          values=[1.0, 2.0, 3.0],
+                          shape=(3, 2)) }
+
+##### Raises:
+
+
+* <b>ValueError</b>: If sparse and dense keys intersect, or input lengths do not
+ match up for sparse_* (similarly for dense_*).
+* <b>TypeError</b>: If an input is malformed.
+
+Example input, format, and output: Just Sparse Inputs
+================================================
+
+Given two brain.Example input protos:
+
+
+    serialized = [  // serialized versions of the protos below
+      features: {
+        feature: { key: "kw" value: { bytes_list: { value: [ "knit", "big" ] } } }
+        feature: { key: "gps" value: { float_list: { value: [] } } }
+      },
+      features: {
+        feature: { key: "kw" value: { bytes_list: { value: [ "emmy" ] } } }
+        feature: { key: "dank" value: { int64_list: { value: [ 42 ] } } }
+        feature: { key: "gps" value: { } }
+      }
+    ]
+    names: ["input0", "input1"]
+    sparse_keys: ["kw", "dank", "gps"]
+    sparse_types: [DT_STRING, DT_INT64, DT_FLOAT]
+
+Then the expected output is a dictionary:
+
+    {
+      "kw": SparseTensor(
+          indices=[[0, 0], [0, 1], [1, 0]],
+          values=["knit", "big", "emmy"],
+          shape=[2, 2]),
+      "dank": SparseTensor(
+          indices=[[1, 0]],
+          values=[42],
+          shape=[2, 1]),
+      "gps": SparseTensor(
+          indices=[],
+          values=[],
+          shape=[2, 0]),
+    }
+
+
+Example input, format, and output: Dense Inputs (without defaults)
+==================================================================
+
+Given two brain.Example input protos:
+
+
+    serialized = [  // serialized versions of the protos below
+      features: {
+        feature: { key: "age" value: { int64_list: { value: [ 0 ] } } }
+        feature: { key: "gender" value: { bytes_list: { value: [ "f" ] } } }
+      },
+      features: {
+        feature: { key: "age" value: { int64_list: { value: [] } } }
+        feature: { key: "gender" value: { bytes_list: { value: [ "f" ] } } }
+      }
+    ]
+    names: ["input0", "input1"]
+    dense_keys: np.array(["age", "gender"])
+    dense_types: [tf.int64, tf.string]
+    dense_defaults: {
+      "age": -1  # defaults to -1 if missing
+      # "gender" has no specified default so it's required
+    }
+    dense_shapes: [(1,), (1,)]  # age, gender
+
+Then the expected output is a dictionary:
+
+    {
+      "age": [[0], [-1]],
+      "gender": [["f"], ["f"]],
+    }
+
+
+Example input, format, and output: Dense Inputs (with defaults)
+===============================================================
+
+Given two brain.Example input protos:
+
+
+    serialized = [  // serialized versions of the protos below
+      features: {
+        feature: { key: "weight" value: { float_list: { value: [ 1.0 ] } } }
+      },
+      features: {
+        feature: { key: "label" value: { float_list: { value: [ -1.0, 0.0 ] } } }
+      }
+    ]
+    names: ["input0", "input1"]
+    dense_keys: np.array(["label", "weight"])
+    dense_defaults: {
+      "label": [1.0, 2.0],  # float (default: vector)
+      "weight": 5.0         # float (default: scalar, 5.0)
+    }
+    dense_shapes: [(2,), (1,)]  # label, weight
+
+Then the expected output is a dictionary:
+
+    {
+      "label": [[1.0, 2.0], [-1.0, 0.0]],
+      "weight": [[1.0], [5.0]],
+    }
+
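+The text-format protos above are only illustrative. A minimal runnable
+sketch of the sparse case, assuming the usual `tf.train.Example` protocol
+buffer bindings and a hypothetical float feature named "ft":
+
+```python
+import tensorflow as tf
+
+def make_example(values):
+  # Serialize an Example proto with a single float feature "ft".
+  return tf.train.Example(features=tf.train.Features(feature={
+      "ft": tf.train.Feature(float_list=tf.train.FloatList(value=values)),
+  })).SerializeToString()
+
+serialized = [make_example([1.0, 2.0]), make_example([]), make_example([3.0])]
+parsed = tf.parse_example(serialized,
+                          sparse_keys=["ft"],
+                          sparse_types=[tf.float32])
+ft = parsed["ft"]  # a SparseTensor
+
+with tf.Session() as sess:
+  indices, values, shape = sess.run([ft.indices, ft.values, ft.shape])
+  print(values)  # [1.0, 2.0, 3.0], as in the example above
+```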
+
+- - -
+
+### tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample') <div class="md-anchor" id="parse_single_example">{#parse_single_example}</div>
+
+Identical to parse_example but for scalar serialized and names.
+
+##### Args:
+
+
+* <b>serialized</b>: A scalar string, a single serialized Example.
+ See parse_example documentation for more details.
+* <b>names</b>: (Optional) A scalar string, the associated name.
+ See parse_example documentation for more details.
+* <b>sparse_keys</b>: See parse_example documentation for more details.
+* <b>sparse_types</b>: See parse_example documentation for more details.
+* <b>dense_keys</b>: See parse_example documentation for more details.
+* <b>dense_types</b>: See parse_example documentation for more details.
+* <b>dense_defaults</b>: See parse_example documentation for more details.
+* <b>dense_shapes</b>: See parse_example documentation for more details.
+* <b>name</b>: Optional op name.
+
+##### Returns:
+
+ A dictionary mapping keys to Tensors and SparseTensors.
+
+ For dense tensors, the Tensor is identical to the output of parse_example,
+ except it is one less dimension (the first, batch, dimension is removed).
+
+ For SparseTensors:
+ The first (batch) column of the indices matrix is removed
+ (it is now a column vector).
+ The values vector is unchanged.
+ The first (batch_size) entry of the shape vector is removed
+ (it is now a single element vector).
+
+##### Raises:
+
+
+* <b>ValueError</b>: if "serialized" or "names" have known shapes, and are not scalars.
+
+
+
+## Queues <div class="md-anchor" id="AUTOGENERATED-queues">{#AUTOGENERATED-queues}</div>
+
+TensorFlow provides several implementations of 'Queues', which are
+structures within the TensorFlow computation graph to stage pipelines
+of tensors together. The following describes the basic Queue interface
+and some implementations. To see an example use, see [Threading and
+Queues](../../how_tos/threading_and_queues/index.md).
+
+- - -
+
+### class tf.QueueBase <div class="md-anchor" id="QueueBase">{#QueueBase}</div>
+
+Base class for queue implementations.
+
+A queue is a TensorFlow data structure that stores tensors across
+multiple steps, and exposes operations that enqueue and dequeue
+tensors.
+
+Each queue element is a tuple of one or more tensors, where each
+tuple component has a static dtype, and may have a static shape. The
+queue implementations support versions of enqueue and dequeue that
+handle single elements, as well as versions that enqueue and dequeue
+a batch of elements at once.
+
+See [`tf.FIFOQueue`](#FIFOQueue) and
+[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
+implementations of this class, and instructions on how to create
+them.
+
+- - -
+
+#### tf.QueueBase.enqueue(vals, name=None) {#QueueBase.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+##### Args:
+
+
+* <b>vals</b>: The tuple of `Tensor` objects to be enqueued.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### tf.QueueBase.enqueue_many(vals, name=None) {#QueueBase.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+##### Args:
+
+
+* <b>vals</b>: The tensor or tuple of tensors from which the queue elements
+ are taken.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+
+- - -
+
+#### tf.QueueBase.dequeue(name=None) {#QueueBase.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### tf.QueueBase.dequeue_many(n, name=None) {#QueueBase.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue contains fewer than `n` elements when this operation
+executes, it will block until `n` elements have been dequeued.
+
+##### Args:
+
+
+* <b>n</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+
+- - -
+
+#### tf.QueueBase.size(name=None) {#QueueBase.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
+
+- - -
+
+#### tf.QueueBase.close(cancel_pending_enqueues=False, name=None) {#QueueBase.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>cancel_pending_enqueues</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+
+#### Other Methods
+- - -
+
+#### tf.QueueBase.__init__(dtypes, shapes, queue_ref) {#QueueBase.__init__}
+
+Constructs a queue object from a queue reference.
+
+##### Args:
+
+
+* <b>dtypes</b>: A list of types. The length of dtypes must equal the number
+ of tensors in each element.
+* <b>shapes</b>: Constraints on the shapes of tensors in an element:
+ A list of shape tuples or None. This list is the same length
+ as dtypes. If the shape of any tensor in the element is constrained,
+ all must be; shapes can be None if the shapes should not be constrained.
+* <b>queue_ref</b>: The queue reference, i.e. the output of the queue op.
+
+
+- - -
+
+#### tf.QueueBase.dtypes {#QueueBase.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+- - -
+
+#### tf.QueueBase.name {#QueueBase.name}
+
+The name of the underlying queue.
+
+- - -
+
+#### tf.QueueBase.queue_ref {#QueueBase.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+### class tf.FIFOQueue <div class="md-anchor" id="FIFOQueue">{#FIFOQueue}</div>
+
+A queue implementation that dequeues elements in first-in first-out order.
+
+See [`tf.QueueBase`](#QueueBase) for a description of the methods on
+this class.
+
+- - -
+
+#### tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, shared_name=None, name='fifo_queue') {#FIFOQueue.__init__}
+
+Creates a queue that dequeues elements in a first-in first-out order.
+
+A `FIFOQueue` has bounded capacity; supports multiple concurrent
+producers and consumers; and provides exactly-once delivery.
+
+A `FIFOQueue` holds a list of up to `capacity` elements. Each
+element is a fixed-length tuple of tensors whose dtypes are
+described by `dtypes`, and whose shapes are optionally described
+by the `shapes` argument.
+
+If the `shapes` argument is specified, each component of a queue
+element must have the respective fixed shape. If it is
+unspecified, different queue elements may have different shapes,
+but the use of `dequeue_many` is disallowed.
+
+##### Args:
+
+
+* <b>capacity</b>: An integer. The upper bound on the number of elements
+ that may be stored in this queue.
+* <b>dtypes</b>: A list of `DType` objects. The length of `dtypes` must equal
+ the number of tensors in each queue element.
+* <b>shapes</b>: (Optional.) A list of fully-defined `TensorShape` objects,
+ with the same length as `dtypes` or `None`.
+* <b>shared_name</b>: (Optional.) If non-empty, this queue will be shared under
+ the given name across multiple sessions.
+* <b>name</b>: Optional name for the queue operation.
+
+
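+For example, a minimal sketch that uses a `FIFOQueue` to hold three
+numbers and repeatedly dequeues, increments, and re-enqueues them (the
+values are illustrative):
+
+```python
+import tensorflow as tf
+
+q = tf.FIFOQueue(capacity=3, dtypes=[tf.float32])
+init = q.enqueue_many(([1.0, 2.0, 3.0],))
+x = q.dequeue()
+y = x + 10.0
+q_inc = q.enqueue([y])
+
+with tf.Session() as sess:
+  sess.run(init)
+  for _ in range(3):
+    sess.run(q_inc)     # dequeue the front, add 10, enqueue at the back
+  for _ in range(3):
+    print(sess.run(x))  # 11.0, 12.0, 13.0
+```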
+
+- - -
+
+### class tf.RandomShuffleQueue <div class="md-anchor" id="RandomShuffleQueue">{#RandomShuffleQueue}</div>
+
+A queue implementation that dequeues elements in a random order.
+
+See [`tf.QueueBase`](#QueueBase) for a description of the methods on
+this class.
+
+- - -
+
+#### tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, seed=None, shared_name=None, name='random_shuffle_queue') {#RandomShuffleQueue.__init__}
+
+Create a queue that dequeues elements in a random order.
+
+A `RandomShuffleQueue` has bounded capacity; supports multiple
+concurrent producers and consumers; and provides exactly-once
+delivery.
+
+A `RandomShuffleQueue` holds a list of up to `capacity`
+elements. Each element is a fixed-length tuple of tensors whose
+dtypes are described by `dtypes`, and whose shapes are optionally
+described by the `shapes` argument.
+
+If the `shapes` argument is specified, each component of a queue
+element must have the respective fixed shape. If it is
+unspecified, different queue elements may have different shapes,
+but the use of `dequeue_many` is disallowed.
+
+The `min_after_dequeue` argument allows the caller to specify a
+minimum number of elements that will remain in the queue after a
+`dequeue` or `dequeue_many` operation completes, to ensure a
+minimum level of mixing of elements. This invariant is maintained
+by blocking those operations until sufficient elements have been
+enqueued. The `min_after_dequeue` argument is ignored after the
+queue has been closed.
+
+##### Args:
+
+
+* <b>capacity</b>: An integer. The upper bound on the number of elements
+ that may be stored in this queue.
+* <b>min_after_dequeue</b>: An integer (described above).
+* <b>dtypes</b>: A list of `DType` objects. The length of `dtypes` must equal
+ the number of tensors in each queue element.
+* <b>shapes</b>: (Optional.) A list of fully-defined `TensorShape` objects,
+ with the same length as `dtypes` or `None`.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>shared_name</b>: (Optional.) If non-empty, this queue will be shared under
+ the given name across multiple sessions.
+* <b>name</b>: Optional name for the queue operation.
+
+
+
+
+## Dealing with the filesystem <div class="md-anchor" id="AUTOGENERATED-dealing-with-the-filesystem">{#AUTOGENERATED-dealing-with-the-filesystem}</div>
+
+- - -
+
+### tf.matching_files(pattern, name=None) <div class="md-anchor" id="matching_files">{#matching_files}</div>
+
+Returns the set of files matching a pattern.
+
+Note that this routine only supports wildcard characters in the
+basename portion of the pattern, not in the directory portion.
+
+##### Args:
+
+
+* <b>pattern</b>: A `Tensor` of type `string`. A (scalar) shell wildcard pattern.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`. A vector of matching filenames.
+
+
+- - -
+
+### tf.read_file(filename, name=None) <div class="md-anchor" id="read_file">{#read_file}</div>
+
+Reads and outputs the entire contents of the input filename.
+
+##### Args:
+
+
+* <b>filename</b>: A `Tensor` of type `string`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`.
+
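+For example, a minimal sketch (the pattern and path below are hypothetical):
+
+```python
+import tensorflow as tf
+
+filenames = tf.matching_files("data/*.txt")  # hypothetical pattern
+contents = tf.read_file("data/hello.txt")    # hypothetical path
+
+with tf.Session() as sess:
+  print(sess.run(filenames))  # e.g. ['data/hello.txt', ...]
+  print(sess.run(contents))   # the raw bytes of that file
+```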
+
+
+## Input pipeline <div class="md-anchor" id="AUTOGENERATED-input-pipeline">{#AUTOGENERATED-input-pipeline}</div>
+
+TensorFlow functions for setting up an input-prefetching pipeline.
+Please see the [reading data how-to](../../how_tos/reading_data/index.md)
+for context.
+
+### Beginning of an input pipeline <div class="md-anchor" id="AUTOGENERATED-beginning-of-an-input-pipeline">{#AUTOGENERATED-beginning-of-an-input-pipeline}</div>
+
+The "producer" functions add a queue to the graph and a corresponding
+`QueueRunner` for running the subgraph that fills that queue.
+
+- - -
+
+### tf.train.match_filenames_once(pattern, name=None) <div class="md-anchor" id="match_filenames_once">{#match_filenames_once}</div>
+
+Save the list of files matching pattern, so it is only computed once.
+
+##### Args:
+
+
+* <b>pattern</b>: A file pattern (glob).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A variable that is initialized to the list of files matching pattern.
+
+
+- - -
+
+### tf.train.limit_epochs(tensor, num_epochs=None, name=None) <div class="md-anchor" id="limit_epochs">{#limit_epochs}</div>
+
+Returns tensor num_epochs times and then raises an OutOfRange error.
+
+##### Args:
+
+
+* <b>tensor</b>: Any Tensor.
+* <b>num_epochs</b>: An integer (optional). If specified, limits the number
+ of steps the output tensor may be evaluated.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ tensor or OutOfRange.
+
+
+- - -
+
+### tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="range_input_producer">{#range_input_producer}</div>
+
+Produces the integers from 0 to limit-1 in a queue.
+
+##### Args:
+
+
+* <b>limit</b>: An int32 scalar tensor.
+* <b>num_epochs</b>: An integer (optional). If specified, `range_input_producer`
+ produces each integer `num_epochs` times before generating an
+ OutOfRange error. If not specified, `range_input_producer` can cycle
+ through the integers an unlimited number of times.
+* <b>shuffle</b>: Boolean. If true, the integers are randomly shuffled within each
+ epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A Queue with the output integers. A QueueRunner for the Queue
+ is added to the current Graph's QUEUE_RUNNER collection.
+
+
+- - -
+
+### tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="slice_input_producer">{#slice_input_producer}</div>
+
+Produces a slice of each Tensor in tensor_list.
+
+Implemented using a Queue -- a QueueRunner for the Queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list</b>: A list of Tensors. Every Tensor in tensor_list must
+ have the same size in the first dimension.
+* <b>num_epochs</b>: An integer (optional). If specified, `slice_input_producer`
+ produces each slice `num_epochs` times before generating
+ an OutOfRange error. If not specified, `slice_input_producer` can cycle
+ through the slices an unlimited number of times.
+* <b>shuffle</b>: Boolean. If true, the slices are randomly shuffled within
+ each epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors, one for each element of tensor_list. If the tensor
+ in tensor_list has shape [N, a, b, ..., z], then the corresponding output
+ tensor will have shape [a, b, ..., z].
+
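+For example, a minimal sketch that produces one (feature, label) pair per
+step from small in-memory tensors (the data is illustrative):
+
+```python
+import tensorflow as tf
+
+features = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
+labels = tf.constant([0, 1, 0])
+feature, label = tf.train.slice_input_producer([features, labels],
+                                               shuffle=False)
+
+with tf.Session() as sess:
+  coord = tf.train.Coordinator()
+  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+  for _ in range(3):
+    print(sess.run([feature, label]))  # one row of features and its label
+  coord.request_stop()
+  coord.join(threads)
+```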
+
+- - -
+
+### tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="string_input_producer">{#string_input_producer}</div>
+
+Output strings (e.g. filenames) to a queue for an input pipeline.
+
+##### Args:
+
+
+* <b>string_tensor</b>: A 1-D string tensor with the strings to produce.
+* <b>num_epochs</b>: An integer (optional). If specified, `string_input_producer`
+ produces each string from `string_tensor` `num_epochs` times before
+ generating an OutOfRange error. If not specified, `string_input_producer`
+ can cycle through the strings in `string_tensor` an unlimited number of
+ times.
+* <b>shuffle</b>: Boolean. If true, the strings are randomly shuffled within each
+ epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A queue with the output strings. A QueueRunner for the Queue
+ is added to the current Graph's QUEUE_RUNNER collection.
+
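+For example, a minimal sketch of the start of a file-reading pipeline
+(the filenames are hypothetical):
+
+```python
+import tensorflow as tf
+
+filenames = ["file0.txt", "file1.txt"]  # hypothetical filenames
+filename_queue = tf.train.string_input_producer(filenames, shuffle=False)
+reader = tf.WholeFileReader()
+key, value = reader.read(filename_queue)
+
+with tf.Session() as sess:
+  coord = tf.train.Coordinator()
+  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+  for _ in range(2):              # one read per file
+    k, v = sess.run([key, value])
+    print(k)                      # the filename that was read
+  coord.request_stop()
+  coord.join(threads)
+```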
+
+
+### Batching at the end of an input pipeline <div class="md-anchor" id="AUTOGENERATED-batching-at-the-end-of-an-input-pipeline">{#AUTOGENERATED-batching-at-the-end-of-an-input-pipeline}</div>
+
+These functions add a queue to the graph to assemble a batch of examples, with
+possible shuffling. They also add a `QueueRunner` for running the subgraph
+that fills that queue.
+
+Use [batch](#batch) or [batch_join](#batch_join) for batching examples that have
+already been well shuffled. Use [shuffle_batch](#shuffle_batch) or
+[shuffle_batch_join](#shuffle_batch_join) for examples that
+would benefit from additional shuffling.
+
+Use [batch](#batch) or [shuffle_batch](#shuffle_batch) if you want a
+single thread producing examples to batch, or if you have a
+single subgraph producing examples but you want to run it in N threads
+(where you increase N until it can keep the queue full). Use
+[batch_join](#batch_join) or [shuffle_batch_join](#shuffle_batch_join)
+if you have N different subgraphs producing examples to batch and you
+want them run by N threads.
+
+- - -
+
+### tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch">{#batch}</div>
+
+Run tensor_list to fill a queue to create batches.
+
+Implemented using a queue -- a QueueRunner for the queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list</b>: The list of tensors to enqueue.
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>num_threads</b>: The number of threads enqueuing tensor_list.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+ far ahead prefetching is allowed to get, and memory usage.
+* <b>enqueue_many</b>: If False, tensor_list is assumed to represent a
+ single example. If True, tensor_list is assumed to represent
+ a batch of examples, where the first dimension is indexed by
+ example, and all members of tensor_list should have the same
+ size in the first dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list (leaving off the first dimension
+ if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as tensor_list.
+ If enqueue_many is false, then an input tensor with shape
+ `[x, y, z]` will be output as a tensor with shape
+ `[batch_size, x, y, z]`. If enqueue_many is True, and an
+ input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+
+- - -
+
+### tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch_join">{#batch_join}</div>
+
+Run a list of tensors to fill a queue to create batches of examples.
+
+This version enqueues a different list of tensors in different threads.
+Implemented using a queue -- a QueueRunner for the queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
+ len(tensor_list_list) threads will be started, with the i-th
+ thread enqueuing the tensors from tensor_list_list[i].
+ tensor_list_list[i1][j] must match tensor_list_list[i2][j] in type and
+ shape (except in the first dimension if enqueue_many is true).
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+ far ahead prefetching is allowed to get, and memory usage.
+* <b>enqueue_many</b>: If False, each tensor_list_list[i] is assumed to
+ represent a single example. If True, tensor_list_list[i] is
+ assumed to represent a batch of examples, where the first
+ dimension is indexed by example, and all members of
+ tensor_list_list[i] should have the same size in the first
+ dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list_list[i] (which must match, after
+ leaving off the first dimension if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as
+ tensor_list_list[i]. If enqueue_many is false, then an input
+ tensor with shape `[x, y, z]` will be output as a tensor with
+ shape `[batch_size, x, y, z]`. If enqueue_many is True, and an
+ input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+
+- - -
+
+### tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch">{#shuffle_batch}</div>
+
+Create batches by randomly shuffling tensors.
+
+This adds:
+
+* a shuffling queue into which tensors from tensor_list are enqueued,
+* a dequeue_many operation to create batches from the queue,
+* and a QueueRunner to the current Graph's QUEUE_RUNNER collection,
+ to enqueue the tensors from tensor_list.
+
+##### Args:
+
+
+* <b>tensor_list</b>: The list of tensors to enqueue.
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+ far ahead prefetching is allowed to get, and memory usage.
+* <b>min_after_dequeue</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>num_threads</b>: The number of threads enqueuing tensor_list.
+* <b>seed</b>: Seed for the random shuffling within the queue.
+* <b>enqueue_many</b>: If False, tensor_list is assumed to represent a
+ single example. If True, tensor_list is assumed to represent
+ a batch of examples, where the first dimension is indexed by
+ example, and all members of tensor_list should have the same
+ size in the first dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list (leaving off the first dimension
+ if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as tensor_list.
+ If enqueue_many is false, then an input tensor with shape
+ `[x, y, z]` will be output as a tensor with shape
+ `[batch_size, x, y, z]`. If enqueue_many is True, and an
+ input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
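+For example, a minimal sketch of the usual training-input pattern; the
+random `image` and constant `label` below are stand-ins for the output
+of a real reader/decoder such as `tf.parse_single_example`:
+
+```python
+import tensorflow as tf
+
+# Stand-ins for one decoded example.
+image = tf.random_uniform([28, 28])
+label = tf.constant(1, dtype=tf.int64)
+
+images, labels = tf.train.shuffle_batch(
+    [image, label], batch_size=32, num_threads=4,
+    capacity=1000, min_after_dequeue=500)
+
+with tf.Session() as sess:
+  coord = tf.train.Coordinator()
+  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+  print(sess.run(images).shape)  # (32, 28, 28)
+  coord.request_stop()
+  coord.join(threads)
+```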
+
+- - -
+
+### tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch_join">{#shuffle_batch_join}</div>
+
+Create batches by randomly shuffling tensors.
+
+This version enqueues a different list of tensors in different threads.
+It adds:
+
+* a shuffling queue into which tensors from tensor_list_list are enqueued,
+* a dequeue_many operation to create batches from the queue,
+* and a QueueRunner to the current Graph's QUEUE_RUNNER collection,
+ to enqueue the tensors from tensor_list_list.
+
+##### Args:
+
+
+* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
+ len(tensor_list_list) threads will be started, with the i-th
+ thread enqueuing the tensors from tensor_list_list[i].
+ tensor_list_list[i1][j] must match tensor_list_list[i2][j] in type and
+ shape (except in the first dimension if enqueue_many is true).
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+ far ahead prefetching is allowed to get, and memory usage.
+* <b>min_after_dequeue</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>seed</b>: Seed for the random shuffling within the queue.
+* <b>enqueue_many</b>: If False, each tensor_list_list[i] is assumed to
+ represent a single example. If True, tensor_list_list[i] is
+ assumed to represent a batch of examples, where the first
+ dimension is indexed by example, and all members of
+ tensor_list_list[i] should have the same size in the first
+ dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list_list[i] (which must match, after
+ leaving off the first dimension if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as
+ tensor_list_list[i]. If enqueue_many is false, then an input
+ tensor with shape `[x, y, z]` will be output as a tensor with
+ shape `[batch_size, x, y, z]`. If enqueue_many is True, and an
+ input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+