path: root/tensorflow/docs_src/api_guides/python/io_ops.md
Diffstat (limited to 'tensorflow/docs_src/api_guides/python/io_ops.md')
-rw-r--r-- tensorflow/docs_src/api_guides/python/io_ops.md | 130 ----------------
1 file changed, 0 insertions(+), 130 deletions(-)
diff --git a/tensorflow/docs_src/api_guides/python/io_ops.md b/tensorflow/docs_src/api_guides/python/io_ops.md
deleted file mode 100644
index d7ce6fdfde..0000000000
--- a/tensorflow/docs_src/api_guides/python/io_ops.md
+++ /dev/null
@@ -1,130 +0,0 @@
-# Inputs and Readers
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-`tf.convert_to_tensor`.
-
-[TOC]
-
-## Placeholders
-
-TensorFlow provides a placeholder operation that must be fed with data
-on execution. For more info, see the section on [Feeding data](../../api_guides/python/reading_data.md#Feeding).
-
-* `tf.placeholder`
-* `tf.placeholder_with_default`
-
-For feeding `SparseTensor`s, which are a composite type,
-there is a convenience function:
-
-* `tf.sparse_placeholder`
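
For illustration, a minimal sketch of feeding a placeholder at run time. This guide describes the 1.x API, where these symbols live directly under `tf.`; the sketch is written against `tf.compat.v1` so it also runs on current TensorFlow installs:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # placeholders require graph mode

# A placeholder with an unspecified batch dimension; it must be fed.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
row_sums = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    out = sess.run(row_sums,
                   feed_dict={x: np.array([[1., 2., 3.],
                                           [4., 5., 6.]])})

print(out)  # the two row sums, 6 and 15
```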
-
-## Readers
-
-TensorFlow provides a set of Reader classes for reading data in various formats.
-For more information on inputs and readers, see [Reading data](../../api_guides/python/reading_data.md).
-
-* `tf.ReaderBase`
-* `tf.TextLineReader`
-* `tf.WholeFileReader`
-* `tf.IdentityReader`
-* `tf.TFRecordReader`
-* `tf.FixedLengthRecordReader`
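
As a sketch of the Reader pattern: a reader takes a queue of filenames and returns one record per `read` call. Written against `tf.compat.v1` (where these 1.x symbols now live) with a throwaway temp file:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A small text file to read back line by line.
path = os.path.join(tempfile.mkdtemp(), "lines.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

# The producer adds a queue of filenames; the reader consumes it.
filename_queue = tf.train.string_input_producer([path])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    lines = [sess.run(value) for _ in range(3)]
    coord.request_stop()
    coord.join(threads)

print(lines)  # [b'alpha', b'beta', b'gamma']
```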
-
-## Converting
-
-TensorFlow provides several operations that you can use to convert various data
-formats into tensors.
-
-* `tf.decode_csv`
-* `tf.decode_raw`
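
A minimal `tf.decode_csv` sketch (via `tf.compat.v1`): each column's output dtype is fixed by the dtype of its entry in `record_defaults`:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

line = tf.constant("3,4.5,hello")
# One default per column; each default's dtype fixes that column's dtype.
record_defaults = [[0], [0.0], [""]]
col1, col2, col3 = tf.decode_csv(line, record_defaults=record_defaults)

with tf.Session() as sess:
    values = sess.run([col1, col2, col3])

print(values)  # [3, 4.5, b'hello']
```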
-
-- - -
-
-### Example protocol buffer
-
-TensorFlow's [recommended format for training examples](../../api_guides/python/reading_data.md#standard_tensorflow_format)
-is serialized `Example` protocol buffers, [described
-here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
-They contain `Features`, [described
-here](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto).
-
-* `tf.VarLenFeature`
-* `tf.FixedLenFeature`
-* `tf.FixedLenSequenceFeature`
-* `tf.SparseFeature`
-* `tf.parse_example`
-* `tf.parse_single_example`
-* `tf.parse_tensor`
-* `tf.decode_json_example`
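
A sketch of round-tripping one `Example` proto through `tf.parse_single_example` (written against `tf.compat.v1`): a `FixedLenFeature` yields a dense value, a `VarLenFeature` yields a `SparseTensor`:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build and serialize one Example proto by hand.
example = tf.train.Example(features=tf.train.Features(feature={
    "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
    "words": tf.train.Feature(bytes_list=tf.train.BytesList(
        value=[b"hi", b"there"])),
}))
serialized = example.SerializeToString()

features = tf.parse_single_example(
    tf.constant(serialized),
    features={
        "label": tf.FixedLenFeature([], tf.int64),  # exactly one value
        "words": tf.VarLenFeature(tf.string),       # variable length -> sparse
    })

with tf.Session() as sess:
    out = sess.run(features)

print(out["label"])         # 1
print(out["words"].values)  # [b'hi' b'there']
```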
-
-## Queues
-
-TensorFlow provides several implementations of queues: structures
-within the TensorFlow computation graph for staging pipelines of
-tensors. The following list gives the basic Queue interface and
-some implementations. For an example use, see [Threading and Queues](../../api_guides/python/threading_and_queues.md).
-
-* `tf.QueueBase`
-* `tf.FIFOQueue`
-* `tf.PaddingFIFOQueue`
-* `tf.RandomShuffleQueue`
-* `tf.PriorityQueue`
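
A minimal `FIFOQueue` sketch (via `tf.compat.v1`): enqueue and dequeue are graph operations that only move data when run in a session:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

q = tf.FIFOQueue(capacity=3, dtypes=[tf.int32])
enqueue = q.enqueue_many([[10, 20, 30]])  # stage three elements at once
dequeue = q.dequeue()

with tf.Session() as sess:
    sess.run(enqueue)
    first = sess.run(dequeue)
    second = sess.run(dequeue)

print(first, second)  # 10 20
```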
-
-## Conditional Accumulators
-
-* `tf.ConditionalAccumulatorBase`
-* `tf.ConditionalAccumulator`
-* `tf.SparseConditionalAccumulator`
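
A sketch of `tf.ConditionalAccumulator` (via `tf.compat.v1`), assuming the default mean reduction: gradients are applied independently, and `take_grad` waits until at least `num_required` of them have arrived:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

accum = tf.ConditionalAccumulator(dtype=tf.float32, shape=())
apply_a = accum.apply_grad(tf.constant(2.0))
apply_b = accum.apply_grad(tf.constant(4.0))
avg = accum.take_grad(num_required=2)  # mean of the accumulated gradients

with tf.Session() as sess:
    sess.run([apply_a, apply_b])
    result = sess.run(avg)

print(result)  # 3.0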
-
-## Dealing with the filesystem
-
-* `tf.matching_files`
-* `tf.read_file`
-* `tf.write_file`
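
A sketch of the filesystem ops (via `tf.compat.v1`), writing into a throwaway temp directory:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

path = os.path.join(tempfile.mkdtemp(), "note.txt")

with tf.Session() as sess:
    # write_file and read_file operate on the file named by their argument.
    sess.run(tf.write_file(path, tf.constant("hello io_ops")))
    content = sess.run(tf.read_file(path))
    # matching_files expands a glob pattern into a list of filenames.
    pattern = os.path.join(os.path.dirname(path), "*.txt")
    matched = sess.run(tf.matching_files(pattern))

print(content)  # b'hello io_ops'
print(matched)  # the one matching filename
```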
-
-## Input pipeline
-
-TensorFlow provides functions for setting up an input-prefetching pipeline.
-Please see the [reading data how-to](../../api_guides/python/reading_data.md)
-for context.
-
-### Beginning of an input pipeline
-
-The "producer" functions add a queue to the graph and a corresponding
-`QueueRunner` for running the subgraph that fills that queue.
-
-* `tf.train.match_filenames_once`
-* `tf.train.limit_epochs`
-* `tf.train.input_producer`
-* `tf.train.range_input_producer`
-* `tf.train.slice_input_producer`
-* `tf.train.string_input_producer`
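
A sketch of a producer (via `tf.compat.v1`): `slice_input_producer` adds a queue plus a `QueueRunner` that fills it with slices of the given tensors:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

data = tf.constant([10, 20, 30])
# shuffle=False preserves the original order of the slices.
item = tf.train.slice_input_producer([data], shuffle=False)[0]

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    vals = [int(sess.run(item)) for _ in range(3)]
    coord.request_stop()
    coord.join(threads)

print(vals)  # [10, 20, 30]
```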
-
-### Batching at the end of an input pipeline
-
-These functions add a queue to the graph to assemble a batch of
-examples, with possible shuffling. They also add a `QueueRunner` for
-running the subgraph that fills that queue.
-
-Use `tf.train.batch` or `tf.train.batch_join` for batching
-examples that have already been well shuffled. Use
-`tf.train.shuffle_batch` or
-`tf.train.shuffle_batch_join` for examples that would
-benefit from additional shuffling.
-
-Use `tf.train.batch` or `tf.train.shuffle_batch` if you want a
-single thread producing examples to batch, or if you have a
-single subgraph producing examples but you want to run it in *N* threads
-(where you increase *N* until it can keep the queue full). Use
-`tf.train.batch_join` or `tf.train.shuffle_batch_join`
-if you have *N* different subgraphs producing examples to batch and you
-want each run in its own thread. Use the `maybe_*` variants to enqueue examples conditionally.
-
-* `tf.train.batch`
-* `tf.train.maybe_batch`
-* `tf.train.batch_join`
-* `tf.train.maybe_batch_join`
-* `tf.train.shuffle_batch`
-* `tf.train.maybe_shuffle_batch`
-* `tf.train.shuffle_batch_join`
-* `tf.train.maybe_shuffle_batch_join`
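
The single-subgraph case above can be sketched with `tf.train.batch` (written against `tf.compat.v1`, where these 1.x symbols now live):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A producer emitting 0..7 in order, batched four examples at a time.
example = tf.train.range_input_producer(limit=8, shuffle=False).dequeue()
batch = tf.train.batch([example], batch_size=4)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    first = sess.run(batch)
    coord.request_stop()
    coord.join(threads)

print(first)  # [0 1 2 3]
```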