Diffstat (limited to 'tensorflow/g3doc')
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnv.md | 56
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md | 58
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassPartialTensorShape.md | 8
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassPartialTensorShapeUtils.md | 4
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassSession.md | 69
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensor.md | 41
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShape.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md | 10
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructTF_Buffer.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/python/nn.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md | 2
-rw-r--r--  tensorflow/g3doc/get_started/index.md | 1
-rw-r--r--  tensorflow/g3doc/get_started/os_setup.md | 40
-rw-r--r--  tensorflow/g3doc/how_tos/adding_an_op/index.md | 2
-rw-r--r--  tensorflow/g3doc/how_tos/distributed/index.md | 4
-rw-r--r--  tensorflow/g3doc/resources/roadmap.md | 15
-rw-r--r--  tensorflow/g3doc/tutorials/deep_cnn/index.md | 26
-rw-r--r--  tensorflow/g3doc/tutorials/index.md | 4
-rwxr-xr-x  tensorflow/g3doc/tutorials/mandelbrot/index.md | 11
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/beginners/index.md | 20
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/pros/index.md | 21
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/tf/index.md | 59
-rwxr-xr-x  tensorflow/g3doc/tutorials/pdes/index.md | 5
-rw-r--r--  tensorflow/g3doc/tutorials/recurrent/index.md | 13
-rw-r--r--  tensorflow/g3doc/tutorials/word2vec/index.md | 11
26 files changed, 233 insertions, 267 deletions
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnv.md b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
index 1bea893187..227543dff1 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassEnv.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
@@ -14,13 +14,31 @@ All Env implementations are safe for concurrent access from multiple threads wit
-#### `tensorflow::Env::~Env()` {#tensorflow_Env_Env}
+#### `virtual tensorflow::Env::~Env()=default` {#virtual_tensorflow_Env_Env}
-#### `virtual Status tensorflow::Env::NewRandomAccessFile(const string &fname, RandomAccessFile **result)=0` {#virtual_Status_tensorflow_Env_NewRandomAccessFile}
+#### `Status tensorflow::Env::GetFileSystemForFile(const string &fname, FileSystem **result)` {#Status_tensorflow_Env_GetFileSystemForFile}
+
+Returns the FileSystem object to handle operations on the file specified by 'fname'. The FileSystem object is used as the implementation for the file system related (non-virtual) functions that follow. Returned FileSystem object is still owned by the Env object and will.
+
+
+
+#### `Status tensorflow::Env::GetRegisteredFileSystemSchemes(std::vector< string > *schemes)` {#Status_tensorflow_Env_GetRegisteredFileSystemSchemes}
+
+Returns the file system schemes registered for this Env .
+
+
+
+#### `void tensorflow::Env::RegisterFileSystem(const string &scheme, FileSystemRegistry::Factory factory)` {#void_tensorflow_Env_RegisterFileSystem}
+
+
+
+
+
+#### `Status tensorflow::Env::NewRandomAccessFile(const string &fname, RandomAccessFile **result)` {#Status_tensorflow_Env_NewRandomAccessFile}
Creates a brand new random access read-only file with the specified name.
@@ -28,7 +46,9 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file may be concurrently accessed by multiple threads.
-#### `virtual Status tensorflow::Env::NewWritableFile(const string &fname, WritableFile **result)=0` {#virtual_Status_tensorflow_Env_NewWritableFile}
+The ownership of the returned RandomAccessFile is passed to the caller and the object should be deleted when is not used. The file object shouldn&apos;t live longer than the Env object.
+
+#### `Status tensorflow::Env::NewWritableFile(const string &fname, WritableFile **result)` {#Status_tensorflow_Env_NewWritableFile}
Creates an object that writes to a new file with the specified name.
@@ -36,7 +56,9 @@ Deletes any existing file with the same name and creates a new file. On success,
The returned file will only be accessed by one thread at a time.
-#### `virtual Status tensorflow::Env::NewAppendableFile(const string &fname, WritableFile **result)=0` {#virtual_Status_tensorflow_Env_NewAppendableFile}
+The ownership of the returned WritableFile is passed to the caller and the object should be deleted when is not used. The file object shouldn&apos;t live longer than the Env object.
+
+#### `Status tensorflow::Env::NewAppendableFile(const string &fname, WritableFile **result)` {#Status_tensorflow_Env_NewAppendableFile}
Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
@@ -44,43 +66,55 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file will only be accessed by one thread at a time.
-#### `virtual bool tensorflow::Env::FileExists(const string &fname)=0` {#virtual_bool_tensorflow_Env_FileExists}
+The ownership of the returned WritableFile is passed to the caller and the object should be deleted when is not used. The file object shouldn&apos;t live longer than the Env object.
+
+#### `Status tensorflow::Env::NewReadOnlyMemoryRegionFromFile(const string &fname, ReadOnlyMemoryRegion **result)` {#Status_tensorflow_Env_NewReadOnlyMemoryRegionFromFile}
+
+Creates a readonly region of memory with the file context.
+
+On success, it returns a pointer to read-only memory region from the content of file fname. The ownership of the region is passed to the caller. On failure stores nullptr in *result and returns non-OK.
+
+The returned memory region can be accessed from many threads in parallel.
+
+The ownership of the returned ReadOnlyMemoryRegion is passed to the caller and the object should be deleted when is not used. The memory region object shouldn&apos;t live longer than the Env object.
+
+#### `bool tensorflow::Env::FileExists(const string &fname)` {#bool_tensorflow_Env_FileExists}
Returns true iff the named file exists.
-#### `virtual Status tensorflow::Env::GetChildren(const string &dir, std::vector< string > *result)=0` {#virtual_Status_tensorflow_Env_GetChildren}
+#### `Status tensorflow::Env::GetChildren(const string &dir, std::vector< string > *result)` {#Status_tensorflow_Env_GetChildren}
Stores in *result the names of the children of the specified directory. The names are relative to "dir".
Original contents of *results are dropped.
-#### `virtual Status tensorflow::Env::DeleteFile(const string &fname)=0` {#virtual_Status_tensorflow_Env_DeleteFile}
+#### `Status tensorflow::Env::DeleteFile(const string &fname)` {#Status_tensorflow_Env_DeleteFile}
Deletes the named file.
-#### `virtual Status tensorflow::Env::CreateDir(const string &dirname)=0` {#virtual_Status_tensorflow_Env_CreateDir}
+#### `Status tensorflow::Env::CreateDir(const string &dirname)` {#Status_tensorflow_Env_CreateDir}
Creates the specified directory.
-#### `virtual Status tensorflow::Env::DeleteDir(const string &dirname)=0` {#virtual_Status_tensorflow_Env_DeleteDir}
+#### `Status tensorflow::Env::DeleteDir(const string &dirname)` {#Status_tensorflow_Env_DeleteDir}
Deletes the specified directory.
-#### `virtual Status tensorflow::Env::GetFileSize(const string &fname, uint64 *file_size)=0` {#virtual_Status_tensorflow_Env_GetFileSize}
+#### `Status tensorflow::Env::GetFileSize(const string &fname, uint64 *file_size)` {#Status_tensorflow_Env_GetFileSize}
Stores the size of `fname` in `*file_size`.
-#### `virtual Status tensorflow::Env::RenameFile(const string &src, const string &target)=0` {#virtual_Status_tensorflow_Env_RenameFile}
+#### `Status tensorflow::Env::RenameFile(const string &src, const string &target)` {#Status_tensorflow_Env_RenameFile}
Renames file src to target. If target already exists, it will be replaced.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
index aab4d735c5..f98c1e8fe4 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
@@ -24,69 +24,21 @@ Returns the target to which this Env forwards all calls.
-#### `Status tensorflow::EnvWrapper::NewRandomAccessFile(const string &f, RandomAccessFile **r) override` {#Status_tensorflow_EnvWrapper_NewRandomAccessFile}
+#### `Status tensorflow::EnvWrapper::GetFileSystemForFile(const string &fname, FileSystem **result) override` {#Status_tensorflow_EnvWrapper_GetFileSystemForFile}
-Creates a brand new random access read-only file with the specified name.
+Returns the FileSystem object to handle operations on the file specified by &apos;fname&apos;. The FileSystem object is used as the implementation for the file system related (non-virtual) functions that follow. Returned FileSystem object is still owned by the Env object and will.
-On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK. If the file does not exist, returns a non-OK status.
-The returned file may be concurrently accessed by multiple threads.
-#### `Status tensorflow::EnvWrapper::NewWritableFile(const string &f, WritableFile **r) override` {#Status_tensorflow_EnvWrapper_NewWritableFile}
+#### `Status tensorflow::EnvWrapper::GetRegisteredFileSystemSchemes(std::vector< string > *schemes) override` {#Status_tensorflow_EnvWrapper_GetRegisteredFileSystemSchemes}
-Creates an object that writes to a new file with the specified name.
+Returns the file system schemes registered for this Env .
-Deletes any existing file with the same name and creates a new file. On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
-The returned file will only be accessed by one thread at a time.
-#### `Status tensorflow::EnvWrapper::NewAppendableFile(const string &f, WritableFile **r) override` {#Status_tensorflow_EnvWrapper_NewAppendableFile}
+#### `void tensorflow::EnvWrapper::RegisterFileSystem(const string &scheme, FileSystemRegistry::Factory factory) override` {#void_tensorflow_EnvWrapper_RegisterFileSystem}
-Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
-On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
-
-The returned file will only be accessed by one thread at a time.
-
-#### `bool tensorflow::EnvWrapper::FileExists(const string &f) override` {#bool_tensorflow_EnvWrapper_FileExists}
-
-Returns true iff the named file exists.
-
-
-
-#### `Status tensorflow::EnvWrapper::GetChildren(const string &dir, std::vector< string > *r) override` {#Status_tensorflow_EnvWrapper_GetChildren}
-
-Stores in *result the names of the children of the specified directory. The names are relative to "dir".
-
-Original contents of *results are dropped.
-
-#### `Status tensorflow::EnvWrapper::DeleteFile(const string &f) override` {#Status_tensorflow_EnvWrapper_DeleteFile}
-
-Deletes the named file.
-
-
-
-#### `Status tensorflow::EnvWrapper::CreateDir(const string &d) override` {#Status_tensorflow_EnvWrapper_CreateDir}
-
-Creates the specified directory.
-
-
-
-#### `Status tensorflow::EnvWrapper::DeleteDir(const string &d) override` {#Status_tensorflow_EnvWrapper_DeleteDir}
-
-Deletes the specified directory.
-
-
-
-#### `Status tensorflow::EnvWrapper::GetFileSize(const string &f, uint64 *s) override` {#Status_tensorflow_EnvWrapper_GetFileSize}
-
-Stores the size of `fname` in `*file_size`.
-
-
-
-#### `Status tensorflow::EnvWrapper::RenameFile(const string &s, const string &t) override` {#Status_tensorflow_EnvWrapper_RenameFile}
-
-Renames file src to target. If target already exists, it will be replaced.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShape.md b/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShape.md
index b9afae0152..5db2760bbc 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShape.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShape.md
@@ -120,8 +120,14 @@ Returns `OK` iff `proto` is a valid tensor shape, and a descriptive error status
-#### `Status tensorflow::PartialTensorShape::MakePartialShape(const T *dims, int n, PartialTensorShape *out)` {#Status_tensorflow_PartialTensorShape_MakePartialShape}
+#### `static Status tensorflow::PartialTensorShape::MakePartialShape(const int32 *dims, int n, PartialTensorShape *out)` {#static_Status_tensorflow_PartialTensorShape_MakePartialShape}
Returns a ` PartialTensorShape ` whose dimensions are `dims[0]`, `dims[1]`, ..., `dims[n-1]`. Values of -1 are considered "unknown".
+
+#### `static Status tensorflow::PartialTensorShape::MakePartialShape(const int64 *dims, int n, PartialTensorShape *out)` {#static_Status_tensorflow_PartialTensorShape_MakePartialShape}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShapeUtils.md b/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShapeUtils.md
index 18e30f7f1d..616adc0c59 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShapeUtils.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassPartialTensorShapeUtils.md
@@ -6,13 +6,13 @@ Static helper routines for ` PartialTensorShape `. Includes a few common predica
###Member Details
-#### `static string tensorflow::PartialTensorShapeUtils::PartialShapeListString(const gtl::ArraySlice< PartialTensorShape > &shapes)` {#static_string_tensorflow_PartialTensorShapeUtils_PartialShapeListString}
+#### `string tensorflow::PartialTensorShapeUtils::PartialShapeListString(const gtl::ArraySlice< PartialTensorShape > &shapes)` {#string_tensorflow_PartialTensorShapeUtils_PartialShapeListString}
-#### `static bool tensorflow::PartialTensorShapeUtils::AreCompatible(const gtl::ArraySlice< PartialTensorShape > &shapes0, const gtl::ArraySlice< PartialTensorShape > &shapes1)` {#static_bool_tensorflow_PartialTensorShapeUtils_AreCompatible}
+#### `bool tensorflow::PartialTensorShapeUtils::AreCompatible(const gtl::ArraySlice< PartialTensorShape > &shapes0, const gtl::ArraySlice< PartialTensorShape > &shapes1)` {#bool_tensorflow_PartialTensorShapeUtils_AreCompatible}
diff --git a/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
index 1a1526f66d..1ff484c083 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
@@ -18,7 +18,7 @@ A file abstraction for randomly reading the contents of a file.
-#### `virtual Status tensorflow::RandomAccessFile::Read(uint64 offset, size_t n, StringPiece *result, char *scratch) const =0` {#virtual_Status_tensorflow_RandomAccessFile_Read}
+#### `virtual Status tensorflow::RandomAccessFile::Read(uint64 offset, size_t n, StringPiece *result, char *scratch) const =0` {#virtual_Status_tensorflow_RandomAccessFile_Read}
Reads up to `n` bytes from the file starting at `offset`.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassSession.md b/tensorflow/g3doc/api_docs/cc/ClassSession.md
index 90201de0a9..f501b2dbd4 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassSession.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassSession.md
@@ -6,42 +6,25 @@ When a Session is created with a given target, a new Session object is bound to
Example:
-```c++ tensorflow::GraphDef graph;
-// ... Create or load graph into "graph".
+{c++} tensorflow::GraphDef graph; // ... Create or load graph into "graph". // This example uses the default options which connects // to a local runtime. tensorflow::SessionOptions options; std::unique_ptr<tensorflow::Session> session(tensorflow::NewSession(options)); // Create the session with this graph. tensorflow::Status s = session->Create(graph); if (!s.ok()) { ... } // Run the graph and fetch the first output of the "output" // operation, and also run to but do not return anything // for the "update_state" operation. std::vector<tensorflow::Tensor> outputs; s = session->Run({}, {"output:0"}, {"update_state"}, &outputs); if (!s.ok()) { ... } // Map the output as a flattened float tensor, and do something // with it. auto output_tensor = outputs[0].flat<float>(); if (output_tensor(0) > 0.5) { ... } // Close the session to release the resources associated with // this session. session->Close();
-// This example uses the default options which connects
-// to a local runtime.
-tensorflow::SessionOptions options;
-std::unique_ptr<tensorflow::Session>
-session(tensorflow::NewSession(options));
+A Session allows concurrent calls to Run() , though a Session must be created / extended by a single thread.
-// Create the session with this graph.
-tensorflow::Status s = session->Create(graph);
-if (!s.ok()) { ... }
+Only one thread must call Close() , and Close() must only be called after all other calls to Run() have returned.
-// Run the graph and fetch the first output of the "output"
-// operation, and also run to but do not return anything
-// for the "update_state" operation.
-std::vector<tensorflow::Tensor> outputs;
-s = session->Run({}, {"output:0"}, {"update_state"}, &outputs);
-if (!s.ok()) { ... }
+###Member Details
-// Map the output as a flattened float tensor, and do something
-// with it.
-auto output_tensor = outputs[0].flat<float>();
-if (output_tensor(0) > 0.5) { ... }
+#### `tensorflow::Session::Session()` {#tensorflow_Session_Session}
-// Close the session to release the resources associated with
-// this session.
-session->Close();
-```
-A Session allows concurrent calls to Run() , though a Session must be created / extended by a single thread.
-Only one thread must call Close() , and Close() must only be called after all other calls to Run() have returned.
-###Member Details
+#### `virtual tensorflow::Session::~Session()` {#virtual_tensorflow_Session_Session}
+
+
+
+
#### `virtual Status tensorflow::Session::Create(const GraphDef &graph)=0` {#virtual_Status_tensorflow_Session_Create}
@@ -67,32 +50,44 @@ REQUIRES: The name of each Tensor of the input or output must match a "Tensor en
REQUIRES: outputs is not nullptr if `output_tensor_names` is non-empty.
-#### `virtual Status tensorflow::Session::RunWithOpts(const RunOptions &run_options, const std::vector< std::pair< string, Tensor > > &inputs, const std::vector< string > &output_tensor_names, const std::vector< string > &target_node_names, std::vector< Tensor > *outputs, RunMetadata *run_metadata)` {#virtual_Status_tensorflow_Session_RunWithOpts}
+#### `virtual Status tensorflow::Session::Create(const RunOptions &run_options, const GraphDef &graph)` {#virtual_Status_tensorflow_Session_Create}
-Like `Run`, but allows users to pass in a `RunOptions` proto and to retrieve non-Tensor metadata output via a `RunMetadata` proto for this step. NOTE: This API is still experimental and may change.
+Implementations which support `RunOptions`.
+NOTE: This API is still experimental and may change.
+#### `virtual Status tensorflow::Session::Extend(const RunOptions &run_options, const GraphDef &graph)` {#virtual_Status_tensorflow_Session_Extend}
-#### `virtual Status tensorflow::Session::PRunSetup(const std::vector< string > &input_names, const std::vector< string > &output_names, const std::vector< string > &target_nodes, string *handle)` {#virtual_Status_tensorflow_Session_PRunSetup}
-Sets up a graph for partial execution. All future feeds and fetches are specified by &apos;input_names&apos; and &apos;output_names&apos;. Returns &apos;handle&apos; that can be used to perform a sequence of partial feeds and fetches. NOTE: This API is still experimental and may change.
-#### `virtual Status tensorflow::Session::PRun(const string &handle, const std::vector< std::pair< string, Tensor > > &inputs, const std::vector< string > &output_names, std::vector< Tensor > *outputs)` {#virtual_Status_tensorflow_Session_PRun}
+#### `virtual Status tensorflow::Session::Close(const RunOptions &run_options)` {#virtual_Status_tensorflow_Session_Close}
-Continues the pending execution specified by &apos;handle&apos; with the provided input tensors and fills `outputs` for the endpoints specified in `output_names`. NOTE: This API is still experimental and may change.
-#### `virtual Status tensorflow::Session::Close()=0` {#virtual_Status_tensorflow_Session_Close}
-Closes this session.
+#### `virtual Status tensorflow::Session::Run(const RunOptions &run_options, const std::vector< std::pair< string, Tensor > > &inputs, const std::vector< string > &output_tensor_names, const std::vector< string > &target_node_names, std::vector< Tensor > *outputs, RunMetadata *run_metadata)` {#virtual_Status_tensorflow_Session_Run}
-Closing a session releases the resources used by this session on the TensorFlow runtime (specified during session creation by the ` SessionOptions::target ` field).
+Like `Run`, but allows users to pass in a `RunOptions` proto and to retrieve non-Tensor metadata output via a `RunMetadata` proto for this step. `run_metadata` may be nullptr, in which case any metadata output is discarded. NOTE: This API is still experimental and may change.
-#### `virtual tensorflow::Session::~Session()` {#virtual_tensorflow_Session_Session}
+#### `virtual Status tensorflow::Session::PRunSetup(const std::vector< string > &input_names, const std::vector< string > &output_names, const std::vector< string > &target_nodes, string *handle)` {#virtual_Status_tensorflow_Session_PRunSetup}
+
+Sets up a graph for partial execution. All future feeds and fetches are specified by `input_names` and `output_names`. Returns `handle` that can be used to perform a sequence of partial feeds and fetches. NOTE: This API is still experimental and may change.
+
+
+
+#### `virtual Status tensorflow::Session::PRun(const string &handle, const std::vector< std::pair< string, Tensor > > &inputs, const std::vector< string > &output_names, std::vector< Tensor > *outputs)` {#virtual_Status_tensorflow_Session_PRun}
+Continues the pending execution specified by `handle` with the provided input tensors and fills `outputs` for the endpoints specified in `output_names`. NOTE: This API is still experimental and may change.
+
+
+#### `virtual Status tensorflow::Session::Close()=0` {#virtual_Status_tensorflow_Session_Close}
+
+Closes this session.
+
+Closing a session releases the resources used by this session on the TensorFlow runtime (specified during session creation by the ` SessionOptions::target ` field).
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensor.md b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
index 2708244c61..f1236cd02e 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensor.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
@@ -14,19 +14,19 @@ Default Tensor constructor. Creates a 1-dimension, 0-element float tensor.
#### `tensorflow::Tensor::Tensor(DataType type, const TensorShape &shape)` {#tensorflow_Tensor_Tensor}
-Creates a Tensor of the given `type` and `shape`.
+Creates a Tensor of the given `type` and `shape`. If LogMemory::IsEnabled() the allocation is logged as coming from an unknown kernel and step. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
The underlying buffer is allocated using a ` CPUAllocator `.
#### `tensorflow::Tensor::Tensor(Allocator *a, DataType type, const TensorShape &shape)` {#tensorflow_Tensor_Tensor}
-Creates a tensor with the input `type` and `shape`, using the allocator `a` to allocate the underlying buffer.
+Creates a tensor with the input `type` and `shape`, using the allocator `a` to allocate the underlying buffer. If LogMemory::IsEnabled() the allocation is logged as coming from an unknown kernel and step. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
`a` must outlive the lifetime of this Tensor .
#### `tensorflow::Tensor::Tensor(Allocator *a, DataType type, const TensorShape &shape, const AllocationAttributes &allocation_attr)` {#tensorflow_Tensor_Tensor}
-Creates a tensor with the input `type` and `shape`, using the allocator `a` and the specified "allocation_attr" to allocate the underlying buffer.
+Creates a tensor with the input `type` and `shape`, using the allocator `a` and the specified "allocation_attr" to allocate the underlying buffer. If the kernel and step are known allocation_attr.allocation_will_be_logged should be set to true and LogMemory::RecordTensorAllocation should be called after the tensor is constructed. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
`a` must outlive the lifetime of this Tensor .
@@ -168,15 +168,7 @@ Use these methods when you know the data type and the number of dimensions of th
Example:
-```c++ typedef float T;
-Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);
-auto mat = my_mat.matrix<T>(); // 2D Eigen::Tensor, 3 x 5.
-auto mat = my_mat.tensor<T, 2>(); // 2D Eigen::Tensor, 3 x 5.
-auto vec = my_mat.vec<T>(); // CHECK fails as my_mat is 2D.
-auto vec = my_mat.tensor<T, 3>(); // CHECK fails as my_mat is 2D.
-auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
-
-```
+{c++} typedef float T; Tensor my_mat(...built with Shape{rows: 3, cols: 5}...); auto mat = my_mat.matrix<T>(); // 2D Eigen::Tensor, 3 x 5. auto mat = my_mat.tensor<T, 2>(); // 2D Eigen::Tensor, 3 x 5. auto vec = my_mat.vec<T>(); // CHECK fails as my_mat is 2D. auto vec = my_mat.tensor<T, 3>(); // CHECK fails as my_mat is 2D. auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
#### `TTypes<T>::Matrix tensorflow::Tensor::matrix()` {#TTypes_T_Matrix_tensorflow_Tensor_matrix}
@@ -190,7 +182,7 @@ auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
-#### `TTypes<T>::Flat tensorflow::Tensor::flat()` {#TTypes_T_Flat_tensorflow_Tensor_flat}
+#### `TTypes< T >::Flat tensorflow::Tensor::flat()` {#TTypes_T_Flat_tensorflow_Tensor_flat}
Return the tensor data as an `Eigen::Tensor` of the data type and a specified shape.
@@ -198,22 +190,7 @@ These methods allow you to access the data with the dimensions and sizes of your
Example:
-```c++ typedef float T;
-Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);
-// 1D Eigen::Tensor, size 60:
-auto flat = my_ten.flat<T>();
-// 2D Eigen::Tensor 12 x 5:
-auto inner = my_ten.flat_inner_dims<T>();
-// 2D Eigen::Tensor 4 x 15:
-auto outer = my_ten.shaped<T, 2>({4, 15});
-// CHECK fails, bad num elements:
-auto outer = my_ten.shaped<T, 2>({4, 8});
-// 3D Eigen::Tensor 6 x 5 x 2:
-auto weird = my_ten.shaped<T, 3>({6, 5, 2});
-// CHECK fails, type mismatch:
-auto bad = my_ten.flat<int32>();
-
-```
+{c++} typedef float T; Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...); // 1D Eigen::Tensor, size 60: auto flat = my_ten.flat<T>(); // 2D Eigen::Tensor 12 x 5: auto inner = my_ten.flat_inner_dims<T>(); // 2D Eigen::Tensor 4 x 15: auto outer = my_ten.shaped<T, 2>({4, 15}); // CHECK fails, bad num elements: auto outer = my_ten.shaped<T, 2>({4, 8}); // 3D Eigen::Tensor 6 x 5 x 2: auto weird = my_ten.shaped<T, 3>({6, 5, 2}); // CHECK fails, type mismatch: auto bad = my_ten.flat<int32>();
#### `TTypes<T>::UnalignedFlat tensorflow::Tensor::unaligned_flat()` {#TTypes_T_UnalignedFlat_tensorflow_Tensor_unaligned_flat}
@@ -269,7 +246,7 @@ Const versions of all the methods above.
-#### `TTypes<T>::ConstFlat tensorflow::Tensor::flat() const` {#TTypes_T_ConstFlat_tensorflow_Tensor_flat}
+#### `TTypes< T >::ConstFlat tensorflow::Tensor::flat() const` {#TTypes_T_ConstFlat_tensorflow_Tensor_flat}
@@ -287,7 +264,7 @@ Const versions of all the methods above.
-#### `TTypes<T>::ConstMatrix tensorflow::Tensor::flat_outer_dims() const` {#TTypes_T_ConstMatrix_tensorflow_Tensor_flat_outer_dims}
+#### `TTypes< T >::ConstMatrix tensorflow::Tensor::flat_outer_dims() const` {#TTypes_T_ConstMatrix_tensorflow_Tensor_flat_outer_dims}
@@ -337,7 +314,7 @@ The returned ` StringPiece ` may point to memory location on devices that the CP
NOTE: The underlying tensor buffer is refcounted, so the lifetime of the contents mapped by the ` StringPiece ` matches the lifetime of the buffer; callers should arrange to make sure the buffer does not get destroyed while the ` StringPiece ` is still used.
-REQUIRES: `DataTypeCanUseMemcpy( dtype() )`.
+REQUIRES: `DataTypeCanUseMemcpy(dtype())`.
#### `void tensorflow::Tensor::UnsafeCopyFromInternal(const Tensor &, const TensorShape &)` {#void_tensorflow_Tensor_UnsafeCopyFromInternal}
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
index 19d0ec14d7..d0be205c3b 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
@@ -192,6 +192,12 @@ Returns `true` iff `proto` is a valid tensor shape.
Returns `OK` iff `proto` is a valid tensor shape, and a descriptive error status otherwise.
+#### `static constexpr int tensorflow::TensorShape::MaxDimensions()` {#static_constexpr_int_tensorflow_TensorShape_MaxDimensions}
+
+
+
+
+
#### `string tensorflow::TensorShape::DebugString(const TensorShapeProto &proto)` {#string_tensorflow_TensorShape_DebugString}
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
index 93f1230315..6010dd48b7 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
@@ -36,13 +36,19 @@ Static helper routines for ` TensorShape `. Includes a few common predicates on
-#### `static Status tensorflow::TensorShapeUtils::MakeShape(const T *dims, int n, TensorShape *out)` {#static_Status_tensorflow_TensorShapeUtils_MakeShape}
+#### `static Status tensorflow::TensorShapeUtils::MakeShape(const int32 *dims, int n, TensorShape *out)` {#static_Status_tensorflow_TensorShapeUtils_MakeShape}
Returns a ` TensorShape ` whose dimensions are `dims[0]`, `dims[1]`, ..., `dims[n-1]`.
-#### `static string tensorflow::TensorShapeUtils::ShapeListString(const gtl::ArraySlice< TensorShape > &shapes)` {#static_string_tensorflow_TensorShapeUtils_ShapeListString}
+#### `static Status tensorflow::TensorShapeUtils::MakeShape(const int64 *dims, int n, TensorShape *out)` {#static_Status_tensorflow_TensorShapeUtils_MakeShape}
+
+
+
+
+
+#### `string tensorflow::TensorShapeUtils::ShapeListString(const gtl::ArraySlice< TensorShape > &shapes)` {#string_tensorflow_TensorShapeUtils_ShapeListString}
diff --git a/tensorflow/g3doc/api_docs/cc/StructTF_Buffer.md b/tensorflow/g3doc/api_docs/cc/StructTF_Buffer.md
index 3f6ffa349c..c435db8029 100644
--- a/tensorflow/g3doc/api_docs/cc/StructTF_Buffer.md
+++ b/tensorflow/g3doc/api_docs/cc/StructTF_Buffer.md
@@ -17,3 +17,9 @@
+
+#### `void(* TF_Buffer::data_deallocator) (void *data, size_t length))(void *data, size_t length)` {#void_TF_Buffer_data_deallocator_void_data_size_t_length_}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
index 119154e7da..c3a4244556 100644
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -1607,9 +1607,9 @@ Batch normalization.
As described in http://arxiv.org/abs/1502.03167.
Normalizes a tensor by `mean` and `variance`, and applies (optionally) a
-`scale` \\(\gamma\\) to it, as well as an `offest` \\(eta\\):
+`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):
-\\( rac{\gamma(x-\mu)}{\sigma}+eta\\)
+\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)
`mean`, `variance`, `offset` and `scale` are all expected to be of one of two
shapes:
@@ -1636,7 +1636,7 @@ shapes:
* <b>`x`</b>: Input `Tensor` of arbitrary dimensionality.
* <b>`mean`</b>: A mean `Tensor`.
* <b>`variance`</b>: A variance `Tensor`.
-* <b>`offset`</b>: An offset `Tensor`, often denoted \\(eta\\) in equations, or
+* <b>`offset`</b>: An offset `Tensor`, often denoted \\(\beta\\) in equations, or
None. If present, will be added to the normalized tensor.
* <b>`scale`</b>: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or
`None`. If present, the scale is applied to the normalized tensor.
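
For reference, here is the corrected formula written out with elementary ops. This is only a sketch: the toy tensor values and the epsilon are made up for illustration, and it does not claim to match any particular `tf.nn` entry point.

```python
import tensorflow as tf

# Made-up batch of 4 examples with 3 features each.
x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0],
                 [10.0, 11.0, 12.0]])
mean, variance = tf.nn.moments(x, [0])   # per-feature statistics
scale = tf.constant([1.0, 1.0, 1.0])     # gamma
offset = tf.constant([0.0, 0.0, 0.0])    # beta
epsilon = 1e-3                           # illustrative value for numerical stability

# \frac{\gamma(x-\mu)}{\sigma}+\beta, with epsilon added under the square root.
normalized = scale * (x - mean) / tf.sqrt(variance + epsilon) + offset

with tf.Session() as sess:
    print(sess.run(normalized))
```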
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
index 4f19e8b2c3..180dc07a77 100644
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -43,7 +43,7 @@ w = tf.Variable(<initial-value>, name=<optional-name>)
y = tf.matmul(w, ...another variable or tensor...)
# The overloaded operators are available too.
-z = tf.sigmoid(w + b)
+z = tf.sigmoid(w + y)
# Assign a new value to the variable with `assign()` or a related method.
w.assign(w + 1.0)
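
As a small aside on the snippet above, a minimal sketch (with arbitrary values) showing that `assign()` only builds an op; the variable changes when that op is run:

```python
import tensorflow as tf

w = tf.Variable(1.0, name="w")
update = w.assign(w + 1.0)   # builds the assignment op; nothing happens yet

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(w))        # 1.0
    print(sess.run(update))   # 2.0 -- the assignment takes effect here
    print(sess.run(w))        # 2.0
```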
diff --git a/tensorflow/g3doc/get_started/index.md b/tensorflow/g3doc/get_started/index.md
index e7e6d204a8..a0e563b18b 100644
--- a/tensorflow/g3doc/get_started/index.md
+++ b/tensorflow/g3doc/get_started/index.md
@@ -77,3 +77,4 @@ TensorFlow features.
* [Download and Setup](../get_started/os_setup.md)
* [Basic Usage](../get_started/basic_usage.md)
* [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md)
+* [Tinker with a neural network in your browser](http://playground.tensorflow.org)
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index 3323210b83..18da3bbfe5 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -53,28 +53,28 @@ Install TensorFlow:
```bash
# Ubuntu/Linux 64-bit, CPU only:
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
```
For python3:
```bash
# Ubuntu/Linux 64-bit, CPU only:
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp34-none-linux_x86_64.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.1-cp34-none-linux_x86_64.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp35-none-any.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py3-none-any.whl
```
NOTE: If you are upgrading from a previous installation of TensorFlow < 0.7.1,
@@ -126,13 +126,13 @@ $ source ~/tensorflow/bin/activate.csh # If using csh
(tensorflow)$ # Your prompt should change
# Ubuntu/Linux 64-bit, CPU only:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
```
and again for python3:
@@ -143,13 +143,13 @@ $ source ~/tensorflow/bin/activate.csh # If using csh
(tensorflow)$ # Your prompt should change
# Ubuntu/Linux 64-bit, CPU only:
-(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp34-none-linux_x86_64.whl
+(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled:
-(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.1-cp34-none-linux_x86_64.whl
+(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Mac OS X, CPU only:
-(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp35-none-any.whl
+(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py3-none-any.whl
```
With the Virtualenv environment activated, you can now
@@ -184,14 +184,14 @@ packages on your machine.
We provide 4 Docker images:
-* `b.gcr.io/tensorflow/tensorflow`: TensorFlow CPU binary image.
-* `b.gcr.io/tensorflow/tensorflow:latest-devel`: CPU Binary image plus source
+* `gcr.io/tensorflow/tensorflow`: TensorFlow CPU binary image.
+* `gcr.io/tensorflow/tensorflow:latest-devel`: CPU Binary image plus source
code.
-* `b.gcr.io/tensorflow/tensorflow:latest-gpu`: TensorFlow GPU binary image.
-* `b.gcr.io/tensorflow/tensorflow:latest-devel-gpu`: GPU Binary image plus source
+* `gcr.io/tensorflow/tensorflow:latest-gpu`: TensorFlow GPU binary image.
+* `gcr.io/tensorflow/tensorflow:latest-devel-gpu`: GPU Binary image plus source
code.
-We also have tags with `latest` replaced by a released version (e.g., `0.7.1-gpu`).
+We also have tags with `latest` replaced by a released version (e.g., `0.8.0rc0-gpu`).
With Docker the installation is as follows:
@@ -209,7 +209,7 @@ After Docker is installed, launch a Docker container with the TensorFlow binary
image as follows.
```bash
-$ docker run -it b.gcr.io/tensorflow/tensorflow
+$ docker run -it gcr.io/tensorflow/tensorflow
```
If you're using a container with GPU support, some additional flags must be
@@ -219,7 +219,7 @@ include a
in the repo with these flags, so the command-line would look like
```bash
-$ path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh b.gcr.io/tensorflow/tensorflow:gpu
+$ path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh gcr.io/tensorflow/tensorflow:gpu
```
You can now [test your installation](#test-the-tensorflow-installation) within the Docker container.
@@ -517,7 +517,7 @@ $ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_pack
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
-$ pip install /tmp/tensorflow_pkg/tensorflow-0.7.1-py2-none-linux_x86_64.whl
+$ pip install /tmp/tensorflow_pkg/tensorflow-0.8.0rc0-py2-none-linux_x86_64.whl
```
## Setting up TensorFlow for Development
diff --git a/tensorflow/g3doc/how_tos/adding_an_op/index.md b/tensorflow/g3doc/how_tos/adding_an_op/index.md
index eca9a66d56..98e82037a5 100644
--- a/tensorflow/g3doc/how_tos/adding_an_op/index.md
+++ b/tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -492,7 +492,7 @@ REGISTER\_OP("ZeroOut")
Your Op registration now specifies that the input's type must be `float`, or
`int32`, and that its output will be the same type, since both have type `T`.
-> A note on naming:{#naming} Inputs, outputs, and attrs generally should be
+> <a id="naming"></a>A note on naming: Inputs, outputs, and attrs generally should be
> given snake\_case names. The one exception is attrs that are used as the type
> of an input or in the type of an input. Those attrs can be inferred when the
> op is added to the graph and so don't appear in the op's function. For
diff --git a/tensorflow/g3doc/how_tos/distributed/index.md b/tensorflow/g3doc/how_tos/distributed/index.md
index b8ed91d8c7..037596605a 100644
--- a/tensorflow/g3doc/how_tos/distributed/index.md
+++ b/tensorflow/g3doc/how_tos/distributed/index.md
@@ -213,12 +213,12 @@ def main(_):
if FLAGS.job_name == "ps":
server.join()
elif FLAGS.job_name == "worker":
-
+
# Assigns ops to the local worker by default.
with tf.device(tf.train.replica_device_setter(
worker_device="/job:worker/task:%d" % FLAGS.task_index,
cluster=cluster)):
-
+
# Build model...
loss = ...
global_step = tf.Variable(0)
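
To give the surrounding context some shape, a hedged sketch of the skeleton this hunk sits in; the localhost addresses and the task index are placeholders, and the model itself is elided:

```python
import tensorflow as tf

# Placeholder cluster: one parameter server and two workers on localhost.
cluster = tf.train.ClusterSpec({"ps": ["localhost:2222"],
                                "worker": ["localhost:2223", "localhost:2224"]})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Pin variables to the ps job and ops to this worker, as in the snippet above.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
  global_step = tf.Variable(0, name="global_step", trainable=False)
  # ... build model, loss and train_op here ...
```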
diff --git a/tensorflow/g3doc/resources/roadmap.md b/tensorflow/g3doc/resources/roadmap.md
index 406932d132..be7ac5e778 100644
--- a/tensorflow/g3doc/resources/roadmap.md
+++ b/tensorflow/g3doc/resources/roadmap.md
@@ -1,5 +1,5 @@
# Roadmap
-**Last updated: January 13, 2016**
+**Last updated: April 12, 2016**
TensorFlow is a fast moving project. In order for the community to better
understand what the near future will bring, this document shares what we are
@@ -17,7 +17,6 @@ we do not have timelines for these features.
### Making TensorFlow easier to use
* Higher level APIs (for instance, layers)
-* Saving everything to run a graph
### Performance
* Speed and memory benchmarks
@@ -26,16 +25,12 @@ we do not have timelines for these features.
### Core Features
* Repeated partial graph evaluation ([#672](https://github.com/tensorflow/tensorflow/issues/672))
-
### Platforms
* iOS support ([#16](https://github.com/tensorflow/tensorflow/issues/16))
* OpenCL support ([#22](https://github.com/tensorflow/tensorflow/issues/22))
-* Distributed execution
- ([#23](https://github.com/tensorflow/tensorflow/issues/23))
+* Windows support ([#17](https://github.com/tensorflow/tensorflow/issues/17))
+* MacOS GPU support
### Community
-* Improvements to Jenkins: automated tests for all supported configurations
-* Open-source the doc generator and publish docs style guide
-* TensorFlow Models repository (partially,
- [#6](https://github.com/tensorflow/tensorflow/issues/6))
-
+* Integration with other machine learning frameworks
+* Better installation support; support for package managers
diff --git a/tensorflow/g3doc/tutorials/deep_cnn/index.md b/tensorflow/g3doc/tutorials/deep_cnn/index.md
index 57722ed18a..e9d44d29ae 100644
--- a/tensorflow/g3doc/tutorials/deep_cnn/index.md
+++ b/tensorflow/g3doc/tutorials/deep_cnn/index.md
@@ -15,8 +15,9 @@ by Alex Krizhevsky.
### Goals
-The goal of this tutorial is to build a relatively small convolutional neural
-network (CNN) for recognizing images. In the process, this tutorial:
+The goal of this tutorial is to build a relatively small [convolutional neural
+network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN) for
+recognizing images. In the process, this tutorial:
1. Highlights a canonical organization for network architecture,
training and evaluation.
@@ -32,10 +33,16 @@ The CIFAR-10 tutorial demonstrates several important constructs for
designing larger and more sophisticated models in TensorFlow:
* Core mathematical components including [convolution](
-../../api_docs/python/nn.md#conv2d), [rectified linear activations](
-../../api_docs/python/nn.md#relu), [max pooling](
-../../api_docs/python/nn.md#max_pool) and [local response normalization](
-../../api_docs/python/nn.md#local_response_normalization).
+../../api_docs/python/nn.md#conv2d) ([wiki](
+https://en.wikipedia.org/wiki/Convolution)), [rectified linear activations](
+../../api_docs/python/nn.md#relu) ([wiki](
+https://en.wikipedia.org/wiki/Rectifier_(neural_networks))), [max pooling](
+../../api_docs/python/nn.md#max_pool) ([wiki](
+https://en.wikipedia.org/wiki/Convolutional_neural_network#Pooling_layer))
+and [local response normalization](
+../../api_docs/python/nn.md#local_response_normalization)
+(Chapter 3.3 in [AlexNet paper](
+http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)).
* [Visualization](../../how_tos/summaries_and_tensorboard/index.md)
of network activities during training, including input images,
losses and distributions of activations and gradients.
@@ -50,7 +57,8 @@ that systematically decrements over time.
for input
data to isolate the model from disk latency and expensive image pre-processing.
-We also provide a multi-GPU version of the model which demonstrates:
+We also provide a [multi-GPU version](#training-a-model-using-multiple-gpu-cards)
+of the model which demonstrates:
* Configuring a model to train across multiple GPU cards in parallel.
* Sharing and updating variables among multiple GPUs.
@@ -129,8 +137,8 @@ artificially increase the data set size:
Please see the [Images](../../api_docs/python/image.md) page for the list of
available distortions. We also attach an
[`image_summary`](../../api_docs/python/train.md#image_summary) to the images
-so that we may visualize them in TensorBoard. This is a good practice to verify
-that inputs are built correctly.
+so that we may visualize them in [TensorBoard](../../how_tos/summaries_and_tensorboard/index.md).
+This is a good practice to verify that inputs are built correctly.
<div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:70%" src="../../images/cifar_image_summary.png">
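
Putting the core components named in this file's changes together (convolution, rectified linear activation, max pooling, local response normalization), a minimal sketch; the shapes and hyperparameters below are illustrative, not the tutorial's actual values:

```python
import tensorflow as tf

# Illustrative shapes only: a fake batch of 8 CIFAR-sized images (24x24x3).
images = tf.random_normal([8, 24, 24, 3])
kernel = tf.Variable(tf.truncated_normal([5, 5, 3, 64], stddev=5e-2))

conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')
relu = tf.nn.relu(conv)                                   # rectified linear activation
pool = tf.nn.max_pool(relu, ksize=[1, 3, 3, 1],
                      strides=[1, 2, 2, 1], padding='SAME')
norm = tf.nn.local_response_normalization(pool, depth_radius=4, bias=1.0,
                                          alpha=0.001 / 9.0, beta=0.75)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(norm).shape)   # (8, 12, 12, 64)
```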
diff --git a/tensorflow/g3doc/tutorials/index.md b/tensorflow/g3doc/tutorials/index.md
index d5c2a2e472..6ee12fe264 100644
--- a/tensorflow/g3doc/tutorials/index.md
+++ b/tensorflow/g3doc/tutorials/index.md
@@ -22,7 +22,7 @@ TensorFlow.
## TensorFlow Mechanics 101
This is a technical tutorial, where we walk you through the details of using
-TensorFlow infrastructure to train models at scale. We use again MNIST as the
+TensorFlow infrastructure to train models at scale. We again use MNIST as the
example.
[View Tutorial](../tutorials/mnist/tf/index.md)
@@ -115,5 +115,3 @@ version of the [Deep Dream](https://github.com/google/deepdream) neural network
visual hallucination software.
[View Tutorial](https://www.tensorflow.org/code/tensorflow/examples/tutorials/deepdream/deepdream.ipynb)
-
-
diff --git a/tensorflow/g3doc/tutorials/mandelbrot/index.md b/tensorflow/g3doc/tutorials/mandelbrot/index.md
index f4aa8fe6ca..6b1c070791 100755
--- a/tensorflow/g3doc/tutorials/mandelbrot/index.md
+++ b/tensorflow/g3doc/tutorials/mandelbrot/index.md
@@ -1,10 +1,11 @@
# Mandelbrot Set
-Visualizing the Mandelbrot set doesn't have anything to do with machine
-learning, but it makes for a fun example of how one can use TensorFlow for
-general mathematics. This is actually a pretty naive implementation of the
-visualization, but it makes the point. (We may end up providing a more
-elaborate implementation down the line to produce more truly beautiful images.)
+Visualizing the [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set)
+doesn't have anything to do with machine learning, but it makes for a fun
+example of how one can use TensorFlow for general mathematics. This is
+actually a pretty naive implementation of the visualization, but it makes the
+point. (We may end up providing a more elaborate implementation down the line
+to produce more truly beautiful images.)
Note: This tutorial was originally prepared as an IPython notebook.
diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index f9dcce1057..5d099c4bf2 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -311,18 +311,14 @@ y_ = tf.placeholder(tf.float32, [None, 10])
Then we can implement the cross-entropy, \\(-\sum y'\log(y)\\):
```python
-cross_entropy = -tf.reduce_sum(y_*tf.log(y))
+cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
```
First, `tf.log` computes the logarithm of each element of `y`. Next, we multiply
-each element of `y_` with the corresponding element of `tf.log(y)`. Finally,
-`tf.reduce_sum` adds all the elements of the tensor.
-
-Note that this isn't just the cross-entropy of the truth with a single
-prediction, but the sum of the cross-entropies for all the images we looked at.
-In this example, we have 100 images in each batch: how well we are doing on 100
-data points is a much better description of how good our model is than a single
-data point.
+each element of `y_` with the corresponding element of `tf.log(y)`. Then
+`tf.reduce_sum` adds the elements in the second dimension of y, due to the
+`reduction_indices=[1]` parameter. Finally, `tf.reduce_mean` computes the mean
+over all the examples in the batch.
Now that we know what we want our model to do, it's very easy to have TensorFlow
train it to do so.
@@ -334,11 +330,11 @@ minimize. Then it can apply your choice of optimization algorithm to modify the
variables and reduce the cost.
```python
-train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
+train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```
In this case, we ask TensorFlow to minimize `cross_entropy` using the gradient
-descent algorithm with a learning rate of 0.01. Gradient descent is a simple
+descent algorithm with a learning rate of 0.5. Gradient descent is a simple
procedure, where TensorFlow simply shifts each variable a little bit in the
direction that reduces the cost. But TensorFlow also provides
[many other optimization algorithms]
@@ -415,7 +411,7 @@ Finally, we ask for our accuracy on our test data.
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
```
-This should be about 91%.
+This should be about 92%.
Is that good? Well, not really. In fact, it's pretty bad. This is because we're
using a very simple model. With some small changes, we can get to
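
To make the reworked cross-entropy concrete, a small self-contained sketch with made-up labels and predictions, showing what `reduction_indices=[1]` and `tf.reduce_mean` each contribute:

```python
import tensorflow as tf

# Two hypothetical examples: one-hot labels y_ and predicted probabilities y.
y_ = tf.constant([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0]])
y = tf.constant([[0.1, 0.8, 0.1],
                 [0.3, 0.5, 0.2]])

per_example = -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])  # sum over classes
cross_entropy = tf.reduce_mean(per_example)                          # mean over the batch

with tf.Session() as sess:
    print(sess.run(per_example))    # ~[0.223, 1.204], i.e. -log(0.8) and -log(0.3)
    print(sess.run(cross_entropy))  # ~0.713
```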
diff --git a/tensorflow/g3doc/tutorials/mnist/pros/index.md b/tensorflow/g3doc/tutorials/mnist/pros/index.md
index 9f92ebb4e8..73cc87eb57 100644
--- a/tensorflow/g3doc/tutorials/mnist/pros/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/pros/index.md
@@ -157,11 +157,11 @@ easily. Our cost function will be the cross-entropy between the target and the
model's prediction.
```python
-cross_entropy = -tf.reduce_sum(y_*tf.log(y))
+cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
```
-Note that `tf.reduce_sum` sums across all images in the minibatch, as well as
-all classes. We are computing the cross entropy for the entire minibatch.
+Note that `tf.reduce_sum` sums across all classes and `tf.reduce_mean` takes
+the average over these sums.
## Train the Model
@@ -174,10 +174,10 @@ TensorFlow has a variety of
[builtin optimization algorithms]
(../../../api_docs/python/train.md#optimizers).
For this example, we will use steepest gradient descent, with a step length of
-0.01, to descend the cross entropy.
+0.5, to descend the cross entropy.
```python
-train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
+train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```
What TensorFlow actually did in that single line was to add new operations to
@@ -224,7 +224,7 @@ accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
Finally, we can evaluate our accuracy on the test data. This should be about
-91% correct.
+92% correct.
```python
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
@@ -335,12 +335,13 @@ h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#### Dropout
-To reduce overfitting, we will apply dropout before the readout layer.
+To reduce overfitting, we will apply [dropout](
+https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) before the readout layer.
We create a `placeholder` for the probability that a neuron's output is kept
during dropout. This allows us to turn dropout on during training, and turn it
off during testing.
TensorFlow's `tf.nn.dropout` op automatically handles scaling neuron outputs in
-addition to masking them, so dropout just works without any additional scaling.
+addition to masking them, so dropout just works without any additional scaling.<sup id="a1">[1](#f1)</sup>
```python
keep_prob = tf.placeholder(tf.float32)
@@ -370,7 +371,7 @@ additional parameter `keep_prob` in `feed_dict` to control the dropout rate;
and we will add logging to every 100th iteration in the training process.
```python
-cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
+cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
@@ -391,3 +392,5 @@ The final test set accuracy after running this code should be approximately 99.2
We have learned how to quickly and easily build, train, and evaluate a
fairly sophisticated deep learning model using TensorFlow.
+
+<b id="f1">1</b>: For this small convolutional network, performance is actually nearly identical with and without dropout. Dropout is often very effective at reducing overfitting, but it is most useful when training very large neural networks. [↩](#a1)
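
A minimal sketch of the `keep_prob` mechanism described above, with arbitrary input values: dropout is active when `keep_prob < 1` and becomes a no-op at `keep_prob = 1.0`.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0, 4.0]])
keep_prob = tf.placeholder(tf.float32)

# tf.nn.dropout zeroes each element with probability (1 - keep_prob) and
# scales the survivors by 1 / keep_prob, so no extra scaling is needed.
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))  # roughly half zeroed, rest doubled
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))  # identical to x: dropout off for testing
```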
diff --git a/tensorflow/g3doc/tutorials/mnist/tf/index.md b/tensorflow/g3doc/tutorials/mnist/tf/index.md
index 42c52e1cbd..9d83393dc0 100644
--- a/tensorflow/g3doc/tutorials/mnist/tf/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/tf/index.md
@@ -23,7 +23,9 @@ File | Purpose
Simply run the `fully_connected_feed.py` file directly to start training:
-`python fully_connected_feed.py`
+```bash
+python fully_connected_feed.py
+```
## Prepare the Data
@@ -67,7 +69,7 @@ rest of the graph and into which the actual training examples will be fed.
```python
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
- IMAGE_PIXELS))
+ mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
```
@@ -99,14 +101,14 @@ The `inference()` function builds the graph as far as needed to
return the tensor that would contain the output predictions.
It takes the images placeholder as input and builds on top
-of it a pair of fully connected layers with ReLu activation followed by a ten
+of it a pair of fully connected layers with [ReLu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation followed by a ten
node linear layer specifying the output logits.
Each layer is created beneath a unique [`tf.name_scope`](../../../api_docs/python/framework.md#name_scope)
that acts as a prefix to the items created within that scope.
```python
-with tf.name_scope('hidden1') as scope:
+with tf.name_scope('hidden1'):
```
Within the defined scope, the weights and biases to be used by each of these
@@ -167,27 +169,12 @@ Finally, the `logits` tensor that will contain the output is returned.
The `loss()` function further builds the graph by adding the required loss
ops.
-First, the values from the `labels_placeholder` are encoded as a tensor of 1-hot
-values. For example, if the class identifier is '3' the value is converted to:
-<br>`[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]`
-
-```python
-batch_size = tf.size(labels)
-labels = tf.expand_dims(labels, 1)
-indices = tf.expand_dims(tf.range(0, batch_size, 1), 1)
-concated = tf.concat(1, [indices, labels])
-onehot_labels = tf.sparse_to_dense(
- concated, tf.pack([batch_size, NUM_CLASSES]), 1.0, 0.0)
-```
-
-A [`tf.nn.softmax_cross_entropy_with_logits`](../../../api_docs/python/nn.md#softmax_cross_entropy_with_logits)
-op is then added to compare the output logits from the `inference()` function
-and the 1-hot labels.
+First, the values from the `labels_placeholder` are converted to 64-bit integers. Then, a [`tf.nn.sparse_softmax_cross_entropy_with_logits`](../../../api_docs/python/nn.md#sparse_softmax_cross_entropy_with_logits) op is added to automatically produce 1-hot labels from the `labels_placeholder` and compare the output logits from the `inference()` function with those 1-hot labels.
```python
-cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits,
- onehot_labels,
- name='xentropy')
+labels = tf.to_int64(labels)
+cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits, labels, name='xentropy')
```
It then uses [`tf.reduce_mean`](../../../api_docs/python/math_ops.md#reduce_mean)
@@ -208,7 +195,7 @@ And the tensor that will then contain the loss value is returned.
### Training
The `training()` function adds the operations needed to minimize the loss via
-gradient descent.
+[Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent).
Firstly, it takes the loss tensor from the `loss()` function and hands it to a
[`tf.scalar_summary`](../../../api_docs/python/train.md#scalar_summary),
@@ -224,7 +211,7 @@ Next, we instantiate a [`tf.train.GradientDescentOptimizer`](../../../api_docs/p
responsible for applying gradients with the requested learning rate.
```python
-optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
+optimizer = tf.train.GradientDescentOptimizer(learning_rate)
```
We then generate a single variable to contain a counter for the global
@@ -306,7 +293,7 @@ The user code controls the training per step, and the simplest loop that
can do useful training is:
```python
-for step in xrange(max_steps):
+for step in xrange(FLAGS.max_steps):
sess.run(train_op)
```
@@ -324,7 +311,8 @@ In the `fill_feed_dict()` function, the given `DataSet` is queried for its next
filled containing the next images and labels.
```python
-images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size)
+images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
+ FLAGS.fake_data)
```
A python dictionary object is then generated with the placeholders as keys and
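A sketch of the resulting helper, assuming it does no more than pair the placeholders with the freshly fetched batch:

```python
# Sketch of fill_feed_dict(): fetch the next batch and pair each placeholder
# with its batch of data; the returned dict is passed to sess.run().
def fill_feed_dict(data_set, images_pl, labels_pl):
    images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
                                                   FLAGS.fake_data)
    return {images_pl: images_feed, labels_pl: labels_feed}
```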
@@ -385,8 +373,7 @@ may be instantiated to write the events files, which
contain both the graph itself and the values of the summaries.
```python
-summary_writer = tf.train.SummaryWriter(FLAGS.train_dir,
- graph_def=sess.graph_def)
+summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
```
Lastly, the events file will be updated with new summary values every time the
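For orientation, a sketch of how those summary values are typically produced and written with the 0.x summary API (the merged-summary pattern is an assumption here, not quoted from the tutorial):

```python
# Sketch: merge every summary op into one, evaluate it during training, and
# append the serialized result to the events file at the current step.
summary_op = tf.merge_all_summaries()
summary_str = sess.run(summary_op, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
summary_writer.flush()
```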
@@ -465,14 +452,6 @@ do_eval(sess,
### Build the Eval Graph
-Before opening the default Graph, the test data should have been fetched by
-calling the `get_data(train=False)` function with the parameter set to grab
-the test dataset.
-
-```python
-test_all_images, test_all_labels = get_data(train=False)
-```
-
Before entering the training loop, the Eval op should have been built
by calling the `evaluation()` function from `mnist.py` with the same
logits/labels parameters as the `loss()` function.
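A sketch of that call, together with a plausible `evaluation()` body built on `tf.nn.in_top_k` (the actual implementation lives in `mnist.py`; this version is illustrative only):

```python
# A plausible body for mnist.evaluation(), shown only for illustration:
def evaluation(logits, labels):
    # True for each example whose label lands in the top k=1 logits.
    correct = tf.nn.in_top_k(logits, labels, 1)
    return tf.reduce_sum(tf.cast(correct, tf.int32))

# Build the Eval op once, before the training loop starts.
eval_correct = mnist.evaluation(logits, labels_placeholder)
```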
@@ -508,7 +487,7 @@ The `true_count` variable simply accumulates all of the predictions that the
calculated from simply dividing by the total number of examples.
```python
-precision = float(true_count) / float(num_examples)
-print ' Num examples: %d Num correct: %d Precision @ 1: %0.02f' % (
- num_examples, true_count, precision)
+precision = true_count / num_examples
+print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
+ (num_examples, true_count, precision))
```
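For completeness, a sketch of the surrounding accumulation loop, assuming true division (Python 3 or `from __future__ import division`) and the helper names used earlier in this tutorial:

```python
# Sketch of the do_eval() loop: run the Eval op over one epoch of data,
# accumulate the number of correct predictions, then compute precision.
true_count = 0
steps_per_epoch = data_set.num_examples // FLAGS.batch_size
num_examples = steps_per_epoch * FLAGS.batch_size
for step in xrange(steps_per_epoch):
    feed_dict = fill_feed_dict(data_set, images_placeholder, labels_placeholder)
    true_count += sess.run(eval_correct, feed_dict=feed_dict)
precision = true_count / num_examples
print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
      (num_examples, true_count, precision))
```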
diff --git a/tensorflow/g3doc/tutorials/pdes/index.md b/tensorflow/g3doc/tutorials/pdes/index.md
index d23038ffb5..ca24034759 100755
--- a/tensorflow/g3doc/tutorials/pdes/index.md
+++ b/tensorflow/g3doc/tutorials/pdes/index.md
@@ -2,8 +2,9 @@
TensorFlow isn't just for machine learning. Here we give a (somewhat
pedestrian) example of using TensorFlow for simulating the behavior of a
-partial differential equation. We'll simulate the surface of square pond as a
-few raindrops land on it.
+[partial differential equation](
+https://en.wikipedia.org/wiki/Partial_differential_equation).
+We'll simulate the surface of a square pond as a few raindrops land on it.
Note: This tutorial was originally prepared as an IPython notebook.
diff --git a/tensorflow/g3doc/tutorials/recurrent/index.md b/tensorflow/g3doc/tutorials/recurrent/index.md
index 6c9d9804ac..b5afc18659 100644
--- a/tensorflow/g3doc/tutorials/recurrent/index.md
+++ b/tensorflow/g3doc/tutorials/recurrent/index.md
@@ -12,9 +12,9 @@ In this tutorial we will show how to train a recurrent neural network on
a challenging task of language modeling. The goal of the problem is to fit a
probabilistic model which assigns probabilities to sentences. It does so by
predicting next words in a text given a history of previous words. For this
-purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular
-benchmark for measuring quality of these models, whilst being small and
-relatively fast to train.
+purpose we will use the [Penn Tree Bank](http://www.cis.upenn.edu/~treebank/)
+(PTB) dataset, which is a popular benchmark for measuring the quality of these
+models, whilst being small and relatively fast to train.
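Concretely, such a model factorizes the probability of a sentence into a product of next-word predictions via the chain rule; this identity is stated here for orientation and is not quoted from the tutorial:

$$P(w_1, \ldots, w_N) = \prod_{t=1}^{N} P(w_t | w_1, \ldots, w_{t-1})$$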
Language modeling is key to many interesting problems such as speech
recognition, machine translation, or image captioning. It is also fun, too --
@@ -172,9 +172,10 @@ final_state = state
## Run the Code
We are assuming you have already installed via the pip package, have cloned the
-tensorflow git repository, and are in the root of the git tree. (If building
-from source, build the `tensorflow/models/rnn/ptb:ptb_word_lm` target using
-bazel).
+tensorflow git repository, and are in the root of the git tree. (If [building
+from source](
+https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md#installing-from-sources), build the `tensorflow/models/rnn/ptb:ptb_word_lm` target using
+[bazel](https://github.com/bazelbuild/bazel)).
Next:
```
diff --git a/tensorflow/g3doc/tutorials/word2vec/index.md b/tensorflow/g3doc/tutorials/word2vec/index.md
index 48fb18641f..9a8c4eb84e 100644
--- a/tensorflow/g3doc/tutorials/word2vec/index.md
+++ b/tensorflow/g3doc/tutorials/word2vec/index.md
@@ -78,7 +78,7 @@ model).
Word2vec is a particularly computationally-efficient predictive model for
learning word embeddings from raw text. It comes in two flavors, the Continuous
-Bag-of-Words model (CBOW) and the Skip-Gram model. Algorithmically, these
+Bag-of-Words model (CBOW) and the Skip-Gram model (Sections 3.1 and 3.2 in [Mikolov et al.](http://arxiv.org/pdf/1301.3781.pdf)). Algorithmically, these
models are similar, except that CBOW predicts target words (e.g. 'mat') from
source context words ('the cat sits on the'), while the skip-gram does the
inverse and predicts source context-words from the target words. This inversion
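To make the pairing concrete, here is a small illustrative sketch (not part of the tutorial code) that generates skip-gram style `(target, context)` pairs with a window of one word on each side:

```python
# Illustrative only: generate (target, context) pairs for the skip-gram model
# from a toy sentence, using a window of one word on each side.
sentence = "the quick brown fox jumped over the lazy dog".split()
window = 1
pairs = []
for i, target in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((target, sentence[j]))
print(pairs[:4])
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ('brown', 'quick')]
```

For the CBOW flavor, the roles are simply swapped: the context words become the inputs and the target word becomes the prediction.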
@@ -108,7 +108,8 @@ $$
where \\(\text{score}(w\_t, h)\\) computes the compatibility of word \\(w\_t\\)
with the context \\(h\\) (a dot product is commonly used). We train this model
-by maximizing its log-likelihood on the training set, i.e. by maximizing
+by maximizing its [log-likelihood](https://en.wikipedia.org/wiki/Likelihood_function)
+on the training set, i.e. by maximizing
$$
\begin{align}
@@ -129,8 +130,8 @@ context \\(h\\), *at every training step*.
On the other hand, for feature learning in word2vec we do not need a full
probabilistic model. The CBOW and skip-gram models are instead trained using a
-binary classification objective (logistic regression) to discriminate the real
-target words \\(w_t\\) from \\(k\\) imaginary (noise) words \\(\tilde w\\), in the
+binary classification objective ([logistic regression](https://en.wikipedia.org/wiki/Logistic_regression))
+to discriminate the real target words \\(w_t\\) from \\(k\\) imaginary (noise) words \\(\tilde w\\), in the
same context. We illustrate this below for a CBOW model. For skip-gram the
direction is simply inverted.
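Under this binary objective, the probability that a (word, context) pair was drawn from the real data is typically modeled with the logistic function; in the formula below, \\(u\_w\\) and \\(v\_h\\) denote output and input embedding vectors (our notation, not the tutorial's):

$$Q_\theta(D=1 | w, h) = \sigma(u_w^\top v_h) = \frac{1}{1 + \exp(-u_w^\top v_h)}$$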
@@ -207,7 +208,7 @@ loss for this pair of observed and noisy examples, i.e. the objective at time
step \\(t\\) becomes
$$J^{(t)}_\text{NEG} = \log Q_\theta(D=1 | \text{the, quick}) +
- \log(Q_\theta(D=0 | \text{sheep, quick}))$$.
+ \log(Q_\theta(D=0 | \text{sheep, quick}))$$
The goal is to make an update to the embedding parameters \\(\theta\\) to improve
(in this case, maximize) this objective function. We do this by deriving the