-rw-r--r--  tensorflow/g3doc/api_docs/python/ops.md              11
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md         6
-rw-r--r--  tensorflow/g3doc/get_started/basic_usage.md           7
-rw-r--r--  tensorflow/g3doc/get_started/os_setup.md             21
-rw-r--r--  tensorflow/g3doc/how_tos/variables/index.md          84
-rwxr-xr-x  tensorflow/g3doc/tutorials/mandelbrot/index.md       12
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/beginners/index.md   9
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/tf/index.md          4
-rwxr-xr-x  tensorflow/g3doc/tutorials/pdes/index.md             30
-rw-r--r--  tensorflow/g3doc/tutorials/word2vec/index.md         16
-rw-r--r--  tensorflow/python/kernel_tests/logging_ops_test.py    5
-rw-r--r--  tensorflow/python/ops/logging_ops.py                  6
-rw-r--r--  tensorflow/python/training/saver.py                   6
-rw-r--r--  tensorflow/tools/pip_package/setup.py                 3
14 files changed, 110 insertions, 110 deletions
diff --git a/tensorflow/g3doc/api_docs/python/ops.md b/tensorflow/g3doc/api_docs/python/ops.md
deleted file mode 100644
index 0206f315f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/ops.md
+++ /dev/null
@@ -1,11 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Leftovers, should be empty and removed <a class="md-anchor" id="AUTOGENERATED-leftovers--should-be-empty-and-removed"></a>
-<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
-## Contents
-### [Leftovers, should be empty and removed](#AUTOGENERATED-leftovers--should-be-empty-and-removed)
-
-
-<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-
-
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
index 5e8e96c000..2c6ca5e7aa 100644
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -540,9 +540,9 @@ You number checkpoint filenames by passing a value to the optional
`global_step` argument to `save()`:
```python
-saver.save('my-model', global_step=0) ==> filename: 'my-model-0'
+saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
...
-saver.save('my-model', global_step=1000) ==> filename: 'my-model-1000'
+saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
```
Additionally, optional arguments to the `Saver()` constructor let you control
@@ -676,7 +676,7 @@ path can be passed directly to a call to `restore()`.
##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
-* <b>sess</b>: A Session to use to save the variables..
+* <b>sess</b>: A Session to use to save the variables.
* <b>save_path</b>: string. Path to the checkpoint filename. If the saver is
`sharded`, this is the prefix of the sharded checkpoint filename.
* <b>global_step</b>: If provided the global step number is appended to
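A minimal sketch of the corrected `save()` signature in context; the graph, loop, and checkpoint path below are illustrative assumptions, not part of the patch:

```python
import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name="w")
init_op = tf.initialize_all_variables()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init_op)
    for step in range(1, 1001):
        # ... run a training op here ...
        if step % 500 == 0:
            # global_step appends the step number to the filename:
            # my-model-500, then my-model-1000.
            path = saver.save(sess, "/tmp/my-model", global_step=step)
            print "Saved checkpoint:", path
```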
diff --git a/tensorflow/g3doc/get_started/basic_usage.md b/tensorflow/g3doc/get_started/basic_usage.md
index 7616c3f7ea..c29f6a4179 100644
--- a/tensorflow/g3doc/get_started/basic_usage.md
+++ b/tensorflow/g3doc/get_started/basic_usage.md
@@ -286,7 +286,8 @@ with tf.Session() as sess:
```
A `placeholder()` operation generates an error if you do not supply a feed for
-it. See the [MNIST fully-connected feed
-tutorial](../tutorials/mnist/fully_connected_feed.py) for a larger-scale
-example of feeds.
+it. See the
+[MNIST fully-connected feed tutorial](../tutorials/mnist/tf/index.md)
+([source code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py))
+for a larger-scale example of feeds.
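For readers following the corrected link, the feed mechanism itself fits in a few lines; this toy graph is assumed for illustration:

```python
import tensorflow as tf

# A placeholder() op must be fed at run time or it raises an error.
x = tf.placeholder(tf.float32, shape=[2])
y = x * 2.0

with tf.Session() as sess:
    print sess.run(y, feed_dict={x: [1.0, 3.0]})  # ==> [ 2.  6.]
    # sess.run(y) with no feed for x would fail.
```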
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index 4db07c233b..39ed870556 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -2,8 +2,13 @@
## Binary Installation <a class="md-anchor" id="AUTOGENERATED-binary-installation"></a>
+The TensorFlow Python API requires Python 2.7.
+
### Ubuntu/Linux <a class="md-anchor" id="AUTOGENERATED-ubuntu-linux"></a>
+**Note**: All the virtualenv-related instructions are optional, but we recommend
+using the virtualenv on any multi-user system.
+
Make sure you have [pip](https://pypi.python.org/pypi/pip), the python headers,
and (optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:
@@ -11,10 +16,7 @@ and (optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed
$ sudo apt-get install python-pip python-dev python-virtualenv
```
-**Note**: All the virtualenv-related instructions are optional, but we recommend
-using the virtualenv on any multi-user system.
-
-Set up a new virtualenv environment. Assuming you want to set it up in the
+Set up a new virtualenv environment. To set it up in the
directory `~/tensorflow`, run:
```bash
@@ -39,18 +41,19 @@ Inside the virtualenv, install TensorFlow:
# For GPU-enabled version (only install this version if you have the CUDA sdk installed)
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
+# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
```
### Mac OS X <a class="md-anchor" id="AUTOGENERATED-mac-os-x"></a>
-Make sure you have [pip](https://pypi.python.org/pypi/pip) and
-(optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:
-
**Note**: All the virtualenv-related instructions are optional, but we recommend
using the virtualenv on any multi-user system.
+Make sure you have [pip](https://pypi.python.org/pypi/pip) and
+(optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:
+
If using `easy_install`:
```bash
@@ -78,6 +81,8 @@ Install TensorFlow (only CPU binary version is currently available).
```bash
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
+
+# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
```
@@ -184,7 +189,7 @@ Add the executable `output/bazel` to your `$PATH` environment variable.
$ sudo apt-get install python-numpy swig python-dev
```
-#### <a name="install_cuda"></a>Optional: Install CUDA (GPUs on Linux) <a class="md-anchor" id="AUTOGENERATED--a-name--install_cuda----a-optional--install-cuda--gpus-on-linux-"></a>
+#### Optional: Install CUDA (GPUs on Linux) <a class="md-anchor" id="install_cuda"></a>
In order to build or run TensorFlow with GPU support, both Cuda Toolkit 7.0 and
CUDNN 6.5 V2 from NVIDIA need to be installed.
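After either install path, a quick sanity check (the constant and its text are arbitrary) confirms that the package imports and a session runs:

```python
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print sess.run(hello)  # ==> Hello, TensorFlow!
```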
diff --git a/tensorflow/g3doc/how_tos/variables/index.md b/tensorflow/g3doc/how_tos/variables/index.md
index 65e80e8c00..f98f7e97f4 100644
--- a/tensorflow/g3doc/how_tos/variables/index.md
+++ b/tensorflow/g3doc/how_tos/variables/index.md
@@ -1,26 +1,26 @@
# Variables: Creation, Initialization, Saving, and Loading <a class="md-anchor" id="AUTOGENERATED-variables--creation--initialization--saving--and-loading"></a>
-When you train a model, you use [Variables](../../api_docs/python/state_ops.md)
+When you train a model, you use [variables](../../api_docs/python/state_ops.md)
to hold and update parameters. Variables are in-memory buffers containing
-tensors. They need to be explicitly initialized and can be saved to disk during
+tensors. They must be explicitly initialized and can be saved to disk during
and after training. You can later restore saved values to exercise or analyse
the model.
This document references the following TensorFlow classes. Follow the links to
their reference manual for a complete description of their API:
-* The `Variable` class [tf.Variable](../../api_docs/python/state_ops.md#Variable).
-* The `Saver` class [tf.train.Saver](../../api_docs/python/state_ops.md#Saver).
+* The [`tf.Variable`](../../api_docs/python/state_ops.md#Variable) class.
+* The [`tf.train.Saver`](../../api_docs/python/state_ops.md#Saver) class.
## Creation <a class="md-anchor" id="AUTOGENERATED-creation"></a>
When you create a [Variable](../../api_docs/python/state_ops.md) you pass a
`Tensor` as its initial value to the `Variable()` constructor. TensorFlow
-provides a collection of Ops that produce tensors often used for initialization
+provides a collection of ops that produce tensors often used for initialization
from [constants or random values](../../api_docs/python/constant_op.md).
-Note that all these Ops require you to specify the shape of the tensors. That
+Note that all these ops require you to specify the shape of the tensors. That
shape automatically becomes the shape of the variable. Variables generally
have a fixed shape, but TensorFlow provides advanced mechanisms to reshape
variables.
@@ -32,12 +32,12 @@ weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35),
biases = tf.Variable(tf.zeros([200]), name="biases")
```
-Calling `tf.Variable()` adds a few Ops to the graph:
+Calling `tf.Variable()` adds several ops to the graph:
-* A `variable` Op that holds the variable value.
-* An initializer Op that sets the variable to its initial value. This is
- actually a `tf.assign` Op.
-* The Ops for the initial value, such as the `zeros` Op for the `biases`
+* A `variable` op that holds the variable value.
+* An initializer op that sets the variable to its initial value. This is
+ actually a `tf.assign` op.
+* The ops for the initial value, such as the `zeros` op for the `biases`
variable in the example are also added to the graph.
The value returned by `tf.Variable()` value is an instance of the Python class
@@ -45,15 +45,15 @@ The value returned by `tf.Variable()` value is an instance of the Python class
## Initialization <a class="md-anchor" id="AUTOGENERATED-initialization"></a>
-Variable initializers must be run explicitly before other Ops in your model can
-be run. The easiest way to do that is to add an Op that runs all the variable
-initializers, and run that Op before using the model.
+Variable initializers must be run explicitly before other ops in your model can
+be run. The easiest way to do that is to add an op that runs all the variable
+initializers, and run that op before using the model.
You can alternatively restore variable values from a checkpoint file, see
below.
-Use `tf.initialize_all_variables()` to add an Op to run variable initializers.
-Only run that Op after you have fully constructed your model and launched it in
+Use `tf.initialize_all_variables()` to add an op to run variable initializers.
+Only run that op after you have fully constructed your model and launched it in
a session.
```python
@@ -62,13 +62,13 @@ weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35),
name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")
...
-# Add an Op to initialize the variables.
+# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()
# Later, when launching the model
with tf.Session() as sess:
# Run the init operation.
- sess.Run(init_op)
+ sess.run(init_op)
...
# Use the model
...
@@ -77,7 +77,7 @@ with tf.Session() as sess:
### Initialization from another Variable <a class="md-anchor" id="AUTOGENERATED-initialization-from-another-variable"></a>
You sometimes need to initialize a variable from the initial value of another
-variable. As the Op added by `tf.initialize_all_variables()` initializes all
+variable. As the op added by `tf.initialize_all_variables()` initializes all
variables in parallel you have to be careful when this is needed.
To initialize a new variable from the value of another variable use the other
@@ -98,7 +98,7 @@ w_twice = tf.Variable(weights.initialized_value() * 0.2, name="w_twice")
### Custom Initialization <a class="md-anchor" id="AUTOGENERATED-custom-initialization"></a>
-The convenience function `tf.initialize_all_variables()` adds an Op to
+The convenience function `tf.initialize_all_variables()` adds an op to
initialize *all variables* in the model. You can also pass it an explicit list
of variables to initialize. See the
[Variables Documentation](../../api_docs/python/state_ops.md) for more options,
@@ -106,19 +106,21 @@ including checking if variables are initialized.
## Saving and Restoring <a class="md-anchor" id="AUTOGENERATED-saving-and-restoring"></a>
-The easiest way to save and restore a model is to use a `tf.train.Saver`
-object. The constructor adds `save` and `restore` Ops to the graph for all, or
-a specified list, of variables. The saver object provides methods to run these
-Ops, specifying paths for the checkpoint files to write to or read from.
+The easiest way to save and restore a model is to use a `tf.train.Saver` object.
+The constructor adds `save` and `restore` ops to the graph for all, or a
+specified list, of the variables in the graph. The saver object provides
+methods to run these ops, specifying paths for the checkpoint files to write to
+or read from.
### Checkpoint Files <a class="md-anchor" id="AUTOGENERATED-checkpoint-files"></a>
-Variables are saved in binary files that, roughly, contains a map from variable
-names to tensors.
+Variables are saved in binary files that, roughly, contain a map from variable
+names to tensor values.
-When you create a `Saver` object, you can optionally chose names for the
-variables in the checkpoint files. By default, it uses the names passed to the
-`tf.Variable()` call.
+When you create a `Saver` object, you can optionally choose names for the
+variables in the checkpoint files. By default, it uses the value of the
+[`Variable.name`](../../api_docs/python/state_ops.md#Variable.name) property for
+each variable.
### Saving Variables <a class="md-anchor" id="AUTOGENERATED-saving-variables"></a>
@@ -130,20 +132,20 @@ the model.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
-# Add an Op to initialize the variables.
+# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()
-# Add Ops to save and restore all the variables.
+# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
- sess.Run(init_op)
+ sess.run(init_op)
# Do some work with the model.
..
# Save the variables to disk.
- save_path = saver.Save(sess, "/tmp/model.ckpt")
+ save_path = saver.save(sess, "/tmp/model.ckpt")
print "Model saved in file: ", save_path
```
@@ -157,23 +159,23 @@ restore variables from a file you do not have to initialize them beforehand.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
-# Add Ops to save and restore all the variables.
+# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
# Restore variables from disk.
- saver.Restore(sess, "/tmp/model.ckpt")
+ saver.restore(sess, "/tmp/model.ckpt")
print "Model restored."
# Do some work with the model
...
```
-### Chosing which Variables to Save and Restore <a class="md-anchor" id="AUTOGENERATED-chosing-which-variables-to-save-and-restore"></a>
+### Choosing which Variables to Save and Restore <a class="md-anchor" id="AUTOGENERATED-choosing-which-variables-to-save-and-restore"></a>
-If you do not pass any argument to `tf.train.Saver()` the saver
-handles all variables. Each one of them is saved under the name that was
+If you do not pass any argument to `tf.train.Saver()` the saver handles all
+variables in the graph. Each one of them is saved under the name that was
passed when the variable was created.
It is sometimes useful to explicitly specify names for variables in the
@@ -196,10 +198,10 @@ Notes:
* You can create as many saver objects as you want if you need to save and
restore different subsets of the model variables. The same variable can be
listed in multiple saver objects, its value is only changed when the saver
- `Restore()` method is run.
+ `restore()` method is run.
* If you only restore a subset of the model variables at the start
- of a session, you have to run an initialize Op for the other variables. See
+ of a session, you have to run an initialize op for the other variables. See
[`tf.initialize_variables()`](../../api_docs/python/state_ops.md#initialize_variables)
for more information.
@@ -208,7 +210,7 @@ Notes:
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
-# Add Ops to save and restore only 'v2' using the name "my_v2"
+# Add ops to save and restore only 'v2' using the name "my_v2"
saver = tf.train.Saver({"my_v2": v2})
# Use the saver object normally after that.
...
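Putting the renamed methods together, a sketch of restoring only `v2` from its checkpoint while initializing `v1` separately; the shapes and path are assumptions for illustration, and the checkpoint must already exist:

```python
import tensorflow as tf

v1 = tf.Variable(tf.zeros([3]), name="v1")
v2 = tf.Variable(tf.zeros([3]), name="v2")

# Saver that handles only v2, stored under the name "my_v2".
saver = tf.train.Saver({"my_v2": v2})

with tf.Session() as sess:
    # v2 comes from the checkpoint; v1 still needs an initializer.
    sess.run(tf.initialize_variables([v1]))
    saver.restore(sess, "/tmp/model.ckpt")
    # Both variables are now ready to use.
```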
diff --git a/tensorflow/g3doc/tutorials/mandelbrot/index.md b/tensorflow/g3doc/tutorials/mandelbrot/index.md
index 4c3a399407..f1ce882a3e 100755
--- a/tensorflow/g3doc/tutorials/mandelbrot/index.md
+++ b/tensorflow/g3doc/tutorials/mandelbrot/index.md
@@ -6,18 +6,18 @@ general mathematics. This is actually a pretty naive implementation of the
visualization, but it makes the point. (We may end up providing a more
elaborate implementation down the line to produce more truly beautiful images.)
-Note: This tutorial was originally prepared as an iPython notebook.
+Note: This tutorial was originally prepared as an IPython notebook.
## Basic Setup <a class="md-anchor" id="AUTOGENERATED-basic-setup"></a>
We'll need a few imports to get started.
```python
-#Import libraries for simulation
+# Import libraries for simulation
import tensorflow as tf
import numpy as np
-#Imports for visualization
+# Imports for visualization
import PIL.Image
from cStringIO import StringIO
from IPython.display import clear_output, Image, display
@@ -45,7 +45,7 @@ def DisplayFractal(a, fmt='jpeg'):
## Session and Variable Initialization <a class="md-anchor" id="AUTOGENERATED-session-and-variable-initialization"></a>
-For playing around like this, we often us an interactive session, but a regular
+For playing around like this, we often use an interactive session, but a regular
session would work as well.
```python
@@ -61,7 +61,7 @@ Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]
Z = X+1j*Y
```
-Now we define and initialize.
+Now we define and initialize TensorFlow tensors.
```python
xs = tf.constant(Z.astype("complex64"))
@@ -72,7 +72,7 @@ ns = tf.Variable(tf.zeros_like(xs, "float32"))
TensorFlow requires that you explicitly initialize variables before using them.
```python
-tf.InitializeAllVariables().run()
+tf.initialize_all_variables().run()
```
## Defining and Running the Computation <a class="md-anchor" id="AUTOGENERATED-defining-and-running-the-computation"></a>
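The interactive-session idiom the hunk above corrects can be seen in isolation; this fragment is a hedged sketch, not tutorial code:

```python
import tensorflow as tf

sess = tf.InteractiveSession()

ns = tf.Variable(tf.zeros([4], "float32"))
# With an InteractiveSession installed as the default, ops can be run
# without naming the session explicitly:
tf.initialize_all_variables().run()
print ns.eval()  # ==> [ 0.  0.  0.  0.]
```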
diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index 34bc11aa26..40de38438b 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -34,10 +34,11 @@ work through the code.
## The MNIST Data <a class="md-anchor" id="AUTOGENERATED-the-mnist-data"></a>
The MNIST data is hosted on
-[Yann LeCun's website](http://yann.lecun.com/exdb/mnist/).
-For your convenience, we've included some python code to download and install
-the data automatically. You can either download [the code](../input_data.py) and
-import it as below, or simply copy and paste it in.
+[Yann LeCun's website](http://yann.lecun.com/exdb/mnist/). For your
+convenience, we've included some python code to download and install the data
+automatically. You can either download
+[the code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/input_data.py)
+and import it as below, or simply copy and paste it in.
```python
import input_data
diff --git a/tensorflow/g3doc/tutorials/mnist/tf/index.md b/tensorflow/g3doc/tutorials/mnist/tf/index.md
index c1fc07e373..7323a49557 100644
--- a/tensorflow/g3doc/tutorials/mnist/tf/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/tf/index.md
@@ -18,8 +18,8 @@ This tutorial references the following files:
File | Purpose
--- | ---
-[`mnist.py`](../mnist.py) | The code to build a fully-connected MNIST model.
-[`fully_connected_feed.py`](../fully_connected_feed.py) | The main code, to train the built MNIST model against the downloaded dataset using a feed dictionary.
+[`mnist.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/mnist.py) | The code to build a fully-connected MNIST model.
+[`fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py) | The main code, to train the built MNIST model against the downloaded dataset using a feed dictionary.
Simply run the `fully_connected_feed.py` file directly to start training:
diff --git a/tensorflow/g3doc/tutorials/pdes/index.md b/tensorflow/g3doc/tutorials/pdes/index.md
index 6866fd6f9a..5dbb758da6 100755
--- a/tensorflow/g3doc/tutorials/pdes/index.md
+++ b/tensorflow/g3doc/tutorials/pdes/index.md
@@ -5,7 +5,7 @@ pedestrian) example of using TensorFlow for simulating the behavior of a
partial differential equation. We'll simulate the surface of square pond as a
few raindrops land on it.
-Note: This tutorial was originally prepared as an iPython notebook.
+Note: This tutorial was originally prepared as an IPython notebook.
## Basic Setup <a class="md-anchor" id="AUTOGENERATED-basic-setup"></a>
@@ -34,7 +34,7 @@ def DisplayArray(a, fmt='jpeg', rng=[0,1]):
display(Image(data=f.getvalue()))
```
-Here we start an interactive TensorFlow session for convience in playing
+Here we start an interactive TensorFlow session for convenience in playing
around. A regular session would work as well if we were doing this in an
executable .py file.
@@ -99,24 +99,24 @@ Now let's specify the details of the differential equation.
```python
-# paramaters
+# Parameters:
# eps -- time resolution
# damping -- wave damping
-eps = tf.placeholder('float', shape=())
-damping = tf.placeholder('float', shape=())
+eps = tf.placeholder(tf.float32, shape=())
+damping = tf.placeholder(tf.float32, shape=())
-# create variables for simulation state
+# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
-# discretized PDE update rules
-U_ = U + eps*Ut
-Ut_ = Ut + eps*(laplace(U) - damping*Ut)
+# Discretized PDE update rules
+U_ = U + eps * Ut
+Ut_ = Ut + eps * (laplace(U) - damping * Ut)
-# operation to update the state
+# Operation to update the state
step = tf.group(
- U.Assign(U_),
- Ut.Assign(Ut_) )
+ U.assign(U_),
+ Ut.assign(Ut_))
```
## Run The Simulation <a class="md-anchor" id="AUTOGENERATED-run-the-simulation"></a>
@@ -124,13 +124,13 @@ step = tf.group(
This is where it gets fun -- running time forward with a simple for loop.
```python
-# initialize state to initial conditions
-tf.InitializeAllVariables().Run()
+# Initialize state to initial conditions
+tf.initialize_all_variables().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
- step.Run({eps: 0.03, damping: 0.04})
+ step.run({eps: 0.03, damping: 0.04})
# Visualize every 50 steps
if i % 50 == 0:
clear_output()
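The `assign`/`tf.group` pattern fixed above reduces to a toy sketch (the variables and increments are made up):

```python
import tensorflow as tf

a = tf.Variable(0.0)
b = tf.Variable(0.0)

# tf.group bundles several ops into one; running it runs them all.
step = tf.group(
    a.assign(a + 1.0),
    b.assign(b + 2.0))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(step)
    print sess.run([a, b])  # ==> [1.0, 2.0]
```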
diff --git a/tensorflow/g3doc/tutorials/word2vec/index.md b/tensorflow/g3doc/tutorials/word2vec/index.md
index f3ae416d43..4ebae23f63 100644
--- a/tensorflow/g3doc/tutorials/word2vec/index.md
+++ b/tensorflow/g3doc/tutorials/word2vec/index.md
@@ -17,12 +17,12 @@ represent words as vectors.
* We also show a simple implementation of the model in TensorFlow.
* Finally, we look at ways to make the naive version scale better.
-We walk through the code later during the tutorial, but if you'd prefer to
-dive straight in, feel free to look at the minimalistic implementation in
-[tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py](./word2vec_basic.py)
-This basic example contains the code needed to download some data, train on it
-a bit and visualize the result. Once you get
-comfortable with reading and running the basic version, you can graduate to
+We walk through the code later during the tutorial, but if you'd prefer to dive
+straight in, feel free to look at the minimalistic implementation in
+[tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py)
+This basic example contains the code needed to download some data, train on it a
+bit and visualize the result. Once you get comfortable with reading and running
+the basic version, you can graduate to
[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py)
which is a more serious implementation that showcases some more advanced
TensorFlow principles about how to efficiently use threads to move data into a
@@ -269,8 +269,8 @@ nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
Now that we have the parameters in place, we can define our skip-gram model
graph. For simplicity, let's suppose we've already integerized our text corpus
with a vocabulary so that each word is represented as an integer (see
-[tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py](./word2vec_basic.py) for
-the details). The skip-gram model takes two inputs. One is a batch full of
+[tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py)
+for the details). The skip-gram model takes two inputs. One is a batch full of
integers representing the source context words, the other is for the target
words. Let's create placeholder nodes for these inputs, so that we can feed in
data later.
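The placeholder nodes mentioned at the end of that hunk look roughly like this in the basic implementation; `batch_size` is an assumed value:

```python
import tensorflow as tf

batch_size = 128  # illustrative

# Integer word ids: a batch of source context words and target words.
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
```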
diff --git a/tensorflow/python/kernel_tests/logging_ops_test.py b/tensorflow/python/kernel_tests/logging_ops_test.py
index 18ca441b23..50e7422878 100644
--- a/tensorflow/python/kernel_tests/logging_ops_test.py
+++ b/tensorflow/python/kernel_tests/logging_ops_test.py
@@ -33,6 +33,11 @@ class LoggingOpsTest(tf.test.TestCase):
class PrintGradientTest(tf.test.TestCase):
+ def testPrintShape(self):
+ inp = tf.constant(2.0, shape=[100, 32])
+ inp_printed = tf.Print(inp, [inp])
+ self.assertEqual(inp.get_shape(), inp_printed.get_shape())
+
def testPrintGradient(self):
with self.test_session():
inp = tf.constant(2.0, shape=[100, 32], name="in")
diff --git a/tensorflow/python/ops/logging_ops.py b/tensorflow/python/ops/logging_ops.py
index 0fad4a2dde..daf208da9e 100644
--- a/tensorflow/python/ops/logging_ops.py
+++ b/tensorflow/python/ops/logging_ops.py
@@ -52,7 +52,5 @@ def _PrintGrad(op, *grad):
return list(grad) + [None] * (len(op.inputs) - 1)
-# NOTE(mrry): Assert and Print produce an empty output, which is
-# presumably never read.
-ops.RegisterShape("Assert")(common_shapes.unknown_shape)
-ops.RegisterShape("Print")(common_shapes.unknown_shape)
+ops.RegisterShape("Assert")(common_shapes.no_outputs)
+ops.RegisterShape("Print")(common_shapes.unchanged_shape)
diff --git a/tensorflow/python/training/saver.py b/tensorflow/python/training/saver.py
index 8d78615ffb..1ef1313eea 100644
--- a/tensorflow/python/training/saver.py
+++ b/tensorflow/python/training/saver.py
@@ -539,9 +539,9 @@ class Saver(object):
`global_step` argument to `save()`:
```python
- saver.save('my-model', global_step=0) ==> filename: 'my-model-0'
+ saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
...
- saver.save('my-model', global_step=1000) ==> filename: 'my-model-1000'
+ saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
```
Additionally, optional arguments to the `Saver()` constructor let you control
@@ -806,7 +806,7 @@ class Saver(object):
path can be passed directly to a call to `restore()`.
Args:
- sess: A Session to use to save the variables..
+ sess: A Session to use to save the variables.
save_path: string. Path to the checkpoint filename. If the saver is
`sharded`, this is the prefix of the sharded checkpoint filename.
global_step: If provided the global step number is appended to
diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py
index e7f9ecdc71..55db7ce6bd 100644
--- a/tensorflow/tools/pip_package/setup.py
+++ b/tensorflow/tools/pip_package/setup.py
@@ -6,9 +6,8 @@ from setuptools.dist import Distribution
_VERSION = '0.5.0'
REQUIRED_PACKAGES = [
- 'numpy',
+ 'numpy >= 1.10.1',
'six >= 1.10.0',
- 'virtualenvwrapper',
]
# pylint: disable=line-too-long