 tensorflow/contrib/learn/python/learn/README.md | 24 ++++++++++++------------
 tensorflow/contrib/makefile/README.md           |  4 ++--
 tensorflow/g3doc/tutorials/mnist/pros/index.md  |  4 ++--
 3 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/tensorflow/contrib/learn/python/learn/README.md b/tensorflow/contrib/learn/python/learn/README.md
index f474eb4e54..2016f53a8a 100644
--- a/tensorflow/contrib/learn/python/learn/README.md
+++ b/tensorflow/contrib/learn/python/learn/README.md
@@ -59,8 +59,8 @@ Simple linear classification:
from sklearn import datasets, metrics
iris = datasets.load_iris()
-classifier = learn.TensorFlowLinearClassifier(n_classes=3)
-classifier.fit(iris.data, iris.target)
+classifier = learn.LinearClassifier(n_classes=3)
+classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
score = metrics.accuracy_score(iris.target, classifier.predict(iris.data))
print("Accuracy: %f" % score)
```
@@ -74,8 +74,8 @@ from sklearn import datasets, metrics, preprocessing
boston = datasets.load_boston()
x = preprocessing.StandardScaler().fit_transform(boston.data)
-regressor = learn.TensorFlowLinearRegressor()
-regressor.fit(x, boston.target)
+regressor = learn.LinearRegressor()
+regressor.fit(x, boston.target, steps=200, batch_size=32)
score = metrics.mean_squared_error(regressor.predict(x), boston.target)
print ("MSE: %f" % score)
```
@@ -88,15 +88,15 @@ Example of 3 layer network with 10, 20 and 10 hidden units respectively:
from sklearn import datasets, metrics
iris = datasets.load_iris()
-classifier = learn.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
-classifier.fit(iris.data, iris.target)
+classifier = learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
+classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
score = metrics.accuracy_score(iris.target, classifier.predict(iris.data))
print("Accuracy: %f" % score)
```
## Custom model
-Example of how to pass a custom model to the TensorFlowEstimator:
+Example of how to pass a custom model to the Estimator:
```python
from sklearn import datasets, metrics
@@ -108,7 +108,7 @@ def my_model(x, y):
layers = learn.ops.dnn(x, [10, 20, 10], dropout=0.5)
return learn.models.logistic_regression(layers, y)
-classifier = learn.TensorFlowEstimator(model_fn=my_model, n_classes=3)
+classifier = learn.Estimator(model_fn=my_model, n_classes=3)
classifier.fit(iris.data, iris.target)
score = metrics.accuracy_score(iris.target, classifier.predict(iris.data))
print("Accuracy: %f" % score)
@@ -116,16 +116,16 @@ print("Accuracy: %f" % score)
## Saving / Restoring models
-Each estimator has a ``save`` method which takes folder path where all model information will be saved. For restoring you can just call ``learn.TensorFlowEstimator.restore(path)`` and it will return object of your class.
+Each estimator has a ``save`` method which takes a folder path where all model information will be saved. To restore a model, call ``learn.Estimator.restore(path)`` and it will return an object of your class.
Some example code:
```python
-classifier = learn.TensorFlowLinearRegression()
+classifier = learn.LinearRegressor()
classifier.fit(...)
classifier.save('/tmp/tf_examples/my_model_1/')
-new_classifier = TensorFlowEstimator.restore('/tmp/tf_examples/my_model_2')
+new_classifier = Estimator.restore('/tmp/tf_examples/my_model_2')
new_classifier.predict(...)
```
@@ -134,7 +134,7 @@ new_classifier.predict(...)
To get nice visualizations and summaries you can use the ``logdir`` parameter on ``fit``. It will start writing summaries for ``loss`` and histograms for variables in your model. You can also add custom summaries in your custom model function by calling ``tf.summary`` and passing Tensors to report.
```python
-classifier = learn.TensorFlowLinearRegression()
+classifier = learn.LinearRegressor()
classifier.fit(x, y, logdir='/tmp/tf_examples/my_model_1/')
```
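Once the estimator is writing summaries, they can be viewed with TensorBoard; a minimal sketch, assuming the same ``logdir`` as in the snippet above:
```bash
# Point TensorBoard at the directory the estimator wrote summaries to,
# then open the URL it prints (http://localhost:6006 by default).
tensorboard --logdir=/tmp/tf_examples/my_model_1/
```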
diff --git a/tensorflow/contrib/makefile/README.md b/tensorflow/contrib/makefile/README.md
index ebaacdfcd9..200515c181 100644
--- a/tensorflow/contrib/makefile/README.md
+++ b/tensorflow/contrib/makefile/README.md
@@ -61,7 +61,7 @@ On Ubuntu, you can do this:
```bash
sudo apt-get install autoconf automake libtool curl make g++ unzip
pushd .
-cd tensforflow/contrib/makefile/downloads/protobuf
+cd tensorflow/contrib/makefile/downloads/protobuf
./autogen.sh
./configure
make
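# A typical continuation of a protobuf source build (assumed here; these
# commands are not part of the hunk above): install the freshly built
# protobuf, refresh the linker cache, and return to the original directory.
sudo make install
sudo ldconfig
popd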
@@ -104,7 +104,7 @@ tensorflow/contrib/makefile/gen/bin/benchmark \
## Android
First, you will need to download and unzip the
-[Native Development Kit (NDK)](http://developers.google.com/ndk). You will not
+[Native Development Kit (NDK)](https://developer.android.com/ndk/). You will not
need to install the standalone toolchain, however.
Assign your NDK location to $NDK_ROOT:
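For example (a sketch; the exact path depends on where the NDK archive was unpacked):
```bash
# Replace this path with the directory the NDK was extracted to.
export NDK_ROOT=/absolute/path/to/android-ndk
```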
diff --git a/tensorflow/g3doc/tutorials/mnist/pros/index.md b/tensorflow/g3doc/tutorials/mnist/pros/index.md
index 12de1df66c..324a29c02e 100644
--- a/tensorflow/g3doc/tutorials/mnist/pros/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/pros/index.md
@@ -232,7 +232,7 @@ print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
## Build a Multilayer Convolutional Network
-Getting 91% accuracy on MNIST is bad. It's almost embarrassingly bad. In this
+Getting 92% accuracy on MNIST is bad. It's almost embarrassingly bad. In this
section, we'll fix that, jumping from a very simple model to something
moderately sophisticated: a small convolutional neural network. This will get us
to around 99.2% accuracy -- not state of the art, but respectable.
@@ -243,7 +243,7 @@ To create this model, we're going to need to create a lot of weights and biases.
One should generally initialize weights with a small amount of noise for
symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons,
it is also good practice to initialize them with a slightly positive initial
-bias to avoid "dead neurons." Instead of doing this repeatedly while we build
+bias to avoid "dead neurons". Instead of doing this repeatedly while we build
the model, let's create two handy functions to do it for us.
```python
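# A sketch of the two helper functions the paragraph describes, following the
# standard definitions from this era of the tutorial (assumed, since the hunk
# stops at the opening fence):
import tensorflow as tf

def weight_variable(shape):
  # Small truncated-normal noise breaks symmetry and avoids zero gradients.
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  # A slightly positive constant bias helps keep ReLU units from going "dead".
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)
```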