author    Mark McDonald <macd@google.com>  2017-02-07 14:40:32 -0800
committer TensorFlower Gardener <gardener@tensorflow.org>  2017-02-07 14:54:38 -0800
commit    33637df3eb7f5b99a3ca0783d7da54723a7f2b8b (patch)
tree      0ddb966c0cccbcdb00c48394871ab318bea49d42
parent    fcae253c08a9f54d55cfa596402ab53c397089bd (diff)
Fixes warnings in the input_fns tutorial.
Change: 146836129
-rw-r--r--  tensorflow/examples/tutorials/input_fn/boston.py   5
-rw-r--r--  tensorflow/g3doc/tutorials/input_fn/index.md       13
2 files changed, 10 insertions, 8 deletions
diff --git a/tensorflow/examples/tutorials/input_fn/boston.py b/tensorflow/examples/tutorials/input_fn/boston.py
index fb2164c395..c7fb7e2316 100644
--- a/tensorflow/examples/tutorials/input_fn/boston.py
+++ b/tensorflow/examples/tutorials/input_fn/boston.py
@@ -53,8 +53,9 @@ def main(unused_argv):
for k in FEATURES]
# Build 2 layer fully connected DNN with 10, 10 units respectively.
-  regressor = tf.contrib.learn.DNNRegressor(
-      feature_columns=feature_cols, hidden_units=[10, 10])
+  regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
+                                            hidden_units=[10, 10],
+                                            model_dir="/tmp/boston_model")
# Fit
regressor.fit(input_fn=lambda: input_fn(training_set), steps=5000)
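For context, `regressor.fit` above relies on the tutorial's custom `input_fn` defined earlier in boston.py. The sketch below paraphrases that pandas-backed input function; the `FEATURES`/`LABEL` constants and the exact body are taken from the tutorial text rather than from this diff:

```python
import tensorflow as tf

# Column names used by the Boston housing tutorial (paraphrased; see boston.py).
FEATURES = ["crim", "zn", "indus", "nox", "rm",
            "age", "dis", "tax", "ptratio"]
LABEL = "medv"

def input_fn(data_set):
  # Map each feature name to a constant Tensor built from the corresponding
  # pandas column, and return it together with a Tensor of labels.
  feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
  labels = tf.constant(data_set[LABEL].values)
  return feature_cols, labels
```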
diff --git a/tensorflow/g3doc/tutorials/input_fn/index.md b/tensorflow/g3doc/tutorials/input_fn/index.md
index 831576433e..6b94fd82e1 100644
--- a/tensorflow/g3doc/tutorials/input_fn/index.md
+++ b/tensorflow/g3doc/tutorials/input_fn/index.md
@@ -35,7 +35,7 @@ encapsulate the logic for preprocessing and piping data into your models.
The following code illustrates the basic skeleton for an input function:
```python
-def my_input_fn()
+def my_input_fn():
# Preprocess your data here...
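The hunk above shows only the top of the skeleton; as a rough sketch (placeholder names, not part of the diff), the full shape of such an input function is a body that builds and returns a dict of feature Tensors plus a Tensor of labels:

```python
def my_input_fn():
  # Preprocess your data here...

  # ...then return 1) a dict mapping feature column names to Tensors with
  # the corresponding feature data, and 2) a Tensor containing labels.
  feature_cols = {"my_feature": tf.constant([1.0, 2.0, 3.0])}
  labels = tf.constant([0.0, 1.0, 0.0])
  return feature_cols, labels
```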
@@ -78,8 +78,8 @@ For [sparse, categorical data](https://en.wikipedia.org/wiki/Sparse_matrix)
`SparseTensor`, which is instantiated with three arguments:
<dl>
- <dt><code>shape</code></dt>
- <dd>The shape of the tensor. Takes a list indicating the number of elements in each dimension. For example, <code>shape=[3,6]</code> specifies a two-dimensional 3x6 tensor, <code>shape=[2,3,4]</code> specifies a three-dimensional 2x3x4 tensor, and <code>shape=[9]</code> specifies a one-dimensional tensor with 9 elements.</dd>
+ <dt><code>dense_shape</code></dt>
+ <dd>The shape of the tensor. Takes a list indicating the number of elements in each dimension. For example, <code>dense_shape=[3,6]</code> specifies a two-dimensional 3x6 tensor, <code>dense_shape=[2,3,4]</code> specifies a three-dimensional 2x3x4 tensor, and <code>dense_shape=[9]</code> specifies a one-dimensional tensor with 9 elements.</dd>
<dt><code>indices</code></dt>
<dd>The indices of the elements in your tensor that contain nonzero values. Takes a list of terms, where each term is itself a list containing the index of a nonzero element. (Elements are zero-indexed—i.e., [0,0] is the index value for the element in the first column of the first row in a two-dimensional tensor.) For example, <code>indices=[[1,3], [2,4]]</code> specifies that the elements with indexes of [1,3] and [2,4] have nonzero values.</dd>
<dt><code>values</code></dt>
@@ -93,7 +93,7 @@ index [2,4] has a value of 0.5 (all other values are 0):
```python
sparse_tensor = tf.SparseTensor(indices=[[0,1], [2,4]],
                                values=[6, 0.5],
-                               shape=[3, 5])
+                               dense_shape=[3, 5])
```
This corresponds to the following dense tensor:
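For reference, the `SparseTensor` in the hunk above expands to the following 3x5 dense tensor (6 at index [0,1], 0.5 at [2,4], zeros elsewhere):

```python
[[0, 6, 0, 0, 0]
 [0, 0, 0, 0, 0]
 [0, 0, 0, 0, 0.5]]
```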
@@ -277,8 +277,9 @@ with 10 nodes each), and `feature_columns`, containing the list of
`FeatureColumns` you just defined:
```python
-regressor = tf.contrib.learn.DNNRegressor(
-    feature_columns=feature_cols, hidden_units=[10, 10])
+regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
+                                          hidden_units=[10, 10],
+                                          model_dir="/tmp/boston_model")
```
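With `model_dir` set, checkpoints are written to `/tmp/boston_model`, and the regressor is trained by passing the input function through a lambda, as in the boston.py hunk above. The `evaluate` call below follows the tutorial's usual pattern and is shown only as a sketch; `test_set` is an assumed DataFrame prepared like `training_set`:

```python
# Train for 5000 steps using the custom input function (see boston.py).
regressor.fit(input_fn=lambda: input_fn(training_set), steps=5000)

# Evaluation follows the same pattern; test_set is assumed to be a pandas
# DataFrame prepared the same way as training_set.
ev = regressor.evaluate(input_fn=lambda: input_fn(test_set), steps=1)
loss_score = ev["loss"]
```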
### Building the input_fn