author     A. Unique TensorFlower <gardener@tensorflow.org>    2016-11-23 12:51:52 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>     2016-11-23 13:03:17 -0800
commit     b793cfd8ed0675f77a710bd3b98001d15974ee25 (patch)
tree       a8a3037ec7089ebdc073040369e8289fd85ab7c0
parent     92da8abfd35b93488ed7a55308b8f589ee23b622 (diff)
Update generated Python Op docs.
Change: 140062662
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.rnn.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.bidirectional_rnn.md | 3
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.rnn.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.BasicRNNCell.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md | 17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.state_saving_rnn.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md | 3
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/nn.md | 14
-rw-r--r--  tensorflow/g3doc/api_docs/python/rnn_cell.md | 19
12 files changed, 38 insertions, 40 deletions
diff --git a/tensorflow/g3doc/api_docs/python/contrib.rnn.md b/tensorflow/g3doc/api_docs/python/contrib.rnn.md
index 1d59c1c630..8f28c19232 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.rnn.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.rnn.md
@@ -21,7 +21,7 @@ reduce the scale of forgetting in the beginning of the training.
Unlike `rnn_cell.LSTMCell`, this is a monolithic op and should be much faster.
The weight and bias matrixes should be compatible as long as the variable
-scope matches, and you use `use_compatible_names=True`.
+scope matches.
- - -
#### `tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None)` {#LSTMBlockCell.__call__}
@@ -31,7 +31,7 @@ Long short-term memory cell (LSTM).
- - -
-#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False, use_compatible_names=False)` {#LSTMBlockCell.__init__}
+#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False)` {#LSTMBlockCell.__init__}
Initialize the basic LSTM cell.
@@ -41,8 +41,6 @@ Initialize the basic LSTM cell.
* <b>`num_units`</b>: int, The number of units in the LSTM cell.
* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-* <b>`use_compatible_names`</b>: If True, use the same variable naming as
- rnn_cell.LSTMCell
- - -
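The hunk above drops `use_compatible_names`; variable-name compatibility with `rnn_cell.LSTMCell` is now implied whenever the variable scope matches. A minimal sketch of the post-patch constructor, assuming the contrib API of this TensorFlow release (shapes, the placeholder, and the exact state layout returned by `zero_state` are illustrative assumptions, not part of the patch):

```python
import tensorflow as tf

# Post-patch constructor: only num_units, forget_bias and use_peephole remain.
cell = tf.contrib.rnn.LSTMBlockCell(num_units=128,
                                    forget_bias=1.0,
                                    use_peephole=False)

# One step through the cell; x is [batch, input_depth].
x = tf.placeholder(tf.float32, [32, 64])
states_prev = cell.zero_state(batch_size=32, dtype=tf.float32)
output, states = cell(x, states_prev)
```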
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.bidirectional_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.bidirectional_rnn.md
index 7ff1e48648..f9d14ef9de 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.bidirectional_rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.bidirectional_rnn.md
@@ -29,7 +29,8 @@ length(s) of the sequence(s) or completely unrolled if length(s) is not given.
either of the initial states are not provided.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN"
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
+ "bidirectional_rnn"
##### Returns:
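The scope rename above means variables created without an explicit `scope` now live under "bidirectional_rnn/..." instead of "BiRNN/...". A hedged sketch of the static `bidirectional_rnn` call, assuming the 2016-era `tf.nn` API (cell sizes and the step count are made up for illustration):

```python
import tensorflow as tf

cell_fw = tf.nn.rnn_cell.BasicRNNCell(64)
cell_bw = tf.nn.rnn_cell.BasicRNNCell(64)

# The static bidirectional_rnn takes a Python list of per-time-step
# tensors, each of shape [batch_size, input_size].
inputs = [tf.placeholder(tf.float32, [None, 32]) for _ in range(10)]

outputs, state_fw, state_bw = tf.nn.bidirectional_rnn(
    cell_fw, cell_bw, inputs, dtype=tf.float32)

# Variables are now created under the default scope "bidirectional_rnn/..."
# rather than the old "BiRNN/..." (exact inner names depend on the cells).
for v in tf.trainable_variables():
    print(v.name)
```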
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.rnn.md
index a2d8187fad..ac38b8f422 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.rnn.md
@@ -46,7 +46,7 @@ The dynamic calculation performed is, at time `t` for batch row `b`,
dtype.
* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md
index 876a1592f1..cb90c403c1 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md
@@ -7,7 +7,7 @@ reduce the scale of forgetting in the beginning of the training.
Unlike `rnn_cell.LSTMCell`, this is a monolithic op and should be much faster.
The weight and bias matrixes should be compatible as long as the variable
-scope matches, and you use `use_compatible_names=True`.
+scope matches.
- - -
#### `tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None)` {#LSTMBlockCell.__call__}
@@ -17,7 +17,7 @@ Long short-term memory cell (LSTM).
- - -
-#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False, use_compatible_names=False)` {#LSTMBlockCell.__init__}
+#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False)` {#LSTMBlockCell.__init__}
Initialize the basic LSTM cell.
@@ -27,8 +27,6 @@ Initialize the basic LSTM cell.
* <b>`num_units`</b>: int, The number of units in the LSTM cell.
* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-* <b>`use_compatible_names`</b>: If True, use the same variable naming as
- rnn_cell.LSTMCell
- - -
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.BasicRNNCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.BasicRNNCell.md
index a08ef164f1..a2aed04e46 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.BasicRNNCell.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.BasicRNNCell.md
@@ -3,7 +3,7 @@ The most basic RNN cell.
#### `tf.nn.rnn_cell.BasicRNNCell.__call__(inputs, state, scope=None)` {#BasicRNNCell.__call__}
-Most basic RNN: output = new_state = activation(W * input + U * state + B).
+Most basic RNN: output = new_state = act(W * input + U * state + B).
- - -
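The formula in the hunk above is compact; a tiny NumPy sketch of what one `BasicRNNCell` step computes (weight shapes are assumptions, and `tanh` stands in for the configurable activation):

```python
import numpy as np

def basic_rnn_step(x, state, W, U, b, act=np.tanh):
    """One step: output = new_state = act(W * input + U * state + B)."""
    new_state = act(x.dot(W) + state.dot(U) + b)
    return new_state, new_state  # output and new state are the same value

x = np.random.randn(4, 8)      # [batch, input_size]
state = np.zeros((4, 16))      # [batch, num_units]
W = np.random.randn(8, 16)
U = np.random.randn(16, 16)
b = np.zeros(16)

output, new_state = basic_rnn_step(x, state, W, U, b)
```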
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md
index 9f1e999461..5ee8c2ad30 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md
@@ -31,7 +31,7 @@ Run one step of LSTM.
`2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
tuple of state Tensors, both `2-D`, with column sizes `c_state` and
`m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "LSTMCell".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "lstm_cell".
##### Returns:
@@ -54,7 +54,7 @@ Run one step of LSTM.
- - -
-#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
+#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
Initialize the parameters for an LSTM cell.
@@ -71,13 +71,12 @@ Initialize the parameters for an LSTM cell.
* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
matrices. If None, no projection is performed.
* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
- provided, then the projected values are clipped elementwise to within
- `[-proj_clip, proj_clip]`.
-
-* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
- projection matrix is stored across num_proj_shards.
+ provided, then the projected values are clipped elementwise to within
+ `[-proj_clip, proj_clip]`.
+* <b>`num_unit_shards`</b>: Deprecated, will be removed by Jan. 2017.
+ Use a variable_scope partitioner instead.
+* <b>`num_proj_shards`</b>: Deprecated, will be removed by Jan. 2017.
+ Use a variable_scope partitioner instead.
* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
in order to reduce the scale of forgetting at the beginning of
the training.
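With `num_unit_shards` and `num_proj_shards` deprecated, sharding moves to the enclosing variable scope. A sketch of the suggested replacement, assuming `tf.fixed_size_partitioner` is available in this release (the shard count of 4 and all shapes are arbitrary illustrations):

```python
import tensorflow as tf

# Old style (deprecated, slated for removal by Jan. 2017):
#   cell = tf.nn.rnn_cell.LSTMCell(1024, num_unit_shards=4)

# New style: let a variable_scope partitioner split the weight matrix.
with tf.variable_scope("lstm", partitioner=tf.fixed_size_partitioner(4)):
    cell = tf.nn.rnn_cell.LSTMCell(num_units=1024, use_peepholes=True)
    inputs = tf.placeholder(tf.float32, [None, 50, 256])
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```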
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.state_saving_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.state_saving_rnn.md
index 67a444ad4a..14198ab9c2 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.state_saving_rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.state_saving_rnn.md
@@ -16,7 +16,7 @@ RNN that accepts a state saver for time-truncated RNN calculation.
be a single string.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
See the documentation for rnn() for more details about sequence_length.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md
index 368e588028..9d0fe0e3ef 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md
@@ -51,7 +51,8 @@ given.
accepts input and emits output in batch-major form.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
either of the initial states are not provided.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN"
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
+ "bidirectional_rnn"
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md
index 81517a1ac6..f2ae7527e7 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md
@@ -67,7 +67,7 @@ for correctness than performance, unlike in rnn().
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
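One consequence of the "RNN" to "rnn" default: checkpoints written before this change store variables under the old scope. A hedged sketch of `dynamic_rnn` showing the `scope` argument pinned explicitly (passing the old name is an illustrative workaround, not something this patch prescribes; each call creates its own variable set):

```python
import tensorflow as tf

cell = tf.nn.rnn_cell.LSTMCell(128)
inputs = tf.placeholder(tf.float32, [None, 30, 64])   # batch-major by default
seq_len = tf.placeholder(tf.int32, [None])

# Default: variables are created under the new "rnn/..." scope.
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   sequence_length=seq_len,
                                   dtype=tf.float32)

# To keep names compatible with a checkpoint written before the rename,
# the scope can still be set to the pre-patch default explicitly.
outputs_old, state_old = tf.nn.dynamic_rnn(cell, inputs,
                                           sequence_length=seq_len,
                                           dtype=tf.float32,
                                           scope="RNN")
```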
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md
index 8c0d9bd027..8cb2eab12f 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md
@@ -136,7 +136,7 @@ outputs = outputs_ta.pack()
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
index 438542bc88..5f1e189874 100644
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -2591,7 +2591,7 @@ for correctness than performance, unlike in rnn().
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
@@ -2675,7 +2675,7 @@ The dynamic calculation performed is, at time `t` for batch row `b`,
dtype.
* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
@@ -2713,7 +2713,7 @@ RNN that accepts a state saver for time-truncated RNN calculation.
be a single string.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
See the documentation for rnn() for more details about sequence_length.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
@@ -2784,7 +2784,8 @@ given.
accepts input and emits output in batch-major form.
* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
either of the initial states are not provided.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN"
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
+ "bidirectional_rnn"
##### Returns:
@@ -2848,7 +2849,8 @@ length(s) of the sequence(s) or completely unrolled if length(s) is not given.
either of the initial states are not provided.
* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "BiRNN"
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
+ "bidirectional_rnn"
##### Returns:
@@ -3005,7 +3007,7 @@ outputs = outputs_ta.pack()
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "RNN".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/rnn_cell.md b/tensorflow/g3doc/api_docs/python/rnn_cell.md
index 0c1140799d..c6d39bd936 100644
--- a/tensorflow/g3doc/api_docs/python/rnn_cell.md
+++ b/tensorflow/g3doc/api_docs/python/rnn_cell.md
@@ -109,7 +109,7 @@ The most basic RNN cell.
#### `tf.nn.rnn_cell.BasicRNNCell.__call__(inputs, state, scope=None)` {#BasicRNNCell.__call__}
-Most basic RNN: output = new_state = activation(W * input + U * state + B).
+Most basic RNN: output = new_state = act(W * input + U * state + B).
- - -
@@ -326,7 +326,7 @@ Run one step of LSTM.
`2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
tuple of state Tensors, both `2-D`, with column sizes `c_state` and
`m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "LSTMCell".
+* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "lstm_cell".
##### Returns:
@@ -349,7 +349,7 @@ Run one step of LSTM.
- - -
-#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
+#### `tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
Initialize the parameters for an LSTM cell.
@@ -366,13 +366,12 @@ Initialize the parameters for an LSTM cell.
* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
matrices. If None, no projection is performed.
* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
- provided, then the projected values are clipped elementwise to within
- `[-proj_clip, proj_clip]`.
-
-* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
- projection matrix is stored across num_proj_shards.
+ provided, then the projected values are clipped elementwise to within
+ `[-proj_clip, proj_clip]`.
+* <b>`num_unit_shards`</b>: Deprecated, will be removed by Jan. 2017.
+ Use a variable_scope partitioner instead.
+* <b>`num_proj_shards`</b>: Deprecated, will be removed by Jan. 2017.
+ Use a variable_scope partitioner instead.
* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
in order to reduce the scale of forgetting at the beginning of
the training.