author    | Mark Daoust <markdaoust@google.com>             | 2018-08-09 07:03:39 -0700
committer | TensorFlower Gardener <gardener@tensorflow.org> | 2018-08-09 07:08:30 -0700
commit    | f40a875355557483aeae60ffcf757fc9626c752b (patch)
tree      | 7f642a6fd12495c1c7d9b2f3a37e376d8ee6d2c9 /tensorflow/contrib/seq2seq
parent    | fd9fc4b4b69f7fce60497bbaf5cbd958f12ead8d (diff)
Remove usage of magic-api-link syntax from source files.
Back-ticks are now converted to links in the api_docs generator. With the new docs repo we're moving to simplify the docs pipeline and make everything more readable.
By doing this we no longer get test failures for symbols that don't exist (`tf.does_not_exist` will not get a link).
There is also no way to set custom link text. That's okay.
This is the result of the following regex replacement (plus a couple of manual edits):
re: @\{([^$].*?)(\$.+?)?}
sub: `\1`
Which performs the following replacements:
"@{tf.symbol}" --> "`tf.symbol`"
"@{tf.symbol$link_text}" --> "`tf.symbol`"
PiperOrigin-RevId: 208042358
Diffstat (limited to 'tensorflow/contrib/seq2seq')
-rw-r--r-- | tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py   | 10
-rw-r--r-- | tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py |  2
2 files changed, 6 insertions, 6 deletions
diff --git a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
index 1c9d179e3c..0ba32cd3bf 100644
--- a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
+++ b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
@@ -382,8 +382,8 @@ class LuongAttention(_BaseAttentionMechanism):
         for values past the respective sequence lengths.
       scale: Python boolean. Whether to scale the energy term.
       probability_fn: (optional) A `callable`. Converts the score to
-        probabilities. The default is @{tf.nn.softmax}. Other options include
-        @{tf.contrib.seq2seq.hardmax} and @{tf.contrib.sparsemax.sparsemax}.
+        probabilities. The default is `tf.nn.softmax`. Other options include
+        `tf.contrib.seq2seq.hardmax` and `tf.contrib.sparsemax.sparsemax`.
         Its signature should be: `probabilities = probability_fn(score)`.
       score_mask_value: (optional) The mask value for score before passing into
         `probability_fn`. The default is -inf. Only used if
@@ -529,8 +529,8 @@ class BahdanauAttention(_BaseAttentionMechanism):
         for values past the respective sequence lengths.
       normalize: Python boolean. Whether to normalize the energy term.
       probability_fn: (optional) A `callable`. Converts the score to
-        probabilities. The default is @{tf.nn.softmax}. Other options include
-        @{tf.contrib.seq2seq.hardmax} and @{tf.contrib.sparsemax.sparsemax}.
+        probabilities. The default is `tf.nn.softmax`. Other options include
+        `tf.contrib.seq2seq.hardmax` and `tf.contrib.sparsemax.sparsemax`.
         Its signature should be: `probabilities = probability_fn(score)`.
       score_mask_value: (optional): The mask value for score before passing into
         `probability_fn`. The default is -inf. Only used if
@@ -1091,7 +1091,7 @@ class AttentionWrapper(rnn_cell_impl.RNNCell):
     `AttentionWrapper`, then you must ensure that:
     - The encoder output has been tiled to `beam_width` via
-      @{tf.contrib.seq2seq.tile_batch} (NOT `tf.tile`).
+      `tf.contrib.seq2seq.tile_batch` (NOT `tf.tile`).
     - The `batch_size` argument passed to the `zero_state` method of this
       wrapper is equal to `true_batch_size * beam_width`.
     - The initial state created with `zero_state` above contains a
diff --git a/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py b/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py
index f17dbb0fe3..74741a7bd6 100644
--- a/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py
+++ b/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py
@@ -234,7 +234,7 @@ class BeamSearchDecoder(decoder.Decoder):
     `AttentionWrapper`, then you must ensure that:
     - The encoder output has been tiled to `beam_width` via
-      @{tf.contrib.seq2seq.tile_batch} (NOT `tf.tile`).
+      `tf.contrib.seq2seq.tile_batch` (NOT `tf.tile`).
    - The `batch_size` argument passed to the `zero_state` method of this
      wrapper is equal to `true_batch_size * beam_width`.
    - The initial state created with `zero_state` above contains a
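The docstrings touched by this diff warn that the encoder output must be tiled with `tf.contrib.seq2seq.tile_batch`, NOT `tf.tile`. The distinction is the interleaving order: `tile_batch` repeats each batch entry `beam_width` times consecutively, so every beam of an example attends over that example's encoder output, while `tf.tile` repeats the whole batch and misaligns beams with their source example. A minimal NumPy sketch of the difference (the array shapes and values here are made up for illustration; the real API operates on TF tensors):

```python
import numpy as np

batch_size, beam_width = 2, 3
# Hypothetical encoder output: [batch_size, max_time=3, num_units=4].
encoder_out = np.arange(batch_size * 3 * 4).reshape(batch_size, 3, 4)

# What tile_batch does: [e0, e0, e0, e1, e1, e1] (np.repeat on axis 0).
tiled_batch = np.repeat(encoder_out, beam_width, axis=0)

# What tf.tile would do: [e0, e1, e0, e1, e0, e1] -- wrong for beam search.
naive_tile = np.tile(encoder_out, (beam_width, 1, 1))

# All beam_width rows for example 0 carry example 0's encoder output.
assert tiled_batch.shape == (batch_size * beam_width, 3, 4)
assert (tiled_batch[:beam_width] == encoder_out[0]).all()
# With tf.tile-style repetition, row 1 already belongs to example 1.
assert not (naive_tile[1] == encoder_out[0]).all()
```

The same `true_batch_size * beam_width` layout is what the wrapper's `zero_state` expects as its `batch_size` argument.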