| author    |                                                                        | 2018-04-13 17:52:20 -0700 |
|-----------|------------------------------------------------------------------------|---------------------------|
| committer |                                                                        | 2018-04-13 17:57:27 -0700 |
| commit    | 3652556dab3ebfe0152232facc7304fe5754aecb (patch)                       |                           |
| tree      | 9a9cecde4c85dc53548a185f9bd6d7c6e0591262 /tensorflow/contrib/seq2seq   |                           |
| parent    | ef24ad14502e992716c49fdd5c63e6b2c2fb6b5a (diff)                        |                           |
Merge changes from github.
PiperOrigin-RevId: 192850372
Diffstat (limited to 'tensorflow/contrib/seq2seq')
-rw-r--r-- | tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
index 9e0d69593f..f0f143ddfc 100644
--- a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
+++ b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
@@ -610,8 +610,8 @@ def monotonic_attention(p_choose_i, previous_attention, mode):
   addition, once an input sequence element is attended to at a given output
   timestep, elements occurring before it cannot be attended to at subsequent
   output timesteps. This function generates attention distributions according
-  to these assumptions. For more information, see ``Online and Linear-Time
-  Attention by Enforcing Monotonic Alignments''.
+  to these assumptions. For more information, see `Online and Linear-Time
+  Attention by Enforcing Monotonic Alignments`.
 
   Args:
     p_choose_i: Probability of choosing input sequence/memory element i. Should
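
For context on the docstring being edited: `monotonic_attention(p_choose_i, previous_attention, mode)` produces attention distributions that respect the monotonicity constraint described above. The sketch below is illustrative only and is not the library implementation; it assumes the recursive recurrence from the cited paper (Raffel et al., 2017), q_j = (1 - p_{j-1}) * q_{j-1} + previous_attention_j and attention_j = p_j * q_j, written in plain NumPy with hypothetical names.

```python
import numpy as np

def monotonic_attention_sketch(p_choose_i, previous_attention):
    """Illustrative (non-library) recursive monotonic attention.

    Args:
      p_choose_i: 1-D array [memory_size]; probability of choosing memory
        element j at the current output timestep.
      previous_attention: 1-D array [memory_size]; attention distribution
        from the previous output timestep.

    Returns:
      1-D array [memory_size]; the new attention distribution.
    """
    attention = np.zeros_like(p_choose_i, dtype=float)
    q = 0.0       # running value of (1 - p[j-1]) * q[j-1] + previous_attention[j]
    p_prev = 0.0  # p_choose_i[j-1]; treated as zero before the first element
    for j in range(p_choose_i.shape[0]):
        q = (1.0 - p_prev) * q + previous_attention[j]
        attention[j] = p_choose_i[j] * q
        p_prev = p_choose_i[j]
    return attention

# Example: at the first output timestep the previous attention is
# conventionally a one-hot vector on the first memory element.
p = np.array([0.1, 0.7, 0.9, 0.2])
prev = np.array([1.0, 0.0, 0.0, 0.0])
print(monotonic_attention_sketch(p, prev))
```

Because the recurrence only pushes probability mass forward through the memory, an element that has already been passed over cannot receive attention at a later output timestep, which is exactly the property the docstring describes; the mass may also sum to less than one, representing the probability of attending to nothing at this step.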