author    Scott Zhu <scottzhu@google.com>    2018-04-13 17:52:20 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>    2018-04-13 17:57:27 -0700
commit 3652556dab3ebfe0152232facc7304fe5754aecb (patch)
tree   9a9cecde4c85dc53548a185f9bd6d7c6e0591262 /tensorflow/contrib/seq2seq
parent ef24ad14502e992716c49fdd5c63e6b2c2fb6b5a (diff)
Merge changes from github.
PiperOrigin-RevId: 192850372
Diffstat (limited to 'tensorflow/contrib/seq2seq')
-rw-r--r--  tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
index 9e0d69593f..f0f143ddfc 100644
--- a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
+++ b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
@@ -610,8 +610,8 @@ def monotonic_attention(p_choose_i, previous_attention, mode):
addition, once an input sequence element is attended to at a given output
timestep, elements occurring before it cannot be attended to at subsequent
output timesteps. This function generates attention distributions according
- to these assumptions. For more information, see ``Online and Linear-Time
- Attention by Enforcing Monotonic Alignments''.
+ to these assumptions. For more information, see `Online and Linear-Time
+ Attention by Enforcing Monotonic Alignments`.
Args:
p_choose_i: Probability of choosing input sequence/memory element i. Should
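
For context, here is a minimal usage sketch of the function this diff touches, assuming the TF 1.x contrib API of this commit's era. The signature and the 'parallel' mode come from the hunk header and docstring above; the shapes and concrete inputs are illustrative only, not taken from the commit.

import tensorflow as tf

batch_size, seq_len = 2, 5

# p_choose_i: probability of choosing input/memory element i at this
# output timestep; values in [0, 1], shape [batch_size, seq_len].
p_choose_i = tf.random_uniform([batch_size, seq_len])

# At the first output timestep, attention starts as a one-hot on element 0.
previous_attention = tf.one_hot(
    tf.zeros([batch_size], dtype=tf.int32), depth=seq_len)

# 'parallel' computes the monotonic attention distribution in closed form;
# 'recursive' and 'hard' are the other modes the function accepts.
attention = tf.contrib.seq2seq.monotonic_attention(
    p_choose_i, previous_attention, mode='parallel')

with tf.Session() as sess:
  print(sess.run(attention))  # shape [batch_size, seq_len]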