path: root/tensorflow/contrib/seq2seq
author: ImSheridan <xiaoyudong0512@gmail.com> 2018-04-06 01:29:32 +0800
committer: Rasmus Munk Larsen <rmlarsen@google.com> 2018-04-05 10:29:32 -0700
commit: de61d322391a824c9dd97b5b4913b45f8a12539d (patch)
tree: 37603c534963cb74545d0243f3f28bebc12677e6 /tensorflow/contrib/seq2seq
parent: 14241b17aae754e2a64c8a350caf63e6572fe9cd (diff)
Fix some rendering format in contrib doc strings (#18148)
* Fix some rendering format in contrib doc strings
* Fix line too long pylint error
Diffstat (limited to 'tensorflow/contrib/seq2seq')
-rw-r--r--  tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
index 9e0d69593f..f0f143ddfc 100644
--- a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
+++ b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
@@ -610,8 +610,8 @@ def monotonic_attention(p_choose_i, previous_attention, mode):
addition, once an input sequence element is attended to at a given output
timestep, elements occurring before it cannot be attended to at subsequent
output timesteps. This function generates attention distributions according
- to these assumptions. For more information, see ``Online and Linear-Time
- Attention by Enforcing Monotonic Alignments''.
+ to these assumptions. For more information, see `Online and Linear-Time
+ Attention by Enforcing Monotonic Alignments`.
Args:
p_choose_i: Probability of choosing input sequence/memory element i. Should
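The docstring being patched describes the monotonic attention constraint: probability mass can only move forward, so once a memory element is attended to, earlier elements are excluded at later output timesteps. A minimal NumPy sketch of the recursive-mode recurrence from the cited paper (alpha_j = p_j * q_j, with q_j = (1 - p_{j-1}) * q_{j-1} + alpha_prev_j) is below; it is an illustration of the math, not the TensorFlow implementation, and the function name is hypothetical.

```python
import numpy as np

def monotonic_attention_recursive(p_choose, previous_attention):
    """Sketch of the recursive monotonic attention recurrence.

    p_choose: shape (T,), probability of choosing memory element j
        at the current output timestep.
    previous_attention: shape (T,), attention distribution from the
        previous output timestep.
    Returns the new attention distribution, shape (T,).
    """
    T = len(p_choose)
    attention = np.zeros(T)
    q = 0.0  # mass carried forward from elements not yet chosen
    for j in range(T):
        if j == 0:
            q = previous_attention[0]
        else:
            # Mass skipped at j-1 flows forward, plus mass arriving from
            # the previous timestep's attention at position j.
            q = (1.0 - p_choose[j - 1]) * q + previous_attention[j]
        attention[j] = p_choose[j] * q
    return attention
```

A sanity check on the recurrence: with p_choose all ones, every position that receives mass is chosen immediately, so the output equals previous_attention, i.e. attention never moves backward.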