Diffstat (limited to 'tensorflow/docs_src/tutorials/seq2seq.md')
 tensorflow/docs_src/tutorials/seq2seq.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tensorflow/docs_src/tutorials/seq2seq.md b/tensorflow/docs_src/tutorials/seq2seq.md
index dd2ca8d524..84c8a9c9f3 100644
--- a/tensorflow/docs_src/tutorials/seq2seq.md
+++ b/tensorflow/docs_src/tutorials/seq2seq.md
@@ -140,7 +140,7 @@ When training models with large output vocabularies, i.e., when
tensors. Instead, it is better to return smaller output tensors, which will
later be projected onto a large output tensor using `output_projection`.
This allows to use our seq2seq models with a sampled softmax loss, as described
-in [Jean et. al., 2014](http://arxiv.org/abs/1412.2007)
+in [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
([pdf](http://arxiv.org/pdf/1412.2007.pdf)).
In addition to `basic_rnn_seq2seq` and `embedding_rnn_seq2seq` there are a few