{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "version": "0.3.2",
      "views": {},
      "default_view": {},
      "name": "6_lstm.ipynb",
      "provenance": []
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8tQJd2YSCfWR",
        "colab_type": "text"
      },
      "source": [
        ""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D7tqLMoKF6uq",
        "colab_type": "text"
      },
      "source": [
        "Deep Learning\n",
        "=============\n",
        "\n",
        "Assignment 6\n",
        "------------\n",
        "\n",
        "After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MvEblsgEXxrd",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          }
        },
        "cellView": "both"
      },
      "source": [
        "# These are all the modules we'll be using later. Make sure you can import them\n",
        "# before proceeding further.\n",
        "from __future__ import print_function\n",
        "import os\n",
        "import numpy as np\n",
        "import random\n",
        "import string\n",
        "import tensorflow as tf\n",
        "import zipfile\n",
        "from six.moves import range\n",
        "from six.moves.urllib.request import urlretrieve"
      ],
      "outputs": [],
      "execution_count": 0
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RJ-o3UBUFtCw",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 1
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 5993,
          "status": "ok",
          "timestamp": 1445965582896,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "d530534e-0791-4a94-ca6d-1c8f1b908a9e"
      },
      "source": [
        "url = 'http://mattmahoney.net/dc/'\n",
        "\n",
        "def maybe_download(filename, expected_bytes):\n",
        "  \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n",
        "  if not os.path.exists(filename):\n",
        "    filename, _ = urlretrieve(url + filename, filename)\n",
        "  statinfo = os.stat(filename)\n",
        "  if statinfo.st_size == expected_bytes:\n",
        "    print('Found and verified %s' % filename)\n",
        "  else:\n",
        "    print(statinfo.st_size)\n",
        "    raise Exception(\n",
        "      'Failed to verify ' + filename + '. Can you get to it with a browser?')\n",
        "  return filename\n",
        "\n",
        "filename = maybe_download('text8.zip', 31344016)"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Found and verified text8.zip\n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Mvf09fjugFU_",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 1
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 5982,
          "status": "ok",
          "timestamp": 1445965582916,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "8f75db58-3862-404b-a0c3-799380597390"
      },
      "source": [
        "def read_data(filename):\n",
        "  with zipfile.ZipFile(filename) as f:\n",
        "    name = f.namelist()[0]\n",
        "    data = tf.compat.as_str(f.read(name))\n",
        "  return data\n",
        "  \n",
        "text = read_data(filename)\n",
        "print('Data size %d' % len(text))"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Data size 100000000\n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ga2CYACE-ghb",
        "colab_type": "text"
      },
      "source": [
        "Create a small validation set."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "w-oBpfFG-j43",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 1
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 6184,
          "status": "ok",
          "timestamp": 1445965583138,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "bdb96002-d021-4379-f6de-a977924f0d02"
      },
      "source": [
        "valid_size = 1000\n",
        "valid_text = text[:valid_size]\n",
        "train_text = text[valid_size:]\n",
        "train_size = len(train_text)\n",
        "print(train_size, train_text[:64])\n",
        "print(valid_size, valid_text[:64])"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "99999000 ons anarchists advocate social relations based upon voluntary as\n",
            "1000  anarchism originated as a term of abuse first used against earl\n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Zdw6i4F8glpp",
        "colab_type": "text"
      },
      "source": [
        "Utility functions to map characters to vocabulary IDs and back."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gAL1EECXeZsD",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 1
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 6276,
          "status": "ok",
          "timestamp": 1445965583249,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "88fc9032-feb9-45ff-a9a0-a26759cc1f2e"
      },
      "source": [
        "vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '\n",
        "first_letter = ord(string.ascii_lowercase[0])\n",
        "\n",
        "def char2id(char):\n",
        "  if char in string.ascii_lowercase:\n",
        "    return ord(char) - first_letter + 1\n",
        "  elif char == ' ':\n",
        "    return 0\n",
        "  else:\n",
        "    print('Unexpected character: %s' % char)\n",
        "    return 0\n",
        "  \n",
        "def id2char(dictid):\n",
        "  if dictid > 0:\n",
        "    return chr(dictid + first_letter - 1)\n",
        "  else:\n",
        "    return ' '\n",
        "\n",
        "print(char2id('a'), char2id('z'), char2id(' '), char2id('\u00ef'))\n",
        "print(id2char(1), id2char(26), id2char(0))"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "1 26 0 Unexpected character: \u00ef\n",
            "0\n",
            "a z  \n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lFwoyygOmWsL",
        "colab_type": "text"
      },
      "source": [
        "Function to generate a training batch for the LSTM model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "d9wMtjy5hCj9",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 1
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 6473,
          "status": "ok",
          "timestamp": 1445965583467,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "3dd79c80-454a-4be0-8b71-4a4a357b3367"
      },
      "source": [
        "batch_size=64\n",
        "num_unrollings=10\n",
        "\n",
        "class BatchGenerator(object):\n",
        "  def __init__(self, text, batch_size, num_unrollings):\n",
        "    self._text = text\n",
        "    self._text_size = len(text)\n",
        "    self._batch_size = batch_size\n",
        "    self._num_unrollings = num_unrollings\n",
        "    segment = self._text_size // batch_size\n",
        "    self._cursor = [ offset * segment for offset in range(batch_size)]\n",
        "    self._last_batch = self._next_batch()\n",
        "  \n",
        "  def _next_batch(self):\n",
        "    \"\"\"Generate a single batch from the current cursor position in the data.\"\"\"\n",
        "    batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)\n",
        "    for b in range(self._batch_size):\n",
        "      batch[b, char2id(self._text[self._cursor[b]])] = 1.0\n",
        "      self._cursor[b] = (self._cursor[b] + 1) % self._text_size\n",
        "    return batch\n",
        "  \n",
        "  def next(self):\n",
        "    \"\"\"Generate the next array of batches from the data. The array consists of\n",
        "    the last batch of the previous array, followed by num_unrollings new ones.\n",
        "    \"\"\"\n",
        "    batches = [self._last_batch]\n",
        "    for step in range(self._num_unrollings):\n",
        "      batches.append(self._next_batch())\n",
        "    self._last_batch = batches[-1]\n",
        "    return batches\n",
        "\n",
        "def characters(probabilities):\n",
        "  \"\"\"Turn a 1-hot encoding or a probability distribution over the possible\n",
        "  characters back into its (most likely) character representation.\"\"\"\n",
        "  return [id2char(c) for c in np.argmax(probabilities, 1)]\n",
        "\n",
        "def batches2string(batches):\n",
        "  \"\"\"Convert a sequence of batches back into their (most likely) string\n",
        "  representation.\"\"\"\n",
        "  s = [''] * batches[0].shape[0]\n",
        "  for b in batches:\n",
        "    s = [''.join(x) for x in zip(s, characters(b))]\n",
        "  return s\n",
        "\n",
        "train_batches = BatchGenerator(train_text, batch_size, num_unrollings)\n",
        "valid_batches = BatchGenerator(valid_text, 1, 1)\n",
        "\n",
        "print(batches2string(train_batches.next()))\n",
        "print(batches2string(train_batches.next()))\n",
        "print(batches2string(valid_batches.next()))\n",
        "print(batches2string(valid_batches.next()))"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "['ons anarchi', 'when milita', 'lleria arch', ' abbeys and', 'married urr', 'hel and ric', 'y and litur', 'ay opened f', 'tion from t', 'migration t', 'new york ot', 'he boeing s', 'e listed wi', 'eber has pr', 'o be made t', 'yer who rec', 'ore signifi', 'a fierce cr', ' two six ei', 'aristotle s', 'ity can be ', ' and intrac', 'tion of the', 'dy to pass ', 'f certain d', 'at it will ', 'e convince ', 'ent told hi', 'ampaign and', 'rver side s', 'ious texts ', 'o capitaliz', 'a duplicate', 'gh ann es d', 'ine january', 'ross zero t', 'cal theorie', 'ast instanc', ' dimensiona', 'most holy m', 't s support', 'u is still ', 'e oscillati', 'o eight sub', 'of italy la', 's the tower', 'klahoma pre', 'erprise lin', 'ws becomes ', 'et in a naz', 'the fabian ', 'etchy to re', ' sharman ne', 'ised empero', 'ting in pol', 'd neo latin', 'th risky ri', 'encyclopedi', 'fense the a', 'duating fro', 'treet grid ', 'ations more', 'appeal of d', 'si have mad']\n",
            "['ists advoca', 'ary governm', 'hes nationa', 'd monasteri', 'raca prince', 'chard baer ', 'rgical lang', 'for passeng', 'the nationa', 'took place ', 'ther well k', 'seven six s', 'ith a gloss', 'robably bee', 'to recogniz', 'ceived the ', 'icant than ', 'ritic of th', 'ight in sig', 's uncaused ', ' lost as in', 'cellular ic', 'e size of t', ' him a stic', 'drugs confu', ' take to co', ' the priest', 'im to name ', 'd barred at', 'standard fo', ' such as es', 'ze on the g', 'e of the or', 'd hiver one', 'y eight mar', 'the lead ch', 'es classica', 'ce the non ', 'al analysis', 'mormons bel', 't or at lea', ' disagreed ', 'ing system ', 'btypes base', 'anguages th', 'r commissio', 'ess one nin', 'nux suse li', ' the first ', 'zi concentr', ' society ne', 'elatively s', 'etworks sha', 'or hirohito', 'litical ini', 'n most of t', 'iskerdoo ri', 'ic overview', 'air compone', 'om acnm acc', ' centerline', 'e than any ', 'devotional ', 'de such dev']\n",
            "[' a']\n",
            "['an']\n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "KyVd8FxT5QBc",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          }
        },
        "cellView": "both"
      },
      "source": [
        "def logprob(predictions, labels):\n",
        "  \"\"\"Log-probability of the true labels in a predicted batch.\"\"\"\n",
        "  predictions[predictions < 1e-10] = 1e-10\n",
        "  return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]\n",
        "\n",
        "def sample_distribution(distribution):\n",
        "  \"\"\"Sample one element from a distribution assumed to be an array of normalized\n",
        "  probabilities.\n",
        "  \"\"\"\n",
        "  r = random.uniform(0, 1)\n",
        "  s = 0\n",
        "  for i in range(len(distribution)):\n",
        "    s += distribution[i]\n",
        "    if s >= r:\n",
        "      return i\n",
        "  return len(distribution) - 1\n",
        "\n",
        "def sample(prediction):\n",
        "  \"\"\"Turn a (column) prediction into 1-hot encoded samples.\"\"\"\n",
        "  p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)\n",
        "  p[0, sample_distribution(prediction[0])] = 1.0\n",
        "  return p\n",
        "\n",
        "def random_distribution():\n",
        "  \"\"\"Generate a random column of probabilities.\"\"\"\n",
        "  b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])\n",
        "  return b/np.sum(b, 1)[:,None]"
      ],
      "outputs": [],
      "execution_count": 0
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "K8f67YXaDr4C",
        "colab_type": "text"
      },
      "source": [
        "Simple LSTM Model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Q5rxZK6RDuGe",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          }
        },
        "cellView": "both"
      },
      "source": [
        "num_nodes = 64\n",
        "\n",
        "graph = tf.Graph()\n",
        "with graph.as_default():\n",
        "  \n",
        "  # Parameters:\n",
        "  # Input gate: input, previous output, and bias.\n",
        "  ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n",
        "  im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n",
        "  ib = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  # Forget gate: input, previous output, and bias.\n",
        "  fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n",
        "  fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n",
        "  fb = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  # Memory cell: input, state and bias.                             \n",
        "  cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n",
        "  cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n",
        "  cb = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  # Output gate: input, previous output, and bias.\n",
        "  ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n",
        "  om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n",
        "  ob = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  # Variables saving state across unrollings.\n",
        "  saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)\n",
        "  saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)\n",
        "  # Classifier weights and biases.\n",
        "  w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))\n",
        "  b = tf.Variable(tf.zeros([vocabulary_size]))\n",
        "  \n",
        "  # Definition of the cell computation.\n",
        "  def lstm_cell(i, o, state):\n",
        "    \"\"\"Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf\n",
        "    Note that in this formulation, we omit the various connections between the\n",
        "    previous state and the gates.\"\"\"\n",
        "    input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)\n",
        "    forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)\n",
        "    update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb\n",
        "    state = forget_gate * state + input_gate * tf.tanh(update)\n",
        "    output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)\n",
        "    return output_gate * tf.tanh(state), state\n",
        "\n",
        "  # Input data.\n",
        "  train_data = list()\n",
        "  for _ in range(num_unrollings + 1):\n",
        "    train_data.append(\n",
        "      tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))\n",
        "  train_inputs = train_data[:num_unrollings]\n",
        "  train_labels = train_data[1:]  # labels are inputs shifted by one time step.\n",
        "\n",
        "  # Unrolled LSTM loop.\n",
        "  outputs = list()\n",
        "  output = saved_output\n",
        "  state = saved_state\n",
        "  for i in train_inputs:\n",
        "    output, state = lstm_cell(i, output, state)\n",
        "    outputs.append(output)\n",
        "\n",
        "  # State saving across unrollings.\n",
        "  with tf.control_dependencies([saved_output.assign(output),\n",
        "                                saved_state.assign(state)]):\n",
        "    # Classifier.\n",
        "    logits = tf.nn.xw_plus_b(tf.concat_v2(outputs, 0), w, b)\n",
        "    loss = tf.reduce_mean(\n",
        "      tf.nn.softmax_cross_entropy_with_logits(\n",
        "        logits, tf.concat_v2(train_labels, 0)))\n",
        "\n",
        "  # Optimizer.\n",
        "  global_step = tf.Variable(0)\n",
        "  learning_rate = tf.train.exponential_decay(\n",
        "    10.0, global_step, 5000, 0.1, staircase=True)\n",
        "  optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n",
        "  gradients, v = zip(*optimizer.compute_gradients(loss))\n",
        "  gradients, _ = tf.clip_by_global_norm(gradients, 1.25)\n",
        "  optimizer = optimizer.apply_gradients(\n",
        "    zip(gradients, v), global_step=global_step)\n",
        "\n",
        "  # Predictions.\n",
        "  train_prediction = tf.nn.softmax(logits)\n",
        "  \n",
        "  # Sampling and validation eval: batch 1, no unrolling.\n",
        "  sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])\n",
        "  saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))\n",
        "  reset_sample_state = tf.group(\n",
        "    saved_sample_output.assign(tf.zeros([1, num_nodes])),\n",
        "    saved_sample_state.assign(tf.zeros([1, num_nodes])))\n",
        "  sample_output, sample_state = lstm_cell(\n",
        "    sample_input, saved_sample_output, saved_sample_state)\n",
        "  with tf.control_dependencies([saved_sample_output.assign(sample_output),\n",
        "                                saved_sample_state.assign(sample_state)]):\n",
        "    sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))"
      ],
      "outputs": [],
      "execution_count": 0
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RD9zQCZTEaEm",
        "colab_type": "code",
        "colab": {
          "autoexec": {
            "startup": false,
            "wait_interval": 0
          },
          "output_extras": [
            {
              "item_id": 41
            },
            {
              "item_id": 80
            },
            {
              "item_id": 126
            },
            {
              "item_id": 144
            }
          ]
        },
        "cellView": "both",
        "executionInfo": {
          "elapsed": 199909,
          "status": "ok",
          "timestamp": 1445965877333,
          "user": {
            "color": "#1FA15D",
            "displayName": "Vincent Vanhoucke",
            "isAnonymous": false,
            "isMe": true,
            "permissionId": "05076109866853157986",
            "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
            "sessionId": "6f6f07b359200c46",
            "userId": "102167687554210253930"
          },
          "user_tz": 420
        },
        "outputId": "5e868466-2532-4545-ce35-b403cf5d9de6"
      },
      "source": [
        "num_steps = 7001\n",
        "summary_frequency = 100\n",
        "\n",
        "with tf.Session(graph=graph) as session:\n",
        "  tf.global_variables_initializer().run()\n",
        "  print('Initialized')\n",
        "  mean_loss = 0\n",
        "  for step in range(num_steps):\n",
        "    batches = train_batches.next()\n",
        "    feed_dict = dict()\n",
        "    for i in range(num_unrollings + 1):\n",
        "      feed_dict[train_data[i]] = batches[i]\n",
        "    _, l, predictions, lr = session.run(\n",
        "      [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)\n",
        "    mean_loss += l\n",
        "    if step % summary_frequency == 0:\n",
        "      if step > 0:\n",
        "        mean_loss = mean_loss / summary_frequency\n",
        "      # The mean loss is an estimate of the loss over the last few batches.\n",
        "      print(\n",
        "        'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))\n",
        "      mean_loss = 0\n",
        "      labels = np.concatenate(list(batches)[1:])\n",
        "      print('Minibatch perplexity: %.2f' % float(\n",
        "        np.exp(logprob(predictions, labels))))\n",
        "      if step % (summary_frequency * 10) == 0:\n",
        "        # Generate some samples.\n",
        "        print('=' * 80)\n",
        "        for _ in range(5):\n",
        "          feed = sample(random_distribution())\n",
        "          sentence = characters(feed)[0]\n",
        "          reset_sample_state.run()\n",
        "          for _ in range(79):\n",
        "            prediction = sample_prediction.eval({sample_input: feed})\n",
        "            feed = sample(prediction)\n",
        "            sentence += characters(feed)[0]\n",
        "          print(sentence)\n",
        "        print('=' * 80)\n",
        "      # Measure validation set perplexity.\n",
        "      reset_sample_state.run()\n",
        "      valid_logprob = 0\n",
        "      for _ in range(valid_size):\n",
        "        b = valid_batches.next()\n",
        "        predictions = sample_prediction.eval({sample_input: b[0]})\n",
        "        valid_logprob = valid_logprob + logprob(predictions, b[1])\n",
        "      print('Validation set perplexity: %.2f' % float(np.exp(\n",
        "        valid_logprob / valid_size)))"
      ],
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Initialized\n",
            "Average loss at step 0 : 3.29904174805 learning rate: 10.0\n",
            "Minibatch perplexity: 27.09\n",
            "================================================================================\n",
            "srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh\n",
            "lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg  storq u nx o\n",
            "meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet\n",
            "unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw\n",
            "ogmnictpycb whtup   otnilnesxaedtekiosqet  liwqarysmt  arj flioiibtqekycbrrgoysj\n",
            "================================================================================\n",
            "Validation set perplexity: 19.99\n",
            "Average loss at step 100 : 2.59553678274 learning rate: 10.0\n",
            "Minibatch perplexity: 9.57\n",
            "Validation set perplexity: 10.60\n",
            "Average loss at step 200 : 2.24747137785 learning rate: 10.0\n",
            "Minibatch perplexity: 7.68\n",
            "Validation set perplexity: 8.84\n",
            "Average loss at step 300 : 2.09438110709 learning rate: 10.0\n",
            "Minibatch perplexity: 7.41\n",
            "Validation set perplexity: 8.13\n",
            "Average loss at step 400 : 1.99440989017 learning rate: 10.0\n",
            "Minibatch perplexity: 6.46\n",
            "Validation set perplexity: 7.58\n",
            "Average loss at step 500 : 1.9320810616 learning rate: 10.0\n",
            "Minibatch perplexity: 6.30\n",
            "Validation set perplexity: 6.88\n",
            "Average loss at step 600 : 1.90935629249 learning rate: 10.0\n",
            "Minibatch perplexity: 7.21\n",
            "Validation set perplexity: 6.91\n",
            "Average loss at step 700 : 1.85583009005 learning rate: 10.0\n",
            "Minibatch perplexity: 6.13\n",
            "Validation set perplexity: 6.60\n",
            "Average loss at step 800 : 1.82152368546 learning rate: 10.0\n",
            "Minibatch perplexity: 6.01\n",
            "Validation set perplexity: 6.37\n",
            "Average loss at step 900 : 1.83169809818 learning rate: 10.0\n",
            "Minibatch perplexity: 7.20\n",
            "Validation set perplexity: 6.23\n",
            "Average loss at step 1000 : 1.82217029214 learning rate: 10.0\n",
            "Minibatch perplexity: 6.73\n",
            "================================================================================\n",
            "le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co\n",
            "le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes\n",
            "hian andoris ret the ecause bistory l pidect one eight five lack du that the ses\n",
            "aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in \n",
            "mer miter y sught esfectur of the upission vain is werms is vul ugher compted by\n",
            "================================================================================\n",
            "Validation set perplexity: 6.07\n",
            "Average loss at step 1100 : 1.77301145077 learning rate: 10.0\n",
            "Minibatch perplexity: 6.03\n",
            "Validation set perplexity: 5.89\n",
            "Average loss at step 1200 : 1.75306463003 learning rate: 10.0\n",
            "Minibatch perplexity: 6.50\n",
            "Validation set perplexity: 5.61\n",
            "Average loss at step 1300 : 1.72937195778 learning rate: 10.0\n",
            "Minibatch perplexity: 5.00\n",
            "Validation set perplexity: 5.60\n",
            "Average loss at step 1400 : 1.74773373723 learning rate: 10.0\n",
            "Minibatch perplexity: 6.48\n",
            "Validation set perplexity: 5.66\n",
            "Average loss at step 1500 : 1.7368799901 learning rate: 10.0\n",
            "Minibatch perplexity: 5.22\n",
            "Validation set perplexity: 5.44\n",
            "Average loss at step 1600 : 1.74528762937 learning rate: 10.0\n",
            "Minibatch perplexity: 5.85\n",
            "Validation set perplexity: 5.33\n",
            "Average loss at step 1700 : 1.70881183743 learning rate: 10.0\n",
            "Minibatch perplexity: 5.33\n",
            "Validation set perplexity: 5.56\n",
            "Average loss at step 1800 : 1.67776108027 learning rate: 10.0\n",
            "Minibatch perplexity: 5.33\n",
            "Validation set perplexity: 5.29\n",
            "Average loss at step 1900 : 1.64935536742 learning rate: 10.0\n",
            "Minibatch perplexity: 5.29\n",
            "Validation set perplexity: 5.15\n",
            "Average loss at step"
          ],
          "name": "stdout"
        },
        {
          "output_type": "stream",
          "text": [
            " 2000 : 1.69528644681 learning rate: 10.0\n",
            "Minibatch perplexity: 5.13\n",
            "================================================================================\n",
            "vers soqually have one five landwing to docial page kagan lower with ther batern\n",
            "ctor son alfortmandd tethre k skin the known purated to prooust caraying the fit\n",
            "je in beverb is the sournction bainedy wesce tu sture artualle lines digra forme\n",
            "m rousively haldio ourso ond anvary was for the seven solies hild buil  s  to te\n",
            "zall for is it is one nine eight eight one neval to the kime typer oene where he\n",
            "================================================================================\n",
            "Validation set perplexity: 5.25\n",
            "Average loss at step 2100 : 1.68808053017 learning rate: 10.0\n",
            "Minibatch perplexity: 5.17\n",
            "Validation set perplexity: 5.01\n",
            "Average loss at step 2200 : 1.68322490931 learning rate: 10.0\n",
            "Minibatch perplexity: 5.09\n",
            "Validation set perplexity: 5.15\n",
            "Average loss at step 2300 : 1.64465074301 learning rate: 10.0\n",
            "Minibatch perplexity: 5.51\n",
            "Validation set perplexity: 5.00\n",
            "Average loss at step 2400 : 1.66408578038 learning rate: 10.0\n",
            "Minibatch perplexity: 5.86\n",
            "Validation set perplexity: 4.80\n",
            "Average loss at step 2500 : 1.68515402555 learning rate: 10.0\n",
            "Minibatch perplexity: 5.75\n",
            "Validation set perplexity: 4.82\n",
            "Average loss at step 2600 : 1.65405208349 learning rate: 10.0\n",
            "Minibatch perplexity: 5.38\n",
            "Validation set perplexity: 4.85\n",
            "Average loss at step 2700 : 1.65706222177 learning rate: 10.0\n",
            "Minibatch perplexity: 5.46\n",
            "Validation set perplexity: 4.78\n",
            "Average loss at step 2800 : 1.65204829812 learning rate: 10.0\n",
            "Minibatch perplexity: 5.06\n",
            "Validation set perplexity: 4.64\n",
            "Average loss at step 2900 : 1.65107253551 learning rate: 10.0\n",
            "Minibatch perplexity: 5.00\n",
            "Validation set perplexity: 4.61\n",
            "Average loss at step 3000 : 1.6495274055 learning rate: 10.0\n",
            "Minibatch perplexity: 4.53\n",
            "================================================================================\n",
            "ject covered in belo one six six to finsh that all di rozial sime it a the lapse\n",
            "ble which the pullic bocades record r to sile dric two one four nine seven six f\n",
            " originally ame the playa ishaps the stotchational in a p dstambly name which as\n",
            "ore volum to bay riwer foreal in nuily operety can and auscham frooripm however \n",
            "kan traogey was lacous revision the mott coupofiteditey the trando insended frop\n",
            "================================================================================\n",
            "Validation set perplexity: 4.76\n",
            "Average loss at step 3100 : 1.63705502152 learning rate: 10.0\n",
            "Minibatch perplexity: 5.50\n",
            "Validation set perplexity: 4.76\n",
            "Average loss at step 3200 : 1.64740695596 learning rate: 10.0\n",
            "Minibatch perplexity: 4.84\n",
            "Validation set perplexity: 4.67\n",
            "Average loss at step 3300 : 1.64711504817 learning rate: 10.0\n",
            "Minibatch perplexity: 5.39\n",
            "Validation set perplexity: 4.57\n",
            "Average loss at step 3400 : 1.67113256454 learning rate: 10.0\n",
            "Minibatch perplexity: 5.56\n",
            "Validation set perplexity: 4.71\n",
            "Average loss at step 3500 : 1.65637169957 learning rate: 10.0\n",
            "Minibatch perplexity: 5.03\n",
            "Validation set perplexity: 4.80\n",
            "Average loss at step 3600 : 1.66601825476 learning rate: 10.0\n",
            "Minibatch perplexity: 4.63\n",
            "Validation set perplexity: 4.52\n",
            "Average loss at step 3700 : 1.65021387935 learning rate: 10.0\n",
            "Minibatch perplexity: 5.50\n",
            "Validation set perplexity: 4.56\n",
            "Average loss at step 3800 : 1.64481814981 learning rate: 10.0\n",
            "Minibatch perplexity: 4.60\n",
            "Validation set perplexity: 4.54\n",
            "Average loss at step 3900 : 1.642069453 learning rate: 10.0\n",
            "Minibatch perplexity: 4.91\n",
            "Validation set perplexity: 4.54\n",
            "Average loss at step 4000 : 1.65179730773 learning rate: 10.0\n",
            "Minibatch perplexity: 4.77\n",
            "================================================================================\n",
            "k s rasbonish roctes the nignese at heacle was sito of beho anarchys and with ro\n",
            "jusar two sue wletaus of chistical in causations d ow trancic bruthing ha laters\n",
            "de and speacy pulted yoftret worksy zeatlating to eight d had to ie bue seven si"
          ],
          "name": "stdout"
        },
        {
          "output_type": "stream",
          "text": [
            "\n",
            "s fiction of the feelly constive suq flanch earlied curauking bjoventation agent\n",
            "quen s playing it calana our seopity also atbellisionaly comexing the revideve i\n",
            "================================================================================\n",
            "Validation set perplexity: 4.58\n",
            "Average loss at step 4100 : 1.63794238806 learning rate: 10.0\n",
            "Minibatch perplexity: 5.47\n",
            "Validation set perplexity: 4.79\n",
            "Average loss at step 4200 : 1.63822438836 learning rate: 10.0\n",
            "Minibatch perplexity: 5.30\n",
            "Validation set perplexity: 4.54\n",
            "Average loss at step 4300 : 1.61844664574 learning rate: 10.0\n",
            "Minibatch perplexity: 4.69\n",
            "Validation set perplexity: 4.54\n",
            "Average loss at step 4400 : 1.61255454302 learning rate: 10.0\n",
            "Minibatch perplexity: 4.67\n",
            "Validation set perplexity: 4.54\n",
            "Average loss at step 4500 : 1.61543365479 learning rate: 10.0\n",
            "Minibatch perplexity: 4.83\n",
            "Validation set perplexity: 4.69\n",
            "Average loss at step 4600 : 1.61607327104 learning rate: 10.0\n",
            "Minibatch perplexity: 5.18\n",
            "Validation set perplexity: 4.64\n",
            "Average loss at step 4700 : 1.62757282495 learning rate: 10.0\n",
            "Minibatch perplexity: 4.24\n",
            "Validation set perplexity: 4.66\n",
            "Average loss at step 4800 : 1.63222063541 learning rate: 10.0\n",
            "Minibatch perplexity: 5.30\n",
            "Validation set perplexity: 4.53\n",
            "Average loss at step 4900 : 1.63678096652 learning rate: 10.0\n",
            "Minibatch perplexity: 5.43\n",
            "Validation set perplexity: 4.64\n",
            "Average loss at step 5000 : 1.610340662 learning rate: 1.0\n",
            "Minibatch perplexity: 5.10\n",
            "================================================================================\n",
            "in b one onarbs revieds the kimiluge that fondhtic fnoto cre one nine zero zero \n",
            " of is it of marking panzia t had wap ironicaghni relly deah the omber b h menba\n",
            "ong messified it his the likdings ara subpore the a fames distaled self this int\n",
            "y advante authors the end languarle meit common tacing bevolitione and eight one\n",
            "zes that materly difild inllaring the fusts not panition assertian causecist bas\n",
            "================================================================================\n",
            "Validation set perplexity: 4.69\n",
            "Average loss at step 5100 : 1.60593637228 learning rate: 1.0\n",
            "Minibatch perplexity: 4.69\n",
            "Validation set perplexity: 4.47\n",
            "Average loss at step 5200 : 1.58993269444 learning rate: 1.0\n",
            "Minibatch perplexity: 4.65\n",
            "Validation set perplexity: 4.39\n",
            "Average loss at step 5300 : 1.57930587292 learning rate: 1.0\n",
            "Minibatch perplexity: 5.11\n",
            "Validation set perplexity: 4.39\n",
            "Average loss at step 5400 : 1.58022856832 learning rate: 1.0\n",
            "Minibatch perplexity: 5.19\n",
            "Validation set perplexity: 4.37\n",
            "Average loss at step 5500 : 1.56654450059 learning rate: 1.0\n",
            "Minibatch perplexity: 4.69\n",
            "Validation set perplexity: 4.33\n",
            "Average loss at step 5600 : 1.58013380885 learning rate: 1.0\n",
            "Minibatch perplexity: 5.13\n",
            "Validation set perplexity: 4.35\n",
            "Average loss at step 5700 : 1.56974959254 learning rate: 1.0\n",
            "Minibatch perplexity: 5.00\n",
            "Validation set perplexity: 4.34\n",
            "Average loss at step 5800 : 1.5839582932 learning rate: 1.0\n",
            "Minibatch perplexity: 4.88\n",
            "Validation set perplexity: 4.31\n",
            "Average loss at step 5900 : 1.57129439116 learning rate: 1.0\n",
            "Minibatch perplexity: 4.66\n",
            "Validation set perplexity: 4.32\n",
            "Average loss at step 6000 : 1.55144061089 learning rate: 1.0\n",
            "Minibatch perplexity: 4.55\n",
            "================================================================================\n",
            "utic clositical poopy stribe addi nixe one nine one zero zero eight zero b ha ex\n",
            "zerns b one internequiption of the secordy way anti proble akoping have fictiona\n",
            "phare united from has poporarly cities book ins sweden emperor a sass in origina\n",
            "quulk destrebinist and zeilazar and on low and by in science over country weilti\n",
            "x are holivia work missincis ons in the gages to starsle histon one icelanctrotu\n",
            "================================================================================\n",
            "Validation set perplexity: 4.30\n",
            "Average loss at step 6100 : 1.56450940847 learning rate: 1.0\n",
            "Minibatch perplexity: 4.77\n",
            "Validation set perplexity: 4.27"
          ],
          "name": "stdout"
        },
        {
          "output_type": "stream",
          "text": [
            "\n",
            "Average loss at step 6200 : 1.53433164835 learning rate: 1.0\n",
            "Minibatch perplexity: 4.77\n",
            "Validation set perplexity: 4.27\n",
            "Average loss at step 6300 : 1.54773445129 learning rate: 1.0\n",
            "Minibatch perplexity: 4.76\n",
            "Validation set perplexity: 4.25\n",
            "Average loss at step 6400 : 1.54021131516 learning rate: 1.0\n",
            "Minibatch perplexity: 4.56\n",
            "Validation set perplexity: 4.24\n",
            "Average loss at step 6500 : 1.56153374553 learning rate: 1.0\n",
            "Minibatch perplexity: 5.43\n",
            "Validation set perplexity: 4.27\n",
            "Average loss at step 6600 : 1.59556478739 learning rate: 1.0\n",
            "Minibatch perplexity: 4.92\n",
            "Validation set perplexity: 4.28\n",
            "Average loss at step 6700 : 1.58076951623 learning rate: 1.0\n",
            "Minibatch perplexity: 4.77\n",
            "Validation set perplexity: 4.30\n",
            "Average loss at step 6800 : 1.6070714438 learning rate: 1.0\n",
            "Minibatch perplexity: 4.98\n",
            "Validation set perplexity: 4.28\n",
            "Average loss at step 6900 : 1.58413293839 learning rate: 1.0\n",
            "Minibatch perplexity: 4.61\n",
            "Validation set perplexity: 4.29\n",
            "Average loss at step 7000 : 1.57905534983 learning rate: 1.0\n",
            "Minibatch perplexity: 5.08\n",
            "================================================================================\n",
            "jague are officiencinels ored by film voon higherise haik one nine on the iffirc\n",
            "oshe provision that manned treatists on smalle bodariturmeristing the girto in s\n",
            "kis would softwenn mustapultmine truativersakys bersyim by s of confound esc bub\n",
            "ry of the using one four six blain ira mannom marencies g with fextificallise re\n",
            " one son vit even an conderouss to person romer i a lebapter at obiding are iuse\n",
            "================================================================================\n",
            "Validation set perplexity: 4.25\n"
          ],
          "name": "stdout"
        }
      ],
      "execution_count": 0
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pl4vtmFfa5nn",
        "colab_type": "text"
      },
      "source": [
        "---\n",
        "Problem 1\n",
        "---------\n",
        "\n",
        "You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.\n",
        "\n",
        "---"
      ]
    },
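    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of one possible fusion, not the official solution: the four per-gate input matrices are stacked into a single variable, and likewise for the output matrices and the biases (the names `sx`, `sm` and `sb` are illustrative). It assumes the TF 1.x `tf.split(value, num_or_size_splits, axis)` signature."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sketch only: fused parameters for the input, forget, update and output gates.\n",
        "with graph.as_default():\n",
        "  sx = tf.Variable(tf.truncated_normal([vocabulary_size, 4 * num_nodes], -0.1, 0.1))\n",
        "  sm = tf.Variable(tf.truncated_normal([num_nodes, 4 * num_nodes], -0.1, 0.1))\n",
        "  sb = tf.Variable(tf.zeros([1, 4 * num_nodes]))\n",
        "\n",
        "  def fused_lstm_cell(i, o, state):\n",
        "    \"\"\"Same computation as lstm_cell above, with one matmul per operand.\"\"\"\n",
        "    y = tf.matmul(i, sx) + tf.matmul(o, sm) + sb\n",
        "    # Split the combined pre-activations back into the four gates.\n",
        "    y_in, y_forget, update, y_out = tf.split(y, num_or_size_splits=4, axis=1)\n",
        "    input_gate = tf.sigmoid(y_in)\n",
        "    forget_gate = tf.sigmoid(y_forget)\n",
        "    output_gate = tf.sigmoid(y_out)\n",
        "    state = forget_gate * state + input_gate * tf.tanh(update)\n",
        "    return output_gate * tf.tanh(state), state"
      ],
      "outputs": [],
      "execution_count": 0
    },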
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4eErTCTybtph",
        "colab_type": "text"
      },
      "source": [
        "---\n",
        "Problem 2\n",
        "---------\n",
        "\n",
        "We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.\n",
        "\n",
        "a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.\n",
        "\n",
        "b- Write a bigram-based LSTM, modeled on the character LSTM above.\n",
        "\n",
        "c- Introduce Dropout. For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).\n",
        "\n",
        "---"
      ]
    },
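    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of part a, not the official solution: map each bigram to a single integer id and look up a trainable embedding for it. The names (`bigram2id`, `embedding_size`, `graph2`) and the hyperparameter values are illustrative assumptions."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sketch only: bigram ids plus an embedding lookup feeding the LSTM cell.\n",
        "bigram_vocabulary_size = vocabulary_size * vocabulary_size  # 27 * 27 = 729\n",
        "\n",
        "def bigram2id(pair):\n",
        "  \"\"\"Map a pair of characters to a single bigram id in [0, 729).\"\"\"\n",
        "  return char2id(pair[0]) * vocabulary_size + char2id(pair[1])\n",
        "\n",
        "embedding_size = 32  # assumed hyperparameter\n",
        "graph2 = tf.Graph()\n",
        "with graph2.as_default():\n",
        "  embeddings = tf.Variable(\n",
        "    tf.random_uniform([bigram_vocabulary_size, embedding_size], -1.0, 1.0))\n",
        "  # Inputs become integer bigram ids instead of 1-hot vectors.\n",
        "  bigram_ids = tf.placeholder(tf.int32, shape=[batch_size])\n",
        "  embed = tf.nn.embedding_lookup(embeddings, bigram_ids)\n",
        "  # `embed` ([batch_size, embedding_size]) would replace `i` in the cell's\n",
        "  # input matmuls; for part c, tf.nn.dropout(embed, 0.9) can be applied to\n",
        "  # this non-recurrent connection during training (Zaremba et al.)."
      ],
      "outputs": [],
      "execution_count": 0
    },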
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y5tapX3kpcqZ",
        "colab_type": "text"
      },
      "source": [
        "---\n",
        "Problem 3\n",
        "---------\n",
        "\n",
        "(difficult!)\n",
        "\n",
        "Write a sequence-to-sequence LSTM which mirrors all the words in a sentence. For example, if your input is:\n",
        "\n",
        "    the quick brown fox\n",
        "    \n",
        "the model should attempt to output:\n",
        "\n",
        "    eht kciuq nworb xof\n",
        "    \n",
        "Refer to the lecture on how to put together a sequence-to-sequence model, as well as [this article](http://arxiv.org/abs/1409.3215) for best practices.\n",
        "\n",
        "---"
      ]
    }
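,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of the target transformation only (the seq2seq model itself is the exercise); `mirror_words` is an illustrative name."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sketch only: build the target string by reversing each space-delimited word.\n",
        "def mirror_words(s):\n",
        "  \"\"\"Reverse each word in s while keeping the word order.\"\"\"\n",
        "  return ' '.join(w[::-1] for w in s.split(' '))\n",
        "\n",
        "print(mirror_words('the quick brown fox'))  # eht kciuq nworb xof"
      ],
      "outputs": [],
      "execution_count": 0
    }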
  ]
}