author	Vijay Vasudevan <vrv@google.com>	2016-03-22 22:01:30 -0800
committer	TensorFlower Gardener <gardener@tensorflow.org>	2016-03-22 23:09:46 -0700
commit	37606a4c63364c56a0834d281023b62d2bda6cd8 (patch)
tree	b6e625bc001e4e9bf9432cc963211b5667b2f455 /tensorflow/examples/udacity
parent	18cbeda07a526acbf899ac2363541b8f0b6df29a (diff)
Merge changes from github, some fixes to adhere somewhat
to our requirements for skflow. Change: 117901053
Diffstat (limited to 'tensorflow/examples/udacity')
-rw-r--r--	tensorflow/examples/udacity/5_word2vec.ipynb	12
-rw-r--r--	tensorflow/examples/udacity/README.md	17
2 files changed, 23 insertions, 6 deletions
diff --git a/tensorflow/examples/udacity/5_word2vec.ipynb b/tensorflow/examples/udacity/5_word2vec.ipynb
index c266488bde..94ba37ee13 100644
--- a/tensorflow/examples/udacity/5_word2vec.ipynb
+++ b/tensorflow/examples/udacity/5_word2vec.ipynb
@@ -24,7 +24,7 @@
"Assignment 5\n",
"------------\n",
"\n",
- "The goal of this assignment is to train a skip-gram model over [Text8](http://mattmahoney.net/dc/textdata) data."
+ "The goal of this assignment is to train a Word2Vec skip-gram model over [Text8](http://mattmahoney.net/dc/textdata) data."
]
},
{
@@ -180,10 +180,10 @@
},
"source": [
"def read_data(filename):\n",
- " f = zipfile.ZipFile(filename)\n",
- " for name in f.namelist():\n",
- " return tf.compat.as_str(f.read(name)).split()\n",
- " f.close()\n",
+ " \"\"\"Extract the first file enclosed in a zip file as a list of words\"\"\"\n",
+ " with zipfile.ZipFile(filename) as f:\n",
+ " data = tf.compat.as_str(f.read(f.namelist()[0])).split()\n",
+ " return data\n",
" \n",
"words = read_data(filename)\n",
"print('Data size %d' % len(words))"
@@ -881,7 +881,7 @@
"Problem\n",
"-------\n",
"\n",
- "An alternative to Word2Vec is called [CBOW](http://arxiv.org/abs/1301.3781) (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.\n",
+ "An alternative to skip-gram is another Word2Vec model called [CBOW](http://arxiv.org/abs/1301.3781) (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.\n",
"\n",
"---"
]
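The CBOW problem above boils down to one change of input representation: instead of feeding the embedding of a single word, you feed the sum of the embeddings of the surrounding context words and train the model to predict the center word. A minimal numpy sketch of that input construction (an illustration of the idea only, not a solution to the assignment; all names and sizes here are made up):

    import numpy as np

    vocabulary_size, embedding_size = 10, 4
    embeddings = np.random.randn(vocabulary_size, embedding_size)  # one row per word id

    context_ids = [3, 7, 1, 8]  # ids of the words surrounding the target word
    cbow_input = embeddings[context_ids].sum(axis=0)  # shape: (embedding_size,)
    # cbow_input now plays the role that a single word's embedding plays in skip-gram.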
diff --git a/tensorflow/examples/udacity/README.md b/tensorflow/examples/udacity/README.md
index af26e2ee38..9200bcc79b 100644
--- a/tensorflow/examples/udacity/README.md
+++ b/tensorflow/examples/udacity/README.md
@@ -34,6 +34,23 @@ has two good suggestions; we recommend using 8G.
In addition, you may need to pass `--memory=8g` as an extra argument to
`docker run`.
+* **I want to create a new virtual machine instead of the default one.**
+
+`docker-machine` is a tool to provision and manage docker hosts; it supports multiple platforms (e.g. AWS, GCE, Azure, VirtualBox, ...). To create a new virtual machine locally with a built-in docker engine, you can use
+
+    docker-machine create -d virtualbox --virtualbox-memory 8192 tensorflow
+
+`-d` specifies the driver used to provision the machine; the supported drivers are listed [here](https://docs.docker.com/machine/drivers/). Here we use VirtualBox to create a new virtual machine locally. `tensorflow` is the name of the virtual machine; feel free to use whatever name you like. You can use
+
+    docker-machine ip tensorflow
+
+to get the IP address of the new virtual machine. To switch from the default virtual machine to the new one (here, `tensorflow`), type
+
+    eval $(docker-machine env tensorflow)
+
+Note that `docker-machine env tensorflow` outputs environment variables such as `DOCKER_HOST`; after the `eval`, your docker client is connected to the docker host in the virtual machine `tensorflow`.
+
+
Notes for anyone needing to build their own containers (mostly instructors)
===========================================================================