author A. Unique TensorFlower <gardener@tensorflow.org> 2018-07-20 10:27:51 -0700
committer TensorFlower Gardener <gardener@tensorflow.org> 2018-07-20 10:31:46 -0700
commit 3b6bceb87f91fc2d0e0c7d31d3583c39f2d3ca8d (patch)
tree bde5bd088f4b1b8d125beb41a9c5d09411c069f9 /tensorflow
parent 4921064dd535d84aa031f8116e583b151dd46e97 (diff)
fixing some nits
PiperOrigin-RevId: 205416917
Diffstat (limited to 'tensorflow')
-rw-r--r--  tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | 20
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb b/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
index 232f9a8ef0..54cc4dc5da 100644
--- a/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
+++ b/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
@@ -27,9 +27,9 @@
"id": "ITZuApL56Mny"
},
"source": [
- "This notebook demonstrates how to generate images of handwritten digits using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). To do this, we use Deep Convolutional Generative Adverserial Networks ([DCGAN](https://arxiv.org/pdf/1511.06434.pdf)).\n",
+ "This notebook demonstrates how to generate images of handwritten digits using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). To do so, we use Deep Convolutional Generative Adverserial Networks ([DCGAN](https://arxiv.org/pdf/1511.06434.pdf)).\n",
"\n",
- "On a colab GPU(Tesla K80), the model takes around 40 seconds per epoch to train.\n",
+ "This model takes about 40 seconds per epoch to train on a single Tesla K80 on Colab, as of July 2018.\n",
"\n",
"Below is the output generated after training the generator and discriminator models for 150 epochs.\n",
"\n",
@@ -80,6 +80,8 @@
},
"outputs": [],
"source": [
+ "from __future__ import absolute_import, division, print_function\n",
+ "\n",
"# Import TensorFlow \u003e= 1.9 and enable eager execution\n",
"import tensorflow as tf\n",
"tf.enable_eager_execution()\n",
@@ -202,12 +204,12 @@
"\n",
"* **Generator** \n",
" * It is responsible for **creating the convincing images good enough to fool the discriminator**.\n",
- " * It consists of Conv2DTranspose(Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size(mnist image size) which is (28, 28, 1). \n",
+ " * It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1). \n",
" * We use **leaky relu** activation except for the **last layer** which uses **tanh** activation.\n",
" \n",
"* **Discriminator**\n",
" * **The discriminator is responsible for classifying the fake images from the real images.**\n",
- " * In other words, the discriminator is given generated images(from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake(generated) and real(MNIST images).\n",
+ " * In other words, the discriminator is given generated images (from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake (generated) and real (MNIST images).\n",
" * **Basically the generator should be good enough to fool the discriminator that the generated images are real**."
]
},
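The bullets in the hunk above describe the two models only in prose. Here is a minimal sketch of both in tf.keras under TF 1.x eager execution; the layer widths, kernel sizes, and use of batch normalization are illustrative assumptions, not the notebook's exact values:

```python
import tensorflow as tf

def make_generator():
    # Project the 100-dim noise vector to a 7x7 feature map, then
    # upsample twice (7 -> 14 -> 28) with strided Conv2DTranspose layers.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(7 * 7 * 64, use_bias=False, input_shape=(100,)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Reshape((7, 7, 64)),
        tf.keras.layers.Conv2DTranspose(32, (5, 5), strides=(2, 2),
                                        padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(),
        # The last layer uses tanh, per the text, so pixels land in [-1, 1].
        tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2),
                                        padding='same', activation='tanh'),
    ])

def make_discriminator():
    # Mirror of the generator: strided convolutions downsample
    # 28 -> 14 -> 7, ending in a single real/fake logit.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (5, 5), strides=(2, 2), padding='same',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),  # raw logit; the loss applies the sigmoid
    ])
```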
@@ -323,8 +325,8 @@
"\n",
"* **Discriminator loss**\n",
" * The discriminator loss function takes 2 inputs; **real images, generated images**\n",
- " * real_loss is a sigmoid cross entropy loss of the **real images** and an **array of ones(since these are the real images)**\n",
- " * generated_loss is a sigmoid cross entropy loss of the **generated images** and an **array of zeros(since these are the fake images)**\n",
+ " * real_loss is a sigmoid cross entropy loss of the **real images** and an **array of ones (since these are the real images)**\n",
+ " * generated_loss is a sigmoid cross entropy loss of the **generated images** and an **array of zeros (since these are the fake images)**\n",
" * Then the total_loss is the sum of real_loss and the generated_loss\n",
" \n",
"* **Generator loss**\n",
@@ -411,9 +413,9 @@
"\n",
"* We start by iterating over the dataset\n",
"* The generator is given **noise as an input** which when passed through the generator model will output a image looking like a handwritten digit\n",
- "* The discriminator is given the **real MNIST images as well as the generated images(from the generator)**.\n",
+ "* The discriminator is given the **real MNIST images as well as the generated images (from the generator)**.\n",
"* Next, we calculate the generator and the discriminator loss.\n",
- "* Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables(inputs) and apply those to the optimizer.\n",
+ "* Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer.\n",
"\n",
"## Generate Images\n",
"\n",
@@ -442,7 +444,7 @@
"noise_dim = 100\n",
"num_examples_to_generate = 100\n",
"\n",
- "# keeping the random vector constant for generation(prediction) so\n",
+ "# keeping the random vector constant for generation (prediction) so\n",
"# it will be easier to see the improvement of the gan.\n",
"random_vector_for_generation = tf.random_normal([num_examples_to_generate,\n",
" noise_dim])"