path: root/tensorflow/contrib/eager
author    A. Unique TensorFlower <gardener@tensorflow.org>    2018-07-24 15:23:12 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>    2018-07-24 15:26:30 -0700
commit    74a75900faf88d7ce4e05f4bebd2b872abdf16a9 (patch)
tree      0e91a55cc8e0d066c4eb533c444d229efe1b3e7e /tensorflow/contrib/eager
parent    eabda97225faf53ec528621299f5b6c57a7847b0 (diff)
Adding defun
PiperOrigin-RevId: 205901720
Diffstat (limited to 'tensorflow/contrib/eager')
-rw-r--r--  tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | 24
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb b/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
index 54cc4dc5da..44ff43a111 100644
--- a/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
+++ b/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
@@ -29,7 +29,7 @@
"source": [
"This notebook demonstrates how to generate images of handwritten digits using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). To do so, we use Deep Convolutional Generative Adverserial Networks ([DCGAN](https://arxiv.org/pdf/1511.06434.pdf)).\n",
"\n",
- "This model takes about 40 seconds per epoch to train on a single Tesla K80 on Colab, as of July 2018.\n",
+ "This model takes about ~30 seconds per epoch (using tf.contrib.eager.defun to create graph functions) to train on a single Tesla K80 on Colab, as of July 2018.\n",
"\n",
"Below is the output generated after training the generator and discriminator models for 150 epochs.\n",
"\n",
@@ -203,7 +203,7 @@
"## Write the generator and discriminator models\n",
"\n",
"* **Generator** \n",
- " * It is responsible for **creating the convincing images good enough to fool the discriminator**.\n",
+ " * It is responsible for **creating convincing images that are good enough to fool the discriminator**.\n",
" * It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1). \n",
" * We use **leaky relu** activation except for the **last layer** which uses **tanh** activation.\n",
" \n",
@@ -315,6 +315,26 @@
]
},
{
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "k1HpMSLImuRi"
+ },
+ "outputs": [],
+ "source": [
+ "# Defun gives 10 secs/epoch performance boost\n",
+ "generator.call = tf.contrib.eager.defun(generator.call)\n",
+ "discriminator.call = tf.contrib.eager.defun(discriminator.call)"
+ ]
+ },
+ {
"cell_type": "markdown",
"metadata": {
"colab_type": "text",