author     Jan Prach <jendap@gmail.com>           2016-04-13 08:26:19 -0700
committer  Martin Wicke <martin.wicke@gmail.com>  2016-04-13 08:26:19 -0700
commit     31ea3dbf57d67b32ca1708e7d8cd5fb43e7810b1 (patch)
tree       bfd23c39d57a13785c2a125b178cda3d5cccfeca /tensorflow
parent     bc5e961e1988fdefff8e8aa062f4ab3066c3a9e5 (diff)
switch docker links from b.gcr.io to gcr.io (#1911)
Diffstat (limited to 'tensorflow')
-rw-r--r--  tensorflow/examples/udacity/Dockerfile    |  2
-rw-r--r--  tensorflow/g3doc/get_started/os_setup.md  | 12
-rw-r--r--  tensorflow/tools/docker/README.md         | 10
3 files changed, 12 insertions, 12 deletions
diff --git a/tensorflow/examples/udacity/Dockerfile b/tensorflow/examples/udacity/Dockerfile
index 9545c376b7..4af441018b 100644
--- a/tensorflow/examples/udacity/Dockerfile
+++ b/tensorflow/examples/udacity/Dockerfile
@@ -1,4 +1,4 @@
-FROM b.gcr.io/tensorflow/tensorflow:latest
+FROM gcr.io/tensorflow/tensorflow:latest
MAINTAINER Vincent Vanhoucke <vanhoucke@google.com>
RUN pip install scikit-learn
RUN rm -rf /notebooks/*
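With this change the Udacity examples image builds on the relocated base image. As a rough sketch of how one might rebuild and run it locally after the change; the image tag `tensorflow-udacity` and the port mapping are assumptions for illustration, not part of this commit:

```bash
# Build the Udacity examples image from the updated Dockerfile
# (the tag name "tensorflow-udacity" is assumed here)
docker build -t tensorflow-udacity tensorflow/examples/udacity/

# Run it interactively, exposing the notebook port (mapping assumed)
docker run -it -p 8888:8888 tensorflow-udacity
```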
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index 3cd51450a0..18da3bbfe5 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -184,11 +184,11 @@ packages on your machine.
We provide 4 Docker images:
-* `b.gcr.io/tensorflow/tensorflow`: TensorFlow CPU binary image.
-* `b.gcr.io/tensorflow/tensorflow:latest-devel`: CPU Binary image plus source
+* `gcr.io/tensorflow/tensorflow`: TensorFlow CPU binary image.
+* `gcr.io/tensorflow/tensorflow:latest-devel`: CPU Binary image plus source
code.
-* `b.gcr.io/tensorflow/tensorflow:latest-gpu`: TensorFlow GPU binary image.
-* `b.gcr.io/tensorflow/tensorflow:latest-devel-gpu`: GPU Binary image plus source
+* `gcr.io/tensorflow/tensorflow:latest-gpu`: TensorFlow GPU binary image.
+* `gcr.io/tensorflow/tensorflow:latest-devel-gpu`: GPU Binary image plus source
code.
We also have tags with `latest` replaced by a released version (e.g., `0.8.0rc0-gpu`).
@@ -209,7 +209,7 @@ After Docker is installed, launch a Docker container with the TensorFlow binary
image as follows.
```bash
-$ docker run -it b.gcr.io/tensorflow/tensorflow
+$ docker run -it gcr.io/tensorflow/tensorflow
```
If you're using a container with GPU support, some additional flags must be
@@ -219,7 +219,7 @@ include a
in the repo with these flags, so the command-line would look like
```bash
-$ path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh b.gcr.io/tensorflow/tensorflow:gpu
+$ path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh gcr.io/tensorflow/tensorflow:gpu
```
You can now [test your installation](#test-the-tensorflow-installation) within the Docker container.
diff --git a/tensorflow/tools/docker/README.md b/tensorflow/tools/docker/README.md
index 2d7a31186a..fba6c3144a 100644
--- a/tensorflow/tools/docker/README.md
+++ b/tensorflow/tools/docker/README.md
@@ -16,14 +16,14 @@ quick links here:
We currently maintain three Docker container images:
-* `b.gcr.io/tensorflow/tensorflow`, which is a minimal VM with TensorFlow and
+* `gcr.io/tensorflow/tensorflow`, which is a minimal VM with TensorFlow and
all dependencies.
-* `b.gcr.io/tensorflow/tensorflow-full`, which contains a full source
+* `gcr.io/tensorflow/tensorflow-full`, which contains a full source
distribution and all required libraries to build and run TensorFlow from
source.
-* `b.gcr.io/tensorflow/tensorflow-full-gpu`, which is the same as the previous
+* `gcr.io/tensorflow/tensorflow-full-gpu`, which is the same as the previous
container, but built with GPU support.
## Running the container
@@ -31,7 +31,7 @@ We currently maintain three Docker container images:
Each of the containers is published to a Docker registry; for the non-GPU
containers, running is as simple as
- $ docker run -it -p 8888:8888 b.gcr.io/tensorflow/tensorflow
+ $ docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow
For the container with GPU support, we require the user to make the appropriate
NVidia libraries available on their system, as well as providing mappings so
@@ -40,7 +40,7 @@ accomplished via
$ export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda.* | xargs -I{} echo '-v {}:{}')
$ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
- $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-devel-gpu
+ $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES gcr.io/tensorflow/tensorflow-devel-gpu
Alternately, you can use the `docker_run_gpu.sh` script in this directory.
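For reference, the commands in the patched docs reduce to the following after this change; a minimal sketch, assuming the default `latest` CPU image and a repo checkout at `path/to/repo`:

```bash
# CPU-only image pulled from the new gcr.io registry
docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow

# GPU image launched via the helper script shipped in the repo
path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh gcr.io/tensorflow/tensorflow:gpu
```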