Diffstat (limited to 'tensorflow/g3doc/get_started/os_setup.md')
-rw-r--r-- | tensorflow/g3doc/get_started/os_setup.md | 154
1 file changed, 56 insertions(+), 98 deletions(-)
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index e1cece4faa..92f77b27b0 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -44,7 +44,7 @@
 management system used to install and manage software packages written in
 Python. The packages that will be installed or upgraded during the pip install
 are listed in the
-[REQUIRED_PACKAGES section of setup.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py)
+[REQUIRED_PACKAGES section of setup.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py).
 
 Install pip (or pip3 for python3) if it is not already installed:
@@ -231,7 +231,7 @@
 packages needed by TensorFlow.
 
 Install Anaconda:
 
-Follow the instructions on the [Anaconda download site](https://www.continuum.io/downloads)
+Follow the instructions on the [Anaconda download site](https://www.continuum.io/downloads).
 
 Create a conda environment called `tensorflow`:
@@ -377,6 +377,8 @@
 The option `-p 8888:8888` is used to publish the Docker container's internal port.
 The format of the port mapping is `hostPort:containerPort`. You can specify
 any valid port number for the host port but have to use `8888` for the
 container port portion.
 
+If you're using a container with GPU support, some additional flags must be
+passed to expose the GPU device to the container.
+
 For NVidia GPU support install latest NVidia drivers and
 [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).
 Run with
@@ -385,7 +387,15 @@
 ```bash
 $ nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
 ```
 
-For more details see (TensorFlow docker readme)[https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker].
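The `hostPort:containerPort` mapping format described in the hunk above can be illustrated with a small shell sketch. This is not from the docs; the port values and variable names are examples only.

```bash
# Illustration of the hostPort:containerPort format; values are examples only.
mapping="8888:8888"
host_port="${mapping%%:*}"       # text before the first colon: the host port
container_port="${mapping##*:}"  # text after the last colon: the container port
echo "host=${host_port} container=${container_port}"
```

Only the host side of the mapping is free to vary; the container side must stay `8888` for the notebook server inside the image.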
+If you have a problem running `nvidia-docker` with the default config, we
+include a
+[script](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docker/docker_run_gpu.sh)
+in the repo with these flags, so the command line would look like
+
+```bash
+$ path/to/repo/tensorflow/tools/docker/docker_run_gpu.sh -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
+```
+
+For more details see the
+[TensorFlow Docker readme](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker).
 
 You can now [test your installation](#test-the-tensorflow-installation) within
 the Docker container.
@@ -479,7 +489,7 @@
 of tensorflow. If you want to install a specific branch (such as a release
 branch), pass `-b <branchname>` to the `git clone` command and
 `--recurse-submodules` for r0.8 and earlier to fetch the protobuf library that
 TensorFlow depends on.
 
-### Installation for Linux
+### Prepare environment for Linux
 
 #### Install Bazel
@@ -508,19 +518,6 @@
 $ sudo apt-get install python-numpy swig python-dev python-wheel
 $ sudo apt-get install python3-numpy swig python3-dev python3-wheel
 ```
 
-#### Configure the installation
-
-Run the `configure` script at the root of the tree. The configure script
-asks you for the path to your python interpreter and allows (optional)
-configuration of the CUDA libraries (see [below](#configure-tensorflows-canonical-view-of-cuda-libraries)).
-
-This step is used to locate the python and numpy header files.
-
-```bash
-$ ./configure
-Please specify the location of python.
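The `docker_run_gpu.sh` script mentioned in the hunk above wraps the extra device flags for you. As a hypothetical sketch of how such a wrapper might assemble those flags (the real script's contents may differ, and the device paths below are stand-ins):

```bash
# Hypothetical sketch: build `--device` flags that expose NVIDIA device
# nodes to a container. Not the actual contents of docker_run_gpu.sh.
device_flags() {
  local flags=""
  for dev in "$@"; do
    flags="${flags} --device ${dev}:${dev}"
  done
  echo "${flags# }"   # drop the leading space
}

# Example with stand-in device paths:
device_flags /dev/nvidia0 /dev/nvidiactl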
-[Default is /usr/bin/python]:
-```
-
 #### Optional: Install CUDA (GPUs on Linux)
 
 In order to build or run TensorFlow with GPU support, both NVIDIA's Cuda
 Toolkit (>= 7.0) and
@@ -564,83 +561,7 @@
 sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
 sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
 ```
 
-##### Configure TensorFlow's canonical view of Cuda libraries
-
-When running the `configure` script from the root of your source tree, select
-the option `Y` when asked to build TensorFlow with GPU support. If you have
-several versions of Cuda or cuDNN installed, you should definitely select
-one explicitly instead of relying on the system default. You should see
-prompts like the following:
-
-```bash
-$ ./configure
-Please specify the location of python. [Default is /usr/bin/python]:
-Do you wish to build TensorFlow with GPU support? [y/N] y
-GPU support will be enabled for TensorFlow
-
-Please specify which gcc nvcc should use as the host compiler. [Default is
-/usr/bin/gcc]: /usr/bin/gcc-4.9
-
-Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave
-empty to use system default]: 7.5
-
-Please specify the location where CUDA 7.5 toolkit is installed. Refer to
-README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda
-
-Please specify the cuDNN version you want to use. [Leave empty to use system
-default]: 4.0.4
-
-Please specify the location where the cuDNN 4.0.4 library is installed. Refer to
-README.md for more details. [default is: /usr/local/cuda]: /usr/local/cudnn-r4-rc/
-
-Please specify a list of comma-separated Cuda compute capabilities you want to
-build with. You can find the compute capability of your device at:
-https://developer.nvidia.com/cuda-gpus.
-Please note that each additional compute capability significantly increases your
-build time and binary size.
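The "Cuda Toolkit (>= 7.0)" requirement above is a dotted-version comparison. As a hypothetical helper (not part of TensorFlow's build scripts) showing one way to check it in shell:

```bash
# Hypothetical helper illustrating a ">= 7.0" toolkit version check.
# Relies on GNU `sort -V` for version ordering.
version_ge() {
  # True when $1 >= $2 under version ordering.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge 7.5 7.0 && echo "toolkit ok"
version_ge 6.5 7.0 || echo "toolkit too old"
```

In practice you would feed it the version reported by `nvcc --version` rather than literals.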
-[Default is: "3.5,5.2"]: 3.5
-
-Setting up Cuda include
-Setting up Cuda lib64
-Setting up Cuda bin
-Setting up Cuda nvvm
-Setting up CUPTI include
-Setting up CUPTI lib64
-Configuration finished
-```
-
-This creates a canonical set of symbolic links to the Cuda libraries on your
-system. Every time you change the Cuda library paths you need to run this step
-again before you invoke the bazel build command. For the cuDNN libraries, use
-'6.5' for R2, '7.0' for R3, and '4.0.4' for R4-RC.
-
-
-##### Build your target with GPU support
-From the root of your source tree, run:
-
-```bash
-$ bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
-
-$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
-# Lots of output. This tutorial iteratively calculates the major eigenvalue of
-# a 2x2 matrix, on GPU. The last few lines look like this.
-000009/000005 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
-000006/000001 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
-000009/000009 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
-```
-
-Note that "--config=cuda" is needed to enable the GPU support.
-
-##### Known issues
-
-* Although it is possible to build both Cuda and non-Cuda configs under the same
-source tree, we recommend to run `bazel clean` when switching between these two
-configs in the same source tree.
-
-* You have to run configure before running bazel build. Otherwise, the build
-will fail with a clear error message. In the future, we might consider making
-this more convenient by including the configure step in our build process.
-
-### Installation for Mac OS X
+### Prepare environment for Mac OS X
 
 We recommend using [homebrew](http://brew.sh) to install the bazel and SWIG
 dependencies, and installing python dependencies using easy_install or pip.
@@ -713,15 +634,20 @@
 $ sudo mv lib/libcudnn* /Developer/NVIDIA/CUDA-7.5/lib
 $ sudo ln -s /Developer/NVIDIA/CUDA-7.5/lib/libcudnn* /usr/local/cuda/lib/
 ```
 
-#### Configure the installation
+### Configure the installation
 
 Run the `configure` script at the root of the tree. The configure script
-asks you for the path to your python interpreter.
+asks you for the path to your python interpreter and allows (optional)
+configuration of the CUDA libraries.
 
 This step is used to locate the python and numpy header files as well as
-enabling GPU support if you have a CUDA enabled GPU and Toolkit installed. For
-example:
+enabling GPU support if you have a CUDA enabled GPU and Toolkit installed.
+Select the option `Y` when asked to build TensorFlow with GPU support.
+
+If you have several versions of Cuda or cuDNN installed, you should definitely
+select one explicitly instead of relying on the system default.
+For example:
 
 ```bash
 $ ./configure
@@ -748,6 +674,38 @@
 Setting up CUPTI lib64
 Configuration finished
 ```
 
+This creates a canonical set of symbolic links to the Cuda libraries on your
+system. Every time you change the Cuda library paths you need to run this step
+again before you invoke the bazel build command. For the cuDNN libraries, use
+'6.5' for R2, '7.0' for R3, and '4.0.4' for R4-RC.
+
+#### Build your target with GPU support
+From the root of your source tree, run:
+
+```bash
+$ bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
+
+$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
+# Lots of output. This tutorial iteratively calculates the major eigenvalue of
+# a 2x2 matrix, on GPU. The last few lines look like this.
+000009/000005 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
+000006/000001 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
+000009/000009 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
+```
+
+Note that "--config=cuda" is needed to enable the GPU support.
+
+#### Known issues
+
+* Although it is possible to build both Cuda and non-Cuda configs under the same
+source tree, we recommend running `bazel clean` when switching between these two
+configs in the same source tree.
+
+* You have to run configure before running bazel build. Otherwise, the build
+will fail with a clear error message. In the future, we might consider making
+this more convenient by including the configure step in our build process.
 
 
 ### Create the pip package and install
 
 When building from source, you will still build a pip package and install that.
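The "canonical set of symbolic links" that the configure step maintains can be illustrated with a toy sketch. The paths below are stand-ins created in a temp directory, not the ones configure actually uses; the point is only that a stable path retargets to whichever Cuda install you selected, and changing the selection means rerunning the step.

```bash
# Toy illustration of a canonical symlink being retargeted; paths are stand-ins.
tmp=$(mktemp -d)
mkdir -p "$tmp/cuda-7.5" "$tmp/cuda-8.0"

ln -sfn "$tmp/cuda-7.5" "$tmp/cuda-canonical"   # first configure run
first=$(readlink "$tmp/cuda-canonical")

ln -sfn "$tmp/cuda-8.0" "$tmp/cuda-canonical"   # rerun after changing paths
second=$(readlink "$tmp/cuda-canonical")

echo "${first##*/} -> ${second##*/}"
rm -rf "$tmp"
```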