Commit log (newest first):

...
* Merge changes from github. (2016-05-23, Change: 123026122)
* Google authentication for GCS file system. (2016-05-19, Change: 122741738)
  Implements an authentication mechanism based on Application Default Credentials:
  https://developers.google.com/identity/protocols/application-default-credentials
  https://developers.google.com/identity/protocols/OAuth2ServiceAccount
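The Application Default Credentials lookup order described in the linked documentation can be sketched in pure Python. This is only an illustration of the credential-discovery sequence (environment variable, then well-known gcloud file, then metadata server); `find_default_credentials` is a hypothetical helper, not TensorFlow's actual C++ implementation:

```python
import os

def find_default_credentials(environ=os.environ, file_exists=os.path.exists):
    """Return a (source, location) pair describing where credentials come from.

    Follows the Application Default Credentials lookup order:
    1. GOOGLE_APPLICATION_CREDENTIALS pointing at a service-account key file
    2. the gcloud "well-known" credentials file
    3. the GCE metadata server (when running on Google Cloud)
    """
    env_path = environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if env_path:
        return ("environment", env_path)
    well_known = os.path.join(
        os.path.expanduser("~"), ".config", "gcloud",
        "application_default_credentials.json")
    if file_exists(well_known):
        return ("well-known file", well_known)
    # Real code would probe the metadata server before assuming it is reachable.
    return ("metadata server", "http://metadata.google.internal")
```

The `environ` and `file_exists` parameters are injected only so the lookup order is easy to exercise without touching the real environment.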
* Switched to the latest version of Eigen, which performs much better on machines with many CPU cores. (2016-05-16, Change: 122462177)
  For example, the wall time for the following tutorial went down from 13m35s to 5m27s:
  `bazel run -c opt --copt=-mavx tensorflow/examples/tutorials/word2vec/word2vec_basic`
* Move the if_cuda build macro into //third_party/gpus/cuda/build_defs.bzl, and remove the cuda_crosstool_condition build condition. (2016-05-13, Change: 122291892)
  Now if_cuda is just using_nvcc || using_gcudacc.
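A macro of the shape described here ("if_cuda is just using_nvcc || using_gcudacc") can be sketched in Starlark. This is a hypothetical reconstruction for illustration, not the verbatim contents of build_defs.bzl; the config_setting labels are assumed:

```starlark
# Hypothetical sketch of an if_cuda macro in //third_party/gpus/cuda/build_defs.bzl.
def if_cuda(if_true, if_false = []):
    """Returns if_true when building with nvcc or gcudacc, else if_false."""
    return select({
        "//third_party/gpus/cuda:using_nvcc": if_true,
        "//third_party/gpus/cuda:using_gcudacc": if_true,
        "//conditions:default": if_false,
    })
```

A target would then use it as, e.g., `deps = if_cuda(["//tensorflow/core:gpu_runtime"])`, so the CUDA-only dependency is added only under a CUDA build configuration.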
* Add the //third_party/gpus/cuda:using_clang build condition. (2016-05-13, Change: 122289910)
  This is unused at the moment, but will eventually let us build CUDA code with vanilla clang.
* Pass --define=using_cuda_nvcc to CUDA builds. (2016-05-13, Change: 122288952)
  This has no practical effect, as CUDA builds always use nvcc, but it lets us modify the build config rule //third_party/gpus/cuda:using_nvcc so it returns true, rather than false, for CUDA builds.
* Made the contraction code compatible with fp16. (2016-05-12, Change: 122192081)
* Upgraded to the latest version of Eigen, which speeds up full reductions on fp16 by about three orders of magnitude, and some partial reductions by 30%, when using CUDA 7.5 or above. (2016-05-12, Change: 122191448)
* Merge changes from github. (2016-05-05, Change: 121586635)
* Improved support for min and max on 16-bit floats when running on recent CUDA GPUs. (2016-04-27, Change: 120980302)
  Also updated the check-numerics code to make it compatible with fp16.
* Made it possible to compute the cross entropy using 16-bit floats. (2016-04-25, Change: 120739269)
* Rollback of rollback of cl/120366069: switch TensorFlow to the Eigen thread pool. (2016-04-21, Change: 120510292)
  This is the first step of switching TensorFlow to the new non-blocking thread pool in Eigen.
* Prevent TensorFlow from crashing when attempting to reduce an empty tensor on GPU. (2016-04-21, Change: 120505517)
* Fixed a compilation error when targeting CUDA 3.0 devices such as the ones offered by AWS. (2016-04-20, Change: 120369420)
* Upgraded to the latest version of Eigen, which adds support for computing the sigmoid of fp16 and introduces a condition estimator. (2016-04-14, Change: 119907721)
* Added support for trigonometric and transcendental functions of half floats. (2016-04-14, Change: 119850987)
* Upgraded to the latest version of Eigen, which provides significant performance improvements for fp16. (2016-04-13, Change: 119771118)
* Minimal open-source CUPTI GPU tracer. (2016-04-13, Change: 119768540)
* Made isinf, isnan, isfinite, ceil and floor work with 16-bit floats. (2016-04-09, Change: 119458778)
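The fp16 semantics behind classification functions like isinf and isnan can be demonstrated from pure Python, since the struct module supports the IEEE 754 binary16 format (`'e'`). This only illustrates half-float behavior; it is not the TensorFlow kernels themselves:

```python
import math
import struct

def to_fp16_and_back(x):
    """Round-trip a Python float through IEEE 754 binary16 ('e' format)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# isinf / isnan / isfinite behave as expected on fp16-representable values.
print(math.isinf(to_fp16_and_back(float("inf"))))   # True
print(math.isnan(to_fp16_and_back(float("nan"))))   # True
print(math.isfinite(to_fp16_and_back(65504.0)))     # True: largest finite fp16

# fp16 has only 10 mantissa bits, so ceil/floor see already-rounded inputs.
print(to_fp16_and_back(1.1))              # 1.099609375
print(math.floor(to_fp16_and_back(1.1)))  # 1
```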
* Upgraded to the latest version of Eigen, which has bug fixes for complex numbers as well as fp16. (2016-04-08, Change: 119398881)
* Fix genrules that didn't work when TensorFlow was imported as a submodule and compiled with --config=cuda. (2016-04-07, Change: 119318629)
* Upgraded to the latest version of Eigen, which introduces implementations of the zeta and polygamma functions, as well as improved support for float16. (2016-04-07, Change: 119279101)
* Upgraded to the latest version of Eigen, which provides better support for float16 and fixes the computation of absolute values on GPU. (2016-04-04, Change: 119001808)
* Merge changes from github. (2016-03-29, Change: 118532471)
* Upgraded to the latest version of Eigen. (2016-03-28, Change: 118414762)
* Upgraded to the latest version of Eigen, which provides better support for fp16. (2016-03-28, Change: 118362359)
  Use Eigen mod functors directly instead of duplicating them.
* Move the NeuralNetwork code out of third_party/eigen3 and into tensorflow/core/kernel. (2016-03-23, Change: 117941211)
* Update Eigen NN headers in staging repo to match public contents. (2016-03-22)
* Re-rollback of "TensorFlow: move some Eigen NN code from our third_party/eigen3 copy to being part of TF, add tests." (2016-03-18, Change: 117608627)
* Rollforward of "TensorFlow: move some Eigen NN code from our third_party/eigen3 copy to being part of TF, add tests." (2016-03-18, Change: 117587217)
* TensorFlow: update Eigen to the latest change to fix TensorChipping. (2016-03-18, Change: 117570343)
* Rollback of "TensorFlow: move some Eigen NN code from our third_party/eigen3 copy to being part of TF, add tests." (2016-03-18, Change: 117519243)
* TensorFlow: move some Eigen NN code from our third_party/eigen3 copy to being part of TF, add tests. (2016-03-18, Change: 117509710)
* TensorFlow: update Eigen to the latest release, which fixes a too-large frame. (2016-03-18, Change: 117506296)
* Added basic support for float16 on CPUs and older GPUs. (2016-03-18, Change: 117493644)
  Also fixed compilation issues with CUDA devices that support compute capability 5.3.
* Rollforward of "Merge changes from github." (2016-03-16, Change: 117375570)
* Rollback of "Merge changes from github." (2016-03-16, Change: 117304114)
* Merge changes from github. (2016-03-16, Change: 117301677)
* Fix dependency bugs. (2016-03-11, Change: 116925769)
* Upgraded to a newer version of Eigen, which fixes a compilation error on Android. (2016-03-09, Change: 116831720)
* Upgraded Eigen to make it possible to compile a binary that takes advantage of both AVX instructions and CUDA to run as fast as possible. (2016-03-09, Change: 116775924)
* Upgraded to a new version of Eigen, which adds the ability to pad using values other than 0 and significantly speeds up a number of computations on GPUs. (2016-03-08, Change: 116607765)
* Changed the cuda_crosstool_condition to check for a define of using_cuda. (2016-03-07, Change: 116592676)
  Checking for a specific crosstool_top directory doesn't work when TensorFlow is a submodule of a different project.
* Added the ability to convert between floats and float16 on Kepler and Maxwell GPUs. (2016-03-05, Change: 116409601)
* Improved the performance of outer reductions. (2016-03-01, Change: 116063261)
* Improved the performance of narrow reductions on CUDA. (2016-02-29, Change: 115889721)
* Upgraded Eigen to fix a compilation error triggered by Xcode. (2016-02-22, Change: 115280348)
* Upgraded to the latest version of Eigen, which adds a missing #include. (2016-02-22, Change: 115268843)
* Added support for half floats to Eigen, the first step toward supporting half floats in TensorFlow. (2016-02-22, Change: 115253733)
  The code was tested on a Tegra X1.
* Merge changes from github. (2016-02-17, Change: 114882676)