| Commit message | Author | Date |
|---|---|---|
| [XLA] Stop including str_util.h. | Justin Lebar | 2018-08-23 |
| [XLA] Use absl string types and functions instead of the TF versions. | Justin Lebar | 2018-08-23 |
| [XLA] Use absl::make_unique instead of xla::MakeUnique. | Justin Lebar | 2018-08-20 |
| [XLA] FP16 Dot support for the CPU and GPU backends. | Bixia Zheng | 2018-02-28 |
| [XLA:CPU] Don't hard-code lane width in horizontal sum routine. | Sanjoy Das | 2018-02-21 |
| Enable half precision convolution for the CPU and GPU backends. | Bixia Zheng | 2018-02-15 |
| The new class will be used as the base class for the existing 2-4 | A. Unique TensorFlower | 2017-10-17 |
| Changes for TPU ops. | A. Unique TensorFlower | 2017-09-22 |
| [XLA] Fix bool support for Array2D/Array3D/Array4D. | Peter Hawkins | 2017-05-17 |
| Merge changes from github. | Dandelion Mané | 2017-03-10 |
| Initial open-source release of XLA: Accelerated Linear Algebra. | Peter Hawkins | 2017-01-09 |