path: root/tensorflow/compiler/tests/unary_ops_test.py
* [XLA:CPU] Re-enable half float tests for unary ops (Benjamin Kramer, 2018-09-21)
  This was blocked by an LLVM bug, which was fixed in r342542.
  PiperOrigin-RevId: 213953743
* [XLA:TF] Enable int8 and uint8 support in the bridge for CPU/GPU (Benjamin Kramer, 2018-09-17)
  The test changes are awkward. None of these are XLA bugs; it's just that the op definitions in TensorFlow are really inconsistent. I tried to infer whether the limitation is on signed types, index types, or just arbitrary. In the latter case only int8/uint8 is blacklisted; we should probably lift that requirement at some point.
  PiperOrigin-RevId: 213243906
* Move from deprecated self.test_session() to self.cached_session(). (A. Unique TensorFlower, 2018-08-22)
  self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
  * the fact that the session may be reused.
  * the fact that the session is not closed even when doing a "with self.test_session()" statement.
  PiperOrigin-RevId: 209837298
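  For illustration, a minimal sketch of the migration (the test class and op below are hypothetical and assume the TF 1.x graph-mode test API, not this file's actual tests):

    import tensorflow as tf


    class ExampleUnaryOpTest(tf.test.TestCase):  # hypothetical test for illustration

      def testAbs(self):
        # Before: "with self.test_session() as sess:" (deprecated).
        # cached_session() is explicit that the session may be reused and is
        # not closed when the "with" block exits.
        with self.cached_session() as sess:
          x = tf.constant([-1.0, 2.0])
          self.assertAllEqual([1.0, 2.0], sess.run(tf.abs(x)))


    if __name__ == "__main__":
      tf.test.main()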
* Handle scalar real HLO instructions from tf.lgamma and tf.digamma (A. Unique TensorFlower, 2018-08-20)
  Currently, the XLA tf.lgamma op doesn't behave the same way as the standard tf.lgamma with certain real values because the log of a negative number is taken. Added regression tests for tf.lgamma operating on a scalar and added cases that previously resulted in NaNs when using the reflection formula.
  PiperOrigin-RevId: 209443312
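  A small standard-library sketch of the reflection formula involved (illustrative only; this is not the XLA implementation):

    import math


    def lgamma_via_reflection(x):
      # log|Gamma(x)| = log(pi / |sin(pi * x)|) - log|Gamma(1 - x)|,
      # valid for non-integer x < 0.5. Naive evaluation of log(gamma(x)) would
      # take the log of a negative number for some of these inputs.
      return math.log(math.pi / abs(math.sin(math.pi * x))) - math.lgamma(1.0 - x)


    for x in (-0.5, -2.5, -7.7):
      print(x, lgamma_via_reflection(x), math.lgamma(x))  # the two columns agree closely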
* [TF] Extend the Softmax kernels so they accept shapes with rank >= 1, rather than relying on the caller to reshape to rank 2. (Peter Hawkins, 2018-08-06)
  Guard the Python library code that reshapes softmax inputs to rank 2 with a forward compatibility check; after the forward compatibility window expires, the Python code will no longer reshape to rank 2.
  PiperOrigin-RevId: 207606326
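  A hedged sketch of such a guard (the helper and the horizon date below are made up for illustration; only the forward_compatible() check itself is a real TensorFlow API):

    import tensorflow as tf
    from tensorflow.python.compat import compat  # internal forward-compat helper


    def softmax_with_guard(logits):
      # Hypothetical horizon date; the real change used its own expiry date.
      if compat.forward_compatible(2018, 8, 27):
        return tf.nn.softmax(logits)  # new kernels accept rank >= 1 directly
      # Old behaviour: flatten to rank 2, apply softmax, restore the shape.
      shape = tf.shape(logits)
      flat = tf.reshape(logits, [-1, shape[-1]])
      return tf.reshape(tf.nn.softmax(flat), shape)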
* Implement digamma for XLA (A. Unique TensorFlower, 2018-07-16)
  Compute the Digamma function using Lanczos' approximation from "A Precision Approximation of the Gamma Function", SIAM Journal on Numerical Analysis series B, Vol. 1:
    digamma(z + 1) = log(t(z)) + A'(z) / A(z) - kLanczosGamma / t(z)
    t(z)  = z + kLanczosGamma + 1/2
    A(z)  = kBaseLanczosCoeff + sum(k = 1..n, kLanczosCoefficients[k] / (z + k))
    A'(z) = -sum(k = 1..n, kLanczosCoefficients[k] / (z + k)^2)
  PiperOrigin-RevId: 204834091
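  A self-contained sketch of that formula using the widely published g = 7, n = 8 Lanczos coefficients (assumed here for illustration; they are not necessarily the exact kBaseLanczosCoeff/kLanczosCoefficients XLA uses):

    import math

    LANCZOS_GAMMA = 7.0
    BASE_COEFF = 0.99999999999980993
    COEFFS = [676.5203681218851, -1259.1392167224028, 771.32342877765313,
              -176.61502916214059, 12.507343278686905, -0.13857109526572012,
              9.9843695780195716e-6, 1.5056327351493116e-7]


    def digamma_plus_one(z):
      """digamma(z + 1) for real z > -1, following the formula above."""
      t = z + LANCZOS_GAMMA + 0.5
      a = BASE_COEFF + sum(c / (z + k) for k, c in enumerate(COEFFS, start=1))
      a_prime = -sum(c / (z + k) ** 2 for k, c in enumerate(COEFFS, start=1))
      return math.log(t) + a_prime / a - LANCZOS_GAMMA / t


    # digamma(1) = -EulerGamma ~ -0.5772156649; digamma(2) = 1 - EulerGamma.
    print(digamma_plus_one(0.0), digamma_plus_one(1.0))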
* Implement lgamma for XLA (A. Unique TensorFlower, 2018-07-16)
  Add support for Real and Imag for real floating point types. Compute the Lgamma function using Lanczos' approximation from "A Precision Approximation of the Gamma Function", SIAM Journal on Numerical Analysis series B, Vol. 1:
    lgamma(z + 1) = (log(2) + log(pi)) / 2 + (z + 1/2) * log(t(z)) - t(z) + log(A(z))
    t(z) = z + kLanczosGamma + 1/2
    A(z) = kBaseLanczosCoeff + sum(k = 1..n, kLanczosCoefficients[k] / (z + k))
  PiperOrigin-RevId: 204815805
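  As above, a self-contained sketch with the widely published g = 7, n = 8 coefficients (an assumption for illustration, not necessarily XLA's exact constants):

    import math

    LANCZOS_GAMMA = 7.0
    BASE_COEFF = 0.99999999999980993
    COEFFS = [676.5203681218851, -1259.1392167224028, 771.32342877765313,
              -176.61502916214059, 12.507343278686905, -0.13857109526572012,
              9.9843695780195716e-6, 1.5056327351493116e-7]


    def lgamma_plus_one(z):
      """lgamma(z + 1) for real z > -1, following the formula above."""
      t = z + LANCZOS_GAMMA + 0.5
      a = BASE_COEFF + sum(c / (z + k) for k, c in enumerate(COEFFS, start=1))
      return ((math.log(2.0) + math.log(math.pi)) / 2.0
              + (z + 0.5) * math.log(t) - t + math.log(a))


    for z in (0.0, 0.5, 4.0):
      print(lgamma_plus_one(z), math.lgamma(z + 1.0))  # agree to ~1e-13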
* Import package xla_test instead of class XLATestCase. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202572322
* [TF:XLA] Implement QuantizeAndDequantizeV3. (Peter Hawkins, 2018-06-27)
  Change the XLA lowering of QuantizeAndDequantizeV2/V3 to match the TF kernel much more closely. The main exception is that the min_quantized and max_quantized values are calculated as floats to avoid the need for 64-bit integer math, which is not present on all accelerators. Reformats unary_ops_test.py in passing; on the whole I don't mind the reformatting.
  PiperOrigin-RevId: 202395114
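  As a rough sketch of the fake quantize-and-dequantize idea (a simplified symmetric, signed, range-given case invented for illustration; it does not reproduce the TF kernel's exact rules):

    import numpy as np


    def quantize_and_dequantize(x, max_abs_range, num_bits=8):
      # min_quantized/max_quantized are kept as floats so no 64-bit integer
      # math is needed, which is the rationale mentioned above.
      min_quantized = -(2.0 ** (num_bits - 1))     # e.g. -128.0 for 8 bits
      max_quantized = 2.0 ** (num_bits - 1) - 1.0  # e.g.  127.0 for 8 bits
      scale = max_quantized / max_abs_range
      clamped = np.clip(x, min_quantized / scale, max_quantized / scale)
      return np.round(clamped * scale) / scale


    print(quantize_and_dequantize(np.linspace(-1.0, 1.0, 5), 1.0))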
* Add XLA support for the error function (and complement). (Russell Power, 2018-06-15)
  PiperOrigin-RevId: 200727545
* [XLA] Use Expm1 in Elu/Selu (David Majnemer, 2018-05-17)
  exp(x) - 1 is best executed using the composed Expm1 operation, as it is better behaved when exp(x) is near 1.
  PiperOrigin-RevId: 197061826
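  For example, in float32 (illustrative NumPy, not the XLA lowering):

    import numpy as np

    x = np.float32(1e-7)
    print(np.exp(x) - np.float32(1.0))  # ~1.1920929e-07: cancellation error
    print(np.expm1(x))                  # ~1.0000000e-07: accurate
    # Elu(x) = exp(x) - 1 for x < 0, so small negative inputs benefit the same way.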
* [TF:XLA] Make softplus more accurate (David Majnemer, 2018-05-15)
  The softplus function computes log(exp(x) + 1). We computed it this way but with special cases to handle underflow and overflow. This was done by comparing the input against a quantity with the magnitude 13.94238515. Note that this quantity is not representable as a single precision float and is instead rounded to 13.9423847. If softplus would overflow, it will be approximated as x. If softplus would underflow, it will be approximated as exp(x).

  Unfortunately, this can provide inaccurate results for negative floats close to the threshold. For example, consider x = -13.9274826049805. softplus(x) is ~8.94068849e-7; rounded to the nearest single precision float, this is 8.940689e-7. In this case, x is quite close to the underflow threshold but not close enough to be approximated by exp(x) == 8.94069273e-7. Rather, it gets calculated using the canonical definition of softplus and comes to 8.34464686e-7. This result comes out to be wrong by 1,048,568 ULPs.

  Instead, we can compute softplus the way one would compute LogSumExp(x, 0):
    max(x, 0) + log(exp(x - max(x, 0)) + exp(0 - max(x, 0)))
  When x is positive, this is: x + log(exp(0) + exp(-x))
  When x is negative, this is: log(exp(x) + exp(0))
  When x is 0, this is: log(exp(0) + exp(0))
  exp(0) evaluates to 1, which gives us:
    if x is positive: x + log(1 + exp(-x))
    if x is negative: log(exp(x) + 1)
    if x is zero: log(2)
  These three cases can be combined like so:
    max(x, 0) + log(exp(-abs(x)) + 1)
  Further, we can increase the fidelity of the log calculation by using log1p:
    max(x, 0) + log1p(exp(-abs(x)))
  This computation naturally handles underflow and overflow while also providing more numerically accurate results for a few small, positive, floating point values.
  PiperOrigin-RevId: 196782814
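  The comparison above can be reproduced in NumPy (a sketch of the formulas, not the XLA code):

    import numpy as np


    def softplus_naive(x):
      return np.log(np.exp(x) + np.float32(1.0))


    def softplus_stable(x):
      # max(x, 0) + log1p(exp(-|x|)): handles underflow/overflow and stays
      # accurate near the old threshold.
      return np.maximum(x, np.float32(0.0)) + np.log1p(np.exp(-np.abs(x)))


    x = np.float32(-13.9274826049805)
    print(softplus_naive(x))   # ~8.3446469e-07 in float32 (off by ~1e6 ULPs)
    print(softplus_stable(x))  # ~8.9406890e-07, matching a float64 reference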
* [XLA] Add log1p/expm1 (David Majnemer, 2018-05-09)
  A new HLO seems prudent as it allows implementations to use fancy techniques to compute accurate results for small inputs.
  PiperOrigin-RevId: 196078115
* [TF] Add half precision to the supported data types for tensorflow operations. (Bixia Zheng, 2018-04-06)
  Enable most of the half precision XLA compiler tests for the cpu backend, except for two which are disabled and documented in a bug.
  PiperOrigin-RevId: 191954183
* Only test supported types and change log_eps for bfloat16. (Jacques Pienaar, 2018-04-02)
  PiperOrigin-RevId: 191302894
* Add bitcast for equal bitwidth casts. (Jacques Pienaar, 2018-03-29)
  Map bitcasts to the XLA bitcast HLO if the bitwidth of the element type is the same.
  PiperOrigin-RevId: 190942398
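  For instance, a 32-bit float-to-int bitcast just reinterprets the payload; in NumPy terms (an illustration, not the bridge code):

    import numpy as np

    x = np.array([1.0, -2.0], dtype=np.float32)
    # tf.bitcast(x, tf.int32) keeps the same bits with a new element type:
    print(x.view(np.int32))  # [ 1065353216 -1073741824]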
* [TF:XLA] Implement Acos, Asin, Atan in terms of Atan2 using half-angle formulae. (Peter Hawkins, 2018-01-31)
  This may not be the most efficient implementation, but it is better than no implementation.
  PiperOrigin-RevId: 184029858
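  One standard set of half-angle identities of this kind, checked in NumPy (assumed here for illustration; not necessarily the exact formulae the bridge uses):

    import numpy as np

    x = np.linspace(-0.99, 0.99, 7)

    asin_via_atan2 = 2.0 * np.arctan2(x, 1.0 + np.sqrt(1.0 - x * x))
    acos_via_atan2 = 2.0 * np.arctan2(np.sqrt(1.0 - x * x), 1.0 + x)
    atan_via_atan2 = np.arctan2(x, np.ones_like(x))

    print(np.max(np.abs(asin_via_atan2 - np.arcsin(x))))  # ~0
    print(np.max(np.abs(acos_via_atan2 - np.arccos(x))))  # ~0
    print(np.max(np.abs(atan_via_atan2 - np.arctan(x))))  # ~0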
* Enable and fix some bfloat16 tests. (A. Unique TensorFlower, 2018-01-23)
  PiperOrigin-RevId: 183013346
* [XLA] Add support for atan2 on CPU (A. Unique TensorFlower, 2017-12-19)
  This leans on libm's atan2 for the actual routine but allows us to share the implementation of other complex operations between CPU and GPU.
  PiperOrigin-RevId: 179569666
* Fix bfloat16 numerics issues in the tests. (A. Unique TensorFlower, 2017-12-15)
  PiperOrigin-RevId: 179207115
* Enable bfloat16 tests and add a filter for currently failing tests. (A. Unique TensorFlower, 2017-12-14)
  PiperOrigin-RevId: 179069257
* [TF:XLA] Add support for NCHW format to SpaceToDepth and DepthToSpace. (Peter Hawkins, 2017-12-05)
  PiperOrigin-RevId: 177953076
* [TF2XLA] Change the implementation of Diag and MatrixDiag to use arithmetic rather than Pad. (A. Unique TensorFlower, 2017-12-04)
  PiperOrigin-RevId: 177896187
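  The arithmetic construction amounts to masking a broadcast of the input with an index comparison (a NumPy sketch of the idea, not the actual lowering):

    import numpy as np


    def diag_via_arithmetic(v):
      n = v.shape[0]
      iota = np.arange(n)
      mask = iota[:, None] == iota[None, :]  # like comparing two broadcast Iotas
      return np.where(mask, v, np.zeros_like(v))


    print(diag_via_arithmetic(np.array([1.0, 2.0, 3.0])))  # same result as np.diag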
* [TF:XLA] Adding test coverage for more C64 operations, and ensuring they pass. (A. Unique TensorFlower, 2017-11-13)
  Included here:
  - reduction ops (reduce_sum, reduce_prod)
  - unaries: tanh, sigmoid (currently GPU only)
  - binaries: pow (currently GPU only)
  PiperOrigin-RevId: 175562417
* [XLA:CPU] [XLA:GPU] Adds compiler support for C64 primitive type, including relevant elementwise unary and binary op lowering for CPU and GPU. (A. Unique TensorFlower, 2017-10-27)
  We use a named LLVM struct "complex64", laid out the same as std::complex<float>. This named struct is accessed via the llvm::Module, which required changes to accessors of PrimitiveTypeToIrType & friends. Ops that require atan2 (in particular, angle and log) are only supported on GPU at this point. LLVM lacks a CPU intrinsic for atan or atan2, whereas libdevice provides this for GPU.
  PiperOrigin-RevId: 173676849
* [TF:XLA] Implement BitwiseAnd, BitwiseOr, and Invert operators. (A. Unique TensorFlower, 2017-10-12)
  PiperOrigin-RevId: 172038787
* [TF:XLA] Improve numerical stability of SoftPlus. (Peter Hawkins, 2017-10-04)
  PiperOrigin-RevId: 171003559
* [TF:XLA] Implement SpaceToDepth and DepthToSpace. (Peter Hawkins, 2017-09-26)
  PiperOrigin-RevId: 170090821
* [XLA] Add support for QuantizeAndDequantizeV2. (Chris Leary, 2017-09-25)
  PiperOrigin-RevId: 169955636
* [TF:XLA] Implement SoftSign, SoftSignGrad, ReciprocalGrad, ApproximateEqual, Rint, IsFinite, IsInf, IsNan. (Peter Hawkins, 2017-08-31)
  Enable L2Loss test case that apparently passes now.
  PiperOrigin-RevId: 167156124
* [TF:XLA] Add implementations of Tan, Sinh, Cosh, Asinh, Acosh, Atanh, Expm1. (Peter Hawkins, 2017-08-30)
  PiperOrigin-RevId: 167020800
* [XLA] Add IsFinite op in tf2xla. (Chris Leary, 2017-08-18)
  PiperOrigin-RevId: 165734702
* Merge changes from github. (Vijay Vasudevan, 2017-07-28)
  END_PUBLIC
  I dropped the following commit because it doesn't compile. I will follow up with Andrew to fix it or revert it.
  Commit 003deb88b authored by osdamv<osdamv@gmail.com> Committed by Vijay Vasudevan<vrv@google.com>: Refactor and implementation of the camera API 1, it fixes #8736 (#10771)
  List of commits in this CL:
  --- Commit 446450369 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Use identity of param variable in cudnn_rnn.RNNParamsSaveable instead of parameter variable directly. The RNNParamsSaveable is usually used in a graph which also has a saver for the cudnn param variable itself, if the same op is used for both, fails with a two savers for same op error. PiperOrigin-RevId: 163431826
  --- Commit d629a8316 authored by RJ Ryan<rjryan@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Increase bound on tf.contrib.signal.inverse_stft gradient error to avoid flakiness on macOS. PiperOrigin-RevId: 163426631
  --- Commit 253bcbb71 authored by Kay Zhu<kayzhu@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Use HloEvaluator for convolution in reference_util. Also Speed up HloEvaluator's HandleConvolution in non-opt build, by moving calls to HloInstruction::shape() out of the inner loop. PiperOrigin-RevId: 163416183
  --- Commit 569a00e68 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update API to traffic in unique_ptrs rather than owning raw pointers PiperOrigin-RevId: 163414320
  --- Commit 31a77bc77 authored by Asim Shankar<ashankar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Java: Update release to 1.3.0-rc1 PiperOrigin-RevId: 163413736
  --- Commit 1ebbf4325 authored by Jonathan Hseu<vomjom@vomjom.net> Committed by GitHub<noreply@github.com>: Add missing grpc dependency (#11828)
  --- Commit 905abb1f9 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Test asserts should have `expected` first. PiperOrigin-RevId: 163409348
  --- Commit d5cc143e2 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Increase timeout to deflake the test.
PiperOrigin-RevId: 163407824 --- Commit ce1c7f02a authored by Eli Bendersky<eliben@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Properly include logging header in xla_internal_test_main PiperOrigin-RevId: 163405986 --- Commit 22241cd42 authored by joetoth<joetoth@gmail.com> Committed by Vijay Vasudevan<vrv@google.com>: External leveldb link changed (#11833) table_format.txt was renamed to table_format.md --- Commit 6b7314de4 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Consolidating the code to fill the partition's function library into one place. Previously, Partition() and MasterSession::RegisterPartition() both fills in the partitioned graph's function library. PiperOrigin-RevId: 163400992 --- Commit 28373cfe7 authored by Frank Chen<frankchn@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds preliminary support for Cloud TPUs with Cluster Resolvers. This aims to allow users to have a better experienec when specifying one or multiple Cloud TPUs for their training jobs by allowing users to use names rather than IP addresses. PiperOrigin-RevId: 163393443 --- Commit e5353c941 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Don't prune nodes that have reference inputs. PiperOrigin-RevId: 163390862 --- Commit 226510834 authored by Asim Shankar<ashankar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: C API: Groundwork for experimenting with TF_Tensor in device memory. TF_Tensor objects are always backed by host memory. This commit lays the groundwork for allowing TF_Tensor objects to refer to tensor data on device (e.g., GPU) memory. PiperOrigin-RevId: 163388079 --- Commit 613bf1c7c authored by Yuefeng Zhou<yuefengz@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: fix asan test failure in SingleMachineTest::ReleaseMemoryAfterDestruction. PiperOrigin-RevId: 163386941 --- Commit 4653d37a3 authored by Eli Bendersky<eliben@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Change type to appease GPU builds. PiperOrigin-RevId: 163384927 --- Commit 9f131bd15 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal change PiperOrigin-RevId: 163378484 --- Commit 8bc0236c8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: PiperOrigin-RevId: 163366493 --- Commit 3b97f1f9b authored by Yangzihao Wang<yangzihao@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Change to only run one round of matmul benchmark. PiperOrigin-RevId: 163364341 --- Commit a4a3a3335 authored by Yun Peng<pcloudy@google.com> Committed by Vijay Vasudevan<vrv@google.com>: Fix ./configure on Windows (#11775) * Fix ./configure on Windows * Disable bitwise_ops_test on Windows --- Commit ae3119d16 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Small changes to op framework. PiperOrigin-RevId: 163361071 --- Commit f40189d26 authored by qjivy<ji.qiu@spreadtrum.com> Committed by Vijay Vasudevan<vrv@google.com>: PR again: Enable building label_image with jpeg/gif/png decoder for Android. (#11475) * Enable building label_image with jpeg/gif/png decoder for Android. 
Add dependency "android_tesnorflow_image_op" to label_image, which is not overlapped with android_tensorflow_kernels. * Running buildifier to reformat the BUILD files for sanity check. --- Commit 599165861 authored by KB Sriram<kbsriram@gmail.com> Committed by Vijay Vasudevan<vrv@google.com>: Add the Constant operator class (#11559) Create a custom operator class to create constants in the Graph, and introduce the Operator marker annotation to identify operator classes. Please see #7149 for the master tracking issue. --- Commit 86ca3506f authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Further BUILD cleanup PiperOrigin-RevId: 163360750 --- Commit 376bb063b authored by Pete Warden<petewarden@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Look inside functions to see which node types are used. PiperOrigin-RevId: 163360375 --- Commit 2139e7d8b authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [tf.contrib.data] map expects a nested structure. Fixes #11786 PiperOrigin-RevId: 163359134 --- Commit d09304fca authored by Jonathan Hseu<vomjom@vomjom.net> Committed by Vijay Vasudevan<vrv@google.com>: Upgrade gRPC (#11768) * BUILD rule modifications * More build fixes * Code changes * More code fixes * Working tests * CMake build * Fix pprof * Fix header includes * CMake fix test * Bazel clean * Fix verbs * More verbs fixes * bazel clean for XLA * Windows build fix test * Add openssl/rand.h * New cmake build command * --config Release --- Commit 3cd828474 authored by David Norman<DavidNorman@users.noreply.github.com> Committed by Vijay Vasudevan<vrv@google.com>: Fix error with default python path selection (#11814) * Fix error with default python path selection * Move setting of environment var outside if / else --- Commit ddd8e21b7 authored by Eli Bendersky<eliben@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Consolidate all similar main()s in tests into a single target. PiperOrigin-RevId: 163354724 --- Commit a36bca25b authored by Tayo Oguntebi<tayo@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove ShapeWithoutPadding() utility function, as it is no longer needed. PiperOrigin-RevId: 163353430 --- Commit b26f9cd44 authored by David Norman<DavidNorman@users.noreply.github.com> Committed by Vijay Vasudevan<vrv@google.com>: Ensure that the multi-instruction fuse can take shared inputs (#11748) * Ensure that the multi-instruction fuse can take shared inputs Note that the fuse action only works when the shared input / constant appears after all of its consumers in the list of instructions. * Add a comment describing the test --- Commit 34cbf161d authored by Jiri Simsa<jsimsa@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update Dataset API documentation. PiperOrigin-RevId: 163349457 --- Commit 2381ce5c3 authored by Abdullah Alrasheed<a.rasheed@tc-sa.com> Committed by Vijay Vasudevan<vrv@google.com>: DOC: Fix typo. (#11813) you could could be I/O bottlenecked. TO: you could be I/O bottlenecked. --- Commit e4a5c5356 authored by Toby Boyd<tobyboyd@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: ["Variable", "VariableV2", "VarHandleOp"] is the default for ps_ops=None PiperOrigin-RevId: 163344629 --- Commit 722f6f361 authored by A. 
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix TensorForest's saveable object names so loading a savedmodel works. PiperOrigin-RevId: 163332598 --- Commit cda80a785 authored by Eric Liu<ioeric@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [tpu profiler] Dump HLO graphs in profile responses to the log directory. PiperOrigin-RevId: 163318992 --- Commit cea9ef6f5 authored by horance<horance-liu@users.noreply.github.com> Committed by Vijay Vasudevan<vrv@google.com>: Refactoring device name utils (#11797) * remove duplicated code for full_name and legacy_name for DeviceNameUtils * replace tabs * Real->Device --- Commit 1f7c0f917 authored by Kongsea<kongsea@gmail.com> Committed by Vijay Vasudevan<vrv@google.com>: Refine docstrings (#11800) --- Commit dd1f0cddd authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Supports lookup devices by fullname either in the canonical form or the legacy form. This makes DeviceSet behaves the same as DeviceMgr's FindDevice method. PiperOrigin-RevId: 163300346 --- Commit 631a364cd authored by Kay Zhu<kayzhu@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Add Reduce, DynamicSlice and DynamicSliceUpdate to HloEvaluator. - Reduce is disabled explicitly for constant folding, as not all types of embedded computation can be currently supported by the evaluator. - Added support to evaluate HloModule to HloEvaluator. - Minor signature change to Evaluate(). PiperOrigin-RevId: 163299238 --- Commit a52470172 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Sets the incarnation number even when the attribute is set. PiperOrigin-RevId: 163299121 --- Commit a49fe0366 authored by Suharsh Sivakumar<suharshs@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove platform bridge for grpc_response_reader. PiperOrigin-RevId: 163295986 --- Commit 4404aa7cb authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Add TODO comment explaining why the IsScalar check exists. PiperOrigin-RevId: 163292777 --- Commit 43036ac16 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove unnecessary break statements. PiperOrigin-RevId: 163291947 --- Commit fd5de4690 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Add regression test for a corner case using Reduce that currently fails with the GPU backend. PiperOrigin-RevId: 163287986 --- Commit 32e198f2d authored by Chris Leary<leary@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Add tf.cross support. See #11788 PiperOrigin-RevId: 163287731 --- Commit 88abddbc3 authored by Alan Yee<alyee@ucsd.edu> Committed by Vijay Vasudevan<vrv@google.com>: Update README.md (#11793) Remove bad practices of sudo pip and install use safer pip install commands --- Commit 9b30dc3a8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove final mentions of `get_shape` in docstring. PiperOrigin-RevId: 163282839 --- Commit 423c1eea0 authored by A. 
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: BREAKING CHANGE: Fix semantic error in how maybe_batch* handles sparse tensors. PiperOrigin-RevId: 163276613 --- Commit 6028c071b authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Highlight incoming/outgoing edges on hover in HLO graphviz dumps, and other improvements. Other improvements: - Don't show tooltips for nodes and clusters. Previously we'd show a tooltip containing a pointer value expressed as decimal. Not so useful. - Show tooltips on edges with the to/from node names. - Fix bug wherein if we had - a node at the "edge" of the graph (so its operands aren't included unless they're referenced by another node), - with all of its operands included in the graph save one or more constants, and - those constants weren't referenced by any nodes not at the edge of the graph, we would incorrectly draw the node as "grayed out", indicating that one of its operands (namely, its constant operand) wasn't present in the graph. This is wrong because constants are inlined into their users, so they should always count as "displayed" for the purposes of determining whether a node is grayed out. PiperOrigin-RevId: 163276108 --- Commit ce7a355bd authored by Joshua V. Dillon<jvdillon@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update contrib/distributions/estimator_test build dependency. PiperOrigin-RevId: 163272464 --- Commit 1b8458a1c authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Shorten docstring line. PiperOrigin-RevId: 163269709 --- Commit 69e323cc6 authored by Asim Shankar<ashankar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix comment ypo PiperOrigin-RevId: 163266376 --- Commit 08790e73d authored by Chris Leary<leary@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Fix a bug in cloning outfeeds, carried the wrong shape. PiperOrigin-RevId: 163265592 --- Commit 1bad826d6 authored by Yangzihao Wang<yangzihao@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Rollback of GPU kernel implementation of transpose for tensors with one small dimension. END_PUBLIC BEGIN_PUBLIC BEGIN_PUBLIC Automated g4 rollback of changelist 162525519 PiperOrigin-RevId: 163490703
* [XLA] Add support for sin(x) transcendental. (A. Unique TensorFlower, 2017-07-23)
  PiperOrigin-RevId: 162889962
* [TF:XLA:CPU] Clamp inputs to sigmoid function to [-18, 18] to avoid generating NaNs. (Peter Hawkins, 2017-04-18)
  If we don't clamp the inputs, a large negative input to sigmoid will lead to us computing 1.0/inf, which can yield NaN, since we specify the LLVM NoInf fastmath option.
  Change: 153536549
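  In NumPy terms the guard looks roughly like this (the [-18, 18] bound comes from the commit; the helper itself is only a sketch):

    import numpy as np


    def sigmoid_clamped(x):
      # For float32, sigmoid(18) already rounds to 1 and sigmoid(-18) is ~1.5e-8,
      # so the clamp does not change results but keeps exp() away from inf
      # (whose handling is undefined under the NoInf fast-math assumption).
      x = np.clip(x, -18.0, 18.0)
      return 1.0 / (1.0 + np.exp(-x))


    print(sigmoid_clamped(-1000.0))  # ~1.523e-08, with no inf produced along the way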
* Add Elu ops in XLA. (A. Unique TensorFlower, 2017-04-06)
  Change: 152383201
* Register OnesLike kernel in XLA. (Suharsh Sivakumar, 2017-04-03)
  Change: 152079500
* [TF:XLA] Implement tf.round in the XLA bridge. (Peter Hawkins, 2017-03-16)
  Change: 150336339
* [TF:XLA] Add a placeholder implementation of Log1p (via log(1+x), which is not numerically accurate for x near 0). (Peter Hawkins, 2017-02-01)
  Make some cleanups to unary_ops_test.py.
  Change: 146282294
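  The caveat is easy to see numerically (NumPy sketch, independent of the bridge code):

    import numpy as np

    x = np.float32(1e-8)
    print(np.log(np.float32(1.0) + x))  # 0.0 -- the "1 + x" step already lost x
    print(np.log1p(x))                  # ~1e-08, the accurate result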
* Deprecate tf.neg, tf.mul, tf.sub (and remove math_ops.{neg,mul,sub} usages) (Andrew Selle, 2017-01-11)
  tf.negative, tf.multiply, tf.subtract are the new names. Also enabled the deprecation warning (to be completely removed by Friday).
  Change: 144215355
* Remove all remaining tf.pack, tf.unpack references and remove the tf.pack/tf.unpack op. (A. Unique TensorFlower, 2017-01-10)
  Change: 144130931
* Initial open-source release of XLA: Accelerated Linear Algebra. (Peter Hawkins, 2017-01-09)
  XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators. XLA is still experimental; we are releasing it early to get the community involved.
  Change: 143990941