path: root/tensorflow/compiler/xla/client
* [XLA] Switch from tensorflow::str_util::Join to absl::StrJoin. (Justin Lebar, 2018-08-23)
  PiperOrigin-RevId: 210018843
* [XLA] Use absl string types and functions instead of the TF versions. (Justin Lebar, 2018-08-23)
  Unfortunately this has to be one big patch, because e.g. absl::StrCat doesn't
  accept a TF StringPiece, but as soon as we switch to absl::string_view, we
  have to switch away from all of the TF functions.
  PiperOrigin-RevId: 209957896
* [XLA] Clean up AllToAll. (A. Unique TensorFlower, 2018-08-22)
  - Remove the unused field 'cross_replica_sum_barrier' from AllToAll.
  - Update cost analysis: there is no computation in AllToAll.
  - Clean up stale TODOs.
  PiperOrigin-RevId: 209814190
* Replaced calls to tensorflow::StringPiece::ToString with string conversions. (A. Unique TensorFlower, 2018-08-22)
  That is, instances of sp.ToString() are replaced with string(sp). This will
  allow tensorflow::StringPiece::ToString to be removed, which is necessary
  before it can be replaced with absl::string_view.
  PiperOrigin-RevId: 209806694
* Change subgroup interface for CrossReplicaSum. (HyoukJoong Lee, 2018-08-22)
  PiperOrigin-RevId: 209780185
* [XLA] Expose a way to control dot/conv precision. (David Majnemer, 2018-08-22)
  This adds a field to the proto so that we may serialize it. On TPUs, we can
  simulate higher precision by splitting a float32 number into several bfloat16
  numbers such that their sum closely approximates the original number. A tensor
  contraction operation like convolution or a dot product can be computed by
  forming several partial products which approximate the correct answer to a
  closer margin.
  PiperOrigin-RevId: 209720948
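The bfloat16-splitting trick described above can be sketched in a few lines. This is a hypothetical illustration of the decomposition, not XLA's implementation: a float32 value is split into bfloat16 pieces whose sum approximates it, so each partial product can be formed in lower precision.

```python
import struct

def to_bf16(x):
    # Truncate a float to bfloat16 precision by keeping only the top 16 bits
    # of its float32 representation (sign, 8 exponent bits, 7 mantissa bits).
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

def split_to_bf16(x, num_parts=3):
    # Decompose a float32 value into bfloat16 parts whose sum approximates it:
    # each part captures the leading bits of the remaining residual.
    parts = []
    residual = struct.unpack('<f', struct.pack('<f', x))[0]  # round to float32
    for _ in range(num_parts):
        p = to_bf16(residual)
        parts.append(p)
        residual = struct.unpack('<f', struct.pack('<f', residual - p))[0]
    return parts
```

With three parts, the sum recovers roughly float32 precision, since each bfloat16 part contributes about eight more mantissa bits.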
* [TF:XLA] Make AvgPoolGrad support general padding. (A. Unique TensorFlower, 2018-08-21)
  PiperOrigin-RevId: 209693676
* [TF:XLA] Add AvgPoolGrad to pooling library. (A. Unique TensorFlower, 2018-08-21)
  PiperOrigin-RevId: 209689851
* [XLA] gtl::optional -> absl::optional. (Yunxing Dai, 2018-08-21)
  PiperOrigin-RevId: 209686671
* Fix C++ header guards. (A. Unique TensorFlower, 2018-08-21)
  PiperOrigin-RevId: 209679086
* Remove HostCompute HLO. (Tong Shen, 2018-08-21)
  Now, for host compute, we just emit SendToHost & RecvFromHost pairs and use a
  token to ensure dependency ordering.
  PiperOrigin-RevId: 209671416
* Merged commit includes the following changes: (Yifei Feng, 2018-08-21)
  - 209663919 by yifeif <yifeif@google.com>: Internal change.
  - 209663914 by amitpatankar <amitpatankar@google.com>: Fix the topk_op_test
    for numpy>1.15.
  - 209660476 by jdduke <jdduke@google.com>: Fix model lifetime for TensorFlow
    Lite C# bindings. Ensure the model's existence for the duration of the
    interpreter, as per API requirements.
  - 209655960 by scottzhu <scottzhu@google.com>: Unify RNN Cell interface
    between TF and Keras.
  - 209655731 by A. Unique TensorFlower <gardener@tensorflow.org>: Added tests
    for PredictionOps and PartitionExamplesOps.
  - 209655291 by nolivia <nolivia@google.com>: Adding a rate class so that we
    can save global_step/sec using tf.contrib.summary. The function takes the
    rate in relation to any tensors, provided that the numerator and
    denominator are broadcastable and have dtypes that can be cast to float64.
  - 209654655 by kramerb <kramerb@google.com>: [XLA] Switch from
    tensorflow::gtl::InlinedVector to absl::InlinedVector. This one comes with
    extra goodies like a move constructor.
  - 209653851 by A. Unique TensorFlower <gardener@tensorflow.org>: Internal
    build specification change.
  PiperOrigin-RevId: 209663919
* [XLA] Use absl::make_unique instead of xla::MakeUnique. (Justin Lebar, 2018-08-20)
  Same for WrapUnique.
  PiperOrigin-RevId: 209531124
* [XLA] Switch to absl versions of the c_foo functions. (Justin Lebar, 2018-08-20)
  PiperOrigin-RevId: 209502513
* Handle scalar real HLO instructions from tf.lgamma and tf.digamma. (A. Unique TensorFlower, 2018-08-20)
  Currently, the XLA tf.lgamma op doesn't behave the same way as the standard
  tf.lgamma for certain real values, because the log of a negative number is
  taken. Added regression tests for tf.lgamma operating on a scalar, and added
  cases that previously resulted in NaNs when using the reflection formula.
  PiperOrigin-RevId: 209443312
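The reflection formula mentioned above maps lgamma at negative arguments back to positive ones, where the approximation is well behaved. A minimal sketch of the idea (the helper name is hypothetical, and this is not XLA's code):

```python
import math

def lgamma_via_reflection(x):
    # Reflection formula: lgamma(x) = log(pi / |sin(pi*x)|) - lgamma(1 - x).
    # Naively taking the log of a possibly negative sin(pi*x) is the NaN
    # source the entry describes; the absolute value keeps the argument
    # of the log positive.
    assert x < 0.5  # the formula is only needed for the left half-line
    return math.log(math.pi / abs(math.sin(math.pi * x))) - math.lgamma(1.0 - x)
```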
* Automated rollback of commit 4a41f50648929197954d892559587cb76458d306. (A. Unique TensorFlower, 2018-08-17)
  PiperOrigin-RevId: 209248552
* [XLA] Switch to absl versions of the c_foo functions. (Justin Lebar, 2018-08-17)
  PiperOrigin-RevId: 209247783
* Improve gather ergonomics by renaming fields. (Sanjoy Das, 2018-08-16)
  This CL renames the various inputs to the Gather HLO to be more mnemonic by
  making it more obviously a batch dynamic-slice. The replacements I made are:
    s/elided_window_dims/collapsed_slice_dims/g
    s/window_bounds/slice_sizes/g
    s/gather_dims_to_operand_dims/start_index_map/g
    s/gather_indices/start_indices/g
    s/output_window_dims/offset_dims/g
  PiperOrigin-RevId: 209051067
* Add a feature_group_size parameter to the Convolution HLO op. (Adrian Kuegel, 2018-08-16)
  This is a first step towards supporting grouped convolutions, which are a
  generalization of depthwise convolution.
  PiperOrigin-RevId: 208950311
* Require token operand for infeed and outfeed. (Mark Heffernan, 2018-08-15)
  For expediency in rolling out support for tokens used to order side-effecting
  ops, infeed and outfeed *optionally* took a token operand. This CL removes
  that option, so all infeed and outfeed instructions now take a token operand.
  PiperOrigin-RevId: 208927968
* [XLA] Fix use-of-uninitialized-value msan failure in local_client as well. (Kay Zhu, 2018-08-09)
  PiperOrigin-RevId: 208004791
* [TF:XLA] Introduce MutableBorrowingLiteral to enable interacting with a
  (tensor) buffer not owned by the XLA Literal class directly, without having
  to memcpy the Literal to a (Host)Tensor. (Kay Zhu, 2018-08-08)
  PiperOrigin-RevId: 207972410
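The idea of a borrowing literal — a view over caller-owned memory rather than an owned copy — can be sketched with Python's memoryview. This is an analogy only; XLA's C++ Literal API is unrelated to this code:

```python
# A caller-owned buffer, e.g. the backing storage of a host tensor.
tensor_buffer = bytearray(8)

owned_copy = bytes(tensor_buffer)     # memcpy-style copy: detached from the buffer
borrowed = memoryview(tensor_buffer)  # borrowing view: no copy is made

# Writes through the borrowing view mutate the caller's buffer directly,
# while the owned copy is unaffected.
borrowed[0] = 42
```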
* [XLA] Add the XLA interface for AllToAll. (A. Unique TensorFlower, 2018-08-08)
  PiperOrigin-RevId: 207971529
* Make root determination of XLA computations in XlaBuilder less magical. (Mark Heffernan, 2018-08-08)
  Previously, certain XLA ops could never be the root of a computation built in
  the XLA builder. These restricted ops included Send, Outfeed, and several
  others; the root of the built computation was the last added op *not* in this
  restricted set. This is undesirable because Send and Outfeed now produce
  token values, and it may be desirable to return these tokens from a
  computation, which was previously impossible.

  This CL addresses the problem by allowing any op to be the root of a
  computation: the root is now simply the last operation added before calling
  Build(). Furthermore, to enable the previous functionality and improve
  expressiveness in general, a new XlaComputation::Build method is added which
  takes an XlaOp specifying the root.
  PiperOrigin-RevId: 207887842
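The new root rule is easy to model: the root defaults to the last op added, and an explicit root can be supplied at build time. A toy sketch of that behavior follows; the names are hypothetical and this is not the real XlaBuilder API:

```python
class ToyBuilder:
    """Models root selection only: ops are recorded in insertion order."""

    def __init__(self):
        self.ops = []

    def add_op(self, name):
        self.ops.append(name)
        return name

    def build(self, root=None):
        # Any op may be the root now, even a token-producing one like Outfeed;
        # by default the root is simply the last op added before build().
        if not self.ops:
            raise ValueError("cannot build an empty computation")
        if root is not None and root not in self.ops:
            raise ValueError("root must be an op in this computation")
        return root if root is not None else self.ops[-1]
```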
* [XLA] Delete the xla_builder in xla_client. (A. Unique TensorFlower, 2018-08-07)
  PiperOrigin-RevId: 207792582
* [TF:XLA] Add initial XLA pooling library. (A. Unique TensorFlower, 2018-08-07)
  Generalize pooling operations from tf2xla and put them into a library to make
  them reusable by any XLA frontend.
  PiperOrigin-RevId: 207756300
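For illustration, the kind of reusable pooling primitive such a library provides can be sketched as a plain 1-D average pool. This is a hypothetical helper, not the library's API:

```python
def avg_pool_1d(values, window, stride):
    # Valid-padding 1-D average pooling: slide a window over the input and
    # average each slice.
    if window <= 0 or stride <= 0:
        raise ValueError("window and stride must be positive")
    return [
        sum(values[i:i + window]) / window
        for i in range(0, len(values) - window + 1, stride)
    ]
```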
* [XLA] Produce fake args using one computation, not N. (Justin Lebar, 2018-08-06)
  This is much faster to compile.
  PiperOrigin-RevId: 207577415
* [XLA] Clean up clang-tidy readability warnings in compiler/xla. (Benjamin Kramer, 2018-08-06)
  - lambda capture 'builder' is not used
  - using decl 'Printf' is unused
  - lambda capture 'this' is not used (17 times)
  - lambda capture 'buffer_liveness' is not used
  - lambda capture 'computation' is not used
  - lambda capture 'operand_to_generator' is not used
  - lambda capture 'M' is not used
  - using decl 'InvalidParameterArgument' is unused
  - lambda capture 'sum' is not used
  - lambda capture 's' is not used
  - lambda capture 'epsilon' is not used
  PiperOrigin-RevId: 207542895
* [XLA] Introduce variadic version of reduce. (Michael Kuperstein, 2018-08-02)
  This defines the semantics and adds parser and shape inference support. Since
  support is not yet plumbed through the rest of the compiler, multi-output
  reduce is still rejected by the HLO verifier and is not exposed through
  XlaBuilder.
  PiperOrigin-RevId: 207148035
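The semantics of a variadic (multi-operand) reduce can be illustrated with the classic argmax use case: a single reducer consumes a tuple of accumulators plus a tuple of elements and returns a new tuple. A hypothetical sketch, not XLA's implementation:

```python
def variadic_reduce(operands, inits, reducer):
    # Reduce N equal-length arrays in lockstep with one reducer that maps
    # (accumulator_tuple, element_tuple) -> accumulator_tuple.
    acc = tuple(inits)
    for elems in zip(*operands):
        acc = reducer(acc, elems)
    return acc

def argmax_reducer(acc, elem):
    # Track (max_value, index_of_max) across both operand arrays at once:
    # this is the multi-output behavior a single-operand reduce cannot express.
    (best_val, best_idx), (val, idx) = acc, elem
    return (val, idx) if val > best_val else (best_val, best_idx)
```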
* [XLA] Add Scatter HLO. (A. Unique TensorFlower, 2018-08-01)
  PiperOrigin-RevId: 207045468
* Use the correct device ordinal when checking whether the device the
  executable was built for matches the device it will run on. (A. Unique TensorFlower, 2018-08-01)
  Before this patch, if the device to run on was provided via a stream without
  setting the device ordinal in the ExecutableRunOptions, we would check the
  default device against the device the executable was built for.
  PiperOrigin-RevId: 206892902
* Adds a NonMaxSuppressionV4 op, with a corresponding TF2XLA implementation. (Tayo Oguntebi, 2018-07-30)
  PiperOrigin-RevId: 206673787
* [XLA] This is a step to incrementally move client/xla_client/* to client/. (A. Unique TensorFlower, 2018-07-25)
  PiperOrigin-RevId: 206111380
* [XLA] The first step to incrementally move client/xla_client/* to client/. (A. Unique TensorFlower, 2018-07-25)
  PiperOrigin-RevId: 206105815
* Replace generic Pool with StreamPool, and discard failed streams. (Todd Wang, 2018-07-25)
  We have a Pool in XLA that maintains a freelist of Streams, to avoid the
  overhead of repeatedly allocating new Streams. Streams have a monotonic state
  machine; if a stream encounters any error, it will remain in an error state
  forever.

  The functional change in this CL is to ensure that streams which have
  encountered an error are deleted rather than being put back on the pool.
  Without this change, a previously failed stream would be put back on the
  pool, only to cause the next usage of the stream to trivially fail.

  I've chosen to replace the generic templatized Pool with a concrete
  StreamPool, since this makes the logic more straightforward to reason about.
  Also note that the only existing usage of Pool is to hold streams.

  The functional change is in stream_pool.cc; most of everything else is
  mechanical updates.
  PiperOrigin-RevId: 206100631
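The fix can be sketched as a tiny freelist pool that checks a stream's health on return. The names below are illustrative, not the StreamPool implementation:

```python
class FakeStream:
    # A stream's error state is monotonic: once it fails, it stays failed.
    def __init__(self):
        self.ok = True

class StreamPoolSketch:
    def __init__(self):
        self._freelist = []

    def borrow(self):
        # Reuse a pooled stream if one is available, else allocate a fresh one.
        return self._freelist.pop() if self._freelist else FakeStream()

    def give_back(self, stream):
        # The functional change: a failed stream is discarded instead of being
        # returned to the freelist, where it would poison the next borrower.
        if stream.ok:
            self._freelist.append(stream)
```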
* [XLA] Correctly make xla_computation public. (A. Unique TensorFlower, 2018-07-25)
  PiperOrigin-RevId: 206073510
* Move xla_computation.* from xla/client/xla_client up to xla/client. (Mark Heffernan, 2018-07-25)
  Plan is to move everything in xla/client/xla_client up to xla/client and
  remove the directory. No functional change.
  PiperOrigin-RevId: 206055680
* New triangular solve algorithm. (A. Unique TensorFlower, 2018-07-24)
  PiperOrigin-RevId: 205865103
* [XLA] Make it illegal to call XlaOp::builder() if the op is uninitialized. (Justin Lebar, 2018-07-20)
  It's very common to write foo.builder()->bar(). Without this precondition, if
  foo.builder() is null, the call to bar() will segfault at some point,
  possibly deep in the call stack, when we finally dereference `this`. The
  precondition lets us avoid this tricky-to-debug problem.
  PiperOrigin-RevId: 205456769
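A fail-fast precondition like this turns a delayed crash deep in the call stack into an immediate, descriptive error at the call site. A Python analogue of the pattern (not XLA's C++ code):

```python
class Op:
    def __init__(self, builder=None):
        self._builder = builder  # None models a default-constructed op

    def builder(self):
        # Check the precondition here, so chained calls like
        # op.builder().something() fail immediately with a clear message
        # instead of dereferencing null much later.
        if self._builder is None:
            raise RuntimeError("builder() called on an uninitialized op")
        return self._builder
```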
* Start implementation of Iota HLO. (Nick Desaulniers, 2018-07-20)
  PiperOrigin-RevId: 205447892
* [XLA] Don't use Pow for simple expressions. (David Majnemer, 2018-07-19)
  Using Pow to handle squaring or taking the reciprocal is overkill: Pow is not
  going to be as accurate as the straightforward formulation without relying on
  optimizations in the compiler or the Pow implementation to kick in.
  PiperOrigin-RevId: 205247912
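The point is that trivial exponents have exact direct formulations, so a simplifier can rewrite them rather than calling a general pow. A sketch of such a rewrite, with a hypothetical helper name:

```python
import math

def simplified_pow(x, exponent):
    # Rewrite the easy cases to their straightforward, more accurate forms;
    # fall back to the general (exp/log-based) pow only when necessary.
    if exponent == 2:
        return x * x          # a single correctly rounded multiply
    if exponent == -1:
        return 1.0 / x        # a single correctly rounded divide
    if exponent == 0.5:
        return math.sqrt(x)   # sqrt is correctly rounded; pow need not be
    return math.pow(x, exponent)
```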
* [TF:XLA] Rename xla::Diagonal to xla::GetMatrixDiagonal. Fix its handling of
  rectangular matrices. (Peter Hawkins, 2018-07-19)
  Switch the TF DiagPart and MatrixDiagPart operators to use GetMatrixDiagonal.
  Extend CreateScalar{And,Or}Computation to support non-PRED types.
  PiperOrigin-RevId: 205244201
* Add single-sided host send and receive operations. (Mark Heffernan, 2018-07-17)
  Adds a bit on kSend/kReceive instructions and their Done variants indicating
  whether the operation communicates with the host or with another device (the
  default). Host send/recv operations are single-sided, without a complementary
  recv/send instruction in another module. They are exposed in the XLA builder
  API as SendToHost and RecvFromHost.
  PiperOrigin-RevId: 205008138
* [TF:XLA] Move implementations of primitive math functions out of TF/XLA and
  into xla/client/lib/math.{cc,h}. (Peter Hawkins, 2018-07-17)
  PiperOrigin-RevId: 205003168
* Implement digamma for XLA. (A. Unique TensorFlower, 2018-07-16)
  Compute the digamma function using Lanczos' approximation from "A Precision
  Approximation of the Gamma Function", SIAM Journal on Numerical Analysis,
  Series B, Vol. 1:

    digamma(z + 1) = log(t(z)) + A'(z) / A(z) - kLanczosGamma / t(z)
    t(z) = z + kLanczosGamma + 1/2
    A(z) = kBaseLanczosCoeff + sum(k = 1..n, kLanczosCoefficients[k] / (z + k))
    A'(z) = -sum(k = 1..n, kLanczosCoefficients[k] / (z + k)^2)
  PiperOrigin-RevId: 204834091
* Implement lgamma for XLA. (A. Unique TensorFlower, 2018-07-16)
  Add support for Real and Imag for real floating point types.

  Compute the lgamma function using Lanczos' approximation from "A Precision
  Approximation of the Gamma Function", SIAM Journal on Numerical Analysis,
  Series B, Vol. 1:

    lgamma(z + 1) = (log(2) + log(pi)) / 2 + (z + 1/2) * log(t(z)) - t(z) + log(A(z))
    t(z) = z + kLanczosGamma + 1/2
    A(z) = kBaseLanczosCoeff + sum(k = 1..n, kLanczosCoefficients[k] / (z + k))
  PiperOrigin-RevId: 204815805
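Both Lanczos formulas (lgamma in this entry, digamma in the entry above it) can be checked against a standard coefficient set. The g = 7, n = 8 coefficients below are the widely published ones, which are an assumption here — not necessarily the kBaseLanczosCoeff/kLanczosCoefficients values XLA uses:

```python
import math

LANCZOS_G = 7.0
BASE_COEFF = 0.99999999999980993
COEFFS = [
    676.5203681218851, -1259.1392167224028, 771.32342877765313,
    -176.61502916214059, 12.507343278686905, -0.13857109526572012,
    9.9843695780195716e-6, 1.5056327351493116e-7,
]

def _lanczos_a(z):
    # A(z) = base + sum(k = 1..n, c_k / (z + k))
    return BASE_COEFF + sum(c / (z + k) for k, c in enumerate(COEFFS, start=1))

def lanczos_lgamma(x):
    # lgamma(z + 1) = log(2*pi)/2 + (z + 1/2)*log(t) - t + log(A(z)),
    # with t = z + g + 1/2; valid for x >= 0.5 (reflection handles the rest).
    z = x - 1.0
    t = z + LANCZOS_G + 0.5
    return (0.5 * math.log(2.0 * math.pi) + (z + 0.5) * math.log(t)
            - t + math.log(_lanczos_a(z)))

def lanczos_digamma(x):
    # digamma(z + 1) = log(t) + A'(z)/A(z) - g/t, where A' = -sum c_k/(z+k)^2.
    z = x - 1.0
    t = z + LANCZOS_G + 0.5
    a_prime = -sum(c / (z + k) ** 2 for k, c in enumerate(COEFFS, start=1))
    return math.log(t) + a_prime / _lanczos_a(z) - LANCZOS_G / t
```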
* Runtime improvements to triangular solve. (A. Unique TensorFlower, 2018-07-16)
  PiperOrigin-RevId: 204804841
* [XLA] Enable the semantics for cross-module AllReduce. (A. Unique TensorFlower, 2018-07-15)
  PiperOrigin-RevId: 204670087
* [XLA] Move implementation of ThreeFry stateless PRNG into xla/client/lib. (Peter Hawkins, 2018-07-13)
  PiperOrigin-RevId: 204557470
* [TF:XLA] Add implementation of block Householder QR decomposition. (Peter Hawkins, 2018-07-10)
  PiperOrigin-RevId: 204044417