path: root/tensorflow/compiler/tests
* Emit xla::Or in TensorArrayScatterV3 for PRED types instead of xla::Add (A. Unique TensorFlower, 2018-10-10)
    Previously we emitted xla::Add, which isn't supported by some XLA backends for PRED types. PiperOrigin-RevId: 216497939
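    The distinction matters because combining boolean (PRED) values with an integer-style add can leave the 0/1 domain, while OR keeps it closed. A minimal numpy sketch of the semantics (illustrative only, not the bridge code):

        import numpy as np

        a = np.array([True, False, True])
        b = np.array([True, True, False])
        # OR is the natural combiner for PRED values:
        print(np.logical_or(a, b))                      # [ True  True  True]
        # An integer-style add can produce values outside {0, 1}:
        print(a.astype(np.int32) + b.astype(np.int32))  # [2 1 1]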
* Update XlaSort to match the underlying HLO. (A. Unique TensorFlower, 2018-10-05)
    PiperOrigin-RevId: 215917470
* Fix unused imports. (Jacques Pienaar, 2018-10-04)
    PiperOrigin-RevId: 215819072
* Add basic TensorList op support in bridge. (Jacques Pienaar, 2018-10-04)
    * Add kernels for TensorListReserve, EmptyTensorList, TensorListElementShape, TensorListPushBack, TensorListPopBack;
    * Treat list type pretty much identically to Stack in the bridge for now;
    * Support variant output by treating variant like a uint8 and leaving the interpretation up to the XlaExpression (variant type does not support tensor_data());
    PiperOrigin-RevId: 215809335
* [TF:XLA] Fix inverted condition in randomized test. (Peter Hawkins, 2018-10-04)
    PiperOrigin-RevId: 215795518
* [TF:XLA] Don't expand complex64 tensors during TF/XLA lowering, if possible. (Peter Hawkins, 2018-10-04)
    PiperOrigin-RevId: 215724324
* Implement DataFormatVecPermute for XLA. (Adrian Kuegel, 2018-10-04)
    Also clear "_kernel" attributes of nodes if they are set to "host". This is not meaningful when processing the graph for XLA, and it would prevent finding the registered XLA kernel. PiperOrigin-RevId: 215722216
* Make StatelessRandomOpsTest.testRandomNormalIsFinite actually test stateless_random_normal. (Peter Hawkins, 2018-10-02)
    Fixes #22611. PiperOrigin-RevId: 215385610
* [XLA] Migrate from gtl::FlatSet to absl::flat_hash_set (Benjamin Kramer, 2018-10-01)
    PiperOrigin-RevId: 215324035
* Updating the V2 variables API. (Alexandre Passos, 2018-09-27)
    PiperOrigin-RevId: 214824023
* Fixes bug in tf2xla NMS implementation. (Tayo Oguntebi, 2018-09-26)
    PiperOrigin-RevId: 214711381
* [TF:XLA] Fix XLA lowering of TF BroadcastTo operator. (Peter Hawkins, 2018-09-26)
    PiperOrigin-RevId: 214675055
* Changed FusedBatchNorm and FusedBatchNormGrad to use allowed_values for data_format attr. (A. Unique TensorFlower, 2018-09-26)
    PiperOrigin-RevId: 214608039
* [tf:xla] Implement DivNoNan. (A. Unique TensorFlower, 2018-09-21)
    PiperOrigin-RevId: 214076068
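    DivNoNan returns x / y except where the divisor is zero, where it returns 0. A numpy sketch of those semantics (reference behavior of the TF op, not the XLA lowering itself):

        import numpy as np

        def div_no_nan(x, y):
            # 0 wherever y == 0, x / y elsewhere; the inner where avoids
            # evaluating a division by zero.
            return np.where(y == 0, np.zeros_like(x), x / np.where(y == 0, 1, y))

        print(div_no_nan(np.array([1.0, 2.0, 3.0]), np.array([2.0, 0.0, 4.0])))
        # [0.5  0.   0.75]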
* [XLA:CPU] Re-enable half float tests for unary ops (Benjamin Kramer, 2018-09-21)
    This was blocked by an LLVM bug, which was fixed in r342542. PiperOrigin-RevId: 213953743
* Split XlaLaunch into XlaCompile and XlaRun; NFC (Sanjoy Das, 2018-09-20)
    This CL splits the functionality in XlaLaunch into two separate operations:
    - XlaCompile, responsible for compiling a TF function into a LocalExecutable
    - XlaRun, responsible for executing a LocalExecutable created by XlaCompile
    This CL is a stepping stone towards implementing lazy compilation for TF/XLA. The XlaCompile op is spec'ed to return a boolean indicating whether the compilation was successful. Right now that boolean is always set to true by XlaCompile and its value is otherwise ignored, but in the future it will be used to indicate whether the TF function was compiled or not, and thus whether we should execute XlaRun or just directly call the TF function.
    XlaLaunch still exists, and will be created by create_xla_launch_op.cc. In the future we may consider removing it altogether. build_xla_launch_ops.cc, now renamed to build_xla_ops.cc, creates an XlaCompile/XlaRun pair instead of XlaLaunch.
    This CL is organized as follows:
    - jit/ops/xla_ops.cc gets two new XLA-specific operations, XlaCompile and XlaRun, described above. XlaRun redundantly takes the must-be-constant inputs to the TensorFlow cluster to keep the implementation simple (simple in the sense of similar to XlaLaunch), but I will remove this in a subsequent cleanup CL.
    - jit/kernels/xla_ops.cc implements XlaCompile and XlaRun in a fairly straightforward manner. XlaCompile compiles the TF function, puts it in a process-global storage, XlaExecutableClosureStore, and produces an int64 key. XlaRun uses the key to read out the LocalExecutable and execute it. I'm not sure if XlaExecutableClosureStore should be a resource like XlaCompilationCache; I did not immediately see any reason to make it so.
    - There are changes to the various _device files to register XlaCompile and XlaRun for the XLA_* devices.
    - Finally, I had to fix some tests that were expecting XlaLaunch in the execution timeline.
    PiperOrigin-RevId: 213895405
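    The compile/run split follows a common pattern: the compile step populates a process-global store keyed by an opaque id, and the run step looks the executable up by that id. A minimal Python sketch of the pattern (illustrative only; the real implementation in jit/kernels/xla_ops.cc is C++, and every name below other than XlaExecutableClosureStore is hypothetical):

        import itertools

        class ExecutableClosureStore:
            """Process-global key -> executable map, loosely modeled on
            XlaExecutableClosureStore as described above."""
            _next_key = itertools.count()
            _store = {}

            @classmethod
            def produce(cls, executable):
                key = next(cls._next_key)
                cls._store[key] = executable
                return key

            @classmethod
            def consume(cls, key):
                return cls._store.pop(key)

        def xla_compile(fn):
            # Stand-in for XlaCompile: "compile" fn, stash the result, and
            # return (compilation_ok, key). Here compilation is a no-op.
            return True, ExecutableClosureStore.produce(fn)

        def xla_run(key, *args):
            # Stand-in for XlaRun: fetch the executable by key and run it.
            return ExecutableClosureStore.consume(key)(*args)

        ok, key = xla_compile(lambda x: x * 2)
        assert ok
        print(xla_run(key, 21))  # 42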
* [XLA:TF] Whitelist quantized types for CPU/GPU (Benjamin Kramer, 2018-09-20)
    These have the same behavior as unquantized types, so we can just pass them through to XLA (which converts them to unquantized types). They're supposed to be used with special ops, none of which are currently implemented by XLA. Casting (without quantization) and basic math work fine, though.
    These do not have a corresponding numpy type, so only tests using TF types will see them.
    PiperOrigin-RevId: 213781650
* [XLA:TF] Re-disable testRandomUniformIsInRange (Benjamin Kramer, 2018-09-19)
    The bug is still there and makes this test flakily fail with fp16. PiperOrigin-RevId: 213669453
* [XLA:CPU] Add an emitter for erfinv(double) and erfinv(half). (Benjamin Kramer, 2018-09-19)
    This is used by the random number generator. It is the same algorithm as for float, just with more precision. fp16 is upcast to fp32 and then processed with the float algorithm. PiperOrigin-RevId: 213648736
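    The half path can be illustrated with the upcast-compute-downcast pattern (a numpy/scipy sketch of the idea, not the XLA emitter itself):

        import numpy as np
        from scipy.special import erfinv  # reference float implementation

        def erfinv_f16(x):
            # Upcast half to float, run the float algorithm, cast back.
            return erfinv(x.astype(np.float32)).astype(np.float16)

        print(erfinv_f16(np.array([-0.5, 0.0, 0.5], dtype=np.float16)))
        # approx [-0.477  0.     0.477]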
* [TF:XLA] Enable ClipByValue test for integer types (Benjamin Kramer, 2018-09-19)
    This was fixed a while ago. Even though TF allows ClipByValue for complex types, it's not implemented anywhere (and it doesn't make sense for complex numbers), so blacklist complex types. PiperOrigin-RevId: 213615429
* Enable tests for CPU and GPU backends that involve XlaSort. (Adrian Kuegel, 2018-09-19)
    PiperOrigin-RevId: 213611371
* Run CPU tests remotely. (A. Unique TensorFlower, 2018-09-19)
    Being able to run CPU tests remotely while running GPU tests locally required multiple changes:
    1. Unify how we tag GPU tests in TF; we now always use tf_cuda_tests_tags().
    2. Tag tests using tf_cuda_tests_tags() with 'local' and 'gpu'; this keeps them out of non-gpu builds and always runs them locally.
    PiperOrigin-RevId: 213601626
* [XLA:TF] Enable int8 and uint8 support in the bridge for CPU/GPU (Benjamin Kramer, 2018-09-17)
    The test changes are awkward. None of these are XLA bugs; it's just that the op definitions in TensorFlow are really inconsistent. I tried to infer whether the limitation is on signed types, index types, or just arbitrary. In the latter case only int8/uint8 is blacklisted; we should probably lift that requirement at some point.
    PiperOrigin-RevId: 213243906
* Run buildifier on build_defs.bzl (Benjamin Kramer, 2018-09-14)
    PiperOrigin-RevId: 212972521
* [TF:XLA] Split XLA Concat Ops that fail on large sets of inputs. (A. Unique TensorFlower, 2018-09-14)
    Make the test large to prevent occasional timeouts on CPU. This should normally complete in well under a minute. PiperOrigin-RevId: 212953337
* Increase test timeout for xla_ops_test to de-flake. (A. Unique TensorFlower, 2018-09-13)
    PiperOrigin-RevId: 212873250
* Export the XLA dynamic-slice HLO as a TF op (Sanjoy Das, 2018-09-12)
    I need this in a subsequent CL where I'll rewrite the Slice TF op to DynamicSlice in some cases. PiperOrigin-RevId: 212715067
* Parameterize test matrix_band_part_test (Yanan Cao, 2018-09-12)
    PiperOrigin-RevId: 212643986
* Automated rollback of commit 4c936f1b220676d0d427f5f38b4111cfb9011b5a (A. Unique TensorFlower, 2018-09-12)
    PiperOrigin-RevId: 212600364
* Automated rollback of commit c5267a54a63a08234a0314888f6cfe842647a73b (A. Unique TensorFlower, 2018-09-12)
    PiperOrigin-RevId: 212595533
* Move from deprecated self.test_session() to self.cached_session(). (A. Unique TensorFlower, 2018-09-10)
    self.test_session() was deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
    * the fact that the session may be reused.
    * the fact that the session is not closed even when doing a "with self.test_session()" statement.
    PiperOrigin-RevId: 212336417
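    The migration is mechanical in most tests; a minimal before/after sketch (the test itself is hypothetical, but cached_session() is the tf.test.TestCase API this change moves to):

        import tensorflow as tf

        class MyOpTest(tf.test.TestCase):

          def testAdd(self):
            # Before: with self.test_session() as sess:   (deprecated)
            # After: the cached session may be reused across calls and is
            # not closed when the 'with' block exits.
            with self.cached_session() as sess:
              self.assertEqual(sess.run(tf.add(1, 2)), 3)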
* Decluster some must-be-constant ops to reduce XLA recompilations (Sanjoy Das, 2018-09-07)
    The CL is organized as follows:
    - The main change is in jit/partially_decluster_pass.
    - tf2xla/const_analysis now takes an "edge_filter" to facilitate use by jit/partially_decluster_pass.
    - tests/dense_layer_test.py was using the execution of ListDiff as what I assume is a sanity check to see that the XLA cluster ran. With this CL the ListDiff op gets declustered, so we now check for "MatMult" for the sanity check.
    - Some tests were dropping TF_XLA_FLAGS; fixed them to not do so.
    PiperOrigin-RevId: 212071118
* Automated rollback of commit 5a635e3472e16007830fca533c35b2f63fc4f898 (A. Unique TensorFlower, 2018-09-07)
    PiperOrigin-RevId: 211948271
* [TF:XLA] Split XLA Concat Ops that fail on large sets of inputs. (A. Unique TensorFlower, 2018-09-07)
    GPU would fail due to having too many parameters to fit in memory, because Concat's signature is variadic and can have an unlimited number of inputs. PiperOrigin-RevId: 211942734
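    The splitting strategy amounts to chunking one wide concat into a tree of narrower ones. A numpy sketch of the idea (the chunk size here is hypothetical; the actual threshold used by the bridge is not stated in this log):

        import numpy as np

        def chunked_concat(tensors, axis=0, max_args=4):
            # Concatenate at most max_args inputs at a time, then recurse,
            # so no single concat sees an unbounded number of parameters.
            if len(tensors) <= max_args:
                return np.concatenate(tensors, axis=axis)
            chunks = [np.concatenate(tensors[i:i + max_args], axis=axis)
                      for i in range(0, len(tensors), max_args)]
            return chunked_concat(chunks, axis=axis, max_args=max_args)

        parts = [np.full((2,), i) for i in range(10)]
        print(chunked_concat(parts).shape)  # (20,)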
* [XLA] Rename PrecisionConfigProto to PrecisionConfig (David Majnemer, 2018-09-05)
    The "Proto" suffix adds little clarity but makes a long type name even longer. PiperOrigin-RevId: 211693871
* [XLA] Make tensorflow/compiler use absl::{StrCat,string_view,InlinedVector} consistently (Benjamin Kramer, 2018-09-05)
    StringPiece is an alias for absl::string_view, and InlinedVector is aliased to absl::InlinedVector. StrCat is compatible, so swapping it out is safe. PiperOrigin-RevId: 211691840
* [XLA] Update test timeouts so the tests pass with optimized + -UNDEBUG (Benjamin Kramer, 2018-08-31)
    PiperOrigin-RevId: 211134202
* Rollback of a rollback with fixes included. See below for details of the original change. (A. Unique TensorFlower, 2018-08-30)
    This CL fixes two additional CI tests that broke due to the changed bfloat16 behavior.
    ==================================================
    Automated rollback of commit 37b2b0eb613b6c3c66b96374851cfd95050346a0
    PiperOrigin-RevId: 211031073
* [XLA] Rename all (Mutable)ArraySlice to absl::Span. (Tim Shen, 2018-08-30)
    PiperOrigin-RevId: 210998142
* Remove (Mutable)ArraySlice implementation and alias them to absl::Span. (Tim Shen, 2018-08-30)
    There are several API migrations happening:
    * ArraySlice's sub-slice constructor => .subspan
    * MutableArraySlice's container pointer constructor => absl::MakeSpan
    PiperOrigin-RevId: 210946124
* [TF:XLA] Implement full_matrices=False case of QR decomposition (A. Unique TensorFlower, 2018-08-30)
    PiperOrigin-RevId: 210870412
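    For an m x n input with m >= n, full_matrices=False requests the reduced factorization: Q is m x n and R is n x n instead of m x m and m x n. A numpy illustration of the shapes (reference semantics, not the XLA implementation):

        import numpy as np

        a = np.random.randn(6, 3)
        q, r = np.linalg.qr(a, mode='reduced')  # full_matrices=False analogue
        print(q.shape, r.shape)                 # (6, 3) (3, 3)
        print(np.allclose(q @ r, a))            # True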
* Fix FTRL L2-shrinkage behavior: the gradient from the L2 shrinkage term should not end up in the accumulator. (A. Unique TensorFlower, 2018-08-28)
    PiperOrigin-RevId: 210648271
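    With L2 shrinkage, FTRL adjusts the gradient used for the linear-term update, g_shrunk = g + 2 * l2_shrinkage * w, but the accumulator must see only the raw gradient. A simplified numpy sketch of just that distinction (the full FTRL update has more terms):

        import numpy as np

        def ftrl_shrinkage_step(w, accum, g, l2_shrinkage):
            # Shrinkage-adjusted gradient, used for the linear update:
            g_shrunk = g + 2.0 * l2_shrinkage * w
            # The fix: accumulate g * g, not g_shrunk * g_shrunk.
            new_accum = accum + g * g
            return g_shrunk, new_accum

        g_shrunk, accum = ftrl_shrinkage_step(
            np.array([1.0]), np.array([0.0]), np.array([0.5]), l2_shrinkage=0.1)
        print(g_shrunk, accum)  # [0.7] [0.25]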
* Change adadelta_test to large as adadelta_test_cpu is frequently timing out. (A. Unique TensorFlower, 2018-08-28)
    PiperOrigin-RevId: 210613802
* [TF:XLA] Add support for mirror_pad in symmetric mode. (A. Unique TensorFlower, 2018-08-28)
    PiperOrigin-RevId: 210512603
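    SYMMETRIC mode reflects the input including the edge value, while REFLECT mode excludes it. numpy's pad modes show the difference (reference semantics mirroring MirrorPad's two modes):

        import numpy as np

        x = np.array([1, 2, 3])
        print(np.pad(x, 2, mode='symmetric'))  # [2 1 1 2 3 3 2] edge repeated
        print(np.pad(x, 2, mode='reflect'))    # [3 2 1 2 3 2 1] edge not repeated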
* Support returning resource handles from function in XLA (Igor Ganichev, 2018-08-27)
    There are a couple of reasons to do this:
    - Resource handles are regular tensors, part of a public API, that can potentially be returned from a function.
    - When tfe.defun is executed under GradientTape, it generates a function returning resource handles in certain cases.
    This CL adds support for returning resource handles from an XLA compiled function. These resource handles must have been passed as arguments to the function. In other words, we don't yet support returning resources created inside the function. tfe.defun never makes functions that create resources.
    PiperOrigin-RevId: 210442856
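    A minimal sketch of the supported shape of this, using the TF 1.x eager-era API (the function itself is hypothetical): the handle flows in as an argument and straight back out.

        import tensorflow as tf
        import tensorflow.contrib.eager as tfe

        tf.enable_eager_execution()
        v = tf.Variable(3.0)

        @tfe.defun
        def passthrough(handle):
            # Returning a handle that was passed in as an argument is
            # supported; returning a resource created inside is not.
            return handle

        h = passthrough(v.handle)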
* [TF:XLA] Test zero element slice and update documentation. (A. Unique TensorFlower, 2018-08-27)
    Documentation previously disallowed slices where the start and limit indices were the same, but such slices were allowed by the implementation. Updated the documentation to match the implementation. PiperOrigin-RevId: 210379434
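    That is, a slice whose start and limit indices coincide is legal and yields an empty result; in numpy terms:

        import numpy as np

        x = np.arange(10)
        print(x[4:4])        # [] -- start == limit gives zero elements
        print(x[4:4].shape)  # (0,)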
* [XLA] Implement resize_images(BILINEAR, align_corners=false) (A. Unique TensorFlower, 2018-08-24)
    PiperOrigin-RevId: 210129265
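    With align_corners=false, the TF1-era resize maps each output coordinate to src = dst * (in_size / out_size) and linearly interpolates between the two nearest source samples. A 1-D numpy sketch of that mapping (assumed from the op's documented TF1 semantics; the XLA lowering operates on whole images):

        import numpy as np

        def resize_bilinear_1d(x, out_size):
            in_size = x.shape[0]
            scale = in_size / float(out_size)  # align_corners=false scaling
            out = np.empty(out_size)
            for i in range(out_size):
                src = i * scale
                lo = int(np.floor(src))
                hi = min(lo + 1, in_size - 1)
                t = src - lo
                out[i] = (1.0 - t) * x[lo] + t * x[hi]
            return out

        print(resize_bilinear_1d(np.array([0.0, 10.0]), 4))  # [ 0.  5. 10. 10.]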
* Automated rollback of commit 2d4214415269bee2c8c98d5466c540e4004652fd (A. Unique TensorFlower, 2018-08-24)
    PiperOrigin-RevId: 210116745
* [TF:XLA] Add TensorFlow operators that wrap most HLO operators. (Peter Hawkins, 2018-08-23)
    PiperOrigin-RevId: 209997425
* [TF:XLA] Implement BroadcastTo. (Peter Hawkins, 2018-08-23)
    PiperOrigin-RevId: 209988299