Commit message | Author | Age
...
* [XLA:GPU] Check the reduce input shape when multi-output fusing reduces (Benjamin Kramer, 2018-06-12)
  Otherwise we can end up in a situation where incompatible reduces that happen to have the same output shape are fused.
  PiperOrigin-RevId: 200180013
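The hazard this commit guards against can be illustrated with a toy model. The `Reduce` struct and `CanMultiOutputFuse` function below are hypothetical stand-ins for illustration, not the real XLA fusion pass (which operates on `HloInstruction`):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: a reduce characterized only by its operand and
// result shapes (each shape is a list of dimension sizes).
struct Reduce {
  std::vector<int> input_shape;   // shape of the operand being reduced
  std::vector<int> output_shape;  // shape after reduction
};

// Two reduces with the same output shape may still reduce differently
// shaped inputs (e.g. f32[10,20]->f32[10] vs f32[10,30]->f32[10]), so a
// multi-output fusion check must compare the *input* shapes as well,
// not just the outputs.
bool CanMultiOutputFuse(const Reduce& a, const Reduce& b) {
  return a.input_shape == b.input_shape && a.output_shape == b.output_shape;
}
```
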
* Exposes toco_flags and model_flags as optional parameters to allow fine-grained control of conversion. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200155520
* Rollback of changelist checking for static shapes for model function. (A. Unique TensorFlower, 2018-06-11)
  Automated g4 rollback of changelist 200139880.
  PiperOrigin-RevId: 200155130
* Use activation in MUL and ADD operations (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200153612
* Re-enable trainer TPU test. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200151330
* TFLite should allow values of 0 for default_ranges_{min,max}. (Suharsh Sivakumar, 2018-06-11)
  PiperOrigin-RevId: 200149066
* Add `move_dimension` utility to move a single dimension within a Tensor. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200141207
* Add module docstrings that have been missing since new API generation was added. (Anna R, 2018-06-11)
  PiperOrigin-RevId: 200140810
* Checking that TPUEstimator model function features have static shapes. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200139880
* Minor refactoring: put together the ops with no option structs. (Yu-Cheng Ling, 2018-06-11)
  PiperOrigin-RevId: 200139790
* Add support for 8-bit ResizeBilinear and Slice ops to tflite and toco (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200136934
* Split out HloConstantInstruction and HloTraceInstruction as subclasses from HloInstruction. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200135616
* Correct generator path (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200135189
* Remove memory leak in read variable call, and record gradient call. (Akshay Modi, 2018-06-11)
  Fix #19385
  PiperOrigin-RevId: 200132949
* Allow silent copies during remote execution. (Akshay Modi, 2018-06-11)
  This is required to do anything useful from python.
  PiperOrigin-RevId: 200129777
* Improve tfdbg's handling of runtime errors (Shanqing Cai, 2018-06-11)
  * In some cases the RuntimeError object (tf_error in cli_shared.py) doesn't have the op or its name available. Handle that situation properly.
  * Previously, we used the client graph in the debugger CLI whenever it was available. This caused issues in which the device names differed (e.g., "/device:GPU:0" vs "/job:localhost/replica:0/task:0/device:CPU:0"). This CL fixes that by favoring the runtime graph on disk over the client graph; the former has the actual device names. The latter is used only if the former isn't available for some reason (e.g., writing the graph to disk failed).
  PiperOrigin-RevId: 200128582
* [tf.data] Improve the error messages for `Dataset.from_generator()`. (Derek Murray, 2018-06-11)
  In particular:
  * Improve the error message when the generator yields something with the wrong structure.
  * Improve the error message when the generator yields something with the wrong element type.
  PiperOrigin-RevId: 200124096
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200122052
* Make test_locallyconnected_2d_channels_first run in graph and eager modes. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200119934
* SpaceToBatchND supports quantization, so make the transformation know that. (Suharsh Sivakumar, 2018-06-11)
  #19735
  PiperOrigin-RevId: 200118450
* While the DNN is training, use that as the logit for evaluation. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200117729
* Allow adadelta, adagrad, adam, rmsprop, and gradient_descent optimizers to take in callable parameters. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200114810
* [XLA:GPU] Fuse scalar constants (Benjamin Kramer, 2018-06-11)
  This doesn't change codegen directly, but makes dealing with scalar broadcasts much easier and the graph easier to read. This required changing the dot * alpha fusion logic quite a bit, but I think for the better.
  The emitter change is a bit of a hack. The more I look at this code the more broken it seems. Need to find a more sustainable way of emitting what is essentially a memset.
  PiperOrigin-RevId: 200111599
* Fix 'cc_op_gen' to use static storage for constant arrays. (Ilya Biryukov, 2018-06-11)
  Previously, the generator would emit code like this:

      struct Attrs {
        ArraySlice<int> dilations_ = {1, 1, 1, 1};
      };

  This code is incorrect, since the array slice references a temporary object that dies after initialization finishes. After this change the generator will produce static functions to initialize the values:

      struct Attrs {
        ArraySlice<int> dilations_ = Default_dilations();
       private:
        ArraySlice<int> Default_dilations() {
          static int kStorage[] = {1, 1, 1, 1};
          return ArraySlice<int>(kStorage);
        }
      };

  Presumably, it used to work because all compilers chose to use static storage in those cases anyway. However, new versions of clang tend to miscompile this code, causing test failures. (This error was found when trying to upgrade our clang revision from r328903 to r331746.)
  PiperOrigin-RevId: 200110952
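The lifetime bug and its fix can be reproduced without TensorFlow. The `IntSlice` type below is a hypothetical minimal stand-in for `ArraySlice` (a non-owning view), used only to sketch the pattern the commit describes:

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for ArraySlice: a non-owning pointer + length view.
struct IntSlice {
  const int* data = nullptr;
  size_t size = 0;
};

// Broken pattern: initializing an IntSlice member directly from a braced
// temporary array leaves the view dangling once the temporary dies.
// The commit's fix: back the view with function-local static storage,
// whose lifetime spans the whole program.
inline IntSlice DefaultDilations() {
  static int kStorage[] = {1, 1, 1, 1};
  return IntSlice{kStorage, 4};
}

struct Attrs {
  IntSlice dilations_ = DefaultDilations();  // always points at live storage
};
```
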
* [XLA] Sanitize HloComputation and HloInstruction names. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200110003
* Internal Change. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200109989
* Make cond_v2 work with no input tensors. (Skye Wanderman-Milne, 2018-06-11)
  PiperOrigin-RevId: 200103320
* Copy dimensions array into GroupIterable instead of storing pointers to it. (Ilya Biryukov, 2018-06-11)
  This avoids breakages when passing temporary objects, e.g.

      auto it = sparse_tensor.group({0});
      for (auto _ : it) { /* ... */ }

  The API was easy to misuse before, and this actually caused test failures when compiling with a new clang version.
  PiperOrigin-RevId: 200097909
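The own-the-data fix can be sketched in isolation. The `GroupIterable` and `group` names below mirror the commit, but this is a hypothetical toy, not the real SparseTensor API:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Toy sketch of the fix: the iterable stores its own copy of the
// dimension order instead of a pointer into the caller's vector, so
// calling group({0}) with a temporary braced list is safe.
class GroupIterable {
 public:
  // Taking the vector by value and moving it in gives the iterable
  // ownership; no dangling pointer survives the call.
  explicit GroupIterable(std::vector<int> dims) : dims_(std::move(dims)) {}
  const std::vector<int>& dims() const { return dims_; }

 private:
  std::vector<int> dims_;  // owned copy, not a view of caller storage
};

GroupIterable group(std::vector<int> dims) {
  return GroupIterable(std::move(dims));
}
```
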
* [XLA:GPU] Make (r)sqrt emission look through explicit broadcasts. (Benjamin Kramer, 2018-06-11)
  Found by inspection; performance seems neutral.
  PiperOrigin-RevId: 200096482
* [TF:XLA] Bump open source llvm revision to r334405 (Sanjoy Das, 2018-06-11)
  PiperOrigin-RevId: 200096167
* Check to ensure the Cloud TPU is ready before resolving. (Brennan Saeta, 2018-06-11)
  PiperOrigin-RevId: 200095692
* [XLA] Inline constants into fusion nodes in graphviz dump. (Justin Lebar, 2018-06-11)
  Reduces visual noise; makes it easier to see the *actual* parameters.
  PiperOrigin-RevId: 200094095
* [XLA] Allow replay_computation to take an HLO textual string as input. (Justin Lebar, 2018-06-11)
  PiperOrigin-RevId: 200088845
* Use the Keras session for saving/loading in TensorFlow format (Allen Lavoie, 2018-06-11)
  Fixes issues when there's no default session.
  PiperOrigin-RevId: 200088574
* Implement Shape and friends as direct XLA kernels (Igor Ganichev, 2018-06-11)
  PiperOrigin-RevId: 200087766
* [TF:XLA] Small cleanup, removing unused variable in the Cholesky implementation. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200087647
* Add missing ` in docstring that led to misformatted documentation. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200086945
* [XLA] Fold consecutive reduces. (Blake Hechtman, 2018-06-11)
  PiperOrigin-RevId: 200086761
* [TF:XLA] Small performance tweaks for tf.random_shuffle, but still too slow. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200086551
* Add link to TFLite's supported models table and some copyedits (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200080095
* [XLA] Make Log1p & Expm1 available through python (David Majnemer, 2018-06-11)
  PiperOrigin-RevId: 200079654
* Remove a few redundant benchmark parameters. (Shashi Shekhar, 2018-06-11)
  PiperOrigin-RevId: 200079299
* Fix tsan-detected error in core/util/exec_on_stall_test.cc (A. Unique TensorFlower, 2018-06-11)
  Enforce mutex around access to test variable.
  PiperOrigin-RevId: 200078751
* CostGraphDef has been modified to keep track of the accuracy of the cost estimation. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200078367
* Don't call back into Python during insert (which could leave the set in a broken state if the runtime decides to let another thread run). (Akshay Modi, 2018-06-11)
  Thank you for finding the bug. The watched_variables_ set should not really require a lock, since all our functions hold the GIL (verified by looking at the generated SWIG). The reason there was a concurrent access to the set is that the insert was calling back into Python, which might release the GIL and let another thread run; that thread would also attempt to insert a variable and corrupt the set. I included the lock to be safe, though, since it's non-trivial to verify without looking at the generated SWIG wrappers that the GIL is held.
  PiperOrigin-RevId: 200074843
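The reentrancy hazard generalizes beyond the GIL: mutating a container while the mutation itself can invoke arbitrary user code invites reentrant modification. A minimal sketch (the `Tape`/`Watch`/`on_watch` names are hypothetical; `on_watch` stands in for the call back into Python) shows the safe ordering the commit adopts, finishing the insert before calling out:

```cpp
#include <cassert>
#include <functional>
#include <set>

// Hypothetical sketch: a gradient-tape-like object tracking watched
// variable ids. If the insert itself invoked user code (the analogue of
// calling back into Python, which may release the GIL), that code could
// reenter Watch() mid-insert and corrupt the set. Safe ordering:
// complete the mutation first, run the callback after.
struct Tape {
  std::set<int> watched;
  std::function<void(Tape&, int)> on_watch;  // hypothetical reentrant hook

  void Watch(int var_id) {
    bool inserted = watched.insert(var_id).second;     // mutate first
    if (inserted && on_watch) on_watch(*this, var_id); // call out after
  }
};
```

Because the callback runs only after `insert` returns, a callback that reenters `Watch` operates on a consistent set.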
* Remove dead code to use a map in BatchnormExpander (Yunxing Dai, 2018-06-11)
  PiperOrigin-RevId: 200072055
* Introducing a directives module, to contain marker functions such as set_element_type, set_loop_options and others, to replace their counterparts in utils. (Dan Moldovan, 2018-06-11)
  PiperOrigin-RevId: 200069544
* Remove Bayesflow/Distribution/Bijector docs. (A. Unique TensorFlower, 2018-06-11)
  These docs are out of date.
  PiperOrigin-RevId: 200066984
* Add interim runtime utility function for use during refactoring out of Dims. (A. Unique TensorFlower, 2018-06-11)
  PiperOrigin-RevId: 200061346
* [XLA] Simplify lowering of kIsFinite (David Majnemer, 2018-06-11)
  We used something notionally equivalent to "(x == x) && abs(x) != inf" to implement kIsFinite. However, an ordered comparison against infinity will return false for NaN inputs as well, which obviates the need to explicitly test for NaN.
  PiperOrigin-RevId: 200046365
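The simplification rests on IEEE 754 comparison semantics and can be checked in plain C++. `IsFiniteSimplified` below is an illustrative stand-in for the XLA lowering, not XLA code:

```cpp
#include <cmath>
#include <limits>

// An *ordered* comparison is false whenever either operand is NaN, so
// |x| < inf alone rejects NaN, +inf, and -inf in one test. The old
// lowering spelled out the NaN check separately: (x == x) && |x| != inf.
bool IsFiniteSimplified(float x) {
  return std::fabs(x) < std::numeric_limits<float>::infinity();
}
```

For NaN, `fabs(x)` is NaN and `NaN < inf` evaluates to false; for infinities, `inf < inf` is false; every finite value passes.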