* Create mobile testing rules for TF Lite known-portable targets (Austin Anderson, 2018-03-07)
  This CL tags all known-already-portable TF Lite tests as portable, and (from those tests) tags those known as not portable. Adding tflite_portable_test_suite() to the bottom of a package marks all previous cc_tests as "intended to be portable". I've included all tests that I was able to naively make buildable on Android with my previous change that created a custom logging.h library. Most tests are buildable on Android already, but there is something in the common dependencies for the kernel tests that is not compatible with iOS. Outside of Google, this change does nothing except tag tests that are known to not be buildable on certain platforms.
  PiperOrigin-RevId: 188234489
* [TF:XLA] Bump open source llvm revision to r326829 (Sanjoy Das, 2018-03-07)
  PiperOrigin-RevId: 188229669
* [tf.data] Optimize `Dataset.filter()` when the predicate returns one of its args. (Derek Murray, 2018-03-07)
  This change avoids the overhead of function dispatch (~10-15us) when the filter predicate simply returns one of its arguments directly. It also adds a benchmark to track the performance of this optimization. The checkpointing code required minor modifications to enable functions to be instantiated in the `FilterDatasetOp::Compute()` method when an iterator is being restored.
  PiperOrigin-RevId: 188229570
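  A minimal sketch (illustrative, not from this commit) of the kind of predicate the fast path applies to, where the function simply returns one of its arguments:

      import tensorflow as tf

      # A dataset of (feature, keep_flag) pairs; the names are made up for illustration.
      dataset = tf.data.Dataset.from_tensor_slices(
          (tf.constant([1, 2, 3]), tf.constant([True, False, True])))

      # The predicate returns its second argument unchanged, so the runtime can
      # skip the per-element function dispatch described above.
      dataset = dataset.filter(lambda feature, keep_flag: keep_flag)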
* Update graph rewrites for host compute ops (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188228489
* [tpu.datasets]: Improve the performance of the StreamingFilesDataset. (Brennan Saeta, 2018-03-07)
  In order to effectively pipeline the transfers, set num_parallel_calls=4.
  PiperOrigin-RevId: 188227890
* Further small support for quantized unfused LSTMs. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188221169
* Internal Change (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188217110
* TFE_Context gets its local devices from the source instead of a session. (Alexandre Passos, 2018-03-07)
  PiperOrigin-RevId: 188216178
* Add support for padding tf.string tensors on CPU. (RJ Ryan, 2018-03-07)
  PiperOrigin-RevId: 188215092
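  A short illustrative use of tf.pad on a string tensor, assuming the CPU kernel accepts a string fill value via constant_values (the pad token is made up):

      import tensorflow as tf

      words = tf.constant([["a", "bb"], ["ccc", "d"]])
      # Append one column on the right, filled with the "<pad>" string.
      padded = tf.pad(words, paddings=[[0, 0], [0, 1]], constant_values="<pad>")
      # padded has shape [2, 3].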
* eager: Rename in_eager_mode to executing_eagerly and get rid of in_graph_mode. (Asim Shankar, 2018-03-07)
  This is in preparation to introduce one public, stable symbol: tf.executing_eagerly() (i.e., part of moving APIs related to eager execution from "contrib" to a namespace where we provide API stability guarantees)
  PiperOrigin-RevId: 188212646
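  After the rename, mode checks look like the sketch below (shown with the planned public symbol tf.executing_eagerly(); illustrative):

      import tensorflow as tf

      if tf.executing_eagerly():
          print("operations run immediately")
      else:
          print("operations are added to a graph and run in a Session")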
* Convert functions with multiple returns to use a single return. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188212324
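  A hand-written before/after sketch of the transformation described (not the converter's actual output):

      # Before: two return statements.
      def sign(x):
          if x >= 0:
              return 1
          return -1

      # After: a single return at the end of the function.
      def sign_single_return(x):
          if x >= 0:
              retval = 1
          else:
              retval = -1
          return retval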
* Support for transpose convolution. Includes striding, and a reference implementation. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188210975
* Making sure that the proc FLR doesn't get deleted before lib_ (in FunctionBufferingResource). (Rohan Jain, 2018-03-07)
  PiperOrigin-RevId: 188206611
* Move `tf.contrib.bayesflow.layers` to `tfp.layers`. (Joshua V. Dillon, 2018-03-07)
  PiperOrigin-RevId: 188203941
* Optimizations to DepthwiseConv using 3x3 filters. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188202344
* Properly parse input strings in the dependency optimizer (Benoit Steiner, 2018-03-07)
  PiperOrigin-RevId: 188201284
* Docs: Add simple_save section to SavedModel APIs, and add to article intro. Rename headers to make consistent. (Billy Lamberta, 2018-03-07)
  PiperOrigin-RevId: 188199437
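  A minimal sketch of tf.saved_model.simple_save; the placeholder shapes and export path are illustrative:

      import tensorflow as tf

      x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
      y = tf.layers.dense(x, 1, name="y")

      with tf.Session() as sess:
          sess.run(tf.global_variables_initializer())
          # Writes a SavedModel with a default serving signature built from
          # the given inputs and outputs.
          tf.saved_model.simple_save(
              sess, "/tmp/simple_save_example",
              inputs={"x": x}, outputs={"y": y})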
* Migrate Halton Sequence sampler into tensorflow_probability. (Joshua V. Dillon, 2018-03-07)
  PiperOrigin-RevId: 188191091
* boosted_trees: fix the comments about gain by removing a confusing dash. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188191012
* [tf.data] Improve docstring for `tf.data.Dataset.padded_batch()`. (Derek Murray, 2018-03-07)
  PiperOrigin-RevId: 188190458
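  A short usage sketch of Dataset.padded_batch on variable-length elements (illustrative data):

      import tensorflow as tf

      # Elements are vectors of lengths 1, 2, 3, 4.
      dataset = tf.data.Dataset.range(1, 5).map(
          lambda n: tf.fill([tf.cast(n, tf.int32)], n))
      # Each batch is padded to the length of its longest element.
      batched = dataset.padded_batch(2, padded_shapes=[None])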
* Switch the eager GAN MNIST example to object-based checkpointing (Allen Lavoie, 2018-03-07)
  - Removes variable_scopes, since they're no longer necessary (duplicate variable names are OK)
  - Switches up the counters a bit (global_step -> step_counter, checkpoint the epoch counter)
  PiperOrigin-RevId: 188190128
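  A minimal sketch of object-based checkpointing as it existed in contrib at the time (assuming the tf.contrib.eager.Checkpoint symbol; the model and path are illustrative):

      import tensorflow as tf
      tfe = tf.contrib.eager

      model = tf.keras.layers.Dense(10)
      optimizer = tf.train.AdamOptimizer()
      step_counter = tf.train.get_or_create_global_step()

      # Dependencies are tracked by object, not by variable name.
      checkpoint = tfe.Checkpoint(
          model=model, optimizer=optimizer, step_counter=step_counter)
      save_path = checkpoint.save("/tmp/gan_example/ckpt")
      checkpoint.restore(save_path)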
* Add missing equality assertion between the shape of the 2 inputs to the tile op. (Benoit Steiner, 2018-03-07)
  PiperOrigin-RevId: 188190067
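  For reference, tf.tile expects its multiples input to have one entry per dimension of the data input; the mismatch sketched below is the kind of case such an assertion rejects (illustrative):

      import tensorflow as tf

      x = tf.constant([[1, 2], [3, 4]])     # rank 2
      ok = tf.tile(x, multiples=[2, 3])      # valid: one multiple per dimension
      # bad = tf.tile(x, multiples=[2])      # invalid: multiples does not match the input rank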
* Add instrumentation interfaces to the GCS file system. (Brennan Saeta, 2018-03-07)
  PiperOrigin-RevId: 188187793
* Fix tf.train.Saver's max_to_keep when executing eagerly. (Allen Lavoie, 2018-03-07)
  It was keeping everything, since the list of things to delete was reset in build() and build() was called every save.
  PiperOrigin-RevId: 188187349
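  A sketch of the intended behavior, assuming tf.train.Saver is used eagerly with no session and a contrib eager Variable (paths illustrative): with max_to_keep=2, only the two most recent checkpoints should remain after repeated saves.

      import tensorflow as tf
      tf.enable_eager_execution()

      v = tf.contrib.eager.Variable(1.0, name="v")
      saver = tf.train.Saver(var_list=[v], max_to_keep=2)
      for step in range(5):
          # No session is passed when executing eagerly.
          saver.save(None, "/tmp/saver_example/ckpt", global_step=step)
      # Only the checkpoints for steps 3 and 4 should remain on disk.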
* [tf.data] Expose `tf.contrib.data.SqlDataset`. (Derek Murray, 2018-03-07)
  PiperOrigin-RevId: 188185438
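  A usage sketch of the newly exposed dataset; the database path, query, and column types are illustrative:

      import tensorflow as tf

      dataset = tf.contrib.data.SqlDataset(
          driver_name="sqlite",
          data_source_name="/tmp/users.db",
          query="SELECT name, age FROM users",
          output_types=(tf.string, tf.int32))
      iterator = dataset.make_one_shot_iterator()
      name, age = iterator.get_next()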
* Add a template helper that generates expressions from single-statement nodes. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188184507
* Add support for the "DEQUANTIZE" op. This covers only ops that are generated by TOCO in order to handle UINT8 input to floating-point models. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188182372
* Make sure the string returned is a string in Python 3 because of different string handling processes. (Frank Chen, 2018-03-07)
  PiperOrigin-RevId: 188180206
* Update the code to play more nicely with Python3. (A. Unique TensorFlower, 2018-03-07)
  PiperOrigin-RevId: 188167618
* [XLA:GPU] Fuse broadcasts into reduction fusions (Benjamin Kramer, 2018-03-07)
  We didn't do this because reconstructing a layout was hard. With layout_assignment before fusion this becomes much easier. Remove the limitations.
  PiperOrigin-RevId: 188167436
* [XLA:GPU] Move layout_assignment before fusion (Benjamin Kramer, 2018-03-07)
  This will allow code simplification and open up new optimizations. Currently we don't emit layouts inside of fusion, and tracing layouts through fusion is very hard. Changing the pipeline sidesteps this issue. This is mostly perf-neutral.
  PiperOrigin-RevId: 188158481
* Fix ShapeUtil::CompatibleIgnoringElementType for scalar vs tuple comparison (A. Unique TensorFlower, 2018-03-07)
  Previously, if the lhs was a scalar and the rhs was a tuple of arbitrary shape, it reported them as compatible, which is clearly wrong.
  PiperOrigin-RevId: 188155575
* [XLA:GPU] Rewrite elemental emission of bitcasts (Benjamin Kramer, 2018-03-07)
  My first attempt at this only handled bitcasts that implement a reshape operation; now transposes and mixed bitcasts are handled as well. There is probably some optimization potential to reduce the amount of address arithmetic emitted to IR for a follow-up. This is already tested fairly well with the existing test suite; there are failing tests with layout_assignment before fusion without this change.
  PiperOrigin-RevId: 188155082
* Build definition cleanup. (A. Unique TensorFlower, 2018-03-06)
  PiperOrigin-RevId: 188135683
* Typo correction: there is no method `set_stats_aggregator_op(..)` to associate `StatsAggregator` with `iterator`. (Shivani Agrawal, 2018-03-06)
  PiperOrigin-RevId: 188132675
* Minor fixes to tutorials/index.md and programmers_guide/index.md (Mark Daoust, 2018-03-06)
  PiperOrigin-RevId: 188128441
* Makes GLSTMCell accept input of any compatible dimension. (A. Unique TensorFlower, 2018-03-06)
  Currently, GLSTMCell requires that the input dimension is the same as the output dimension. After this change, the input can be any compatible dimension, i.e., anything divisible by the number of groups. The input size is still assumed to be the output size in the case where the innermost dimension of the input is not statically defined.
  PiperOrigin-RevId: 188123536
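  A construction sketch under the assumption that the contrib cell exposes a number_of_groups argument; the sizes are illustrative (an input width of 64 is divisible by 4 groups but no longer has to equal num_units):

      import tensorflow as tf

      cell = tf.contrib.rnn.GLSTMCell(num_units=128, number_of_groups=4)
      inputs = tf.placeholder(tf.float32, shape=[32, 64])
      state = cell.zero_state(batch_size=32, dtype=tf.float32)
      output, new_state = cell(inputs, state)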
* [TF:XLA] Bump open source llvm revision to r326687 (Sanjoy Das, 2018-03-06)
  PiperOrigin-RevId: 188122825
* Made sure all the nodes in the body of an inlined function run in the same frame (Benoit Steiner, 2018-03-06)
  PiperOrigin-RevId: 188121852
* Add basic support for explicit type annotations. This is done by inserting a no-op function call. (A. Unique TensorFlower, 2018-03-06)
  Note that this is meant as a fallback, and we prefer the following alternatives (in this order) for inferring the type:
  1. Automatic from context, e.g. the type of a list based on the elements added to it (WIP)
  2. Type annotations (Python 3.6+ only)
  PiperOrigin-RevId: 188120527
* Add helper function for Xor in HLO. (A. Unique TensorFlower, 2018-03-06)
  RELNOTES: n/a
  PiperOrigin-RevId: 188119450
* Avoid merging colocation sets that include parameter/result buffers (HyoukJoong Lee, 2018-03-06)
  PiperOrigin-RevId: 188117187
* PiperOrigin-RevId: 188112759 (Bjarke Hammersholt Roune, 2018-03-06)
* Adding support for subscripts to qualified names. This also removes the QN copy constructor and adds an assert to ensure that the no attribute/no subscript QN constructor does not receive any strings with '.', '[', or ']'. (A. Unique TensorFlower, 2018-03-06)
  Additionally this changes the self.qn construction to be a tuple of (base QN, attribute/subscript) instead of a concatenation of the base QN and attribute/subscript so that the has_attr and has_subscript fields are handled properly. Constant subscripts are not yet supported.
  PiperOrigin-RevId: 188111933
* Remove dead code. We're guaranteed to have CURLE_OK because we return early above. (Jonathan Hseu, 2018-03-06)
  PiperOrigin-RevId: 188110480
* Fix build. (Shashi Shekhar, 2018-03-06)
  PiperOrigin-RevId: 188109002
* Make graph construction work while graph is being concurrently run. (Skye Wanderman-Milne, 2018-03-06)
  The overall approach is to use Graph._lock to synchronize Session.run calls and construction methods that rely on graph mutation. We don't want to synchronize the actual running of the graph, only the Extend call, so this change exposes an ExtendSession method to the Python API and disables extending automatically in TF_SessionRun.
  PiperOrigin-RevId: 188106818
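  A sketch of the pattern this enables: one thread keeps calling Session.run while another adds ops to the same graph (illustrative; the synchronization itself is internal to the runtime):

      import threading
      import tensorflow as tf

      g = tf.Graph()
      with g.as_default():
          counter = tf.Variable(0)
          increment = tf.assign_add(counter, 1)
          init = tf.global_variables_initializer()

      sess = tf.Session(graph=g)
      sess.run(init)

      def keep_running():
          for _ in range(1000):
              sess.run(increment)

      runner = threading.Thread(target=keep_running)
      runner.start()
      # Extend the graph while it is concurrently being run.
      with g.as_default():
          doubled = tf.multiply(counter, 2)
      runner.join()
      print(sess.run(doubled))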
* Add metadata for gathering information about host compute transfers while compiling XLA. (A. Unique TensorFlower, 2018-03-06)
  PiperOrigin-RevId: 188102740
* Re-enable math_utils_test msan (Allen Lavoie, 2018-03-06)
  PiperOrigin-RevId: 188102388
* [XLA] Store the program shape in the HloModuleProto and HloComputationProto. (A. Unique TensorFlower, 2018-03-06)
  PiperOrigin-RevId: 188100425