Commit message (Author, Age)
* Fix nccl for remote builds. (A. Unique TensorFlower, 2018-09-06)
  Instead of symlinking the install dir, copy the two files we need. Symlinking a system dir like /usr is generally problematic as it can quickly lead to miscompiles for unrelated reasons. Furthermore, bazel will consider it an error if /usr is linked in and contains a recursive symlink in /usr/bin/X11 -> .
  PiperOrigin-RevId: 211842260
* Fix cuda remote build setup. (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211842211
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211840928
* Fix link generator for module level constants. (Mark Daoust, 2018-09-06)
  Moved _is_free_function to parser.is_free_function.
  Merged the `is_class` and `is_module` properties into `is_fragment`, since this is the only thing they were being used for. With the additions to `pretty_docs.py`, all documented objects either have a page to themselves or a `#id` fragment on their parent's page; the `is_fragment` property indicates which. In all uses of `documentation_path` except `reference_to_url`, it's safe to assume that `is_fragment` is `False` (this is the current correct behavior).
  Fixes #20913
  PiperOrigin-RevId: 211838909
* Add StaticRegexFullMatch, which can be used in place of RegexFullMatch when the regex pattern is fixed. (A. Unique TensorFlower, 2018-09-06)
  This allows the op to perform the expensive regex compilation once upon creation instead of on each call to Compute.
  RELNOTES: Performance improvements for regex full match operations.
  PiperOrigin-RevId: 211835278
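  As a sketch of what this optimization means at the Python level, assuming the public `tf.regex_full_match` wrapper can use the static kernel when the pattern is a constant string rather than a tensor (the kernel selection itself is internal):

      import tensorflow as tf

      strings = tf.constant(["TensorFlow 1.11", "release notes"])

      # Constant pattern: the regex can be compiled once when the op is
      # created, matching the optimization described above.
      static_match = tf.regex_full_match(strings, r"TensorFlow \d+\.\d+")

      # Tensor-valued pattern: the regex is only known at run time, so it
      # must be recompiled on each execution.
      pattern = tf.placeholder(tf.string, shape=[])
      dynamic_match = tf.regex_full_match(strings, pattern)

      with tf.Session() as sess:
          print(sess.run(static_match))                          # [ True False]
          print(sess.run(dynamic_match, {pattern: r".*notes"}))  # [False  True]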
* Small improvements to handling of Datasets in Keras. (A. Unique TensorFlower, 2018-09-06)
  * Allow sparse labels to work with Datasets.
  * Allow sample_weights to be passed as the third output of a Dataset (like how generator input is treated).
  PiperOrigin-RevId: 211834259
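  A minimal sketch of the second improvement, assuming `model.fit` accepts a dataset whose elements are (features, labels, sample_weights) tuples, mirroring the three-output generator convention; the shapes, model, and step counts are illustrative:

      import numpy as np
      import tensorflow as tf
      from tensorflow import keras

      features = np.random.rand(100, 8).astype(np.float32)
      labels = np.random.randint(0, 2, size=(100, 1)).astype(np.float32)
      sample_weights = np.random.rand(100).astype(np.float32)

      # Dataset elements are (features, labels, sample_weights) tuples, the
      # same convention used for three-output generators.
      dataset = tf.data.Dataset.from_tensor_slices(
          (features, labels, sample_weights)).batch(10).repeat()

      model = keras.Sequential([
          keras.layers.Dense(16, activation="relu", input_shape=(8,)),
          keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="sgd", loss="binary_crossentropy")

      # In the 1.x API, datasets are passed with an explicit steps_per_epoch.
      model.fit(dataset, steps_per_epoch=10, epochs=2)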
* Ignore partitioned variable in TPU computation. (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211833891
* Add feature_util build target so the library can be included in a lightweight way (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211833556
* Job name should be picked based on the cluster_spec (Sourabh Bajaj, 2018-09-06)
  PiperOrigin-RevId: 211833041
* Automated rollback of commit 4cd79b3f6361b6518463349a51fe33f7520f3b49 (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211832421
* Add python test for While op lowering. (Saurabh Saxena, 2018-09-06)
  Test that fetching values of while outputs in sess.run by tensor name works. This tests that an IdentityN node with the same name and outputs as the original while op was added to the graph during lowering.
  PiperOrigin-RevId: 211827934
* Update docstring for BoostedTrees n_batches_per_layer. (Zhenyu Tan, 2018-09-06)
  PiperOrigin-RevId: 211824645
* Extend ConditionalAccumulator with SUM functionality. (Zhenyu Tan, 2018-09-06)
  Previously, take_grad returned the average of the gradients being aggregated. However, averaging does not cover other use cases, such as summing quantiles or summing probability distributions from parallel workers. This change extends the functionality to support summation as well.
  PiperOrigin-RevId: 211824519
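  A minimal sketch of the summing use case, assuming the new behavior is selected with a `reduction_type` argument on the accumulator constructor (the exact signature may differ; values and shapes are illustrative):

      import tensorflow as tf

      # Accumulator that sums, rather than averages, the applied values.
      acc = tf.ConditionalAccumulator(
          dtype=tf.float32, shape=tf.TensorShape([]), reduction_type="SUM")

      # Two workers each contribute a partial result for local_step 0.
      apply_ops = [acc.apply_grad(tf.constant(1.5), local_step=0),
                   acc.apply_grad(tf.constant(2.5), local_step=0)]

      # take_grad blocks until `num_required` values have been applied,
      # then returns their sum (4.0 here) instead of their mean.
      total = acc.take_grad(num_required=2)

      with tf.Session() as sess:
          sess.run(apply_ops)
          print(sess.run(total))  # 4.0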
* Replace Placeholder with Const to GrapplerFunctionItem for function shape inference if possible. (Doe Hyun Yoon, 2018-09-06)
  PiperOrigin-RevId: 211821596
* Add HloSchedule to HloModule. (Mark Heffernan, 2018-09-06)
  Add HloSchedule as a field on HloModule. This will enable scheduling to be a normal HLO pass and enable some passes, such as copy insertion, to more easily use tighter instruction live ranges based on the schedule. This change required adding HloSchedule to the "hlo" library because of circular dependencies. Nothing except for tests actually sets the schedule at the moment, but follow-up CLs will add a scheduling pass which will do so.
  PiperOrigin-RevId: 211815293
* Add a command line option to serialize api-reference resolver. (Mark Daoust, 2018-09-06)
  PiperOrigin-RevId: 211813852
* Documentation fix for TensorShape.__getitem__ (A. Unique TensorFlower, 2018-09-06)
  RELNOTES: n/a
  PiperOrigin-RevId: 211804843
* Documentation fix for tf.regex_full_match (A. Unique TensorFlower, 2018-09-06)
  RELNOTES: n/a
  PiperOrigin-RevId: 211798892
* Documentation fixes for segment_* and unsorted_segment_* ops (A. Unique TensorFlower, 2018-09-06)
  RELNOTES: n/a
  PiperOrigin-RevId: 211798876
* compat: Update forward compatibility horizon to 2018-09-06 (A. Unique TensorFlower, 2018-09-06)
  PiperOrigin-RevId: 211770067
* Parse feature_group_count attributes of CustomCall ops. (Adrian Kuegel, 2018-09-06)
  PiperOrigin-RevId: 211762464
* [FLR] Simplify the Run() (custom callframe) implementation. (Derek Murray, 2018-09-05)
  Profiling showed that we were wastefully (i) heap-allocating and freeing an Executor::Args object on each call, and (as a result) (ii) incurring extra function dispatch overhead in the callback.
  PiperOrigin-RevId: 211755493
* Allow creating a py EagerTensor that shares the underlying TensorHandle. (Akshay Modi, 2018-09-05)
  This is so that gradients with respect to scalars pass (see the test added in backprop_test.py). A micro benchmark just calling constant_op.constant slows down a bit - this is inevitable as we are creating a new python object.
  After: walltime: ~2.1
  Before: walltime: ~1.47
  Linear regression benchmark is pretty much unchanged.
  PiperOrigin-RevId: 211753801
* Add `TraceCollector::IsEnabled(bool)` method in order to test when tracing is enabled. (Derek Murray, 2018-09-05)
  Some builds install a `TraceCollector` at process startup, but it is mostly not enabled. This inhibits the recent optimization to avoid accessing `OpKernel::name()` and `OpKernel::type_string()` every time a kernel is launched. By caching the `TraceCollector` in the `TracingDevice` and adding a method to enquire about its state, we increase the applicability of the optimization.
  PiperOrigin-RevId: 211752728
* Fix ordering of tf.GraphKeys.VARIABLES line in renames_v2.py (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211744058
* changes to ctc_beam_search (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211741560
* [tf.data] Move all C++ code inside the `tensorflow::data` namespace. (Derek Murray, 2018-09-05)
  PiperOrigin-RevId: 211733735
* Modify tags for internal CI (Austin Anderson, 2018-09-05)
  PiperOrigin-RevId: 211730301
* Deprecate `tf.train.input_producer()` and related APIs. (Derek Murray, 2018-09-05)
  These APIs are based on queue runners, which have been deprecated and will be removed in TensorFlow 2.0. They have been replaced with `tf.data.Dataset`, which provides a more efficient version of the same functionality.
  PiperOrigin-RevId: 211727844
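  A minimal sketch of the migration this points to, for the common case of producing file names (file names and epoch counts are illustrative):

      import tensorflow as tf

      filenames = ["train-0.tfrecord", "train-1.tfrecord"]

      # Deprecated queue-runner style: an input producer that must be driven
      # by tf.train.start_queue_runners.
      filename_queue = tf.train.string_input_producer(filenames, num_epochs=2)

      # tf.data replacement: the same behavior expressed as a Dataset
      # pipeline, consumed through an iterator instead of a queue.
      dataset = (tf.data.Dataset.from_tensor_slices(filenames)
                 .shuffle(buffer_size=len(filenames))
                 .repeat(2))
      next_filename = dataset.make_one_shot_iterator().get_next()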
* Add cuboid convolution benchmarks. (Eugene Zhulenev, 2018-09-05)
  PiperOrigin-RevId: 211727610
* Rollforward of cl/211656888 after fixing failing unit test. (Mark Heffernan, 2018-09-05)
  *** Original change description ***
  Add HloSchedule class representing a sequential order of an HloModule. Currently we represent a sequential schedule of a module using a SequentialHloOrdering::HloModuleSequence which is a type alias of a bare map from HloComputation* to std::vector<HloInstruction*>. This CL replaces this with a proper class which results in better encap...
  ***
  PiperOrigin-RevId: 211726890
* This CL changes the graph-mode API of the learning_rate_decay functions in TF 2.0 to return a no-arg callable to output a learning rate, instead of directly outputting a learning rate tensor. (A. Unique TensorFlower, 2018-09-05)
  This brings the graph mode API in line with the eager execution API, where this change was made to allow changing the learning rate value across different invocations of optimizer functions.
  PiperOrigin-RevId: 211726295
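  A minimal sketch of the calling convention this describes, using `tf.train.exponential_decay` as a stand-in for the affected learning_rate_decay functions and assuming the optimizer accepts a callable learning rate; both points are inferred from the description above rather than from the change itself:

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()

      # Under the 1.x graph API this returns a learning-rate tensor; under
      # the TF 2.0 behavior described above it returns a no-arg callable.
      lr = tf.train.exponential_decay(
          learning_rate=0.1, global_step=global_step,
          decay_steps=1000, decay_rate=0.96)

      w = tf.Variable(1.0)
      loss = tf.square(w)

      # A callable learning rate is re-evaluated each time the optimizer
      # runs, which is what lets the value change between invocations.
      optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
      train_op = optimizer.minimize(loss, global_step=global_step)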
* [Keras / Cloud TPU]: Correct indexing for software pipelining. (Brennan Saeta, 2018-09-05)
  PiperOrigin-RevId: 211724843
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211722113
* Upload floating point mobilenet-v2 and resnet-v2-101 models. (Raghuraman Krishnamoorthi, 2018-09-05)
  Also upload fully quantized mobilenet-v2 and inception-v3 models.
  PiperOrigin-RevId: 211721504
* Propagate eager output tensor types in TFLite (Jared Duke, 2018-09-05)
  PiperOrigin-RevId: 211721354
* Fix lite_test.py. (Nupur Garg, 2018-09-05)
  PiperOrigin-RevId: 211719399
* disable msan in failing test (Olivia Nordquist, 2018-09-05)
  PiperOrigin-RevId: 211719342
* Re-added proto field for dynamic learning rate support (not usable yet). (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211719009
* Implements TPU alltoall op. (Youlong Cheng, 2018-09-05)
  RELNOTES: n/a
  PiperOrigin-RevId: 211718248
* [tf.data] Surface errors correctly in MapDefunOp by using different CancellationManagers for each run of the function. (Rachel Lim, 2018-09-05)
  PiperOrigin-RevId: 211717580
* Fix several build warnings in TFLite (Jared Duke, 2018-09-05)
  PiperOrigin-RevId: 211715608
* Mark tf.GraphKeys.VARIABLES as deprecated (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211714574
* Temporarily disable distributed coordinator training when using TPUStrategy (Frank Chen, 2018-09-05)
  PiperOrigin-RevId: 211712907
* Update diagram in TOCO README. (Nupur Garg, 2018-09-05)
  PiperOrigin-RevId: 211711493
* Expose an axis argument for VocabInfo, which allows for warm-starting of the second axis of Tensors through tf.train.warm_start. (Eddie Zhou, 2018-09-05)
  Note that the underlying initializer already has this functionality (for example, for output layers).
  PiperOrigin-RevId: 211709879
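  A minimal sketch of warm-starting along the second axis, assuming `axis` is appended to the existing `tf.train.VocabInfo` arguments; paths, sizes, and variable names are illustrative:

      import tensorflow as tf

      # Warm-start the output dimension (axis=1) of an output-layer kernel
      # whose class vocabulary changed between checkpoints.
      vocab_info = tf.train.VocabInfo(
          new_vocab="new_class_vocab.txt",
          new_vocab_size=120,
          num_oov_buckets=0,
          old_vocab="old_class_vocab.txt",
          old_vocab_size=100,
          backup_initializer=tf.zeros_initializer(),
          axis=1)

      tf.train.warm_start(
          ckpt_to_initialize_from="/tmp/old_model",
          vars_to_warm_start="output_layer.*",
          var_name_to_vocab_info={"output_layer/kernel": vocab_info})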
* Deprecate `tf.ReaderBase` and related APIs. (Derek Murray, 2018-09-05)
  These APIs are based on queue runners, which have been deprecated and will be removed in TensorFlow 2.0. They have been replaced with `tf.data.Dataset`, which provides a more efficient version of the same functionality.
  PiperOrigin-RevId: 211708268
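  For the reader classes specifically, the migration looks like the following sketch (file names are illustrative); `tf.data.TextLineDataset` replaces the `tf.TextLineReader` plus filename-queue pattern:

      import tensorflow as tf

      filenames = ["data-0.csv", "data-1.csv"]

      # Deprecated reader style: a TextLineReader pulling file names off a
      # queue populated by queue runners.
      filename_queue = tf.train.string_input_producer(filenames)
      reader = tf.TextLineReader()
      _, line_from_reader = reader.read(filename_queue)

      # tf.data replacement: TextLineDataset yields the same lines without
      # queue runners and composes with map/batch/prefetch transformations.
      lines = tf.data.TextLineDataset(filenames)
      next_line = lines.make_one_shot_iterator().get_next()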
* Fold CapturingGraph into FuncGraph. (Skye Wanderman-Milne, 2018-09-05)
  There's no need for the two separate classes anymore. This also cleans up some other parts of the interface:
  * Removes the clear_resource_control_flow_state, which isn't used anywhere
  * Makes capture_value a private method of FuncGraph (_capture_helper)
  * Makes create_substitute_placeholder private
  PiperOrigin-RevId: 211707906
* Remove logging which generates tons of logs for large models. (Jianwei Xie, 2018-09-05)
  PiperOrigin-RevId: 211707155
* [tf.data] Minor fix to remove unnecessary difference between the implementations of the batch and padded batch reducers. (Jiri Simsa, 2018-09-05)
  PiperOrigin-RevId: 211706766