* Added a utility function to create a grappler item from a function definition (Benoit Steiner, 2018-01-10)
  PiperOrigin-RevId: 181519635
* [XLA] Clean up our handling of ExecutionProfile and add a test case (Sanjoy Das, 2018-01-10)
  ExecutionProfile::compute_cycle_count never worked for CPU and GPU with Hlo profiling disabled, as far as I can tell.
  PiperOrigin-RevId: 181517824
* Fix bug in the conversion of while loops. The while_loop's initial value needs to use the scope symbols, not their last assigned value. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181511978
* Remove host_spec attr from TPU configuration ops since it isn't used any more. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181511871
* Fix python/framework/subscribe.py and test to work with C API enabled. (Skye Wanderman-Milne, 2018-01-10)
  PiperOrigin-RevId: 181511142
* Revert PlaceholderWithDefault logic in constant_folding.cc. (Skye Wanderman-Milne, 2018-01-10)
  Runtime constant folding happens after the graph has been rewritten to include any feeds, so it's safe and desirable to constant fold PlaceholderWithDefaults at this point.
  PiperOrigin-RevId: 181510650
* Adding a new test case for tf.contrib.receptive_field. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181508517
* Remove the gradients function converter now that we can use the tape method. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181506626
* Support nesting EagerTemplate objects. (Akshay Agrawal, 2018-01-10)
  - Nesting is implemented by sharing a single EagerVariableStore among a top-level EagerTemplate and all children EagerTemplate objects that are nested underneath it. Variables added to an EagerTemplate object are also added to all EagerTemplate objects under which it is nested.
  - This change also simplifies the implementation of __call__ for both Template and EagerTemplate.
  PiperOrigin-RevId: 181506600
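  A minimal sketch of the nesting behavior described above (assumes TF 1.x with eager execution enabled, where tf.make_template returns an EagerTemplate; the function and variable names are illustrative, not part of the change):

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()

    def inner_fn(x):
      # Variables created here are tracked by the inner template; when the
      # inner template is called from inside the outer one, they are also
      # tracked by the outer template's variable store.
      w = tf.get_variable("w", shape=[], initializer=tf.ones_initializer())
      return x * w

    inner = tf.make_template("inner", inner_fn)

    def outer_fn(x):
      b = tf.get_variable("b", shape=[], initializer=tf.zeros_initializer())
      return inner(x) + b

    outer = tf.make_template("outer", outer_fn)

    y1 = outer(tf.constant(2.0))
    y2 = outer(tf.constant(3.0))  # Reuses the variables created on the first call.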
* Attempt to fix #15951 (Brian Patton, 2018-01-10)
  The macOS build fails due to a missing include of <array>.
  PiperOrigin-RevId: 181506335
* Fix flaky training tests. Reenable the tests. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181505090
* Allow gRPC workers to configure the number of threads driving work on both the client and server side. (Noah Eisen, 2018-01-10)
  The thread count is hardcoded to 8 for now and should be tuned in the future.
  PiperOrigin-RevId: 181504374
* Add lite version of ios_tensorflow_lib to exclude operations. This makes it easier to package custom ops (tfmini) with the core binary on iOS. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181503662
* Extend `assertAllClose()` so it supports namedtuples. (A. Unique TensorFlower, 2018-01-10)
  For example, if you have defined a namedtuple called `MyNamedTuple` and have two variables `a = MyNamedTuple(...)` and `b = MyNamedTuple(...)`, you can call `assertAllClose(a, b)` directly to check whether the two namedtuples are elementwise close.
  PiperOrigin-RevId: 181501832
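  A short sketch of the extended behavior (the `Point` namedtuple and the values below are made up for illustration):

    import collections
    import tensorflow as tf

    Point = collections.namedtuple("Point", ["x", "y"])

    class NamedTupleCloseTest(tf.test.TestCase):

      def testNamedTuplesComparedElementwise(self):
        a = Point(x=[1.0, 2.0], y=3.0)
        b = Point(x=[1.0, 2.0 + 1e-7], y=3.0)
        # With this change, assertAllClose recurses into the namedtuple
        # fields and compares them elementwise.
        self.assertAllClose(a, b)

    if __name__ == "__main__":
      tf.test.main()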
* [XLA] Name conv/dot fusion nodes conv_fusion.42 / dot_fusion.42. (Justin Lebar, 2018-01-10)
  These fusion categories are really just a way of expressing a particular kind of dot or conv. This makes them easier to differentiate from "proper" fusion nodes.
  We also change the category of these instructions so that in the HLO profile, e.g. conv-fusion shows up under the convolution category, rather than under "fusion".
  PiperOrigin-RevId: 181499300
* Merge changes from github. (Frank Chen, 2018-01-10)
  PiperOrigin-RevId: 181494416
* Add BF16 test for reverse. (Yuanzhong Xu, 2018-01-10)
  PiperOrigin-RevId: 181494232
* Relax text comparison for numpy formatting of arrays (Shanqing Cai, 2018-01-10)
  - Previously, strong assumptions were made about how numpy.ndarrays are formatted as strings. This led to breakages due to certain unclear changes in numpy or its dependencies. This CL relaxes the assumptions and fixes the affected tests for tfdbg and eager.
  - The tests in tensor_format_test.py are simplified through helper methods.
  PiperOrigin-RevId: 181494182
* The gan training test is flaky. Disabling it. (Amit Patankar, 2018-01-10)
  PiperOrigin-RevId: 181493377
* Add support for method calls with no return value. Only supports objects of types whitelisted to remain uncompiled. (A. Unique TensorFlower, 2018-01-10)
  PiperOrigin-RevId: 181493349
* Cleanup (remove unused method) before adding tensorrt configurations. (Guangda Lai, 2018-01-10)
  PiperOrigin-RevId: 181469026
* 1) Bug fix: reuse discriminator_scope when re-applying discriminator_fn. (A. Unique TensorFlower, 2018-01-10)
  2) Bug fix: explicitly set tensor pool output_values shape.
  PiperOrigin-RevId: 181467812
* Add support for more types for Pad. (Nupur Garg, 2018-01-10)
  PiperOrigin-RevId: 181467627
* Handle empty squeeze dimensions. (Yao Zhang, 2018-01-09)
  PiperOrigin-RevId: 181422479
* [TF:XLA] Reduce boilerplate in ComputationBuilder op implementations. (Todd Wang, 2018-01-09)
  This makes the code a bit easier to read, and makes it less likely that we'll accidentally forget to set common fields for any new ops. A similar pattern is used for every op:

    ComputationDataHandle ComputationBuilder::Foo(...) {
      OpRequest op_request;
      FooRequest* request = op_request.mutable_foo_request();
      // ... fill in specific request ...
      return RunOpAndParseResponse(&op_request);
    }

  No functional changes.
  PiperOrigin-RevId: 181415608
* [XLA::GPU] Pass xla_backend_extra_options to the GPU backend. (A. Unique TensorFlower, 2018-01-09)
  Move InitializeLLVMCommandLineOptions from cpu_compiler.cc to llvm_util.cc to make it available to the GPU backend. Call InitializeLLVMCommandLineOptions when initializing the GPU backend.
  PiperOrigin-RevId: 181414589
* [XLA] Ensure that IR is dumped if the LLVM verifier fails. (Justin Lebar, 2018-01-09)
  Without this change, if verification of the LLVM IR failed, we'd bail out before dumping the IR. All this even though our error message helpfully suggests passing --xla_dump_ir_to!
  PiperOrigin-RevId: 181410671
* Remove smart pointers from SQLite veneer (Justine Tunney, 2018-01-09)
  Sqlite now extends tensorflow::core::RefCounted, which is a better practice for code in the TensorFlow codebase. A few other trivial changes were snuck in. There's now a db->changes() method. Error messages will also display the SQLite extended result code, which can be looked up by hand with some difficulty, just in case the error message string doesn't reflect the whole nuance of something like an I/O error.
  PiperOrigin-RevId: 181410358
* Add assertion to prevent generation of degenerate linear_to_mel_weight_matrix. (A. Unique TensorFlower, 2018-01-09)
  Prior to this change, if upper_edge_hertz is larger than sample_rate / 2 (the highest frequency present in the linear spectrogram), the returned matrix would contain columns that are all zeros. This is likely a surprising result for those who are unfamiliar with signal processing, so it seems safer to raise an exception on such a misconfiguration than to silently allow users to generate poorly behaved features.
  PiperOrigin-RevId: 181407176
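  For context, a minimal sketch of building this matrix with tf.contrib.signal as it existed around this release (the parameter values are illustrative):

    import tensorflow as tf

    sample_rate = 16000.0
    num_spectrogram_bins = 513  # e.g. from a 1024-point STFT

    # Valid configuration: upper_edge_hertz stays at or below the Nyquist
    # frequency (sample_rate / 2), so every mel band maps onto frequencies
    # the linear spectrogram can actually represent.
    mel_matrix = tf.contrib.signal.linear_to_mel_weight_matrix(
        num_mel_bins=64,
        num_spectrogram_bins=num_spectrogram_bins,
        sample_rate=sample_rate,
        lower_edge_hertz=80.0,
        upper_edge_hertz=7600.0)

    # If upper_edge_hertz were set above sample_rate / 2 (say 10000.0 here),
    # the highest mel bands would correspond to columns of all zeros; after
    # this change such a call raises an error instead of silently returning
    # a degenerate matrix.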
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181405525
* Raise an error if the variable names in WarmStartSettings aren't actually used. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181404919
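  A minimal sketch of where these variable names come from (TF 1.x Estimator warm-starting; the checkpoint path, variable pattern, and model_fn below are hypothetical):

    import tensorflow as tf

    def my_model_fn(features, labels, mode):  # hypothetical minimal model_fn
      logits = tf.layers.dense(features["x"], units=1, name="dense")
      loss = tf.losses.mean_squared_error(labels, logits)
      train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
          loss, global_step=tf.train.get_global_step())
      return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    ws = tf.estimator.WarmStartSettings(
        ckpt_to_initialize_from="/tmp/previous_model/model.ckpt",
        vars_to_warm_start="dense.*")  # regex over variable names to warm-start

    # If no variable in the new model matched "dense.*", the mismatch could
    # previously go unnoticed; with this change it raises an error.
    estimator = tf.estimator.Estimator(
        model_fn=my_model_fn,
        warm_start_from=ws)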
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181398752
* profiler C++ API. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181397308
* Added a "Getting Started with TensorFlow for ML Beginners" chapter to Get Started section. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181396430
* Replace get_started (Mark Daoust, 2018-01-09)
  Also add sub-sections to leftnav files, and sync leftnav and index files.
  PiperOrigin-RevId: 181394206
* [tf.data] Store the `FunctionLibraryDefinition` for restored iterators in the resource. (Derek Murray, 2018-01-09)
  This will make it possible to use the experimental `overlay_lib` to instantiate and run functions from a restored iterator's graph using the shared `FunctionLibraryRuntime`.
  PiperOrigin-RevId: 181392925
* Remove regex for Gather tests. We don't test int64 indices at all. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181390058
* Do not record operations or watch tensors/variables when the thread tape is disabled. (Alexandre Passos, 2018-01-09)
  PiperOrigin-RevId: 181390045
* Make replicate_model_fn more flexible to simplify users' model_fn. (Igor Saprykin, 2018-01-09)
  If there is only one device, then replication/aggregation overhead isn't added. It is okay to not use TowerEstimator if there is only one device. It is okay to use TowerEstimator but not use replicate_model_fn.
  PiperOrigin-RevId: 181388296
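  A rough sketch of the usage this refers to (tf.contrib.estimator names as of this point in contrib, which may differ from later releases; the model_fn body and hyperparameters are illustrative):

    import tensorflow as tf

    def model_fn(features, labels, mode):
      logits = tf.layers.dense(features["x"], units=1)
      loss = tf.losses.mean_squared_error(labels, logits)
      # The optimizer is wrapped so gradients can be aggregated across towers.
      optimizer = tf.contrib.estimator.TowerOptimizer(
          tf.train.GradientDescentOptimizer(learning_rate=0.01))
      train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
      return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # replicate_model_fn runs the model_fn once per available device; per this
    # change, a single-device setup adds no replication/aggregation overhead.
    estimator = tf.estimator.Estimator(
        model_fn=tf.contrib.estimator.replicate_model_fn(model_fn))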
* [tpu:profiler] Capture the data for generating an overview page of the profiling results. (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181387984
* Removing op_gen_overrides.proto and references. Overrides in op_gen_overrides.pbtxt are a part of tensorflow/core/api_def/base_api/. (Anna R, 2018-01-09)
  PiperOrigin-RevId: 181386873
* Allow 1D, 2D and 3D tensors in L2Norm (A. Unique TensorFlower, 2018-01-09)
  PiperOrigin-RevId: 181384430
* Simplify replicate_model_fn.GatheringOptimizer inspired by the past comments. (Igor Saprykin, 2018-01-09)
  - I worked around the need to rely on Optimizer.__class__ for keeping track of the gradients. Now we are relying on the order of collecting them. I also added a basic verification that all towers have indeed called the same number of optimizers.
  - I now allow the user to increment global step however many times they wish.
  The changes above allowed me to support using the same optimizer class multiple times in a tower. I also renamed GatheringOptimizer to TowerOptimizer, which is a breaking change. #lifeincontrib
  PiperOrigin-RevId: 181381569
* Better detect the amount of device memory that's available (Benoit Steiner, 2018-01-09)
  PiperOrigin-RevId: 181381477
* Disable contrib/data interleave_dataset_op_test as it is timing out on some Kokoro runs: https://source.cloud.google.com/results/invocations/d276e288-4664-4b17-aac2-b0dfaff45b17/targets/%2F%2Ftensorflow%2Fcontrib%2Fdata%2Fpython%2Fkernel_tests:interleave_dataset_op_test/tests (Frank Chen, 2018-01-09)
  PiperOrigin-RevId: 181374381
* [TF:XLA] Bump open source llvm revision to r322011 (Sanjoy Das, 2018-01-09)
  PiperOrigin-RevId: 181373542
* [TF:XLA] Use broadcasts instead of larger constants for image resizing. (Blake Hechtman, 2018-01-09)
  PiperOrigin-RevId: 181369272
* Automated g4 rollback of changelist 180691955 (Anna R, 2018-01-09)
  PiperOrigin-RevId: 181365803
* Extend the type info analyzer to cover variables declared using with statements. (A. Unique TensorFlower, 2018-01-09)
  This allows constructs of the kind:

    with tfe.GradientTape() as tape:
      tape.gradients(...)

  PiperOrigin-RevId: 181358791
* Introduce global_id_in_cluster in RunConfig. (Jianwei Xie, 2018-01-09)
  PiperOrigin-RevId: 181354785