Commit message | Author | Age
PiperOrigin-RevId: 181519635
ExecutionProfile::compute_cycle_count never worked for CPU and GPU with HLO
profiling disabled, as far as I can tell.
PiperOrigin-RevId: 181517824
needs to use the scope symbols, not their last assigned value.
PiperOrigin-RevId: 181511978
PiperOrigin-RevId: 181511871
PiperOrigin-RevId: 181511142
Runtime constant folding runs after the graph has been rewritten to include any
feeds, so it's safe and desirable to constant-fold PlaceholderWithDefaults
at this point.
PiperOrigin-RevId: 181510650
PiperOrigin-RevId: 181508517
PiperOrigin-RevId: 181506626
* Nesting is implemented by sharing a single EagerVariableStore among a
  top-level EagerTemplate and all child EagerTemplate objects nested
  underneath it. Variables added to an EagerTemplate object are also added to
  every EagerTemplate object under which it is nested.
* This change also simplifies the implementation of __call__ for both
  Template and EagerTemplate.
PiperOrigin-RevId: 181506600
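The store-sharing idea described above can be sketched in plain Python. This is an illustrative sketch only; `VariableStore`, `Template`, and `add_variable` here are hypothetical stand-ins, not the actual TensorFlow classes:

```python
class VariableStore:
    """A simple container for variables, shared across nested templates."""
    def __init__(self):
        self.variables = []

class Template:
    def __init__(self, name, parent=None):
        self.name = name
        # A nested template reuses its parent's store, so any variable it
        # creates is also visible to every template above it.
        self.store = parent.store if parent is not None else VariableStore()

    def add_variable(self, var):
        self.store.variables.append(var)

outer = Template("outer")
inner = Template("inner", parent=outer)
inner.add_variable("w")
# The variable added by the nested template is tracked by the outer one too.
print(outer.store.variables)  # ['w']
```

Because all nesting levels alias one store, there is no per-level bookkeeping to keep in sync, which is presumably what simplifies __call__.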
The macOS build fails due to a missing include of <array>.
PiperOrigin-RevId: 181506335
PiperOrigin-RevId: 181505090
both client and server side. The thread count is hardcoded to 8 for now; it should be tuned in the future.
PiperOrigin-RevId: 181504374
easier to package custom ops (tfmini) with the core binary on iOS.
PiperOrigin-RevId: 181503662
For example, if you have defined a namedtuple called `MyNamedTuple` and have two values `a = MyNamedTuple(...)` and `b = MyNamedTuple(...)`, you can call `assertAllClose(a, b)` directly to check whether the two namedtuples are elementwise close.
PiperOrigin-RevId: 181501832
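The comparison semantics can be sketched in plain Python. This is a rough approximation under assumed tolerances, not TensorFlow's actual assertAllClose implementation; `all_close` is a hypothetical helper:

```python
import math
from collections import namedtuple

MyNamedTuple = namedtuple("MyNamedTuple", ["x", "y"])

def all_close(a, b, rtol=1e-6, atol=1e-6):
    """Compare two namedtuples (or plain numbers) elementwise."""
    # A namedtuple is a tuple subclass with a _fields attribute.
    if isinstance(a, tuple) and hasattr(a, "_fields"):
        return type(a) is type(b) and all(
            all_close(av, bv, rtol, atol) for av, bv in zip(a, b))
    return math.isclose(a, b, rel_tol=rtol, abs_tol=atol)

a = MyNamedTuple(x=1.0, y=2.0)
b = MyNamedTuple(x=1.0 + 1e-9, y=2.0)
print(all_close(a, b))  # True
```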
These fusion categories are really just a way of expressing a particular
kind of dot or conv. This makes them easier to differentiate from
"proper" fusion nodes.
We also change the category of these instructions so that in the HLO
profile, e.g. conv-fusion shows up under the convolution category,
rather than under "fusion".
PiperOrigin-RevId: 181499300
PiperOrigin-RevId: 181494416
PiperOrigin-RevId: 181494232
* Previously, strong assumptions were made about how numpy.ndarrays
  are formatted as strings. This led to breakages due to certain
  unclear changes in numpy or its dependencies. This CL relaxes the
  assumption and fixes the affected tests for tfdbg and eager.
* The tests in tensor_format_test.py are simplified through helper
  methods.
PiperOrigin-RevId: 181494182
PiperOrigin-RevId: 181493377
types whitelisted to remain uncompiled.
PiperOrigin-RevId: 181493349
PiperOrigin-RevId: 181469026
2) Bug fix: explicitly set tensor pool output_values shape.
PiperOrigin-RevId: 181467812
PiperOrigin-RevId: 181467627
PiperOrigin-RevId: 181422479
This makes the code a bit easier to read, and makes it less likely that we'll
accidentally forget to set common fields for any new ops. A similar pattern is
used for every op:

  ComputationDataHandle ComputationBuilder::Foo(...) {
    OpRequest op_request;
    FooRequest* request = op_request.mutable_foo_request();
    // ... fill in the specific request ...
    return RunOpAndParseResponse(&op_request);
  }

No functional changes.
PiperOrigin-RevId: 181415608
Move InitializeLLVMCommandLineOptions from cpu_compiler.cc to llvm_util.cc to
make it available to the GPU backend.
Call InitializeLLVMCommandLineOptions when initializing the GPU backend.
PiperOrigin-RevId: 181414589
Without this change, if verification of the LLVM IR failed, we'd bail
out before dumping the IR. All this even though our error message
helpfully suggests passing --xla_dump_ir_to!
PiperOrigin-RevId: 181410671
Sqlite now extends tensorflow::core::RefCounted, which is a better practice for
code in the TensorFlow codebase.
A few other trivial changes were snuck in. There's now a db->changes() method.
Error messages will also display the SQLite extended result code, which can be
looked up by hand with some difficulty, in case the error message string
doesn't reflect the whole nuance of something like an I/O error.
PiperOrigin-RevId: 181410358
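The RefCounted lifetime pattern adopted here can be sketched as follows. This Python class is an illustrative stand-in for tensorflow::core::RefCounted, not the real C++ class; `ref`/`unref` mirror its Ref/Unref style:

```python
class RefCounted:
    """Minimal manual reference counting (illustrative sketch)."""
    def __init__(self):
        self._refs = 1          # the creator holds the initial reference
        self.destroyed = False

    def ref(self):
        self._refs += 1

    def unref(self):
        self._refs -= 1
        if self._refs == 0:
            self.destroyed = True   # stand-in for `delete this` in C++

db = RefCounted()
db.ref()      # a second owner takes a reference
db.unref()    # the first owner releases its reference
db.unref()    # the last owner releases; the object is destroyed
print(db.destroyed)  # True
```

The appeal of the pattern is that ownership can be shared across components without any one of them needing to know when the others are done.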
Prior to this change, if upper_edge_hertz was larger than sample_rate / 2 (the highest frequency present in the linear spectrogram), the returned matrix would contain columns that are all zeros.
This is likely a surprising result for those who are unfamiliar with signal processing, so it seems safer to raise an exception on such a misconfiguration than to silently let users generate poorly behaved features.
PiperOrigin-RevId: 181407176
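The check described amounts to validating the band edges against the Nyquist frequency. A minimal sketch of that validation, with `check_mel_args` as a hypothetical helper (not TensorFlow's actual code):

```python
def check_mel_args(sample_rate, lower_edge_hertz, upper_edge_hertz):
    """Reject configurations that would yield all-zero mel filterbank
    columns (illustrative validation sketch)."""
    nyquist = sample_rate / 2.0
    if not 0.0 <= lower_edge_hertz < upper_edge_hertz:
        raise ValueError("lower_edge_hertz must be in [0, upper_edge_hertz)")
    if upper_edge_hertz > nyquist:
        # Frequencies above Nyquist are absent from the spectrogram, so the
        # corresponding mel bands could only ever be zero.
        raise ValueError(
            "upper_edge_hertz (%r) exceeds the Nyquist frequency (%r)"
            % (upper_edge_hertz, nyquist))

check_mel_args(16000.0, 80.0, 7600.0)     # fine: 7600 <= 8000
try:
    check_mel_args(8000.0, 80.0, 7600.0)  # 7600 > 4000: rejected
except ValueError as e:
    print("rejected:", e)
```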
PiperOrigin-RevId: 181405525
PiperOrigin-RevId: 181404919
PiperOrigin-RevId: 181398752
PiperOrigin-RevId: 181397308
Started section.
PiperOrigin-RevId: 181396430
Also add sub-sections to leftnav files,
and sync leftnav and index files.
PiperOrigin-RevId: 181394206
the resource.
This will make it possible to use the experimental `overlay_lib` to
instantiate and run functions from a restored iterator's graph using
the shared `FunctionLibraryRuntime`.
PiperOrigin-RevId: 181392925
PiperOrigin-RevId: 181390058
disabled.
PiperOrigin-RevId: 181390045
If there is only one device, replication/aggregation overhead isn't added. It is okay not to use TowerEstimator if there is only one device, and it is okay to use TowerEstimator without replicate_model_fn.
PiperOrigin-RevId: 181388296
profiling results.
PiperOrigin-RevId: 181387984
op_gen_overrides.pbtxt are a part of tensorflow/core/api_def/base_api/.
PiperOrigin-RevId: 181386873
PiperOrigin-RevId: 181384430
- I worked around the need to rely on Optimizer.__class__ for keeping track of
  the gradients. Now we rely on the order in which they are collected. I also
  added a basic verification that all towers have indeed called the same number
  of optimizers.
- I now allow the user to increment the global step however many times they
  wish.
The changes above allowed me to support using the same optimizer class multiple
times in a tower.
I also renamed GatheringOptimizer to TowerOptimizer, which is a breaking
change. #lifeincontrib
PiperOrigin-RevId: 181381569
PiperOrigin-RevId: 181381477
Kokoro runs: https://source.cloud.google.com/results/invocations/d276e288-4664-4b17-aac2-b0dfaff45b17/targets/%2F%2Ftensorflow%2Fcontrib%2Fdata%2Fpython%2Fkernel_tests:interleave_dataset_op_test/tests
PiperOrigin-RevId: 181374381
PiperOrigin-RevId: 181373542
PiperOrigin-RevId: 181369272
PiperOrigin-RevId: 181365803
This allows constructs of the kind:

  with tfe.GradientTape() as tape:
    tape.gradients(...)
PiperOrigin-RevId: 181358791
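The essence of the construct, querying a recording context before its `with` block exits, can be sketched with a toy tape. This `Tape` class is a hypothetical illustration, not the real tfe.GradientTape API:

```python
class Tape:
    """A toy 'tape' that records operations and can be queried while the
    `with` block is still open (illustrative sketch only)."""
    def __init__(self):
        self.ops = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False  # don't suppress exceptions

    def record(self, op):
        self.ops.append(op)

    def gradients(self):
        # Querying inside the context is allowed: we simply read back
        # whatever has been recorded so far.
        return list(self.ops)

with Tape() as tape:
    tape.record("matmul")
    grads = tape.gradients()   # queried before the block exits
print(grads)  # ['matmul']
```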
PiperOrigin-RevId: 181354785