Commit message | Author | Age

For example, if you have defined a namedtuple called `MyNamedTuple` and have two instances `a = MyNamedTuple(...)` and `b = MyNamedTuple(...)`, you can call `assertAllClose(a, b)` directly to check whether the two namedtuples are elementwise close.
PiperOrigin-RevId: 181501832
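The field-by-field comparison described above can be sketched in plain Python. This is an illustrative toy (`MyNamedTuple` and `all_close` are made-up names, and `math.isclose` stands in for the real `assertAllClose` tolerance logic), not the TensorFlow implementation:

```python
import math
from collections import namedtuple

# Hypothetical namedtuple mirroring the commit's `MyNamedTuple` example.
MyNamedTuple = namedtuple("MyNamedTuple", ["x", "y"])

def all_close(a, b, rtol=1e-6, atol=1e-6):
    """Sketch of elementwise closeness for namedtuples: same type, same
    fields, and every corresponding pair of values numerically close."""
    assert type(a) is type(b) and a._fields == b._fields
    return all(math.isclose(fa, fb, rel_tol=rtol, abs_tol=atol)
               for fa, fb in zip(a, b))

a = MyNamedTuple(x=1.0, y=2.0)
b = MyNamedTuple(x=1.0 + 1e-9, y=2.0)
print(all_close(a, b))  # True
```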
These fusion categories are really just a way of expressing a particular
kind of dot or conv. This makes them easier to differentiate from
"proper" fusion nodes.
We also change the category of these instructions so that in the HLO
profile, e.g. conv-fusion shows up under the convolution category,
rather than under "fusion".
PiperOrigin-RevId: 181499300
PiperOrigin-RevId: 181494416
PiperOrigin-RevId: 181494232
* Previously, strong assumptions were made about how numpy.ndarrays
are formatted as strings. This led to breakages when numpy or its
dependencies changed their formatting behavior. This CL relaxes the
assumptions and fixes the affected tests for tfdbg and eager.
* The tests in tensor_format_test.py are simplified through helper
methods.
PiperOrigin-RevId: 181494182
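A sketch of the relaxed style of assertion (illustrative, not the actual tfdbg test code, and it assumes numpy is available): instead of matching numpy's formatted output verbatim, recover the numeric values from the string and compare them numerically:

```python
import re
import numpy as np

a = np.array([1.0, 2.5, 3.0])
text = np.array2string(a)  # e.g. "[1.  2.5 3. ]"; exact layout varies by version

# Robust check: recover the numeric values from the formatted string and
# compare them numerically, rather than pinning the exact spacing/precision.
values = [float(tok)
          for tok in re.findall(r"[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?", text)]
robust = np.allclose(values, a)
```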
PiperOrigin-RevId: 181493377
types whitelisted to remain uncompiled.
PiperOrigin-RevId: 181493349
PiperOrigin-RevId: 181469026
2) Bug fix: explicitly set tensor pool output_values shape.
PiperOrigin-RevId: 181467812
PiperOrigin-RevId: 181467627
PiperOrigin-RevId: 181422479
This makes the code a bit easier to read, and makes it less likely that we'll
accidentally forget to set common fields for a new op. A similar pattern is
used for every op:

  ComputationDataHandle ComputationBuilder::Foo(...) {
    OpRequest op_request;
    FooRequest* request = op_request.mutable_foo_request();
    // ... fill in specific request ...
    return RunOpAndParseResponse(&op_request);
  }

No functional changes.
PiperOrigin-RevId: 181415608
Move InitializeLLVMCommandLineOptions from cpu_compiler.cc to llvm_util.cc to
make it available to the GPU backend.
Call InitializeLLVMCommandLineOptions when initializing the GPU backend.
PiperOrigin-RevId: 181414589
Without this change, if verification of the LLVM IR failed, we'd bail
out before dumping the IR. All this even though our error message
helpfully suggests passing --xla_dump_ir_to!
PiperOrigin-RevId: 181410671
Sqlite now extends tensorflow::core::RefCounted, which is better practice for
code in the TensorFlow codebase.
A few other trivial changes snuck in: there is now a db->changes() method, and
error messages now display the SQLite extended result code, which can be looked
up by hand (with some difficulty) in case the error message string doesn't
reflect the whole nuance of something like an I/O error.
PiperOrigin-RevId: 181410358
Prior to this change, if upper_edge_hertz was larger than sample_rate / 2 (the highest frequency representable in the linear spectrogram), the returned matrix would contain columns that are all zeros.
This is likely a surprising result for those unfamiliar with signal processing, so it is safer to raise an exception on such a misconfiguration than to silently let users generate poorly behaved features.
PiperOrigin-RevId: 181407176
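A minimal sketch of the validation this change introduces. The function and parameter names below are illustrative, not the exact TensorFlow signature:

```python
def validate_mel_edges(lower_edge_hertz, upper_edge_hertz, sample_rate):
    """Sketch: reject mel filterbank configs whose upper edge exceeds the
    Nyquist frequency (sample_rate / 2), since those would yield all-zero
    filterbank columns."""
    nyquist = sample_rate / 2.0
    if not 0 <= lower_edge_hertz < upper_edge_hertz:
        raise ValueError("lower_edge_hertz must be in [0, upper_edge_hertz)")
    if upper_edge_hertz > nyquist:
        raise ValueError(
            f"upper_edge_hertz {upper_edge_hertz} exceeds Nyquist ({nyquist})")

# A config within Nyquist passes; one above it raises.
validate_mel_edges(125.0, 3800.0, 8000.0)
```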
PiperOrigin-RevId: 181405525
PiperOrigin-RevId: 181404919
PiperOrigin-RevId: 181398752
PiperOrigin-RevId: 181397308
Started section.
PiperOrigin-RevId: 181396430
Also add sub-sections to leftnav files,
and sync leftnav and index files.
PiperOrigin-RevId: 181394206
the resource.
This will make it possible to use the experimental `overlay_lib` to
instantiate and run functions from a restored iterator's graph using
the shared `FunctionLibraryRuntime`.
PiperOrigin-RevId: 181392925
PiperOrigin-RevId: 181390058
disabled.
PiperOrigin-RevId: 181390045
If there is only one device, no replication/aggregation overhead is added. It is fine not to use TowerEstimator when there is only one device, and it is fine to use TowerEstimator without replicate_model_fn.
PiperOrigin-RevId: 181388296
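The single-device fast path described above can be sketched as follows. This is a toy sketch over plain Python lists (not the replicate_model_fn implementation): with one tower there is nothing to aggregate, so its gradients are returned unchanged; otherwise gradients are averaged elementwise across towers:

```python
def aggregate_gradients(tower_grads):
    """tower_grads: one list of gradient values per tower.
    Single tower: no aggregation overhead, pass through unchanged.
    Multiple towers: average each gradient position across towers."""
    if len(tower_grads) == 1:
        return tower_grads[0]
    return [sum(gs) / len(gs) for gs in zip(*tower_grads)]
```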
profiling results.
PiperOrigin-RevId: 181387984
op_gen_overrides.pbtxt are a part of tensorflow/core/api_def/base_api/.
PiperOrigin-RevId: 181386873
PiperOrigin-RevId: 181384430
- I worked around the need to rely on Optimizer.__class__ for keeping track of gradients; we now rely on the order in which they are collected. I also added a basic check that all towers have called the same number of optimizers.
- I now allow the user to increment the global step however many times they wish.
Together, these changes make it possible to use the same optimizer class multiple times in a tower.
I also renamed GatheringOptimizer to TowerOptimizer, which is a breaking change. #lifeincontrib
PiperOrigin-RevId: 181381569
PiperOrigin-RevId: 181381477
Kokoro runs: https://source.cloud.google.com/results/invocations/d276e288-4664-4b17-aac2-b0dfaff45b17/targets/%2F%2Ftensorflow%2Fcontrib%2Fdata%2Fpython%2Fkernel_tests:interleave_dataset_op_test/tests
PiperOrigin-RevId: 181374381
PiperOrigin-RevId: 181373542
PiperOrigin-RevId: 181369272
PiperOrigin-RevId: 181365803
This allows constructs of the kind:

  with tfe.GradientTape() as tape:
    tape.gradients(...)
PiperOrigin-RevId: 181358791
PiperOrigin-RevId: 181354785
PiperOrigin-RevId: 181352929
Remove meaningless distinctions between fusion nodes. Knowing whether a loop fusion op is "elementwise" (~ doesn't contain any broadcast inputs) doesn't provide any useful information.
PiperOrigin-RevId: 181352880
This option makes it possible to instantiate functions from a library
that has been loaded separately from the runtime's own library. We
plan to use this as part of the `tf.data` checkpoint restore process,
which might load an iterator whose state includes functions that
aren't present in the original graph. (This is currently achieved by
creating an isolated `FunctionLibraryRuntime` for each function-using
`Dataset`, but that is inefficient and prevents using features of the
main runtime, such as cross-device function calls.)
PiperOrigin-RevId: 181352217
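The overlay-style lookup described above can be sketched as a two-level resolution. This is a toy sketch using plain dicts (`instantiate`, `overlay_lib`, and `base_lib` are illustrative names, not the C++ FunctionLibraryRuntime API):

```python
def instantiate(name, overlay_lib, base_lib):
    """Resolve a function by name: consult the separately loaded overlay
    library first, then fall back to the runtime's own library."""
    fn = overlay_lib.get(name) if overlay_lib is not None else None
    if fn is None:
        fn = base_lib.get(name)
    if fn is None:
        raise KeyError(f"function {name!r} not found in overlay or base library")
    return fn
```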
`tf.contrib.eager.Iterator`.
Also fixes `tf.contrib.eager.Iterator` so that it can work with
datasets containing `tf.SparseTensor` components.
This change allows you to use the `get_next()` method, `output_types`
property, `output_shapes` property, and `output_classes` property from
`tf.data.Iterator` when constructing a `tf.contrib.eager.Iterator`;
and therefore makes it easier to write code that operates on an
`Iterator` in both eager and graph mode.
PiperOrigin-RevId: 181350797
PiperOrigin-RevId: 181350723
PiperOrigin-RevId: 181350574
PiperOrigin-RevId: 181349010
PiperOrigin-RevId: 181348431
PiperOrigin-RevId: 181345661
PiperOrigin-RevId: 181345319
PiperOrigin-RevId: 181341793
direct path to checkpoint files.
PiperOrigin-RevId: 181341437
PiperOrigin-RevId: 181340112