Commit message | Author | Age

PiperOrigin-RevId: 215018984

GPU). This avoids many unnecessary CPU<->GPU memcpys and syncs.
PiperOrigin-RevId: 214108484

Minor cleanup by moving the helper function ShapesEqual to GraphProperties and adding unit tests for it.
PiperOrigin-RevId: 213876779

This now matches the definition. I fixed it here rather than in the definition as it seems every call to this function names the variable "num_components".
I also tidied up the comment a little.
PiperOrigin-RevId: 212668416

This patch uses the take-by-value-and-move idiom to optimize copying of constructor arguments.
PiperOrigin-RevId: 211553877

This solves the problem when passing a scalar tensor to a function op input, as Placeholder shape inference outputs an unknown shape for a scalar if the graphdef version is < 24.
PiperOrigin-RevId: 210007276

PiperOrigin-RevId: 208119717

PiperOrigin-RevId: 207340526

grappler functions.
PiperOrigin-RevId: 207171072

PiperOrigin-RevId: 203066657

PiperOrigin-RevId: 197673355

PiperOrigin-RevId: 196906815

PiperOrigin-RevId: 196742598

Remove duplicated code to resolve type from attributes.
PiperOrigin-RevId: 196558061

PiperOrigin-RevId: 195710562

PiperOrigin-RevId: 194975603

PiperOrigin-RevId: 194579253

PiperOrigin-RevId: 194387041

PiperOrigin-RevId: 193974712

PiperOrigin-RevId: 193751624

and do not cause overflow for arithmetic operations.
PiperOrigin-RevId: 193723661

PiperOrigin-RevId: 193605910

instantiation context.
PiperOrigin-RevId: 193399263

PiperOrigin-RevId: 192704808

PiperOrigin-RevId: 192683166

Explicitly track function input arg expansion into Placeholders, and keep metadata to map between FunctionDef and GraphDef connectivity formats.
PiperOrigin-RevId: 192462592

PiperOrigin-RevId: 191679495

PiperOrigin-RevId: 191647386

PiperOrigin-RevId: 190878279

optimized and original graph and checks whether the output tensors produced by them are the same.
PiperOrigin-RevId: 190802264

Update GrapplerTest::EvaluateNodes to take feeds as an argument, to make it easier to write tests with placeholders.
PiperOrigin-RevId: 190696386

PiperOrigin-RevId: 190391193

properly compare the results of the original graph against those of the hand-optimized graph.
PiperOrigin-RevId: 190115606

across Enter nodes.
PiperOrigin-RevId: 189197514

1) Redundant Bitcast
2) Redundant Cast
3) Remove inverse transpose
PiperOrigin-RevId: 188569367

PiperOrigin-RevId: 187691555

PiperOrigin-RevId: 187628382

OFF by default until more validation is done.
PiperOrigin-RevId: 187211957

in order to enable Grappler to optimize the body of functions. Inlining also reduces the overhead of evaluating a function.
PiperOrigin-RevId: 187200883

pairs of extra _send/_recv nodes, which speeds things up a bit. This also ensures that performance doesn't depend on the recv scheduling built into TF, which isn't always optimal.
PiperOrigin-RevId: 187057831

values, but not directly removing those nodes from the graph.
PiperOrigin-RevId: 186505857

deterministic
Improved testing
PiperOrigin-RevId: 184565483

in the process.
PiperOrigin-RevId: 184172483

They don't make sense in the open source repository.
PiperOrigin-RevId: 183140889

idempotent.
PiperOrigin-RevId: 178026253

PiperOrigin-RevId: 177612830

involving aggregate ops (AddN, Add, Accumulate) or eliminate the aggregation op entirely.
* Replace trivial aggregations of the form x+x+x... with const(N)*x for N > 1.
PiperOrigin-RevId: 174398543

Splits out a shared object (//tensorflow/libtensorflow_framework.so) with core TensorFlow functionality but neither ops nor kernels. This object does include registries for ops, kernels, filesystems, etc. The expectation is that shared objects containing custom ops will have a runtime dependency on this framework shared object: TensorFlow will load the custom op shared object, and the custom op shared object will use the symbols from the framework shared object to register its ops/kernels/etc. rather than (as before this change) relying on those symbols being in the global symbol table.

In this mode, TensorFlow artifacts (_pywrap_tensorflow.so for Python, libtensorflow.so for the C API; currently excluding Android artifacts) will depend on the framework shared object, which will be packaged with the Python pip package and other language distributions. This means that custom ops targeting the framework shared object will work in any language (C++, Java, Go; previously custom ops in these languages required custom Bazel builds).

Adds a config option which reproduces the old behavior (--config=monolithic), which for Python means building a monolithic pywrap_tensorflow shared object and loading its symbols into the global symbol table (with RTLD_GLOBAL). As before, there will be no extra-Bazel custom op support for other languages when compiling in this mode.

Does not change behavior on Windows; the cmake build is still monolithic.

Requires using tf_cc_binary, tf_cc_test, and (rarely) tf_cc_shared_object rules to link in the framework shared object when adding new TensorFlow build rules.
PiperOrigin-RevId: 169572746

PiperOrigin-RevId: 168650887

control dependencies. This would result in malfunction of topological sort, as it previously didn't handle duplicated inputs. For example, if node A has three repeated inputs ^B, node A will never get added to the queue in topological sort, because the number of ready inputs will always be less than the number of inputs (B is only counted once).
node {
  name: "A"
  op: "SomeOp"
  input: "^B"
  input: "^B"
  input: "^B"
}
PiperOrigin-RevId: 167045325