| Commit message | Author | Age |

PiperOrigin-RevId: 214732243
tensor.h soon)
We plan to remove the import of variant.h from tensor.h, and variant.h brings in a lot of transitive imports (including protos like tensor.proto.h). To prepare, we're updating code that this change will break.
PiperOrigin-RevId: 210043667

code doesn't support custom ops. Instead we will rely on the function optimizer.
PiperOrigin-RevId: 189400462

Fix the bug where the same feed or fetch nodes would be added more than once.
PiperOrigin-RevId: 188902101

ops.
PiperOrigin-RevId: 188730560

PiperOrigin-RevId: 187364078

in order to enable Grappler to optimize the bodies of functions. Inlining also reduces the overhead of evaluating a function.
PiperOrigin-RevId: 187200883

optimization
PiperOrigin-RevId: 186394467

Properly handle the case of control dependencies
PiperOrigin-RevId: 184733444

PiperOrigin-RevId: 184704131

definition
PiperOrigin-RevId: 181617501

PiperOrigin-RevId: 181519635

PiperOrigin-RevId: 179429486

optimizations.
PiperOrigin-RevId: 178440738

ensures that we can safely process graphs generated before attributes were added to an op.
PiperOrigin-RevId: 178423665

PiperOrigin-RevId: 177474943

PiperOrigin-RevId: 177249675

value.
PiperOrigin-RevId: 177079220

and PruneGraph calls.
PiperOrigin-RevId: 172902338

option to provide specific feed nodes to the item builder.
PiperOrigin-RevId: 171758733

PiperOrigin-RevId: 171722354

PiperOrigin-RevId: 168409110

Support saved model as input;
PiperOrigin-RevId: 166765212

ProcessFunctionLibraryRuntime library to Instantiate and Run functions on different devices. When a FunctionLibraryRuntime encounters a function with a target that is another device, it delegates Instantiate() and Run() calls to the ProcessFunctionLibraryRuntime.
This change also moves the table_ containing all function instantiations to the PFLR instead of the FunctionLibraryRuntime.
PiperOrigin-RevId: 165651194

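The delegation described in this commit can be sketched as follows. This is a toy illustration in Python, not TensorFlow's actual C++ classes or API: the class and method names mirror the commit message, but the handle table and device registry are simplified stand-ins.

```python
# Toy sketch: per-device FunctionLibraryRuntime instances delegate work on
# functions targeting *other* devices to a shared
# ProcessFunctionLibraryRuntime (PFLR), which owns the instantiation table.

class ProcessFunctionLibraryRuntime:
    def __init__(self):
        self.table_ = {}    # handle -> (target device, fn); all instantiations
        self.runtimes = {}  # device name -> per-device FunctionLibraryRuntime

    def add_device(self, device):
        self.runtimes[device] = FunctionLibraryRuntime(device, self)
        return self.runtimes[device]

    def instantiate(self, fn, target):
        handle = len(self.table_)        # hand out a fresh handle
        self.table_[handle] = (target, fn)
        return handle

    def run(self, handle, args):
        target, fn = self.table_[handle]
        return self.runtimes[target].run_local(fn, args)


class FunctionLibraryRuntime:
    """Per-device runtime; the PFLR, not this class, owns the table."""

    def __init__(self, device, pflr):
        self.device = device
        self.pflr = pflr

    def instantiate(self, fn, target):
        # Instantiations are recorded in the PFLR's table either way.
        return self.pflr.instantiate(fn, target)

    def run(self, handle, args):
        target, fn = self.pflr.table_[handle]
        if target != self.device:
            # Function targets another device: delegate to the PFLR.
            return self.pflr.run(handle, args)
        return self.run_local(fn, args)

    def run_local(self, fn, args):
        return fn(*args)
```

With this split, a runtime on "CPU:0" can instantiate a function targeting "GPU:0" and run it through the shared PFLR without knowing about the other device's runtime directly.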
PiperOrigin-RevId: 165604864

ProcessFunctionLibraryRuntime library to Instantiate and Run functions on different devices. When a FunctionLibraryRuntime encounters a function with a target that is another device, it delegates Instantiate() and Run() calls to the ProcessFunctionLibraryRuntime.
This change also moves the table_ containing all function instantiations to the PFLR instead of the FunctionLibraryRuntime.
PiperOrigin-RevId: 165521057

PiperOrigin-RevId: 165350681

PiperOrigin-RevId: 163842238

TensorFlow constant propagation pass, when the inputs to those Ops have sufficiently known static shape.
PiperOrigin-RevId: 163762750

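The idea behind this pass can be sketched on a toy graph representation: shape-querying ops (Shape, Size, Rank) whose input has a fully known static shape can be rewritten into constants. The `Node` dataclass and `fold_shape_ops` helper below are illustrative stand-ins, not Grappler's actual data structures.

```python
# Toy sketch: fold Shape/Size/Rank nodes into Const nodes when the input's
# static shape is fully known (no unknown dimensions).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    op: str
    inputs: list = field(default_factory=list)
    static_shape: Optional[tuple] = None  # None entries = unknown dimensions
    value: Optional[object] = None        # payload for Const nodes

def fold_shape_ops(nodes):
    by_name = {n.name: n for n in nodes}
    for n in nodes:
        if n.op not in ("Shape", "Size", "Rank"):
            continue
        shape = by_name[n.inputs[0]].static_shape
        if shape is None or any(d is None for d in shape):
            continue  # not sufficiently known; leave the op in place
        if n.op == "Shape":
            n.value = list(shape)
        elif n.op == "Size":
            size = 1
            for d in shape:
                size *= d
            n.value = size
        else:  # Rank
            n.value = len(shape)
        n.op, n.inputs = "Const", []  # rewrite into a constant
    return nodes
```

A Shape node feeding off a placeholder with static shape (2, 3) becomes a Const holding [2, 3], while one feeding off a partially unknown shape is left untouched.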
including the function library, so that it doesn't fail for graphs with an inlined function library.
PiperOrigin-RevId: 163142677

PiperOrigin-RevId: 162773493

The goal is to make kernels mostly independent of proto headers, which will let us lock down our .so imports. This CL does not remove any actual headers, but changes a bunch of files so that header removal is possible in a followup CL. It also marks the headers that will be removed with
// TODO(b/62899350): Remove
RELNOTES: n/a
PiperOrigin-RevId: 160552878

PiperOrigin-RevId: 160488554

PiperOrigin-RevId: 160003173

PiperOrigin-RevId: 159981628

PiperOrigin-RevId: 159034290

PiperOrigin-RevId: 158120864

Run graph optimizations last: since they can be expensive, it's best to filter out invalid items first.
PiperOrigin-RevId: 157792834

specified values for incomplete placeholder shapes. Previously, these overrides were only specified in the feed nodes, which improves estimates when using dynamic shapes but not when using static shapes. With this change, static shapes also benefit.
PiperOrigin-RevId: 157780800

Default is to do them. In tf_optimizer, don't inline or do l1 optimizations.
PiperOrigin-RevId: 157673614

inlining.
Op Counts Comparison (BNMT)
Counts: Profile vs Grappler
Op: Add, 968 vs 965
Op: AddN, 2228 vs 2228
Op: ApplyGradientDescent, 84 vs 84
Op: BatchMatMul, 998 vs 998
Op: Identity, 142 vs 105
Op: MatMul, 63 vs 63
Op: Mul, 10318 vs 10306
Op: OneHot, 1 vs 1
Op: Reshape, 8421 vs 8422
Op: Select, 488 vs 488
Op: Shape, 8132 vs 8131
Op: Sigmoid, 942 vs 942
Op: Softmax, 19 vs 19
Op: StridedSlice, 58 vs 74
Op: Sub, 1398 vs 1394
Op: Tanh, 333 vs 333
Op: Tile, 21 vs 21
Op: Transpose, 39 vs 39
PiperOrigin-RevId: 157288420

placeholder shape dimensions with placeholder_unknown_output_shape before making a TensorShape out of it.
Avoids an assertion error (-1 vs 0) on newer MetaGraphDefs (specifically BNMT) where placeholder shapes are not empty if they are partially defined.
PiperOrigin-RevId: 156575182

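The fix amounts to a small pre-processing step: partially defined placeholder shapes encode unknown dimensions as -1, so those entries must be substituted with a default before a shape object that asserts non-negative dimensions is constructed. A minimal sketch, with an illustrative helper name rather than the actual item-builder API:

```python
# Toy sketch: replace unknown (-1) placeholder dimensions with a
# user-supplied default before constructing a shape, so that a constructor
# asserting dim >= 0 does not fail on partially defined shapes.
def resolve_placeholder_shape(dims, placeholder_unknown_output_shape):
    return [placeholder_unknown_output_shape if d == -1 else d for d in dims]
```

For example, a partially defined shape [-1, 128] with a default of 1 resolves to [1, 128], while a fully defined shape passes through unchanged.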
PiperOrigin-RevId: 156457746

large files stored on disk, and can take tens of minutes to load: increased the initialization timeout to give Grappler enough time to load them.
Change: 154303608

Change: 152329057

file
Change: 150498236