| Commit message | Author | Age |
Handle empty strings in NodePositionIfSameNode.
PiperOrigin-RevId: 214393567
PiperOrigin-RevId: 214384090
PiperOrigin-RevId: 214381126
PiperOrigin-RevId: 214380876
For testInceptionFwd I see 8.482029 != 8.48317 when comparing GPU vs. CPU results.
testFusedConvInt8 has off-by-one errors. Both failures occur flakily.
PiperOrigin-RevId: 214378820
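Exact floating-point comparisons across devices are inherently fragile. A minimal Python sketch of a tolerance-based check (the two values are taken from the message above; the tolerance is an illustrative assumption, not the one used in the actual test):

```python
import math

# GPU and CPU reductions accumulate rounding differently, so exact
# equality between the two results flakes:
gpu_result, cpu_result = 8.482029, 8.48317
assert gpu_result != cpu_result  # exact comparison fails

# A relative-tolerance comparison absorbs device-specific rounding.
assert math.isclose(gpu_result, cpu_result, rel_tol=1e-3)
```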
PiperOrigin-RevId: 214377809
strategy with keras.
PiperOrigin-RevId: 214376435
PiperOrigin-RevId: 214376416
PiperOrigin-RevId: 214373714
This change fixes memory leaks in the ScatterNdUpdateOp and StridedSliceAssign kernels, and in training-op kernels that use `GetTrainingVariableMutex()`.
PiperOrigin-RevId: 214372346
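The leak pattern can be sketched with a hypothetical Python ref-counted handle (Var, update, and the ref/unref names here are illustrative stand-ins, not the actual C++ kernel code): a reference acquired on entry must be released on every exit path, which a try/finally guarantees.

```python
class Var:
    """Hypothetical ref-counted variable handle."""
    def __init__(self):
        self.refs = 0
    def ref(self):
        self.refs += 1
        return self
    def unref(self):
        self.refs -= 1

def update(var):
    v = var.ref()          # acquire a reference
    try:
        pass               # ... apply the scatter / slice-assign update ...
    finally:
        v.unref()          # release on every path (the fixed behavior)

v = Var()
update(v)
assert v.refs == 0  # no leaked reference
```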
PiperOrigin-RevId: 214371906
PiperOrigin-RevId: 214371640
PiperOrigin-RevId: 214370113
end indices that would result in an empty range. tf.range errors out at graph construction time in that case.
PiperOrigin-RevId: 214369488
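A hypothetical pure-Python sketch of the guard described above (maybe_range is an illustrative helper, not the actual TensorFlow code): skip constructing a range whose end index would make it empty, rather than letting the construction fail.

```python
def maybe_range(start, end):
    # Never hand an empty range to the downstream op, since tf.range
    # would fail at graph construction time in that case.
    if end <= start:
        return []
    return list(range(start, end))

assert maybe_range(4, 4) == []        # would have been empty
assert maybe_range(2, 5) == [2, 3, 4]
```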
<br/> doesn't work in this context, but \n does.
PiperOrigin-RevId: 214367139
PiperOrigin-RevId: 214366272
The tests are in the next patch.
PiperOrigin-RevId: 214362688
for pad input.
PiperOrigin-RevId: 214360620
PiperOrigin-RevId: 214359786
uses Cast internally.
PiperOrigin-RevId: 214356411
PiperOrigin-RevId: 214354104
PiperOrigin-RevId: 214353862
e01d95528ea2137a4a27a88d1f57c6cb260aafed
PiperOrigin-RevId: 214351584
when it's not matched.
Also add invariant checking for AllOf.
PiperOrigin-RevId: 214351269
Temporary rollback to fix forward compatibility.
END_PUBLIC
Automated rollback of commit 0c48c703c3c1455cf3b2c0e47e2108e053ff83e2. Revert #21798.
PiperOrigin-RevId: 214349479
CloneWithNewOperands. CreateCudnnConv* is easy to misuse, as it
doesn't propagate backend_config.
PiperOrigin-RevId: 214348788
PiperOrigin-RevId: 214348730
PiperOrigin-RevId: 214346818
PiperOrigin-RevId: 214346240
It wasn't actually needed.
PiperOrigin-RevId: 214346217
the default graph in the scope.
PiperOrigin-RevId: 214345046
PiperOrigin-RevId: 214338297
PiperOrigin-RevId: 214338100
Accompanies some internal changes related to third_party repo rules.
PiperOrigin-RevId: 214337234
PiperOrigin-RevId: 214335741
All devices implement the same tracing logic in an override of `Device::Compute()`. However, that logic does not have access to the cached `NodeItem::kernel_is_expensive` bit for the kernel, so it must make a virtual call to `OpKernel::IsExpensive()`. By inlining the logic into `ExecutorState::Process()`, we avoid making an unnecessary virtual call on each kernel invocation (when a trace controller is attached).
PiperOrigin-RevId: 214332492
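The caching idea can be sketched in Python (the class and attribute names mirror the message above, but the code is only an analogy for the C++ change): consult the dynamic predicate once and store the answer, instead of paying a dynamic-dispatch cost on every invocation.

```python
class Kernel:
    # Stands in for the virtual OpKernel::IsExpensive() call.
    def is_expensive(self):
        return True

class NodeItem:
    def __init__(self, kernel):
        self.kernel = kernel
        # Cache the answer once, analogous to NodeItem::kernel_is_expensive,
        # so the hot path never re-invokes the dynamic method.
        self.kernel_is_expensive = kernel.is_expensive()

item = NodeItem(Kernel())
assert item.kernel_is_expensive is True
```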
PiperOrigin-RevId: 214325709
export_saved_model
PiperOrigin-RevId: 214325271
objects in a tf.train.Saver() if var_list was a dict.
Reuses the logic used for lists in the dict code path.
PiperOrigin-RevId: 214324913
PiperOrigin-RevId: 214323563
PiperOrigin-RevId: 214321627
operators that need it.
PiperOrigin-RevId: 214320700
PiperOrigin-RevId: 214311663
PiperOrigin-RevId: 214309598
PiperOrigin-RevId: 214309210
Also a small bugfix to handle unknown shapes in backprop._num_elements.
Before:
  entry {
    name: "L2hmcBenchmark.eager_train_cpu_defun"
    iters: 10
    wall_time: 0.594115018845
    extras {
      key: "examples_per_sec"
      value {
        double_value: 336.635152548
      }
    }
  }
After:
  entry {
    name: "L2hmcBenchmark.eager_train_cpu_defun"
    iters: 10
    wall_time: 0.322251081467
    extras {
      key: "examples_per_sec"
      value {
        double_value: 620.634069216
      }
    }
  }
PiperOrigin-RevId: 214308142
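From the two wall_time values above, the defun path got roughly 1.84x faster; a quick check (values copied from the benchmark entries):

```python
before, after = 0.594115018845, 0.322251081467
speedup = before / after  # wall-time ratio, consistent with the
                          # examples_per_sec ratio 620.63 / 336.64
assert 1.8 < speedup < 1.9
```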
CudnnConvParams, just pass around the HloInstruction.
This is based on the observation that most code doesn't care about the
convolution semantics like which operand is input vs filter vs output.
In fact, only layout assignment and conv runner care about them.
PiperOrigin-RevId: 214307399
self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 214300210
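The two bullet points can be illustrated with a minimal stand-in (FakeSession and TestCase here are hypothetical sketches, not the TensorFlow classes): the same object is handed back on every call, and leaving the with-block does not close it.

```python
import contextlib

class FakeSession:
    def __init__(self):
        self.closed = False

class TestCase:
    """Hypothetical sketch of cached_session() semantics."""
    def __init__(self):
        self._cached = None

    @contextlib.contextmanager
    def cached_session(self):
        if self._cached is None:
            self._cached = FakeSession()  # created once, then reused
        yield self._cached                # intentionally NOT closed on exit

tc = TestCase()
with tc.cached_session() as s1:
    pass
with tc.cached_session() as s2:
    pass
assert s1 is s2          # the session is reused
assert not s1.closed     # and stays open after the with-block
```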
This allows the Keras learning phase to work inside functions and defuns.
Note: There might still be bugs in graph mode if the default placeholder is being fed (instead of using set_learning_phase) and a layer is in a function.
PiperOrigin-RevId: 214299002
PiperOrigin-RevId: 214298224