| Commit message | Author | Age |
|
|
|
PiperOrigin-RevId: 214781794
|
|
|
|
|
|
specified separately from the compute stream in ServiceRunOptions
PiperOrigin-RevId: 214778267
|
|
|
|
|
|
functools.partial are triggered.
PiperOrigin-RevId: 214775194
|
|
|
|
PiperOrigin-RevId: 214767788
|
|
|
|
|
|
The new implementation ensures that the 'constraints' kwarg is propagated by custom getters whose signature includes a keyword variable-length argument dictionary (**kwargs), as well as by those that explicitly include the 'constraints' argument.
PiperOrigin-RevId: 214767296
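As a rough illustration of the distinction this message draws (a hedged sketch; `base_getter`, `forwarding_custom_getter`, and `explicit_custom_getter` are hypothetical names, not TensorFlow's API), a custom getter can propagate the 'constraints' kwarg either implicitly via **kwargs or by naming it explicitly:

```python
# Hypothetical sketch: two styles of custom getter that both propagate
# the 'constraints' kwarg down to the underlying getter.
def base_getter(name, constraints=None, **kwargs):
    # Stand-in for the real variable getter; records what it received.
    return {"name": name, "constraints": constraints}

def forwarding_custom_getter(getter, name, **kwargs):
    # Signature uses a keyword variable-length argument dictionary,
    # so 'constraints' rides along inside **kwargs untouched.
    return getter(name, **kwargs)

def explicit_custom_getter(getter, name, constraints=None, **kwargs):
    # Explicitly names 'constraints' and must pass it on itself.
    return getter(name, constraints=constraints, **kwargs)

clip = lambda x: x  # hypothetical constraint function
assert forwarding_custom_getter(base_getter, "w1", constraints=clip)["constraints"] is clip
assert explicit_custom_getter(base_getter, "w2", constraints=clip)["constraints"] is clip
```

Either style works; a custom getter that names only some kwargs and lacks **kwargs would silently drop the rest, which is the failure mode the change guards against.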
|
|
|
|
|
|
|
|
This change reduces the size of //tensorflow/tools/pip_package:simple_console_windows's zip file from 1000027677 bytes to 47690474 bytes for a CPU build. For a GPU build, it avoids going over 4 GB when multiple CUDA compute capabilities are specified.
Fixes #22390.
PiperOrigin-RevId: 214764423
|
|
|
|
PiperOrigin-RevId: 214763814
|
|
|
|
PiperOrigin-RevId: 214741709
|
|
|
|
PiperOrigin-RevId: 214732243
|
|
|
|
|
|
mentions incorrect default value.
PiperOrigin-RevId: 214731772
|
PiperOrigin-RevId: 214726180
|
PiperOrigin-RevId: 214724610
|
PiperOrigin-RevId: 214723970
|
PiperOrigin-RevId: 214721004
|
It used to save the existing custom getter and then overwrite it, which meant the previous custom getter would never be called inside "computation". It now creates a new custom getter that calls the previous custom getter.
PiperOrigin-RevId: 214715720
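The chaining pattern this message describes can be sketched as follows (a hypothetical illustration, not TensorFlow's actual code; the function names are invented). The key point is that the new getter delegates to the previous custom getter rather than replacing it:

```python
# Hypothetical sketch of the fix: rather than overwriting an existing
# custom getter, build a new getter that calls through to the old one.
calls = []

def base_getter(name):
    calls.append("base")
    return name

def previous_custom_getter(getter, name):
    calls.append("previous")
    return getter(name)

def with_chaining(previous, behavior):
    def chained(getter, name):
        # Wrap the base getter so the previous custom getter still runs.
        inner = (lambda n: previous(getter, n)) if previous else getter
        calls.append("new")
        return behavior(inner, name)
    return chained

behavior = lambda getter, name: getter(name)  # the new getter's own logic
chained = with_chaining(previous_custom_getter, behavior)
result = chained(base_getter, "w")
assert calls == ["new", "previous", "base"] and result == "w"
```

With the old (buggy) approach, "previous" would never appear in the call sequence; chaining preserves every layer of the getter stack.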
|
Other additional refactoring.
PiperOrigin-RevId: 214715083
|
Estimator
Add support for stateful metrics in model to estimator
PiperOrigin-RevId: 214714322
|
PiperOrigin-RevId: 214711381
|
PiperOrigin-RevId: 214710175
|
PiperOrigin-RevId: 214709465
|
PiperOrigin-RevId: 214705311
|
PiperOrigin-RevId: 214704902
|
the same source dependency twice.
PiperOrigin-RevId: 214704620
|
functionalization.
If we want to evaluate the SymbolicGradient op in constant folding, we need to construct a Device object and attach it to the FunctionLibraryRuntime. In the graph rewriting pass, no Device object has been created yet; it is only created in XlaCompiler.
PiperOrigin-RevId: 214702943
|
PiperOrigin-RevId: 214702243
|
e291c279e458761e77a69b09b129d3d1e81f1e80
PiperOrigin-RevId: 214702169
|
PiperOrigin-RevId: 214701926
|
PiperOrigin-RevId: 214700693
|
they're moved to core. I overlooked this in the CL to move to core.
PiperOrigin-RevId: 214699544
|
PiperOrigin-RevId: 214698827
|
PiperOrigin-RevId: 214693201
|
PiperOrigin-RevId: 214691838
|
PiperOrigin-RevId: 214685427
|
PiperOrigin-RevId: 214681193
|
PiperOrigin-RevId: 214680988
|
according to https://github.com/tensorflow/community/pull/16.
PiperOrigin-RevId: 214680285
|
The recent fix to a resource leak introduced a potential use-after-free, because it released a reference on a Var resource before returning a mutex* borrowed from that resource. The mutex* could therefore become garbage if the refcount concurrently dropped to zero (for example, if a concurrent `Session::Reset()` were issued).
This change modifies the mutex accessing utilities to prolong the lifetime of the corresponding Var* beyond the lifetime of the returned mutex*.
PiperOrigin-RevId: 214678937
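The spirit of this fix can be sketched in a few lines (a hedged illustration only; `Var`, `LockHolder`, and `get_mutex` are invented names, and the real fix is in TensorFlow's C++ runtime with explicit refcounting, not Python):

```python
import threading

class Var:
    # Stand-in for a refcounted resource that owns a mutex.
    def __init__(self):
        self.mu = threading.Lock()

class LockHolder:
    # Returning the mutex together with a strong reference to its owner
    # ensures the owner outlives the caller's use of the mutex.
    def __init__(self, var):
        self.var = var   # pins the resource's lifetime
        self.mu = var.mu

def get_mutex(var):
    # Returning var.mu alone would not keep 'var' alive; the holder does.
    return LockHolder(var)

holder = get_mutex(Var())
with holder.mu:
    pass  # the Var cannot be destroyed while 'holder' is live
```

In the C++ version, the analogous idea is to hold a reference on the Var* for at least as long as the borrowed mutex* is in use, instead of unreffing before returning it.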
|
summaries are written at the correct interval for jobs with long-running
evaluations.
PiperOrigin-RevId: 214678483
|
instead of internal processor objects.
PiperOrigin-RevId: 214678470
|
PiperOrigin-RevId: 214675055
|
PiperOrigin-RevId: 214674717
|
DeconstructTuple doesn't support nested tuples yet, so MakeFakeArgumentsOrDie failed if any of the arguments were tuple-shaped. But we don't really need it here anyway; just build the arguments one by one.
PiperOrigin-RevId: 214671374
|
This triggers checkpoints in a separate thread while allowing training to
continue. This can effectively parallelize checkpointing and training for
workloads like TPUEstimator, where the weights are only updated after a number
of device iterations.
PiperOrigin-RevId: 214670991
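The mechanism described above can be sketched with a plain background thread (a hypothetical illustration; `async_checkpoint` and `save_checkpoint` are invented names, not the TPUEstimator hook's API). The training thread snapshots the weights, hands the snapshot to a saver thread, and keeps going:

```python
import threading

def save_checkpoint(state, out):
    # Stand-in for an expensive checkpoint write.
    out.append(dict(state))

def async_checkpoint(state):
    # Snapshot on the training thread, write the snapshot off-thread,
    # so training can continue while the save is in flight.
    snapshot = dict(state)
    saved = []
    t = threading.Thread(target=save_checkpoint, args=(snapshot, saved))
    t.start()
    return t, saved

weights = {"w": 1.0}
t, saved = async_checkpoint(weights)
weights["w"] = 2.0            # training continues during the save
t.join()
assert saved == [{"w": 1.0}]  # checkpoint holds the pre-update snapshot
```

The snapshot step is what makes the overlap safe: the saver never observes weights mutated by subsequent device iterations.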
|
Make shape inference lazy in optimizers that may not trigger.
PiperOrigin-RevId: 214669034
|
PiperOrigin-RevId: 214668695
|
PiperOrigin-RevId: 214668499
|
PiperOrigin-RevId: 214668283
|
sample-like args to Tensors.
After this change, you could conceivably write tfd.Normal(0., 1.).log_prob(1).
The tf core distributions can't use tfp's dtype_util.common_dtype, so you can't yet write tfd.Normal(0, 1).
Works around an eager bug that loses precision in the presence of tf.convert_to_tensor(0.5, preferred_dtype=tf.int32).
PiperOrigin-RevId: 214666222
|
PiperOrigin-RevId: 214662826
|
The purpose of these ops is to fix a latency problem observed in an inference benchmark. Often an inference step starts by reading the value of many (hundreds of) weights. For a resource variable, this requires a VarHandleOp and a ReadVariableOp per variable. Running hundreds of trivial ops can add hundreds of microseconds of latency to the critical path of an inference step. The inter-op latency of the executor can be hundreds of nanoseconds, which rapidly adds up.
This change introduces two fused ops _VarHandlesOp and _ReadVariablesOp that allow us to read many variables in a pair of larger ops, rather than many tiny ops.
PiperOrigin-RevId: 214662338
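The fusion idea can be shown schematically (a hedged sketch in plain Python, not the actual `_VarHandlesOp`/`_ReadVariablesOp` kernels; the function names here are illustrative). N per-variable reads each pay a fixed dispatch overhead, while a single fused read amortizes that overhead once over all N variables:

```python
# Conceptual sketch of the fusion: one dispatch for N variables instead
# of one dispatch per variable.
variables = {f"w{i}": float(i) for i in range(5)}

def read_variable(name):
    # One call per variable: N fixed overheads on the critical path.
    return variables[name]

def read_variables(names):
    # Fused read: one call amortized over all N variables.
    return [variables[n] for n in names]

names = sorted(variables)
# Same values either way; the fused form just issues one op.
assert [read_variable(n) for n in names] == read_variables(names)
```

If per-dispatch overhead is on the order of hundreds of nanoseconds, fusing hundreds of reads removes tens of microseconds from the critical path, which matches the motivation stated in the message.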