Commit message | Author | Age
...

Because NumPy's default array element type for a Python `int` differs between Windows and Linux, the tf.py_func() inside `Dataset.from_generator()` appeared to return the wrong type on Windows (np.int32 instead of np.int64).
All code using `Dataset.from_generator()` on Windows was previously broken. This change fixes both `tf.data.Dataset.from_generator()` and `tf.contrib.data.Dataset.from_generator()`. It also enables test coverage for this method on Windows, which should prevent future breakage.
PiperOrigin-RevId: 172346533
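The platform difference behind this bug can be sketched in plain NumPy. Pinning the dtype, as the fix does for the generator's outputs, makes the result platform-independent (`value` is a hypothetical generator output used only for illustration):

```python
import numpy as np

value = 7  # hypothetical Python int produced by a user's generator

# Without an explicit dtype, NumPy historically inferred the element type from
# the platform's C long: np.int32 on Windows, np.int64 on Linux.
inferred = np.asarray(value)

# Pinning the dtype removes the platform dependence.
pinned = np.asarray(value, dtype=np.int64)
```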
The intention was always for users to depend only on
xla_jit_compiled_cpu_function, without needing dependencies on internal targets.
PiperOrigin-RevId: 172346257
and fisher_factors.py in the form of a function "set_global_constants".
The old approach of setting these constants manually, by importing the specific modules and accessing them directly, should still work, but the new method is preferred.
PiperOrigin-RevId: 172345996
PiperOrigin-RevId: 172342933
PiperOrigin-RevId: 172340173
Currently, you cannot use ClusterSpec propagation in conjunction with XLA devices, as the RenamedDevice wraps the underlying device and breaks the dynamic cast.
PiperOrigin-RevId: 172339725
PiperOrigin-RevId: 172337312
PiperOrigin-RevId: 172336111
PiperOrigin-RevId: 172333451
previously defined).
PiperOrigin-RevId: 172331504
PiperOrigin-RevId: 172326303
PiperOrigin-RevId: 172325692
PiperOrigin-RevId: 172324333
use true partial derivatives. This is done using the newly introduced stop_gradients argument to tf.gradients.
PiperOrigin-RevId: 172315620
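The distinction can be illustrated without TensorFlow: for y = x + b with b = 2x, the total derivative dy/dx is 3, while treating b as a constant (which is what a stop_gradients-style argument does) yields the partial derivative 1. A minimal finite-difference sketch of that difference:

```python
def total_derivative(x, eps=1e-6):
    # b is recomputed from x, so the chain through b contributes: dy/dx = 3.
    y = lambda t: t + 2.0 * t
    return (y(x + eps) - y(x - eps)) / (2.0 * eps)

def partial_derivative(x, eps=1e-6):
    # b is frozen at its current value (the gradient is "stopped" through it),
    # so only the direct dependence on x remains: dy/dx = 1.
    b = 2.0 * x
    y = lambda t: t + b
    return (y(x + eps) - y(x - eps)) / (2.0 * eps)
```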
PiperOrigin-RevId: 172314225
PiperOrigin-RevId: 172282778
generate gradients for Reduce/Broadcast.
Change the _NcclBroadcastRecv shape input to int32 so that the corresponding Const op outputs to HostMem.
PiperOrigin-RevId: 172279684
PiperOrigin-RevId: 172276292
* Add two persistent UI configurations, backed by a file at ~/.tfdbg_config by default:
  * graph_recursion_depth, which controls the recursive output of the li/lo commands.
  * mouse_mode, which controls the mouse state of the CursesUI.
* Add a `config` command to set and inspect the persistent configuration, e.g.:
  * config show
  * config set graph_recursion_depth 3
  * config set mouse_mode False
Fixes: #13449
PiperOrigin-RevId: 172270804
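The file-backed behavior can be sketched with a minimal stand-in; the class name, JSON format, and path handling below are illustrative assumptions, not tfdbg's actual implementation:

```python
import json
import os
import tempfile

class PersistentConfig:
    """A minimal file-backed key-value config, loosely mirroring ~/.tfdbg_config."""

    def __init__(self, path):
        self._path = path
        self._values = {}
        if os.path.exists(path):
            with open(path) as f:
                self._values = json.load(f)

    def set(self, key, value):
        self._values[key] = value
        with open(self._path, "w") as f:
            json.dump(self._values, f)  # persist immediately, like `config set`

    def get(self, key, default=None):
        return self._values.get(key, default)

path = os.path.join(tempfile.mkdtemp(), "config.json")
PersistentConfig(path).set("graph_recursion_depth", 3)
reloaded = PersistentConfig(path)  # a new session sees the persisted value
```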
Fixes #13607
PiperOrigin-RevId: 172262174
PiperOrigin-RevId: 172224302
streaming_false_{negative,positive}_rate_at_thresholds.
PiperOrigin-RevId: 172191462
PiperOrigin-RevId: 172169909
* Shard fallback CPU implementation.
* Optimize index calculations by trading 1 mod for 1 subtraction and 1 multiply (which have much lower combined latency).
* Add optimized GPU kernels for on-the-fly conjugate transposition.
PiperOrigin-RevId: 172167514
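The index-calculation trade mentioned above can be sketched in a few lines: after the one unavoidable integer division, the remainder is recovered with a subtraction and a multiply instead of a second division-class `%` operation. A generic sketch, not the kernel's actual code:

```python
def fast_divmod(i, n):
    # One integer division gives the quotient; the remainder then follows from
    # a multiply and a subtraction, avoiding a separate mod instruction.
    q = i // n
    r = i - q * n  # equivalent to i % n for non-negative i and positive n
    return q, r
```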
PiperOrigin-RevId: 172167437
PiperOrigin-RevId: 172162006
be serialized to HLO protos and deserialized without any information loss.
As part of this change, a bug is fixed in NameUniquer. Previously, passing names with numeric suffixes could result in name collisions.
PiperOrigin-RevId: 172161360
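The class of bug fixed here can be illustrated with a toy uniquer: if generated names blindly append a `.N` counter, a requested name that already carries a numeric suffix can collide with a generated one; parsing the suffix first avoids this. A Python sketch of the idea, not XLA's NameUniquer itself:

```python
import re

class Uniquer:
    def __init__(self):
        self._used = set()

    def unique(self, name):
        # Split off a trailing ".N" suffix, if any, so that explicitly
        # requested "foo.1" and generated "foo.1" share one counter space.
        m = re.fullmatch(r"(.*)\.(\d+)", name)
        base, n = (m.group(1), int(m.group(2))) if m else (name, 0)
        candidate = name
        while candidate in self._used:
            n += 1
            candidate = "%s.%d" % (base, n)
        self._used.add(candidate)
        return candidate
```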
PiperOrigin-RevId: 172159815
This change:
- Implements the C API logic for Operation._add_control_inputs()
- Adds type-checking to Operation._add_control_input()
- Makes Graph::AddControlEdge() update the node def if necessary
- Makes Graph::AddControlEdge() a no-op if the control edge already exists
The AddControlEdge() changes could have a performance impact if anything
is sensitive to AddControlEdge()'s speed, but to my knowledge nothing is.
I'm not sure which benchmarks would confirm this.
PiperOrigin-RevId: 172158589
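The no-op behavior can be sketched with a toy graph; the Python class below illustrates the semantics only and is not the C++ Graph API:

```python
class Graph:
    def __init__(self):
        self.control_edges = set()

    def add_control_edge(self, src, dst):
        # Adding a control edge that already exists is a no-op, so repeated
        # calls never create duplicate edges.
        edge = (src, dst)
        if edge in self.control_edges:
            return False
        self.control_edges.add(edge)
        return True
```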
run with empty filter or input respectively. Resolves #13643.
PiperOrigin-RevId: 172153646
The tape stack is still in Python, as is the backprop code.
PiperOrigin-RevId: 172151189
PiperOrigin-RevId: 172150350
While at it, clean up some dead code/comments in tape.py
PiperOrigin-RevId: 172143125
PiperOrigin-RevId: 172139804
PiperOrigin-RevId: 172139466
PiperOrigin-RevId: 172136820
We get a dead computation when e.g. we delete a reduction or remove a
while loop.
PiperOrigin-RevId: 172135511
PiperOrigin-RevId: 172134904
We realized that sorting the graph by node id is not always deterministic, as the ids themselves are assigned in arbitrary order by TensorFlow.
RELNOTES: n/a
PiperOrigin-RevId: 172134671
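A deterministic alternative is to sort by a stable key such as the node name rather than the id; a minimal illustration with dict-based stand-ins for nodes:

```python
# Node ids are assigned in arbitrary order, so sorting by id is not
# reproducible across runs; node names are stable keys.
nodes = [{"id": 3, "name": "add"}, {"id": 1, "name": "mul"}, {"id": 2, "name": "const"}]
by_name = sorted(nodes, key=lambda n: n["name"])
```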
Also makes ArithmeticOptimizer::Optimize run shape inference at the beginning,
and clear _output_shapes at the end.
PiperOrigin-RevId: 172133948
CrossReplicaSum is just a CrossReplicaSum.
PiperOrigin-RevId: 172132628
PiperOrigin-RevId: 172131167
Add benchmarks for backprop. The speed difference is minor; we will need to move
everything out of the graph for large speedups, I think.
Also template the fprop kernel on use_peephole.
Original change by @duckworthd
PiperOrigin-RevId: 172131001
PiperOrigin-RevId: 172130212
PiperOrigin-RevId: 172130104
PiperOrigin-RevId: 172129075
PiperOrigin-RevId: 172127789
the original.
This is true even if the layout of the tuple is unusual, e.g. when the subshapes of the output don't match the shapes of the operands.
PiperOrigin-RevId: 172124743
PiperOrigin-RevId: 172122586