PiperOrigin-RevId: 203037623

PiperOrigin-RevId: 203029983

PiperOrigin-RevId: 203029765

PiperOrigin-RevId: 203027634

enables applications like auto-batching, Jacobians, and per-example gradients.
PiperOrigin-RevId: 203026617
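
For context, per-example gradients are what you get when each example's loss is
differentiated separately. A minimal sketch of the naive approach this kind of
change aims to avoid (one GradientTape pass per example; the toy model and data
are invented for illustration):

import tensorflow as tf
tf.enable_eager_execution()

x = tf.constant([1.0, 2.0, 3.0])  # three "examples"
w = tf.constant(2.0)

# One tape, and hence one backward pass, per example; auto-batching
# aims to replace exactly this kind of Python-level loop.
per_example_grads = []
for i in range(3):
    with tf.GradientTape() as tape:
        tape.watch(w)
        loss = w * x[i] ** 2
    per_example_grads.append(tape.gradient(loss, w))  # d(loss)/dw == x[i]**2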

_FORWARD_COMPATIBILITY_HORIZON to something that is provided. The intended use
is testing new code/behaviour while the default remains the old behaviour.
PiperOrigin-RevId: 203023068
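
A minimal sketch of how such an override is typically used in tests, assuming
the context-manager form that TensorFlow exposes as
tf.compat.forward_compatibility_horizon (the entry-point name and the dates
below are assumptions, not confirmed by this log entry):

import tensorflow as tf

# With the default horizon (roughly three weeks from today), a far-future
# feature date reports as not yet compatible.
print(tf.compat.forward_compatible(2099, 1, 1))  # False

# Temporarily move the horizon past that date to exercise the new
# behaviour, while the shipped default remains the old behaviour.
with tf.compat.forward_compatibility_horizon(2099, 1, 2):
    print(tf.compat.forward_compatible(2099, 1, 1))  # True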

PiperOrigin-RevId: 203021583

PiperOrigin-RevId: 203021167

PiperOrigin-RevId: 203020841

PiperOrigin-RevId: 203019816

Specifically, fix a segmentation fault when converting objects that implement
the Python sequence protocol (i.e., __getitem__, __len__, and __iter__) but
which do not have contiguous keys.
Fixes #20347

However, some discrepancies are still possible between
tf.convert_to_tensor(o) (or tf.constant(o)) with and without eager execution
enabled. Fixing those is left as a follow-up exercise.

Sample differences:

(1) Empty sequences that have numpy conversions defined:

import pandas as pd
import tensorflow as tf
s = pd.Series([])  # Empty series
t = tf.constant(s)

With eager execution enabled, t.dtype ends up as float32 (since
py_seq_tensor.cc considers empty lists to be float32).
With graph construction, t.dtype ends up as float64 (since
make_tensor_proto() converts 's' to a numpy array and uses its dtype).

(2) Objects that implement __getitem__, __len__, and __iter__ but are not
convertible to numpy arrays (e.g., do not implement __array__):
- With eager execution enabled, these can be converted to a tensor.
- With graph construction, the conversion fails.
PiperOrigin-RevId: 203019624
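
As a hypothetical illustration of the non-contiguous-key case (the Series below
is invented for this example): a pandas Series with a non-zero-based index
satisfies the sequence protocol, yet positional lookups like s[0] raise
KeyError, which is what previously crashed the eager conversion path.

import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()

# len(s) == 2, but s[0] raises KeyError: the labels are 10 and 20, not 0 and 1.
s = pd.Series([1.0, 2.0], index=[10, 20])

# Previously this could segfault under eager execution; per the message
# above, such objects now convert by iteration instead of crashing.
t = tf.constant(s)
print(t)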

PiperOrigin-RevId: 203013884

PiperOrigin-RevId: 203007540

PiperOrigin-RevId: 203004822

ProcessState is a singleton that anchors per-process resources.
Until now that meant only GPU-related memory allocators, since CPU
allocation was usually done directly from Allocator::cpu_allocator.
Accordingly, process_state.h lived in common_runtime/gpu and ProcessState
was only used in GPU builds.
With the upcoming introduction of NUMA-node-specific CPU allocators, it
will be important that most of the TF runtime switch to requesting the
proper NUMA-specific CPU allocator. These allocators will be owned by
and obtained from the ProcessState singleton, which will exist in all
builds. The GPU-specific functions are moved to a new GPUProcessState,
also a singleton.
PoolAllocator is also migrated out of common_runtime/gpu into common_runtime.
PiperOrigin-RevId: 203002666

to be zeroed.
PiperOrigin-RevId: 203001311

PiperOrigin-RevId: 202995903

Send and Recv HLOs now have an additional required operand, which must be token-shaped. The XLA client interface for these operations is unchanged and will be updated in follow-up CLs.
PiperOrigin-RevId: 202993121

UnaryOpsOptimizer.
PiperOrigin-RevId: 202992975

PiperOrigin-RevId: 202990839

PiperOrigin-RevId: 202988873

- take(-1).
PiperOrigin-RevId: 202987018

PiperOrigin-RevId: 202986386

PiperOrigin-RevId: 202983273

binary operators if possible.
PiperOrigin-RevId: 202982286

serialized topology.
PiperOrigin-RevId: 202978167

PiperOrigin-RevId: 202975643

PiperOrigin-RevId: 202972265

PiperOrigin-RevId: 202971063

PiperOrigin-RevId: 202961895

Fix Windows failure caused by cl/202664219
PiperOrigin-RevId: 202960843

PiperOrigin-RevId: 202960334

When running any trivial XLA program with --v=1, you will see a bogus message
such as "Invalid argument: Shape f32[] size may overflow int64". The reason is
that ShapeUtil::ValidateShapeSize incorrectly constructs an InvalidArgument
object prematurely. This change postpones construction of the InvalidArgument
object until an invalid argument is actually discovered.
PiperOrigin-RevId: 202959886
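
The underlying pattern is language-agnostic. A hypothetical Python sketch
(make_error and the validators below are invented stand-ins for XLA's C++
Status machinery, not real TensorFlow APIs): building the error object eagerly
reports it even when validation succeeds, which is exactly the bogus-message
symptom described above.

import logging

INT64_MAX = 2**63 - 1

def make_error(msg):
    # Stand-in for InvalidArgument: constructing the status object is what
    # emits the message under verbose logging.
    logging.warning("Invalid argument: %s", msg)
    return ValueError(msg)

def validate_size_buggy(size):
    err = make_error("size %d may overflow int64" % size)  # logs on every call
    return err if size > INT64_MAX else None

def validate_size_fixed(size):
    if size <= INT64_MAX:
        return None  # common path: no error object is ever built
    return make_error("size %d may overflow int64" % size)  # logs only on failure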

PiperOrigin-RevId: 202950690

Reciprocal} and move them to a new client library xla/client/lib/math.h. Remove
the F32 type constraint.
Add an xla::Rsqrt function.
Move {Erf, Erfc, ErfInv, EvaluatePolynomial} to the same library.
[TF:XLA] Update many places in the bridge to use the new functions. Rewrite
many of the training ops in operator notation.
PiperOrigin-RevId: 202948474

New functions include xla::ScalarLike, xla::Zero, xla::Zeros, xla::ZerosLike, xla::One, xla::Epsilon, xla::{Min,Max,MinFinite,MaxFinite}Value.
Update Erf, Erfc, ErfInv to use new operator overloads and xla::ScalarLike. Remove the explicit type arguments.
[TF:XLA] Refactor various parts of the bridge to use new constant functions. Make more types implicit. Clean up ArgMin/ArgMax as part of adapting it to use the new APIs.
No functional changes intended.
PiperOrigin-RevId: 202943293

We now look into the computations of kWhile and kConditional ops when profiling.
This still does not improve the statistics of the estimated optimum, but at
least we can see the relative performance of the ops within a subcomputation.
PiperOrigin-RevId: 202916616

needed for RNN back-edge support)
- Make the delegate return errors for unsupported operations, data types, and
ranks rather than aborting.
- Make the delegate propagate errors from the preparation and compilation
phases rather than aborting.
- Add a flag that allows generated tests to pass if delegation returns an
error; if delegation succeeds, however, the results are verified.
PiperOrigin-RevId: 202916432

202883475 by A. Unique TensorFlower:
Internal testing changes
--
202880708 by yifeif:
Internal change.
--
202876685 by A. Unique TensorFlower:
Internal change
--
202850194 by yifeif:
Internal change.
--
PiperOrigin-RevId: 202883475

PiperOrigin-RevId: 202753310

and max range, when the op is on the GPU but the range tensor is on the host.
PiperOrigin-RevId: 202748603

PiperOrigin-RevId: 202744028

When running any trivial XLA program with --v=1, you will see a bogus message
such as "Invalid argument: Shape f32[] size may overflow int64". The reason is
that ShapeUtil::ValidateShapeSize incorrectly constructs an InvalidArgument
object prematurely. This change postpones construction of the InvalidArgument
object until an invalid argument is actually discovered.
PiperOrigin-RevId: 202738924

PiperOrigin-RevId: 202736707

will be used for distributed variables.
Add enum `VariableSynchronization` with values for `synchronization`: AUTO,
UNREPLICATED, ON_WRITE, ON_READ.
Add enum `VariableAggregation` with values for `aggregation`: NONE, SUM, MEAN.
Replace all the aggregation-method strings in distribution strategy with the
enum values.
Update the Mirrored strategy to use these parameters to decide whether a
variable should be Mirrored or TowerLocal.
Update the different distribution strategy value types to use the
`VariableAggregation` enum.
PiperOrigin-RevId: 202736077
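
A minimal sketch of how these parameters surface at variable-creation time
(assuming the tf.get_variable surface and public enum names; the variable
itself is invented for illustration, since the truncated first line of this
entry does not name the exact entry point):

import tensorflow as tf

# A tower-local accumulator: each replica writes locally, and reads
# aggregate by summing across replicas.
v = tf.get_variable(
    "replica_sum",
    shape=[],
    initializer=tf.zeros_initializer(),
    synchronization=tf.VariableSynchronization.ON_READ,
    aggregation=tf.VariableAggregation.SUM)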

PiperOrigin-RevId: 202735104

PiperOrigin-RevId: 202728713

PiperOrigin-RevId: 202725501

PiperOrigin-RevId: 202724720