* Merge changes from github. (Yifei Feng, 2018-07-02)
  PiperOrigin-RevId: 203037623
* Allow ByteBuffer outputs from TFLite interpreter (Jared Duke, 2018-07-02)
  PiperOrigin-RevId: 203029983
* [tf.data] Adding code for benchmarking map+batch fusion. (Jiri Simsa, 2018-07-02)
  PiperOrigin-RevId: 203029765
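  Illustrative sketch (not from the change itself): the unfused map->batch
  pattern that this kind of benchmark exercises, alongside the fused form,
  assuming the TF 1.x-era tf.contrib.data.map_and_batch transformation (the
  exact module path is an assumption).

      import tensorflow as tf

      # Separate map and batch stages; the map+batch fusion rewrites this pair
      # into a single fused stage.
      dataset = tf.data.Dataset.range(1000)
      dataset = dataset.map(lambda x: x * 2, num_parallel_calls=4)
      dataset = dataset.batch(32)

      # Roughly equivalent fused form from contrib at the time (path assumed).
      fused = tf.data.Dataset.range(1000).apply(
          tf.contrib.data.map_and_batch(lambda x: x * 2, batch_size=32))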
* Remove redundant checks in gather op. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203027634
* Add support for "parallel for" abstraction, optimized via graph rewrite.
  This enables applications like auto-batching, jacobians, per-example
  gradients. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203026617
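  Illustrative sketch (not from the change itself) of the "parallel for" style
  of per-example gradients: a loop function describes the computation for one
  index and the abstraction vectorizes it via graph rewrite. The module path
  and the pfor(loop_fn, iterations) entry point below are assumptions based on
  the TF 1.x source layout.

      import tensorflow as tf
      # Assumed location of the pfor entry point; may differ in practice.
      from tensorflow.python.ops.parallel_for import control_flow_ops

      x = tf.random_normal([8, 4])      # a batch of 8 examples
      w = tf.get_variable("w", [4, 1])

      def loop_fn(i):
          # Loss and gradient for a single example i.
          xi = tf.expand_dims(tf.gather(x, i), 0)
          loss_i = tf.reduce_sum(tf.matmul(xi, w))
          return tf.gradients(loss_i, w)[0]

      # Vectorizes loop_fn over all 8 indices via graph rewrite, yielding
      # per-example gradients stacked into shape [8, 4, 1].
      per_example_grads = control_flow_ops.pfor(loop_fn, 8)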
* Adding a new `current_date` decorator that can change the
  _FORWARD_COMPATIBILITY_HORIZON to a provided value. The intended use is
  testing new code/behaviour while the old behaviour remains the default.
  (Rohan Jain, 2018-07-02)
  PiperOrigin-RevId: 203023068
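  Illustrative sketch (not from the change itself) of how such a test-only
  override could work; the constant, helper, and decorator below are
  hypothetical stand-ins, not the actual TensorFlow symbols.

      import datetime
      import functools

      # Hypothetical horizon constant and compatibility check.
      _FORWARD_COMPATIBILITY_HORIZON = datetime.date(2018, 7, 1)

      def forward_compatible(year, month, day):
          """New behaviour activates once the horizon passes the given date."""
          return _FORWARD_COMPATIBILITY_HORIZON > datetime.date(year, month, day)

      def current_date(year, month, day):
          """Test decorator that temporarily moves the compatibility horizon."""
          def decorator(fn):
              @functools.wraps(fn)
              def wrapper(*args, **kwargs):
                  global _FORWARD_COMPATIBILITY_HORIZON
                  old = _FORWARD_COMPATIBILITY_HORIZON
                  _FORWARD_COMPATIBILITY_HORIZON = datetime.date(year, month, day)
                  try:
                      return fn(*args, **kwargs)
                  finally:
                      _FORWARD_COMPATIBILITY_HORIZON = old
              return wrapper
          return decorator

      @current_date(2019, 1, 1)
      def test_new_behaviour():
          # With the horizon moved past 2018-12-01, the new code path is taken.
          assert forward_compatible(2018, 12, 1)

      test_new_behaviour()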
* [XLA] Add c_linear_search function to util.h. (Justin Lebar, 2018-07-02)
  PiperOrigin-RevId: 203021583
* Fix array logging bug (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203021167
* Fix NNAPI delegation for Sub and Div. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203020841
* Fix documentation markdown (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203019816
* [eager]: Fix bug in converting pandas objects to Tensors. (Asim Shankar, 2018-07-02)
  Specifically, fix a segmentation fault when converting objects that implement
  the Python sequence protocol (i.e., __getitem__, __len__, and __iter__) but
  which do not have contiguous keys.

  Fixes #20347

  However, there are still some discrepancies possible between
  tf.convert_to_tensor(o) (or tf.constant(o)) with and without eager execution
  enabled. Fixing those is left as a follow-up exercise. Sample differences:

  (1) Empty sequences that have numpy conversions defined.

          import pandas as pd
          import tensorflow as tf
          s = pd.Series([])  # Empty series
          t = tf.constant(s)

      With eager execution enabled, t.dtype ends up as float32 (as
      py_seq_tensor.cc considers empty lists to be float32).
      With graph construction, t.dtype ends up as float64 (as
      make_tensor_proto() converts 's' to a numpy array and uses its dtype).

  (2) Objects that implement __getitem__, __len__, and __iter__, but are not
      convertible to numpy arrays (e.g., do not implement __array__):
      - With eager execution enabled, these can be converted to a tensor.
      - For graph construction, the conversion fails.

  PiperOrigin-RevId: 203019624
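  Illustrative sketch (not from the change itself) of the class of input
  described above: a sequence-protocol object whose keys are not the contiguous
  range 0..len-1. Assumes a TF 1.x-era runtime with eager execution enabled;
  the expected post-fix behaviour is inferred from the description above.

      import pandas as pd
      import tensorflow as tf

      tf.enable_eager_execution()

      # A Series with a non-default index implements __getitem__, __len__, and
      # __iter__, but s[0] raises KeyError because its keys are 10, 20, 30.
      s = pd.Series([1.0, 2.0, 3.0], index=[10, 20, 30])

      # Before this fix such a conversion could segfault under eager execution;
      # after it, it should yield a float64 tensor of shape [3].
      t = tf.constant(s)
      print(t.shape, t.dtype)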
* TopK op changes: remove wrong shape array free, force output types. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203013884
* Remove ARCH_PIII variant from manual_constructor_test.cc. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203007540
* Change the way we set endpoints deprecated in api_def.proto. (Anna R, 2018-07-02)
  PiperOrigin-RevId: 203004822
* Refactor ProcessState in support of NUMA. (A. Unique TensorFlower, 2018-07-02)
  ProcessState is a singleton that anchors per-process resources. Up until now
  that meant only GPU-related memory allocators, since CPU allocation was
  usually done directly from Allocator::cpu_allocator. Accordingly,
  process_state.h was in common_runtime/gpu and ProcessState was only used in
  GPU builds.

  With the upcoming introduction of NUMA-node-specific CPU allocators, it will
  be important that most of the TF runtime switch to requesting the proper
  NUMA-specific CPU allocator. These allocators will be owned by and obtained
  from the ProcessState singleton, which will exist in all builds. The
  GPU-specific functions are moved to a new GPUProcessState, also a singleton.
  PoolAllocator is also migrated out of common_runtime/gpu into common_runtime.
  PiperOrigin-RevId: 203002666
* Workaround the cudnn 7.1.4 correctness bug, where the workspace is required
  to be zeroed. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 203001311
* [tf.data] Fix destruction race in `tf.data.contrib.get_single_element()`. (Jiri Simsa, 2018-07-02)
  PiperOrigin-RevId: 202995903
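  Illustrative usage sketch (not from the change itself) of the helper this fix
  touches, assuming the TF 1.x contrib path tf.contrib.data.get_single_element;
  treat the import path as an assumption.

      import tensorflow as tf

      # A dataset known to contain exactly one element.
      dataset = tf.data.Dataset.from_tensors([1, 2, 3]).map(lambda x: x * 2)

      # Returns that element as plain tensors without creating an iterator.
      element = tf.contrib.data.get_single_element(dataset)

      with tf.Session() as sess:
          print(sess.run(element))  # [2 4 6]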
* Change Send and Recv HLOs to take a token operand. (Mark Heffernan, 2018-07-02)
  Send and Recv HLOs now have an additional required operand which must be
  token-shaped. The XLA client interface for these operations is unchanged and
  will be updated in follow-up CLs.
  PiperOrigin-RevId: 202993121
* Grappler/Arithmetic optimizer: Check that node was not already optimized by
  UnaryOpsOptimizer. (Eugene Zhulenev, 2018-07-02)
  PiperOrigin-RevId: 202992975
* Switch to RBE remote cache on Windows (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202990839
* Avoid redundant TFLite tensor reallocations (Jared Duke, 2018-07-02)
  PiperOrigin-RevId: 202988873
* This simple pass optimizes TakeDataset operations that take all the
  elements, i.e. take(-1). (Piotr Padlewski, 2018-07-02)
  PiperOrigin-RevId: 202987018
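  Illustrative sketch (not from the change itself), assuming TF 1.x graph mode,
  of why take(-1) is removable: it yields the full input unchanged, so the
  TakeDataset node can be elided.

      import tensorflow as tf

      dataset = tf.data.Dataset.range(5)

      # take(-1) keeps every element, so this is equivalent to `dataset` itself.
      same = dataset.take(-1)

      next_element = same.make_one_shot_iterator().get_next()
      with tf.Session() as sess:
          for _ in range(5):
              print(sess.run(next_element))  # 0, 1, 2, 3, 4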
* Update to latest version of Cloud Bigtable C++ Client. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202986386
* Small fixes in VariableSynchronization and VariableAggregation change. (Pavithra Vijay, 2018-07-02)
  PiperOrigin-RevId: 202983273
* Added graph transformation to push reshapes downstream of broadcasting
  binary operators if possible. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202982286
* Make it possible to serialize Topology class that is created without a
  serialized topology. (Rui Zhao, 2018-07-02)
  PiperOrigin-RevId: 202978167
* [TF:XLA] Add implementation of ResourceApplyAdadelta. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202975643
* Add a set of annotations specific to AutoGraph. (Dan Moldovan, 2018-07-02)
  PiperOrigin-RevId: 202972265
* Unblock RingReducer's PCQueue after calling StartAbort in async callback. (Ayush Dubey, 2018-07-02)
  PiperOrigin-RevId: 202971063
* Docstring grammar tweak. (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202961895
* Exclude tensorflow/contrib/bigtable on Windows (A. Unique TensorFlower, 2018-07-02)
  Fix Windows failure caused by cl/202664219
  PiperOrigin-RevId: 202960843
* Cleanup (A. Unique TensorFlower, 2018-07-02)
  PiperOrigin-RevId: 202960334
* [XLA] Remove a bogus invalid argument message printed out when --v>=1. (Bixia Zheng, 2018-07-02)
  When running any trivial XLA program with --v=1, you will see a bogus message
  such as "Invalid argument: Shape f32[] size may overflow int64". The reason
  is that in ShapeUtil::ValidateShapeSize we incorrectly construct an
  InvalidArgument object prematurely. This change postpones the construction of
  the InvalidArgument object until an invalid argument is actually discovered.
  PiperOrigin-RevId: 202959886
* Return error instead of checking in GraphToFunctionDef. (Jacques Pienaar, 2018-07-02)
  PiperOrigin-RevId: 202950690
* [XLA] Rename {SqrtF32, SquareF32, ReciprocalF32} to {Sqrt, Square,
  Reciprocal} and move them to a new client library xla/client/lib/math.h.
  Remove the F32 type constraint. Add an xla::Rsqrt function. Move {Erf, Erfc,
  ErfInv, EvaluatePolynomial} to the same library. (Peter Hawkins, 2018-07-02)
  [TF:XLA] Update many places in the bridge to use the new functions. Rewrite
  many of the training ops in operator notation.
  PiperOrigin-RevId: 202948474
* [XLA] Add a new client helper library for building constants. (Peter Hawkins, 2018-07-02)
  New functions include xla::ScalarLike, xla::Zero, xla::Zeros, xla::ZerosLike,
  xla::One, xla::Epsilon, xla::{Min,Max,MinFinite,MaxFinite}Value.
  Update Erf, Erfc, ErfInv to use new operator overloads and xla::ScalarLike.
  Remove the explicit type arguments.
  [TF:XLA] Refactor various parts of the bridge to use new constant functions.
  Make more types implicit. Clean up ArgMin/ArgMax as part of adapting it to
  use the new APIs.
  No functional changes intended.
  PiperOrigin-RevId: 202943293
* Do profiling inside while thunks and conditionals. (Adrian Kuegel, 2018-07-02)
  We now look into the computations of kWhile and kConditional ops when
  profiling. This still does not help regarding the statistics of the estimated
  optimum, but at least we can see the relative performance of the ops within a
  subcomputation.
  PiperOrigin-RevId: 202916616
* Create an explicit mapping between tensor indices and NNAPI operand ids
  (needed for RNN back-edge support). (A. Unique TensorFlower, 2018-07-02)
  - Make the delegate return errors from unsupported operations, datatypes, and
    rank rather than abort.
  - Make the delegate propagate errors from the preparation and compilation
    phase rather than abort.
  - Add a flag for allowing generated tests to pass if delegation returns an
    error; however, if delegation succeeds the results are verified.
  PiperOrigin-RevId: 202916432
* Merged commit includes the following changes: (A. Unique TensorFlower, 2018-07-01)
  202883475 by A. Unique TensorFlower:
      Internal testing changes
  --
  202880708 by yifeif:
      Internal change.
  --
  202876685 by A. Unique TensorFlower:
      Internal change
  --
  202850194 by yifeif:
      Internal change.
  --
  PiperOrigin-RevId: 202883475
* PiperOrigin-RevId: 202796842 (A. Unique TensorFlower, 2018-06-30)
* Remove unused gcp and hdfs config flags, as these are on by default now. (Gunhan Gulsoy, 2018-06-29)
  PiperOrigin-RevId: 202753310
* Fixes a bug in the quantize_and_dequantize_op kernel with getting the min
  and max range, when the op is on the GPU but the range tensor is on the
  host. (A. Unique TensorFlower, 2018-06-29)
  PiperOrigin-RevId: 202748603
* Automated g4 rollback of changelist 202738924 (Sanjoy Das, 2018-06-29)
  PiperOrigin-RevId: 202744028
* [XLA] Remove a bogus invalid argument message printed out when --v>=1. (Bixia Zheng, 2018-06-29)
  When running any trivial XLA program with --v=1, you will see a bogus message
  such as "Invalid argument: Shape f32[] size may overflow int64". The reason
  is that in ShapeUtil::ValidateShapeSize we incorrectly construct an
  InvalidArgument object prematurely. This change postpones the construction of
  the InvalidArgument object until an invalid argument is actually discovered.
  PiperOrigin-RevId: 202738924
* TFLite Java app for object detection model (A. Unique TensorFlower, 2018-06-29)
  PiperOrigin-RevId: 202736707
* Add `synchronization` and `aggregation` args to get_variable(). These args
  will be used for distributed variables. (Pavithra Vijay, 2018-06-29)
  - Add enum `VariableSynchronization` with values for `synchronization`:
    AUTO, UNREPLICATED, ON_WRITE, ON_READ.
  - Add enum `VariableAggregation` with values for `aggregation`: NONE, SUM,
    MEAN.
  - Replace all the aggregation method strings in distribution strategy with
    the enum values.
  - Update Mirrored strategy to use these parameters to decide whether a
    variable should be Mirrored or TowerLocal.
  - Update the different distribution strategy value types to use the
    `VariableAggregation` enum.
  PiperOrigin-RevId: 202736077
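  Illustrative usage sketch (not from the change itself) of the new arguments,
  assuming the TF 1.x tf.get_variable API and that the enums are exported as
  tf.VariableSynchronization / tf.VariableAggregation (the export locations are
  an assumption; the commit only states the argument and enum names).

      import tensorflow as tf

      # A tower-local style counter: each replica keeps its own copy, and reads
      # across replicas are combined by summation.
      counter = tf.get_variable(
          "examples_seen",
          shape=[],
          dtype=tf.int64,
          initializer=tf.zeros_initializer(),
          trainable=False,
          synchronization=tf.VariableSynchronization.ON_READ,
          aggregation=tf.VariableAggregation.SUM)

      # Leaving the new args at their defaults preserves existing behaviour.
      weights = tf.get_variable("weights", shape=[10, 10])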
* Automated g4 rollback of changelist 202724194 (A. Unique TensorFlower, 2018-06-29)
  PiperOrigin-RevId: 202735104
* Adding dimensions to broadcasts in ComputationBuilder (A. Unique TensorFlower, 2018-06-29)
  PiperOrigin-RevId: 202728713
* Fix a typo in comment to mention kOutputInputYX means NCHW (Smit Hinsu, 2018-06-29)
  PiperOrigin-RevId: 202725501
* Do not overwrite inputs. (A. Unique TensorFlower, 2018-06-29)
  PiperOrigin-RevId: 202724720