* math_grad: Fast path for when broadcasting is not needed.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172407754
* Adding an ItemHandler that does lookups. This allows decoding of tf.Examples where IDs are not materialized (e.g. 'image/object/class/text' present but 'image/object/class/label' not).  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172406978
* [XLA:GPU] Don't crash with --vmodule=gpu_compiler=2 if we can't run ptxas.  (Justin Lebar, 2017-10-16)
    At --vmodule=gpu_compiler=2, we run ptxas over our generated PTX to validate it and to dump out stats such as the number of registers used.
    Previously, this would fail if your GPU was anything other than sm_35 (i.e. K20/40/80), because we didn't pass cc_major/cc_minor down to ptxas. Moreover, if ptxas failed to compile your program, we'd LOG(FATAL), which is probably not what you want. This change fixes both of those issues. Tested on my local GTX 1080.
    PiperOrigin-RevId: 172403304
* Better error message for eager-specific APIs  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172397124
* Uses head.name in name_scope. This improves the graph naming for MultiHead.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172389494
* Add return_nodes option to ImportGraphDef  (Skye Wanderman-Milne, 2017-10-16)
    This is similar to the return_tensors option. return_tensors cannot be used to fetch nodes with no outputs, so return_nodes is necessary.
    In addition, this change refactors the ImportGraphDef signature to return all optional return values in a single struct. This keeps the ImportGraphDef signature from getting too long and also makes the call sites simpler.
    PiperOrigin-RevId: 172388270
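    Illustration (a minimal sketch, not part of the commit): the commit changes the C++ ImportGraphDef API, but the Python-level counterpart of fetching nodes rather than tensors is tf.import_graph_def's return_elements argument, where a bare op name (no ":0") yields an Operation. The graph built below is made up for the example.

        import tensorflow as tf

        # Build a tiny graph containing an op with no output tensors.
        g = tf.Graph()
        with g.as_default():
            tf.constant(1.0, name="x")
            tf.no_op(name="init")          # an op with no outputs
        graph_def = g.as_graph_def()

        with tf.Graph().as_default():
            # A name with an output index (":0") yields a Tensor; a bare op
            # name yields an Operation, which is how output-less nodes can
            # be fetched at import time.
            x_t, init_op = tf.import_graph_def(
                graph_def, return_elements=["x:0", "init"], name="imported")
            print(type(x_t).__name__)      # Tensor
            print(type(init_op).__name__)  # Operation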
* Add cc file with definition of tensorflow::gtl::nullopt.  (Justin Lebar, 2017-10-16)
    If you ODR-use nullopt, you currently get a linker error. Oops.
    PiperOrigin-RevId: 172387553
* Default to procuring ResourceVariables in variable_scope.variable when use_resource is not set and Eager mode is enabled.  (Akshay Agrawal, 2017-10-16)
    PiperOrigin-RevId: 172380659
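    Illustration (a minimal sketch, not part of the commit): how the described default would surface to a user, assuming the contrib-era eager API name tfe.enable_eager_execution; use_resource is deliberately left unset.

        import tensorflow as tf
        import tensorflow.contrib.eager as tfe  # contrib-era eager API (assumed)

        tfe.enable_eager_execution()

        with tf.variable_scope("scope"):
            # use_resource is not set; with eager execution enabled, the
            # commit makes this default to a ResourceVariable rather than
            # a ref-based Variable.
            v = tf.get_variable("v", shape=[2],
                                initializer=tf.zeros_initializer())

        print(type(v).__name__)  # expected: ResourceVariable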
* Enable C API for gradients_test.py  (Skye Wanderman-Milne, 2017-10-16)
    PiperOrigin-RevId: 172379338
* [TF2XLA] Expand comparator and use consistently in sorting arguments.  (Jacques Pienaar, 2017-10-16)
    PiperOrigin-RevId: 172376836
* Batch norm folding immediately fails if FusedBatchNorm ops are present.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172374244
* Respect __array__ and __array_interface__ for string types  (Mark Daoust, 2017-10-16)
    __array__ fixes use cases like:

        import tensorflow as tf
        import pandas as pd

        series = pd.Series(['a', 'b', 'c'])
        tf.constant(series)

        df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
        tf.data.Dataset.from_tensor_slices(dict(df))

    PiperOrigin-RevId: 172372593
* Close session on infeed error. This should fix most of the cases where the client process hangs waiting for the main training loop to exit.  (Russell Power, 2017-10-16)
    PiperOrigin-RevId: 172371951
* Add support for saving DT_VARIANT tensors in TensorBundle.  (Saurabh Saxena, 2017-10-16)
    Add support for reading Varint64 to InputBuffer.
    PiperOrigin-RevId: 172371104
* Move global_step_read dependency to model_fn instead of input_fn.  (Mustafa Ispir, 2017-10-16)
    PiperOrigin-RevId: 172366972
* Remove broken link.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172366027
* Implement set_shape for EagerTensors for compatibility with ops that call it  (Akshay Agrawal, 2017-10-16)
    Checks whether the given shape is compatible with the Eager tensor's shape and raises an error if it is not.
    PiperOrigin-RevId: 172363347
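    Illustration (a minimal sketch, not part of the commit) of the compatibility-check behavior described above; the tensor values and shapes are made up, and the contrib-era tfe.enable_eager_execution name is an assumption.

        import tensorflow as tf
        import tensorflow.contrib.eager as tfe  # contrib-era eager API (assumed)

        tfe.enable_eager_execution()

        t = tf.constant([[1.0, 2.0, 3.0]])  # known shape: (1, 3)

        t.set_shape([1, 3])     # compatible with the tensor's shape: effectively a no-op
        t.set_shape([None, 3])  # partially defined but compatible: also fine
        t.set_shape([2, 3])     # incompatible: raises an error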
* make_vjp in eager  (Alexandre Passos, 2017-10-16)
    PiperOrigin-RevId: 172363016
* Fix divergence between core.data and contrib.data Python tests.  (Jiri Simsa, 2017-10-16)
    PiperOrigin-RevId: 172353443
* [tf.contrib.seq2seq] Some light cleanup in beam search decoder code.  (Eugene Brevdo, 2017-10-16)
    PiperOrigin-RevId: 172352767
* Add tf.contrib.distributions.bijectors.Gumbel.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172350038
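    Illustration (a minimal sketch, not part of the commit): typical Bijector usage for the new class. The loc/scale parameter names follow the usual location-scale convention and are assumed here, as are the sample values.

        import tensorflow as tf

        bijectors = tf.contrib.distributions.bijectors

        # loc/scale are assumed parameter names, not taken from the commit.
        gumbel = bijectors.Gumbel(loc=0.0, scale=1.0)

        x = tf.constant([-1.0, 0.0, 2.0])
        y = gumbel.forward(x)        # maps the real line into (0, 1) via the Gumbel CDF
        x_back = gumbel.inverse(y)   # recovers x

        with tf.Session() as sess:
            print(sess.run([y, x_back]))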
* [TF2XLA] Keep Switch and Merge nodes in own clusters.  (Jacques Pienaar, 2017-10-16)
    * Keep Switch and Merge nodes in separate clusters to avoid creating irreducible graphs;
    * Merge Switch nodes with common predicates;
    * Add support for if-then structure;
    * Squash trivial Switch->Merge groups;
    * Merge newly Merge-free nodes with Switch- and Merge-free inputs;
    * Check that a node is a Merge node before merging it into a common merge node;
    * Return an error if not all Switches have been replaced;
    * Add test for tf.case.
    PiperOrigin-RevId: 172348729
* [tf.data] Fix broken implementation of `Dataset.from_generator()` on Windows.  (Derek Murray, 2017-10-16)
    Due to a mix-up between NumPy's default array element type for a Python `int` on Windows and Linux, a tf.py_func() in `Dataset.from_generator()` would appear to return the wrong type on Windows (np.int32 instead of np.int64). All code using `Dataset.from_generator()` on Windows was previously broken.
    This change fixes both `tf.data.Dataset.from_generator()` and `tf.contrib.data.Dataset.from_generator()`. It also enables test coverage for this method on Windows, which should prevent future breakage.
    PiperOrigin-RevId: 172346533
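    Illustration (a minimal sketch, not part of the commit) of the API involved; the generator and values below are made up for the example.

        import numpy as np
        import tensorflow as tf

        def gen():
            for i in range(3):
                # On Windows, NumPy's default integer type is int32, while
                # the declared output type below is int64; the commit makes
                # the conversion behave consistently across platforms.
                # Casting explicitly also sidesteps the mismatch.
                yield np.int64(i)

        dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int64)
        next_element = dataset.make_one_shot_iterator().get_next()

        with tf.Session() as sess:
            for _ in range(3):
                print(sess.run(next_element))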
* Fix xla_jit_compiled_cpu_function deps to pull in cpu_plugin.  (A. Unique TensorFlower, 2017-10-16)
    The intention was always for the user to only depend on xla_jit_compiled_cpu_function, and not need dependencies on internal targets.
    PiperOrigin-RevId: 172346257
* Added a cleaner mechanism to set the global constants in fisher_blocks.py and fisher_factors.py in the form of a function "set_global_constants".  (A. Unique TensorFlower, 2017-10-16)
    The old way of just manually setting these constants by importing the specific modules and accessing them directly should still work, but this new method is preferred.
    PiperOrigin-RevId: 172345996
* Proper use of convert_to_tensor in custom_gradient  (Alexandre Passos, 2017-10-16)
    PiperOrigin-RevId: 172342933
* Support a configurable TPU job name  (Brennan Saeta, 2017-10-16)
    PiperOrigin-RevId: 172340173
* Support ClusterSpec propagation with XLA Devices  (Brennan Saeta, 2017-10-16)
    Currently, you cannot use ClusterSpec propagation in conjunction with XLA devices, as the RenamedDevice wraps the underlying device and breaks the dynamic cast.
    PiperOrigin-RevId: 172339725
* Adds a host-memory GPU kernel for DestroyResourceOp.  (Allen Lavoie, 2017-10-16)
    PiperOrigin-RevId: 172337312
* Automated g4 rollback of changelist 172039259  (Allen Lavoie, 2017-10-16)
    PiperOrigin-RevId: 172336111
* [TF:XLA] Update xla_data comments for And, Or, and Not.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172333451
* Fix typo (undefined variable `mean_absolute_error`, should refer to `error` previously defined).  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172331504
* tfdbg doc: Fix minor typo  (Shanqing Cai, 2017-10-16)
    PiperOrigin-RevId: 172326303
* Automated g4 rollback of changelist 171877766  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172325692
* Fixing comment mismatch.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172324333
* PiperOrigin-RevId: 172320984  (A. Unique TensorFlower, 2017-10-16)
* Modified Jacobian computations in CurvatureMatrixVectorProductComputer to use true partial derivatives. This is done using the newly introduced stop_gradients argument to tf.gradients.  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172315620
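    Illustration (a minimal sketch, not part of the commit) of the stop_gradients argument to tf.gradients; the values are made up for the example.

        import tensorflow as tf

        a = tf.constant(0.0)
        b = 2.0 * a

        # Without stop_gradients, d(a + b)/da = 1 + 2 = 3.
        # With stop_gradients, a and b are treated as constants with respect
        # to everything upstream of them, yielding true partial derivatives.
        grads = tf.gradients(a + b, [a, b], stop_gradients=[a, b])

        with tf.Session() as sess:
            print(sess.run(grads))  # [1.0, 1.0]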
* Remove unused BUILD dependencies  (A. Unique TensorFlower, 2017-10-16)
    PiperOrigin-RevId: 172314225
* Internal change.  (Anna R, 2017-10-15)
    PiperOrigin-RevId: 172282778
* Replace NcclReduce/Broadcast ops during graph optimization so that we can generate gradients for Reduce/Broadcast.  (A. Unique TensorFlower, 2017-10-15)
    Changing the _NcclBroadcastRecv shape input to int32 so that the corresponding Const op outputs to HostMem.
    PiperOrigin-RevId: 172279684
* [XLA] Make pad shape inference error more informative.  (Chris Leary, 2017-10-15)
    PiperOrigin-RevId: 172276292
* tfdbg: add persistent config  (Shanqing Cai, 2017-10-15)
    * Add two persistent UI configurations backed by a file at ~/.tfdbg_config by default:
      * graph_recursion_depth, which controls the recursive output of li/lo commands.
      * mouse_mode, which controls the mouse state of the CursesUI.
    * Add a `config` command to set and inspect the persistent configuration, e.g.:
      * config show
      * config set graph_recursion_depth 3
      * config set mouse_mode False
    Fixes: #13449
    PiperOrigin-RevId: 172270804
* Add note pointing to master version of adding_an_op.  (Mark Daoust, 2017-10-15)
    Fixes #13607
    PiperOrigin-RevId: 172262174
* [XLA] Avoid unnecessary spaces in identifiers.  (Chris Leary, 2017-10-14)
    PiperOrigin-RevId: 172224302
* Add streaming_false_{negative,positive}_rate and streaming_false_{negative,positive}_rate_at_thresholds.  (A. Unique TensorFlower, 2017-10-14)
    PiperOrigin-RevId: 172191462
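    Illustration (a minimal sketch, not part of the commit) of how such streaming metrics are typically used. The argument names follow the existing tf.contrib.metrics conventions and are assumptions, as are the example labels and predictions.

        import tensorflow as tf

        labels = tf.constant([1, 0, 1, 1])
        predictions = tf.constant([0, 0, 1, 1])

        # Streaming metrics return a (value, update_op) pair backed by
        # local accumulator variables.
        fnr, fnr_update = tf.contrib.metrics.streaming_false_negative_rate(
            predictions=predictions, labels=labels)

        with tf.Session() as sess:
            sess.run(tf.local_variables_initializer())
            sess.run(fnr_update)
            print(sess.run(fnr))  # 1 false negative out of 3 positives ~ 0.333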
* Fix case where broadcasting is not necessary.  (Chris Ying, 2017-10-13)
    PiperOrigin-RevId: 172169909
* Optimized C++ and CUDA kernels for transposition.  (A. Unique TensorFlower, 2017-10-13)
    * Shard fallback CPU implementation.
    * Optimize index calculations by trading 1 mod for 1 subtraction and 1 multiply (which have much lower combined latency).
    * Add optimized GPU kernels for on-the-fly conjugate transposition.
    PiperOrigin-RevId: 172167514
* Python wrapper to access the predicted peak memory usage  (Benoit Steiner, 2017-10-13)
    PiperOrigin-RevId: 172167437
* imperative_grad takes the tape instead of popping it.  (Alexandre Passos, 2017-10-13)
    PiperOrigin-RevId: 172162006
* Make the HLO proto representation (hlo.proto) full fidelity. HLO modules can be serialized to HLO protos and deserialized without any information loss.  (Mark Heffernan, 2017-10-13)
    As part of this change, a bug is fixed in NameUniquer. Previously, passing names with numeric suffixes could result in name collisions.
    PiperOrigin-RevId: 172161360