path: root/tensorflow/python
...
* Benchmark for comparing original cond and cond_v2 performance. (Skye Wanderman-Milne, 2018-10-08)
  This benchmark creates many intermediate values, so we can make sure there's no performance overhead (it looks like there might be some currently, or it might be from some other difference). It also runs in a defun and in legacy graph mode. Results from my machine:
    CondWithManyIntermediatesBenchmark.benchmark_cond_v1_defun: iters: 500, wall_time: 1.25822591782
    CondWithManyIntermediatesBenchmark.benchmark_cond_v2_defun: iters: 500, wall_time: 5.99376106262
    CondWithManyIntermediatesBenchmark.benchmark_cond_v1_graph: iters: 500, wall_time: 2.05277585983
    CondWithManyIntermediatesBenchmark.benchmark_cond_v2_graph: iters: 500, wall_time: 2.84808516502
  Clearly we have some work to do! I haven't looked into the time differences at all yet.
  PiperOrigin-RevId: 216202325
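As a rough illustration only (not the benchmark's actual code; the constant and function names below are made up), a cond whose branches create many intermediate tensors looks like this:

```python
import tensorflow as tf

NUM_INTERMEDIATES = 100  # made-up count; "many" per the commit message

def cond_with_many_intermediates(pred, x):
  """Builds a tf.cond whose branches produce many intermediate tensors."""
  def branch():
    y = x
    for _ in range(NUM_INTERMEDIATES):
      y = y + 1.0  # every add is an intermediate value the cond machinery must forward
    return y
  return tf.cond(pred, branch, branch)
```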
* Remove Raises documentation on imperative_grads for a ValueError that is not raised. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216201714
* Avoid adding spurious ops when colocating with resource variables. (Asim Shankar, 2018-10-08)
  Prior to this change, tf.colocate_with(v) would insert spurious operations (a ReadVariableOp and an Identity) in the graph when v is a resource variable, and then colocate the operations within the block with those newly added, otherwise disconnected, operations.
  This commit avoids adding the unnecessary ReadVariableOp/Identity nodes and colocates operations within the block with the VarHandleOp.
  PiperOrigin-RevId: 216201638
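A hedged sketch of the pattern this affects, assuming TF 1.x graph mode; the variable and op below are illustrative:

```python
import tensorflow as tf

with tf.Graph().as_default():
  v = tf.get_variable("v", shape=[2], use_resource=True)
  # Ops created in this block are colocated with v's VarHandleOp directly,
  # without inserting an extra ReadVariableOp/Identity into the graph.
  with tf.colocate_with(v):
    x = tf.zeros([2]) + 1.0
```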
* compat: Update forward compatibility horizon to 2018-10-08 (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216151605
* compat: Update forward compatibility horizon to 2018-10-07 (A. Unique TensorFlower, 2018-10-07)
  PiperOrigin-RevId: 216079665
* compat: Update forward compatibility horizon to 2018-10-06 (A. Unique TensorFlower, 2018-10-06)
  PiperOrigin-RevId: 216021117
* Merge pull request #22659 from gautam1858:patch-17 (TensorFlower Gardener, 2018-10-05)
  PiperOrigin-RevId: 216009475
* Add the plumbing for an autograph flag to defun. Disabled and experimental for now. (Dan Moldovan, 2018-10-05)
  PiperOrigin-RevId: 216003028
* Simplify the logic for bubbling captured tensors when building the cond_v2 gradient. (Saurabh Saxena, 2018-10-05)
  The current logic tries to bubble the forward-pass tensor up to the outermost graph. That is not always possible, e.g. when the cond is inside a while loop it would need to know the while_loop accumulator logic. Instead, the cond gradient now captures tensors from the forward If op's graph; when the grad If op is built, these tensors are appropriately captured by the surrounding FuncGraph.
  PiperOrigin-RevId: 215993009
* Automated rollback of commit d258207f1583df4faa452265b051879af6c15dac (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 215989111
* Fix api_compatibility_test diff for large files. (Anna R, 2018-10-05)
  assertEqual might be applied instead of assertMultiLineEqual if the input is too large (https://bugs.python.org/issue11763). This change switches to using unified_diff in that case.
  PiperOrigin-RevId: 215987656
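A minimal sketch of the described fallback using Python's standard `difflib`; the size cutoff and helper name here are assumptions, not the test's actual code:

```python
import difflib
import unittest

_LARGE_INPUT_CHARS = 2 ** 16  # assumed cutoff, for illustration only

class GoldenFileTest(unittest.TestCase):

  def assert_same_text(self, expected, actual):
    # assertMultiLineEqual gives a readable diff, but only for inputs below
    # the size where unittest silently falls back to assertEqual.
    if max(len(expected), len(actual)) < _LARGE_INPUT_CHARS:
      self.assertMultiLineEqual(expected, actual)
    elif expected != actual:
      diff = "\n".join(difflib.unified_diff(
          expected.splitlines(), actual.splitlines(), lineterm=""))
      self.fail("Text mismatch:\n" + diff)
```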
* Orders non-resource-affecting stateful ops in defuns. (Alexandre Passos, 2018-10-05)
  PiperOrigin-RevId: 215985679
* Add DistributionStrategy support to moving average APIs. (A. Unique TensorFlower, 2018-10-05)
  Fixes #21405.
  PiperOrigin-RevId: 215973401
* Automated rollback of commit ae0bc6f006497cc04a2ee75166d4ec71c7154fd8 (Jiri Simsa, 2018-10-05)
  PiperOrigin-RevId: 215969360
* [tf.data] Adding specialization for `MapDataset`, `ParallelMapDataset`, and `MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). (Jiri Simsa, 2018-10-05)
  This specialization can produce the result without having to schedule the function using the executor.
  PiperOrigin-RevId: 215957592
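For illustration, a user function with this property simply forwards its inputs; whether the runtime actually short-circuits it is an internal optimization detail:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices((tf.range(4), tf.range(4, 8)))
# Every output is taken directly from an input, so no per-element computation
# is required; this is the shape of function the specialization recognizes.
swapped = ds.map(lambda x, y: (y, x))
```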
* Copy device from If op to the lowered ops. (Saurabh Saxena, 2018-10-05)
  Enable GPU tests for cond_v2.
  PiperOrigin-RevId: 215956220
* Brings V2 Optimizers into Keras w/ Keras signatures (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 215950207
* Merge pull request #20476 from yongtang:06052018-bincount-shape (TensorFlower Gardener, 2018-10-05)
  PiperOrigin-RevId: 215947463
* Do 2 warmup runs in assert_no_new_pyobjects_executing_eagerly. (Todd Wang, 2018-10-05)
  PiperOrigin-RevId: 215944829
* Make gradient tape stack thread local (Igor Ganichev, 2018-10-05)
  PiperOrigin-RevId: 215937618
* Fix documentation. (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 215930596
* Automated rollback of PR #21945 (A. Unique TensorFlower, 2018-10-05)
  Automated rollback of commit 863f61412fcc654840c6b67473b742ea4e5e964e. Revert #21945.
  PiperOrigin-RevId: 215913175
* compat: Update forward compatibility horizon to 2018-10-05 (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 215874612
* Fix regression that caused xrange to be ignored. (Dan Moldovan, 2018-10-04)
  PiperOrigin-RevId: 215844450
* Merge pull request #21945 from efagerho:master (TensorFlower Gardener, 2018-10-04)
  PiperOrigin-RevId: 215824410
* Pin ops with small integer inputs (already on the CPU) to the CPU in eager. (Akshay Modi, 2018-10-04)
  An environment variable (TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING) is provided to turn this off if necessary (it's on by default).
  PiperOrigin-RevId: 215821915
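A sketch of the opt-out; the variable name comes from the commit message, but the exact value used to disable the behavior ("0" below) is an assumption:

```python
import os
# Set before TensorFlow initializes its eager context so the flag is honored.
os.environ["TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING"] = "0"  # assumed "off" value

import tensorflow as tf
tf.enable_eager_execution()
```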
* This CL fixes a bug in the eager benchmarks test that caused the defun tests to execute a different-sized matrix multiply than the eager tests. (A. Unique TensorFlower, 2018-10-04)
  PiperOrigin-RevId: 215814346
* Avoid creating control edges on ops that belong to a different graph. (Alexandre Passos, 2018-10-04)
  PiperOrigin-RevId: 215811680
* Automated rollback of commit 6b538d9ce54e878576131cde0c76e43a893180c2 (Smit Hinsu, 2018-10-04)
  PiperOrigin-RevId: 215808649
* Internal change. (Anna R, 2018-10-04)
  PiperOrigin-RevId: 215797256
* Enable masking through a Sequential model. (Francois Chollet, 2018-10-04)
  PiperOrigin-RevId: 215790636
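A hedged sketch of what this enables, using standard Keras layers; the shapes and layer sizes are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 8)),
    tf.keras.layers.LSTM(16),   # receives the mask propagated through Sequential
    tf.keras.layers.Dense(1),
])
```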
* [tf.data] Add a notion of `captured args` to MapDefun (Rachel Lim, 2018-10-04)
  PiperOrigin-RevId: 215788485
* Temporarily disable testCondInDefun test in control_flow_ops_py_test (Smit Hinsu, 2018-10-04)
  PiperOrigin-RevId: 215788359
* [tf.data] Clean up tests for `tf.data.experimental`. (Derek Murray, 2018-10-04)
  This change splits up large test files into smaller ones, and re-enables tests that were disabled for obsolete reasons.
  PiperOrigin-RevId: 215785396
* Makes sure Keras Layer's `__call__` is always used in Eager. (A. Unique TensorFlower, 2018-10-04)
  Currently, if a Layer is invoked with the Functional API in Eager, `__call__` is only used during setup, and thereafter `call` is used internally. This limits the ability to add pre/post-processing steps to `call` in Eager in the future. Additionally, the Subclassed Model API already always uses `__call__` in Eager.
  PiperOrigin-RevId: 215778408
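A small illustration of the distinction (the layer itself is made up): user code invokes the layer object, which routes through `__call__`, and `__call__` dispatches to `call`:

```python
import tensorflow as tf

class Double(tf.keras.layers.Layer):
  def call(self, inputs):
    return inputs * 2.0

layer = Double()
y = layer(tf.ones([2, 3]))  # goes through Layer.__call__, which invokes call()
```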
* Add a separator between shape and dtype in cache key encoding. (Akshay Modi, 2018-10-04)
  It was possible that we could mix shapes and types: T111 could mean a tensor of dtype 1 and shape (1, 1) or a tensor of dtype 11 and shape (1).
  PiperOrigin-RevId: 215777629
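A plain-Python sketch of the ambiguity (not the actual cache-key code): without a separator, dtype 1 with shape (1, 1) and dtype 11 with shape (1,) collide.

```python
def ambiguous_key(dtype_enum, shape):
  # No separator: dtype and shape digits run together.
  return "T%d%s" % (dtype_enum, "".join(str(d) for d in shape))

def separated_key(dtype_enum, shape):
  # A separator between dtype and shape removes the ambiguity.
  return "T%d;%s" % (dtype_enum, ",".join(str(d) for d in shape))

assert ambiguous_key(1, (1, 1)) == ambiguous_key(11, (1,))  # both "T111"
assert separated_key(1, (1, 1)) != separated_key(11, (1,))
```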
* Add "encoding" attribute to string substr op, which controls how each "character" is treated: (A. Unique TensorFlower, 2018-10-04)
  * BYTE: Position & length refer to bytes in the string. (Default)
  * UTF8: The string is interpreted as UTF-8 encoded Unicode code points, and position & length are treated relative to them.
  RELNOTES: Add option to get substring using Unicode characters
  PiperOrigin-RevId: 215773373
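A plain-Python illustration of the BYTE vs. UTF8 semantics described above (not the op itself):

```python
s = u"héllo".encode("utf-8")   # b"h\xc3\xa9llo": 6 bytes, 5 characters
# BYTE: positions and lengths count bytes, so a length-3 slice covers "h" + "é".
assert s[0:3] == b"h\xc3\xa9"
# UTF8: positions and lengths count Unicode code points, so length 3 covers "hél".
assert s.decode("utf-8")[0:3] == u"h\u00e9l"
```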
* Merge pull request #22660 from gautam1858:patch-18 (TensorFlower Gardener, 2018-10-04)
  PiperOrigin-RevId: 215761730
* Merge pull request #22392 from yanboliang:metrics (TensorFlower Gardener, 2018-10-04)
  PiperOrigin-RevId: 215760505
* Add ability to vectorize nodes that do not derive from function arguments. (Rachel Lim, 2018-10-04)
  (This indirectly handles "Const" outputs automagically, since they are always unstacked.)
  PiperOrigin-RevId: 215749824
* Gracefully disallow updating resource variables with invalid shapes. (Asim Shankar, 2018-10-04)
  During graph construction, the shape function for AssignAddVariableOp etc. would raise an error when the value being "assign add"ed to the variable has an incompatible shape. With eager execution, no such validation was performed, which triggered an assertion failure in Eigen: https://github.com/eigenteam/eigen-git-mirror/blob/7d97e1cbbe4424fda39e31c88def7c0863897640/unsupported/Eigen/CXX11/src/Tensor/TensorEvaluator.h#L479
  This change prevents that assertion failure.
  PiperOrigin-RevId: 215749071
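A sketch of the eager-mode case this guards against; the exact exception type raised is an assumption:

```python
import tensorflow as tf
tf.enable_eager_execution()

v = tf.Variable([1.0, 2.0])
try:
  v.assign_add(tf.constant([1.0, 2.0, 3.0]))  # shape [3] vs. variable shape [2]
except (ValueError, tf.errors.InvalidArgumentError) as e:
  print("update rejected:", e)  # raised in Python instead of failing inside Eigen
```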
* Add option in tf.gradients() to return zero tensors for unconnected gradients. (A. Unique TensorFlower, 2018-10-04)
  tf.gradients currently returns [None] when the gradient of unconnected variables is requested. This backward-compatible change adds the option to have zero tensors returned that match the dimensions of the input tensor.
  PiperOrigin-RevId: 215725488
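A sketch of both behaviors; the `unconnected_gradients` argument name reflects how this option is exposed in later TF releases and should be treated as an assumption here:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[3])
y = tf.placeholder(tf.float32, shape=[3])
z = 2.0 * x                                   # z does not depend on y

grads_default = tf.gradients(z, y)            # default behavior: [None]
grads_zero = tf.gradients(                    # new option: a zero tensor shaped like y
    z, y, unconnected_gradients=tf.UnconnectedGradients.ZERO)
```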
* Make batch_gather work with indices of dtype int64. (Adria Puigdomenech, 2018-10-04)
  PiperOrigin-RevId: 215711383
* Automated rollback of commit 70a395f9795a48c21bc35cdf1dc44778f73a7bba (A. Unique TensorFlower, 2018-10-04)
  PiperOrigin-RevId: 215710849
* compat: Update forward compatibility horizon to 2018-10-04 (A. Unique TensorFlower, 2018-10-04)
  PiperOrigin-RevId: 215706500
* [tf.data] Fix bug in `tf.data.experimental.unbatch()`. (Derek Murray, 2018-10-03)
  Previously, if the rank of the input to this transformation was statically unknown, we would erroneously report that the output is a scalar, and violate downstream shape integrity checks. Instead, in that case the output shape should be unknown.
  PiperOrigin-RevId: 215683027
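A sketch of the transformation being fixed; the fix itself concerns inputs whose rank is statically unknown, which the toy pipeline below does not exercise:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(tf.zeros([4, 3]))  # elements: shape [3]
batched = ds.batch(2)                                      # elements: shape [2, 3]
unbatched = batched.apply(tf.data.experimental.unbatch())  # elements: shape [3] again
```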
* assert_nontrivial_match in tf.keras.Model.load_weights (TF format) (Allen Lavoie, 2018-10-03)
  Adds a bit of sanity checking by default to load_weights (e.g. for the case when absolutely nothing matches) while still supporting restore-on-create and the addition of new Layers to checkpointed models.
  PiperOrigin-RevId: 215652168
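A minimal sketch of the TF-format round trip the new check applies to; the model and checkpoint path are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
model.save_weights("/tmp/weights_ckpt", save_format="tf")
# With this change, load_weights checks that at least something in the
# checkpoint matched the model, instead of silently restoring nothing.
model.load_weights("/tmp/weights_ckpt")
```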
* Disable norm_op_test and svd_op_test under msan (Smit Hinsu, 2018-10-03)
  PiperOrigin-RevId: 215643600
* Merge pull request #22591 from EFanZh:fix-docs (TensorFlower Gardener, 2018-10-03)
  PiperOrigin-RevId: 215639962
* Update size of multi_device_iterator_test to medium to fix timeouts (Smit Hinsu, 2018-10-03)
  PiperOrigin-RevId: 215637785