...
* Automated rollback of commit 13b47e6c4f9d7b295948b1057139bf676e394b6f (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216260575
* Internal Change. (Michael Case, 2018-10-08)
  PiperOrigin-RevId: 216260437
* Simple comment fix in CheckpointInputPipelineHook. (Ruoxin Sang, 2018-10-08)
  PiperOrigin-RevId: 216260216
* Fix issue with type inference for ops with fixed output types (Jared Duke, 2018-10-08)
  Use the ArgDef::type field when available for propagating the output types from a given unsupported operator.
  PiperOrigin-RevId: 216257741
* Automated rollback of commit 09b0fc199129e0f487a39741bdf674cf09035cbc (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216256115
* [XLA] Simplify loop nesting in HandleConvolution (David Majnemer, 2018-10-08)
  The calculation of a spatial coordinate in the kernel and activations does not depend on which part of the contracted dimension (input feature) we are in. Rather than nesting the loops, the loops can be siblings:
  - One loop over spatial dimensions
  - One loop over the input feature group
  This reduces the nesting depth, which makes the code a little more readable and might be slightly faster due to work invariant in the spatial loop getting hoisted out.
  PiperOrigin-RevId: 216255839
* Ignore args and kwargs for defun's get_concrete_fn if `PolymorphicFunction` was created with an input_signature. (Shivani Agrawal, 2018-10-08)
  PiperOrigin-RevId: 216253122
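  A minimal sketch of the behavior described above, written against the later tf.function API (the successor of defun's PolymorphicFunction); the example function and signature are illustrative, not from the commit:

      import tensorflow as tf

      @tf.function(
          input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
      def square(x):
        return x * x

      # Because an input_signature was supplied, it alone determines the
      # traced concrete function; no args/kwargs need to be passed here.
      concrete_fn = square.get_concrete_function()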
* Merge pull request #22783 from Intel-tensorflow:sfu2/clean_mklml (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216253115
* Add more logging to the convolution transformations. (Tim Shen, 2018-10-08)
  PiperOrigin-RevId: 216252980
* Add custom call with layout constraints. (Mark Heffernan, 2018-10-08)
  Add a variant of CustomCall which specifies arbitrary layout constraints on the operands and result. The existing non-layout-constrained CustomCall is changed to have no layout preference and can now be assigned arbitrary layouts by layout assignment.
  PiperOrigin-RevId: 216249615
* Update performance documentation. (Shashi Shekhar, 2018-10-08)
  PiperOrigin-RevId: 216248418
* [tf.data] Choose non-deterministic seed once per Python-level `Dataset` object. (Derek Murray, 2018-10-08)
  This changes the behavior of randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object. With this change, the seed will be chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
  This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified. The op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
  PiperOrigin-RevId: 216248013
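  An illustrative sketch (not from the commit) of the behavior change, using the TF 1.x tf.data API: two pipeline branches derived from the same unseeded shuffle now share a single per-Dataset seed, so both branches observe the same order.

      import tensorflow as tf

      base = tf.data.Dataset.range(10).shuffle(buffer_size=10)  # no explicit seed
      zipped = tf.data.Dataset.zip((base, base))
      next_pair = zipped.make_one_shot_iterator().get_next()

      with tf.Session() as sess:
        a, b = sess.run(next_pair)
        # With the per-Dataset seed, both uses of `base` share one shuffle
        # order, so a == b for every element; previously each use could differ.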
* Automated rollback of commit 295b3c80555cc82d8d70faf96a47681e1d904b9c (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216247929
* Merge pull request #22735 from Tingbopku:fix-reference (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216245934
* Merge pull request #22719 from samikama:fix_pip_package (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216245301
* Partial support for tfe.defun in tf.gradients. (Alexandre Passos, 2018-10-08)
  Doesn't attempt to deal with cases where we might have already generated the FunctionDef for the parent function, as in that case we cannot easily modify the forward pass.
  PiperOrigin-RevId: 216243224
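  A hedged sketch of the newly supported pattern, assuming TF 1.x graph mode and the tf.contrib.eager.defun API of that era (the function and shapes are illustrative):

      import tensorflow as tf

      @tf.contrib.eager.defun
      def f(x):
        return x * x

      x = tf.placeholder(tf.float32, shape=[])
      y = f(x)                    # emits a function-call op in the graph
      dy_dx = tf.gradients(y, x)  # now (partially) supported through defun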
* Allow using more than one converter in the testing harness. (Dan Moldovan, 2018-10-08)
  PiperOrigin-RevId: 216242862
* Add tf.BenchmarkConfig that returns a session config appropriate for benchmarking. (A. Unique TensorFlower, 2018-10-08)
  At the moment, it returns a default config with only Grappler dependency optimizer disabled. Many benchmarks wrap the subgraph they want to time in control_flow_ops.group() to avoid including the overhead of copying the output back to the Python client in the measurement. In the graph, this only adds a control dependency between the subgraph output and the fetch node, which in turn (often) causes the dependency optimizer to turn all nodes in the graph into no-ops.
  PiperOrigin-RevId: 216242463
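  A hedged sketch of the pattern the commit describes (the matmul subgraph is illustrative, not from the commit): the benchmark fetches a group() of its outputs, so the Grappler dependency optimizer must be disabled in the session config to keep the timed nodes from being reduced to no-ops.

      import tensorflow as tf
      from tensorflow.core.protobuf import rewriter_config_pb2

      out = tf.matmul(tf.random_normal([256, 256]), tf.random_normal([256, 256]))
      bench_op = tf.group(out)  # avoid copying the result back to the client

      config = tf.ConfigProto()
      config.graph_options.rewrite_options.dependency_optimization = (
          rewriter_config_pb2.RewriterConfig.OFF)

      with tf.Session(config=config) as sess:
        sess.run(bench_op)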
* Changed Adam algorithm variant formula from sqrt(max(v, epsilon**2)) to sqrt(v + epsilon**2) and changed flag name accordingly. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216240045
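  A small sketch contrasting the two denominator formulas named in the commit message (NumPy is used purely for illustration; epsilon is the usual small stability constant):

      import numpy as np

      def denom_old(v, epsilon=1e-8):
        # previous variant: sqrt(max(v, epsilon**2))
        return np.sqrt(np.maximum(v, epsilon ** 2))

      def denom_new(v, epsilon=1e-8):
        # new variant: sqrt(v + epsilon**2)
        return np.sqrt(v + epsilon ** 2)

      # In either variant the parameter update divides the first-moment
      # estimate m by the chosen denominator: update = lr * m / denom(v).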
* Add timeout mechanism to Grappler meta optimizer. (A. Unique TensorFlower, 2018-10-08)
  This is only a best-effort mechanism, since the meta optimizer only checks if it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
  PiperOrigin-RevId: 216234262
* Fix a couple of reference leaks (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216230391
* Merge pull request #22303 from JuliaComputing:kf/broadcastshapeval (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216228494
* Remove the restrictions that constant resolution of reduce_sum operators must be on axis 0, and can only be on 1 or 2-d inputs. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216226776
* Fix the steps_per_epoch when training on mnist (Sourabh Bajaj, 2018-10-08)
  PiperOrigin-RevId: 216225505
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216224026
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216217887
* Merge pull request #21658 from lowintelligence:master (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216217509
* Fix support for a single tensor to be passed to target_tensors (Sourabh Bajaj, 2018-10-08)
  PiperOrigin-RevId: 216212953
* Convert TensorFlow's nasm dependency to new third party import method. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216211467
* Add a utility that allows finding a name for an entity, relative to an existing namespace. (Dan Moldovan, 2018-10-08)
  PiperOrigin-RevId: 216211286
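  A hypothetical sketch (not the actual utility, whose details the commit does not show) of the general idea: look up the name under which an entity is bound in a namespace mapping so it can be referred to by name.

      def name_of(entity, namespace):
        """Returns a key in `namespace` that is bound to `entity`, if any."""
        for name, value in namespace.items():
          if value is entity:
            return name
        return None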
* Part 1/3 of the feature sync to the Keras 2.2.4 API. (Francois Chollet, 2018-10-08)
  PiperOrigin-RevId: 216211279
* Add support for SequenceExamples to sequence_feature_columns (Karmel Allison, 2018-10-08)
  PiperOrigin-RevId: 216210141
* Wait for shared resources to initialize before initializing local resources. (A. Unique TensorFlower, 2018-10-08)
  Shared resources are functionally very similar to global variables, and they are initialized at the same time. However, because workers only wait for global variables to be initialized, there is a race condition in which a shared resource is sometimes not ready.
  PiperOrigin-RevId: 216208679
* Allow TensorSpec objects as arguments to defun's get_concrete_function (Allen Lavoie, 2018-10-08)
  Will be helpful for specifying serving signatures when exporting SavedModels.
  PiperOrigin-RevId: 216207284
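  A minimal sketch of the usage this enables, written against the later tf.function API (the successor of defun); the model function and shapes below are illustrative:

      import tensorflow as tf

      @tf.function
      def serve(images):
        return tf.reduce_mean(images, axis=[1, 2])

      # Trace a concrete function from a TensorSpec rather than a real tensor,
      # e.g. to pin down a serving signature before exporting a SavedModel.
      concrete = serve.get_concrete_function(
          tf.TensorSpec(shape=[None, 28, 28], dtype=tf.float32))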
* [tf.data] Adding specialization for `MapDataset`, `ParallelMapDataset`, and `MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). (Jiri Simsa, 2018-10-08)
  This specialization can produce the result without having to schedule the function using the executor.
  PiperOrigin-RevId: 216206232
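  An example of the kind of user-defined function the specialization targets (a sketch of usage, not of the implementation): every output is taken directly from an input, so the runtime can produce the result without scheduling the function on the executor.

      import tensorflow as tf

      left = tf.data.Dataset.range(10)
      right = tf.data.Dataset.range(10, 20)
      pairs = tf.data.Dataset.zip((left, right))

      # Each output argument comes straight from an input argument, so this
      # map qualifies for the short-circuit specialization.
      swapped = pairs.map(lambda x, y: (y, x))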
* Fix compilation in unique_op when Eigen::Index != int64. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216205396
* Fix typo (Makoto Uchida, 2018-10-08)
  PiperOrigin-RevId: 216203408
* Benchmark for comparing original cond and cond_v2 performance. (Skye Wanderman-Milne, 2018-10-08)
  This benchmark creates many intermediate values, so we can make sure there's no performance overhead (it looks like there might be currently, or it might be from some other difference). It also runs in a defun and in legacy graph mode.
  Results from my machine:
  - CondWithManyIntermediatesBenchmark.benchmark_cond_v1_defun: iters: 500, wall_time: 1.25822591782
  - CondWithManyIntermediatesBenchmark.benchmark_cond_v2_defun: iters: 500, wall_time: 5.99376106262
  - CondWithManyIntermediatesBenchmark.benchmark_cond_v1_graph: iters: 500, wall_time: 2.05277585983
  - CondWithManyIntermediatesBenchmark.benchmark_cond_v2_graph: iters: 500, wall_time: 2.84808516502
  Clearly we have some work to do! I haven't looked into the time differences at all yet.
  PiperOrigin-RevId: 216202325
* Enable PinToHostOptimizer. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216201732
* Remove Raises documentation on imperative_grads for ValueError not raised. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216201714
* Avoid adding spurious ops when colocating with resource variables. (Asim Shankar, 2018-10-08)
  Prior to this change, tf.colocate_with(v) would insert spurious operations (a ReadVariableOp and an Identity) in the graph when v is a resource variable, and then colocate the operations within the block with those newly added, otherwise disconnected, operations.
  This commit avoids adding the unnecessary ReadVariableOp/Identity nodes and colocates operations within the block with the VarHandleOp.
  PiperOrigin-RevId: 216201638
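  A sketch of the affected pattern in TF 1.x graph mode (the variable name and shape are illustrative, not from the commit):

      import tensorflow as tf

      v = tf.get_variable("v", shape=[], use_resource=True)
      with tf.colocate_with(v):
        # After this change, `w` is colocated with v's VarHandleOp directly,
        # without an extra ReadVariableOp/Identity being added to the graph.
        w = tf.constant(1.0)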
* Reduce tolerances for rmsprop_test float16, to fix OSS builds. (Todd Wang, 2018-10-08)
  PiperOrigin-RevId: 216200439
* Optimize PinToHostOptimizer by adding cache, also add PinToHostOptimizer to benchmarks. (A. Unique TensorFlower, 2018-10-08)
  original runtime: 4.83492736816 secs
  w/ cache runtime: 2.19033999443 secs
  PiperOrigin-RevId: 216195286
* Remove Dims from types.h, create build structure. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216191084
* Improve const correctness of HloDomainMap (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216189458
* Make ExecutorState preserve the thread context. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216187878
* Merge pull request #19531 from smistad:cmake-windows-host-64 (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216185979
* compat: Update forward compatibility horizon to 2018-10-08 (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216151605
* compat: Update forward compatibility horizon to 2018-10-07 (A. Unique TensorFlower, 2018-10-07)
  PiperOrigin-RevId: 216079665
* Add SequenceLSTMOptions to schema to decouple the sequential Op from the LSTM. (A. Unique TensorFlower, 2018-10-06)
  PiperOrigin-RevId: 216066634