* [TF:XLA] Bump open source llvm revision to r341305 (Sanjoy Das, 2018-09-03)
    PiperOrigin-RevId: 211387503
* Update unidirectional sequential LSTM to support state API. (A. Unique TensorFlower, 2018-09-03)
    PiperOrigin-RevId: 211378182
* Update bidirectional sequential LSTM to support state API. (A. Unique TensorFlower, 2018-09-03)
    PiperOrigin-RevId: 211378028
* Fix a test. (Shashi Shekhar, 2018-09-03)
    PiperOrigin-RevId: 211377977
* [XLA:GPU] Flush out any pending work before starting autotune (Benjamin Kramer, 2018-09-03)
    The autotune code assumes a clean slate, but work from previous program executions may still be pending on the streams owned by the executor. Do a full host-device sync before autotuning to flush out any pending work. I'm still somewhat confused about how autotune can interfere with other buffers; there might be more things going wrong.
    PiperOrigin-RevId: 211369162
* Fix floating point ordering in the Sort HLO op in the GPU backend. (Adrian Kuegel, 2018-09-03)
    We use the same trick that is used in the TPU backend.
    PiperOrigin-RevId: 211344106
* Call cuDNN also for grouped convolutions. (Adrian Kuegel, 2018-09-03)
    cuDNN supports grouped convolutions, so we don't need the ConvolutionFeatureGroupConverter pass and can instead set the group_count parameter on the cuDNN custom calls.
    PiperOrigin-RevId: 211339551
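    (Illustration, not TensorFlow code: what group_count expresses. In a grouped convolution, each slice of input channels is convolved only with its own slice of filters. The sketch below shows the semantics for the degenerate 1x1 case, where each group reduces to a matrix multiply; the function name and weight layout are assumptions made for the sketch.)

        import numpy as np

        def grouped_conv_1x1(x, w, groups):
            # x: [N, Cin]; w: [groups, Cin // groups, Cout // groups].
            # Each group of input channels is multiplied only by its own
            # filters, which is what a group_count parameter expresses.
            n, cin = x.shape
            cin_g = cin // groups
            outs = []
            for g in range(groups):
                xg = x[:, g * cin_g:(g + 1) * cin_g]  # this group's inputs
                outs.append(xg @ w[g])                # its own filters only
            return np.concatenate(outs, axis=1)       # [N, Cout]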
* Update downloadable clang to r340427 (Ilya Biryukov, 2018-09-03)
    PiperOrigin-RevId: 211339000
* compat: Update forward compatibility horizon to 2018-09-03 (A. Unique TensorFlower, 2018-09-03)
    PiperOrigin-RevId: 211323840
* Rollforward of rollback: (A. Unique TensorFlower, 2018-09-02)
    Reinstate the use of the integral-exponent power function MathUtil::IPow, but make sure to use a floating-point base, so as to compute the result using floating-point arithmetic. This behaviour is equivalent to, but faster than, std::pow. Note that care must be taken to convert the base to double, which we effect by providing an explicit template type argument for MathUtil::IPow.
    PiperOrigin-RevId: 211290304
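    (Illustration, not the MathUtil implementation: the exponentiation-by-squaring idea behind an integral-exponent power function, sketched in Python under the assumption of a non-negative integer exponent. Note the explicit conversion of the base to float, mirroring the double conversion described above.)

        def ipow(base, exp):
            # O(log exp) multiplications; all arithmetic is floating
            # point, so it computes the same mathematical result as
            # std::pow (up to rounding) while doing far less work.
            base = float(base)
            result = 1.0
            while exp > 0:
                if exp & 1:       # this bit of the exponent is set
                    result *= base
                base *= base      # square for the next bit
                exp >>= 1
            return result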
* [XLA] Simplify effective scalar iota to zero (David Majnemer, 2018-09-02)
    An iota whose shape is effectively scalar (every dimension has size 1) can only ever produce the value 0, so it can be replaced by a zero constant. Happened to observe this come up in a linear algebra workload.
    PiperOrigin-RevId: 211290278
* compat: Update forward compatibility horizon to 2018-09-02 (A. Unique TensorFlower, 2018-09-02)
    PiperOrigin-RevId: 211257009
* Merge pull request #20237 from Intel-tensorflow:nhasabni/gathernd (TensorFlower Gardener, 2018-09-01)
    PiperOrigin-RevId: 211226585
* Only watch tensors on the current tape rather than all of them. (Tom Hennigan, 2018-09-01)
    This allows fine-grained control over recording in some cases, for example the following, where we want d2y but not d2z:

        x1 = tf.Variable(2.0, trainable=False)
        x2 = tf.Variable(2.0, trainable=False)
        with tf.GradientTape() as tape1:
          with tf.GradientTape() as tape2:
            tape1.watch(x1)
            tape2.watch([x1, x2])
            y = x1 ** 3
            z = x2 ** 2
          dy, dz = tape2.gradient([y, z], [x1, x2])
        d2y, d2z = tape1.gradient([dy, dz], [x1, x2])
        assert d2z is None

    PiperOrigin-RevId: 211206506
* Automated rollback of commit 9e2ce8f4c483e68309a60dc89739bb1b79b4a12e (A. Unique TensorFlower, 2018-09-01)
    PiperOrigin-RevId: 211204708
* [XLA] Remove remaining StringPiece references. (Benjamin Kramer, 2018-09-01)
    StringPiece and string_view are the same now; no need to convert between them.
    PiperOrigin-RevId: 211195959
* compat: Update forward compatibility horizon to 2018-09-01 (A. Unique TensorFlower, 2018-09-01)
    PiperOrigin-RevId: 211195689
* [XLA] Use absl::CUnescape (Benjamin Kramer, 2018-09-01)
    This required an absl version bump past 5e7d459eeca7bc53deab0ee9634601386b53d7c0.
    PiperOrigin-RevId: 211195261
* Fixed a typo in README. (Yuefeng Zhou, 2018-08-31)
    PiperOrigin-RevId: 211188683
* Remove per-tower ready op since concat doesn't have a GPU kernel for DT_STRING. (Yuefeng Zhou, 2018-08-31)
    The current implementation queries the global collection for the ready op, so there is no need for a per-tower ready op.
    PiperOrigin-RevId: 211187544
* Fix lambda capture. (Shashi Shekhar, 2018-08-31)
    PiperOrigin-RevId: 211180182
* [tf.data] Avoiding serialization of (potentially large) tensors during optimization. (Jiri Simsa, 2018-08-31)
    PiperOrigin-RevId: 211179990
* Merge pull request #21956 from Intel-tensorflow:sriniva2/stringpiece_fix (TensorFlower Gardener, 2018-08-31)
    PiperOrigin-RevId: 211178634
* Minor cleanup: Simplify declaration of transformation. (Suharsh Sivakumar, 2018-08-31)
    PiperOrigin-RevId: 211175130
* [XLA:CPU] Don't use "temps" to refer to the table of buffer allocations (Sanjoy Das, 2018-08-31)
    Instead, call it the "buffer table"; it now contains both entry computation parameters and temporaries.
    PiperOrigin-RevId: 211171651
* Merge pull request #20108 from yongtang:19910-glorot_uniform_initializer (TensorFlower Gardener, 2018-08-31)
    PiperOrigin-RevId: 211169413
* Improve documentation for tf.custom_gradient. (Alexandre Passos, 2018-08-31)
    Fixes #21756
    PiperOrigin-RevId: 211168797
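    (For context, a minimal tf.custom_gradient sketch in TF 1.x style, adapted from memory of the API docs; log1pexp and its hand-written gradient are illustrative and not part of this commit. The decorated function returns both its value and a function that computes its gradient.)

        import tensorflow as tf

        @tf.custom_gradient
        def log1pexp(x):
            e = tf.exp(x)
            def grad(dy):
                # Numerically stable gradient of log(1 + exp(x)).
                return dy * (1 - 1 / (1 + e))
            return tf.log(1 + e), grad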
* Add benchmarks for Spatial/Cuboid backward-kernel convolutions. (Eugene Zhulenev, 2018-08-31)
    PiperOrigin-RevId: 211167699
* Add keras example to distribution strategy readme. (Priya Gupta, 2018-08-31)
    PiperOrigin-RevId: 211167333
* Add Raspberry Pi nightly build status and artifact links to the github readme. (Gunhan Gulsoy, 2018-08-31)
    Fixed #17850
    PiperOrigin-RevId: 211166112
* A temporary fix to auto-sharding for synthetic data. (Yuefeng Zhou, 2018-08-31)
    PiperOrigin-RevId: 211165943
* Add a run_standard_tensorflow_server method for users who start their clusters with std servers. (Yuefeng Zhou, 2018-08-31)
    PiperOrigin-RevId: 211165860
* Add documentation for multi-worker training. (Yuefeng Zhou, 2018-08-31)
    PiperOrigin-RevId: 211165811
* Usability improvements to @recompute_grad (A. Unique TensorFlower, 2018-08-31)
    - Error if fn closes over a Tensor or Variable (not always detectable).
    - Allow None gradients for some inputs (filter out Nones before control_deps).
    PiperOrigin-RevId: 211162615
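    (A hedged usage sketch: recompute_grad trades compute for memory by recomputing fn's activations during the backward pass instead of storing them. The tf.contrib.layers.recompute_grad path reflects the contrib location of this era; treat the exact module path as an assumption. Per the first point above, fn must not close over external Tensors or Variables.)

        import tensorflow as tf

        @tf.contrib.layers.recompute_grad
        def block(x):
            # All inputs must flow in through x; activations created
            # here are recomputed, not stored, when gradients are taken.
            h = tf.layers.dense(x, 128, activation=tf.nn.relu)
            return tf.layers.dense(h, 128)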
* Re-enable hoisting of coeff-wise unary chains out of Split and into Concat. (A. Unique TensorFlower, 2018-08-31)
    PiperOrigin-RevId: 211162510
* CHECK that the thread locality of the call matches the thread locality of the callee (Sanjoy Das, 2018-08-31)
    PiperOrigin-RevId: 211162384
* Fix normalization in Shampoo when dealing with differently sized tensors. (A. Unique TensorFlower, 2018-08-31)
    Add M^1/2 to reduce condition numbers, before computing the inverse pth root.
    PiperOrigin-RevId: 211162032
* Fix: make access to collective_graph_key thread-safe. (Ayush Dubey, 2018-08-31)
    Rollback of rollback. The original change introduced a collective_graph_key_ integer to DirectSession, but it did not protect accesses to this integer. This change protects access with a mutex.
    Automated rollback of commit cb9443831283c2366e3dd91001db6362d6594f66
    PiperOrigin-RevId: 211161961
* Add weight decay version of Shampoo. (A. Unique TensorFlower, 2018-08-31)
    PiperOrigin-RevId: 211161790
* Merge multiple concat into one. (A. Unique TensorFlower, 2018-08-31)
    PiperOrigin-RevId: 211161172
* Improving logging of tensors being quantized. (Suharsh Sivakumar, 2018-08-31)
    PiperOrigin-RevId: 211160708
* Fixing Any operator. (A. Unique TensorFlower, 2018-08-31)
    PiperOrigin-RevId: 211159438
* Clean up: remove cluster_spec, task_type and task_id in the __init__ method of ParameterServerStrategy. (Yuefeng Zhou, 2018-08-31)
    We enable multi-worker training only through the distribute coordinator.
    PiperOrigin-RevId: 211159386
* Add support for TPU Pods in TPU Strategy by running per-host infeed. (Sourabh Bajaj, 2018-08-31)
    PiperOrigin-RevId: 211158585
* [Cloud TPU / Keras]: Pipeline inputs for performance. (Brennan Saeta, 2018-08-31)
    With this pipelining change (and a little bit of re-tuning of input pipelines to have _less_ parallelism, to avoid thread starvation), we are able to significantly reduce the overheads of supporting dynamic shapes with Keras.
    PiperOrigin-RevId: 211157531
* Remove test dependencies from TFLite Android samples (Jared Duke, 2018-08-31)
    These deps are unnecessary and were causing unexpected breakage. Remove them.
    Fixes #20828
    PiperOrigin-RevId: 211156706
* Benchmarks for CuboidConvolutions. (Eugene Zhulenev, 2018-08-31)
    PiperOrigin-RevId: 211156403
* Mark tensorflow/contrib/distributions:sinh_arcsinh_test as a medium-sized test. (Justin Lebar, 2018-08-31)
    PiperOrigin-RevId: 211154236
* Fix docs. (Shashi Shekhar, 2018-08-31)
    PiperOrigin-RevId: 211153502
* Remove unused 'None' option for reduce destinations in DistributionStrategy. (A. Unique TensorFlower, 2018-08-31)
    If you want all-reduce, supply the `value` to the `destinations` argument.
    PiperOrigin-RevId: 211148002