| Commit message | Author | Age |
* Disable AWS S3 virtual addressing
This fix is related to 16397 and 15159. The fix disables
the virtual addressing of AWS S3, as was suggested in the comment.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Fix format issue.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add comment for the passed parameter of virtual addressing.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
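For context, AWS S3 supports two addressing styles, and disabling virtual addressing switches requests to path style. A minimal sketch of the URL difference (bucket/key names are hypothetical and the endpoint is simplified):

```python
def s3_url(bucket, key, use_virtual_addressing):
    """Build an S3 object URL in either addressing style."""
    if use_virtual_addressing:
        # Virtual-hosted style: the bucket name becomes part of the hostname.
        return "https://%s.s3.amazonaws.com/%s" % (bucket, key)
    # Path style: the bucket name appears in the URL path instead.
    return "https://s3.amazonaws.com/%s/%s" % (bucket, key)

print(s3_url("my-bucket", "data/file.txt", True))
print(s3_url("my-bucket", "data/file.txt", False))
```

Path style avoids per-bucket DNS resolution, which matters for S3-compatible endpoints that do not serve wildcard bucket subdomains.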
* fix typos
* Update docs for installing CUDA/CUDNN
This fix addresses the issue raised in 16479 where
CUDA/CUDNN versions from the docs do not match TensorFlow v1.5.0.
From the Dockerfile and docker image ENV, the versions of CUDA/CUDNN
for TensorFlow v1.5.0 are:
```
CUDA_VERSION 9.0.176
CUDNN_VERSION 7.0.5.15
```
This fix updates the docs so that the CUDA version is changed from `8.0` to `9.0`
and the CUDNN version from `6.0` to `7.0`.
This fix fixes 16479.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
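The doc change above truncates the full versions reported in the Docker image to the major.minor form; a trivial sketch of that truncation:

```python
def doc_version(full_version):
    """Reduce a full CUDA/CUDNN version string to the major.minor form used in the docs."""
    parts = full_version.split(".")
    return ".".join(parts[:2])

print(doc_version("9.0.176"))   # CUDA  -> 9.0
print(doc_version("7.0.5.15"))  # CUDNN -> 7.0
```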
* updating CUDA srcs for Makefile build to fix unsatisfied link error
* more makefile refactoring
Branch 183446593
1. Using _shape_tuple
2. Bypassing * over math_ops.mul etc
3. Flatmaps in the tape code
4. Cache for ones similar to for zeros
5. Fast path for _SubGrad
6. Fast global_step += 1 for resource variables
7. Bypassing deprecated args decorator in eager mode
PiperOrigin-RevId: 183446593
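As an illustration of item 4 above (a cache for ones, similar to the existing one for zeros), a toy memo cache might look like this; `ones_like_cached` and the flat-list tensor stand-in are illustrative, not TensorFlow internals:

```python
_ones_cache = {}

def ones_like_cached(shape):
    """Return a cached flat list of ones for `shape` (a tuple of dims)."""
    if shape not in _ones_cache:
        size = 1
        for dim in shape:
            size *= dim
        _ones_cache[shape] = [1.0] * size
    return _ones_cache[shape]

a = ones_like_cached((2, 3))
b = ones_like_cached((2, 3))
print(a is b)  # True: the second call reuses the cached object
```

In eager mode, avoiding repeated construction of constant tensors on every backward pass is what makes such a cache a win.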
This makes chaining them easier. Control dependencies to ensure updates
happen are implicitly added by the function code.
PiperOrigin-RevId: 183446211
PiperOrigin-RevId: 183443656
did not exist in the external github TF repository.
PiperOrigin-RevId: 183443347
This fix fixes a build failure when compiling with
GCC 7.2.1 on AWS Linux 2:
```
gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC)
```
The error output was:
```
...
./tensorflow/contrib/lite/toco/model.h:1567:25: error: 'std::function' has not been declared
void EraseArrays(std::function<bool(const string&)> discardable) {
.....
```
This fix is related to 16046.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
PiperOrigin-RevId: 183441321
PiperOrigin-RevId: 183438398
* Change `reduce_logsumexp` to internally use `reshape` rather than `squeeze`
since the latter requires the `axis` arg to be a Python `list`.
PiperOrigin-RevId: 183396533
* Kernel utils to support broadcast add and mul.
PiperOrigin-RevId: 183397494
* Updating sparsify_gather.
PiperOrigin-RevId: 183402917
* [tf.data] Move slow-path-related code into the slow path in IteratorHandleOp::Compute().
This slightly reduces the amount of work performed when an iterator is accessed (after the first access), and potentially reduces contention if concurrent steps are accessing the same iterator.
PiperOrigin-RevId: 183406221
* Cleanup: Ran clang-format on all *.{cc,h} under grappler.
PiperOrigin-RevId: 183406440
* Increase shard count of //third_party/tensorflow/python:nn_batchnorm_test to avoid timeouts
When run under asan, the test runs for about 5 minutes, and sometimes
longer, causing frequent timeouts.
This change increases the shard count of the test to 4, which brings the run time
of the longest running shard under asan to about 2 minutes.
PiperOrigin-RevId: 183414888
* Add available choices to toco flags and fix minor formatting issues.
PiperOrigin-RevId: 183415713
* Performance improvements to some GPU code to use shared locks instead of unique locks for some hotspot cases.
PiperOrigin-RevId: 183418559
* [XLA] Improve error message for bad slices.
PiperOrigin-RevId: 183420038
* Fix py3 build rules for all py tests under py2tf.
PiperOrigin-RevId: 183422144
* Fix bug with Operation._control_inputs setter.
PiperOrigin-RevId: 183422192
* Make softmax_op_test.py work with C API enabled.
PiperOrigin-RevId: 183422829
* Cleanup: Ran clang-format on all *.{cc,h} files in tensorflow/core/kernels.
PiperOrigin-RevId: 183423961
* Fix the documentation for the dense layer for how rank > 2 inputs are handled.
PiperOrigin-RevId: 183425868
* Cleanup: Ran clang-format on all *.{cc,h} in tensorflow/core/ops.
PiperOrigin-RevId: 183429339
PiperOrigin-RevId: 183435438
inputs with equal distributions.
PiperOrigin-RevId: 183435084
PiperOrigin-RevId: 183431139
PiperOrigin-RevId: 183429540
* Add KafkaReader for processing streaming data with Apache Kafka
Apache Kafka is a widely used distributed streaming platform in
the open source community. The goal of this fix is to create a contrib
Reader op (inheriting from ReaderBase, similar to
TextLineReader/TFRecordReader) so that it is possible to read
Kafka streaming data from TensorFlow in a similar fashion.
This fix uses a C/C++ Apache Kafka client library librdkafka which
is released under the 2-clause BSD license, and is widely used in
a number of Kafka bindings such as Go, Python, C#/.Net, etc.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add KafkaReader Python wrapper.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add BUILD file and op registration for KafkaReader.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add C++ Kernel for KafkaReader
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add librdkafka to third_party packages in Bazel
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add contrib/kafka to part of the contrib bazel file.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Update workspace.bzl
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Comment out clean_deps of `tensorflow/core:framework` and `tensorflow/core:lib`
so that it is possible to build with ReaderBase.
See 1419 for details.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add group id flag.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Sync offset
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add test cases and script to start and stop Kafka server (with docker)
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Convert to KafkaConsumer from the legacy Consumer with librdkafka
so that thread join does not hang.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Only output offset as the key.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add timeout attr so that the Kafka Consumer can use it
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Build Kafka kernels by default, to get around the linkage issue.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Convert KafkaReader to KafkaDataset.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Fix workspace.bzl for kafka with tf_http_archive
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add public visibility
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Address review feedback
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Optionally select Kafka support through ./configure
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
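To illustrate the final KafkaDataset shape of this change (offsets emitted as keys, messages as values), here is a toy in-memory sketch; `FakeKafkaDataset` is hypothetical and does not use librdkafka:

```python
class FakeKafkaDataset:
    """Minimal sketch of a KafkaDataset-style iterator.

    A real implementation wraps librdkafka's KafkaConsumer; here an
    in-memory list of messages stands in for the topic partition.
    """

    def __init__(self, messages, start_offset=0):
        self._messages = messages
        self._offset = start_offset

    def __iter__(self):
        # Yield (offset, message) pairs, mirroring the
        # "Only output offset as the key" commit above.
        while self._offset < len(self._messages):
            yield (self._offset, self._messages[self._offset])
            self._offset += 1

records = list(FakeKafkaDataset(["a", "b", "c"], start_offset=1))
print(records)  # [(1, 'b'), (2, 'c')]
```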
Cleanup: Ran clang-format on all *.{cc,h} in tensorflow/core/ops.
PiperOrigin-RevId: 183429339
Fix the documentation for the dense layer for how rank > 2 inputs are handled.
PiperOrigin-RevId: 183425868
* Switch over to max_pool_v2 in Python
This fix is a follow up to 11875 so that MaxPool in Python
uses the v2 version. As 11875 was merged some time ago,
this fix conforms to the deprecation policy.
This fix is related to 11875 and 4746.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Update test cases in contrib/specs/python/specs_test due to MaxPool -> MaxPoolV2
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Update tensorflow/contrib/receptive_field
due to max_pool's strides and ksize moving from attr to input
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Remove const restriction for strides and ksize
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Register MaxPoolV2 with XLA
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Reformat with clang-format -i --style=Google
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
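The point of MaxPoolV2 is that `ksize` and `strides` become runtime inputs rather than fixed graph attrs; a toy 1-D analogue with call-time parameters (illustrative only, VALID padding):

```python
def max_pool_1d(values, ksize, stride):
    """1-D max pooling where ksize/stride are passed at call time (VALID padding)."""
    out = []
    i = 0
    while i + ksize <= len(values):
        # Take the max over each window, then advance by the stride.
        out.append(max(values[i:i + ksize]))
        i += stride
    return out

print(max_pool_1d([1, 3, 2, 5, 4, 6], ksize=2, stride=2))  # [3, 5, 6]
```

Because the window parameters are ordinary arguments, they can vary between calls, which is what the attr-to-input change enables for the graph op.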
Cleanup: Ran clang-format on all *.{cc,h} files in tensorflow/core/kernels.
PiperOrigin-RevId: 183423961
* Add a way to provide target nodes in Android
This is required when running some models, e.g. as a step for initializing the graph.
* Fix enableStats mistake on overload
Mistakenly passed enableStats as false instead of the parameter in the non-overloaded version.
Make softmax_op_test.py work with C API enabled.
PiperOrigin-RevId: 183422829
Fix bug with Operation._control_inputs setter.
PiperOrigin-RevId: 183422192
Fix py3 build rules for all py tests under py2tf.
PiperOrigin-RevId: 183422144
[XLA] Improve error message for bad slices.
PiperOrigin-RevId: 183420038
* Add missing header.
* Fix typo. Should refer to incoming argument.
* Update README.md
-png, --sample_index options are not available in google-perftools (2.4-0ubuntu5.16.04.1).
Also, since Ubuntu 16.04 wrongly recommends installing pprof from the 'tau' package
"
The program 'pprof' is currently not installed. You can install it by typing:
sudo apt install tau
"
the typical user command should probably be
google-pprof --pdf --nodecount=100 <filename>
* Add `google-perftools` installation Note.
Performance improvements to some GPU code to use shared locks instead of
unique locks for some hotspot cases.
PiperOrigin-RevId: 183418559
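The idea behind preferring shared locks on read-mostly hotspots can be sketched with a minimal reader-writer lock; this is illustrative only, not TensorFlow's implementation (which uses C++ mutex primitives):

```python
import threading

class SharedLock:
    """Tiny reader-writer lock: many readers may hold it at once,
    a writer needs exclusive access."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # readers only wait for writers,
                self._cond.wait()        # never for each other
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

On a hotspot that is mostly reads, shared acquisition lets concurrent readers proceed in parallel instead of serializing behind a unique lock.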
Add available choices to toco flags and fix minor formatting issues.
PiperOrigin-RevId: 183415713
Increase shard count of //third_party/tensorflow/python:nn_batchnorm_test to avoid timeouts
When run under asan, the test runs for about 5 minutes, and sometimes
longer, causing frequent timeouts.
This change increases the shard count of the test to 4, which brings the run time
of the longest running shard under asan to about 2 minutes.
PiperOrigin-RevId: 183414888
Cleanup: Ran clang-format on all *.{cc,h} under grappler.
PiperOrigin-RevId: 183406440
[tf.data] Move slow-path-related code into the slow path in IteratorHandleOp::Compute().
This slightly reduces the amount of work performed when an iterator is accessed (after the first access), and potentially reduces contention if concurrent steps are accessing the same iterator.
PiperOrigin-RevId: 183406221
This fix tries to address the issue raised in 16451 to provide
a better error message when graph_path is not available for the profiler.
Previously, if graph_path was not available, the process would crash
with a not very informative message and a core dump:
```
2018-01-26 01:43:29.458032: F tensorflow/core/profiler/profiler.cc:206] Non-OK-status: ReadProtoFile(Env::Default(), FLAGS_graph_path, graph.get(), false) status: Not found: ; No such file or directory
Aborted (core dumped)
```
With this fix, the error message is improved to:
```
Failed to read graph_path: Invalid argument: Cannot parse proto file.
```
and the process exits with status 1.
This fix fixes 16451.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
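The improved behavior, printing a readable error and exiting with status 1 instead of aborting with a core dump, can be sketched as follows; the function name and message text are illustrative, not the profiler's actual code:

```python
import sys

def load_graph(graph_path):
    """Read a graph definition, exiting cleanly when the file is unusable."""
    try:
        with open(graph_path, "rb") as f:
            return f.read()
    except OSError as e:
        # Report the problem and exit with status 1 rather than crashing.
        print("Failed to read graph_path: %s" % e, file=sys.stderr)
        sys.exit(1)
```

A non-zero exit code preserves scriptability (callers can detect the failure) while sparing users the core dump.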
* Added ctc_loss_dense_labels. This does the conversion of dense labels into sparse ones to be passed into the core ctc_loss function.
* Removed constant_op from the import.
* Matched ctc_loss_dense_labels with the other layers ops.
* Added ctc_loss_dense_labels to contrib.layers __init__.py file
* Added missing comma to list of ops.
* Reordered arguments for ctc_loss_dense_labels
Labels should be first then inputs for ctc_loss.
* Removed ctc_loss_dense_labels.
Replaced it with dense_to_sparse instead so that there'll be only one ctc_loss function.
* Replaced ctc_loss_dense_labels with dense_to_sparse
* Fixed dense_to_sparse. Some of the names of the variables did not match with that of the parameters.
* Updated documentation for dense_to_sparse since it can accept a tensor of any shape.
* Added test case for dense_to_sparse.
* Updated documentation. Dense to sparse accepts int tensors.
* Fixed testDenseFromConstantToSparse.
The sparse_to_dense order of arguments in the test are wrong and the expected constant should be of int64.
* Modified implementation of ndlstm_base_dynamic.
It now uses a BasicLSTMCell that has state_is_tuple=True to address deprecation. Right now it is still unknown why it was set to false in the first place.
* Imported lstm1d and lstm2d in ndlstm __init__.py.
Makes importing ndlstm modules easier.
* Added testGetBlocks in lstm2d_test.
* Removed testGetBlocks.py
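The dense-to-sparse conversion described above can be sketched in pure Python; this toy version returns (indices, values, shape) for a 2-D matrix and is not the actual contrib implementation:

```python
def dense_to_sparse(dense, ignore_value=0):
    """Convert a 2-D dense matrix to a (indices, values, shape) triple,
    dropping entries equal to ignore_value."""
    indices, values = [], []
    for i, row in enumerate(dense):
        for j, v in enumerate(row):
            if v != ignore_value:
                indices.append((i, j))
                values.append(v)
    return indices, values, (len(dense), len(dense[0]))

idx, vals, shape = dense_to_sparse([[1, 0], [0, 2]])
print(idx, vals, shape)  # [(0, 0), (1, 1)] [1, 2] (2, 2)
```

Feeding the sparse triple into a ctc_loss-style function is then just a matter of passing indices/values/shape where a SparseTensor is expected.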
Updating sparsify_gather.
PiperOrigin-RevId: 183402917
Kernel utils to support broadcast add and mul.
PiperOrigin-RevId: 183397494
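RevId 183397494 adds kernel utilities for broadcast add and mul; the shape rule behind broadcasting can be sketched as follows (illustrative, not the kernel code):

```python
def broadcast_shape(a, b):
    """NumPy-style broadcast of two shapes: align from the right,
    a dimension of 1 stretches to match the other shape."""
    out = []
    for i in range(max(len(a), len(b))):
        x = a[-1 - i] if i < len(a) else 1
        y = b[-1 - i] if i < len(b) else 1
        if x != 1 and y != 1 and x != y:
            raise ValueError("incompatible shapes: %r vs %r" % (a, b))
        out.append(max(x, y))
    return tuple(reversed(out))

print(broadcast_shape((3, 1), (4,)))  # (3, 4)
```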
Change `reduce_logsumexp` to internally use `reshape` rather than `squeeze`
since the latter requires the `axis` arg to be a Python `list`.
PiperOrigin-RevId: 183396533
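For reference, the numeric core of `reduce_logsumexp` is a stable log-sum-exp (the commit itself only changes how the reduced shape is restored, via `reshape` instead of `squeeze`); a pure-Python sketch:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(xs))) using the max-shift trick."""
    m = max(xs)
    # Subtracting the max keeps every exp() argument <= 0, avoiding overflow.
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(logsumexp([1000.0, 1000.0]))  # ~1000.6931, where naive exp() would overflow
```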
* Decoding contents of BMP file on big-endian systems
* Updated as per review comments
* Update decode_bmp_op.cc
Corrected function name
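Byte-order bugs like this typically come from reading little-endian BMP header fields with host byte order; in Python, `struct` makes the intended order explicit (a sketch of the concept, not the actual C++ kernel):

```python
import struct

def read_u32le(buf, offset=0):
    """Read a 32-bit unsigned int from buf. BMP headers store integers
    little-endian; '<' forces that interpretation regardless of whether
    the host is little- or big-endian."""
    return struct.unpack_from("<I", buf, offset)[0]

data = bytes([0x28, 0x00, 0x00, 0x00])  # a BITMAPINFOHEADER size field
print(read_u32le(data))  # 40
```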