| Commit message | Author | Age |
Fixes #3277.
Changes the binary ops used to compute gradients for several operations so that they support zero-rank (scalar) tensors.
Change: 127255053
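The gradient code has to accept a rank-0 operand on either side of a binary op. A toy pure-Python sketch of that broadcast rule (a hypothetical `broadcast_mul` helper for illustration, not the TensorFlow code):

```python
def broadcast_mul(x, y):
    """Sketch of the rank-0 broadcast rule: a scalar (rank-0) operand
    is applied elementwise against a rank-1 operand."""
    x_rank = 0 if isinstance(x, (int, float)) else 1
    y_rank = 0 if isinstance(y, (int, float)) else 1
    if x_rank == 0 and y_rank == 1:
        return [x * v for v in y]          # scalar * vector
    if x_rank == 1 and y_rank == 0:
        return [v * y for v in x]          # vector * scalar
    if x_rank == 0 and y_rank == 0:
        return x * y                       # scalar * scalar
    return [a * b for a, b in zip(x, y)]   # elementwise

assert broadcast_mul(3.0, [1.0, 2.0]) == [3.0, 6.0]
assert broadcast_mul(2.0, 4.0) == 8.0
```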
Change: 127254774
Change: 127254286
Change: 127253427
Change: 127245277
improvements for fp16
Change: 127233960
Change: 127225742
On the GPU, tf.multinomial uses Eigen. On empty input, this triggers a bug in
Eigen causing a crash. Fix this by not executing the kernel in the empty
output case.
Also fix shape validation assertions to handle more corner cases (hopefully all
of them).
Change: 127223716
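The shape of the fix can be sketched in pure Python (a hypothetical `sample_multinomial` helper, not the actual GPU kernel): guard the empty-output case before the sampling "kernel" ever runs.

```python
import math
import random

def sample_multinomial(logits, num_samples, rng=random):
    """Hypothetical sketch: draw num_samples class indices from logits."""
    # Guard the empty case up front so the sampling code below never
    # sees an empty input (the Eigen path crashed on it).
    if num_samples == 0 or len(logits) == 0:
        return []
    weights = [math.exp(l) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=num_samples)

assert sample_multinomial([], 0) == []          # empty input: no kernel call
assert len(sample_multinomial([0.0, 1.0, 2.0], 5)) == 5
```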
Change: 127223683
Change: 127216353
Without this, collection of detailed execution timing information
would often default to being on all the time, resulting in
significant performance and memory-allocation overhead.
Reduces the number of allocations for one benchmark from 120M to 101M.
Co-discovered by sanjay@ and me.
Change: 127212900
Change: 127212113
This enables multi-threading of tf.py_func operations in cases where the wrapped Python function releases the GIL.
The following example now takes 3s instead of 6s:

    def foo():
      time.sleep(3)
      return 1

    def create_py_func():
      return tf.py_func(foo, [], [tf.int64])

    a = create_py_func()
    b = create_py_func()
    session.run([a, b])

Change: 127210255
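The speedup comes from `time.sleep` releasing the GIL while it waits. A plain-Python sketch of the same effect, with a `ThreadPoolExecutor` standing in for TensorFlow's op threads (no TensorFlow required):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def foo():
    time.sleep(1.0)   # releases the GIL while sleeping
    return 1

start = time.time()
# Two calls on two threads: the sleeps overlap because the GIL is released.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda _: foo(), range(2)))
elapsed = time.time() - start

assert results == [1, 1]
assert elapsed < 1.8   # ~1s total, not 2s, since the sleeps run concurrently
```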
Change InferenceContext:
- allow Dim(s, negative_idx)
- add a helper to get a dimension value from a scalar input tensor
- allow SubShape(s, start, end, &out); support negative indexes, and indexes > rank, in SubShape (to match pythonic indexing)
Change: 127206793
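"Pythonic indexing" here means the semantics of Python's own slice: negative indexes count from the end, and indexes beyond the rank clamp rather than error. A hypothetical `sub_shape` helper makes the intended behavior concrete:

```python
def sub_shape(dims, start, end):
    """Hypothetical sketch of SubShape semantics, expressed via Python
    slicing: negative indexes count from the end; indexes past the rank
    clamp instead of raising an error."""
    return dims[start:end]

shape = [2, 3, 5, 7]
assert sub_shape(shape, 1, 3) == [3, 5]
assert sub_shape(shape, -2, 4) == [5, 7]          # negative start index
assert sub_shape(shape, 0, 100) == [2, 3, 5, 7]   # end > rank clamps
```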
Change: 127202712
Change: 127198546
TruncatedNormalOp. It takes a matrix of batched parameters (mean, stdev, minval, maxval).
Once the GPU functor is added, we can eventually use this op to implement tf.truncated_normal with optional minval and maxval and support for batches, and remove the existing TruncatedNormalOp.
Change: 127196322
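The real op takes a row of (mean, stdev, minval, maxval) per batch entry; the per-row sampling rule can be sketched with naive rejection sampling (my illustration, not the actual kernel, which uses more efficient methods):

```python
import random

def truncated_normal(mean, stdev, minval, maxval, rng=random):
    """Hypothetical sketch of one parameterized truncated-normal draw:
    redraw from N(mean, stdev) until the sample lands in [minval, maxval]."""
    while True:
        x = rng.gauss(mean, stdev)
        if minval <= x <= maxval:
            return x

# Every sample is guaranteed to respect the truncation bounds.
samples = [truncated_normal(0.0, 1.0, -2.0, 2.0) for _ in range(100)]
assert all(-2.0 <= s <= 2.0 for s in samples)
```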
Change: 127155464
tensorflow/contrib/session_bundle, including:
- Type and error checking when unpacking Any messages
- Increased session_bundle test coverage
Change: 127152552
Change: 127146715
Change: 127144423
Change: 127144397
Change: 127141730
Change: 127141513
Change: 127139520
Also add checks that trigger when users manually edit the checkpoint file, to help them debug potential mistakes.
Change: 127138923
used for multiple charts anymore.
Change: 127135452
Change: 127132058
otherwise default to label-location-based indexing.
This is necessary because the DataFeeder uses the shape of the objects to generate indices rather than the actual index. Mixing the two is often benign, but selections like boolean masks or head/tail slices will cause failures.
See:
http://pandas.pydata.org/pandas-docs/version/0.17.0/indexing.html#indexing-integer
http://pandas.pydata.org/pandas-docs/version/0.17.0/indexing.html#indexing-label
Change: 127130473
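The hazard is easiest to see after a selection, when the surviving labels are no longer contiguous. A pure-Python sketch standing in for pandas' positional (.iloc-style) vs label (.loc-style) indexing (names are mine, for illustration):

```python
labels = [0, 1, 2, 3, 4]
values = ['a', 'b', 'c', 'd', 'e']
mask   = [False, True, False, True, True]   # e.g. a boolean selection

# After the mask, the surviving labels are non-contiguous: [1, 3, 4].
kept = [(l, v) for l, v, m in zip(labels, values, mask) if m]

by_position = kept[0][1]     # positional index 0 -> 'b', which has label 1
by_label = dict(kept)[1]     # label 1 -> 'b'

assert by_position == 'b'
assert by_label == 'b'
# Positional index 0 exists, but label 0 does not -- mixing the two
# schemes after a selection is exactly what causes the failures.
assert 0 not in dict(kept)
```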
to tensorflow::serving.
Update all Python users of tensorflow_serving/session_bundle/manifest.proto to use the equivalent tensorflow/contrib/session_bundle/manifest.proto since there are otherwise namespace collisions.
Change: 127129383
Change: 127127852
line chart on its own.
Change: 127127152
Change: 127126726
couple TODOs.
Change: 127126685
Change: 127126500
Includes port of internal Stream Executor support for cuDNN normalization.
Change: 127123966
TF_AVGPOOL_USE_CUDNN. The default is false for now.
Change: 127123594
Similar to tf.Print, but with the ability to print SparseTensors and TensorArrays. Defaults to printing the passed-through tensor and the shape, dtype, and name of all Tensors.
Change: 127122527
Change: 127118204
a nice error rather than silently generating invalid Python code.
Fixes:
- On Linux, a fatal log that should be triggered in C++ seems to be ignored,
causing the tf.load_op_library call to fail and exit.
We fix this by propagating the status from op registration and raising an
exception from Python rather than printing a fatal log.
Why the fatal log is ignored on Linux is unclear; my theory is some sort
of SWIG issue, but online research turned up nothing.
- Op registrations that fail are still added to the OpDef registration.
This is only an issue when the fatal log is ignored, but we fix it
anyway.
Change: 127114085
On Linux, we have seen the whl files built by the TensorFlow PIP builds change their names for unclear reasons, forcing updates to README.md and leading to install issues like these:
https://github.com/tensorflow/tensorflow/issues/1097
http://stackoverflow.com/questions/33622613/tensorflow-installation-error-not-a-supported-wheel-on-this-platform
as well as build issues like:
http://ci.tensorflow.org/view/Nightly/job/nightly-distributed-build-server/132/console
This CL aims to freeze the whl file naming pattern on Linux to prevent such issues from happening in the future.
Change: 127111314
Change: 127109341
disable=g-bad-file-header".
Change: 127106850
implementation. The new implementation discards less data and scales better with the number of classes. However, it requires knowing the class distribution of the data.
Change: 127104811
Change: 127103847
Change: 127101926
softmax_cross_entropy_loss with label_smoothing.
The sigmoid cross entropy with label smoothing was broken, and worse, the tests for both it and the softmax cross entropy with label smoothing were broken. I've fixed both issues here and added comments walking through the two examples in the tests, so that broken tests are not inadvertently checked in again.
Change: 127100213
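For reference, the standard label-smoothing transform (my sketch of the usual formula, not necessarily the exact code under test): each one-hot target y becomes y * (1 - smoothing) + smoothing / num_classes, which keeps the total probability mass at 1.

```python
def smooth_labels(onehot, smoothing):
    """Standard label smoothing: pull each target toward the uniform
    distribution while preserving total probability mass of 1."""
    n = len(onehot)
    return [y * (1.0 - smoothing) + smoothing / n for y in onehot]

smoothed = smooth_labels([1.0, 0.0, 0.0, 0.0], 0.1)
expected = [0.925, 0.025, 0.025, 0.025]
assert all(abs(a - b) < 1e-9 for a, b in zip(smoothed, expected))
assert abs(sum(smoothed) - 1.0) < 1e-9   # mass is preserved
```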
Change: 127093435
JSON format.
If a dataset is private, it will only show up if it is requested via URL; otherwise it will not appear in the dropdown.
Change: 127090337
Change: 127084359