Commit log
|
function_test.py to verify that it doesn't raise a ValueError, as an empty list would have previously.
PiperOrigin-RevId: 175079527
|
remove superfluous literal creation methods in that file, and replace them with the existing ones in the Literal class.
Also, optionally print layout in Literal::ToString.
PiperOrigin-RevId: 175076277
|
tensor names. This is especially useful when some of the tensors are huge. Also update the usage description.
PiperOrigin-RevId: 175074541
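
For context, a minimal sketch of a name-only listing, assuming the tool builds on tf.train.NewCheckpointReader (an assumption; the truncated message does not name the underlying API):

    import tensorflow as tf

    def print_tensor_names(checkpoint_path):
        # List variable names without materializing their (possibly huge) values.
        reader = tf.train.NewCheckpointReader(checkpoint_path)
        for name in sorted(reader.get_variable_to_shape_map()):
            print(name)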
|
Also, to make the text format easier to write and unambiguous:
- Print "window={}" around the window attribute; rename the "window" sub attribute to "size";
- Print the dim_lables in logical order, instead of physical order.
PiperOrigin-RevId: 175074526
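
For illustration only (the exact syntax is inferred from HLO text dumps rather than quoted from this commit), a convolution printed in the new format might look like:

    %conv = f32[16,28,28,64] convolution(%input, %filter),
        window={size=5x5 pad=2_2x2_2}, dim_labels=b01f_01io->b01f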
|
1) Log how many batches will be enqueued; the old message was very confusing.
2) If the input pipeline has a queue runner, log a message (legacy mode) or raise an error (new mode); see the sketch below.
3) If the input pipeline has summaries, log a message (legacy mode) or raise an error (new mode).
PiperOrigin-RevId: 175073856
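
A minimal sketch of checks 2) and 3), assuming they inspect the standard graph collections (the function name and the legacy_mode flag are illustrative, not the actual API):

    import tensorflow as tf

    def check_input_pipeline(graph, legacy_mode=True):
        # Queue runners and summaries in the input pipeline are unsupported here.
        for kind, key in [("queue runner", tf.GraphKeys.QUEUE_RUNNERS),
                          ("summary", tf.GraphKeys.SUMMARIES)]:
            if graph.get_collection(key):
                msg = "Input pipeline contains a %s, which is not supported." % kind
                if legacy_mode:
                    tf.logging.warning(msg)  # legacy mode: just log
                else:
                    raise RuntimeError(msg)  # new mode: error out

    check_input_pipeline(tf.get_default_graph())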
|
Additionally, fix a bug in the handling of activity_regularizer in the tf.layers base Layer (and add a test).
PiperOrigin-RevId: 175070161
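
For reference, minimal standard usage of activity_regularizer on a tf.layers layer (ordinary API usage, not the commit's test):

    import tensorflow as tf

    # Penalize large output activations of this layer.
    layer = tf.layers.Dense(
        units=10,
        activity_regularizer=tf.contrib.layers.l2_regularizer(1e-4))
    outputs = layer(tf.placeholder(tf.float32, [None, 5]))
    # The bug concerned how the base Layer collects this loss; after the
    # layer is called, it should appear in layer.losses.
    print(layer.losses)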
|
Python code isn't indented correctly.
PiperOrigin-RevId: 175067065
|
Estimator assumes a particular config_pb2.ConfigProto that configures the underlying session. The config is either the default one or a user-supplied one. The default config has allow_soft_placement=True, the option that allows silently placing an operation on a device that has a kernel for it when the requested device doesn't.
Estimator's train(), evaluate() and predict() calls run with the underlying session configured in accordance with the ConfigProto. However, export_savedmodel runs without such a configuration. This is a problem when the ModeKeys.PREDICT graph has an op that was placed on GPU but doesn't have a GPU kernel: the graph works for predict(), but when export_savedmodel() tries to restore the corresponding variable, it fails with a "no kernel for the op" error. I show this in a test.
To fix this issue, I am passing the ConfigProto to the session inside export_savedmodel. An alternative, conservative but ugly, fix would be to pass a new ConfigProto with only allow_soft_placement=Estimator._session_config.allow_soft_placement set. Passing the whole ConfigProto feels like the right thing to do. Here's what else is in ConfigProto: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/core/protobuf/config.proto#L280.
I verified the fix by running an internal pipeline. Here's the allow_soft_placement logic: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/core/common_runtime/placer.cc#L322.
PiperOrigin-RevId: 175063803
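
For reference, a runnable sketch of how a user-supplied ConfigProto reaches the Estimator, and the export call that now honors it (the toy model_fn and the paths are illustrative, not the commit's test):

    import numpy as np
    import tensorflow as tf

    def model_fn(features, labels, mode):
        # Trivial stand-in model; the model details are irrelevant to the fix.
        logits = tf.layers.dense(features["x"], 2)
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions={"logits": logits})
        loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # The user-supplied ConfigProto that export_savedmodel now also uses.
    config = tf.estimator.RunConfig(
        session_config=tf.ConfigProto(allow_soft_placement=True))
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)

    def input_fn():
        features = {"x": tf.constant(np.ones((4, 3), np.float32))}
        return features, tf.constant([0, 1, 0, 1])

    estimator.train(input_fn, steps=1)  # create a checkpoint to export

    def serving_input_receiver_fn():
        x = tf.placeholder(tf.float32, [None, 3], name="x")
        return tf.estimator.export.ServingInputReceiver({"x": x}, {"x": x})

    estimator.export_savedmodel("/tmp/export", serving_input_receiver_fn)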
|
PiperOrigin-RevId: 175063558
|
PiperOrigin-RevId: 175061854
|
PiperOrigin-RevId: 175057863
|
Neutral-to-positive on all benchmarks. Also reduces overhead of should_record.
PiperOrigin-RevId: 175057104
|
possible.
PiperOrigin-RevId: 175055770
|
PiperOrigin-RevId: 175053592
|
This is achieved by accessing the AttrValue directly and using the
existing Python code instead of dispatching to the specific C API attr
getter for every type. I started going down the dispatch path, but it
turns out to be a lot of code (spread across Python, C, and SWIG), and
this is likely good enough from a performance standpoint. We can
optimize in the future if necessary.
In addition, changes the colocation group logic to use _set_attr() and
get_attr(), and makes _set_attr() work with the C API disabled. This
allows the colocation tests to pass with both the C API enabled and
disabled. Without these additional changes, the "_class" attribute
would be set on the C NodeDef, and then it would try to retrieve it
from the Python NodeDef.
PiperOrigin-RevId: 175050473
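
A rough sketch of the AttrValue-based approach (simplified; the real code lives in Operation.get_attr):

    from tensorflow.core.framework import node_def_pb2

    def get_attr(node_def, name):
        # Read the AttrValue proto once and interpret its oneof in Python,
        # instead of dispatching to a per-type C API getter.
        attr = node_def.attr[name]
        field = attr.WhichOneof("value")  # e.g. "s", "i", "f", "b", "list"
        return getattr(attr, field)

    nd = node_def_pb2.NodeDef()
    nd.attr["N"].i = 3
    print(get_attr(nd, "N"))  # -> 3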
|
PiperOrigin-RevId: 175049981
|
filter_irregular_batches.
PiperOrigin-RevId: 175045241
|
PiperOrigin-RevId: 175042091
|
Multiple branches of a select statement should not be able to be true at the same time (unless one rule is more 'specific' than another).
PiperOrigin-RevId: 175040618
|
PiperOrigin-RevId: 175037663
|
Right now we're always doing an 8x8 tiling on the matrix. This can probably be
tuned further.
There are some other follow-up items that I did not want to put in this already
large CL:
- Eigen has some smarts to avoid issuing unaligned vector loads and stores
which the current CL does not. We need to investigate if being smart about
alignment is worth it.
- Prevent LLVM from vectorizing the epilogue. In fact we should disable loop
vectorization for all the loops we've explicitly vectorized.
- Cache the kernels by their shape to reduce code size impact.
- Add aliasing information to the loads and stores emitted by the
PacketSupportLibrary. This is probably not super critical since we've
already vectorized the code, but we should do this for completeness.
PiperOrigin-RevId: 175036991
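
The CL itself emits LLVM IR, but the tiling strategy can be illustrated with a NumPy sketch (TILE and the function below are illustrative only):

    import numpy as np

    TILE = 8  # the 8x8 tile size mentioned above

    def tiled_matmul(a, b):
        m, k = a.shape
        _, n = b.shape
        c = np.zeros((m, n), a.dtype)
        for i in range(0, m, TILE):
            for j in range(0, n, TILE):
                for p in range(0, k, TILE):
                    # Each small block product corresponds to the vectorized
                    # kernel; NumPy slicing absorbs the ragged epilogue that
                    # the emitted code handles separately.
                    c[i:i+TILE, j:j+TILE] += a[i:i+TILE, p:p+TILE].dot(
                        b[p:p+TILE, j:j+TILE])
        return c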
|
PiperOrigin-RevId: 175036743
|
PiperOrigin-RevId: 175036413
|
PiperOrigin-RevId: 175036186
|
PiperOrigin-RevId: 175030602
|
Previously we'd only print scalars. But if you have a constant with
just a few values, what the heck, show the whole thing.
PiperOrigin-RevId: 175030210
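
The new behavior amounts to something like this toy sketch (the cutoff value is an assumption):

    MAX_ELEMENTS_TO_PRINT = 10  # assumed cutoff, not the actual constant

    def render_constant(values):
        # Show the full literal when it is small; elide it otherwise.
        if len(values) <= MAX_ELEMENTS_TO_PRINT:
            return "{" + ", ".join(str(v) for v in values) + "}"
        return "{...}"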
|
PiperOrigin-RevId: 175028981
|
const HloModule*.
PiperOrigin-RevId: 175024608
|
PiperOrigin-RevId: 175023039
|
PiperOrigin-RevId: 175004323
|
PiperOrigin-RevId: 174999937
|
PiperOrigin-RevId: 174983466
|
PiperOrigin-RevId: 174979678
|
PiperOrigin-RevId: 174964560
|
PiperOrigin-RevId: 174962378
|
PiperOrigin-RevId: 174961746
|
The new version of nsync has a BUILD file that detects
x86_32 (which bazel currently calls piii).
PiperOrigin-RevId: 174959924
|
mode='hard'.
Also adds tests to make sure the attention probabilities are 0 or 1 when mode='hard'.
PiperOrigin-RevId: 174956465
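
A sketch of the property the new tests assert (the helper below is illustrative, not the actual test code):

    import numpy as np

    def assert_hard_attention(alignments):
        # With mode='hard', every attention probability must be exactly 0 or 1.
        alignments = np.asarray(alignments)
        assert np.all((alignments == 0.0) | (alignments == 1.0))

    assert_hard_attention([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])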
|
PiperOrigin-RevId: 174948909
|
PiperOrigin-RevId: 174947453
|
the Assign op; otherwise the min/max variables never get updated.
PiperOrigin-RevId: 174947421
|
Adds a new remote repository for the mobilenet tflite models necessary
for running the TF Lite demo app.
PiperOrigin-RevId: 174946867
|
of quantized base model.
Also modify retrain_test to test creation of model info for a fixed-point mobilenet.
PiperOrigin-RevId: 174946745
|
PiperOrigin-RevId: 174944857
|
PiperOrigin-RevId: 174941651
|
PiperOrigin-RevId: 174939009
|
PiperOrigin-RevId: 174938299
|
PiperOrigin-RevId: 174937860
|
PiperOrigin-RevId: 174937793
|
PiperOrigin-RevId: 174937290