| Commit message | Author | Age |
block sizes of NCHW data layout on GPU with a loop kernel.
PiperOrigin-RevId: 172520132
aligned with other parts of TF. Many users are not aware of the impact of a non-random seed; for example, it may lead to training on only a small fraction of the training data due to preemptions.
We're changing the default behavior since we consider this a bug fix.
PiperOrigin-RevId: 172519268
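A minimal illustration of the problem described above, using Python's `random` module as a stand-in for a dataset shuffle (the function name and setup are hypothetical, not TF's implementation):

```python
import random

def shuffled_indices(n, seed):
    # Hypothetical stand-in for a dataset shuffle: a fixed seed yields
    # the same example order on every (re)start of the job.
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    return idx

# With a fixed default seed, a job that is preempted and restarted
# replays exactly the same prefix of examples, so only a small fraction
# of the data may ever be seen across restarts.
first_run = shuffled_indices(10, seed=42)[:3]
restarted = shuffled_indices(10, seed=42)[:3]
assert first_run == restarted  # identical prefix after every preemption
```

With a non-deterministic seed (the new default), each restart would see a fresh order.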
will ignore threads that remain running when tearing down infrastructure after successfully completing training, instead of raising a RuntimeError.
PiperOrigin-RevId: 172518466
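A sketch of the teardown behavior described above, using a hypothetical helper (`stop_coordinated_threads` and the grace period are illustrative names, not the TF API):

```python
import logging
import threading
import time

def stop_coordinated_threads(threads, grace_secs=0.5):
    # Hypothetical teardown helper: briefly join each thread, and ignore
    # (but report) any that are still running instead of raising
    # RuntimeError, so a successful training run can shut down cleanly.
    stragglers = []
    for t in threads:
        t.join(timeout=grace_secs)
        if t.is_alive():
            stragglers.append(t.name)
            logging.warning("thread %s still running at teardown; ignoring", t.name)
    return stragglers

done = threading.Thread(target=lambda: None, name="done")
slow = threading.Thread(target=lambda: time.sleep(5), name="slow", daemon=True)
done.start()
slow.start()
leftover = stop_coordinated_threads([done, slow])
# leftover names the still-running thread; teardown proceeds anyway
```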
PiperOrigin-RevId: 172517507
PiperOrigin-RevId: 172517300
PiperOrigin-RevId: 172514789
PiperOrigin-RevId: 172512636
PiperOrigin-RevId: 172511553
PiperOrigin-RevId: 172510229
PiperOrigin-RevId: 172508350
PiperOrigin-RevId: 172493077
correctly. Fixes https://github.com/tensorflow/serving/issues/615
PiperOrigin-RevId: 172489253
PiperOrigin-RevId: 172481341
PiperOrigin-RevId: 172480793
PiperOrigin-RevId: 172477878
PiperOrigin-RevId: 172477381
1. Begin the gather tree at the maximum sequence length across all beams (within the batch).
2. Take a second pass starting from t=0 and mask out any beam ids past the *first* beam occurrence of end_token.
3. Update the final sequence lengths to include the first <eos> token in the beam.
4. Update dynamic_decode to allow the BeamSearchDecoder to keep track of its own "finished" states, as the shuffling in the decoder confused the tracking mechanism in dynamic_decode. This fixes a bug where beam search decoding stops early.
5. Cap sequence length used in GatherTree to min(max_time, max_seq_len(b)) to avoid accessing memory outside the dimensions of input matrices.
Bugs caught by @bdaskalov on GitHub and by Pavel Sountsov. Proper solution and analysis thanks to Rui Zhao. Thanks all!
Fixes #13536.
PiperOrigin-RevId: 172471462
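Step 2 above can be sketched in plain Python (an illustration of the masking idea, not the TF GatherTree kernel; `pad_token` is an assumed placeholder value):

```python
def mask_after_first_eos(token_ids, end_token, pad_token=0):
    # Keep tokens up to and including the *first* end_token in the beam,
    # and mask out any beam ids that occur after it.
    out, finished = [], False
    for tok in token_ids:
        if finished:
            out.append(pad_token)
        else:
            out.append(tok)
            if tok == end_token:
                finished = True
    return out

# With end_token=9, everything after the first 9 is masked:
# [4, 9, 7, 9, 5] -> [4, 9, 0, 0, 0]
```

Note that the first end_token itself is kept, matching step 3's rule that the final sequence length includes the first `<eos>` token.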
override_from_dict
Reasons to prefer the new function name:
- `set` sounds like it might return the builtin `set`.
- There is no `map` datatype in Python; `map` is a builtin, which makes the implied API a little confusing.
PiperOrigin-RevId: 172471191
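A minimal sketch of what a method with this name might look like on a hyperparameter container (a simplified stand-in; the real `tf.contrib.training.HParams` does more validation):

```python
class HParams(object):
    # Minimal hyperparameter container used only to illustrate the
    # renamed method.
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def override_from_dict(self, values):
        # Only hyperparameters that already exist may be overridden.
        for name, value in values.items():
            if name not in self.__dict__:
                raise KeyError("unknown hyperparameter: %s" % name)
            setattr(self, name, value)
        return self

hp = HParams(learning_rate=0.1, batch_size=32)
hp.override_from_dict({"learning_rate": 0.01})
# hp.learning_rate is now 0.01; the name says exactly what happens,
# with no confusion about `set` or `map`.
```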
PiperOrigin-RevId: 172422580
This is going to be useful for the tensor database I'm working on.
PiperOrigin-RevId: 172412142
PiperOrigin-RevId: 172408922
PiperOrigin-RevId: 172407754
tf.Examples where IDs are not materialized (e.g. 'image/object/class/text' present but 'image/object/class/label' not).
PiperOrigin-RevId: 172406978
At --vmodule=gpu_compiler=2, we run ptxas over our generated PTX to
validate it and to dump out stats such as the number of registers used.
Previously, this would fail if your GPU was anything other than sm_35
(i.e. K20/40/80), because we didn't pass cc_major/cc_minor down to
ptxas. Moreover, if ptxas failed to compile your program, we'd
LOG(FATAL), which is probably not what you want.
This change fixes both of those issues. Tested on my local GTX 1080.
PiperOrigin-RevId: 172403304
PiperOrigin-RevId: 172397124
PiperOrigin-RevId: 172389494
This is similar to the return_tensors option. return_tensors cannot be
used to fetch nodes with no outputs, so return_nodes is necessary.
In addition, this change also refactors the ImportGraphDef signature
to return all optional return values in a single struct. This is to
keep the ImportGraphDef signature from getting too long, and also
makes the call sites simpler.
PiperOrigin-RevId: 172388270
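The "single results struct" design choice can be sketched in Python (a hypothetical analogue; the real ImportGraphDef is C++, and the splitting logic below is illustrative only):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImportResults:
    # All optional return values live in one object, so the import
    # function's signature stays short as new options are added and
    # call sites stay simple.
    return_tensors: List[str]
    return_nodes: List[str]

def import_graph_def(graph_def, return_elements=()):
    # Sketch only: names with an output index ("op:0") are tensor
    # fetches; bare names are node fetches. Nodes with no outputs can
    # only be fetched as nodes, which is why return_nodes is needed.
    tensors = [n for n in return_elements if ":" in n]
    nodes = [n for n in return_elements if ":" not in n]
    return ImportResults(return_tensors=tensors, return_nodes=nodes)

res = import_graph_def(None, return_elements=["add:0", "init"])
# res.return_nodes fetches "init" even though it has no outputs
```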
If you ODR-use nullopt, you currently get a linker error. Oops.
PiperOrigin-RevId: 172387553
use_resource is not set and Eager mode is enabled.
PiperOrigin-RevId: 172380659
PiperOrigin-RevId: 172379338
PiperOrigin-RevId: 172376836
PiperOrigin-RevId: 172374244
__array__ fixes use-cases like:
    import tensorflow as tf
    import pandas as pd
    series = pd.Series(['a', 'b', 'c'])
    tf.constant(series)
    df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
    tf.data.Dataset.from_tensor_slices(dict(df))
PiperOrigin-RevId: 172372593
client process hangs waiting for the main training loop to exit.
PiperOrigin-RevId: 172371951
Add support for reading Varint64 to InputBuffer.
PiperOrigin-RevId: 172371104
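The varint64 wire format (7 payload bits per byte, least-significant group first, high bit as continuation flag, as in protocol buffers) can be sketched in Python; this is an illustration of the decoding logic, not the C++ InputBuffer implementation:

```python
def read_varint64(buf, pos=0):
    # Decode one base-128 varint from buf starting at pos; return the
    # decoded value and the position just past it.
    result, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift  # low 7 bits carry the payload
        if not (b & 0x80):             # high bit clear: last byte
            return result, pos
        shift += 7
        if shift > 63:
            raise ValueError("varint64 too long")

# 300 encodes as b"\xac\x02": 0x2C | (0x02 << 7) == 300
value, next_pos = read_varint64(b"\xac\x02")
```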
PiperOrigin-RevId: 172366972
PiperOrigin-RevId: 172366027
Checks whether the given shape is compatible with the Eager tensor's
shape, and raises an error if it is not.
PiperOrigin-RevId: 172363347
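A sketch of such a compatibility check, treating `None` as a wildcard dimension (the helper name and wildcard convention are assumptions for illustration, not the exact TF code):

```python
def assert_shape_compatible(shape, tensor_shape):
    # `None` matches any dimension size; every other dimension must
    # match the tensor's shape exactly, including the rank.
    ok = (len(shape) == len(tensor_shape) and
          all(d is None or d == t for d, t in zip(shape, tensor_shape)))
    if not ok:
        raise ValueError("shape %r is not compatible with tensor shape %r"
                         % (shape, tensor_shape))

assert_shape_compatible([None, 3], (2, 3))   # compatible: no error
# assert_shape_compatible([4, 3], (2, 3))   # would raise ValueError
```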
PiperOrigin-RevId: 172363016
PiperOrigin-RevId: 172353443
PiperOrigin-RevId: 172352767
PiperOrigin-RevId: 172350038
* Keep Switch and Merge nodes in separate clusters to avoid creating irreducible graphs;
* Merge Switch nodes with common predicates;
* Add support for the if-then structure;
* Squash trivial Switch->Merge groups;
* Merge newly Merge-free nodes with Switch- and Merge-free inputs;
* Check whether a node is a Merge node before merging it into a common Merge node;
* Return an error if not all Switches have been replaced;
* Add a test for tf.case;
PiperOrigin-RevId: 172348729
Due to a mix-up between NumPy's default array element type for a Python `int` on Windows and Linux, a tf.py_func() in `Dataset.from_generator()` would appear to return the wrong type on Windows (np.int32 instead of np.int64).
All code using `Dataset.from_generator()` on Windows was previously broken. This change fixes both `tf.data.Dataset.from_generator()` and `tf.contrib.data.Dataset.from_generator()`. It also enables test coverage for this method on Windows, which should prevent future breakage.
PiperOrigin-RevId: 172346533
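The platform mix-up can be illustrated with NumPy directly: `np.array(1)` defaults to int32 on Windows and int64 on Linux, so a generator yielding plain Python ints could silently produce the wrong dtype. Casting explicitly, as the fix effectively ensures, makes the result platform-independent (`to_int64` is an illustrative helper, not the TF code):

```python
import numpy as np

def to_int64(value):
    # Force a platform-independent element type instead of relying on
    # NumPy's platform-dependent default for Python ints.
    return np.asarray(value, dtype=np.int64)

sample = to_int64([1, 2, 3])
# sample.dtype is int64 on every platform, Windows included
```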
The intention was always for the user to only depend on
xla_jit_compiled_cpu_function, and not need dependencies on internal targets.
PiperOrigin-RevId: 172346257
and fisher_factors.py in the form of a function, set_global_constants.
The old way of manually setting these constants, by importing the specific modules and accessing them directly, should still work, but the new method is preferred.
PiperOrigin-RevId: 172345996
PiperOrigin-RevId: 172342933
PiperOrigin-RevId: 172340173
Currently, you cannot use ClusterSpec propagation in conjunction with XLA devices, as the RenamedDevice wraps the underlying device and breaks the dynamic cast.
PiperOrigin-RevId: 172339725
PiperOrigin-RevId: 172337312