Commit message | Author | Age
Change: 148954491
Change: 148952814
While we are here, add support for getting the cost analysis for call HLOs.
Change: 148952748
computing mean().
Change: 148951756
Updated the demo to use only local data files relevant to the demo. Also made the demo an iron snippet so folks can see the code next to the demo.
Change: 148950452
Change: 148947675
Change: 148945600
Change: 148942856
call the parent scope's custom_getter instead of overriding it. The child-most
scope passes the "true" VariableScope getter all the way through to the
parent-most getter for the very innermost variable access.
Change: 148940536
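The chaining behavior described above can be sketched in plain Python. This is an illustrative model, not the real VariableScope API: `true_getter` stands in for the innermost variable-creating getter, and `chain` is a hypothetical helper showing how a child scope's custom_getter calls the parent's chain rather than replacing it.

```python
def true_getter(name):
    # Stands in for the innermost getter that actually creates the variable.
    return "var:" + name

def chain(parent_getter, custom_getter):
    # The child scope's custom_getter receives the parent's getter chain as
    # its `getter` argument, so every level of customization still runs.
    return lambda name: custom_getter(parent_getter, name)

outer = chain(true_getter, lambda getter, name: getter("outer_saw/" + name))
inner = chain(outer, lambda getter, name: getter("inner_saw/" + name))

print(inner("w"))  # var:outer_saw/inner_saw/w
```

Both custom getters observe the access, and the "true" getter is invoked exactly once, at the end of the chain.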
Change: 148939552
Change: 148936943
Change: 148934142
Note that this very slightly changes the existing API:
* removes unused refiner() getter
* adds copy assignment operator (before there was only an implicit copy constructor)
Change: 148933743
Change: 148930597
Change: 148930269
This makes all such operations match the behavior of unsorted_segment_sum, for which completely missing segment IDs produce zero-initialized output. Here the output is either zero- or one-initialized, depending on the aggregation function. This makes such functions more easily usable for embeddings when some features are optional.
Zero/one initialization is performed only on the missing segment IDs, to avoid performance regressions.
Change: 148929039
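A minimal pure-Python sketch of these semantics. The function names mirror the TF ops, but these are toy illustrations, not the actual kernels:

```python
def unsorted_segment_sum(data, segment_ids, num_segments):
    out = [0.0] * num_segments          # missing segments keep the zero init
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out

def unsorted_segment_prod(data, segment_ids, num_segments):
    out = [1.0] * num_segments          # prod's identity: one-initialized
    for value, seg in zip(data, segment_ids):
        out[seg] *= value
    return out

# Segments 1 and 3 receive no data at all.
print(unsorted_segment_sum([1.0, 2.0, 3.0], [0, 0, 2], 4))   # [3.0, 0.0, 3.0, 0.0]
print(unsorted_segment_prod([1.0, 2.0, 3.0], [0, 0, 2], 4))  # [2.0, 1.0, 3.0, 1.0]
```

An embedding lookup over optional features can then treat an absent feature as a well-defined identity value instead of garbage.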
cuda-clang
Change: 148928329
Change: 148926382
Change: 148925596
Change: 148922978
tf.make_template(create_scope_now_=True) will now create its operations in the same name scope as its variables. This means that all variables and ops created by a template that is only applied once end up in the same scope.
Change: 148922089
values.
We used to rely on the mean being infinite, but oddly, Tensorboard gives us those errant values as strings:
[1, 256, 256, 0, 0, 0, 0, 0, "Infinity", "-Infinity", "NaN", "NaN"]
This breaks logic that checks for !== Infinity because a string is not Infinity. Something in our stack might have difficulty reading Infinity and NaN from JSON.
For now, I think we might want to resort to checking for valid values. This resolves a console error.
Change: 148920617
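One way to sketch the "check for valid values" approach in Python (the helper name is hypothetical; the row below is the example from the message): coerce the string tokens back to floats, then keep only finite values.

```python
import json
import math

def parse_numeric_row(raw_json):
    # Hypothetical helper. Some serializers emit the non-standard JSON
    # tokens Infinity/NaN as *strings*, so a strict `x !== Infinity` check
    # never fires. Coerce strings back to floats, then keep finite values.
    coerced = [float(v) if isinstance(v, str) else v
               for v in json.loads(raw_json)]
    return [v for v in coerced if math.isfinite(v)]

raw = '[1, 256, 256, 0, 0, 0, 0, 0, "Infinity", "-Infinity", "NaN", "NaN"]'
print(parse_numeric_row(raw))  # [1, 256, 256, 0, 0, 0, 0, 0]
```

Note that Python's `float()` accepts "Infinity", "-Infinity", and "NaN", so the string form round-trips cleanly before filtering.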
storage.
Record persistent tensor memory and persistent memory (originally auxiliary memory) in the cost model.
Change: 148920117
Change: 148919678
Change: 148915145
Change: 148913966
Provides a util function to create a no-op train op for cases when users want to handle optimization in the model function.
Also, allows a dict of logits for multihead. See the attached bug for more details.
Change: 148913307
(additional modules have been sealed since then).
Change: 148910557
Change: 148905332
Change: 148899498
Change: 148897757
Change: 148895969
Most of its usefulness is determined by the reparameterization type. In the
future we'll be adding a distribution Domain object / property to better
describe the domain of the values a distribution may take on.
Change: 148894335
This patch adds logic for recreating functions originally defined using @Defun and serialized in a GraphDef.
Change: 148893654
Also removing unnecessary tests. Some tests checked proper use of state and were added at a time when histogram_fixed_width created an internal Variable. Since histogram_fixed_width is now (and has long been) TITO (tensor-in, tensor-out), this is not necessary.
Change: 148891639
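A stateless fixed-width histogram is simple to sketch in pure Python. This is an illustration of the tensor-in, tensor-out shape of the op, not the actual TF kernel; out-of-range values are clamped into the edge bins, which matches the documented behavior of tf.histogram_fixed_width.

```python
def histogram_fixed_width(values, value_range, nbins):
    # Pure function: values in, counts out, no hidden Variable state.
    lo, hi = value_range
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        idx = int((v - lo) / width)
        counts[min(max(idx, 0), nbins - 1)] += 1  # clamp into edge bins
    return counts

print(histogram_fixed_width([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0], (0.0, 5.0), nbins=5))
# [2, 1, 1, 0, 2]
```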
registrations has now been changed to handle these spaces properly.
Change: 148886703
Change: 148886147
Fix crash in normal OneHot kernel for depth < 0.
Change: 148881102
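The shape of this fix is to validate the depth argument up front instead of crashing while allocating a negatively-sized output. A toy one-hot in pure Python, purely illustrative of the guard:

```python
def one_hot(indices, depth):
    # Reject negative depth early with a clear error rather than crashing
    # later on a negative output dimension.
    if depth < 0:
        raise ValueError("depth must be non-negative, got %d" % depth)
    return [[1.0 if j == i else 0.0 for j in range(depth)] for i in indices]

print(one_hot([0, 2], depth=3))  # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```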
Some changes to assist in getting the bazel build for Windows working.
In particular:
- With https://github.com/tensorflow/tensorflow/commit/8898e88d5b3014a14d269560fb2f928f68562f53
core/platform/profile_utils/* needs to be included in Windows builds
(Changes to tensorflow/core/BUILD in this change)
- Avoid building the AndroidArmV7ACpuUtilsHelper when not on Android.
Change: 148880242
tensor values. For example, this allows
--printoptions threshold=1000000
to print all the values in a large tensor instead of ellipsis in place of most of them.
Change: 148876650
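The effect is analogous to NumPy's print threshold, which is what backs tensor printing in Python. A quick demonstration using the real numpy.array2string API:

```python
import numpy as np

big = np.arange(2000)

# Default threshold: large arrays are summarized with an ellipsis.
print("..." in np.array2string(big))                      # True

# A huge threshold prints every element, in the spirit of
# --printoptions threshold=1000000 above.
print("..." in np.array2string(big, threshold=1000000))   # False
```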
public.
Change: 148875698
normalization after fully connected layers (MatMul).
Change: 148868461
At present the op message is printed only if the numeric check fails during
the op's forward computation. If the check fails during the gradient, there is no
indication of *which* op's gradient failed.
Change: 148866334
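The point of the change is attribution: the error message should carry an op-identifying label even when the failure happens in the gradient pass. A hedged sketch of such a check (the label string is hypothetical, not real TF output):

```python
import math

def check_numerics(values, message):
    # Raise with the caller-supplied label so a failure in a gradient
    # computation can be traced back to the op it belongs to.
    for v in values:
        if math.isnan(v) or math.isinf(v):
            kind = "NaN" if math.isnan(v) else "Inf"
            raise ValueError("%s : Tensor had %s values" % (message, kind))
    return values

try:
    check_numerics([1.0, float("nan")], "SomeOp (gradient)")
except ValueError as e:
    print(e)
```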
Reported by #7917
Change: 148861244
This CL adds only the MaxBytesInUse op, which collects the peak memory usage of
a device allocator. Other ops can be added similarly when demanded. For now, we
only enable MaxBytesInUse for GPU because memory statistics are unreliable for
CPU allocators.
This CL essentially merges part of Yaroslav Bulatov's work on
https://github.com/yaroslavvb/memory_probe_ops to TensorFlow.
Change: 148854571
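The bookkeeping behind a MaxBytesInUse-style query is just a high-water mark on live allocations. A toy Python model of that idea (not the actual allocator code):

```python
class TrackingAllocator:
    """Track bytes currently in use and remember the peak."""

    def __init__(self):
        self.bytes_in_use = 0
        self.max_bytes_in_use = 0

    def allocate(self, num_bytes):
        self.bytes_in_use += num_bytes
        # Update the high-water mark on every allocation.
        self.max_bytes_in_use = max(self.max_bytes_in_use, self.bytes_in_use)

    def deallocate(self, num_bytes):
        self.bytes_in_use -= num_bytes

alloc = TrackingAllocator()
alloc.allocate(100)
alloc.allocate(300)    # 400 bytes live: this is the peak
alloc.deallocate(300)
alloc.allocate(50)     # only 150 live, peak is unchanged
print(alloc.max_bytes_in_use)  # 400
```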
A step towards #7877
Change: 148850174
Change: 148849242
TF-learn creates two jobs for evaluation: one uses eval-set data and the other uses train data.
Both jobs call
Experiment._continuous_eval()
-> Experiment._maybe_export()
-> export_strategy.export()
For TF-learn, the export function above is created by
saved_model_export_utils.make_export_strategy() which calls
garbage_collect_exports() after calling Estimator.export_savedmodel()
The two jobs might trigger a race condition where they both try to delete
the same dir. Then one fails with `NotFoundError` which isn't caught.
Change: 148843701
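The usual shape of a fix for this race is to treat "already deleted by the other job" as success. A sketch using Python's standard library (the helper name is illustrative; the real code deletes through TF's filesystem layer):

```python
import os
import shutil
import tempfile

def best_effort_delete(path):
    # If two garbage collectors race on the same export dir, the loser's
    # delete finds the path already gone; swallow that case instead of
    # letting a not-found error propagate.
    try:
        shutil.rmtree(path)
    except FileNotFoundError:
        pass  # another job already removed it

d = tempfile.mkdtemp()
best_effort_delete(d)
best_effort_delete(d)  # second delete would have raised; now a no-op
print(os.path.exists(d))  # False
```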
imports pywrap_tensorflow_internal with RTLD_GLOBAL.
Fixes #6568
Change: 148843302
Change: 148842430