Commit log

* Change: 127908886
* Change: 127908182
* Change: 127906463
* Change: 127903711
* Change: 127901773
* Change: 127900496
* Add shape_inference::InferenceContext::ReplaceDim.
  Change: 127893881
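shape_inference::InferenceContext is part of TensorFlow's C++ core, so a self-contained demo of the real API isn't practical here. As a rough illustration only, the idea behind ReplaceDim can be mimicked on a plain Python shape tuple, with None standing for an unknown dimension (the helper name and shape encoding are mine, not TensorFlow's):

```python
# Rough stand-in for the idea of InferenceContext::ReplaceDim: produce a
# new shape equal to the input except at one dimension index.
# None models an unknown dimension, as in TensorFlow shape inference.
def replace_dim(shape, dim_index, new_dim):
    dims = list(shape)
    dims[dim_index] = new_dim  # negative indices count from the end
    return tuple(dims)

print(replace_dim((None, 3, 4), 1, 7))   # (None, 7, 4)
print(replace_dim((None, 3, 4), -1, 9))  # (None, 3, 9)
```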
* Change: 127893189
* Change: 127889301
* Change: 127888554
* … plain tuples.
  Change: 127887552
* Fix bug in the computation of #outputs in shape_inference_testutil.cc.
  Change: 127885874
* Change: 127885597
* … This also allows dist.sample() to return a single sample per batched distribution, avoiding the need to squeeze. Also a fix to Categorical to support broadcasting in log_prob.
  Change: 127883481
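The Categorical change above is about log_prob accepting sample values that broadcast against the batch of distributions. As a hedged sketch only (NumPy, not the TensorFlow implementation; the function name and argument layout are assumptions), broadcasting a per-batch or scalar class index might look like:

```python
import numpy as np

# Sketch of a batched categorical log_prob: logits has shape
# [batch, num_classes]; k holds class indices and may be a scalar or
# any shape broadcastable to [batch].
def categorical_log_prob(logits, k):
    k = np.broadcast_to(np.asarray(k), logits.shape[:-1])
    log_z = np.log(np.exp(logits).sum(axis=-1))              # per-row normalizer
    chosen = np.take_along_axis(logits, k[..., None], axis=-1)[..., 0]
    return chosen - log_z

logits = np.log(np.array([[0.2, 0.8], [0.5, 0.5]]))          # two distributions
print(categorical_log_prob(logits, np.array([1, 0])))        # per-batch indices
print(categorical_log_prob(logits, 1))                       # scalar broadcasts
```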
* Change: 127882032
* Change: 127880545
* Change: 127878217
* Change: 127874683
* … no checkpoint exists. Also add an optional timeout parameter.
  Change: 127874450
* This CL adds the "TensorSummary" op, which takes any input tensor and outputs a string tensor containing a Summary protocol buffer with the serialized input tensor. The new op is accessible in Python as "tf.summary.tensor_summary". In the future, the tf.summary module will hold other summary ops, such as tf.summary.histogram and tf.summary.scalar.
  Change: 127874174
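TensorFlow itself is not assumed below; this is only a loose sketch of the idea behind a tensor summary: an arbitrary tensor serialized, with its metadata, into a tagged string record that a logging pipeline can carry. JSON stands in for the Summary protocol buffer, and the function name merely echoes the op informally:

```python
import json
import numpy as np

# Loose stand-in for a tensor summary: serialize any tensor, plus its
# metadata, into a single string record.
def tensor_summary(tag, tensor):
    t = np.asarray(tensor)
    record = {
        "tag": tag,
        "dtype": str(t.dtype),
        "shape": list(t.shape),
        "values": t.tolist(),  # nested-list payload mirroring the shape
    }
    return json.dumps(record)

print(tensor_summary("weights", [[1.0, 2.0], [3.0, 4.0]]))
```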
* … Changes are:
  - Modifies the update equation to include the aspect of distribution; supports logistic loss, with other loss functions in a later CL.
  - Speeds up BM_SDCA_LARGE_SPARSE by 20x-34x.
  - Refactors the interface to:
    - remove the need for the sparse-tensor wrappers;
    - make the code efficient for dense features;
    - remove the need for sparse_merge;
    - allow sparse features with or without weights.
  Change: 127870447
* … unittests. In this CL:
  - a few tests were failing at a rate > 50% when no seed was used; created issue https://buganizer.corp.google.com/issues/30149644 to track fixing these.
  - random_shuffle_queue with a shared_name always fails if a user provides a graph-level random seed, because the op-level seed returned from random_seed.get_seed changes per graph operation. The fix for this is also included in this CL.
  - added a test for random_seed, since the logic there has become a bit complicated.
  Change: 127867374
* … construct a Context upon a request to schedule a callback, and set the context when calling the callback in a different thread.
  Change: 127865144
* Change: 127865082
* … tensorflow/contrib/session_bundle.
  Change: 127862721
* Change: 127861326
* Change: 127860548
* … std::function. This simplifies the calling code and reduces the amount of templated code generated.
  Change: 127860029
* … mobile build.
  Change: 127859525
* …
  1. When adding new ops, ensure they are added to the graph passed in the constructor.
  2. Allow logdir to be None.
  Change: 127856413
* Change: 127855634
* Change: 127853412
* Change: 127846532
* Change: 127840021
* … monitors.py docstrings).
  Change: 127838930
* … cleanly without mutating any file system state.
  Change: 127834333
* … understand.
  Change: 127784684
* Change: 127783883
* Change: 127783635
* Change: 127782110
* … Also changed the Python inference function for MatrixSolve and MatrixSolveLs to use with_rank instead of with_rank_at_least for rhs.
  Change: 127781866
* Change: 127779854
* Change: 127777937
* Change: 127776763
* … This stops the estimator when the relative change in loss between iterations goes below a given threshold.
  Change: 127775603
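A minimal sketch of such a stopping rule (the names and the small-denominator guard are mine; the actual monitor lives inside TF's estimator framework):

```python
# Stop when the relative change in loss between iterations drops below a
# threshold. The 1e-12 denominator floor is an assumption, added only to
# avoid dividing by zero when the loss reaches exactly 0.
def should_stop(prev_loss, loss, tol=1e-4):
    if prev_loss is None:
        return False  # no previous iteration to compare against
    denom = max(abs(prev_loss), 1e-12)
    return abs(prev_loss - loss) / denom < tol

prev = None
for step, loss in enumerate([10.0, 5.0, 4.9, 4.8999]):
    if should_stop(prev, loss):
        print("stopping at step", step)  # relative change ~2e-5 < 1e-4
        break
    prev = loss
```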
* Change: 127774472
* … python SWIG wrapper to go along with it.
  Change: 127770835
* Change: 127766679
* …
  - if a variable does not have a gradient, its gradient is None as returned by tf.gradients
  - this would cause an error in _add_scaled_noise_to_gradients (calling gradient.get_shape())
  Change: 127765890
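The guard can be sketched outside TensorFlow, with NumPy arrays standing in for gradient tensors. The function name echoes the private helper mentioned above, but this is not its real implementation:

```python
import numpy as np

# Gradients come paired with their variables; a None gradient means
# tf.gradients found no path from the loss to that variable, so it must
# be passed through untouched rather than dereferenced.
def add_scaled_noise_to_gradients(grads_and_vars, scale, rng):
    noisy = []
    for grad, var in grads_and_vars:
        if grad is None:  # the fix: skip instead of calling grad.get_shape()
            noisy.append((None, var))
            continue
        noise = rng.normal(0.0, scale, size=grad.shape)
        noisy.append((grad + noise, var))
    return noisy

rng = np.random.default_rng(0)
pairs = [(None, "unused_var"), (np.zeros(3), "w")]
out = add_scaled_noise_to_gradients(pairs, 0.1, rng)
print(out[0][0], out[1][0].shape)  # None (3,)
```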
* … d(x ^ y)/dy mentions log(x), which creates false singularities if x = 0.
  Instead, use x > 0 ? log(x) : 0. Fixes #2295.
  Change: 127762504
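The singularity and the masked-log fix described above can be reproduced in NumPy (the helper names are mine):

```python
import numpy as np

# d(x**y)/dy = x**y * log(x): at x = 0 this evaluates 0 * (-inf) = nan,
# even though the derivative of 0**y is 0 for y > 0.
def dpow_dy_naive(x, y):
    with np.errstate(divide="ignore", invalid="ignore"):
        return x**y * np.log(x)

# The fix: only take log(x) where x > 0, and substitute 0 elsewhere.
# The inner where() keeps log() away from 0 so no warning is raised.
def dpow_dy_safe(x, y):
    safe_log = np.where(x > 0, np.log(np.where(x > 0, x, 1.0)), 0.0)
    return x**y * safe_log

x = np.array([0.0, 2.0])
y = np.array([3.0, 3.0])
print(dpow_dy_naive(x, y))  # nan at x = 0
print(dpow_dy_safe(x, y))   # 0 at x = 0
```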