* [XLA:CPU] Add VLOGs to cpu_compiler.cc (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159902919
* Fixed cmake tests. (Mustafa Ispir, 2017-06-22)
  PiperOrigin-RevId: 159901417
* Aligned how model-fns handled params among linear/dnn/combined estimators. (Mustafa Ispir, 2017-06-22)
  PiperOrigin-RevId: 159899925
* Added canned estimators to TensorFlow library. (Mustafa Ispir, 2017-06-22)
  List of added estimators: DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, DNNLinearCombinedRegressor.
  PiperOrigin-RevId: 159898954
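As a point of reference for the entry above, constructing these canned estimators looks roughly like the sketch below. The feature column, hidden-unit sizes, and the tf.estimator export path are illustrative assumptions, not details taken from the commit.

    import tensorflow as tf

    # Hypothetical feature column; real models would define one per input feature.
    age = tf.feature_column.numeric_column('age')

    # Two of the canned estimators listed in the commit message.
    dnn_classifier = tf.estimator.DNNClassifier(
        feature_columns=[age],
        hidden_units=[64, 32],   # illustrative layer sizes
        n_classes=2)

    linear_regressor = tf.estimator.LinearRegressor(feature_columns=[age])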
* Support advisor in all places (command line, APIs) (A. Unique TensorFlower, 2017-06-22)
  Add expensive operation checker.
  PiperOrigin-RevId: 159897279
* Improve score-trick to be a valid Csiszar f-Divergence yet numerically stable. (Joshua V. Dillon, 2017-06-22)
  PiperOrigin-RevId: 159896013
* Generating TBAA metadata causes LLVM to miscompile after https://reviews.llvm.org/rL305938. (A. Unique TensorFlower, 2017-06-22)
  Disable TBAA (to stop the miscompiles) while we fix the root issue.
  PiperOrigin-RevId: 159895736
* [BatchNorm] Minor fixes to TF doc (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159886125
* Register devices under their legacy names (Brennan Saeta, 2017-06-22)
  Because some higher level APIs continue to use the legacy name format, when using ClusterSpec propagation, we need to ensure that we register the devices under their legacy names as well as their canonical names.
  PiperOrigin-RevId: 159885777
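For context on the entry above, the legacy and canonical spellings of one and the same device differ only in the trailing component; the job/replica/task values below are placeholders, not taken from the change.

    import tensorflow as tf

    # Legacy name format still emitted by some higher-level APIs.
    legacy_name = "/job:worker/replica:0/task:0/cpu:0"
    # Canonical name format for the same device.
    canonical_name = "/job:worker/replica:0/task:0/device:CPU:0"

    # Either spelling can be used to pin ops to that device.
    with tf.device(legacy_name):
        a = tf.constant(1.0)
    with tf.device(canonical_name):
        b = tf.constant(2.0)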
* Add support for label_keys to DebugClassifier (Makoto Uchida, 2017-06-22)
  PiperOrigin-RevId: 159883986
* Move sparse_fill_empty_rows to new, *significantly* faster, C++ kernel for everyone. (Eugene Brevdo, 2017-06-22)
  Also fix a bug in the C++ op when the input ST has 0 elements.
  PiperOrigin-RevId: 159880044
* If rank is unknown, do not add output shapes to transpose nodes. (Yao Zhang, 2017-06-22)
  PiperOrigin-RevId: 159879840
* In SE_ASSIGN_OR_RETURN change ConsumeValueOrDie to the preferred std::move ValueOrDie. (Jacques Pienaar, 2017-06-22)
  PiperOrigin-RevId: 159879754
* Implement alternative `monte_carlo.expectation_v2`. (Joshua V. Dillon, 2017-06-22)
  This function implements the reparameterization and score-gradient tricks and does not depend on tf.Distribution-like inputs.
  PiperOrigin-RevId: 159877923
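For reference, the two estimators named in the entry above are the standard identities below (written out here for orientation, not copied from the change):

    % Score-gradient (likelihood-ratio) trick:
    \nabla_\theta \, \mathbb{E}_{q_\theta(x)}[f(x)]
        = \mathbb{E}_{q_\theta(x)}\big[\, f(x)\, \nabla_\theta \log q_\theta(x) \,\big]

    % Reparameterization trick, assuming x = g_\theta(\epsilon) with \epsilon \sim p(\epsilon):
    \nabla_\theta \, \mathbb{E}_{q_\theta(x)}[f(x)]
        = \mathbb{E}_{p(\epsilon)}\big[\, \nabla_\theta f(g_\theta(\epsilon)) \,\big]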
* Internal change. (Anna R, 2017-06-22)
  PiperOrigin-RevId: 159876942
* Make HloModule clonable (A. Unique TensorFlower, 2017-06-22)
  This CL makes HloModule clonable, which is necessary when we want to run the same compilation twice with the same input.
  PiperOrigin-RevId: 159874256
* Fix bugs related to distributions over integers. (Joshua V. Dillon, 2017-06-22)
  - Ensure that the max number of categories does not exceed the largest integer-form float.
  - Make dtype inference consistent between Categorical and Multinomial distributions.
  - Improve documentation to better reflect that the Categorical distribution is analogous to `argmax{OneHotCategorical}` (itself being identical to `argmax{Multinomial(p,n=1)}`) but not Multinomial.
  - Fix `validate_args` Heisenberg uncertainty: only validation logic should live under self.validate_args. E.g., validate_args=True would sometimes imply `x=floor(x)`, which changes behavior and makes debugging impossible because enabling validation *changes* values.
  - Corrected `Geometric` swapping of `validate_args` and `allow_nan_stats` default values.
  Fixes #10149
  PiperOrigin-RevId: 159872532
* In tfcompile, prune nodes that are not reachable from the fetches before building the Graph. (A. Unique TensorFlower, 2017-06-22)
  This allows loading a graph that contains ops not needed for the compiled binary.
  PiperOrigin-RevId: 159869692
* Fix cuda_kernel_helper_test. (Jonathan Hseu, 2017-06-22)
  std::numeric_limits<int32>::max() doesn't pass, so I didn't use that.
  PiperOrigin-RevId: 159869169
* Added a utility to create parsing spec for regressors (canned estimator) (Mustafa Ispir, 2017-06-22)
  PiperOrigin-RevId: 159855254
* For candidate sampling, add facility to colocate the logit computation with the sharded embeddings. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159854706
* Replaced constant inputs with variables to ensure most of the graph doesn't get optimized away. (Benoit Steiner, 2017-06-22)
  PiperOrigin-RevId: 159853171
* Migrate ops for new version of TensorForest. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159852889
* Use a single threaded session for SDCALinearRegressorTest to avoid incorrect threading test failures (tsan). (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159852818
* Generalize cluster initialization to span multiple mini-batches if necessary. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159852557
* Modify beam search decoder to use symbolic shape for vocab size if the static shape is not present. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159852297
* TpuEstimator: Replicate the input_fn to the worker CPU for each shard. (Jonathan Hseu, 2017-06-22)
  The batch size is configured as follows: the user may specify a global batch size in their hyperparameters. If the 'batch_size' field is set, then we convert the global batch size into a per-shard batch size by dividing by num_shards before running their input_fn.
  PiperOrigin-RevId: 159851773
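A minimal sketch of the batch-size conversion described above; the 'batch_size' key comes from the entry, while the helper name and the divisibility check are assumptions made for illustration.

    def per_shard_batch_size(params, num_shards):
        # Convert the user's global batch size into a per-shard batch size
        # by dividing by the number of shards, as described in the commit.
        global_batch_size = params['batch_size']
        if global_batch_size % num_shards != 0:
            raise ValueError('global batch size %d not divisible by num_shards %d'
                             % (global_batch_size, num_shards))
        return global_batch_size // num_shards

    # Example: a global batch size of 1024 on 8 shards gives 128 examples per shard.
    assert per_shard_batch_size({'batch_size': 1024}, num_shards=8) == 128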
* Fixes some docstrings in feature_column. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159850619
* [XLA] Remove unused flags and move debugging flag to debug options. (Eli Bendersky, 2017-06-22)
  PiperOrigin-RevId: 159849759
* Adding missing license notice to toolchain build files (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159847551
* Removed deprecated summary usage from estimators. (Mustafa Ispir, 2017-06-22)
  Made name_space usage consistent.
  PiperOrigin-RevId: 159846928
* Fold as many nodes as possible instead of giving up if there is any error. (Benoit Steiner, 2017-06-22)
  PiperOrigin-RevId: 159841935
* VectorExponential added to distributions. (Ian Langmore, 2017-06-22)
  PiperOrigin-RevId: 159840822
* Have RestoreV2's shape fn set all outputs to unknown shape. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159835723
* Add a multi-head TensorForest estimator. (A. Unique TensorFlower, 2017-06-22)
  PiperOrigin-RevId: 159820487
* When configuring per-session thread pools, allow a pool to be a global pool. (A. Unique TensorFlower, 2017-06-21)
  This allows a division between large and small pools, without needing to make a new pool for each session.
  PiperOrigin-RevId: 159789678
* Fixes a TODO in head_test. (A. Unique TensorFlower, 2017-06-21)
  PiperOrigin-RevId: 159789178
* Added batch_matmul op dependence to android_extended_ops (A. Unique TensorFlower, 2017-06-21)
  PiperOrigin-RevId: 159787178
* Made sure that we can call the constant folding code twice safely. (Benoit Steiner, 2017-06-21)
  PiperOrigin-RevId: 159781607
* Switch from assigning namedtuple.__new__.__defaults__ to overwriting __new__. (A. Unique TensorFlower, 2017-06-21)
  Assigning __defaults__ relies on an implementation detail of CPython, confuses type checkers (and developers :)), and is error-prone since it doesn't make the relationship between parameter names and default values explicit. This CL switches to overloading __new__ instead.
  PiperOrigin-RevId: 159773922
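To make the contrast in the entry above concrete, here is a small self-contained example; the Config type and its fields are invented for illustration and do not appear in the TensorFlow change.

    import collections

    # Old pattern: poke defaults into __new__.__defaults__. It works in CPython,
    # but the (1, 1) tuple silently applies only to the *last* two fields.
    _Config = collections.namedtuple('Config', ['width', 'height', 'depth'])
    _Config.__new__.__defaults__ = (1, 1)

    # New pattern: override __new__ so each default sits next to its field name.
    class Config(collections.namedtuple('Config', ['width', 'height', 'depth'])):
        def __new__(cls, width, height=1, depth=1):
            return super(Config, cls).__new__(cls, width, height, depth)

    assert Config(width=3) == Config(3, 1, 1)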
* Fixed the shape functions of the QuantizedAdd and QuantizedMul ops (Benoit Steiner, 2017-06-21)
  PiperOrigin-RevId: 159772841
* Blacklist the quantized ops since they have too many issues (incorrect shape functions, memory corruptions, ...) (Benoit Steiner, 2017-06-21)
  PiperOrigin-RevId: 159772801
* Support zero shapes for random_poisson. This matches random_uniform. (A. Unique TensorFlower, 2017-06-21)
  PiperOrigin-RevId: 159771215
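A quick sketch of the behavior described in the random_poisson entry, assuming the TF 1.x signatures tf.random_poisson(lam, shape) and tf.random_uniform(shape); the shapes are arbitrary examples.

    import tensorflow as tf

    # A zero in the requested shape should yield an empty sample,
    # matching what tf.random_uniform already does.
    empty_poisson = tf.random_poisson(lam=2.0, shape=[0, 3])
    empty_uniform = tf.random_uniform(shape=[0, 3])

    with tf.Session() as sess:
        p, u = sess.run([empty_poisson, empty_uniform])
        print(p.shape, u.shape)  # both (0, 3)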
* Internal change. (Anna R, 2017-06-21)
  PiperOrigin-RevId: 159769520
* Raise ValueError if invalid dtype for random_uniform. (Dumitru Erhan, 2017-06-21)
  PiperOrigin-RevId: 159764956
* Automated g4 rollback of changelist 159746509 (Eli Bendersky, 2017-06-21)
  PiperOrigin-RevId: 159763112
* Add ability for argmax to output int32 indices. Default remains int64. (Vijay Vasudevan, 2017-06-21)
  Change is made in a backwards and forward compatible manner, since we add a new attribute with a default that remains the same, and simply register a few new kernels.
  PiperOrigin-RevId: 159761347
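Assuming the new attribute is surfaced in Python as an output_type argument (as in later 1.x releases), usage would look roughly like this; treat the argument name as an assumption rather than something stated in the entry.

    import tensorflow as tf

    logits = tf.constant([[0.1, 2.0, 0.3],
                          [4.0, 0.5, 0.6]])

    idx64 = tf.argmax(logits, axis=1)                        # default: int64 indices
    idx32 = tf.argmax(logits, axis=1, output_type=tf.int32)  # new: int32 indices

    print(idx64.dtype, idx32.dtype)  # <dtype: 'int64'> <dtype: 'int32'>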
* [tf contrib seq2seq] Add monotonic attention mechanisms (A. Unique TensorFlower, 2017-06-21)
  * Add monotonic_attention and safe_cumprod helper functions.
  * Add _BaseMonotonicAttentionMechanism base class.
  * Add BahdanauMonotonicAttention and LuongMonotonicAttention classes.
  These attention mechanisms are proposed in Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck, "Online and Linear-Time Attention by Enforcing Monotonic Alignments." ICML 2017. https://arxiv.org/abs/1704.00784
  PiperOrigin-RevId: 159760073
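A rough usage sketch for the new classes, assuming they follow the constructor pattern of the existing tf.contrib.seq2seq.BahdanauAttention (num_units plus an encoder memory tensor); the shapes and the cell choice are placeholders, not details from the change.

    import tensorflow as tf

    batch_size, max_time, num_units = 32, 50, 128
    encoder_outputs = tf.placeholder(tf.float32, [batch_size, max_time, num_units])

    # Monotonic variant of Bahdanau-style attention over the encoder outputs.
    attention = tf.contrib.seq2seq.BahdanauMonotonicAttention(
        num_units=num_units, memory=encoder_outputs)

    # Wrap a decoder cell with the attention mechanism, as with the
    # non-monotonic mechanisms already in tf.contrib.seq2seq.
    decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
        tf.contrib.rnn.BasicLSTMCell(num_units), attention)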
* [XLA] Remove dead "in-client" code. (Mark Heffernan, 2017-06-21)
  Remove Service::runs_in_client_process_ field and its dead user. This was previously used by the "InProcess" methods, which have been replaced with the LocalClient API.
  PiperOrigin-RevId: 159759455
* Add nonpublic helper `tf.distributions.util.tridiag` op. (Joshua V. Dillon, 2017-06-21)
  PiperOrigin-RevId: 159757904