Commit message | Author | Age
not numerically accurate for x near 0).
Make some cleanups to unary_ops_test.py.
Change: 146282294
Resource variables are quite new, so this change should not affect any preexisting models. Follows up on the fix for Optimizers dealing with sparse gradients which have repeated indices.
Change: 146279378
Change: 146278252
Helps with #6267
Change: 146277178
insertion HLO passes. Switches the implementation to simple expression-tree matching, and reduces the number of patterns matched to those we care about in production (i.e. post fusion and copy insertion).
ExecuteThunks runtime, FRNN forward pass:
  sequence_length=30:  WhileThunk 112ms, ForThunk 6ms
  sequence_length=100: WhileThunk 585ms, ForThunk 191ms
Change: 146273048
Change: 146272519
We create a reservoir that organizes health pill values by node name, and add a
method to the event multiplexer to fetch the health pills per run.
Change: 146272468
This seems like a reasonable thing to do, as (1) they have been in existence for ~1 year, (2) people have added new fields to them since their inception, and (3) the C API supports them.
Change: 146271432
Change: 146268982
inception num classes could cause the demo app to crash (due to a recent CHECK added in the inference interface).
Change: 146268764
multiple GPU kernels, aggregate the time for all kernels for that op.
Change: 146268331
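The commit describes summing the times of all GPU kernels launched by one op into a single per-op figure. A minimal sketch of that aggregation (names and record format are hypothetical, not the profiler's actual data model):

```python
from collections import defaultdict

def aggregate_kernel_times(kernel_records):
    """Sum the durations of all GPU kernels launched by the same op.

    kernel_records: iterable of (op_name, kernel_name, duration_us) tuples.
    Returns a dict mapping op_name -> total duration in microseconds.
    """
    totals = defaultdict(int)
    for op_name, _kernel_name, duration_us in kernel_records:
        totals[op_name] += duration_us
    return dict(totals)

# An op such as a convolution may launch several kernels; its reported
# time is the sum over all of them.
records = [
    ("conv2d_1", "implicit_convolve_sgemm", 120),
    ("conv2d_1", "winograd_transform", 35),
    ("matmul_1", "sgemm_128x128", 80),
]
per_op = aggregate_kernel_times(records)
```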
Change: 146267759
functions for C++. This is a souped-up version of the hidden_ops mechanism in
Python; the intent is to use it for most or all of the client
languages, with a common list of changes to make in a common file and
per-language overrides.
Also:
* include the documentation for outputs in the generated comments
* several updates to C++ API to match Python
* fix C++ shape function for ConcatV2 now that we use it by default
* split op_gen_lib out of core:framework, since it is only used by
the op generators, and I don't want to add another proto to
mobile builds
Change: 146267344
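The "common list of changes plus per-language overrides" idea can be sketched as a dictionary merge. This is purely illustrative; the names below are hypothetical and do not reflect TensorFlow's actual op-generation mechanism:

```python
# Hypothetical sketch: a common set of per-op API changes (renames,
# hidden ops) merged with one language's overrides.
COMMON_CHANGES = {
    "Concat": {"rename_to": "concat", "hidden": False},
    "RefSwitch": {"hidden": True},
}

CPP_OVERRIDES = {
    "Concat": {"rename_to": "Concat"},  # e.g. keep CamelCase in C++
}

def changes_for_language(overrides):
    """Merge the common change list with one language's overrides.

    Overrides win on conflicting keys; ops only present in the
    override set are added as-is.
    """
    merged = {op: dict(change) for op, change in COMMON_CHANGES.items()}
    for op, change in overrides.items():
        merged.setdefault(op, {}).update(change)
    return merged

cpp_changes = changes_for_language(CPP_OVERRIDES)
```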
Change: 146266321
- enable contrib
- show __init__
- remove core.protobuf module
- fix bug in _get_arg_spec
- hide contrib.learn.head
Change: 146265365
Change: 146260296
Change: 146257820
index_to_string_table_from_tensor and index_to_string_table_from_file.
Change: 146255499
Change: 146241206
Change: 146238867
Change: 146231606
Change: 146230363
Change: 146204375
These **kwargs are ultimately passed to meta_graph.import_scoped_meta_graph. This CL allows setting useful loading options such as import_scope and input_map.
Change: 146201221
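The pattern described — a high-level loader forwarding `**kwargs` down to a lower-level importer so callers can pass options like `import_scope` and `input_map` without the wrapper enumerating them — can be sketched generically. The stub functions below are hypothetical stand-ins, not the real TensorFlow APIs:

```python
# Hypothetical sketch of **kwargs forwarding: the wrapper does not need
# to know every option the inner importer accepts.
def import_scoped_meta_graph_stub(path, import_scope=None, input_map=None):
    """Stand-in for the lower-level importer; records what it received."""
    return {"path": path, "import_scope": import_scope, "input_map": input_map}

def load_stub(path, **kwargs):
    """High-level entry point; passes unrecognized options straight through."""
    return import_scoped_meta_graph_stub(path, **kwargs)

result = load_stub("model.meta", import_scope="replica_0")
```

Because the wrapper forwards blindly, new importer options become usable from the high-level API without any wrapper changes.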
desired shape.
Change: 146200993
Change: 146198059
Change: 146197150
"get_forward_event_shape()" to "forward_event_shape()", "forward_event_shape()" to "forward_event_shape_tensor(), and same for "inverse" counterparts.
Change: 146196054
Change: 146194766
Change: 146191509
Fixes spurious "Executor failed to create kernel. Not found: No registered '...' OpKernel for CPU devices" errors from the constant folder.
Change: 146188668
Change: 146187659
distribution arguments.
BUGFIX: Correct undefined mode in dirichlet.mode.
BUGFIX: Correct broadcasting in dirichletmultinomial.mean.
Change: 146187147
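For context on the dirichlet.mode bugfix: the mode of a Dirichlet distribution with concentration vector α is mode_i = (α_i − 1) / (Σα − K), and it is only defined when every α_i > 1 (otherwise the density is unbounded at a boundary). A minimal pure-Python sketch of the formula, not TensorFlow's implementation:

```python
def dirichlet_mode(alpha):
    """Mode of a Dirichlet distribution with concentration vector alpha.

    mode_i = (alpha_i - 1) / (sum(alpha) - K), defined only when every
    alpha_i > 1; otherwise the mode is undefined.
    """
    k = len(alpha)
    if any(a <= 1.0 for a in alpha):
        raise ValueError("mode undefined unless every alpha_i > 1")
    total = sum(alpha)
    return [(a - 1.0) / (total - k) for a in alpha]

# With alpha = [2, 3, 4]: denominator is 9 - 3 = 6, so the mode is
# [1/6, 2/6, 3/6], which sums to 1 as a point on the simplex must.
mode = dirichlet_mode([2.0, 3.0, 4.0])
```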
Change: 146186989
Change: 146185845
Change: 146184013
Previously, XLA controlled the presence/absence of fast-math flags (FMF) via a
command-line flag. This patch changes things so we use a new CompileOptions
proto instead.
This proto lives in HloModuleConfig, and is passed to the service via
ExecuteRequest.
This change lets us entirely remove llvm_backend_flags.{h,cc}.
In addition, this change takes us from two to one fast-math flags. Previously
we tried to control "unsafe FP transformations" separately from "full fast
math". It turns out that LLVM is misleadingly inconsistent in how it handles
these. In the backend, they are indeed two separate options that can be
enabled/disabled independently. In the frontend, however, unsafe-fp-math
implies all the other FMFs.
As a result, it doesn't really make sense for XLA to attempt to split out these
two flags, at least not until LLVM changes how it handles them.
Change: 146183994
Change: 146183030
Change: 146182979
Change: 146182705
"event_shape()", "event_shape()" to "event_shape_tensor(), and same for "batch_shape".
BUGFIX: *onehot_categorical.py returns vector not scalar.
Change: 146182622
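One common way to carry out a rename like this while keeping the old names working is a deprecated alias that delegates to the new method. The toy class below is a hedged sketch of that pattern, not how TensorFlow's distributions actually implemented the change:

```python
import warnings

class ToyDistribution:
    """Illustrates the rename pattern: the old getter name is kept as a
    deprecated alias that forwards to the new method."""

    def __init__(self, event_shape):
        self._event_shape = event_shape

    def event_shape(self):
        """New-style accessor."""
        return self._event_shape

    def get_event_shape(self):
        """Deprecated alias for the old name; warns, then delegates."""
        warnings.warn("get_event_shape() is deprecated; use event_shape()",
                      DeprecationWarning, stacklevel=2)
        return self.event_shape()

d = ToyDistribution([3])
```

Callers of the old name keep working during a migration window, while the warning nudges them toward the new API.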
Change: 146182278
10. It used to be 4. We upped this number because the image dashboard now has a slider that lets the user scroll through steps. Previously, image summaries at those past steps were inaccessible.
Change: 146180737
loading.
This avoids flickering when the user scrolls through steps of an image summary, allowing the user to readily compare images across steps.
Change: 146180073
Change: 146179848
Change: 146177492
- cache computed values, hoist computations out of loops
- avoid bounds checks in many cases.
- access input data with pointer offsets instead of through 4-d eigen
tensor (which requires more complicated index computations).
- add custom accumulation fn for 3-channel images
Added tests to resize_area_op_test.cc, and benchmarks to image_ops_test.py.
Change: 146177262
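The flavor of optimization listed above — hoisting invariant computations out of loops and replacing per-element N-d index arithmetic with flat offsets — can be illustrated with a small sketch. This is a generic example in Python, not the actual ResizeArea kernel:

```python
# Illustrative only: sum each row of a flattened H x W x C image.
def row_sums_naive(flat, height, width, channels):
    """Recomputes the full 3-d -> flat index for every element."""
    sums = [0.0] * height
    for y in range(height):
        for x in range(width):
            for c in range(channels):
                sums[y] += flat[(y * width + x) * channels + c]
    return sums

def row_sums_hoisted(flat, height, width, channels):
    """Hoists the row stride out of the loops and walks a flat slice,
    mirroring the pointer-offset style described in the commit."""
    sums = [0.0] * height
    row_stride = width * channels          # computed once, not per element
    for y in range(height):
        base = y * row_stride              # one multiply per row
        sums[y] = sum(flat[base:base + row_stride])
    return sums

img = [float(i) for i in range(2 * 3 * 3)]  # 2x3 image with 3 channels
```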
Change: 146176865
Change: 146174108
Specifically, the request manager sends a request to the /health_pills endpoint
with the list of node names stored in POST data.
Health pills are currently a feature internal to Google, but this feature may
soon be open-sourced. Health pills are summaries of tensor element values,
i.e. the counts of tensor element values that are -Inf, negative, 0, positive,
+Inf, NaN, etc.
Change: 146160274
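The health-pill categories described above can be computed with a few comparisons. A minimal sketch (the function name and return format are hypothetical):

```python
import math

def health_pill(values):
    """Count elements per health-pill category: -Inf, negative, zero,
    positive, +Inf, and NaN."""
    counts = {"-Inf": 0, "negative": 0, "zero": 0,
              "positive": 0, "+Inf": 0, "NaN": 0}
    for v in values:
        if math.isnan(v):                  # NaN compares unequal to everything
            counts["NaN"] += 1
        elif v == float("-inf"):
            counts["-Inf"] += 1
        elif v == float("inf"):
            counts["+Inf"] += 1
        elif v < 0:
            counts["negative"] += 1
        elif v == 0:
            counts["zero"] += 1
        else:
            counts["positive"] += 1
    return counts

pill = health_pill([float("-inf"), -2.0, 0.0, 3.5, float("inf"), float("nan")])
```

Note the NaN check must come first, since NaN fails every ordinary comparison.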