Commit log

- considerably. Shrinks text size of the example_trainer binary by ~1.5%.
  Change: 115578002

- Made the corresponding unit test more robust.
  Change: 115575179

- return true. Add a unittest to catch this type of regression in
  the future.
  Change: 115573280

- Change: 115568214

- Change: 115528686

- indices are invalid because they are out of bounds.
  Change: 115522264

- These tools are meant to allow recording of benchmark & unit test
  structured output to pbtxt files in a directory, only when the
  environment variable TEST_REPORT_FILE_PREFIX is set. For now,
  only saving of C++ microbenchmark output is supported.
  Change: 115518303
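The gating described in the entry above can be sketched in a few lines: write a report file only when the environment variable is set, and stay silent otherwise. This is a minimal illustration, not the actual tool; the helper name `maybe_save_report` and the plain-text payload are hypothetical (the real tools emit structured pbtxt protos).

```python
import os

def maybe_save_report(name, report_text):
    """Write a test/benchmark report only when TEST_REPORT_FILE_PREFIX is set.

    Hypothetical sketch: the real tools serialize benchmark protos to pbtxt;
    here we simply write text under the configured filename prefix.
    Returns the path written, or None when reporting is disabled.
    """
    prefix = os.environ.get("TEST_REPORT_FILE_PREFIX")
    if not prefix:
        return None  # env var not set: reporting is disabled
    path = prefix + name + ".pbtxt"
    # Make sure the target directory exists before writing.
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w") as f:
        f.write(report_text)
    return path
```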
- Change: 115516426

- Change: 115515678

- Change: 115511835

- Change: 115511794

- Helps with: https://github.com/tensorflow/tensorflow/issues/917
  Also fixes https://github.com/tensorflow/tensorflow/issues/1162
  The main benefit is that the computation of the sufficient statistics is now
  decoupled from the aggregation of the moments. If you want to perform the
  accumulation incrementally, you don't have to keep all the inputs around and
  can instead keep the much more compact sum and sum-of-squares. Accumulation
  could also be performed locally if you aggregate across multiple devices, and
  computing the sum and sum-of-squares can now, in principle, be done in
  parallel.
  Tested running Inception: same performance, same step time.
  The batch normalization benchmark is a bit faster on CPU, a bit slower on GPU:
  Before:
  cpu shape:4/3 #layers:10 mode:py scale:True train:False - 1.139310 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.021970 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:True - 2.767147 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:True - 0.074531 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.742835 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.013473 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:True - 1.738806 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:True - 0.052777 secs
  cpu shape:2/1 #layers:10 mode:py scale:True train:False - 0.119180 secs
  gpu shape:2/1 #layers:10 mode:py scale:True train:False - 0.011201 secs
  cpu shape:2/1 #layers:10 mode:py scale:True train:True - 0.218297 secs
  gpu shape:2/1 #layers:10 mode:py scale:True train:True - 0.048526 secs
  After:
  cpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.998944 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.025828 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:True - 2.657428 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:True - 0.086614 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.603137 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:False - 0.017668 secs
  cpu shape:4/3 #layers:10 mode:py scale:True train:True - 1.519533 secs
  gpu shape:4/3 #layers:10 mode:py scale:True train:True - 0.055214 secs
  cpu shape:2/1 #layers:10 mode:py scale:True train:False - 0.071344 secs
  gpu shape:2/1 #layers:10 mode:py scale:True train:False - 0.016440 secs
  cpu shape:2/1 #layers:10 mode:py scale:True train:True - 0.222093 secs
  gpu shape:2/1 #layers:10 mode:py scale:True train:True - 0.039967 secs
  Change: 115507032
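The decoupling described in the entry above can be sketched with plain Python: fold each batch into three running scalars (count, sum, sum-of-squares), then derive the moments once at the end. The helper names are hypothetical, not the actual TensorFlow API; the point is that only the compact sufficient statistics need to be kept around, not the inputs themselves.

```python
def accumulate(stats, batch):
    """Fold one batch into running sufficient statistics.

    `stats` is a (count, total, total_sq) triple: these three scalars are
    all that must be retained between batches, so accumulation can be done
    incrementally or per-device without keeping the inputs around.
    """
    count, total, total_sq = stats
    for x in batch:
        count += 1
        total += x
        total_sq += x * x
    return count, total, total_sq

def moments(stats):
    """Derive the mean and (biased) variance from the sufficient statistics."""
    count, total, total_sq = stats
    mean = total / count
    variance = total_sq / count - mean * mean  # E[x^2] - E[x]^2
    return mean, variance
```

Usage: start from `(0, 0.0, 0.0)`, call `accumulate` once per batch (or per device), and call `moments` only after aggregation.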
- RemoveNewDefaultAttrsFromGraphDef().
  Change: 115506523

- Change: 115505008

- X1 crashes when attempting to compile them.
  Change: 115500414

- Change: 115496194

- Change: 115495726

- Change: 115494526

- which case no offset will be added after normalization.
  Change: 115489328

- Change: 115472914

- Python's native 'map' function. This also fixes the bug with
  control_flow_ops.case.
  Change: 115472163

- Change: 115470945

- Describe how to load many runs.
  Change: 115467346

- Both gather and scatter now unconditionally validate indices in the inner
  loop, which prevents crashes if indices are changed asynchronously while the
  ops are running.
  For gather with validate_indices = true, the new code is within the noise of
  the old code speed-wise, or possibly slightly faster (unsurprising, since the
  new code fuses two loops): the geometric mean of the int32 gather benchmarks
  goes from 4.05GB/s to 4.04-4.07GB/s.
  For gather with validate_indices = false, the old code and a version of the
  old code that supported validate_indices = false both get 1.5% slower.
  Xiaoqiang and I deem this difference insufficient to preserve the unsafe code
  path, so poof: it's gone.
  For scatter (which always validates), the new code is slightly faster than
  the old code: the geometric mean goes from 546-559M items/s to 573M items/s.
  Change: 115467091
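The fused validation described in the entry above can be illustrated in plain Python: the bounds check happens inside the same loop that performs the copy, so there is no separate validation pass and no window in which an out-of-bounds index can slip through. This is a sketch of the technique only, not the actual C++ kernel.

```python
def gather(params, indices):
    """Gather elements of `params` at `indices`, validating each index in
    the same inner loop that does the copy (validation and gathering are
    fused into one pass rather than split into two loops)."""
    n = len(params)
    out = []
    for i in indices:
        if not 0 <= i < n:
            # Unconditional check: runs for every index on every call.
            raise IndexError(f"index {i} is out of bounds [0, {n})")
        out.append(params[i])
    return out
```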
- Change: 115464489

- Change: 115464229

- to be used to colocate based on attributes rather than either
  names of ops or devices (op names and devices aren't portable).
  A follow-up change will add an ops.colocate_with() to Python that adds
  this attribute to nodes; it will be used to replace calls to 'with
  tf.device(foo.device)' in TF library code, which assume that devices
  have been specified.
  Change: 115463464
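The idea in the entry above, recording colocation as a node attribute instead of copying a device string, can be sketched with a toy graph type. Everything here is hypothetical (the `Node` class, the attribute spelling, the stack-based scope); it only illustrates why an attribute stays portable: the device assignment can change later without rewriting the colocated nodes.

```python
import contextlib

# Scope stack for the sketch; the real implementation lives in the
# TensorFlow graph-construction machinery, not in module globals.
_colocation_stack = []

class Node:
    """Toy graph node: records a colocation attribute naming another node,
    rather than a hard-coded device string."""
    def __init__(self, name):
        self.name = name
        self.attrs = {}
        if _colocation_stack:
            target = _colocation_stack[-1]
            self.attrs["colocate"] = target.name  # attribute, not a device

@contextlib.contextmanager
def colocate_with(node):
    """Nodes created inside this scope are tagged to be placed with `node`."""
    _colocation_stack.append(node)
    try:
        yield
    finally:
        _colocation_stack.pop()
```

Because placement is resolved from the attribute at run time, moving `weights` to a different device automatically moves everything colocated with it.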
- Change: 115462062

- Change: 115419426

- Change: 115408162

- Change: 115384748

- Change: 115379524

- Change: 115371065

- Change: 115370821

- Reason: tsd is deprecated (https://github.com/DefinitelyTyped/tsd/issues/269)
  and typings is the new standard. Also, tsd was behaving badly: running
  `tsd install` on a clean client caused it to incorrectly depend on typing
  files from node_modules, which resulted in a broken build. This issue does
  not exist with typings.
  For convenience, and since typings is really fast when all deps are
  up-to-date, I made it a part of the standard gulp task. Run `npm install` so
  you have all the deps, and running `gulp` will keep the typing files
  synchronized; there is no longer a separate step for downloading them.
  The logical next step is to do the same for bower. I did wire that up, but I
  will not connect it to the gulp task until after the big bower dependency
  upgrade CL is through. If I add it right now, it will fail on unresolved
  dependency conflicts and make everyone sad.
  Change: 115370585

- Change: 115364038

- The tutorial Python files are copied to a separate directory and run against
  the Python installation on the system. The script performs basic quality
  checks, including a timeout, accuracy/loss thresholding, and checks for
  generated checkpoint files and summary files.
  Change: 115362939
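The checks described in the entry above can be sketched as a small harness: run one copied tutorial script in its own directory, enforce a timeout and a clean exit, and verify that checkpoint files appeared. The helper name `run_tutorial` and the exact check set are hypothetical; the real script also thresholds accuracy/loss, which is omitted here.

```python
import pathlib
import subprocess
import sys

def run_tutorial(script, workdir, timeout_secs=300):
    """Run one tutorial script and apply basic quality checks:
    it must finish before the timeout, exit cleanly, and leave at least
    one checkpoint file in its working directory."""
    proc = subprocess.run(
        [sys.executable, script],
        cwd=workdir,
        capture_output=True,
        timeout=timeout_secs,  # raises subprocess.TimeoutExpired on overrun
    )
    assert proc.returncode == 0, proc.stderr
    ckpts = list(pathlib.Path(workdir).glob("*.ckpt*"))
    assert ckpts, "no checkpoint files were written"
    return ckpts
```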
- ops to async.
  For gradient computation of loops, stacks are used to store the tensors that
  are computed in the forward pass but needed in backprop. This CL enables very
  long sequence training by swapping the stack tensors from GPU to CPU.
  Change: 115359847

- Change: 115358623

- Change: 115354844

- statically.
  Change: 115351830

- Change: 115347996

- The PickUnusedPortOrDie implementation is based on a simplified
  version of `grpc_pick_unused_port_or_die()` in gRPC. This utility will
  be necessary for tests of the distributed runtime (issue #23).
  Change: 115345502
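The usual trick behind utilities like the one above is to bind a socket to port 0 and let the OS assign a currently free port. A minimal Python sketch of that technique (not the actual C++ implementation) looks like this; note that the port is released before returning, so there is a small race window before the caller rebinds it, a caveat such helpers share.

```python
import socket

def pick_unused_port():
    """Ask the OS for a currently unused TCP port by binding to port 0.

    The socket is closed on return, so the port could in principle be
    grabbed by another process before the caller rebinds it.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))      # port 0: let the OS choose
        return s.getsockname()[1]     # the port the OS picked
```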
- Also:
  - rename sceneBehavior -> sceneElement to make it clearer that it is a
    Polymer element.
  - improve the info card by showing the actual op node in the
    successors/predecessors list when the metaedge contains only one base edge
    (one tensor).
  Change: 115339805

- This will be necessary for tests of the distributed runtime (issue #23).
  Change: 115339579

- This should fix Python 3 compatibility for this test.
  Change: 115339521

- Change: 115288487

- Change: 115284554

- Change: 115280348