| Commit message (Collapse) | Author | Age |
tutorials to reflect the new API.
Change: 118033430
Change: 118000502
Change: 117999025
Change: 117998775
inlined into the call site regardless of the graph optimizer option
setting.
Change: 117994922
import tensorflow as tf

# The original message's definition of `norm` is collapsed; a random op with
# an op-level seed (as in the tf.set_random_seed docs) is assumed here.
norm = tf.random_normal([2, 3], seed=1234)

sess = tf.Session()
print(sess.run(norm))  # first draw
print(sess.run(norm))  # second draw differs from the first

# A fresh session replays the same sequence of draws.
sess = tf.Session()
print(sess.run(norm))
print(sess.run(norm))
Change: 117994542
the scope of a block should be added to this graph.
Change: 117993733
Change: 117993641
file and serve it via the TensorBoard back-end.
The `RunOutputs` information contains execution statistics, such as compute time and memory usage for each node in the subgraph executed by a particular `session.run()` call.
A follow-up change will add the front-end support to overlay this data onto the graph.
Change: 117981334
Change: 117978853
Change: 117977121
Change: 117976198
this CL, the zero tensors are created unconditionally. This CL changes it to create the tensors only when they are needed.
Change: 117970158
Change: 117967381
Change: 117961829
and cuda library.
Change: 117960844
Change: 117958944
Change: 117956472
Change: 117953780
storing it internally as an int64 (improving future compatibility
with sampling from huge tensors; note that this CL isn't enough
to permit huge tensors, because we also clip their size at
other points in the process, but it's a step forward).
Change: 117953355
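The motivation for the int64 storage above can be seen with a quick standard-library sketch. The element count below is hypothetical, chosen only because it exceeds the int32 ceiling of 2^31 - 1:

```python
import ctypes

# Hypothetical element count for a "huge" tensor, just past the int32 limit.
num_elements = 3 * 1000**3  # 3,000,000,000 > 2**31 - 1

# An int32 count silently wraps around; an int64 count holds the true value.
print(ctypes.c_int32(num_elements).value)  # wraps to -1294967296
print(ctypes.c_int64(num_elements).value)  # 3000000000
```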
DebugString()
HasTensor()
GetVariableToShapeMap()
Example use can be found in PywrapTensorSliceReaderTest in
third_party/tensorflow/python/training/saver_test.py
For example, to use it in Python:
  try:
    reader = tf.train.NewCheckpointReader(your_checkpoint_file)
    print(reader.DebugString())
  except pywrap_tensorflow.StatusNotOK as e:
    print(str(e))
Change: 117951901
Change: 117950844
Change: 117950091
Change: 117949288
Change: 117943734
will allow ops to be added that use FFmpeg. Note that OSX machines will be
used to validate TensorFlow without FFmpeg installed.
Change: 117942636
To run it now:
bazel run //tensorflow/..:batch_norm_benchmark -- --benchmarks=.. --use_gpu={false/true}
Also a tiny file naming bugfix to run_and_gather_logs_lib.
Change: 117941756
This enables shape visualization by default in the graph visualizer.
Change: 117941335
tensorflow/core/kernel.
Change: 117941211
This rectifies the following error:
  TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use
  `if t is not None:` instead of `if t:` to test if a tensor is defined,
  and use the logical TensorFlow ops to test the value of a tensor.
when the conditional branch contains a tf.IndexedSlices object with
dense_shape=tf.constant(...).
Change: 117937593
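The error above comes from `tf.Tensor` overriding Python's truth-value hook to raise rather than guess. A minimal stand-in class (hypothetical, not the real `tf.Tensor`) shows why `if t:` fails while `if t is not None:` is always safe:

```python
class FakeTensor:
    """Stand-in mimicking tf.Tensor's refusal to act as a Python bool."""
    def __bool__(self):
        raise TypeError(
            "Using a `tf.Tensor` as a Python `bool` is not allowed.")
    __nonzero__ = __bool__  # Python 2 spelling of the same hook

t = FakeTensor()

# Wrong: truth-testing the object itself calls __bool__ and raises.
try:
    if t:
        pass
except TypeError as e:
    print("raised:", type(e).__name__)

# Right: an identity check against None never calls __bool__.
if t is not None:
    print("t is defined")
```

The identity check works because `is` compares object identity directly and never consults `__bool__`, which is exactly why the error message recommends it.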
for __ANDROID_TYPES_FULL__.
Change: 117936401
cwise_ops_test is still timing out frequently.
Change: 117935269
Change: 117931327
Change: 117928611
without major contortions, e.g.
foo = variables.Variable(3.14, name="pi", trainable=False, dtype=tf.half)
Change: 117916139
to our requirements for skflow.
Change: 117901053
Change: 117900461
This enables a subclass of GrpcServer or GrpcSession to override the
server credential and channel creation logic, allowing the use of
different security mechanisms.
Change: 117900179
Previously only float and double variables were allowed on the GPU. Now
anything other than string works. The same goes for Assign. AssignAdd
and AssignSub have been left alone for now.
Change: 117887590
for sdca_ops.
Change: 117884415
training operation.
Change: 117881415
Change: 117876000
Change: 117872871
Change: 117871897
Dockerfile.tensorboard
Adds two new scripts:
- install/install_tensorboard_dependencies.sh which installs e.g. nodejs and chromium-browser
- builds/tensorboard.sh which uses npm to install testing dependencies, uses gulp to compile all the TensorBoard code, and then uses wct to test the frontend code.
We use xvfb to run a headless Chromium for the selenium testing as described here: http://ncona.com/2015/12/running-polymer-tests-with-docker/
I tested that running tensorflow/tools/ci_build/ci_build.sh tensorboard tensorflow/tools/ci_build/builds/tensorboard.sh passes when the tests are passing (exit status 0) and fails when tests are failing (exit status 1).
Change: 117871115
Otherwise it is flaky.
Change: 117868217
pooling_ops_test is only slow because it uses unnecessarily large sizes. Speed
it up by dividing all the depth values coming out of Inception by 30.
After this speedup, there is no need to shard.
Change: 117867788
conv_ops_test is currently testing exactly the kernel sizes of Inception 2015.
This doesn't add anything in terms of unit test strength, requires sharding,
and generally makes testing slow. Dividing each of the depth sizes by 10
reduces the number of flops by 1000 and still tests a wide range of sizes.
There is no longer any need to shard conv_ops_test.
Change: 117867597
inference passes, to make benchmarks more reproducible.
Change: 117867195