author    Martin Wicke <wicke@google.com>    2017-09-02 19:21:45 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>    2017-09-02 19:25:56 -0700
commit    d57572e996dce24abf4d9cf6ea04e7104b3d743b (patch)
tree      ec8f6620e0f3231a8b739a2b6574a2db813e85b3 /tensorflow/core/profiler
parent    ddba1e0aadabe26063a28c5d1c48e2cfce44e30f (diff)
Merge changes from github.
PiperOrigin-RevId: 167401527
Diffstat (limited to 'tensorflow/core/profiler')
-rw-r--r--  tensorflow/core/profiler/g3doc/advise.md                      |  4
-rw-r--r--  tensorflow/core/profiler/g3doc/command_line.md                |  6
-rw-r--r--  tensorflow/core/profiler/g3doc/options.md                     |  8
-rw-r--r--  tensorflow/core/profiler/g3doc/profile_memory.md              |  2
-rw-r--r--  tensorflow/core/profiler/g3doc/profile_model_architecture.md  |  8
-rw-r--r--  tensorflow/core/profiler/g3doc/profile_time.md                | 12
6 files changed, 20 insertions, 20 deletions
diff --git a/tensorflow/core/profiler/g3doc/advise.md b/tensorflow/core/profiler/g3doc/advise.md
index d87b0d8603..d0de8317f6 100644
--- a/tensorflow/core/profiler/g3doc/advise.md
+++ b/tensorflow/core/profiler/g3doc/advise.md
@@ -86,7 +86,7 @@ For example:
* Checks RecvTensor RPC latency and bandwidth.
* Checks CPU/Memory utilization of the job.
-####AcceleratorUtilization Checker
+#### AcceleratorUtilization Checker
* Checks what percentage of time the accelerator spends on computation.
#### OperationChecker
@@ -100,7 +100,7 @@ For example:
* Checks the most expensive graph nodes.
* Checks the most expensive graph-building Python codes.
-####Contribute Your Checker
+#### Contribute Your Checker
Follow examples of accelerator_utilization_checker.h
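For reference alongside the checker list in advise.md above, a minimal sketch of driving the advisor from the TF 1.x Python API; `run_meta` is an assumption, not part of this change, and the default checker set is used:

```python
import tensorflow as tf

# Hedged sketch (TF 1.x API): run the advisor's checkers over the default
# graph. `run_meta` is assumed to be a RunMetadata collected from a
# session.run traced with RunOptions(trace_level=FULL_TRACE).
advice = tf.profiler.advise(tf.get_default_graph(), run_meta=run_meta)
print(advice)  # AdviceProto with per-checker findings
```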
diff --git a/tensorflow/core/profiler/g3doc/command_line.md b/tensorflow/core/profiler/g3doc/command_line.md
index 857b5e6459..e2839a682f 100644
--- a/tensorflow/core/profiler/g3doc/command_line.md
+++ b/tensorflow/core/profiler/g3doc/command_line.md
@@ -51,7 +51,7 @@ It defines _checkpoint_variable op type. It also provides checkpointed tensors'
Note: this feature is not well maintained now.
-###Start `tfprof`
+### Start `tfprof`
#### Build `tfprof`
@@ -140,9 +140,9 @@ tfprof>
-output
```
-###Examples
+### Examples
-####Profile Python Time
+#### Profile Python Time
```shell
# Requires --graph_path --op_log_path
tfprof> code -max_depth 1000 -show_name_regexes .*model_analyzer.*py.* -select micros -account_type_regexes .* -order_by micros
diff --git a/tensorflow/core/profiler/g3doc/options.md b/tensorflow/core/profiler/g3doc/options.md
index 15712d04c2..ddee63ad42 100644
--- a/tensorflow/core/profiler/g3doc/options.md
+++ b/tensorflow/core/profiler/g3doc/options.md
@@ -1,6 +1,6 @@
-##Options
+## Options
-###Overview
+### Overview
For all tfprof views, the profiles are processed with the following procedures
@@ -35,7 +35,7 @@ For all tfprof views, the profiles are processed with the following procedures
4) Finally, the filtered data structure is output in a format depending
on the `-output` option.
-####Option Semantics In Different View
+#### Option Semantics In Different View
options usually have the same semantics in different views. However, some
can vary. For example `-max_depth` in scope view means the depth of
name scope <b>tree</b>. In op view, it means the length of operation <b>list</b>.
@@ -68,7 +68,7 @@ output_bytes: The memory output by the operation. It's not necessarily requested
by the current operation. For example, it can be a tensor
forwarded from input to output, with in-place mutation.
-###Docs
+### Docs
`-max_depth`: Show nodes that are at most this number of hops from starting node in the data structure.
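A minimal Python-API sketch of the `-max_depth` option described in options.md above; the `ProfileOptionBuilder` call mirrors the CLI flag, and `run_meta` is illustrative:

```python
import tensorflow as tf

# Hedged sketch (TF 1.x API): the Python-side counterpart of -max_depth.
# In scope view it bounds the depth of the name-scope tree; in op view it
# bounds the length of the operation list. `run_meta` is illustrative.
opts = (tf.profiler.ProfileOptionBuilder(
            tf.profiler.ProfileOptionBuilder.time_and_memory())
        .with_max_depth(4)
        .build())
tf.profiler.profile(tf.get_default_graph(), run_meta=run_meta,
                    cmd='scope', options=opts)
```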
diff --git a/tensorflow/core/profiler/g3doc/profile_memory.md b/tensorflow/core/profiler/g3doc/profile_memory.md
index a00683d062..6eda5abdd9 100644
--- a/tensorflow/core/profiler/g3doc/profile_memory.md
+++ b/tensorflow/core/profiler/g3doc/profile_memory.md
@@ -1,4 +1,4 @@
-##Profile Memory
+## Profile Memory
It is generally a good idea to visualize the memory usage in timeline.
It allows you to see the memory consumption of each GPU over time.
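As a companion to the memory-timeline advice in profile_memory.md above, a minimal sketch of producing such a timeline through the TF 1.x Python API; the output path and `run_meta` are assumptions, and the original doc drives the same workflow through the interactive tfprof shell:

```python
import tensorflow as tf

# Hedged sketch: write a Chrome-trace timeline with per-device memory
# (TF 1.x API). `run_meta` must come from a session.run traced with
# RunOptions(trace_level=FULL_TRACE); the output path is illustrative.
opts = (tf.profiler.ProfileOptionBuilder(
            tf.profiler.ProfileOptionBuilder.time_and_memory())
        .with_step(0)
        .with_timeline_output('/tmp/memory_timeline.json')
        .build())
tf.profiler.profile(tf.get_default_graph(), run_meta=run_meta,
                    cmd='graph', options=opts)
# Load /tmp/memory_timeline.json in chrome://tracing to inspect memory over time.
```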
diff --git a/tensorflow/core/profiler/g3doc/profile_model_architecture.md b/tensorflow/core/profiler/g3doc/profile_model_architecture.md
index a42b2e918d..61bb66bd21 100644
--- a/tensorflow/core/profiler/g3doc/profile_model_architecture.md
+++ b/tensorflow/core/profiler/g3doc/profile_model_architecture.md
@@ -1,9 +1,9 @@
-##Profile Model Architecture
+## Profile Model Architecture
* [Profile Model Parameters](#profile-model-parameters)
* [Profile Model Float Operations](#profile-model-float-operations)
-###Profile Model Parameters
+### Profile Model Parameters
<b>Notes:</b>
`VariableV2` operation type might contain variables created by TensorFlow
@@ -39,9 +39,9 @@ param_stats = tf.profiler.profile(
sys.stdout.write('total_params: %d\n' % param_stats.total_parameters)
```
-###Profile Model Float Operations
+### Profile Model Float Operations
-####Caveats
+#### Caveats
For an operation to have float operation statistics:
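The `param_stats` snippet quoted in the hunk above is cut off mid-call; a self-contained sketch of both queries from profile_model_architecture.md, counting parameters and float operations, assuming the TF 1.x `tf.profiler` API and a toy layer added only to give the graph something to count:

```python
import tensorflow as tf

# Hedged sketch of the two queries above (TF 1.x API); the toy dense layer
# exists only so the default graph has parameters and float operations.
x = tf.placeholder(tf.float32, [32, 128])
y = tf.layers.dense(x, 64)

graph = tf.get_default_graph()

# Trainable-parameter count.
param_stats = tf.profiler.profile(
    graph,
    options=tf.profiler.ProfileOptionBuilder.trainable_variables_parameter())
print('total_params: %d' % param_stats.total_parameters)

# Float-operation count; subject to the caveats above (an op needs
# registered flops statistics and fully defined shapes to be counted).
flops_stats = tf.profiler.profile(
    graph, options=tf.profiler.ProfileOptionBuilder.float_operation())
print('total_float_ops: %d' % flops_stats.total_float_ops)
```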
diff --git a/tensorflow/core/profiler/g3doc/profile_time.md b/tensorflow/core/profiler/g3doc/profile_time.md
index e11a75553b..4aafc697a9 100644
--- a/tensorflow/core/profiler/g3doc/profile_time.md
+++ b/tensorflow/core/profiler/g3doc/profile_time.md
@@ -1,4 +1,4 @@
-##Profile Time
+## Profile Time
* [Times in TensorFlow and tfprof](#times-in-tensorflow-and-tfprof)
* [Profile by Python Code](#profile-by-python-code)
@@ -7,7 +7,7 @@
* [Profile by Name Scope](#profile-by-name-scope)
-###Times in TensorFlow and tfprof
+### Times in TensorFlow and tfprof
When we run a model, Tensorflow schedules and runs the nodes (operations)
in the graph. An operation can be placed on an accelerator or on CPU.
@@ -37,7 +37,7 @@ When an operation is placed on CPU, it will completely run on CPU. Hence,
should be 0.
-###Profile by Python Code
+### Profile by Python Code
```python
# In code view, the time of each line of Python code is the aggregated
# times of all operations created by that line.
@@ -112,7 +112,7 @@ Set ```-output timeline:outfile=<filename>``` to generate timeline instead of st
</left>
-###Profile by Operation Type
+### Profile by Operation Type
```python
# In op view, you can view the aggregated time of each operation type.
tfprof> op -select micros,occurrence -order_by micros
@@ -138,7 +138,7 @@ MatMul 618.97ms (63.56%, 16.51%), |/job:worker/replica:0/
```
-###Profile by Graph
+### Profile by Graph
Usually, use graph view to generate a timeline to visualize the result.
@@ -163,7 +163,7 @@ Open a Chrome browser, enter URL chrome://tracing and load the timeline file.
******************************************************
```
-###Profile by Name Scope
+### Profile by Name Scope
Usually scope view allows you to pin point the problematic places if you
have properly named your operations with tf.name_scope or tf.variable_scope.
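Since the scope view depends on well-named operations, a minimal end-to-end sketch of naming with tf.name_scope and then querying per-scope time via the TF 1.x Python API; the layer sizes, feed values, and scope names are illustrative:

```python
import tensorflow as tf

# Hedged sketch: name operations so scope view groups them readably,
# then query per-scope time (TF 1.x API; sizes and names are illustrative).
inputs = tf.placeholder(tf.float32, [None, 128])
with tf.name_scope('encoder'):
    hidden = tf.layers.dense(inputs, 256)   # appears under 'encoder/...'

run_meta = tf.RunMetadata()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(hidden,
             feed_dict={inputs: [[0.0] * 128]},
             options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
             run_metadata=run_meta)

opts = (tf.profiler.ProfileOptionBuilder(
            tf.profiler.ProfileOptionBuilder.time_and_memory())
        .select(['micros'])
        .order_by('micros')
        .build())
tf.profiler.profile(tf.get_default_graph(), run_meta=run_meta,
                    cmd='scope', options=opts)
```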