path: root/tensorflow/g3doc/how_tos/adding_an_op/index.md
Diffstat (limited to 'tensorflow/g3doc/how_tos/adding_an_op/index.md')
-rw-r--r--  tensorflow/g3doc/how_tos/adding_an_op/index.md  |  65
1 file changed, 22 insertions(+), 43 deletions(-)
diff --git a/tensorflow/g3doc/how_tos/adding_an_op/index.md b/tensorflow/g3doc/how_tos/adding_an_op/index.md
index 9dd2456e0b..a73b4da98d 100644
--- a/tensorflow/g3doc/how_tos/adding_an_op/index.md
+++ b/tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -1,4 +1,4 @@
-# Adding a New Op <a class="md-anchor" id="AUTOGENERATED-adding-a-new-op"></a>
+# Adding a New Op
PREREQUISITES:
@@ -24,30 +24,9 @@ to:
for the Op. This allows shape inference to work with your Op.
* Test the Op, typically in Python.
-<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
-## Contents
-### [Adding a New Op](#AUTOGENERATED-adding-a-new-op)
-* [Define the Op's interface](#define_interface)
-* [Implement the kernel for the Op](#AUTOGENERATED-implement-the-kernel-for-the-op)
-* [Generate the client wrapper](#AUTOGENERATED-generate-the-client-wrapper)
- * [The Python Op wrapper](#AUTOGENERATED-the-python-op-wrapper)
- * [The C++ Op wrapper](#AUTOGENERATED-the-c---op-wrapper)
-* [Verify it works](#AUTOGENERATED-verify-it-works)
-* [Validation](#Validation)
-* [Op registration](#AUTOGENERATED-op-registration)
- * [Attrs](#Attrs)
- * [Attr types](#AUTOGENERATED-attr-types)
- * [Polymorphism](#Polymorphism)
- * [Inputs and Outputs](#AUTOGENERATED-inputs-and-outputs)
- * [Backwards compatibility](#AUTOGENERATED-backwards-compatibility)
-* [GPU Support](#mult-archs)
-* [Implement the gradient in Python](#AUTOGENERATED-implement-the-gradient-in-python)
-* [Implement a shape function in Python](#AUTOGENERATED-implement-a-shape-function-in-python)
-
-
-<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-
-## Define the Op's interface <a class="md-anchor" id="define_interface"></a>
+[TOC]
+
+## Define the Op's interface {#define_interface}
You define the interface of an Op by registering it with the TensorFlow system.
In the registration, you specify the name of your Op, its inputs (types and
@@ -73,7 +52,7 @@ outputs a tensor `zeroed` of 32-bit integers.
> A note on naming: The name of the Op should be unique and CamelCase. Names
> starting with an underscore (`_`) are reserved for internal use.
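For reference, the registration described here looks roughly like the following sketch, using the `ZeroOut` example from the surrounding text with a single `int32` input and output. The string arguments to `Input` and `Output` name each argument and give its type:

```c++
#include "tensorflow/core/framework/op.h"

// Sketch of an interface registration: one int32 input, one int32 output.
REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32");
```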
-## Implement the kernel for the Op <a class="md-anchor" id="AUTOGENERATED-implement-the-kernel-for-the-op"></a>
+## Implement the kernel for the Op
After you define the interface, provide one or more implementations of the Op.
To create one of these kernels, create a class that extends `OpKernel` and
@@ -131,8 +110,8 @@ Once you
[build and reinstall TensorFlow](../../get_started/os_setup.md#create-pip), the
TensorFlow system can reference and use the Op when requested.
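A minimal kernel along these lines might look like the sketch below. It assumes the `ZeroOut` registration above; the exact behavior (keep the first element, zero the rest) is only illustrative:

```c++
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // Grab the input tensor.
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();

    // Create an output tensor of the same shape as the input.
    Tensor* output_tensor = nullptr;
    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                     &output_tensor));
    auto output = output_tensor->flat<int32>();

    // Set all elements of the output to 0 except the first, which is copied.
    const int N = input.size();
    for (int i = 1; i < N; i++) {
      output(i) = 0;
    }
    if (N > 0) output(0) = input(0);
  }
};

// Make the kernel available to the runtime on CPU.
REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);
```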
-## Generate the client wrapper <a class="md-anchor" id="AUTOGENERATED-generate-the-client-wrapper"></a>
-### The Python Op wrapper <a class="md-anchor" id="AUTOGENERATED-the-python-op-wrapper"></a>
+## Generate the client wrapper
+### The Python Op wrapper
Python op wrappers are created automatically in
`bazel-genfiles/tensorflow/python/ops/gen_user_ops.py` for all ops placed in the
@@ -176,7 +155,7 @@ def my_fact():
return gen_user_ops._fact()
```
-### The C++ Op wrapper <a class="md-anchor" id="AUTOGENERATED-the-c---op-wrapper"></a>
+### The C++ Op wrapper
C++ op wrappers are created automatically for all ops placed in the
[`tensorflow/core/user_ops`][user_ops] directory, when you build TensorFlow. For
@@ -191,7 +170,7 @@ statement
#include "tensorflow/cc/ops/user_ops.h"
```
-## Verify it works <a class="md-anchor" id="AUTOGENERATED-verify-it-works"></a>
+## Verify it works
A good way to verify that you've successfully implemented your Op is to write a
test for it. Create the file
@@ -214,7 +193,7 @@ Then run your test:
$ bazel test tensorflow/python:zero_out_op_test
```
-## Validation <a class="md-anchor" id="Validation"></a>
+## Validation {#Validation}
The example above assumed that the Op applied to a tensor of any shape. What
if it only applied to vectors? That means adding a check to the above OpKernel
@@ -253,9 +232,9 @@ function is an error, and if so return it, use
[`OP_REQUIRES_OK`][validation-macros]. Both of these macros return from the
function on error.
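As a sketch of the check described here, a `Compute` method restricted to vectors could begin like this (the error message wording is illustrative):

```c++
void Compute(OpKernelContext* context) override {
  const Tensor& input_tensor = context->input(0);

  // Reject anything that is not 1-D; OP_REQUIRES returns from Compute on failure.
  OP_REQUIRES(context, TensorShapeUtils::IsVector(input_tensor.shape()),
              errors::InvalidArgument("ZeroOut expects a 1-D vector."));

  // ... rest of the computation as before ...
}
```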
-## Op registration <a class="md-anchor" id="AUTOGENERATED-op-registration"></a>
+## Op registration
-### Attrs <a class="md-anchor" id="Attrs"></a>
+### Attrs {#Attrs}
Ops can have attrs, whose values are set when the Op is added to a graph. These
are used to configure the Op, and their values can be accessed both within the
@@ -339,7 +318,7 @@ which can then be used in the `Compute` method:
> .Output("zeroed: int32");
> </pre></code>
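A rough sketch of an attr, using a hypothetical `preserve_index` attr: it is declared at registration time and read once in the kernel's constructor.

```c++
// Registration side: declare an int attr alongside the input and output.
REGISTER_OP("ZeroOut")
    .Attr("preserve_index: int")
    .Input("to_zero: int32")
    .Output("zeroed: int32");

// Kernel side: read the attr once, when the kernel is constructed.
class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {
    OP_REQUIRES_OK(context,
                   context->GetAttr("preserve_index", &preserve_index_));
  }
  // ... Compute() can then use preserve_index_ ...
 private:
  int preserve_index_;
};
```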
-### Attr types <a class="md-anchor" id="AUTOGENERATED-attr-types"></a>
+### Attr types
The following types are supported in an attr:
@@ -355,7 +334,7 @@ The following types are supported in an attr:
See also: [`op_def_builder.cc:FinalizeAttr`][FinalizeAttr] for a definitive list.
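As a sketch, declarations for a few commonly supported attr types (the op and attr names here are arbitrary):

```c++
REGISTER_OP("AttrTypeSketch")
    .Attr("an_int: int")
    .Attr("a_float: float")
    .Attr("a_flag: bool")
    .Attr("a_name: string")
    .Attr("a_dtype: type")         // a DataType such as DT_FLOAT
    .Attr("some_ints: list(int)")  // lists of the scalar types are also allowed
    .Output("out: int32");
```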
-#### Default values & constraints <a class="md-anchor" id="AUTOGENERATED-default-values---constraints"></a>
+#### Default values & constraints
Attrs may have default values, and some types of attrs can have constraints. To
define an attr with constraints, you can use the following `<attr-type-expr>`s:
@@ -456,8 +435,8 @@ REGISTER_OP("AttrDefaultExampleForAllTypes")
Note in particular that the values of type `type` use [the `DT_*` names
for the types](../../resources/dims_types.md#data-types).
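A sketch combining the two syntaxes this section covers in one hypothetical registration: a default is written after `=`, and a constraint restricts the allowed values:

```c++
REGISTER_OP("AttrConstraintSketch")
    .Attr("i: int = 0")                    // default value, no constraint
    .Attr("at_least_two: int >= 2")        // constrained to values >= 2
    .Attr("t: {float, int32} = DT_INT32")  // type restricted to a set, with a default
    .Output("out: int32");
```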
-### Polymorphism <a class="md-anchor" id="Polymorphism"></a>
-#### Type Polymorphism <a class="md-anchor" id="type-polymorphism"></a>
+### Polymorphism {#Polymorphism}
+#### Type Polymorphism {#type-polymorphism}
For ops that can take different types as input or produce different output
types, you can specify [an attr](#attrs) in
@@ -685,7 +664,7 @@ TF_CALL_REAL_NUMBER_TYPES(REGISTER_KERNEL);
#undef REGISTER_KERNEL
```
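A minimal sketch of type polymorphism as described here: the input and output types are tied to a type attr `T`, and one kernel is registered per supported type (this assumes a `ZeroOutOp<T>` class template along the lines of the earlier kernel):

```c++
REGISTER_OP("ZeroOut")
    .Attr("T: {float, int32}")
    .Input("to_zero: T")
    .Output("zeroed: T");

// One kernel registration per supported type, constrained on the attr T.
#define REGISTER_KERNEL(type)                                       \
  REGISTER_KERNEL_BUILDER(                                          \
      Name("ZeroOut").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
      ZeroOutOp<type>)

REGISTER_KERNEL(float);
REGISTER_KERNEL(int32);
#undef REGISTER_KERNEL
```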
-#### List Inputs and Outputs <a class="md-anchor" id="list-input-output"></a>
+#### List Inputs and Outputs {#list-input-output}
In addition to being able to accept or produce different types, ops can consume
or produce a variable number of tensors.
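A sketch of a variable-length input, where `N` same-typed tensors are accepted as one logical input (the op and attr names are illustrative):

```c++
REGISTER_OP("SumOfN")
    .Attr("N: int >= 1")
    .Input("values: N * int32")  // a list of N int32 tensors
    .Output("total: int32");
```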
@@ -760,7 +739,7 @@ REGISTER_OP("MinimumLengthPolymorphicListExample")
.Output("out: T");
```
-### Inputs and Outputs <a class="md-anchor" id="AUTOGENERATED-inputs-and-outputs"></a>
+### Inputs and Outputs
To summarize the above, an Op registration can have multiple inputs and outputs:
@@ -861,7 +840,7 @@ expressions:
For more details, see
[`tensorflow/core/framework/op_def_builder.h`][op_def_builder].
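For instance, a registration combining several of the forms above might look like this sketch (names are illustrative):

```c++
REGISTER_OP("MultipleInsAndOuts")
    .Attr("T: type")
    .Attr("N: int")
    .Input("x: int32")         // a single int32 tensor
    .Input("ys: N * T")        // a list of N tensors of type T
    .Output("sum: int32")
    .Output("copies: N * T");
```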
-### Backwards compatibility <a class="md-anchor" id="AUTOGENERATED-backwards-compatibility"></a>
+### Backwards compatibility
In general, changes to specifications must be backwards-compatible: changing the
specification of an Op must not break prior serialized GraphDefs constructed
@@ -907,7 +886,7 @@ The full list of safe and unsafe changes can be found in
If you cannot make your change to an operation backwards compatible, then create
a new operation with a new name and the new semantics.
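As a concrete sketch of a backwards-compatible change, adding a new attr is safe as long as it has a default value, so GraphDefs serialized against the old registration still load (the op and attr names are illustrative, and the two registrations are shown side by side only for comparison):

```c++
// Before: the original registration.
REGISTER_OP("MyOp")
    .Input("x: int32")
    .Output("y: int32");

// After: a new attr with a default value. Old GraphDefs that omit the
// attr still work, because the default is filled in for them.
REGISTER_OP("MyOp")
    .Attr("scale: int = 1")
    .Input("x: int32")
    .Output("y: int32");
```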
-## GPU Support <a class="md-anchor" id="mult-archs"></a>
+## GPU Support {#mult-archs}
You can implement different OpKernels and register one for CPU and another for
GPU, just like you can [register kernels for different types](#Polymorphism).
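A sketch of the dual registration this describes, assuming a kernel class templated on the device type (`CPUDevice` and `GPUDevice` here are the usual Eigen device typedefs):

```c++
typedef Eigen::ThreadPoolDevice CPUDevice;
typedef Eigen::GpuDevice GPUDevice;

// Same op name, one kernel registration per device.
REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU),
                        ZeroOutOp<CPUDevice>);
REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_GPU),
                        ZeroOutOp<GPUDevice>);
```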
@@ -935,7 +914,7 @@ kept on the CPU, add a `HostMemory()` call to the kernel registration, e.g.:
PadOp<GPUDevice, T>)
```
-## Implement the gradient in Python <a class="md-anchor" id="AUTOGENERATED-implement-the-gradient-in-python"></a>
+## Implement the gradient in Python
Given a graph of ops, TensorFlow uses automatic differentiation
(backpropagation) to add new ops representing gradients with respect to the
@@ -1012,7 +991,7 @@ Note that at the time the gradient function is called, only the data flow graph
of ops is available, not the tensor data itself. Thus, all computation must be
performed using other TensorFlow ops, to be run at graph execution time.
-## Implement a shape function in Python <a class="md-anchor" id="AUTOGENERATED-implement-a-shape-function-in-python"></a>
+## Implement a shape function in Python
The TensorFlow Python API has a feature called "shape inference" that provides
information about the shapes of tensors without having to execute the