| author | A. Unique TensorFlower <nobody@tensorflow.org> | 2016-01-13 07:24:12 -0800 |
| --- | --- | --- |
| committer | Vijay Vasudevan <vrv@google.com> | 2016-01-13 07:24:12 -0800 |
| commit | 5515148be85f369484d6179b7c1baab30995b068 (patch) | |
| tree | 44aad0dcac3376f69acc3ac03c99bf956d136e44 /tensorflow/g3doc | |
| parent | 7a524d4de0a0da527f355adb7eccea7756c82dac (diff) | |
Minor fixes (related) to gpu_event_mgr_test.cc
This test got out of sync after some recent changes.
This CL allows the test function to explicitly stop/start
the polling loop so we can test some invariants without
non-deterministic timing issues. Also, the change
to TensorReferenceVector is now handled correctly in ~EventMgr,
and reliably tested.
Change: 112016082
Diffstat (limited to 'tensorflow/g3doc')
-rw-r--r-- | tensorflow/g3doc/api_docs/python/client.md | 4
-rw-r--r-- | tensorflow/g3doc/api_docs/python/framework.md | 12
-rw-r--r-- | tensorflow/g3doc/api_docs/python/io_ops.md | 6
-rw-r--r-- | tensorflow/g3doc/api_docs/python/train.md | 12
-rw-r--r-- | tensorflow/g3doc/get_started/basic_usage.md | 2
-rw-r--r-- | tensorflow/g3doc/how_tos/adding_an_op/index.md | 58
-rw-r--r-- | tensorflow/g3doc/how_tos/new_data_formats/index.md | 28
-rw-r--r-- | tensorflow/g3doc/how_tos/reading_data/index.md | 18
-rw-r--r-- | tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md | 2
-rw-r--r-- | tensorflow/g3doc/resources/faq.md | 4
-rw-r--r-- | tensorflow/g3doc/tutorials/deep_cnn/index.md | 14
-rw-r--r-- | tensorflow/g3doc/tutorials/mnist/beginners/index.md | 2
-rw-r--r-- | tensorflow/g3doc/tutorials/mnist/download/index.md | 4
-rw-r--r-- | tensorflow/g3doc/tutorials/mnist/pros/index.md | 2
-rw-r--r-- | tensorflow/g3doc/tutorials/mnist/tf/index.md | 6
-rw-r--r-- | tensorflow/g3doc/tutorials/word2vec/index.md | 16
16 files changed, 95 insertions, 95 deletions
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md index 1c0c92ffbe..535576db05 100644 --- a/tensorflow/g3doc/api_docs/python/client.md +++ b/tensorflow/g3doc/api_docs/python/client.md @@ -53,7 +53,7 @@ with tf.Session() as sess: ``` The [`ConfigProto`] -(https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/config.proto) +(https://www.tensorflow.org/code/tensorflow/core/framework/config.proto) protocol buffer exposes various configuration options for a session. For example, to create a session that uses soft constraints for device placement, and log the resulting placement decisions, @@ -87,7 +87,7 @@ the session constructor. Defaults to using an in-process engine. At present, no value other than the empty string is supported. * <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above). -* <b>`config`</b>: (Optional.) A [`ConfigProto`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/config.proto) +* <b>`config`</b>: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/framework/config.proto) protocol buffer with configuration options for the session. diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md index c5c21eb1e4..669e85aafe 100644 --- a/tensorflow/g3doc/api_docs/python/framework.md +++ b/tensorflow/g3doc/api_docs/python/framework.md @@ -113,7 +113,7 @@ This method is thread-safe. ##### Returns: - A [`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto) + A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) protocol buffer. ##### Raises: @@ -570,7 +570,7 @@ Note that this is unrelated to the The GraphDef version of this graph. 
For details on the meaning of each version, see [`GraphDef`] -(https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto). +(https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto). @@ -858,7 +858,7 @@ Returns a serialized `NodeDef` representation of this operation. ##### Returns: A - [`NodeDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto) + [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) protocol buffer. @@ -871,7 +871,7 @@ Returns the `OpDef` proto that represents the type of this op. ##### Returns: An - [`OpDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def.proto) + [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto) protocol buffer. @@ -1316,7 +1316,7 @@ Converts the given `type_value` to a `DType`. * <b>`type_value`</b>: A value that can be converted to a `tf.DType` object. This may currently be a `tf.DType` object, a - [`DataType` enum](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.proto), + [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a `numpy.dtype`. ##### Returns: @@ -1518,7 +1518,7 @@ after calling this function will result in undefined behavior. Imports the TensorFlow graph in `graph_def` into the Python `Graph`. This function provides a way to import a serialized TensorFlow -[`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto) +[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) protocol buffer, and extract individual objects in the `GraphDef` as [`Tensor`](#Tensor) and [`Operation`](#Operation) objects. 
See [`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md index 6b2c6d32c1..74c42e9371 100644 --- a/tensorflow/g3doc/api_docs/python/io_ops.md +++ b/tensorflow/g3doc/api_docs/python/io_ops.md @@ -1049,9 +1049,9 @@ Reinterpret the bytes of a string as a vector of numbers. TensorFlow's [recommended format for training examples](../../how_tos/reading_data/index.md#standard-tensorflow-format) is serialized `Example` protocol buffers, [described -here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto). +here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto). They contain `Features`, [described -here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/feature.proto). +here](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto). - - - @@ -1148,7 +1148,7 @@ Alias for field number 0 Parses `Example` protos into a `dict` of tensors. Parses a number of serialized [`Example`] -(https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto) +(https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in `serialized`. `example_names` may contain descriptive names for the corresponding serialized diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md index 6728ceb6aa..f8b7bc626f 100644 --- a/tensorflow/g3doc/api_docs/python/train.md +++ b/tensorflow/g3doc/api_docs/python/train.md @@ -1459,13 +1459,13 @@ the list of all threads. ## Summary Operations The following ops output -[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto) +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) protocol buffers as serialized string tensors. 
You can fetch the output of a summary op in a session, and pass it to a [SummaryWriter](../../api_docs/python/train.md#SummaryWriter) to append it to an event file. Event files contain -[`Event`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/util/event.proto) +[`Event`](https://www.tensorflow.org/code/tensorflow/core/util/event.proto) protos that can contain `Summary` protos along with the timestamp and step. You can then use TensorBoard to visualize the contents of the event files. See [TensorBoard and @@ -1554,7 +1554,7 @@ build the `tag` of the summary values: Outputs a `Summary` protocol buffer with a histogram. The generated -[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto) +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) has one summary value containing a histogram for `values`. This op reports an `OutOfRange` error if any value is not finite. @@ -1607,7 +1607,7 @@ This is useful in summaries to measure and report sparsity. For example, Merges summaries. This op creates a -[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto) +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) protocol buffer that contains the union of all the values in the input summaries. @@ -1816,9 +1816,9 @@ for e in tf.summary_iterator(path to events file): ``` See the protocol buffer definitions of -[Event](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/util/event.proto) +[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto) and -[Summary](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto) +[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) for more information about their attributes. 
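As a rough illustration of what the `histogram_summary` op described above records, here is a stdlib-only bucketing sketch. The helper name and the explicit bucket edges are hypothetical; the real op chooses its own bucket scheme, but it does, as noted above, report an `OutOfRange` error for non-finite values, which the sketch mirrors with a `ValueError`:

```python
import math

def histogram(values, bucket_edges):
    """Count how many values fall into each half-open bucket [lo, hi)."""
    for v in values:
        if not math.isfinite(v):
            # Mirrors the OutOfRange error the real op reports for
            # non-finite values.
            raise ValueError("value out of range: %r" % v)
    counts = [0] * (len(bucket_edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if bucket_edges[i] <= v < bucket_edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

The bucket counts are what a `Summary` proto's histogram value ultimately carries for TensorBoard to display.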
##### Args: diff --git a/tensorflow/g3doc/get_started/basic_usage.md b/tensorflow/g3doc/get_started/basic_usage.md index dbbbccc776..f24f8867b2 100644 --- a/tensorflow/g3doc/get_started/basic_usage.md +++ b/tensorflow/g3doc/get_started/basic_usage.md @@ -290,6 +290,6 @@ with tf.Session() as sess: A `placeholder()` operation generates an error if you do not supply a feed for it. See the [MNIST fully-connected feed tutorial](../tutorials/mnist/tf/index.md) -([source code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py)) +([source code](https://www.tensorflow.org/code/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py)) for a larger-scale example of feeds. diff --git a/tensorflow/g3doc/how_tos/adding_an_op/index.md b/tensorflow/g3doc/how_tos/adding_an_op/index.md index 63b78a0d74..116d788449 100644 --- a/tensorflow/g3doc/how_tos/adding_an_op/index.md +++ b/tensorflow/g3doc/how_tos/adding_an_op/index.md @@ -22,7 +22,7 @@ to: * Optionally, write a function to compute gradients for the Op. * Optionally, write a function that describes the input and output shapes for the Op. This allows shape inference to work with your Op. -* Test the Op, typically in Python. If you define gradients, you can verify them with the Python [`GradientChecker`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/kernel_tests/gradient_checker.py). +* Test the Op, typically in Python. If you define gradients, you can verify them with the Python [`GradientChecker`](https://www.tensorflow.org/code/tensorflow/python/kernel_tests/gradient_checker.py). [TOC] @@ -131,7 +131,7 @@ from tensorflow.python.ops.gen_user_ops import * You may optionally use your own function instead. 
To do this, you first hide the generated code for that op by adding its name to the `hidden` list in the `"user_ops"` rule in -[`tensorflow/python/BUILD`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/BUILD): +[`tensorflow/python/BUILD`](https://www.tensorflow.org/code/tensorflow/python/BUILD): ```python tf_gen_op_wrapper_py( @@ -144,7 +144,7 @@ tf_gen_op_wrapper_py( ``` List your op next to `"Fact"`. Next you add your replacement function to -[`tensorflow/python/user_ops/user_ops.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/user_ops/user_ops.py). +[`tensorflow/python/user_ops/user_ops.py`](https://www.tensorflow.org/code/tensorflow/python/user_ops/user_ops.py). Typically your function will call the generated function to actually add the op to the graph. The hidden version of the generated function will be in the `gen_user_ops` package and start with an underscore ("`_`"). For example: @@ -216,13 +216,13 @@ This asserts that the input is a vector, and returns having set the * The `context`, which can either be an `OpKernelContext` or `OpKernelConstruction` pointer (see - [`tensorflow/core/framework/op_kernel.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_kernel.h)), + [`tensorflow/core/framework/op_kernel.h`](https://www.tensorflow.org/code/tensorflow/core/framework/op_kernel.h)), for its `SetStatus()` method. * The condition. For example, there are functions for validating the shape of a tensor in - [`tensorflow/core/public/tensor_shape.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/tensor_shape.h) + [`tensorflow/core/public/tensor_shape.h`](https://www.tensorflow.org/code/tensorflow/core/public/tensor_shape.h) * The error itself, which is represented by a `Status` object, see - [`tensorflow/core/public/status.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/status.h). 
A + [`tensorflow/core/public/status.h`](https://www.tensorflow.org/code/tensorflow/core/public/status.h). A `Status` has both a type (frequently `InvalidArgument`, but see the list of types) and a message. Functions for constructing an error may be found in [`tensorflow/core/lib/core/errors.h`][validation-macros]. @@ -368,7 +368,7 @@ define an attr with constraints, you can use the following `<attr-type-expr>`s: The specific lists of types allowed by these are defined by the functions (like `NumberTypes()`) in - [`tensorflow/core/framework/types.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.h). + [`tensorflow/core/framework/types.h`](https://www.tensorflow.org/code/tensorflow/core/framework/types.h). In this example the attr `t` must be one of the numeric types: ```c++ @@ -889,7 +889,7 @@ There are several ways to preserve backwards-compatibility. type into a list of varying types). The full list of safe and unsafe changes can be found in -[`tensorflow/core/framework/op_compatibility_test.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_compatibility_test.cc). +[`tensorflow/core/framework/op_compatibility_test.cc`](https://www.tensorflow.org/code/tensorflow/core/framework/op_compatibility_test.cc). If you cannot make your change to an operation backwards compatible, then create a new operation with a new name with the new semantics. @@ -906,16 +906,16 @@ made when TensorFlow's changes major versions, and must conform to the You can implement different OpKernels and register one for CPU and another for GPU, just like you can [register kernels for different types](#polymorphism). There are several examples of kernels with GPU support in -[`tensorflow/core/kernels/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/). +[`tensorflow/core/kernels/`](https://www.tensorflow.org/code/tensorflow/core/kernels/). 
Notice some kernels have a CPU version in a `.cc` file, a GPU version in a file ending in `_gpu.cu.cc`, and some code shared in common in a `.h` file. For example, the [`pad` op](../../api_docs/python/array_ops.md#pad) has everything but the GPU kernel in [`tensorflow/core/kernels/pad_op.cc`][pad_op]. The GPU kernel is in -[`tensorflow/core/kernels/pad_op_gpu.cu.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/pad_op_gpu.cu.cc), +[`tensorflow/core/kernels/pad_op_gpu.cu.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/pad_op_gpu.cu.cc), and the shared code is a templated class defined in -[`tensorflow/core/kernels/pad_op.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/pad_op.h). +[`tensorflow/core/kernels/pad_op.h`](https://www.tensorflow.org/code/tensorflow/core/kernels/pad_op.h). One thing to note, even when the GPU kernel version of `pad` is used, it still needs its `"paddings"` input in CPU memory. To mark that inputs or outputs are kept on the CPU, add a `HostMemory()` call to the kernel registration, e.g.: @@ -1072,23 +1072,23 @@ any of the inputs. The [`merge_with`](../../api_docs/python/framework.md) method allows the caller to assert that two shapes are the same, even if either or both of them do not have complete information. Shape functions are defined for all of the -[standard Python ops](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/), +[standard Python ops](https://www.tensorflow.org/code/tensorflow/python/ops/), and provide many different usage examples. 
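The shape-merging idea described above (assert two shapes agree even when either has incomplete information) can be sketched in a few lines; `merge_shapes` is a hypothetical helper using `None` for an unknown dimension, not TensorFlow's actual `TensorShape.merge_with`:

```python
def merge_shapes(a, b):
    """Merge two shapes, where None marks an unknown dimension."""
    if len(a) != len(b):
        raise ValueError("ranks differ: %r vs %r" % (a, b))
    merged = []
    for da, db in zip(a, b):
        if da is None:
            merged.append(db)          # b fills in a's unknown dimension
        elif db is None or da == db:
            merged.append(da)          # a fills in b's, or both agree
        else:
            raise ValueError("incompatible dimensions: %r vs %r" % (da, db))
    return merged
```

Merging `[None, 3]` with `[2, None]` yields `[2, 3]`, while merging `[2, 3]` with `[4, 3]` raises: exactly the assertion a shape function wants to make about its inputs.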
-[core-array_ops]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/ops/array_ops.cc
-[python-user_ops]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/user_ops/user_ops.py
-[tf-kernels]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/
-[user_ops]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/user_ops/
-[pad_op]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/pad_op.cc
-[standard_ops-py]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/standard_ops.py
-[standard_ops-cc]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/cc/ops/standard_ops.h
-[python-BUILD]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/BUILD
-[validation-macros]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/lib/core/errors.h
-[op_def_builder]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def_builder.h
-[register_types]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/register_types.h
-[FinalizeAttr]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def_builder.cc#FinalizeAttr
-[DataTypeString]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.cc#DataTypeString
-[python-BUILD]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/BUILD
-[types-proto]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.proto
-[TensorShapeProto]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/tensor_shape.proto
-[TensorProto]:https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/tensor.proto
+[core-array_ops]:https://www.tensorflow.org/code/tensorflow/core/ops/array_ops.cc
+[python-user_ops]:https://www.tensorflow.org/code/tensorflow/python/user_ops/user_ops.py
+[tf-kernels]:https://www.tensorflow.org/code/tensorflow/core/kernels/
+[user_ops]:https://www.tensorflow.org/code/tensorflow/core/user_ops/
+[pad_op]:https://www.tensorflow.org/code/tensorflow/core/kernels/pad_op.cc
+[standard_ops-py]:https://www.tensorflow.org/code/tensorflow/python/ops/standard_ops.py
+[standard_ops-cc]:https://www.tensorflow.org/code/tensorflow/cc/ops/standard_ops.h
+[python-BUILD]:https://www.tensorflow.org/code/tensorflow/python/BUILD
+[validation-macros]:https://www.tensorflow.org/code/tensorflow/core/lib/core/errors.h
+[op_def_builder]:https://www.tensorflow.org/code/tensorflow/core/framework/op_def_builder.h
+[register_types]:https://www.tensorflow.org/code/tensorflow/core/framework/register_types.h
+[FinalizeAttr]:https://www.tensorflow.org/code/tensorflow/core/framework/op_def_builder.cc#FinalizeAttr
+[DataTypeString]:https://www.tensorflow.org/code/tensorflow/core/framework/types.cc#DataTypeString
+[python-BUILD]:https://www.tensorflow.org/code/tensorflow/python/BUILD
+[types-proto]:https://www.tensorflow.org/code/tensorflow/core/framework/types.proto
+[TensorShapeProto]:https://www.tensorflow.org/code/tensorflow/core/framework/tensor_shape.proto
+[TensorProto]:https://www.tensorflow.org/code/tensorflow/core/framework/tensor.proto
diff --git a/tensorflow/g3doc/how_tos/new_data_formats/index.md b/tensorflow/g3doc/how_tos/new_data_formats/index.md
index 628fc6c3a6..489e7a5db4 100644
--- a/tensorflow/g3doc/how_tos/new_data_formats/index.md
+++ b/tensorflow/g3doc/how_tos/new_data_formats/index.md
@@ -28,11 +28,11 @@ A `Reader` is something that reads records from a file.
There are some examples of Reader Ops already built into TensorFlow: * [`tf.TFRecordReader`](../../api_docs/python/io_ops.md#TFRecordReader) - ([source in `kernels/tf_record_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/tf_record_reader_op.cc)) + ([source in `kernels/tf_record_reader_op.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/tf_record_reader_op.cc)) * [`tf.FixedLengthRecordReader`](../../api_docs/python/io_ops.md#FixedLengthRecordReader) - ([source in `kernels/fixed_length_record_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/fixed_length_record_reader_op.cc)) + ([source in `kernels/fixed_length_record_reader_op.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/fixed_length_record_reader_op.cc)) * [`tf.TextLineReader`](../../api_docs/python/io_ops.md#TextLineReader) - ([source in `kernels/text_line_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/text_line_reader_op.cc)) + ([source in `kernels/text_line_reader_op.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/text_line_reader_op.cc)) You can see these all expose the same interface; the only differences are in their constructors. The most important method is `read`. @@ -44,15 +44,15 @@ two scalar tensors: a string key and a string value. To create a new reader called `SomeReader`, you will need to: 1. In C++, define a subclass of - [`tensorflow::ReaderBase`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/reader_base.h) + [`tensorflow::ReaderBase`](https://www.tensorflow.org/code/tensorflow/core/kernels/reader_base.h) called `SomeReader`. 2. In C++, register a new reader op and kernel with the name `"SomeReader"`. -3. In Python, define a subclass of [`tf.ReaderBase`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/io_ops.py) called `SomeReader`. +3.
In Python, define a subclass of [`tf.ReaderBase`](https://www.tensorflow.org/code/tensorflow/python/ops/io_ops.py) called `SomeReader`. You can put all the C++ code in a file in `tensorflow/core/user_ops/some_reader_op.cc`. The code to read a file will live in a descendant of the C++ `ReaderBase` class, which is defined in -[`tensorflow/core/kernels/reader_base.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/reader_base.h). +[`tensorflow/core/kernels/reader_base.h`](https://www.tensorflow.org/code/tensorflow/core/kernels/reader_base.h). You will need to implement the following methods: * `OnWorkStartedLocked`: open the next file @@ -83,7 +83,7 @@ If `ReadLocked` successfully reads a record from the file, it should fill in: If you hit the end of a file (EOF), set `*at_end` to `true`. In either case, return `Status::OK()`. If there is an error, simply return it using one of the helper functions from -[`tensorflow/core/lib/core/errors.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/lib/core/errors.h) +[`tensorflow/core/lib/core/errors.h`](https://www.tensorflow.org/code/tensorflow/core/lib/core/errors.h) without modifying any arguments. Next you will create the actual Reader op. It will help if you are familiar @@ -94,12 +94,12 @@ are: * Define and register an `OpKernel`. To register the op, you will use a `REGISTER_OP` call defined in -[`tensorflow/core/framework/op.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op.h). +[`tensorflow/core/framework/op.h`](https://www.tensorflow.org/code/tensorflow/core/framework/op.h). Reader ops never take any input and always have a single output with type `Ref(string)`. They should always call `SetIsStateful()`, and have a string `container` and `shared_name` attrs. You may optionally define additional attrs for configuration or include documentation in a `Doc`. 
For examples, see -[`tensorflow/core/ops/io_ops.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/ops/io_ops.cc), +[`tensorflow/core/ops/io_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/ops/io_ops.cc), e.g.: ```c++ @@ -118,7 +118,7 @@ A Reader that outputs the lines of a file delimited by '\n'. To define an `OpKernel`, Readers can use the shortcut of descending from `ReaderOpKernel`, defined in -[`tensorflow/core/framework/reader_op_kernel.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/reader_op_kernel.h), +[`tensorflow/core/framework/reader_op_kernel.h`](https://www.tensorflow.org/code/tensorflow/core/framework/reader_op_kernel.h), and implement a constructor that calls `SetReaderFactory`. After defining your class, you will need to register it using `REGISTER_KERNEL_BUILDER(...)`. An example with no attrs: @@ -167,8 +167,8 @@ REGISTER_KERNEL_BUILDER(Name("TextLineReader").Device(DEVICE_CPU), The last step is to add the Python wrapper. You will import `tensorflow.python.ops.io_ops` in -[`tensorflow/python/user_ops/user_ops.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/user_ops/user_ops.py) -and add a descendant of [`io_ops.ReaderBase`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/io_ops.py). +[`tensorflow/python/user_ops/user_ops.py`](https://www.tensorflow.org/code/tensorflow/python/user_ops/user_ops.py) +and add a descendant of [`io_ops.ReaderBase`](https://www.tensorflow.org/code/tensorflow/python/ops/io_ops.py). ```python from tensorflow.python.framework import ops @@ -187,7 +187,7 @@ ops.RegisterShape("SomeReader")(common_shapes.scalar_shape) ``` You can see some examples in -[`tensorflow/python/ops/io_ops.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/io_ops.py). +[`tensorflow/python/ops/io_ops.py`](https://www.tensorflow.org/code/tensorflow/python/ops/io_ops.py). 
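The Reader contract in the doc text above — a `read` that pops filenames off a queue, returns one scalar `(key, value)` record per call, and moves to the next file at EOF — can be sketched without TensorFlow. `LineReader` here is a hypothetical stand-in that only roughly mirrors `tf.TextLineReader`, not the real class:

```python
class LineReader:
    """Reads newline-delimited records, one (key, value) pair per read()."""

    def __init__(self):
        self._current = None       # (filename, open file handle) or None
        self._line_number = 0

    def read(self, filename_queue):
        """Return the next (key, value) record, opening files as needed."""
        while True:
            if self._current is None:
                if not filename_queue:
                    return None    # no more work
                name = filename_queue.pop(0)
                self._current = (name, open(name))
                self._line_number = 0
            name, handle = self._current
            line = handle.readline()
            if line == "":         # EOF: close this file, try the next one
                handle.close()
                self._current = None
                continue
            self._line_number += 1
            key = "%s:%d" % (name, self._line_number)
            return (key, line.rstrip("\n"))
```

The key identifies the record (file plus line number here), which is what lets downstream code report *which* record caused a problem.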
## Writing an Op for a record format @@ -207,7 +207,7 @@ Examples of Ops useful for decoding records: Note that it can be useful to use multiple Ops to decode a particular record format. For example, you may have an image saved as a string in -[a `tf.train.Example` protocol buffer](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto). +[a `tf.train.Example` protocol buffer](https://www.tensorflow.org/code/tensorflow/core/example/example.proto). Depending on the format of that image, you might take the corresponding output from a [`tf.parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example) diff --git a/tensorflow/g3doc/how_tos/reading_data/index.md b/tensorflow/g3doc/how_tos/reading_data/index.md index b8df1d88aa..f991f2b2ea 100644 --- a/tensorflow/g3doc/how_tos/reading_data/index.md +++ b/tensorflow/g3doc/how_tos/reading_data/index.md @@ -35,7 +35,7 @@ it is executed without a feed, so you won't forget to feed it. An example using `placeholder` and feeding to train on MNIST data can be found in -[`tensorflow/examples/tutorials/mnist/fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/fully_connected_feed.py), +[`tensorflow/examples/tutorials/mnist/fully_connected_feed.py`](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/fully_connected_feed.py), and is described in the [MNIST tutorial](../../tutorials/mnist/tf/index.md). ## Reading from files @@ -135,7 +135,7 @@ uses a file format where each record is represented using a fixed number of bytes: 1 byte for the label followed by 3072 bytes of image data. Once you have a uint8 tensor, standard operations can slice out each piece and reformat as needed. 
For CIFAR-10, you can see how to do the reading and decoding in -[`tensorflow/models/image/cifar10/cifar10_input.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py) +[`tensorflow/models/image/cifar10/cifar10_input.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_input.py) and described in [this tutorial](../../tutorials/deep_cnn/index.md#prepare-the-data). @@ -146,15 +146,15 @@ This approach makes it easier to mix and match data sets and network architectures. The recommended format for TensorFlow is a [TFRecords file](../../api_docs/python/python_io.md#tfrecords-format-details) containing -[`tf.train.Example` protocol buffers](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto) +[`tf.train.Example` protocol buffers](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) (which contain -[`Features`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/feature.proto) +[`Features`](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto) as a field). You write a little program that gets your data, stuffs it in an `Example` protocol buffer, serializes the protocol buffer to a string, and then writes the string to a TFRecords file using the [`tf.python_io.TFRecordWriter` class](../../api_docs/python/python_io.md#TFRecordWriter). For example, -[`tensorflow/examples/how_tos/reading_data/convert_to_records.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/how_tos/reading_data/convert_to_records.py) +[`tensorflow/examples/how_tos/reading_data/convert_to_records.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/convert_to_records.py) converts MNIST data to this format. To read a file of TFRecords, use @@ -163,7 +163,7 @@ the [`tf.parse_single_example`](../../api_docs/python/io_ops.md#parse_single_exa decoder. 
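The write-then-read round trip described above (serialize each record to a string, append it to a file, recover the records later) can be illustrated with a toy length-prefixed framing. Note this is *not* the actual TFRecord wire format — the real format also stores CRC checksums of the length and the payload — and the helper names are made up for this sketch:

```python
import struct

def write_records(path, records):
    """Append each byte-string record with an 8-byte length prefix."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(struct.pack("<Q", len(rec)))  # little-endian uint64 length
            f.write(rec)

def read_records(path):
    """Recover the list of byte-string records from a framed file."""
    out = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break                             # clean EOF
            (length,) = struct.unpack("<Q", header)
            out.append(f.read(length))
    return out
```

The length prefix is what lets a reader recover record boundaries from an otherwise opaque byte stream, which is the same role the framing plays in a real TFRecords file.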
The `parse_single_example` op decodes the example protocol buffers into tensors. An MNIST example using the data produced by `convert_to_records` can be found in -[`tensorflow/examples/how_tos/reading_data/fully_connected_reader.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py), +[`tensorflow/examples/how_tos/reading_data/fully_connected_reader.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py), which you can compare with the `fully_connected_feed` version. ### Preprocessing @@ -172,7 +172,7 @@ You can then do any preprocessing of these examples you want. This would be any processing that doesn't depend on trainable parameters. Examples include normalization of your data, picking a random slice, adding noise or distortions, etc. See -[`tensorflow/models/image/cifar10/cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py) +[`tensorflow/models/image/cifar10/cifar10.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10.py) for an example. ### Batching @@ -455,8 +455,8 @@ multiple preprocessing threads, set the `num_threads` parameter to a number bigger than 1. 
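The batching step described above groups single examples into fixed-size batches for training. A minimal sketch of just that grouping (the real pipeline does this with queue-backed ops such as `tf.train.shuffle_batch`, fed by the reader threads; this generator is a hypothetical simplification with no shuffling or threading):

```python
def batch(examples, batch_size):
    """Yield lists of batch_size examples, dropping a final partial batch."""
    buf = []
    for ex in examples:
        buf.append(ex)
        if len(buf) == batch_size:
            yield buf
            buf = []
```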
An MNIST example that preloads the data using constants can be found in -[`tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py), and one that preloads the data using variables can be found in -[`tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py), +[`tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py), and one that preloads the data using variables can be found in +[`tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py), You can compare these with the `fully_connected_feed` and `fully_connected_reader` versions above. diff --git a/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md b/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md index c1ddcd58ec..1de747f966 100644 --- a/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md +++ b/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md @@ -69,7 +69,7 @@ The code example below is a modification of the [simple MNIST tutorial] added some summary ops, and run them every ten steps. If you run this and then launch `tensorboard --logdir=/tmp/mnist_logs`, you'll be able to visualize statistics, such as how the weights or accuracy varied during training. -The code below is an excerpt; full source is [here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py). 
+The code below is an excerpt; full source is [here](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py). ```python # Create the model diff --git a/tensorflow/g3doc/resources/faq.md b/tensorflow/g3doc/resources/faq.md index f091641721..ba57831c9c 100644 --- a/tensorflow/g3doc/resources/faq.md +++ b/tensorflow/g3doc/resources/faq.md @@ -142,11 +142,11 @@ TensorFlow is designed to support multiple client languages. Currently, the best-supported client language is [Python](../api_docs/python/index.md). The [C++ client API](../api_docs/cc/index.md) provides an interface for launching graphs and running steps; we also have an experimental API for -[building graphs in C++](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/cc/tutorials/example_trainer.cc). +[building graphs in C++](https://www.tensorflow.org/code/tensorflow/cc/tutorials/example_trainer.cc). We would like to support more client languages, as determined by community interest. TensorFlow has a -[C-based client API](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/tensor_c_api.h) +[C-based client API](https://www.tensorflow.org/code/tensorflow/core/public/tensor_c_api.h) that makes it easy to build a client in many different languages. We invite contributions of new language bindings. diff --git a/tensorflow/g3doc/tutorials/deep_cnn/index.md b/tensorflow/g3doc/tutorials/deep_cnn/index.md index edb3fbdad0..1491c91bae 100644 --- a/tensorflow/g3doc/tutorials/deep_cnn/index.md +++ b/tensorflow/g3doc/tutorials/deep_cnn/index.md @@ -77,21 +77,21 @@ for details. It consists of 1,068,298 learnable parameters and requires about ## Code Organization The code for this tutorial resides in -[`tensorflow/models/image/cifar10/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/). +[`tensorflow/models/image/cifar10/`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/). 
File | Purpose --- | --- -[`cifar10_input.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py) | Reads the native CIFAR-10 binary file format. -[`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py) | Builds the CIFAR-10 model. -[`cifar10_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_train.py) | Trains a CIFAR-10 model on a CPU or GPU. -[`cifar10_multi_gpu_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py) | Trains a CIFAR-10 model on multiple GPUs. -[`cifar10_eval.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model. +[`cifar10_input.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_input.py) | Reads the native CIFAR-10 binary file format. +[`cifar10.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10.py) | Builds the CIFAR-10 model. +[`cifar10_train.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_train.py) | Trains a CIFAR-10 model on a CPU or GPU. +[`cifar10_multi_gpu_train.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py) | Trains a CIFAR-10 model on multiple GPUs. +[`cifar10_eval.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model. ## CIFAR-10 Model The CIFAR-10 network is largely contained in -[`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py). +[`cifar10.py`](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10.py). The complete training graph contains roughly 765 operations. 
We find that we can make the code most reusable by constructing the graph with the following modules: diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md index e95ca80a6c..658ab2da90 100644 --- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md +++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md @@ -39,7 +39,7 @@ The MNIST data is hosted on [Yann LeCun's website](http://yann.lecun.com/exdb/mnist/). For your convenience, we've included some python code to download and install the data automatically. You can either download -[the code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/input_data.py) +[the code](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/input_data.py) and import it as below, or simply copy and paste it in. ```python diff --git a/tensorflow/g3doc/tutorials/mnist/download/index.md b/tensorflow/g3doc/tutorials/mnist/download/index.md index dcd7dfc23d..e9698d6248 100644 --- a/tensorflow/g3doc/tutorials/mnist/download/index.md +++ b/tensorflow/g3doc/tutorials/mnist/download/index.md @@ -1,6 +1,6 @@ # MNIST Data Download -Code: [tensorflow/examples/tutorials/mnist/](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/) +Code: [tensorflow/examples/tutorials/mnist/](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/) The goal of this tutorial is to show how to download the dataset files required for handwritten digit classification using the (classic) MNIST data set. @@ -11,7 +11,7 @@ This tutorial references the following files: File | Purpose --- | --- -[`input_data.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/input_data.py) | The code to download the MNIST dataset for training and evaluation. 
+[`input_data.py`](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/input_data.py) | The code to download the MNIST dataset for training and evaluation. ## Prepare the Data diff --git a/tensorflow/g3doc/tutorials/mnist/pros/index.md b/tensorflow/g3doc/tutorials/mnist/pros/index.md index a1132039a8..05af6b2a31 100644 --- a/tensorflow/g3doc/tutorials/mnist/pros/index.md +++ b/tensorflow/g3doc/tutorials/mnist/pros/index.md @@ -20,7 +20,7 @@ TensorFlow session. ### Load MNIST Data For your convenience, we've included -[a script](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/input_data.py) +[a script](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/input_data.py) which automatically downloads and imports the MNIST dataset. It will create a directory `'MNIST_data'` in which to store the data files. diff --git a/tensorflow/g3doc/tutorials/mnist/tf/index.md b/tensorflow/g3doc/tutorials/mnist/tf/index.md index cb0abf6829..107712ee23 100644 --- a/tensorflow/g3doc/tutorials/mnist/tf/index.md +++ b/tensorflow/g3doc/tutorials/mnist/tf/index.md @@ -1,6 +1,6 @@ # TensorFlow Mechanics 101 -Code: [tensorflow/examples/tutorials/mnist/](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/) +Code: [tensorflow/examples/tutorials/mnist/](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/) The goal of this tutorial is to show how to use TensorFlow to train and evaluate a simple feed-forward neural network for handwritten digit @@ -18,8 +18,8 @@ This tutorial references the following files: File | Purpose --- | --- -[`mnist.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/mnist.py) | The code to build a fully-connected MNIST model. 
-[`fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/fully_connected_feed.py) | The main code to train the built MNIST model against the downloaded dataset using a feed dictionary. +[`mnist.py`](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/mnist.py) | The code to build a fully-connected MNIST model. +[`fully_connected_feed.py`](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/fully_connected_feed.py) | The main code to train the built MNIST model against the downloaded dataset using a feed dictionary. Simply run the `fully_connected_feed.py` file directly to start training: diff --git a/tensorflow/g3doc/tutorials/word2vec/index.md b/tensorflow/g3doc/tutorials/word2vec/index.md index 1882c56265..6cd828baaf 100644 --- a/tensorflow/g3doc/tutorials/word2vec/index.md +++ b/tensorflow/g3doc/tutorials/word2vec/index.md @@ -19,11 +19,11 @@ represent words as vectors. We walk through the code later during the tutorial, but if you'd prefer to dive straight in, feel free to look at the minimalistic implementation in -[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py) +[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://www.tensorflow.org/code/tensorflow/examples/tutorials/word2vec/word2vec_basic.py) This basic example contains the code needed to download some data, train on it a bit and visualize the result. 
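Before reading `word2vec_basic.py`, the core of the skip-gram setup — turning a word sequence into (center, context) training pairs, where each word predicts its neighbors within a window — can be sketched in a few lines. This is a minimal illustration of the idea, not the tutorial's own batch generator; the function name is ours.

```python
def skipgram_pairs(words, window=1):
    """Yield (center, context) pairs: each word predicts every neighbor
    within `window` positions on either side, as in the skip-gram model."""
    pairs = []
    for i, center in enumerate(words):
        lo = max(0, i - window)
        hi = min(len(words), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is never its own context
                pairs.append((center, words[j]))
    return pairs
```

With a larger `window`, each center word generates more training pairs, at the cost of weaker association between center and context.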
Once you get comfortable with reading and running the basic version, you can graduate to -[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py) +[tensorflow/models/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow/models/embedding/word2vec.py) which is a more serious implementation that showcases some more advanced TensorFlow principles about how to efficiently use threads to move data into a text model, how to checkpoint during training, etc. @@ -269,7 +269,7 @@ nce_biases = tf.Variable(tf.zeros([vocabulary_size])) Now that we have the parameters in place, we can define our skip-gram model graph. For simplicity, let's suppose we've already integerized our text corpus with a vocabulary so that each word is represented as an integer (see -[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py) +[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://www.tensorflow.org/code/tensorflow/examples/tutorials/word2vec/word2vec_basic.py) for the details). The skip-gram model takes two inputs. One is a batch full of integers representing the source context words, the other is for the target words. Let's create placeholder nodes for these inputs, so that we can feed in @@ -321,7 +321,7 @@ for inputs, labels in generate_batch(...): ``` See the full example code in -[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py). +[tensorflow/examples/tutorials/word2vec/word2vec_basic.py](https://www.tensorflow.org/code/tensorflow/examples/tutorials/word2vec/word2vec_basic.py). ## Visualizing the Learned Embeddings @@ -335,7 +335,7 @@ t-SNE. Et voila! As expected, words that are similar end up clustering nearby each other. 
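The clustering that t-SNE makes visible can also be checked numerically: similar words should have embedding vectors with high cosine similarity. A minimal sketch, using toy hand-picked vectors rather than trained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word, embeddings):
    """The word whose embedding is closest (by cosine) to `word`'s."""
    return max((w for w in embeddings if w != word),
               key=lambda w: cosine(embeddings[word], embeddings[w]))
```

On real trained embeddings the same nearest-neighbor query is what surfaces clusters like numbers or countries in the t-SNE plot.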
For a more heavyweight implementation of word2vec that showcases more of the advanced features of TensorFlow, see the implementation in -[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py). +[tensorflow/models/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow/models/embedding/word2vec.py). ## Evaluating Embeddings: Analogical Reasoning @@ -350,7 +350,7 @@ https://word2vec.googlecode.com/svn/trunk/questions-words.txt. To see how we do this evaluation, have a look at the `build_eval_graph()` and `eval()` functions in -[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py). +[tensorflow/models/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow/models/embedding/word2vec.py). The choice of hyperparameters can strongly influence the accuracy on this task. To achieve state-of-the-art performance on this task requires training over a @@ -378,13 +378,13 @@ your model is seriously bottlenecked on input data, you may want to implement a custom data reader for your problem, as described in [New Data Formats](../../how_tos/new_data_formats/index.md). For the case of Skip-Gram modeling, we've actually already done this for you as an example in -[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py). +[tensorflow/models/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow/models/embedding/word2vec.py). If your model is no longer I/O bound but you want still more performance, you can take things further by writing your own TensorFlow Ops, as described in [Adding a New Op](../../how_tos/adding_an_op/index.md). 
Again we've provided an example of this for the Skip-Gram case
-[tensorflow/models/embedding/word2vec_optimized.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec_optimized.py).
+[tensorflow/models/embedding/word2vec_optimized.py](https://www.tensorflow.org/code/tensorflow/models/embedding/word2vec_optimized.py).

Feel free to benchmark these against each other to measure performance
improvements at each stage.
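Benchmarking one implementation against another can be as simple as a best-of-N timing harness. A generic sketch using the standard library's `timeit`; the two step functions below are hypothetical stand-ins, not the word2vec code:

```python
import timeit

def benchmark(fn, repeat=3, number=100):
    """Best-of-`repeat` wall time, in seconds, for `number` calls to `fn`.
    Taking the minimum reduces noise from other processes."""
    return min(timeit.repeat(fn, repeat=repeat, number=number))

# Hypothetical stand-ins for a baseline and an optimized computation.
def baseline_step():
    return sum(i * i for i in range(1000))

def optimized_step():
    # Closed form for the same sum of squares: n(n+1)(2n+1)/6.
    n = 999
    return n * (n + 1) * (2 * n + 1) // 6
```

Always verify that the two implementations agree before comparing their timings, so a speedup is not just a wrong answer computed faster.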