path: root/tensorflow
author	Yifei Feng <fengyifei2026@gmail.com>	2017-09-06 23:52:56 -0700
committer	gunan <gunan@google.com>	2017-09-06 23:52:56 -0700
commit	18f36927160d05b941c056f10dc7f9aecaa05e23 (patch)
tree	d727a206160ce5b4a1222bda8a7af6a076859c46 /tensorflow
parent	de97f2df06ebaef89aa2c2ec5db79d27fe292242 (diff)
Branch 167812735 (#12867)
* Internal cleanup. PiperOrigin-RevId: 167636242
* Move the Keras API to tf.keras. PiperOrigin-RevId: 167638421
* Automated g4 rollback of changelist 167604306. PiperOrigin-RevId: 167639833
* Call HloComputation.Accept instead of HloInstruction.Accept to get all instructions profiled. RELNOTES: n/a PiperOrigin-RevId: 167640259
* Add fast math attributes to all generated methods when fast math is enabled. RELNOTES: n/a PiperOrigin-RevId: 167646637
* Extended ScratchSpace to expose its underlying scratch tensor object. PiperOrigin-RevId: 167649551
* Change zip(...)[1] to list(zip(...))[1], for Python 3 compatibility. PiperOrigin-RevId: 167654035
* Add scoped timer to log JIT compile times. RELNOTES: n/a PiperOrigin-RevId: 167656720
* Verify that predictions are in the expected range for ops that use thresholds, e.g. tf.contrib.metrics.streaming_auc. PiperOrigin-RevId: 167658134
* Internal change. PiperOrigin-RevId: 167658401
* Fix list formatting. PiperOrigin-RevId: 167660250
* Enable Java test. PiperOrigin-RevId: 167660276
* Add shape functions on debug ops. PiperOrigin-RevId: 167668811
* Increase session_bundle_test to a medium test. PiperOrigin-RevId: 167672587
* Include layout of convolution input data in the op_profile. PiperOrigin-RevId: 167680208
* Fix tf.sparse_add for SparseTensor with _ref typed values. Example: st = tf.SparseTensor(indices=[[1]], values=tf.Variable([1.0]), dense_shape=[1]); tf.sparse_add(st, st). PiperOrigin-RevId: 167681121
* Fix conversion to explicit scalar broadcast. The dimensions field of a broadcast HLO op is meant to be populated with the dimensions that are broadcast, which in the case of a scalar is the empty vector. Generally, the rank of the operand of a broadcast op should always equal the size of the dimensions vector. PiperOrigin-RevId: 167686946
* Add 'unknown shape' shape functions on deprecated linalg ops. PiperOrigin-RevId: 167719029
* Be more careful in IsInitialized, and log when it is called on an unknown node_id.
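The zip() change above matters because in Python 3 zip() returns a lazy iterator rather than a list, so subscripting it raises a TypeError; a minimal illustration:

```python
# In Python 3, zip() returns an iterator, so zip(...)[1] fails.
pairs = zip([1, 2, 3], ['a', 'b', 'c'])
try:
    pairs[1]
except TypeError:
    pass  # iterators do not support indexing

# Wrapping in list() materializes the iterator and restores indexing;
# under Python 2 this is merely a redundant copy, so it is safe in both.
second = list(zip([1, 2, 3], ['a', 'b', 'c']))[1]
# second == (2, 'b')
```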
PiperOrigin-RevId: 167722344
* tfdbg: Refactor graph-processing code out of debug_data.py. The basic idea is to separate the code in debug_data.py that handles graph structures into its own module (debug_graphs.py). This tackles an existing TODO item to simplify the code of debug_data.DebugDumpDir. In a later CL, code will be added to debug_graphs.DebugGraph to allow reconstruction of the original GraphDef, i.e., the GraphDef without the Copy* and Debug* nodes inserted by tfdbg. This will be useful for, among other things, the TensorBoard Debugger Plugin. PiperOrigin-RevId: 167726113
* internal PiperOrigin-RevId: 167727508
* Update MaxPoolV2Shape to support NCHW_VECT_C. PiperOrigin-RevId: 167732437
* Delete tf.contrib.learn.dnn benchmark tests. PiperOrigin-RevId: 167741308
* Fix off-by-one documentation error. sequence_lengths is the actual length of the sequence and therefore should not be used for zero-based indexing. The code is correct but the documentation was misleading. PiperOrigin-RevId: 167742082
* Make contrib summaries work in eager-graph mode (with defun). As a side effect, fix issues related to using eager-defined variables in graph mode. PiperOrigin-RevId: 167744121
* Fix minor documentation error in ZlibInputStream. PiperOrigin-RevId: 167745218
* Set the distributed-training-related properties of RunConfig based on TF_CONFIG. PiperOrigin-RevId: 167752997
* Improve documentation about eval ops in EstimatorSpec. PiperOrigin-RevId: 167753099
* Automated g4 rollback of changelist 156748870. PiperOrigin-RevId: 167753805
* Make cuda_solvers_gpu.cu.cc compile with nvcc8. PiperOrigin-RevId: 167754383
* Add CSV dataset example to get_started/regression. PiperOrigin-RevId: 167754634
* Switch to OrderedDict to make the dictionary order deterministic, so we have less randomness from graph building.
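The OrderedDict change above removes a source of nondeterminism in graph construction: when iteration order over ops drives the order in which graph nodes are built, it must be identical across runs. A small sketch (the op names are hypothetical; note plain dicts only guarantee insertion order from CPython 3.7 on, while OrderedDict guarantees it everywhere):

```python
import collections

# Ops keyed by name; iteration order drives graph-building order,
# so it must be stable across runs and interpreter versions.
ops = collections.OrderedDict()
ops['input'] = 0
ops['matmul'] = 1
ops['softmax'] = 2

# Iteration always follows insertion order.
build_order = list(ops)
# build_order == ['input', 'matmul', 'softmax'] on every run
```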
PiperOrigin-RevId: 167755072
* Add int8 version of the fused_conv2d_bias_activation operator for the forward phase, and support side_input and scaling parameters in the float and int8 versions. PiperOrigin-RevId: 167763219
* Make the text summary write no plugin data content. This is a safe removal because no logic makes use of the content of text plugin data. PiperOrigin-RevId: 167763880
* Avoid unnecessary buffer allocations and deallocations. Before this change, when we reached the end of a file, we would (1) clear the existing buffer (which at large buffer sizes typically involved deallocating it), (2) reserve a buffer (which at large buffer sizes is non-trivial), and (3) realize we had reached EoF, and therefore clear the buffer, deallocating it again. With this change, whenever the buffered reader detects an EoF condition, we remember it, so that we can short-circuit the above logic. This optimization results in a more than 25x performance improvement for large buffers reading small files. PiperOrigin-RevId: 167766751
* [TF:XLA] In Literal: correctly handle operands with zero elements in Copy. PiperOrigin-RevId: 167769308
* Reduce batch size for the resampler backward-pass test, to speed up the test. PiperOrigin-RevId: 167769539
* Remove `SimpleGraphExecutionState::costs_`, which is unused. PiperOrigin-RevId: 167772120
* Detect cycles when users add a control edge to a graph. PiperOrigin-RevId: 167773598
* Make writer_test avoid setting content to a string. The content field of the PluginData proto is going to be converted into a bytes field, and setting it to a string makes the test fail. Furthermore, the purpose of this test is to make sure that correct data is written, so setting the name of the plugin suffices. PiperOrigin-RevId: 167776457
* Propagate the original stack trace when exceptions caught by MonitoredSession are re-raised. PiperOrigin-RevId: 167781071
* Change trace.py to not access a graph as a default argument.
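The buffered-reader optimization above amounts to caching the EoF condition so a large buffer is not cleared, re-reserved, and cleared again on every read past end-of-file. A simplified Python sketch of the idea (class and attribute names are hypothetical, not the actual C++ BufferedInputStream):

```python
class BufferedReader:
    """Toy model of a buffered input stream that remembers EoF."""

    def __init__(self, data, buffer_size):
        self._data = data
        self._pos = 0
        self._buffer_size = buffer_size
        self._eof = False  # cached EoF: short-circuits future fills
        self._fills = 0    # stands in for costly clear/reserve cycles

    def _fill(self):
        if self._eof:          # short-circuit: skip clear/reserve/clear
            return b''
        self._fills += 1       # models the expensive buffer reserve
        chunk = self._data[self._pos:self._pos + self._buffer_size]
        self._pos += len(chunk)
        if not chunk:
            self._eof = True
        return chunk

    def read_all(self):
        out = []
        while True:
            chunk = self._fill()
            if not chunk:
                return b''.join(out)
            out.append(chunk)
```

A second read after EoF now performs zero fills instead of repeating the clear/reserve/clear cycle, which is where the reported speedup for small files with large buffers comes from.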
Check for None and access the default graph inside the function. PiperOrigin-RevId: 167788815
* Add custom metric support for tf.estimator.Estimator. PiperOrigin-RevId: 167788891
* An eager Saver that allows restore on create. PiperOrigin-RevId: 167789332
* Make the content field of PluginData a bytes field. The content field had previously been a string field, which was problematic because string fields can only store UTF-8 strings. This problem can manifest in various ways. For instance, take the precision-recall curve plugin. Its summary collects data that scales in size with the number of thresholds. When the content field is a string, the summary logic serializes the relevant data proto just fine when we only have a few thresholds (about 100). However, for large numbers of thresholds (i.e., around 200), the summary logic fails to serialize and throws a cryptic error: ValueError: '\x10\xc8\x01' has type str, but isn't valid UTF-8 encoding. Non-UTF-8 strings must be converted to unicode objects before being added. Changing the content field to a bytes field fixes this issue because bytes fields are not restricted to UTF-8 strings. I just happened to have needed a string long enough to no longer be a valid UTF-8 one. PiperOrigin-RevId: 167790594
* Temporarily disable the tf_should_use wrapper, since it can cause Python Graph/Operation/Tensor memory leaks. PiperOrigin-RevId: 167790657
* Ensure using "path" as a URI will keep working. PiperOrigin-RevId: 167793848
* Fix typo in graph transforms error message. PiperOrigin-RevId: 167796563
* Merge changes from github. END_PUBLIC
--- Commit 607816029 authored by Eugene Brevdo <ebrevdo@google.com>, committed by TensorFlower Gardener <gardener@tensorflow.org>: Extended ScratchSpace to expose its underlying scratch tensor object. PiperOrigin-RevId: 167649551
--- Commit db43fe68e authored by A.
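The string-to-bytes field change above comes down to the fact that arbitrary serialized proto bytes are usually not valid UTF-8, so a proto `string` field (which requires UTF-8) rejects them while a `bytes` field does not. A minimal illustration using the exact byte sequence quoted in the error message:

```python
# The serialized payload quoted in the error message above.
payload = b'\x10\xc8\x01'

# A proto `string` field requires the payload to decode as UTF-8;
# these bytes do not (0xc8 starts a 2-byte sequence that 0x01 cannot
# continue), which is why serialization failed.
try:
    payload.decode('utf-8')
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False

# A `bytes` field imposes no such restriction: raw bytes round-trip as-is.
assert is_valid_utf8 is False
assert bytes(payload) == b'\x10\xc8\x01'
```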
Unique TensorFlower <gardener@tensorflow.org>, committed by TensorFlower Gardener <gardener@tensorflow.org>: Add fast math attributes to all generated methods when fast math is enabled. RELNOTES: n/a PiperOrigin-RevId: 167646637
--- Commit aebe8cc6f authored by A. Unique TensorFlower <gardener@tensorflow.org>, committed by TensorFlower Gardener <gardener@tensorflow.org>: Call HloComputation.Accept instead of HloInstruction.Accept to get all instructions profiled. RELNOTES: n/a PiperOrigin-RevId: 167640259
--- Commit 0ab137cd8 authored by A. Unique TensorFlower <gardener@tensorflow.org>, committed by TensorFlower Gardener <gardener@tensorflow.org>: BEGIN_PUBLIC Automated g4 rollback of changelist 167604306. PiperOrigin-RevId: 167800256
* Update ops-related pbtxt files. PiperOrigin-RevId: 167802521
* Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 167804076
* Add sloppy_interleave dataset operator. When feeding data at high speed into a model from variable-latency data sources, head-of-line blocking can be a significant concern when using a deterministic input pipeline such as interleave. This change introduces a new non-deterministic dataset operator that avoids head-of-line blocking. PiperOrigin-RevId: 167810743
* Update ops-related pbtxt files. PiperOrigin-RevId: 167811375
* tfdbg: Fix Python 3 breakage in gRPC debug tests caused by bytes-type plugin_data content. PiperOrigin-RevId: 167812508
* [XLA] Rip CheckFusionNode() out of HloInstruction, and move it into the HLO verifier instead. CheckFusionNode() is linear in the size of the fusion node, and was called once per Fuse(), leading to run time quadratic in the fusion node's size. PiperOrigin-RevId: 167812735
* Disable tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py in cmake.
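The head-of-line blocking that sloppy_interleave addresses can be modeled outside TensorFlow: a deterministic interleave must stall on a slow source even when fast sources have elements ready, while the "sloppy" variant yields whatever is available. A toy Python sketch (function names are hypothetical, not the actual dataset op; queues stand in for variable-latency sources):

```python
from collections import deque

def strict_interleave(queues):
    """Deterministic round-robin: stalls as soon as one queue is empty."""
    out = []
    while any(queues):
        for q in queues:
            if not q:          # head-of-line blocking: would wait here
                return out
            out.append(q.popleft())
    return out

def sloppy_interleave(queues):
    """Non-deterministic variant: skips empty (slow) queues instead."""
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

fast = deque([1, 2, 3])
slow = deque()          # variable-latency source with nothing ready yet
# strict_interleave([fast, slow]) stops after one element;
# sloppy_interleave drains the fast source instead of waiting.
```

The price of avoiding the stall is that output order is no longer reproducible across runs, which is why the op is a separate, explicitly non-deterministic operator.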
Diffstat (limited to 'tensorflow')
-rw-r--r-- tensorflow/BUILD | 2
-rw-r--r-- tensorflow/c/c_api.cc | 65
-rw-r--r-- tensorflow/c/eager/c_api_test.cc | 4
-rw-r--r-- tensorflow/c/python_api.cc | 1
-rw-r--r-- tensorflow/cc/framework/gradients.cc | 4
-rw-r--r-- tensorflow/cc/framework/testutil.cc | 2
-rw-r--r-- tensorflow/cc/framework/testutil.h | 2
-rw-r--r-- tensorflow/compiler/xla/literal_util.cc | 7
-rw-r--r-- tensorflow/compiler/xla/literal_util.h | 3
-rw-r--r-- tensorflow/compiler/xla/literal_util_test.cc | 31
-rw-r--r-- tensorflow/compiler/xla/service/cpu/cpu_compiler.cc | 8
-rw-r--r-- tensorflow/compiler/xla/service/cpu/ir_emitter.cc | 7
-rw-r--r-- tensorflow/compiler/xla/service/gpu/ir_emitter_nested.cc | 3
-rw-r--r-- tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc | 3
-rw-r--r-- tensorflow/compiler/xla/service/hlo_evaluator_test.cc | 47
-rw-r--r-- tensorflow/compiler/xla/service/hlo_instruction.cc | 71
-rw-r--r-- tensorflow/compiler/xla/service/hlo_instruction.h | 3
-rw-r--r-- tensorflow/compiler/xla/service/hlo_verifier.cc | 120
-rw-r--r-- tensorflow/compiler/xla/service/hlo_verifier.h | 3
-rw-r--r-- tensorflow/compiler/xla/service/reduce_precision_insertion_test.cc | 4
-rw-r--r-- tensorflow/compiler/xla/service/user_computation.cc | 4
-rw-r--r-- tensorflow/compiler/xla/tests/multioutput_fusion_test.cc | 2
-rw-r--r-- tensorflow/contrib/BUILD | 1
-rw-r--r-- tensorflow/contrib/__init__.py | 1
-rwxr-xr-x tensorflow/contrib/cmake/tf_python.cmake | 45
-rw-r--r-- tensorflow/contrib/cmake/tf_tests.cmake | 2
-rw-r--r-- tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py | 1
-rw-r--r-- tensorflow/contrib/data/BUILD | 1
-rw-r--r-- tensorflow/contrib/data/__init__.py | 3
-rw-r--r-- tensorflow/contrib/data/python/kernel_tests/BUILD | 21
-rw-r--r-- tensorflow/contrib/data/python/kernel_tests/map_dataset_op_test.py | 5
-rw-r--r-- tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py | 475
-rw-r--r-- tensorflow/contrib/data/python/ops/BUILD | 15
-rw-r--r-- tensorflow/contrib/data/python/ops/sloppy_ops.py | 120
-rw-r--r-- tensorflow/contrib/eager/python/BUILD | 25
-rw-r--r-- tensorflow/contrib/eager/python/saver.py | 122
-rw-r--r-- tensorflow/contrib/eager/python/saver_test.py | 88
-rw-r--r-- tensorflow/contrib/eager/python/tfe.py | 3
-rw-r--r-- tensorflow/contrib/estimator/BUILD | 61
-rw-r--r-- tensorflow/contrib/estimator/__init__.py | 29
-rw-r--r-- tensorflow/contrib/estimator/python/estimator/extenders.py | 124
-rw-r--r-- tensorflow/contrib/estimator/python/estimator/extenders_test.py | 135
-rw-r--r-- tensorflow/contrib/fused_conv/BUILD | 13
-rw-r--r-- tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.cc | 698
-rw-r--r-- tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.h | 31
-rw-r--r-- tensorflow/contrib/fused_conv/kernels/fused_conv_ops_gpu.h | 74
-rw-r--r-- tensorflow/contrib/fused_conv/ops/fused_conv2d_bias_activation_op.cc | 77
-rw-r--r-- tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py | 107
-rw-r--r-- tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py | 289
-rw-r--r-- tensorflow/contrib/keras/BUILD | 637
-rw-r--r-- tensorflow/contrib/keras/README.md | 3
-rw-r--r-- tensorflow/contrib/keras/api/keras/activations/__init__.py | 26
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/applications/xception/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/backend/__init__.py | 276
-rw-r--r-- tensorflow/contrib/keras/api/keras/callbacks/__init__.py | 26
-rw-r--r-- tensorflow/contrib/keras/api/keras/constraints/__init__.py | 24
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py | 2
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py | 2
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py | 2
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py | 4
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py | 2
-rw-r--r-- tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py | 4
-rw-r--r-- tensorflow/contrib/keras/api/keras/initializers/__init__.py | 38
-rw-r--r-- tensorflow/contrib/keras/api/keras/layers/__init__.py | 184
-rw-r--r-- tensorflow/contrib/keras/api/keras/losses/__init__.py | 34
-rw-r--r-- tensorflow/contrib/keras/api/keras/metrics/__init__.py | 38
-rw-r--r-- tensorflow/contrib/keras/api/keras/models/__init__.py | 14
-rw-r--r-- tensorflow/contrib/keras/api/keras/optimizers/__init__.py | 22
-rw-r--r-- tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py | 28
-rw-r--r-- tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py | 6
-rw-r--r-- tensorflow/contrib/keras/api/keras/regularizers/__init__.py | 16
-rw-r--r-- tensorflow/contrib/keras/api/keras/utils/__init__.py | 30
-rw-r--r-- tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py | 4
-rw-r--r-- tensorflow/contrib/keras/python/keras/__init__.py | 40
-rw-r--r-- tensorflow/contrib/keras/python/keras/layers/__init__.py | 40
-rw-r--r-- tensorflow/contrib/keras/python/keras/utils/__init__.py | 43
-rw-r--r-- tensorflow/contrib/keras/python/keras/utils/io_utils_test.py | 101
-rw-r--r-- tensorflow/contrib/learn/BUILD | 53
-rw-r--r-- tensorflow/contrib/learn/python/learn/estimators/dnn_benchmark_test.py | 257
-rw-r--r-- tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined_benchmark_test.py | 224
-rw-r--r-- tensorflow/contrib/learn/python/learn/estimators/head.py | 4
-rw-r--r-- tensorflow/contrib/learn/python/learn/estimators/rnn_common.py | 2
-rw-r--r-- tensorflow/contrib/resampler/python/ops/resampler_ops_test.py | 2
-rw-r--r-- tensorflow/contrib/session_bundle/BUILD | 2
-rw-r--r-- tensorflow/contrib/session_bundle/session_bundle_test.cc | 3
-rw-r--r-- tensorflow/contrib/summary/BUILD | 5
-rw-r--r-- tensorflow/contrib/summary/summary_ops.py | 98
-rw-r--r-- tensorflow/contrib/summary/summary_ops_test.py | 27
-rw-r--r-- tensorflow/contrib/tensor_forest/kernels/v4/split_collection_operators.cc | 7
-rw-r--r-- tensorflow/contrib/tensorboard/plugins/trace/trace.py | 4
-rw-r--r-- tensorflow/contrib/tpu/profiler/op_profile.proto | 12
-rw-r--r-- tensorflow/core/BUILD | 1
-rw-r--r-- tensorflow/core/common_runtime/simple_graph_execution_state.cc | 13
-rw-r--r-- tensorflow/core/common_runtime/simple_graph_execution_state.h | 9
-rw-r--r-- tensorflow/core/framework/common_shape_fns.cc | 188
-rw-r--r-- tensorflow/core/framework/common_shape_fns_test.cc | 93
-rw-r--r-- tensorflow/core/framework/summary.proto | 2
-rw-r--r-- tensorflow/core/graph/mkl_layout_pass.cc | 8
-rw-r--r-- tensorflow/core/kernels/BUILD | 36
-rw-r--r-- tensorflow/core/kernels/conv_ops_gpu.h | 21
-rw-r--r-- tensorflow/core/kernels/conv_ops_gpu_3.cu.cc | 1
-rw-r--r-- tensorflow/core/kernels/crop_and_resize_op.cc | 579
-rw-r--r-- tensorflow/core/kernels/crop_and_resize_op.h | 8
-rw-r--r-- tensorflow/core/kernels/crop_and_resize_op_gpu.cu.cc | 2
-rw-r--r-- tensorflow/core/kernels/crop_and_resize_op_test.cc | 4
-rw-r--r-- tensorflow/core/kernels/cuda_solvers.h | 3
-rw-r--r-- tensorflow/core/kernels/cuda_solvers_gpu.cu.cc | 35
-rw-r--r-- tensorflow/core/kernels/dataset_utils.cc | 78
-rw-r--r-- tensorflow/core/kernels/dataset_utils.h | 35
-rw-r--r-- tensorflow/core/kernels/flat_map_dataset_op.cc | 56
-rw-r--r-- tensorflow/core/kernels/interleave_dataset_op.cc | 62
-rw-r--r-- tensorflow/core/kernels/mkl_conv_grad_input_ops.cc | 6
-rw-r--r-- tensorflow/core/kernels/mkl_conv_ops.cc | 23
-rw-r--r-- tensorflow/core/kernels/mkl_reshape_op.cc | 1
-rw-r--r-- tensorflow/core/kernels/parse_tensor_op.cc | 10
-rw-r--r-- tensorflow/core/kernels/parse_tensor_test.cc | 89
-rw-r--r-- tensorflow/core/kernels/segment_reduction_ops.cc | 10
-rw-r--r-- tensorflow/core/kernels/segment_reduction_ops_gpu.cu.cc | 8
-rw-r--r-- tensorflow/core/kernels/sloppy_interleave_dataset_op.cc | 370
-rw-r--r-- tensorflow/core/kernels/summary_kernels.cc | 7
-rw-r--r-- tensorflow/core/lib/io/buffered_inputstream.cc | 19
-rw-r--r-- tensorflow/core/lib/io/buffered_inputstream.h | 3
-rw-r--r-- tensorflow/core/lib/io/buffered_inputstream_test.cc | 40
-rw-r--r-- tensorflow/core/lib/io/zlib_inputstream.h | 2
-rw-r--r-- tensorflow/core/ops/array_ops.cc | 12
-rw-r--r-- tensorflow/core/ops/compat/ops_history.v1.pbtxt | 94
-rw-r--r-- tensorflow/core/ops/dataset_ops.cc | 27
-rw-r--r-- tensorflow/core/ops/debug_ops.cc | 5
-rw-r--r-- tensorflow/core/ops/linalg_ops.cc | 31
-rw-r--r-- tensorflow/core/ops/ops.pbtxt | 71
-rw-r--r-- tensorflow/core/platform/default/logging.h | 2
-rw-r--r-- tensorflow/core/platform/env_test.cc | 16
-rw-r--r-- tensorflow/core/util/activation_mode.cc | 4
-rw-r--r-- tensorflow/core/util/activation_mode.h | 1
-rw-r--r-- tensorflow/docs_src/programmers_guide/datasets.md | 6
-rw-r--r-- tensorflow/examples/get_started/regression/dnn_regression.py | 18
-rw-r--r-- tensorflow/examples/get_started/regression/imports85.py | 170
-rw-r--r-- tensorflow/examples/get_started/regression/linear_regression.py | 21
-rw-r--r-- tensorflow/examples/get_started/regression/linear_regression_categorical.py | 21
-rw-r--r-- tensorflow/examples/get_started/regression/test.py | 66
-rw-r--r-- tensorflow/go/op/wrappers.go | 352
-rw-r--r-- tensorflow/java/BUILD | 3
-rw-r--r-- tensorflow/java/src/gen/cc/op_gen_main.cc | 26
-rw-r--r-- tensorflow/java/src/gen/cc/op_generator.cc | 8
-rw-r--r-- tensorflow/java/src/gen/cc/op_generator.h | 4
-rw-r--r-- tensorflow/java/src/gen/gen_ops.bzl | 8
-rw-r--r-- tensorflow/python/BUILD | 7
-rw-r--r-- tensorflow/python/__init__.py | 4
-rw-r--r-- tensorflow/python/debug/BUILD | 34
-rw-r--r-- tensorflow/python/debug/cli/analyzer_cli.py | 16
-rw-r--r-- tensorflow/python/debug/lib/debug_data.py | 543
-rw-r--r-- tensorflow/python/debug/lib/debug_data_test.py | 85
-rw-r--r-- tensorflow/python/debug/lib/debug_gradients.py | 5
-rw-r--r-- tensorflow/python/debug/lib/debug_graphs.py | 430
-rw-r--r-- tensorflow/python/debug/lib/debug_graphs_test.py | 112
-rw-r--r-- tensorflow/python/debug/lib/grpc_debug_server.py | 10
-rw-r--r-- tensorflow/python/debug/lib/grpc_debug_test_server.py | 3
-rw-r--r-- tensorflow/python/debug/lib/session_debug_file_test.py | 2
-rw-r--r-- tensorflow/python/debug/lib/session_debug_testlib.py | 3
-rw-r--r-- tensorflow/python/debug/lib/stepper.py | 7
-rw-r--r-- tensorflow/python/eager/backprop.py | 2
-rw-r--r-- tensorflow/python/eager/function.py | 7
-rw-r--r-- tensorflow/python/eager/python_eager_op_gen.cc | 14
-rw-r--r-- tensorflow/python/estimator/model_fn.py | 5
-rw-r--r-- tensorflow/python/estimator/run_config.py | 236
-rw-r--r-- tensorflow/python/estimator/run_config_test.py | 295
-rw-r--r-- tensorflow/python/feature_column/feature_column.py | 6
-rw-r--r-- tensorflow/python/feature_column/feature_column_test.py | 14
-rw-r--r-- tensorflow/python/framework/ops_test.py | 18
-rw-r--r-- tensorflow/python/framework/python_op_gen_main.cc | 6
-rw-r--r-- tensorflow/python/framework/tensor_util.py | 6
-rw-r--r-- tensorflow/python/framework/tensor_util_test.py | 4
-rw-r--r-- tensorflow/python/framework/test_util.py | 8
-rw-r--r-- tensorflow/python/keras/BUILD | 694
-rw-r--r-- tensorflow/python/keras/README.md | 6
-rw-r--r-- tensorflow/python/keras/__init__.py | 47
-rw-r--r-- tensorflow/python/keras/_impl/keras/__init__.py | 40
-rw-r--r-- tensorflow/python/keras/_impl/keras/activations.py (renamed from tensorflow/contrib/keras/python/keras/activations.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/activations_test.py (renamed from tensorflow/contrib/keras/python/keras/activations_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/__init__.py (renamed from tensorflow/contrib/keras/python/keras/applications/__init__.py) | 12
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/imagenet_utils.py (renamed from tensorflow/contrib/keras/python/keras/applications/imagenet_utils.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/imagenet_utils_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/imagenet_utils_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/inception_v3.py (renamed from tensorflow/contrib/keras/python/keras/applications/inception_v3.py) | 32
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/inception_v3_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/inception_v3_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/mobilenet.py (renamed from tensorflow/contrib/keras/python/keras/applications/mobilenet.py) | 38
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/mobilenet_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/mobilenet_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/resnet50.py (renamed from tensorflow/contrib/keras/python/keras/applications/resnet50.py) | 36
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/resnet50_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/resnet50_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/vgg16.py (renamed from tensorflow/contrib/keras/python/keras/applications/vgg16.py) | 30
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/vgg16_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/vgg16_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/vgg19.py (renamed from tensorflow/contrib/keras/python/keras/applications/vgg19.py) | 30
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/vgg19_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/vgg19_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/xception.py (renamed from tensorflow/contrib/keras/python/keras/applications/xception.py) | 32
-rw-r--r-- tensorflow/python/keras/_impl/keras/applications/xception_test.py (renamed from tensorflow/contrib/keras/python/keras/applications/xception_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/backend.py (renamed from tensorflow/contrib/keras/python/keras/backend.py) | 0
-rw-r--r-- tensorflow/python/keras/_impl/keras/backend_test.py (renamed from tensorflow/contrib/keras/python/keras/backend_test.py) | 3
-rw-r--r-- tensorflow/python/keras/_impl/keras/callbacks.py (renamed from tensorflow/contrib/keras/python/keras/callbacks.py) | 57
-rw-r--r-- tensorflow/python/keras/_impl/keras/callbacks_test.py (renamed from tensorflow/contrib/keras/python/keras/callbacks_test.py) | 11
-rw-r--r-- tensorflow/python/keras/_impl/keras/constraints.py (renamed from tensorflow/contrib/keras/python/keras/constraints.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/constraints_test.py (renamed from tensorflow/contrib/keras/python/keras/constraints_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/__init__.py (renamed from tensorflow/contrib/keras/python/keras/datasets/__init__.py) | 12
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/boston_housing.py (renamed from tensorflow/contrib/keras/python/keras/datasets/boston_housing.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/cifar.py (renamed from tensorflow/contrib/keras/python/keras/datasets/cifar.py) | 0
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/cifar10.py (renamed from tensorflow/contrib/keras/python/keras/datasets/cifar10.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/cifar100.py (renamed from tensorflow/contrib/keras/python/keras/datasets/cifar100.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/imdb.py (renamed from tensorflow/contrib/keras/python/keras/datasets/imdb.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/mnist.py (renamed from tensorflow/contrib/keras/python/keras/datasets/mnist.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/datasets/reuters.py (renamed from tensorflow/contrib/keras/python/keras/datasets/reuters.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/engine/__init__.py (renamed from tensorflow/contrib/keras/python/keras/engine/__init__.py) | 12
-rw-r--r-- tensorflow/python/keras/_impl/keras/engine/topology.py (renamed from tensorflow/contrib/keras/python/keras/engine/topology.py) | 16
-rw-r--r-- tensorflow/python/keras/_impl/keras/engine/topology_test.py (renamed from tensorflow/contrib/keras/python/keras/engine/topology_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/engine/training.py (renamed from tensorflow/contrib/keras/python/keras/engine/training.py) | 20
-rw-r--r-- tensorflow/python/keras/_impl/keras/engine/training_test.py (renamed from tensorflow/contrib/keras/python/keras/engine/training_test.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/initializers.py (renamed from tensorflow/contrib/keras/python/keras/initializers.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/initializers_test.py (renamed from tensorflow/contrib/keras/python/keras/initializers_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/integration_test.py (renamed from tensorflow/contrib/keras/python/keras/integration_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/__init__.py | 40
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/advanced_activations.py (renamed from tensorflow/contrib/keras/python/keras/layers/advanced_activations.py) | 12
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/advanced_activations_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/advanced_activations_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/convolutional.py (renamed from tensorflow/contrib/keras/python/keras/layers/convolutional.py) | 30
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent.py (renamed from tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent.py) | 16
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/convolutional_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/convolutional_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/core.py (renamed from tensorflow/contrib/keras/python/keras/layers/core.py) | 22
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/core_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/core_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/embeddings.py (renamed from tensorflow/contrib/keras/python/keras/layers/embeddings.py) | 10
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/embeddings_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/embeddings_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/gru_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/gru_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/local.py (renamed from tensorflow/contrib/keras/python/keras/layers/local.py) | 17
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/local_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/local_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/lstm_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/lstm_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/merge.py (renamed from tensorflow/contrib/keras/python/keras/layers/merge.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/merge_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/merge_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/noise.py (renamed from tensorflow/contrib/keras/python/keras/layers/noise.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/noise_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/noise_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/normalization.py (renamed from tensorflow/contrib/keras/python/keras/layers/normalization.py) | 10
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/normalization_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/normalization_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/pooling.py (renamed from tensorflow/contrib/keras/python/keras/layers/pooling.py) | 8
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/pooling_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/pooling_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/recurrent.py (renamed from tensorflow/contrib/keras/python/keras/layers/recurrent.py) | 16
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/serialization.py (renamed from tensorflow/contrib/keras/python/keras/layers/serialization.py) | 32
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/serialization_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/serialization_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/simplernn_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/simplernn_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/wrappers.py (renamed from tensorflow/contrib/keras/python/keras/layers/wrappers.py) | 10
-rw-r--r-- tensorflow/python/keras/_impl/keras/layers/wrappers_test.py (renamed from tensorflow/contrib/keras/python/keras/layers/wrappers_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/losses.py (renamed from tensorflow/contrib/keras/python/keras/losses.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/losses_test.py (renamed from tensorflow/contrib/keras/python/keras/losses_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/metrics.py (renamed from tensorflow/contrib/keras/python/keras/metrics.py) | 30
-rw-r--r-- tensorflow/python/keras/_impl/keras/metrics_test.py (renamed from tensorflow/contrib/keras/python/keras/metrics_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/models.py (renamed from tensorflow/contrib/keras/python/keras/models.py) | 23
-rw-r--r-- tensorflow/python/keras/_impl/keras/models_test.py (renamed from tensorflow/contrib/keras/python/keras/models_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/optimizers.py (renamed from tensorflow/contrib/keras/python/keras/optimizers.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/optimizers_test.py (renamed from tensorflow/contrib/keras/python/keras/optimizers_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/__init__.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/__init__.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/image.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/image.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/image_test.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/image_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/sequence.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/sequence.py) | 0
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/sequence_test.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/sequence_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/text.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/text.py) | 0
-rw-r--r-- tensorflow/python/keras/_impl/keras/preprocessing/text_test.py (renamed from tensorflow/contrib/keras/python/keras/preprocessing/text_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/regularizers.py (renamed from tensorflow/contrib/keras/python/keras/regularizers.py) | 6
-rw-r--r-- tensorflow/python/keras/_impl/keras/regularizers_test.py (renamed from tensorflow/contrib/keras/python/keras/regularizers_test.py) | 4
-rw-r--r-- tensorflow/python/keras/_impl/keras/testing_utils.py (renamed from tensorflow/contrib/keras/python/keras/testing_utils.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/__init__.py | 43
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/conv_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/conv_utils.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/data_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/data_utils.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/data_utils_test.py (renamed from tensorflow/contrib/keras/python/keras/utils/data_utils_test.py) | 2
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/generic_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/generic_utils.py) | 0
-rw-r--r-- tensorflow/python/keras/_impl/keras/utils/generic_utils_test.py (renamed from tensorflow/contrib/keras/python/keras/utils/generic_utils_test.py) | 2
-rw-r--r--tensorflow/python/keras/_impl/keras/utils/io_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/io_utils.py)0
-rw-r--r--tensorflow/python/keras/_impl/keras/utils/io_utils_test.py100
-rw-r--r--tensorflow/python/keras/_impl/keras/utils/layer_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/layer_utils.py)4
-rw-r--r--tensorflow/python/keras/_impl/keras/utils/np_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/np_utils.py)0
-rw-r--r--tensorflow/python/keras/_impl/keras/utils/vis_utils.py (renamed from tensorflow/contrib/keras/python/keras/utils/vis_utils.py)4
-rw-r--r--tensorflow/python/keras/_impl/keras/wrappers/__init__.py (renamed from tensorflow/contrib/keras/python/keras/wrappers/__init__.py)2
-rw-r--r--tensorflow/python/keras/_impl/keras/wrappers/scikit_learn.py (renamed from tensorflow/contrib/keras/python/keras/wrappers/scikit_learn.py)4
-rw-r--r--tensorflow/python/keras/_impl/keras/wrappers/scikit_learn_test.py (renamed from tensorflow/contrib/keras/python/keras/wrappers/scikit_learn_test.py)4
-rw-r--r--tensorflow/python/keras/activations/__init__.py41
-rw-r--r--tensorflow/python/keras/applications/__init__.py36
-rw-r--r--tensorflow/python/keras/applications/inception_v3/__init__.py27
-rw-r--r--tensorflow/python/keras/applications/mobilenet/__init__.py27
-rw-r--r--tensorflow/python/keras/applications/resnet50/__init__.py27
-rw-r--r--tensorflow/python/keras/applications/vgg16/__init__.py27
-rw-r--r--tensorflow/python/keras/applications/vgg19/__init__.py27
-rw-r--r--tensorflow/python/keras/applications/xception/__init__.py27
-rw-r--r--tensorflow/python/keras/backend/__init__.py163
-rw-r--r--tensorflow/python/keras/callbacks/__init__.py37
-rw-r--r--tensorflow/python/keras/constraints/__init__.py40
-rw-r--r--tensorflow/python/keras/datasets/__init__.py30
-rw-r--r--tensorflow/python/keras/datasets/boston_housing/__init__.py25
-rw-r--r--tensorflow/python/keras/datasets/cifar10/__init__.py25
-rw-r--r--tensorflow/python/keras/datasets/cifar100/__init__.py25
-rw-r--r--tensorflow/python/keras/datasets/imdb/__init__.py26
-rw-r--r--tensorflow/python/keras/datasets/mnist/__init__.py25
-rw-r--r--tensorflow/python/keras/datasets/reuters/__init__.py26
-rw-r--r--tensorflow/python/keras/initializers/__init__.py49
-rw-r--r--tensorflow/python/keras/layers/__init__.py148
-rw-r--r--tensorflow/python/keras/losses/__init__.py45
-rw-r--r--tensorflow/python/keras/metrics/__init__.py47
-rw-r--r--tensorflow/python/keras/models/__init__.py31
-rw-r--r--tensorflow/python/keras/optimizers/__init__.py39
-rw-r--r--tensorflow/python/keras/preprocessing/__init__.py27
-rw-r--r--tensorflow/python/keras/preprocessing/image/__init__.py38
-rw-r--r--tensorflow/python/keras/preprocessing/sequence/__init__.py27
-rw-r--r--tensorflow/python/keras/preprocessing/text/__init__.py27
-rw-r--r--tensorflow/python/keras/regularizers/__init__.py38
-rw-r--r--tensorflow/python/keras/utils/__init__.py39
-rw-r--r--tensorflow/python/keras/wrappers/__init__.py25
-rw-r--r--tensorflow/python/keras/wrappers/scikit_learn/__init__.py26
-rw-r--r--tensorflow/python/kernel_tests/segment_reduction_ops_test.py44
-rw-r--r--tensorflow/python/kernel_tests/sparse_ops_test.py17
-rw-r--r--tensorflow/python/layers/normalization.py35
-rw-r--r--tensorflow/python/ops/metrics_impl.py10
-rw-r--r--tensorflow/python/ops/parsing_ops.py9
-rw-r--r--tensorflow/python/ops/resource_variable_ops.py64
-rw-r--r--tensorflow/python/ops/sparse_ops.py2
-rw-r--r--tensorflow/python/summary/text_summary.py12
-rw-r--r--tensorflow/python/summary/writer/writer_test.py5
-rw-r--r--tensorflow/python/training/monitored_session.py20
-rw-r--r--tensorflow/python/training/monitored_session_test.py28
-rw-r--r--tensorflow/python/training/saver_test.py47
-rw-r--r--tensorflow/python/training/training_util.py3
-rw-r--r--tensorflow/python/util/tf_should_use.py78
-rw-r--r--tensorflow/python/util/tf_should_use_test.py12
-rw-r--r--tensorflow/stream_executor/cuda/cuda_dnn.cc445
-rw-r--r--tensorflow/stream_executor/cuda/cuda_dnn.h108
-rw-r--r--tensorflow/stream_executor/dnn.h139
-rw-r--r--tensorflow/stream_executor/stream.cc206
-rw-r--r--tensorflow/stream_executor/stream.h98
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.activations.pbtxt55
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.inception_v3.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.mobilenet.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.pbtxt51
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.resnet50.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.vgg16.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.vgg19.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.applications.xception.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.backend.pbtxt555
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-base-logger.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-c-s-v-logger.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-callback.pbtxt41
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-early-stopping.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-history.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-lambda-callback.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-learning-rate-scheduler.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-model-checkpoint.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-progbar-logger.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-reduce-l-r-on-plateau.pbtxt46
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-remote-monitor.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-tensor-board.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.-terminate-on-na-n.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.callbacks.pbtxt55
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.-constraint.pbtxt12
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.-max-norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.-min-max-norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.-non-neg.pbtxt13
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.-unit-norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.max_norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.min_max_norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.non_neg.pbtxt13
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.pbtxt51
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.constraints.unit_norm.pbtxt14
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.boston_housing.pbtxt7
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar10.pbtxt7
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar100.pbtxt7
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.imdb.pbtxt11
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.mnist.pbtxt7
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.pbtxt27
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.datasets.reuters.pbtxt11
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-constant.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-identity.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-initializer.pbtxt16
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-ones.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-orthogonal.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-normal.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-uniform.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-truncated-normal.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-variance-scaling.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.-zeros.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.initializers.pbtxt79
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-activation.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-activity-regularization.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-add.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-alpha-dropout.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-average.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-batch-normalization.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-bidirectional.pbtxt172
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-concatenate.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv-l-s-t-m2-d.pbtxt189
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d-transpose.pbtxt162
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d-transpose.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d-transpose.pbtxt162
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d-transpose.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping1-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping2-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping3-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-dense.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-dot.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-dropout.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-e-l-u.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-embedding.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-flatten.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-g-r-u.pbtxt180
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-dropout.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-noise.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling1-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling2-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling3-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool1-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool2-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool3-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool1-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool2-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool3-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling1-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling2-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling3-d.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-input-layer.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-input-spec.pbtxt9
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-l-s-t-m.pbtxt180
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-lambda.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-layer.pbtxt158
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-leaky-re-l-u.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected1-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected2-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-masking.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-maximum.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-multiply.pbtxt160
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-p-re-l-u.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-permute.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-repeat-vector.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-reshape.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-conv2-d.pbtxt162
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-convolution2-d.pbtxt162
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-simple-r-n-n.pbtxt180
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout1-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout2-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout3-d.pbtxt161
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-thresholded-re-l-u.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-time-distributed.pbtxt168
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling1-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling2-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling3-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-wrapper.pbtxt167
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding1-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding2-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding3-d.pbtxt159
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.layers.pbtxt371
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.losses.pbtxt71
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.metrics.pbtxt79
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.models.-model.pbtxt249
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.models.-sequential.pbtxt274
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.models.pbtxt31
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adadelta.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adagrad.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adam.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adamax.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-nadam.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-optimizer.pbtxt33
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-r-m-sprop.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.-s-g-d.pbtxt34
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.optimizers.pbtxt47
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.pbtxt71
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-directory-iterator.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-image-data-generator.pbtxt29
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-iterator.pbtxt13
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-numpy-array-iterator.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.pbtxt59
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.sequence.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.-tokenizer.pbtxt33
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.pbtxt15
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.regularizers.-l1-l2.pbtxt18
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.regularizers.-regularizer.pbtxt12
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.regularizers.pbtxt35
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-custom-object-scope.pbtxt9
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-generator-enqueuer.pbtxt26
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-h-d-f5-matrix.pbtxt29
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-progbar.pbtxt17
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence-enqueuer.pbtxt24
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence.pbtxt12
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.utils.pbtxt63
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.wrappers.pbtxt7
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-classifier.pbtxt42
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-regressor.pbtxt38
-rw-r--r--tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.pbtxt11
-rw-r--r--tensorflow/tools/api/golden/tensorflow.pbtxt4
-rwxr-xr-xtensorflow/tools/ci_build/ci_sanity.sh2
-rwxr-xr-xtensorflow/tools/ci_build/linux/cpu/run_cc_core.sh2
-rw-r--r--tensorflow/tools/graph_transforms/remove_attribute.cc2
514 files changed, 27180 insertions, 4703 deletions
diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index 5b6a18b6a6..5538052d02 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -290,6 +290,7 @@ filegroup(
"//tensorflow/contrib/decision_trees/proto:all_files",
"//tensorflow/contrib/distributions:all_files",
"//tensorflow/contrib/eager/python:all_files",
+ "//tensorflow/contrib/estimator:all_files",
"//tensorflow/contrib/factorization:all_files",
"//tensorflow/contrib/factorization/kernels:all_files",
"//tensorflow/contrib/ffmpeg:all_files",
@@ -407,6 +408,7 @@ filegroup(
"//tensorflow/python/eager:all_files",
"//tensorflow/python/estimator:all_files",
"//tensorflow/python/feature_column:all_files",
+ "//tensorflow/python/keras:all_files",
"//tensorflow/python/kernel_tests:all_files",
"//tensorflow/python/kernel_tests/distributions:all_files",
"//tensorflow/python/ops/distributions:all_files",
diff --git a/tensorflow/c/c_api.cc b/tensorflow/c/c_api.cc
index c454c94249..334f867e47 100644
--- a/tensorflow/c/c_api.cc
+++ b/tensorflow/c/c_api.cc
@@ -374,6 +374,65 @@ void TF_Reset_Helper(const TF_SessionOptions* opt, const char** containers,
status->status = Reset(opt->options, container_names);
}
+// This traverses the specified nodes in topological order to verify there are
+// no cycles. Starting with inputless nodes, it visits nodes whose inputs have
+// all been visited, and counts the total number of visited nodes. If there is a
+// cycle, nodes in the cycle will never be visited, and the visited count will
+// be less than the total node count.
+Status ValidateNoCycles(const Graph& g) {
+ // TODO(nolivia): check this on a subset of the graph instead of all of it.
+ int total_num_nodes = g.num_node_ids();
+ // A node is ready when all of its inputs have been visited.
+ std::vector<const Node*> ready;
+ std::vector<int> pending_count(total_num_nodes, 0);
+
+ for (int i = 0; i < total_num_nodes; ++i) {
+ const Node* n = g.FindNodeId(i);
+ if (n == nullptr) continue;
+ pending_count[i] = n->in_edges().size();
+ if (n->IsMerge()) {
+ // While-loop cycles are legal cycles, so we manually adjust the
+ // pending_count to make sure that the loop is visited.
+ for (const Edge* e : n->in_edges()) {
+ if (!e->IsControlEdge() && e->src()->IsNextIteration()) {
+ pending_count[i]--;
+ }
+ }
+ }
+ if (pending_count[i] == 0) {
+ ready.push_back(n);
+ }
+ }
+
+ int processed = 0;
+ while (!ready.empty()) {
+ const Node* node = ready.back();
+ ready.pop_back();
+ ++processed;
+
+ for (const Edge* out : node->out_edges()) {
+ const int output_id = out->dst()->id();
+ pending_count[output_id]--;
+ if (pending_count[output_id] == 0) {
+ ready.push_back(out->dst());
+ }
+ }
+ }
+
+ if (processed < total_num_nodes) {
+ std::vector<string> nodes_in_cycle;
+ for (int i = 0; i < pending_count.size() && nodes_in_cycle.size() < 3;
+ ++i) {
+ if (pending_count[i] != 0) {
+ nodes_in_cycle.push_back(g.FindNodeId(i)->name());
+ }
+ }
+ return errors::InvalidArgument(
+ "Graph is invalid, contains a cycle with ", total_num_nodes - processed,
+ " nodes, including: ", str_util::Join(nodes_in_cycle, ", "));
+ }
+ return Status::OK();
+}
} // namespace
} // namespace tensorflow
@@ -2251,6 +2310,12 @@ static bool ExtendSessionGraphHelper(TF_Session* session, TF_Status* status) {
const Graph& graph = session->graph->graph;
const auto num_nodes = graph.num_node_ids();
if (session->last_num_graph_nodes < num_nodes) {
+ status->status = tensorflow::ValidateNoCycles(session->graph->graph);
+ if (!status->status.ok()) {
+ session->graph->mu.unlock();
+ return false;
+ }
+
GraphDef graph_def;
*graph_def.mutable_versions() = graph.versions();
// Fill graph_def with nodes with ids in the range
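The `ValidateNoCycles` routine added above is a variant of Kahn's topological sort: count each node's unvisited inputs, repeatedly visit "ready" nodes, and if fewer nodes get visited than exist, the leftovers lie on (or behind) a cycle. A minimal Python sketch of the same idea, with the graph as an adjacency dict (names are illustrative, not TensorFlow API):

```python
from collections import deque

def find_cycle_nodes(successors):
    """Kahn-style check: return the set of node ids that lie on (or are fed
    only through) a cycle; an empty set means the graph is acyclic.

    `successors` maps each node id to the ids of its output nodes.
    """
    # pending[n] counts how many inputs of n have not yet been visited.
    pending = {n: 0 for n in successors}
    for outs in successors.values():
        for dst in outs:
            pending[dst] = pending.get(dst, 0) + 1

    # A node is ready when all of its inputs have been visited.
    ready = deque(n for n, count in pending.items() if count == 0)
    processed = 0
    while ready:
        node = ready.popleft()
        processed += 1
        for dst in successors.get(node, ()):
            pending[dst] -= 1
            if pending[dst] == 0:
                ready.append(dst)

    # Nodes whose pending count never reached zero are on or behind a cycle.
    return {n for n, count in pending.items() if count != 0}

acyclic = {"a": ["b"], "b": ["c"], "c": []}
cyclic = {"a": ["b"], "b": ["a"], "c": []}
print(find_cycle_nodes(acyclic))  # set()
print(find_cycle_nodes(cyclic))   # {'a', 'b'}
```

The C++ version above adds one wrinkle this sketch omits: for `Merge` nodes it pre-decrements the pending count on `NextIteration` back-edges, so the legal cycles formed by while-loops are still visited and only genuinely invalid cycles are reported.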
diff --git a/tensorflow/c/eager/c_api_test.cc b/tensorflow/c/eager/c_api_test.cc
index d19583a3ab..72e0fe8a15 100644
--- a/tensorflow/c/eager/c_api_test.cc
+++ b/tensorflow/c/eager/c_api_test.cc
@@ -38,6 +38,7 @@ TFE_TensorHandle* TestMatrixTensorHandle() {
TFE_TensorHandle* th = TFE_NewTensorHandle(t, status);
CHECK_EQ(TF_OK, TF_GetCode(status)) << TF_Message(status);
TF_DeleteTensor(t);
+ TF_DeleteStatus(status);
return th;
}
@@ -385,7 +386,8 @@ TFE_TensorHandle* CreateVariable(TFE_Context* ctx, float value,
memcpy(TF_TensorData(t.get()), &value, TF_TensorByteSize(t.get()));
std::unique_ptr<TFE_TensorHandle, decltype(&TFE_DeleteTensorHandle)>
- value_handle(TFE_NewTensorHandle(t.get(), status), TFE_DeleteTensorHandle);
+ value_handle(TFE_NewTensorHandle(t.get(), status),
+ TFE_DeleteTensorHandle);
if (TF_GetCode(status) != TF_OK) return nullptr;
TFE_OpAddInput(op, value_handle.get(), status);
diff --git a/tensorflow/c/python_api.cc b/tensorflow/c/python_api.cc
index adca6c7625..b8d36b8947 100644
--- a/tensorflow/c/python_api.cc
+++ b/tensorflow/c/python_api.cc
@@ -20,7 +20,6 @@ limitations under the License.
namespace tensorflow {
void AddControlInput(TF_Graph* graph, TF_Operation* op, TF_Operation* input) {
- // TODO(skyewm): make sure cycles are prevented
mutex_lock l(graph->mu);
graph->graph.AddControlEdge(&input->node, &op->node);
}
diff --git a/tensorflow/cc/framework/gradients.cc b/tensorflow/cc/framework/gradients.cc
index 1868207148..82469261e5 100644
--- a/tensorflow/cc/framework/gradients.cc
+++ b/tensorflow/cc/framework/gradients.cc
@@ -77,7 +77,7 @@ class SymbolicGradientBuilder {
Status CallGradFunction(const Operation& op,
const std::vector<Output>& grad_inputs,
std::vector<Output>* grad_outputs);
-
+
// Returns a list mapping whether each node in the graph is reachable
// from outputs_. Keyed by node id.
std::vector<bool> GetReachableNodes();
@@ -156,7 +156,7 @@ std::vector<bool> SymbolicGradientBuilder::GetReachableNodes() {
reachable_nodes[out.node()->id()] = true;
}
}
-
+
while (!queue.empty()) {
Node* n = queue.front();
queue.pop_front();
diff --git a/tensorflow/cc/framework/testutil.cc b/tensorflow/cc/framework/testutil.cc
index 25ee08f676..57d573e3c5 100644
--- a/tensorflow/cc/framework/testutil.cc
+++ b/tensorflow/cc/framework/testutil.cc
@@ -37,7 +37,7 @@ void GetTensor(const Scope& scope, Output tensor, Tensor* out) {
}
void GetTensors(const Scope& scope, const std::vector<Output>& assign_vars,
- OutputList tensors, std::vector<Tensor>* out) {
+ const OutputList& tensors, std::vector<Tensor>* out) {
ClientSession session(scope);
TF_CHECK_OK(session.Run(assign_vars, nullptr));
TF_CHECK_OK(session.Run(tensors, out));
diff --git a/tensorflow/cc/framework/testutil.h b/tensorflow/cc/framework/testutil.h
index ca57c0f0a4..a3e19870ec 100644
--- a/tensorflow/cc/framework/testutil.h
+++ b/tensorflow/cc/framework/testutil.h
@@ -30,7 +30,7 @@ void GetTensors(const Scope& scope, OutputList tensors,
// assign_vars are extra outputs that should be run
// e.g. to assign values to variables.
void GetTensors(const Scope& scope, const std::vector<Output>& assign_vars,
- OutputList tensors, std::vector<Tensor>* out);
+ const OutputList& tensors, std::vector<Tensor>* out);
/// Computes the output 'tensor', returning the resulting tensor in 'out'.
void GetTensor(const Scope& scope, Output tensor, Tensor* out);
diff --git a/tensorflow/compiler/xla/literal_util.cc b/tensorflow/compiler/xla/literal_util.cc
index 71995b2307..6190bd624d 100644
--- a/tensorflow/compiler/xla/literal_util.cc
+++ b/tensorflow/compiler/xla/literal_util.cc
@@ -94,14 +94,17 @@ Status Literal::CopyRange(const Literal& src_literal,
TF_RET_CHECK(ShapeUtil::Rank(src_shape) == src_base.size());
TF_RET_CHECK(ShapeUtil::Rank(dest_shape) == dest_base.size());
+
if (ShapeUtil::Rank(src_shape) == 0 || ShapeUtil::Rank(dest_shape) == 0) {
// If any of the two shapes are scalars, we can just call the StridedCopy()
// directly, and we know we will be copying only one value.
TF_RET_CHECK(copy_size.empty());
StridedCopy(dest_data, LinearIndex(dest_base), 0, src_data,
src_literal.LinearIndex(src_base), 0, 1);
- } else if (!ShapeUtil::HasZeroElements(dest_shape)) {
- TF_RET_CHECK(!ShapeUtil::HasZeroElements(src_shape));
+ } else if (!ShapeUtil::HasZeroElements(dest_shape) &&
+ !ShapeUtil::HasZeroElements(src_shape)) {
+ // Perform the copy only if neither the src literal nor the dest literal
+ // has a zero-element dimension; otherwise this is a no-op.
TF_RET_CHECK(src_base.size() == dest_base.size());
TF_RET_CHECK(src_base.size() == copy_size.size());
diff --git a/tensorflow/compiler/xla/literal_util.h b/tensorflow/compiler/xla/literal_util.h
index 447c494bfc..6451345918 100644
--- a/tensorflow/compiler/xla/literal_util.h
+++ b/tensorflow/compiler/xla/literal_util.h
@@ -237,6 +237,9 @@ class Literal {
// The src_literal and this literal must have the same primitive type,
// src_base+copy_size must fit the source literal dimensions, as well as
// dest_base+copy_size must fit the destination literal dimensions.
+ // Note: if either src_literal or this literal contains a zero-element
+ // dimension, then copy_size must be 0 in that dimension and the
+ // corresponding base indices must be 0.
Status Copy(const Literal& src_literal,
tensorflow::gtl::ArraySlice<int64> src_base,
tensorflow::gtl::ArraySlice<int64> dest_base,
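The contract documented in the `Copy` comment above — a copy involving a zero-element dimension must use a `copy_size` of 0 and degenerates to a no-op — can be sketched with plain Python lists. This is an illustration of the documented semantics only, not the XLA implementation:

```python
def copy_range(dest, src, src_base, dest_base, copy_size):
    """Copy `copy_size` elements of a rank-1 buffer, mirroring the documented
    contract: copy_size == 0 is a valid no-op even when src or dest is empty,
    provided the base indices are 0.
    """
    if copy_size == 0:
        return  # no-op: nothing to copy
    # For a non-empty copy, src_base+copy_size and dest_base+copy_size must
    # fit the respective buffers.
    assert src_base + copy_size <= len(src)
    assert dest_base + copy_size <= len(dest)
    dest[dest_base:dest_base + copy_size] = src[src_base:src_base + copy_size]

nine = [9.0]
empty = []
copy_range(nine, empty, 0, 0, 0)   # copy from an empty source: no-op
copy_range(empty, nine, 0, 0, 0)   # copy into an empty destination: no-op
print(nine, empty)  # [9.0] []
```

This mirrors the `CopyFromAndToZeroElement` test added below, where copying between a `{9}` literal and an empty rank-1 literal with `copy_size` 0 leaves both unchanged.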
diff --git a/tensorflow/compiler/xla/literal_util_test.cc b/tensorflow/compiler/xla/literal_util_test.cc
index a33c0fe09d..61ceac4f9a 100644
--- a/tensorflow/compiler/xla/literal_util_test.cc
+++ b/tensorflow/compiler/xla/literal_util_test.cc
@@ -698,7 +698,7 @@ TEST_F(LiteralUtilTest, Copy) {
for (const auto& layout : layouts) {
Shape shape = ShapeUtil::MakeShapeWithLayout(
primitive_util::NativeToPrimitiveType<uint32>(), dimensions, layout);
- auto blank = Literal::CreateFromShape(shape);
+
auto source = Literal::CreateFromShape(shape);
const int64 zero_base[] = {0, 0, 0, 0};
const int64 step[] = {1, 1, 1, 1};
@@ -707,15 +707,15 @@ TEST_F(LiteralUtilTest, Copy) {
source->Set(indexes, ++seqnr);
return true;
};
-
ShapeUtil::ForEachIndex(source->shape(), zero_base, dimensions, step,
init_proc);
+ auto blank = Literal::CreateFromShape(shape);
const int64 src_base[] = {3, 1, 5, 7};
const int64 dest_base[] = {6, 4, 12, 2};
const int64 copy_size[] = {7, 8, 11, 9};
-
TF_EXPECT_OK(blank->Copy(*source, src_base, dest_base, copy_size));
+
std::vector<int64> source_indexes(TF_ARRAYSIZE(dimensions), 0);
std::vector<int64> blank_indexes(TF_ARRAYSIZE(dimensions), 0);
bool matched = true;
@@ -730,6 +730,7 @@ TEST_F(LiteralUtilTest, Copy) {
matched = (bval != 0 && bval == source->Get<uint32>(source_indexes));
return matched;
};
+
ShapeUtil::ForEachIndex(source->shape(), zero_base, copy_size, step,
check_proc);
EXPECT_TRUE(matched);
@@ -749,6 +750,30 @@ TEST_F(LiteralUtilTest, CopyScalars) {
EXPECT_EQ(vect->Get<uint32>({4}), 17);
}
+TEST_F(LiteralUtilTest, CopyFromAndToZeroElement) {
+ const Shape empty_r1_shape = ShapeUtil::MakeShape(F32, {0});
+ const auto const_nine = Literal::CreateR1<float>({9});
+ const auto const_empty = Literal::CreateFromShape(empty_r1_shape);
+
+ {
+ // Source contains dimension with zero elements.
+ const auto empty = Literal::CreateFromShape(empty_r1_shape);
+ auto nine = Literal::CreateR1<float>({9});
+
+ TF_EXPECT_OK(nine->Copy(*empty, {0}, {0}, {0}));
+ EXPECT_TRUE(nine->Equal(*const_nine));
+ }
+
+ {
+ // Copy 0 element to destination with zero elements.
+ const auto empty = Literal::CreateFromShape(empty_r1_shape);
+ auto nine = Literal::CreateR1<float>({9});
+
+ TF_EXPECT_OK(empty->Copy(*nine, {0}, {0}, {0}));
+ EXPECT_TRUE(empty->Equal(*const_empty));
+ }
+}
+
TEST_F(LiteralUtilTest, F16) {
// Verify that the internal data views are consistent and that they
// are in little endian format
diff --git a/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc b/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc
index 839fe48488..8c7c2aa70e 100644
--- a/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc
+++ b/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc
@@ -198,8 +198,8 @@ class CollectProfileCandidates : public DfsHloVisitorWithDefault {
std::unordered_map<const HloInstruction*, size_t> hlo_to_profile_idx;
CollectProfileCandidates profile_candidates_for_computation(
&hlo_to_profile_idx);
- TF_RETURN_IF_ERROR(computation->root_instruction()->Accept(
- &profile_candidates_for_computation));
+ TF_RETURN_IF_ERROR(
+ computation->Accept(&profile_candidates_for_computation));
return hlo_to_profile_idx;
}
@@ -433,6 +433,10 @@ Status InitializeModuleHooks(
StatusOr<std::unique_ptr<Executable>> CpuCompiler::Compile(
std::unique_ptr<HloModule> module, se::StreamExecutor* stream_exec) {
+ const string timer_message =
+ "Compiling [" + module->name() + "] for CPU using JIT";
+ ScopedLoggingTimer compiling_timer(timer_message, 1);
+
VLOG(1) << "Compiling: " << module->name();
TF_RET_CHECK(stream_exec != nullptr);
std::call_once(llvm_command_line_options_initialized,
diff --git a/tensorflow/compiler/xla/service/cpu/ir_emitter.cc b/tensorflow/compiler/xla/service/cpu/ir_emitter.cc
index c5275ede65..06c94e19de 100644
--- a/tensorflow/compiler/xla/service/cpu/ir_emitter.cc
+++ b/tensorflow/compiler/xla/service/cpu/ir_emitter.cc
@@ -240,6 +240,13 @@ void IrEmitter::InitializeIrFunction(const string& function_name) {
compute_function_->addFnAttr(llvm::Attribute::OptimizeForSize);
}
+ if (hlo_module_config_.debug_options().xla_enable_fast_math()) {
+ compute_function_->addFnAttr("unsafe-fp-math", "true");
+ compute_function_->addFnAttr("no-infs-fp-math", "true");
+ compute_function_->addFnAttr("no-nans-fp-math", "true");
+ compute_function_->addFnAttr("no-signed-zeros-fp-math", "true");
+ }
+
ir_builder_.SetInsertPoint(llvm::BasicBlock::Create(
/*Context=*/module_->getContext(),
/*Name=*/"entry",
diff --git a/tensorflow/compiler/xla/service/gpu/ir_emitter_nested.cc b/tensorflow/compiler/xla/service/gpu/ir_emitter_nested.cc
index 202a0171db..a40eb6afc2 100644
--- a/tensorflow/compiler/xla/service/gpu/ir_emitter_nested.cc
+++ b/tensorflow/compiler/xla/service/gpu/ir_emitter_nested.cc
@@ -87,6 +87,9 @@ llvm::Function* IrEmitterNested::EmitBasePointersForNestedComputation(
}
}
+ // TODO(b/65380986): Investigate if adding fast math flags for generated
+ // kernels makes sense.
+
llvm::BasicBlock* entry_bb =
llvm::BasicBlock::Create(function->getContext(), "entry", function);
// Emit a "return void" at entry_bb's end, and sets the insert point before
diff --git a/tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc b/tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc
index 749badf3f2..b84284046b 100644
--- a/tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc
+++ b/tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc
@@ -201,6 +201,9 @@ llvm::Function* IrEmitterUnnested::BuildKernelPrototype(
}
kernel->addAttribute(temp_buffer_arg_no + 1, llvm::Attribute::NoAlias);
+ // TODO(b/65380986): Investigate if adding fast math flags for generated
+ // kernels makes sense.
+
// Add the declaration of this kernel to llvm.nvvm.annotations so that NVPTX
// treats it as a CUDA kernel.
llvm::NamedMDNode* nvvm_annotations_node =
diff --git a/tensorflow/compiler/xla/service/hlo_evaluator_test.cc b/tensorflow/compiler/xla/service/hlo_evaluator_test.cc
index a826548349..9205f5dc4e 100644
--- a/tensorflow/compiler/xla/service/hlo_evaluator_test.cc
+++ b/tensorflow/compiler/xla/service/hlo_evaluator_test.cc
@@ -332,6 +332,53 @@ TEST_F(HloEvaluatorTest, DoesBroadcastScalar) {
LiteralTestUtil::ExpectEqual(*result, *output_literal);
}
+TEST_F(HloEvaluatorTest, DoesConcatenateSimple) {
+ HloComputation::Builder b(TestName());
+
+ HloInstruction* operand1 = b.AddInstruction(HloInstruction::CreateConstant(
+ Literal::CreateR2<int64>({{-1, -2}, {100, 200}})));
+ HloInstruction* operand2 = b.AddInstruction(HloInstruction::CreateConstant(
+ Literal::CreateR2<int64>({{-2, -3}, {-100, -200}})));
+
+ std::vector<HloInstruction*> operands = {operand1, operand2};
+
+ Shape shape = ShapeUtil::MakeShape(S64, {2, 2});
+ b.AddInstruction(HloInstruction::CreateConcatenate(shape, operands, 0));
+
+ HloModule module(TestName());
+ auto computation = module.AddEntryComputation(b.Build());
+
+ std::unique_ptr<Literal> result =
+ evaluator_->Evaluate(*computation, {}).ConsumeValueOrDie();
+
+ auto expected =
+ Literal::CreateR2<int64>({{-1, -2}, {100, 200}, {-2, -3}, {-100, -200}});
+ LiteralTestUtil::ExpectEqual(*expected, *result);
+}
+
+TEST_F(HloEvaluatorTest, ConcatenateHandlesShapeWithZeroElement) {
+ HloComputation::Builder b(TestName());
+
+ HloInstruction* operand1 = b.AddInstruction(
+ HloInstruction::CreateConstant(Literal::CreateR1<int64>({100, 200})));
+ HloInstruction* operand2 = b.AddInstruction(
+ HloInstruction::CreateConstant(Literal::CreateR1<int64>({})));
+
+ std::vector<HloInstruction*> operands = {operand1, operand2};
+
+ Shape shape = ShapeUtil::MakeShape(S64, {2});
+ b.AddInstruction(HloInstruction::CreateConcatenate(shape, operands, 0));
+
+ HloModule module(TestName());
+ auto computation = module.AddEntryComputation(b.Build());
+
+ std::unique_ptr<Literal> result =
+ evaluator_->Evaluate(*computation, {}).ConsumeValueOrDie();
+
+ auto expected = Literal::CreateR1<int64>({100, 200});
+ LiteralTestUtil::ExpectEqual(*expected, *result);
+}
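The evaluator behavior exercised by this test mirrors ordinary sequence concatenation, where a zero-element operand contributes nothing to the result. A minimal Python analogue (illustrative only, for the rank-1 case):

```python
def concatenate_r1(operands):
    """Concatenate rank-1 operands along dimension 0; an empty
    operand simply contributes no elements, as in the HLO test."""
    result = []
    for op in operands:
        result.extend(op)
    return result

assert concatenate_r1([[100, 200], []]) == [100, 200]
assert concatenate_r1([[-1, -2], [100, 200]]) == [-1, -2, 100, 200]
```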
+
TEST_F(HloEvaluatorTest, ConvertWithSameLayout) {
HloComputation::Builder b(TestName());
diff --git a/tensorflow/compiler/xla/service/hlo_instruction.cc b/tensorflow/compiler/xla/service/hlo_instruction.cc
index 24ef4e09e7..ce9e0db77e 100644
--- a/tensorflow/compiler/xla/service/hlo_instruction.cc
+++ b/tensorflow/compiler/xla/service/hlo_instruction.cc
@@ -512,7 +512,6 @@ HloInstruction::CreateSelectAndScatter(
instruction->set_parent(fused_root->parent());
instruction->set_metadata(fused_root->metadata());
instruction->CloneAndFuseInternal(fused_root);
- instruction->CheckFusionInstruction();
return instruction;
}
@@ -636,7 +635,6 @@ HloInstruction* HloInstruction::FuseInstructionInternal(
}
HloInstruction* fused_instruction =
CloneAndFuseInternal(instruction_to_fuse, add_output);
- CheckFusionInstruction();
return fused_instruction;
}
@@ -822,74 +820,6 @@ bool HloInstruction::HasSideEffect() const {
}
}
-void HloInstruction::CheckFusionInstruction() const {
- CHECK_EQ(opcode_, HloOpcode::kFusion);
-
- // The parent fusion instruction of the fusion computation must be 'this'.
- HloComputation* fused_computation = fused_instructions_computation();
- CHECK_EQ(this, fused_computation->FusionInstruction());
-
- // Fused root instruction and fused parameters must all be owned by the fusion
- // computation.
- bool root_owned = false;
- const std::vector<HloInstruction*>& fused_parameters_ = fused_parameters();
- const HloInstruction* fused_root_ = fused_expression_root();
- std::vector<bool> parameter_owned(fused_parameters_.size(), false);
- for (auto& instruction : fused_computation->instructions()) {
- if (fused_root_ == instruction.get()) {
- CHECK(!root_owned);
- root_owned = true;
- }
- for (int i = 0; i < fused_parameters_.size(); ++i) {
- if (fused_parameters_[i] == instruction.get()) {
- CHECK(!parameter_owned[i]);
- parameter_owned[i] = true;
- }
- }
- }
- CHECK(root_owned);
- // Make sure all the parameter_owned entries are set
- for (int i = 0; i < parameter_owned.size(); i++) {
- CHECK(parameter_owned[i]);
- }
-
- // Fused root must have no users.
- CHECK_EQ(0, fused_root_->user_count());
-
- // All uses of fused instructions must be in the fusion computation, and every
- // non-root instruction must have at least one use.
- for (auto& instruction : fused_instructions_computation()->instructions()) {
- if (instruction.get() != fused_root_) {
- CHECK_GT(instruction->user_count(), 0);
- for (auto& user : instruction->users()) {
- CHECK_EQ(fused_computation, user->parent());
- }
- }
- }
-
- // Fused parameter instructions must be numbered contiguously and match up
- // (shapes compatible) with their respective operand.
- CHECK_EQ(operands_.size(), fused_parameters_.size());
- std::vector<bool> parameter_numbers(fused_parameters_.size(), false);
- for (auto fused_param : fused_parameters_) {
- int64 param_no = fused_param->parameter_number();
- CHECK_GE(param_no, 0);
- CHECK_LT(param_no, fused_parameters_.size());
- CHECK(!parameter_numbers[param_no]);
- parameter_numbers[param_no] = true;
- CHECK(ShapeUtil::Compatible(fused_param->shape(),
- operands_[param_no]->shape()));
- }
- // Make sure all the parameter_numbers entries were seen
- for (int i = 0; i < parameter_numbers.size(); i++) {
- CHECK(parameter_numbers[i]);
- }
-
- // Operands must be distinct.
- std::set<HloInstruction*> operand_set(operands_.begin(), operands_.end());
- CHECK_EQ(operand_set.size(), operands_.size());
-}
-
/* static */ std::unique_ptr<HloInstruction> HloInstruction::CreateCall(
const Shape& shape, tensorflow::gtl::ArraySlice<HloInstruction*> operands,
HloComputation* computation) {
@@ -1194,7 +1124,6 @@ std::unique_ptr<HloInstruction> HloInstruction::CloneFusionWithNewOperands(
->AddEmbeddedComputation(
computation_builder.Build(FindOrDie(old_to_new, fused_root_))));
new_instruction->set_parent(parent());
- new_instruction->CheckFusionInstruction();
return new_instruction;
}
diff --git a/tensorflow/compiler/xla/service/hlo_instruction.h b/tensorflow/compiler/xla/service/hlo_instruction.h
index ca6f27bd40..bd8b8ac9bd 100644
--- a/tensorflow/compiler/xla/service/hlo_instruction.h
+++ b/tensorflow/compiler/xla/service/hlo_instruction.h
@@ -900,9 +900,6 @@ class HloInstruction {
// instruction to make it a bitcast.
bool CouldBeBitcast() const;
- // CHECKs various invariants of a fusion instruction.
- void CheckFusionInstruction() const;
-
// Get/Set the number of partitions per outer dimension (in order, starting
// with outer-most dimension first). Currently used by the parallel cpu
// backend to partition HLOs into parallel tasks.
diff --git a/tensorflow/compiler/xla/service/hlo_verifier.cc b/tensorflow/compiler/xla/service/hlo_verifier.cc
index c44be716cd..d40fceb076 100644
--- a/tensorflow/compiler/xla/service/hlo_verifier.cc
+++ b/tensorflow/compiler/xla/service/hlo_verifier.cc
@@ -130,6 +130,8 @@ class ShapeVerifier : public DfsHloVisitor {
}
Status HandleBroadcast(HloInstruction* broadcast) override {
+ TF_RET_CHECK(ShapeUtil::Rank(broadcast->operand(0)->shape()) ==
+ broadcast->dimensions().size());
return tensorflow::Status::OK();
}
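The invariant enforced here is that a broadcast's `dimensions` vector names one output dimension per operand dimension, so its size must equal the operand's rank; for a scalar operand it is empty. A hedged Python sketch of that check (hypothetical helper, not the XLA code):

```python
def valid_broadcast(operand_shape, dimensions, output_shape):
    """Check the broadcast invariant: one entry in `dimensions` per
    operand dimension, each mapping to a matching output dimension."""
    if len(dimensions) != len(operand_shape):  # rank must equal vector size
        return False
    return all(
        0 <= out_dim < len(output_shape)
        and output_shape[out_dim] == operand_shape[i]
        for i, out_dim in enumerate(dimensions))

# Scalar broadcast: empty operand shape, empty dimensions vector.
assert valid_broadcast([], [], [2, 2])
# Rank-1 operand broadcast into dimension 1 of a [3, 4] output.
assert valid_broadcast([4], [1], [3, 4])
assert not valid_broadcast([4], [], [3, 4])  # missing dimensions entry
```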
@@ -290,6 +292,123 @@ string ComputationsToString(
} // namespace
+Status HloVerifier::CheckFusionInstruction(HloInstruction* fusion) const {
+ // The parent fusion instruction of the fusion computation must be 'fusion'.
+ HloComputation* fused_computation = fusion->fused_instructions_computation();
+ if (fusion != fused_computation->FusionInstruction()) {
+ return FailedPrecondition(
+ "Instruction of fused computation does not match expected instruction "
+ "%s.",
+ fusion->ToString().c_str());
+ }
+
+ // Fused root instruction and fused parameters must all be owned by the fusion
+ // computation.
+ bool root_owned = false;
+ const std::vector<HloInstruction*>& fused_parameters =
+ fusion->fused_parameters();
+ const HloInstruction* fused_root = fusion->fused_expression_root();
+ std::vector<bool> parameter_owned(fused_parameters.size(), false);
+ for (auto& instruction : fused_computation->instructions()) {
+ if (fused_root == instruction.get()) {
+ if (root_owned) {
+ return FailedPrecondition("Root appears more than once in %s.",
+ fusion->ToString().c_str());
+ }
+ root_owned = true;
+ }
+ for (int i = 0; i < fused_parameters.size(); ++i) {
+ if (fused_parameters[i] == instruction.get()) {
+ if (parameter_owned[i]) {
+ return FailedPrecondition("Parameter appears more than once in %s.",
+ fusion->ToString().c_str());
+ }
+ parameter_owned[i] = true;
+ }
+ }
+ }
+ if (!root_owned) {
+ return FailedPrecondition("Root not found in computation of %s.",
+ fusion->ToString().c_str());
+ }
+ // Make sure all the parameter_owned entries are set
+ for (int i = 0; i < parameter_owned.size(); i++) {
+ if (!parameter_owned[i]) {
+ return FailedPrecondition("Parameter %d not found in computation of %s.",
+ i, fusion->ToString().c_str());
+ }
+ }
+
+ // Fused root must have no users.
+ if (fused_root->user_count() != 0) {
+ return FailedPrecondition("Root of %s may not have users.",
+ fusion->ToString().c_str());
+ }
+
+ // All uses of fused instructions must be in the fusion computation, and every
+ // non-root instruction must have at least one use.
+ for (auto& instruction :
+ fusion->fused_instructions_computation()->instructions()) {
+ if (instruction.get() != fused_root) {
+ if (instruction->user_count() == 0) {
+ return FailedPrecondition(
+ "Non-root instruction %s in %s must have users.",
+ instruction->ToString().c_str(), fusion->ToString().c_str());
+ }
+ for (auto& user : instruction->users()) {
+ if (fused_computation != user->parent()) {
+ return FailedPrecondition(
+ "Non-root instruction %s in %s may not have external users.",
+ instruction->ToString().c_str(), fusion->ToString().c_str());
+ }
+ }
+ }
+ }
+
+ // Fused parameter instructions must be numbered contiguously and match up
+ // (shapes compatible) with their respective operand.
+ CHECK_EQ(fusion->operands().size(), fused_parameters.size());
+ std::vector<bool> parameter_numbers(fused_parameters.size(), false);
+ for (auto fused_param : fused_parameters) {
+ int64 param_no = fused_param->parameter_number();
+ if (param_no < 0) {
+ return FailedPrecondition(
+ "Unexpected negative parameter number %lld in %s.", param_no,
+ fusion->ToString().c_str());
+ }
+ if (param_no >= fused_parameters.size()) {
+ return FailedPrecondition(
+ "Unexpected parameter number %lld in %s: higher than the number of "
+ "parameters %lu.",
+ param_no, fusion->ToString().c_str(), fused_parameters.size());
+ }
+ if (parameter_numbers[param_no]) {
+ return FailedPrecondition(
+ "Did not expect parameter number %lld more than once in %s.",
+ param_no, fusion->ToString().c_str());
+ }
+ parameter_numbers[param_no] = true;
+ if (!ShapeUtil::Compatible(fused_param->shape(),
+ fusion->operand(param_no)->shape())) {
+ return FailedPrecondition(
+ "Shape mismatch between parameter number %lld and its operand in %s.",
+ param_no, fusion->ToString().c_str());
+ }
+ }
+ // Make sure all the parameter_numbers entries were seen
+ for (int i = 0; i < parameter_numbers.size(); i++) {
+ if (!parameter_numbers[i]) {
+ return FailedPrecondition("Did not see parameter number %d in %s.", i,
+ fusion->ToString().c_str());
+ }
+ }
+
+ // TODO(b/65423525): We'd like to check that all operands are distinct.
+ // This is currently disabled due to the invariant being violated by
+ // multi-output fusion.
+ return tensorflow::Status::OK();
+}
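The parameter-numbering portion of the verifier above reduces to: the fused parameter numbers must form a permutation of 0..N-1 (non-negative, in range, no duplicates, none missing). A small Python sketch of that check, illustrative only:

```python
def parameters_contiguous(param_numbers):
    """Return True iff the parameter numbers are a permutation of
    0..N-1, mirroring the contiguity check in the fusion verifier."""
    n = len(param_numbers)
    seen = [False] * n
    for p in param_numbers:
        if p < 0 or p >= n or seen[p]:
            return False
        seen[p] = True
    return all(seen)

assert parameters_contiguous([0, 1, 2])
assert parameters_contiguous([2, 0, 1])      # order does not matter
assert not parameters_contiguous([0, 0, 2])  # duplicate (and 1 missing)
assert not parameters_contiguous([0, 3])     # out of range
```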
+
StatusOr<bool> HloVerifier::Run(HloModule* module) {
tensorflow::gtl::FlatMap<string, const HloInstruction*> instructions;
ShapeVerifier shape_verifier(shape_size_fn_);
@@ -298,6 +417,7 @@ StatusOr<bool> HloVerifier::Run(HloModule* module) {
for (const auto& instruction : computation->instructions()) {
TF_RET_CHECK(instruction->parent() == computation.get());
if (instruction->opcode() == HloOpcode::kFusion) {
+ TF_RETURN_IF_ERROR(CheckFusionInstruction(instruction.get()));
TF_RET_CHECK(
ContainersEqual(instruction->called_computations(),
{instruction->fused_instructions_computation()}))
diff --git a/tensorflow/compiler/xla/service/hlo_verifier.h b/tensorflow/compiler/xla/service/hlo_verifier.h
index bc6800dae5..e35a7f3642 100644
--- a/tensorflow/compiler/xla/service/hlo_verifier.h
+++ b/tensorflow/compiler/xla/service/hlo_verifier.h
@@ -34,6 +34,9 @@ class HloVerifier : public HloPassInterface {
StatusOr<bool> Run(HloModule* module) override;
private:
+ // CHECKs various invariants of a fusion instruction.
+ Status CheckFusionInstruction(HloInstruction* fusion) const;
+
// Returns the size of a Shape in bytes.
const std::function<int64(const Shape&)> shape_size_fn_;
};
diff --git a/tensorflow/compiler/xla/service/reduce_precision_insertion_test.cc b/tensorflow/compiler/xla/service/reduce_precision_insertion_test.cc
index 607abee33d..064020896e 100644
--- a/tensorflow/compiler/xla/service/reduce_precision_insertion_test.cc
+++ b/tensorflow/compiler/xla/service/reduce_precision_insertion_test.cc
@@ -425,7 +425,6 @@ TEST_F(ReducePrecisionInsertionTest, OpGetsInsertedInHeadOfFusionNode) {
EXPECT_EQ(computation->root_instruction(), z);
HloInstruction* y_fused = z->fused_expression_root();
EXPECT_EQ(y_fused->opcode(), HloOpcode::kCos);
- z->CheckFusionInstruction();
// This should see that the fusion computation contains a kCos operation,
// and insert a new reduce-precision node at its input.
@@ -450,7 +449,6 @@ TEST_F(ReducePrecisionInsertionTest, OpGetsInsertedInHeadOfFusionNode) {
EXPECT_EQ(computation->root_instruction(), z);
EXPECT_THAT(z->fused_expression_root(), y_fused);
EXPECT_THAT(y_fused->operand(0), op::ReducePrecision(op::Parameter()));
- z->CheckFusionInstruction();
}
TEST_F(ReducePrecisionInsertionTest, OpGetsInsertedInTailOfFusionNode) {
@@ -468,7 +466,6 @@ TEST_F(ReducePrecisionInsertionTest, OpGetsInsertedInTailOfFusionNode) {
shape, HloInstruction::FusionKind::kLoop, y));
EXPECT_IS_OK(computation->ReplaceUsesOfInstruction(y, z));
EXPECT_IS_OK(computation->RemoveInstruction(y));
- z->CheckFusionInstruction();
// Confirm expected graph before adding reduce-precision ops.
EXPECT_THAT(x->users(), UnorderedElementsAre(z));
@@ -498,7 +495,6 @@ TEST_F(ReducePrecisionInsertionTest, OpGetsInsertedInTailOfFusionNode) {
EXPECT_THAT(x->users(), UnorderedElementsAre(z));
EXPECT_EQ(computation->root_instruction(), z);
EXPECT_THAT(z->fused_expression_root(), op::ReducePrecision(y_fused));
- z->CheckFusionInstruction();
}
TEST_F(ReducePrecisionInsertionTest, MakeFilterFunctionNoSubstrings) {
diff --git a/tensorflow/compiler/xla/service/user_computation.cc b/tensorflow/compiler/xla/service/user_computation.cc
index 297bfd93d1..858db8fa0e 100644
--- a/tensorflow/compiler/xla/service/user_computation.cc
+++ b/tensorflow/compiler/xla/service/user_computation.cc
@@ -2471,8 +2471,8 @@ HloInstruction* ComputationLowerer::ImplicitBroadcastToExplicitBroadcast(
operand->shape().element_type(), AsInt64Slice(output_shape.dimensions()));
// Do explicit broadcast for scalar.
if (ShapeUtil::IsScalar(operand->shape())) {
- return hlo_builder_.AddInstruction(HloInstruction::CreateBroadcast(
- broadcast_shape, operand, AsInt64Slice(broadcast_shape.dimensions())));
+ return hlo_builder_.AddInstruction(
+ HloInstruction::CreateBroadcast(broadcast_shape, operand, {}));
}
// Do explicit broadcast for degenerate broadcast.
std::vector<int64> broadcast_dimensions;
diff --git a/tensorflow/compiler/xla/tests/multioutput_fusion_test.cc b/tensorflow/compiler/xla/tests/multioutput_fusion_test.cc
index 606d801c84..22d2b917a1 100644
--- a/tensorflow/compiler/xla/tests/multioutput_fusion_test.cc
+++ b/tensorflow/compiler/xla/tests/multioutput_fusion_test.cc
@@ -67,7 +67,7 @@ class MultiOutputFusionTest : public HloTestBase {
elem_shape0, HloOpcode::kAdd, param0, const0));
HloInstruction* broadcast = builder.AddInstruction(
- HloInstruction::CreateBroadcast(elem_shape2, add1, {0, 1}));
+ HloInstruction::CreateBroadcast(elem_shape2, add1, {}));
auto param1 = builder.AddInstruction(
HloInstruction::CreateParameter(1, elem_shape2, "1"));
diff --git a/tensorflow/contrib/BUILD b/tensorflow/contrib/BUILD
index 84fcc0d014..11e4ea888c 100644
--- a/tensorflow/contrib/BUILD
+++ b/tensorflow/contrib/BUILD
@@ -24,6 +24,7 @@ py_library(
"//tensorflow/contrib/deprecated:deprecated_py",
"//tensorflow/contrib/distributions:distributions_py",
"//tensorflow/contrib/eager/python:tfe",
+ "//tensorflow/contrib/estimator:estimator_py",
"//tensorflow/contrib/factorization:factorization_py",
"//tensorflow/contrib/ffmpeg:ffmpeg_ops_py",
"//tensorflow/contrib/framework:framework_py",
diff --git a/tensorflow/contrib/__init__.py b/tensorflow/contrib/__init__.py
index d1d0e2823a..5b3f0b3f6e 100644
--- a/tensorflow/contrib/__init__.py
+++ b/tensorflow/contrib/__init__.py
@@ -29,6 +29,7 @@ from tensorflow.contrib import cudnn_rnn
from tensorflow.contrib import data
from tensorflow.contrib import deprecated
from tensorflow.contrib import distributions
+from tensorflow.contrib import estimator
from tensorflow.contrib import factorization
from tensorflow.contrib import framework
from tensorflow.contrib import gan
diff --git a/tensorflow/contrib/cmake/tf_python.cmake b/tensorflow/contrib/cmake/tf_python.cmake
index 1b706159a3..ce94f718a1 100755
--- a/tensorflow/contrib/cmake/tf_python.cmake
+++ b/tensorflow/contrib/cmake/tf_python.cmake
@@ -218,6 +218,48 @@ add_python_module("tensorflow/python/estimator/inputs/queues")
add_python_module("tensorflow/python/feature_column")
add_python_module("tensorflow/python/framework")
add_python_module("tensorflow/python/grappler")
+add_python_module("tensorflow/python/keras")
+add_python_module("tensorflow/python/keras/activations")
+add_python_module("tensorflow/python/keras/applications")
+add_python_module("tensorflow/python/keras/applications/inception_v3")
+add_python_module("tensorflow/python/keras/applications/mobilenet")
+add_python_module("tensorflow/python/keras/applications/resnet50")
+add_python_module("tensorflow/python/keras/applications/vgg16")
+add_python_module("tensorflow/python/keras/applications/vgg19")
+add_python_module("tensorflow/python/keras/applications/xception")
+add_python_module("tensorflow/python/keras/backend")
+add_python_module("tensorflow/python/keras/callbacks")
+add_python_module("tensorflow/python/keras/constraints")
+add_python_module("tensorflow/python/keras/datasets")
+add_python_module("tensorflow/python/keras/datasets/boston_housing")
+add_python_module("tensorflow/python/keras/datasets/cifar10")
+add_python_module("tensorflow/python/keras/datasets/cifar100")
+add_python_module("tensorflow/python/keras/datasets/imdb")
+add_python_module("tensorflow/python/keras/datasets/mnist")
+add_python_module("tensorflow/python/keras/datasets/reuters")
+add_python_module("tensorflow/python/keras/initializers")
+add_python_module("tensorflow/python/keras/layers")
+add_python_module("tensorflow/python/keras/losses")
+add_python_module("tensorflow/python/keras/metrics")
+add_python_module("tensorflow/python/keras/models")
+add_python_module("tensorflow/python/keras/optimizers")
+add_python_module("tensorflow/python/keras/preprocessing")
+add_python_module("tensorflow/python/keras/preprocessing/image")
+add_python_module("tensorflow/python/keras/preprocessing/sequence")
+add_python_module("tensorflow/python/keras/preprocessing/text")
+add_python_module("tensorflow/python/keras/regularizers")
+add_python_module("tensorflow/python/keras/utils")
+add_python_module("tensorflow/python/keras/wrappers")
+add_python_module("tensorflow/python/keras/wrappers/scikit_learn")
+add_python_module("tensorflow/python/keras/_impl")
+add_python_module("tensorflow/python/keras/_impl/keras")
+add_python_module("tensorflow/python/keras/_impl/keras/applications")
+add_python_module("tensorflow/python/keras/_impl/keras/datasets")
+add_python_module("tensorflow/python/keras/_impl/keras/engine")
+add_python_module("tensorflow/python/keras/_impl/keras/layers")
+add_python_module("tensorflow/python/keras/_impl/keras/preprocessing")
+add_python_module("tensorflow/python/keras/_impl/keras/utils")
+add_python_module("tensorflow/python/keras/_impl/keras/wrappers")
add_python_module("tensorflow/python/kernel_tests")
add_python_module("tensorflow/python/kernel_tests/distributions")
add_python_module("tensorflow/python/layers")
@@ -299,6 +341,9 @@ add_python_module("tensorflow/contrib/distributions/python")
add_python_module("tensorflow/contrib/distributions/python/kernel_tests")
add_python_module("tensorflow/contrib/distributions/python/ops")
add_python_module("tensorflow/contrib/distributions/python/ops/bijectors")
+add_python_module("tensorflow/contrib/estimator")
+add_python_module("tensorflow/contrib/estimator/python")
+add_python_module("tensorflow/contrib/estimator/python/estimator")
add_python_module("tensorflow/contrib/factorization")
add_python_module("tensorflow/contrib/factorization/examples")
add_python_module("tensorflow/contrib/factorization/kernels")
diff --git a/tensorflow/contrib/cmake/tf_tests.cmake b/tensorflow/contrib/cmake/tf_tests.cmake
index eb02f20457..9dff888155 100644
--- a/tensorflow/contrib/cmake/tf_tests.cmake
+++ b/tensorflow/contrib/cmake/tf_tests.cmake
@@ -142,6 +142,7 @@ if (tensorflow_BUILD_PYTHON_TESTS)
"${tensorflow_source_dir}/tensorflow/python/debug/cli/*_test.py"
"${tensorflow_source_dir}/tensorflow/python/debug/lib/*_test.py"
"${tensorflow_source_dir}/tensorflow/python/debug/wrappers/*_test.py"
+ "${tensorflow_source_dir}/tensorflow/contrib/estimator/python/estimator/*_test.py"
"${tensorflow_source_dir}/tensorflow/python/kernel_tests/*.py"
"${tensorflow_source_dir}/tensorflow/python/meta_graph_transform/*_test.py"
"${tensorflow_source_dir}/tensorflow/python/profiler/*_test.py"
@@ -246,6 +247,7 @@ if (tensorflow_BUILD_PYTHON_TESTS)
# Broken tensorboard test due to cmake issues.
"${tensorflow_source_dir}/tensorflow/contrib/data/python/kernel_tests/dataset_constructor_op_test.py"
"${tensorflow_source_dir}/tensorflow/contrib/data/python/kernel_tests/iterator_ops_cluster_test.py" # Needs portpicker
+ "${tensorflow_source_dir}/tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py" # b/65430561
# tensor_forest tests (also note that we exclude the hybrid tests for now)
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/python/kernel_tests/count_extremely_random_stats_op_test.py" # Results in wrong order.
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/python/kernel_tests/sample_inputs_op_test.py" # Results in wrong order.
diff --git a/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py b/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py
index bc4fd10cac..f6eeb01675 100644
--- a/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py
+++ b/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py
@@ -693,6 +693,7 @@ _cudnn_rnn_common_doc_string = """
canonical format.
This is a typical use case:
+
* The user creates a CudnnRNN model.
* The user query that parameter buffer size.
* The user creates a variable of that size that serves as the parameter
diff --git a/tensorflow/contrib/data/BUILD b/tensorflow/contrib/data/BUILD
index 7b916d82c1..c417650a96 100644
--- a/tensorflow/contrib/data/BUILD
+++ b/tensorflow/contrib/data/BUILD
@@ -10,6 +10,7 @@ py_library(
srcs_version = "PY2AND3",
deps = [
"//tensorflow/contrib/data/python/ops:dataset_ops",
+ "//tensorflow/contrib/data/python/ops:sloppy_ops",
"//tensorflow/python:util",
],
)
diff --git a/tensorflow/contrib/data/__init__.py b/tensorflow/contrib/data/__init__.py
index 1c0a5288f7..c74e1369d5 100644
--- a/tensorflow/contrib/data/__init__.py
+++ b/tensorflow/contrib/data/__init__.py
@@ -23,6 +23,8 @@
@@read_batch_features
@@rejection_resample
@@group_by_window
+@@sloppy_interleave
+@@sloppy_map
"""
from __future__ import absolute_import
@@ -38,6 +40,7 @@ from tensorflow.contrib.data.python.ops.dataset_ops import read_batch_features
from tensorflow.contrib.data.python.ops.dataset_ops import rejection_resample
from tensorflow.contrib.data.python.ops.dataset_ops import TextLineDataset
from tensorflow.contrib.data.python.ops.dataset_ops import TFRecordDataset
+from tensorflow.contrib.data.python.ops.sloppy_ops import sloppy_interleave
# pylint: enable=unused-import
from tensorflow.python.util.all_util import remove_undocumented
diff --git a/tensorflow/contrib/data/python/kernel_tests/BUILD b/tensorflow/contrib/data/python/kernel_tests/BUILD
index fb2740ffef..2f93c34502 100644
--- a/tensorflow/contrib/data/python/kernel_tests/BUILD
+++ b/tensorflow/contrib/data/python/kernel_tests/BUILD
@@ -147,6 +147,25 @@ py_test(
)
py_test(
+ name = "sloppy_transformation_dataset_op_test",
+ size = "small",
+ srcs = ["sloppy_transformation_dataset_op_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ "//tensorflow/contrib/data/python/ops:dataset_ops",
+ "//tensorflow/contrib/data/python/ops:sloppy_ops",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:dtypes",
+ "//tensorflow/python:errors",
+ "//tensorflow/python:math_ops",
+ "//tensorflow/python:training",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
name = "list_files_dataset_op_test",
size = "small",
srcs = ["list_files_dataset_op_test.py"],
@@ -228,7 +247,7 @@ py_test(
srcs = ["sql_dataset_op_test.py"],
srcs_version = "PY2AND3",
deps = [
- "//tensorflow/contrib/data",
+ "//tensorflow/contrib/data/python/ops:dataset_ops",
"//tensorflow/python:array_ops",
"//tensorflow/python:client_testlib",
"//tensorflow/python:errors",
diff --git a/tensorflow/contrib/data/python/kernel_tests/map_dataset_op_test.py b/tensorflow/contrib/data/python/kernel_tests/map_dataset_op_test.py
index d05fbb7d28..4c1496ccf9 100644
--- a/tensorflow/contrib/data/python/kernel_tests/map_dataset_op_test.py
+++ b/tensorflow/contrib/data/python/kernel_tests/map_dataset_op_test.py
@@ -16,6 +16,7 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
+from collections import namedtuple
import os
import threading
@@ -489,8 +490,8 @@ class MapDatasetTest(test.TestCase):
dataset_tuple = dataset_ops.Dataset.zip((labels, images))
# convert dataset of tuples to dataset of namedtuples
- Example = namedtuple("Example", ["label", "image"])
- dataset_namedtuple = dataset_tuple.map(Example)
+ example = namedtuple("Example", ["label", "image"])
+ dataset_namedtuple = dataset_tuple.map(example)
def preprocess_tuple(label, image):
image = 2 * image
diff --git a/tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py b/tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py
new file mode 100644
index 0000000000..f9198bacfb
--- /dev/null
+++ b/tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py
@@ -0,0 +1,475 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for the experimental input pipeline ops."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import itertools
+import math
+import threading
+import time
+
+from six.moves import zip_longest
+
+from tensorflow.contrib.data.python.ops import dataset_ops
+from tensorflow.contrib.data.python.ops import sloppy_ops
+from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import errors
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.ops import script_ops
+from tensorflow.python.platform import test
+
+
+class SloppyInterleaveDatasetTest(test.TestCase):
+
+ def setUp(self):
+ self.input_values = array_ops.placeholder(dtypes.int64, shape=[None])
+ self.cycle_length = array_ops.placeholder(dtypes.int64, shape=[])
+ self.block_length = array_ops.placeholder(dtypes.int64, shape=[])
+
+ self.repeat_count = 2
+
+ # Set up threading events used to sequence when items are produced that
+ # are subsequently interleaved. These events allow us to deterministically
+ # simulate slowdowns and force sloppiness.
+ self.read_coordination_events = {}
+ self.write_coordination_events = {}
+ # Input values [4, 5, 6] are the common case for the tests; set defaults.
+ for i in range(4, 7):
+ self.read_coordination_events[i] = threading.Semaphore(0)
+ self.write_coordination_events[i] = threading.Event()
+
+ def map_py_fn(x):
+ self.write_coordination_events[x].wait()
+ self.write_coordination_events[x].clear()
+ self.read_coordination_events[x].release()
+ return x * x
+
+ def map_fn(x):
+ return script_ops.py_func(map_py_fn, [x], x.dtype)
+
+ def interleave_fn(x):
+ dataset = dataset_ops.Dataset.from_tensors(x)
+ dataset = dataset.repeat(x)
+ return dataset.map(map_fn)
+
+ self.dataset = (dataset_ops.Dataset.from_tensor_slices(self.input_values)
+ .repeat(self.repeat_count).apply(
+ sloppy_ops.sloppy_interleave,
+ args=(interleave_fn, self.cycle_length,
+ self.block_length)))
+ self.iterator = self.dataset.make_initializable_iterator()
+ self.init_op = self.iterator.initializer
+ self.next_element = self.iterator.get_next()
+
+ def _interleave(self, lists, cycle_length, block_length):
+ """Python implementation of interleave used for testing."""
+ num_open = 0
+
+ # `all_iterators` acts as a queue of iterators over each element of `lists`.
+ all_iterators = [iter(l) for l in lists]
+
+ # `open_iterators` are the iterators whose elements are currently being
+ # interleaved.
+ open_iterators = []
+ for i in range(cycle_length):
+ if all_iterators:
+ open_iterators.append(all_iterators.pop(0))
+ num_open += 1
+ else:
+ open_iterators.append(None)
+
+ while num_open or all_iterators:
+ for i in range(cycle_length):
+ if open_iterators[i] is None:
+ if all_iterators:
+ open_iterators[i] = all_iterators.pop(0)
+ num_open += 1
+ else:
+ continue
+ for _ in range(block_length):
+ try:
+ yield next(open_iterators[i])
+ except StopIteration:
+ open_iterators[i] = None
+ num_open -= 1
+ break
+
+ def testPythonImplementation(self):
+ input_lists = [[4, 4, 4, 4], [5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4], [5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]]
+
+ # Cycle length 1 acts like `Dataset.flat_map()`.
+ expected_elements = itertools.chain(*input_lists)
+ for expected, produced in zip(expected_elements,
+ self._interleave(input_lists, 1, 1)):
+ self.assertEqual(expected, produced)
+
+ # Cycle length > 1.
+ expected_elements = [
+ 4, 5, 4, 5, 4, 5, 4, 5, 5, 6, 6, 4, 6, 4, 6, 4, 6, 4, 6, 5, 6, 5, 6, 5,
+ 6, 5, 6, 5, 6, 6
+ ]
+ for index, (expected, produced) in enumerate(
+ zip_longest(expected_elements, self._interleave(input_lists, 2, 1))):
+ self.assertEqual(expected, produced, "Values differ at %s. %s != %s" %
+ (index, expected, produced))
+
+ def testPythonImplementationBlockLength(self):
+ input_lists = [[4] * 4, [5] * 5, [6] * 6] * 2
+ expected_elements = [
+ 4, 4, 5, 5, 4, 4, 5, 5, 5, 6, 6, 4, 4, 6, 6, 4, 4, 6, 6, 5, 5, 6, 6, 5,
+ 5, 6, 6, 5, 6, 6
+ ]
+ for index, (expected, produced) in enumerate(
+ zip_longest(expected_elements, self._interleave(input_lists, 2, 2))):
+ self.assertEqual(expected, produced, "Values differ at %s. %s != %s" %
+ (index, expected, produced))
+
+ def testPythonImplementationEmptyLists(self):
+ input_lists = [[4, 4, 4, 4], [], [6, 6, 6, 6, 6, 6], [4, 4, 4, 4], [],
+ [6, 6, 6, 6, 6, 6]]
+
+ expected_elements = [
+ 4, 4, 6, 4, 6, 4, 6, 6, 4, 6, 4, 6, 4, 4, 6, 6, 6, 6, 6, 6
+ ]
+ for index, (expected, produced) in enumerate(
+ zip_longest(expected_elements, self._interleave(input_lists, 2, 1))):
+ self.assertEqual(expected, produced, "Values differ at %s. %s != %s" %
+ (index, expected, produced))
+
+ def _clear_coordination_events(self):
+ for i in range(4, 7):
+ self.read_coordination_events[i] = threading.Semaphore(0)
+ self.write_coordination_events[i].clear()
+
+ def _allow_all_map_threads(self):
+ for i in range(4, 7):
+ self.write_coordination_events[i].set()
+
+ def testSingleThreaded(self):
+ # cycle_length=1, block_length=1 acts like `Dataset.interleave()` and
+ # `Dataset.flat_map()` and is single-threaded. No synchronization required.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 1,
+ self.block_length: 1
+ })
+
+ for expected_element in self._interleave(
+ [[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 1, 1):
+ self.write_coordination_events[expected_element].set()
+ self.assertEqual(expected_element * expected_element,
+ sess.run(self.next_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testTwoThreadsNoContention(self):
+ # num_threads > 1.
+ # Explicit coordination should result in `Dataset.interleave()` behavior.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 1
+ })
+ for i, expected_element in enumerate(
+ self._interleave([[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 2,
+ 1)):
+ self.write_coordination_events[expected_element].set()
+ if done_first_event: # First event starts the worker threads.
+ self.read_coordination_events[expected_element].acquire()
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ self.read_coordination_events[expected_element].acquire()
+ done_first_event = True
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testTwoThreadsNoContentionWithRaces(self):
+ """Tests where all the workers race in producing elements.
+
+ Note: this is in contrast to the previous test, which carefully sequences
+ the execution of the map functions.
+ """
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 1
+ })
+ for i, expected_element in enumerate(
+ self._interleave([[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 2,
+ 1)):
+ if done_first_event: # First event starts the worker threads.
+ self._allow_all_map_threads()
+ self.read_coordination_events[expected_element].acquire()
+ else:
+ self.write_coordination_events[expected_element].set()
+ time.sleep(0.01) # Sleep to consistently "avoid" the race condition.
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ done_first_event = True
+ self.assertTrue(
+ self.read_coordination_events[expected_element].acquire(False))
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testTwoThreadsNoContentionBlockLength(self):
+ # num_threads > 1.
+ # Explicit coordination should result in `Dataset.interleave()` behavior.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 2
+ })
+ for i, expected_element in enumerate(
+ self._interleave([[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 2,
+ 2)):
+ self.write_coordination_events[expected_element].set()
+ if done_first_event: # First event starts the worker threads.
+ self.read_coordination_events[expected_element].acquire()
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ done_first_event = True
+ self.read_coordination_events[expected_element].acquire()
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testTwoThreadsNoContentionWithRacesAndBlocking(self):
+ """Tests where all the workers race in producing elements.
+
+ Note: this is in contrast to the previous test, which carefully sequences
+ the execution of the map functions.
+ """
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 2
+ })
+ for i, expected_element in enumerate(
+ self._interleave([[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 2,
+ 2)):
+ if done_first_event: # First event starts the worker threads.
+ self._allow_all_map_threads()
+ self.read_coordination_events[expected_element].acquire()
+ else:
+ self.write_coordination_events[expected_element].set()
+ time.sleep(0.01) # Sleep to consistently "avoid" the race condition.
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ done_first_event = True
+ self.assertTrue(
+ self.read_coordination_events[expected_element].acquire(False))
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testEmptyInput(self):
+ with self.test_session() as sess:
+ # Empty input.
+ self._clear_coordination_events()
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [],
+ self.cycle_length: 2,
+ self.block_length: 3
+ })
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testNonEmptyInputIntoEmptyOutputs(self):
+ # Non-empty input leading to empty output.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [0, 0, 0],
+ self.cycle_length: 2,
+ self.block_length: 3
+ })
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testPartiallyEmptyOutputs(self):
+ # Mixture of non-empty and empty interleaved datasets.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 0, 6],
+ self.cycle_length: 2,
+ self.block_length: 1
+ })
+ for i, expected_element in enumerate(
+ self._interleave([[4] * 4, [], [6] * 6] * self.repeat_count, 2, 1)):
+ self.write_coordination_events[expected_element].set()
+ if done_first_event: # First event starts the worker threads
+ self.read_coordination_events[expected_element].acquire()
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ done_first_event = True
+ self.read_coordination_events[expected_element].acquire()
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testDelayedOutput(self):
+ # Explicitly control the sequence of events to ensure we correctly avoid
+ # head-of-line blocking.
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 1
+ })
+
+ mis_ordering = [
+ 4, 4, 5, 4, 5, 5, 4, 5, 6, 6, 6, 5, 4, 4, 6, 6, 4, 4, 6, 5, 6, 6, 6,
+ 6, 5, 5, 5, 5, 6, 6
+ ]
+ for element in mis_ordering:
+ self.write_coordination_events[element].set()
+ self.assertEqual(element * element, sess.run(self.next_element))
+ self.assertTrue(self.read_coordination_events[element].acquire(False))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testBlockLengthWithContention(self):
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ done_first_event = False
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 2,
+ self.block_length: 3
+ })
+ # Test against a generating sequence that differs from the uncontended
+ # case, in order to prove sloppy correctness.
+ for i, expected_element in enumerate(
+ self._interleave(
+ [[4] * 4, [5] * 5, [6] * 6] * self.repeat_count,
+ cycle_length=2,
+ block_length=2)):
+ self.write_coordination_events[expected_element].set()
+ if done_first_event: # First event starts the worker threads.
+ self.read_coordination_events[expected_element].acquire()
+ actual_element = sess.run(self.next_element)
+ if not done_first_event:
+ self.read_coordination_events[expected_element].acquire()
+ done_first_event = True
+ self.assertEqual(expected_element * expected_element, actual_element,
+ "At index %s: %s expected, got: %s" %
+ (i, expected_element, actual_element))
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(self.next_element)
+
+ def testEarlyExit(self):
+ # Exiting without consuming all input should not block
+ with self.test_session() as sess:
+ self._clear_coordination_events()
+ sess.run(
+ self.init_op,
+ feed_dict={
+ self.input_values: [4, 5, 6],
+ self.cycle_length: 3,
+ self.block_length: 2
+ })
+ for i in range(4, 7):
+ self.write_coordination_events[i].set()
+ elem = sess.run(self.next_element) # Start all workers
+ # Allow the one successful worker to progress beyond the py_func again.
+ elem = int(math.sqrt(elem))
+ self.write_coordination_events[elem].set()
+ self.read_coordination_events[elem].acquire()
+ # Allow the prefetch to succeed
+ for i in range(4, 7):
+ self.read_coordination_events[i].acquire()
+ self.write_coordination_events[i].set()
+
+ def testTooManyReaders(self):
+
+ def interleave_fn(x):
+ dataset = dataset_ops.Dataset.from_tensors(x)
+ dataset = dataset.repeat(math_ops.cast(x, dtype=dtypes.int64))
+ return dataset
+
+ dataset = dataset_ops.Dataset.from_tensor_slices([4, 5, 6])
+ dataset = dataset.repeat(self.repeat_count)
+ dataset = dataset.apply(
+ sloppy_ops.sloppy_interleave,
+ args=(interleave_fn,),
+ kwargs={"cycle_length": 16,
+ "block_length": 2})
+ iterator = dataset.make_one_shot_iterator()
+
+ with self.test_session() as sess:
+ output_values = []
+ for _ in range(30):
+ output_values.append(sess.run(iterator.get_next()))
+
+ expected_values = self._interleave(
+ [[4] * 4, [5] * 5, [6] * 6] * self.repeat_count, 1, 2)
+ self.assertItemsEqual(output_values, expected_values)
+
+
+if __name__ == "__main__":
+ test.main()
diff --git a/tensorflow/contrib/data/python/ops/BUILD b/tensorflow/contrib/data/python/ops/BUILD
index 8afd122d82..94969c1c70 100644
--- a/tensorflow/contrib/data/python/ops/BUILD
+++ b/tensorflow/contrib/data/python/ops/BUILD
@@ -32,6 +32,21 @@ py_library(
],
)
+py_library(
+ name = "sloppy_ops",
+ srcs = ["sloppy_ops.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":dataset_ops",
+ "//tensorflow/contrib/data/python/framework:function",
+ "//tensorflow/contrib/data/python/util:nest",
+ "//tensorflow/python:dataset_ops_gen",
+ "//tensorflow/python:dtypes",
+ "//tensorflow/python:framework_ops",
+ "//tensorflow/python:platform",
+ ],
+)
+
filegroup(
name = "all_files",
srcs = glob(
diff --git a/tensorflow/contrib/data/python/ops/sloppy_ops.py b/tensorflow/contrib/data/python/ops/sloppy_ops.py
new file mode 100644
index 0000000000..010bd31161
--- /dev/null
+++ b/tensorflow/contrib/data/python/ops/sloppy_ops.py
@@ -0,0 +1,120 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Non-deterministic dataset transformations."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.contrib.data.python.framework import function
+from tensorflow.contrib.data.python.ops import dataset_ops
+from tensorflow.contrib.data.python.util import nest
+from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import gen_dataset_ops
+
+
+class SloppyInterleaveDataset(dataset_ops.Dataset):
+ """A `Dataset` that maps a function over its input and flattens the result."""
+
+ def __init__(self, input_dataset, map_func, cycle_length, block_length):
+ """See `tf.contrib.data.sloppy_interleave()` for details."""
+ super(SloppyInterleaveDataset, self).__init__()
+ self._input_dataset = input_dataset
+
+ @function.Defun(*nest.flatten(input_dataset.output_types))
+ def tf_map_func(*args):
+ """A wrapper for Defun that facilitates shape inference."""
+ # Pass in shape information from the input_dataset.
+ for arg, shape in zip(args, nest.flatten(input_dataset.output_shapes)):
+ arg.set_shape(shape)
+
+ nested_args = nest.pack_sequence_as(input_dataset.output_types, args)
+
+ if nest.is_sequence(nested_args):
+ dataset = map_func(*nested_args)
+ else:
+ dataset = map_func(nested_args)
+
+ if not isinstance(dataset, dataset_ops.Dataset):
+ raise TypeError("`map_func` must return a `Dataset` object.")
+
+ self._output_types = dataset.output_types
+ self._output_shapes = dataset.output_shapes
+
+ return dataset.make_dataset_resource()
+
+ self._map_func = tf_map_func
+ self._map_func.add_to_graph(ops.get_default_graph())
+
+ self._cycle_length = ops.convert_to_tensor(
+ cycle_length, dtype=dtypes.int64, name="cycle_length")
+ self._block_length = ops.convert_to_tensor(
+ block_length, dtype=dtypes.int64, name="block_length")
+
+ def make_dataset_resource(self):
+ return gen_dataset_ops.sloppy_interleave_dataset(
+ self._input_dataset.make_dataset_resource(),
+ self._map_func.captured_inputs,
+ self._cycle_length,
+ self._block_length,
+ f=self._map_func,
+ output_types=nest.flatten(self.output_types),
+ output_shapes=nest.flatten(self.output_shapes))
+
+ @property
+ def output_shapes(self):
+ return self._output_shapes
+
+ @property
+ def output_types(self):
+ return self._output_types
+
+
+def sloppy_interleave(dataset, map_func, cycle_length, block_length):
+ """Maps `map_func` across `dataset`, and interleaves the results.
+
+ The resulting dataset is almost identical to `interleave`. The key
+ difference is that if retrieving a value from a given output iterator would
+ cause `get_next` to block, that iterator is skipped and consumed when it
+ next becomes available. If consuming from all iterators would cause the
+ `get_next` call to block, the call blocks until the first value is
+ available.
+
+ If the underlying datasets produce elements as fast as they are consumed, the
+ `sloppy_interleave` dataset behaves identically to the `interleave` dataset.
+ However, if an underlying dataset would block the consumer, the
+ `sloppy_interleave` dataset can violate the round-robin order (respected by
+ the `interleave` dataset), producing an element from a different underlying
+ dataset instead.
+
+ WARNING: The order of elements in the resulting dataset is not
+ deterministic. Use `Dataset.interleave()` if you want the elements to have a
+ deterministic order.
+
+ Args:
+ dataset: A `Dataset` that produces elements to feed to `map_func`.
+ map_func: A function mapping a nested structure of tensors (having shapes
+ and types defined by `self.output_shapes` and `self.output_types`) to a
+ `Dataset`.
+ cycle_length: The number of threads to interleave from in parallel.
+ block_length: The number of consecutive elements to pull from a thread
+ before advancing to the next thread. Note: sloppy_interleave will
+ skip the remainder of elements in the current block in order to
+ avoid blocking.
+
+ Returns:
+ A `Dataset`.
+ """
+ return SloppyInterleaveDataset(dataset, map_func, cycle_length, block_length)
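For reference, the deterministic ordering that `sloppy_interleave` relaxes can be sketched in plain Python (mirroring the `_interleave` helper in the tests above): round-robin over `cycle_length` open iterators, pulling up to `block_length` elements from each before advancing. `sloppy_interleave` may deviate from this order only when a source would block.

```python
def interleave(lists, cycle_length, block_length):
    """Deterministic reference interleave: round-robin over `cycle_length`
    iterators, taking up to `block_length` items from each before advancing."""
    iterators = [iter(l) for l in lists]
    # Fill the initial cycle of open slots from the queue of iterators.
    open_slots = [iterators.pop(0) if iterators else None
                  for _ in range(cycle_length)]
    num_open = sum(slot is not None for slot in open_slots)
    while num_open or iterators:
        for i in range(cycle_length):
            if open_slots[i] is None:
                if not iterators:
                    continue
                # Refill an exhausted slot from the queue.
                open_slots[i] = iterators.pop(0)
                num_open += 1
            for _ in range(block_length):
                try:
                    yield next(open_slots[i])
                except StopIteration:
                    open_slots[i] = None
                    num_open -= 1
                    break

print(list(interleave([[1, 1], [2, 2, 2]], cycle_length=2, block_length=1)))
# [1, 2, 1, 2, 2]
```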
diff --git a/tensorflow/contrib/eager/python/BUILD b/tensorflow/contrib/eager/python/BUILD
index e29314099d..1b831f8afb 100644
--- a/tensorflow/contrib/eager/python/BUILD
+++ b/tensorflow/contrib/eager/python/BUILD
@@ -2,11 +2,14 @@ licenses(["notice"]) # Apache 2.0
package(default_visibility = ["//tensorflow:internal"])
+load("//tensorflow:tensorflow.bzl", "cuda_py_test")
+
py_library(
name = "tfe",
srcs = ["tfe.py"],
srcs_version = "PY2AND3",
deps = [
+ ":saver",
"//tensorflow/python:framework_ops",
"//tensorflow/python:util",
"//tensorflow/python/eager:backprop",
@@ -18,6 +21,28 @@ py_library(
],
)
+py_library(
+ name = "saver",
+ srcs = ["saver.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ "//tensorflow/python:training",
+ ],
+)
+
+cuda_py_test(
+ name = "saver_test",
+ srcs = ["saver_test.py"],
+ additional_deps = [
+ ":saver",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:platform_test",
+ "//tensorflow/python:variables",
+ ],
+)
+
filegroup(
name = "all_files",
srcs = glob(
diff --git a/tensorflow/contrib/eager/python/saver.py b/tensorflow/contrib/eager/python/saver.py
new file mode 100644
index 0000000000..12c902a4b6
--- /dev/null
+++ b/tensorflow/contrib/eager/python/saver.py
@@ -0,0 +1,122 @@
+"""Saver for eager mode TensorFlow."""
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import contextlib
+
+from tensorflow.python.framework import errors
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.training import checkpoint_utils
+from tensorflow.python.training import saver as _saver
+
+
+def _init_from_checkpoint(self, *args, **kwargs):
+ """Overrides default init by loading value from checkpoint."""
+ self.old_init(*args, **kwargs)
+ # pylint: disable=protected-access
+ if self._shared_name not in self.ckpt_var_cache:
+ raise errors.NotFoundError(None, None,
+ "%s not found in checkpoint" % self._shared_name)
+
+ val = self.ckpt_var_cache[self._shared_name]
+ if val is not None:
+ self.assign(self.ckpt_var_cache[self._shared_name])
+ # Avoid assigning for the second time.
+ self.ckpt_var_cache[self._shared_name] = None
+ # pylint: enable=protected-access
+
+
+class Saver(object):
+ """A simple tf.train.Saver adapter for eager mode.
+
+ The save and restore APIs are similar to those of tf.train.Saver, except
+ that a session is not needed.
+
+ restore_on_create is eager mode's way to reload checkpoint values during
+ execution (unlike graph mode, which restores before running).
+
+ Args:
+ var_list: See tf.train.Saver. Works the same for save/restore. Ignored
+ by restore_on_create.
+ """
+
+ def __init__(self, var_list=None):
+ self._saver = _saver.Saver(var_list=var_list)
+
+ def save(self, save_path, global_step=None):
+ """Saves variables.
+
+ Args:
+ save_path: See save method in tf.train.Saver.
+ global_step: See save method in tf.train.Saver.
+
+ Returns:
+ See save method in tf.train.Saver.
+ """
+ return self._saver.save(None, save_path, global_step=global_step)
+
+ def restore(self, save_path):
+ """Restores previously saved variables.
+
+ Args:
+ save_path: See restore method in tf.train.Saver.
+ """
+ self._saver.restore(None, save_path)
+
+ @contextlib.contextmanager
+ def maybe_restore_on_create(self, save_path):
+ """ContextManager that restores variables on creation.
+
+ When save_path is None (e.g. no checkpoint), this does nothing.
+ Otherwise, it preloads all values from checkpoint. When the
+ corresponding variable is first created, it assigns the checkpoint
+ value to the variable.
+
+ Args:
+ save_path: Same as the save_path of restore. If None, do not restore.
+
+ Yields:
+ Nothing.
+
+ Raises:
+ NotFoundError: If the variable is not found in checkpoint.
+ """
+ if save_path:
+ ckpt_var_cache = dict()
+ reader = checkpoint_utils.load_checkpoint(save_path)
+ for k, _ in checkpoint_utils.list_variables(save_path):
+ ckpt_var_cache[k] = reader.get_tensor(k)
+
+ old_init = getattr(
+ resource_variable_ops.ResourceVariable, "_init_from_args", None)
+ assert old_init, "ResourceVariable misses _init_from_args method."
+ setattr(resource_variable_ops.ResourceVariable, "_init_from_args",
+ _init_from_checkpoint)
+ setattr(resource_variable_ops.ResourceVariable, "old_init", old_init)
+ setattr(resource_variable_ops.ResourceVariable, "ckpt_var_cache",
+ ckpt_var_cache)
+ try:
+ yield
+ finally:
+ if save_path:
+ setattr(resource_variable_ops.ResourceVariable, "_init_from_args",
+ old_init)
+ setattr(resource_variable_ops.ResourceVariable, "old_init", None)
+ setattr(resource_variable_ops.ResourceVariable, "ckpt_var_cache", None)
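The restore-on-create mechanism above works by temporarily monkey-patching `ResourceVariable._init_from_args` inside a context manager. A minimal TF-free sketch of the same technique (the `Variable` class and all names here are illustrative stand-ins, not the TF API):

```python
import contextlib

class Variable(object):
    """Hypothetical stand-in for ResourceVariable, for illustration only."""

    def _init_from_args(self, name, value):
        self.name, self.value = name, value

    def __init__(self, name, value):
        self._init_from_args(name, value)

@contextlib.contextmanager
def restore_on_create(checkpoint):
    """Temporarily patch Variable._init_from_args so that new variables take
    their value from `checkpoint` (a dict) instead of the given initializer."""
    old_init = Variable._init_from_args

    def patched(self, name, value):
        old_init(self, name, value)
        if name not in checkpoint:
            raise KeyError("%s not found in checkpoint" % name)
        self.value = checkpoint[name]

    Variable._init_from_args = patched
    try:
        yield
    finally:
        # Always restore the original initializer, even on error.
        Variable._init_from_args = old_init

with restore_on_create({"v1": 1.0}):
    v = Variable("v1", 2.0)  # initializer 2.0 is overridden by the checkpoint
print(v.value)  # 1.0
```

Outside the context manager the original initializer is back in place, so later variables keep their given values.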
diff --git a/tensorflow/contrib/eager/python/saver_test.py b/tensorflow/contrib/eager/python/saver_test.py
new file mode 100644
index 0000000000..b8ff566ec2
--- /dev/null
+++ b/tensorflow/contrib/eager/python/saver_test.py
@@ -0,0 +1,88 @@
+"""Tests for eager mode Saver."""
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from tensorflow.contrib.eager.python import saver as _saver
+from tensorflow.python.eager import context
+from tensorflow.python.framework import errors
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.platform import test
+
+
+class SaverTest(test.TestCase):
+
+ def testBasics(self):
+ with context.eager_mode():
+ v1 = resource_variable_ops.ResourceVariable(1.0, name='v1')
+ def model():
+ return array_ops.constant(2.0) * v1
+
+ ckpt_prefix = os.path.join(test.get_temp_dir(), 'ckpt')
+
+ _ = model()
+ saver = _saver.Saver()
+ saver.save(ckpt_prefix)
+ v1.assign(2.0)
+ self.assertEqual(v1.read_value().numpy(), 2.0)
+
+ saver.restore(ckpt_prefix)
+ self.assertEqual(v1.read_value().numpy(), 1.0)
+
+ def testRestoreOnCreate(self):
+ with context.eager_mode():
+ def model(init_val):
+ v1 = resource_variable_ops.ResourceVariable(init_val, name='v1')
+ return array_ops.constant(1.0) * v1
+
+ ckpt_prefix = os.path.join(test.get_temp_dir(), 'ckpt')
+ _ = model(1.0)
+ saver = _saver.Saver()
+ saver.save(ckpt_prefix)
+
+ with ops.Graph().as_default():
+ saver = _saver.Saver()
+ with saver.maybe_restore_on_create(ckpt_prefix):
+ # Value is from checkpoint, but not from argument.
+ ret = model(2.0)
+ self.assertEqual(ret.numpy(), 1.0)
+ # Creating it a second time won't re-assign the checkpoint value.
+ v1_2 = resource_variable_ops.ResourceVariable(3.0, name='v1')
+ self.assertEqual(v1_2.read_value().numpy(), 3.0)
+
+ def testRestoreNotFound(self):
+ with context.eager_mode():
+ def model(v):
+ return array_ops.constant(1.0) * v
+
+ ckpt_prefix = os.path.join(test.get_temp_dir(), 'ckpt')
+ _ = model(resource_variable_ops.ResourceVariable(1.0, name='v1'))
+ saver = _saver.Saver()
+ saver.save(ckpt_prefix)
+
+ with self.assertRaisesRegexp(errors.NotFoundError,
+ 'v2 not found in checkpoint'):
+ with saver.maybe_restore_on_create(ckpt_prefix):
+ _ = model(resource_variable_ops.ResourceVariable(1.0, name='v2'))
+
+
+if __name__ == '__main__':
+ test.main()
diff --git a/tensorflow/contrib/eager/python/tfe.py b/tensorflow/contrib/eager/python/tfe.py
index aa0276dfd9..2c7494a0a8 100644
--- a/tensorflow/contrib/eager/python/tfe.py
+++ b/tensorflow/contrib/eager/python/tfe.py
@@ -42,6 +42,8 @@ To use, at program startup, call `tfe.enable_eager_execution()`.
@@inf_nan_callback
@@nan_callback
@@seterr
+
+@@Saver
"""
from __future__ import absolute_import
@@ -51,6 +53,7 @@ from __future__ import print_function
# pylint:disable=g-bad-import-order,g-import-not-at-top,unused-import
#
+from tensorflow.contrib.eager.python.saver import Saver
from tensorflow.python.util.all_util import remove_undocumented
from tensorflow.python.eager import backprop
from tensorflow.python.eager.custom_gradient import custom_gradient
diff --git a/tensorflow/contrib/estimator/BUILD b/tensorflow/contrib/estimator/BUILD
new file mode 100644
index 0000000000..46cdf086dd
--- /dev/null
+++ b/tensorflow/contrib/estimator/BUILD
@@ -0,0 +1,61 @@
+package(
+ default_visibility = [
+ "//tensorflow:internal",
+ ],
+)
+
+licenses(["notice"]) # Apache 2.0
+
+load("//tensorflow:tensorflow.bzl", "py_test")
+
+filegroup(
+ name = "all_files",
+ srcs = glob(
+ ["**/*"],
+ exclude = [
+ "**/METADATA",
+ "**/OWNERS",
+ ],
+ ),
+ visibility = ["//tensorflow:__subpackages__"],
+)
+
+py_library(
+ name = "estimator_py",
+ srcs = ["__init__.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":extenders",
+ ],
+)
+
+py_library(
+ name = "extenders",
+ srcs = [
+ "python/estimator/extenders.py",
+ ],
+ srcs_version = "PY2AND3",
+ deps = [
+ "//tensorflow/python:util",
+ "//tensorflow/python/estimator",
+ "//tensorflow/python/estimator:model_fn",
+ "//tensorflow/python/estimator:util",
+ ],
+)
+
+py_test(
+ name = "extenders_test",
+ size = "small",
+ srcs = ["python/estimator/extenders_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":extenders",
+ "//tensorflow/contrib/data/python/ops:dataset_ops",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:metrics",
+ "//tensorflow/python/estimator",
+ "//tensorflow/python/estimator:linear",
+ "//tensorflow/python/feature_column",
+ "//third_party/py/numpy",
+ ],
+)
diff --git a/tensorflow/contrib/estimator/__init__.py b/tensorflow/contrib/estimator/__init__.py
new file mode 100644
index 0000000000..9180a3acc3
--- /dev/null
+++ b/tensorflow/contrib/estimator/__init__.py
@@ -0,0 +1,29 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Experimental utilities for tf.estimator.*."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=unused-import,line-too-long,wildcard-import
+from tensorflow.contrib.estimator.python.estimator.extenders import *
+
+from tensorflow.python.util.all_util import remove_undocumented
+# pylint: enable=unused-import,line-too-long,wildcard-import
+
+_allowed_symbols = ['add_metrics']
+
+remove_undocumented(__name__, allowed_exception_list=_allowed_symbols)
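`remove_undocumented` prunes the module namespace down to the allowlist. A minimal, hypothetical sketch of the idea (not the real `tensorflow.python.util.all_util` implementation, which operates on a live module object):

```python
def remove_undocumented_sketch(namespace, allowed):
    # Keep only allowlisted public names; underscore-prefixed names
    # are private and left alone, everything else is dropped.
    return {name: value for name, value in namespace.items()
            if name in allowed or name.startswith('_')}

ns = {'add_metrics': object(), 'extenders': object(), '_private': 1}
pruned = remove_undocumented_sketch(ns, allowed={'add_metrics'})
# 'extenders' is dropped; 'add_metrics' and '_private' survive.
```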
diff --git a/tensorflow/contrib/estimator/python/estimator/extenders.py b/tensorflow/contrib/estimator/python/estimator/extenders.py
new file mode 100644
index 0000000000..45dd9ef70d
--- /dev/null
+++ b/tensorflow/contrib/estimator/python/estimator/extenders.py
@@ -0,0 +1,124 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Extenders of tf.estimator.Estimator."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.estimator import estimator as estimator_lib
+from tensorflow.python.estimator import model_fn as model_fn_lib
+from tensorflow.python.estimator import util as estimator_util
+from tensorflow.python.util import tf_inspect
+
+_VALID_METRIC_FN_ARGS = set(['features', 'labels', 'predictions', 'config'])
+
+
+def add_metrics(estimator, metric_fn):
+ """Creates a new ${tf.estimator.Estimator} which has the given metrics.
+
+ Example:
+
+ ```python
+ def my_auc(labels, predictions):
+ return {'auc': tf.metrics.auc(labels, predictions['logistic'])}
+
+ estimator = tf.estimator.DNNClassifier(...)
+ estimator = tf.contrib.estimator.add_metrics(estimator, my_auc)
+ estimator.train(...)
+ estimator.evaluate(...)
+ ```
+ Example usage of a custom metric that uses features:
+
+ ```python
+ def my_auc(features, labels, predictions):
+ return {'auc': tf.metrics.auc(
+ labels, predictions['logistic'], weights=features['weight'])}
+
+ estimator = tf.estimator.DNNClassifier(...)
+ estimator = tf.contrib.estimator.add_metrics(estimator, my_auc)
+ estimator.train(...)
+ estimator.evaluate(...)
+ ```
+
+ Args:
+ estimator: A ${tf.estimator.Estimator} object.
+ metric_fn: A function which should obey the following signature:
+ - Args: can only have the following four arguments, in any order:
+ * predictions: Predictions `Tensor` or dict of `Tensor` created by given
+ `estimator`.
+ * features: Input `dict` of `Tensor` objects created by `input_fn` which
+ is given to `estimator.evaluate` as an argument.
+ * labels: Labels `Tensor` or dict of `Tensor` created by `input_fn`
+ which is given to `estimator.evaluate` as an argument.
+ * config: config attribute of the `estimator`.
+ - Returns:
+ Dict of metric results keyed by name. Final metrics are a union of this
+ and the `estimator`'s existing metrics. If there is a name conflict, the
+ metrics returned here override the existing ones. The values of the dict
+ are the results of calling a metric function, namely a
+ `(metric_tensor, update_op)` tuple.
+
+ Returns:
+ A new ${tf.estimator.Estimator} whose metrics are the union of the
+ original metrics and the given ones.
+ """
+ _verify_metric_fn_args(metric_fn)
+
+ def new_model_fn(features, labels, mode):
+ spec = _get_model_fn(estimator)(features, labels, mode)
+ if mode != model_fn_lib.ModeKeys.EVAL:
+ return spec
+ new_metrics = _call_metric_fn(metric_fn, features, labels, spec.predictions,
+ estimator.config)
+ all_metrics = spec.eval_metric_ops or {}
+ all_metrics.update(new_metrics)
+ return spec._replace(eval_metric_ops=all_metrics)
+
+ return estimator_lib.Estimator(
+ model_fn=new_model_fn,
+ model_dir=estimator.model_dir,
+ config=estimator.config)
+
+
+# TODO(ispir): Move this to tf.estimator.Estimator.
+def _get_model_fn(estimator):
+ return estimator._call_model_fn # pylint: disable=protected-access
+
+
+def _verify_metric_fn_args(metric_fn):
+ args = set(estimator_util.fn_args(metric_fn))
+ if tf_inspect.ismethod(metric_fn):
+ if 'self' in args:
+ args.remove('self')
+ invalid_args = list(args - _VALID_METRIC_FN_ARGS)
+ if invalid_args:
+ raise ValueError('metric_fn (%s) has the following unexpected args: %s' %
+ (metric_fn, invalid_args))
+
+
+def _call_metric_fn(metric_fn, features, labels, predictions, config):
+ """Calls metric fn with proper arguments."""
+ metric_fn_args = estimator_util.fn_args(metric_fn)
+ kwargs = {}
+ if 'features' in metric_fn_args:
+ kwargs['features'] = features
+ if 'labels' in metric_fn_args:
+ kwargs['labels'] = labels
+ if 'predictions' in metric_fn_args:
+ kwargs['predictions'] = predictions
+ if 'config' in metric_fn_args:
+ kwargs['config'] = config
+ return metric_fn(**kwargs)
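The argument filtering in `_call_metric_fn` above can be sketched standalone. This minimal sketch uses `inspect.signature` in place of the internal `estimator_util.fn_args` helper (an assumption for self-containment; the real helper also handles partials and callables):

```python
import inspect

_VALID_METRIC_FN_ARGS = {'features', 'labels', 'predictions', 'config'}


def call_metric_fn(metric_fn, features, labels, predictions, config):
    # Pass only the arguments metric_fn actually declares, mirroring
    # the kwarg filtering in _call_metric_fn.
    available = {'features': features, 'labels': labels,
                 'predictions': predictions, 'config': config}
    args = set(inspect.signature(metric_fn).parameters)
    invalid = args - _VALID_METRIC_FN_ARGS
    if invalid:
        raise ValueError('metric_fn has unexpected args: %s' % sorted(invalid))
    kwargs = {name: value for name, value in available.items() if name in args}
    return metric_fn(**kwargs)


def my_metric(labels, predictions):
    return {'diff': predictions - labels}


print(call_metric_fn(my_metric, features=None, labels=1.0,
                     predictions=3.0, config=None))
# → {'diff': 2.0}
```

Because unknown parameter names are rejected up front, a typo in a metric function fails loudly at wiring time rather than silently receiving no value.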
diff --git a/tensorflow/contrib/estimator/python/estimator/extenders_test.py b/tensorflow/contrib/estimator/python/estimator/extenders_test.py
new file mode 100644
index 0000000000..422c16d24e
--- /dev/null
+++ b/tensorflow/contrib/estimator/python/estimator/extenders_test.py
@@ -0,0 +1,135 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for extenders."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+from tensorflow.contrib.data.python.ops import dataset_ops
+from tensorflow.contrib.estimator.python.estimator import extenders
+from tensorflow.python.estimator import run_config
+from tensorflow.python.estimator.canned import linear
+from tensorflow.python.feature_column import feature_column as fc
+from tensorflow.python.framework import constant_op
+from tensorflow.python.ops import metrics as metrics_lib
+from tensorflow.python.platform import test
+
+
+def get_input_fn(x, y):
+
+ def input_fn():
+ dataset = dataset_ops.Dataset.from_tensor_slices({'x': x, 'y': y})
+ iterator = dataset.make_one_shot_iterator()
+ features = iterator.get_next()
+ labels = features.pop('y')
+ return features, labels
+
+ return input_fn
+
+
+class AddMetricsTest(test.TestCase):
+
+ def test_should_add_metrics(self):
+ input_fn = get_input_fn(
+ x=np.arange(4)[:, None, None], y=np.ones(4)[:, None])
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+
+ def metric_fn(features):
+ return {'mean_x': metrics_lib.mean(features['x'])}
+
+ estimator = extenders.add_metrics(estimator, metric_fn)
+
+ estimator.train(input_fn=input_fn)
+ metrics = estimator.evaluate(input_fn=input_fn)
+ self.assertIn('mean_x', metrics)
+ self.assertEqual(1.5, metrics['mean_x'])
+ # Assert that it keeps the original estimator's metrics.
+ self.assertIn('auc', metrics)
+
+ def test_should_error_out_for_not_recognized_args(self):
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+
+ def metric_fn(features, not_recognized):
+ _, _ = features, not_recognized
+ return {}
+
+ with self.assertRaisesRegexp(ValueError, 'not_recognized'):
+ estimator = extenders.add_metrics(estimator, metric_fn)
+
+ def test_all_supported_args(self):
+ input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+
+ def metric_fn(features, predictions, labels, config):
+ self.assertIn('x', features)
+ self.assertIsNotNone(labels)
+ self.assertIn('logistic', predictions)
+ self.assertTrue(isinstance(config, run_config.RunConfig))
+ return {}
+
+ estimator = extenders.add_metrics(estimator, metric_fn)
+
+ estimator.train(input_fn=input_fn)
+ estimator.evaluate(input_fn=input_fn)
+
+ def test_all_supported_args_in_different_order(self):
+ input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+
+ def metric_fn(labels, config, features, predictions):
+ self.assertIn('x', features)
+ self.assertIsNotNone(labels)
+ self.assertIn('logistic', predictions)
+ self.assertTrue(isinstance(config, run_config.RunConfig))
+ return {}
+
+ estimator = extenders.add_metrics(estimator, metric_fn)
+
+ estimator.train(input_fn=input_fn)
+ estimator.evaluate(input_fn=input_fn)
+
+ def test_all_args_are_optional(self):
+ input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+
+ def metric_fn():
+ return {'two': metrics_lib.mean(constant_op.constant([2.]))}
+
+ estimator = extenders.add_metrics(estimator, metric_fn)
+
+ estimator.train(input_fn=input_fn)
+ metrics = estimator.evaluate(input_fn=input_fn)
+ self.assertEqual(2., metrics['two'])
+
+ def test_overrides_existing_metrics(self):
+ input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
+ estimator = linear.LinearClassifier([fc.numeric_column('x')])
+ estimator.train(input_fn=input_fn)
+ metrics = estimator.evaluate(input_fn=input_fn)
+ self.assertNotEqual(2., metrics['auc'])
+
+ def metric_fn():
+ return {'auc': metrics_lib.mean(constant_op.constant([2.]))}
+
+ estimator = extenders.add_metrics(estimator, metric_fn)
+ metrics = estimator.evaluate(input_fn=input_fn)
+ self.assertEqual(2., metrics['auc'])
+
+
+if __name__ == '__main__':
+ test.main()
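The override semantics exercised by `test_overrides_existing_metrics` come down to a plain dict union, as in `new_model_fn` (`all_metrics = spec.eval_metric_ops or {}` followed by `all_metrics.update(new_metrics)`). A minimal sketch, with plain tuples standing in for real `(metric_tensor, update_op)` pairs:

```python
def merge_metrics(existing, new):
    # Union of existing and new metrics; on a name conflict the
    # new metric wins, because dict.update overwrites in place.
    merged = dict(existing or {})
    merged.update(new)
    return merged


existing = {'auc': ('auc_tensor', 'auc_update_op')}
new = {'auc': ('override_tensor', 'override_op'),
       'mean_x': ('mean_tensor', 'mean_update_op')}
merged = merge_metrics(existing, new)
# 'auc' is overridden by the new entry; 'mean_x' is added alongside it.
```

The `existing or {}` guard also covers the case where the wrapped `EstimatorSpec` has no `eval_metric_ops` at all.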
diff --git a/tensorflow/contrib/fused_conv/BUILD b/tensorflow/contrib/fused_conv/BUILD
index f5d21278db..9b34cf1bdb 100644
--- a/tensorflow/contrib/fused_conv/BUILD
+++ b/tensorflow/contrib/fused_conv/BUILD
@@ -60,12 +60,14 @@ tf_kernel_library(
srcs = [
"kernels/fused_conv2d_bias_activation_op.cc",
"kernels/fused_conv2d_bias_activation_op.h",
+ "kernels/fused_conv_ops_gpu.h",
],
prefix = "fused_conv2d_bias_activation_op",
deps = [
"//tensorflow/core:framework",
"//tensorflow/core:lib",
"//tensorflow/core:lib_proto_parsing",
+ "//tensorflow/core:stream_executor",
"//tensorflow/core/kernels:bounds_check_lib",
"//tensorflow/core/kernels:conv_2d_hdrs",
"//tensorflow/core/kernels:conv_ops_gpu_hdrs",
@@ -81,6 +83,7 @@ tf_custom_op_library(
srcs = [
"kernels/fused_conv2d_bias_activation_op.cc",
"kernels/fused_conv2d_bias_activation_op.h",
+ "kernels/fused_conv_ops_gpu.h",
"ops/fused_conv2d_bias_activation_op.cc",
],
deps = [
@@ -94,12 +97,8 @@ tf_custom_op_library(
)
tf_gen_op_libs(
- op_lib_names = [
- "fused_conv2d_bias_activation_op",
- ],
- deps = [
- "//tensorflow/core:lib_proto_parsing",
- ],
+ op_lib_names = ["fused_conv2d_bias_activation_op"],
+ deps = ["//tensorflow/core:lib_proto_parsing"],
)
tf_gen_op_wrapper_py(
@@ -109,7 +108,7 @@ tf_gen_op_wrapper_py(
cuda_py_test(
name = "fused_conv2d_bias_activation_op_test",
- size = "small",
+ size = "large",
srcs = ["python/ops/fused_conv2d_bias_activation_op_test.py"],
additional_deps = [
":fused_conv_py",
diff --git a/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.cc b/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.cc
index dc0701b234..675ff2be38 100644
--- a/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.cc
+++ b/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.cc
@@ -13,8 +13,6 @@ See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
-#define EIGEN_USE_THREADS
-
#if GOOGLE_CUDA
#define EIGEN_USE_GPU
#endif // GOOGLE_CUDA
@@ -31,8 +29,8 @@ limitations under the License.
#include "tensorflow/core/kernels/conv_2d.h"
#include "tensorflow/core/kernels/ops_util.h"
#include "tensorflow/core/lib/core/errors.h"
+#include "tensorflow/core/lib/strings/strcat.h"
#include "tensorflow/core/util/padding.h"
-#include "tensorflow/core/util/tensor_format.h"
#include "tensorflow/core/util/use_cudnn.h"
#if GOOGLE_CUDA
@@ -40,38 +38,84 @@ limitations under the License.
#include "tensorflow/core/platform/stream_executor.h"
#include "tensorflow/core/util/activation_mode.h"
#endif // GOOGLE_CUDA
+
namespace tensorflow {
-typedef Eigen::ThreadPoolDevice CPUDevice;
typedef Eigen::GpuDevice GPUDevice;
-template <typename Device, typename T>
-struct LaunchConvOp;
+template <typename T>
+struct RawType {
+ using type = T;
+};
+
+template <>
+struct RawType<qint8> {
+ using type = int8;
+};
+
+// Template struct to convert int8x4 to int32.
+// (for NCHW_VECT_C with element type int8, we can consider it to be
+// an NCHW layout with element type int32 for operations like padding).
+template <typename T>
+struct Int8x4ToInt32 {
+ // By default, do not change T.
+ using type = T;
+};
+
+template <>
+struct Int8x4ToInt32<int8> {
+ using type = int32;
+};
-template <typename Device, typename T>
+// T is the element type of the conv_input, filter and side_input tensors.
+// BiasType is the element type of the bias tensor, which can be different.
+// ScaleType is the type used for conv_input_scale, side_input_scale.
+template <typename Device, typename T, typename BiasType, typename ScaleType>
class FusedConv2DBiasActivationOp : public OpKernel {
public:
explicit FusedConv2DBiasActivationOp(OpKernelConstruction* context)
: OpKernel(context) {
- string data_format;
- OP_REQUIRES_OK(context, context->GetAttr("data_format", &data_format));
- OP_REQUIRES(context, FormatFromString(data_format, &data_format_),
+ string data_format_str, filter_format_str;
+ OP_REQUIRES_OK(context, context->GetAttr("data_format", &data_format_str));
+ OP_REQUIRES(context, FormatFromString(data_format_str, &data_format_),
errors::InvalidArgument("Invalid data format"));
+ OP_REQUIRES_OK(context,
+ context->GetAttr("filter_format", &filter_format_str));
OP_REQUIRES(context,
- (data_format_ == FORMAT_NHWC || data_format_ == FORMAT_NCHW),
- errors::InvalidArgument("Current implementation only supports "
- "NHWC and NCHW data formats."));
- OP_REQUIRES_OK(context, context->GetAttr("strides", &strides_));
- OP_REQUIRES(context, strides_.size() == 4,
+ FilterFormatFromString(filter_format_str, &filter_format_),
+ errors::InvalidArgument("Invalid filter format"));
+
+ std::vector<int32> strides;
+ OP_REQUIRES_OK(context, context->GetAttr("strides", &strides));
+ OP_REQUIRES(context, strides.size() == 4,
errors::InvalidArgument("Sliding window strides field must "
"specify 4 dimensions"));
+
+ stride_rows_ = GetTensorDim(strides, data_format_, 'H');
+ stride_cols_ = GetTensorDim(strides, data_format_, 'W');
OP_REQUIRES(
context,
- (GetTensorDim(strides_, data_format_, 'N') == 1 &&
- GetTensorDim(strides_, data_format_, 'C') == 1),
- errors::InvalidArgument("Current implementation does not yet support "
- "strides in the batch and depth dimensions."));
- OP_REQUIRES_OK(context, context->GetAttr("padding", &padding_));
+ (GetTensorDim(strides, data_format_, 'N') == 1 &&
+ GetTensorDim(strides, data_format_, 'C') == 1),
+ errors::InvalidArgument("Convolutional strides are not supported in "
+ "the batch or depth dimensions."));
+
+ // Assuming qint8 <--> NCHW_VECT_C, OIHW_VECT_I (int8x4) here.
+ constexpr bool is_int8x4 = std::is_same<T, qint8>::value;
+
+ // Note: Only NCHW_VECT_C format is supported for int8.
+ // This is because it is expected to be the fastest, and our previous tests
+ // found cudnn 6 does not fully support the other formats for int8 mode.
+ OP_REQUIRES(context, (is_int8x4 == (data_format_ == FORMAT_NCHW_VECT_C)),
+ errors::InvalidArgument(
+ "qint8 should be used with data_format NCHW_VECT_C."));
+
+ OP_REQUIRES(context, (is_int8x4 == (filter_format_ == FORMAT_OIHW_VECT_I)),
+ errors::InvalidArgument(
+ "qint8 should be used with filter_format OIHW_VECT_I."));
+
+ OP_REQUIRES_OK(context, context->GetAttr("padding", &padding_type_));
+ eigen_padding_type_ = BrainPadding2EigenPadding(padding_type_);
string activation_mode_str;
OP_REQUIRES_OK(context,
context->GetAttr("activation_mode", &activation_mode_str));
@@ -79,130 +123,111 @@ class FusedConv2DBiasActivationOp : public OpKernel {
&activation_mode_));
OP_REQUIRES(context, activation_mode_ == ActivationMode::RELU,
errors::InvalidArgument("Current implementation only supports "
- "relu as the activation mode."));
+ "RELU as the activation function."));
cudnn_use_autotune_ = CudnnUseAutotune();
+ float conv_input_scale_flt, side_input_scale_flt;
+ OP_REQUIRES_OK(context,
+ context->GetAttr("conv_input_scale", &conv_input_scale_flt));
+ OP_REQUIRES_OK(context,
+ context->GetAttr("side_input_scale", &side_input_scale_flt));
+ conv_input_scale_ = conv_input_scale_flt;
+ side_input_scale_ = side_input_scale_flt;
+ }
+
+ Status CheckShape(const Tensor& tensor, const string& tensor_name) {
+ const int num_dims = tensor.dims();
+ for (int i = 0; i < num_dims; i++) {
+ if (!FastBoundsCheck(tensor.dim_size(i),
+ std::numeric_limits<int32>::max())) {
+ return errors::InvalidArgument(tensor_name, " dimension ", i,
+ " too large");
+ }
+ }
+ // If there is a 5th dimension it is the VECT_C or VECT_I dimension.
+ if (num_dims == 5 && tensor.dim_size(4) != 4) {
+ return errors::InvalidArgument("The last dimension of ", tensor_name,
+ " must be of size 4 for qint8.");
+ }
+ return Status::OK();
}
void Compute(OpKernelContext* context) override {
- // Input tensor is one of the following shapes:
- // [ batch, in_rows, in_cols, in_depth ] (for NHWC data format)
- // [ batch, in_depth, in_rows, in_cols ] (for NCHW data format)
- const Tensor& input = context->input(0);
+ // The conv_input tensor is one of the following formats:
+ // NHWC, NCHW, NCHW_VECT_C.
+ const Tensor& conv_input = context->input(0);
+ OP_REQUIRES_OK(context, CheckShape(conv_input, "conv_input"));
- // Input filter is of the following dimensions:
- // [ filter_rows, filter_cols, in_depth, out_depth ]
+ // The filter tensor is one of the following formats:
+ // HWIO, OIHW, OIHW_VECT_I.
const Tensor& filter = context->input(1);
+ OP_REQUIRES_OK(context, CheckShape(filter, "filter"));
- // Input bias is a 1-D tensor the size of the last
- // dimension of Output tensor
+ // Input bias is a 1-D tensor, with size matching output depth.
const Tensor& bias = context->input(2);
+ OP_REQUIRES_OK(context, CheckShape(bias, "bias"));
- // For 2D convolution, there should be 4 dimensions.
- OP_REQUIRES(context, input.dims() == 4,
- errors::InvalidArgument("input must be 4-dimensional",
- input.shape().DebugString()));
- OP_REQUIRES(context, filter.dims() == 4,
- errors::InvalidArgument("filter must be 4-dimensional: ",
- filter.shape().DebugString()));
-
- // Bias should be a 1-D tensor.
- OP_REQUIRES(context, bias.dims() == 1,
- errors::InvalidArgument("bias must be 1-dimensional: ",
- bias.shape().DebugString()));
-
- for (int i = 0; i < 4; i++) {
- OP_REQUIRES(context,
- FastBoundsCheck(filter.dim_size(i),
- std::numeric_limits<int32>::max()),
- errors::InvalidArgument("filter dimension too large"));
- OP_REQUIRES(
- context,
- FastBoundsCheck(input.dim_size(i), std::numeric_limits<int32>::max()),
- errors::InvalidArgument("input dimension too large"));
+ // If side_input_scale != 0, then side_input is not ignored and
+ // has the same type and dimensions as the output.
+ const Tensor& side_input = context->input(3);
+ if (side_input_scale_ != 0) {
+ OP_REQUIRES_OK(context, CheckShape(side_input, "side_input"));
}
- // The last dimension for input is in_depth. It must be the same as the
- // filter's in_depth.
- const int64 in_depth = GetTensorDim(input, data_format_, 'C');
- OP_REQUIRES(context, in_depth == filter.dim_size(2),
- errors::InvalidArgument(
- "input and filter must have the same depth: ", in_depth,
- " vs ", filter.dim_size(2)));
-
- // The last dimension for filter is out_depth.
- const int32 out_depth = static_cast<int32>(filter.dim_size(3));
-
- // The second dimension for input is rows/height.
- // The first dimension for filter is rows/height.
- const int64 input_rows_raw = GetTensorDim(input, data_format_, 'H');
- const int32 input_rows = static_cast<int32>(input_rows_raw);
- const int32 filter_rows = static_cast<int32>(filter.dim_size(0));
-
- // The third dimension for input is columns/width.
- // The second dimension for filter is columns/width.
- const int64 input_cols_raw = GetTensorDim(input, data_format_, 'W');
- const int32 input_cols = static_cast<int32>(input_cols_raw);
- const int32 filter_cols = static_cast<int32>(filter.dim_size(1));
-
- // The first dimension for input is batch.
- const int64 batch_raw = GetTensorDim(input, data_format_, 'N');
- const int32 batch = static_cast<int32>(batch_raw);
-
- // For now we take the stride from the second and third dimensions only (we
- // do not support striding on the batch or depth dimension).
- const int32 stride_rows =
- static_cast<int32>(GetTensorDim(strides_, data_format_, 'H'));
- const int32 stride_cols =
- static_cast<int32>(GetTensorDim(strides_, data_format_, 'W'));
- const int32 bias_size = static_cast<int32>(bias.dim_size(0));
-
- int64 out_rows = 0, out_cols = 0, pad_rows = 0, pad_cols = 0;
- OP_REQUIRES_OK(context,
- GetWindowedOutputSize(input_rows, filter_rows, stride_rows,
- padding_, &out_rows, &pad_rows));
- OP_REQUIRES_OK(context,
- GetWindowedOutputSize(input_cols, filter_cols, stride_cols,
- padding_, &out_cols, &pad_cols));
- // Output tensor is of the following dimensions:
- // [ in_batch, out_rows, out_cols, out_depth ]
- TensorShape out_shape =
- ShapeFromFormat(data_format_, batch, out_rows, out_cols, out_depth);
+ // TODO(pauldonnelly): Switch to a more efficient mechanism to access
+ // dimension indexes and per-dimension attributes.
+ const int32 filter_rows = GetFilterDim(filter, filter_format_, 'H');
+ const int32 filter_cols = GetFilterDim(filter, filter_format_, 'W');
+ const int32 output_depth = GetFilterDim(filter, filter_format_, 'O');
+
+ const int32 batch_size = GetTensorDim(conv_input, data_format_, 'N');
+ const int32 conv_input_rows = GetTensorDim(conv_input, data_format_, 'H');
+ const int32 conv_input_cols = GetTensorDim(conv_input, data_format_, 'W');
+
+ int64 output_rows = 0, output_cols = 0, pad_rows = 0, pad_cols = 0;
+ OP_REQUIRES_OK(context, GetWindowedOutputSize(conv_input_rows, filter_rows,
+ stride_rows_, padding_type_,
+ &output_rows, &pad_rows));
+ OP_REQUIRES_OK(context, GetWindowedOutputSize(conv_input_cols, filter_cols,
+ stride_cols_, padding_type_,
+ &output_cols, &pad_cols));
+ // Initialize the output tensor shape according to data_format_
+ TensorShape output_shape = ShapeFromFormat(
+ data_format_, batch_size, output_rows, output_cols, output_depth);
Tensor* output = nullptr;
- OP_REQUIRES_OK(context, context->allocate_output(0, out_shape, &output));
-
- // Bias size should be the same as the size of the channel dimension of
- // output.
- OP_REQUIRES(context, bias_size == out_depth,
- errors::InvalidArgument(
- "bias size should equal the channel "
- "dimension size of output. bias shape: ",
- bias.shape().DebugString() +
- ", output shape: " + output->shape().DebugString()));
+ OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output));
- VLOG(2) << "FusedConv2DBiasActivation: in_depth = " << in_depth
- << ", input_cols = " << input_cols
+ VLOG(2) << "FusedConv2DBiasActivation: conv_input_cols = "
+ << conv_input_cols << ", conv_input_rows = " << conv_input_rows
<< ", filter_cols = " << filter_cols
- << ", input_rows = " << input_rows
<< ", filter_rows = " << filter_rows
- << ", stride_rows = " << stride_rows
- << ", stride_cols = " << stride_cols
- << ", bias_size = " << bias_size << ", out_depth = " << out_depth;
+ << ", stride_cols = " << stride_cols_
+ << ", stride_rows = " << stride_rows_
+ << ", output_depth = " << output_depth
+ << ", output_cols = " << output_cols
+ << ", output_rows = " << output_rows
+ << ", output_shape.num_elements = " << output_shape.num_elements();
// If there is nothing to compute, return.
- if (out_shape.num_elements() == 0) {
+ if (output_shape.num_elements() == 0) {
return;
}
- launcher_.launch(context, cudnn_use_autotune_, input, filter, stride_rows,
- stride_cols, bias, activation_mode_,
- BrainPadding2EigenPadding(padding_), data_format_, output);
+
+ launcher_.launch(context, cudnn_use_autotune_, conv_input,
+ conv_input_scale_, filter, stride_rows_, stride_cols_,
+ eigen_padding_type_, side_input, side_input_scale_, bias,
+ activation_mode_, data_format_, filter_format_, output);
}
private:
- std::vector<int32> strides_;
- Padding padding_;
+ int32 stride_rows_, stride_cols_;
+ Padding padding_type_;
+ Eigen::PaddingType eigen_padding_type_;
ActivationMode activation_mode_;
TensorFormat data_format_;
- LaunchFusedConv2DBiasActivationOp<Device, T> launcher_;
+ FilterTensorFormat filter_format_;
+ ScaleType conv_input_scale_;
+ ScaleType side_input_scale_;
+ LaunchFusedConv2DBiasActivationOp<Device, T, BiasType, ScaleType> launcher_;
bool cudnn_use_autotune_;
TF_DISALLOW_COPY_AND_ASSIGN(FusedConv2DBiasActivationOp);
@@ -211,67 +236,72 @@ class FusedConv2DBiasActivationOp : public OpKernel {
#if GOOGLE_CUDA
namespace dnn = ::perftools::gputools::dnn;
-dnn::ActivationMode BrainActivationMode2CudnnActivationMode(
- ActivationMode activation_mode) {
- switch (activation_mode) {
- case ActivationMode::SIGMOID:
- return dnn::ActivationMode::kSigmoid;
- case ActivationMode::RELU:
- return dnn::ActivationMode::kRelu;
- case ActivationMode::RELUX:
- return dnn::ActivationMode::kReluX;
- case ActivationMode::RELU6:
- return dnn::ActivationMode::kRelu6;
- case ActivationMode::TANH:
- return dnn::ActivationMode::kTanh;
- case ActivationMode::BANDPASS:
- return dnn::ActivationMode::kBandPass;
- }
- // Prevent compiler warning about missing return
- return dnn::ActivationMode::kRelu;
-}
-
// A dummy type to group forward convolution autotune results together.
struct ConvBiasActivationAutoTuneGroup {
static string name() { return "ConvBiasActivation"; }
};
-typedef AutoTuneSingleton<ConvBiasActivationAutoTuneGroup, ConvParameters,
- perftools::gputools::dnn::AlgorithmConfig>
+typedef AutoTuneSingleton<ConvBiasActivationAutoTuneGroup, FusedConvParameters,
+ dnn::AlgorithmConfig>
AutoTuneConvBiasActivation;
-template <typename T>
-void LaunchFusedConv2DBiasActivationOp<GPUDevice, T>::launch(
- OpKernelContext* ctx, bool cudnn_use_autotune, const Tensor& input_param,
- const Tensor& filter, int32 row_stride, int32 col_stride,
- const Tensor& bias, const ActivationMode& activation_mode,
- const Eigen::PaddingType& padding, TensorFormat data_format,
- Tensor* output) {
- using perftools::gputools::dnn::AlgorithmConfig;
- using perftools::gputools::dnn::AlgorithmType;
- using perftools::gputools::dnn::ProfileResult;
- using perftools::gputools::dnn::kDefaultAlgorithm;
+// Allocates 'transformed_tensor' and transforms 'nhwc_tensor' into it
+// using the specified 'batch_size', 'rows', 'cols', and 'depth' dimensions.
+template <typename T, size_t NDIMS>
+Status TransformNHWCToNCHW(OpKernelContext* ctx, const Tensor& nhwc_tensor,
+ int batch_size, int rows, int cols, int depth,
+ Tensor* transformed_tensor, const Tensor** result) {
+ TensorShape nchw_shape =
+ ShapeFromFormat(FORMAT_NCHW, batch_size, rows, cols, depth);
+ if (depth > 1) {
+ TF_RETURN_IF_ERROR(ctx->allocate_temp(DataTypeToEnum<T>::value, nchw_shape,
+ transformed_tensor));
+ functor::NHWCToNCHW<GPUDevice, T, NDIMS>()(
+ ctx->eigen_device<GPUDevice>(), nhwc_tensor.tensor<T, NDIMS>(),
+ transformed_tensor->tensor<T, NDIMS>());
+ } else {
+ // If depth <= 1, then just reshape.
+ CHECK(transformed_tensor->CopyFrom(nhwc_tensor, nchw_shape));
+ }
+ *result = transformed_tensor;
+ return Status::OK();
+}
+
+template <typename T, typename BiasType, typename ScaleType>
+void LaunchFusedConv2DBiasActivationOp<GPUDevice, T, BiasType, ScaleType>::
+ launch(OpKernelContext* ctx, bool cudnn_use_autotune,
+ const Tensor& conv_input_param, ScaleType conv_input_scale,
+ const Tensor& filter_param, int32 row_stride, int32 col_stride,
+ const Eigen::PaddingType& padding, const Tensor& side_input_param,
+ ScaleType side_input_scale, const Tensor& bias,
+ ActivationMode activation_mode, TensorFormat data_format,
+ FilterTensorFormat filter_format, Tensor* output_param) {
auto* stream = ctx->op_device_context()->stream();
OP_REQUIRES(ctx, stream, errors::Internal("No GPU stream available."));
- Tensor input = input_param;
-
- perftools::gputools::dnn::ActivationMode cudnn_activation_mode =
- BrainActivationMode2CudnnActivationMode(activation_mode);
-
// TODO(yangzihao): refactor all the complicated/duplicated code in regular
// conv ops to a shared conv utility.
- int32 padding_rows = 0;
- int32 padding_cols = 0;
- const int64 in_batch = GetTensorDim(input, data_format, 'N');
- int64 in_rows = GetTensorDim(input, data_format, 'H');
- int64 in_cols = GetTensorDim(input, data_format, 'W');
- const int64 in_depths = GetTensorDim(input, data_format, 'C');
- const int64 out_batch = GetTensorDim(*output, data_format, 'N');
- const int64 out_rows = GetTensorDim(*output, data_format, 'H');
- const int64 out_cols = GetTensorDim(*output, data_format, 'W');
- const int64 out_depths = GetTensorDim(*output, data_format, 'C');
- const int64 patch_rows = filter.dim_size(0);
- const int64 patch_cols = filter.dim_size(1);
+
+ // Assuming qint8 <--> NCHW_VECT_C, OIHW_VECT_I (int8x4) here.
+ constexpr bool is_int8x4 = std::is_same<T, qint8>::value;
+ constexpr int rank = is_int8x4 ? 5 : 4;
+ constexpr int vect = is_int8x4 ? 4 : 1;
+
+ const int batch_size = GetTensorDim(conv_input_param, data_format, 'N');
+ int conv_input_rows = GetTensorDim(conv_input_param, data_format, 'H');
+ int conv_input_cols = GetTensorDim(conv_input_param, data_format, 'W');
+
+ const int conv_input_depth =
+ GetTensorDim(conv_input_param, data_format, 'C') * vect;
+ const int output_rows = GetTensorDim(*output_param, data_format, 'H');
+ const int output_cols = GetTensorDim(*output_param, data_format, 'W');
+ const int output_depth = GetFilterDim(filter_param, filter_format, 'O');
+ const int filter_rows = GetFilterDim(filter_param, filter_format, 'H');
+ const int filter_cols = GetFilterDim(filter_param, filter_format, 'W');
+ int padding_rows = 0;
+ int padding_cols = 0;
+ const Tensor* conv_input = &conv_input_param;
+
+ Tensor maybe_padded_conv_input;
if (padding == Eigen::PADDING_SAME) {
// Total padding on rows and cols is
// Pr = (R' - 1) * S + Kr - R
@@ -281,114 +311,152 @@ void LaunchFusedConv2DBiasActivationOp<GPUDevice, T>::launch(
// We pad Pr/2 on the left and Pr - Pr/2 on the right, Pc/2 on the top
// and Pc - Pc/2 on the bottom. When Pr or Pc is odd, this means
// we pad more on the right and bottom than on the top and left.
- padding_rows =
- std::max<int32>(0, (out_rows - 1) * row_stride + patch_rows - in_rows);
- padding_cols =
- std::max<int32>(0, (out_cols - 1) * col_stride + patch_cols - in_cols);
- const int rows_parity = padding_rows & 1;
- const int cols_parity = padding_cols & 1;
- if ((rows_parity | cols_parity) != 0) {
+ padding_rows = std::max<int>(
+ 0, (output_rows - 1) * row_stride + filter_rows - conv_input_rows);
+ padding_cols = std::max<int>(
+ 0, (output_cols - 1) * col_stride + filter_cols - conv_input_cols);
+ const int padding_rows_parity = padding_rows & 1;
+ const int padding_cols_parity = padding_cols & 1;
+ if ((padding_rows_parity | padding_cols_parity) != 0) {
Tensor transformed_input;
- int64 new_in_rows = in_rows + rows_parity;
- int64 new_in_cols = in_cols + cols_parity;
+ const int new_conv_input_rows = conv_input_rows + padding_rows_parity;
+ const int new_conv_input_cols = conv_input_cols + padding_cols_parity;
+
+ using VectT = typename Int8x4ToInt32<typename RawType<T>::type>::type;
+ auto pad_data_format = is_int8x4 ? FORMAT_NCHW : data_format;
+
OP_REQUIRES_OK(
- ctx,
- ctx->allocate_temp(DataTypeToEnum<T>::value,
- ShapeFromFormat(data_format, in_batch, new_in_rows,
- new_in_cols, in_depths),
- &transformed_input));
-
- functor::PadInput<GPUDevice, T, int, 4>()(
- ctx->eigen_device<GPUDevice>(), To32Bit(input_param.tensor<T, 4>()),
- {{0, 0}}, {{rows_parity, cols_parity}},
- To32Bit(transformed_input.tensor<T, 4>()), data_format);
-
- input = transformed_input;
- in_rows = new_in_rows;
- in_cols = new_in_cols;
+ ctx, ctx->allocate_temp(
+ DataTypeToEnum<T>::value,
+ ShapeFromFormat(data_format, batch_size, new_conv_input_rows,
+ new_conv_input_cols, conv_input_depth),
+ &maybe_padded_conv_input));
+
+ auto conv_input_eigen_tensor =
+ To32Bit(conv_input_param.reinterpret_last_dimension<VectT, 4>());
+ auto padded_conv_input_eigen_tensor = To32Bit(
+ maybe_padded_conv_input.reinterpret_last_dimension<VectT, 4>());
+
+ functor::PadInput<GPUDevice, VectT, int, 4>()(
+ ctx->eigen_device<GPUDevice>(), conv_input_eigen_tensor, {{0, 0}},
+ {{padding_rows_parity, padding_cols_parity}},
+ padded_conv_input_eigen_tensor, pad_data_format);
+
+ conv_input = &maybe_padded_conv_input;
+ conv_input_rows = new_conv_input_rows;
+ conv_input_cols = new_conv_input_cols;
}
}
- if (data_format == FORMAT_NHWC) {
- // Convert the input tensor from NHWC to NCHW.
- TensorShape nchw_shape =
- ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths);
- if (in_depths > 1) {
- Tensor transformed_input;
- OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,
- nchw_shape, &transformed_input));
- functor::NHWCToNCHW<GPUDevice, T, 4>()(
- ctx->eigen_device<GPUDevice>(),
- const_cast<const Tensor&>(input).tensor<T, 4>(),
- transformed_input.tensor<T, 4>());
- input = transformed_input;
- } else {
- // If depth <= 1, then just reshape.
- CHECK(input.CopyFrom(input, nchw_shape));
+ Tensor maybe_transformed_conv_input, maybe_transformed_side_input;
+ Tensor maybe_transformed_output;
+ const Tensor* side_input = &side_input_param;
+ Tensor* output = output_param;
+
+ // NOTE: Here and elsewhere, checking 'is_int8x4' may look unnecessary
+ // and inefficient, but it is actually both a time and code size optimization,
+ // since 'is_int8x4' is a constexpr determined by the template parameter.
+ if (!is_int8x4 && data_format == FORMAT_NHWC) {
+ OP_REQUIRES_OK(ctx, (TransformNHWCToNCHW<T, rank>(
+ ctx, *conv_input, batch_size, conv_input_rows,
+ conv_input_cols, conv_input_depth,
+ &maybe_transformed_conv_input, &conv_input)));
+ if (side_input_scale != 0) {
+ OP_REQUIRES_OK(
+ ctx, (TransformNHWCToNCHW<T, rank>(
+ ctx, side_input_param, batch_size, output_rows, output_cols,
+ output_depth, &maybe_transformed_side_input, &side_input)));
+ }
+ if (output_depth > 1) {
+ // Allocate a tensor for the NCHW output of the kernel and point output
+ // to it. Afterwards, we will transform it to NHWC while copying back to
+ // 'output_param'.
+ TensorShape nchw_shape = ShapeFromFormat(
+ FORMAT_NCHW, batch_size, output_rows, output_cols, output_depth);
+ OP_REQUIRES_OK(ctx,
+ ctx->allocate_temp(DataTypeToEnum<T>::value, nchw_shape,
+ &maybe_transformed_output));
+ output = &maybe_transformed_output;
}
}
- CHECK(padding_rows >= 0 && padding_cols >= 0)
- << "Negative row or col paddings: (" << padding_rows << ", "
- << padding_cols << ")";
- perftools::gputools::dnn::BatchDescriptor input_desc;
- input_desc.set_count(in_batch)
- .set_feature_map_count(in_depths)
- .set_height(in_rows)
- .set_width(in_cols)
- .set_layout(perftools::gputools::dnn::DataLayout::kBatchDepthYX);
- perftools::gputools::dnn::BatchDescriptor output_desc;
- output_desc.set_count(out_batch)
- .set_height(out_rows)
- .set_width(out_cols)
- .set_feature_map_count(out_depths)
- .set_layout(perftools::gputools::dnn::DataLayout::kBatchDepthYX);
- perftools::gputools::dnn::FilterDescriptor filter_desc;
- filter_desc.set_input_filter_height(filter.dim_size(0))
- .set_input_filter_width(filter.dim_size(1))
- .set_input_feature_map_count(filter.dim_size(2))
- .set_output_feature_map_count(filter.dim_size(3));
- perftools::gputools::dnn::ConvolutionDescriptor conv_desc;
+ constexpr auto data_layout = is_int8x4 ? dnn::DataLayout::kBatchDepthYX4
+ : dnn::DataLayout::kBatchDepthYX;
+ constexpr auto filter_layout = is_int8x4 ? dnn::FilterLayout::kOutputInputYX4
+ : dnn::FilterLayout::kOutputInputYX;
+
+ dnn::BatchDescriptor conv_input_desc;
+ conv_input_desc.set_count(batch_size)
+ .set_feature_map_count(conv_input_depth)
+ .set_height(conv_input_rows)
+ .set_width(conv_input_cols)
+ .set_layout(data_layout);
+ dnn::FilterDescriptor filter_desc;
+ filter_desc.set_input_filter_height(filter_rows)
+ .set_input_filter_width(filter_cols)
+ .set_input_feature_map_count(conv_input_depth)
+ .set_output_feature_map_count(output_depth)
+ .set_layout(filter_layout);
+ dnn::BatchDescriptor side_input_desc;
+ side_input_desc.set_count(batch_size)
+ .set_height(output_rows)
+ .set_width(output_cols)
+ .set_feature_map_count(output_depth)
+ .set_layout(data_layout);
+ dnn::BatchDescriptor bias_desc;
+ bias_desc.set_count(1)
+ .set_height(1)
+ .set_width(1)
+ .set_feature_map_count(output_depth)
+ .set_layout(dnn::DataLayout::kBatchDepthYX);
+ dnn::BatchDescriptor output_desc;
+ output_desc.set_count(batch_size)
+ .set_height(output_rows)
+ .set_width(output_cols)
+ .set_feature_map_count(output_depth)
+ .set_layout(data_layout);
+ dnn::ConvolutionDescriptor conv_desc;
conv_desc.set_vertical_filter_stride(row_stride)
.set_horizontal_filter_stride(col_stride)
.set_zero_padding_height(padding_rows / 2)
.set_zero_padding_width(padding_cols / 2);
- // Shuffles a filter tensor from:
- // [<spatial_dims>, in, out]
- // to:
- // [out, in, <spatial_dims>]
- // TODO(yangzihao): Support a data layout tag for the filter weights, and only
- // do the transform if the weights are not already in the correct layout.
- Tensor transformed_filter;
- OP_REQUIRES_OK(ctx, ctx->allocate_temp(
- DataTypeToEnum<T>::value,
- TensorShape({filter.dim_size(3), filter.dim_size(2),
- filter.dim_size(0), filter.dim_size(1)}),
- &transformed_filter));
-
- functor::TransformFilter<GPUDevice, T, int, 4>()(
- ctx->eigen_device<GPUDevice>(), To32Bit(filter.tensor<T, 4>()),
- To32Bit(transformed_filter.tensor<T, 4>()));
-
- Tensor transformed_output;
- OP_REQUIRES_OK(
- ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,
- ShapeFromFormat(FORMAT_NCHW, out_batch, out_rows,
- out_cols, out_depths),
- &transformed_output));
-
- auto input_ptr = AsDeviceMemory(input.template flat<T>().data(),
- input.template flat<T>().size());
+ Tensor maybe_transformed_filter;
+ const Tensor* filter;
+ if (is_int8x4) {
+    // We have already checked that the filter is OIHW_VECT_I in the
+    // constructor.
+ filter = &filter_param;
+ } else if (filter_format == FORMAT_HWIO) {
+ // Shuffle filter tensor from HWIO to OIHW:
+ OP_REQUIRES_OK(ctx, ctx->allocate_temp(
+ DataTypeToEnum<T>::value,
+ ShapeFromFilterFormat(
+ FORMAT_OIHW, filter_param.shape(), FORMAT_HWIO),
+ &maybe_transformed_filter));
+ functor::TransformFilter<GPUDevice, T, int, 4>()(
+ ctx->eigen_device<GPUDevice>(), To32Bit(filter_param.tensor<T, 4>()),
+ To32Bit(maybe_transformed_filter.tensor<T, 4>()));
+ filter = &maybe_transformed_filter;
+ }
+
+ auto conv_input_ptr =
+ AsDeviceMemory(reinterpret_cast<const typename RawType<T>::type*>(
+ conv_input->template flat<T>().data()),
+ conv_input->template flat<T>().size());
auto filter_ptr =
- AsDeviceMemory(transformed_filter.template flat<T>().data(),
- transformed_filter.template flat<T>().size());
+ AsDeviceMemory(reinterpret_cast<const typename RawType<T>::type*>(
+ filter->template flat<T>().data()),
+ filter->template flat<T>().size());
+ auto side_input_ptr =
+ AsDeviceMemory(reinterpret_cast<const typename RawType<T>::type*>(
+ side_input->template flat<T>().data()),
+ side_input->template flat<T>().size());
auto output_ptr =
- AsDeviceMemory(transformed_output.template flat<T>().data(),
- transformed_output.template flat<T>().size());
-
- auto bias_ptr = AsDeviceMemory(bias.template flat<T>().data(),
- bias.template flat<T>().size());
+ AsDeviceMemory(reinterpret_cast<const typename RawType<T>::type*>(
+ output->template flat<T>().data()),
+ output->template flat<T>().size());
+ auto bias_ptr = AsDeviceMemory(bias.template flat<BiasType>().data(),
+ bias.template flat<BiasType>().size());
static int64 ConvolveScratchSize = GetCudnnWorkspaceLimit(
// default value is in bytes despite the name of the environment variable
@@ -396,38 +464,42 @@ void LaunchFusedConv2DBiasActivationOp<GPUDevice, T>::launch(
);
int device_id = stream->parent()->device_ordinal();
- DataType dtype = input.dtype();
- ConvParameters conv_parameters = {
- in_batch,
- in_depths,
- {{in_rows, in_cols}},
- out_depths,
- {{patch_rows, patch_cols}},
+ FusedConvParameters fused_conv_parameters = {
+ batch_size,
+ conv_input_depth,
+ {{conv_input_rows, conv_input_cols}},
+ output_depth,
+ {{filter_rows, filter_cols}},
{{row_stride, col_stride}},
{{padding_rows, padding_cols}},
- dtype,
+ conv_input->dtype(),
device_id,
+ (side_input_scale != 0),
+ activation_mode,
};
- AlgorithmConfig algorithm_config;
+ dnn::AlgorithmConfig algorithm_config;
if (cudnn_use_autotune && !AutoTuneConvBiasActivation::GetInstance()->Find(
- conv_parameters, &algorithm_config)) {
- std::vector<AlgorithmType> algorithms;
+ fused_conv_parameters, &algorithm_config)) {
+ std::vector<dnn::AlgorithmType> algorithms;
CHECK(stream->parent()->GetConvolveAlgorithms(
- conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms));
- ProfileResult best_result;
- ProfileResult best_result_no_scratch;
+ fused_conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(),
+ &algorithms));
+ dnn::ProfileResult best_result;
+ dnn::ProfileResult best_result_no_scratch;
for (auto profile_algorithm : algorithms) {
          // TODO(zhengxq): profile each algorithm multiple times for better
          // accuracy.
CudnnScratchAllocator scratch_allocator(ConvolveScratchSize, ctx);
- ProfileResult profile_result;
+ dnn::ProfileResult profile_result;
bool cudnn_launch_status =
stream
- ->ThenConvolveWithAlgorithm(
- input_desc, input_ptr, filter_desc, filter_ptr, conv_desc,
- bias_ptr, cudnn_activation_mode, output_desc, &output_ptr,
- &scratch_allocator, AlgorithmConfig(profile_algorithm),
+ ->ThenFusedConvolveWithAlgorithm(
+ conv_input_desc, conv_input_ptr, conv_input_scale,
+ filter_desc, filter_ptr, conv_desc, side_input_ptr,
+ side_input_scale, bias_desc, bias_ptr,
+ dnn::ActivationMode::kRelu, output_desc, &output_ptr,
+ &scratch_allocator, dnn::AlgorithmConfig(profile_algorithm),
&profile_result)
.ok();
if (cudnn_launch_status) {
@@ -454,42 +526,68 @@ void LaunchFusedConv2DBiasActivationOp<GPUDevice, T>::launch(
algorithm_config.set_algorithm_no_scratch(
best_result_no_scratch.algorithm());
}
- AutoTuneConvBiasActivation::GetInstance()->Insert(conv_parameters,
+ AutoTuneConvBiasActivation::GetInstance()->Insert(fused_conv_parameters,
algorithm_config);
}
CudnnScratchAllocator scratch_allocator(ConvolveScratchSize, ctx);
bool cudnn_launch_status =
stream
- ->ThenConvolveWithAlgorithm(
- input_desc, input_ptr, filter_desc, filter_ptr, conv_desc,
- bias_ptr, cudnn_activation_mode, output_desc, &output_ptr,
- &scratch_allocator, algorithm_config,
+ ->ThenFusedConvolveWithAlgorithm(
+ conv_input_desc, conv_input_ptr, conv_input_scale, filter_desc,
+ filter_ptr, conv_desc, side_input_ptr, side_input_scale,
+ bias_desc, bias_ptr, dnn::ActivationMode::kRelu, output_desc,
+ &output_ptr, &scratch_allocator, algorithm_config,
/*output_profile_result=*/nullptr)
.ok();
if (!cudnn_launch_status) {
- ctx->SetStatus(errors::Internal(
- "cuDNN launch failure : input shape(", input.shape().DebugString(),
- ") filter shape(", filter.shape().DebugString(), ")"));
+ ctx->SetStatus(errors::Internal("cuDNN launch failure : conv_input shape(",
+ conv_input->shape().DebugString(),
+ ") filter shape(",
+ filter->shape().DebugString(), ")"));
}
- // Convert the output tensor back from NCHW to NHWC.
- if (data_format == FORMAT_NHWC) {
+ // Convert the output tensor back from NCHW to NHWC if necessary.
+ if (!is_int8x4 && (data_format == FORMAT_NHWC) && (output_depth > 1)) {
functor::NCHWToNHWC<GPUDevice, T, 4>()(
ctx->eigen_device<GPUDevice>(),
- const_cast<const Tensor&>(transformed_output).tensor<T, 4>(),
- output->tensor<T, 4>());
- } else {
- *output = transformed_output;
+ const_cast<const Tensor*>(output)->tensor<T, 4>(),
+ output_param->tensor<T, 4>());
}
}
+// Forward declarations of the functor specializations for GPU used above.
+namespace functor {
+#define DECLARE_GPU_SPEC(T) \
+ template <> \
+ void PadInput<GPUDevice, T, int, 4>::operator()( \
+ const GPUDevice& d, typename TTypes<T, 4, int>::ConstTensor in, \
+ const std::array<int, 2>& padding_left, \
+ const std::array<int, 2>& padding_right, \
+ typename TTypes<T, 4, int>::Tensor out, TensorFormat data_format); \
+ extern template struct PadInput<GPUDevice, T, int, 4>;
+
+DECLARE_GPU_SPEC(float);
+DECLARE_GPU_SPEC(int32);
+#undef DECLARE_GPU_SPEC
+} // namespace functor
+
// Registration of the GPU implementations.
-REGISTER_KERNEL_BUILDER(Name("FusedConv2DBiasActivation")
- .Device(DEVICE_GPU)
- .TypeConstraint<float>("T"),
- FusedConv2DBiasActivationOp<GPUDevice, float>);
+
+REGISTER_KERNEL_BUILDER(
+ Name("FusedConv2DBiasActivation")
+ .Device(DEVICE_GPU)
+ .TypeConstraint<float>("T")
+ .TypeConstraint<float>("Tbias"),
+ FusedConv2DBiasActivationOp<GPUDevice, float, float, float>);
+
+REGISTER_KERNEL_BUILDER(
+ Name("FusedConv2DBiasActivation")
+ .Device(DEVICE_GPU)
+ .TypeConstraint<qint8>("T")
+ .TypeConstraint<float>("Tbias"),
+ FusedConv2DBiasActivationOp<GPUDevice, qint8, float, float>);
#endif // GOOGLE_CUDA
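
The SAME-padding logic in the launch function above computes the total padding as `Pr = (R' - 1) * S + Kr - R` and, when that total is odd, pre-pads one extra row/column on the bottom/right before handing symmetric padding (`Pr / 2` per side) to cuDNN. A minimal Python sketch of that split (illustrative only, not TensorFlow code):

```python
def same_padding_split(in_size, filter_size, stride):
    """Returns (pre_pad, symmetric_pad) for one spatial dimension."""
    # SAME output size: ceil(in_size / stride).
    out_size = (in_size + stride - 1) // stride
    # Total padding Pr = (R' - 1) * S + Kr - R, clamped at zero.
    total_pad = max(0, (out_size - 1) * stride + filter_size - in_size)
    # When total_pad is odd, one extra row/col is pre-padded on the
    # bottom/right so the remaining padding is symmetric, as cuDNN requires.
    parity = total_pad & 1
    return parity, total_pad // 2

# Example: 224x224 input, 7x7 filter, stride 2 -> total padding 5 (odd),
# so pre-pad 1 and pass symmetric padding 2 to cuDNN.
pre, sym = same_padding_split(224, 7, 2)
```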
diff --git a/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.h b/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.h
index d71b26cf1d..7534f5797c 100644
--- a/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.h
+++ b/tensorflow/contrib/fused_conv/kernels/fused_conv2d_bias_activation_op.h
@@ -24,7 +24,7 @@ limitations under the License.
#if GOOGLE_CUDA
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
-#include "tensorflow/core/kernels/conv_ops_gpu.h"
+#include "tensorflow/contrib/fused_conv/kernels/fused_conv_ops_gpu.h"
#include "tensorflow/core/platform/stream_executor.h"
#endif // GOOGLE_CUDA
@@ -33,27 +33,30 @@ namespace tensorflow {
// Forward declaration.
class OpKernelContext;
-template <typename Device, typename T>
+template <typename Device, typename T, typename BiasType, typename ScaleType>
class LaunchFusedConv2DBiasActivationOp {
public:
void launch(OpKernelContext* ctx, bool cudnn_use_autotune,
- const Tensor& input, const Tensor& filter, int row_stride,
- int col_stride, const Tensor& bias,
- const ActivationMode& activation_mode,
- const Eigen::PaddingType& padding, TensorFormat data_format,
- Tensor* output);
+ const Tensor& conv_input, ScaleType conv_input_scale,
+ const Tensor& filter, int32 row_stride, int32 col_stride,
+ const Eigen::PaddingType& padding, const Tensor& side_input,
+ ScaleType side_input_scale, const Tensor& bias,
+ ActivationMode activation_mode, TensorFormat data_format,
+ FilterTensorFormat filter_format, Tensor* output);
};
#ifdef GOOGLE_CUDA
-template <typename T>
-class LaunchFusedConv2DBiasActivationOp<Eigen::GpuDevice, T> {
+template <typename T, typename BiasType, typename ScaleType>
+class LaunchFusedConv2DBiasActivationOp<Eigen::GpuDevice, T, BiasType,
+ ScaleType> {
public:
void launch(OpKernelContext* ctx, bool cudnn_use_autotune,
- const Tensor& input, const Tensor& filter, int32 row_stride,
- int32 col_stride, const Tensor& bias,
- const ActivationMode& activation_mode,
- const Eigen::PaddingType& padding, TensorFormat data_format,
- Tensor* output);
+ const Tensor& conv_input, ScaleType conv_input_scale,
+ const Tensor& filter, int32 row_stride, int32 col_stride,
+ const Eigen::PaddingType& padding, const Tensor& side_input,
+ ScaleType side_input_scale, const Tensor& bias,
+ ActivationMode activation_mode, TensorFormat data_format,
+ FilterTensorFormat filter_format, Tensor* output);
};
#endif // GOOGLE_CUDA
diff --git a/tensorflow/contrib/fused_conv/kernels/fused_conv_ops_gpu.h b/tensorflow/contrib/fused_conv/kernels/fused_conv_ops_gpu.h
new file mode 100644
index 0000000000..dc43af1158
--- /dev/null
+++ b/tensorflow/contrib/fused_conv/kernels/fused_conv_ops_gpu.h
@@ -0,0 +1,74 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#ifndef THIRD_PARTY_TENSORFLOW_CONTRIB_FUSED_CONV_KERNELS_FUSED_CONV_OPS_GPU_H_
+#define THIRD_PARTY_TENSORFLOW_CONTRIB_FUSED_CONV_KERNELS_FUSED_CONV_OPS_GPU_H_
+
+#if GOOGLE_CUDA
+
+#include "tensorflow/core/kernels/conv_ops_gpu.h"
+#include "tensorflow/core/util/activation_mode.h"
+
+// TODO(pauldonnelly): Merge this file into core/kernels/conv_ops_gpu.h.
+
+namespace tensorflow {
+
+// Add additional parameters specific to fused convolutions.
+class FusedConvParameters : public ConvParameters {
+ public:
+ FusedConvParameters(int64 batch, int64 in_depths, const SpatialArray& in,
+ int64 out_depths, const SpatialArray& filter,
+ const SpatialArray& stride, const SpatialArray& padding,
+ DataType dtype, int device_id, bool has_side_input,
+ ActivationMode activation_mode)
+ : ConvParameters(batch, in_depths, in, out_depths, filter, stride,
+ padding, dtype, device_id),
+ activation_mode_(activation_mode),
+ has_side_input_(has_side_input) {
+ hash_code_ = Hash64Combine(hash_code_, has_side_input);
+ hash_code_ = Hash64Combine(hash_code_, activation_mode);
+ }
+
+ bool operator==(const FusedConvParameters& other) const {
+ return this->get_data_as_tuple() == other.get_data_as_tuple();
+ }
+
+ bool operator!=(const FusedConvParameters& other) const {
+ return !(*this == other);
+ }
+
+ string ToString() const {
+ return strings::StrCat(ConvParameters::ToString(), ", ", has_side_input_,
+ ", ", activation_mode_, ", ");
+ }
+
+ private:
+ using ParameterDataType =
+ std::tuple<ConvParameters::ParameterDataType, bool, ActivationMode>;
+
+ ParameterDataType get_data_as_tuple() const {
+ return std::make_tuple(ConvParameters::get_data_as_tuple(), has_side_input_,
+ activation_mode_);
+ }
+
+ ActivationMode activation_mode_;
+ bool has_side_input_;
+};
+
+} // namespace tensorflow
+
+#endif // GOOGLE_CUDA
+
+#endif // THIRD_PARTY_TENSORFLOW_CONTRIB_FUSED_CONV_KERNELS_FUSED_CONV_OPS_GPU_H_
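
A rough Python analogue (an illustration, not the TensorFlow class) of what `FusedConvParameters` adds to the autotune cache key: the base convolution parameters plus `has_side_input` and `activation_mode` are folded into both the hash and the equality tuple, so configurations that differ only in the fused parts never share a cached algorithm.

```python
from collections import namedtuple

# Stand-in for ConvParameters::get_data_as_tuple().
BaseKey = namedtuple("BaseKey", "batch in_depths in_shape out_depths "
                                "filter_shape stride padding dtype device_id")

# Fused key = base key + the two fused-specific fields, mirroring the shape
# of FusedConvParameters::ParameterDataType. namedtuple supplies tuple-wise
# __eq__ and __hash__, playing the role of get_data_as_tuple() and
# Hash64Combine in the C++ class.
class FusedKey(namedtuple("FusedKey", "base has_side_input activation_mode")):
    pass

a = FusedKey(BaseKey(32, 64, (56, 56), 64, (3, 3), (1, 1), (1, 1),
                     "float32", 0),
             has_side_input=True, activation_mode="Relu")
b = a._replace(has_side_input=False)

autotune_cache = {a: "winning_algorithm"}
# Toggling the side input changes the key, forcing a separate autotune entry.
assert a != b and b not in autotune_cache
```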
diff --git a/tensorflow/contrib/fused_conv/ops/fused_conv2d_bias_activation_op.cc b/tensorflow/contrib/fused_conv/ops/fused_conv2d_bias_activation_op.cc
index 6134c5c699..48f058b4c5 100644
--- a/tensorflow/contrib/fused_conv/ops/fused_conv2d_bias_activation_op.cc
+++ b/tensorflow/contrib/fused_conv/ops/fused_conv2d_bias_activation_op.cc
@@ -33,40 +33,73 @@ string GetAllActivationModeAttrString() { return "activation_mode: {'Relu'}"; }
} // namespace
// --------------------------------------------------------------------------
+
+// TODO(pauldonnelly): Add support for double inputs and scales to this Op,
+// (currently Attr does not support double).
+
REGISTER_OP("FusedConv2DBiasActivation")
- .Input("input: T")
+ .Input("conv_input: T")
.Input("filter: T")
- .Input("bias: T")
+ .Input("bias: Tbias")
+ .Input("side_input: T")
.Output("output: T")
- .Attr("T: {float}")
+ .Attr("T: {float, half, qint8}")
+ .Attr("Tbias: {float, half}")
+ .Attr("conv_input_scale: float = 1.0")
+ .Attr("side_input_scale: float = 0.0")
.Attr("strides: list(int)")
.Attr(GetPaddingAttrString())
- .Attr(GetConvnetDataFormatAttrString())
- .Attr(GetAllActivationModeAttrString())
+ .Attr("data_format: {'NHWC', 'NCHW', 'NCHW_VECT_C'} = 'NHWC'")
+ .Attr("filter_format: {'HWIO', 'OIHW', 'OIHW_VECT_I'} = 'HWIO'")
+ .Attr("activation_mode: {'Relu'} = 'Relu'")
.SetShapeFn(shape_inference::FusedConvBiasActivationShape)
.Doc(R"doc(
- Computes a fused 2-D convolution, adds bias, and applies an activation function
- on the output given 4-D `input`, 4-D `filter`, 1-D `bias` tensors and an activation mode.
+  Computes a fused kernel that implements a 2-D convolution, adds the side
+  input (with separate scaling on the convolution and side inputs), then adds
+  the bias and applies the ReLU activation function to the result. Supports
+  both float and qint8 data formats. In the case of qint8, the output is
+  clipped to [0..127].
- input: A 4-D tensor. The dimension order is interpreted according to the value
- of `data_format`, see below for details.
- filter: A 4-D tensor of shape
- `[filter_height, filter_width, in_channels, out_channels]`
- bias: 1-D with size of the `out_channels` dimension in filter.
- output: A 4-D tensor. The dimension order is determined by the value of
- `data_format`, see below for details.
- T: The data type for the elements of input, filter, bias, and output Tensors.
+ conv_input: A tensor with format as specified by `data_format` (see below).
+ filter: A tensor with format depending on `data_format` as follows:
+ "NHWC", "NCHW":
+ `float [ filter_height, filter_width, in_channels, out_channels ]`
+ "NCHW_VECT_C":
+ `qint8 [ out_channels, in_channels, filter_height, filter_width ]`
+ bias: 1-D float tensor with size matching the `out_channels` dimension of
+ `filter`.
+ Note: this tensor is still float, even if other inputs are qint8.
+ side_input: A tensor with format as specified by `data_format` (see below).
+ This tensor will be ignored and can be [] if side_input_scale == 0.
+ Otherwise, the size of each dimension must match the `output` tensor.
+ output: A tensor with format as specified by `data_format` (see below).
+ The dimension sizes are determined automatically based on other inputs
+ and attributes.
+ T: The element data type of `conv_input`, `side_input` and `output` tensors.
+ Note: must match with the `data_format`.
+ Tbias: The element data type of `bias`.
+ conv_input_scale: scalar float value to be multiplied by `conv_input`.
+    (conceptually; in reality it is applied after the convolution).
+ side_input_scale: scalar float value to be multiplied by `side_input`.
strides: 1-D tensor of length 4. The stride of the sliding window for each
dimension of `input`. The dimension order is determined by the value of
`data_format`, see below for details.
+ Note: the stride for batch and channel dimensions must be 1.
padding: The type of padding algorithm to use.
- data_format: Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, height, width, channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, channels, height, width].
- activation_mode: Specify the activation function to apply to the output tensor
- of bias add. Currently only supports "Relu".
+ data_format: A string specifying the data format of `conv_input`,
+ `side_input` and `output` tensors with the following options:
+ "NHWC": `float [ batch, height, width, channels ]`
+ "NCHW": `float [ batch, channels, height, width ]`
+ "NCHW_VECT_C":
+ `qint8 [ batch, channels / 4, height, width, channels % 4 ]`
+ Note: for "NCHW_VECT_C", `channels` must be a multiple of 4.
+ filter_format: A string specifying the data format of `filter`,
+ "HWIO": `float [ kernel_height, kernel_width, input_channels,
+ output_channels ]`
+ "OIHW_VECT_I":
+ `qint8 [ output_channels, input_channels / 4,
+ kernel_height, kernel_width, input_channels % 4 ]`
+ activation_mode: The activation applied to the output.
+ Currently must be "Relu".
)doc");
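
The per-element math the op documentation above describes can be modeled in a few lines of plain Python (illustrative only; names are ours, not the op's): scale the convolution result, add the scaled side input and the bias, apply ReLU, and in qint8 mode clip the output to [0..127].

```python
def fused_bias_activation(conv_val, side_val, bias,
                          conv_input_scale=1.0, side_input_scale=0.0,
                          qint8=False):
    """One output element of the fused op, per the documented equation."""
    acc = conv_input_scale * conv_val + side_input_scale * side_val + bias
    acc = max(acc, 0.0)                 # Relu
    if qint8:
        acc = min(round(acc), 127)      # qint8 output is clipped to [0..127]
    return acc

# Float path: 2 * 3 + 0 * 5 + 1 = 7, ReLU leaves it unchanged.
y = fused_bias_activation(3.0, 5.0, 1.0, conv_input_scale=2.0)
# qint8 path saturates: ReLU(100 + 50) = 150 -> clipped to 127.
yq = fused_bias_activation(100.0, 0.0, 50.0, qint8=True)
```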
} // namespace tensorflow
diff --git a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
index 41f986dd07..8f3f31bad0 100644
--- a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
+++ b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
@@ -26,62 +26,83 @@ _fused_conv2d_bias_activation_op_so = loader.load_op_library(
resource_loader.get_path_to_datafile("_fused_conv2d_bias_activation_op.so"))
-def fused_conv2d_bias_activation(input_tensor,
- filter_tensor,
+# pylint: disable=redefined-builtin
+def fused_conv2d_bias_activation(conv_input,
+ filter,
bias,
- strides,
- padding,
- activation_mode,
+ strides=None,
+ padding=None,
+ conv_input_scale=1.0,
+ side_input_scale=0.0,
+ side_input=None,
+ activation_mode="Relu",
data_format=None,
+ filter_format=None,
name=None):
- """Computes a fused 2-D convolution, adds bias, and applies relu.
+ """Fused 2D conv, bias and activation with optional side input.
- input_tensor: A 4-D tensor. The dimension order is interpreted
- according to the value of `data_format`, see below for details.
- filter_tensor: A 4-D tensor of shape
- `[filter_height, filter_width, in_channels, out_channels]`
- bias: 1-D with size of the `out_channels` dimension in filter.
- output: A 4-D tensor. The dimension order is determined by the value of
- `data_format`, see below for details.
- T: The data type for the elements of input, filter, bias, and output
- Tensors.
- strides: 1-D tensor of length 4. The stride of the sliding window for
- each
- dimension of `input`. The dimension order is determined by the value
- of
- `data_format`, see below for details.
- padding: The type of padding algorithm to use.
- data_format: Specify the data format of the input and output data. With
- the
- default format "NHWC", the data is stored in the order of:
- [batch, height, width, channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, channels, height, width].
- activation_mode: Specify the activation function to apply to the output
- tensor
- of bias add. Currently only supports "Relu".
+ Computes a fused 2-D convolution scaled by conv_input_scale,
+ adds an optional side input scaled by side_input_scale, adds biases,
+ and applies ReLU. As an equation:
+ output = ReLU(conv_input_scale * Conv(conv_input, filter) +
+ side_input_scale * side_input + bias)
+  Note: In int8 mode, the ReLU will clip the output to the range [0..127].
Args:
- input_tensor: A `Tensor`. Must be one of the following types: `float32`.
- filter_tensor: A `Tensor`. Must have the same type as `input`.
- bias: A `Tensor`. Must have the same type as `input`.
- strides: A list of `ints`.
+ conv_input: A `Tensor` of the format specified by `data_format`.
+ filter: A `Tensor` whose format depends on `data_format`:
+ if `data_format` is "NCHW_VECT_C", filter should be "OIHW_VECT_I"
+ otherwise, it should be "HWIO" format.
+ bias: A 1-D `Tensor` of type `float32`, and dimensions equal to the
+ number of output channels.
+ strides: A list of 4 `ints` specifying convolution strides.
+      If `data_format` is "NCHW" or "NCHW_VECT_C", the order should be NCHW.
+      If `data_format` is "NHWC", the order should be NHWC.
padding: A `string` from: `"SAME", "VALID"`.
- activation_mode: A `string` from: `"Sigmoid", "Relu", "Relu6", "ReluX",
- "Tanh", "BandPass"`.
- data_format: An optional `string` from: `"NHWC", "NCHW"`. Defaults to
- `"NHWC"`.
+ conv_input_scale: A scalar `float32` that will be multiplied by conv_input.
+      This is optional and defaults to 1. However, it should be set to the
+      quantization scale when `data_format` is "NCHW_VECT_C".
+ side_input_scale: A scalar `float32` that will be multiplied by side_input.
+ This is optional and defaults to 0.
+ side_input: A `Tensor` of the format specified by `data_format`.
+      This is useful for implementing ResNet blocks.
+ activation_mode: (optional) currently must be the default "Relu".
+      Note that in qint8 mode, it also clips to 127, so it acts like ReluX.
+ data_format: Specifies the data format.
+ Possible values are:
+ "NHWC" float [batch, height, width, channels]
+ "NCHW" float [batch, channels, height, width]
+ "NCHW_VECT_C" qint8 [batch, channels / 4, height, width, channels % 4]
+ Defaults to `"NHWC"`.
+ Performance is worst for `"NHWC"` and best for `"NCHW_VECT_C"`.
+ filter_format: Specifies the filter format.
+ Possible values are:
+ "HWIO" float [kernel_height, kernel_width, input_channels,
+ output_channels ]
+ "OIHW" float [output_channels, input_channels, kernel_height,
+ kernel_width ]
+ "OIHW_VECT_I" qint8 [ output_channels, input_channels / 4,
+ kernel_height, kernel_width, input_channels % 4 ]
+ Defaults to `"HWIO"`.
name: A name for the operation (optional).
Returns:
- A `Tensor`. Has the same type as `input`.
+ A `Tensor` of the format specified by `data_format`.
"""
+ if strides is None:
+ strides = [1, 1, 1, 1]
+ if side_input is None:
+ side_input = []
return gen_fused_conv2d_bias_activation_op.fused_conv2d_bias_activation(
- input=input_tensor,
- filter=filter_tensor,
- bias=bias,
- strides=strides,
+ conv_input,
+ filter,
+ bias,
padding=padding,
+ strides=strides,
+ conv_input_scale=conv_input_scale,
+ side_input_scale=side_input_scale,
+ side_input=side_input,
activation_mode=activation_mode,
data_format=data_format,
+ filter_format=filter_format,
name=name)
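
The "NCHW_VECT_C" layout described in the docstring above packs channels in groups of four: element `[n, c, h, w]` of an NCHW tensor lives at `[n, c // 4, h, w, c % 4]` in the vectorized layout. A one-line index sketch (our illustration, assuming `channels` is a multiple of 4 as the docstring requires):

```python
def nchw_to_vect_c_index(n, c, h, w):
    # NCHW [n, c, h, w] -> NCHW_VECT_C [n, c // 4, h, w, c % 4].
    return (n, c // 4, h, w, c % 4)

# Channel 6 lands in vector-group 1, lane 2.
idx = nchw_to_vect_c_index(0, 6, 2, 3)
```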
diff --git a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
index 5d6a2fa3b8..3b8f7d6ed7 100644
--- a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
+++ b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
@@ -19,13 +19,16 @@ from __future__ import division
from __future__ import print_function
import numpy as np
+
from tensorflow.contrib.fused_conv.python.ops import fused_conv2d_bias_activation_op
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import nn_ops
+from tensorflow.python.ops import random_ops
from tensorflow.python.platform import test
from tensorflow.python.platform import tf_logging
@@ -484,7 +487,8 @@ class FusedConv2DBiasActivationTest(test.TestCase):
with self.test_session() as sess:
# Illegal strides.
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
- "strides in the batch and depth"):
+ "Convolutional strides are not supported in "
+ "the batch or depth dimensions."):
sess.run(
fused_conv2d_bias_activation_op.fused_conv2d_bias_activation(
array_ops.placeholder(dtypes.float32),
@@ -494,7 +498,8 @@ class FusedConv2DBiasActivationTest(test.TestCase):
padding="SAME",
activation_mode="Relu"))
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
- "strides in the batch and depth"):
+ "Convolutional strides are not supported in "
+ "the batch or depth dimensions."):
sess.run(
fused_conv2d_bias_activation_op.fused_conv2d_bias_activation(
array_ops.placeholder(dtypes.float32),
@@ -552,6 +557,286 @@ def GetInceptionFwdTest(input_size, filter_size, stride, padding,
return Test
+def CalculateCovolvedOutputDim(input_dim, filter_dim, stride, padding_type):
+ """Calculates the size of an output dimension of a strided convolution.
+
+ Given the sizes of the corresponding dimensions of the input and filter
+ shapes, and the stride and padding type, calculates the size of the output
+ dimension. This function can be called separately for each input dimension.
+
+ Args:
+ input_dim: An `int` specifying the size of the input dimension.
+ filter_dim: An `int` specifying the size of the filter dimension.
+ stride: An `int` specifying the step size of the convolution along the
+ input dimension.
+ padding_type: Either 'VALID' or 'SAME'.
+
+ Returns:
+ The size of the output dimension.
+ """
+ if padding_type == "VALID":
+ return (input_dim - filter_dim + stride) // stride
+ else: # padding_type == 'SAME'
+ return (input_dim + stride - 1) // stride
+
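The two formulas above can be sanity-checked in isolation. The following is a standalone sketch (not part of the test file) restating them, applied to the 8x8 input / 3x3 filter / stride-2 configuration that appears in the test parameters below:

```python
def conv_output_dim(input_dim, filter_dim, stride, padding_type):
    """Restates the strided-convolution output-size formulas above."""
    if padding_type == "VALID":
        # ceil((input_dim - filter_dim + 1) / stride)
        return (input_dim - filter_dim + stride) // stride
    # "SAME": ceil(input_dim / stride)
    return (input_dim + stride - 1) // stride

# 8x8 input, 3x3 filter, stride 2:
print(conv_output_dim(8, 3, 2, "VALID"))  # 3
print(conv_output_dim(8, 3, 2, "SAME"))   # 4
```

Note that "SAME" depends only on the input size and stride, while "VALID" shrinks the output by the filter extent.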
+
+def NchwVectCToNchw(in_tensor):
+ # [N, C / 4, H, W, 4] => [N, C / 4, 4, H, W] == [N, C, H, W]
+ t = array_ops.transpose(in_tensor, [0, 1, 4, 2, 3])
+ n = in_tensor.shape.dims[0].value
+ c = in_tensor.shape.dims[1].value * in_tensor.shape.dims[4].value
+ h = in_tensor.shape.dims[2].value
+ w = in_tensor.shape.dims[3].value
+ return array_ops.reshape(t, [n, c, h, w])
+
+
+def OihwVectIToHwio(in_tensor):
+ # [O, I / 4, H, W, 4] => [O, I / 4, 4, H, W] == [O, I, H, W]
+ t = array_ops.transpose(in_tensor, [2, 3, 1, 4, 0])
+ o = in_tensor.shape.dims[0].value
+ i = in_tensor.shape.dims[1].value * in_tensor.shape.dims[4].value
+ h = in_tensor.shape.dims[2].value
+ w = in_tensor.shape.dims[3].value
+ return array_ops.reshape(t, [h, w, i, o])
+
+
+def NchwToNchwVectC(in_tensor):
+ n, c, h, w = in_tensor.shape.as_list()
+ assert c % 4 == 0
+ t = array_ops.reshape(in_tensor, [n, c // 4, 4, h, w])
+ return array_ops.transpose(t, [0, 1, 3, 4, 2])
+
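The layout helpers above pack channels in groups of four (the `NCHW_VECT_C` / `OIHW_VECT_I` vectorized layouts) via a reshape-plus-transpose. A minimal NumPy sketch of the same index permutations, to show that the two NCHW transforms are mutual inverses (function names here are illustrative, not the test's):

```python
import numpy as np

def nchw_to_vect_c(x):
    # [N, C, H, W] -> [N, C//4, 4, H, W] -> [N, C//4, H, W, 4]
    n, c, h, w = x.shape
    assert c % 4 == 0
    return x.reshape(n, c // 4, 4, h, w).transpose(0, 1, 3, 4, 2)

def vect_c_to_nchw(x):
    # [N, C//4, H, W, 4] -> [N, C//4, 4, H, W] -> [N, C, H, W]
    n, c4, h, w, v = x.shape
    return x.transpose(0, 1, 4, 2, 3).reshape(n, c4 * v, h, w)

x = np.arange(2 * 8 * 3 * 3).reshape(2, 8, 3, 3)
# Round-tripping through the vectorized layout recovers the original tensor.
assert np.array_equal(vect_c_to_nchw(nchw_to_vect_c(x)), x)
```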
+
+def SimulateFusedConv2dBiasActivationInt8(conv_input_scale, conv_input, kernel,
+ padding, strides, side_input_scale,
+ side_input, biases):
+ """Simulates the int8 fused 2-D convolution op using separate float ops.
+
+ The arguments and return values have the same format, meanings and
+ restrictions as the actual op.
+ Args:
+ conv_input_scale: A scalar 'float'.
+ conv_input: A `Tensor` of type `qint8` in NCHW_VECT_C layout.
+ kernel: A `Tensor` of type `qint8` in OIHW_VECT_I layout.
+ padding: A `string` from: `"SAME", "VALID"`.
+ strides: A list of `ints`.
+ side_input_scale: A scalar 'float'.
+ side_input: A `Tensor` of type `qint8` in NCHW_VECT_C layout.
+ biases: A `Tensor` of type `float32` in NCHW layout.
+
+ Returns:
+ A `Tensor` of type `qint8` in NCHW_VECT_C layout.
+ """
+ conv_result = nn_ops.conv2d(
+ NchwVectCToNchw(gen_array_ops.dequantize(conv_input, -128, 127)),
+ OihwVectIToHwio(gen_array_ops.dequantize(kernel, -128, 127)),
+ strides=strides,
+ padding=padding,
+ data_format="NCHW") * conv_input_scale
+
+ conv_and_side_inputs = conv_result + side_input_scale * NchwVectCToNchw(
+ gen_array_ops.dequantize(side_input, -128, 127))
+
+ logit = nn_ops.bias_add(conv_and_side_inputs, biases, data_format="NCHW")
+
+ result, _, _ = gen_array_ops.quantize_v2(
+ NchwToNchwVectC(nn_ops.relu(logit)), -128, 127, dtypes.qint8)
+ return result
+
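Per output element, the simulation above reduces to scaling the convolution accumulator, adding the scaled side input and the bias, and applying Relu. A scalar sketch of that arithmetic (names are illustrative, not part of the op's API):

```python
def fused_element(conv_acc, side, bias, conv_scale, side_scale):
    # One output element of the fused op, in the float domain:
    # Relu(conv_scale * conv_acc + side_scale * side + bias)
    return max(0.0, conv_scale * conv_acc + side_scale * side + bias)

# 0.25 * 8.0 + 0.5 * 4.0 + 0.5 = 4.5 (exact in binary floating point)
print(fused_element(8.0, 4.0, 0.5, conv_scale=0.25, side_scale=0.5))  # 4.5
```

In the real op this value is then re-quantized back to `qint8`, which the simulation performs with `quantize_v2` over the [-128, 127] range.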
+
+class FusedConvInt8Tests(test.TestCase):
+ _test_params = [
+ {
+ "batch_size": 2,
+ "input_channels": 8,
+ "output_channels": 16,
+ "input_height": 8,
+ "input_width": 8,
+ "filter_height": 3,
+ "filter_width": 3,
+ "vertical_stride": 2,
+ "horizontal_stride": 2,
+ "conv_input_scale": 0.002,
+ "side_input_scale": 0.0,
+ "bias_scale": 1,
+ "padding_type": "VALID"
+ },
+ {
+ "batch_size": 2,
+ "input_channels": 8,
+ "output_channels": 16,
+ "input_height": 8,
+ "input_width": 8,
+ "filter_height": 3,
+ "filter_width": 3,
+ "vertical_stride": 2,
+ "horizontal_stride": 2,
+ "conv_input_scale": 0.002,
+ "side_input_scale": 0.0,
+ "bias_scale": 1,
+ "padding_type": "SAME"
+ },
+ {
+ "batch_size": 2,
+ "input_channels": 8,
+ "output_channels": 16,
+ "input_height": 8,
+ "input_width": 8,
+ "filter_height": 3,
+ "filter_width": 3,
+ "vertical_stride": 2,
+ "horizontal_stride": 2,
+ "conv_input_scale": 0.002,
+ "side_input_scale": 0.5,
+ "bias_scale": 1,
+ "padding_type": "VALID"
+ },
+ {
+ "batch_size": 2,
+ "input_channels": 16,
+ "output_channels": 16,
+ "input_height": 9,
+ "input_width": 9,
+ "filter_height": 3,
+ "filter_width": 3,
+ "vertical_stride": 1,
+ "horizontal_stride": 1,
+ "conv_input_scale": 0.001,
+ "side_input_scale": 0.5,
+ "bias_scale": 1,
+ "padding_type": "SAME"
+ },
+ {
+ "batch_size": 3,
+ "input_channels": 8,
+ "output_channels": 8,
+ "input_height": 9,
+ "input_width": 9,
+ "filter_height": 5,
+ "filter_width": 5,
+ "vertical_stride": 1,
+ "horizontal_stride": 1,
+ "conv_input_scale": 0.001,
+ "side_input_scale": 0.5,
+ "bias_scale": 1,
+ "padding_type": "SAME"
+ },
+ {
+ "batch_size": 3,
+ "input_channels": 8,
+ "output_channels": 8,
+ "input_height": 9,
+ "input_width": 9,
+ "filter_height": 7,
+ "filter_width": 1,
+ "vertical_stride": 2,
+ "horizontal_stride": 1,
+ "conv_input_scale": 0.002,
+ "side_input_scale": 0.5,
+ "bias_scale": 1,
+ "padding_type": "SAME"
+ },
+ {
+ "batch_size": 3,
+ "input_channels": 8,
+ "output_channels": 8,
+ "input_height": 9,
+ "input_width": 9,
+ "filter_height": 1,
+ "filter_width": 7,
+ "vertical_stride": 1,
+ "horizontal_stride": 1,
+ "conv_input_scale": 0.002,
+ "side_input_scale": 0.5,
+ "bias_scale": 1,
+ "padding_type": "SAME"
+ },
+ ]
+
+ def runTest(self, test_param):
+ batch_size = test_param["batch_size"]
+ input_channels = test_param["input_channels"]
+ output_channels = test_param["output_channels"]
+ input_height = test_param["input_height"]
+ input_width = test_param["input_width"]
+ filter_height = test_param["filter_height"]
+ filter_width = test_param["filter_width"]
+ vertical_stride = test_param["vertical_stride"]
+ horizontal_stride = test_param["horizontal_stride"]
+ conv_input_scale = test_param["conv_input_scale"]
+ side_input_scale = test_param["side_input_scale"]
+ bias_scale = test_param["bias_scale"]
+ padding_type = test_param["padding_type"]
+
+ conv_input, _, _ = gen_array_ops.quantize_v2(
+ random_ops.random_uniform(
+ [batch_size, input_channels // 4, input_height, input_width, 4],
+ minval=-0.0,
+ maxval=1.0,
+ dtype=dtypes.float32), -1.0, 1.0, dtypes.qint8)
+
+ kernel, _, _ = gen_array_ops.quantize_v2(
+ random_ops.random_uniform(
+ [
+ output_channels, input_channels // 4, filter_height,
+ filter_width, 4
+ ],
+ minval=-1.0,
+ maxval=1.0,
+ dtype=dtypes.float32), -1.0, 1.0, dtypes.qint8)
+
+ output_height = CalculateCovolvedOutputDim(input_height, filter_height,
+ vertical_stride, padding_type)
+ output_width = CalculateCovolvedOutputDim(input_width, filter_width,
+ horizontal_stride, padding_type)
+ print("output_height=", output_height, ", output_width=", output_width)
+
+ side_input, _, _ = gen_array_ops.quantize_v2(
+ random_ops.random_uniform(
+ [batch_size, output_channels // 4, output_height, output_width, 4],
+ minval=0.0,
+ maxval=1.0,
+ dtype=dtypes.float32), -1.0, 1.0, dtypes.qint8)
+
+ biases = random_ops.random_uniform(
+ [output_channels],
+ minval=-10 * bias_scale,
+ maxval=20 * bias_scale,
+ dtype=dtypes.float32)
+
+ strides = [1, 1, vertical_stride, horizontal_stride]
+
+ actual = fused_conv2d_bias_activation_op.fused_conv2d_bias_activation(
+ conv_input,
+ kernel,
+ biases,
+ strides=strides,
+ padding=padding_type,
+ conv_input_scale=conv_input_scale,
+ side_input_scale=side_input_scale,
+ side_input=side_input,
+ data_format="NCHW_VECT_C",
+ filter_format="OIHW_VECT_I")
+
+ expected = SimulateFusedConv2dBiasActivationInt8(
+ conv_input_scale, conv_input, kernel, padding_type, strides,
+ side_input_scale, side_input, biases)
+
+ with self.test_session(use_gpu=True) as sess:
+ actual_y, expected_y = sess.run([actual, expected])
+ print("actual_y = ", actual_y)
+ print("expected_y = ", expected_y)
+ self.assertTrue(np.array_equal(actual_y, expected_y))
+
+ def testFusedConvInt8(self):
+ if not test.is_gpu_available(
+ cuda_only=True, min_cuda_compute_capability=(6, 1)):
+ tf_logging.info("int8 test skipped because not run with --config=cuda or "
+ "no GPUs with compute capability >= 6.1 are available.")
+ return
+ for test_param in self._test_params:
+ self.runTest(test_param)
+
+
if __name__ == "__main__":
for index, (input_size_, filter_size_, output_size_, stride_,
padding_) in enumerate(GetShrunkInceptionShapes()):
diff --git a/tensorflow/contrib/keras/BUILD b/tensorflow/contrib/keras/BUILD
index 26f0e41518..7e0019ce4a 100644
--- a/tensorflow/contrib/keras/BUILD
+++ b/tensorflow/contrib/keras/BUILD
@@ -1,5 +1,6 @@
# Description:
# Contains the Keras API (internal TensorFlow version).
+# Note that tf.contrib.keras has been deprecated in favor of tf.keras.
licenses(["notice"]) # Apache 2.0
@@ -7,9 +8,6 @@ exports_files(["LICENSE"])
package(default_visibility = ["//tensorflow:__subpackages__"])
-load("//tensorflow:tensorflow.bzl", "cuda_py_test")
-load("//tensorflow:tensorflow.bzl", "py_test")
-
py_library(
name = "keras",
srcs = [
@@ -48,641 +46,10 @@ py_library(
"api/keras/utils/__init__.py",
"api/keras/wrappers/__init__.py",
"api/keras/wrappers/scikit_learn/__init__.py",
- "python/keras/__init__.py",
- "python/keras/activations.py",
- "python/keras/applications/__init__.py",
- "python/keras/applications/imagenet_utils.py",
- "python/keras/applications/inception_v3.py",
- "python/keras/applications/mobilenet.py",
- "python/keras/applications/resnet50.py",
- "python/keras/applications/vgg16.py",
- "python/keras/applications/vgg19.py",
- "python/keras/applications/xception.py",
- "python/keras/backend.py",
- "python/keras/callbacks.py",
- "python/keras/constraints.py",
- "python/keras/datasets/__init__.py",
- "python/keras/datasets/boston_housing.py",
- "python/keras/datasets/cifar.py",
- "python/keras/datasets/cifar10.py",
- "python/keras/datasets/cifar100.py",
- "python/keras/datasets/imdb.py",
- "python/keras/datasets/mnist.py",
- "python/keras/datasets/reuters.py",
- "python/keras/engine/__init__.py",
- "python/keras/engine/topology.py",
- "python/keras/engine/training.py",
- "python/keras/initializers.py",
- "python/keras/layers/__init__.py",
- "python/keras/layers/advanced_activations.py",
- "python/keras/layers/convolutional.py",
- "python/keras/layers/convolutional_recurrent.py",
- "python/keras/layers/core.py",
- "python/keras/layers/embeddings.py",
- "python/keras/layers/local.py",
- "python/keras/layers/merge.py",
- "python/keras/layers/noise.py",
- "python/keras/layers/normalization.py",
- "python/keras/layers/pooling.py",
- "python/keras/layers/recurrent.py",
- "python/keras/layers/serialization.py",
- "python/keras/layers/wrappers.py",
- "python/keras/losses.py",
- "python/keras/metrics.py",
- "python/keras/models.py",
- "python/keras/optimizers.py",
- "python/keras/preprocessing/__init__.py",
- "python/keras/preprocessing/image.py",
- "python/keras/preprocessing/sequence.py",
- "python/keras/preprocessing/text.py",
- "python/keras/regularizers.py",
- "python/keras/testing_utils.py",
- "python/keras/utils/__init__.py",
- "python/keras/utils/conv_utils.py",
- "python/keras/utils/data_utils.py",
- "python/keras/utils/generic_utils.py",
- "python/keras/utils/io_utils.py",
- "python/keras/utils/layer_utils.py",
- "python/keras/utils/np_utils.py",
- "python/keras/utils/vis_utils.py",
- "python/keras/wrappers/__init__.py",
- "python/keras/wrappers/scikit_learn.py",
- ],
- srcs_version = "PY2AND3",
- deps = [
- "//tensorflow/contrib/tensorboard:projector",
- "//tensorflow/core:protos_all_py",
- "//tensorflow/python:array_ops",
- "//tensorflow/python:check_ops",
- "//tensorflow/python:client",
- "//tensorflow/python:clip_ops",
- "//tensorflow/python:constant_op",
- "//tensorflow/python:control_flow_ops",
- "//tensorflow/python:ctc_ops",
- "//tensorflow/python:dtypes",
- "//tensorflow/python:framework",
- "//tensorflow/python:framework_ops",
- "//tensorflow/python:functional_ops",
- "//tensorflow/python:gradients",
- "//tensorflow/python:image_ops",
- "//tensorflow/python:init_ops",
- "//tensorflow/python:layers",
- "//tensorflow/python:layers_base",
- "//tensorflow/python:logging_ops",
- "//tensorflow/python:math_ops",
- "//tensorflow/python:nn",
- "//tensorflow/python:platform",
- "//tensorflow/python:random_ops",
- "//tensorflow/python:sparse_ops",
- "//tensorflow/python:sparse_tensor",
- "//tensorflow/python:state_ops",
- "//tensorflow/python:summary",
- "//tensorflow/python:tensor_array_grad",
- "//tensorflow/python:tensor_array_ops",
- "//tensorflow/python:tensor_shape",
- "//tensorflow/python:training",
- "//tensorflow/python:util",
- "//tensorflow/python:variable_scope",
- "//tensorflow/python:variables",
- "@six_archive//:six",
- ],
-)
-
-py_test(
- name = "integration_test",
- size = "medium",
- srcs = ["python/keras/integration_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:layers",
- "//tensorflow/python:nn",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "activations_test",
- size = "small",
- srcs = ["python/keras/activations_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "constraints_test",
- size = "small",
- srcs = ["python/keras/constraints_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "initializers_test",
- size = "small",
- srcs = ["python/keras/initializers_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:init_ops",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "regularizers_test",
- size = "small",
- srcs = ["python/keras/regularizers_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "optimizers_test",
- size = "medium",
- srcs = ["python/keras/optimizers_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:training",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "losses_test",
- size = "small",
- srcs = ["python/keras/losses_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "metrics_test",
- size = "small",
- srcs = ["python/keras/metrics_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "inception_v3_test",
- size = "medium",
- srcs = ["python/keras/applications/inception_v3_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "mobilenet_test",
- size = "medium",
- srcs = ["python/keras/applications/mobilenet_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "resnet50_test",
- size = "small",
- srcs = ["python/keras/applications/resnet50_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "vgg16_test",
- size = "small",
- srcs = ["python/keras/applications/vgg16_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "vgg19_test",
- size = "small",
- srcs = ["python/keras/applications/vgg19_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "xception_test",
- size = "medium",
- srcs = ["python/keras/applications/xception_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "advanced_activations_test",
- size = "small",
- srcs = ["python/keras/layers/advanced_activations_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "convolutional_recurrent_test",
- size = "medium",
- srcs = ["python/keras/layers/convolutional_recurrent_test.py"],
- shard_count = 2,
- srcs_version = "PY2AND3",
- tags = ["noasan"], # times out b/63678675
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "convolutional_test",
- size = "medium",
- srcs = ["python/keras/layers/convolutional_test.py"],
- srcs_version = "PY2AND3",
- tags = [
- "manual",
- "noasan", # times out b/63678675
- "notsan",
- ],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "pooling_test",
- size = "small",
- srcs = ["python/keras/layers/pooling_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "core_test",
- size = "small",
- srcs = ["python/keras/layers/core_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "embeddings_test",
- size = "small",
- srcs = ["python/keras/layers/embeddings_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "local_test",
- size = "medium",
- srcs = ["python/keras/layers/local_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "merge_test",
- size = "small",
- srcs = ["python/keras/layers/merge_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "noise_test",
- size = "small",
- srcs = ["python/keras/layers/noise_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "normalization_test",
- size = "small",
- srcs = ["python/keras/layers/normalization_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "simplernn_test",
- size = "medium",
- srcs = ["python/keras/layers/simplernn_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "gru_test",
- size = "medium",
- srcs = ["python/keras/layers/gru_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"], # http://b/62136390
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "lstm_test",
- size = "medium",
- srcs = ["python/keras/layers/lstm_test.py"],
- srcs_version = "PY2AND3",
- tags = [
- "noasan", # times out b/63678675
- "notsan", # http://b/62189182
- ],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "serialization_test",
- size = "small",
- srcs = ["python/keras/layers/serialization_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "wrappers_test",
- size = "small",
- srcs = ["python/keras/layers/wrappers_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "scikit_learn_test",
- size = "small",
- srcs = ["python/keras/wrappers/scikit_learn_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "data_utils_test",
- size = "small",
- srcs = ["python/keras/utils/data_utils_test.py"],
- srcs_version = "PY2AND3",
- tags = [
- "noasan", # times out
- "notsan",
- ],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "generic_utils_test",
- size = "small",
- srcs = ["python/keras/utils/generic_utils_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- ],
-)
-
-py_test(
- name = "io_utils_test",
- size = "small",
- srcs = ["python/keras/utils/io_utils_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "imagenet_utils_test",
- size = "small",
- srcs = ["python/keras/applications/imagenet_utils_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "image_test",
- size = "medium",
- srcs = ["python/keras/preprocessing/image_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "sequence_test",
- size = "small",
- srcs = ["python/keras/preprocessing/sequence_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "text_test",
- size = "small",
- srcs = ["python/keras/preprocessing/text_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "callbacks_test",
- size = "medium",
- srcs = ["python/keras/callbacks_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "training_test",
- size = "medium",
- srcs = ["python/keras/engine/training_test.py"],
- srcs_version = "PY2AND3",
- tags = ["notsan"],
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "topology_test",
- size = "small",
- srcs = ["python/keras/engine/topology_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:array_ops",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:dtypes",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "models_test",
- size = "small",
- srcs = ["python/keras/models_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:training",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
- name = "backend_test",
- size = "small",
- srcs = ["python/keras/backend_test.py"],
- srcs_version = "PY2AND3",
- deps = [
- ":keras",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:util",
- "//third_party/py/numpy",
- ],
-)
-
-py_library(
- name = "testing_utils",
- srcs = [
- "python/keras/testing_utils.py",
],
srcs_version = "PY2AND3",
deps = [
- ":keras",
- "//tensorflow/python:util",
- "//third_party/py/numpy",
+ "//tensorflow/python/keras",
],
)
diff --git a/tensorflow/contrib/keras/README.md b/tensorflow/contrib/keras/README.md
index db2556fe42..de4c81268d 100644
--- a/tensorflow/contrib/keras/README.md
+++ b/tensorflow/contrib/keras/README.md
@@ -1,3 +1,6 @@
+NOTE: THE `tensorflow.contrib.keras` MODULE HAS BEEN DEPRECATED.
+USE `tensorflow.keras` INSTEAD, WHICH IS PART OF CORE TENSORFLOW.
+
Keras is an object-oriented API for defining and training neural networks.
This module contains a pure-TensorFlow implementation of the Keras API,
diff --git a/tensorflow/contrib/keras/api/keras/activations/__init__.py b/tensorflow/contrib/keras/api/keras/activations/__init__.py
index af6f249e71..d04838c218 100644
--- a/tensorflow/contrib/keras/api/keras/activations/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/activations/__init__.py
@@ -19,22 +19,22 @@ from __future__ import division
from __future__ import print_function
# Activation functions.
-from tensorflow.contrib.keras.python.keras.activations import elu
-from tensorflow.contrib.keras.python.keras.activations import hard_sigmoid
-from tensorflow.contrib.keras.python.keras.activations import linear
-from tensorflow.contrib.keras.python.keras.activations import relu
-from tensorflow.contrib.keras.python.keras.activations import selu
-from tensorflow.contrib.keras.python.keras.activations import sigmoid
-from tensorflow.contrib.keras.python.keras.activations import softmax
-from tensorflow.contrib.keras.python.keras.activations import softplus
-from tensorflow.contrib.keras.python.keras.activations import softsign
-from tensorflow.contrib.keras.python.keras.activations import tanh
+from tensorflow.python.keras._impl.keras.activations import elu
+from tensorflow.python.keras._impl.keras.activations import hard_sigmoid
+from tensorflow.python.keras._impl.keras.activations import linear
+from tensorflow.python.keras._impl.keras.activations import relu
+from tensorflow.python.keras._impl.keras.activations import selu
+from tensorflow.python.keras._impl.keras.activations import sigmoid
+from tensorflow.python.keras._impl.keras.activations import softmax
+from tensorflow.python.keras._impl.keras.activations import softplus
+from tensorflow.python.keras._impl.keras.activations import softsign
+from tensorflow.python.keras._impl.keras.activations import tanh
# Auxiliary utils.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.activations import deserialize
-from tensorflow.contrib.keras.python.keras.activations import serialize
-from tensorflow.contrib.keras.python.keras.activations import get
+from tensorflow.python.keras._impl.keras.activations import deserialize
+from tensorflow.python.keras._impl.keras.activations import serialize
+from tensorflow.python.keras._impl.keras.activations import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py b/tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py
index d8ca73fb97..abf8393ae4 100644
--- a/tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.inception_v3 import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.inception_v3 import InceptionV3
-from tensorflow.contrib.keras.python.keras.applications.inception_v3 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import InceptionV3
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import preprocess_input
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py b/tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py
index 594861fb51..b809e91193 100644
--- a/tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.mobilenet import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.mobilenet import MobileNet
-from tensorflow.contrib.keras.python.keras.applications.mobilenet import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.mobilenet import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.mobilenet import MobileNet
+from tensorflow.python.keras._impl.keras.applications.mobilenet import preprocess_input
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py b/tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py
index e9b25b66d5..530805d150 100644
--- a/tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.resnet50 import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.resnet50 import preprocess_input
-from tensorflow.contrib.keras.python.keras.applications.resnet50 import ResNet50
+from tensorflow.python.keras._impl.keras.applications.resnet50 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.resnet50 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.resnet50 import ResNet50
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py b/tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py
index 2a1f789cc5..118361604b 100644
--- a/tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.vgg16 import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.vgg16 import preprocess_input
-from tensorflow.contrib.keras.python.keras.applications.vgg16 import VGG16
+from tensorflow.python.keras._impl.keras.applications.vgg16 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.vgg16 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.vgg16 import VGG16
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py b/tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py
index 22b5e7c8e4..cda52628f3 100644
--- a/tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.vgg19 import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.vgg19 import preprocess_input
-from tensorflow.contrib.keras.python.keras.applications.vgg19 import VGG19
+from tensorflow.python.keras._impl.keras.applications.vgg19 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.vgg19 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.vgg19 import VGG19
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/applications/xception/__init__.py b/tensorflow/contrib/keras/api/keras/applications/xception/__init__.py
index 23d1b6a0b3..ae9cd9cd18 100644
--- a/tensorflow/contrib/keras/api/keras/applications/xception/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/applications/xception/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.xception import decode_predictions
-from tensorflow.contrib.keras.python.keras.applications.xception import preprocess_input
-from tensorflow.contrib.keras.python.keras.applications.xception import Xception
+from tensorflow.python.keras._impl.keras.applications.xception import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.xception import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.xception import Xception
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/backend/__init__.py b/tensorflow/contrib/keras/api/keras/backend/__init__.py
index f3721a8dcb..10ef5a7585 100644
--- a/tensorflow/contrib/keras/api/keras/backend/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/backend/__init__.py
@@ -19,144 +19,144 @@ from __future__ import division
from __future__ import print_function
# pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras.backend import abs
-from tensorflow.contrib.keras.python.keras.backend import all
-from tensorflow.contrib.keras.python.keras.backend import any
-from tensorflow.contrib.keras.python.keras.backend import arange
-from tensorflow.contrib.keras.python.keras.backend import argmax
-from tensorflow.contrib.keras.python.keras.backend import argmin
-from tensorflow.contrib.keras.python.keras.backend import backend
-from tensorflow.contrib.keras.python.keras.backend import batch_dot
-from tensorflow.contrib.keras.python.keras.backend import batch_flatten
-from tensorflow.contrib.keras.python.keras.backend import batch_get_value
-from tensorflow.contrib.keras.python.keras.backend import batch_normalization
-from tensorflow.contrib.keras.python.keras.backend import batch_set_value
-from tensorflow.contrib.keras.python.keras.backend import bias_add
-from tensorflow.contrib.keras.python.keras.backend import binary_crossentropy
-from tensorflow.contrib.keras.python.keras.backend import cast
-from tensorflow.contrib.keras.python.keras.backend import cast_to_floatx
-from tensorflow.contrib.keras.python.keras.backend import categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.backend import clear_session
-from tensorflow.contrib.keras.python.keras.backend import clip
-from tensorflow.contrib.keras.python.keras.backend import concatenate
-from tensorflow.contrib.keras.python.keras.backend import constant
-from tensorflow.contrib.keras.python.keras.backend import conv1d
-from tensorflow.contrib.keras.python.keras.backend import conv2d
-from tensorflow.contrib.keras.python.keras.backend import conv2d_transpose
-from tensorflow.contrib.keras.python.keras.backend import conv3d
-from tensorflow.contrib.keras.python.keras.backend import cos
-from tensorflow.contrib.keras.python.keras.backend import count_params
-from tensorflow.contrib.keras.python.keras.backend import ctc_batch_cost
-from tensorflow.contrib.keras.python.keras.backend import ctc_decode
-from tensorflow.contrib.keras.python.keras.backend import ctc_label_dense_to_sparse
-from tensorflow.contrib.keras.python.keras.backend import dot
-from tensorflow.contrib.keras.python.keras.backend import dropout
-from tensorflow.contrib.keras.python.keras.backend import dtype
-from tensorflow.contrib.keras.python.keras.backend import elu
-from tensorflow.contrib.keras.python.keras.backend import epsilon
-from tensorflow.contrib.keras.python.keras.backend import equal
-from tensorflow.contrib.keras.python.keras.backend import eval
-from tensorflow.contrib.keras.python.keras.backend import exp
-from tensorflow.contrib.keras.python.keras.backend import expand_dims
-from tensorflow.contrib.keras.python.keras.backend import eye
-from tensorflow.contrib.keras.python.keras.backend import flatten
-from tensorflow.contrib.keras.python.keras.backend import floatx
-from tensorflow.contrib.keras.python.keras.backend import foldl
-from tensorflow.contrib.keras.python.keras.backend import foldr
-from tensorflow.contrib.keras.python.keras.backend import function
-from tensorflow.contrib.keras.python.keras.backend import gather
-from tensorflow.contrib.keras.python.keras.backend import get_session
-from tensorflow.contrib.keras.python.keras.backend import get_uid
-from tensorflow.contrib.keras.python.keras.backend import get_value
-from tensorflow.contrib.keras.python.keras.backend import gradients
-from tensorflow.contrib.keras.python.keras.backend import greater
-from tensorflow.contrib.keras.python.keras.backend import greater_equal
-from tensorflow.contrib.keras.python.keras.backend import hard_sigmoid
-from tensorflow.contrib.keras.python.keras.backend import image_data_format
-from tensorflow.contrib.keras.python.keras.backend import in_test_phase
-from tensorflow.contrib.keras.python.keras.backend import in_top_k
-from tensorflow.contrib.keras.python.keras.backend import in_train_phase
-from tensorflow.contrib.keras.python.keras.backend import int_shape
-from tensorflow.contrib.keras.python.keras.backend import is_sparse
-from tensorflow.contrib.keras.python.keras.backend import l2_normalize
-from tensorflow.contrib.keras.python.keras.backend import learning_phase
-from tensorflow.contrib.keras.python.keras.backend import less
-from tensorflow.contrib.keras.python.keras.backend import less_equal
-from tensorflow.contrib.keras.python.keras.backend import log
-from tensorflow.contrib.keras.python.keras.backend import manual_variable_initialization
-from tensorflow.contrib.keras.python.keras.backend import map_fn
-from tensorflow.contrib.keras.python.keras.backend import max
-from tensorflow.contrib.keras.python.keras.backend import maximum
-from tensorflow.contrib.keras.python.keras.backend import mean
-from tensorflow.contrib.keras.python.keras.backend import min
-from tensorflow.contrib.keras.python.keras.backend import minimum
-from tensorflow.contrib.keras.python.keras.backend import moving_average_update
-from tensorflow.contrib.keras.python.keras.backend import name_scope
-from tensorflow.contrib.keras.python.keras.backend import ndim
-from tensorflow.contrib.keras.python.keras.backend import normalize_batch_in_training
-from tensorflow.contrib.keras.python.keras.backend import not_equal
-from tensorflow.contrib.keras.python.keras.backend import one_hot
-from tensorflow.contrib.keras.python.keras.backend import ones
-from tensorflow.contrib.keras.python.keras.backend import ones_like
-from tensorflow.contrib.keras.python.keras.backend import permute_dimensions
-from tensorflow.contrib.keras.python.keras.backend import placeholder
-from tensorflow.contrib.keras.python.keras.backend import pool2d
-from tensorflow.contrib.keras.python.keras.backend import pool3d
-from tensorflow.contrib.keras.python.keras.backend import pow
-from tensorflow.contrib.keras.python.keras.backend import print_tensor
-from tensorflow.contrib.keras.python.keras.backend import prod
-from tensorflow.contrib.keras.python.keras.backend import random_binomial
-from tensorflow.contrib.keras.python.keras.backend import random_normal
-from tensorflow.contrib.keras.python.keras.backend import random_normal_variable
-from tensorflow.contrib.keras.python.keras.backend import random_uniform
-from tensorflow.contrib.keras.python.keras.backend import random_uniform_variable
-from tensorflow.contrib.keras.python.keras.backend import relu
-from tensorflow.contrib.keras.python.keras.backend import repeat
-from tensorflow.contrib.keras.python.keras.backend import repeat_elements
-from tensorflow.contrib.keras.python.keras.backend import reset_uids
-from tensorflow.contrib.keras.python.keras.backend import reshape
-from tensorflow.contrib.keras.python.keras.backend import resize_images
-from tensorflow.contrib.keras.python.keras.backend import resize_volumes
-from tensorflow.contrib.keras.python.keras.backend import reverse
-from tensorflow.contrib.keras.python.keras.backend import rnn
-from tensorflow.contrib.keras.python.keras.backend import round
-from tensorflow.contrib.keras.python.keras.backend import separable_conv2d
-from tensorflow.contrib.keras.python.keras.backend import set_epsilon
-from tensorflow.contrib.keras.python.keras.backend import set_floatx
-from tensorflow.contrib.keras.python.keras.backend import set_image_data_format
-from tensorflow.contrib.keras.python.keras.backend import set_learning_phase
-from tensorflow.contrib.keras.python.keras.backend import set_session
-from tensorflow.contrib.keras.python.keras.backend import set_value
-from tensorflow.contrib.keras.python.keras.backend import shape
-from tensorflow.contrib.keras.python.keras.backend import sigmoid
-from tensorflow.contrib.keras.python.keras.backend import sign
-from tensorflow.contrib.keras.python.keras.backend import sin
-from tensorflow.contrib.keras.python.keras.backend import softmax
-from tensorflow.contrib.keras.python.keras.backend import softplus
-from tensorflow.contrib.keras.python.keras.backend import softsign
-from tensorflow.contrib.keras.python.keras.backend import sparse_categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.backend import spatial_2d_padding
-from tensorflow.contrib.keras.python.keras.backend import spatial_3d_padding
-from tensorflow.contrib.keras.python.keras.backend import sqrt
-from tensorflow.contrib.keras.python.keras.backend import square
-from tensorflow.contrib.keras.python.keras.backend import squeeze
-from tensorflow.contrib.keras.python.keras.backend import stack
-from tensorflow.contrib.keras.python.keras.backend import std
-from tensorflow.contrib.keras.python.keras.backend import stop_gradient
-from tensorflow.contrib.keras.python.keras.backend import sum
-from tensorflow.contrib.keras.python.keras.backend import switch
-from tensorflow.contrib.keras.python.keras.backend import tanh
-from tensorflow.contrib.keras.python.keras.backend import temporal_padding
-from tensorflow.contrib.keras.python.keras.backend import to_dense
-from tensorflow.contrib.keras.python.keras.backend import transpose
-from tensorflow.contrib.keras.python.keras.backend import truncated_normal
-from tensorflow.contrib.keras.python.keras.backend import update
-from tensorflow.contrib.keras.python.keras.backend import update_add
-from tensorflow.contrib.keras.python.keras.backend import update_sub
-from tensorflow.contrib.keras.python.keras.backend import var
-from tensorflow.contrib.keras.python.keras.backend import variable
-from tensorflow.contrib.keras.python.keras.backend import zeros
-from tensorflow.contrib.keras.python.keras.backend import zeros_like
+from tensorflow.python.keras._impl.keras.backend import abs
+from tensorflow.python.keras._impl.keras.backend import all
+from tensorflow.python.keras._impl.keras.backend import any
+from tensorflow.python.keras._impl.keras.backend import arange
+from tensorflow.python.keras._impl.keras.backend import argmax
+from tensorflow.python.keras._impl.keras.backend import argmin
+from tensorflow.python.keras._impl.keras.backend import backend
+from tensorflow.python.keras._impl.keras.backend import batch_dot
+from tensorflow.python.keras._impl.keras.backend import batch_flatten
+from tensorflow.python.keras._impl.keras.backend import batch_get_value
+from tensorflow.python.keras._impl.keras.backend import batch_normalization
+from tensorflow.python.keras._impl.keras.backend import batch_set_value
+from tensorflow.python.keras._impl.keras.backend import bias_add
+from tensorflow.python.keras._impl.keras.backend import binary_crossentropy
+from tensorflow.python.keras._impl.keras.backend import cast
+from tensorflow.python.keras._impl.keras.backend import cast_to_floatx
+from tensorflow.python.keras._impl.keras.backend import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.backend import clear_session
+from tensorflow.python.keras._impl.keras.backend import clip
+from tensorflow.python.keras._impl.keras.backend import concatenate
+from tensorflow.python.keras._impl.keras.backend import constant
+from tensorflow.python.keras._impl.keras.backend import conv1d
+from tensorflow.python.keras._impl.keras.backend import conv2d
+from tensorflow.python.keras._impl.keras.backend import conv2d_transpose
+from tensorflow.python.keras._impl.keras.backend import conv3d
+from tensorflow.python.keras._impl.keras.backend import cos
+from tensorflow.python.keras._impl.keras.backend import count_params
+from tensorflow.python.keras._impl.keras.backend import ctc_batch_cost
+from tensorflow.python.keras._impl.keras.backend import ctc_decode
+from tensorflow.python.keras._impl.keras.backend import ctc_label_dense_to_sparse
+from tensorflow.python.keras._impl.keras.backend import dot
+from tensorflow.python.keras._impl.keras.backend import dropout
+from tensorflow.python.keras._impl.keras.backend import dtype
+from tensorflow.python.keras._impl.keras.backend import elu
+from tensorflow.python.keras._impl.keras.backend import epsilon
+from tensorflow.python.keras._impl.keras.backend import equal
+from tensorflow.python.keras._impl.keras.backend import eval
+from tensorflow.python.keras._impl.keras.backend import exp
+from tensorflow.python.keras._impl.keras.backend import expand_dims
+from tensorflow.python.keras._impl.keras.backend import eye
+from tensorflow.python.keras._impl.keras.backend import flatten
+from tensorflow.python.keras._impl.keras.backend import floatx
+from tensorflow.python.keras._impl.keras.backend import foldl
+from tensorflow.python.keras._impl.keras.backend import foldr
+from tensorflow.python.keras._impl.keras.backend import function
+from tensorflow.python.keras._impl.keras.backend import gather
+from tensorflow.python.keras._impl.keras.backend import get_session
+from tensorflow.python.keras._impl.keras.backend import get_uid
+from tensorflow.python.keras._impl.keras.backend import get_value
+from tensorflow.python.keras._impl.keras.backend import gradients
+from tensorflow.python.keras._impl.keras.backend import greater
+from tensorflow.python.keras._impl.keras.backend import greater_equal
+from tensorflow.python.keras._impl.keras.backend import hard_sigmoid
+from tensorflow.python.keras._impl.keras.backend import image_data_format
+from tensorflow.python.keras._impl.keras.backend import in_test_phase
+from tensorflow.python.keras._impl.keras.backend import in_top_k
+from tensorflow.python.keras._impl.keras.backend import in_train_phase
+from tensorflow.python.keras._impl.keras.backend import int_shape
+from tensorflow.python.keras._impl.keras.backend import is_sparse
+from tensorflow.python.keras._impl.keras.backend import l2_normalize
+from tensorflow.python.keras._impl.keras.backend import learning_phase
+from tensorflow.python.keras._impl.keras.backend import less
+from tensorflow.python.keras._impl.keras.backend import less_equal
+from tensorflow.python.keras._impl.keras.backend import log
+from tensorflow.python.keras._impl.keras.backend import manual_variable_initialization
+from tensorflow.python.keras._impl.keras.backend import map_fn
+from tensorflow.python.keras._impl.keras.backend import max
+from tensorflow.python.keras._impl.keras.backend import maximum
+from tensorflow.python.keras._impl.keras.backend import mean
+from tensorflow.python.keras._impl.keras.backend import min
+from tensorflow.python.keras._impl.keras.backend import minimum
+from tensorflow.python.keras._impl.keras.backend import moving_average_update
+from tensorflow.python.keras._impl.keras.backend import name_scope
+from tensorflow.python.keras._impl.keras.backend import ndim
+from tensorflow.python.keras._impl.keras.backend import normalize_batch_in_training
+from tensorflow.python.keras._impl.keras.backend import not_equal
+from tensorflow.python.keras._impl.keras.backend import one_hot
+from tensorflow.python.keras._impl.keras.backend import ones
+from tensorflow.python.keras._impl.keras.backend import ones_like
+from tensorflow.python.keras._impl.keras.backend import permute_dimensions
+from tensorflow.python.keras._impl.keras.backend import placeholder
+from tensorflow.python.keras._impl.keras.backend import pool2d
+from tensorflow.python.keras._impl.keras.backend import pool3d
+from tensorflow.python.keras._impl.keras.backend import pow
+from tensorflow.python.keras._impl.keras.backend import print_tensor
+from tensorflow.python.keras._impl.keras.backend import prod
+from tensorflow.python.keras._impl.keras.backend import random_binomial
+from tensorflow.python.keras._impl.keras.backend import random_normal
+from tensorflow.python.keras._impl.keras.backend import random_normal_variable
+from tensorflow.python.keras._impl.keras.backend import random_uniform
+from tensorflow.python.keras._impl.keras.backend import random_uniform_variable
+from tensorflow.python.keras._impl.keras.backend import relu
+from tensorflow.python.keras._impl.keras.backend import repeat
+from tensorflow.python.keras._impl.keras.backend import repeat_elements
+from tensorflow.python.keras._impl.keras.backend import reset_uids
+from tensorflow.python.keras._impl.keras.backend import reshape
+from tensorflow.python.keras._impl.keras.backend import resize_images
+from tensorflow.python.keras._impl.keras.backend import resize_volumes
+from tensorflow.python.keras._impl.keras.backend import reverse
+from tensorflow.python.keras._impl.keras.backend import rnn
+from tensorflow.python.keras._impl.keras.backend import round
+from tensorflow.python.keras._impl.keras.backend import separable_conv2d
+from tensorflow.python.keras._impl.keras.backend import set_epsilon
+from tensorflow.python.keras._impl.keras.backend import set_floatx
+from tensorflow.python.keras._impl.keras.backend import set_image_data_format
+from tensorflow.python.keras._impl.keras.backend import set_learning_phase
+from tensorflow.python.keras._impl.keras.backend import set_session
+from tensorflow.python.keras._impl.keras.backend import set_value
+from tensorflow.python.keras._impl.keras.backend import shape
+from tensorflow.python.keras._impl.keras.backend import sigmoid
+from tensorflow.python.keras._impl.keras.backend import sign
+from tensorflow.python.keras._impl.keras.backend import sin
+from tensorflow.python.keras._impl.keras.backend import softmax
+from tensorflow.python.keras._impl.keras.backend import softplus
+from tensorflow.python.keras._impl.keras.backend import softsign
+from tensorflow.python.keras._impl.keras.backend import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.backend import spatial_2d_padding
+from tensorflow.python.keras._impl.keras.backend import spatial_3d_padding
+from tensorflow.python.keras._impl.keras.backend import sqrt
+from tensorflow.python.keras._impl.keras.backend import square
+from tensorflow.python.keras._impl.keras.backend import squeeze
+from tensorflow.python.keras._impl.keras.backend import stack
+from tensorflow.python.keras._impl.keras.backend import std
+from tensorflow.python.keras._impl.keras.backend import stop_gradient
+from tensorflow.python.keras._impl.keras.backend import sum
+from tensorflow.python.keras._impl.keras.backend import switch
+from tensorflow.python.keras._impl.keras.backend import tanh
+from tensorflow.python.keras._impl.keras.backend import temporal_padding
+from tensorflow.python.keras._impl.keras.backend import to_dense
+from tensorflow.python.keras._impl.keras.backend import transpose
+from tensorflow.python.keras._impl.keras.backend import truncated_normal
+from tensorflow.python.keras._impl.keras.backend import update
+from tensorflow.python.keras._impl.keras.backend import update_add
+from tensorflow.python.keras._impl.keras.backend import update_sub
+from tensorflow.python.keras._impl.keras.backend import var
+from tensorflow.python.keras._impl.keras.backend import variable
+from tensorflow.python.keras._impl.keras.backend import zeros
+from tensorflow.python.keras._impl.keras.backend import zeros_like
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/callbacks/__init__.py b/tensorflow/contrib/keras/api/keras/callbacks/__init__.py
index 3a97074857..2d884790dd 100644
--- a/tensorflow/contrib/keras/api/keras/callbacks/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/callbacks/__init__.py
@@ -18,19 +18,19 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.callbacks import BaseLogger
-from tensorflow.contrib.keras.python.keras.callbacks import Callback
-from tensorflow.contrib.keras.python.keras.callbacks import CSVLogger
-from tensorflow.contrib.keras.python.keras.callbacks import EarlyStopping
-from tensorflow.contrib.keras.python.keras.callbacks import History
-from tensorflow.contrib.keras.python.keras.callbacks import LambdaCallback
-from tensorflow.contrib.keras.python.keras.callbacks import LearningRateScheduler
-from tensorflow.contrib.keras.python.keras.callbacks import ModelCheckpoint
-from tensorflow.contrib.keras.python.keras.callbacks import ProgbarLogger
-from tensorflow.contrib.keras.python.keras.callbacks import ReduceLROnPlateau
-from tensorflow.contrib.keras.python.keras.callbacks import RemoteMonitor
-from tensorflow.contrib.keras.python.keras.callbacks import TensorBoard
-from tensorflow.contrib.keras.python.keras.callbacks import TerminateOnNaN
+from tensorflow.python.keras._impl.keras.callbacks import BaseLogger
+from tensorflow.python.keras._impl.keras.callbacks import Callback
+from tensorflow.python.keras._impl.keras.callbacks import CSVLogger
+from tensorflow.python.keras._impl.keras.callbacks import EarlyStopping
+from tensorflow.python.keras._impl.keras.callbacks import History
+from tensorflow.python.keras._impl.keras.callbacks import LambdaCallback
+from tensorflow.python.keras._impl.keras.callbacks import LearningRateScheduler
+from tensorflow.python.keras._impl.keras.callbacks import ModelCheckpoint
+from tensorflow.python.keras._impl.keras.callbacks import ProgbarLogger
+from tensorflow.python.keras._impl.keras.callbacks import ReduceLROnPlateau
+from tensorflow.python.keras._impl.keras.callbacks import RemoteMonitor
+from tensorflow.python.keras._impl.keras.callbacks import TensorBoard
+from tensorflow.python.keras._impl.keras.callbacks import TerminateOnNaN
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/constraints/__init__.py b/tensorflow/contrib/keras/api/keras/constraints/__init__.py
index 6b9e3bf46e..152606d8eb 100644
--- a/tensorflow/contrib/keras/api/keras/constraints/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/constraints/__init__.py
@@ -19,21 +19,21 @@ from __future__ import division
from __future__ import print_function
# Constraints functions / callable classes.
-from tensorflow.contrib.keras.python.keras.constraints import Constraint
-from tensorflow.contrib.keras.python.keras.constraints import max_norm
-from tensorflow.contrib.keras.python.keras.constraints import MaxNorm
-from tensorflow.contrib.keras.python.keras.constraints import min_max_norm
-from tensorflow.contrib.keras.python.keras.constraints import MinMaxNorm
-from tensorflow.contrib.keras.python.keras.constraints import non_neg
-from tensorflow.contrib.keras.python.keras.constraints import NonNeg
-from tensorflow.contrib.keras.python.keras.constraints import unit_norm
-from tensorflow.contrib.keras.python.keras.constraints import UnitNorm
+from tensorflow.python.keras._impl.keras.constraints import Constraint
+from tensorflow.python.keras._impl.keras.constraints import max_norm
+from tensorflow.python.keras._impl.keras.constraints import MaxNorm
+from tensorflow.python.keras._impl.keras.constraints import min_max_norm
+from tensorflow.python.keras._impl.keras.constraints import MinMaxNorm
+from tensorflow.python.keras._impl.keras.constraints import non_neg
+from tensorflow.python.keras._impl.keras.constraints import NonNeg
+from tensorflow.python.keras._impl.keras.constraints import unit_norm
+from tensorflow.python.keras._impl.keras.constraints import UnitNorm
# Auxiliary utils.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.constraints import deserialize
-from tensorflow.contrib.keras.python.keras.constraints import serialize
-from tensorflow.contrib.keras.python.keras.constraints import get
+from tensorflow.python.keras._impl.keras.constraints import deserialize
+from tensorflow.python.keras._impl.keras.constraints import serialize
+from tensorflow.python.keras._impl.keras.constraints import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py
index 0bfd3df540..b5371a03fd 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.boston_housing import load_data
+from tensorflow.python.keras._impl.keras.datasets.boston_housing import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py
index f5fac6982a..68d3eb789e 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.cifar10 import load_data
+from tensorflow.python.keras._impl.keras.datasets.cifar10 import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py
index a7e6996136..ca93742673 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.cifar100 import load_data
+from tensorflow.python.keras._impl.keras.datasets.cifar100 import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py
index f141c8a8e9..1c6396d2d3 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.imdb import get_word_index
-from tensorflow.contrib.keras.python.keras.datasets.imdb import load_data
+from tensorflow.python.keras._impl.keras.datasets.imdb import get_word_index
+from tensorflow.python.keras._impl.keras.datasets.imdb import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py
index 50b74f149c..364255f338 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.mnist import load_data
+from tensorflow.python.keras._impl.keras.datasets.mnist import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py b/tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py
index fc7f1235a3..bb6791a344 100644
--- a/tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets.reuters import get_word_index
-from tensorflow.contrib.keras.python.keras.datasets.reuters import load_data
+from tensorflow.python.keras._impl.keras.datasets.reuters import get_word_index
+from tensorflow.python.keras._impl.keras.datasets.reuters import load_data
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/initializers/__init__.py b/tensorflow/contrib/keras/api/keras/initializers/__init__.py
index 9b58723ed5..6b1fcfd2d9 100644
--- a/tensorflow/contrib/keras/api/keras/initializers/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/initializers/__init__.py
@@ -19,30 +19,30 @@ from __future__ import division
from __future__ import print_function
# Initializer functions / callable classes.
-from tensorflow.contrib.keras.python.keras.initializers import Constant
-from tensorflow.contrib.keras.python.keras.initializers import Identity
-from tensorflow.contrib.keras.python.keras.initializers import Initializer
-from tensorflow.contrib.keras.python.keras.initializers import Ones
-from tensorflow.contrib.keras.python.keras.initializers import Orthogonal
-from tensorflow.contrib.keras.python.keras.initializers import RandomNormal
-from tensorflow.contrib.keras.python.keras.initializers import RandomUniform
-from tensorflow.contrib.keras.python.keras.initializers import TruncatedNormal
-from tensorflow.contrib.keras.python.keras.initializers import VarianceScaling
-from tensorflow.contrib.keras.python.keras.initializers import Zeros
+from tensorflow.python.keras._impl.keras.initializers import Constant
+from tensorflow.python.keras._impl.keras.initializers import Identity
+from tensorflow.python.keras._impl.keras.initializers import Initializer
+from tensorflow.python.keras._impl.keras.initializers import Ones
+from tensorflow.python.keras._impl.keras.initializers import Orthogonal
+from tensorflow.python.keras._impl.keras.initializers import RandomNormal
+from tensorflow.python.keras._impl.keras.initializers import RandomUniform
+from tensorflow.python.keras._impl.keras.initializers import TruncatedNormal
+from tensorflow.python.keras._impl.keras.initializers import VarianceScaling
+from tensorflow.python.keras._impl.keras.initializers import Zeros
# Functional interface.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.initializers import glorot_normal
-from tensorflow.contrib.keras.python.keras.initializers import glorot_uniform
-from tensorflow.contrib.keras.python.keras.initializers import he_normal
-from tensorflow.contrib.keras.python.keras.initializers import he_uniform
-from tensorflow.contrib.keras.python.keras.initializers import lecun_normal
-from tensorflow.contrib.keras.python.keras.initializers import lecun_uniform
+from tensorflow.python.keras._impl.keras.initializers import glorot_normal
+from tensorflow.python.keras._impl.keras.initializers import glorot_uniform
+from tensorflow.python.keras._impl.keras.initializers import he_normal
+from tensorflow.python.keras._impl.keras.initializers import he_uniform
+from tensorflow.python.keras._impl.keras.initializers import lecun_normal
+from tensorflow.python.keras._impl.keras.initializers import lecun_uniform
# Auxiliary utils.
-from tensorflow.contrib.keras.python.keras.initializers import deserialize
-from tensorflow.contrib.keras.python.keras.initializers import serialize
-from tensorflow.contrib.keras.python.keras.initializers import get
+from tensorflow.python.keras._impl.keras.initializers import deserialize
+from tensorflow.python.keras._impl.keras.initializers import serialize
+from tensorflow.python.keras._impl.keras.initializers import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/layers/__init__.py b/tensorflow/contrib/keras/api/keras/layers/__init__.py
index aafd189217..acf0a5e179 100644
--- a/tensorflow/contrib/keras/api/keras/layers/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/layers/__init__.py
@@ -20,128 +20,128 @@ from __future__ import print_function
# Generic layers.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.engine import Input
-from tensorflow.contrib.keras.python.keras.engine import InputLayer
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.engine import Input
+from tensorflow.python.keras._impl.keras.engine import InputLayer
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
# Advanced activations.
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import LeakyReLU
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import PReLU
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import ELU
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import ThresholdedReLU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import LeakyReLU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import PReLU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import ELU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import ThresholdedReLU
# Convolution layers.
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Conv1D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Conv2D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Conv3D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Conv2DTranspose
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Conv3DTranspose
-from tensorflow.contrib.keras.python.keras.layers.convolutional import SeparableConv2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv2DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv3DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import SeparableConv2D
# Convolution layer aliases.
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Convolution1D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Convolution2D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Convolution3D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Convolution2DTranspose
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Convolution3DTranspose
-from tensorflow.contrib.keras.python.keras.layers.convolutional import SeparableConvolution2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution2DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution3DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import SeparableConvolution2D
# Image processing layers.
-from tensorflow.contrib.keras.python.keras.layers.convolutional import UpSampling1D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import UpSampling2D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import UpSampling3D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import ZeroPadding1D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import ZeroPadding2D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import ZeroPadding3D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Cropping1D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Cropping2D
-from tensorflow.contrib.keras.python.keras.layers.convolutional import Cropping3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping3D
# Convolutional-recurrent layers.
-from tensorflow.contrib.keras.python.keras.layers.convolutional_recurrent import ConvLSTM2D
+from tensorflow.python.keras._impl.keras.layers.convolutional_recurrent import ConvLSTM2D
# Core layers.
-from tensorflow.contrib.keras.python.keras.layers.core import Masking
-from tensorflow.contrib.keras.python.keras.layers.core import Dropout
-from tensorflow.contrib.keras.python.keras.layers.core import SpatialDropout1D
-from tensorflow.contrib.keras.python.keras.layers.core import SpatialDropout2D
-from tensorflow.contrib.keras.python.keras.layers.core import SpatialDropout3D
-from tensorflow.contrib.keras.python.keras.layers.core import Activation
-from tensorflow.contrib.keras.python.keras.layers.core import Reshape
-from tensorflow.contrib.keras.python.keras.layers.core import Permute
-from tensorflow.contrib.keras.python.keras.layers.core import Flatten
-from tensorflow.contrib.keras.python.keras.layers.core import RepeatVector
-from tensorflow.contrib.keras.python.keras.layers.core import Lambda
-from tensorflow.contrib.keras.python.keras.layers.core import Dense
-from tensorflow.contrib.keras.python.keras.layers.core import ActivityRegularization
+from tensorflow.python.keras._impl.keras.layers.core import Masking
+from tensorflow.python.keras._impl.keras.layers.core import Dropout
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout1D
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout2D
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout3D
+from tensorflow.python.keras._impl.keras.layers.core import Activation
+from tensorflow.python.keras._impl.keras.layers.core import Reshape
+from tensorflow.python.keras._impl.keras.layers.core import Permute
+from tensorflow.python.keras._impl.keras.layers.core import Flatten
+from tensorflow.python.keras._impl.keras.layers.core import RepeatVector
+from tensorflow.python.keras._impl.keras.layers.core import Lambda
+from tensorflow.python.keras._impl.keras.layers.core import Dense
+from tensorflow.python.keras._impl.keras.layers.core import ActivityRegularization
# Embedding layers.
-from tensorflow.contrib.keras.python.keras.layers.embeddings import Embedding
+from tensorflow.python.keras._impl.keras.layers.embeddings import Embedding
# Locally-connected layers.
-from tensorflow.contrib.keras.python.keras.layers.local import LocallyConnected1D
-from tensorflow.contrib.keras.python.keras.layers.local import LocallyConnected2D
+from tensorflow.python.keras._impl.keras.layers.local import LocallyConnected1D
+from tensorflow.python.keras._impl.keras.layers.local import LocallyConnected2D
# Merge layers.
-from tensorflow.contrib.keras.python.keras.layers.merge import Add
-from tensorflow.contrib.keras.python.keras.layers.merge import Multiply
-from tensorflow.contrib.keras.python.keras.layers.merge import Average
-from tensorflow.contrib.keras.python.keras.layers.merge import Maximum
-from tensorflow.contrib.keras.python.keras.layers.merge import Concatenate
-from tensorflow.contrib.keras.python.keras.layers.merge import Dot
-from tensorflow.contrib.keras.python.keras.layers.merge import add
-from tensorflow.contrib.keras.python.keras.layers.merge import multiply
-from tensorflow.contrib.keras.python.keras.layers.merge import average
-from tensorflow.contrib.keras.python.keras.layers.merge import maximum
-from tensorflow.contrib.keras.python.keras.layers.merge import concatenate
-from tensorflow.contrib.keras.python.keras.layers.merge import dot
+from tensorflow.python.keras._impl.keras.layers.merge import Add
+from tensorflow.python.keras._impl.keras.layers.merge import Multiply
+from tensorflow.python.keras._impl.keras.layers.merge import Average
+from tensorflow.python.keras._impl.keras.layers.merge import Maximum
+from tensorflow.python.keras._impl.keras.layers.merge import Concatenate
+from tensorflow.python.keras._impl.keras.layers.merge import Dot
+from tensorflow.python.keras._impl.keras.layers.merge import add
+from tensorflow.python.keras._impl.keras.layers.merge import multiply
+from tensorflow.python.keras._impl.keras.layers.merge import average
+from tensorflow.python.keras._impl.keras.layers.merge import maximum
+from tensorflow.python.keras._impl.keras.layers.merge import concatenate
+from tensorflow.python.keras._impl.keras.layers.merge import dot
# Noise layers.
-from tensorflow.contrib.keras.python.keras.layers.noise import AlphaDropout
-from tensorflow.contrib.keras.python.keras.layers.noise import GaussianNoise
-from tensorflow.contrib.keras.python.keras.layers.noise import GaussianDropout
+from tensorflow.python.keras._impl.keras.layers.noise import AlphaDropout
+from tensorflow.python.keras._impl.keras.layers.noise import GaussianNoise
+from tensorflow.python.keras._impl.keras.layers.noise import GaussianDropout
# Normalization layers.
-from tensorflow.contrib.keras.python.keras.layers.normalization import BatchNormalization
+from tensorflow.python.keras._impl.keras.layers.normalization import BatchNormalization
# Pooling layers.
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAveragePooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAveragePooling3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling3D
# Pooling layer aliases.
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPool1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPool2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPool3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AvgPool1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AvgPool2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AvgPool3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAvgPool1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAvgPool2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalAvgPool3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPool1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPool2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import GlobalMaxPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool3D
# Recurrent layers.
-from tensorflow.contrib.keras.python.keras.layers.recurrent import SimpleRNN
-from tensorflow.contrib.keras.python.keras.layers.recurrent import GRU
-from tensorflow.contrib.keras.python.keras.layers.recurrent import LSTM
+from tensorflow.python.keras._impl.keras.layers.recurrent import SimpleRNN
+from tensorflow.python.keras._impl.keras.layers.recurrent import GRU
+from tensorflow.python.keras._impl.keras.layers.recurrent import LSTM
# Wrapper functions
-from tensorflow.contrib.keras.python.keras.layers.wrappers import Wrapper
-from tensorflow.contrib.keras.python.keras.layers.wrappers import Bidirectional
-from tensorflow.contrib.keras.python.keras.layers.wrappers import TimeDistributed
+from tensorflow.python.keras._impl.keras.layers.wrappers import Wrapper
+from tensorflow.python.keras._impl.keras.layers.wrappers import Bidirectional
+from tensorflow.python.keras._impl.keras.layers.wrappers import TimeDistributed
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/losses/__init__.py b/tensorflow/contrib/keras/api/keras/losses/__init__.py
index 06dd679f9c..66721b694f 100644
--- a/tensorflow/contrib/keras/api/keras/losses/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/losses/__init__.py
@@ -19,26 +19,26 @@ from __future__ import division
from __future__ import print_function
# Loss functions.
-from tensorflow.contrib.keras.python.keras.losses import binary_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import categorical_hinge
-from tensorflow.contrib.keras.python.keras.losses import cosine_proximity
-from tensorflow.contrib.keras.python.keras.losses import hinge
-from tensorflow.contrib.keras.python.keras.losses import kullback_leibler_divergence
-from tensorflow.contrib.keras.python.keras.losses import logcosh
-from tensorflow.contrib.keras.python.keras.losses import mean_absolute_error
-from tensorflow.contrib.keras.python.keras.losses import mean_absolute_percentage_error
-from tensorflow.contrib.keras.python.keras.losses import mean_squared_error
-from tensorflow.contrib.keras.python.keras.losses import mean_squared_logarithmic_error
-from tensorflow.contrib.keras.python.keras.losses import poisson
-from tensorflow.contrib.keras.python.keras.losses import sparse_categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import squared_hinge
+from tensorflow.python.keras._impl.keras.losses import binary_crossentropy
+from tensorflow.python.keras._impl.keras.losses import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import categorical_hinge
+from tensorflow.python.keras._impl.keras.losses import cosine_proximity
+from tensorflow.python.keras._impl.keras.losses import hinge
+from tensorflow.python.keras._impl.keras.losses import kullback_leibler_divergence
+from tensorflow.python.keras._impl.keras.losses import logcosh
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_error
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_percentage_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_logarithmic_error
+from tensorflow.python.keras._impl.keras.losses import poisson
+from tensorflow.python.keras._impl.keras.losses import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import squared_hinge
# Auxiliary utils.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.losses import deserialize
-from tensorflow.contrib.keras.python.keras.losses import serialize
-from tensorflow.contrib.keras.python.keras.losses import get
+from tensorflow.python.keras._impl.keras.losses import deserialize
+from tensorflow.python.keras._impl.keras.losses import serialize
+from tensorflow.python.keras._impl.keras.losses import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/metrics/__init__.py b/tensorflow/contrib/keras/api/keras/metrics/__init__.py
index 99496edde2..59faf037bc 100644
--- a/tensorflow/contrib/keras/api/keras/metrics/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/metrics/__init__.py
@@ -19,28 +19,28 @@ from __future__ import division
from __future__ import print_function
# Metrics functions.
-from tensorflow.contrib.keras.python.keras.metrics import binary_accuracy
-from tensorflow.contrib.keras.python.keras.metrics import binary_crossentropy
-from tensorflow.contrib.keras.python.keras.metrics import categorical_accuracy
-from tensorflow.contrib.keras.python.keras.metrics import categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.metrics import cosine_proximity
-from tensorflow.contrib.keras.python.keras.metrics import hinge
-from tensorflow.contrib.keras.python.keras.metrics import kullback_leibler_divergence
-from tensorflow.contrib.keras.python.keras.metrics import mean_absolute_error
-from tensorflow.contrib.keras.python.keras.metrics import mean_absolute_percentage_error
-from tensorflow.contrib.keras.python.keras.metrics import mean_squared_error
-from tensorflow.contrib.keras.python.keras.metrics import mean_squared_logarithmic_error
-from tensorflow.contrib.keras.python.keras.metrics import poisson
-from tensorflow.contrib.keras.python.keras.metrics import sparse_categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.metrics import sparse_top_k_categorical_accuracy
-from tensorflow.contrib.keras.python.keras.metrics import squared_hinge
-from tensorflow.contrib.keras.python.keras.metrics import top_k_categorical_accuracy
+from tensorflow.python.keras._impl.keras.metrics import binary_accuracy
+from tensorflow.python.keras._impl.keras.metrics import binary_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import categorical_accuracy
+from tensorflow.python.keras._impl.keras.metrics import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import cosine_proximity
+from tensorflow.python.keras._impl.keras.metrics import hinge
+from tensorflow.python.keras._impl.keras.metrics import kullback_leibler_divergence
+from tensorflow.python.keras._impl.keras.metrics import mean_absolute_error
+from tensorflow.python.keras._impl.keras.metrics import mean_absolute_percentage_error
+from tensorflow.python.keras._impl.keras.metrics import mean_squared_error
+from tensorflow.python.keras._impl.keras.metrics import mean_squared_logarithmic_error
+from tensorflow.python.keras._impl.keras.metrics import poisson
+from tensorflow.python.keras._impl.keras.metrics import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import sparse_top_k_categorical_accuracy
+from tensorflow.python.keras._impl.keras.metrics import squared_hinge
+from tensorflow.python.keras._impl.keras.metrics import top_k_categorical_accuracy
# Auxiliary utils.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.metrics import deserialize
-from tensorflow.contrib.keras.python.keras.metrics import serialize
-from tensorflow.contrib.keras.python.keras.metrics import get
+from tensorflow.python.keras._impl.keras.metrics import deserialize
+from tensorflow.python.keras._impl.keras.metrics import serialize
+from tensorflow.python.keras._impl.keras.metrics import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/models/__init__.py b/tensorflow/contrib/keras/api/keras/models/__init__.py
index 4e5b2a1ed0..2fb4ac0960 100644
--- a/tensorflow/contrib/keras/api/keras/models/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/models/__init__.py
@@ -18,13 +18,13 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.models import load_model
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.models import model_from_config
-from tensorflow.contrib.keras.python.keras.models import model_from_json
-from tensorflow.contrib.keras.python.keras.models import model_from_yaml
-from tensorflow.contrib.keras.python.keras.models import save_model
-from tensorflow.contrib.keras.python.keras.models import Sequential
+from tensorflow.python.keras._impl.keras.models import load_model
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.models import model_from_config
+from tensorflow.python.keras._impl.keras.models import model_from_json
+from tensorflow.python.keras._impl.keras.models import model_from_yaml
+from tensorflow.python.keras._impl.keras.models import save_model
+from tensorflow.python.keras._impl.keras.models import Sequential
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/optimizers/__init__.py b/tensorflow/contrib/keras/api/keras/optimizers/__init__.py
index b3531d7933..44f47bc47f 100644
--- a/tensorflow/contrib/keras/api/keras/optimizers/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/optimizers/__init__.py
@@ -19,20 +19,20 @@ from __future__ import division
from __future__ import print_function
# Optimizer classes.
-from tensorflow.contrib.keras.python.keras.optimizers import Adadelta
-from tensorflow.contrib.keras.python.keras.optimizers import Adagrad
-from tensorflow.contrib.keras.python.keras.optimizers import Adam
-from tensorflow.contrib.keras.python.keras.optimizers import Adamax
-from tensorflow.contrib.keras.python.keras.optimizers import Nadam
-from tensorflow.contrib.keras.python.keras.optimizers import Optimizer
-from tensorflow.contrib.keras.python.keras.optimizers import RMSprop
-from tensorflow.contrib.keras.python.keras.optimizers import SGD
+from tensorflow.python.keras._impl.keras.optimizers import Adadelta
+from tensorflow.python.keras._impl.keras.optimizers import Adagrad
+from tensorflow.python.keras._impl.keras.optimizers import Adam
+from tensorflow.python.keras._impl.keras.optimizers import Adamax
+from tensorflow.python.keras._impl.keras.optimizers import Nadam
+from tensorflow.python.keras._impl.keras.optimizers import Optimizer
+from tensorflow.python.keras._impl.keras.optimizers import RMSprop
+from tensorflow.python.keras._impl.keras.optimizers import SGD
# Auxiliary utils.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.optimizers import deserialize
-from tensorflow.contrib.keras.python.keras.optimizers import serialize
-from tensorflow.contrib.keras.python.keras.optimizers import get
+from tensorflow.python.keras._impl.keras.optimizers import deserialize
+from tensorflow.python.keras._impl.keras.optimizers import serialize
+from tensorflow.python.keras._impl.keras.optimizers import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py b/tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py
index 18ce1becc2..b96e767552 100644
--- a/tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py
@@ -18,20 +18,20 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.preprocessing.image import apply_transform
-from tensorflow.contrib.keras.python.keras.preprocessing.image import array_to_img
-from tensorflow.contrib.keras.python.keras.preprocessing.image import DirectoryIterator
-from tensorflow.contrib.keras.python.keras.preprocessing.image import flip_axis
-from tensorflow.contrib.keras.python.keras.preprocessing.image import ImageDataGenerator
-from tensorflow.contrib.keras.python.keras.preprocessing.image import img_to_array
-from tensorflow.contrib.keras.python.keras.preprocessing.image import Iterator
-from tensorflow.contrib.keras.python.keras.preprocessing.image import load_img
-from tensorflow.contrib.keras.python.keras.preprocessing.image import NumpyArrayIterator
-from tensorflow.contrib.keras.python.keras.preprocessing.image import random_channel_shift
-from tensorflow.contrib.keras.python.keras.preprocessing.image import random_rotation
-from tensorflow.contrib.keras.python.keras.preprocessing.image import random_shear
-from tensorflow.contrib.keras.python.keras.preprocessing.image import random_shift
-from tensorflow.contrib.keras.python.keras.preprocessing.image import random_zoom
+from tensorflow.python.keras._impl.keras.preprocessing.image import apply_transform
+from tensorflow.python.keras._impl.keras.preprocessing.image import array_to_img
+from tensorflow.python.keras._impl.keras.preprocessing.image import DirectoryIterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import flip_axis
+from tensorflow.python.keras._impl.keras.preprocessing.image import ImageDataGenerator
+from tensorflow.python.keras._impl.keras.preprocessing.image import img_to_array
+from tensorflow.python.keras._impl.keras.preprocessing.image import Iterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import load_img
+from tensorflow.python.keras._impl.keras.preprocessing.image import NumpyArrayIterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_channel_shift
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_rotation
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_shear
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_shift
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_zoom
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py b/tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py
index 2621e9bf53..112f6af5e5 100644
--- a/tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.preprocessing.sequence import make_sampling_table
-from tensorflow.contrib.keras.python.keras.preprocessing.sequence import pad_sequences
-from tensorflow.contrib.keras.python.keras.preprocessing.sequence import skipgrams
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import make_sampling_table
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import pad_sequences
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import skipgrams
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py b/tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py
index a6b68c3ba6..5bf1a2fb21 100644
--- a/tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.preprocessing.text import one_hot
-from tensorflow.contrib.keras.python.keras.preprocessing.text import text_to_word_sequence
-from tensorflow.contrib.keras.python.keras.preprocessing.text import Tokenizer
+from tensorflow.python.keras._impl.keras.preprocessing.text import one_hot
+from tensorflow.python.keras._impl.keras.preprocessing.text import text_to_word_sequence
+from tensorflow.python.keras._impl.keras.preprocessing.text import Tokenizer
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/regularizers/__init__.py b/tensorflow/contrib/keras/api/keras/regularizers/__init__.py
index a3b0062d5c..3e707ccab5 100644
--- a/tensorflow/contrib/keras/api/keras/regularizers/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/regularizers/__init__.py
@@ -19,19 +19,19 @@ from __future__ import division
from __future__ import print_function
# Regularizer functions / callable classes.
-from tensorflow.contrib.keras.python.keras.regularizers import L1L2
-from tensorflow.contrib.keras.python.keras.regularizers import Regularizer
+from tensorflow.python.keras._impl.keras.regularizers import L1L2
+from tensorflow.python.keras._impl.keras.regularizers import Regularizer
# Functional interface.
# pylint: disable=g-bad-import-order
-from tensorflow.contrib.keras.python.keras.regularizers import l1
-from tensorflow.contrib.keras.python.keras.regularizers import l2
-from tensorflow.contrib.keras.python.keras.regularizers import l1_l2
+from tensorflow.python.keras._impl.keras.regularizers import l1
+from tensorflow.python.keras._impl.keras.regularizers import l2
+from tensorflow.python.keras._impl.keras.regularizers import l1_l2
# Auxiliary utils.
-from tensorflow.contrib.keras.python.keras.regularizers import deserialize
-from tensorflow.contrib.keras.python.keras.regularizers import serialize
-from tensorflow.contrib.keras.python.keras.regularizers import get
+from tensorflow.python.keras._impl.keras.regularizers import deserialize
+from tensorflow.python.keras._impl.keras.regularizers import serialize
+from tensorflow.python.keras._impl.keras.regularizers import get
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/utils/__init__.py b/tensorflow/contrib/keras/api/keras/utils/__init__.py
index d6d70f79d5..a7c2179fe7 100644
--- a/tensorflow/contrib/keras/api/keras/utils/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/utils/__init__.py
@@ -18,21 +18,21 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.utils.data_utils import GeneratorEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
-from tensorflow.contrib.keras.python.keras.utils.data_utils import Sequence
-from tensorflow.contrib.keras.python.keras.utils.data_utils import SequenceEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import custom_object_scope
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import CustomObjectScope
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import get_custom_objects
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import Progbar
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.io_utils import HDF5Matrix
-from tensorflow.contrib.keras.python.keras.utils.layer_utils import convert_all_kernels_in_model
-from tensorflow.contrib.keras.python.keras.utils.np_utils import normalize
-from tensorflow.contrib.keras.python.keras.utils.np_utils import to_categorical
-from tensorflow.contrib.keras.python.keras.utils.vis_utils import plot_model
+from tensorflow.python.keras._impl.keras.utils.data_utils import GeneratorEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import Sequence
+from tensorflow.python.keras._impl.keras.utils.data_utils import SequenceEnqueuer
+from tensorflow.python.keras._impl.keras.utils.generic_utils import custom_object_scope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import CustomObjectScope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import get_custom_objects
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.io_utils import HDF5Matrix
+from tensorflow.python.keras._impl.keras.utils.layer_utils import convert_all_kernels_in_model
+from tensorflow.python.keras._impl.keras.utils.np_utils import normalize
+from tensorflow.python.keras._impl.keras.utils.np_utils import to_categorical
+from tensorflow.python.keras._impl.keras.utils.vis_utils import plot_model
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py b/tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py
index ba1d28c5c6..a46f859273 100644
--- a/tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py
+++ b/tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.wrappers.scikit_learn import KerasClassifier
-from tensorflow.contrib.keras.python.keras.wrappers.scikit_learn import KerasRegressor
+from tensorflow.python.keras._impl.keras.wrappers.scikit_learn import KerasClassifier
+from tensorflow.python.keras._impl.keras.wrappers.scikit_learn import KerasRegressor
del absolute_import
del division
diff --git a/tensorflow/contrib/keras/python/keras/__init__.py b/tensorflow/contrib/keras/python/keras/__init__.py
deleted file mode 100644
index a3edb29170..0000000000
--- a/tensorflow/contrib/keras/python/keras/__init__.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""The Keras API.
-"""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import applications
-from tensorflow.contrib.keras.python.keras import backend
-from tensorflow.contrib.keras.python.keras import callbacks
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import datasets
-from tensorflow.contrib.keras.python.keras import engine
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import layers
-from tensorflow.contrib.keras.python.keras import losses
-from tensorflow.contrib.keras.python.keras import metrics
-from tensorflow.contrib.keras.python.keras import models
-from tensorflow.contrib.keras.python.keras import optimizers
-from tensorflow.contrib.keras.python.keras import preprocessing
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras import utils
-from tensorflow.contrib.keras.python.keras import wrappers
-from tensorflow.contrib.keras.python.keras.layers import Input
-
-__version__ = '2.0.8-tf'
diff --git a/tensorflow/contrib/keras/python/keras/layers/__init__.py b/tensorflow/contrib/keras/python/keras/layers/__init__.py
deleted file mode 100644
index 9a428f3114..0000000000
--- a/tensorflow/contrib/keras/python/keras/layers/__init__.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Keras layers module.
-"""
-# pylint: disable=wildcard-import
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from tensorflow.contrib.keras.python.keras.engine import Input
-from tensorflow.contrib.keras.python.keras.engine import InputLayer
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import *
-from tensorflow.contrib.keras.python.keras.layers.convolutional import *
-from tensorflow.contrib.keras.python.keras.layers.convolutional_recurrent import *
-from tensorflow.contrib.keras.python.keras.layers.core import *
-from tensorflow.contrib.keras.python.keras.layers.embeddings import *
-from tensorflow.contrib.keras.python.keras.layers.local import *
-from tensorflow.contrib.keras.python.keras.layers.merge import *
-from tensorflow.contrib.keras.python.keras.layers.noise import *
-from tensorflow.contrib.keras.python.keras.layers.normalization import *
-from tensorflow.contrib.keras.python.keras.layers.pooling import *
-from tensorflow.contrib.keras.python.keras.layers.recurrent import *
-from tensorflow.contrib.keras.python.keras.layers.serialization import deserialize
-from tensorflow.contrib.keras.python.keras.layers.serialization import serialize
-from tensorflow.contrib.keras.python.keras.layers.wrappers import *
-
diff --git a/tensorflow/contrib/keras/python/keras/utils/__init__.py b/tensorflow/contrib/keras/python/keras/utils/__init__.py
deleted file mode 100644
index 3b197653f3..0000000000
--- a/tensorflow/contrib/keras/python/keras/utils/__init__.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Keras utilities.
-"""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
-from tensorflow.contrib.keras.python.keras.utils import data_utils
-from tensorflow.contrib.keras.python.keras.utils import generic_utils
-from tensorflow.contrib.keras.python.keras.utils import io_utils
-from tensorflow.contrib.keras.python.keras.utils import np_utils
-from tensorflow.contrib.keras.python.keras.utils.data_utils import GeneratorEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
-from tensorflow.contrib.keras.python.keras.utils.data_utils import OrderedEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.data_utils import Sequence
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import custom_object_scope
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import CustomObjectScope
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import get_custom_objects
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import Progbar
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.io_utils import HDF5Matrix
-from tensorflow.contrib.keras.python.keras.utils.layer_utils import convert_all_kernels_in_model
-from tensorflow.contrib.keras.python.keras.utils.np_utils import normalize
-from tensorflow.contrib.keras.python.keras.utils.np_utils import to_categorical
-from tensorflow.contrib.keras.python.keras.utils.vis_utils import plot_model
-
-
-# Globally-importable utils.
diff --git a/tensorflow/contrib/keras/python/keras/utils/io_utils_test.py b/tensorflow/contrib/keras/python/keras/utils/io_utils_test.py
deleted file mode 100644
index f6820ee039..0000000000
--- a/tensorflow/contrib/keras/python/keras/utils/io_utils_test.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Tests for io_utils."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import os
-import shutil
-
-import numpy as np
-
-from tensorflow.contrib.keras.python import keras
-from tensorflow.python.platform import test
-
-try:
- import h5py # pylint:disable=g-import-not-at-top
-except ImportError:
- h5py = None
-
-
-def create_dataset(h5_path='test.h5'):
- x = np.random.randn(200, 10).astype('float32')
- y = np.random.randint(0, 2, size=(200, 1))
- f = h5py.File(h5_path, 'w')
- # Creating dataset to store features
- x_dset = f.create_dataset('my_data', (200, 10), dtype='f')
- x_dset[:] = x
- # Creating dataset to store labels
- y_dset = f.create_dataset('my_labels', (200, 1), dtype='i')
- y_dset[:] = y
- f.close()
-
-
-class TestIOUtils(test.TestCase):
-
- def test_HDF5Matrix(self):
- if h5py is None:
- return
-
- temp_dir = self.get_temp_dir()
- self.addCleanup(shutil.rmtree, temp_dir)
-
- h5_path = os.path.join(temp_dir, 'test.h5')
- create_dataset(h5_path)
-
- with self.test_session():
- # Instantiating HDF5Matrix for the training set,
- # which is a slice of the first 150 elements
- x_train = keras.utils.io_utils.HDF5Matrix(
- h5_path, 'my_data', start=0, end=150)
- y_train = keras.utils.io_utils.HDF5Matrix(
- h5_path, 'my_labels', start=0, end=150)
-
- # Likewise for the test set
- x_test = keras.utils.io_utils.HDF5Matrix(
- h5_path, 'my_data', start=150, end=200)
- y_test = keras.utils.io_utils.HDF5Matrix(
- h5_path, 'my_labels', start=150, end=200)
-
- # HDF5Matrix behaves more or less like Numpy matrices
- # with regard to indexing
- self.assertEqual(y_train.shape, (150, 1))
- # But they don't support negative indices, so don't try print(x_train[-1])
-
- self.assertEqual(y_train.dtype, np.dtype('i'))
- self.assertEqual(y_train.ndim, 2)
- self.assertEqual(y_train.size, 150)
-
- model = keras.models.Sequential()
- model.add(keras.layers.Dense(64, input_shape=(10,), activation='relu'))
- model.add(keras.layers.Dense(1, activation='sigmoid'))
- model.compile(loss='binary_crossentropy', optimizer='sgd')
-
- # Note: you have to use shuffle='batch' or False with HDF5Matrix
- model.fit(x_train, y_train, batch_size=32, shuffle='batch', verbose=False)
- # test that evaluation and prediction
- # don't crash and return reasonable results
- out_pred = model.predict(x_test, batch_size=32, verbose=False)
- out_eval = model.evaluate(x_test, y_test, batch_size=32, verbose=False)
-
- self.assertEqual(out_pred.shape, (50, 1))
- self.assertEqual(out_eval.shape, ())
- self.assertGreater(out_eval, 0)
-
-
-if __name__ == '__main__':
- test.main()
diff --git a/tensorflow/contrib/learn/BUILD b/tensorflow/contrib/learn/BUILD
index db3be9a991..d35b5556fc 100644
--- a/tensorflow/contrib/learn/BUILD
+++ b/tensorflow/contrib/learn/BUILD
@@ -412,33 +412,6 @@ py_test(
)
py_test(
- name = "dnn_linear_combined_benchmark_test",
- size = "medium",
- srcs = ["python/learn/estimators/dnn_linear_combined_benchmark_test.py"],
- srcs_version = "PY2AND3",
- tags = [
- "guitar",
- "local",
- "manual",
- "notap",
- ],
- visibility = [
- "//learning/brain/google/guitar:__subpackages__",
- "//tensorflow:__subpackages__",
- ],
- deps = [
- ":learn",
- "//tensorflow/contrib/layers:layers_py",
- "//tensorflow/contrib/learn/python/learn/datasets",
- "//tensorflow/python:array_ops",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:framework_for_generated_wrappers",
- "//tensorflow/python:sparse_tensor",
- "//tensorflow/python:training",
- ],
-)
-
-py_test(
name = "kmeans_test",
size = "medium",
srcs = ["python/learn/estimators/kmeans_test.py"],
@@ -460,32 +433,6 @@ py_test(
)
py_test(
- name = "dnn_benchmark_test",
- size = "medium",
- srcs = ["python/learn/estimators/dnn_benchmark_test.py"],
- srcs_version = "PY2AND3",
- tags = [
- "guitar",
- "local",
- "manual",
- "notap",
- ],
- visibility = [
- "//learning/brain/google/guitar:__subpackages__",
- "//tensorflow:__subpackages__",
- ],
- deps = [
- ":learn",
- "//tensorflow/contrib/layers:layers_py",
- "//tensorflow/python:client_testlib",
- "//tensorflow/python:framework_for_generated_wrappers",
- "//tensorflow/python:sparse_tensor",
- "//tensorflow/python:training",
- "//third_party/py/numpy",
- ],
-)
-
-py_test(
name = "dynamic_rnn_estimator_test",
size = "medium",
srcs = ["python/learn/estimators/dynamic_rnn_estimator_test.py"],
diff --git a/tensorflow/contrib/learn/python/learn/estimators/dnn_benchmark_test.py b/tensorflow/contrib/learn/python/learn/estimators/dnn_benchmark_test.py
deleted file mode 100644
index 86b3eee6ad..0000000000
--- a/tensorflow/contrib/learn/python/learn/estimators/dnn_benchmark_test.py
+++ /dev/null
@@ -1,257 +0,0 @@
-# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Regression test for DNNEstimator."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import functools
-import numpy as np
-from tensorflow.contrib.layers.python.layers import feature_column
-from tensorflow.contrib.learn.python.learn.estimators import dnn
-from tensorflow.contrib.learn.python.learn.estimators import estimator_test_utils
-from tensorflow.contrib.learn.python.learn.estimators import run_config
-from tensorflow.contrib.learn.python.learn.estimators import test_data
-from tensorflow.python.framework import constant_op
-from tensorflow.python.framework import dtypes
-from tensorflow.python.framework import sparse_tensor
-from tensorflow.python.platform import test
-from tensorflow.python.training import input as input_lib
-
-
-_METRIC_KEYS = {
- 'accuracy',
- 'auc',
- 'accuracy/threshold_0.500000_mean',
- 'loss',
- 'precision/positive_threshold_0.500000_mean',
- 'recall/positive_threshold_0.500000_mean',
-}
-
-
-class DNNClassifierBenchmark(test.Benchmark):
-
- def _report_metrics(self, metrics):
- self.report_benchmark(
- iters=metrics['global_step'],
- extras={k: v
- for k, v in metrics.items() if k in _METRIC_KEYS})
-
- def _report_predictions(self,
- benchmark_name_override,
- classifier,
- input_fn,
- iters,
- n_examples,
- n_classes,
- expected_probabilities=None,
- expected_classes=None):
- probabilities = classifier.predict_proba(
- input_fn=input_fn, as_iterable=False)
- if expected_probabilities is not None:
- np.testing.assert_allclose(
- expected_probabilities, tuple(probabilities), atol=0.2)
-
- classes = classifier.predict(input_fn=input_fn, as_iterable=False)
- if expected_classes is not None:
- np.testing.assert_array_equal(expected_classes, classes)
-
- self.report_benchmark(
- iters=iters,
- extras={
- 'inference.example%d_class%d_prob' % (i, j): probabilities[i][j]
- for j in range(n_classes) for i in range(n_examples)
- }.update({
- 'inference.example%d_class' % i: classes[i]
- for i in range(n_examples)
- }),
- name=benchmark_name_override)
-
- def benchmarkLogisticMatrixData(self):
- classifier = dnn.DNNClassifier(
- feature_columns=(feature_column.real_valued_column(
- 'feature', dimension=4),),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
- input_fn = test_data.iris_input_logistic_fn
- steps = 400
- metrics = classifier.fit(input_fn=input_fn, steps=steps).evaluate(
- input_fn=input_fn, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0, 'accuracy', metrics)
- estimator_test_utils.assert_in_range(0.0, 0.3, 'loss', metrics)
-
- self._report_metrics(metrics)
-
- def benchmarkLogisticMatrixDataLabels1D(self):
-
- def _input_fn():
- iris = test_data.prepare_iris_data_for_logistic_regression()
- return {
- 'feature': constant_op.constant(
- iris.data, dtype=dtypes.float32)
- }, constant_op.constant(
- iris.target, shape=(100,), dtype=dtypes.int32)
-
- classifier = dnn.DNNClassifier(
- feature_columns=(feature_column.real_valued_column(
- 'feature', dimension=4),),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
- steps = 1000
- metrics = classifier.fit(input_fn=_input_fn, steps=steps).evaluate(
- input_fn=_input_fn, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0, 'accuracy', metrics)
-
- self._report_metrics(metrics)
-
- def benchmarkLogisticNpMatrixData(self):
- classifier = dnn.DNNClassifier(
- feature_columns=(feature_column.real_valued_column(
- '', dimension=4),),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
- iris = test_data.prepare_iris_data_for_logistic_regression()
- train_x = iris.data
- train_y = iris.target
- steps = 100
- metrics = classifier.fit(x=train_x, y=train_y, steps=steps).evaluate(
- x=train_x, y=train_y, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.8, 1.0, 'accuracy', metrics)
-
- self._report_metrics(metrics)
-
- def benchmarkLogisticTensorData(self):
-
- def _input_fn(num_epochs=None):
- features = {
- 'age':
- input_lib.limit_epochs(
- constant_op.constant(((.8,), (0.2,), (.1,))),
- num_epochs=num_epochs),
- 'language':
- sparse_tensor.SparseTensor(
- values=input_lib.limit_epochs(
- ('en', 'fr', 'zh'), num_epochs=num_epochs),
- indices=((0, 0), (0, 1), (2, 0)),
- dense_shape=(3, 2))
- }
- return features, constant_op.constant(
- ((1,), (0,), (0,)), dtype=dtypes.int32)
-
- lang_column = feature_column.sparse_column_with_hash_bucket(
- 'language', hash_bucket_size=20)
- classifier = dnn.DNNClassifier(
- feature_columns=(feature_column.embedding_column(
- lang_column, dimension=1),
- feature_column.real_valued_column('age')),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
- steps = 100
- metrics = classifier.fit(input_fn=_input_fn, steps=steps).evaluate(
- input_fn=_input_fn, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0, 'accuracy', metrics)
- estimator_test_utils.assert_in_range(0.0, 0.3, 'loss', metrics)
-
- self._report_metrics(metrics)
- self._report_predictions(
- classifier=classifier,
- input_fn=functools.partial(_input_fn, num_epochs=1),
- iters=metrics['global_step'],
- n_examples=3,
- n_classes=2,
- expected_classes=(1, 0, 0),
- benchmark_name_override=(
- 'DNNClassifierBenchmark.benchmarkLogisticTensorData_predictions'))
-
- def benchmarkLogisticFloatLabel(self):
-
- def _input_fn(num_epochs=None):
- features = {
- 'age':
- input_lib.limit_epochs(
- constant_op.constant(((50,), (20,), (10,))),
- num_epochs=num_epochs),
- 'language':
- sparse_tensor.SparseTensor(
- values=input_lib.limit_epochs(
- ('en', 'fr', 'zh'), num_epochs=num_epochs),
- indices=((0, 0), (0, 1), (2, 0)),
- dense_shape=(3, 2))
- }
- return features, constant_op.constant(
- ((0.8,), (0.,), (0.2,)), dtype=dtypes.float32)
-
- lang_column = feature_column.sparse_column_with_hash_bucket(
- 'language', hash_bucket_size=20)
- n_classes = 2
- classifier = dnn.DNNClassifier(
- n_classes=n_classes,
- feature_columns=(feature_column.embedding_column(
- lang_column, dimension=1),
- feature_column.real_valued_column('age')),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
- steps = 1000
- metrics = classifier.fit(input_fn=_input_fn, steps=steps).evaluate(
- input_fn=_input_fn, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
-
- # Prediction probabilities mirror the labels column, which proves that the
- # classifier learns from float input.
- self._report_metrics(metrics)
- self._report_predictions(
- classifier=classifier,
- input_fn=functools.partial(_input_fn, num_epochs=1),
- iters=metrics['global_step'],
- n_examples=3,
- n_classes=n_classes,
- expected_probabilities=((0.2, 0.8), (1., 0.), (0.8, 0.2)),
- expected_classes=(1, 0, 0),
- benchmark_name_override=(
- 'DNNClassifierBenchmark.benchmarkLogisticFloatLabel_predictions'))
-
- def benchmarkMultiClassMatrixData(self):
- """Tests multi-class classification using matrix data as input."""
- classifier = dnn.DNNClassifier(
- n_classes=3,
- feature_columns=(feature_column.real_valued_column(
- 'feature', dimension=4),),
- hidden_units=(3, 3),
- config=run_config.RunConfig(tf_random_seed=1))
-
- input_fn = test_data.iris_input_multiclass_fn
- steps = 500
- metrics = classifier.fit(input_fn=input_fn, steps=steps).evaluate(
- input_fn=input_fn, steps=1)
- estimator_test_utils.assert_in_range(steps, steps + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0, 'accuracy', metrics)
- estimator_test_utils.assert_in_range(0.0, 0.4, 'loss', metrics)
-
- self._report_metrics(metrics)
-
-
-if __name__ == '__main__':
- test.main()
diff --git a/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined_benchmark_test.py b/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined_benchmark_test.py
deleted file mode 100644
index 98b7c7e95c..0000000000
--- a/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined_benchmark_test.py
+++ /dev/null
@@ -1,224 +0,0 @@
-# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Regression test for DNNLinearCombinedEstimator."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import json
-import tempfile
-from tensorflow.contrib.layers.python.layers import feature_column
-from tensorflow.contrib.learn.python.learn.datasets import base
-from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
-from tensorflow.contrib.learn.python.learn.estimators import estimator_test_utils
-from tensorflow.contrib.learn.python.learn.estimators import run_config
-from tensorflow.contrib.learn.python.learn.estimators import test_data
-from tensorflow.python.framework import constant_op
-from tensorflow.python.framework import dtypes
-from tensorflow.python.framework import sparse_tensor
-from tensorflow.python.ops import array_ops
-from tensorflow.python.platform import test
-from tensorflow.python.training import adagrad
-from tensorflow.python.training import ftrl
-from tensorflow.python.training import server_lib
-
-
-# Desired training steps, reported in benchmark. Actual steps might be slightly
-# more than this since supervisor training runs for a non-deterministic number of
-# steps.
-_ITERS = 100
-
-_METRIC_KEYS = {
- 'accuracy',
- 'auc',
- 'accuracy/threshold_0.500000_mean',
- 'loss',
- 'precision/positive_threshold_0.500000_mean',
- 'recall/positive_threshold_0.500000_mean',
-}
-
-
-class DNNLinearCombinedClassifierBenchmark(test.Benchmark):
-
- def _assertSingleClassMetrics(self, metrics):
- estimator_test_utils.assert_in_range(0.9, 1.0, 'auc', metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0,
- 'accuracy/threshold_0.500000_mean',
- metrics)
- estimator_test_utils.assert_in_range(
- 0.9, 1.0, 'precision/positive_threshold_0.500000_mean', metrics)
- estimator_test_utils.assert_in_range(
- 0.9, 1.0, 'recall/positive_threshold_0.500000_mean', metrics)
- self._assertCommonMetrics(metrics)
-
- def _assertCommonMetrics(self, metrics):
- estimator_test_utils.assert_in_range(_ITERS, _ITERS + 5, 'global_step',
- metrics)
- estimator_test_utils.assert_in_range(0.9, 1.0, 'accuracy', metrics)
- estimator_test_utils.assert_in_range(0.0, 0.2, 'loss', metrics)
- self.report_benchmark(
- iters=metrics['global_step'],
- extras={k: v
- for k, v in metrics.items() if k in _METRIC_KEYS})
-
- def benchmarkMatrixData(self):
- iris = test_data.prepare_iris_data_for_logistic_regression()
- cont_feature = feature_column.real_valued_column('feature', dimension=4)
- bucketized_feature = feature_column.bucketized_column(
- cont_feature, test_data.get_quantile_based_buckets(iris.data, 10))
-
- classifier = dnn_linear_combined.DNNLinearCombinedClassifier(
- model_dir=tempfile.mkdtemp(),
- linear_feature_columns=(bucketized_feature,),
- dnn_feature_columns=(cont_feature,),
- dnn_hidden_units=(3, 3))
-
- input_fn = test_data.iris_input_logistic_fn
- metrics = classifier.fit(input_fn=input_fn, steps=_ITERS).evaluate(
- input_fn=input_fn, steps=100)
- self._assertSingleClassMetrics(metrics)
-
- def benchmarkTensorData(self):
-
- def _input_fn():
- iris = test_data.prepare_iris_data_for_logistic_regression()
- features = {}
- for i in range(4):
- # The following shows how to provide the Tensor data for
- # RealValuedColumns.
- features.update({
- str(i):
- array_ops.reshape(
- constant_op.constant(
- iris.data[:, i], dtype=dtypes.float32), (-1, 1))
- })
- # The following shows how to provide the SparseTensor data for
- # a SparseColumn.
- features['dummy_sparse_column'] = sparse_tensor.SparseTensor(
- values=('en', 'fr', 'zh'),
- indices=((0, 0), (0, 1), (60, 0)),
- dense_shape=(len(iris.target), 2))
- labels = array_ops.reshape(
- constant_op.constant(
- iris.target, dtype=dtypes.int32), (-1, 1))
- return features, labels
-
- iris = test_data.prepare_iris_data_for_logistic_regression()
- cont_features = [
- feature_column.real_valued_column(str(i)) for i in range(4)
- ]
- linear_features = [
- feature_column.bucketized_column(
- cont_features[i],
- test_data.get_quantile_based_buckets(iris.data[:, i], 10))
- for i in range(4)
- ]
- linear_features.append(
- feature_column.sparse_column_with_hash_bucket(
- 'dummy_sparse_column', hash_bucket_size=100))
-
- classifier = dnn_linear_combined.DNNLinearCombinedClassifier(
- model_dir=tempfile.mkdtemp(),
- linear_feature_columns=linear_features,
- dnn_feature_columns=cont_features,
- dnn_hidden_units=(3, 3))
-
- metrics = classifier.fit(input_fn=_input_fn, steps=_ITERS).evaluate(
- input_fn=_input_fn, steps=100)
- self._assertSingleClassMetrics(metrics)
-
- def benchmarkCustomOptimizer(self):
- iris = test_data.prepare_iris_data_for_logistic_regression()
- cont_feature = feature_column.real_valued_column('feature', dimension=4)
- bucketized_feature = feature_column.bucketized_column(
- cont_feature, test_data.get_quantile_based_buckets(iris.data, 10))
-
- classifier = dnn_linear_combined.DNNLinearCombinedClassifier(
- model_dir=tempfile.mkdtemp(),
- linear_feature_columns=(bucketized_feature,),
- linear_optimizer=ftrl.FtrlOptimizer(learning_rate=0.1),
- dnn_feature_columns=(cont_feature,),
- dnn_hidden_units=(3, 3),
- dnn_optimizer=adagrad.AdagradOptimizer(learning_rate=0.1))
-
- input_fn = test_data.iris_input_logistic_fn
- metrics = classifier.fit(input_fn=input_fn, steps=_ITERS).evaluate(
- input_fn=input_fn, steps=100)
- self._assertSingleClassMetrics(metrics)
-
- def benchmarkMultiClass(self):
- iris = base.load_iris()
- cont_feature = feature_column.real_valued_column('feature', dimension=4)
- bucketized_feature = feature_column.bucketized_column(
- cont_feature, test_data.get_quantile_based_buckets(iris.data, 10))
-
- classifier = dnn_linear_combined.DNNLinearCombinedClassifier(
- n_classes=3,
- linear_feature_columns=(bucketized_feature,),
- dnn_feature_columns=(cont_feature,),
- dnn_hidden_units=(3, 3))
-
- input_fn = test_data.iris_input_multiclass_fn
- metrics = classifier.fit(input_fn=input_fn, steps=_ITERS).evaluate(
- input_fn=input_fn, steps=100)
- self._assertCommonMetrics(metrics)
-
- def benchmarkPartitionedVariables(self):
-
- def _input_fn():
- features = {
- 'language':
- sparse_tensor.SparseTensor(
- values=('en', 'fr', 'zh'),
- indices=((0, 0), (0, 1), (2, 0)),
- dense_shape=(3, 2))
- }
- labels = constant_op.constant(((1,), (0,), (0,)))
- return features, labels
-
- # The given hash_bucket_size results in variables larger than the
- # default min_slice_size attribute, so the variables are partitioned.
- sparse_feature = feature_column.sparse_column_with_hash_bucket(
- 'language', hash_bucket_size=2e7)
- embedding_feature = feature_column.embedding_column(
- sparse_feature, dimension=1)
-
- tf_config = {
- 'cluster': {
- run_config.TaskType.PS: ['fake_ps_0', 'fake_ps_1']
- }
- }
- with test.mock.patch.dict('os.environ',
- {'TF_CONFIG': json.dumps(tf_config)}):
- config = run_config.RunConfig()
- # Because we did not start a distributed cluster, we need to pass an
- # empty ClusterSpec, otherwise the device_setter will look for
- # distributed jobs, such as "/job:ps" which are not present.
- config._cluster_spec = server_lib.ClusterSpec({})
-
- classifier = dnn_linear_combined.DNNLinearCombinedClassifier(
- linear_feature_columns=(sparse_feature,),
- dnn_feature_columns=(embedding_feature,),
- dnn_hidden_units=(3, 3),
- config=config)
-
- metrics = classifier.fit(input_fn=_input_fn, steps=_ITERS).evaluate(
- input_fn=_input_fn, steps=100)
- self._assertCommonMetrics(metrics)
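The partitioned-variables benchmark above configures a fake cluster by patching `TF_CONFIG`, which `RunConfig` reads as a JSON string from the environment. A small sketch of that mocking pattern, outside any TensorFlow dependency:

```python
import json
import os
from unittest import mock

# TF_CONFIG is a JSON-encoded cluster description; patch.dict scopes the fake
# value to the with-block, restoring the environment on exit.
tf_config = {"cluster": {"ps": ["fake_ps_0", "fake_ps_1"]}}
with mock.patch.dict(os.environ, {"TF_CONFIG": json.dumps(tf_config)}):
    # Code under test would read the variable here, as RunConfig() does.
    parsed = json.loads(os.environ["TF_CONFIG"])
# Outside the block, os.environ no longer carries the injected key.
```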
-
-
-if __name__ == '__main__':
- test.main()
diff --git a/tensorflow/contrib/learn/python/learn/estimators/head.py b/tensorflow/contrib/learn/python/learn/estimators/head.py
index 225d879678..861db1f89e 100644
--- a/tensorflow/contrib/learn/python/learn/estimators/head.py
+++ b/tensorflow/contrib/learn/python/learn/estimators/head.py
@@ -1070,8 +1070,8 @@ class _MultiClassHead(_SingleHead):
labels_tensor = _to_labels_tensor(labels, self._label_name)
_check_no_sparse_tensor(labels_tensor)
if self._label_keys:
- table = lookup_ops.index_table_from_tensor(self._label_keys,
- name="label_id_lookup")
+ table = lookup_ops.index_table_from_tensor(
+ self._label_keys, name="label_id_lookup")
return {
"labels": labels_tensor,
"label_ids": table.lookup(labels_tensor),
diff --git a/tensorflow/contrib/learn/python/learn/estimators/rnn_common.py b/tensorflow/contrib/learn/python/learn/estimators/rnn_common.py
index 0f09b111bd..896b668d4e 100644
--- a/tensorflow/contrib/learn/python/learn/estimators/rnn_common.py
+++ b/tensorflow/contrib/learn/python/learn/estimators/rnn_common.py
@@ -178,7 +178,7 @@ def select_last_activations(activations, sequence_lengths):
"""Selects the nth set of activations for each n in `sequence_length`.
Returns a `Tensor` of shape `[batch_size, k]`. If `sequence_length` is not
- `None`, then `output[i, :] = activations[i, sequence_length[i], :]`. If
+ `None`, then `output[i, :] = activations[i, sequence_length[i] - 1, :]`. If
`sequence_length` is `None`, then `output[i, :] = activations[i, -1, :]`.
Args:
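The docstring correction above (indexing at `sequence_length[i] - 1` rather than `sequence_length[i]`) can be illustrated with a small NumPy sketch of the documented behavior; this is an analogue of the op, not the TF implementation:

```python
import numpy as np

def select_last_activations(activations, sequence_lengths=None):
    # activations has shape [batch_size, padded_length, k]. The last valid
    # step for example i is index sequence_lengths[i] - 1 (lengths are
    # 1-based counts, indices are 0-based).
    if sequence_lengths is None:
        return activations[:, -1, :]
    batch = np.arange(activations.shape[0])
    return activations[batch, np.asarray(sequence_lengths) - 1, :]

acts = np.array([[[1, 2], [3, 4], [5, 6]],
                 [[7, 8], [9, 10], [11, 12]]])
last = select_last_activations(acts, [2, 3])
```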
diff --git a/tensorflow/contrib/resampler/python/ops/resampler_ops_test.py b/tensorflow/contrib/resampler/python/ops/resampler_ops_test.py
index 9aa1e05628..6253f96315 100644
--- a/tensorflow/contrib/resampler/python/ops/resampler_ops_test.py
+++ b/tensorflow/contrib/resampler/python/ops/resampler_ops_test.py
@@ -163,7 +163,7 @@ class ResamplerTest(test.TestCase):
data_channels = 3
warp_width = 2
warp_height = 6
- batch_size = 10
+ batch_size = 3
warp = _make_warp(batch_size, warp_height, warp_width, dtype.as_numpy_dtype)
data_shape = (batch_size, data_height, data_width, data_channels)
diff --git a/tensorflow/contrib/session_bundle/BUILD b/tensorflow/contrib/session_bundle/BUILD
index 596c4f351c..ebb7a21856 100644
--- a/tensorflow/contrib/session_bundle/BUILD
+++ b/tensorflow/contrib/session_bundle/BUILD
@@ -234,7 +234,7 @@ cc_library(
cc_test(
name = "session_bundle_test",
- size = "small",
+ size = "medium",
srcs = ["session_bundle_test.cc"],
data = [":session_bundle_half_plus_two"],
# Link in all registered kernels.
diff --git a/tensorflow/contrib/session_bundle/session_bundle_test.cc b/tensorflow/contrib/session_bundle/session_bundle_test.cc
index eb36d79e0f..6d997bac9e 100644
--- a/tensorflow/contrib/session_bundle/session_bundle_test.cc
+++ b/tensorflow/contrib/session_bundle/session_bundle_test.cc
@@ -171,7 +171,8 @@ void BasicTest(const string& export_path) {
// SessionBundles. Concurrent with adding this test, we had a leak where the
// TensorFlow Session was not being closed, which leaked memory.
// TODO(b/31711147): Increase the SessionBundle ResourceLeakTest iterations and
-// move outside of the test suite.
+// move outside of the test suite; decrease test size back to small at the same
+// time.
TEST(LoadSessionBundleFromPath, ResourceLeakTest) {
const string export_path = test_util::TestSrcDirPath(kExportPath);
for (int i = 0; i < 100; i++) {
diff --git a/tensorflow/contrib/summary/BUILD b/tensorflow/contrib/summary/BUILD
index bc30502264..527deab86a 100644
--- a/tensorflow/contrib/summary/BUILD
+++ b/tensorflow/contrib/summary/BUILD
@@ -22,10 +22,12 @@ py_test(
srcs_version = "PY2AND3",
deps = [
":summary_ops",
+ "//tensorflow/core:protos_all_py",
"//tensorflow/python:framework_test_lib",
+ "//tensorflow/python:lib",
"//tensorflow/python:platform",
"//tensorflow/python:training",
- "//tensorflow/python/eager:context",
+ "//tensorflow/python/eager:function",
"//tensorflow/python/eager:test",
],
)
@@ -38,6 +40,7 @@ py_library(
deps = [
":gen_summary_ops",
"//tensorflow/python:constant_op",
+ "//tensorflow/python:control_flow_ops",
"//tensorflow/python:dtypes",
"//tensorflow/python:framework_ops",
"//tensorflow/python:summary_op_util",
diff --git a/tensorflow/contrib/summary/summary_ops.py b/tensorflow/contrib/summary/summary_ops.py
index 05e627adf1..ceaf83b70a 100644
--- a/tensorflow/contrib/summary/summary_ops.py
+++ b/tensorflow/contrib/summary/summary_ops.py
@@ -68,7 +68,8 @@ def never_record_summaries():
def create_summary_file_writer(logdir,
max_queue=None,
flush_secs=None,
- filename_suffix=None):
+ filename_suffix=None,
+ name=None):
"""Creates a summary file writer in the current context."""
if max_queue is None:
max_queue = constant_op.constant(10)
@@ -76,7 +77,7 @@ def create_summary_file_writer(logdir,
flush_secs = constant_op.constant(120)
if filename_suffix is None:
filename_suffix = constant_op.constant("")
- resource = gen_summary_ops.summary_writer()
+ resource = gen_summary_ops.summary_writer(shared_name=name)
gen_summary_ops.create_summary_file_writer(resource, logdir, max_queue,
flush_secs, filename_suffix)
context.context().summary_writer_resource = resource
@@ -84,76 +85,87 @@ def create_summary_file_writer(logdir,
def _nothing():
"""Convenient else branch for when summaries do not record."""
- return
+ return False
-def generic(name, tensor, metadata, family=None):
- """Writes a tensor summary if possible."""
+def summary_writer_function(name, tensor, function, family=None):
+ """Helper function to write summaries.
+ Args:
+ name: name of the summary
+ tensor: main tensor to form the summary
+ function: function taking a tag and a scope which writes the summary
+ family: optional, the summary's family
+
+ Returns:
+ The result of writing the summary.
+ """
def record():
with summary_op_util.summary_scope(
name, family, values=[tensor]) as (tag, scope):
- gen_summary_ops.write_summary(context.context().summary_writer_resource,
- training_util.get_global_step(), tensor,
- tag, metadata, name=scope)
+ function(tag, scope)
+ return True
+
return control_flow_ops.cond(should_record_summaries(), record, _nothing)
+def generic(name, tensor, metadata, family=None):
+ """Writes a tensor summary if possible."""
+
+ def function(tag, scope):
+ gen_summary_ops.write_summary(context.context().summary_writer_resource,
+ training_util.get_global_step(), tensor,
+ tag, metadata, name=scope)
+ return summary_writer_function(name, tensor, function, family=family)
+
+
def scalar(name, tensor, family=None):
"""Writes a scalar summary if possible."""
- def record():
- with summary_op_util.summary_scope(
- name, family, values=[tensor]) as (tag, scope):
- gen_summary_ops.write_scalar_summary(
- context.context().summary_writer_resource,
- training_util.get_global_step(), tag, tensor, name=scope)
+ def function(tag, scope):
+ gen_summary_ops.write_scalar_summary(
+ context.context().summary_writer_resource,
+ training_util.get_global_step(), tag, tensor, name=scope)
- return control_flow_ops.cond(should_record_summaries(), record, _nothing)
+ return summary_writer_function(name, tensor, function, family=family)
def histogram(name, tensor, family=None):
"""Writes a histogram summary if possible."""
- def record():
- with summary_op_util.summary_scope(
- name, family, values=[tensor]) as (tag, scope):
- gen_summary_ops.write_histogram_summary(
- context.context().summary_writer_resource,
- training_util.get_global_step(), tag, tensor, name=scope)
+ def function(tag, scope):
+ gen_summary_ops.write_histogram_summary(
+ context.context().summary_writer_resource,
+ training_util.get_global_step(), tag, tensor, name=scope)
- return control_flow_ops.cond(should_record_summaries(), record, _nothing)
+ return summary_writer_function(name, tensor, function, family=family)
def image(name, tensor, bad_color=None, max_images=3, family=None):
"""Writes an image summary if possible."""
- def record():
+ def function(tag, scope):
if bad_color is None:
bad_color_ = constant_op.constant([255, 0, 0, 255], dtype=dtypes.uint8)
- with summary_op_util.summary_scope(
- name, family, values=[tensor]) as (tag, scope):
- gen_summary_ops.write_image_summary(
- context.context().summary_writer_resource,
- training_util.get_global_step(), tag, tensor, bad_color_, max_images,
- name=scope)
+ gen_summary_ops.write_image_summary(
+ context.context().summary_writer_resource,
+ training_util.get_global_step(), tag, tensor, bad_color_, max_images,
+ name=scope)
- return control_flow_ops.cond(should_record_summaries(), record, _nothing)
+ return summary_writer_function(name, tensor, function, family=family)
def audio(name, tensor, sample_rate, max_outputs, family=None):
"""Writes an audio summary if possible."""
- def record():
- with summary_op_util.summary_scope(
- name, family, values=[tensor]) as (tag, scope):
- gen_summary_ops.write_audio_summary(
- context.context().summary_writer_resource,
- training_util.get_global_step(),
- tag,
- tensor,
- sample_rate=sample_rate,
- max_outputs=max_outputs,
- name=scope)
-
- return control_flow_ops.cond(should_record_summaries(), record, _nothing)
+ def function(tag, scope):
+ gen_summary_ops.write_audio_summary(
+ context.context().summary_writer_resource,
+ training_util.get_global_step(),
+ tag,
+ tensor,
+ sample_rate=sample_rate,
+ max_outputs=max_outputs,
+ name=scope)
+
+ return summary_writer_function(name, tensor, function, family=family)
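The summary_ops.py refactor above hoists the shared "should we record?" scaffolding out of each summary op into `summary_writer_function`, which takes the op-specific write as a callback. A framework-free sketch of that pattern (plain `if` standing in for `control_flow_ops.cond`, a string tag standing in for `summary_op_util.summary_scope`):

```python
def summary_writer_function(name, tensor, function, should_record):
    # Shared branching, written once: skip and signal False when recording
    # is off (mirrors _nothing() now returning False), else run the write.
    if not should_record():
        return False
    tag = name  # stands in for the tag produced by summary_scope
    function(tag, scope=name)
    return True

def scalar(name, tensor, written, should_record):
    # Each summary op now only supplies the write itself.
    def function(tag, scope):
        written.append((tag, tensor))
    return summary_writer_function(name, tensor, function, should_record)

log = []
recorded = scalar("loss", 2.0, log, should_record=lambda: True)
skipped = scalar("loss", 3.0, log, should_record=lambda: False)
```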
diff --git a/tensorflow/contrib/summary/summary_ops_test.py b/tensorflow/contrib/summary/summary_ops_test.py
index 56c1a16f7f..4b1f60ce4e 100644
--- a/tensorflow/contrib/summary/summary_ops_test.py
+++ b/tensorflow/contrib/summary/summary_ops_test.py
@@ -17,11 +17,15 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
+import os
import tempfile
from tensorflow.contrib.summary import summary_ops
+from tensorflow.core.util import event_pb2
+from tensorflow.python.eager import function
from tensorflow.python.eager import test
from tensorflow.python.framework import test_util
+from tensorflow.python.lib.io import tf_record
from tensorflow.python.platform import gfile
from tensorflow.python.training import training_util
@@ -36,7 +40,7 @@ class TargetTest(test_util.TensorFlowTestCase):
def testSummaryOps(self):
training_util.get_or_create_global_step()
logdir = tempfile.mkdtemp()
- summary_ops.create_summary_file_writer(logdir, max_queue=0)
+ summary_ops.create_summary_file_writer(logdir, max_queue=0, name='t0')
summary_ops.always_record_summaries()
summary_ops.generic('tensor', 1, '')
summary_ops.scalar('scalar', 2.0)
@@ -47,6 +51,27 @@ class TargetTest(test_util.TensorFlowTestCase):
# test here that we're calling them correctly.
self.assertTrue(gfile.Exists(logdir))
+ def testDefunSummarys(self):
+ training_util.get_or_create_global_step()
+ logdir = tempfile.mkdtemp()
+ summary_ops.create_summary_file_writer(logdir, max_queue=0, name='t1')
+ summary_ops.always_record_summaries()
+
+ @function.defun
+ def write():
+ summary_ops.scalar('scalar', 2.0)
+
+ write()
+
+ self.assertTrue(gfile.Exists(logdir))
+ files = gfile.ListDirectory(logdir)
+ self.assertEqual(len(files), 1)
+ records = list(tf_record.tf_record_iterator(os.path.join(logdir, files[0])))
+ self.assertEqual(len(records), 2)
+ event = event_pb2.Event()
+ event.ParseFromString(records[1])
+ self.assertEqual(event.summary.value[0].simple_value, 2.0)
+
if __name__ == '__main__':
test.main()
diff --git a/tensorflow/contrib/tensor_forest/kernels/v4/split_collection_operators.cc b/tensorflow/contrib/tensor_forest/kernels/v4/split_collection_operators.cc
index ccc412600c..e5d1beae7f 100644
--- a/tensorflow/contrib/tensor_forest/kernels/v4/split_collection_operators.cc
+++ b/tensorflow/contrib/tensor_forest/kernels/v4/split_collection_operators.cc
@@ -96,7 +96,12 @@ void SplitCollectionOperator::AddExample(
}
bool SplitCollectionOperator::IsInitialized(int32 node_id) const {
- return stats_.at(node_id)->IsInitialized();
+ auto it = stats_.find(node_id);
+ if (it == stats_.end()) {
+ LOG(WARNING) << "IsInitialized called with unknown node_id = " << node_id;
+ return false;
+ }
+ return it->second->IsInitialized();
}
void SplitCollectionOperator::CreateAndInitializeCandidateWithExample(
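The `IsInitialized` fix above replaces `stats_.at(node_id)`, which throws `std::out_of_range` on an unknown key, with a `find()` that lets the method log and degrade gracefully. A Python analogue of the same defensive-lookup pattern, using a hypothetical `NodeStats` stub:

```python
import logging

class NodeStats:
    # Hypothetical stand-in for the per-node stats object.
    def __init__(self, initialized):
        self._initialized = initialized

    def is_initialized(self):
        return self._initialized

def is_initialized(stats, node_id):
    node = stats.get(node_id)  # like stats_.find(node_id): no exception
    if node is None:
        logging.warning("IsInitialized called with unknown node_id = %s",
                        node_id)
        return False
    return node.is_initialized()

stats = {7: NodeStats(True)}
```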
diff --git a/tensorflow/contrib/tensorboard/plugins/trace/trace.py b/tensorflow/contrib/tensorboard/plugins/trace/trace.py
index 57f95dfce7..07e5316b8b 100644
--- a/tensorflow/contrib/tensorboard/plugins/trace/trace.py
+++ b/tensorflow/contrib/tensorboard/plugins/trace/trace.py
@@ -38,7 +38,7 @@ TOKENS = LEFT_TOKENS + RIGHT_TOKENS
def store_trace_info(output_file_path,
- graph=ops.get_default_graph(),
+ graph=None,
ignore_regex_fpaths=None):
"""Collects and stores trace information for a TensorFlow model.
@@ -51,6 +51,8 @@ def store_trace_info(output_file_path,
in this list will be ignored. Defaults to patterns that match the core
tensorflow python library.
"""
+ graph = graph or ops.get_default_graph()
+
if not ignore_regex_fpaths:
ignore_regex_fpaths = TF_LIB_REGEX_FPATHS
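The trace.py change above (`graph=ops.get_default_graph()` becoming `graph=None` plus `graph = graph or ops.get_default_graph()`) matters because Python evaluates default arguments once, at function definition time, so the old default captured whichever graph existed at import. A minimal sketch of the difference, with a toy `get_default_graph`:

```python
# Toy stand-in for a process-global "current graph".
_current = {"graph": "g1"}

def get_default_graph():
    return _current["graph"]

def store_trace_info_bad(graph=get_default_graph()):
    # Default was computed when the def ran, so it is frozen at "g1".
    return graph

def store_trace_info_good(graph=None):
    # None-sentinel pattern: resolve the default at call time instead.
    graph = graph or get_default_graph()
    return graph

_current["graph"] = "g2"  # later, the "current graph" changes
```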
diff --git a/tensorflow/contrib/tpu/profiler/op_profile.proto b/tensorflow/contrib/tpu/profiler/op_profile.proto
index 6911b649a0..840a43913b 100644
--- a/tensorflow/contrib/tpu/profiler/op_profile.proto
+++ b/tensorflow/contrib/tpu/profiler/op_profile.proto
@@ -32,6 +32,18 @@ message Node {
string expression = 2; // %multiply = [shape]multiply(operand1, operand2)
string provenance = 3; // Typically the TensorFlow operation name.
string category = 4;
+ // Describes the physical memory layout of the instruction's primary input.
+ // e.g. for a convolution, this analyzes the image and ignores the kernel.
+ LayoutAnalysis layout = 5;
+ message LayoutAnalysis {
+ // The physical data layout, from most-minor to most-major dimensions.
+ repeated Dimension dimensions = 1;
+ message Dimension {
+ int32 size = 1; // Size of the data in this dimension.
+ int32 alignment = 2; // Data must be padded to a multiple of alignment.
+ string semantics = 3; // What the dimension represents, e.g. "spatial".
+ }
+ }
}
}
diff --git a/tensorflow/core/BUILD b/tensorflow/core/BUILD
index 9db2ed830f..9319928307 100644
--- a/tensorflow/core/BUILD
+++ b/tensorflow/core/BUILD
@@ -2380,6 +2380,7 @@ tf_cc_tests(
"util/semver_test.cc",
"util/sparse/sparse_tensor_test.cc",
"util/stat_summarizer_test.cc",
+ "util/tensor_format_test.cc",
"util/tensor_slice_reader_test.cc",
"util/tensor_slice_set_test.cc",
"util/tensor_slice_util_test.cc",
diff --git a/tensorflow/core/common_runtime/simple_graph_execution_state.cc b/tensorflow/core/common_runtime/simple_graph_execution_state.cc
index 2a974d1840..363d3a0c9d 100644
--- a/tensorflow/core/common_runtime/simple_graph_execution_state.cc
+++ b/tensorflow/core/common_runtime/simple_graph_execution_state.cc
@@ -36,7 +36,6 @@ limitations under the License.
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
#include "tensorflow/core/platform/logging.h"
-#include "tensorflow/core/platform/mutex.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/util/util.h"
@@ -54,7 +53,6 @@ SimpleGraphExecutionState::SimpleGraphExecutionState(
: stateful_placements_(options.stateful_placements),
device_set_(options.device_set),
session_options_(options.session_options),
- costs_(true /*is_global*/),
flib_def_(new FunctionLibraryDefinition(OpRegistry::Global(),
graph_def->library())),
graph_(nullptr) {
@@ -258,19 +256,11 @@ Status SimpleGraphExecutionState::InitBaseGraph(
// Save stateful placements before placing.
RestoreStatefulNodes(new_graph.get());
- CostModel costs(true /*is_global*/);
- {
- mutex_lock l(mu_);
- costs_.InitFromGraph(*new_graph);
- costs.MergeFromGlobal(costs_);
- }
-
GraphOptimizationPassOptions optimization_options;
optimization_options.session_options = session_options_;
optimization_options.graph = &new_graph;
optimization_options.flib_def = flib_def_.get();
optimization_options.device_set = device_set_;
- optimization_options.cost_model = &costs;
TF_RETURN_IF_ERROR(OptimizationPassRegistry::Global()->RunGrouping(
OptimizationPassRegistry::PRE_PLACEMENT, optimization_options));
@@ -420,14 +410,11 @@ Status SimpleGraphExecutionState::BuildGraph(
new FunctionLibraryDefinition(*flib_def_));
// TODO(andydavis): Clarify optimization pass requirements around CostModel.
- CostModel costs(true /*is_global*/);
- costs.MergeFromGlobal(costs_);
GraphOptimizationPassOptions optimization_options;
optimization_options.session_options = session_options_;
optimization_options.graph = &ng;
optimization_options.flib_def = flib.get();
optimization_options.device_set = device_set_;
- optimization_options.cost_model = &costs;
TF_RETURN_IF_ERROR(OptimizationPassRegistry::Global()->RunGrouping(
OptimizationPassRegistry::POST_REWRITE_FOR_EXEC, optimization_options));
diff --git a/tensorflow/core/common_runtime/simple_graph_execution_state.h b/tensorflow/core/common_runtime/simple_graph_execution_state.h
index c7f34a42d6..53eef8a07d 100644
--- a/tensorflow/core/common_runtime/simple_graph_execution_state.h
+++ b/tensorflow/core/common_runtime/simple_graph_execution_state.h
@@ -25,19 +25,14 @@ limitations under the License.
#include "tensorflow/core/common_runtime/device.h"
#include "tensorflow/core/common_runtime/device_set.h"
#include "tensorflow/core/framework/graph.pb.h"
-#include "tensorflow/core/framework/step_stats.pb.h"
#include "tensorflow/core/graph/costmodel.h"
#include "tensorflow/core/graph/graph.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/platform/macros.h"
-#include "tensorflow/core/platform/mutex.h"
-#include "tensorflow/core/platform/thread_annotations.h"
#include "tensorflow/core/platform/types.h"
namespace tensorflow {
struct SessionOptions;
-class StepStats;
-class Timeline;
namespace subgraph {
struct RewriteGraphMetadata;
@@ -167,7 +162,6 @@ class SimpleGraphExecutionState {
// Returns the map of stateful placements as a map of
// node name to placement string.
std::unordered_map<string, string> GetStatefulPlacements() const {
- mutex_lock l(mu_);
return stateful_placements_;
}
@@ -193,9 +187,6 @@ class SimpleGraphExecutionState {
const DeviceSet* device_set_; // Not owned
const SessionOptions* session_options_; // Not owned
- mutable mutex mu_;
- CostModel costs_ GUARDED_BY(mu_);
-
// Map from name to Node for the full graph in placed_.
NodeNameToCostIdMap node_name_to_cost_id_map_;
diff --git a/tensorflow/core/framework/common_shape_fns.cc b/tensorflow/core/framework/common_shape_fns.cc
index 0e3ea2ddfb..ab21f47282 100644
--- a/tensorflow/core/framework/common_shape_fns.cc
+++ b/tensorflow/core/framework/common_shape_fns.cc
@@ -206,15 +206,28 @@ Status BiasAddGradShape(shape_inference::InferenceContext* c) {
Status FusedConvBiasActivationShape(shape_inference::InferenceContext* c) {
TF_RETURN_IF_ERROR(Conv2DShape(c));
- ShapeHandle bias_shape;
- TF_RETURN_IF_ERROR(c->WithRankAtLeast(c->input(2), 1, &bias_shape));
- DimensionHandle bias_dim = c->Dim(bias_shape, 0);
+ string data_format_str, filter_format_str;
+ TF_RETURN_IF_ERROR(c->GetAttr("data_format", &data_format_str));
+ TF_RETURN_IF_ERROR(c->GetAttr("filter_format", &filter_format_str));
+
+ TensorFormat data_format;
+ FormatFromString(data_format_str, &data_format);
+ FilterTensorFormat filter_format;
+ FilterFormatFromString(filter_format_str, &filter_format);
+ constexpr int num_spatial_dims = 2;
+ const int rank = GetTensorDimsFromSpatialDims(num_spatial_dims, data_format);
ShapeHandle filter_shape;
- TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 4, &filter_shape));
- DimensionHandle output_depth_dim = c->Dim(filter_shape, 3);
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), rank, &filter_shape));
+ DimensionHandle output_depth_dim = c->Dim(
+ filter_shape, GetFilterDimIndex<num_spatial_dims>(filter_format, 'O'));
int64 output_depth_dim_val = c->Value(output_depth_dim);
+
+ ShapeHandle bias_shape;
+ // Bias should be a 1-D tensor.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 1, &bias_shape));
+ DimensionHandle bias_dim = c->Dim(bias_shape, 0);
int64 bias_dim_val = c->Value(bias_dim);
if (output_depth_dim_val != bias_dim_val) {
@@ -223,6 +236,14 @@ Status FusedConvBiasActivationShape(shape_inference::InferenceContext* c) {
") and bias dimension (", bias_dim_val, ") do not match.");
}
+ // Check side input shape matches the output shape.
+ ShapeHandle side_input_shape;
+ TF_RETURN_IF_ERROR(c->WithRankAtLeast(c->input(3), 1, &side_input_shape));
+ if (c->Rank(side_input_shape) > 1) {
+ ShapeHandle unused;
+ TF_RETURN_IF_ERROR(c->Merge(side_input_shape, c->output(0), &unused));
+ }
+
return Status::OK();
}
@@ -323,24 +344,38 @@ Status ShapeFromDimensions(DimensionHandle batch_dim,
}
Status Conv2DShape(shape_inference::InferenceContext* c) {
- string data_format_str;
- Status s = c->GetAttr("data_format", &data_format_str);
- if (!s.ok()) {
+ string data_format_str, filter_format_str;
+ if (!c->GetAttr("data_format", &data_format_str).ok()) {
data_format_str = "NHWC";
}
+ if (!c->GetAttr("filter_format", &filter_format_str).ok()) {
+ filter_format_str = "HWIO";
+ }
TensorFormat data_format;
if (!FormatFromString(data_format_str, &data_format)) {
return errors::InvalidArgument("Invalid data format string: ",
data_format_str);
}
+ FilterTensorFormat filter_format;
+ if (!FilterFormatFromString(filter_format_str, &filter_format)) {
+ return errors::InvalidArgument("Invalid filter format string: ",
+ filter_format_str);
+ }
+
+ constexpr int num_spatial_dims = 2;
+ const int rank = GetTensorDimsFromSpatialDims(num_spatial_dims, data_format);
+ ShapeHandle conv_input_shape;
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), rank, &conv_input_shape));
+ TF_RETURN_IF_ERROR(CheckFormatConstraintsOnShape(
+ data_format, conv_input_shape, "conv_input", c));
- const int rank = GetTensorDimsFromSpatialDims(2, data_format);
- ShapeHandle input_shape;
- TF_RETURN_IF_ERROR(c->WithRank(c->input(0), rank, &input_shape));
// The filter rank should match the input (4 for NCHW, 5 for NCHW_VECT_C).
ShapeHandle filter_shape;
TF_RETURN_IF_ERROR(c->WithRank(c->input(1), rank, &filter_shape));
+ TF_RETURN_IF_ERROR(
+ CheckFormatConstraintsOnShape(data_format, filter_shape, "filter", c));
+
std::vector<int32> strides;
TF_RETURN_IF_ERROR(c->GetAttr("strides", &strides));
@@ -352,38 +387,33 @@ Status Conv2DShape(shape_inference::InferenceContext* c) {
strides.size());
}
- int32 stride_rows, stride_cols;
- if (data_format == FORMAT_NCHW || data_format == FORMAT_NCHW_VECT_C) {
- stride_rows = strides[2];
- stride_cols = strides[3];
- } else {
- stride_rows = strides[1];
- stride_cols = strides[2];
- }
+ const int32 stride_rows = GetTensorDim(strides, data_format, 'H');
+ const int32 stride_cols = GetTensorDim(strides, data_format, 'W');
DimensionHandle batch_size_dim;
DimensionHandle input_depth_dim;
gtl::InlinedVector<DimensionHandle, 2> input_spatial_dims(2);
- TF_RETURN_IF_ERROR(DimensionsFromShape(input_shape, data_format,
+ TF_RETURN_IF_ERROR(DimensionsFromShape(conv_input_shape, data_format,
&batch_size_dim, &input_spatial_dims,
&input_depth_dim, c));
- DimensionHandle output_depth_dim, filter_rows_dim, filter_cols_dim,
- filter_input_depth_dim;
- // If the input format is NCHW_VECT_C, the filter format is assumed to be
- // OIHW_VECT_I, otherwise it is assumed to be HWIO.
- if (data_format == FORMAT_NCHW_VECT_C) {
- output_depth_dim = c->Dim(filter_shape, 0);
- TF_RETURN_IF_ERROR(c->Multiply(c->Dim(filter_shape, 1),
- c->Dim(filter_shape, 4),
- &filter_input_depth_dim));
- filter_rows_dim = c->Dim(filter_shape, 2);
- filter_cols_dim = c->Dim(filter_shape, 3);
+ DimensionHandle output_depth_dim = c->Dim(
+ filter_shape, GetFilterDimIndex<num_spatial_dims>(filter_format, 'O'));
+ DimensionHandle filter_rows_dim = c->Dim(
+ filter_shape, GetFilterDimIndex<num_spatial_dims>(filter_format, 'H'));
+ DimensionHandle filter_cols_dim = c->Dim(
+ filter_shape, GetFilterDimIndex<num_spatial_dims>(filter_format, 'W'));
+ DimensionHandle filter_input_depth_dim;
+ if (filter_format == FORMAT_OIHW_VECT_I) {
+ TF_RETURN_IF_ERROR(c->Multiply(
+ c->Dim(filter_shape,
+ GetFilterDimIndex<num_spatial_dims>(filter_format, 'I')),
+ c->Dim(filter_shape,
+ GetFilterTensorInnerInputChannelsDimIndex(rank, filter_format)),
+ &filter_input_depth_dim));
} else {
- filter_rows_dim = c->Dim(filter_shape, 0);
- filter_cols_dim = c->Dim(filter_shape, 1);
- filter_input_depth_dim = c->Dim(filter_shape, 2);
- output_depth_dim = c->Dim(filter_shape, 3);
+ filter_input_depth_dim = c->Dim(
+ filter_shape, GetFilterDimIndex<num_spatial_dims>(filter_format, 'I'));
}
// Check that the input tensor and the filter tensor agree on the input
@@ -559,9 +589,6 @@ Status DepthwiseConv2DNativeShape(shape_inference::InferenceContext* c) {
}
Status AvgPoolShape(shape_inference::InferenceContext* c) {
- ShapeHandle input_shape;
- TF_RETURN_IF_ERROR(c->WithRankAtLeast(c->input(0), 4, &input_shape));
-
string data_format_str;
TensorFormat data_format;
Status s = c->GetAttr("data_format", &data_format_str);
@@ -571,6 +598,10 @@ Status AvgPoolShape(shape_inference::InferenceContext* c) {
data_format = FORMAT_NHWC;
}
+ const int rank = (data_format == FORMAT_NCHW_VECT_C) ? 5 : 4;
+ ShapeHandle input_shape;
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), rank, &input_shape));
+
TF_RETURN_IF_ERROR(
CheckFormatConstraintsOnShape(data_format, input_shape, "input", c));
@@ -627,9 +658,6 @@ Status AvgPoolShape(shape_inference::InferenceContext* c) {
}
Status MaxPoolShape(shape_inference::InferenceContext* c) {
- ShapeHandle input_shape;
- TF_RETURN_IF_ERROR(c->WithRankAtLeast(c->input(0), 4, &input_shape));
-
string data_format_str;
TensorFormat data_format;
Status s = c->GetAttr("data_format", &data_format_str);
@@ -639,6 +667,10 @@ Status MaxPoolShape(shape_inference::InferenceContext* c) {
data_format = FORMAT_NHWC;
}
+ const int rank = (data_format == FORMAT_NCHW_VECT_C) ? 5 : 4;
+ ShapeHandle input_shape;
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), rank, &input_shape));
+
TF_RETURN_IF_ERROR(
CheckFormatConstraintsOnShape(data_format, input_shape, "input", c));
@@ -696,11 +728,21 @@ Status MaxPoolShape(shape_inference::InferenceContext* c) {
}
Status MaxPoolV2Shape(shape_inference::InferenceContext* c, int num_inputs) {
+ string data_format_str;
+ TensorFormat data_format;
+ Status s = c->GetAttr("data_format", &data_format_str);
+ if (s.ok()) {
+ FormatFromString(data_format_str, &data_format);
+ } else {
+ data_format = FORMAT_NHWC;
+ }
+
+ const int rank = (data_format == FORMAT_NCHW_VECT_C) ? 5 : 4;
ShapeHandle input_shape;
- TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 4, &input_shape));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), rank, &input_shape));
- string data_format;
- Status s = c->GetAttr("data_format", &data_format);
+ TF_RETURN_IF_ERROR(
+ CheckFormatConstraintsOnShape(data_format, input_shape, "input", c));
std::vector<int32> kernel_sizes;
std::vector<int32> strides;
@@ -725,7 +767,8 @@ Status MaxPoolV2Shape(shape_inference::InferenceContext* c, int num_inputs) {
}
kernel_sizes.resize(kernel_sizes_tensor->shape().num_elements());
auto kernel_sizes_vec = kernel_sizes_tensor->flat<int32>();
- std::copy_n(&kernel_sizes_vec(0), kernel_sizes.size(), kernel_sizes.begin());
+ std::copy_n(&kernel_sizes_vec(0), kernel_sizes.size(),
+ kernel_sizes.begin());
const Tensor* strides_tensor = c->input_tensor(c->num_inputs() - 1);
if (strides_tensor == nullptr) {
@@ -749,35 +792,22 @@ Status MaxPoolV2Shape(shape_inference::InferenceContext* c, int num_inputs) {
kernel_sizes.size());
}
- int32 stride_rows, stride_cols, stride_depth;
- int32 kernel_rows, kernel_cols, kernel_depth;
-
- if (s.ok() && data_format == "NCHW") {
- // Canonicalize input shape to NHWC so the shape inference code below can
- // process it.
- auto dim = [&](char dimension) {
- return c->Dim(input_shape, GetTensorDimIndex<2>(FORMAT_NCHW, dimension));
- };
- input_shape = c->MakeShape({{dim('N'), dim('0'), dim('1'), dim('C')}});
- stride_depth = strides[1];
- stride_rows = strides[2];
- stride_cols = strides[3];
- kernel_depth = kernel_sizes[1];
- kernel_rows = kernel_sizes[2];
- kernel_cols = kernel_sizes[3];
- } else {
- stride_rows = strides[1];
- stride_cols = strides[2];
- stride_depth = strides[3];
- kernel_rows = kernel_sizes[1];
- kernel_cols = kernel_sizes[2];
- kernel_depth = kernel_sizes[3];
- }
+ int32 stride_depth = GetTensorDim(strides, data_format, 'C');
+ int32 stride_rows = GetTensorDim(strides, data_format, 'H');
+ int32 stride_cols = GetTensorDim(strides, data_format, 'W');
+ int32 kernel_depth = GetTensorDim(kernel_sizes, data_format, 'C');
+ int32 kernel_rows = GetTensorDim(kernel_sizes, data_format, 'H');
+ int32 kernel_cols = GetTensorDim(kernel_sizes, data_format, 'W');
- DimensionHandle batch_size_dim = c->Dim(input_shape, 0);
- DimensionHandle in_rows_dim = c->Dim(input_shape, 1);
- DimensionHandle in_cols_dim = c->Dim(input_shape, 2);
- DimensionHandle in_depth_dim = c->Dim(input_shape, 3);
+ constexpr int num_spatial_dims = 2;
+ DimensionHandle batch_size_dim = c->Dim(
+ input_shape, GetTensorDimIndex<num_spatial_dims>(data_format, 'N'));
+ DimensionHandle in_rows_dim = c->Dim(
+ input_shape, GetTensorDimIndex<num_spatial_dims>(data_format, 'H'));
+ DimensionHandle in_cols_dim = c->Dim(
+ input_shape, GetTensorDimIndex<num_spatial_dims>(data_format, 'W'));
+ DimensionHandle in_depth_dim = c->Dim(
+ input_shape, GetTensorDimIndex<num_spatial_dims>(data_format, 'C'));
Padding padding;
TF_RETURN_IF_ERROR(c->GetAttr("padding", &padding));
@@ -791,15 +821,9 @@ Status MaxPoolV2Shape(shape_inference::InferenceContext* c, int num_inputs) {
TF_RETURN_IF_ERROR(GetWindowedOutputSizeFromDims(
c, in_depth_dim, kernel_depth, stride_depth, padding, &output_depth));
- output_shape =
- c->MakeShape({batch_size_dim, output_rows, output_cols, output_depth});
- if (data_format == "NCHW") {
- // Convert output shape back to expected NCHW data format.
- auto dim = [&](char dimension) {
- return c->Dim(output_shape, GetTensorDimIndex<2>(FORMAT_NHWC, dimension));
- };
- output_shape = c->MakeShape({{dim('N'), dim('C'), dim('0'), dim('1')}});
- }
+ TF_RETURN_IF_ERROR(MakeShapeFromFormat(data_format, batch_size_dim,
+ {output_rows, output_cols},
+ output_depth, &output_shape, c));
c->set_output(0, output_shape);
return Status::OK();
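The hunks above swap the hand-written NCHW canonicalization for the `GetTensorDim` / `GetTensorDimIndex` / `MakeShapeFromFormat` helpers. A minimal Python sketch of the lookup those helpers perform (hypothetical reimplementation for illustration only, not the TensorFlow API, and covering only the plain 4-D layouts handled here):

```python
# Map a dimension character ('N', 'H', 'W', 'C') to its index in a data
# format string, then use that index to pull the matching per-dimension
# attribute (stride, kernel size, ...) out of a list.
def dim_index(data_format, char):
    # Only plain 4-D layouts ("NHWC", "NCHW"); the real helper also
    # understands vectorized formats like NCHW_VECT_C.
    assert data_format in ("NHWC", "NCHW")
    return data_format.index(char)

def get_tensor_dim(values, data_format, char):
    return values[dim_index(data_format, char)]

strides_nchw = [1, 1, 2, 3]  # ordered N, C, H, W
assert get_tensor_dim(strides_nchw, "NCHW", "H") == 2
assert get_tensor_dim(strides_nchw, "NCHW", "W") == 3
assert get_tensor_dim([1, 2, 3, 4], "NHWC", "C") == 4
```

Because the index is derived from the format string, the same code path serves both layouts, which is what lets the refactor delete the explicit `if (data_format == "NCHW")` branch.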
diff --git a/tensorflow/core/framework/common_shape_fns_test.cc b/tensorflow/core/framework/common_shape_fns_test.cc
index 14f6c1bb45..ec9746b2af 100644
--- a/tensorflow/core/framework/common_shape_fns_test.cc
+++ b/tensorflow/core/framework/common_shape_fns_test.cc
@@ -14,6 +14,7 @@ limitations under the License.
==============================================================================*/
#include "tensorflow/core/framework/common_shape_fns.h"
+#include "tensorflow/core/framework/fake_input.h"
#include "tensorflow/core/framework/node_def_builder.h"
#include "tensorflow/core/framework/op_def_builder.h"
#include "tensorflow/core/framework/shape_inference_testutil.h"
@@ -411,34 +412,35 @@ TEST(CommonShapeFnsTest, BiasAddGradShapeTest) {
TEST(CommonShapeFnsTest, Conv2DShapeTest) {
ShapeInferenceTestOp op("Conv2D");
auto set_op = [&op](const std::vector<int32>& strides, const string& padding,
- const string& data_format) {
+ const string& data_format, const string& filter_format) {
TF_CHECK_OK(NodeDefBuilder("test", "Conv2D")
.Input("input", 0, DT_FLOAT)
.Input("filter", 0, DT_FLOAT)
.Attr("strides", strides)
.Attr("padding", padding)
.Attr("data_format", data_format)
+ .Attr("filter_format", filter_format)
.Finalize(&op.node_def));
};
// 1x1 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NHWC");
+ set_op({{1, 1, 1, 1}}, "VALID", "NHWC", "HWIO");
INFER_OK(op, "[1,2,2,1];[1,1,1,1]", "[d0_0,2,2,d1_3]");
// 2x2 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NHWC");
+ set_op({{1, 1, 1, 1}}, "VALID", "NHWC", "HWIO");
INFER_OK(op, "[1,2,2,1];[2,2,1,1]", "[d0_0,1,1,d1_3]");
// 3x3 input, 1x1 filter, 2x2 stride
- set_op({{1, 2, 2, 1}}, "VALID", "NHWC");
+ set_op({{1, 2, 2, 1}}, "VALID", "NHWC", "HWIO");
INFER_OK(op, "[1,3,3,1];[1,1,1,1]", "[d0_0,2,2,d1_3]");
// 3x3 input, 1x1 filter, 2x1 stride
- set_op({{1, 2, 1, 1}}, "VALID", "NHWC");
+ set_op({{1, 2, 1, 1}}, "VALID", "NHWC", "HWIO");
INFER_OK(op, "[1,3,3,1];[1,1,1,1]", "[d0_0,2,3,d1_3]");
// 4x4 input, 2x1 filter, 1x2 stride
- set_op({{1, 1, 2, 1}}, "VALID", "NHWC");
+ set_op({{1, 1, 2, 1}}, "VALID", "NHWC", "HWIO");
INFER_OK(op, "[1,4,4,1];[2,1,1,1]", "[d0_0,3,2,d1_3]");
// Invalid rank for input
@@ -460,77 +462,76 @@ TEST(CommonShapeFnsTest, Conv2DShapeTest) {
// Tests for NCHW
// 1x1 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NCHW");
+ set_op({{1, 1, 1, 1}}, "VALID", "NCHW", "HWIO");
INFER_OK(op, "[1,1,2,2];[1,1,1,1]", "[d0_0,d1_3,2,2]");
// 2x2 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NCHW");
+ set_op({{1, 1, 1, 1}}, "VALID", "NCHW", "HWIO");
INFER_OK(op, "[1,1,2,2];[2,2,1,1]", "[d0_0,d1_3,1,1]");
// 3x3 input, 1x1 filter, 2x2 stride
- set_op({{1, 1, 2, 2}}, "VALID", "NCHW");
+ set_op({{1, 1, 2, 2}}, "VALID", "NCHW", "HWIO");
INFER_OK(op, "[1,1,3,3];[1,1,1,1]", "[d0_0,d1_3,2,2]");
// 3x3 input, 1x1 filter, 2x1 stride
- set_op({{1, 1, 2, 1}}, "VALID", "NCHW");
+ set_op({{1, 1, 2, 1}}, "VALID", "NCHW", "HWIO");
INFER_OK(op, "[1,1,3,3];[1,1,1,1]", "[d0_0,d1_3,2,3]");
// 4x4 input, 2x1 filter, 1x2 stride
- set_op({{1, 1, 1, 2}}, "VALID", "NCHW");
+ set_op({{1, 1, 1, 2}}, "VALID", "NCHW", "HWIO");
INFER_OK(op, "[1,1,4,4];[2,1,1,1]", "[d0_0,d1_3,3,2]");
// Tests for NCHW_VECT_C
// 1x1 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NCHW_VECT_C");
+ set_op({{1, 1, 1, 1}}, "VALID", "NCHW_VECT_C", "OIHW_VECT_I");
INFER_OK(op, "[1,1,2,2,4];[4,1,1,1,4]", "[d0_0,1,2,2,4]");
// 2x2 filter
- set_op({{1, 1, 1, 1}}, "VALID", "NCHW_VECT_C");
+ set_op({{1, 1, 1, 1}}, "VALID", "NCHW_VECT_C", "OIHW_VECT_I");
INFER_OK(op, "[1,1,2,2,4];[4,1,2,2,4]", "[d0_0,1,1,1,4]");
// 3x3 input, 1x1 filter, 2x2 stride
- set_op({{1, 1, 2, 2}}, "VALID", "NCHW_VECT_C");
+ set_op({{1, 1, 2, 2}}, "VALID", "NCHW_VECT_C", "OIHW_VECT_I");
INFER_OK(op, "[1,1,3,3,4];[8,1,1,1,4]", "[d0_0,2,2,2,4]");
// 3x3 input, 1x1 filter, 2x1 stride
- set_op({{1, 1, 2, 1}}, "VALID", "NCHW_VECT_C");
+ set_op({{1, 1, 2, 1}}, "VALID", "NCHW_VECT_C", "OIHW_VECT_I");
INFER_OK(op, "[1,1,3,3,4];[4,1,1,1,4]", "[d0_0,1,2,3,4]");
// 4x4 input, 2x1 filter, 1x2 stride
- set_op({{1, 1, 1, 2}}, "VALID", "NCHW_VECT_C");
+ set_op({{1, 1, 1, 2}}, "VALID", "NCHW_VECT_C", "OIHW_VECT_I");
INFER_OK(op, "[1,1,4,4,4];[4,1,2,1,4]", "[d0_0,1,3,2,4]");
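The NCHW_VECT_C expectations above all carry a trailing dimension of 4. A hedged sketch of the packing convention these shapes assume (illustrative helper, not a TensorFlow function):

```python
# NCHW_VECT_C packs channels into blocks of 4, turning an NCHW shape
# [N, C, H, W] into [N, C // 4, H, W, 4].
def to_nchw_vect_c(shape_nchw, vect=4):
    n, c, h, w = shape_nchw
    assert c % vect == 0, "channel count must be divisible by the vector width"
    return [n, c // vect, h, w, vect]

# An 8-output-channel result becomes C dimension 2 plus the trailing 4,
# matching the "[d0_0,2,2,2,4]" expectation in the tests above.
assert to_nchw_vect_c([1, 8, 2, 2]) == [1, 2, 2, 2, 4]
assert to_nchw_vect_c([1, 4, 2, 2]) == [1, 1, 2, 2, 4]
```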
// Some tests for "SAME" padding
// 4x4 input, 1x1 filter, 1x1 stride
- set_op({{1, 1, 1, 1}}, "SAME", "NHWC");
+ set_op({{1, 1, 1, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[1,4,4,1];[1,1,1,1]", "[d0_0,d0_1,d0_2,d1_3]");
// 3x3 input, 2x2 filter, 1x1 stride
- set_op({{1, 1, 1, 1}}, "SAME", "NHWC");
+ set_op({{1, 1, 1, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[1,3,3,1];[2,2,1,1]", "[d0_0,d0_1,d0_2,d1_3]");
// 4x4 input, 2x2 filter, 2x2 stride
- set_op({{1, 2, 2, 1}}, "SAME", "NHWC");
+ set_op({{1, 2, 2, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[1,4,4,1];[2,2,1,1]", "[d0_0,2,2,d1_3]");
// 4x4 input, 2x2 filter, 1x1 stride
- set_op({{1, 1, 1, 1}}, "SAME", "NHWC");
+ set_op({{1, 1, 1, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[1,4,4,1];[2,2,1,1]", "[d0_0,d0_1,d0_2,d1_3]");
// With stride 1x1 and SAME, unknown dims don't matter - filter dims except
// for output channels are ignored for output, so all inputs are carried
// through to output.
- set_op({{1, 1, 1, 1}}, "SAME", "NHWC");
+ set_op({{1, 1, 1, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[1,4,4,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
INFER_OK(op, "[1,?,4,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
INFER_OK(op, "[1,4,?,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
INFER_OK(op, "[1,4,4,?];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
- INFER_OK(op, "[1,4,4,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
- INFER_OK(op, "[1,4,4,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
+ INFER_OK(op, "[?,4,4,1];[?,?,?,?]", "[d0_0,d0_1,d0_2,d1_3]");
// With stride != 1, the input HW dims are divided to produce output dims.
- set_op({{1, 2, 2, 1}}, "SAME", "NHWC");
+ set_op({{1, 2, 2, 1}}, "SAME", "NHWC", "HWIO");
INFER_OK(op, "[?,4,4,1];[?,?,?,?]", "[d0_0,2,2,d1_3]");
INFER_OK(op, "[1,?,4,1];[?,?,?,?]", "[d0_0,?,2,d1_3]");
INFER_OK(op, "[1,4,?,1];[?,?,?,?]", "[d0_0,2,?,d1_3]");
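The SAME-padding expectations above follow the usual rule that each spatial output extent is the input extent divided by the stride, rounded up, independent of the filter size. A quick sketch:

```python
import math

# SAME padding: output size depends only on input size and stride.
def same_out(in_size, stride):
    return math.ceil(in_size / stride)

assert same_out(4, 1) == 4  # stride 1 carries input dims straight through
assert same_out(4, 2) == 2  # matches the "[d0_0,2,2,d1_3]" case above
assert same_out(3, 2) == 2
```

This is why, with stride 1x1 and SAME, unknown filter dims don't matter: every input dimension except channels is carried through unchanged.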
@@ -704,7 +705,7 @@ TEST(CommonShapeFnsTest, AvgPool2DShapeTest) {
INFER_ERROR("Dimension must be 4 but is 3", op, "[2,5,7,11,3]");
// Invalid rank for input
- INFER_ERROR("must be at least rank 4", op, "[4,4]");
+ INFER_ERROR("Shape must be rank", op, "[4,4]");
}
TEST(CommonShapeFnsTest, MaxPool2DShapeTest) {
@@ -741,6 +742,48 @@ TEST(CommonShapeFnsTest, MaxPool2DShapeTest) {
INFER_ERROR("Dimension must be 4 but is 8", op, "[2,3,5,7,8]");
}
+TEST(CommonShapeFnsTest, MaxPoolV22DShapeTest) {
+ ShapeInferenceTestOp op("MaxPoolV2");
+ Tensor ksizes_tensor, strides_tensor;
+ auto set_op = [&op, &ksizes_tensor, &strides_tensor](
+ const std::vector<int32>& strides,
+ const std::vector<int32>& ksizes, const string& padding,
+ const string& data_format) {
+ TF_CHECK_OK(NodeDefBuilder("test", "MaxPoolV2")
+ .Input("input", 0, DT_FLOAT)
+ .Input("ksize", 1, DT_INT32)
+ .Input("strides", 2, DT_INT32)
+ .Attr("padding", padding)
+ .Attr("data_format", data_format)
+ .Finalize(&op.node_def));
+ ksizes_tensor = test::AsTensor<int32>(ksizes);
+ op.input_tensors.resize(3);
+ op.input_tensors[0] = nullptr;
+ op.input_tensors[1] = &ksizes_tensor;
+ strides_tensor = test::AsTensor<int32>(strides);
+ op.input_tensors[2] = &strides_tensor;
+ };
+
+ // Most of the functionality is tested by conv-like shapes, so here we
+ // check the max-pooling-specific features, namely kernels and strides
+ // along the depth dimension.
+
+ // all 1 strides, depth 2 filter
+ set_op({1, 1, 1, 1}, {1, 1, 1, 2}, "VALID", "NHWC");
+ INFER_OK(op, "[1,2,2,2];[4];[4]", "[d0_0,2,2,1]");
+
+ // depth 3 stride, 1x1x1 filter, NCHW
+ set_op({1, 3, 1, 1}, {1, 1, 1, 1}, "VALID", "NCHW");
+ INFER_OK(op, "[1,7,5,5];[4];[4]", "[d0_0,3,5,5]");
+
+ // 5x7 input, 2x2 ksize, 1x1 stride, NCHW_VECT_C tests
+ set_op({1, 1, 1, 1}, {1, 1, 2, 2}, "SAME", "NCHW_VECT_C");
+ INFER_OK(op, "[2,3,5,7,4];[4];[4]", "[d0_0,d0_1,d0_2,d0_3,4]");
+ INFER_OK(op, "[5,7,?,?,4];[4];[4]", "[d0_0,d0_1,d0_2,d0_3,4]");
+ INFER_OK(op, "[?,?,?,?,4];[4];[4]", "[d0_0,d0_1,d0_2,d0_3,4]");
+ INFER_ERROR("Dimension must be 4 but is 8", op, "[2,3,5,7,8];[4];[4]");
+}
+
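The expected output shapes in this test follow the VALID-padding window formula; sketched below (illustrative only, mirroring what `GetWindowedOutputSizeFromDims` computes for VALID padding):

```python
# VALID padding: the window must fit entirely inside the input.
def valid_out(in_size, kernel, stride):
    return (in_size - kernel) // stride + 1

# Depth-2 kernel over a depth-2 NHWC input -> output depth 1, as in the
# "[d0_0,2,2,1]" expectation above.
assert valid_out(2, 2, 1) == 1
# Depth stride 3 with a size-1 kernel over 7 channels -> 3, as in the
# NCHW "[d0_0,3,5,5]" expectation above.
assert valid_out(7, 1, 3) == 3
```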
TEST(CommonShapeFnsTest, Pool3DShapeTest) {
ShapeInferenceTestOp op("MaxPool3D");
auto set_op = [&op](const std::vector<int32>& strides,
diff --git a/tensorflow/core/framework/summary.proto b/tensorflow/core/framework/summary.proto
index ba49033331..55879f8783 100644
--- a/tensorflow/core/framework/summary.proto
+++ b/tensorflow/core/framework/summary.proto
@@ -42,7 +42,7 @@ message SummaryMetadata {
// The content to store for the plugin. The best practice is for this to be
// a binary serialized protocol buffer.
- string content = 2;
+ bytes content = 2;
}
// Data that associates a summary with a certain plugin.
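The `string` to `bytes` change matters because proto3 `string` fields must contain valid UTF-8, while a binary-serialized protocol buffer is arbitrary bytes. A small illustration in plain Python (no protobuf dependency):

```python
# An arbitrary binary payload such as a serialized proto: 0xFF can never
# appear in well-formed UTF-8, so a proto3 `string` field would reject it.
payload = bytes([0x08, 0x96, 0x01, 0xFF])
try:
    payload.decode("utf-8")
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False
assert not is_valid_utf8  # `bytes content = 2;` sidesteps this restriction
```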
diff --git a/tensorflow/core/graph/mkl_layout_pass.cc b/tensorflow/core/graph/mkl_layout_pass.cc
index 4c79323197..cf5d6e8baa 100644
--- a/tensorflow/core/graph/mkl_layout_pass.cc
+++ b/tensorflow/core/graph/mkl_layout_pass.cc
@@ -1205,12 +1205,12 @@ int MklLayoutRewritePass::SetUpContiguousInputs(
if (do_connect_conv2d_backprop_input_filter &&
iidx == kConv2DBackpropInputFilterInputSlotIdx) {
GetNodeProducingMklTensor(g, old_node, conv2d_node,
- kConv2DFilterOutputSlotIdx,
- &mkl_node, &mkl_node_output_slot);
+ kConv2DFilterOutputSlotIdx, &mkl_node,
+ &mkl_node_output_slot);
} else {
GetNodeProducingMklTensor(g, old_node, old_node_inputs[iidx].first,
- old_node_inputs[iidx].second,
- &mkl_node, &mkl_node_output_slot);
+ old_node_inputs[iidx].second, &mkl_node,
+ &mkl_node_output_slot);
}
nb->Input(mkl_node, mkl_node_output_slot);
iidx++;
diff --git a/tensorflow/core/kernels/BUILD b/tensorflow/core/kernels/BUILD
index efc5d7c553..32a1b2c84d 100644
--- a/tensorflow/core/kernels/BUILD
+++ b/tensorflow/core/kernels/BUILD
@@ -20,6 +20,7 @@ package_group(
packages = [
"//learning/brain/contrib/...",
"//learning/brain/research/sparse_matrix/...",
+ "//learning/faster_training/...",
"//tensorflow/...",
],
)
@@ -3350,10 +3351,9 @@ tf_cc_test(
srcs = ["parse_tensor_test.cc"],
deps = [
":ops_testutil",
- ":ops_util",
":parse_tensor_op",
+ "//tensorflow/core:core_cpu_internal",
"//tensorflow/core:framework",
- "//tensorflow/core:test",
"//tensorflow/core:test_main",
"//tensorflow/core:testlib",
],
@@ -5588,6 +5588,20 @@ cc_library(
)
cc_library(
+ name = "dataset_utils",
+ srcs = ["dataset_utils.cc"],
+ hdrs = ["dataset_utils.h"],
+ deps = [
+ ":captured_function",
+ ":dataset",
+ "//tensorflow/core:framework",
+ "//tensorflow/core:lib",
+ "//tensorflow/core:lib_internal",
+ "//tensorflow/core/util/tensor_bundle",
+ ],
+)
+
+cc_library(
name = "captured_function",
srcs = ["captured_function.cc"],
hdrs = ["captured_function.h"],
@@ -5713,6 +5727,7 @@ tf_kernel_library(
deps = [
":captured_function",
":dataset",
+ ":dataset_utils",
"//tensorflow/core:core_cpu_internal",
"//tensorflow/core:dataset_ops_op_lib",
"//tensorflow/core:framework",
@@ -5727,6 +5742,22 @@ tf_kernel_library(
deps = [
":captured_function",
":dataset",
+ ":dataset_utils",
+ "//tensorflow/core:core_cpu_internal",
+ "//tensorflow/core:dataset_ops_op_lib",
+ "//tensorflow/core:framework",
+ "//tensorflow/core:lib",
+ "//tensorflow/core:lib_internal",
+ ],
+)
+
+tf_kernel_library(
+ name = "sloppy_interleave_dataset_op",
+ srcs = ["sloppy_interleave_dataset_op.cc"],
+ deps = [
+ ":captured_function",
+ ":dataset",
+ ":dataset_utils",
"//tensorflow/core:core_cpu_internal",
"//tensorflow/core:dataset_ops_op_lib",
"//tensorflow/core:framework",
@@ -5963,6 +5994,7 @@ tf_kernel_library(
":repeat_dataset_op",
":shuffle_dataset_op",
":skip_dataset_op",
+ ":sloppy_interleave_dataset_op",
":sparse_tensor_slice_dataset_op",
":sql_dataset_ops",
":take_dataset_op",
diff --git a/tensorflow/core/kernels/conv_ops_gpu.h b/tensorflow/core/kernels/conv_ops_gpu.h
index 168cf37bc7..c852dc9991 100644
--- a/tensorflow/core/kernels/conv_ops_gpu.h
+++ b/tensorflow/core/kernels/conv_ops_gpu.h
@@ -92,11 +92,11 @@ class ConvParameters {
ConvParameters(int64 batch, int64 in_depths, const SpatialArray& in,
int64 out_depths, const SpatialArray& filter,
const SpatialArray& stride, const SpatialArray& padding,
- const DataType& dtype, int device_id)
+ DataType dtype, int device_id)
: batch_(batch),
in_depths_(in_depths),
- in_(in),
out_depths_(out_depths),
+ in_(in),
filter_(filter),
stride_(stride),
padding_(padding),
@@ -130,7 +130,8 @@ class ConvParameters {
"(", str_util::Join(filter_, ", "), "), ",
"(", str_util::Join(stride_, ", "), "), ",
"(", str_util::Join(padding_, ", "), "), ",
- dtype_, ", ", device_id_);
+ dtype_, ", ",
+ device_id_);
// clang-format on
}
@@ -150,26 +151,28 @@ class ConvParameters {
}
}
- private:
- typedef std::tuple<int64, int64, SpatialArray, int64, SpatialArray,
- SpatialArray, SpatialArray, DataType, int>
- ParameterDataType;
+ protected:
+ using ParameterDataType =
+ std::tuple<int64, int64, SpatialArray, int64, SpatialArray, SpatialArray,
+ SpatialArray, DataType, int>;
ParameterDataType get_data_as_tuple() const {
return std::make_tuple(batch_, in_depths_, in_, out_depths_, filter_,
stride_, padding_, dtype_, device_id_);
}
+ uint64 hash_code_;
+
+ private:
int64 batch_;
int64 in_depths_;
- SpatialArray in_;
int64 out_depths_;
+ SpatialArray in_;
SpatialArray filter_;
SpatialArray stride_;
SpatialArray padding_;
DataType dtype_;
int device_id_;
- uint64 hash_code_;
};
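Moving `get_data_as_tuple` and `hash_code_` into the `protected` section lets subclasses of `ConvParameters` reuse the tuple-based equality and hashing. The idea in a compact sketch (hypothetical names, not the TensorFlow class):

```python
# Pack every field into one tuple so equality and hashing fall out of
# the tuple's own semantics, rather than per-field boilerplate.
class ConvParams:
    def __init__(self, batch, in_depths, out_depths,
                 filter_, stride, padding, dtype, device_id):
        self._t = (batch, in_depths, out_depths, tuple(filter_),
                   tuple(stride), tuple(padding), dtype, device_id)

    def __eq__(self, other):
        return self._t == other._t

    def __hash__(self):
        return hash(self._t)

a = ConvParams(1, 3, 8, [3, 3], [1, 1], [0, 0], "float32", 0)
b = ConvParams(1, 3, 8, [3, 3], [1, 1], [0, 0], "float32", 0)
assert a == b and hash(a) == hash(b)  # usable as an autotune cache key
```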
typedef Eigen::GpuDevice GPUDevice;
diff --git a/tensorflow/core/kernels/conv_ops_gpu_3.cu.cc b/tensorflow/core/kernels/conv_ops_gpu_3.cu.cc
index 2307c2de0e..3d4670c9ba 100644
--- a/tensorflow/core/kernels/conv_ops_gpu_3.cu.cc
+++ b/tensorflow/core/kernels/conv_ops_gpu_3.cu.cc
@@ -556,6 +556,7 @@ template struct functor::NCHWToNHWC<GPUDevice, double, 4>;
template struct functor::NCHWToNHWC<GPUDevice, float, 4>;
template struct functor::NCHWToNHWC<GPUDevice, Eigen::half, 4>;
+template struct functor::PadInput<GPUDevice, int, int, 4>;
template struct functor::PadInput<GPUDevice, float, int, 4>;
template struct functor::PadInput<GPUDevice, Eigen::half, int, 4>;
diff --git a/tensorflow/core/kernels/crop_and_resize_op.cc b/tensorflow/core/kernels/crop_and_resize_op.cc
index 56181a686c..45cc2fbbb8 100644
--- a/tensorflow/core/kernels/crop_and_resize_op.cc
+++ b/tensorflow/core/kernels/crop_and_resize_op.cc
@@ -19,59 +19,98 @@ limitations under the License.
#include "tensorflow/core/kernels/crop_and_resize_op.h"
+#include <functional>
+#include <string>
+
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/framework/register_types.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/kernels/bounds_check.h"
+#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/platform/logging.h"
+#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/util/work_sharder.h"
#if GOOGLE_CUDA
+#include "tensorflow/core/common_runtime/gpu/gpu_event_mgr.h"
+#include "tensorflow/core/platform/cuda.h"
#include "tensorflow/core/platform/stream_executor.h"
+
+using ::perftools::gputools::cuda::ScopedActivateExecutorContext;
#endif // GOOGLE_CUDA
namespace tensorflow {
typedef Eigen::ThreadPoolDevice CPUDevice;
typedef Eigen::GpuDevice GPUDevice;
+using Callback = std::function<void()>;
+
+namespace {
-static inline void ParseAndCheckBoxSizes(OpKernelContext* context,
- const Tensor& boxes,
- const Tensor& box_ind,
- int* num_boxes) {
- if (boxes.NumElements() == 0 && box_ind.NumElements() == 0) {
+static inline Status ParseAndCheckBoxSizes(const Tensor& boxes,
+ const Tensor& box_index,
+ int* num_boxes) {
+ if (boxes.NumElements() == 0 && box_index.NumElements() == 0) {
*num_boxes = 0;
- return;
+ return Status::OK();
}
// The shape of 'boxes' is [num_boxes, 4].
- OP_REQUIRES(context, boxes.dims() == 2,
- errors::InvalidArgument("boxes must be 2-D",
- boxes.shape().DebugString()));
+ if (boxes.dims() != 2) {
+ return errors::InvalidArgument("boxes must be 2-D",
+ boxes.shape().DebugString());
+ }
*num_boxes = boxes.dim_size(0);
- OP_REQUIRES(context, boxes.dim_size(1) == 4,
- errors::InvalidArgument("boxes must have 4 columns"));
-
- // The shape of 'box_ind' is [num_boxes].
- OP_REQUIRES(context, box_ind.dims() == 1,
- errors::InvalidArgument("box_ind must be 1-D",
- box_ind.shape().DebugString()));
- OP_REQUIRES(context, box_ind.dim_size(0) == *num_boxes,
- errors::InvalidArgument("box_ind has incompatible shape"));
+ if (boxes.dim_size(1) != 4) {
+ return errors::InvalidArgument("boxes must have 4 columns");
+ }
+ // The shape of 'box_index' is [num_boxes].
+ if (box_index.dims() != 1) {
+ return errors::InvalidArgument("box_index must be 1-D",
+ box_index.shape().DebugString());
+ }
+ if (box_index.dim_size(0) != *num_boxes) {
+ return errors::InvalidArgument("box_index has incompatible shape");
+ }
+ return Status::OK();
}
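The hunk above converts `ParseAndCheckBoxSizes` from `OP_REQUIRES` macros (which write errors into the kernel context and early-return from the caller) into a `Status`-returning helper, so both the sync and async kernel paths can reuse it. The shape of that refactor, sketched with a `(num_boxes, error)` pair standing in for `Status` (illustrative only):

```python
# Validation returns an error value instead of mutating a context.
def parse_and_check_box_sizes(boxes_shape, box_index_shape):
    if len(boxes_shape) != 2:
        return None, "boxes must be 2-D"
    num_boxes = boxes_shape[0]
    if boxes_shape[1] != 4:
        return None, "boxes must have 4 columns"
    if len(box_index_shape) != 1:
        return None, "box_index must be 1-D"
    if box_index_shape[0] != num_boxes:
        return None, "box_index has incompatible shape"
    return num_boxes, None

assert parse_and_check_box_sizes((3, 4), (3,)) == (3, None)
assert parse_and_check_box_sizes((3, 5), (3,))[1] == "boxes must have 4 columns"
```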
-// Verifies that all values in box_ind are in [0, batch).
+// Calls the compute callback only if all values in box_index are in
+// [0, batch_size); calls done in either case.
template <typename Device>
-inline void CheckValidBoxInd(
- OpKernelContext* context,
- typename TTypes<int32, 1>::ConstTensor box_ind_data, int batch);
+inline void RunIfBoxIndexIsValid(
+ OpKernelContext* context, typename TTypes<int32, 1>::ConstTensor box_index,
+ int batch_size, const Callback& compute, const Callback& done);
+
+// Specialization of RunIfBoxIndexIsValid for a CPUDevice.
+template <>
+inline void RunIfBoxIndexIsValid<CPUDevice>(
+ OpKernelContext* context, typename TTypes<int32, 1>::ConstTensor box_index,
+ int batch_size, const Callback& compute, const Callback& done) {
+ const int num_boxes = box_index.dimension(0);
+ for (int b = 0; b < num_boxes; ++b) {
+ OP_REQUIRES_ASYNC(
+ context, FastBoundsCheck(box_index(b), batch_size),
+ errors::OutOfRange("box_index has values outside [0, batch_size)"),
+ done);
+ }
+ if (compute) {
+ compute();
+ }
+ if (done) {
+ done();
+ }
+}
+
+} // namespace
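`FastBoundsCheck`, used in the specialization above, folds the `index >= 0 && index < limit` pair into a single unsigned comparison. The trick, simulated in Python (the int32-to-uint32 reinterpretation is modeled with a modulo):

```python
# Reinterpreting a negative int32 as uint32 yields a value >= 2**31,
# which necessarily exceeds any valid limit, so one compare suffices.
def fast_bounds_check(index, limit, bits=32):
    return (index % (1 << bits)) < limit

assert fast_bounds_check(0, 8)
assert fast_bounds_check(7, 8)
assert not fast_bounds_check(8, 8)
assert not fast_bounds_check(-1, 8)  # wraps to 2**32 - 1, fails the compare
```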
template <typename Device, typename T>
-class CropAndResizeOp : public OpKernel {
+class CropAndResizeOp : public AsyncOpKernel {
public:
- explicit CropAndResizeOp(OpKernelConstruction* context) : OpKernel(context) {
+ explicit CropAndResizeOp(OpKernelConstruction* context)
+ : AsyncOpKernel(context) {
string method;
OP_REQUIRES_OK(context, context->GetAttr("method", &method));
OP_REQUIRES(context, method == "bilinear",
@@ -80,69 +119,77 @@ class CropAndResizeOp : public OpKernel {
&extrapolation_value_));
}
- void Compute(OpKernelContext* context) override {
- // The shape of 'image' is [batch, image_height, image_width, channels].
+ void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
+ // The shape of 'image' is [batch_size, image_height, image_width,
+ // channels].
const Tensor& image = context->input(0);
- OP_REQUIRES(context, image.dims() == 4,
- errors::InvalidArgument("input image must be 4-D",
- image.shape().DebugString()));
-
- const int batch = image.dim_size(0);
- const int image_height = image.dim_size(1);
- const int image_width = image.dim_size(2);
- const int depth = image.dim_size(3);
- OP_REQUIRES(context, image_height > 0 && image_width > 0,
- errors::InvalidArgument("image dimensions must be positive"));
-
// The shape of 'boxes' is [num_boxes, 4].
const Tensor& boxes = context->input(1);
-
- // The shape of 'box_ind' is [num_boxes].
- const Tensor& box_ind = context->input(2);
-
- int num_boxes = 0;
- ParseAndCheckBoxSizes(context, boxes, box_ind, &num_boxes);
-
+ // The shape of 'box_index' is [num_boxes].
+ const Tensor& box_index = context->input(2);
// The shape of 'crop_size' is [2].
const Tensor& crop_size = context->input(3);
- OP_REQUIRES(context, crop_size.dims() == 1,
- errors::InvalidArgument("crop_size must be 1-D",
- crop_size.shape().DebugString()));
- OP_REQUIRES(context, crop_size.dim_size(0) == 2,
- errors::InvalidArgument("crop_size must have two elements",
- crop_size.shape().DebugString()));
-
+ // Validate inputs dimensions.
+ OP_REQUIRES_ASYNC(context, image.dims() == 4,
+ errors::InvalidArgument("input image must be 4-D",
+ image.shape().DebugString()),
+ done);
+ const int batch_size = image.dim_size(0);
+ const int image_height = image.dim_size(1);
+ const int image_width = image.dim_size(2);
+ const int depth = image.dim_size(3);
+ OP_REQUIRES_ASYNC(
+ context, image_height > 0 && image_width > 0,
+ errors::InvalidArgument("image dimensions must be positive"), done);
+ int num_boxes = 0;
+ OP_REQUIRES_OK_ASYNC(
+ context, ParseAndCheckBoxSizes(boxes, box_index, &num_boxes), done);
+
+ OP_REQUIRES_ASYNC(context, crop_size.dims() == 1,
+ errors::InvalidArgument("crop_size must be 1-D",
+ crop_size.shape().DebugString()),
+ done);
+ OP_REQUIRES_ASYNC(
+ context, crop_size.dim_size(0) == 2,
+ errors::InvalidArgument("crop_size must have two elements",
+ crop_size.shape().DebugString()),
+ done);
+
+ // Copy and validate crop sizes.
auto crop_size_vec = crop_size.vec<int32>();
const int crop_height = internal::SubtleMustCopy(crop_size_vec(0));
const int crop_width = internal::SubtleMustCopy(crop_size_vec(1));
- OP_REQUIRES(context, crop_height > 0 && crop_width > 0,
- errors::InvalidArgument("crop dimensions must be positive"));
+ OP_REQUIRES_ASYNC(
+ context, crop_height > 0 && crop_width > 0,
+ errors::InvalidArgument("crop dimensions must be positive"), done);
// Allocate output tensor.
Tensor* output = nullptr;
- OP_REQUIRES_OK(
+ OP_REQUIRES_OK_ASYNC(
context,
context->allocate_output(
0, TensorShape({num_boxes, crop_height, crop_width, depth}),
- &output));
-
- typename TTypes<T, 4>::ConstTensor image_data = image.tensor<T, 4>();
- typename TTypes<float, 2>::ConstTensor boxes_data =
- boxes.tensor<float, 2>();
- typename TTypes<int32, 1>::ConstTensor box_ind_data =
- box_ind.tensor<int32, 1>();
- typename TTypes<float, 4>::Tensor crops_data = output->tensor<float, 4>();
-
- CheckValidBoxInd<Device>(context, box_ind_data, batch);
-
- bool status = functor::CropAndResize<Device, T>()(
- context, image_data, boxes_data, box_ind_data, extrapolation_value_,
- crops_data);
- if (!status) {
- context->SetStatus(
- errors::Internal("Failed launch CropAndResizeKernel."));
- }
+ &output),
+ done);
+
+ auto compute_callback = [this, context, output]() {
+ const Tensor& image = context->input(0);
+ const Tensor& boxes = context->input(1);
+ const Tensor& box_index = context->input(2);
+ const bool status = functor::CropAndResize<Device, T>()(
+ context, image.tensor<T, 4>(), boxes.tensor<float, 2>(),
+ box_index.tensor<int32, 1>(), extrapolation_value_,
+ output->tensor<float, 4>());
+ if (!status) {
+ context->SetStatus(
+ errors::Internal("Failed to launch CropAndResizeKernel."));
+ }
+ };
+
+ RunIfBoxIndexIsValid<Device>(context, box_index.tensor<int32, 1>(),
+ batch_size, std::move(compute_callback),
+ std::move(done));
}
private:
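The `ComputeAsync` flow above validates with the `OP_REQUIRES_*_ASYNC` macros (each early-exits through `done`), then hands both `compute_callback` and `done` to `RunIfBoxIndexIsValid`. The resulting control flow, abstracted into a sketch (illustrative only; the CPU path runs callbacks inline, while a GPU specialization could defer them):

```python
# If every index is in range, run compute then done; otherwise call
# done immediately and skip compute (the error path).
def run_if_box_index_is_valid(box_index, batch_size, compute, done):
    for b in box_index:
        if not (0 <= b < batch_size):
            done()
            return False
    compute()
    done()
    return True

events = []
ok = run_if_box_index_is_valid([0, 1], 2,
                               lambda: events.append("compute"),
                               lambda: events.append("done"))
assert ok and events == ["compute", "done"]
```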
@@ -156,10 +203,10 @@ struct CropAndResize<CPUDevice, T> {
bool operator()(const OpKernelContext* context,
typename TTypes<T, 4>::ConstTensor image,
typename TTypes<float, 2>::ConstTensor boxes,
- typename TTypes<int32, 1>::ConstTensor box_ind,
+ typename TTypes<int32, 1>::ConstTensor box_index,
float extrapolation_value,
typename TTypes<float, 4>::Tensor crops) {
- const int batch = image.dimension(0);
+ const int batch_size = image.dimension(0);
const int image_height = image.dimension(1);
const int image_width = image.dimension(2);
@@ -176,8 +223,8 @@ struct CropAndResize<CPUDevice, T> {
const float y2 = boxes(b, 2);
const float x2 = boxes(b, 3);
- const int32 b_in = box_ind(b);
- if (b_in < 0 || b_in >= batch) {
+ const int32 b_in = box_index(b);
+ if (!FastBoundsCheck(b_in, batch_size)) {
continue;
}
@@ -255,89 +302,94 @@ struct CropAndResize<CPUDevice, T> {
return true;
}
};
+
} // namespace functor
template <typename Device, typename T>
-class CropAndResizeGradImageOp : public OpKernel {
+class CropAndResizeGradImageOp : public AsyncOpKernel {
public:
explicit CropAndResizeGradImageOp(OpKernelConstruction* context)
- : OpKernel(context) {
+ : AsyncOpKernel(context) {
string method;
OP_REQUIRES_OK(context, context->GetAttr("method", &method));
OP_REQUIRES(context, method == "bilinear",
errors::InvalidArgument("method must be 'bilinear'", method));
}
- void Compute(OpKernelContext* context) override {
+ void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
// The shape of 'grads' is [num_boxes, crop_height, crop_width, depth].
const Tensor& grads = context->input(0);
-
- OP_REQUIRES(context, grads.dims() == 4,
- errors::InvalidArgument("grads image must be 4-D",
- grads.shape().DebugString()));
- const int crop_height = grads.dim_size(1);
- const int crop_width = grads.dim_size(2);
- OP_REQUIRES(context, crop_height > 0 && crop_width > 0,
- errors::InvalidArgument("grads dimensions must be positive"));
-
// The shape of 'boxes' is [num_boxes, 4].
const Tensor& boxes = context->input(1);
-
- // The shape of 'box_ind' is [num_boxes].
- const Tensor& box_ind = context->input(2);
-
- int num_boxes = 0;
- ParseAndCheckBoxSizes(context, boxes, box_ind, &num_boxes);
-
- OP_REQUIRES(
- context, grads.dim_size(0) == num_boxes,
- errors::InvalidArgument("boxes and grads have incompatible shape"));
-
+ // The shape of 'box_index' is [num_boxes].
+ const Tensor& box_index = context->input(2);
// The shape of 'image_size' is [4].
const Tensor& image_size = context->input(3);
- OP_REQUIRES(context, image_size.dims() == 1,
- errors::InvalidArgument("image_size must be 1-D",
- image_size.shape().DebugString()));
- OP_REQUIRES(context, image_size.dim_size(0) == 4,
- errors::InvalidArgument("image_size must have 4 elements",
- image_size.shape().DebugString()));
+ // Validate input shapes.
+ OP_REQUIRES_ASYNC(context, grads.dims() == 4,
+ errors::InvalidArgument("grads image must be 4-D",
+ grads.shape().DebugString()),
+ done);
+ const int crop_height = grads.dim_size(1);
+ const int crop_width = grads.dim_size(2);
+ OP_REQUIRES_ASYNC(
+ context, crop_height > 0 && crop_width > 0,
+ errors::InvalidArgument("grads dimensions must be positive"), done);
+ int num_boxes = 0;
+ OP_REQUIRES_OK_ASYNC(
+ context, ParseAndCheckBoxSizes(boxes, box_index, &num_boxes), done);
+ OP_REQUIRES_ASYNC(
+ context, grads.dim_size(0) == num_boxes,
+ errors::InvalidArgument("boxes and grads have incompatible shape"),
+ done);
+
+ OP_REQUIRES_ASYNC(context, image_size.dims() == 1,
+ errors::InvalidArgument("image_size must be 1-D",
+ image_size.shape().DebugString()),
+ done);
+ OP_REQUIRES_ASYNC(context, image_size.dim_size(0) == 4,
+ errors::InvalidArgument("image_size must have 4 elements",
+ image_size.shape().DebugString()),
+ done);
auto image_size_vec = image_size.vec<int32>();
- const int batch = internal::SubtleMustCopy(image_size_vec(0));
+ const int batch_size = internal::SubtleMustCopy(image_size_vec(0));
const int image_height = internal::SubtleMustCopy(image_size_vec(1));
const int image_width = internal::SubtleMustCopy(image_size_vec(2));
const int depth = internal::SubtleMustCopy(image_size_vec(3));
-
- OP_REQUIRES(context, image_height > 0 && image_width > 0,
- errors::InvalidArgument("image dimensions must be positive"));
- OP_REQUIRES(
+ OP_REQUIRES_ASYNC(
+ context, image_height > 0 && image_width > 0,
+ errors::InvalidArgument("image dimensions must be positive"), done);
+ OP_REQUIRES_ASYNC(
context, grads.dim_size(3) == depth,
- errors::InvalidArgument("image_size and grads are incompatible"));
+ errors::InvalidArgument("image_size and grads are incompatible"), done);
// Allocate output tensor.
Tensor* output = nullptr;
- OP_REQUIRES_OK(
- context, context->allocate_output(
- 0, TensorShape({batch, image_height, image_width, depth}),
- &output));
-
- typename TTypes<float, 4>::ConstTensor grads_data =
- grads.tensor<float, 4>();
- typename TTypes<float, 2>::ConstTensor boxes_data =
- boxes.tensor<float, 2>();
- typename TTypes<int32, 1>::ConstTensor box_ind_data =
- box_ind.tensor<int32, 1>();
- typename TTypes<T, 4>::Tensor output_data = output->tensor<T, 4>();
-
- CheckValidBoxInd<Device>(context, box_ind_data, batch);
-
- bool status = functor::CropAndResizeBackpropImage<Device, T>()(
- context->eigen_device<Device>(), grads_data, boxes_data, box_ind_data,
- output_data);
- if (!status) {
- context->SetStatus(
- errors::Internal("Failed launch CropAndResizeBackpropImageKernel."));
- }
+ OP_REQUIRES_OK_ASYNC(
+ context,
+ context->allocate_output(
+ 0, TensorShape({batch_size, image_height, image_width, depth}),
+ &output),
+ done);
+
+ auto compute_callback = [context, output]() {
+ const Tensor& grads = context->input(0);
+ const Tensor& boxes = context->input(1);
+ const Tensor& box_index = context->input(2);
+ const bool status = functor::CropAndResizeBackpropImage<Device, T>()(
+ context->eigen_device<Device>(), grads.tensor<float, 4>(),
+ boxes.tensor<float, 2>(), box_index.tensor<int32, 1>(),
+ output->tensor<T, 4>());
+ if (!status) {
+ context->SetStatus(errors::Internal(
+ "Failed to launch CropAndResizeBackpropImage kernel."));
+ }
+ };
+
+ RunIfBoxIndexIsValid<Device>(context, box_index.tensor<int32, 1>(),
+ batch_size, std::move(compute_callback),
+ std::move(done));
}
};
@@ -348,9 +400,9 @@ struct CropAndResizeBackpropImage<CPUDevice, T> {
bool operator()(const CPUDevice& d,
typename TTypes<float, 4>::ConstTensor grads,
typename TTypes<float, 2>::ConstTensor boxes,
- typename TTypes<int32, 1>::ConstTensor box_ind,
+ typename TTypes<int32, 1>::ConstTensor box_index,
typename TTypes<T, 4>::Tensor grads_image) {
- const int batch = grads_image.dimension(0);
+ const int batch_size = grads_image.dimension(0);
const int image_height = grads_image.dimension(1);
const int image_width = grads_image.dimension(2);
@@ -367,8 +419,8 @@ struct CropAndResizeBackpropImage<CPUDevice, T> {
const float y2 = boxes(b, 2);
const float x2 = boxes(b, 3);
- const int32 b_in = box_ind(b);
- if (b_in < 0 || b_in >= batch) {
+ const int32 b_in = box_index(b);
+ if (!FastBoundsCheck(b_in, batch_size)) {
continue;
}
@@ -419,83 +471,90 @@ struct CropAndResizeBackpropImage<CPUDevice, T> {
return true;
}
};
+
} // namespace functor
template <typename Device, typename T>
-class CropAndResizeGradBoxesOp : public OpKernel {
+class CropAndResizeGradBoxesOp : public AsyncOpKernel {
public:
explicit CropAndResizeGradBoxesOp(OpKernelConstruction* context)
- : OpKernel(context) {
+ : AsyncOpKernel(context) {
string method;
OP_REQUIRES_OK(context, context->GetAttr("method", &method));
OP_REQUIRES(context, method == "bilinear",
errors::InvalidArgument("method must be 'bilinear'", method));
}
- void Compute(OpKernelContext* context) override {
+ void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
// The shape of 'grads' is [num_boxes, crop_height, crop_width, depth].
const Tensor& grads = context->input(0);
+ // The shape of 'boxes' is [num_boxes, 4].
+ const Tensor& boxes = context->input(2);
+ // The shape of 'box_index' is [num_boxes].
+ const Tensor& box_index = context->input(3);
+ // The shape of 'image' is [batch_size, image_height, image_width, depth].
+ const Tensor& image = context->input(1);
- OP_REQUIRES(context, grads.dims() == 4,
- errors::InvalidArgument("grads image must be 4-D",
- grads.shape().DebugString()));
-
+ // Validate input shapes.
+ OP_REQUIRES_ASYNC(context, grads.dims() == 4,
+ errors::InvalidArgument("grads image must be 4-D",
+ grads.shape().DebugString()),
+ done);
const int crop_height = grads.dim_size(1);
const int crop_width = grads.dim_size(2);
const int depth = grads.dim_size(3);
- OP_REQUIRES(context, crop_height > 0 && crop_width > 0,
- errors::InvalidArgument("grads dimensions must be positive"));
-
- // The shape of 'image' is [batch, image_height, image_width, depth].
- const Tensor& image = context->input(1);
- OP_REQUIRES(context, image.dims() == 4,
- errors::InvalidArgument("input image must be 4-D",
- image.shape().DebugString()));
-
- const int batch = image.dim_size(0);
+ OP_REQUIRES_ASYNC(
+ context, crop_height > 0 && crop_width > 0,
+ errors::InvalidArgument("grads dimensions must be positive"), done);
+
+ OP_REQUIRES_ASYNC(context, image.dims() == 4,
+ errors::InvalidArgument("input image must be 4-D",
+ image.shape().DebugString()),
+ done);
+ const int batch_size = image.dim_size(0);
const int image_height = image.dim_size(1);
const int image_width = image.dim_size(2);
- OP_REQUIRES(context, image_height > 0 && image_width > 0,
- errors::InvalidArgument("image dimensions must be positive"));
- OP_REQUIRES(context, image.dim_size(3) == depth,
- errors::InvalidArgument("image, grads depth differ"));
-
- // The shape of 'boxes' is [num_boxes, 4].
- const Tensor& boxes = context->input(2);
-
- // The shape of 'box_ind' is [num_boxes].
- const Tensor& box_ind = context->input(3);
+ OP_REQUIRES_ASYNC(
+ context, image_height > 0 && image_width > 0,
+ errors::InvalidArgument("image dimensions must be positive"), done);
+ OP_REQUIRES_ASYNC(context, image.dim_size(3) == depth,
+ errors::InvalidArgument("image, grads depth differ"),
+ done);
int num_boxes = 0;
- ParseAndCheckBoxSizes(context, boxes, box_ind, &num_boxes);
+ OP_REQUIRES_OK_ASYNC(
+ context, ParseAndCheckBoxSizes(boxes, box_index, &num_boxes), done);
- OP_REQUIRES(
+ OP_REQUIRES_ASYNC(
context, grads.dim_size(0) == num_boxes,
- errors::InvalidArgument("boxes and grads have incompatible shape"));
+ errors::InvalidArgument("boxes and grads have incompatible shape"),
+ done);
// Allocate output tensor.
Tensor* output = nullptr;
- OP_REQUIRES_OK(context, context->allocate_output(
- 0, TensorShape({num_boxes, 4}), &output));
-
- typename TTypes<float, 4>::ConstTensor grads_data =
- grads.tensor<float, 4>();
- typename TTypes<T, 4>::ConstTensor image_data = image.tensor<T, 4>();
- typename TTypes<float, 2>::ConstTensor boxes_data =
- boxes.tensor<float, 2>();
- typename TTypes<int32, 1>::ConstTensor box_ind_data =
- box_ind.tensor<int32, 1>();
- typename TTypes<float, 2>::Tensor output_data = output->tensor<float, 2>();
-
- CheckValidBoxInd<Device>(context, box_ind_data, batch);
-
- bool status = functor::CropAndResizeBackpropBoxes<Device, T>()(
- context->eigen_device<Device>(), grads_data, image_data, boxes_data,
- box_ind_data, output_data);
- if (!status) {
- context->SetStatus(
- errors::Internal("Failed launch CropAndResizeBackpropBoxesKernel."));
- }
+ OP_REQUIRES_OK_ASYNC(
+ context,
+ context->allocate_output(0, TensorShape({num_boxes, 4}), &output),
+ done);
+
+ auto compute_callback = [context, output]() {
+ const Tensor& grads = context->input(0);
+ const Tensor& image = context->input(1);
+ const Tensor& boxes = context->input(2);
+ const Tensor& box_index = context->input(3);
+ const bool status = functor::CropAndResizeBackpropBoxes<Device, T>()(
+ context->eigen_device<Device>(), grads.tensor<float, 4>(),
+ image.tensor<T, 4>(), boxes.tensor<float, 2>(),
+ box_index.tensor<int32, 1>(), output->tensor<float, 2>());
+ if (!status) {
+ context->SetStatus(errors::Internal(
+            "Failed to launch CropAndResizeBackpropBoxes kernel."));
+ }
+ };
+
+ RunIfBoxIndexIsValid<Device>(context, box_index.tensor<int32, 1>(),
+ batch_size, std::move(compute_callback),
+ std::move(done));
}
};
@@ -507,9 +566,9 @@ struct CropAndResizeBackpropBoxes<CPUDevice, T> {
typename TTypes<float, 4>::ConstTensor grads,
typename TTypes<T, 4>::ConstTensor image,
typename TTypes<float, 2>::ConstTensor boxes,
- typename TTypes<int32, 1>::ConstTensor box_ind,
+ typename TTypes<int32, 1>::ConstTensor box_index,
typename TTypes<float, 2>::Tensor grads_boxes) {
- const int batch = image.dimension(0);
+ const int batch_size = image.dimension(0);
const int image_height = image.dimension(1);
const int image_width = image.dimension(2);
@@ -526,8 +585,8 @@ struct CropAndResizeBackpropBoxes<CPUDevice, T> {
const float y2 = boxes(b, 2);
const float x2 = boxes(b, 3);
- const int32 b_in = box_ind(b);
- if (b_in < 0 || b_in >= batch) {
+ const int32 b_in = box_index(b);
+ if (!FastBoundsCheck(b_in, batch_size)) {
continue;
}
@@ -609,30 +668,19 @@ struct CropAndResizeBackpropBoxes<CPUDevice, T> {
return true;
}
};
-} // namespace functor
-// Specialization of CheckValidBoxInd for a CPUDevice.
-template <>
-inline void CheckValidBoxInd<CPUDevice>(
- OpKernelContext* context, typename TTypes<int32, 1>::ConstTensor box_ind,
- int batch) {
- const int num_boxes = box_ind.dimension(0);
- for (int b = 0; b < num_boxes; ++b) {
- OP_REQUIRES(context, box_ind(b) >= 0 && box_ind(b) < batch,
- errors::OutOfRange("box_ind has values outside [0, batch)"));
- }
-}
+} // namespace functor
-#define REGISTER_KERNEL(T) \
- REGISTER_KERNEL_BUILDER(Name("CropAndResize") \
- .Device(DEVICE_CPU) \
- .TypeConstraint<T>("T") \
- .HostMemory("crop_size"), \
- CropAndResizeOp<CPUDevice, T>); \
- \
- REGISTER_KERNEL_BUILDER(Name("CropAndResizeGradBoxes") \
- .Device(DEVICE_CPU) \
- .TypeConstraint<T>("T"), \
+#define REGISTER_KERNEL(T) \
+ REGISTER_KERNEL_BUILDER(Name("CropAndResize") \
+ .Device(DEVICE_CPU) \
+ .TypeConstraint<T>("T") \
+ .HostMemory("crop_size"), \
+ CropAndResizeOp<CPUDevice, T>); \
+ \
+ REGISTER_KERNEL_BUILDER(Name("CropAndResizeGradBoxes") \
+ .Device(DEVICE_CPU) \
+ .TypeConstraint<T>("T"), \
CropAndResizeGradBoxesOp<CPUDevice, T>);
TF_CALL_REAL_NUMBER_TYPES(REGISTER_KERNEL);
@@ -654,50 +702,93 @@ TF_CALL_double(REGISTER_KERNEL);
#if GOOGLE_CUDA
-// Forward declaration of the CheckValidBoxIndHelper specialization for GPU.
+// Forward declaration of the CheckValidBoxIndexHelper specialization for GPU.
namespace functor {
template <>
-void CheckValidBoxIndHelper<GPUDevice>::operator()(
- const GPUDevice& d, typename TTypes<int32, 1>::ConstTensor box_ind,
- int batch, typename TTypes<bool, 0>::Tensor isvalid);
-extern template struct CheckValidBoxIndHelper<GPUDevice>;
+void CheckValidBoxIndexHelper<GPUDevice>::operator()(
+ const GPUDevice& d, typename TTypes<int32, 1>::ConstTensor box_index,
+ int batch_size, typename TTypes<bool, 0>::Tensor isvalid);
+extern template struct CheckValidBoxIndexHelper<GPUDevice>;
} // namespace functor
-// Specialization of CheckValidBoxInd for a GPUDevice.
+namespace {
+
+// Specialization of CheckValidBoxIndex for a GPUDevice.
template <>
-inline void CheckValidBoxInd<GPUDevice>(
- OpKernelContext* context, typename TTypes<int32, 1>::ConstTensor box_ind,
- int batch) {
- const int num_boxes = box_ind.dimension(0);
+inline void RunIfBoxIndexIsValid<GPUDevice>(
+ OpKernelContext* context, typename TTypes<int32, 1>::ConstTensor box_index,
+ int batch_size, const Callback& compute, const Callback& done) {
+ const int num_boxes = box_index.dimension(0);
if (num_boxes == 0) {
+ compute();
+ done();
return;
}
- Tensor isvalid_tensor;
- OP_REQUIRES_OK(context,
- context->allocate_temp(DataTypeToEnum<bool>::value,
- TensorShape({}), &isvalid_tensor));
- typename TTypes<bool, 0>::Tensor isvalid = isvalid_tensor.tensor<bool, 0>();
+ Tensor isvalid_dev_tensor;
+ OP_REQUIRES_OK_ASYNC(
+ context,
+ context->allocate_temp(DataTypeToEnum<bool>::value, TensorShape({}),
+ &isvalid_dev_tensor),
+ done);
+ typename TTypes<bool, 0>::Tensor isvalid_dev =
+ isvalid_dev_tensor.tensor<bool, 0>();
- functor::CheckValidBoxIndHelper<GPUDevice>()(
- context->eigen_device<GPUDevice>(), box_ind, batch, isvalid);
+ // Run the actual box check on the device.
+ functor::CheckValidBoxIndexHelper<GPUDevice>()(
+ context->eigen_device<GPUDevice>(), box_index, batch_size, isvalid_dev);
+ // Copy the result back to the host.
auto* stream = context->op_device_context()->stream();
- OP_REQUIRES(context, stream, errors::Internal("No GPU stream available."));
-
- bool isvalid_host = false;
- perftools::gputools::DeviceMemoryBase isvalid_gpu(isvalid.data(),
- sizeof(bool));
- stream->ThenMemcpy(&isvalid_host, isvalid_gpu, sizeof(bool));
- stream->BlockHostUntilDone();
-
- OP_REQUIRES(context, stream->ok(),
- errors::Internal("cudaMemcpy from device to host failed"));
-
- OP_REQUIRES(context, isvalid_host,
- errors::OutOfRange("box_ind has values outside [0, batch)"));
+ OP_REQUIRES_ASYNC(context, stream,
+ errors::Internal("No GPU stream available."), done);
+ Tensor isvalid_host_tensor;
+  // Use pinned host memory to avoid unnecessary synchronization.
+ AllocatorAttributes alloc_attr;
+ alloc_attr.set_on_host(true);
+ alloc_attr.set_gpu_compatible(true);
+ OP_REQUIRES_OK_ASYNC(
+ context,
+ context->allocate_temp(DataTypeToEnum<bool>::value, TensorShape({}),
+ &isvalid_host_tensor, alloc_attr),
+ done);
+ perftools::gputools::DeviceMemoryBase wrapped(isvalid_dev.data(),
+ sizeof(bool));
+ const bool status =
+ stream
+ ->ThenMemcpy(
+ isvalid_host_tensor.scalar<bool>().data() /* destination */,
+ wrapped /* source */, sizeof(bool))
+ .ok();
+ OP_REQUIRES_ASYNC(
+ context, status,
+ errors::Internal("Failed to launch copy of isvalid from device to host."),
+ done);
+
+ // We capture both temporary tensors to prevent them from being deallocated
+ // when ComputeAsync returns and before the closure runs.
+ TensorReference isvalid_dev_ref(isvalid_dev_tensor);
+ auto wrapped_callback = [context, isvalid_host_tensor, isvalid_dev_ref,
+ compute, done]() {
+ auto stream = context->op_device_context()->stream();
+ ScopedActivateExecutorContext scoped_activation{stream->parent()};
+ const bool isvalid = isvalid_host_tensor.scalar<bool>()();
+ isvalid_dev_ref.Unref();
+ OP_REQUIRES_ASYNC(
+ context, isvalid,
+ errors::OutOfRange("box_index has values outside [0, batch_size)"),
+ done);
+ compute();
+ done();
+ };
+
+ context->device()->tensorflow_gpu_device_info()->event_mgr->ThenExecute(
+ stream, wrapped_callback);
}
+} // namespace
+
#define REGISTER_KERNEL(T) \
REGISTER_KERNEL_BUILDER(Name("CropAndResize") \
.Device(DEVICE_GPU) \
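The control flow that `RunIfBoxIndexIsValid` implements above (run the compute callback only when the box-index check passed, always signal completion) can be sketched host-side with plain callbacks. This is a deliberate simplification that ignores the GPU stream, the device-to-host copy, and the event manager:

```cpp
#include <functional>

// Host-only sketch of the RunIfBoxIndexIsValid contract: the compute
// callback runs only when validation succeeded, while done() runs exactly
// once on every path (the AsyncOpKernel completion contract).
void RunIfValidSketch(bool isvalid, const std::function<void()>& compute,
                      const std::function<void()>& done) {
  if (isvalid) {
    compute();  // e.g. launch the CropAndResize backprop kernel
  }
  done();
}
```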
diff --git a/tensorflow/core/kernels/crop_and_resize_op.h b/tensorflow/core/kernels/crop_and_resize_op.h
index 84d7a5e03b..b6b1dbd7b0 100644
--- a/tensorflow/core/kernels/crop_and_resize_op.h
+++ b/tensorflow/core/kernels/crop_and_resize_op.h
@@ -55,12 +55,12 @@ struct CropAndResizeBackpropBoxes {
};
template <typename Device>
-struct CheckValidBoxIndHelper {
- // Checks if all values in box_ind are in [0, batch).
+struct CheckValidBoxIndexHelper {
+ // Checks if all values in box_index are in [0, batch).
void operator()(const Device& d,
- typename TTypes<int32, 1>::ConstTensor box_ind, int batch,
+ typename TTypes<int32, 1>::ConstTensor box_index, int batch,
typename TTypes<bool, 0>::Tensor isvalid) {
- isvalid.device(d) = ((box_ind >= 0) && (box_ind < batch)).all();
+ isvalid.device(d) = ((box_index >= 0) && (box_index < batch)).all();
}
};
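The Eigen reduction in `CheckValidBoxIndexHelper` above has a straightforward host analogue using `std::all_of`; a sketch with illustrative names:

```cpp
#include <algorithm>
#include <vector>

// Host analogue of CheckValidBoxIndexHelper's reduction: true iff every
// box index lies in [0, batch). An empty index list is vacuously valid.
bool AllBoxIndicesValid(const std::vector<int>& box_index, int batch) {
  return std::all_of(box_index.begin(), box_index.end(),
                     [batch](int b) { return b >= 0 && b < batch; });
}
```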
diff --git a/tensorflow/core/kernels/crop_and_resize_op_gpu.cu.cc b/tensorflow/core/kernels/crop_and_resize_op_gpu.cu.cc
index 1726e4a816..d12787d524 100644
--- a/tensorflow/core/kernels/crop_and_resize_op_gpu.cu.cc
+++ b/tensorflow/core/kernels/crop_and_resize_op_gpu.cu.cc
@@ -442,7 +442,7 @@ TF_CALL_GPU_NUMBER_TYPES(DEFINE_GPU_SPECS);
#undef DEFINE_GPU_SPECS
-template struct CheckValidBoxIndHelper<GPUDevice>;
+template struct CheckValidBoxIndexHelper<GPUDevice>;
} // namespace functor
} // namespace tensorflow
diff --git a/tensorflow/core/kernels/crop_and_resize_op_test.cc b/tensorflow/core/kernels/crop_and_resize_op_test.cc
index 1bf28d4d00..22c659b587 100644
--- a/tensorflow/core/kernels/crop_and_resize_op_test.cc
+++ b/tensorflow/core/kernels/crop_and_resize_op_test.cc
@@ -251,7 +251,7 @@ TEST_F(CropAndResizeOpTest, TestInvalidBoxIndexShape) {
Status s = RunOpKernel();
ASSERT_FALSE(s.ok());
EXPECT_TRUE(
- StringPiece(s.ToString()).contains("box_ind has incompatible shape"))
+ StringPiece(s.ToString()).contains("box_index has incompatible shape"))
<< s;
}
@@ -264,7 +264,7 @@ TEST_F(CropAndResizeOpTest, TestInvalidBoxIndex) {
Status s = RunOpKernel();
ASSERT_FALSE(s.ok());
EXPECT_TRUE(StringPiece(s.ToString())
- .contains("box_ind has values outside [0, batch)"))
+ .contains("box_index has values outside [0, batch_size)"))
<< s;
}
diff --git a/tensorflow/core/kernels/cuda_solvers.h b/tensorflow/core/kernels/cuda_solvers.h
index ac6119d8a2..0fd6450f98 100644
--- a/tensorflow/core/kernels/cuda_solvers.h
+++ b/tensorflow/core/kernels/cuda_solvers.h
@@ -313,6 +313,9 @@ class ScratchSpace {
int64 size() const { return scratch_tensor_.NumElements(); }
const string& debug_info() const { return debug_info_; }
+ Tensor& tensor() { return scratch_tensor_; }
+ const Tensor& tensor() const { return scratch_tensor_; }
+
// Returns true if this ScratchSpace is in host memory.
bool on_host() const { return on_host_; }
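The new `tensor()` accessors on `ScratchSpace` follow the standard const/non-const overload pair. A minimal sketch of the pattern (the class and member here are illustrative, not the real `ScratchSpace`):

```cpp
#include <vector>

class ScratchSpaceSketch {
 public:
  explicit ScratchSpaceSketch(int n) : buf_(n) {}
  // Mutable access for callers that fill the scratch buffer...
  std::vector<float>& tensor() { return buf_; }
  // ...and a read-only overload selected through const references.
  const std::vector<float>& tensor() const { return buf_; }

 private:
  std::vector<float> buf_;
};
```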
diff --git a/tensorflow/core/kernels/cuda_solvers_gpu.cu.cc b/tensorflow/core/kernels/cuda_solvers_gpu.cu.cc
index b9e42b4d00..af6c094d7a 100644
--- a/tensorflow/core/kernels/cuda_solvers_gpu.cu.cc
+++ b/tensorflow/core/kernels/cuda_solvers_gpu.cu.cc
@@ -51,55 +51,57 @@ namespace {
// Hacks around missing support for complex arithmetic in nvcc.
template <typename Scalar>
-__host__ __device__ inline Scalar Multiply(Scalar x, Scalar y) {
+__device__ inline Scalar Multiply(Scalar x, Scalar y) {
return x * y;
}
template <>
-__host__ __device__ inline cuComplex Multiply(cuComplex x, cuComplex y) {
+__device__ inline cuComplex Multiply(cuComplex x, cuComplex y) {
return cuCmulf(x, y);
}
template <>
-__host__ __device__ inline cuDoubleComplex Multiply(cuDoubleComplex x,
- cuDoubleComplex y) {
+__device__ inline cuDoubleComplex Multiply(cuDoubleComplex x,
+ cuDoubleComplex y) {
return cuCmul(x, y);
}
template <typename Scalar>
-__host__ __device__ inline Scalar Negate(Scalar x) {
+__device__ inline Scalar Negate(Scalar x) {
return -x;
}
template <>
-__host__ __device__ inline cuComplex Negate(cuComplex x) {
+__device__ inline cuComplex Negate(cuComplex x) {
return make_cuComplex(-cuCrealf(x), -cuCimagf(x));
}
template <>
-__host__ __device__ inline cuDoubleComplex Negate(cuDoubleComplex x) {
+__device__ inline cuDoubleComplex Negate(cuDoubleComplex x) {
return make_cuDoubleComplex(-cuCreal(x), -cuCimag(x));
}
template <typename Scalar>
-__host__ __device__ inline bool IsFinite(Scalar x) {
- return isfinite(x);
+__device__ inline bool IsFinite(Scalar x) {
+ return Eigen::numext::isfinite(x);
}
template <>
-__host__ __device__ inline bool IsFinite(cuComplex x) {
- return isfinite(cuCrealf(x)) && isfinite(cuCimagf(x));
+__device__ inline bool IsFinite(cuComplex x) {
+ return Eigen::numext::isfinite(cuCrealf(x)) &&
+ Eigen::numext::isfinite(cuCimagf(x));
}
template <>
-__host__ __device__ inline bool IsFinite(cuDoubleComplex x) {
- return isfinite(cuCreal(x)) && isfinite(cuCimag(x));
+__device__ inline bool IsFinite(cuDoubleComplex x) {
+ return Eigen::numext::isfinite(cuCreal(x)) &&
+ Eigen::numext::isfinite(cuCimag(x));
}
template <typename Scalar>
struct Const {
template <typename RealScalar>
- __host__ __device__ static inline Scalar make_const(const RealScalar x) {
+ __device__ static inline Scalar make_const(const RealScalar x) {
return Scalar(x);
}
};
@@ -107,7 +109,7 @@ struct Const {
template <>
struct Const<cuComplex> {
template <typename RealScalar>
- __host__ __device__ static inline cuComplex make_const(const RealScalar x) {
+ __device__ static inline cuComplex make_const(const RealScalar x) {
return make_cuComplex(x, 0.0f);
}
};
@@ -115,8 +117,7 @@ struct Const<cuComplex> {
template <>
struct Const<cuDoubleComplex> {
template <typename RealScalar>
- __host__ __device__ static inline cuDoubleComplex make_const(
- const RealScalar x) {
+ __device__ static inline cuDoubleComplex make_const(const RealScalar x) {
return make_cuDoubleComplex(x, 0.0f);
}
};
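The `cuComplex` specializations above hand-roll complex arithmetic because nvcc cannot use `std::complex` in device code. For comparison, the same product formula on the host (a sketch, not TensorFlow code):

```cpp
#include <complex>

// (a+bi)(c+di) = (ac - bd) + (ad + bc)i -- the formula cuCmulf computes.
std::complex<float> MultiplySketch(std::complex<float> x,
                                   std::complex<float> y) {
  return {x.real() * y.real() - x.imag() * y.imag(),
          x.real() * y.imag() + x.imag() * y.real()};
}
```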
diff --git a/tensorflow/core/kernels/dataset_utils.cc b/tensorflow/core/kernels/dataset_utils.cc
new file mode 100644
index 0000000000..f320b3b09c
--- /dev/null
+++ b/tensorflow/core/kernels/dataset_utils.cc
@@ -0,0 +1,78 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#include "tensorflow/core/kernels/dataset_utils.h"
+
+namespace tensorflow {
+
+namespace dataset {
+
+Status MakeIteratorFromInputElement(
+ IteratorContext* ctx, const std::vector<Tensor>& input_element,
+ int64 thread_index, CapturedFunction* captured_func, StringPiece prefix,
+ std::unique_ptr<IteratorBase>* out_iterator) {
+ FunctionLibraryRuntime::Options opts;
+ opts.runner = ctx->runner();
+ // Choose a step ID that is guaranteed not to clash with any
+ // Session-generated step ID. DirectSession only generates
+ // non-negative step IDs (contiguous, starting from 0), and
+ // MasterSession generates 56-bit random step IDs whose MSB
+ // is always 0, so a negative random step ID should suffice.
+ opts.step_id = CapturedFunction::generate_step_id();
+ ScopedStepContainer step_container(
+ opts.step_id, [captured_func, ctx](const string& name) {
+ captured_func->resource_manager()->Cleanup(name).IgnoreError();
+ });
+ opts.step_container = &step_container;
+ std::vector<Tensor> return_values;
+ TF_RETURN_IF_ERROR(captured_func->Run(opts, input_element, &return_values));
+
+ if (!(return_values.size() == 1 && return_values[0].dtype() == DT_RESOURCE &&
+ TensorShapeUtils::IsScalar(return_values[0].shape()))) {
+ return errors::InvalidArgument(
+ "Function must return a single scalar of dtype DT_RESOURCE.");
+ }
+
+ // Retrieve the dataset that was created in `f`.
+ DatasetBase* returned_dataset;
+ const ResourceHandle& dataset_resource =
+ return_values[0].scalar<ResourceHandle>()();
+
+ // NOTE(mrry): We cannot use the core `LookupResource()` or
+ // `DeleteResource()` functions, because we have an
+ // `IteratorContext*` and not an `OpKernelContext*`, so we
+ // replicate the necessary functionality here.
+ auto type_index = MakeTypeIndex<DatasetBase>();
+ if (type_index.hash_code() != dataset_resource.hash_code()) {
+ return errors::InvalidArgument("Function must return a Dataset resource.");
+ }
+ TF_RETURN_IF_ERROR(captured_func->resource_manager()->Lookup(
+ dataset_resource.container(), dataset_resource.name(),
+ &returned_dataset));
+ core::ScopedUnref unref_dataset(returned_dataset);
+
+ // Create an iterator for the dataset that was returned by
+ // `f`. This transfers ownership of the dataset to the
+ // iterator, so we can delete it from the resource manager.
+ *out_iterator = returned_dataset->MakeIterator(
+ strings::StrCat(prefix, "[", thread_index, "]"));
+ TF_RETURN_IF_ERROR(captured_func->resource_manager()->Delete<DatasetBase>(
+ dataset_resource.container(), dataset_resource.name()));
+ return Status::OK();
+}
+
+} // namespace dataset
+
+} // namespace tensorflow
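The step-ID comment in `MakeIteratorFromInputElement` (a negative random ID cannot clash with Session-generated IDs) can be sketched as below; `GenerateStepIdSketch` is a hypothetical stand-in for `CapturedFunction::generate_step_id()`:

```cpp
#include <cstdint>
#include <random>

// Force the sign bit so the ID is always negative: DirectSession generates
// non-negative IDs and MasterSession keeps the MSB clear, so negative
// values can never collide with either.
std::int64_t GenerateStepIdSketch(std::mt19937_64& rng) {
  return static_cast<std::int64_t>(rng() | (std::uint64_t{1} << 63));
}
```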
diff --git a/tensorflow/core/kernels/dataset_utils.h b/tensorflow/core/kernels/dataset_utils.h
new file mode 100644
index 0000000000..eea2b8802b
--- /dev/null
+++ b/tensorflow/core/kernels/dataset_utils.h
@@ -0,0 +1,35 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+#ifndef THIRD_PARTY_TENSORFLOW_CORE_KERNELS_DATASET_UTILS_H_
+#define THIRD_PARTY_TENSORFLOW_CORE_KERNELS_DATASET_UTILS_H_
+
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/kernels/captured_function.h"
+#include "tensorflow/core/kernels/dataset.h"
+
+namespace tensorflow {
+
+namespace dataset {
+
+Status MakeIteratorFromInputElement(
+ IteratorContext* ctx, const std::vector<Tensor>& input_element,
+ int64 thread_index, CapturedFunction* captured_func, StringPiece prefix,
+ std::unique_ptr<IteratorBase>* out_iterator);
+
+} // namespace dataset
+
+} // namespace tensorflow
+
+#endif // THIRD_PARTY_TENSORFLOW_CORE_KERNELS_DATASET_UTILS_H_
diff --git a/tensorflow/core/kernels/flat_map_dataset_op.cc b/tensorflow/core/kernels/flat_map_dataset_op.cc
index e2310fecc7..a87e54bf31 100644
--- a/tensorflow/core/kernels/flat_map_dataset_op.cc
+++ b/tensorflow/core/kernels/flat_map_dataset_op.cc
@@ -20,6 +20,7 @@ limitations under the License.
#include "tensorflow/core/lib/random/random.h"
#include "tensorflow/core/kernels/captured_function.h"
+#include "tensorflow/core/kernels/dataset_utils.h"
namespace tensorflow {
@@ -125,58 +126,9 @@ class FlatMapDatasetOp : public UnaryDatasetOpKernel {
return Status::OK();
}
- FunctionLibraryRuntime::Options opts;
- opts.runner = ctx->runner();
- opts.step_id = CapturedFunction::generate_step_id();
- ScopedStepContainer step_container(
- opts.step_id, [this, ctx](const string& name) {
- dataset()
- ->captured_func_->resource_manager()
- ->Cleanup(name)
- .IgnoreError();
- });
- opts.step_container = &step_container;
- std::vector<Tensor> return_values;
- TF_RETURN_IF_ERROR(
- dataset()->captured_func_->Run(opts, args, &return_values));
-
- if (!(return_values.size() == 1 &&
- return_values[0].dtype() == DT_RESOURCE &&
- TensorShapeUtils::IsScalar(return_values[0].shape()))) {
- return errors::InvalidArgument(
- "`f` must return a single scalar of dtype DT_RESOURCE.");
- }
-
- // Retrieve the dataset that was created in `f`.
- DatasetBase* returned_dataset;
- const ResourceHandle& dataset_resource =
- return_values[0].scalar<ResourceHandle>()();
-
- // NOTE(mrry): We cannot use the core `LookupResource()` or
- // `DeleteResource()` functions, because we have an
- // `IteratorContext*` and not an `OpKernelContext*`, so we
- // replicate the necessary functionality here.
- auto type_index = MakeTypeIndex<DatasetBase>();
- if (type_index.hash_code() != dataset_resource.hash_code()) {
- return errors::InvalidArgument(
- "`f` must return a Dataset resource.");
- }
- TF_RETURN_IF_ERROR(
- dataset()->captured_func_->resource_manager()->Lookup(
- dataset_resource.container(), dataset_resource.name(),
- &returned_dataset));
- core::ScopedUnref unref_dataset(returned_dataset);
-
- // Create an iterator for the dataset that was returned by
- // `f`. This transfers ownership of the dataset to the
- // iterator, so we can delete it from the resource manager.
- current_element_iterator_ = returned_dataset->MakeIterator(
- strings::StrCat(prefix(), "[", element_index_++, "]"));
- TF_RETURN_IF_ERROR(
- dataset()
- ->captured_func_->resource_manager()
- ->Delete<DatasetBase>(dataset_resource.container(),
- dataset_resource.name()));
+ TF_RETURN_IF_ERROR(dataset::MakeIteratorFromInputElement(
+ ctx, args, element_index_++, dataset()->captured_func_.get(),
+ prefix(), &current_element_iterator_));
} while (true);
}
diff --git a/tensorflow/core/kernels/interleave_dataset_op.cc b/tensorflow/core/kernels/interleave_dataset_op.cc
index dce4f88101..7b148b74c9 100644
--- a/tensorflow/core/kernels/interleave_dataset_op.cc
+++ b/tensorflow/core/kernels/interleave_dataset_op.cc
@@ -21,6 +21,7 @@ limitations under the License.
#include "tensorflow/core/lib/random/random.h"
#include "tensorflow/core/kernels/captured_function.h"
+#include "tensorflow/core/kernels/dataset_utils.h"
namespace tensorflow {
@@ -168,8 +169,9 @@ class InterleaveDatasetOp : public OpKernel {
TF_RETURN_IF_ERROR(
input_impl_->GetNext(ctx, &args, &end_of_input_));
if (!end_of_input_) {
- TF_RETURN_IF_ERROR(MakeIteratorFromInputElement(
- ctx, args, &current_elements_[cycle_index_]));
+ TF_RETURN_IF_ERROR(dataset::MakeIteratorFromInputElement(
+ ctx, args, cycle_index_, dataset()->captured_func_.get(),
+ prefix(), &current_elements_[cycle_index_]));
++num_open_;
}
} else {
@@ -182,62 +184,6 @@ class InterleaveDatasetOp : public OpKernel {
}
private:
- Status MakeIteratorFromInputElement(
- IteratorContext* ctx, const std::vector<Tensor>& input_element,
- std::unique_ptr<IteratorBase>* out_iterator)
- EXCLUSIVE_LOCKS_REQUIRED(mu_) {
- FunctionLibraryRuntime::Options opts;
- opts.runner = ctx->runner();
- opts.step_id = CapturedFunction::generate_step_id();
- ScopedStepContainer step_container(
- opts.step_id, [this, ctx](const string& name) {
- dataset()
- ->captured_func_->resource_manager()
- ->Cleanup(name)
- .IgnoreError();
- });
- opts.step_container = &step_container;
- std::vector<Tensor> return_values;
- TF_RETURN_IF_ERROR(dataset()->captured_func_->Run(opts, input_element,
- &return_values));
-
- if (!(return_values.size() == 1 &&
- return_values[0].dtype() == DT_RESOURCE &&
- TensorShapeUtils::IsScalar(return_values[0].shape()))) {
- return errors::InvalidArgument(
- "`f` must return a single scalar of dtype DT_RESOURCE.");
- }
-
- // Retrieve the dataset that was created in `f`.
- DatasetBase* returned_dataset;
- const ResourceHandle& dataset_resource =
- return_values[0].scalar<ResourceHandle>()();
-
- // NOTE(mrry): We cannot use the core `LookupResource()` or
- // `DeleteResource()` functions, because we have an
- // `IteratorContext*` and not an `OpKernelContext*`, so we
- // replicate the necessary functionality here.
- auto type_index = MakeTypeIndex<DatasetBase>();
- if (type_index.hash_code() != dataset_resource.hash_code()) {
- return errors::InvalidArgument("`f` must return a Dataset resource.");
- }
- TF_RETURN_IF_ERROR(
- dataset()->captured_func_->resource_manager()->Lookup(
- dataset_resource.container(), dataset_resource.name(),
- &returned_dataset));
- core::ScopedUnref unref_dataset(returned_dataset);
-
- // Create an iterator for the dataset that was returned by
- // `f`. This transfers ownership of the dataset to the
- // iterator, so we can delete it from the resource manager.
- *out_iterator = returned_dataset->MakeIterator(
- strings::StrCat(prefix(), "[", cycle_index_, "]"));
- TF_RETURN_IF_ERROR(
- dataset()->captured_func_->resource_manager()->Delete<DatasetBase>(
- dataset_resource.container(), dataset_resource.name()));
- return Status::OK();
- }
-
mutex mu_;
const std::unique_ptr<IteratorBase> input_impl_ GUARDED_BY(mu_);
std::vector<std::unique_ptr<IteratorBase>> current_elements_
diff --git a/tensorflow/core/kernels/mkl_conv_grad_input_ops.cc b/tensorflow/core/kernels/mkl_conv_grad_input_ops.cc
index 50700c8bc8..00884d0981 100644
--- a/tensorflow/core/kernels/mkl_conv_grad_input_ops.cc
+++ b/tensorflow/core/kernels/mkl_conv_grad_input_ops.cc
@@ -98,11 +98,11 @@ class MklConv2DCustomBackpropInputOp : public OpKernel {
"Conv2DCustomBackpropInput: size must be 4-dim"));
const int64* filter_sizes =
- (const int64*) mkl_context.filter_shape.GetSizes();
+ (const int64*)mkl_context.filter_shape.GetSizes();
const int64 filter_dims = mkl_context.filter_shape.GetDimension();
- OP_REQUIRES_OK(context, TensorShapeUtils::MakeShape(filter_sizes,
- filter_dims, &filter_shape));
+ OP_REQUIRES_OK(context, TensorShapeUtils::MakeShape(
+ filter_sizes, filter_dims, &filter_shape));
} else {
filter_shape = filter.shape();
}
diff --git a/tensorflow/core/kernels/mkl_conv_ops.cc b/tensorflow/core/kernels/mkl_conv_ops.cc
index b50a6343ba..7099aa1307 100644
--- a/tensorflow/core/kernels/mkl_conv_ops.cc
+++ b/tensorflow/core/kernels/mkl_conv_ops.cc
@@ -270,22 +270,22 @@ class MklConv2DOp : public OpKernel {
MklShape mkl_filter_output_mkl_shape;
mkl_filter_output_mkl_shape.SetMklTensor(true);
mkl_filter_output_mkl_shape.SetMklLayout(mkl_context.prim_fwd,
- dnnResourceFilter);
+ dnnResourceFilter);
size_t filter_sizes[4] = {filter.dim_size(0), filter.dim_size(1),
- filter.dim_size(2), filter.dim_size(3)};
+ filter.dim_size(2), filter.dim_size(3)};
mkl_filter_output_mkl_shape.SetTfLayout(filter.dims(), filter_sizes,
- mkl_context.filter_strides);
+ mkl_context.filter_strides);
mkl_filter_output_mkl_shape.SetTfDimOrder(mkl_context.filter_dims,
- data_format_);
+ data_format_);
mkl_filter_output_tf_shape.AddDim(
- dnnLayoutGetMemorySize_F32(
- static_cast<dnnLayout_t>(
- mkl_filter_output_mkl_shape.GetMklLayout())) /
- sizeof(T));
+ dnnLayoutGetMemorySize_F32(static_cast<dnnLayout_t>(
+ mkl_filter_output_mkl_shape.GetMklLayout())) /
+ sizeof(T));
AllocateOutputSetMklShape(context, 1, &mkl_context.output_filter,
- mkl_filter_output_tf_shape, mkl_filter_output_mkl_shape);
+ mkl_filter_output_tf_shape,
+ mkl_filter_output_mkl_shape);
mkl_context.conv_res[dnnResourceDst] =
static_cast<void*>(output->flat<T>().data());
@@ -406,8 +406,13 @@ class MklConv2DOp : public OpKernel {
CHECK_EQ(dnnConversionCreate_F32(&mkl_prim_convert_filter, lt_filter,
mkl_lt_internal_filter),
E_SUCCESS);
-      mkl_buf_convert_filter = const_cast<void*>(static_cast<const void*>(
-          output_filter->flat<T>().data()));
+      mkl_buf_convert_filter = const_cast<void*>(
+          static_cast<const void*>(output_filter->flat<T>().data()));
CHECK_EQ(
dnnConversionExecute_F32(mkl_prim_convert_filter, mkl_buf_filter,
mkl_buf_convert_filter),
diff --git a/tensorflow/core/kernels/mkl_reshape_op.cc b/tensorflow/core/kernels/mkl_reshape_op.cc
index 03c3fb09a1..5e98582475 100644
--- a/tensorflow/core/kernels/mkl_reshape_op.cc
+++ b/tensorflow/core/kernels/mkl_reshape_op.cc
@@ -128,6 +128,7 @@ class MklReshapeOp : public OpKernel {
CopyTfTensorInToOutWithShape(context, 0, 0, shape);
}
}
+
private:
template <typename Tshape>
Status ValidateSizes(const Tensor& sizes, int64* product, int* unknown_index,
diff --git a/tensorflow/core/kernels/parse_tensor_op.cc b/tensorflow/core/kernels/parse_tensor_op.cc
index dd645262d2..ab91a6ef67 100644
--- a/tensorflow/core/kernels/parse_tensor_op.cc
+++ b/tensorflow/core/kernels/parse_tensor_op.cc
@@ -16,6 +16,7 @@ limitations under the License.
// See docs in ../ops/parsing_ops.cc.
#include "tensorflow/core/framework/op_kernel.h"
+#include "tensorflow/core/framework/register_types.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor.pb.h"
#include "tensorflow/core/framework/tensor_shape.h"
@@ -66,7 +67,6 @@ class ParseTensorOp : public OpKernel {
REGISTER_KERNEL_BUILDER(Name("ParseTensor").Device(DEVICE_CPU), ParseTensorOp);
-
template <typename T>
class SerializeTensorOp : public OpKernel {
public:
@@ -81,14 +81,14 @@ class SerializeTensorOp : public OpKernel {
tensor.AsProtoTensorContent(&proto);
}
Tensor* proto_string = nullptr;
- OP_REQUIRES_OK(
- context, context->allocate_output(0, TensorShape({}), &proto_string));
+ OP_REQUIRES_OK(context,
+ context->allocate_output(0, TensorShape({}), &proto_string));
CHECK(proto.SerializeToString(&proto_string->scalar<string>()()));
}
};
-#define REGISTER(T) \
- REGISTER_KERNEL_BUILDER( \
+#define REGISTER(T) \
+ REGISTER_KERNEL_BUILDER( \
Name("SerializeTensor").Device(DEVICE_CPU).TypeConstraint<T>("T"), \
SerializeTensorOp<T>);
TF_CALL_ALL_TYPES(REGISTER)
diff --git a/tensorflow/core/kernels/parse_tensor_test.cc b/tensorflow/core/kernels/parse_tensor_test.cc
index f6f60fee71..4a5fc07935 100644
--- a/tensorflow/core/kernels/parse_tensor_test.cc
+++ b/tensorflow/core/kernels/parse_tensor_test.cc
@@ -14,8 +14,8 @@ limitations under the License.
==============================================================================*/
#include <memory>
-#include <vector>
#include <string>
+#include <vector>
#include "tensorflow/core/common_runtime/device.h"
#include "tensorflow/core/common_runtime/device_factory.h"
@@ -33,27 +33,23 @@ namespace {
class SerializeTensorOpTest : public OpsTestBase {
protected:
template <typename T>
- void MakeOp(const TensorShape& input_shape,
- std::function<T(int)> functor) {
- TF_ASSERT_OK(
- NodeDefBuilder("myop", "SerializeTensor")
- .Input(FakeInput(DataTypeToEnum<T>::value))
- .Finalize(node_def()));
+ void MakeOp(const TensorShape& input_shape, std::function<T(int)> functor) {
+ TF_ASSERT_OK(NodeDefBuilder("myop", "SerializeTensor")
+ .Input(FakeInput(DataTypeToEnum<T>::value))
+ .Finalize(node_def()));
TF_ASSERT_OK(InitOp());
AddInput<T>(input_shape, functor);
}
void ParseSerializedWithNodeDef(const NodeDef& parse_node_def,
- Tensor* serialized,
- Tensor* parse_output) {
+ Tensor* serialized, Tensor* parse_output) {
std::unique_ptr<Device> device(
DeviceFactory::NewDevice("CPU", {}, "/job:a/replica:0/task:0"));
gtl::InlinedVector<TensorValue, 4> inputs;
inputs.push_back({nullptr, serialized});
Status status;
- std::unique_ptr<OpKernel> op(
- CreateOpKernel(DEVICE_CPU, device.get(),
- cpu_allocator(), parse_node_def,
- TF_GRAPH_DEF_VERSION, &status));
+ std::unique_ptr<OpKernel> op(CreateOpKernel(DEVICE_CPU, device.get(),
+ cpu_allocator(), parse_node_def,
+ TF_GRAPH_DEF_VERSION, &status));
TF_EXPECT_OK(status);
OpKernelContext::Params params;
params.device = device.get();
@@ -80,8 +76,8 @@ class SerializeTensorOpTest : public OpsTestBase {
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_half) {
MakeOp<Eigen::half>(TensorShape({10}), [](int x) -> Eigen::half {
- return static_cast<Eigen::half>(x / 10.);
- });
+ return static_cast<Eigen::half>(x / 10.);
+ });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<Eigen::half>(GetOutput(0), &parse_output);
@@ -89,9 +85,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_half) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_float) {
- MakeOp<float>(TensorShape({1, 10}), [](int x) -> float {
- return static_cast<float>(x / 10.);
- });
+ MakeOp<float>(TensorShape({1, 10}),
+ [](int x) -> float { return static_cast<float>(x / 10.); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<float>(GetOutput(0), &parse_output);
@@ -99,9 +94,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_float) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_double) {
- MakeOp<double>(TensorShape({5, 5}), [](int x) -> double {
- return static_cast<double>(x / 10.);
- });
+ MakeOp<double>(TensorShape({5, 5}),
+ [](int x) -> double { return static_cast<double>(x / 10.); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<double>(GetOutput(0), &parse_output);
@@ -109,9 +103,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_double) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int64) {
- MakeOp<int64>(TensorShape({2, 3, 4}), [](int x) -> int64 {
- return static_cast<int64>(x - 10);
- });
+ MakeOp<int64>(TensorShape({2, 3, 4}),
+ [](int x) -> int64 { return static_cast<int64>(x - 10); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<int64>(GetOutput(0), &parse_output);
@@ -119,9 +112,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int64) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int32) {
- MakeOp<int32>(TensorShape({4, 2}), [](int x) -> int32 {
- return static_cast<int32>(x + 7);
- });
+ MakeOp<int32>(TensorShape({4, 2}),
+ [](int x) -> int32 { return static_cast<int32>(x + 7); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<int32>(GetOutput(0), &parse_output);
@@ -129,9 +121,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int32) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int16) {
- MakeOp<int16>(TensorShape({8}), [](int x) -> int16 {
- return static_cast<int16>(x + 18);
- });
+ MakeOp<int16>(TensorShape({8}),
+ [](int x) -> int16 { return static_cast<int16>(x + 18); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<int16>(GetOutput(0), &parse_output);
@@ -139,9 +130,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int16) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int8) {
- MakeOp<int8>(TensorShape({2}), [](int x) -> int8 {
- return static_cast<int8>(x + 8);
- });
+ MakeOp<int8>(TensorShape({2}),
+ [](int x) -> int8 { return static_cast<int8>(x + 8); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<int8>(GetOutput(0), &parse_output);
@@ -149,9 +139,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_int8) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_uint16) {
- MakeOp<uint16>(TensorShape({1, 3}), [](int x) -> uint16 {
- return static_cast<uint16>(x + 2);
- });
+ MakeOp<uint16>(TensorShape({1, 3}),
+ [](int x) -> uint16 { return static_cast<uint16>(x + 2); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<uint16>(GetOutput(0), &parse_output);
@@ -159,9 +148,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_uint16) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_uint8) {
- MakeOp<uint8>(TensorShape({2, 1, 1}), [](int x) -> uint8 {
- return static_cast<uint8>(x + 1);
- });
+ MakeOp<uint8>(TensorShape({2, 1, 1}),
+ [](int x) -> uint8 { return static_cast<uint8>(x + 1); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<uint8>(GetOutput(0), &parse_output);
@@ -170,9 +158,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_uint8) {
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_complex64) {
MakeOp<complex64>(TensorShape({}), [](int x) -> complex64 {
- return complex64{ static_cast<float>(x / 8.),
- static_cast<float>(x / 2.) };
- });
+ return complex64{static_cast<float>(x / 8.), static_cast<float>(x / 2.)};
+ });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<complex64>(GetOutput(0), &parse_output);
@@ -181,8 +168,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_complex64) {
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_complex128) {
MakeOp<complex128>(TensorShape({3}), [](int x) -> complex128 {
- return complex128{ x / 3., x / 2. };
- });
+ return complex128{x / 3., x / 2.};
+ });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<complex128>(GetOutput(0), &parse_output);
@@ -190,9 +177,8 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_complex128) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_bool) {
- MakeOp<bool>(TensorShape({1}), [](int x) -> bool {
- return static_cast<bool>(x % 2);
- });
+ MakeOp<bool>(TensorShape({1}),
+ [](int x) -> bool { return static_cast<bool>(x % 2); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
ParseSerializedOutput<bool>(GetOutput(0), &parse_output);
@@ -200,13 +186,12 @@ TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_bool) {
}
TEST_F(SerializeTensorOpTest, SerializeTensorOpTest_string) {
- MakeOp<std::string>(TensorShape({10}), [](int x) -> std::string {
- return std::to_string(x / 10.);
- });
+ MakeOp<string>(TensorShape({10}),
+ [](int x) -> string { return std::to_string(x / 10.); });
TF_ASSERT_OK(RunOpKernel());
Tensor parse_output;
- ParseSerializedOutput<std::string>(GetOutput(0), &parse_output);
- test::ExpectTensorEqual<std::string>(parse_output, GetInput(0));
+ ParseSerializedOutput<string>(GetOutput(0), &parse_output);
+ test::ExpectTensorEqual<string>(parse_output, GetInput(0));
}
} // namespace
diff --git a/tensorflow/core/kernels/segment_reduction_ops.cc b/tensorflow/core/kernels/segment_reduction_ops.cc
index 8f7eff113c..5624d5cd1b 100644
--- a/tensorflow/core/kernels/segment_reduction_ops.cc
+++ b/tensorflow/core/kernels/segment_reduction_ops.cc
@@ -35,7 +35,6 @@ limitations under the License.
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/util/util.h"
-
#if GOOGLE_CUDA
#include "tensorflow/core/common_runtime/gpu/gpu_event_mgr.h"
#include "tensorflow/core/kernels/cuda_solvers.h"
@@ -249,10 +248,11 @@ class SegmentSumGPUOp : public AsyncOpKernel {
auto stream = context->op_device_context()->stream();
OP_REQUIRES_ASYNC(
- context, stream
- ->ThenMemcpy(output_rows_host.mutable_data(),
- output_rows_device, sizeof(Index))
- .ok(),
+ context,
+ stream
+ ->ThenMemcpy(output_rows_host.mutable_data(), output_rows_device,
+ sizeof(Index))
+ .ok(),
errors::Internal(
"SegmentSumGPUOp: failed to copy output_rows from device"),
done);
diff --git a/tensorflow/core/kernels/segment_reduction_ops_gpu.cu.cc b/tensorflow/core/kernels/segment_reduction_ops_gpu.cu.cc
index 26fcafee34..159fada621 100644
--- a/tensorflow/core/kernels/segment_reduction_ops_gpu.cu.cc
+++ b/tensorflow/core/kernels/segment_reduction_ops_gpu.cu.cc
@@ -186,10 +186,10 @@ void SegmentSumFunctor<T, Index>::operator()(
input_inner_dim_size * input_outer_dim_num_stripe;
config = GetCudaLaunchConfig(total_stripe_count, d);
- SortedSegmentSumCustomKernel<T, Index, OuterDimTileSize><<<
- config.block_count, config.thread_per_block, 0, d.stream()>>>(
- input_outer_dim_size, input_inner_dim_size, output_rows,
- segment_ids.data(), data, output.data(), total_stripe_count);
+ SortedSegmentSumCustomKernel<T, Index, OuterDimTileSize>
+ <<<config.block_count, config.thread_per_block, 0, d.stream()>>>(
+ input_outer_dim_size, input_inner_dim_size, output_rows,
+ segment_ids.data(), data, output.data(), total_stripe_count);
};
// UnsortedSegmentSumFunctor implementation for GPUDevice.
diff --git a/tensorflow/core/kernels/sloppy_interleave_dataset_op.cc b/tensorflow/core/kernels/sloppy_interleave_dataset_op.cc
new file mode 100644
index 0000000000..d95f51f0f2
--- /dev/null
+++ b/tensorflow/core/kernels/sloppy_interleave_dataset_op.cc
@@ -0,0 +1,370 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+#include "tensorflow/core/kernels/dataset.h"
+
+#include "tensorflow/core/common_runtime/function.h"
+#include "tensorflow/core/framework/partial_tensor_shape.h"
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/kernels/dataset_utils.h"
+#include "tensorflow/core/lib/gtl/cleanup.h"
+#include "tensorflow/core/lib/random/random.h"
+
+#include "tensorflow/core/kernels/captured_function.h"
+
+namespace tensorflow {
+
+namespace {
+
+// See documentation in ../ops/dataset_ops.cc for a high-level
+// description of the following op.
+
+class SloppyInterleaveDatasetOp : public UnaryDatasetOpKernel {
+ public:
+ explicit SloppyInterleaveDatasetOp(OpKernelConstruction* ctx)
+ : UnaryDatasetOpKernel(ctx),
+ graph_def_version_(ctx->graph_def_version()) {
+ OP_REQUIRES_OK(ctx, ctx->GetAttr("f", &func_));
+ OP_REQUIRES_OK(ctx, ctx->GetAttr("output_types", &output_types_));
+ OP_REQUIRES_OK(ctx, ctx->GetAttr("output_shapes", &output_shapes_));
+ }
+
+ void MakeDataset(OpKernelContext* ctx, DatasetBase* input,
+ DatasetBase** output) override {
+ OpInputList inputs;
+ OP_REQUIRES_OK(ctx, ctx->input_list("other_arguments", &inputs));
+ std::vector<Tensor> other_arguments;
+ other_arguments.reserve(inputs.size());
+ for (const Tensor& t : inputs) {
+ other_arguments.push_back(t);
+ }
+
+ int64 cycle_length;
+ OP_REQUIRES_OK(ctx,
+ ParseScalarArgument(ctx, "cycle_length", &cycle_length));
+ OP_REQUIRES(ctx, cycle_length > 0,
+ errors::InvalidArgument("`cycle_length` must be > 0"));
+
+ int64 block_length;
+ OP_REQUIRES_OK(ctx,
+ ParseScalarArgument(ctx, "block_length", &block_length));
+ OP_REQUIRES(ctx, block_length > 0,
+ errors::InvalidArgument("`block_length` must be > 0"));
+
+ std::unique_ptr<CapturedFunction> captured_func;
+ OP_REQUIRES_OK(ctx, CapturedFunction::Create(ctx, func_, graph_def_version_,
+ std::move(other_arguments),
+ &captured_func));
+
+ *output = new Dataset(input, std::move(captured_func), cycle_length,
+ block_length, output_types_, output_shapes_);
+ }
+
+ private:
+ class Dataset : public DatasetBase {
+ public:
+ Dataset(const DatasetBase* input,
+ std::unique_ptr<CapturedFunction> captured_func, int64 cycle_length,
+ int64 block_length, const DataTypeVector& output_types,
+ const std::vector<PartialTensorShape>& output_shapes)
+ : input_(input),
+ captured_func_(std::move(captured_func)),
+ cycle_length_(cycle_length),
+ block_length_(block_length),
+ output_types_(output_types),
+ output_shapes_(output_shapes) {
+ input_->Ref();
+ }
+
+ ~Dataset() override { input_->Unref(); }
+
+ std::unique_ptr<IteratorBase> MakeIterator(
+ const string& prefix) const override {
+ return std::unique_ptr<IteratorBase>(
+ new Iterator({this, strings::StrCat(prefix, "::SloppyInterleave")}));
+ }
+
+ const DataTypeVector& output_dtypes() const override {
+ return output_types_;
+ }
+ const std::vector<PartialTensorShape>& output_shapes() const override {
+ return output_shapes_;
+ }
+
+ string DebugString() override {
+ return "SloppyInterleaveDatasetOp::Dataset";
+ }
+
+ private:
+ class Iterator : public DatasetIterator<Dataset> {
+ public:
+ explicit Iterator(const Params& params)
+ : DatasetIterator<Dataset>(params),
+ input_impl_(params.dataset->input_->MakeIterator(params.prefix)),
+ output_elements_(params.dataset->cycle_length_) {}
+
+ ~Iterator() override {
+ mutex_lock l(mu_);
+ cancelled_ = true;
+ // Notify all workers in case they are blocked.
+ for (int64 i = 0; i < dataset()->cycle_length_; ++i) {
+ output_elements_[i].cond_var.notify_all();
+ }
+ }
+
+ // This implementation matches the deterministic interleave unless it
+ // would block waiting for an element, at which point it skips along to
+ // the next available value.
+ Status GetNextInternal(IteratorContext* ctx,
+ std::vector<Tensor>* out_tensors,
+ bool* end_of_sequence) override {
+ mutex_lock l(mu_);
+ TF_RETURN_IF_ERROR(EnsureWorkerThreadsStarted(ctx));
+ // Search for available items, blocking if necessary.
+ while (!cancelled_) {
+ for (size_t i = 0; i < dataset()->cycle_length_; ++i) {
+ size_t index = (next_index_ + i) % dataset()->cycle_length_;
+ if (output_elements_[index].is_produced) {
+ next_index_ = index;
+ if (i == 0) {
+ block_count_++;
+ if (block_count_ == dataset()->block_length_) {
+ next_index_ = (index + 1) % dataset()->cycle_length_;
+ block_count_ = 0;
+ }
+ } else {
+ block_count_ = 0;
+ }
+ // If we encounter an EOF, advance to the next iterator.
+ if (output_elements_[index].end_of_sequence) {
+ output_elements_[index].is_produced = false;
+ output_elements_[index].cond_var.notify_one();
+ next_index_ = (index + 1) % dataset()->cycle_length_;
+ block_count_ = 0;
+ i = -1; // Restart the inner loop
+ continue;
+ }
+ *end_of_sequence = false;
+ if (output_elements_[index].output_status.ok()) {
+ output_elements_[index].output_value.swap(*out_tensors);
+ }
+ output_elements_[index].is_produced = false;
+ output_elements_[index].cond_var.notify_one();
+ return output_elements_[index].output_status;
+ }
+ }
+
+ if (num_active_threads_ == 0) {
+ // No potential for future values.
+ //
+ // Note: this condition check must occur after checking the output
+ // buffer, as it's possible for there to be values in the output
+ // buffer, even if the number of live threads is zero.
+ *end_of_sequence = true;
+ return Status::OK();
+ }
+ // No values available; wait until woken up.
+ cond_var_.wait(l);
+ }
+ return errors::Cancelled(
+ "SloppyInterleaveDatasetOp::Dataset::Iterator::GetNext");
+ }
+
+ private:
+ // Internal structure to manage thread coordination. All values are
+ // guarded by the enclosing Iterator's mu_.
+ struct OutputBufferElement {
+ // The producer must set `is_produced` to `true` after
+ // `output_status` or `output_value` has been written.
+ bool is_produced = false;
+ // The producer sets `output_status` if either getting the input element
+ // or applying the function to it fails.
+ Status output_status;
+ // Reached end of sequence for the underlying iterator.
+ bool end_of_sequence = false;
+ // The output data element.
+ std::vector<Tensor> output_value;
+ // The producer thread waits on this condition variable after having
+ // produced an element. The reader thread notifies this condition
+ // variable after reading the value.
+ condition_variable cond_var;
+ };
+
+ Status EnsureWorkerThreadsStarted(IteratorContext* ctx)
+ EXCLUSIVE_LOCKS_REQUIRED(mu_) {
+ if (worker_threads_.empty()) {
+ for (int64 i = 0; i < dataset()->cycle_length_; ++i) {
+ // Serialize the creation of the workers and their corresponding
+ // input elements to ensure we match the standard interleave when
+ // the underlying iterators induce no delay.
+ std::vector<Tensor> args;
+ TF_RETURN_IF_ERROR(
+ input_impl_->GetNext(ctx, &args, &end_of_input_));
+ if (end_of_input_) {
+ LOG(WARNING) << "Input iterator exhausted after " << i
+ << " elements; cannot start all "
+ << dataset()->cycle_length_ << " worker threads.";
+ return Status::OK();
+ }
+ std::unique_ptr<IteratorBase> itr;
+ TF_RETURN_IF_ERROR(dataset::MakeIteratorFromInputElement(
+ ctx, args, i, dataset()->captured_func_.get(), prefix(), &itr));
+ worker_threads_.emplace_back(
+ std::unique_ptr<Thread>(ctx->env()->StartThread(
+ {}, "worker_thread",
+ std::bind(&Iterator::WorkerThread, this,
+ new IteratorContext(*ctx), i, itr.release()))));
+ num_active_threads_ = i + 1;
+ }
+ }
+ return Status::OK();
+ }
+
+ void BlockAndUpdateOutputBuffer(mutex_lock* l, const int64 thread_index,
+ const Status& status,
+ bool end_of_sequence,
+ std::vector<Tensor>* out_tensors)
+ EXCLUSIVE_LOCKS_REQUIRED(mu_) {
+ // We have produced an element; push it into the output buffer
+ // when space is available.
+ while (!cancelled_ && output_elements_[thread_index].is_produced) {
+ output_elements_[thread_index].cond_var.wait(*l);
+ }
+ if (cancelled_) {
+ return;
+ }
+ output_elements_[thread_index].is_produced = true;
+ output_elements_[thread_index].output_status = status;
+ output_elements_[thread_index].end_of_sequence = end_of_sequence;
+ if (status.ok()) {
+ output_elements_[thread_index].output_value.swap(*out_tensors);
+ } else {
+ output_elements_[thread_index].output_value.clear();
+ }
+ cond_var_.notify_one();
+ }
+
+ // Races to produce elements into the output queue buffers.
+ void WorkerThread(IteratorContext* ctx_ptr, const int64 thread_index,
+ IteratorBase* out_iterator_ptr) {
+ // std::function arguments are copy-constructible, so we pass raw
+ // pointers, and then immediately wrap them to ensure correct ownership.
+ std::unique_ptr<IteratorContext> ctx(ctx_ptr);
+ std::unique_ptr<IteratorBase> out_iterator(out_iterator_ptr);
+ auto cleanup = gtl::MakeCleanup([this, thread_index] {
+ mutex_lock l(mu_);
+ num_active_threads_--;
+ cond_var_.notify_all();
+ });
+ while (true) {
+ // Attempt to produce an element.
+ bool end_of_out_itr_input = false;
+ std::vector<Tensor> out_tensors;
+ Status element_status = out_iterator->GetNext(ctx.get(), &out_tensors,
+ &end_of_out_itr_input);
+ // Handle output.
+ {
+ mutex_lock l(mu_);
+ BlockAndUpdateOutputBuffer(&l, thread_index, element_status,
+ end_of_out_itr_input, &out_tensors);
+ if (end_of_out_itr_input) {
+ // We have exhausted our current iterator; get a new iterator;
+ // loop to handle errors.
+ while (!cancelled_) {
+ if (end_of_input_) {
+ // No more iterator inputs; we're done!
+ return;
+ }
+ std::vector<Tensor> args;
+ // BlockAndUpdateOutputBuffer() sequences calls to
+ // input_impl_->GetNext when the out_iterator doesn't cause
+ // sloppiness.
+ Status input_status =
+ input_impl_->GetNext(ctx.get(), &args, &end_of_input_);
+ if (end_of_input_) {
+ // No more elements to produce, stop the worker thread.
+ return;
+ }
+ if (input_status.ok()) {
+ input_status = dataset::MakeIteratorFromInputElement(
+ ctx.get(), args, thread_index,
+ dataset()->captured_func_.get(), prefix(), &out_iterator);
+ }
+ if (input_status.ok()) {
+ // Successfully have a new out_iterator; restart the outer
+ // loop to produce an element.
+ break;
+ }
+
+ // We encountered an error; push the error to the output buffer.
+ BlockAndUpdateOutputBuffer(&l, thread_index, input_status,
+ /* end_of_sequence = */ false,
+ &out_tensors);
+ }
+ }
+
+ // Check if we should exit.
+ if (cancelled_) {
+ return;
+ }
+ }
+ }
+ }
+
+ // Mutex & condition variable to guard mutable iterator internals and
+ // coordinate among worker threads and client thread[s].
+ mutex mu_;
+ condition_variable cond_var_;
+ // The iterator producing elements that are converted to datasets by
+ // dataset()->captured_func_ and then interleaved together.
+ const std::unique_ptr<IteratorBase> input_impl_ GUARDED_BY(mu_);
+ // Whether the input_impl_ can produce future elements.
+ bool end_of_input_ GUARDED_BY(mu_) = false;
+ // The buffer of elements to be produced. Each worker thread operates
+ // on a single OutputBufferElement.
+ std::vector<OutputBufferElement> output_elements_ GUARDED_BY(mu_);
+ // The index into output_elements_ for next element to produce.
+ size_t next_index_ GUARDED_BY(mu_) = 0;
+ // The number of items produced so far within the block
+ size_t block_count_ GUARDED_BY(mu_) = 0;
+ // Number of active threads.
+ size_t num_active_threads_ GUARDED_BY(mu_) = 0;
+ // Flag to instruct the worker threads to exit.
+ bool cancelled_ GUARDED_BY(mu_) = false;
+ // Pointers to the worker threads. This must be last to ensure the
+ // threads have exited before any other members are deallocated.
+ // TODO(b/65178177): Avoid allocating additional threads.
+ std::vector<std::unique_ptr<Thread>> worker_threads_ GUARDED_BY(mu_);
+ };
+
+ const DatasetBase* const input_;
+ const std::unique_ptr<CapturedFunction> captured_func_;
+ const int64 cycle_length_;
+ const int64 block_length_;
+ const DataTypeVector output_types_;
+ const std::vector<PartialTensorShape> output_shapes_;
+ };
+
+ const int graph_def_version_;
+ DataTypeVector output_types_;
+ std::vector<PartialTensorShape> output_shapes_;
+ const NameAttrList* func_;
+};
+
+REGISTER_KERNEL_BUILDER(Name("SloppyInterleaveDataset").Device(DEVICE_CPU),
+ SloppyInterleaveDatasetOp);
+
+} // namespace
+
+} // namespace tensorflow
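The slot-scanning logic in `GetNextInternal` above (round-robin across `cycle_length_` output slots, staying on one slot for up to `block_length_` items, skipping slots whose worker has not yet produced a value) can be modeled in isolation. The sketch below is a simplified stand-in, not the kernel itself: `Slot`, `PickSlot`, and its parameters are hypothetical names, and the real iterator waits on `cond_var_` instead of returning -1.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for one OutputBufferElement slot.
struct Slot {
  bool is_produced = false;
};

// Mirrors the slot-selection arithmetic of GetNextInternal: follow the
// deterministic interleave order (up to `block_length` items per slot)
// but skip slots whose worker has not produced a value. Consumes the
// chosen slot and returns its index, or -1 when nothing is available.
int PickSlot(std::vector<Slot>& slots, size_t* next_index,
             size_t* block_count, size_t block_length) {
  const size_t cycle_length = slots.size();
  for (size_t i = 0; i < cycle_length; ++i) {
    const size_t index = (*next_index + i) % cycle_length;
    if (!slots[index].is_produced) continue;
    *next_index = index;
    if (i == 0) {
      // Still on the same slot: count the item against the current block.
      if (++*block_count == block_length) {
        *next_index = (index + 1) % cycle_length;
        *block_count = 0;
      }
    } else {
      // We skipped ahead, so a new block starts at this slot.
      *block_count = 0;
    }
    slots[index].is_produced = false;  // consume the element
    return static_cast<int>(index);
  }
  return -1;
}
```

When every slot is always produced (no delay), this degrades to the standard round-robin interleave, which is the property the serialized worker startup in `EnsureWorkerThreadsStarted` is meant to preserve.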
diff --git a/tensorflow/core/kernels/summary_kernels.cc b/tensorflow/core/kernels/summary_kernels.cc
index d0eca0f1e7..cfa707de71 100644
--- a/tensorflow/core/kernels/summary_kernels.cc
+++ b/tensorflow/core/kernels/summary_kernels.cc
@@ -40,12 +40,7 @@ class CreateSummaryFileWriterOp : public OpKernel {
SummaryWriterInterface* s;
OP_REQUIRES_OK(ctx, CreateSummaryWriter(max_queue, flush_millis, logdir,
filename_suffix, ctx->env(), &s));
- Status status = CreateResource(ctx, HandleFromInput(ctx, 0), s);
- if (!status.ok()) {
- s->Unref();
- ctx->SetStatus(status);
- return;
- }
+ OP_REQUIRES_OK(ctx, CreateResource(ctx, HandleFromInput(ctx, 0), s));
}
};
REGISTER_KERNEL_BUILDER(Name("CreateSummaryFileWriter").Device(DEVICE_CPU),
diff --git a/tensorflow/core/lib/io/buffered_inputstream.cc b/tensorflow/core/lib/io/buffered_inputstream.cc
index 6f72da4713..b247e9c575 100644
--- a/tensorflow/core/lib/io/buffered_inputstream.cc
+++ b/tensorflow/core/lib/io/buffered_inputstream.cc
@@ -41,9 +41,18 @@ BufferedInputStream::~BufferedInputStream() {
}
Status BufferedInputStream::FillBuffer() {
+ if (!file_status_.ok()) {
+ pos_ = 0;
+ limit_ = 0;
+ return file_status_;
+ }
Status s = input_stream_->ReadNBytes(size_, &buf_);
pos_ = 0;
limit_ = buf_.size();
+ if (buf_.empty()) {
+ DCHECK(!s.ok());
+ file_status_ = s;
+ }
return s;
}
@@ -82,6 +91,9 @@ Status BufferedInputStream::ReadNBytes(int64 bytes_to_read, string* result) {
bytes_to_read);
}
result->clear();
+ if (!file_status_.ok() && bytes_to_read > 0) {
+ return file_status_;
+ }
result->reserve(bytes_to_read);
Status s;
@@ -91,6 +103,8 @@ Status BufferedInputStream::ReadNBytes(int64 bytes_to_read, string* result) {
s = FillBuffer();
// If we didn't read any bytes, we're at the end of the file; break out.
if (limit_ == 0) {
+ DCHECK(!s.ok());
+ file_status_ = s;
break;
}
}
@@ -124,6 +138,9 @@ Status BufferedInputStream::SkipNBytes(int64 bytes_to_skip) {
Status s = input_stream_->SkipNBytes(bytes_to_skip - (limit_ - pos_));
pos_ = 0;
limit_ = 0;
+ if (errors::IsOutOfRange(s)) {
+ file_status_ = s;
+ }
return s;
}
return Status::OK();
@@ -163,6 +180,7 @@ Status BufferedInputStream::ReadAll(string* result) {
}
if (errors::IsOutOfRange(status)) {
+ file_status_ = status;
return Status::OK();
}
return status;
@@ -172,6 +190,7 @@ Status BufferedInputStream::Reset() {
TF_RETURN_IF_ERROR(input_stream_->Reset());
pos_ = 0;
limit_ = 0;
+ file_status_ = Status::OK();
return Status::OK();
}
diff --git a/tensorflow/core/lib/io/buffered_inputstream.h b/tensorflow/core/lib/io/buffered_inputstream.h
index b37766005a..2b824f35f8 100644
--- a/tensorflow/core/lib/io/buffered_inputstream.h
+++ b/tensorflow/core/lib/io/buffered_inputstream.h
@@ -94,6 +94,9 @@ class BufferedInputStream : public InputStreamInterface {
size_t pos_ = 0; // current position in buf_.
size_t limit_ = 0; // just past the end of valid data in buf_.
bool owns_input_stream_ = false;
+ // When EOF is reached, file_status_ caches the terminal status so that
+ // subsequent reads can skip unnecessary buffer allocations.
+ Status file_status_ = Status::OK();
TF_DISALLOW_COPY_AND_ASSIGN(BufferedInputStream);
};
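The `file_status_` changes above amount to one pattern: once the underlying stream reports a terminal status, cache it and short-circuit later fills. A minimal model of that pattern follows; `FakeStream`, `BufferedReader`, and the two-value `Status` enum are simplified stand-ins for the TensorFlow types, not the real API.

```cpp
#include <cassert>
#include <string>

// Simplified stand-in for tensorflow::Status.
enum class Status { kOk, kOutOfRange };

// Models a stream that yields `chunks_left` chunks, then OUT_OF_RANGE.
struct FakeStream {
  int chunks_left;
  int read_calls = 0;
  Status ReadChunk(std::string* out) {
    ++read_calls;
    if (chunks_left == 0) {
      out->clear();
      return Status::kOutOfRange;
    }
    --chunks_left;
    *out = "data";
    return Status::kOk;
  }
};

// Mirrors the FillBuffer() pattern: remember the terminal status and
// fail fast on later fills instead of re-reading an exhausted stream.
struct BufferedReader {
  FakeStream* stream;
  Status file_status = Status::kOk;
  std::string buf;

  Status Fill() {
    if (file_status != Status::kOk) {
      buf.clear();
      return file_status;  // no extra read or buffer allocation
    }
    Status s = stream->ReadChunk(&buf);
    if (buf.empty()) file_status = s;  // cache the terminal status
    return s;
  }
};
```

`Reset()` in the real class correspondingly clears the cached status back to OK, so a rewound stream can be read again.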
diff --git a/tensorflow/core/lib/io/buffered_inputstream_test.cc b/tensorflow/core/lib/io/buffered_inputstream_test.cc
index 7265101e1b..49b2b1a861 100644
--- a/tensorflow/core/lib/io/buffered_inputstream_test.cc
+++ b/tensorflow/core/lib/io/buffered_inputstream_test.cc
@@ -19,6 +19,7 @@ limitations under the License.
#include "tensorflow/core/lib/io/random_inputstream.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/test.h"
+#include "tensorflow/core/platform/test_benchmark.h"
namespace tensorflow {
namespace io {
@@ -362,6 +363,45 @@ TEST(BufferedInputStream, ReadAll_Text) {
}
}
+void BM_BufferedReaderSmallReads(const int iters, const int buff_size,
+ const int file_size) {
+ testing::StopTiming();
+ Env* env = Env::Default();
+ string fname = testing::TmpDir() + "/buffered_inputstream_test";
+
+ const string file_elem = "0123456789";
+ std::unique_ptr<WritableFile> write_file;
+ TF_ASSERT_OK(env->NewWritableFile(fname, &write_file));
+ for (int i = 0; i < file_size; ++i) {
+ TF_ASSERT_OK(write_file->Append(file_elem));
+ }
+ TF_ASSERT_OK(write_file->Close());
+
+ std::unique_ptr<RandomAccessFile> file;
+ TF_ASSERT_OK(env->NewRandomAccessFile(fname, &file));
+
+ string result;
+ testing::StartTiming();
+
+ for (int itr = 0; itr < iters; ++itr) {
+ BufferedInputStream in(file.get(), buff_size);
+ for (int64 i = 0; i < 10 * file_size; ++i) {
+ TF_ASSERT_OK(in.ReadNBytes(1, &result))
+ << "i: " << i << " itr: " << itr << " buff_size: " << buff_size
+ << " file size: " << file_size;
+ }
+ }
+}
+BENCHMARK(BM_BufferedReaderSmallReads)
+ ->ArgPair(1, 5)
+ ->ArgPair(1, 1024)
+ ->ArgPair(10, 5)
+ ->ArgPair(10, 1024)
+ ->ArgPair(1024, 1024)
+ ->ArgPair(1024 * 1024, 1024)
+ ->ArgPair(1024 * 1024, 1024 * 1024)
+ ->ArgPair(256 * 1024 * 1024, 1024);
+
} // anonymous namespace
} // namespace io
} // namespace tensorflow
diff --git a/tensorflow/core/lib/io/zlib_inputstream.h b/tensorflow/core/lib/io/zlib_inputstream.h
index a8a4e7c83c..8faa7dcb8f 100644
--- a/tensorflow/core/lib/io/zlib_inputstream.h
+++ b/tensorflow/core/lib/io/zlib_inputstream.h
@@ -37,7 +37,7 @@ namespace io {
// by multiple threads
class ZlibInputStream : public InputStreamInterface {
public:
- // Create a ZlibInputBuffer for `input_stream` with a buffer of size
+ // Create a ZlibInputStream for `input_stream` with a buffer of size
// `input_buffer_bytes` bytes for reading contents from `input_stream` and
// another buffer with size `output_buffer_bytes` for caching decompressed
// contents. Does *not* take ownership of "input_stream".
diff --git a/tensorflow/core/ops/array_ops.cc b/tensorflow/core/ops/array_ops.cc
index 651f22c6ea..62c86c7714 100644
--- a/tensorflow/core/ops/array_ops.cc
+++ b/tensorflow/core/ops/array_ops.cc
@@ -5488,24 +5488,28 @@ REGISTER_OP("BatchMatrixDiag")
.Input("diagonal: T")
.Output("output: T")
.Attr("T: type")
- .Deprecated(14, "Use MatrixDiag");
+ .Deprecated(14, "Use MatrixDiag")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixSetDiag")
.Input("input: T")
.Input("diagonal: T")
.Output("output: T")
.Attr("T: type")
- .Deprecated(14, "Use MatrixSetDiag");
+ .Deprecated(14, "Use MatrixSetDiag")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixDiagPart")
.Input("input: T")
.Output("diagonal: T")
.Attr("T: type")
- .Deprecated(14, "Use MatrixDiagPart");
+ .Deprecated(14, "Use MatrixDiagPart")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixBandPart")
.Input("input: T")
.Input("num_lower: int64")
.Input("num_upper: int64")
.Output("band: T")
.Attr("T: type")
- .Deprecated(14, "Use MatrixBandPart");
+ .Deprecated(14, "Use MatrixBandPart")
+ .SetShapeFn(shape_inference::UnknownShape);
} // namespace tensorflow
diff --git a/tensorflow/core/ops/compat/ops_history.v1.pbtxt b/tensorflow/core/ops/compat/ops_history.v1.pbtxt
index 22d4a0056f..a8338620d6 100644
--- a/tensorflow/core/ops/compat/ops_history.v1.pbtxt
+++ b/tensorflow/core/ops/compat/ops_history.v1.pbtxt
@@ -24391,6 +24391,21 @@ op {
}
}
op {
+ name: "SerializeTensor"
+ input_arg {
+ name: "tensor"
+ type_attr: "T"
+ }
+ output_arg {
+ name: "serialized"
+ type: DT_STRING
+ }
+ attr {
+ name: "T"
+ type: "type"
+ }
+}
+op {
name: "SetSize"
input_arg {
name: "set_indices"
@@ -24845,6 +24860,51 @@ op {
}
}
op {
+ name: "SloppyInterleaveDataset"
+ input_arg {
+ name: "input_dataset"
+ type: DT_RESOURCE
+ }
+ input_arg {
+ name: "other_arguments"
+ type_list_attr: "Targuments"
+ }
+ input_arg {
+ name: "cycle_length"
+ type: DT_INT64
+ }
+ input_arg {
+ name: "block_length"
+ type: DT_INT64
+ }
+ output_arg {
+ name: "handle"
+ type: DT_RESOURCE
+ }
+ attr {
+ name: "f"
+ type: "func"
+ }
+ attr {
+ name: "Targuments"
+ type: "list(type)"
+ has_minimum: true
+ }
+ attr {
+ name: "output_types"
+ type: "list(type)"
+ has_minimum: true
+ minimum: 1
+ }
+ attr {
+ name: "output_shapes"
+ type: "list(shape)"
+ has_minimum: true
+ minimum: 1
+ }
+ is_stateful: true
+}
+op {
name: "Softmax"
input_arg {
name: "logits"
@@ -28818,6 +28878,40 @@ op {
}
}
op {
+ name: "Sub"
+ input_arg {
+ name: "x"
+ type_attr: "T"
+ }
+ input_arg {
+ name: "y"
+ type_attr: "T"
+ }
+ output_arg {
+ name: "z"
+ type_attr: "T"
+ }
+ attr {
+ name: "T"
+ type: "type"
+ allowed_values {
+ list {
+ type: DT_HALF
+ type: DT_FLOAT
+ type: DT_DOUBLE
+ type: DT_UINT8
+ type: DT_INT8
+ type: DT_UINT16
+ type: DT_INT16
+ type: DT_INT32
+ type: DT_INT64
+ type: DT_COMPLEX64
+ type: DT_COMPLEX128
+ }
+ }
+ }
+}
+op {
name: "Substr"
input_arg {
name: "input"
diff --git a/tensorflow/core/ops/dataset_ops.cc b/tensorflow/core/ops/dataset_ops.cc
index 37d9a737e2..7cc8dccb95 100644
--- a/tensorflow/core/ops/dataset_ops.cc
+++ b/tensorflow/core/ops/dataset_ops.cc
@@ -233,6 +233,33 @@ f: A function mapping elements of `input_dataset`, concatenated with
`output_types` and `output_shapes`.
)doc");
+REGISTER_OP("SloppyInterleaveDataset")
+ .Input("input_dataset: resource")
+ .Input("other_arguments: Targuments")
+ .Input("cycle_length: int64")
+ .Input("block_length: int64")
+ .Output("handle: resource")
+ .Attr("f: func")
+ .Attr("Targuments: list(type) >= 0")
+ .Attr("output_types: list(type) >= 1")
+ .Attr("output_shapes: list(shape) >= 1")
+ .SetShapeFn(shape_inference::ScalarShape)
+ .Doc(R"doc(
+Creates a dataset that applies `f` to the outputs of `input_dataset`.
+
+The resulting dataset is similar to the `InterleaveDataset`, with the exception
+that if retrieving the next value from a dataset would cause the requester to
+block, it will skip that input dataset. This dataset is especially useful
+when loading data from variable-latency datastores (e.g. HDFS, GCS), as it
+allows the training step to proceed so long as some data is available.
+
+!! WARNING !! This dataset is not deterministic!
+
+f: A function mapping elements of `input_dataset`, concatenated with
+ `other_arguments`, to a Dataset resource that contains elements matching
+ `output_types` and `output_shapes`.
+)doc");
+
REGISTER_OP("GroupByWindowDataset")
.Input("input_dataset: resource")
.Input("key_func_other_arguments: Tkey_func_other_arguments")
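The `SloppyInterleaveDataset` doc above describes "sloppy" interleaving: when pulling the next value from one input would block, the op skips that input and tries another, trading determinism for throughput. A minimal pure-Python sketch of that scheduling idea (the `ready` predicate and names are illustrative, not TensorFlow API; a real implementation polls producer threads rather than a callback):

```python
from collections import deque

def sloppy_interleave(sources, ready, block_length=1):
    """Yield items round-robin from `sources`, skipping any source whose
    next item is not ready instead of blocking on it.

    sources: list of iterators
    ready:   ready(i) -> bool, True if source i can produce without blocking

    Note: this sketch spins if every remaining source stays not-ready.
    """
    queue = deque(enumerate(sources))
    while queue:
        i, it = queue.popleft()
        if not ready(i):
            # Sloppy part: do not block on this source; revisit it later.
            queue.append((i, it))
            continue
        emitted = 0
        while emitted < block_length:
            try:
                yield next(it)
                emitted += 1
            except StopIteration:
                break  # source exhausted; drop it from the cycle
        else:
            queue.append((i, it))  # source not exhausted; keep cycling
```

With `ready` always true this degenerates to ordinary deterministic interleave; a source that reports "not ready" once is simply visited later, which is exactly why output order is not deterministic.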
diff --git a/tensorflow/core/ops/debug_ops.cc b/tensorflow/core/ops/debug_ops.cc
index bd7f7c2c01..5aebdca1ea 100644
--- a/tensorflow/core/ops/debug_ops.cc
+++ b/tensorflow/core/ops/debug_ops.cc
@@ -32,6 +32,7 @@ REGISTER_OP("Copy")
.Attr("tensor_name: string = ''")
.Attr("debug_ops_spec: list(string) = []")
.SetAllowsUninitializedInput()
+ .SetShapeFn(shape_inference::UnchangedShape)
.Doc(R"doc(
Copy Op.
@@ -61,6 +62,7 @@ REGISTER_OP("CopyHost")
.Attr("tensor_name: string = ''")
.Attr("debug_ops_spec: list(string) = []")
.SetAllowsUninitializedInput()
+ .SetShapeFn(shape_inference::UnchangedShape)
.Doc(R"doc(
Copy Host Op.
@@ -118,6 +120,7 @@ REGISTER_OP("DebugNanCount")
.Attr("debug_urls: list(string) = []")
.Attr("gated_grpc: bool = false")
.SetAllowsUninitializedInput()
+ .SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
Debug NaN Value Counter Op
@@ -148,6 +151,8 @@ REGISTER_OP("DebugNumericSummary")
.Attr("mute_if_healthy: bool = false")
.Attr("gated_grpc: bool = false")
.SetAllowsUninitializedInput()
+ // Note: this could return a more specific shape if needed in future.
+ .SetShapeFn(shape_inference::UnknownShape)
.Doc(R"doc(
Debug Numeric Summary Op.
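The hunks above attach `UnchangedShape`, `ScalarShape`, and `UnknownShape` shape functions to the debug ops. Conceptually these three inference rules are tiny; here is a hedged Python model of what each one returns (a simplification for intuition, not the real C++ `InferenceContext` API):

```python
UNKNOWN = None  # stand-in for an unknown shape

def unchanged_shape(input_shapes):
    # Output shape is exactly the first input's shape (Copy / CopyHost).
    return input_shapes[0]

def scalar_shape(input_shapes):
    # Output is a rank-0 scalar (DebugNanCount emits a single count).
    return ()

def unknown_shape(input_shapes):
    # Nothing can be said statically (DebugNumericSummary's output length
    # depends on runtime data).
    return UNKNOWN
```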
diff --git a/tensorflow/core/ops/linalg_ops.cc b/tensorflow/core/ops/linalg_ops.cc
index 5b75bda1f1..48b2362342 100644
--- a/tensorflow/core/ops/linalg_ops.cc
+++ b/tensorflow/core/ops/linalg_ops.cc
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
+#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
@@ -557,34 +558,39 @@ REGISTER_OP("BatchSelfAdjointEig")
.Input("input: T")
.Output("output: T")
.Attr("T: {double, float}")
- .Deprecated(11, "Use SelfAdjointEigV2 instead.");
+ .Deprecated(11, "Use SelfAdjointEigV2 instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
// Can all be deleted after 9mar2017.
REGISTER_OP("BatchMatrixDeterminant")
.Input("input: T")
.Output("output: T")
.Attr("T: {float, double, complex64, complex128}")
- .Deprecated(13, "Use MatrixDeterminant instead.");
+ .Deprecated(13, "Use MatrixDeterminant instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixInverse")
.Input("input: T")
.Output("output: T")
.Attr("adjoint: bool = False")
.Attr("T: {double, float}")
- .Deprecated(13, "Use MatrixInverse instead.");
+ .Deprecated(13, "Use MatrixInverse instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchCholesky")
.Input("input: T")
.Output("output: T")
.Attr("T: {double, float}")
- .Deprecated(13, "Use Cholesky instead.");
+ .Deprecated(13, "Use Cholesky instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchCholeskyGrad")
.Input("l: T")
.Input("grad: T")
.Output("output: T")
.Attr("T: {float, double}")
- .Deprecated(13, "Use CholeskyGrad instead.");
+ .Deprecated(13, "Use CholeskyGrad instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchSelfAdjointEigV2")
.Input("input: T")
@@ -592,7 +598,8 @@ REGISTER_OP("BatchSelfAdjointEigV2")
.Output("v: T")
.Attr("compute_v: bool = True")
.Attr("T: {double, float}")
- .Deprecated(13, "Use SelfAdjointEigV2 instead.");
+ .Deprecated(13, "Use SelfAdjointEigV2 instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixSolve")
.Input("matrix: T")
@@ -600,7 +607,8 @@ REGISTER_OP("BatchMatrixSolve")
.Output("output: T")
.Attr("adjoint: bool = False")
.Attr("T: {double, float}")
- .Deprecated(13, "Use MatrixSolve instead.");
+ .Deprecated(13, "Use MatrixSolve instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixTriangularSolve")
.Input("matrix: T")
@@ -609,7 +617,8 @@ REGISTER_OP("BatchMatrixTriangularSolve")
.Attr("lower: bool = True")
.Attr("adjoint: bool = False")
.Attr("T: {double, float}")
- .Deprecated(13, "Use MatrixTriangularSolve instead.");
+ .Deprecated(13, "Use MatrixTriangularSolve instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchMatrixSolveLs")
.Input("matrix: T")
@@ -618,7 +627,8 @@ REGISTER_OP("BatchMatrixSolveLs")
.Output("output: T")
.Attr("T: {double, float}")
.Attr("fast: bool = True")
- .Deprecated(13, "Use MatrixSolveLs instead.");
+ .Deprecated(13, "Use MatrixSolveLs instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
REGISTER_OP("BatchSvd")
.Input("input: T")
@@ -628,6 +638,7 @@ REGISTER_OP("BatchSvd")
.Attr("compute_uv: bool = True")
.Attr("full_matrices: bool = False")
.Attr("T: {double, float, complex64, complex128}")
- .Deprecated(13, "Use Svd instead.");
+ .Deprecated(13, "Use Svd instead.")
+ .SetShapeFn(shape_inference::UnknownShape);
} // namespace tensorflow
diff --git a/tensorflow/core/ops/ops.pbtxt b/tensorflow/core/ops/ops.pbtxt
index 35c31c6cb8..cfd3869d05 100644
--- a/tensorflow/core/ops/ops.pbtxt
+++ b/tensorflow/core/ops/ops.pbtxt
@@ -24035,6 +24035,25 @@ op {
summary: "Serialize a `SparseTensor` into a string 3-vector (1-D `Tensor`) object."
}
op {
+ name: "SerializeTensor"
+ input_arg {
+ name: "tensor"
+ description: "A Tensor of type `T`."
+ type_attr: "T"
+ }
+ output_arg {
+ name: "serialized"
+ description: "A serialized TensorProto proto of the input tensor."
+ type: DT_STRING
+ }
+ attr {
+ name: "T"
+ type: "type"
+ description: "The type of the input tensor."
+ }
+ summary: "Transforms a Tensor into a serialized TensorProto proto."
+}
+op {
name: "SetSize"
input_arg {
name: "set_indices"
@@ -24536,6 +24555,54 @@ op {
description: "The output tensor is a tensor with dimensions described by \'size\'\nwhose values are extracted from \'input\' starting at the offsets in\n\'begin\'.\n\n*Requirements*:\n 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)"
}
op {
+ name: "SloppyInterleaveDataset"
+ input_arg {
+ name: "input_dataset"
+ type: DT_RESOURCE
+ }
+ input_arg {
+ name: "other_arguments"
+ type_list_attr: "Targuments"
+ }
+ input_arg {
+ name: "cycle_length"
+ type: DT_INT64
+ }
+ input_arg {
+ name: "block_length"
+ type: DT_INT64
+ }
+ output_arg {
+ name: "handle"
+ type: DT_RESOURCE
+ }
+ attr {
+ name: "f"
+ type: "func"
+ description: "A function mapping elements of `input_dataset`, concatenated with\n`other_arguments`, to a Dataset resource that contains elements matching\n`output_types` and `output_shapes`."
+ }
+ attr {
+ name: "Targuments"
+ type: "list(type)"
+ has_minimum: true
+ }
+ attr {
+ name: "output_types"
+ type: "list(type)"
+ has_minimum: true
+ minimum: 1
+ }
+ attr {
+ name: "output_shapes"
+ type: "list(shape)"
+ has_minimum: true
+ minimum: 1
+ }
+ summary: "Creates a dataset that applies `f` to the outputs of `input_dataset`."
+ description: "The resulting dataset is similar to the `InterleaveDataset`, with the exception\nthat if retrieving the next value from a dataset would cause the requester to\nblock, it will skip that input dataset. This dataset is especially useful\nwhen loading data from variable-latency datastores (e.g. HDFS, GCS), as it\nallows the training step to proceed so long as some data is available.\n\n!! WARNING !! This dataset is not deterministic!"
+ is_stateful: true
+}
+op {
name: "Softmax"
input_arg {
name: "logits"
@@ -28908,6 +28975,10 @@ op {
type: DT_HALF
type: DT_FLOAT
type: DT_DOUBLE
+ type: DT_UINT8
+ type: DT_INT8
+ type: DT_UINT16
+ type: DT_INT16
type: DT_INT32
type: DT_INT64
type: DT_COMPLEX64
diff --git a/tensorflow/core/platform/default/logging.h b/tensorflow/core/platform/default/logging.h
index 04ff9e12b6..d5f7350cdd 100644
--- a/tensorflow/core/platform/default/logging.h
+++ b/tensorflow/core/platform/default/logging.h
@@ -86,7 +86,7 @@ class LogMessageFatal : public LogMessage {
((lvl) <= ::tensorflow::internal::LogMessage::MinVLogLevel())
#endif
-#define VLOG(lvl) \
+#define VLOG(lvl) \
if (TF_PREDICT_FALSE(VLOG_IS_ON(lvl))) \
::tensorflow::internal::LogMessage(__FILE__, __LINE__, tensorflow::INFO)
diff --git a/tensorflow/core/platform/env_test.cc b/tensorflow/core/platform/env_test.cc
index 50dd0cd58b..c9b362f182 100644
--- a/tensorflow/core/platform/env_test.cc
+++ b/tensorflow/core/platform/env_test.cc
@@ -226,14 +226,28 @@ TEST_F(DefaultEnvTest, RecursivelyCreateDirSubdirsExist) {
TEST_F(DefaultEnvTest, LocalFileSystem) {
// Test filename with file:// syntax.
+ int expected_num_files = 0;
+ std::vector<string> matching_paths;
for (const int length : {0, 1, 1212, 2553, 4928, 8196, 9000, (1 << 20) - 1,
1 << 20, (1 << 20) + 1}) {
- string filename = io::JoinPath(BaseDir(), strings::StrCat("file", length));
+ string filename = io::JoinPath(BaseDir(), strings::StrCat("len", length));
filename = strings::StrCat("file://", filename);
// Write a file with the given length
const string input = CreateTestFile(env_, filename, length);
+ ++expected_num_files;
+
+ // Ensure that GetMatchingPaths works as intended.
+ TF_EXPECT_OK(env_->GetMatchingPaths(
+ // Try it with the "file://" URI scheme.
+ strings::StrCat("file://", io::JoinPath(BaseDir(), "l*")),
+ &matching_paths));
+ EXPECT_EQ(expected_num_files, matching_paths.size());
+ TF_EXPECT_OK(env_->GetMatchingPaths(
+ // Try it without any URI scheme.
+ io::JoinPath(BaseDir(), "l*"), &matching_paths));
+ EXPECT_EQ(expected_num_files, matching_paths.size());
// Read the file back and check equality
string output;
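The new test exercises `GetMatchingPaths` both with and without the `file://` URI scheme. A rough stdlib-Python model of that behavior, scheme-stripping followed by glob matching against known paths (the function name and scheme handling are illustrative assumptions, not the C++ implementation):

```python
import fnmatch

def get_matching_paths(pattern, all_paths):
    """Match `pattern` against known paths, tolerating a file:// scheme."""
    prefix = "file://"
    if pattern.startswith(prefix):
        pattern = pattern[len(prefix):]
    return [p for p in all_paths if fnmatch.fnmatch(p, pattern)]
```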
diff --git a/tensorflow/core/util/activation_mode.cc b/tensorflow/core/util/activation_mode.cc
index 4bf947a0a9..efb5ab146a 100644
--- a/tensorflow/core/util/activation_mode.cc
+++ b/tensorflow/core/util/activation_mode.cc
@@ -22,7 +22,9 @@ namespace tensorflow {
Status GetActivationModeFromString(const string& str_value,
ActivationMode* value) {
- if (str_value == "Sigmoid") {
+ if (str_value == "None") {
+ *value = NONE;
+ } else if (str_value == "Sigmoid") {
*value = SIGMOID;
} else if (str_value == "Relu") {
*value = RELU;
diff --git a/tensorflow/core/util/activation_mode.h b/tensorflow/core/util/activation_mode.h
index 2a8564847d..2e03ccd5c8 100644
--- a/tensorflow/core/util/activation_mode.h
+++ b/tensorflow/core/util/activation_mode.h
@@ -28,6 +28,7 @@ namespace tensorflow {
// ActivationMode: the activation function we apply to the input tensor:
enum ActivationMode {
+ NONE = 0,
SIGMOID = 1,
RELU = 2,
RELU6 = 3,
diff --git a/tensorflow/docs_src/programmers_guide/datasets.md b/tensorflow/docs_src/programmers_guide/datasets.md
index ba26bd5e94..aaebabfddf 100644
--- a/tensorflow/docs_src/programmers_guide/datasets.md
+++ b/tensorflow/docs_src/programmers_guide/datasets.md
@@ -146,6 +146,9 @@ for i in range(100):
assert i == value
```
+Note: Currently, one-shot iterators are the only type that is easily usable
+with an `Estimator`.
+
An **initializable** iterator requires you to run an explicit
`iterator.initializer` operation before using it. In exchange for this
inconvenience, it enables you to *parameterize* the definition of the dataset,
@@ -452,6 +455,9 @@ dataset = dataset.flat_map(
.filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))
```
+For a full example of parsing a CSV file using datasets, see [`imports85.py`](https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/imports85.py)
+in @{$get_started/linear_regression}.
+
<!--
TODO(mrry): Add these sections.
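The `datasets.md` snippet above drops a header row and filters out lines beginning with `#` before parsing the CSV. The same filtering logic in plain Python, for intuition (assuming one header row, as the guide's `skip(1)` pattern suggests):

```python
def clean_csv_lines(lines, header_rows=1):
    """Drop the first `header_rows` lines and any line starting with '#'."""
    for i, line in enumerate(lines):
        if i < header_rows:
            continue
        if line.startswith("#"):
            continue
        yield line
```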
diff --git a/tensorflow/examples/get_started/regression/dnn_regression.py b/tensorflow/examples/get_started/regression/dnn_regression.py
index 06f0665e56..7aa3659139 100644
--- a/tensorflow/examples/get_started/regression/dnn_regression.py
+++ b/tensorflow/examples/get_started/regression/dnn_regression.py
@@ -28,15 +28,21 @@ STEPS = 5000
def main(argv):
"""Builds, trains, and evaluates the model."""
assert len(argv) == 1
- (x_train, y_train), (x_test, y_test) = imports85.load_data()
+ (train, test) = imports85.dataset()
# Build the training input_fn.
- input_train = tf.estimator.inputs.pandas_input_fn(
- x=x_train, y=y_train, num_epochs=None, shuffle=True)
+ def input_train():
+ return (
+ # Shuffling with a buffer larger than the data set ensures
+ # that the examples are well mixed.
+ train.shuffle(1000).batch(128)
+ # Repeat forever
+ .repeat().make_one_shot_iterator().get_next())
# Build the validation input_fn.
- input_test = tf.estimator.inputs.pandas_input_fn(
- x=x_test, y=y_test, shuffle=True)
+ def input_test():
+ return (test.shuffle(1000).batch(128)
+ .make_one_shot_iterator().get_next())
# The first way assigns a unique weight to each category. To do this you must
# specify the category's vocabulary (values outside this specification will
@@ -71,7 +77,7 @@ def main(argv):
# Train the model.
model.train(input_fn=input_train, steps=STEPS)
- # Evaluate how the model performs on data it has not yet seen.
+ # Evaluate how the model performs on data it has not yet seen.
eval_result = model.evaluate(input_fn=input_test)
# The evaluation returns a Python dictionary. The "average_loss" key holds the
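The rewritten `input_train` above relies on `Dataset.shuffle(buffer_size)`, which mixes elements only within a sliding buffer. A plain-Python sketch of buffer shuffling shows why the example uses a buffer larger than the data set (names and the seeded RNG are illustrative):

```python
import random

def buffer_shuffle(items, buffer_size, rng=None):
    """Stream `items` through a fixed-size buffer, emitting a randomly
    chosen buffered element each step, like Dataset.shuffle(buffer_size)."""
    rng = rng or random.Random(0)
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) >= buffer_size:
            # Buffer full: emit a random element, keep the rest buffered.
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the remainder
        yield buf.pop(rng.randrange(len(buf)))
```

With `buffer_size >= len(items)` the whole input is buffered before anything is emitted, so the output is a uniform permutation; with `buffer_size=1` the input order is unchanged.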
diff --git a/tensorflow/examples/get_started/regression/imports85.py b/tensorflow/examples/get_started/regression/imports85.py
index 4532064622..41e77222ce 100644
--- a/tensorflow/examples/get_started/regression/imports85.py
+++ b/tensorflow/examples/get_started/regression/imports85.py
@@ -21,53 +21,149 @@ from __future__ import print_function
import collections
import numpy as np
-import pandas as pd
import tensorflow as tf
-header = collections.OrderedDict([
- ("symboling", np.int32),
- ("normalized-losses", np.float32),
- ("make", str),
- ("fuel-type", str),
- ("aspiration", str),
- ("num-of-doors", str),
- ("body-style", str),
- ("drive-wheels", str),
- ("engine-location", str),
- ("wheel-base", np.float32),
- ("length", np.float32),
- ("width", np.float32),
- ("height", np.float32),
- ("curb-weight", np.float32),
- ("engine-type", str),
- ("num-of-cylinders", str),
- ("engine-size", np.float32),
- ("fuel-system", str),
- ("bore", np.float32),
- ("stroke", np.float32),
- ("compression-ratio", np.float32),
- ("horsepower", np.float32),
- ("peak-rpm", np.float32),
- ("city-mpg", np.float32),
- ("highway-mpg", np.float32),
- ("price", np.float32)
+try:
+ import pandas as pd # pylint: disable=g-import-not-at-top
+except ImportError:
+ pass
+
+
+URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"
+
+# Order is important for the csv-readers, so we use an OrderedDict here.
+defaults = collections.OrderedDict([
+ ("symboling", [0]),
+ ("normalized-losses", [0.0]),
+ ("make", [""]),
+ ("fuel-type", [""]),
+ ("aspiration", [""]),
+ ("num-of-doors", [""]),
+ ("body-style", [""]),
+ ("drive-wheels", [""]),
+ ("engine-location", [""]),
+ ("wheel-base", [0.0]),
+ ("length", [0.0]),
+ ("width", [0.0]),
+ ("height", [0.0]),
+ ("curb-weight", [0.0]),
+ ("engine-type", [""]),
+ ("num-of-cylinders", [""]),
+ ("engine-size", [0.0]),
+ ("fuel-system", [""]),
+ ("bore", [0.0]),
+ ("stroke", [0.0]),
+ ("compression-ratio", [0.0]),
+ ("horsepower", [0.0]),
+ ("peak-rpm", [0.0]),
+ ("city-mpg", [0.0]),
+ ("highway-mpg", [0.0]),
+ ("price", [0.0])
]) # pyformat: disable
-def raw():
- """Get the imports85 data and load it as a pd.DataFrame."""
- url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data" # pylint: disable=line-too-long
- # Download and cache the data.
- path = tf.contrib.keras.utils.get_file(url.split("/")[-1], url)
+types = collections.OrderedDict((key, type(value[0]))
+ for key, value in defaults.items())
- # Load the CSV data into a pandas dataframe.
- df = pd.read_csv(path, names=header.keys(), dtype=header, na_values="?")
+
+def _get_imports85():
+ path = tf.contrib.keras.utils.get_file(URL.split("/")[-1], URL)
+ return path
+
+
+def dataset(y_name="price", train_fraction=0.7):
+ """Load the imports85 data as a (train,test) pair of `Dataset`.
+
+ Each dataset generates (features_dict, label) pairs.
+
+ Args:
+ y_name: The name of the column to use as the label.
+ train_fraction: A float, the fraction of data to use for training. The
+ remainder will be used for evaluation.
+ Returns:
+ A (train,test) pair of `Datasets`
+ """
+ # Download and cache the data
+ path = _get_imports85()
+
+ # Define how the lines of the file should be parsed
+ def decode_line(line):
+ """Convert a csv line into a (features_dict,label) pair."""
+ # Decode the line to a tuple of items based on the types of
+ # defaults.values().
+ items = tf.decode_csv(line, defaults.values())
+
+ # Convert the keys and items to a dict.
+ pairs = zip(defaults.keys(), items)
+ features_dict = dict(pairs)
+
+ # Remove the label from the features_dict
+ label = features_dict.pop(y_name)
+
+ return features_dict, label
+
+ def has_no_question_marks(line):
+ """Returns True if the line of text has no question marks."""
+ # split the line into an array of characters
+ chars = tf.string_split(line[tf.newaxis], "").values
+ # for each character check if it is a question mark
+ is_question = tf.equal(chars, "?")
+ any_question = tf.reduce_any(is_question)
+ no_question = ~any_question
+
+ return no_question
+
+ def in_training_set(line):
+ """Returns a boolean tensor, true if the line is in the training set."""
+ # If you randomly split the dataset you won't get the same split in both
+ # sessions if you stop and restart training later. Also a simple
+ # random split won't work with a dataset that's too big to `.cache()` as
+ # we are doing here.
+ num_buckets = 1000000
+ bucket_id = tf.string_to_hash_bucket_fast(line, num_buckets)
+ # Use the hash bucket id as a random number that's deterministic per example
+ return bucket_id < int(train_fraction * num_buckets)
+
+ def in_test_set(line):
+ """Returns a boolean tensor, true if the line is in the test set."""
+ # Items not in the training set are in the test set.
+ # This line must use `~` instead of `not` because `not` only works on
+ # Python booleans, while here we are dealing with symbolic tensors.
+ return ~in_training_set(line)
+
+ base_dataset = (tf.contrib.data
+ # Get the lines from the file.
+ .TextLineDataset(path)
+ # drop lines with question marks.
+ .filter(has_no_question_marks))
+
+ train = (base_dataset
+ # Take only the training-set lines.
+ .filter(in_training_set)
+ # Cache data so you only read the file once.
+ .cache()
+ # Decode each line into a (features_dict, label) pair.
+ .map(decode_line))
+
+ # Do the same for the test-set.
+ test = (base_dataset.filter(in_test_set).cache().map(decode_line))
+
+ return train, test
+
+
+def raw_dataframe():
+ """Load the imports85 data as a pd.DataFrame."""
+ # Download and cache the data
+ path = _get_imports85()
+
+ # Load it into a pandas dataframe
+ df = pd.read_csv(path, names=types.keys(), dtype=types, na_values="?")
return df
def load_data(y_name="price", train_fraction=0.7, seed=None):
- """Returns the imports85 shuffled and split into train and test subsets.
+ """Get the imports85 data set.
A description of the data is available at:
https://archive.ics.uci.edu/ml/datasets/automobile
@@ -88,7 +184,7 @@ def load_data(y_name="price", train_fraction=0.7, seed=None):
array.
"""
# Load the raw data columns.
- data = raw()
+ data = raw_dataframe()
# Delete rows with unknowns
data = data.dropna()
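`in_training_set` above hashes each raw CSV line into one of a million buckets, so the train/test split is deterministic across sessions without caching the whole data set. The same idea in pure Python, with `zlib.crc32` standing in for `tf.string_to_hash_bucket_fast` (an assumption for illustration; the hashes differ, the property is the same):

```python
import zlib

NUM_BUCKETS = 1000000

def in_training_set(line, train_fraction=0.7):
    """Deterministically assign a raw CSV line to the training set."""
    bucket_id = zlib.crc32(line.encode("utf-8")) % NUM_BUCKETS
    # The bucket id acts as a per-example random number that is stable
    # across runs, so restarts reproduce the same split.
    return bucket_id < int(train_fraction * NUM_BUCKETS)

def in_test_set(line, train_fraction=0.7):
    # Everything not in the training set is in the test set.
    return not in_training_set(line, train_fraction)
```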
diff --git a/tensorflow/examples/get_started/regression/linear_regression.py b/tensorflow/examples/get_started/regression/linear_regression.py
index 9793163323..dd44077663 100644
--- a/tensorflow/examples/get_started/regression/linear_regression.py
+++ b/tensorflow/examples/get_started/regression/linear_regression.py
@@ -29,20 +29,21 @@ STEPS = 1000
def main(argv):
"""Builds, trains, and evaluates the model."""
assert len(argv) == 1
- (x_train, y_train), (x_test, y_test) = imports85.load_data()
+ (train, test) = imports85.dataset()
# Build the training input_fn.
- input_train = tf.estimator.inputs.pandas_input_fn(
- x=x_train,
- y=y_train,
- # Setting `num_epochs` to `None` lets the `inpuf_fn` generate data
- # indefinitely, leaving the call to `Estimator.train` in control.
- num_epochs=None,
- shuffle=True)
+ def input_train():
+ return (
+ # Shuffling with a buffer larger than the data set ensures
+ # that the examples are well mixed.
+ train.shuffle(1000).batch(128)
+ # Repeat forever
+ .repeat().make_one_shot_iterator().get_next())
# Build the validation input_fn.
- input_test = tf.estimator.inputs.pandas_input_fn(
- x=x_test, y=y_test, shuffle=True)
+ def input_test():
+ return (test.shuffle(1000).batch(128)
+ .make_one_shot_iterator().get_next())
feature_columns = [
# "curb-weight" and "highway-mpg" are numeric columns.
diff --git a/tensorflow/examples/get_started/regression/linear_regression_categorical.py b/tensorflow/examples/get_started/regression/linear_regression_categorical.py
index 0a416595e6..38ecfada9d 100644
--- a/tensorflow/examples/get_started/regression/linear_regression_categorical.py
+++ b/tensorflow/examples/get_started/regression/linear_regression_categorical.py
@@ -28,20 +28,21 @@ STEPS = 1000
def main(argv):
"""Builds, trains, and evaluates the model."""
assert len(argv) == 1
- (x_train, y_train), (x_test, y_test) = imports85.load_data()
+ (train, test) = imports85.dataset()
# Build the training input_fn.
- input_train = tf.estimator.inputs.pandas_input_fn(
- x=x_train,
- y=y_train,
- # Setting `num_epochs` to `None` lets the `inpuf_fn` generate data
- # indefinitely, leaving the call to `Estimator.train` in control.
- num_epochs=None,
- shuffle=True)
+ def input_train():
+ return (
+ # Shuffling with a buffer larger than the data set ensures
+ # that the examples are well mixed.
+ train.shuffle(1000).batch(128)
+ # Repeat forever
+ .repeat().make_one_shot_iterator().get_next())
# Build the validation input_fn.
- input_test = tf.estimator.inputs.pandas_input_fn(
- x=x_test, y=y_test, shuffle=True)
+ def input_test():
+ return (test.shuffle(1000).batch(128)
+ .make_one_shot_iterator().get_next())
# The following code demonstrates two of the ways that `feature_columns` can
# be used to build a model with categorical inputs.
diff --git a/tensorflow/examples/get_started/regression/test.py b/tensorflow/examples/get_started/regression/test.py
index 5a644cb8d6..fa06dde9ae 100644
--- a/tensorflow/examples/get_started/regression/test.py
+++ b/tensorflow/examples/get_started/regression/test.py
@@ -26,48 +26,66 @@ from six.moves import StringIO
import tensorflow.examples.get_started.regression.imports85 as imports85
-import tensorflow.examples.get_started.regression.dnn_regression as dnn_regression # pylint: disable=g-bad-import-order,g-import-not-at-top
+sys.modules["imports85"] = imports85
+
+# pylint: disable=g-bad-import-order,g-import-not-at-top
+import tensorflow.contrib.data as data
+
+import tensorflow.examples.get_started.regression.dnn_regression as dnn_regression
import tensorflow.examples.get_started.regression.linear_regression as linear_regression
import tensorflow.examples.get_started.regression.linear_regression_categorical as linear_regression_categorical
from tensorflow.python.platform import googletest
from tensorflow.python.platform import test
+# pylint: disable=g-bad-import-order,g-import-not-at-top
+
+
+# pylint: disable=line-too-long
+FOUR_LINES = "\n".join([
+ "1,?,alfa-romero,gas,std,two,hatchback,rwd,front,94.50,171.20,65.50,52.40,2823,ohcv,six,152,mpfi,2.68,3.47,9.00,154,5000,19,26,16500",
+ "2,164,audi,gas,std,four,sedan,fwd,front,99.80,176.60,66.20,54.30,2337,ohc,four,109,mpfi,3.19,3.40,10.00,102,5500,24,30,13950",
+ "2,164,audi,gas,std,four,sedan,4wd,front,99.40,176.60,66.40,54.30,2824,ohc,five,136,mpfi,3.19,3.40,8.00,115,5500,18,22,17450",
+ "2,?,audi,gas,std,two,sedan,fwd,front,99.80,177.30,66.30,53.10,2507,ohc,five,136,mpfi,3.19,3.40,8.50,110,5500,19,25,15250",])
+
+# pylint: enable=line-too-long
+
+
+def four_lines_dataframe():
+ text = StringIO(FOUR_LINES)
+ return pd.read_csv(text, names=imports85.types.keys(),
+ dtype=imports85.types, na_values="?")
-def four_lines():
- # pylint: disable=line-too-long
- text = StringIO("""
- 1,?,alfa-romero,gas,std,two,hatchback,rwd,front,94.50,171.20,65.50,52.40,2823,ohcv,six,152,mpfi,2.68,3.47,9.00,154,5000,19,26,16500
- 2,164,audi,gas,std,four,sedan,fwd,front,99.80,176.60,66.20,54.30,2337,ohc,four,109,mpfi,3.19,3.40,10.00,102,5500,24,30,13950
- 2,164,audi,gas,std,four,sedan,4wd,front,99.40,176.60,66.40,54.30,2824,ohc,five,136,mpfi,3.19,3.40,8.00,115,5500,18,22,17450
- 2,?,audi,gas,std,two,sedan,fwd,front,99.80,177.30,66.30,53.10,2507,ohc,five,136,mpfi,3.19,3.40,8.50,110,5500,19,25,15250""")
- # pylint: enable=line-too-long
- return pd.read_csv(text, names=imports85.header.keys(),
- dtype=imports85.header, na_values='?')
+def four_lines_dataset(*args, **kwargs):
+ del args, kwargs
+ return data.Dataset.from_tensor_slices(FOUR_LINES.split("\n"))
class RegressionTest(googletest.TestCase):
"""Test the regression examples in this directory."""
- @test.mock.patch.dict(imports85.__dict__, {'raw': four_lines})
- @test.mock.patch.dict(linear_regression.__dict__, {'STEPS': 1})
- @test.mock.patch.dict(sys.modules, {'imports85': imports85})
+ @test.mock.patch.dict(data.__dict__,
+ {"TextLineDataset": four_lines_dataset})
+ @test.mock.patch.dict(imports85.__dict__, {"_get_imports85": (lambda: None)})
+ @test.mock.patch.dict(linear_regression.__dict__, {"STEPS": 1})
def test_linear_regression(self):
- linear_regression.main([])
+ linear_regression.main([""])
- @test.mock.patch.dict(imports85.__dict__, {'raw': four_lines})
- @test.mock.patch.dict(linear_regression_categorical.__dict__, {'STEPS': 1})
- @test.mock.patch.dict(sys.modules, {'imports85': imports85})
+ @test.mock.patch.dict(data.__dict__,
+ {"TextLineDataset": four_lines_dataset})
+ @test.mock.patch.dict(imports85.__dict__, {"_get_imports85": (lambda: None)})
+ @test.mock.patch.dict(linear_regression_categorical.__dict__, {"STEPS": 1})
def test_linear_regression_categorical(self):
- linear_regression_categorical.main([])
+ linear_regression_categorical.main([""])
- @test.mock.patch.dict(imports85.__dict__, {'raw': four_lines})
- @test.mock.patch.dict(dnn_regression.__dict__, {'STEPS': 1})
- @test.mock.patch.dict(sys.modules, {'imports85': imports85})
+ @test.mock.patch.dict(data.__dict__,
+ {"TextLineDataset": four_lines_dataset})
+ @test.mock.patch.dict(imports85.__dict__, {"_get_imports85": (lambda: None)})
+ @test.mock.patch.dict(dnn_regression.__dict__, {"STEPS": 1})
def test_dnn_regression(self):
- dnn_regression.main([])
+ dnn_regression.main([""])
-if __name__ == '__main__':
+if __name__ == "__main__":
googletest.main()
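The updated tests above swap `TextLineDataset` and `_get_imports85` out via `mock.patch.dict`, so nothing is downloaded during the test run. A minimal self-contained illustration of the same patching pattern (the dictionary and names here are stand-ins, not the TensorFlow modules):

```python
from unittest import mock

config = {"source": "network"}

def load():
    return config["source"]

def test_load_with_patched_source():
    # patch.dict swaps the entry only for the duration of the `with` block,
    # then restores the original value automatically.
    with mock.patch.dict(config, {"source": "in-memory"}):
        assert load() == "in-memory"
    assert load() == "network"

test_load_with_patched_source()
```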
diff --git a/tensorflow/go/op/wrappers.go b/tensorflow/go/op/wrappers.go
index dda707aea2..1149ac6557 100644
--- a/tensorflow/go/op/wrappers.go
+++ b/tensorflow/go/op/wrappers.go
@@ -12297,172 +12297,6 @@ func SparseTensorDenseMatMul(scope *Scope, a_indices tf.Output, a_values tf.Outp
return op.Output(0)
}
-// L2 Loss.
-//
-// Computes half the L2 norm of a tensor without the `sqrt`:
-//
-// output = sum(t ** 2) / 2
-//
-// Arguments:
-// t: Typically 2-D, but may have any dimensions.
-//
-// Returns 0-D.
-func L2Loss(scope *Scope, t tf.Output) (output tf.Output) {
- if scope.Err() != nil {
- return
- }
- opspec := tf.OpSpec{
- Type: "L2Loss",
- Input: []tf.Input{
- t,
- },
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
-// Computes rectified linear: `max(features, 0)`.
-func Relu(scope *Scope, features tf.Output) (activations tf.Output) {
- if scope.Err() != nil {
- return
- }
- opspec := tf.OpSpec{
- Type: "Relu",
- Input: []tf.Input{
- features,
- },
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
-// Read an element from the TensorArray into output `value`.
-//
-// Arguments:
-// handle: The handle to a TensorArray.
-//
-// flow_in: A float scalar that enforces proper chaining of operations.
-// dtype: The type of the elem that is returned.
-//
-// Returns The tensor that is read from the TensorArray.
-func TensorArrayReadV3(scope *Scope, handle tf.Output, index tf.Output, flow_in tf.Output, dtype tf.DataType) (value tf.Output) {
- if scope.Err() != nil {
- return
- }
- attrs := map[string]interface{}{"dtype": dtype}
- opspec := tf.OpSpec{
- Type: "TensorArrayReadV3",
- Input: []tf.Input{
- handle, index, flow_in,
- },
- Attrs: attrs,
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
-// Adds up a SparseTensor and a dense Tensor, using these special rules:
-//
-// (1) Broadcasts the dense side to have the same shape as the sparse side, if
-// eligible;
-// (2) Then, only the dense values pointed to by the indices of the SparseTensor
-// participate in the cwise addition.
-//
-// By these rules, the result is a logical SparseTensor with exactly the same
-// indices and shape, but possibly with different non-zero values. The output of
-// this Op is the resultant non-zero values.
-//
-// Arguments:
-// sp_indices: 2-D. `N x R` matrix with the indices of non-empty values in a
-// SparseTensor, possibly not in canonical ordering.
-// sp_values: 1-D. `N` non-empty values corresponding to `sp_indices`.
-// sp_shape: 1-D. Shape of the input SparseTensor.
-// dense: `R`-D. The dense Tensor operand.
-//
-// Returns 1-D. The `N` values that are operated on.
-func SparseDenseCwiseAdd(scope *Scope, sp_indices tf.Output, sp_values tf.Output, sp_shape tf.Output, dense tf.Output) (output tf.Output) {
- if scope.Err() != nil {
- return
- }
- opspec := tf.OpSpec{
- Type: "SparseDenseCwiseAdd",
- Input: []tf.Input{
- sp_indices, sp_values, sp_shape, dense,
- },
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
-// Conv3DAttr is an optional argument to Conv3D.
-type Conv3DAttr func(optionalAttr)
-
-// Conv3DDataFormat sets the optional data_format attribute to value.
-//
-// value: The data format of the input and output data. With the
-// default format "NDHWC", the data is stored in the order of:
-// [batch, in_depth, in_height, in_width, in_channels].
-// Alternatively, the format could be "NCDHW", the data storage order is:
-// [batch, in_channels, in_depth, in_height, in_width].
-// If not specified, defaults to "NDHWC"
-func Conv3DDataFormat(value string) Conv3DAttr {
- return func(m optionalAttr) {
- m["data_format"] = value
- }
-}
-
-// Computes a 3-D convolution given 5-D `input` and `filter` tensors.
-//
-// In signal processing, cross-correlation is a measure of similarity of
-// two waveforms as a function of a time-lag applied to one of them. This
-// is also known as a sliding dot product or sliding inner-product.
-//
-// Our Conv3D implements a form of cross-correlation.
-//
-// Arguments:
-// input: Shape `[batch, in_depth, in_height, in_width, in_channels]`.
-// filter: Shape `[filter_depth, filter_height, filter_width, in_channels,
-// out_channels]`. `in_channels` must match between `input` and `filter`.
-// strides: 1-D tensor of length 5. The stride of the sliding window for each
-// dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-// padding: The type of padding algorithm to use.
-func Conv3D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv3DAttr) (output tf.Output) {
- if scope.Err() != nil {
- return
- }
- attrs := map[string]interface{}{"strides": strides, "padding": padding}
- for _, a := range optional {
- a(attrs)
- }
- opspec := tf.OpSpec{
- Type: "Conv3D",
- Input: []tf.Input{
- input, filter,
- },
- Attrs: attrs,
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
-// Returns the truth value of (x >= y) element-wise.
-//
-// *NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-// [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-func GreaterEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output) {
- if scope.Err() != nil {
- return
- }
- opspec := tf.OpSpec{
- Type: "GreaterEqual",
- Input: []tf.Input{
- x, y,
- },
- }
- op := scope.AddOperation(opspec)
- return op.Output(0)
-}
-
// OrderedMapUnstageNoKeyAttr is an optional argument to OrderedMapUnstageNoKey.
type OrderedMapUnstageNoKeyAttr func(optionalAttr)
@@ -12671,6 +12505,172 @@ func SparseAddGrad(scope *Scope, backprop_val_grad tf.Output, a_indices tf.Outpu
return op.Output(0), op.Output(1)
}
+// Read an element from the TensorArray into output `value`.
+//
+// Arguments:
+// handle: The handle to a TensorArray.
+// index: The position in the TensorArray from which to read.
+// flow_in: A float scalar that enforces proper chaining of operations.
+// dtype: The type of the element that is returned.
+//
+// Returns The tensor that is read from the TensorArray.
+func TensorArrayReadV3(scope *Scope, handle tf.Output, index tf.Output, flow_in tf.Output, dtype tf.DataType) (value tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ attrs := map[string]interface{}{"dtype": dtype}
+ opspec := tf.OpSpec{
+ Type: "TensorArrayReadV3",
+ Input: []tf.Input{
+ handle, index, flow_in,
+ },
+ Attrs: attrs,
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
+// Adds up a SparseTensor and a dense Tensor, using these special rules:
+//
+// (1) Broadcasts the dense side to have the same shape as the sparse side, if
+// eligible;
+// (2) Then, only the dense values pointed to by the indices of the SparseTensor
+// participate in the cwise addition.
+//
+// By these rules, the result is a logical SparseTensor with exactly the same
+// indices and shape, but possibly with different non-zero values. The output of
+// this Op is the resultant non-zero values.
+//
+// Arguments:
+// sp_indices: 2-D. `N x R` matrix with the indices of non-empty values in a
+// SparseTensor, possibly not in canonical ordering.
+// sp_values: 1-D. `N` non-empty values corresponding to `sp_indices`.
+// sp_shape: 1-D. Shape of the input SparseTensor.
+// dense: `R`-D. The dense Tensor operand.
+//
+// Returns 1-D. The `N` values that are operated on.
+func SparseDenseCwiseAdd(scope *Scope, sp_indices tf.Output, sp_values tf.Output, sp_shape tf.Output, dense tf.Output) (output tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ opspec := tf.OpSpec{
+ Type: "SparseDenseCwiseAdd",
+ Input: []tf.Input{
+ sp_indices, sp_values, sp_shape, dense,
+ },
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
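The two rules in the doc comment above are easy to misread; a plain-Python sketch (a hypothetical helper, not the TensorFlow implementation) pins them down. Rule (1), broadcasting, is elided by assuming `dense` already matches the sparse side's shape, so only rule (2) is shown: just the dense values pointed to by the sparse indices participate.

```python
def sparse_dense_cwise_add(sp_indices, sp_values, dense):
    """Sketch of the SparseDenseCwiseAdd rules on plain nested lists.

    Assumes `dense` has already been broadcast to the sparse side's
    shape. Each non-zero value is added to the dense value at its
    index; every other dense entry is ignored.
    """
    out = []
    for idx, val in zip(sp_indices, sp_values):
        d = dense
        for i in idx:  # walk the index tuple into the nested list
            d = d[i]
        out.append(val + d)
    return out

# A 2x2 sparse tensor with non-zeros at [0, 0] and [1, 1]:
values = sparse_dense_cwise_add(
    sp_indices=[[0, 0], [1, 1]],
    sp_values=[1.0, 2.0],
    dense=[[10.0, 20.0], [30.0, 40.0]])
# values == [11.0, 42.0]: only the indexed dense entries participated.
```

The output matches the contract: same indices and shape as the input SparseTensor, possibly different non-zero values.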
+// Conv3DAttr is an optional argument to Conv3D.
+type Conv3DAttr func(optionalAttr)
+
+// Conv3DDataFormat sets the optional data_format attribute to value.
+//
+// value: The data format of the input and output data. With the
+// default format "NDHWC", the data is stored in the order of:
+// [batch, in_depth, in_height, in_width, in_channels].
+// Alternatively, the format could be "NCDHW", the data storage order is:
+// [batch, in_channels, in_depth, in_height, in_width].
+// If not specified, defaults to "NDHWC"
+func Conv3DDataFormat(value string) Conv3DAttr {
+ return func(m optionalAttr) {
+ m["data_format"] = value
+ }
+}
+
+// Computes a 3-D convolution given 5-D `input` and `filter` tensors.
+//
+// In signal processing, cross-correlation is a measure of similarity of
+// two waveforms as a function of a time-lag applied to one of them. This
+// is also known as a sliding dot product or sliding inner-product.
+//
+// Our Conv3D implements a form of cross-correlation.
+//
+// Arguments:
+// input: Shape `[batch, in_depth, in_height, in_width, in_channels]`.
+// filter: Shape `[filter_depth, filter_height, filter_width, in_channels,
+// out_channels]`. `in_channels` must match between `input` and `filter`.
+// strides: 1-D tensor of length 5. The stride of the sliding window for each
+// dimension of `input`. Must have `strides[0] = strides[4] = 1`.
+// padding: The type of padding algorithm to use.
+func Conv3D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv3DAttr) (output tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ attrs := map[string]interface{}{"strides": strides, "padding": padding}
+ for _, a := range optional {
+ a(attrs)
+ }
+ opspec := tf.OpSpec{
+ Type: "Conv3D",
+ Input: []tf.Input{
+ input, filter,
+ },
+ Attrs: attrs,
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
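As a side note on the `strides` and `padding` arguments above, the spatial output size per dimension follows the conventional SAME/VALID formulas (a generic sketch of those formulas, not code from this patch; `strides[0]` and `strides[4]` are fixed at 1, so only the three spatial dimensions are computed this way):

```python
import math

def conv_output_size(in_size, filter_size, stride, padding):
    """Spatial output size for one dimension of a convolution.

    SAME pads so the output covers every input position:
        ceil(in_size / stride)
    VALID uses only positions where the filter fits entirely:
        ceil((in_size - filter_size + 1) / stride)
    """
    if padding == "SAME":
        return math.ceil(in_size / stride)
    elif padding == "VALID":
        return math.ceil((in_size - filter_size + 1) / stride)
    raise ValueError("padding must be 'SAME' or 'VALID'")

# An input of spatial size 8 with a size-3 filter and stride 2:
same_out = conv_output_size(8, 3, 2, "SAME")    # 4
valid_out = conv_output_size(8, 3, 2, "VALID")  # 3
```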
+// L2 Loss.
+//
+// Computes half the L2 norm of a tensor without the `sqrt`:
+//
+// output = sum(t ** 2) / 2
+//
+// Arguments:
+// t: Typically 2-D, but may have any dimensions.
+//
+// Returns 0-D.
+func L2Loss(scope *Scope, t tf.Output) (output tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ opspec := tf.OpSpec{
+ Type: "L2Loss",
+ Input: []tf.Input{
+ t,
+ },
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
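The formula in the doc comment above is simple enough to check by hand; a plain-Python sketch:

```python
def l2_loss(t):
    """Half the squared L2 norm, without the sqrt: sum(t ** 2) / 2."""
    return sum(x * x for x in t) / 2

loss = l2_loss([3.0, 4.0])  # (9 + 16) / 2 == 12.5
```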
+// Computes rectified linear: `max(features, 0)`.
+func Relu(scope *Scope, features tf.Output) (activations tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ opspec := tf.OpSpec{
+ Type: "Relu",
+ Input: []tf.Input{
+ features,
+ },
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
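The rectifier above is applied element-wise; a one-line plain-Python equivalent:

```python
def relu(features):
    """Element-wise rectified linear unit: max(x, 0) for each element."""
    return [max(x, 0.0) for x in features]

relu([-2.0, 0.0, 3.5])  # [0.0, 0.0, 3.5]
```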
+// Returns the truth value of (x >= y) element-wise.
+//
+// *NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
+// [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+func GreaterEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ opspec := tf.OpSpec{
+ Type: "GreaterEqual",
+ Input: []tf.Input{
+ x, y,
+ },
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
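The broadcasting note above means the two operands need not have the same shape; a minimal sketch covering only the scalar-against-vector case (full NumPy-style broadcasting handles arbitrary ranks):

```python
def greater_equal(x, y):
    """Element-wise x >= y, broadcasting a scalar y across the list x."""
    if not isinstance(y, list):
        y = [y] * len(x)  # broadcast the scalar across x
    return [a >= b for a, b in zip(x, y)]

greater_equal([1, 2, 3], 2)  # [False, True, True]
```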
// ResourceApplyMomentumAttr is an optional argument to ResourceApplyMomentum.
type ResourceApplyMomentumAttr func(optionalAttr)
@@ -25977,6 +25977,26 @@ func MatrixInverse(scope *Scope, input tf.Output, optional ...MatrixInverseAttr)
return op.Output(0)
}
+// Transforms a Tensor into a serialized TensorProto proto.
+//
+// Arguments:
+// tensor: A Tensor of type `T`.
+//
+// Returns A serialized TensorProto proto of the input tensor.
+func SerializeTensor(scope *Scope, tensor tf.Output) (serialized tf.Output) {
+ if scope.Err() != nil {
+ return
+ }
+ opspec := tf.OpSpec{
+ Type: "SerializeTensor",
+ Input: []tf.Input{
+ tensor,
+ },
+ }
+ op := scope.AddOperation(opspec)
+ return op.Output(0)
+}
+
// MatrixSolveAttr is an optional argument to MatrixSolve.
type MatrixSolveAttr func(optionalAttr)
diff --git a/tensorflow/java/BUILD b/tensorflow/java/BUILD
index ee07fc4813..4680e3ba16 100644
--- a/tensorflow/java/BUILD
+++ b/tensorflow/java/BUILD
@@ -81,10 +81,9 @@ cc_library(
copts = tf_copts(),
deps = [
"//tensorflow/core:framework",
+ "//tensorflow/core:framework_internal",
"//tensorflow/core:lib",
"//tensorflow/core:lib_internal",
- "//tensorflow/core:proto_text",
- "//tensorflow/core:protos_all_cc",
],
)
diff --git a/tensorflow/java/src/gen/cc/op_gen_main.cc b/tensorflow/java/src/gen/cc/op_gen_main.cc
index bc698124bf..a7c66dda89 100644
--- a/tensorflow/java/src/gen/cc/op_gen_main.cc
+++ b/tensorflow/java/src/gen/cc/op_gen_main.cc
@@ -16,12 +16,12 @@
#include <string>
#include <vector>
-#include "tensorflow/core/platform/init_main.h"
+#include "tensorflow/core/framework/op.h"
+#include "tensorflow/core/lib/core/status.h"
+#include "tensorflow/core/lib/strings/str_util.h"
#include "tensorflow/core/platform/env.h"
+#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/util/command_line_flags.h"
-#include "tensorflow/core/lib/strings/str_util.h"
-#include "tensorflow/core/lib/core/status.h"
-#include "tensorflow/core/framework/op.h"
#include "tensorflow/java/src/gen/cc/op_generator.h"
namespace tensorflow {
@@ -44,7 +44,8 @@ const char kUsageHeader[] =
"'org.tensorflow.op.mylib' package and add them to the 'myLib()' operator\n"
"group.\n\n"
"Note that the operator group assigned to the generated wrappers is just "
- "an annotation tag at this stage. Operations will not be available through\n"
+ "an annotation tag at this stage. Operations will not be available "
+ "through\n"
"the 'org.tensorflow.op.Ops' API as a group until the generated classes "
"are compiled using an appropriate annotation processor.\n\n"
"Finally, the '--base_package' overrides the default parent package "
@@ -58,13 +59,14 @@ int main(int argc, char* argv[]) {
tensorflow::string output_dir;
tensorflow::string base_package = "org.tensorflow.op";
std::vector<tensorflow::Flag> flag_list = {
- tensorflow::Flag("output_dir", &output_dir,
- "Root directory into which output files are generated"),
- tensorflow::Flag("lib_name", &lib_name,
- "A name, in snake_case, used to classify this set of operations"),
- tensorflow::Flag("base_package", &base_package,
- "Package parent to the generated subpackage and classes")
- };
+ tensorflow::Flag("output_dir", &output_dir,
+ "Root directory into which output files are generated"),
+ tensorflow::Flag(
+ "lib_name", &lib_name,
+ "A name, in snake_case, used to classify this set of operations"),
+ tensorflow::Flag(
+ "base_package", &base_package,
+ "Package parent to the generated subpackage and classes")};
tensorflow::string usage = tensorflow::op_gen::kUsageHeader;
usage += tensorflow::Flags::Usage(argv[0], flag_list);
bool parsed_flags_ok = tensorflow::Flags::Parse(&argc, argv, flag_list);
diff --git a/tensorflow/java/src/gen/cc/op_generator.cc b/tensorflow/java/src/gen/cc/op_generator.cc
index 814a08c6cc..df130c32e6 100644
--- a/tensorflow/java/src/gen/cc/op_generator.cc
+++ b/tensorflow/java/src/gen/cc/op_generator.cc
@@ -15,8 +15,8 @@ limitations under the License.
#include <string>
-#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/lib/strings/str_util.h"
+#include "tensorflow/core/platform/logging.h"
#include "tensorflow/java/src/gen/cc/op_generator.h"
namespace tensorflow {
@@ -41,14 +41,12 @@ string CamelCase(const string& str, char delimiter, bool upper) {
} // namespace
-OpGenerator::OpGenerator()
- : env(Env::Default()) {
-}
+OpGenerator::OpGenerator() : env(Env::Default()) {}
OpGenerator::~OpGenerator() {}
Status OpGenerator::Run(const OpList& ops, const string& lib_name,
- const string& base_package, const string& output_dir) {
+ const string& base_package, const string& output_dir) {
const string package =
base_package + '.' + str_util::StringReplace(lib_name, "_", "", true);
const string package_path =
diff --git a/tensorflow/java/src/gen/cc/op_generator.h b/tensorflow/java/src/gen/cc/op_generator.h
index 98a1f8d534..eec1082b51 100644
--- a/tensorflow/java/src/gen/cc/op_generator.h
+++ b/tensorflow/java/src/gen/cc/op_generator.h
@@ -19,8 +19,8 @@ limitations under the License.
#include <string>
#include "tensorflow/core/framework/op.h"
-#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/lib/core/status.h"
+#include "tensorflow/core/platform/env.h"
namespace tensorflow {
@@ -40,7 +40,7 @@ class OpGenerator {
/// Output files are generated in <output_dir>/<base_package>/<lib_package>,
/// where 'lib_package' is derived from 'lib_name'.
Status Run(const OpList& ops, const string& lib_name,
- const string& base_package, const string& output_dir);
+ const string& base_package, const string& output_dir);
private:
Env* env;
diff --git a/tensorflow/java/src/gen/gen_ops.bzl b/tensorflow/java/src/gen/gen_ops.bzl
index e0d5556122..e3710c49d0 100644
--- a/tensorflow/java/src/gen/gen_ops.bzl
+++ b/tensorflow/java/src/gen/gen_ops.bzl
@@ -27,11 +27,11 @@ def tf_java_op_gen_srcjar(name,
visibility=["//tensorflow/java:__pkg__"]):
gen_tools = []
- gen_cmds = ["rm -rf $(@D)"] # Always start from fresh when generating source files
+ gen_cmds = ["rm -rf $(@D)"] # Always start from fresh when generating source files
# Construct an op generator binary for each ops library.
for ops_lib in ops_libs:
- gen_lib = ops_lib[:ops_lib.rfind('_')]
+ gen_lib = ops_lib[:ops_lib.rfind("_")]
out_gen_tool = out_dir + ops_lib + "_gen_tool"
native.cc_binary(
@@ -50,10 +50,10 @@ def tf_java_op_gen_srcjar(name,
# Generate a source archive containing generated code for these ops.
gen_srcjar = out_dir + name + ".srcjar"
gen_cmds += ["$(location @local_jdk//:jar) cMf $(location :" + gen_srcjar + ") -C $(@D) ."]
+ gen_tools += ["@local_jdk//:jar"]
native.genrule(
name=name,
- srcs=["@local_jdk//:jar"] + ["@local_jdk//:jdk"],
outs=[gen_srcjar],
tools=gen_tools,
- cmd='&&'.join(gen_cmds))
+ cmd="&&".join(gen_cmds))
diff --git a/tensorflow/python/BUILD b/tensorflow/python/BUILD
index 26e0f86c37..c1e63c0d85 100644
--- a/tensorflow/python/BUILD
+++ b/tensorflow/python/BUILD
@@ -94,6 +94,7 @@ py_library(
"//tensorflow/python/ops/distributions",
"//tensorflow/python/profiler",
"//tensorflow/python/saved_model",
+ "//tensorflow/python/keras",
] + if_not_windows([
"//tensorflow/contrib:contrib_py",
]),
@@ -3600,22 +3601,24 @@ py_test(
py_test(
name = "monitored_session_test",
- size = "small",
+ size = "medium",
srcs = ["training/monitored_session_test.py"],
srcs_version = "PY2AND3",
tags = ["no_windows"],
deps = [
":array_ops",
- ":client",
":client_testlib",
+ ":control_flow_ops",
":errors",
":framework_for_generated_wrappers",
+ ":session",
":state_ops",
":summary",
":training",
":variables",
"//tensorflow/contrib/framework:framework_py",
"//tensorflow/contrib/testing:testing_py",
+ "//tensorflow/core:protos_all_py",
],
)
diff --git a/tensorflow/python/__init__.py b/tensorflow/python/__init__.py
index acda11bd18..18603c2181 100644
--- a/tensorflow/python/__init__.py
+++ b/tensorflow/python/__init__.py
@@ -80,6 +80,7 @@ from tensorflow.python.ops import linalg_ns as linalg
# Bring in subpackages.
from tensorflow.python.estimator import estimator_lib as estimator
from tensorflow.python.feature_column import feature_column_lib as feature_column
+from tensorflow.python import keras
from tensorflow.python.layers import layers
from tensorflow.python.ops import bitwise_ops as bitwise
from tensorflow.python.ops import image_ops as image
@@ -248,6 +249,7 @@ _allowed_symbols.extend([
'user_ops',
'layers',
'profiler',
+ 'keras',
])
# Variables framework.versions:
@@ -265,7 +267,7 @@ remove_undocumented(__name__, _allowed_symbols, [
functional_ops, histogram_ops, io_ops,
losses, math_ops, metrics, nn, resource_loader, sets, script_ops,
session_ops, sparse_ops, state_ops, string_ops, summary, tensor_array_ops,
- train, layers, profiler
+ train, layers, profiler, keras
])
# Special dunders that we choose to export:
diff --git a/tensorflow/python/debug/BUILD b/tensorflow/python/debug/BUILD
index 8eb2212069..c092616999 100644
--- a/tensorflow/python/debug/BUILD
+++ b/tensorflow/python/debug/BUILD
@@ -50,10 +50,25 @@ py_library(
)
py_library(
+ name = "debug_graphs",
+ srcs = ["lib/debug_graphs.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ "//tensorflow/core:protos_all_py",
+ "//tensorflow/python:framework",
+ "//tensorflow/python:op_def_registry",
+ "//tensorflow/python:platform",
+ "//tensorflow/python:tensor_util",
+ "@six_archive//:six",
+ ],
+)
+
+py_library(
name = "debug_data",
srcs = ["lib/debug_data.py"],
srcs_version = "PY2AND3",
deps = [
+ ":debug_graphs",
"//tensorflow/core:protos_all_py",
"//tensorflow/python:framework",
"//tensorflow/python:op_def_registry",
@@ -70,6 +85,7 @@ py_library(
srcs_version = "PY2AND3",
deps = [
":debug_data",
+ ":debug_graphs",
"//tensorflow/python:array_ops",
"//tensorflow/python:framework",
"//tensorflow/python:platform",
@@ -99,6 +115,7 @@ py_library(
srcs_version = "PY2AND3",
deps = [
":debug_data",
+ ":debug_graphs",
":debug_utils",
"//tensorflow/core:protos_all_py",
"//tensorflow/python:framework_for_generated_wrappers",
@@ -181,7 +198,7 @@ py_library(
deps = [
":cli_shared",
":command_parser",
- ":debug_data",
+ ":debug_graphs",
":debugger_cli_common",
":evaluator",
":source_utils",
@@ -401,6 +418,18 @@ py_binary(
)
py_test(
+ name = "debug_graphs_test",
+ size = "small",
+ srcs = ["lib/debug_graphs_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":debug_graphs",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:framework_test_lib",
+ ],
+)
+
+py_test(
name = "debug_data_test",
size = "small",
srcs = ["lib/debug_data_test.py"],
@@ -569,6 +598,7 @@ py_library(
srcs_version = "PY2AND3",
deps = [
":debug_data",
+ ":debug_graphs",
":debug_utils",
"//tensorflow/core:protos_all_py",
"//tensorflow/python:array_ops",
@@ -608,7 +638,7 @@ py_library(
srcs_version = "PY2AND3",
visibility = ["//visibility:public"],
deps = [
- ":debug_data",
+ ":debug_graphs",
":debug_service_pb2_grpc",
"//tensorflow/core/debug:debug_service_proto_py",
"@six_archive//:six",
diff --git a/tensorflow/python/debug/cli/analyzer_cli.py b/tensorflow/python/debug/cli/analyzer_cli.py
index 22e451e38c..50850bbc0d 100644
--- a/tensorflow/python/debug/cli/analyzer_cli.py
+++ b/tensorflow/python/debug/cli/analyzer_cli.py
@@ -34,7 +34,7 @@ from tensorflow.python.debug.cli import command_parser
from tensorflow.python.debug.cli import debugger_cli_common
from tensorflow.python.debug.cli import evaluator
from tensorflow.python.debug.cli import ui_factory
-from tensorflow.python.debug.lib import debug_data
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.debug.lib import source_utils
RL = debugger_cli_common.RichLine
@@ -716,7 +716,7 @@ class DebugAnalyzer(object):
# Get a node name, regardless of whether the input is a node name (without
# output slot attached) or a tensor name (with output slot attached).
- node_name, unused_slot = debug_data.parse_node_or_tensor_name(
+ node_name, unused_slot = debug_graphs.parse_node_or_tensor_name(
parsed.node_name)
if not self._debug_dump.node_exists(node_name):
@@ -840,7 +840,7 @@ class DebugAnalyzer(object):
parsed.op_type,
do_outputs=False)
- node_name = debug_data.get_node_name(parsed.node_name)
+ node_name = debug_graphs.get_node_name(parsed.node_name)
_add_main_menu(output, node_name=node_name, enable_list_inputs=False)
return output
@@ -871,7 +871,7 @@ class DebugAnalyzer(object):
tensor_name, tensor_slicing = (
command_parser.parse_tensor_name_with_slicing(parsed.tensor_name))
- node_name, output_slot = debug_data.parse_node_or_tensor_name(tensor_name)
+ node_name, output_slot = debug_graphs.parse_node_or_tensor_name(tensor_name)
if (self._debug_dump.loaded_partition_graphs() and
not self._debug_dump.node_exists(node_name)):
output = cli_shared.error(
@@ -1016,7 +1016,7 @@ class DebugAnalyzer(object):
parsed.op_type,
do_outputs=True)
- node_name = debug_data.get_node_name(parsed.node_name)
+ node_name = debug_graphs.get_node_name(parsed.node_name)
_add_main_menu(output, node_name=node_name, enable_list_outputs=False)
return output
@@ -1087,7 +1087,7 @@ class DebugAnalyzer(object):
label = RL(" " * 4)
if self._debug_dump.debug_watch_keys(
- debug_data.get_node_name(element)):
+ debug_graphs.get_node_name(element)):
attribute = debugger_cli_common.MenuItem("", "pt %s" % element)
else:
attribute = cli_shared.COLOR_BLUE
@@ -1246,7 +1246,7 @@ class DebugAnalyzer(object):
font_attr_segs = {}
# Check if this is a tensor name, instead of a node name.
- node_name, _ = debug_data.parse_node_or_tensor_name(node_name)
+ node_name, _ = debug_graphs.parse_node_or_tensor_name(node_name)
# Check if node exists.
if not self._debug_dump.node_exists(node_name):
@@ -1395,7 +1395,7 @@ class DebugAnalyzer(object):
# Recursive call.
# The input's/output's name can be a tensor name, in the case of node
# with >1 output slots.
- inp_node_name, _ = debug_data.parse_node_or_tensor_name(inp)
+ inp_node_name, _ = debug_graphs.parse_node_or_tensor_name(inp)
self._dfs_from_node(
lines,
attr_segs,
diff --git a/tensorflow/python/debug/lib/debug_data.py b/tensorflow/python/debug/lib/debug_data.py
index b2b3ec5d47..9ea279c004 100644
--- a/tensorflow/python/debug/lib/debug_data.py
+++ b/tensorflow/python/debug/lib/debug_data.py
@@ -26,14 +26,14 @@ import platform
import numpy as np
import six
-from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.core.framework import graph_pb2
from tensorflow.core.framework import types_pb2
from tensorflow.core.util import event_pb2
-from tensorflow.python.framework import op_def_registry
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.framework import tensor_util
from tensorflow.python.platform import gfile
+from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util import compat
@@ -155,30 +155,6 @@ def _load_log_message_from_event_file(event_file_path):
return event.log_message.message
-def parse_node_or_tensor_name(name):
- """Get the node name from a string that can be node or tensor name.
-
- Args:
- name: An input node name (e.g., "node_a") or tensor name (e.g.,
- "node_a:0"), as a str.
-
- Returns:
- 1) The node name, as a str. If the input name is a tensor name, i.e.,
- consists of a colon, the final colon and the following output slot
- will be stripped.
- 2) If the input name is a tensor name, the output slot, as an int. If
- the input name is not a tensor name, None.
- """
-
- if ":" in name and not name.endswith(":"):
- node_name = name[:name.rfind(":")]
- output_slot = int(name[name.rfind(":") + 1:])
-
- return node_name, output_slot
- else:
- return name, None
-
-
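The helper removed here (relocated to `debug_graphs.py` by this change) has a contract worth pinning down with an example; this sketch reproduces the deleted logic verbatim:

```python
def parse_node_or_tensor_name(name):
    """Split 'node:slot' into (node, slot); a plain node name maps to (name, None)."""
    if ":" in name and not name.endswith(":"):
        node_name = name[:name.rfind(":")]
        output_slot = int(name[name.rfind(":") + 1:])
        return node_name, output_slot
    return name, None

parse_node_or_tensor_name("node_a:0")  # ("node_a", 0)
parse_node_or_tensor_name("node_a")    # ("node_a", None)
```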
def _is_graph_file(file_name):
return file_name.startswith(METADATA_FILE_PREFIX + GRAPH_FILE_TAG)
@@ -191,25 +167,6 @@ def _is_run_feed_keys_info_file(file_name):
return file_name == METADATA_FILE_PREFIX + FEED_KEYS_INFO_FILE_TAG
-def get_node_name(element_name):
- return element_name.split(":")[0] if ":" in element_name else element_name
-
-
-def get_output_slot(element_name):
- """Get the output slot number from the name of a graph element.
-
- If element_name is a node name without output slot at the end, 0 will be
- assumed.
-
- Args:
- element_name: (`str`) name of the graph element in question.
-
- Returns:
- (`int`) output slot number.
- """
- return int(element_name.split(":")[-1]) if ":" in element_name else 0
-
-
def _get_tensor_name(node_name, output_slot):
"""Get tensor name given node name and output slot index.
@@ -241,78 +198,6 @@ def _get_tensor_watch_key(node_name, output_slot, debug_op):
return "%s:%s" % (_get_tensor_name(node_name, output_slot), debug_op)
-def is_copy_node(node_name):
- """Determine whether a node name is that of a debug Copy node.
-
- Such nodes are inserted by TensorFlow core upon request in
- RunOptions.debug_options.debug_tensor_watch_opts.
-
- Args:
- node_name: Name of the node.
-
- Returns:
- A bool indicating whether the input argument is the name of a debug Copy
- node.
- """
- return node_name.startswith("__copy_")
-
-
-def is_debug_node(node_name):
- """Determine whether a node name is that of a debug node.
-
- Such nodes are inserted by TensorFlow core upon request in
- RunOptions.debug_options.debug_tensor_watch_opts.
-
- Args:
- node_name: Name of the node.
-
- Returns:
- A bool indicating whether the input argument is the name of a debug node.
- """
- return node_name.startswith("__dbg_")
-
-
-def parse_debug_node_name(node_name):
- """Parse the name of a debug node.
-
- Args:
- node_name: Name of the debug node.
-
- Returns:
- 1. Name of the watched node, as a str.
- 2. Output slot index of the watched tensor, as an int.
- 3. Index of the debug node, as an int.
- 4. Name of the debug op, as a str, e.g, "DebugIdentity".
-
- Raises:
- ValueError: If the input node name is not a valid debug node name.
- """
- prefix = "__dbg_"
-
- name = node_name
- if not name.startswith(prefix):
- raise ValueError("Invalid prefix in debug node name: '%s'" % node_name)
-
- name = name[len(prefix):]
-
- if name.count("_") < 2:
- raise ValueError("Invalid debug node name: '%s'" % node_name)
-
- debug_op = name[name.rindex("_") + 1:]
- name = name[:name.rindex("_")]
-
- debug_op_index = int(name[name.rindex("_") + 1:])
- name = name[:name.rindex("_")]
-
- if name.count(":") != 1:
- raise ValueError("Invalid tensor name in debug node name: '%s'" % node_name)
-
- watched_node_name = name[:name.index(":")]
- watched_output_slot = int(name[name.index(":") + 1:])
-
- return watched_node_name, watched_output_slot, debug_op_index, debug_op
-
-
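The debug-node name format parsed by the function removed above is `__dbg_<node>:<slot>_<index>_<op>`; this sketch condenses the deleted parser and shows it on a concrete name:

```python
def parse_debug_node_name(node_name):
    """Parse '__dbg_<node>:<slot>_<idx>_<op>' into its four components."""
    prefix = "__dbg_"
    if not node_name.startswith(prefix):
        raise ValueError("Invalid prefix in debug node name: '%s'" % node_name)
    name = node_name[len(prefix):]
    if name.count("_") < 2:
        raise ValueError("Invalid debug node name: '%s'" % node_name)
    debug_op = name[name.rindex("_") + 1:]            # e.g. "DebugIdentity"
    name = name[:name.rindex("_")]
    debug_op_index = int(name[name.rindex("_") + 1:])  # index of the debug node
    name = name[:name.rindex("_")]
    if name.count(":") != 1:
        raise ValueError("Invalid tensor name in debug node name: '%s'" % node_name)
    watched_node, watched_slot = name.split(":")
    return watched_node, int(watched_slot), debug_op_index, debug_op

parse_debug_node_name("__dbg_node_a:0_0_DebugIdentity")
# -> ("node_a", 0, 0, "DebugIdentity")
```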
def has_inf_or_nan(datum, tensor):
"""A predicate for whether a tensor consists of any bad numerical values.
@@ -573,88 +458,6 @@ class WatchKeyDoesNotExistInDebugDumpDirError(ValueError):
pass
-class _GraphTracingReachedDestination(Exception):
- pass
-
-
-class _DFSGraphTracer(object):
- """Graph input tracer using depth-first search."""
-
- def __init__(self,
- input_lists,
- skip_node_names=None,
- destination_node_name=None):
- """Constructor of _DFSGraphTracer.
-
- Args:
- input_lists: A list of dicts. Each dict is an adjacency (input) map from
- the recipient node name as the key and the list of input node names
- as the value.
- skip_node_names: Optional: a list of node names to skip tracing.
- destination_node_name: Optional: destination node name. If not `None`, it
- should be the name of a destination node as a str and the graph tracing
- will raise GraphTracingReachedDestination as soon as the node has been
- reached.
-
- Raises:
- _GraphTracingReachedDestination: if destination_node_name is not None and
- the specified node is reached.
- """
-
- self._input_lists = input_lists
- self._skip_node_names = skip_node_names
-
- self._inputs = []
- self._visited_nodes = []
- self._depth_count = 0
- self._depth_list = []
-
- self._destination_node_name = destination_node_name
-
- def trace(self, graph_element_name):
- """Trace inputs.
-
- Args:
- graph_element_name: Name of the node or an output tensor of the node, as a
- str.
-
- Raises:
- _GraphTracingReachedDestination: if destination_node_name of this tracer
- object is not None and the specified node is reached.
- """
- self._depth_count += 1
-
- node_name = get_node_name(graph_element_name)
-
- if node_name == self._destination_node_name:
- raise _GraphTracingReachedDestination()
-
- if node_name in self._skip_node_names:
- return
- if node_name in self._visited_nodes:
- return
-
- self._visited_nodes.append(node_name)
-
- for input_list in self._input_lists:
- for inp in input_list[node_name]:
- if get_node_name(inp) in self._visited_nodes:
- continue
- self._inputs.append(inp)
- self._depth_list.append(self._depth_count)
- self.trace(inp)
-
- self._depth_count -= 1
-
- def inputs(self):
- return self._inputs
-
- def depth_list(self):
- return self._depth_list
-
-
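The input tracer being deleted above (its role is taken over by `debug_graphs.DebugGraph`) is at heart a depth-first search over node-to-inputs adjacency maps; a condensed sketch without the destination short-circuit or depth bookkeeping:

```python
def trace_inputs(input_map, start, skip=()):
    """Depth-first walk of a node -> inputs adjacency map, collecting
    every transitive input of `start` (a condensed sketch of the
    removed _DFSGraphTracer)."""
    inputs, visited = [], set()

    def dfs(node):
        if node in skip or node in visited:
            return
        visited.add(node)
        for inp in input_map.get(node, []):
            if inp not in visited:
                inputs.append(inp)
            dfs(inp)

    dfs(start)
    return inputs

# c depends on b, which depends on a:
trace_inputs({"c": ["b"], "b": ["a"], "a": []}, "c")  # ["b", "a"]
```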
-# TODO(cais): This class is getting too large in line count. Refactor to make it
-# smaller and easier to maintain.
class DebugDumpDir(object):
"""Data set from a debug-dump directory on filesystem.
@@ -963,52 +766,36 @@ class DebugDumpDir(object):
ValueError: If the partition GraphDef of one or more devices fail to be
loaded.
"""
-
- self._node_attributes = {}
- self._node_inputs = {}
- self._node_reversed_ref_inputs = {}
- self._node_ctrl_inputs = {}
- self._node_recipients = {}
- self._node_ctrl_recipients = {}
+ self._debug_graphs = {}
self._node_devices = {}
- self._node_op_types = {}
- self._copy_send_nodes = {}
- self._ref_args = {}
-
- self._partition_graphs = {}
- for device_name in self._device_names:
- partition_graph = None
- if device_name in self._dump_graph_file_paths:
- partition_graph = _load_graph_def_from_event_file(
- self._dump_graph_file_paths[device_name])
- else:
- partition_graph = self._find_partition_graph(partition_graphs,
- device_name)
-
- if partition_graph:
- self._partition_graphs[device_name] = partition_graph
- self._node_attributes[device_name] = {}
- self._node_inputs[device_name] = {}
- self._node_reversed_ref_inputs[device_name] = {}
- self._node_ctrl_inputs[device_name] = {}
- self._node_recipients[device_name] = {}
- self._node_ctrl_recipients[device_name] = {}
- self._node_op_types[device_name] = {}
- self._copy_send_nodes[device_name] = []
- self._ref_args[device_name] = []
-
- if partition_graph:
- for node in partition_graph.node:
- self._process_partition_graph_node(device_name, node)
-
- self._prune_non_control_edges_of_debug_ops(device_name)
- self._prune_control_edges_of_debug_ops(device_name)
+ if partition_graphs:
+ partition_graphs_and_device_names = [
+ (partition_graph, None) for partition_graph in partition_graphs]
+ else:
+ partition_graphs_and_device_names = []
+ for device_name in self._device_names:
+ partition_graph = None
+ if device_name in self._dump_graph_file_paths:
+ partition_graph = _load_graph_def_from_event_file(
+ self._dump_graph_file_paths[device_name])
+ else:
+ partition_graph = self._find_partition_graph(partition_graphs,
+ device_name)
+ if partition_graph:
+ partition_graphs_and_device_names.append((partition_graph,
+ device_name))
+ else:
+ logging.warn("Failed to load partition graphs from disk.")
- self._populate_recipient_maps(device_name)
+ for partition_graph, maybe_device_name in partition_graphs_and_device_names:
+ debug_graph = debug_graphs.DebugGraph(partition_graph,
+ device_name=maybe_device_name)
+ self._debug_graphs[debug_graph.device_name] = debug_graph
+ self._collect_node_devices(debug_graph)
- if device_name in self._partition_graphs and validate:
- self._validate_dump_with_graphs(device_name)
+ if validate and debug_graph.device_name in self._dump_tensor_data:
+ self._validate_dump_with_graphs(debug_graph.device_name)
def _find_partition_graph(self, partition_graphs, device_name):
if partition_graphs is None:
@@ -1020,167 +807,13 @@ class DebugDumpDir(object):
return graph_def
return None
- def _get_ref_args(self, node):
- """Determine whether an input of an op is ref-type.
-
- Args:
- node: A `NodeDef`.
-
- Returns:
- A list of the arg names (as strs) that are ref-type.
- """
-
- op_def = op_def_registry.get_registered_ops().get(node.op)
- ref_args = []
- if op_def:
- for i, output_arg in enumerate(op_def.output_arg):
- if output_arg.is_ref:
- arg_name = node.name if i == 0 else (node.name + ":%d" % i)
- ref_args.append(arg_name)
- return ref_args
-
- def _process_partition_graph_node(self, device_name, node):
- """Process a node from the partition graphs.
-
- Args:
- device_name: (str) device name.
- node: (NodeDef) A partition-graph node to be processed.
-
- Raises:
- ValueError: If duplicate node names are encountered.
- """
-
- if is_debug_node(node.name):
- # This is a debug node. Parse the node name and retrieve the
- # information about debug watches on tensors. But do not include
- # the node in the graph.
- (watched_node_name, watched_output_slot, _,
- debug_op) = parse_debug_node_name(node.name)
-
- self._debug_watches[device_name][watched_node_name][
- watched_output_slot].add(debug_op)
-
- return
-
- if node.name in self._node_inputs[device_name]:
- raise ValueError("Duplicate node name on device %s: '%s'" %
- (device_name, node.name))
-
- self._node_attributes[device_name][node.name] = node.attr
-
- self._node_inputs[device_name][node.name] = []
- self._node_ctrl_inputs[device_name][node.name] = []
- self._node_recipients[device_name][node.name] = []
- self._node_ctrl_recipients[device_name][node.name] = []
-
- if node.name not in self._node_devices:
- self._node_devices[node.name] = set()
- self._node_devices[node.name].add(node.device)
- self._node_op_types[device_name][node.name] = node.op
- self._ref_args[device_name].extend(self._get_ref_args(node))
-
- for inp in node.input:
- if is_copy_node(inp) and (node.op == "_Send" or node.op == "_Retval"):
- self._copy_send_nodes[device_name].append(node.name)
-
- if inp.startswith("^"):
- cinp = inp[1:]
- self._node_ctrl_inputs[device_name][node.name].append(cinp)
+ def _collect_node_devices(self, debug_graph):
+ for node_name in debug_graph.node_devices:
+ if node_name in self._node_devices:
+ self._node_devices[node_name] = self._node_devices[node_name].union(
+ debug_graph.node_devices[node_name])
else:
- self._node_inputs[device_name][node.name].append(inp)
-
- def _prune_nodes_from_input_and_recipient_maps(self,
- device_name,
- nodes_to_prune):
- """Prune nodes out of input and recipient maps.
-
- Args:
- device_name: (`str`) device name.
- nodes_to_prune: (`list` of `str`) Names of the nodes to be pruned.
- """
-
- for node in nodes_to_prune:
- del self._node_inputs[device_name][node]
- del self._node_ctrl_inputs[device_name][node]
- del self._node_recipients[device_name][node]
- del self._node_ctrl_recipients[device_name][node]
-
- def _prune_non_control_edges_of_debug_ops(self, device_name):
- """Prune (non-control) edges related to debug ops.
-
- Prune the Copy ops and associated _Send ops inserted by the debugger out
- from the non-control inputs and output recipients map. Replace the inputs
- and recipients with original ones.
-
- Args:
- device_name: (`str`) device name.
- """
-
- copy_nodes = []
- for node in self._node_inputs[device_name]:
- if node in self._copy_send_nodes[device_name]:
- continue
-
- if is_copy_node(node):
- copy_nodes.append(node)
-
- inputs = self._node_inputs[device_name][node]
-
- for i in xrange(len(inputs)):
- inp = inputs[i]
- if is_copy_node(inp):
- # Find the input to the Copy node, which should be the original
- # input to the node.
- orig_inp = self._node_inputs[device_name][inp][0]
- inputs[i] = orig_inp
-
- self._prune_nodes_from_input_and_recipient_maps(device_name, copy_nodes)
- self._prune_nodes_from_input_and_recipient_maps(
- device_name, self._copy_send_nodes[device_name])
-
- def _prune_control_edges_of_debug_ops(self, device_name):
- """Prune control edges related to the debug ops."""
-
- for node in self._node_ctrl_inputs[device_name]:
- ctrl_inputs = self._node_ctrl_inputs[device_name][node]
- debug_op_inputs = []
- for ctrl_inp in ctrl_inputs:
- if is_debug_node(ctrl_inp):
- debug_op_inputs.append(ctrl_inp)
- for debug_op_inp in debug_op_inputs:
- ctrl_inputs.remove(debug_op_inp)
-
- def _populate_recipient_maps(self, device_name):
- """Populate the map from node name to recipient(s) of its output(s).
-
- This method also populates the input map based on reversed ref edges.
-
- Args:
- device_name: name of device.
- """
-
- for node in self._node_inputs[device_name]:
- inputs = self._node_inputs[device_name][node]
- for inp in inputs:
- inp = get_node_name(inp)
- if inp not in self._node_recipients[device_name]:
- self._node_recipients[device_name][inp] = []
- self._node_recipients[device_name][inp].append(node)
-
- if inp in self._ref_args[device_name]:
- if inp not in self._node_reversed_ref_inputs[device_name]:
- self._node_reversed_ref_inputs[device_name][inp] = []
- self._node_reversed_ref_inputs[device_name][inp].append(node)
-
- for node in self._node_ctrl_inputs[device_name]:
- ctrl_inputs = self._node_ctrl_inputs[device_name][node]
- for ctrl_inp in ctrl_inputs:
- if ctrl_inp in self._copy_send_nodes[device_name]:
- continue
-
- if ctrl_inp not in self._node_ctrl_recipients[device_name]:
- self._node_ctrl_recipients[device_name][ctrl_inp] = []
- self._node_ctrl_recipients[device_name][ctrl_inp].append(node)
+ self._node_devices[node_name] = debug_graph.node_devices[node_name]
def _validate_dump_with_graphs(self, device_name):
"""Validate the dumped tensor data against the partition graphs.
@@ -1197,31 +830,31 @@ class DebugDumpDir(object):
Or if the temporal order of the dump's timestamps violate the
input relations on the partition graphs.
"""
-
- if not self._partition_graphs[device_name]:
+ if not self._debug_graphs:
raise LookupError(
"No partition graphs loaded for device %s" % device_name)
+ debug_graph = self._debug_graphs[device_name]
# Verify that the node names in the dump data are all present in the
# partition graphs.
for datum in self._dump_tensor_data[device_name]:
- if datum.node_name not in self._node_inputs[device_name]:
+ if datum.node_name not in debug_graph.node_inputs:
raise ValueError("Node name '%s' is not found in partition graphs of "
"device %s." % (datum.node_name, device_name))
pending_inputs = {}
- for node in self._node_inputs[device_name]:
+ for node in debug_graph.node_inputs:
pending_inputs[node] = []
- inputs = self._node_inputs[device_name][node]
+ inputs = debug_graph.node_inputs[node]
for inp in inputs:
- inp_node = get_node_name(inp)
- inp_output_slot = get_output_slot(inp)
+ inp_node = debug_graphs.get_node_name(inp)
+ inp_output_slot = debug_graphs.get_output_slot(inp)
# Inputs from Enter and NextIteration nodes are not validated because
# DebugNodeInserter::InsertNodes() in the debugger core skips creating
# control edges from debug ops watching these types of nodes.
if (inp_node in self._debug_watches[device_name] and
inp_output_slot in self._debug_watches[device_name][inp_node] and
- self._node_op_types[device_name].get(inp) not in (
+ debug_graph.node_op_types.get(inp) not in (
"Enter", "NextIteration") and
(inp_node, inp_output_slot) not in pending_inputs[node]):
pending_inputs[node].append((inp_node, inp_output_slot))
@@ -1240,7 +873,7 @@ class DebugDumpDir(object):
"these input(s) are not satisfied: %s" %
(node, datum.timestamp, repr(pending_inputs[node])))
- recipients = self._node_recipients[device_name][node]
+ recipients = debug_graph.node_recipients[node]
for recipient in recipients:
recipient_pending_inputs = pending_inputs[recipient]
if (node, slot) in recipient_pending_inputs:
@@ -1285,7 +918,7 @@ class DebugDumpDir(object):
def loaded_partition_graphs(self):
"""Test whether partition graphs have been loaded."""
- return self._partition_graphs is not None
+ return bool(self._debug_graphs)
def partition_graphs(self):
"""Get the partition graphs.
@@ -1296,11 +929,10 @@ class DebugDumpDir(object):
Raises:
LookupError: If no partition graphs have been loaded.
"""
-
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError("No partition graphs have been loaded.")
-
- return self._partition_graphs.values()
+ return [self._debug_graphs[key].debug_graph_def
+ for key in self._debug_graphs]
@property
def run_fetches_info(self):
@@ -1380,17 +1012,17 @@ class DebugDumpDir(object):
LookupError: If no partition graphs have been loaded.
ValueError: If specified node name does not exist.
"""
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError("No partition graphs have been loaded.")
if device_name is None:
nodes = []
- for device_name in self._node_inputs:
- nodes.extend(self._node_inputs[device_name].keys())
+ for device_name in self._debug_graphs:
+ nodes.extend(self._debug_graphs[device_name].node_inputs.keys())
return nodes
else:
- if device_name not in self._node_inputs:
+ if device_name not in self._debug_graphs:
raise ValueError("Invalid device name: %s" % device_name)
- return self._node_inputs[device_name].keys()
+ return self._debug_graphs[device_name].node_inputs.keys()
def node_attributes(self, node_name, device_name=None):
"""Get the attributes of a node.
@@ -1406,11 +1038,11 @@ class DebugDumpDir(object):
Raises:
LookupError: If no partition graphs have been loaded.
"""
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError("No partition graphs have been loaded.")
device_name = self._infer_device_name(device_name, node_name)
- return self._node_attributes[device_name][node_name]
+ return self._debug_graphs[device_name].node_attributes[node_name]
def node_inputs(self, node_name, is_control=False, device_name=None):
"""Get the inputs of given node according to partition graphs.
@@ -1429,16 +1061,15 @@ class DebugDumpDir(object):
LookupError: If node inputs and control inputs have not been loaded
from partition graphs yet.
"""
-
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError(
"Node inputs are not loaded from partition graphs yet.")
device_name = self._infer_device_name(device_name, node_name)
if is_control:
- return self._node_ctrl_inputs[device_name][node_name]
+ return self._debug_graphs[device_name].node_ctrl_inputs[node_name]
else:
- return self._node_inputs[device_name][node_name]
+ return self._debug_graphs[device_name].node_inputs[node_name]
def transitive_inputs(self,
node_name,
@@ -1466,19 +1097,19 @@ class DebugDumpDir(object):
LookupError: If node inputs and control inputs have not been loaded
from partition graphs yet.
"""
-
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError(
"Node inputs are not loaded from partition graphs yet.")
device_name = self._infer_device_name(device_name, node_name)
- input_lists = [self._node_inputs[device_name]]
+ input_lists = [self._debug_graphs[device_name].node_inputs]
if include_control:
- input_lists.append(self._node_ctrl_inputs[device_name])
+ input_lists.append(self._debug_graphs[device_name].node_ctrl_inputs)
if include_reversed_ref:
- input_lists.append(self._node_reversed_ref_inputs[device_name])
- tracer = _DFSGraphTracer(
+ input_lists.append(
+ self._debug_graphs[device_name].node_reversed_ref_inputs)
+ tracer = debug_graphs.DFSGraphTracer(
input_lists,
skip_node_names=self._get_merge_node_names(device_name))
tracer.trace(node_name)
@@ -1492,9 +1123,10 @@ class DebugDumpDir(object):
if not hasattr(self, "_merge_node_names"):
self._merge_node_names = {}
if device_name not in self._merge_node_names:
+ debug_graph = self._debug_graphs[device_name]
self._merge_node_names[device_name] = [
- node for node in self._node_op_types[device_name]
- if self._node_op_types[device_name][node] == "Merge"]
+ node for node in debug_graph.node_op_types
+ if debug_graph.node_op_types[node] == "Merge"]
return self._merge_node_names[device_name]
def find_some_path(self,
@@ -1546,12 +1178,13 @@ class DebugDumpDir(object):
"%s vs. %s" % (src_node_name, dst_node_name, src_device_name,
dst_device_name))
- input_lists = [self._node_inputs[dst_device_name]]
+ input_lists = [self._debug_graphs[dst_device_name].node_inputs]
+ debug_graph = self._debug_graphs[dst_device_name]
if include_control:
- input_lists.append(self._node_ctrl_inputs[dst_device_name])
+ input_lists.append(debug_graph.node_ctrl_inputs)
if include_reversed_ref:
- input_lists.append(self._node_reversed_ref_inputs[dst_device_name])
- tracer = _DFSGraphTracer(
+ input_lists.append(debug_graph.node_reversed_ref_inputs)
+ tracer = debug_graphs.DFSGraphTracer(
input_lists,
skip_node_names=self._get_merge_node_names(dst_device_name),
destination_node_name=src_node_name)
@@ -1561,7 +1194,7 @@ class DebugDumpDir(object):
try:
tracer.trace(dst_node_name)
- except _GraphTracingReachedDestination:
+ except debug_graphs.GraphTracingReachedDestination:
# Prune nodes not on the path.
inputs = [dst_node_name] + tracer.inputs()
depth_list = [0] + tracer.depth_list()
@@ -1592,15 +1225,16 @@ class DebugDumpDir(object):
from partition graphs yet.
"""
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError(
"Node recipients are not loaded from partition graphs yet.")
device_name = self._infer_device_name(device_name, node_name)
+ debug_graph = self._debug_graphs[device_name]
if is_control:
- return self._node_ctrl_recipients[device_name][node_name]
+ return debug_graph.node_ctrl_recipients[node_name]
else:
- return self._node_recipients[device_name][node_name]
+ return debug_graph.node_recipients[node_name]
def devices(self):
"""Get the list of device names.
@@ -1608,7 +1242,6 @@ class DebugDumpDir(object):
Returns:
(`list` of `str`) names of the devices.
"""
-
return self._device_names
def node_exists(self, node_name, device_name=None):
@@ -1627,20 +1260,18 @@ class DebugDumpDir(object):
LookupError: If no partition graphs have been loaded yet.
ValueError: If device_name is specified but cannot be found.
"""
-
- if self._node_inputs is None:
+ if not self._debug_graphs:
raise LookupError(
"Nodes have not been loaded from partition graphs yet.")
- if (device_name is not None) and device_name not in self._node_inputs:
+ if (device_name is not None) and device_name not in self._debug_graphs:
raise ValueError(
"The specified device_name '%s' cannot be found." % device_name)
- node_inputs_all_devices = (self._node_inputs if device_name is None
- else (self._node_inputs[device_name],))
-
- return any(node_name in node_inputs_all_devices[dev_name]
- for dev_name in node_inputs_all_devices)
+ for _, debug_graph in self._debug_graphs.items():
+ if node_name in debug_graph.node_inputs:
+ return True
+ return False
def node_device(self, node_name):
"""Get the names of the devices that has nodes of the specified name.
@@ -1658,8 +1289,7 @@ class DebugDumpDir(object):
from partition graphs yet.
ValueError: If the node does not exist in partition graphs.
"""
-
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError(
"Node devices are not loaded from partition graphs yet.")
@@ -1685,13 +1315,12 @@ class DebugDumpDir(object):
LookupError: If node op types have not been loaded
from partition graphs yet.
"""
-
- if self._partition_graphs is None:
+ if not self._debug_graphs:
raise LookupError(
"Node op types are not loaded from partition graphs yet.")
device_name = self._infer_device_name(device_name, node_name)
- return self._node_op_types[device_name][node_name]
+ return self._debug_graphs[device_name].node_op_types[node_name]
def debug_watch_keys(self, node_name, device_name=None):
"""Get all tensor watch keys of given node according to partition graphs.
@@ -1957,7 +1586,7 @@ class DebugDumpDir(object):
if self._python_graph is None:
raise LookupError("Python graph is not available for traceback lookup")
- node_name = get_node_name(element_name)
+ node_name = debug_graphs.get_node_name(element_name)
if node_name not in self._node_traceback:
raise KeyError("Cannot find node \"%s\" in Python graph" % node_name)
diff --git a/tensorflow/python/debug/lib/debug_data_test.py b/tensorflow/python/debug/lib/debug_data_test.py
index 694010a23c..7ce7ef6a97 100644
--- a/tensorflow/python/debug/lib/debug_data_test.py
+++ b/tensorflow/python/debug/lib/debug_data_test.py
@@ -49,77 +49,6 @@ class DeviceNamePathConversionTest(test_util.TensorFlowTestCase):
",job_ps,replica_1,task_2,cpu_0"))
-class ParseNodeOrTensorNameTest(test_util.TensorFlowTestCase):
-
- def testParseNodeName(self):
- node_name, slot = debug_data.parse_node_or_tensor_name("namespace1/node_1")
-
- self.assertEqual("namespace1/node_1", node_name)
- self.assertIsNone(slot)
-
- def testParseTensorName(self):
- node_name, slot = debug_data.parse_node_or_tensor_name(
- "namespace1/node_2:3")
-
- self.assertEqual("namespace1/node_2", node_name)
- self.assertEqual(3, slot)
-
-
-class NodeNameChecksTest(test_util.TensorFlowTestCase):
-
- def testIsCopyNode(self):
- self.assertTrue(debug_data.is_copy_node("__copy_ns1/ns2/node3_0"))
-
- self.assertFalse(debug_data.is_copy_node("copy_ns1/ns2/node3_0"))
- self.assertFalse(debug_data.is_copy_node("_copy_ns1/ns2/node3_0"))
- self.assertFalse(debug_data.is_copy_node("_copyns1/ns2/node3_0"))
- self.assertFalse(debug_data.is_copy_node("__dbg_ns1/ns2/node3_0"))
-
- def testIsDebugNode(self):
- self.assertTrue(
- debug_data.is_debug_node("__dbg_ns1/ns2/node3:0_0_DebugIdentity"))
-
- self.assertFalse(
- debug_data.is_debug_node("dbg_ns1/ns2/node3:0_0_DebugIdentity"))
- self.assertFalse(
- debug_data.is_debug_node("_dbg_ns1/ns2/node3:0_0_DebugIdentity"))
- self.assertFalse(
- debug_data.is_debug_node("_dbgns1/ns2/node3:0_0_DebugIdentity"))
- self.assertFalse(debug_data.is_debug_node("__copy_ns1/ns2/node3_0"))
-
-
-class ParseDebugNodeNameTest(test_util.TensorFlowTestCase):
-
- def testParseDebugNodeName_valid(self):
- debug_node_name_1 = "__dbg_ns_a/ns_b/node_c:1_0_DebugIdentity"
- (watched_node, watched_output_slot, debug_op_index,
- debug_op) = debug_data.parse_debug_node_name(debug_node_name_1)
-
- self.assertEqual("ns_a/ns_b/node_c", watched_node)
- self.assertEqual(1, watched_output_slot)
- self.assertEqual(0, debug_op_index)
- self.assertEqual("DebugIdentity", debug_op)
-
- def testParseDebugNodeName_invalidPrefix(self):
- invalid_debug_node_name_1 = "__copy_ns_a/ns_b/node_c:1_0_DebugIdentity"
-
- with self.assertRaisesRegexp(ValueError, "Invalid prefix"):
- debug_data.parse_debug_node_name(invalid_debug_node_name_1)
-
- def testParseDebugNodeName_missingDebugOpIndex(self):
- invalid_debug_node_name_1 = "__dbg_node1:0_DebugIdentity"
-
- with self.assertRaisesRegexp(ValueError, "Invalid debug node name"):
- debug_data.parse_debug_node_name(invalid_debug_node_name_1)
-
- def testParseDebugNodeName_invalidWatchedTensorName(self):
- invalid_debug_node_name_1 = "__dbg_node1_0_DebugIdentity"
-
- with self.assertRaisesRegexp(ValueError,
- "Invalid tensor name in debug node name"):
- debug_data.parse_debug_node_name(invalid_debug_node_name_1)
-
-
class HasNanOrInfTest(test_util.TensorFlowTestCase):
def setUp(self):
@@ -375,19 +304,5 @@ class DebugDumpDirTest(test_util.TensorFlowTestCase):
fake.assert_has_calls(expected_calls, any_order=True)
-class GetNodeNameAndOutputSlotTest(test_util.TensorFlowTestCase):
-
- def testParseTensorNameInputWorks(self):
- self.assertEqual("a", debug_data.get_node_name("a:0"))
- self.assertEqual(0, debug_data.get_output_slot("a:0"))
-
- self.assertEqual("_b", debug_data.get_node_name("_b:1"))
- self.assertEqual(1, debug_data.get_output_slot("_b:1"))
-
- def testParseNodeNameInputWorks(self):
- self.assertEqual("a", debug_data.get_node_name("a"))
- self.assertEqual(0, debug_data.get_output_slot("a"))
-
-
if __name__ == "__main__":
googletest.main()
diff --git a/tensorflow/python/debug/lib/debug_gradients.py b/tensorflow/python/debug/lib/debug_gradients.py
index 5306391613..b01a58719c 100644
--- a/tensorflow/python/debug/lib/debug_gradients.py
+++ b/tensorflow/python/debug/lib/debug_gradients.py
@@ -24,6 +24,7 @@ import uuid
import six
from tensorflow.python.debug.lib import debug_data
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import variables
@@ -34,7 +35,7 @@ _gradient_debuggers = {}
def _tensor_to_grad_debug_op_name(tensor, grad_debugger_uuid):
- op_name, slot = debug_data.parse_node_or_tensor_name(tensor.name)
+ op_name, slot = debug_graphs.parse_node_or_tensor_name(tensor.name)
return "%s_%d/%s%s" % (op_name, slot, _GRADIENT_DEBUG_TAG, grad_debugger_uuid)
@@ -407,7 +408,7 @@ def gradient_values_from_dump(grad_debugger, x_tensor, dump):
(grad_debugger.graph, dump.python_graph))
gradient_tensor = grad_debugger.gradient_tensor(x_tensor)
- node_name, output_slot = debug_data.parse_node_or_tensor_name(
+ node_name, output_slot = debug_graphs.parse_node_or_tensor_name(
gradient_tensor.name)
try:
diff --git a/tensorflow/python/debug/lib/debug_graphs.py b/tensorflow/python/debug/lib/debug_graphs.py
new file mode 100644
index 0000000000..20e2a6acfe
--- /dev/null
+++ b/tensorflow/python/debug/lib/debug_graphs.py
@@ -0,0 +1,430 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Classes and methods for processing debugger-decorated graphs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange # pylint: disable=redefined-builtin
+
+from tensorflow.python.framework import op_def_registry
+
+
+def parse_node_or_tensor_name(name):
+ """Get the node name from a string that can be node or tensor name.
+
+ Args:
+ name: An input node name (e.g., "node_a") or tensor name (e.g.,
+ "node_a:0"), as a str.
+
+ Returns:
+ 1) The node name, as a str. If the input name is a tensor name, i.e.,
+ contains a colon, the final colon and the following output slot
+ will be stripped.
+ 2) If the input name is a tensor name, the output slot, as an int. If
+ the input name is not a tensor name, None.
+ """
+
+ if ":" in name and not name.endswith(":"):
+ node_name = name[:name.rfind(":")]
+ output_slot = int(name[name.rfind(":") + 1:])
+
+ return node_name, output_slot
+ else:
+ return name, None
+
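The parsing convention above can be illustrated with a small standalone sketch; this is for illustration only (the canonical implementation is `debug_graphs.parse_node_or_tensor_name` in the diff above):

```python
def parse_node_or_tensor_name(name):
    """Split "node:slot" into (node, slot); plain node names yield slot None."""
    if ":" in name and not name.endswith(":"):
        # rpartition splits on the final colon, so namespaced node names
        # such as "ns/node" are preserved intact.
        node_name, _, slot = name.rpartition(":")
        return node_name, int(slot)
    return name, None

print(parse_node_or_tensor_name("namespace1/node_2:3"))  # ('namespace1/node_2', 3)
print(parse_node_or_tensor_name("namespace1/node_1"))    # ('namespace1/node_1', None)
```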
+
+def get_node_name(element_name):
+ node_name, _ = parse_node_or_tensor_name(element_name)
+ return node_name
+
+
+def get_output_slot(element_name):
+ """Get the output slot number from the name of a graph element.
+
+ If element_name is a node name without an output slot at the end, 0 will
+ be assumed.
+
+ Args:
+ element_name: (`str`) name of the graph element in question.
+
+ Returns:
+ (`int`) output slot number.
+ """
+ _, output_slot = parse_node_or_tensor_name(element_name)
+ return output_slot if output_slot is not None else 0
+
+
+def is_copy_node(node_name):
+ """Determine whether a node name is that of a debug Copy node.
+
+ Such nodes are inserted by TensorFlow core upon request in
+ RunOptions.debug_options.debug_tensor_watch_opts.
+
+ Args:
+ node_name: Name of the node.
+
+ Returns:
+ A bool indicating whether the input argument is the name of a debug Copy
+ node.
+ """
+ return node_name.startswith("__copy_")
+
+
+def is_debug_node(node_name):
+ """Determine whether a node name is that of a debug node.
+
+ Such nodes are inserted by TensorFlow core upon request in
+ RunOptions.debug_options.debug_tensor_watch_opts.
+
+ Args:
+ node_name: Name of the node.
+
+ Returns:
+ A bool indicating whether the input argument is the name of a debug node.
+ """
+ return node_name.startswith("__dbg_")
+
+
+def parse_debug_node_name(node_name):
+ """Parse the name of a debug node.
+
+ Args:
+ node_name: Name of the debug node.
+
+ Returns:
+ 1. Name of the watched node, as a str.
+ 2. Output slot index of the watched tensor, as an int.
+ 3. Index of the debug node, as an int.
+ 4. Name of the debug op, as a str, e.g., "DebugIdentity".
+
+ Raises:
+ ValueError: If the input node name is not a valid debug node name.
+ """
+ prefix = "__dbg_"
+
+ name = node_name
+ if not name.startswith(prefix):
+ raise ValueError("Invalid prefix in debug node name: '%s'" % node_name)
+
+ name = name[len(prefix):]
+
+ if name.count("_") < 2:
+ raise ValueError("Invalid debug node name: '%s'" % node_name)
+
+ debug_op = name[name.rindex("_") + 1:]
+ name = name[:name.rindex("_")]
+
+ debug_op_index = int(name[name.rindex("_") + 1:])
+ name = name[:name.rindex("_")]
+
+ if name.count(":") != 1:
+ raise ValueError("Invalid tensor name in debug node name: '%s'" % node_name)
+
+ watched_node_name = name[:name.index(":")]
+ watched_output_slot = int(name[name.index(":") + 1:])
+
+ return watched_node_name, watched_output_slot, debug_op_index, debug_op
+
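The `__dbg_<tensor>_<index>_<op>` naming scheme that `parse_debug_node_name` decodes can be sketched standalone as follows (illustrative only; mirrors the logic in the diff above rather than importing TensorFlow):

```python
def parse_debug_node_name(node_name):
    """Decode '__dbg_<node>:<slot>_<op_index>_<debug_op>' into its parts."""
    prefix = "__dbg_"
    if not node_name.startswith(prefix):
        raise ValueError("Invalid prefix in debug node name: '%s'" % node_name)
    name = node_name[len(prefix):]
    if name.count("_") < 2:
        raise ValueError("Invalid debug node name: '%s'" % node_name)
    # Peel fields off the right, since node names may themselves contain '_'.
    name, _, debug_op = name.rpartition("_")
    name, _, op_index = name.rpartition("_")
    if name.count(":") != 1:
        raise ValueError("Invalid tensor name in debug node name: '%s'" % node_name)
    watched_node, _, slot = name.partition(":")
    return watched_node, int(slot), int(op_index), debug_op

print(parse_debug_node_name("__dbg_ns_a/ns_b/node_c:1_0_DebugIdentity"))
# ('ns_a/ns_b/node_c', 1, 0, 'DebugIdentity')
```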
+
+class GraphTracingReachedDestination(Exception):
+ pass
+
+
+class DFSGraphTracer(object):
+ """Graph input tracer using depth-first search."""
+
+ def __init__(self,
+ input_lists,
+ skip_node_names=None,
+ destination_node_name=None):
+ """Constructor of _DFSGraphTracer.
+
+ Args:
+ input_lists: A list of dicts. Each dict is an adjacency (input) map from
+ the recipient node name as the key and the list of input node names
+ as the value.
+ skip_node_names: Optional: a list of node names to skip tracing.
+ destination_node_name: Optional: destination node name. If not `None`, it
+ should be the name of a destination node, as a str, and the graph tracing
+ will raise GraphTracingReachedDestination as soon as that node has been
+ reached.
+
+ Raises:
+ GraphTracingReachedDestination: if destination_node_name is not None and
+ the specified node is reached.
+ """
+
+ self._input_lists = input_lists
+ self._skip_node_names = skip_node_names
+
+ self._inputs = []
+ self._visited_nodes = []
+ self._depth_count = 0
+ self._depth_list = []
+
+ self._destination_node_name = destination_node_name
+
+ def trace(self, graph_element_name):
+ """Trace inputs.
+
+ Args:
+ graph_element_name: Name of the node or an output tensor of the node, as a
+ str.
+
+ Raises:
+ GraphTracingReachedDestination: if destination_node_name of this tracer
+ object is not None and the specified node is reached.
+ """
+ self._depth_count += 1
+
+ node_name = get_node_name(graph_element_name)
+ if node_name == self._destination_node_name:
+ raise GraphTracingReachedDestination()
+
+ if node_name in self._skip_node_names:
+ return
+ if node_name in self._visited_nodes:
+ return
+
+ self._visited_nodes.append(node_name)
+
+ for input_list in self._input_lists:
+ for inp in input_list[node_name]:
+ if get_node_name(inp) in self._visited_nodes:
+ continue
+ self._inputs.append(inp)
+ self._depth_list.append(self._depth_count)
+ self.trace(inp)
+
+ self._depth_count -= 1
+
+ def inputs(self):
+ return self._inputs
+
+ def depth_list(self):
+ return self._depth_list
+
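The depth-first input tracing that `DFSGraphTracer` performs can be sketched in miniature over a toy adjacency map (names are made up; the real class also tracks depth counts and a destination node):

```python
def trace_inputs(input_map, start, skip=()):
    """DFS over an input (adjacency) map, collecting transitive inputs once."""
    visited, order = set(), []

    def visit(name):
        node = name.split(":")[0]  # strip any ":slot" suffix from tensor names
        if node in skip or node in visited:
            return
        visited.add(node)
        for inp in input_map.get(node, []):
            if inp.split(":")[0] not in visited:
                order.append(inp)
                visit(inp)

    visit(start)
    return order

toy_graph = {"c": ["a:0", "b"], "b": ["a:0"], "a": []}
print(trace_inputs(toy_graph, "c"))  # ['a:0', 'b'] -- 'a' is visited only once
```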
+
+class DebugGraph(object):
+ """Represents a debugger-decorated graph."""
+
+ def __init__(self, debug_graph_def, device_name=None):
+ self._debug_graph_def = debug_graph_def
+
+ self._node_attributes = {}
+ self._node_inputs = {}
+ self._node_reversed_ref_inputs = {}
+ self._node_ctrl_inputs = {}
+ self._node_recipients = {}
+ self._node_ctrl_recipients = {}
+ self._node_devices = {}
+ self._node_op_types = {}
+ self._copy_send_nodes = []
+ self._ref_args = {}
+
+ self._device_name = device_name
+ if not self._device_name and debug_graph_def.node:
+ self._device_name = debug_graph_def.node[0].device
+
+ for node in debug_graph_def.node:
+ self._process_debug_graph_node(node)
+
+ self._prune_non_control_edges_of_debug_ops()
+ self._prune_control_edges_of_debug_ops()
+
+ self._populate_recipient_maps()
+
+ def _process_debug_graph_node(self, node):
+ """Process a node from the debug GraphDef.
+
+ Args:
+ node: (NodeDef) A partition-graph node to be processed.
+
+ Raises:
+ ValueError: If duplicate node names are encountered.
+ """
+
+ if is_debug_node(node.name):
+ # This is a debug node. Parse the node name and retrieve the
+ # information about debug watches on tensors. But do not include
+ # the node in the graph.
+ return
+
+ if node.name in self._node_inputs:
+ raise ValueError("Duplicate node name on device %s: '%s'" %
+ (self._device_name, node.name))
+
+ self._node_attributes[node.name] = node.attr
+
+ self._node_inputs[node.name] = []
+ self._node_ctrl_inputs[node.name] = []
+ self._node_recipients[node.name] = []
+ self._node_ctrl_recipients[node.name] = []
+
+ if node.name not in self._node_devices:
+ self._node_devices[node.name] = set()
+ self._node_devices[node.name].add(node.device)
+ self._node_op_types[node.name] = node.op
+ self._ref_args[node.name] = self._get_ref_args(node)
+
+ for inp in node.input:
+ if is_copy_node(inp) and (node.op == "_Send" or node.op == "_Retval"):
+ self._copy_send_nodes.append(node.name)
+
+ if inp.startswith("^"):
+ cinp = inp[1:]
+ self._node_ctrl_inputs[node.name].append(cinp)
+ else:
+ self._node_inputs[node.name].append(inp)
+
+ def _get_ref_args(self, node):
+ """Determine whether an input of an op is ref-type.
+
+ Args:
+ node: A `NodeDef`.
+
+ Returns:
+ A list of the arg names (as strs) that are ref-type.
+ """
+ op_def = op_def_registry.get_registered_ops().get(node.op)
+ ref_args = []
+ if op_def:
+ for i, output_arg in enumerate(op_def.output_arg):
+ if output_arg.is_ref:
+ arg_name = node.name if i == 0 else ("%s:%d" % (node.name, i))
+ ref_args.append(arg_name)
+ return ref_args
+
+ def _prune_non_control_edges_of_debug_ops(self):
+ """Prune (non-control) edges related to debug ops.
+
+ Prune the Copy ops and associated _Send ops inserted by the debugger out
+ from the non-control inputs and output recipients map. Replace the inputs
+ and recipients with original ones.
+ """
+ copy_nodes = []
+ for node in self._node_inputs:
+ if node in self._copy_send_nodes:
+ continue
+
+ if is_copy_node(node):
+ copy_nodes.append(node)
+
+ inputs = self._node_inputs[node]
+
+ for i in xrange(len(inputs)):
+ inp = inputs[i]
+ if is_copy_node(inp):
+ # Find the input to the Copy node, which should be the original
+ # input to the node.
+ orig_inp = self._node_inputs[inp][0]
+ inputs[i] = orig_inp
+
+ self._prune_nodes_from_input_and_recipient_maps(copy_nodes)
+ self._prune_nodes_from_input_and_recipient_maps(self._copy_send_nodes)
+
+ def _prune_control_edges_of_debug_ops(self):
+ """Prune control edges related to the debug ops."""
+ for node in self._node_ctrl_inputs:
+ ctrl_inputs = self._node_ctrl_inputs[node]
+ debug_op_inputs = []
+ for ctrl_inp in ctrl_inputs:
+ if is_debug_node(ctrl_inp):
+ debug_op_inputs.append(ctrl_inp)
+ for debug_op_inp in debug_op_inputs:
+ ctrl_inputs.remove(debug_op_inp)
+
+ def _populate_recipient_maps(self):
+ """Populate the map from node name to recipient(s) of its output(s).
+
+ This method also populates the input map based on reversed ref edges.
+ """
+ for node in self._node_inputs:
+ inputs = self._node_inputs[node]
+ for inp in inputs:
+ inp = get_node_name(inp)
+ if inp not in self._node_recipients:
+ self._node_recipients[inp] = []
+ self._node_recipients[inp].append(node)
+
+ if inp in self._ref_args:
+ if inp not in self._node_reversed_ref_inputs:
+ self._node_reversed_ref_inputs[inp] = []
+ self._node_reversed_ref_inputs[inp].append(node)
+
+ for node in self._node_ctrl_inputs:
+ ctrl_inputs = self._node_ctrl_inputs[node]
+ for ctrl_inp in ctrl_inputs:
+ if ctrl_inp in self._copy_send_nodes:
+ continue
+
+ if ctrl_inp not in self._node_ctrl_recipients:
+ self._node_ctrl_recipients[ctrl_inp] = []
+ self._node_ctrl_recipients[ctrl_inp].append(node)
+
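Inverting the input map into a recipient map, as `_populate_recipient_maps` does above, can be sketched with toy data (the real method additionally handles control edges and reversed ref edges):

```python
def build_recipient_map(node_inputs):
    """Invert a node -> inputs map into a node -> recipients map."""
    recipients = {}
    for node, inputs in node_inputs.items():
        for inp in inputs:
            inp_node = inp.split(":")[0]  # drop the output slot, keep the node name
            recipients.setdefault(inp_node, []).append(node)
    return recipients

print(build_recipient_map({"b": ["a:0"], "c": ["a:0", "b"]}))
# {'a': ['b', 'c'], 'b': ['c']}
```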
+ def _prune_nodes_from_input_and_recipient_maps(self, nodes_to_prune):
+ """Prune nodes out of input and recipient maps.
+
+ Args:
+ nodes_to_prune: (`list` of `str`) Names of the nodes to be pruned.
+ """
+ for node in nodes_to_prune:
+ del self._node_inputs[node]
+ del self._node_ctrl_inputs[node]
+ del self._node_recipients[node]
+ del self._node_ctrl_recipients[node]
+
+ @property
+ def device_name(self):
+ return self._device_name
+
+ @property
+ def debug_graph_def(self):
+ """The debugger-decorated GraphDef."""
+ return self._debug_graph_def
+
+ @property
+ def node_devices(self):
+ return self._node_devices
+
+ @property
+ def node_op_types(self):
+ return self._node_op_types
+
+ @property
+ def node_attributes(self):
+ return self._node_attributes
+
+ @property
+ def node_inputs(self):
+ return self._node_inputs
+
+ @property
+ def node_ctrl_inputs(self):
+ return self._node_ctrl_inputs
+
+ @property
+ def node_reversed_ref_inputs(self):
+ return self._node_reversed_ref_inputs
+
+ @property
+ def node_recipients(self):
+ return self._node_recipients
+
+ @property
+ def node_ctrl_recipients(self):
+ return self._node_ctrl_recipients
diff --git a/tensorflow/python/debug/lib/debug_graphs_test.py b/tensorflow/python/debug/lib/debug_graphs_test.py
new file mode 100644
index 0000000000..34257794f1
--- /dev/null
+++ b/tensorflow/python/debug/lib/debug_graphs_test.py
@@ -0,0 +1,112 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfdbg module debug_data."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.debug.lib import debug_graphs
+from tensorflow.python.framework import test_util
+from tensorflow.python.platform import test
+
+
+class ParseNodeOrTensorNameTest(test_util.TensorFlowTestCase):
+
+ def testParseNodeName(self):
+ node_name, slot = debug_graphs.parse_node_or_tensor_name(
+ "namespace1/node_1")
+
+ self.assertEqual("namespace1/node_1", node_name)
+ self.assertIsNone(slot)
+
+ def testParseTensorName(self):
+ node_name, slot = debug_graphs.parse_node_or_tensor_name(
+ "namespace1/node_2:3")
+
+ self.assertEqual("namespace1/node_2", node_name)
+ self.assertEqual(3, slot)
+
+
+class GetNodeNameAndOutputSlotTest(test_util.TensorFlowTestCase):
+
+ def testParseTensorNameInputWorks(self):
+ self.assertEqual("a", debug_graphs.get_node_name("a:0"))
+ self.assertEqual(0, debug_graphs.get_output_slot("a:0"))
+
+ self.assertEqual("_b", debug_graphs.get_node_name("_b:1"))
+ self.assertEqual(1, debug_graphs.get_output_slot("_b:1"))
+
+ def testParseNodeNameInputWorks(self):
+ self.assertEqual("a", debug_graphs.get_node_name("a"))
+ self.assertEqual(0, debug_graphs.get_output_slot("a"))
+
+
+class NodeNameChecksTest(test_util.TensorFlowTestCase):
+
+ def testIsCopyNode(self):
+ self.assertTrue(debug_graphs.is_copy_node("__copy_ns1/ns2/node3_0"))
+
+ self.assertFalse(debug_graphs.is_copy_node("copy_ns1/ns2/node3_0"))
+ self.assertFalse(debug_graphs.is_copy_node("_copy_ns1/ns2/node3_0"))
+ self.assertFalse(debug_graphs.is_copy_node("_copyns1/ns2/node3_0"))
+ self.assertFalse(debug_graphs.is_copy_node("__dbg_ns1/ns2/node3_0"))
+
+ def testIsDebugNode(self):
+ self.assertTrue(
+ debug_graphs.is_debug_node("__dbg_ns1/ns2/node3:0_0_DebugIdentity"))
+
+ self.assertFalse(
+ debug_graphs.is_debug_node("dbg_ns1/ns2/node3:0_0_DebugIdentity"))
+ self.assertFalse(
+ debug_graphs.is_debug_node("_dbg_ns1/ns2/node3:0_0_DebugIdentity"))
+ self.assertFalse(
+ debug_graphs.is_debug_node("_dbgns1/ns2/node3:0_0_DebugIdentity"))
+ self.assertFalse(debug_graphs.is_debug_node("__copy_ns1/ns2/node3_0"))
+
+
+class ParseDebugNodeNameTest(test_util.TensorFlowTestCase):
+
+ def testParseDebugNodeName_valid(self):
+ debug_node_name_1 = "__dbg_ns_a/ns_b/node_c:1_0_DebugIdentity"
+ (watched_node, watched_output_slot, debug_op_index,
+ debug_op) = debug_graphs.parse_debug_node_name(debug_node_name_1)
+
+ self.assertEqual("ns_a/ns_b/node_c", watched_node)
+ self.assertEqual(1, watched_output_slot)
+ self.assertEqual(0, debug_op_index)
+ self.assertEqual("DebugIdentity", debug_op)
+
+ def testParseDebugNodeName_invalidPrefix(self):
+ invalid_debug_node_name_1 = "__copy_ns_a/ns_b/node_c:1_0_DebugIdentity"
+
+ with self.assertRaisesRegexp(ValueError, "Invalid prefix"):
+ debug_graphs.parse_debug_node_name(invalid_debug_node_name_1)
+
+ def testParseDebugNodeName_missingDebugOpIndex(self):
+ invalid_debug_node_name_1 = "__dbg_node1:0_DebugIdentity"
+
+ with self.assertRaisesRegexp(ValueError, "Invalid debug node name"):
+ debug_graphs.parse_debug_node_name(invalid_debug_node_name_1)
+
+ def testParseDebugNodeName_invalidWatchedTensorName(self):
+ invalid_debug_node_name_1 = "__dbg_node1_0_DebugIdentity"
+
+ with self.assertRaisesRegexp(ValueError,
+ "Invalid tensor name in debug node name"):
+ debug_graphs.parse_debug_node_name(invalid_debug_node_name_1)
+
+
+if __name__ == "__main__":
+ test.main()
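The naming scheme these tests exercise can be sketched independently. The helper below is a simplified re-implementation for illustration only (the real `debug_graphs.parse_debug_node_name` may differ in details); it parses `__dbg_<node>:<slot>_<index>_<op>` names like those in the tests above:

```python
def parse_debug_node_name(name):
    """Parse '__dbg_<node>:<slot>_<index>_<op>' into its components.

    Illustrative sketch only, not the actual debug_graphs implementation.
    """
    prefix = "__dbg_"
    if not name.startswith(prefix):
        raise ValueError("Invalid prefix in debug node name: %r" % name)
    body = name[len(prefix):]
    # Expect <watched_tensor>_<debug_op_index>_<debug_op_name>.
    parts = body.split("_")
    if len(parts) < 3:
        raise ValueError("Invalid debug node name: %r" % name)
    debug_op = parts[-1]
    debug_op_index = int(parts[-2])
    watched_tensor = "_".join(parts[:-2])
    if ":" not in watched_tensor:
        raise ValueError("Invalid tensor name in debug node name: %r" % name)
    node_name, slot = watched_tensor.split(":")
    return node_name, int(slot), debug_op_index, debug_op


print(parse_debug_node_name("__dbg_ns_a/ns_b/node_c:1_0_DebugIdentity"))
# ('ns_a/ns_b/node_c', 1, 0, 'DebugIdentity')
```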
diff --git a/tensorflow/python/debug/lib/grpc_debug_server.py b/tensorflow/python/debug/lib/grpc_debug_server.py
index 309fdb3bce..64e4f00168 100644
--- a/tensorflow/python/debug/lib/grpc_debug_server.py
+++ b/tensorflow/python/debug/lib/grpc_debug_server.py
@@ -29,9 +29,10 @@ from six.moves import queue
from tensorflow.core.debug import debug_service_pb2
from tensorflow.core.framework import graph_pb2
-from tensorflow.python.debug.lib import debug_data
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.debug.lib import debug_service_pb2_grpc
from tensorflow.python.platform import tf_logging as logging
+from tensorflow.python.util import compat
DebugWatch = collections.namedtuple("DebugWatch",
["node_name", "output_slot", "debug_op"])
@@ -219,7 +220,8 @@ class EventListenerBaseServicer(debug_service_pb2_grpc.EventListenerServicer):
"""
value = event.summary.value[0]
- debugger_plugin_metadata = json.loads(value.metadata.plugin_data.content)
+ debugger_plugin_metadata = json.loads(
+ compat.as_text(value.metadata.plugin_data.content))
device_name = debugger_plugin_metadata["device"]
num_chunks = debugger_plugin_metadata["numChunks"]
chunk_index = debugger_plugin_metadata["chunkIndex"]
@@ -294,10 +296,10 @@ class EventListenerBaseServicer(debug_service_pb2_grpc.EventListenerServicer):
def _process_graph_def(self, graph_def):
for node_def in graph_def.node:
- if (debug_data.is_debug_node(node_def.name) and
+ if (debug_graphs.is_debug_node(node_def.name) and
node_def.attr["gated_grpc"].b):
node_name, output_slot, _, debug_op = (
- debug_data.parse_debug_node_name(node_def.name))
+ debug_graphs.parse_debug_node_name(node_def.name))
self._gated_grpc_debug_watches.add(
DebugWatch(node_name, output_slot, debug_op))
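The `compat.as_text` wrapping above matters because `plugin_data.content` is a protobuf `bytes` field: on Python 3 before 3.6, `json.loads` rejects `bytes` input. A minimal sketch with a stand-in for `compat.as_text` (the real helper lives in `tensorflow.python.util.compat`):

```python
import json


def as_text(bytes_or_text, encoding="utf-8"):
    """Minimal stand-in for tensorflow.python.util.compat.as_text."""
    if isinstance(bytes_or_text, bytes):
        return bytes_or_text.decode(encoding)
    return bytes_or_text


# plugin_data.content is a protobuf `bytes` field, so it arrives as bytes.
content = b'{"device": "/job:0", "numChunks": 2, "chunkIndex": 0}'
metadata = json.loads(as_text(content))
print(metadata["device"], metadata["numChunks"])
```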
diff --git a/tensorflow/python/debug/lib/grpc_debug_test_server.py b/tensorflow/python/debug/lib/grpc_debug_test_server.py
index 5e3743d9d3..2a87d861d2 100644
--- a/tensorflow/python/debug/lib/grpc_debug_test_server.py
+++ b/tensorflow/python/debug/lib/grpc_debug_test_server.py
@@ -41,6 +41,7 @@ from tensorflow.python.debug.lib import grpc_debug_server
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import errors
from tensorflow.python.ops import variables
+from tensorflow.python.util import compat
def _get_dump_file_path(dump_root, device_name, debug_node_name):
@@ -198,7 +199,7 @@ class EventListenerTestStreamHandler(
if not summary_metadata.plugin_data:
raise ValueError("The value lacks plugin data.")
try:
- content = json.loads(summary_metadata.plugin_data.content)
+ content = json.loads(compat.as_text(summary_metadata.plugin_data.content))
except ValueError as err:
raise ValueError("Could not parse content into JSON: %r, %r" % (content,
err))
diff --git a/tensorflow/python/debug/lib/session_debug_file_test.py b/tensorflow/python/debug/lib/session_debug_file_test.py
index 48f31771db..aa5314dda5 100644
--- a/tensorflow/python/debug/lib/session_debug_file_test.py
+++ b/tensorflow/python/debug/lib/session_debug_file_test.py
@@ -34,7 +34,7 @@ from tensorflow.python.ops import variables
from tensorflow.python.platform import googletest
-class SessionDebugTest(session_debug_testlib.SessionDebugTestBase):
+class SessionDebugFileTest(session_debug_testlib.SessionDebugTestBase):
def _no_rewrite_session_config(self):
rewriter_config = rewriter_config_pb2.RewriterConfig(
diff --git a/tensorflow/python/debug/lib/session_debug_testlib.py b/tensorflow/python/debug/lib/session_debug_testlib.py
index 08b3e75e7c..d4b9d06b54 100644
--- a/tensorflow/python/debug/lib/session_debug_testlib.py
+++ b/tensorflow/python/debug/lib/session_debug_testlib.py
@@ -33,6 +33,7 @@ from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.core.util import event_pb2
from tensorflow.python.client import session
from tensorflow.python.debug.lib import debug_data
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.debug.lib import debug_utils
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
@@ -242,7 +243,7 @@ class SessionDebugTestBase(test_util.TensorFlowTestCase):
v_copy_node_def = None
for partition_graph in run_metadata.partition_graphs:
for node_def in partition_graph.node:
- if debug_data.is_copy_node(node_def.name):
+ if debug_graphs.is_copy_node(node_def.name):
if node_def.name == "__copy_u_0":
u_copy_node_def = node_def
elif node_def.name == "__copy_v_0":
diff --git a/tensorflow/python/debug/lib/stepper.py b/tensorflow/python/debug/lib/stepper.py
index c814520b7e..1fa0b3dba2 100644
--- a/tensorflow/python/debug/lib/stepper.py
+++ b/tensorflow/python/debug/lib/stepper.py
@@ -27,6 +27,7 @@ import six
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.debug.lib import debug_data
+from tensorflow.python.debug.lib import debug_graphs
from tensorflow.python.debug.lib import debug_utils
from tensorflow.python.framework import ops
from tensorflow.python.ops import session_ops
@@ -706,8 +707,8 @@ class NodeStepper(object):
if ":" in element_name:
debug_utils.add_debug_tensor_watch(
run_options,
- debug_data.get_node_name(element_name),
- output_slot=debug_data.get_output_slot(element_name),
+ debug_graphs.get_node_name(element_name),
+ output_slot=debug_graphs.get_output_slot(element_name),
debug_urls=["file://" + dump_path])
return dump_path, run_options
@@ -961,5 +962,5 @@ class NodeStepper(object):
The node associated with element in the graph.
"""
- node_name, _ = debug_data.parse_node_or_tensor_name(element.name)
+ node_name, _ = debug_graphs.parse_node_or_tensor_name(element.name)
return self._sess.graph.as_graph_element(node_name)
diff --git a/tensorflow/python/eager/backprop.py b/tensorflow/python/eager/backprop.py
index 326f56ebf9..46872e617a 100644
--- a/tensorflow/python/eager/backprop.py
+++ b/tensorflow/python/eager/backprop.py
@@ -186,7 +186,7 @@ def _aggregate_grads(gradients):
ret.append(g_list[0])
else:
# TODO(xpan): Aggregate IndexedSlices.
- ret.append((g_list[0][0], math_ops.add_n(zip(*g_list)[1])))
+ ret.append((g_list[0][0], math_ops.add_n(list(zip(*g_list))[1])))
return ret
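The `list(zip(...))` change is a Python 3 compatibility fix: `zip` returns a lazy iterator in Python 3, so indexing it directly raises `TypeError`. A small demonstration:

```python
g_list = [("i0", 1.0), ("i1", 2.0), ("i2", 3.0)]

# Python 2: zip(*g_list)[1] == (1.0, 2.0, 3.0).
# Python 3: zip() returns a lazy iterator, so indexing raises TypeError.
try:
    zip(*g_list)[1]
except TypeError:
    pass  # 'zip' object is not subscriptable

# Materializing with list() works on both versions:
grads = list(zip(*g_list))[1]
print(grads)
# (1.0, 2.0, 3.0)
```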
diff --git a/tensorflow/python/eager/function.py b/tensorflow/python/eager/function.py
index 980b6c883f..227520eea8 100644
--- a/tensorflow/python/eager/function.py
+++ b/tensorflow/python/eager/function.py
@@ -373,6 +373,13 @@ def _defun_internal(name, func, args, kwds):
"""Defines and returns graph-mode version of func."""
with context.graph_mode():
tmp_graph = ops.Graph()
+ # Copy the graph collections to ensure summaries and other things work. This
+ # lets the function access (but not mutate) collections of the containing
+ # graph, such as the global step and the summary writer collections.
+ curr_graph = ops.get_default_graph()
+ for collection in curr_graph.collections:
+ tmp_graph.get_collection_ref(collection)[:] = curr_graph.get_collection(
+ collection)
with tmp_graph.as_default():
func_inputs = _get_defun_inputs(args)
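The `[:]` slice assignment in the collection-copying loop replaces the contents of each target list in place, preserving the identity of the list object that `get_collection_ref` returns. A toy sketch of the same pattern with plain dicts and lists (not the real `Graph` API):

```python
def get_collection_ref(graph, name):
    # Return the mutable list for `name`, creating it if absent,
    # loosely mimicking Graph.get_collection_ref.
    return graph.setdefault(name, [])


curr_graph = {"summaries": ["s1", "s2"], "global_step": ["step"]}
tmp_graph = {}

for name in curr_graph:
    # Slice assignment copies the elements without rebinding the list,
    # so any existing reference to the collection list stays valid.
    get_collection_ref(tmp_graph, name)[:] = curr_graph[name]

print(tmp_graph["summaries"])                             # ['s1', 's2']
print(tmp_graph["summaries"] is curr_graph["summaries"])  # False: a copy
```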
diff --git a/tensorflow/python/eager/python_eager_op_gen.cc b/tensorflow/python/eager/python_eager_op_gen.cc
index 62579bd23a..a526856794 100644
--- a/tensorflow/python/eager/python_eager_op_gen.cc
+++ b/tensorflow/python/eager/python_eager_op_gen.cc
@@ -661,7 +661,6 @@ string GetEagerPythonOps(const OpList& ops,
const std::vector<string>& hidden_ops,
bool require_shapes,
const string& source_file_name = "") {
-
string result;
// Header
// TODO(josh11b): Mention the library for which wrappers are being generated.
@@ -669,7 +668,7 @@ string GetEagerPythonOps(const OpList& ops,
This file is MACHINE GENERATED! Do not edit.
)");
-
+
// Mention the original source file so someone tracing back through generated
// Python code will know where to look next.
if (!source_file_name.empty()) {
@@ -677,7 +676,7 @@ This file is MACHINE GENERATED! Do not edit.
strings::StrAppend(&result, source_file_name);
strings::StrAppend(&result, "\n");
}
-
+
strings::StrAppend(&result, R"("""
import collections as _collections
@@ -759,11 +758,10 @@ from tensorflow.python.framework import op_def_library as _op_def_library
void PrintEagerPythonOps(const OpList& ops,
const std::vector<string>& hidden_ops,
- bool require_shapes,
- const string& source_file_name)
-{
- printf("%s", GetEagerPythonOps(ops, hidden_ops, require_shapes,
- source_file_name).c_str());
+ bool require_shapes, const string& source_file_name) {
+ printf("%s",
+ GetEagerPythonOps(ops, hidden_ops, require_shapes, source_file_name)
+ .c_str());
}
string GetEagerPythonWrappers(const char* op_list_buf, size_t op_list_len) {
diff --git a/tensorflow/python/estimator/model_fn.py b/tensorflow/python/estimator/model_fn.py
index 1a4b0c5fc0..cfa4be5c7d 100644
--- a/tensorflow/python/estimator/model_fn.py
+++ b/tensorflow/python/estimator/model_fn.py
@@ -131,7 +131,10 @@ class EstimatorSpec(
train_op: Op for the training step.
eval_metric_ops: Dict of metric results keyed by name. The values of the
dict are the results of calling a metric function, namely a
- `(metric_tensor, update_op)` tuple.
+ `(metric_tensor, update_op)` tuple. Evaluating `metric_tensor` should have
+ no impact on state (it is typically a pure computation based on
+ variables). For example, it should not trigger the `update_op` or
+ require any input fetching.
export_outputs: Describes the output signatures to be exported to
`SavedModel` and used during serving.
A dict `{name: output}` where:
diff --git a/tensorflow/python/estimator/run_config.py b/tensorflow/python/estimator/run_config.py
index 2ba51ec9eb..e242a60aab 100644
--- a/tensorflow/python/estimator/run_config.py
+++ b/tensorflow/python/estimator/run_config.py
@@ -19,10 +19,14 @@ from __future__ import division
from __future__ import print_function
import copy
+import json
+import os
import six
from tensorflow.core.protobuf import config_pb2
+from tensorflow.python.platform import tf_logging as logging
+from tensorflow.python.training import server_lib
_USE_DEFAULT = object()
@@ -44,6 +48,56 @@ _SAVE_CKPT_ERR = (
'`save_checkpoints_steps` and `save_checkpoints_secs` cannot be both set.'
)
+_TF_CONFIG_ENV = 'TF_CONFIG'
+_TASK_ENV_KEY = 'task'
+_TASK_TYPE_KEY = 'type'
+_TASK_ID_KEY = 'index'
+_CLUSTER_KEY = 'cluster'
+_LOCAL_MASTER = ''
+_GRPC_SCHEME = 'grpc://'
+
+
+def _get_master(cluster_spec, task_type, task_id):
+ """Returns the appropriate string for the TensorFlow master."""
+ if not cluster_spec:
+ return _LOCAL_MASTER
+
+ jobs = cluster_spec.jobs
+ # Lookup the master in cluster_spec using task_type and task_id,
+ # if possible.
+ if task_type not in jobs:
+ raise ValueError(
+ '%s is not a valid task_type in the cluster_spec:\n'
+ '%s\n\n'
+ 'Note that these values may be coming from the TF_CONFIG environment '
+ 'variable.' % (task_type, cluster_spec))
+ addresses = cluster_spec.job_tasks(task_type)
+ if not 0 <= task_id < len(addresses):
+ raise ValueError(
+ '%d is not a valid task_id for task_type %s in the cluster_spec:\n'
+ '%s\n\n'
+ 'Note that these values may be coming from the TF_CONFIG environment '
+ 'variable.' % (task_id, task_type, cluster_spec))
+ return _GRPC_SCHEME + addresses[task_id]
+
+
+def _count_ps(cluster_spec):
+ """Counts the number of parameter servers in cluster_spec."""
+ if not cluster_spec:
+ return 0
+
+ return len(cluster_spec.as_dict().get(TaskType.PS, []))
+
+
+def _count_worker(cluster_spec):
+ """Counts the number of workers (including chief) in cluster_spec."""
+ if not cluster_spec:
+ raise RuntimeError(
+ 'Internal error: `_count_worker` does not expect empty cluster_spec.')
+
+ return (len(cluster_spec.as_dict().get(TaskType.WORKER, [])) +
+ len(cluster_spec.as_dict().get(TaskType.CHIEF, [])))
+
def _validate_save_ckpt_with_replaced_keys(new_copy, replaced_keys):
"""Validates the save ckpt properties."""
@@ -103,6 +157,8 @@ class TaskType(object):
MASTER = 'master'
PS = 'ps'
WORKER = 'worker'
+ CHIEF = 'chief'
+ EVALUATOR = 'evaluator'
class RunConfig(object):
@@ -120,6 +176,95 @@ class RunConfig(object):
log_step_count_steps=100):
"""Constructs a RunConfig.
+ All distributed training related properties `cluster_spec`, `is_chief`,
+ `master` , `num_worker_replicas`, `num_ps_replicas`, `task_id`, and
+ `task_type` are set based on the `TF_CONFIG` environment variable, if the
+ pertinent information is present. The `TF_CONFIG` environment variable is a
+ JSON object with attributes: `cluster` and `task`.
+
+ `cluster` is a JSON serialized version of `ClusterSpec`'s Python dict from
+ `server_lib.py`, mapping task types (usually one of the `TaskType` enums) to
+ a list of task addresses.
+
+ `task` has two attributes: `type` and `index`, where `type` can be any of
+ the task types in `cluster`. When `TF_CONFIG` contains said information,
+ the following properties are set on this class:
+
+ * `cluster_spec` is parsed from `TF_CONFIG['cluster']`. Defaults to {}. If
+ present, must have one and only one node in the `chief` attribute of
+ `cluster_spec`.
+ * `task_type` is set to `TF_CONFIG['task']['type']`. Must be set if
+ `cluster_spec` is present; must be `worker` (the default value) if
+ `cluster_spec` is not set.
+ * `task_id` is set to `TF_CONFIG['task']['index']`. Must be set if
+ `cluster_spec` is present; must be 0 (the default value) if
+ `cluster_spec` is not set.
+ * `master` is determined by looking up `task_type` and `task_id` in the
+ `cluster_spec`. Defaults to ''.
+ * `num_ps_replicas` is set by counting the number of nodes listed
+ in the `ps` attribute of `cluster_spec`. Defaults to 0.
+ * `num_worker_replicas` is set by counting the number of nodes listed
+ in the `worker` and `chief` attributes of `cluster_spec`. Defaults to 1.
+ * `is_chief` is determined based on `task_type` and `cluster`.
+
+ There is a special node with `task_type` as `evaluator`, which is not part
+ of the (training) `cluster_spec`. It handles the distributed evaluation job.
+
+ Example of non-chief node:
+ ```
+ cluster = {'chief': ['host0:2222'],
+ 'ps': ['host1:2222', 'host2:2222'],
+ 'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
+ os.environ['TF_CONFIG'] = json.dumps(
+ {'cluster': cluster,
+ 'task': {'type': 'worker', 'index': 1}})
+ config = RunConfig()
+ assert config.master == 'grpc://host4:2222'
+ assert config.task_id == 1
+ assert config.num_ps_replicas == 2
+ assert config.num_worker_replicas == 4
+ assert config.cluster_spec == server_lib.ClusterSpec(cluster)
+ assert config.task_type == 'worker'
+ assert not config.is_chief
+ ```
+
+ Example of chief node:
+ ```
+ cluster = {'chief': ['host0:2222'],
+ 'ps': ['host1:2222', 'host2:2222'],
+ 'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
+ os.environ['TF_CONFIG'] = json.dumps(
+ {'cluster': cluster,
+ 'task': {'type': 'chief', 'index': 0}})
+ config = RunConfig()
+ assert config.master == 'grpc://host0:2222'
+ assert config.task_id == 0
+ assert config.num_ps_replicas == 2
+ assert config.num_worker_replicas == 4
+ assert config.cluster_spec == server_lib.ClusterSpec(cluster)
+ assert config.task_type == 'chief'
+ assert config.is_chief
+ ```
+
+ Example of evaluator node (evaluator is not part of training cluster):
+ ```
+ cluster = {'chief': ['host0:2222'],
+ 'ps': ['host1:2222', 'host2:2222'],
+ 'worker': ['host3:2222', 'host4:2222', 'host5:2222']}
+ os.environ['TF_CONFIG'] = json.dumps(
+ {'cluster': cluster,
+ 'task': {'type': 'evaluator', 'index': 0}})
+ config = RunConfig()
+ assert config.master == ''
+ assert config.evaluation_master == ''
+ assert config.task_id == 0
+ assert config.num_ps_replicas == 0
+ assert config.num_worker_replicas == 0
+ assert config.cluster_spec == server_lib.ClusterSpec({})
+ assert config.task_type == 'evaluator'
+ assert not config.is_chief
+ ```
+
N.B.: If `save_checkpoints_steps` or `save_checkpoints_secs` is set,
`keep_checkpoint_max` might need to be adjusted accordingly, especially in
distributed training. For example, setting `save_checkpoints_secs` as 60
@@ -137,9 +282,10 @@ class RunConfig(object):
save_checkpoints_steps: Save checkpoints every this many steps. Can not be
specified with `save_checkpoints_secs`.
save_checkpoints_secs: Save checkpoints every this many seconds. Can not
- be specified with `save_checkpoints_steps`. Defaults to 600 seconds.
- If both `save_checkpoints_steps` and `save_checkpoints_secs` are None,
- then checkpoints are disabled.
+ be specified with `save_checkpoints_steps`. Defaults to 600 seconds if
+ neither `save_checkpoints_steps` nor `save_checkpoints_secs` is set in
+ the constructor. If both `save_checkpoints_steps` and
+ `save_checkpoints_secs` are None, then checkpoints are disabled.
session_config: a ConfigProto used to set session parameters, or None.
keep_checkpoint_max: The maximum number of recent checkpoint files to
keep. As new files are created, older files are deleted. If None or 0,
@@ -181,9 +327,79 @@ class RunConfig(object):
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
log_step_count_steps=log_step_count_steps)
+ self._init_distributed_setting_from_environment_var()
+
+ def _init_distributed_setting_from_environment_var(self):
+ """Initialize distributed properties based on environment variable."""
+
+ tf_config = json.loads(os.environ.get(_TF_CONFIG_ENV) or '{}')
+ if tf_config:
+ logging.info('TF_CONFIG environment variable: %s', tf_config)
+
+ self._cluster_spec = server_lib.ClusterSpec(tf_config.get(_CLUSTER_KEY, {}))
+ task_env = tf_config.get(_TASK_ENV_KEY, {})
+
+ if self._cluster_spec:
+ # Distributed mode.
+ if TaskType.CHIEF not in self._cluster_spec.jobs:
+ raise ValueError(
+ 'If "cluster" is set in TF_CONFIG, it must have one "chief" node.')
+ if len(self._cluster_spec.job_tasks(TaskType.CHIEF)) > 1:
+ raise ValueError(
+ 'The "cluster" in TF_CONFIG must have only one "chief" node.')
+
+ self._task_type = task_env.get(_TASK_TYPE_KEY, None)
+ task_id = task_env.get(_TASK_ID_KEY, None)
+
+ if not self._task_type:
+ raise ValueError(
+ 'If "cluster" is set in TF_CONFIG, task type must be set.')
+ if task_id is None:
+ raise ValueError(
+ 'If "cluster" is set in TF_CONFIG, task index must be set.')
+
+ self._task_id = int(task_id)
+
+ # Check the task id bounds. Upper bound is not necessary as
+ # - for evaluator, there is no upper bound.
+ # - for non-evaluator, task id is upper bounded by the number of jobs in
+ # cluster spec, which will be checked later (when retrieving the `master`)
+ if self._task_id < 0:
+ raise ValueError('Task index must be non-negative number.')
+
+ if self._task_type != TaskType.EVALUATOR:
+ self._master = _get_master(
+ self._cluster_spec, self._task_type, self._task_id)
+ self._num_ps_replicas = _count_ps(self._cluster_spec)
+ self._num_worker_replicas = _count_worker(self._cluster_spec)
+ else:
+ # Evaluator is not part of the training cluster.
+ self._cluster_spec = server_lib.ClusterSpec({})
+ self._master = _LOCAL_MASTER
+ self._num_ps_replicas = 0
+ self._num_worker_replicas = 0
+
+ self._is_chief = self._task_type == TaskType.CHIEF
+ else:
+ # Local mode.
+ self._task_type = task_env.get(_TASK_TYPE_KEY, TaskType.WORKER)
+ self._task_id = int(task_env.get(_TASK_ID_KEY, 0))
+
+ if self._task_type != TaskType.WORKER:
+ raise ValueError(
+ 'If "cluster" is not set in TF_CONFIG, task type must be WORKER.')
+ if self._task_id != 0:
+ raise ValueError(
+ 'If "cluster" is not set in TF_CONFIG, task index must be 0.')
+
+ self._master = ''
+ self._is_chief = True
+ self._num_ps_replicas = 0
+ self._num_worker_replicas = 1
+
@property
def cluster_spec(self):
- return None
+ return self._cluster_spec
@property
def evaluation_master(self):
@@ -191,27 +407,27 @@ class RunConfig(object):
@property
def is_chief(self):
- return True
+ return self._is_chief
@property
def master(self):
- return ''
+ return self._master
@property
def num_ps_replicas(self):
- return 0
+ return self._num_ps_replicas
@property
def num_worker_replicas(self):
- return 1
+ return self._num_worker_replicas
@property
def task_id(self):
- return 0
+ return self._task_id
@property
def task_type(self):
- return TaskType.WORKER
+ return self._task_type
@property
def tf_random_seed(self):
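The master-lookup logic added above can be sketched with a plain dict standing in for `ClusterSpec` (an illustrative re-implementation, not the module's `_get_master` itself):

```python
GRPC_SCHEME = "grpc://"


def get_master(cluster, task_type, task_id):
    """Return the grpc master address for (task_type, task_id).

    Simplified sketch of the _get_master helper, using a plain dict
    of {task_type: [addresses]} instead of a ClusterSpec object.
    """
    if not cluster:
        return ""  # local master
    if task_type not in cluster:
        raise ValueError("%s is not a valid task_type" % task_type)
    addresses = cluster[task_type]
    if not 0 <= task_id < len(addresses):
        raise ValueError("%d is not a valid task_id" % task_id)
    return GRPC_SCHEME + addresses[task_id]


cluster = {"chief": ["host0:2222"],
           "worker": ["host3:2222", "host4:2222", "host5:2222"]}
print(get_master(cluster, "worker", 1))
# grpc://host4:2222
```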
diff --git a/tensorflow/python/estimator/run_config_test.py b/tensorflow/python/estimator/run_config_test.py
index 4a09417630..cd135a3468 100644
--- a/tensorflow/python/estimator/run_config_test.py
+++ b/tensorflow/python/estimator/run_config_test.py
@@ -18,6 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
+import json
+
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.estimator import run_config as run_config_lib
from tensorflow.python.platform import test
@@ -36,6 +38,22 @@ _SESSION_CONFIG_ERR = 'session_config must be instance of ConfigProto'
_KEEP_CKPT_MAX_ERR = 'keep_checkpoint_max should be >= 0'
_KEEP_CKPT_HOURS_ERR = 'keep_checkpoint_every_n_hours should be > 0'
_TF_RANDOM_SEED_ERR = 'tf_random_seed must be integer'
+_ONE_CHIEF_ERR = 'The "cluster" in TF_CONFIG must have only one "chief" node.'
+_MISSING_CHIEF_ERR = 'If "cluster" is set .* it must have one "chief" node'
+_MISSING_TASK_TYPE_ERR = 'If "cluster" is set .* task type must be set'
+_MISSING_TASK_ID_ERR = 'If "cluster" is set .* task index must be set'
+_INVALID_TASK_INDEX_ERR = 'is not a valid task_id'
+_NEGATIVE_TASK_INDEX_ERR = 'Task index must be non-negative number.'
+_INVALID_TASK_TYPE_ERR = 'is not a valid task_type'
+_INVALID_TASK_TYPE_FOR_LOCAL_ERR = (
+ 'If "cluster" is not set in TF_CONFIG, task type must be WORKER.')
+_INVALID_TASK_INDEX_FOR_LOCAL_ERR = (
+ 'If "cluster" is not set in TF_CONFIG, task index must be 0.')
+
+
+def _create_run_config_with_cluster_spec(tf_config, **kwargs):
+ with test.mock.patch.dict('os.environ', {'TF_CONFIG': json.dumps(tf_config)}):
+ return run_config_lib.RunConfig(**kwargs)
class RunConfigTest(test.TestCase):
@@ -189,6 +207,283 @@ class RunConfigTest(test.TestCase):
run_config_lib.RunConfig(tf_random_seed=1.0)
+class RunConfigDistributedSettingTest(test.TestCase):
+
+ def _assert_distributed_properties(self, run_config,
+ expected_cluster_spec,
+ expected_task_type,
+ expected_task_id,
+ expected_master,
+ expected_evaluation_master,
+ expected_is_chief,
+ expected_num_worker_replicas,
+ expected_num_ps_replicas):
+ self.assertEqual(expected_cluster_spec, run_config.cluster_spec.as_dict())
+ self.assertEqual(expected_task_type, run_config.task_type)
+ self.assertEqual(expected_task_id, run_config.task_id)
+ self.assertEqual(expected_master, run_config.master)
+ self.assertEqual(expected_evaluation_master, run_config.evaluation_master)
+ self.assertEqual(expected_is_chief, run_config.is_chief)
+ self.assertEqual(expected_num_worker_replicas,
+ run_config.num_worker_replicas)
+ self.assertEqual(expected_num_ps_replicas, run_config.num_ps_replicas)
+
+ def test_default_values(self):
+ self._assert_distributed_properties(
+ run_config=run_config_lib.RunConfig(),
+ expected_cluster_spec={},
+ expected_task_type=run_config_lib.TaskType.WORKER,
+ expected_task_id=0,
+ expected_master='',
+ expected_evaluation_master='',
+ expected_is_chief=True,
+ expected_num_worker_replicas=1,
+ expected_num_ps_replicas=0)
+
+ def test_tf_config_for_local(self):
+ tf_config = {
+ 'task': {
+ 'type': run_config_lib.TaskType.WORKER,
+ 'index': 0
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec={},
+ expected_task_type=run_config_lib.TaskType.WORKER,
+ expected_task_id=0,
+ expected_master='',
+ expected_evaluation_master='',
+ expected_is_chief=True,
+ expected_num_worker_replicas=1,
+ expected_num_ps_replicas=0)
+
+ def test_invalid_task_type_for_local(self):
+ tf_config = {
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ 'index': 0
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _INVALID_TASK_TYPE_FOR_LOCAL_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_invalid_task_index_for_local(self):
+ tf_config = {
+ 'task': {
+ 'type': run_config_lib.TaskType.WORKER,
+ 'index': 1
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _INVALID_TASK_INDEX_FOR_LOCAL_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_chief_tf_config(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0'],
+ run_config_lib.TaskType.PS: ['host1:1', 'host2:2'],
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ 'index': 0
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec=tf_config['cluster'],
+ expected_task_type=run_config_lib.TaskType.CHIEF,
+ expected_task_id=0,
+ expected_master='grpc://host0:0',
+ expected_evaluation_master='',
+ expected_is_chief=True,
+ expected_num_worker_replicas=4,
+ expected_num_ps_replicas=2)
+
+ def test_fail_with_multiple_chief_nodes(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0', 'host6:6'],
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ }
+ with self.assertRaisesRegexp(ValueError, _ONE_CHIEF_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_fail_with_missing_chief_node(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ }
+ with self.assertRaisesRegexp(ValueError, _MISSING_CHIEF_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_single_chief_node(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0'],
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ 'index': 0
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec=tf_config['cluster'],
+ expected_task_type=run_config_lib.TaskType.CHIEF,
+ expected_task_id=0,
+ expected_master='grpc://host0:0',
+ expected_evaluation_master='',
+ expected_is_chief=True,
+ expected_num_worker_replicas=1,
+ expected_num_ps_replicas=0)
+
+ def test_fail_with_missing_task_type_for_distributed(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ }
+ with self.assertRaisesRegexp(ValueError, _MISSING_TASK_TYPE_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_fail_with_missing_task_index_for_distributed(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _MISSING_TASK_ID_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_fail_with_index_is_too_large(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ 'index': 1
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _INVALID_TASK_INDEX_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_fail_with_invalid_task_index(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.CHIEF,
+ 'index': -1
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _NEGATIVE_TASK_INDEX_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_fail_with_invalid_task_type(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.WORKER,
+ 'index': 0
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _INVALID_TASK_TYPE_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+ def test_worker_tf_config(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0'],
+ run_config_lib.TaskType.PS: ['host1:1', 'host2:2'],
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.WORKER,
+ 'index': 1
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec=tf_config['cluster'],
+ expected_task_type=run_config_lib.TaskType.WORKER,
+ expected_task_id=1,
+ expected_master='grpc://host4:4',
+ expected_evaluation_master='',
+ expected_is_chief=False,
+ expected_num_worker_replicas=4,
+ expected_num_ps_replicas=2)
+
+ def test_ps_tf_config(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0'],
+ run_config_lib.TaskType.PS: ['host1:1', 'host2:2'],
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.PS,
+ 'index': 0
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec=tf_config['cluster'],
+ expected_task_type=run_config_lib.TaskType.PS,
+ expected_task_id=0,
+ expected_master='grpc://host1:1',
+ expected_evaluation_master='',
+ expected_is_chief=False,
+ expected_num_worker_replicas=4,
+ expected_num_ps_replicas=2)
+
+ def test_evaluator_tf_config(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host0:0'],
+ run_config_lib.TaskType.PS: ['host1:1', 'host2:2'],
+ run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4', 'host5:5']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.EVALUATOR,
+ 'index': 12
+ }
+ }
+ self._assert_distributed_properties(
+ run_config=_create_run_config_with_cluster_spec(tf_config),
+ expected_cluster_spec={},
+ expected_task_type=run_config_lib.TaskType.EVALUATOR,
+ expected_task_id=12,
+ expected_master='',
+ expected_evaluation_master='',
+ expected_is_chief=False, # evaluator is never chief.
+ expected_num_worker_replicas=0, # evaluator is not in training cluster.
+ expected_num_ps_replicas=0)
+
+ def test_fail_with_invalid_task_index_for_evaluator(self):
+ tf_config = {
+ 'cluster': {
+ run_config_lib.TaskType.CHIEF: ['host3:3']
+ },
+ 'task': {
+ 'type': run_config_lib.TaskType.EVALUATOR,
+ 'index': -1
+ }
+ }
+ with self.assertRaisesRegexp(ValueError, _NEGATIVE_TASK_INDEX_ERR):
+ _create_run_config_with_cluster_spec(tf_config)
+
+
class RunConfigSaveCheckpointsTest(test.TestCase):
def test_save_checkpoint(self):
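The RunConfig tests above assert how distributed properties fall out of a TF_CONFIG-style dict: the worker at index 1 gets master `grpc://host4:4`, the chief counts toward the worker replicas (hence 4 for a chief plus three workers), and an evaluator sits outside the training cluster entirely. A minimal stand-alone sketch of that derivation (illustrative only, not the actual `RunConfig` implementation; the function name and return shape are assumptions):

```python
def distributed_properties(tf_config):
    """Derive master address and replica counts from a TF_CONFIG-style dict."""
    cluster = tf_config.get('cluster', {})
    task = tf_config.get('task', {})
    task_type, task_id = task['type'], task['index']
    if task_type == 'evaluator':
        # Evaluator is never chief and is not part of the training cluster.
        return {'master': '', 'num_worker_replicas': 0, 'num_ps_replicas': 0}
    return {
        # Master is this task's own address from the cluster spec.
        'master': 'grpc://' + cluster[task_type][task_id],
        # Chief counts as one worker replica.
        'num_worker_replicas': (len(cluster.get('chief', [])) +
                                len(cluster.get('worker', []))),
        'num_ps_replicas': len(cluster.get('ps', [])),
    }
```

Running this against the `test_worker_tf_config` cluster reproduces the expected values (`grpc://host4:4`, 4 worker replicas, 2 ps replicas).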
diff --git a/tensorflow/python/feature_column/feature_column.py b/tensorflow/python/feature_column/feature_column.py
index f64235d70b..965b35bc4c 100644
--- a/tensorflow/python/feature_column/feature_column.py
+++ b/tensorflow/python/feature_column/feature_column.py
@@ -2475,10 +2475,8 @@ class _IndicatorColumn(_DenseColumn,
sp_values=weight_tensor,
vocab_size=int(self._variable_shape[-1]))
# Remove (?, -1) index
- weighted_column = sparse_ops.sparse_slice(
- weighted_column,
- [0, 0],
- weighted_column.dense_shape)
+ weighted_column = sparse_ops.sparse_slice(weighted_column, [0, 0],
+ weighted_column.dense_shape)
return sparse_ops.sparse_tensor_to_dense(weighted_column)
dense_id_tensor = sparse_ops.sparse_tensor_to_dense(
diff --git a/tensorflow/python/feature_column/feature_column_test.py b/tensorflow/python/feature_column/feature_column_test.py
index e707770f8a..926e78acee 100644
--- a/tensorflow/python/feature_column/feature_column_test.py
+++ b/tensorflow/python/feature_column/feature_column_test.py
@@ -3213,8 +3213,8 @@ class IndicatorColumnTest(test.TestCase):
weights = fc.weighted_categorical_column(ids, 'weights')
indicator = fc.indicator_column(weights)
features = {
- 'ids': constant_op.constant([['c', 'b', 'a']]),
- 'weights': constant_op.constant([[2., 4., 6.]])
+ 'ids': constant_op.constant([['c', 'b', 'a']]),
+ 'weights': constant_op.constant([[2., 4., 6.]])
}
indicator_tensor = _transform_features(features, [indicator])[indicator]
with _initialized_session():
@@ -3223,12 +3223,12 @@ class IndicatorColumnTest(test.TestCase):
def test_transform_with_missing_value_in_weighted_column(self):
# Github issue 12583
ids = fc.categorical_column_with_vocabulary_list(
- key='ids', vocabulary_list=('a', 'b', 'c'))
+ key='ids', vocabulary_list=('a', 'b', 'c'))
weights = fc.weighted_categorical_column(ids, 'weights')
indicator = fc.indicator_column(weights)
features = {
- 'ids': constant_op.constant([['c', 'b', 'unknown']]),
- 'weights': constant_op.constant([[2., 4., 6.]])
+ 'ids': constant_op.constant([['c', 'b', 'unknown']]),
+ 'weights': constant_op.constant([[2., 4., 6.]])
}
indicator_tensor = _transform_features(features, [indicator])[indicator]
with _initialized_session():
@@ -3237,10 +3237,10 @@ class IndicatorColumnTest(test.TestCase):
def test_transform_with_missing_value_in_categorical_column(self):
# Github issue 12583
ids = fc.categorical_column_with_vocabulary_list(
- key='ids', vocabulary_list=('a', 'b', 'c'))
+ key='ids', vocabulary_list=('a', 'b', 'c'))
indicator = fc.indicator_column(ids)
features = {
- 'ids': constant_op.constant([['c', 'b', 'unknown']]),
+ 'ids': constant_op.constant([['c', 'b', 'unknown']]),
}
indicator_tensor = _transform_features(features, [indicator])[indicator]
with _initialized_session():
diff --git a/tensorflow/python/framework/ops_test.py b/tensorflow/python/framework/ops_test.py
index 72964ca925..dc036598cb 100644
--- a/tensorflow/python/framework/ops_test.py
+++ b/tensorflow/python/framework/ops_test.py
@@ -399,7 +399,7 @@ class OperationTest(test_util.TensorFlowTestCase):
self.assertIsInstance(x, dtypes.DType)
self.assertEqual([dtypes.string, dtypes.double], l)
- # TODO(skyewm): test adding cycles, other error cases
+ # TODO(nolivia): test all error cases
@test_util.enable_c_api
def testAddControlInput(self):
with ops.Graph().as_default():
@@ -408,6 +408,22 @@ class OperationTest(test_util.TensorFlowTestCase):
y._add_control_input(x) # pylint: disable=protected-access
self.assertEqual(y.control_inputs, [x])
+ @test_util.enable_c_api
+ def testControlInputCycle(self):
+ graph = ops.Graph()
+ with graph.as_default():
+ z = constant_op.constant(0)
+ x = constant_op.constant(1)
+ y = constant_op.constant(2)
+ y.op._add_control_input(z.op) # pylint: disable=protected-access
+ y.op._add_control_input(x.op) # pylint: disable=protected-access
+ x.op._add_control_input(y.op) # pylint: disable=protected-access
+ with self.test_session(graph=graph) as sess:
+ with self.assertRaisesRegexp(
+ errors.InvalidArgumentError,
+ "Graph is invalid, contains a cycle with 2 nodes"):
+ sess.run(x)
+
class CreateOpTest(test_util.TensorFlowTestCase):
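The new `testControlInputCycle` expects the C-API backend to reject a graph whose control edges form a cycle ("contains a cycle with 2 nodes"). The check itself lives in the C++ graph validator, but the idea can be sketched as a plain depth-first search over a node-to-control-inputs mapping (a hedged illustration, not TensorFlow's actual algorithm):

```python
def find_cycle(control_inputs):
    """Return a list of nodes forming a cycle, or None if the graph is acyclic.

    control_inputs maps each node name to the nodes it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {n: WHITE for n in control_inputs}

    def visit(node, path):
        color[node] = GRAY
        path.append(node)
        for dep in control_inputs.get(node, ()):
            if color[dep] == GRAY:
                # Back edge to a node on the current path: cycle found.
                return path[path.index(dep):]
            if color[dep] == WHITE:
                cycle = visit(dep, path)
                if cycle:
                    return cycle
        path.pop()
        color[node] = BLACK
        return None

    for node in control_inputs:
        if color[node] == WHITE:
            cycle = visit(node, [])
            if cycle:
                return cycle
    return None
```

Feeding it the shape of the test graph (y depends on z and x, x depends on y) yields the two-node cycle {x, y} that the error message counts.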
diff --git a/tensorflow/python/framework/python_op_gen_main.cc b/tensorflow/python/framework/python_op_gen_main.cc
index 3cf56330e0..f681daa7e4 100644
--- a/tensorflow/python/framework/python_op_gen_main.cc
+++ b/tensorflow/python/framework/python_op_gen_main.cc
@@ -81,7 +81,6 @@ Status ParseOpListCommandLine(const char* arg, std::vector<string>* op_list) {
return Status::OK();
}
-
// Use the name of the current executable to infer the C++ source file
// where the REGISTER_OP() call for the operator can be found.
// Returns the name of the file.
@@ -103,9 +102,8 @@ string InferSourceFileName(const char* argv_zero) {
}
}
-void PrintAllPythonOps(const std::vector<string>& op_list,
- const string& source_file_name,
- bool require_shapes,
+void PrintAllPythonOps(const std::vector<string>& op_list,
+ const string& source_file_name, bool require_shapes,
bool op_list_is_whitelist) {
OpList ops;
OpRegistry::Global()->Export(false, &ops);
diff --git a/tensorflow/python/framework/tensor_util.py b/tensorflow/python/framework/tensor_util.py
index 8c0975b11b..3e13b825f8 100644
--- a/tensorflow/python/framework/tensor_util.py
+++ b/tensorflow/python/framework/tensor_util.py
@@ -236,10 +236,8 @@ def _FilterTuple(v):
def _FilterInt(v):
if isinstance(v, (list, tuple)):
return _FirstNotNone([_FilterInt(x) for x in v])
- return None if isinstance(
- v,
- (compat.integral_types, tensor_shape.Dimension)) else _NotNone(v)
-
+ return None if isinstance(v, (compat.integral_types,
+ tensor_shape.Dimension)) else _NotNone(v)
def _FilterFloat(v):
if isinstance(v, (list, tuple)):
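The reformatted `_FilterInt` follows a recurring pattern in `tensor_util`: recursively scan a nested value and surface the first element that is not an acceptable type, with `None` meaning "everything passed". A simplified stand-alone sketch of that pattern (names and the accepted-type set are illustrative; the real helpers also handle `compat.integral_types` and `tensor_shape.Dimension`):

```python
def first_bad_int(v):
    """Return the first non-int element in a possibly nested list/tuple,
    or None if every leaf is an int."""
    if isinstance(v, (list, tuple)):
        for x in v:
            bad = first_bad_int(x)
            if bad is not None:
                return bad
        return None
    return None if isinstance(v, int) else v
```

This is how `make_tensor_proto` can point at the exact offending value in an error message instead of rejecting the whole input opaquely.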
diff --git a/tensorflow/python/framework/tensor_util_test.py b/tensorflow/python/framework/tensor_util_test.py
index ca47274e9a..f66af3adc6 100644
--- a/tensorflow/python/framework/tensor_util_test.py
+++ b/tensorflow/python/framework/tensor_util_test.py
@@ -318,8 +318,8 @@ class TensorUtilTest(test.TestCase):
# Github issue: 11974
dtype = dtypes.int32
nptype = np.int32
- t = tensor_util.make_tensor_proto([10, tensor_shape.Dimension(20), 30],
- dtype=dtype)
+ t = tensor_util.make_tensor_proto(
+ [10, tensor_shape.Dimension(20), 30], dtype=dtype)
self.assertEquals(dtype, t.dtype)
a = tensor_util.MakeNdarray(t)
self.assertEquals(nptype, a.dtype)
diff --git a/tensorflow/python/framework/test_util.py b/tensorflow/python/framework/test_util.py
index 04c7554a58..9cf222a63a 100644
--- a/tensorflow/python/framework/test_util.py
+++ b/tensorflow/python/framework/test_util.py
@@ -298,11 +298,11 @@ def run_in_graph_and_eager_modes(__unused__=None, graph=None, config=None,
def decorator(f):
"""Test method decorator."""
- def decorated(self):
+ def decorated(self, **kwargs):
"""Decorated the test method."""
with context.graph_mode():
with self.test_session(graph, config, use_gpu, force_gpu):
- f(self)
+ f(self, **kwargs)
if reset_test:
# This decorator runs the wrapped test twice.
@@ -319,10 +319,10 @@ def run_in_graph_and_eager_modes(__unused__=None, graph=None, config=None,
f(self)
elif use_gpu:
# TODO(xpan): Support softplacement and gpu by default when available.
- f(self)
+ f(self, **kwargs)
else:
with context.device("/device:CPU:0"):
- f(self)
+ f(self, **kwargs)
eager_graph = graph or ops.Graph()
with context.eager_mode():
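The `test_util.py` hunk above changes `decorated(self)` to `decorated(self, **kwargs)` and forwards the kwargs to every call of `f`: a test-wrapping decorator that drops keyword arguments silently breaks parameterized test methods. A minimal stand-alone illustration of the forwarding fix (the decorator and class here are made up for the example; the real decorator runs the test once in graph mode and once in eager mode):

```python
import functools

def run_twice(f):
    """Toy stand-in for run_in_graph_and_eager_modes: runs the wrapped
    test method twice, forwarding **kwargs each time."""
    @functools.wraps(f)
    def decorated(self, **kwargs):
        results = []
        results.append(f(self, **kwargs))  # first run (think: graph mode)
        results.append(f(self, **kwargs))  # second run (think: eager mode)
        return results
    return decorated

class FakeTest:
    @run_twice
    def test_add(self, a=1, b=2):
        return a + b
```

Without the `**kwargs` forwarding, `FakeTest().test_add(a=3, b=4)` would raise a `TypeError` inside the decorator; with it, both runs see the arguments.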
diff --git a/tensorflow/python/keras/BUILD b/tensorflow/python/keras/BUILD
new file mode 100644
index 0000000000..a7daab8335
--- /dev/null
+++ b/tensorflow/python/keras/BUILD
@@ -0,0 +1,694 @@
+# Description:
+# Contains the Keras API (internal TensorFlow version).
+
+licenses(["notice"]) # Apache 2.0
+
+package(default_visibility = ["//visibility:public"])
+
+load("//tensorflow:tensorflow.bzl", "py_test")
+
+py_library(
+ name = "keras",
+ srcs = [
+ "__init__.py",
+ "_impl/keras/__init__.py",
+ "_impl/keras/activations.py",
+ "_impl/keras/applications/__init__.py",
+ "_impl/keras/applications/imagenet_utils.py",
+ "_impl/keras/applications/inception_v3.py",
+ "_impl/keras/applications/mobilenet.py",
+ "_impl/keras/applications/resnet50.py",
+ "_impl/keras/applications/vgg16.py",
+ "_impl/keras/applications/vgg19.py",
+ "_impl/keras/applications/xception.py",
+ "_impl/keras/backend.py",
+ "_impl/keras/callbacks.py",
+ "_impl/keras/constraints.py",
+ "_impl/keras/datasets/__init__.py",
+ "_impl/keras/datasets/boston_housing.py",
+ "_impl/keras/datasets/cifar.py",
+ "_impl/keras/datasets/cifar10.py",
+ "_impl/keras/datasets/cifar100.py",
+ "_impl/keras/datasets/imdb.py",
+ "_impl/keras/datasets/mnist.py",
+ "_impl/keras/datasets/reuters.py",
+ "_impl/keras/engine/__init__.py",
+ "_impl/keras/engine/topology.py",
+ "_impl/keras/engine/training.py",
+ "_impl/keras/initializers.py",
+ "_impl/keras/layers/__init__.py",
+ "_impl/keras/layers/advanced_activations.py",
+ "_impl/keras/layers/convolutional.py",
+ "_impl/keras/layers/convolutional_recurrent.py",
+ "_impl/keras/layers/core.py",
+ "_impl/keras/layers/embeddings.py",
+ "_impl/keras/layers/local.py",
+ "_impl/keras/layers/merge.py",
+ "_impl/keras/layers/noise.py",
+ "_impl/keras/layers/normalization.py",
+ "_impl/keras/layers/pooling.py",
+ "_impl/keras/layers/recurrent.py",
+ "_impl/keras/layers/serialization.py",
+ "_impl/keras/layers/wrappers.py",
+ "_impl/keras/losses.py",
+ "_impl/keras/metrics.py",
+ "_impl/keras/models.py",
+ "_impl/keras/optimizers.py",
+ "_impl/keras/preprocessing/__init__.py",
+ "_impl/keras/preprocessing/image.py",
+ "_impl/keras/preprocessing/sequence.py",
+ "_impl/keras/preprocessing/text.py",
+ "_impl/keras/regularizers.py",
+ "_impl/keras/testing_utils.py",
+ "_impl/keras/utils/__init__.py",
+ "_impl/keras/utils/conv_utils.py",
+ "_impl/keras/utils/data_utils.py",
+ "_impl/keras/utils/generic_utils.py",
+ "_impl/keras/utils/io_utils.py",
+ "_impl/keras/utils/layer_utils.py",
+ "_impl/keras/utils/np_utils.py",
+ "_impl/keras/utils/vis_utils.py",
+ "_impl/keras/wrappers/__init__.py",
+ "_impl/keras/wrappers/scikit_learn.py",
+ "activations/__init__.py",
+ "applications/__init__.py",
+ "applications/inception_v3/__init__.py",
+ "applications/mobilenet/__init__.py",
+ "applications/resnet50/__init__.py",
+ "applications/vgg16/__init__.py",
+ "applications/vgg19/__init__.py",
+ "applications/xception/__init__.py",
+ "backend/__init__.py",
+ "callbacks/__init__.py",
+ "constraints/__init__.py",
+ "datasets/__init__.py",
+ "datasets/boston_housing/__init__.py",
+ "datasets/cifar10/__init__.py",
+ "datasets/cifar100/__init__.py",
+ "datasets/imdb/__init__.py",
+ "datasets/mnist/__init__.py",
+ "datasets/reuters/__init__.py",
+ "initializers/__init__.py",
+ "layers/__init__.py",
+ "losses/__init__.py",
+ "metrics/__init__.py",
+ "models/__init__.py",
+ "optimizers/__init__.py",
+ "preprocessing/__init__.py",
+ "preprocessing/image/__init__.py",
+ "preprocessing/sequence/__init__.py",
+ "preprocessing/text/__init__.py",
+ "regularizers/__init__.py",
+ "utils/__init__.py",
+ "wrappers/__init__.py",
+ "wrappers/scikit_learn/__init__.py",
+ ],
+ srcs_version = "PY2AND3",
+ visibility = ["//visibility:public"],
+ deps = [
+ "//tensorflow/core:protos_all_py",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:check_ops",
+ "//tensorflow/python:client",
+ "//tensorflow/python:clip_ops",
+ "//tensorflow/python:constant_op",
+ "//tensorflow/python:control_flow_ops",
+ "//tensorflow/python:ctc_ops",
+ "//tensorflow/python:dtypes",
+ "//tensorflow/python:framework",
+ "//tensorflow/python:framework_ops",
+ "//tensorflow/python:functional_ops",
+ "//tensorflow/python:gradients",
+ "//tensorflow/python:image_ops",
+ "//tensorflow/python:init_ops",
+ "//tensorflow/python:layers",
+ "//tensorflow/python:layers_base",
+ "//tensorflow/python:logging_ops",
+ "//tensorflow/python:math_ops",
+ "//tensorflow/python:nn",
+ "//tensorflow/python:platform",
+ "//tensorflow/python:random_ops",
+ "//tensorflow/python:sparse_ops",
+ "//tensorflow/python:sparse_tensor",
+ "//tensorflow/python:state_ops",
+ "//tensorflow/python:summary",
+ "//tensorflow/python:tensor_array_grad",
+ "//tensorflow/python:tensor_array_ops",
+ "//tensorflow/python:tensor_shape",
+ "//tensorflow/python:training",
+ "//tensorflow/python:util",
+ "//tensorflow/python:variable_scope",
+ "//tensorflow/python:variables",
+ "@six_archive//:six",
+ ],
+)
+
+py_test(
+ name = "integration_test",
+ size = "medium",
+ srcs = ["_impl/keras/integration_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:layers",
+ "//tensorflow/python:nn",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "activations_test",
+ size = "small",
+ srcs = ["_impl/keras/activations_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "constraints_test",
+ size = "small",
+ srcs = ["_impl/keras/constraints_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "initializers_test",
+ size = "small",
+ srcs = ["_impl/keras/initializers_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:init_ops",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "regularizers_test",
+ size = "small",
+ srcs = ["_impl/keras/regularizers_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "optimizers_test",
+ size = "medium",
+ srcs = ["_impl/keras/optimizers_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:training",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "losses_test",
+ size = "small",
+ srcs = ["_impl/keras/losses_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "metrics_test",
+ size = "small",
+ srcs = ["_impl/keras/metrics_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "inception_v3_test",
+ size = "medium",
+ srcs = ["_impl/keras/applications/inception_v3_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "mobilenet_test",
+ size = "medium",
+ srcs = ["_impl/keras/applications/mobilenet_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "resnet50_test",
+ size = "small",
+ srcs = ["_impl/keras/applications/resnet50_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "vgg16_test",
+ size = "small",
+ srcs = ["_impl/keras/applications/vgg16_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "vgg19_test",
+ size = "small",
+ srcs = ["_impl/keras/applications/vgg19_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "xception_test",
+ size = "medium",
+ srcs = ["_impl/keras/applications/xception_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "advanced_activations_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/advanced_activations_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "convolutional_recurrent_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/convolutional_recurrent_test.py"],
+ shard_count = 2,
+ srcs_version = "PY2AND3",
+ tags = ["noasan"], # times out b/63678675
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "convolutional_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/convolutional_test.py"],
+ srcs_version = "PY2AND3",
+ tags = [
+ "manual",
+ "noasan", # times out b/63678675
+ "notsan",
+ ],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "pooling_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/pooling_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "core_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/core_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "embeddings_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/embeddings_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "local_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/local_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "merge_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/merge_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "noise_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/noise_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "normalization_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/normalization_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "simplernn_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/simplernn_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "gru_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/gru_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"], # http://b/62136390
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "lstm_test",
+ size = "medium",
+ srcs = ["_impl/keras/layers/lstm_test.py"],
+ srcs_version = "PY2AND3",
+ tags = [
+ "noasan", # times out b/63678675
+ "notsan", # http://b/62189182
+ ],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "serialization_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/serialization_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "wrappers_test",
+ size = "small",
+ srcs = ["_impl/keras/layers/wrappers_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "scikit_learn_test",
+ size = "small",
+ srcs = ["_impl/keras/wrappers/scikit_learn_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "data_utils_test",
+ size = "small",
+ srcs = ["_impl/keras/utils/data_utils_test.py"],
+ srcs_version = "PY2AND3",
+ tags = [
+ "noasan", # times out
+ "notsan",
+ ],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "generic_utils_test",
+ size = "small",
+ srcs = ["_impl/keras/utils/generic_utils_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+py_test(
+ name = "io_utils_test",
+ size = "small",
+ srcs = ["_impl/keras/utils/io_utils_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "imagenet_utils_test",
+ size = "small",
+ srcs = ["_impl/keras/applications/imagenet_utils_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "image_test",
+ size = "medium",
+ srcs = ["_impl/keras/preprocessing/image_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "sequence_test",
+ size = "small",
+ srcs = ["_impl/keras/preprocessing/sequence_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "text_test",
+ size = "small",
+ srcs = ["_impl/keras/preprocessing/text_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "callbacks_test",
+ size = "medium",
+ srcs = ["_impl/keras/callbacks_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "training_test",
+ size = "medium",
+ srcs = ["_impl/keras/engine/training_test.py"],
+ srcs_version = "PY2AND3",
+ tags = ["notsan"],
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "topology_test",
+ size = "small",
+ srcs = ["_impl/keras/engine/topology_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:dtypes",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "models_test",
+ size = "small",
+ srcs = ["_impl/keras/models_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:training",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
+ name = "backend_test",
+ size = "small",
+ srcs = ["_impl/keras/backend_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:util",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_library(
+ name = "testing_utils",
+ srcs = [
+ "_impl/keras/testing_utils.py",
+ ],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":keras",
+ "//tensorflow/python:util",
+ "//third_party/py/numpy",
+ ],
+)
+
+filegroup(
+ name = "all_files",
+ srcs = glob(
+ ["**/*"],
+ exclude = [
+ "**/METADATA",
+ "**/OWNERS",
+ ],
+ ),
+ visibility = ["//tensorflow:__subpackages__"],
+)
diff --git a/tensorflow/python/keras/README.md b/tensorflow/python/keras/README.md
new file mode 100644
index 0000000000..db2556fe42
--- /dev/null
+++ b/tensorflow/python/keras/README.md
@@ -0,0 +1,6 @@
+Keras is an object-oriented API for defining and training neural networks.
+
+This module contains a pure-TensorFlow implementation of the Keras API,
+allowing for deep integration with TensorFlow functionality.
+
+See [keras.io](https://keras.io) for complete documentation and user guides.
diff --git a/tensorflow/python/keras/__init__.py b/tensorflow/python/keras/__init__.py
new file mode 100644
index 0000000000..962c7678dd
--- /dev/null
+++ b/tensorflow/python/keras/__init__.py
@@ -0,0 +1,47 @@
+# -*- coding: utf-8 -*-
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Implementation of the Keras API meant to be a high-level API for TensorFlow.
+
+Detailed documentation and user guides are available at
+[keras.io](https://keras.io).
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=wildcard-import
+from tensorflow.python.keras import activations
+from tensorflow.python.keras import applications
+from tensorflow.python.keras import backend
+from tensorflow.python.keras import callbacks
+from tensorflow.python.keras import constraints
+from tensorflow.python.keras import datasets
+from tensorflow.python.keras import initializers
+from tensorflow.python.keras import layers
+from tensorflow.python.keras import losses
+from tensorflow.python.keras import metrics
+from tensorflow.python.keras import models
+from tensorflow.python.keras import optimizers
+from tensorflow.python.keras import preprocessing
+from tensorflow.python.keras import regularizers
+from tensorflow.python.keras import utils
+from tensorflow.python.keras import wrappers
+from tensorflow.python.keras._impl.keras import __version__
+from tensorflow.python.keras.layers import Input
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/_impl/keras/__init__.py b/tensorflow/python/keras/_impl/keras/__init__.py
new file mode 100644
index 0000000000..d1aa4415a1
--- /dev/null
+++ b/tensorflow/python/keras/_impl/keras/__init__.py
@@ -0,0 +1,40 @@
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""The Keras API.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import applications
+from tensorflow.python.keras._impl.keras import backend
+from tensorflow.python.keras._impl.keras import callbacks
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import datasets
+from tensorflow.python.keras._impl.keras import engine
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import layers
+from tensorflow.python.keras._impl.keras import losses
+from tensorflow.python.keras._impl.keras import metrics
+from tensorflow.python.keras._impl.keras import models
+from tensorflow.python.keras._impl.keras import optimizers
+from tensorflow.python.keras._impl.keras import preprocessing
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras import utils
+from tensorflow.python.keras._impl.keras import wrappers
+from tensorflow.python.keras._impl.keras.layers import Input
+
+__version__ = '2.0.8-tf'
diff --git a/tensorflow/contrib/keras/python/keras/activations.py b/tensorflow/python/keras/_impl/keras/activations.py
index 7f04234e01..4e35b79869 100644
--- a/tensorflow/contrib/keras/python/keras/activations.py
+++ b/tensorflow/python/keras/_impl/keras/activations.py
@@ -20,9 +20,9 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
from tensorflow.python.platform import tf_logging as logging
diff --git a/tensorflow/contrib/keras/python/keras/activations_test.py b/tensorflow/python/keras/_impl/keras/activations_test.py
index 8efa464b03..fb0bb5f126 100644
--- a/tensorflow/contrib/keras/python/keras/activations_test.py
+++ b/tensorflow/python/keras/_impl/keras/activations_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/__init__.py b/tensorflow/python/keras/_impl/keras/applications/__init__.py
index 9139df30a6..f78bbdc148 100644
--- a/tensorflow/contrib/keras/python/keras/applications/__init__.py
+++ b/tensorflow/python/keras/_impl/keras/applications/__init__.py
@@ -18,9 +18,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.applications.inception_v3 import InceptionV3
-from tensorflow.contrib.keras.python.keras.applications.mobilenet import MobileNet
-from tensorflow.contrib.keras.python.keras.applications.resnet50 import ResNet50
-from tensorflow.contrib.keras.python.keras.applications.vgg16 import VGG16
-from tensorflow.contrib.keras.python.keras.applications.vgg19 import VGG19
-from tensorflow.contrib.keras.python.keras.applications.xception import Xception
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import InceptionV3
+from tensorflow.python.keras._impl.keras.applications.mobilenet import MobileNet
+from tensorflow.python.keras._impl.keras.applications.resnet50 import ResNet50
+from tensorflow.python.keras._impl.keras.applications.vgg16 import VGG16
+from tensorflow.python.keras._impl.keras.applications.vgg19 import VGG19
+from tensorflow.python.keras._impl.keras.applications.xception import Xception
diff --git a/tensorflow/contrib/keras/python/keras/applications/imagenet_utils.py b/tensorflow/python/keras/_impl/keras/applications/imagenet_utils.py
index ce287dbd66..43628341cb 100644
--- a/tensorflow/contrib/keras/python/keras/applications/imagenet_utils.py
+++ b/tensorflow/python/keras/_impl/keras/applications/imagenet_utils.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import json
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
from tensorflow.python.platform import tf_logging as logging
diff --git a/tensorflow/contrib/keras/python/keras/applications/imagenet_utils_test.py b/tensorflow/python/keras/_impl/keras/applications/imagenet_utils_test.py
index fa0b9ec299..517ba91219 100644
--- a/tensorflow/contrib/keras/python/keras/applications/imagenet_utils_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/imagenet_utils_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/inception_v3.py b/tensorflow/python/keras/_impl/keras/applications/inception_v3.py
index 2fdc62f2f2..edb4c60f8a 100644
--- a/tensorflow/contrib/keras/python/keras/applications/inception_v3.py
+++ b/tensorflow/python/keras/_impl/keras/applications/inception_v3.py
@@ -29,22 +29,22 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import layers
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Activation
-from tensorflow.contrib.keras.python.keras.layers import AveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import BatchNormalization
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dense
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import layers
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Activation
+from tensorflow.python.keras._impl.keras.layers import AveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import BatchNormalization
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dense
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.5/inception_v3_weights_tf_dim_ordering_tf_kernels.h5'
diff --git a/tensorflow/contrib/keras/python/keras/applications/inception_v3_test.py b/tensorflow/python/keras/_impl/keras/applications/inception_v3_test.py
index 890df612ff..20e11fa019 100644
--- a/tensorflow/contrib/keras/python/keras/applications/inception_v3_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/inception_v3_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/mobilenet.py b/tensorflow/python/keras/_impl/keras/applications/mobilenet.py
index 2a93486401..9375e436f2 100644
--- a/tensorflow/contrib/keras/python/keras/applications/mobilenet.py
+++ b/tensorflow/python/keras/_impl/keras/applications/mobilenet.py
@@ -69,25 +69,25 @@ from __future__ import print_function
import warnings
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Activation
-from tensorflow.contrib.keras.python.keras.layers import BatchNormalization
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dropout
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import Reshape
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Activation
+from tensorflow.python.keras._impl.keras.layers import BatchNormalization
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dropout
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import Reshape
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils import conv_utils
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
BASE_WEIGHT_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.6/'
diff --git a/tensorflow/contrib/keras/python/keras/applications/mobilenet_test.py b/tensorflow/python/keras/_impl/keras/applications/mobilenet_test.py
index 19e69c764a..601d417e49 100644
--- a/tensorflow/contrib/keras/python/keras/applications/mobilenet_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/mobilenet_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/resnet50.py b/tensorflow/python/keras/_impl/keras/applications/resnet50.py
index 794e05e2dc..f0cff2d686 100644
--- a/tensorflow/contrib/keras/python/keras/applications/resnet50.py
+++ b/tensorflow/python/keras/_impl/keras/applications/resnet50.py
@@ -26,24 +26,24 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import layers
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Activation
-from tensorflow.contrib.keras.python.keras.layers import AveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import BatchNormalization
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dense
-from tensorflow.contrib.keras.python.keras.layers import Flatten
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import layers
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Activation
+from tensorflow.python.keras._impl.keras.layers import AveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import BatchNormalization
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dense
+from tensorflow.python.keras._impl.keras.layers import Flatten
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5'
diff --git a/tensorflow/contrib/keras/python/keras/applications/resnet50_test.py b/tensorflow/python/keras/_impl/keras/applications/resnet50_test.py
index 2b00170652..07f9ffd73f 100644
--- a/tensorflow/contrib/keras/python/keras/applications/resnet50_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/resnet50_test.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/vgg16.py b/tensorflow/python/keras/_impl/keras/applications/vgg16.py
index c38ae2a984..485b486e9d 100644
--- a/tensorflow/contrib/keras/python/keras/applications/vgg16.py
+++ b/tensorflow/python/keras/_impl/keras/applications/vgg16.py
@@ -25,21 +25,21 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dense
-from tensorflow.contrib.keras.python.keras.layers import Flatten
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils import layer_utils
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dense
+from tensorflow.python.keras._impl.keras.layers import Flatten
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils import layer_utils
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'
diff --git a/tensorflow/contrib/keras/python/keras/applications/vgg16_test.py b/tensorflow/python/keras/_impl/keras/applications/vgg16_test.py
index 4ba5dabd5a..e6eba83678 100644
--- a/tensorflow/contrib/keras/python/keras/applications/vgg16_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/vgg16_test.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/vgg19.py b/tensorflow/python/keras/_impl/keras/applications/vgg19.py
index ee67efaa92..3af6417c84 100644
--- a/tensorflow/contrib/keras/python/keras/applications/vgg19.py
+++ b/tensorflow/python/keras/_impl/keras/applications/vgg19.py
@@ -25,21 +25,21 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dense
-from tensorflow.contrib.keras.python.keras.layers import Flatten
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils import layer_utils
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dense
+from tensorflow.python.keras._impl.keras.layers import Flatten
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils import layer_utils
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels.h5'
diff --git a/tensorflow/contrib/keras/python/keras/applications/vgg19_test.py b/tensorflow/python/keras/_impl/keras/applications/vgg19_test.py
index 604d4bb2d8..25100a2993 100644
--- a/tensorflow/contrib/keras/python/keras/applications/vgg19_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/vgg19_test.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/applications/xception.py b/tensorflow/python/keras/_impl/keras/applications/xception.py
index 7db7e9a9d6..6e521daa2d 100644
--- a/tensorflow/contrib/keras/python/keras/applications/xception.py
+++ b/tensorflow/python/keras/_impl/keras/applications/xception.py
@@ -36,22 +36,22 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import layers
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import _obtain_input_shape
-from tensorflow.contrib.keras.python.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.layers import Activation
-from tensorflow.contrib.keras.python.keras.layers import BatchNormalization
-from tensorflow.contrib.keras.python.keras.layers import Conv2D
-from tensorflow.contrib.keras.python.keras.layers import Dense
-from tensorflow.contrib.keras.python.keras.layers import GlobalAveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers import GlobalMaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import Input
-from tensorflow.contrib.keras.python.keras.layers import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers import SeparableConv2D
-from tensorflow.contrib.keras.python.keras.models import Model
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import layers
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
+from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.layers import Activation
+from tensorflow.python.keras._impl.keras.layers import BatchNormalization
+from tensorflow.python.keras._impl.keras.layers import Conv2D
+from tensorflow.python.keras._impl.keras.layers import Dense
+from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import Input
+from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
+from tensorflow.python.keras._impl.keras.layers import SeparableConv2D
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
from tensorflow.python.platform import tf_logging as logging
diff --git a/tensorflow/contrib/keras/python/keras/applications/xception_test.py b/tensorflow/python/keras/_impl/keras/applications/xception_test.py
index a941514c3e..7ebdc30010 100644
--- a/tensorflow/contrib/keras/python/keras/applications/xception_test.py
+++ b/tensorflow/python/keras/_impl/keras/applications/xception_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/backend.py b/tensorflow/python/keras/_impl/keras/backend.py
index 76704d5d3d..76704d5d3d 100644
--- a/tensorflow/contrib/keras/python/keras/backend.py
+++ b/tensorflow/python/keras/_impl/keras/backend.py
diff --git a/tensorflow/contrib/keras/python/keras/backend_test.py b/tensorflow/python/keras/_impl/keras/backend_test.py
index 0717c91c6c..d914490f7e 100644
--- a/tensorflow/contrib/keras/python/keras/backend_test.py
+++ b/tensorflow/python/keras/_impl/keras/backend_test.py
@@ -13,7 +13,6 @@
# limitations under the License.
# ==============================================================================
"""Tests for Keras backend."""
-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
@@ -21,8 +20,8 @@ from __future__ import print_function
import numpy as np
import scipy.sparse
-from tensorflow.contrib.keras.python import keras
from tensorflow.python.framework import sparse_tensor
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
from tensorflow.python.util import tf_inspect
diff --git a/tensorflow/contrib/keras/python/keras/callbacks.py b/tensorflow/python/keras/_impl/keras/callbacks.py
index 323fdddb1f..eb678c4d1d 100644
--- a/tensorflow/contrib/keras/python/keras/callbacks.py
+++ b/tensorflow/python/keras/_impl/keras/callbacks.py
@@ -29,13 +29,11 @@ import time
import numpy as np
import six
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import Progbar
-from tensorflow.contrib.tensorboard.plugins import projector
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.summary import summary as tf_summary
-from tensorflow.python.training import saver as saver_lib
# pylint: disable=g-import-not-at-top
@@ -660,10 +658,7 @@ class TensorBoard(Callback):
batch_size=32,
write_graph=True,
write_grads=False,
- write_images=False,
- embeddings_freq=0,
- embeddings_layer_names=None,
- embeddings_metadata=None):
+ write_images=False):
super(TensorBoard, self).__init__()
self.log_dir = log_dir
self.histogram_freq = histogram_freq
@@ -671,9 +666,6 @@ class TensorBoard(Callback):
self.write_graph = write_graph
self.write_grads = write_grads
self.write_images = write_images
- self.embeddings_freq = embeddings_freq
- self.embeddings_layer_names = embeddings_layer_names
- self.embeddings_metadata = embeddings_metadata or {}
self.batch_size = batch_size
def set_model(self, model):
@@ -728,45 +720,6 @@ class TensorBoard(Callback):
else:
self.writer = tf_summary.FileWriter(self.log_dir)
- if self.embeddings_freq:
- embeddings_layer_names = self.embeddings_layer_names
-
- if not embeddings_layer_names:
- embeddings_layer_names = [
- layer.name for layer in self.model.layers
- if type(layer).__name__ == 'Embedding'
- ]
-
- embeddings = {
- layer.name: layer.weights[0]
- for layer in self.model.layers if layer.name in embeddings_layer_names
- }
-
- self.saver = saver_lib.Saver(list(embeddings.values()))
-
- embeddings_metadata = {}
-
- if not isinstance(self.embeddings_metadata, str):
- embeddings_metadata = self.embeddings_metadata
- else:
- embeddings_metadata = {
- layer_name: self.embeddings_metadata
- for layer_name in embeddings.keys()
- }
-
- config = projector.ProjectorConfig()
- self.embeddings_ckpt_path = os.path.join(self.log_dir,
- 'keras_embedding.ckpt')
-
- for layer_name, tensor in embeddings.items():
- embedding = config.embeddings.add()
- embedding.tensor_name = tensor.name
-
- if layer_name in embeddings_metadata:
- embedding.metadata_path = embeddings_metadata[layer_name]
-
- projector.visualize_embeddings(self.writer, config)
-
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
@@ -804,10 +757,6 @@ class TensorBoard(Callback):
self.writer.add_summary(summary_str, epoch)
i += self.batch_size
- if self.embeddings_freq and self.embeddings_ckpt_path:
- if epoch % self.embeddings_freq == 0:
- self.saver.save(self.sess, self.embeddings_ckpt_path, epoch)
-
for name, value in logs.items():
if name in ['batch', 'size']:
continue
diff --git a/tensorflow/contrib/keras/python/keras/callbacks_test.py b/tensorflow/python/keras/_impl/keras/callbacks_test.py
index f255feff41..d9d7fb5a9f 100644
--- a/tensorflow/contrib/keras/python/keras/callbacks_test.py
+++ b/tensorflow/python/keras/_impl/keras/callbacks_test.py
@@ -26,8 +26,8 @@ import shutil
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
try:
@@ -574,8 +574,7 @@ class KerasCallbacksTest(test.TestCase):
tsb = keras.callbacks.TensorBoard(
log_dir=temp_dir, histogram_freq=1, write_images=True,
- write_grads=True, embeddings_freq=1,
- embeddings_layer_names=['dense_1'], batch_size=5)
+ write_grads=True, batch_size=5)
cbks = [tsb]
# fit with validation data
@@ -677,8 +676,6 @@ class KerasCallbacksTest(test.TestCase):
log_dir=filepath,
histogram_freq=histogram_freq,
write_images=True, write_grads=True,
- embeddings_freq=1,
- embeddings_layer_names=['dense_1'],
batch_size=5)]
# fit w/o validation data should raise ValueError if histogram_freq > 0
@@ -750,8 +747,6 @@ class KerasCallbacksTest(test.TestCase):
return [keras.callbacks.TensorBoard(log_dir=filepath,
histogram_freq=histogram_freq,
write_images=True, write_grads=True,
- embeddings_freq=1,
- embeddings_layer_names=['dense_1'],
batch_size=5)]
# fit without validation data
diff --git a/tensorflow/contrib/keras/python/keras/constraints.py b/tensorflow/python/keras/_impl/keras/constraints.py
index 0a59dd92c1..e58e3b0377 100644
--- a/tensorflow/contrib/keras/python/keras/constraints.py
+++ b/tensorflow/python/keras/_impl/keras/constraints.py
@@ -20,9 +20,9 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
class Constraint(object):
diff --git a/tensorflow/contrib/keras/python/keras/constraints_test.py b/tensorflow/python/keras/_impl/keras/constraints_test.py
index 36fbee7fd5..87905693ca 100644
--- a/tensorflow/contrib/keras/python/keras/constraints_test.py
+++ b/tensorflow/python/keras/_impl/keras/constraints_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/datasets/__init__.py b/tensorflow/python/keras/_impl/keras/datasets/__init__.py
index fe8dee54db..22afb6a553 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/__init__.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/__init__.py
@@ -18,10 +18,10 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.datasets import boston_housing
-from tensorflow.contrib.keras.python.keras.datasets import cifar10
-from tensorflow.contrib.keras.python.keras.datasets import cifar100
-from tensorflow.contrib.keras.python.keras.datasets import imdb
-from tensorflow.contrib.keras.python.keras.datasets import mnist
-from tensorflow.contrib.keras.python.keras.datasets import reuters
+from tensorflow.python.keras._impl.keras.datasets import boston_housing
+from tensorflow.python.keras._impl.keras.datasets import cifar10
+from tensorflow.python.keras._impl.keras.datasets import cifar100
+from tensorflow.python.keras._impl.keras.datasets import imdb
+from tensorflow.python.keras._impl.keras.datasets import mnist
+from tensorflow.python.keras._impl.keras.datasets import reuters
diff --git a/tensorflow/contrib/keras/python/keras/datasets/boston_housing.py b/tensorflow/python/keras/_impl/keras/datasets/boston_housing.py
index 36b20451ff..e4f7fb9d21 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/boston_housing.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/boston_housing.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data(path='boston_housing.npz', seed=113, test_split=0.2):
diff --git a/tensorflow/contrib/keras/python/keras/datasets/cifar.py b/tensorflow/python/keras/_impl/keras/datasets/cifar.py
index 564709c0ee..564709c0ee 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/cifar.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/cifar.py
diff --git a/tensorflow/contrib/keras/python/keras/datasets/cifar10.py b/tensorflow/python/keras/_impl/keras/datasets/cifar10.py
index 11618b8552..672249ff20 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/cifar10.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/cifar10.py
@@ -22,9 +22,9 @@ import os
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.datasets.cifar import load_batch
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.datasets.cifar import load_batch
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data():
diff --git a/tensorflow/contrib/keras/python/keras/datasets/cifar100.py b/tensorflow/python/keras/_impl/keras/datasets/cifar100.py
index eba3ee6415..1be7483d27 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/cifar100.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/cifar100.py
@@ -22,9 +22,9 @@ import os
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.datasets.cifar import load_batch
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.datasets.cifar import load_batch
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data(label_mode='fine'):
diff --git a/tensorflow/contrib/keras/python/keras/datasets/imdb.py b/tensorflow/python/keras/_impl/keras/datasets/imdb.py
index 04ab154f9f..0db9d61f6d 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/imdb.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/imdb.py
@@ -23,7 +23,7 @@ import json
import numpy as np
from six.moves import zip # pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data(path='imdb.npz',
diff --git a/tensorflow/contrib/keras/python/keras/datasets/mnist.py b/tensorflow/python/keras/_impl/keras/datasets/mnist.py
index aaced003d0..02be5e2a40 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/mnist.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/mnist.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data(path='mnist.npz'):
diff --git a/tensorflow/contrib/keras/python/keras/datasets/reuters.py b/tensorflow/python/keras/_impl/keras/datasets/reuters.py
index 2904eb5bf6..c36bac5cc7 100644
--- a/tensorflow/contrib/keras/python/keras/datasets/reuters.py
+++ b/tensorflow/python/keras/_impl/keras/datasets/reuters.py
@@ -24,7 +24,7 @@ import json
import numpy as np
from six.moves import zip # pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
def load_data(path='reuters.npz',
diff --git a/tensorflow/contrib/keras/python/keras/engine/__init__.py b/tensorflow/python/keras/_impl/keras/engine/__init__.py
index 0a1dc3dd2d..31f624f9af 100644
--- a/tensorflow/contrib/keras/python/keras/engine/__init__.py
+++ b/tensorflow/python/keras/_impl/keras/engine/__init__.py
@@ -18,12 +18,12 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.engine.topology import get_source_inputs
-from tensorflow.contrib.keras.python.keras.engine.topology import Input
-from tensorflow.contrib.keras.python.keras.engine.topology import InputLayer
-from tensorflow.contrib.keras.python.keras.engine.topology import InputSpec
-from tensorflow.contrib.keras.python.keras.engine.topology import Layer
-from tensorflow.contrib.keras.python.keras.engine.training import Model
+from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
+from tensorflow.python.keras._impl.keras.engine.topology import Input
+from tensorflow.python.keras._impl.keras.engine.topology import InputLayer
+from tensorflow.python.keras._impl.keras.engine.topology import InputSpec
+from tensorflow.python.keras._impl.keras.engine.topology import Layer
+from tensorflow.python.keras._impl.keras.engine.training import Model
# Note: topology.Node is an internal class,
diff --git a/tensorflow/contrib/keras/python/keras/engine/topology.py b/tensorflow/python/keras/_impl/keras/engine/topology.py
index 6502ba0f72..b6d341f7c9 100644
--- a/tensorflow/contrib/keras/python/keras/engine/topology.py
+++ b/tensorflow/python/keras/_impl/keras/engine/topology.py
@@ -26,11 +26,11 @@ import os
import numpy as np
from six.moves import zip # pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
-from tensorflow.contrib.keras.python.keras.utils.io_utils import ask_to_proceed_with_overwrite
-from tensorflow.contrib.keras.python.keras.utils.layer_utils import print_summary as print_layer_summary
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils import conv_utils
+from tensorflow.python.keras._impl.keras.utils.io_utils import ask_to_proceed_with_overwrite
+from tensorflow.python.keras._impl.keras.utils.layer_utils import print_summary as print_layer_summary
from tensorflow.python.layers import base as tf_base_layers
from tensorflow.python.platform import tf_logging as logging
@@ -941,7 +941,7 @@ class Network(tf_base_layers.Network, Layer):
layer_name = layer_data['name']
# Instantiate layer.
- from tensorflow.contrib.keras.python.keras.layers import deserialize as deserialize_layer # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras.layers import deserialize as deserialize_layer # pylint: disable=g-import-not-at-top
layer = deserialize_layer(layer_data, custom_objects=custom_objects)
created_layers[layer_name] = layer
@@ -1022,7 +1022,7 @@ class Network(tf_base_layers.Network, Layer):
model = load_model('my_model.h5')
```
"""
- from tensorflow.contrib.keras.python.keras.models import save_model # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras.models import save_model # pylint: disable=g-import-not-at-top
save_model(self, filepath, overwrite, include_optimizer)
def save_weights(self, filepath, overwrite=True):
@@ -1100,7 +1100,7 @@ class Network(tf_base_layers.Network, Layer):
Returns:
Model config with Keras version information added.
"""
- from tensorflow.contrib.keras.python.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
config = self.get_config()
model_config = {
@@ -1247,7 +1247,7 @@ def _to_list(x):
def save_weights_to_hdf5_group(f, layers):
- from tensorflow.contrib.keras.python.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
f.attrs['layer_names'] = [layer.name.encode('utf8') for layer in layers]
f.attrs['backend'] = K.backend().encode('utf8')
diff --git a/tensorflow/contrib/keras/python/keras/engine/topology_test.py b/tensorflow/python/keras/_impl/keras/engine/topology_test.py
index 0fe775cf66..e5ec01ed71 100644
--- a/tensorflow/contrib/keras/python/keras/engine/topology_test.py
+++ b/tensorflow/python/keras/_impl/keras/engine/topology_test.py
@@ -23,8 +23,8 @@ import shutil
import numpy as np
-from tensorflow.contrib.keras.python import keras
from tensorflow.python.framework import dtypes
+from tensorflow.python.keras._impl import keras
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/engine/training.py b/tensorflow/python/keras/_impl/keras/engine/training.py
index 619ae74b66..0b04c17ad7 100644
--- a/tensorflow/contrib/keras/python/keras/engine/training.py
+++ b/tensorflow/python/keras/_impl/keras/engine/training.py
@@ -23,16 +23,16 @@ import copy
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import callbacks as cbks
-from tensorflow.contrib.keras.python.keras import losses
-from tensorflow.contrib.keras.python.keras import metrics as metrics_module
-from tensorflow.contrib.keras.python.keras import optimizers
-from tensorflow.contrib.keras.python.keras.engine.topology import Container
-from tensorflow.contrib.keras.python.keras.utils.data_utils import GeneratorEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.data_utils import OrderedEnqueuer
-from tensorflow.contrib.keras.python.keras.utils.data_utils import Sequence
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import Progbar
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import callbacks as cbks
+from tensorflow.python.keras._impl.keras import losses
+from tensorflow.python.keras._impl.keras import metrics as metrics_module
+from tensorflow.python.keras._impl.keras import optimizers
+from tensorflow.python.keras._impl.keras.engine.topology import Container
+from tensorflow.python.keras._impl.keras.utils.data_utils import GeneratorEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import OrderedEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import Sequence
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
from tensorflow.python.platform import tf_logging as logging
diff --git a/tensorflow/contrib/keras/python/keras/engine/training_test.py b/tensorflow/python/keras/_impl/keras/engine/training_test.py
index 30cdba96e4..bc9ad6693e 100644
--- a/tensorflow/contrib/keras/python/keras/engine/training_test.py
+++ b/tensorflow/python/keras/_impl/keras/engine/training_test.py
@@ -20,9 +20,9 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
-from tensorflow.contrib.keras.python.keras.engine.training import _weighted_masked_objective
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
+from tensorflow.python.keras._impl.keras.engine.training import _weighted_masked_objective
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/initializers.py b/tensorflow/python/keras/_impl/keras/initializers.py
index ae76c079f3..8752faa534 100644
--- a/tensorflow/contrib/keras/python/keras/initializers.py
+++ b/tensorflow/python/keras/_impl/keras/initializers.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
from tensorflow.python.ops.init_ops import Constant
from tensorflow.python.ops.init_ops import Identity
from tensorflow.python.ops.init_ops import Initializer # pylint: disable=unused-import
diff --git a/tensorflow/contrib/keras/python/keras/initializers_test.py b/tensorflow/python/keras/_impl/keras/initializers_test.py
index f39d2bfd52..7b4e6b4d5b 100644
--- a/tensorflow/contrib/keras/python/keras/initializers_test.py
+++ b/tensorflow/python/keras/_impl/keras/initializers_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.ops import init_ops
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/integration_test.py b/tensorflow/python/keras/_impl/keras/integration_test.py
index 5c42ffcfbd..d7d20e5698 100644
--- a/tensorflow/contrib/keras/python/keras/integration_test.py
+++ b/tensorflow/python/keras/_impl/keras/integration_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.layers import base as tf_base_layers
from tensorflow.python.layers import core as tf_core_layers
from tensorflow.python.ops import nn
diff --git a/tensorflow/python/keras/_impl/keras/layers/__init__.py b/tensorflow/python/keras/_impl/keras/layers/__init__.py
new file mode 100644
index 0000000000..81b2faf106
--- /dev/null
+++ b/tensorflow/python/keras/_impl/keras/layers/__init__.py
@@ -0,0 +1,40 @@
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras layers module.
+"""
+# pylint: disable=wildcard-import
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.engine import Input
+from tensorflow.python.keras._impl.keras.engine import InputLayer
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import *
+from tensorflow.python.keras._impl.keras.layers.convolutional import *
+from tensorflow.python.keras._impl.keras.layers.convolutional_recurrent import *
+from tensorflow.python.keras._impl.keras.layers.core import *
+from tensorflow.python.keras._impl.keras.layers.embeddings import *
+from tensorflow.python.keras._impl.keras.layers.local import *
+from tensorflow.python.keras._impl.keras.layers.merge import *
+from tensorflow.python.keras._impl.keras.layers.noise import *
+from tensorflow.python.keras._impl.keras.layers.normalization import *
+from tensorflow.python.keras._impl.keras.layers.pooling import *
+from tensorflow.python.keras._impl.keras.layers.recurrent import *
+from tensorflow.python.keras._impl.keras.layers.serialization import deserialize
+from tensorflow.python.keras._impl.keras.layers.serialization import serialize
+from tensorflow.python.keras._impl.keras.layers.wrappers import *
+
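Every hunk in this patch applies the same mechanical rename: the package prefix `tensorflow.contrib.keras.python` becomes `tensorflow.python.keras._impl`, and the import lines are then re-sorted alphabetically. A minimal sketch of that rewrite (the `migrate_import` helper is hypothetical, for illustration only — it is not part of this commit):

```python
# Old and new package prefixes, as seen in the -/+ lines of this patch.
OLD_PREFIX = "tensorflow.contrib.keras.python"
NEW_PREFIX = "tensorflow.python.keras._impl"

def migrate_import(line: str) -> str:
    """Rewrite one import line from the contrib path to the _impl path."""
    return line.replace(OLD_PREFIX, NEW_PREFIX)

print(migrate_import(
    "from tensorflow.contrib.keras.python.keras import backend as K"))
# from tensorflow.python.keras._impl.keras import backend as K
```

Note that because the prefix sorts differently (`contrib` vs. `python`), several hunks also reorder the rewritten lines relative to unchanged imports such as `tensorflow.python.framework`, which is why some imports appear to move within their block.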
diff --git a/tensorflow/contrib/keras/python/keras/layers/advanced_activations.py b/tensorflow/python/keras/_impl/keras/layers/advanced_activations.py
index 55f17ac4e2..1cb881a13f 100644
--- a/tensorflow/contrib/keras/python/keras/layers/advanced_activations.py
+++ b/tensorflow/python/keras/_impl/keras/layers/advanced_activations.py
@@ -19,13 +19,13 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
class LeakyReLU(Layer):
diff --git a/tensorflow/contrib/keras/python/keras/layers/advanced_activations_test.py b/tensorflow/python/keras/_impl/keras/layers/advanced_activations_test.py
index 1be56123d8..91efab30ed 100644
--- a/tensorflow/contrib/keras/python/keras/layers/advanced_activations_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/advanced_activations_test.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/convolutional.py b/tensorflow/python/keras/_impl/keras/layers/convolutional.py
index 9eda94c1df..ce96bc66f7 100644
--- a/tensorflow/contrib/keras/python/keras/layers/convolutional.py
+++ b/tensorflow/python/keras/_impl/keras/layers/convolutional.py
@@ -19,24 +19,24 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
+from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
# imports for backwards namespace compatibility
# pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import AveragePooling3D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling1D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling2D
-from tensorflow.contrib.keras.python.keras.layers.pooling import MaxPooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling3D
# pylint: enable=unused-import
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
-from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras.utils import conv_utils
from tensorflow.python.layers import convolutional as tf_convolutional_layers
diff --git a/tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent.py b/tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent.py
index ed5aea2e57..74757532e1 100644
--- a/tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent.py
+++ b/tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent.py
@@ -20,15 +20,15 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.layers.recurrent import Recurrent
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.layers.recurrent import Recurrent
+from tensorflow.python.keras._impl.keras.utils import conv_utils
class ConvRecurrent2D(Recurrent):
diff --git a/tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent_test.py b/tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent_test.py
index 1ce17f0c31..60137bdd72 100644
--- a/tensorflow/contrib/keras/python/keras/layers/convolutional_recurrent_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/convolutional_recurrent_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/convolutional_test.py b/tensorflow/python/keras/_impl/keras/layers/convolutional_test.py
index 00a7fbf8fb..be7da6f2b4 100644
--- a/tensorflow/contrib/keras/python/keras/layers/convolutional_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/convolutional_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/core.py b/tensorflow/python/keras/_impl/keras/layers/core.py
index e5df0c5800..e7b87a09aa 100644
--- a/tensorflow/contrib/keras/python/keras/layers/core.py
+++ b/tensorflow/python/keras/_impl/keras/layers/core.py
@@ -23,18 +23,18 @@ import types as python_types
import numpy as np
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import func_dump
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import func_load
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import has_arg
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import func_dump
+from tensorflow.python.keras._impl.keras.utils.generic_utils import func_load
+from tensorflow.python.keras._impl.keras.utils.generic_utils import has_arg
from tensorflow.python.layers import core as tf_core_layers
diff --git a/tensorflow/contrib/keras/python/keras/layers/core_test.py b/tensorflow/python/keras/_impl/keras/layers/core_test.py
index 818c55afe4..5b15895c41 100644
--- a/tensorflow/contrib/keras/python/keras/layers/core_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/core_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/embeddings.py b/tensorflow/python/keras/_impl/keras/layers/embeddings.py
index 9f617fd3e4..65d6355077 100644
--- a/tensorflow/contrib/keras/python/keras/layers/embeddings.py
+++ b/tensorflow/python/keras/_impl/keras/layers/embeddings.py
@@ -18,12 +18,12 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import Layer
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import Layer
class Embedding(Layer):
diff --git a/tensorflow/contrib/keras/python/keras/layers/embeddings_test.py b/tensorflow/python/keras/_impl/keras/layers/embeddings_test.py
index 5d6d386862..1712111b87 100644
--- a/tensorflow/contrib/keras/python/keras/layers/embeddings_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/embeddings_test.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/gru_test.py b/tensorflow/python/keras/_impl/keras/layers/gru_test.py
index 9af3290480..03f0736161 100644
--- a/tensorflow/contrib/keras/python/keras/layers/gru_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/gru_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/local.py b/tensorflow/python/keras/_impl/keras/layers/local.py
index 31a29cdaf4..040fe40c57 100644
--- a/tensorflow/contrib/keras/python/keras/layers/local.py
+++ b/tensorflow/python/keras/_impl/keras/layers/local.py
@@ -14,20 +14,19 @@
# ==============================================================================
"""Locally-connected layers.
"""
-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.utils import conv_utils
class LocallyConnected1D(Layer):
diff --git a/tensorflow/contrib/keras/python/keras/layers/local_test.py b/tensorflow/python/keras/_impl/keras/layers/local_test.py
index 6da20d8f83..a815a0fadc 100644
--- a/tensorflow/contrib/keras/python/keras/layers/local_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/local_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/lstm_test.py b/tensorflow/python/keras/_impl/keras/layers/lstm_test.py
index d39ea90523..94049d4066 100644
--- a/tensorflow/contrib/keras/python/keras/layers/lstm_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/lstm_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/merge.py b/tensorflow/python/keras/_impl/keras/layers/merge.py
index 486036a156..b6391dba25 100644
--- a/tensorflow/contrib/keras/python/keras/layers/merge.py
+++ b/tensorflow/python/keras/_impl/keras/layers/merge.py
@@ -20,9 +20,9 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.engine.topology import Layer
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.engine.topology import Layer
class _Merge(Layer):
diff --git a/tensorflow/contrib/keras/python/keras/layers/merge_test.py b/tensorflow/python/keras/_impl/keras/layers/merge_test.py
index aca6728e2a..ea76337317 100644
--- a/tensorflow/contrib/keras/python/keras/layers/merge_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/merge_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/noise.py b/tensorflow/python/keras/_impl/keras/layers/noise.py
index e3cfa1f711..9caa8b7024 100644
--- a/tensorflow/contrib/keras/python/keras/layers/noise.py
+++ b/tensorflow/python/keras/_impl/keras/layers/noise.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.engine import Layer
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.engine import Layer
class GaussianNoise(Layer):
diff --git a/tensorflow/contrib/keras/python/keras/layers/noise_test.py b/tensorflow/python/keras/_impl/keras/layers/noise_test.py
index 8fb1339c2e..f9b4d9cd09 100644
--- a/tensorflow/contrib/keras/python/keras/layers/noise_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/noise_test.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/normalization.py b/tensorflow/python/keras/_impl/keras/layers/normalization.py
index 7b98fe9e85..965ef70e6e 100644
--- a/tensorflow/contrib/keras/python/keras/layers/normalization.py
+++ b/tensorflow/python/keras/_impl/keras/layers/normalization.py
@@ -18,11 +18,11 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import Layer
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import Layer
from tensorflow.python.layers import normalization as tf_normalization_layers
diff --git a/tensorflow/contrib/keras/python/keras/layers/normalization_test.py b/tensorflow/python/keras/_impl/keras/layers/normalization_test.py
index eaeafb0c62..39a90e5970 100644
--- a/tensorflow/contrib/keras/python/keras/layers/normalization_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/normalization_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/pooling.py b/tensorflow/python/keras/_impl/keras/layers/pooling.py
index 704f05e494..e773e39679 100644
--- a/tensorflow/contrib/keras/python/keras/layers/pooling.py
+++ b/tensorflow/python/keras/_impl/keras/layers/pooling.py
@@ -18,11 +18,11 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.utils import conv_utils
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.utils import conv_utils
from tensorflow.python.layers import pooling as tf_pooling_layers
diff --git a/tensorflow/contrib/keras/python/keras/layers/pooling_test.py b/tensorflow/python/keras/_impl/keras/layers/pooling_test.py
index d8a6a1673b..ec0a5ae560 100644
--- a/tensorflow/contrib/keras/python/keras/layers/pooling_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/pooling_test.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/recurrent.py b/tensorflow/python/keras/_impl/keras/layers/recurrent.py
index 988ddf54cc..f0f5e56495 100644
--- a/tensorflow/contrib/keras/python/keras/layers/recurrent.py
+++ b/tensorflow/python/keras/_impl/keras/layers/recurrent.py
@@ -21,14 +21,14 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras import activations
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import constraints
-from tensorflow.contrib.keras.python.keras import initializers
-from tensorflow.contrib.keras.python.keras import regularizers
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import activations
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import constraints
+from tensorflow.python.keras._impl.keras import initializers
+from tensorflow.python.keras._impl.keras import regularizers
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
# pylint: disable=access-member-before-definition
@@ -48,7 +48,7 @@ def _time_distributed_dense(x,
x: input tensor.
w: weight matrix.
b: optional bias vector.
- dropout: wether to apply dropout (same dropout mask
+ dropout: whether to apply dropout (same dropout mask
for every temporal slice of the input).
input_dim: integer; optional dimensionality of the input.
output_dim: integer; optional dimensionality of the output.
diff --git a/tensorflow/contrib/keras/python/keras/layers/serialization.py b/tensorflow/python/keras/_impl/keras/layers/serialization.py
index f9c21a3e67..928feaadbf 100644
--- a/tensorflow/contrib/keras/python/keras/layers/serialization.py
+++ b/tensorflow/python/keras/_impl/keras/layers/serialization.py
@@ -20,21 +20,21 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.engine import Input
-from tensorflow.contrib.keras.python.keras.engine import InputLayer
-from tensorflow.contrib.keras.python.keras.layers.advanced_activations import *
-from tensorflow.contrib.keras.python.keras.layers.convolutional import *
-from tensorflow.contrib.keras.python.keras.layers.convolutional_recurrent import *
-from tensorflow.contrib.keras.python.keras.layers.core import *
-from tensorflow.contrib.keras.python.keras.layers.embeddings import *
-from tensorflow.contrib.keras.python.keras.layers.local import *
-from tensorflow.contrib.keras.python.keras.layers.merge import *
-from tensorflow.contrib.keras.python.keras.layers.noise import *
-from tensorflow.contrib.keras.python.keras.layers.normalization import *
-from tensorflow.contrib.keras.python.keras.layers.pooling import *
-from tensorflow.contrib.keras.python.keras.layers.recurrent import *
-from tensorflow.contrib.keras.python.keras.layers.wrappers import *
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.engine import Input
+from tensorflow.python.keras._impl.keras.engine import InputLayer
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import *
+from tensorflow.python.keras._impl.keras.layers.convolutional import *
+from tensorflow.python.keras._impl.keras.layers.convolutional_recurrent import *
+from tensorflow.python.keras._impl.keras.layers.core import *
+from tensorflow.python.keras._impl.keras.layers.embeddings import *
+from tensorflow.python.keras._impl.keras.layers.local import *
+from tensorflow.python.keras._impl.keras.layers.merge import *
+from tensorflow.python.keras._impl.keras.layers.noise import *
+from tensorflow.python.keras._impl.keras.layers.normalization import *
+from tensorflow.python.keras._impl.keras.layers.pooling import *
+from tensorflow.python.keras._impl.keras.layers.recurrent import *
+from tensorflow.python.keras._impl.keras.layers.wrappers import *
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
def serialize(layer):
@@ -52,7 +52,7 @@ def deserialize(config, custom_objects=None):
Returns:
Layer instance (may be Model, Sequential, Layer...)
"""
- from tensorflow.contrib.keras.python.keras import models # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras import models # pylint: disable=g-import-not-at-top
globs = globals() # All layers.
globs['Model'] = models.Model
globs['Sequential'] = models.Sequential
diff --git a/tensorflow/contrib/keras/python/keras/layers/serialization_test.py b/tensorflow/python/keras/_impl/keras/layers/serialization_test.py
index fb2e506a4c..787160d1e7 100644
--- a/tensorflow/contrib/keras/python/keras/layers/serialization_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/serialization_test.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/simplernn_test.py b/tensorflow/python/keras/_impl/keras/layers/simplernn_test.py
index 3d67011352..9833485236 100644
--- a/tensorflow/contrib/keras/python/keras/layers/simplernn_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/simplernn_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/layers/wrappers.py b/tensorflow/python/keras/_impl/keras/layers/wrappers.py
index 9defd6cd1c..79e144869e 100644
--- a/tensorflow/contrib/keras/python/keras/layers/wrappers.py
+++ b/tensorflow/python/keras/_impl/keras/layers/wrappers.py
@@ -21,11 +21,11 @@ from __future__ import print_function
import copy
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.engine import InputSpec
-from tensorflow.contrib.keras.python.keras.engine import Layer
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import has_arg
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+from tensorflow.python.keras._impl.keras.utils.generic_utils import has_arg
from tensorflow.python.layers import base as tf_base_layers
@@ -119,7 +119,7 @@ class Wrapper(Layer):
@classmethod
def from_config(cls, config, custom_objects=None):
- from tensorflow.contrib.keras.python.keras.layers import deserialize as deserialize_layer # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras.layers import deserialize as deserialize_layer # pylint: disable=g-import-not-at-top
layer = deserialize_layer(
config.pop('layer'), custom_objects=custom_objects)
return cls(layer, **config)
diff --git a/tensorflow/contrib/keras/python/keras/layers/wrappers_test.py b/tensorflow/python/keras/_impl/keras/layers/wrappers_test.py
index d8e6c89564..a0951b8240 100644
--- a/tensorflow/contrib/keras/python/keras/layers/wrappers_test.py
+++ b/tensorflow/python/keras/_impl/keras/layers/wrappers_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/losses.py b/tensorflow/python/keras/_impl/keras/losses.py
index e94fca479f..7c6b304622 100644
--- a/tensorflow/contrib/keras/python/keras/losses.py
+++ b/tensorflow/python/keras/_impl/keras/losses.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
def mean_squared_error(y_true, y_pred):
diff --git a/tensorflow/contrib/keras/python/keras/losses_test.py b/tensorflow/python/keras/_impl/keras/losses_test.py
index 6bdcc0b5ff..b295356ec1 100644
--- a/tensorflow/contrib/keras/python/keras/losses_test.py
+++ b/tensorflow/python/keras/_impl/keras/losses_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/metrics.py b/tensorflow/python/keras/_impl/keras/metrics.py
index 999e9cb9d4..202048f26d 100644
--- a/tensorflow/contrib/keras/python/keras/metrics.py
+++ b/tensorflow/python/keras/_impl/keras/metrics.py
@@ -20,23 +20,23 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras import backend as K
+from tensorflow.python.keras._impl.keras import backend as K
# pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.losses import binary_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import cosine_proximity
-from tensorflow.contrib.keras.python.keras.losses import hinge
-from tensorflow.contrib.keras.python.keras.losses import kullback_leibler_divergence
-from tensorflow.contrib.keras.python.keras.losses import logcosh
-from tensorflow.contrib.keras.python.keras.losses import mean_absolute_error
-from tensorflow.contrib.keras.python.keras.losses import mean_absolute_percentage_error
-from tensorflow.contrib.keras.python.keras.losses import mean_squared_error
-from tensorflow.contrib.keras.python.keras.losses import mean_squared_logarithmic_error
-from tensorflow.contrib.keras.python.keras.losses import poisson
-from tensorflow.contrib.keras.python.keras.losses import sparse_categorical_crossentropy
-from tensorflow.contrib.keras.python.keras.losses import squared_hinge
+from tensorflow.python.keras._impl.keras.losses import binary_crossentropy
+from tensorflow.python.keras._impl.keras.losses import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import cosine_proximity
+from tensorflow.python.keras._impl.keras.losses import hinge
+from tensorflow.python.keras._impl.keras.losses import kullback_leibler_divergence
+from tensorflow.python.keras._impl.keras.losses import logcosh
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_error
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_percentage_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_logarithmic_error
+from tensorflow.python.keras._impl.keras.losses import poisson
+from tensorflow.python.keras._impl.keras.losses import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import squared_hinge
# pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
def binary_accuracy(y_true, y_pred):
diff --git a/tensorflow/contrib/keras/python/keras/metrics_test.py b/tensorflow/python/keras/_impl/keras/metrics_test.py
index 84c6528174..f4792f3543 100644
--- a/tensorflow/contrib/keras/python/keras/metrics_test.py
+++ b/tensorflow/python/keras/_impl/keras/metrics_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/models.py b/tensorflow/python/keras/_impl/keras/models.py
index ff06782a44..9a4578b89b 100644
--- a/tensorflow/contrib/keras/python/keras/models.py
+++ b/tensorflow/python/keras/_impl/keras/models.py
@@ -15,7 +15,6 @@
# pylint: disable=protected-access
"""Home of the Sequential model, and the `save_model`/`load_model` functions.
"""
-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
@@ -26,17 +25,17 @@ import os
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras import layers as layer_module
-from tensorflow.contrib.keras.python.keras import optimizers
-from tensorflow.contrib.keras.python.keras.engine import topology
-from tensorflow.contrib.keras.python.keras.engine.topology import Input
-from tensorflow.contrib.keras.python.keras.engine.topology import Layer
-from tensorflow.contrib.keras.python.keras.engine.topology import TFBaseLayer
-from tensorflow.contrib.keras.python.keras.engine.training import Model
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import has_arg
-from tensorflow.contrib.keras.python.keras.utils.io_utils import ask_to_proceed_with_overwrite
from tensorflow.python.framework import ops
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras import layers as layer_module
+from tensorflow.python.keras._impl.keras import optimizers
+from tensorflow.python.keras._impl.keras.engine import topology
+from tensorflow.python.keras._impl.keras.engine.topology import Input
+from tensorflow.python.keras._impl.keras.engine.topology import Layer
+from tensorflow.python.keras._impl.keras.engine.topology import TFBaseLayer
+from tensorflow.python.keras._impl.keras.engine.training import Model
+from tensorflow.python.keras._impl.keras.utils.generic_utils import has_arg
+from tensorflow.python.keras._impl.keras.utils.io_utils import ask_to_proceed_with_overwrite
from tensorflow.python.platform import tf_logging as logging
@@ -114,7 +113,7 @@ def save_model(model, filepath, overwrite=True, include_optimizer=True):
raise TypeError('Not JSON Serializable:', obj)
- from tensorflow.contrib.keras.python.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras import __version__ as keras_version # pylint: disable=g-import-not-at-top
# If file exists and should not be overwritten.
if not overwrite and os.path.isfile(filepath):
diff --git a/tensorflow/contrib/keras/python/keras/models_test.py b/tensorflow/python/keras/_impl/keras/models_test.py
index 1a4e6fb4c8..fd6b20e0ed 100644
--- a/tensorflow/contrib/keras/python/keras/models_test.py
+++ b/tensorflow/python/keras/_impl/keras/models_test.py
@@ -24,7 +24,7 @@ import tempfile
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
from tensorflow.python.training import training as training_module
diff --git a/tensorflow/contrib/keras/python/keras/optimizers.py b/tensorflow/python/keras/_impl/keras/optimizers.py
index f137563d6d..a08073fa86 100644
--- a/tensorflow/contrib/keras/python/keras/optimizers.py
+++ b/tensorflow/python/keras/_impl/keras/optimizers.py
@@ -23,11 +23,11 @@ import copy
import six
from six.moves import zip # pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
from tensorflow.python.framework import dtypes as dtypes_module
from tensorflow.python.framework import ops
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.training import optimizer as tf_optimizer_module
diff --git a/tensorflow/contrib/keras/python/keras/optimizers_test.py b/tensorflow/python/keras/_impl/keras/optimizers_test.py
index a105d13cf9..b63d82f6a0 100644
--- a/tensorflow/contrib/keras/python/keras/optimizers_test.py
+++ b/tensorflow/python/keras/_impl/keras/optimizers_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
from tensorflow.python.training.adam import AdamOptimizer
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/__init__.py b/tensorflow/python/keras/_impl/keras/preprocessing/__init__.py
index 9ae14c9674..2ca48cdbf9 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/__init__.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/__init__.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.preprocessing import image
-from tensorflow.contrib.keras.python.keras.preprocessing import sequence
-from tensorflow.contrib.keras.python.keras.preprocessing import text
+from tensorflow.python.keras._impl.keras.preprocessing import image
+from tensorflow.python.keras._impl.keras.preprocessing import sequence
+from tensorflow.python.keras._impl.keras.preprocessing import text
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/image.py b/tensorflow/python/keras/_impl/keras/preprocessing/image.py
index 4d6e0e0fcb..052a8addc4 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/image.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/image.py
@@ -30,7 +30,7 @@ import threading
import numpy as np
from six.moves import range # pylint: disable=redefined-builtin
-from tensorflow.contrib.keras.python.keras import backend as K
+from tensorflow.python.keras._impl.keras import backend as K
from tensorflow.python.platform import tf_logging as logging
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/image_test.py b/tensorflow/python/keras/_impl/keras/preprocessing/image_test.py
index d9ecb19003..19693410e7 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/image_test.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/image_test.py
@@ -23,7 +23,7 @@ import shutil
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
try:
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/sequence.py b/tensorflow/python/keras/_impl/keras/preprocessing/sequence.py
index a5deec87af..a5deec87af 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/sequence.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/sequence.py
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/sequence_test.py b/tensorflow/python/keras/_impl/keras/preprocessing/sequence_test.py
index 4e54b95c8b..4529e6e94f 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/sequence_test.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/sequence_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/text.py b/tensorflow/python/keras/_impl/keras/preprocessing/text.py
index 47e5aa064f..47e5aa064f 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/text.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/text.py
diff --git a/tensorflow/contrib/keras/python/keras/preprocessing/text_test.py b/tensorflow/python/keras/_impl/keras/preprocessing/text_test.py
index 7deeff0873..17ab48ba3f 100644
--- a/tensorflow/contrib/keras/python/keras/preprocessing/text_test.py
+++ b/tensorflow/python/keras/_impl/keras/preprocessing/text_test.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/regularizers.py b/tensorflow/python/keras/_impl/keras/regularizers.py
index 36cc5c47e4..161ff9bf5b 100644
--- a/tensorflow/contrib/keras/python/keras/regularizers.py
+++ b/tensorflow/python/keras/_impl/keras/regularizers.py
@@ -20,9 +20,9 @@ from __future__ import print_function
import six
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
class Regularizer(object):
diff --git a/tensorflow/contrib/keras/python/keras/regularizers_test.py b/tensorflow/python/keras/_impl/keras/regularizers_test.py
index 528024994f..9a1612b777 100644
--- a/tensorflow/contrib/keras/python/keras/regularizers_test.py
+++ b/tensorflow/python/keras/_impl/keras/regularizers_test.py
@@ -18,8 +18,8 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/testing_utils.py b/tensorflow/python/keras/_impl/keras/testing_utils.py
index 2f51ace945..f204a5df3e 100644
--- a/tensorflow/contrib/keras/python/keras/testing_utils.py
+++ b/tensorflow/python/keras/_impl/keras/testing_utils.py
@@ -20,7 +20,7 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.util import tf_inspect
diff --git a/tensorflow/python/keras/_impl/keras/utils/__init__.py b/tensorflow/python/keras/_impl/keras/utils/__init__.py
new file mode 100644
index 0000000000..fa50b123b7
--- /dev/null
+++ b/tensorflow/python/keras/_impl/keras/utils/__init__.py
@@ -0,0 +1,43 @@
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras utilities.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.utils import conv_utils
+from tensorflow.python.keras._impl.keras.utils import data_utils
+from tensorflow.python.keras._impl.keras.utils import generic_utils
+from tensorflow.python.keras._impl.keras.utils import io_utils
+from tensorflow.python.keras._impl.keras.utils import np_utils
+from tensorflow.python.keras._impl.keras.utils.data_utils import GeneratorEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import OrderedEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import Sequence
+from tensorflow.python.keras._impl.keras.utils.generic_utils import custom_object_scope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import CustomObjectScope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import get_custom_objects
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.io_utils import HDF5Matrix
+from tensorflow.python.keras._impl.keras.utils.layer_utils import convert_all_kernels_in_model
+from tensorflow.python.keras._impl.keras.utils.np_utils import normalize
+from tensorflow.python.keras._impl.keras.utils.np_utils import to_categorical
+from tensorflow.python.keras._impl.keras.utils.vis_utils import plot_model
+
+
+# Globally-importable utils.
diff --git a/tensorflow/contrib/keras/python/keras/utils/conv_utils.py b/tensorflow/python/keras/_impl/keras/utils/conv_utils.py
index ea3a70edab..583079d962 100644
--- a/tensorflow/contrib/keras/python/keras/utils/conv_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/conv_utils.py
@@ -22,7 +22,7 @@ import numpy as np
from six.moves import range # pylint: disable=redefined-builtin
# pylint: disable=unused-import
-from tensorflow.contrib.keras.python.keras import backend as K
+from tensorflow.python.keras._impl.keras import backend as K
from tensorflow.python.layers.utils import conv_input_length
from tensorflow.python.layers.utils import conv_output_length
from tensorflow.python.layers.utils import deconv_output_length as deconv_length
diff --git a/tensorflow/contrib/keras/python/keras/utils/data_utils.py b/tensorflow/python/keras/_impl/keras/utils/data_utils.py
index 08ab8d7204..0ede7f12f2 100644
--- a/tensorflow/contrib/keras/python/keras/utils/data_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/data_utils.py
@@ -36,7 +36,7 @@ from six.moves.urllib.error import HTTPError
from six.moves.urllib.error import URLError
from six.moves.urllib.request import urlopen
-from tensorflow.contrib.keras.python.keras.utils.generic_utils import Progbar
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
try:
import queue # pylint:disable=g-import-not-at-top
diff --git a/tensorflow/contrib/keras/python/keras/utils/data_utils_test.py b/tensorflow/python/keras/_impl/keras/utils/data_utils_test.py
index 55d08a34d0..45322f1f29 100644
--- a/tensorflow/contrib/keras/python/keras/utils/data_utils_test.py
+++ b/tensorflow/python/keras/_impl/keras/utils/data_utils_test.py
@@ -28,7 +28,7 @@ import numpy as np
from six.moves.urllib.parse import urljoin
from six.moves.urllib.request import pathname2url
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/utils/generic_utils.py b/tensorflow/python/keras/_impl/keras/utils/generic_utils.py
index 39a10c8650..39a10c8650 100644
--- a/tensorflow/contrib/keras/python/keras/utils/generic_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/generic_utils.py
diff --git a/tensorflow/contrib/keras/python/keras/utils/generic_utils_test.py b/tensorflow/python/keras/_impl/keras/utils/generic_utils_test.py
index 8a6519f4cc..d57692f4f4 100644
--- a/tensorflow/contrib/keras/python/keras/utils/generic_utils_test.py
+++ b/tensorflow/python/keras/_impl/keras/utils/generic_utils_test.py
@@ -18,7 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python import keras
+from tensorflow.python.keras._impl import keras
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/keras/python/keras/utils/io_utils.py b/tensorflow/python/keras/_impl/keras/utils/io_utils.py
index 5f2ba99be7..5f2ba99be7 100644
--- a/tensorflow/contrib/keras/python/keras/utils/io_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/io_utils.py
diff --git a/tensorflow/python/keras/_impl/keras/utils/io_utils_test.py b/tensorflow/python/keras/_impl/keras/utils/io_utils_test.py
new file mode 100644
index 0000000000..cfeba188d3
--- /dev/null
+++ b/tensorflow/python/keras/_impl/keras/utils/io_utils_test.py
@@ -0,0 +1,100 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for io_utils."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import shutil
+
+import numpy as np
+
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.platform import test
+
+try:
+ import h5py # pylint:disable=g-import-not-at-top
+except ImportError:
+ h5py = None
+
+
+def create_dataset(h5_path='test.h5'):
+ x = np.random.randn(200, 10).astype('float32')
+ y = np.random.randint(0, 2, size=(200, 1))
+ f = h5py.File(h5_path, 'w')
+  # Create a dataset to store the features.
+ x_dset = f.create_dataset('my_data', (200, 10), dtype='f')
+ x_dset[:] = x
+  # Create a dataset to store the labels.
+ y_dset = f.create_dataset('my_labels', (200, 1), dtype='i')
+ y_dset[:] = y
+ f.close()
+
+
+class TestIOUtils(test.TestCase):
+
+ def test_HDF5Matrix(self):
+ if h5py is None:
+ return
+
+ temp_dir = self.get_temp_dir()
+ self.addCleanup(shutil.rmtree, temp_dir)
+
+ h5_path = os.path.join(temp_dir, 'test.h5')
+ create_dataset(h5_path)
+
+ # Instantiating HDF5Matrix for the training set,
+ # which is a slice of the first 150 elements
+ x_train = keras.utils.io_utils.HDF5Matrix(
+ h5_path, 'my_data', start=0, end=150)
+ y_train = keras.utils.io_utils.HDF5Matrix(
+ h5_path, 'my_labels', start=0, end=150)
+
+ # Likewise for the test set
+ x_test = keras.utils.io_utils.HDF5Matrix(
+ h5_path, 'my_data', start=150, end=200)
+ y_test = keras.utils.io_utils.HDF5Matrix(
+ h5_path, 'my_labels', start=150, end=200)
+
+    # HDF5Matrix behaves more or less like a NumPy array
+    # with regard to indexing.
+ self.assertEqual(y_train.shape, (150, 1))
+    # But it does not support negative indices, so don't try print(x_train[-1]).
+
+ self.assertEqual(y_train.dtype, np.dtype('i'))
+ self.assertEqual(y_train.ndim, 2)
+ self.assertEqual(y_train.size, 150)
+
+ model = keras.models.Sequential()
+ model.add(keras.layers.Dense(64, input_shape=(10,), activation='relu'))
+ model.add(keras.layers.Dense(1, activation='sigmoid'))
+ model.compile(loss='binary_crossentropy', optimizer='sgd')
+
+    # Note: you have to use shuffle='batch' or shuffle=False with HDF5Matrix.
+ model.fit(x_train, y_train, batch_size=32, shuffle='batch', verbose=False)
+    # Test that evaluation and prediction
+    # don't crash and return reasonable results.
+ out_pred = model.predict(x_test, batch_size=32, verbose=False)
+ out_eval = model.evaluate(x_test, y_test, batch_size=32, verbose=False)
+
+ self.assertEqual(out_pred.shape, (50, 1))
+ self.assertEqual(out_eval.shape, ())
+ self.assertGreater(out_eval, 0)
+
+
+if __name__ == '__main__':
+ test.main()
diff --git a/tensorflow/contrib/keras/python/keras/utils/layer_utils.py b/tensorflow/python/keras/_impl/keras/utils/layer_utils.py
index 12d5368b08..399bbf3475 100644
--- a/tensorflow/contrib/keras/python/keras/utils/layer_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/layer_utils.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python.keras import backend as K
-from tensorflow.contrib.keras.python.keras.utils.conv_utils import convert_kernel
+from tensorflow.python.keras._impl.keras import backend as K
+from tensorflow.python.keras._impl.keras.utils.conv_utils import convert_kernel
def print_summary(model, line_length=None, positions=None, print_fn=None):
diff --git a/tensorflow/contrib/keras/python/keras/utils/np_utils.py b/tensorflow/python/keras/_impl/keras/utils/np_utils.py
index a23172d342..a23172d342 100644
--- a/tensorflow/contrib/keras/python/keras/utils/np_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/np_utils.py
diff --git a/tensorflow/contrib/keras/python/keras/utils/vis_utils.py b/tensorflow/python/keras/_impl/keras/utils/vis_utils.py
index 949767299b..f227f3c3f7 100644
--- a/tensorflow/contrib/keras/python/keras/utils/vis_utils.py
+++ b/tensorflow/python/keras/_impl/keras/utils/vis_utils.py
@@ -65,8 +65,8 @@ def model_to_dot(model, show_shapes=False, show_layer_names=True, rankdir='TB'):
Returns:
A `pydot.Dot` instance representing the Keras model.
"""
- from tensorflow.contrib.keras.python.keras.layers.wrappers import Wrapper # pylint: disable=g-import-not-at-top
- from tensorflow.contrib.keras.python.keras.models import Sequential # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras.layers.wrappers import Wrapper # pylint: disable=g-import-not-at-top
+ from tensorflow.python.keras._impl.keras.models import Sequential # pylint: disable=g-import-not-at-top
_check_pydot()
dot = pydot.Dot()
diff --git a/tensorflow/contrib/keras/python/keras/wrappers/__init__.py b/tensorflow/python/keras/_impl/keras/wrappers/__init__.py
index 51244ff681..20c95929e3 100644
--- a/tensorflow/contrib/keras/python/keras/wrappers/__init__.py
+++ b/tensorflow/python/keras/_impl/keras/wrappers/__init__.py
@@ -18,5 +18,5 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from tensorflow.contrib.keras.python.keras.wrappers import scikit_learn
+from tensorflow.python.keras._impl.keras.wrappers import scikit_learn
diff --git a/tensorflow/contrib/keras/python/keras/wrappers/scikit_learn.py b/tensorflow/python/keras/_impl/keras/wrappers/scikit_learn.py
index 0d04fc120f..ac7bd49406 100644
--- a/tensorflow/contrib/keras/python/keras/wrappers/scikit_learn.py
+++ b/tensorflow/python/keras/_impl/keras/wrappers/scikit_learn.py
@@ -23,8 +23,8 @@ import types
import numpy as np
-from tensorflow.contrib.keras.python.keras.models import Sequential
-from tensorflow.contrib.keras.python.keras.utils.np_utils import to_categorical
+from tensorflow.python.keras._impl.keras.models import Sequential
+from tensorflow.python.keras._impl.keras.utils.np_utils import to_categorical
from tensorflow.python.util import tf_inspect
diff --git a/tensorflow/contrib/keras/python/keras/wrappers/scikit_learn_test.py b/tensorflow/python/keras/_impl/keras/wrappers/scikit_learn_test.py
index 95e0b951eb..b20a84ee88 100644
--- a/tensorflow/contrib/keras/python/keras/wrappers/scikit_learn_test.py
+++ b/tensorflow/python/keras/_impl/keras/wrappers/scikit_learn_test.py
@@ -20,8 +20,8 @@ from __future__ import print_function
import numpy as np
-from tensorflow.contrib.keras.python import keras
-from tensorflow.contrib.keras.python.keras import testing_utils
+from tensorflow.python.keras._impl import keras
+from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
INPUT_DIM = 5
diff --git a/tensorflow/python/keras/activations/__init__.py b/tensorflow/python/keras/activations/__init__.py
new file mode 100644
index 0000000000..d04838c218
--- /dev/null
+++ b/tensorflow/python/keras/activations/__init__.py
@@ -0,0 +1,41 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in activation functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Activation functions.
+from tensorflow.python.keras._impl.keras.activations import elu
+from tensorflow.python.keras._impl.keras.activations import hard_sigmoid
+from tensorflow.python.keras._impl.keras.activations import linear
+from tensorflow.python.keras._impl.keras.activations import relu
+from tensorflow.python.keras._impl.keras.activations import selu
+from tensorflow.python.keras._impl.keras.activations import sigmoid
+from tensorflow.python.keras._impl.keras.activations import softmax
+from tensorflow.python.keras._impl.keras.activations import softplus
+from tensorflow.python.keras._impl.keras.activations import softsign
+from tensorflow.python.keras._impl.keras.activations import tanh
+
+# Auxiliary utils.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.activations import deserialize
+from tensorflow.python.keras._impl.keras.activations import serialize
+from tensorflow.python.keras._impl.keras.activations import get
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/__init__.py b/tensorflow/python/keras/applications/__init__.py
new file mode 100644
index 0000000000..e34d9a8e0b
--- /dev/null
+++ b/tensorflow/python/keras/applications/__init__.py
@@ -0,0 +1,36 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras Applications are canned architectures with pre-trained weights."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras.applications import inception_v3
+from tensorflow.python.keras.applications import mobilenet
+from tensorflow.python.keras.applications import resnet50
+from tensorflow.python.keras.applications import vgg16
+from tensorflow.python.keras.applications import vgg19
+from tensorflow.python.keras.applications import xception
+from tensorflow.python.keras.applications.inception_v3 import InceptionV3
+from tensorflow.python.keras.applications.mobilenet import MobileNet
+from tensorflow.python.keras.applications.resnet50 import ResNet50
+from tensorflow.python.keras.applications.vgg16 import VGG16
+from tensorflow.python.keras.applications.vgg19 import VGG19
+from tensorflow.python.keras.applications.xception import Xception
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/inception_v3/__init__.py b/tensorflow/python/keras/applications/inception_v3/__init__.py
new file mode 100644
index 0000000000..abf8393ae4
--- /dev/null
+++ b/tensorflow/python/keras/applications/inception_v3/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Inception V3 Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import InceptionV3
+from tensorflow.python.keras._impl.keras.applications.inception_v3 import preprocess_input
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/mobilenet/__init__.py b/tensorflow/python/keras/applications/mobilenet/__init__.py
new file mode 100644
index 0000000000..b809e91193
--- /dev/null
+++ b/tensorflow/python/keras/applications/mobilenet/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""MobileNet Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.mobilenet import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.mobilenet import MobileNet
+from tensorflow.python.keras._impl.keras.applications.mobilenet import preprocess_input
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/resnet50/__init__.py b/tensorflow/python/keras/applications/resnet50/__init__.py
new file mode 100644
index 0000000000..530805d150
--- /dev/null
+++ b/tensorflow/python/keras/applications/resnet50/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""ResNet50 Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.resnet50 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.resnet50 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.resnet50 import ResNet50
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/vgg16/__init__.py b/tensorflow/python/keras/applications/vgg16/__init__.py
new file mode 100644
index 0000000000..118361604b
--- /dev/null
+++ b/tensorflow/python/keras/applications/vgg16/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""VGG16 Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.vgg16 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.vgg16 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.vgg16 import VGG16
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/vgg19/__init__.py b/tensorflow/python/keras/applications/vgg19/__init__.py
new file mode 100644
index 0000000000..cda52628f3
--- /dev/null
+++ b/tensorflow/python/keras/applications/vgg19/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""VGG19 Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.vgg19 import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.vgg19 import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.vgg19 import VGG19
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/applications/xception/__init__.py b/tensorflow/python/keras/applications/xception/__init__.py
new file mode 100644
index 0000000000..ae9cd9cd18
--- /dev/null
+++ b/tensorflow/python/keras/applications/xception/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Xception Keras application."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.applications.xception import decode_predictions
+from tensorflow.python.keras._impl.keras.applications.xception import preprocess_input
+from tensorflow.python.keras._impl.keras.applications.xception import Xception
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/backend/__init__.py b/tensorflow/python/keras/backend/__init__.py
new file mode 100644
index 0000000000..10ef5a7585
--- /dev/null
+++ b/tensorflow/python/keras/backend/__init__.py
@@ -0,0 +1,163 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras backend API."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=redefined-builtin
+from tensorflow.python.keras._impl.keras.backend import abs
+from tensorflow.python.keras._impl.keras.backend import all
+from tensorflow.python.keras._impl.keras.backend import any
+from tensorflow.python.keras._impl.keras.backend import arange
+from tensorflow.python.keras._impl.keras.backend import argmax
+from tensorflow.python.keras._impl.keras.backend import argmin
+from tensorflow.python.keras._impl.keras.backend import backend
+from tensorflow.python.keras._impl.keras.backend import batch_dot
+from tensorflow.python.keras._impl.keras.backend import batch_flatten
+from tensorflow.python.keras._impl.keras.backend import batch_get_value
+from tensorflow.python.keras._impl.keras.backend import batch_normalization
+from tensorflow.python.keras._impl.keras.backend import batch_set_value
+from tensorflow.python.keras._impl.keras.backend import bias_add
+from tensorflow.python.keras._impl.keras.backend import binary_crossentropy
+from tensorflow.python.keras._impl.keras.backend import cast
+from tensorflow.python.keras._impl.keras.backend import cast_to_floatx
+from tensorflow.python.keras._impl.keras.backend import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.backend import clear_session
+from tensorflow.python.keras._impl.keras.backend import clip
+from tensorflow.python.keras._impl.keras.backend import concatenate
+from tensorflow.python.keras._impl.keras.backend import constant
+from tensorflow.python.keras._impl.keras.backend import conv1d
+from tensorflow.python.keras._impl.keras.backend import conv2d
+from tensorflow.python.keras._impl.keras.backend import conv2d_transpose
+from tensorflow.python.keras._impl.keras.backend import conv3d
+from tensorflow.python.keras._impl.keras.backend import cos
+from tensorflow.python.keras._impl.keras.backend import count_params
+from tensorflow.python.keras._impl.keras.backend import ctc_batch_cost
+from tensorflow.python.keras._impl.keras.backend import ctc_decode
+from tensorflow.python.keras._impl.keras.backend import ctc_label_dense_to_sparse
+from tensorflow.python.keras._impl.keras.backend import dot
+from tensorflow.python.keras._impl.keras.backend import dropout
+from tensorflow.python.keras._impl.keras.backend import dtype
+from tensorflow.python.keras._impl.keras.backend import elu
+from tensorflow.python.keras._impl.keras.backend import epsilon
+from tensorflow.python.keras._impl.keras.backend import equal
+from tensorflow.python.keras._impl.keras.backend import eval
+from tensorflow.python.keras._impl.keras.backend import exp
+from tensorflow.python.keras._impl.keras.backend import expand_dims
+from tensorflow.python.keras._impl.keras.backend import eye
+from tensorflow.python.keras._impl.keras.backend import flatten
+from tensorflow.python.keras._impl.keras.backend import floatx
+from tensorflow.python.keras._impl.keras.backend import foldl
+from tensorflow.python.keras._impl.keras.backend import foldr
+from tensorflow.python.keras._impl.keras.backend import function
+from tensorflow.python.keras._impl.keras.backend import gather
+from tensorflow.python.keras._impl.keras.backend import get_session
+from tensorflow.python.keras._impl.keras.backend import get_uid
+from tensorflow.python.keras._impl.keras.backend import get_value
+from tensorflow.python.keras._impl.keras.backend import gradients
+from tensorflow.python.keras._impl.keras.backend import greater
+from tensorflow.python.keras._impl.keras.backend import greater_equal
+from tensorflow.python.keras._impl.keras.backend import hard_sigmoid
+from tensorflow.python.keras._impl.keras.backend import image_data_format
+from tensorflow.python.keras._impl.keras.backend import in_test_phase
+from tensorflow.python.keras._impl.keras.backend import in_top_k
+from tensorflow.python.keras._impl.keras.backend import in_train_phase
+from tensorflow.python.keras._impl.keras.backend import int_shape
+from tensorflow.python.keras._impl.keras.backend import is_sparse
+from tensorflow.python.keras._impl.keras.backend import l2_normalize
+from tensorflow.python.keras._impl.keras.backend import learning_phase
+from tensorflow.python.keras._impl.keras.backend import less
+from tensorflow.python.keras._impl.keras.backend import less_equal
+from tensorflow.python.keras._impl.keras.backend import log
+from tensorflow.python.keras._impl.keras.backend import manual_variable_initialization
+from tensorflow.python.keras._impl.keras.backend import map_fn
+from tensorflow.python.keras._impl.keras.backend import max
+from tensorflow.python.keras._impl.keras.backend import maximum
+from tensorflow.python.keras._impl.keras.backend import mean
+from tensorflow.python.keras._impl.keras.backend import min
+from tensorflow.python.keras._impl.keras.backend import minimum
+from tensorflow.python.keras._impl.keras.backend import moving_average_update
+from tensorflow.python.keras._impl.keras.backend import name_scope
+from tensorflow.python.keras._impl.keras.backend import ndim
+from tensorflow.python.keras._impl.keras.backend import normalize_batch_in_training
+from tensorflow.python.keras._impl.keras.backend import not_equal
+from tensorflow.python.keras._impl.keras.backend import one_hot
+from tensorflow.python.keras._impl.keras.backend import ones
+from tensorflow.python.keras._impl.keras.backend import ones_like
+from tensorflow.python.keras._impl.keras.backend import permute_dimensions
+from tensorflow.python.keras._impl.keras.backend import placeholder
+from tensorflow.python.keras._impl.keras.backend import pool2d
+from tensorflow.python.keras._impl.keras.backend import pool3d
+from tensorflow.python.keras._impl.keras.backend import pow
+from tensorflow.python.keras._impl.keras.backend import print_tensor
+from tensorflow.python.keras._impl.keras.backend import prod
+from tensorflow.python.keras._impl.keras.backend import random_binomial
+from tensorflow.python.keras._impl.keras.backend import random_normal
+from tensorflow.python.keras._impl.keras.backend import random_normal_variable
+from tensorflow.python.keras._impl.keras.backend import random_uniform
+from tensorflow.python.keras._impl.keras.backend import random_uniform_variable
+from tensorflow.python.keras._impl.keras.backend import relu
+from tensorflow.python.keras._impl.keras.backend import repeat
+from tensorflow.python.keras._impl.keras.backend import repeat_elements
+from tensorflow.python.keras._impl.keras.backend import reset_uids
+from tensorflow.python.keras._impl.keras.backend import reshape
+from tensorflow.python.keras._impl.keras.backend import resize_images
+from tensorflow.python.keras._impl.keras.backend import resize_volumes
+from tensorflow.python.keras._impl.keras.backend import reverse
+from tensorflow.python.keras._impl.keras.backend import rnn
+from tensorflow.python.keras._impl.keras.backend import round
+from tensorflow.python.keras._impl.keras.backend import separable_conv2d
+from tensorflow.python.keras._impl.keras.backend import set_epsilon
+from tensorflow.python.keras._impl.keras.backend import set_floatx
+from tensorflow.python.keras._impl.keras.backend import set_image_data_format
+from tensorflow.python.keras._impl.keras.backend import set_learning_phase
+from tensorflow.python.keras._impl.keras.backend import set_session
+from tensorflow.python.keras._impl.keras.backend import set_value
+from tensorflow.python.keras._impl.keras.backend import shape
+from tensorflow.python.keras._impl.keras.backend import sigmoid
+from tensorflow.python.keras._impl.keras.backend import sign
+from tensorflow.python.keras._impl.keras.backend import sin
+from tensorflow.python.keras._impl.keras.backend import softmax
+from tensorflow.python.keras._impl.keras.backend import softplus
+from tensorflow.python.keras._impl.keras.backend import softsign
+from tensorflow.python.keras._impl.keras.backend import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.backend import spatial_2d_padding
+from tensorflow.python.keras._impl.keras.backend import spatial_3d_padding
+from tensorflow.python.keras._impl.keras.backend import sqrt
+from tensorflow.python.keras._impl.keras.backend import square
+from tensorflow.python.keras._impl.keras.backend import squeeze
+from tensorflow.python.keras._impl.keras.backend import stack
+from tensorflow.python.keras._impl.keras.backend import std
+from tensorflow.python.keras._impl.keras.backend import stop_gradient
+from tensorflow.python.keras._impl.keras.backend import sum
+from tensorflow.python.keras._impl.keras.backend import switch
+from tensorflow.python.keras._impl.keras.backend import tanh
+from tensorflow.python.keras._impl.keras.backend import temporal_padding
+from tensorflow.python.keras._impl.keras.backend import to_dense
+from tensorflow.python.keras._impl.keras.backend import transpose
+from tensorflow.python.keras._impl.keras.backend import truncated_normal
+from tensorflow.python.keras._impl.keras.backend import update
+from tensorflow.python.keras._impl.keras.backend import update_add
+from tensorflow.python.keras._impl.keras.backend import update_sub
+from tensorflow.python.keras._impl.keras.backend import var
+from tensorflow.python.keras._impl.keras.backend import variable
+from tensorflow.python.keras._impl.keras.backend import zeros
+from tensorflow.python.keras._impl.keras.backend import zeros_like
+
+del absolute_import
+del division
+del print_function
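Each of these new public modules follows the same facade pattern: re-export names from the private `_impl` package, then `del` the `__future__` imports so they do not leak into the public `tf.keras` namespace. A minimal pure-Python sketch of why the trailing `del` statements are needed (the module and function names here are hypothetical stand-ins, not the actual TensorFlow code):

```python
import types

# A __future__ import binds a real name (a _Feature object) in the module
# namespace; without the del, it would show up as a public API symbol.
src = """
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

def relu(x):          # stand-in for a re-exported backend function
    return max(0, x)

del absolute_import
del division
del print_function
"""

mod = types.ModuleType("facade_demo")
exec(compile(src, "<facade_demo>", "exec"), mod.__dict__)

print("relu" in mod.__dict__)      # True: the public name survives
print("division" in mod.__dict__)  # False: the __future__ artifact is gone
```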
diff --git a/tensorflow/python/keras/callbacks/__init__.py b/tensorflow/python/keras/callbacks/__init__.py
new file mode 100644
index 0000000000..2d884790dd
--- /dev/null
+++ b/tensorflow/python/keras/callbacks/__init__.py
@@ -0,0 +1,37 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras callback classes."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.callbacks import BaseLogger
+from tensorflow.python.keras._impl.keras.callbacks import Callback
+from tensorflow.python.keras._impl.keras.callbacks import CSVLogger
+from tensorflow.python.keras._impl.keras.callbacks import EarlyStopping
+from tensorflow.python.keras._impl.keras.callbacks import History
+from tensorflow.python.keras._impl.keras.callbacks import LambdaCallback
+from tensorflow.python.keras._impl.keras.callbacks import LearningRateScheduler
+from tensorflow.python.keras._impl.keras.callbacks import ModelCheckpoint
+from tensorflow.python.keras._impl.keras.callbacks import ProgbarLogger
+from tensorflow.python.keras._impl.keras.callbacks import ReduceLROnPlateau
+from tensorflow.python.keras._impl.keras.callbacks import RemoteMonitor
+from tensorflow.python.keras._impl.keras.callbacks import TensorBoard
+from tensorflow.python.keras._impl.keras.callbacks import TerminateOnNaN
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/constraints/__init__.py b/tensorflow/python/keras/constraints/__init__.py
new file mode 100644
index 0000000000..152606d8eb
--- /dev/null
+++ b/tensorflow/python/keras/constraints/__init__.py
@@ -0,0 +1,40 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in constraints functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Constraints functions / callable classes.
+from tensorflow.python.keras._impl.keras.constraints import Constraint
+from tensorflow.python.keras._impl.keras.constraints import max_norm
+from tensorflow.python.keras._impl.keras.constraints import MaxNorm
+from tensorflow.python.keras._impl.keras.constraints import min_max_norm
+from tensorflow.python.keras._impl.keras.constraints import MinMaxNorm
+from tensorflow.python.keras._impl.keras.constraints import non_neg
+from tensorflow.python.keras._impl.keras.constraints import NonNeg
+from tensorflow.python.keras._impl.keras.constraints import unit_norm
+from tensorflow.python.keras._impl.keras.constraints import UnitNorm
+
+# Auxiliary utils.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.constraints import deserialize
+from tensorflow.python.keras._impl.keras.constraints import serialize
+from tensorflow.python.keras._impl.keras.constraints import get
+
+del absolute_import
+del division
+del print_function
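The `deserialize`/`serialize`/`get` trio re-exported at the bottom of this file follows a convention used across the Keras submodules in this commit: `get` accepts a string name, a config dict, or an already-instantiated object and returns an instance. A rough pure-Python sketch of that convention (the registry and class here are illustrative stand-ins, not the actual implementation):

```python
class MaxNorm(object):
    """Stand-in for a Keras constraint class."""
    def __init__(self, max_value=2):
        self.max_value = max_value

# Hypothetical registry mapping serialized names to classes.
_CONSTRAINTS = {"max_norm": MaxNorm, "MaxNorm": MaxNorm}

def get(identifier):
    """Resolve a string, dict config, or instance to a constraint object."""
    if identifier is None:
        return None
    if isinstance(identifier, str):
        return _CONSTRAINTS[identifier]()           # name -> default instance
    if isinstance(identifier, dict):
        cls = _CONSTRAINTS[identifier["class_name"]]
        return cls(**identifier.get("config", {}))  # dict config -> instance
    return identifier                               # already an instance

print(type(get("max_norm")).__name__)  # MaxNorm
print(get({"class_name": "MaxNorm", "config": {"max_value": 3}}).max_value)  # 3
```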
diff --git a/tensorflow/python/keras/datasets/__init__.py b/tensorflow/python/keras/datasets/__init__.py
new file mode 100644
index 0000000000..b76f278964
--- /dev/null
+++ b/tensorflow/python/keras/datasets/__init__.py
@@ -0,0 +1,30 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in datasets."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras.datasets import boston_housing
+from tensorflow.python.keras.datasets import cifar10
+from tensorflow.python.keras.datasets import cifar100
+from tensorflow.python.keras.datasets import imdb
+from tensorflow.python.keras.datasets import mnist
+from tensorflow.python.keras.datasets import reuters
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/boston_housing/__init__.py b/tensorflow/python/keras/datasets/boston_housing/__init__.py
new file mode 100644
index 0000000000..b5371a03fd
--- /dev/null
+++ b/tensorflow/python/keras/datasets/boston_housing/__init__.py
@@ -0,0 +1,25 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Boston housing price regression dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.boston_housing import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/cifar10/__init__.py b/tensorflow/python/keras/datasets/cifar10/__init__.py
new file mode 100644
index 0000000000..68d3eb789e
--- /dev/null
+++ b/tensorflow/python/keras/datasets/cifar10/__init__.py
@@ -0,0 +1,25 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""CIFAR10 small image classification dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.cifar10 import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/cifar100/__init__.py b/tensorflow/python/keras/datasets/cifar100/__init__.py
new file mode 100644
index 0000000000..ca93742673
--- /dev/null
+++ b/tensorflow/python/keras/datasets/cifar100/__init__.py
@@ -0,0 +1,25 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""CIFAR100 small image classification dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.cifar100 import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/imdb/__init__.py b/tensorflow/python/keras/datasets/imdb/__init__.py
new file mode 100644
index 0000000000..1c6396d2d3
--- /dev/null
+++ b/tensorflow/python/keras/datasets/imdb/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""IMDB movie review sentiment classification dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.imdb import get_word_index
+from tensorflow.python.keras._impl.keras.datasets.imdb import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/mnist/__init__.py b/tensorflow/python/keras/datasets/mnist/__init__.py
new file mode 100644
index 0000000000..364255f338
--- /dev/null
+++ b/tensorflow/python/keras/datasets/mnist/__init__.py
@@ -0,0 +1,25 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""MNIST handwritten digits classification dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.mnist import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/datasets/reuters/__init__.py b/tensorflow/python/keras/datasets/reuters/__init__.py
new file mode 100644
index 0000000000..bb6791a344
--- /dev/null
+++ b/tensorflow/python/keras/datasets/reuters/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Reuters newswire topic classification dataset."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.datasets.reuters import get_word_index
+from tensorflow.python.keras._impl.keras.datasets.reuters import load_data
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/initializers/__init__.py b/tensorflow/python/keras/initializers/__init__.py
new file mode 100644
index 0000000000..6b1fcfd2d9
--- /dev/null
+++ b/tensorflow/python/keras/initializers/__init__.py
@@ -0,0 +1,49 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in initializers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Initializer functions / callable classes.
+from tensorflow.python.keras._impl.keras.initializers import Constant
+from tensorflow.python.keras._impl.keras.initializers import Identity
+from tensorflow.python.keras._impl.keras.initializers import Initializer
+from tensorflow.python.keras._impl.keras.initializers import Ones
+from tensorflow.python.keras._impl.keras.initializers import Orthogonal
+from tensorflow.python.keras._impl.keras.initializers import RandomNormal
+from tensorflow.python.keras._impl.keras.initializers import RandomUniform
+from tensorflow.python.keras._impl.keras.initializers import TruncatedNormal
+from tensorflow.python.keras._impl.keras.initializers import VarianceScaling
+from tensorflow.python.keras._impl.keras.initializers import Zeros
+
+# Functional interface.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.initializers import glorot_normal
+from tensorflow.python.keras._impl.keras.initializers import glorot_uniform
+from tensorflow.python.keras._impl.keras.initializers import he_normal
+from tensorflow.python.keras._impl.keras.initializers import he_uniform
+from tensorflow.python.keras._impl.keras.initializers import lecun_normal
+from tensorflow.python.keras._impl.keras.initializers import lecun_uniform
+
+# Auxiliary utils.
+from tensorflow.python.keras._impl.keras.initializers import deserialize
+from tensorflow.python.keras._impl.keras.initializers import serialize
+from tensorflow.python.keras._impl.keras.initializers import get
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/layers/__init__.py b/tensorflow/python/keras/layers/__init__.py
new file mode 100644
index 0000000000..acf0a5e179
--- /dev/null
+++ b/tensorflow/python/keras/layers/__init__.py
@@ -0,0 +1,148 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras layers API."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Generic layers.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.engine import Input
+from tensorflow.python.keras._impl.keras.engine import InputLayer
+from tensorflow.python.keras._impl.keras.engine import InputSpec
+from tensorflow.python.keras._impl.keras.engine import Layer
+
+# Advanced activations.
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import LeakyReLU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import PReLU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import ELU
+from tensorflow.python.keras._impl.keras.layers.advanced_activations import ThresholdedReLU
+
+# Convolution layers.
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv2DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import Conv3DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import SeparableConv2D
+
+# Convolution layer aliases.
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution2DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import Convolution3DTranspose
+from tensorflow.python.keras._impl.keras.layers.convolutional import SeparableConvolution2D
+
+# Image processing layers.
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import UpSampling3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import ZeroPadding3D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping1D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping2D
+from tensorflow.python.keras._impl.keras.layers.convolutional import Cropping3D
+
+# Convolutional-recurrent layers.
+from tensorflow.python.keras._impl.keras.layers.convolutional_recurrent import ConvLSTM2D
+
+# Core layers.
+from tensorflow.python.keras._impl.keras.layers.core import Masking
+from tensorflow.python.keras._impl.keras.layers.core import Dropout
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout1D
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout2D
+from tensorflow.python.keras._impl.keras.layers.core import SpatialDropout3D
+from tensorflow.python.keras._impl.keras.layers.core import Activation
+from tensorflow.python.keras._impl.keras.layers.core import Reshape
+from tensorflow.python.keras._impl.keras.layers.core import Permute
+from tensorflow.python.keras._impl.keras.layers.core import Flatten
+from tensorflow.python.keras._impl.keras.layers.core import RepeatVector
+from tensorflow.python.keras._impl.keras.layers.core import Lambda
+from tensorflow.python.keras._impl.keras.layers.core import Dense
+from tensorflow.python.keras._impl.keras.layers.core import ActivityRegularization
+
+# Embedding layers.
+from tensorflow.python.keras._impl.keras.layers.embeddings import Embedding
+
+# Locally-connected layers.
+from tensorflow.python.keras._impl.keras.layers.local import LocallyConnected1D
+from tensorflow.python.keras._impl.keras.layers.local import LocallyConnected2D
+
+# Merge layers.
+from tensorflow.python.keras._impl.keras.layers.merge import Add
+from tensorflow.python.keras._impl.keras.layers.merge import Multiply
+from tensorflow.python.keras._impl.keras.layers.merge import Average
+from tensorflow.python.keras._impl.keras.layers.merge import Maximum
+from tensorflow.python.keras._impl.keras.layers.merge import Concatenate
+from tensorflow.python.keras._impl.keras.layers.merge import Dot
+from tensorflow.python.keras._impl.keras.layers.merge import add
+from tensorflow.python.keras._impl.keras.layers.merge import multiply
+from tensorflow.python.keras._impl.keras.layers.merge import average
+from tensorflow.python.keras._impl.keras.layers.merge import maximum
+from tensorflow.python.keras._impl.keras.layers.merge import concatenate
+from tensorflow.python.keras._impl.keras.layers.merge import dot
+
+# Noise layers.
+from tensorflow.python.keras._impl.keras.layers.noise import AlphaDropout
+from tensorflow.python.keras._impl.keras.layers.noise import GaussianNoise
+from tensorflow.python.keras._impl.keras.layers.noise import GaussianDropout
+
+# Normalization layers.
+from tensorflow.python.keras._impl.keras.layers.normalization import BatchNormalization
+
+# Pooling layers.
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import AveragePooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAveragePooling3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPooling3D
+
+# Pooling layer aliases.
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import MaxPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import AvgPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalAvgPool3D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool1D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool2D
+from tensorflow.python.keras._impl.keras.layers.pooling import GlobalMaxPool3D
+
+# Recurrent layers.
+from tensorflow.python.keras._impl.keras.layers.recurrent import SimpleRNN
+from tensorflow.python.keras._impl.keras.layers.recurrent import GRU
+from tensorflow.python.keras._impl.keras.layers.recurrent import LSTM
+
+# Wrapper layers.
+from tensorflow.python.keras._impl.keras.layers.wrappers import Wrapper
+from tensorflow.python.keras._impl.keras.layers.wrappers import Bidirectional
+from tensorflow.python.keras._impl.keras.layers.wrappers import TimeDistributed
+
+del absolute_import
+del division
+del print_function
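Note the paired imports above: the layers module exposes both short names (`Conv2D`, `MaxPool2D`) and long-form aliases (`Convolution2D`, `MaxPooling2D`) that are bindings to the very same classes. A sketch of that aliasing pattern (the class here is a hypothetical stand-in for the real layer):

```python
class Conv2D(object):
    """Stand-in for the real Keras convolution layer class."""
    def __init__(self, filters, kernel_size):
        self.filters = filters
        self.kernel_size = kernel_size

# An alias is just a second module-level binding to the same class object,
# so isinstance checks and saved configs work interchangeably for both names.
Convolution2D = Conv2D

layer = Convolution2D(32, (3, 3))
print(Convolution2D is Conv2D)    # True
print(isinstance(layer, Conv2D))  # True
```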
diff --git a/tensorflow/python/keras/losses/__init__.py b/tensorflow/python/keras/losses/__init__.py
new file mode 100644
index 0000000000..66721b694f
--- /dev/null
+++ b/tensorflow/python/keras/losses/__init__.py
@@ -0,0 +1,45 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in loss functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Loss functions.
+from tensorflow.python.keras._impl.keras.losses import binary_crossentropy
+from tensorflow.python.keras._impl.keras.losses import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import categorical_hinge
+from tensorflow.python.keras._impl.keras.losses import cosine_proximity
+from tensorflow.python.keras._impl.keras.losses import hinge
+from tensorflow.python.keras._impl.keras.losses import kullback_leibler_divergence
+from tensorflow.python.keras._impl.keras.losses import logcosh
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_error
+from tensorflow.python.keras._impl.keras.losses import mean_absolute_percentage_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_error
+from tensorflow.python.keras._impl.keras.losses import mean_squared_logarithmic_error
+from tensorflow.python.keras._impl.keras.losses import poisson
+from tensorflow.python.keras._impl.keras.losses import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.losses import squared_hinge
+
+# Auxiliary utils.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.losses import deserialize
+from tensorflow.python.keras._impl.keras.losses import serialize
+from tensorflow.python.keras._impl.keras.losses import get
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/metrics/__init__.py b/tensorflow/python/keras/metrics/__init__.py
new file mode 100644
index 0000000000..59faf037bc
--- /dev/null
+++ b/tensorflow/python/keras/metrics/__init__.py
@@ -0,0 +1,47 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in metrics functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Metrics functions.
+from tensorflow.python.keras._impl.keras.metrics import binary_accuracy
+from tensorflow.python.keras._impl.keras.metrics import binary_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import categorical_accuracy
+from tensorflow.python.keras._impl.keras.metrics import categorical_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import cosine_proximity
+from tensorflow.python.keras._impl.keras.metrics import hinge
+from tensorflow.python.keras._impl.keras.metrics import kullback_leibler_divergence
+from tensorflow.python.keras._impl.keras.metrics import mean_absolute_error
+from tensorflow.python.keras._impl.keras.metrics import mean_absolute_percentage_error
+from tensorflow.python.keras._impl.keras.metrics import mean_squared_error
+from tensorflow.python.keras._impl.keras.metrics import mean_squared_logarithmic_error
+from tensorflow.python.keras._impl.keras.metrics import poisson
+from tensorflow.python.keras._impl.keras.metrics import sparse_categorical_crossentropy
+from tensorflow.python.keras._impl.keras.metrics import sparse_top_k_categorical_accuracy
+from tensorflow.python.keras._impl.keras.metrics import squared_hinge
+from tensorflow.python.keras._impl.keras.metrics import top_k_categorical_accuracy
+
+# Auxiliary utils.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.metrics import deserialize
+from tensorflow.python.keras._impl.keras.metrics import serialize
+from tensorflow.python.keras._impl.keras.metrics import get
+
+del absolute_import
+del division
+del print_function
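The metrics re-exported above are plain functions of `(y_true, y_pred)`. As a rough illustration of the semantics only (not the actual Keras implementation, which operates on backend tensors), `binary_accuracy` thresholds predictions at 0.5 and averages the matches:

```python
def binary_accuracy(y_true, y_pred, threshold=0.5):
    """Pure-Python sketch of the metric's semantics on flat lists."""
    # A prediction counts as correct when thresholding it reproduces the label.
    matches = [float(t == (p > threshold)) for t, p in zip(y_true, y_pred)]
    return sum(matches) / len(matches)

print(binary_accuracy([1, 0, 1, 1], [0.9, 0.2, 0.4, 0.8]))  # 0.75
```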
diff --git a/tensorflow/python/keras/models/__init__.py b/tensorflow/python/keras/models/__init__.py
new file mode 100644
index 0000000000..2fb4ac0960
--- /dev/null
+++ b/tensorflow/python/keras/models/__init__.py
@@ -0,0 +1,31 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras models API."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.models import load_model
+from tensorflow.python.keras._impl.keras.models import Model
+from tensorflow.python.keras._impl.keras.models import model_from_config
+from tensorflow.python.keras._impl.keras.models import model_from_json
+from tensorflow.python.keras._impl.keras.models import model_from_yaml
+from tensorflow.python.keras._impl.keras.models import save_model
+from tensorflow.python.keras._impl.keras.models import Sequential
+
+del absolute_import
+del division
+del print_function
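Each of these shim modules ends with `del absolute_import` (and friends) so the `__future__` names imported at the top do not leak into the module's public namespace. A minimal sketch of the effect, using a synthetic module and an ordinary import as a stand-in:

```python
import types

def make_shim():
    """Build a module, import a helper into it, re-export a value, then
    delete the helper name -- mirroring the `del` pattern in these files."""
    mod = types.ModuleType('shim')
    exec("import math as _m\npublic = _m.pi\ndel _m", mod.__dict__)
    return mod

shim = make_shim()
assert hasattr(shim, 'public')   # the re-exported value survives
assert not hasattr(shim, '_m')   # the helper import is gone from the namespace
```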
diff --git a/tensorflow/python/keras/optimizers/__init__.py b/tensorflow/python/keras/optimizers/__init__.py
new file mode 100644
index 0000000000..44f47bc47f
--- /dev/null
+++ b/tensorflow/python/keras/optimizers/__init__.py
@@ -0,0 +1,39 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in optimizers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Optimizer classes.
+from tensorflow.python.keras._impl.keras.optimizers import Adadelta
+from tensorflow.python.keras._impl.keras.optimizers import Adagrad
+from tensorflow.python.keras._impl.keras.optimizers import Adam
+from tensorflow.python.keras._impl.keras.optimizers import Adamax
+from tensorflow.python.keras._impl.keras.optimizers import Nadam
+from tensorflow.python.keras._impl.keras.optimizers import Optimizer
+from tensorflow.python.keras._impl.keras.optimizers import RMSprop
+from tensorflow.python.keras._impl.keras.optimizers import SGD
+
+# Auxiliary utils.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.optimizers import deserialize
+from tensorflow.python.keras._impl.keras.optimizers import serialize
+from tensorflow.python.keras._impl.keras.optimizers import get
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/preprocessing/__init__.py b/tensorflow/python/keras/preprocessing/__init__.py
new file mode 100644
index 0000000000..8fa3911a7a
--- /dev/null
+++ b/tensorflow/python/keras/preprocessing/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras data preprocessing utils."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras.preprocessing import image
+from tensorflow.python.keras.preprocessing import sequence
+from tensorflow.python.keras.preprocessing import text
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/preprocessing/image/__init__.py b/tensorflow/python/keras/preprocessing/image/__init__.py
new file mode 100644
index 0000000000..b96e767552
--- /dev/null
+++ b/tensorflow/python/keras/preprocessing/image/__init__.py
@@ -0,0 +1,38 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras data preprocessing utils for image data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.preprocessing.image import apply_transform
+from tensorflow.python.keras._impl.keras.preprocessing.image import array_to_img
+from tensorflow.python.keras._impl.keras.preprocessing.image import DirectoryIterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import flip_axis
+from tensorflow.python.keras._impl.keras.preprocessing.image import ImageDataGenerator
+from tensorflow.python.keras._impl.keras.preprocessing.image import img_to_array
+from tensorflow.python.keras._impl.keras.preprocessing.image import Iterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import load_img
+from tensorflow.python.keras._impl.keras.preprocessing.image import NumpyArrayIterator
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_channel_shift
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_rotation
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_shear
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_shift
+from tensorflow.python.keras._impl.keras.preprocessing.image import random_zoom
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/preprocessing/sequence/__init__.py b/tensorflow/python/keras/preprocessing/sequence/__init__.py
new file mode 100644
index 0000000000..112f6af5e5
--- /dev/null
+++ b/tensorflow/python/keras/preprocessing/sequence/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras data preprocessing utils for sequence data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import make_sampling_table
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import pad_sequences
+from tensorflow.python.keras._impl.keras.preprocessing.sequence import skipgrams
+
+del absolute_import
+del division
+del print_function
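The basic behavior of the re-exported `pad_sequences` utility can be sketched in pure Python (the real implementation also supports truncation, a `dtype` option, and returns a NumPy array):

```python
def pad_sequences(sequences, maxlen=None, value=0, padding='pre'):
    """Sketch: pad variable-length sequences to a common length."""
    if maxlen is None:
        maxlen = max(len(s) for s in sequences)
    padded = []
    for s in sequences:
        pad = [value] * (maxlen - len(s))
        # 'pre' padding (the default) prepends; 'post' appends.
        padded.append(pad + list(s) if padding == 'pre' else list(s) + pad)
    return padded

print(pad_sequences([[1, 2], [3, 4, 5]]))  # [[0, 1, 2], [3, 4, 5]]
```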
diff --git a/tensorflow/python/keras/preprocessing/text/__init__.py b/tensorflow/python/keras/preprocessing/text/__init__.py
new file mode 100644
index 0000000000..5bf1a2fb21
--- /dev/null
+++ b/tensorflow/python/keras/preprocessing/text/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras data preprocessing utils for text data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.preprocessing.text import one_hot
+from tensorflow.python.keras._impl.keras.preprocessing.text import text_to_word_sequence
+from tensorflow.python.keras._impl.keras.preprocessing.text import Tokenizer
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/regularizers/__init__.py b/tensorflow/python/keras/regularizers/__init__.py
new file mode 100644
index 0000000000..3e707ccab5
--- /dev/null
+++ b/tensorflow/python/keras/regularizers/__init__.py
@@ -0,0 +1,38 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras built-in regularizers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Regularizer functions / callable classes.
+from tensorflow.python.keras._impl.keras.regularizers import L1L2
+from tensorflow.python.keras._impl.keras.regularizers import Regularizer
+
+# Functional interface.
+# pylint: disable=g-bad-import-order
+from tensorflow.python.keras._impl.keras.regularizers import l1
+from tensorflow.python.keras._impl.keras.regularizers import l2
+from tensorflow.python.keras._impl.keras.regularizers import l1_l2
+
+# Auxiliary utils.
+from tensorflow.python.keras._impl.keras.regularizers import deserialize
+from tensorflow.python.keras._impl.keras.regularizers import serialize
+from tensorflow.python.keras._impl.keras.regularizers import get
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/utils/__init__.py b/tensorflow/python/keras/utils/__init__.py
new file mode 100644
index 0000000000..a7c2179fe7
--- /dev/null
+++ b/tensorflow/python/keras/utils/__init__.py
@@ -0,0 +1,39 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras utilities."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.utils.data_utils import GeneratorEnqueuer
+from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
+from tensorflow.python.keras._impl.keras.utils.data_utils import Sequence
+from tensorflow.python.keras._impl.keras.utils.data_utils import SequenceEnqueuer
+from tensorflow.python.keras._impl.keras.utils.generic_utils import custom_object_scope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import CustomObjectScope
+from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.generic_utils import get_custom_objects
+from tensorflow.python.keras._impl.keras.utils.generic_utils import Progbar
+from tensorflow.python.keras._impl.keras.utils.generic_utils import serialize_keras_object
+from tensorflow.python.keras._impl.keras.utils.io_utils import HDF5Matrix
+from tensorflow.python.keras._impl.keras.utils.layer_utils import convert_all_kernels_in_model
+from tensorflow.python.keras._impl.keras.utils.np_utils import normalize
+from tensorflow.python.keras._impl.keras.utils.np_utils import to_categorical
+from tensorflow.python.keras._impl.keras.utils.vis_utils import plot_model
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/wrappers/__init__.py b/tensorflow/python/keras/wrappers/__init__.py
new file mode 100644
index 0000000000..da579a7ab5
--- /dev/null
+++ b/tensorflow/python/keras/wrappers/__init__.py
@@ -0,0 +1,25 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Wrappers for Keras models, providing compatibility with other frameworks."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras.wrappers import scikit_learn
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/keras/wrappers/scikit_learn/__init__.py b/tensorflow/python/keras/wrappers/scikit_learn/__init__.py
new file mode 100644
index 0000000000..a46f859273
--- /dev/null
+++ b/tensorflow/python/keras/wrappers/scikit_learn/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Keras scikit-learn API wrapper."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.keras._impl.keras.wrappers.scikit_learn import KerasClassifier
+from tensorflow.python.keras._impl.keras.wrappers.scikit_learn import KerasRegressor
+
+del absolute_import
+del division
+del print_function
diff --git a/tensorflow/python/kernel_tests/segment_reduction_ops_test.py b/tensorflow/python/kernel_tests/segment_reduction_ops_test.py
index 5e426fc61a..516a9d000e 100644
--- a/tensorflow/python/kernel_tests/segment_reduction_ops_test.py
+++ b/tensorflow/python/kernel_tests/segment_reduction_ops_test.py
@@ -23,13 +23,12 @@ import itertools
import numpy as np
from tensorflow.python.client import session
-from tensorflow.python.framework import ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes as dtypes_lib
+from tensorflow.python.framework import ops
from tensorflow.python.ops import gradient_checker
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variables
-import tensorflow.python.ops.nn_grad # pylint: disable=unused-import
from tensorflow.python.platform import test
@@ -351,8 +350,8 @@ class UnsortedSegmentSumTest(SegmentReductionHelper):
shape = indices.shape + (num_cols,)
with self.test_session(use_gpu=True):
tf_x, np_x = self._input(shape, dtype=dtypes_lib.float64)
- s = math_ops.unsorted_segment_max(data=tf_x, segment_ids=indices,
- num_segments=num_segments)
+ s = math_ops.unsorted_segment_max(
+ data=tf_x, segment_ids=indices, num_segments=num_segments)
jacob_t, jacob_n = gradient_checker.compute_gradient(
tf_x,
shape,
@@ -650,15 +649,19 @@ class SegmentReductionOpBenchmark(test.Benchmark):
outer_dim_options = [2**x for x in range(9, 14, 2)]
ratio_options = [2**x for x in range(1, 6, 2)]
inner_dim_options = [2**x for x in range(9, 14, 2)]
- #randomly generated sizes with less alignments
- inner_dim_options += [1120, 1215, 1856, 1302, 1329, 1531, 1313, 1672, 1851, 1584]
+  # Randomly generated sizes that are less aligned.
+ inner_dim_options += [
+ 1120, 1215, 1856, 1302, 1329, 1531, 1313, 1672, 1851, 1584
+ ]
dtype_options = [np.float32, np.float64]
- options = (outer_dim_options,
- ratio_options, inner_dim_options, dtype_options)
+ options = (outer_dim_options, ratio_options, inner_dim_options, dtype_options)
+ # pylint: disable=g-long-lambda
op_functors = [lambda vc, vs, seg_ids:
("sorted", math_ops.segment_sum(vc, vs)),
lambda vc, vs, seg_ids:
- ("unsorted", math_ops.unsorted_segment_sum(vc, vs, seg_ids[-1]+1))]
+ ("unsorted",
+ math_ops.unsorted_segment_sum(vc, vs, seg_ids[-1]+1))]
+ # pylint: enable=g-long-lambda
repeat = 10
def _npTypeToStr(self, t):
@@ -668,30 +671,29 @@ class SegmentReductionOpBenchmark(test.Benchmark):
return "fp64"
def _runGraph(self, op_functor, outer_dim, ratio, inner_dim, dtype):
- output_outer_dim = int(outer_dim/ratio)
+ output_outer_dim = int(outer_dim / ratio)
const = np.random.randint(5, size=(outer_dim, inner_dim))
- seg_ids = np.sort(np.random.randint(
- output_outer_dim, size=outer_dim))
+ seg_ids = np.sort(np.random.randint(output_outer_dim, size=outer_dim))
vs = variables.Variable(seg_ids.astype(np.int32))
with ops.device("/gpu:0"):
vc = variables.Variable(const.astype(dtype))
name, op = op_functor(vc, vs, seg_ids)
with session.Session() as sess:
variables.global_variables_initializer().run()
- r = self.run_op_benchmark(sess, op, min_iters=self.repeat,
- name="_".join(map(str,
- [name,
- outer_dim,
- ratio,
- inner_dim,
- self._npTypeToStr(dtype)])))
+ r = self.run_op_benchmark(
+ sess,
+ op,
+ min_iters=self.repeat,
+ name="_".join(
+ map(str,
+ [name, outer_dim, ratio, inner_dim,
+ self._npTypeToStr(dtype)])))
return name, r["wall_time"]
def benchmarkSegmentSumGPU(self):
if not test.is_gpu_available(cuda_only=True):
return
for outer_dim, ratio, inner_dim, dtype in itertools.product(*self.options):
- output_outer_dim = int(outer_dim/ratio)
op_functor = self.op_functors[0]
with ops.Graph().as_default():
self._runGraph(op_functor, outer_dim, ratio, inner_dim, dtype)
@@ -700,10 +702,10 @@ class SegmentReductionOpBenchmark(test.Benchmark):
if not test.is_gpu_available(cuda_only=True):
return
for outer_dim, ratio, inner_dim, dtype in itertools.product(*self.options):
- output_outer_dim = int(outer_dim/ratio)
op_functor = self.op_functors[1]
with ops.Graph().as_default():
self._runGraph(op_functor, outer_dim, ratio, inner_dim, dtype)
+
if __name__ == "__main__":
test.main()
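The benchmark above compares sorted and unsorted segment reductions. The core semantics of a segment sum can be sketched in pure Python (the real ops operate on tensors on GPU; this is only an illustration, assuming sorted, contiguous segment ids):

```python
def segment_sum(data, segment_ids):
    """Sketch of math_ops.segment_sum: sum data values sharing a segment id."""
    out = [0] * (segment_ids[-1] + 1)  # one output slot per segment
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out

print(segment_sum([5, 1, 7, 2, 3], [0, 0, 1, 2, 2]))  # [6, 7, 5]
```

The "unsorted" variant differs in that segment ids may appear in any order, so the number of output segments must be passed explicitly (`num_segments` in the benchmark's lambda).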
diff --git a/tensorflow/python/kernel_tests/sparse_ops_test.py b/tensorflow/python/kernel_tests/sparse_ops_test.py
index 51bfceee01..9161b8c5d1 100644
--- a/tensorflow/python/kernel_tests/sparse_ops_test.py
+++ b/tensorflow/python/kernel_tests/sparse_ops_test.py
@@ -28,6 +28,7 @@ from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradient_checker
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops import sparse_ops
+from tensorflow.python.ops import variables
import tensorflow.python.ops.sparse_grad # pylint: disable=unused-import
from tensorflow.python.platform import googletest
from tensorflow.python.platform import test
@@ -544,6 +545,22 @@ class SparseFillEmptyRowsTest(test_util.TensorFlowTestCase):
self.assertAllEqual(empty_row_indicator_out, np.zeros(2).astype(np.bool))
+class SparseAddTest(test_util.TensorFlowTestCase):
+
+ def testValuesInVariable(self):
+ indices = constant_op.constant([[1]], dtype=dtypes.int64)
+ values = variables.Variable([1], trainable=False, dtype=dtypes.float32)
+ shape = constant_op.constant([1], dtype=dtypes.int64)
+
+ sp_input = sparse_tensor.SparseTensor(indices, values, shape)
+ sp_output = sparse_ops.sparse_add(sp_input, sp_input)
+
+ with self.test_session(use_gpu=False) as sess:
+ sess.run(variables.global_variables_initializer())
+ output = sess.run(sp_output)
+ self.assertAllEqual(output.values, [2])
+
+
class SparseReduceTest(test_util.TensorFlowTestCase):
# [[1, ?, 2]
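The new `SparseAddTest` exercises `sparse_add` with `_ref`-typed values (a `tf.Variable`). The core semantics being tested — merging two COO tensors and summing values at matching indices — can be sketched with plain dicts mapping index tuples to values (the real op also supports a pruning threshold):

```python
def sparse_add(a, b):
    """Sketch of COO sparse addition over index->value mappings."""
    out = dict(a)
    for idx, val in b.items():
        # Matching coordinates sum; unmatched coordinates pass through.
        out[idx] = out.get(idx, 0) + val
    return out

st = {(1,): 1.0}  # mirrors the single-entry SparseTensor in the test above
print(sparse_add(st, st))  # {(1,): 2.0}
```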
diff --git a/tensorflow/python/layers/normalization.py b/tensorflow/python/layers/normalization.py
index 1fc2d70f9c..62265dce3c 100644
--- a/tensorflow/python/layers/normalization.py
+++ b/tensorflow/python/layers/normalization.py
@@ -26,14 +26,18 @@ from six.moves import xrange # pylint: disable=redefined-builtin
import numpy as np
from tensorflow.python.eager import context
+from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import nn
+from tensorflow.python.ops import gen_resource_variable_ops
+from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import standard_ops
+from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.training import moving_averages
from tensorflow.python.framework import tensor_util
@@ -229,6 +233,7 @@ class BatchNormalization(base.Layer):
shape=(param_dim,),
initializer=self.moving_variance_initializer,
trainable=False)
+ self._one_minus_decay = 1.0 - self.momentum
if self.renorm:
# Create variables to maintain the moving mean and standard deviation.
# These are used in training and thus are different from the moving
@@ -265,6 +270,19 @@ class BatchNormalization(base.Layer):
self._scope.set_partitioner(partitioner)
self.built = True
+ def _assign_moving_average(self, variable, value, one_minus_decay):
+ with ops.name_scope(None, 'AssignMovingAvg',
+ [variable, value, one_minus_decay]) as scope:
+ with ops.colocate_with(variable):
+ update_delta = (variable.read_value() - value) * one_minus_decay
+ if isinstance(variable, resource_variable_ops.ResourceVariable):
+ # state_ops.assign_sub does an extra read_variable_op after the
+ # assign. We avoid that here.
+ return gen_resource_variable_ops.assign_sub_variable_op(
+ variable.handle, update_delta, name=scope)
+ else:
+ return state_ops.assign_sub(variable, update_delta, name=scope)
+
def _fused_batch_norm(self, inputs, training):
"""Returns the output of fused batch norm."""
beta = self.beta if self.center else self._beta_const
@@ -301,12 +319,17 @@ class BatchNormalization(base.Layer):
variance *= factor
training_value = utils.constant_value(training)
- if training_value is not False:
- decay = _smart_select(training, lambda: self.momentum, lambda: 1.)
- mean_update = moving_averages.assign_moving_average(
- self.moving_mean, mean, decay, zero_debias=False)
- variance_update = moving_averages.assign_moving_average(
- self.moving_variance, variance, decay, zero_debias=False)
+ if training_value is None:
+ one_minus_decay = _smart_select(training,
+ lambda: self._one_minus_decay,
+ lambda: 0.)
+ else:
+ one_minus_decay = self._one_minus_decay
+ if training_value or training_value is None:
+ mean_update = self._assign_moving_average(self.moving_mean, mean,
+ one_minus_decay)
+ variance_update = self._assign_moving_average(self.moving_variance,
+ variance, one_minus_decay)
if context.in_graph_mode():
# Note that in Eager mode, the updates are already executed when running
# assign_moving_averages. So we do not need to put them into
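The `_assign_moving_average` helper rewrites the update as a single subtraction, `var -= (var - value) * (1 - decay)`, which is algebraically the same as the conventional `var = decay * var + (1 - decay) * value` but needs only one `assign_sub`. A quick numeric check of the equivalence:

```python
def ema_sub(var, value, one_minus_decay):
    # Form used by _assign_moving_average above (single subtraction).
    return var - (var - value) * one_minus_decay

def ema_classic(var, value, decay):
    # Conventional exponential-moving-average form.
    return decay * var + (1 - decay) * value

decay = 0.99
for var, value in [(0.0, 1.0), (5.0, -2.0)]:
    assert abs(ema_sub(var, value, 1 - decay) - ema_classic(var, value, decay)) < 1e-9
print("forms agree")
```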
diff --git a/tensorflow/python/ops/metrics_impl.py b/tensorflow/python/ops/metrics_impl.py
index 16320f7584..eb0b08c5fd 100644
--- a/tensorflow/python/ops/metrics_impl.py
+++ b/tensorflow/python/ops/metrics_impl.py
@@ -461,13 +461,17 @@ def _confusion_matrix_at_thresholds(
else:
for include in includes:
if include not in all_includes:
- raise ValueError('Invaild key: %s.' % include)
+ raise ValueError('Invalid key: %s.' % include)
with ops.control_dependencies([
check_ops.assert_greater_equal(
- predictions, 0.0, message='predictions must be in [0, 1]'),
+ predictions,
+ math_ops.cast(0.0, dtype=predictions.dtype),
+ message='predictions must be in [0, 1]'),
check_ops.assert_less_equal(
- predictions, 1.0, message='predictions must be in [0, 1]')
+ predictions,
+ math_ops.cast(1.0, dtype=predictions.dtype),
+ message='predictions must be in [0, 1]')
]):
predictions, labels, weights = _remove_squeezable_dimensions(
predictions=math_ops.to_float(predictions),
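The fix above casts the `0.0` and `1.0` bounds to `predictions.dtype`, since the assert ops require both operands to share a dtype (comparing, say, a float16 tensor against a float32 constant would fail). The range check being guarded is itself simple; a pure-Python sketch:

```python
def check_predictions_in_unit_interval(predictions):
    """Sketch of the range guard: every prediction must lie in [0, 1]."""
    for p in predictions:
        if not (0.0 <= p <= 1.0):
            raise ValueError('predictions must be in [0, 1]')
    return predictions

assert check_predictions_in_unit_interval([0.0, 0.3, 1.0]) == [0.0, 0.3, 1.0]
```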
diff --git a/tensorflow/python/ops/parsing_ops.py b/tensorflow/python/ops/parsing_ops.py
index bf7c9fac8e..c5fd15bae4 100644
--- a/tensorflow/python/ops/parsing_ops.py
+++ b/tensorflow/python/ops/parsing_ops.py
@@ -199,7 +199,11 @@ def _features_to_raw_params(features, types):
sparse_types = []
dense_keys = []
dense_types = []
- dense_defaults = {}
+  # When the graph is built twice, the dense_defaults in a plain dict may
+  # iterate in different orders, which fails the _e2e_test that expects
+  # exactly the same graph both times. An OrderedDict preserves insertion
+  # order and keeps graph construction deterministic.
+ dense_defaults = collections.OrderedDict()
dense_shapes = []
if features:
# NOTE: We iterate over sorted keys to keep things deterministic.
@@ -625,7 +629,8 @@ def _parse_example_raw(serialized,
"""
with ops.name_scope(name, "ParseExample", [serialized, names]):
names = [] if names is None else names
- dense_defaults = {} if dense_defaults is None else dense_defaults
+ dense_defaults = collections.OrderedDict(
+ ) if dense_defaults is None else dense_defaults
sparse_keys = [] if sparse_keys is None else sparse_keys
sparse_types = [] if sparse_types is None else sparse_types
dense_keys = [] if dense_keys is None else dense_keys
diff --git a/tensorflow/python/ops/resource_variable_ops.py b/tensorflow/python/ops/resource_variable_ops.py
index 2cae16f44c..fdc8a5843f 100644
--- a/tensorflow/python/ops/resource_variable_ops.py
+++ b/tensorflow/python/ops/resource_variable_ops.py
@@ -30,6 +30,7 @@ from tensorflow.python.eager import tensor_node
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import gen_resource_variable_ops
@@ -41,6 +42,29 @@ from tensorflow.python.ops.gen_resource_variable_ops import *
from tensorflow.python.util import compat
+def _eager_safe_variable_handle(shape, dtype, shared_name, name,
+ container=None):
+ """Creates a variable handle with information to do shape inference."""
+ handle = gen_resource_variable_ops.var_handle_op(shape=shape, dtype=dtype,
+ shared_name=shared_name,
+ name=name,
+ container=container)
+ if context.in_graph_mode():
+ return handle
+ with context.graph_mode(), ops.Graph().as_default():
+ h = gen_resource_variable_ops.var_handle_op(shape=shape, dtype=dtype,
+ shared_name=shared_name,
+ name=name,
+ container=container)
+
+ # Tensor._handle_data contains information for the shape-inference code to
+ # know the shape and dtype of the variable pointed to by a handle. Since
+ # shape inference doesn't run in eager mode we copy this data here for when
+ # the handle is captured by an eager mode function.
+ handle._handle_data = h._handle_data # pylint: disable=protected-access
+ return handle
+
+
class ResourceVariable(variables.Variable):
"""Variable based on resource handles.
@@ -231,7 +255,7 @@ class ResourceVariable(variables.Variable):
if trainable and ops.GraphKeys.TRAINABLE_VARIABLES not in collections:
collections = list(collections) + [ops.GraphKeys.TRAINABLE_VARIABLES]
self._save_slice_info = None
- in_graph_mode = context.in_graph_mode()
+ self._in_graph_mode = context.in_graph_mode()
with ops.control_dependencies(None):
with ops.name_scope(name, "Variable", []
if init_from_fn else [initial_value]) as name:
@@ -241,7 +265,7 @@ class ResourceVariable(variables.Variable):
# Use attr_scope and device(None) to simulate the behavior of
# colocate_with when the variable we want to colocate with doesn't
# yet exist.
- if in_graph_mode:
+ if self._in_graph_mode:
attr = attr_value_pb2.AttrValue(
list=attr_value_pb2.AttrValue.ListValue(
s=[compat.as_bytes("loc:@%s" % handle_name)]))
@@ -249,26 +273,28 @@ class ResourceVariable(variables.Variable):
with ops.name_scope("Initializer"), ops.device(None):
initial_value = ops.convert_to_tensor(
initial_value(), name="initial_value", dtype=dtype)
- self._handle = gen_resource_variable_ops.var_handle_op(
+ self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name)
- self._handle_device = (self._handle.device if in_graph_mode else
- context.get_default_context().device_name)
+ self._handle_device = (
+ self._handle.device if self._in_graph_mode else
+ context.get_default_context().device_name)
else:
initial_value = initial_value()
with ops.name_scope("Initializer"):
initial_value = ops.convert_to_tensor(
initial_value, name="initial_value", dtype=dtype)
- self._handle = gen_resource_variable_ops.var_handle_op(
+ self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name,
container="")
- self._handle_device = (self._handle.device if in_graph_mode else
- context.get_default_context().device_name)
+ self._handle_device = (
+ self._handle.device if self._in_graph_mode else
+ context.get_default_context().device_name)
# pylint: enable=protected-access
# Or get the initial value from a Tensor or Python object.
@@ -277,7 +303,7 @@ class ResourceVariable(variables.Variable):
initial_value = ops.convert_to_tensor(
initial_value, name="initial_value", dtype=dtype)
# pylint: disable=protected-access
- if (in_graph_mode and initial_value is not None and
+ if (self._in_graph_mode and initial_value is not None and
initial_value.op._get_control_flow_context() is not None):
raise ValueError(
"Initializer for variable %s is from inside a control-flow "
@@ -285,21 +311,21 @@ class ResourceVariable(variables.Variable):
"variable inside a loop or conditional, use a lambda as the "
"initializer." % name)
# pylint: enable=protected-access
- self._handle = gen_resource_variable_ops.var_handle_op(
+ self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name,
container="")
- self._handle_device = (self._handle.device if in_graph_mode else
+ self._handle_device = (self._handle.device if self._in_graph_mode else
context.get_default_context().device_name)
- self._initial_value = initial_value if in_graph_mode else None
+ self._initial_value = initial_value if self._in_graph_mode else None
self._handle_name = handle_name + ":0"
self._dtype = initial_value.dtype.base_dtype
self._constraint = constraint
- if in_graph_mode:
+ if self._in_graph_mode:
with ops.name_scope("IsInitialized"):
self._is_initialized_op = (
gen_resource_variable_ops.var_is_initialized_op(self._handle))
@@ -399,10 +425,11 @@ class ResourceVariable(variables.Variable):
@property
def shape(self):
"""The shape of this variable."""
- if context.in_graph_mode():
+ if self._in_graph_mode:
return tensor_shape.TensorShape(self._handle.op.get_attr("shape"))
return tensor_shape.TensorShape(
- gen_resource_variable_ops.variable_shape(self._handle).numpy())
+ tensor_util.constant_value(
+ gen_resource_variable_ops.variable_shape(self._handle)))
@property
def create(self):
@@ -473,9 +500,12 @@ class ResourceVariable(variables.Variable):
return self._save_slice_info
def _read_variable_op(self):
- if context.in_eager_mode() and self._trainable:
+ if hasattr(self, "_trainable") and self._trainable:
tape.watch(self._handle)
- return read_variable_op(self._handle, dtype=self._dtype)
+ return read_variable_op(self._handle, dtype=self._dtype)
+ else:
+ return gen_resource_variable_ops.read_variable_op(self._handle,
+ self._dtype)
def read_value(self):
"""Constructs an op which reads the value of this variable.
diff --git a/tensorflow/python/ops/sparse_ops.py b/tensorflow/python/ops/sparse_ops.py
index 5a179048b1..e3990791c6 100644
--- a/tensorflow/python/ops/sparse_ops.py
+++ b/tensorflow/python/ops/sparse_ops.py
@@ -296,7 +296,7 @@ def sparse_add(a, b, thresh=0):
a = _convert_to_sparse_tensor(a)
b = _convert_to_sparse_tensor(b)
thresh = ops.convert_to_tensor(
- thresh, dtype=a.values.dtype.real_dtype, name="thresh")
+ thresh, dtype=a.values.dtype.real_dtype.base_dtype, name="thresh")
output_ind, output_val, output_shape = (gen_sparse_ops._sparse_add(
a.indices, a.values, a.dense_shape,
b.indices, b.values, b.dense_shape,
diff --git a/tensorflow/python/summary/text_summary.py b/tensorflow/python/summary/text_summary.py
index f0788399ff..4031355b03 100644
--- a/tensorflow/python/summary/text_summary.py
+++ b/tensorflow/python/summary/text_summary.py
@@ -23,7 +23,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-from collections import namedtuple
import json
from tensorflow.core.framework import summary_pb2
@@ -33,9 +32,6 @@ from tensorflow.python.summary import plugin_asset
PLUGIN_NAME = "text"
-# Contains event-related data specific to the text plugin.
-_TextPluginData = namedtuple("_TextPluginData", [])
-
def text_summary(name, tensor, collections=None):
"""Summarizes textual data.
@@ -67,11 +63,9 @@ def text_summary(name, tensor, collections=None):
raise ValueError("Expected tensor %s to have dtype string, got %s" %
(tensor.name, tensor.dtype))
- summary_metadata = summary_pb2.SummaryMetadata()
- text_plugin_data = _TextPluginData()
- data_dict = text_plugin_data._asdict() # pylint: disable=protected-access
- summary_metadata.plugin_data.plugin_name = PLUGIN_NAME
- summary_metadata.plugin_data.content = json.dumps(data_dict)
+ summary_metadata = summary_pb2.SummaryMetadata(
+ plugin_data=summary_pb2.SummaryMetadata.PluginData(
+ plugin_name=PLUGIN_NAME))
t_summary = tensor_summary(
name=name,
tensor=tensor,
diff --git a/tensorflow/python/summary/writer/writer_test.py b/tensorflow/python/summary/writer/writer_test.py
index 9d3e20e408..88ade0aac3 100644
--- a/tensorflow/python/summary/writer/writer_test.py
+++ b/tensorflow/python/summary/writer/writer_test.py
@@ -39,6 +39,7 @@ from tensorflow.python.summary import plugin_asset
from tensorflow.python.summary import summary_iterator
from tensorflow.python.summary.writer import writer
from tensorflow.python.summary.writer import writer_cache
+from tensorflow.python.util import compat
class SummaryWriterTestCase(test.TestCase):
@@ -334,11 +335,11 @@ class SummaryWriterTestCase(test.TestCase):
# should strip the metadata from the second one.
value = summary_pb2.Summary.Value(tag="foo", simple_value=10.0)
value.metadata.plugin_data.plugin_name = "bar"
- value.metadata.plugin_data.content = "... content ..."
+ value.metadata.plugin_data.content = compat.as_bytes("... content ...")
sw.add_summary(summary_pb2.Summary(value=[value]), 10)
value = summary_pb2.Summary.Value(tag="foo", simple_value=10.0)
value.metadata.plugin_data.plugin_name = "bar"
- value.metadata.plugin_data.content = "... content ..."
+ value.metadata.plugin_data.content = compat.as_bytes("... content ...")
sw.add_summary(summary_pb2.Summary(value=[value]), 10)
sw.close()
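The writer test above now routes the summary content through `compat.as_bytes` because protobuf `bytes` fields reject Python 3 `str`. A minimal standalone sketch of that conversion (simplified; not the actual `tensorflow.python.util.compat` source):

```python
def as_bytes(text, encoding='utf-8'):
    # Accept either str or bytes and always return bytes, mirroring the
    # behavior the test above relies on.
    if isinstance(text, bytes):
        return text
    if isinstance(text, str):
        return text.encode(encoding)
    raise TypeError('Expected str or bytes, got %r' % (type(text),))


# Same literal the test assigns to value.metadata.plugin_data.content.
content = as_bytes('... content ...')
```

Under Python 2 this distinction was invisible (`str` was already bytes), which is why the original assignments worked there but needed the explicit conversion for Python 3.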
diff --git a/tensorflow/python/training/monitored_session.py b/tensorflow/python/training/monitored_session.py
index 1562f65675..e6162dd34b 100644
--- a/tensorflow/python/training/monitored_session.py
+++ b/tensorflow/python/training/monitored_session.py
@@ -20,6 +20,9 @@ from __future__ import division
from __future__ import print_function
import abc
+import sys
+
+import six
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.framework import errors
@@ -947,20 +950,21 @@ class _CoordinatedSession(_WrappedSession):
def run(self, *args, **kwargs):
try:
return self._sess.run(*args, **kwargs)
- except _PREEMPTION_ERRORS as original_exception:
- raise original_exception
- except Exception as original_exception: # pylint: disable=broad-except
+ except _PREEMPTION_ERRORS:
+ raise
+ except Exception: # pylint: disable=broad-except
# A non-preemption error could have been caused by a preemption error
# in the coordinator. If this is the case, raise that exception instead,
- # since it's the root cause. Otherwise, stick to the `original_exception`.
+ # since it's the root cause. Otherwise, stick to the `original_exc_info`.
+ original_exc_info = sys.exc_info()
try:
self._coord.raise_requested_exception()
- except _PREEMPTION_ERRORS as preemption_in_coordinator:
- raise preemption_in_coordinator
+ except _PREEMPTION_ERRORS:
+ raise
except Exception: # pylint: disable=broad-except
- raise original_exception
+ six.reraise(*original_exc_info)
else:
- raise original_exception
+ six.reraise(*original_exc_info)
class _HookedSession(_WrappedSession):
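The `_CoordinatedSession.run` change above captures `sys.exc_info()` before entering a second `try`/`except` and re-raises it with `six.reraise`, so the traceback of the original failure survives. A minimal Python 3 sketch of the same pattern, with `_reraise` standing in for `six.reraise` (hypothetical helper, not TensorFlow code):

```python
import sys
import traceback


def _reraise(tp, value, tb):
    # Python 3 equivalent of six.reraise: re-raise `value` with the
    # original traceback attached.
    raise value.with_traceback(tb)


def failing_op():
    raise ValueError('original failure')


def run_with_cleanup():
    try:
        failing_op()
    except Exception:
        # Capture exc_info *before* entering another try/except, as the
        # patched _CoordinatedSession.run does; a bare `raise` inside the
        # inner handler would report the wrong exception context.
        original_exc_info = sys.exc_info()
        try:
            pass  # stand-in for coord.raise_requested_exception()
        except Exception:
            _reraise(*original_exc_info)
        else:
            _reraise(*original_exc_info)


try:
    run_with_cleanup()
except ValueError:
    frames = traceback.extract_tb(sys.exc_info()[2])
    # The deepest frame names the function that originally raised.
    propagated_from = frames[-1].name
```

This is exactly what the new `test_propagates_exception_trace` below checks for the real classes: the deepest traceback frame should point at `session.py`, not `monitored_session.py`.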
diff --git a/tensorflow/python/training/monitored_session_test.py b/tensorflow/python/training/monitored_session_test.py
index a7c34cdd1b..d88b187fde 100644
--- a/tensorflow/python/training/monitored_session_test.py
+++ b/tensorflow/python/training/monitored_session_test.py
@@ -22,8 +22,10 @@ from __future__ import print_function
import collections
import glob
import os
+import sys
import threading
import time
+import traceback
from tensorflow.contrib.framework.python.ops import variables as variables_lib
from tensorflow.contrib.testing.python.framework import util_test
@@ -34,6 +36,7 @@ from tensorflow.python.framework import constant_op
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
@@ -506,6 +509,31 @@ class CoordinatedSessionTest(test.TestCase):
self.assertTrue(coord.should_stop())
self.assertTrue(coord_sess.should_stop())
+ def test_propagates_exception_trace(self):
+ assertion = control_flow_ops.Assert(False, ['This should fail.'])
+ with self.test_session() as sess:
+ coord = coordinator.Coordinator(clean_stop_exception_types=())
+ coord_sess = monitored_session._CoordinatedSession(sess, coord)
+ try:
+ coord_sess.run([assertion])
+ self.fail('No exception was raised by assertion.')
+ except errors_impl.InvalidArgumentError:
+ # Extract the name of the file where the exception was first raised.
+ _, _, exc_traceback = sys.exc_info()
+ tb = traceback.extract_tb(exc_traceback)
+ exc_source_file = tb[-1][0]
+ exc_source_basename = os.path.basename(exc_source_file)
+ # If it's monitored_session.py then the original stack trace was not
+ # correctly propagated.
+ self.assertIn(
+ exc_source_basename, ['session.py', 'monitored_session.py'],
+ 'The exception was raised from an unrecognized file. This unit '
+ 'test probably needs to be updated. Traceback:\n%s\n' % tb)
+ self.assertEqual(
+ exc_source_basename, 'session.py',
+ 'Original stack trace was not propagated by MonitoredSession. '
+ 'Traceback:\n%s' % tb)
+
class AbortAtNSession(object):
"""A mock session that aborts at the N-th run call."""
diff --git a/tensorflow/python/training/saver_test.py b/tensorflow/python/training/saver_test.py
index 35c5980818..e66993f50b 100644
--- a/tensorflow/python/training/saver_test.py
+++ b/tensorflow/python/training/saver_test.py
@@ -478,16 +478,17 @@ class SaverTest(test.TestCase):
def _SaveAndLoad(self, var_name, var_value, other_value, save_path):
with self.test_session() as sess:
- var = variables.Variable(var_value, name=var_name)
+ var = resource_variable_ops.ResourceVariable(var_value, name=var_name)
save = saver_module.Saver({var_name: var})
- var.initializer.run()
+ if context.in_graph_mode():
+ self.evaluate(var.initializer)
val = save.save(sess, save_path)
self.assertEqual(save_path, val)
with self.test_session() as sess:
- var = variables.Variable(other_value, name=var_name)
+ var = resource_variable_ops.ResourceVariable(other_value, name=var_name)
save = saver_module.Saver({var_name: var})
save.restore(sess, save_path)
- self.assertAllClose(var_value, var.eval())
+ self.assertAllClose(var_value, self.evaluate(var))
def testCacheRereadsFile(self):
save_path = os.path.join(self.get_temp_dir(), "cache_rereads")
@@ -609,30 +610,32 @@ class SaverTest(test.TestCase):
save.restore(sess, save_path)
self.assertAllClose([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], var.eval())
+ @test_util.run_in_graph_and_eager_modes()
def testSaveWithGlobalStep(self, pad_step_number=False):
save_path = os.path.join(self.get_temp_dir(), "ckpt_with_global_step")
global_step_int = 5
# Save and reload one Variable named "var0".
self._SaveAndLoad("var0", 0.0, 1.0, save_path)
for use_tensor in [True, False]:
- with self.test_session() as sess:
- var = variables.Variable(1.0, name="var0")
- save = saver_module.Saver(
- {
- var.op.name: var
- }, pad_step_number=pad_step_number)
- var.initializer.run()
- if use_tensor:
- global_step = constant_op.constant(global_step_int)
- val = save.save(sess, save_path, global_step=global_step)
- else:
- val = save.save(sess, save_path, global_step=global_step_int)
- if pad_step_number:
- expected_save_path = "%s-%s" % (save_path,
- "{:08d}".format(global_step_int))
- else:
- expected_save_path = "%s-%d" % (save_path, global_step_int)
- self.assertEqual(expected_save_path, val)
+ var = resource_variable_ops.ResourceVariable(1.0, name="var0")
+ save = saver_module.Saver(
+ {
+ var._shared_name: var
+ }, pad_step_number=pad_step_number)
+ if context.in_graph_mode():
+ self.evaluate(var.initializer)
+ sess = ops_lib.get_default_session() if context.in_graph_mode() else None
+ if use_tensor:
+ global_step = constant_op.constant(global_step_int)
+ val = save.save(sess, save_path, global_step=global_step)
+ else:
+ val = save.save(sess, save_path, global_step=global_step_int)
+ if pad_step_number:
+ expected_save_path = "%s-%s" % (save_path,
+ "{:08d}".format(global_step_int))
+ else:
+ expected_save_path = "%s-%d" % (save_path, global_step_int)
+ self.assertEqual(expected_save_path, val)
def testSaveWithGlobalStepWithPadding(self):
self.testSaveWithGlobalStep(pad_step_number=True)
diff --git a/tensorflow/python/training/training_util.py b/tensorflow/python/training/training_util.py
index bf48c75997..9f2f9b7479 100644
--- a/tensorflow/python/training/training_util.py
+++ b/tensorflow/python/training/training_util.py
@@ -157,6 +157,7 @@ def assert_global_step(global_step_tensor):
raise TypeError('Existing "global_step" does not have integer type: %s' %
global_step_tensor.dtype)
- if global_step_tensor.get_shape().ndims != 0:
+ if (global_step_tensor.get_shape().ndims != 0 and
+ global_step_tensor.get_shape().is_fully_defined()):
raise TypeError('Existing "global_step" is not scalar: %s' %
global_step_tensor.get_shape())
diff --git a/tensorflow/python/util/tf_should_use.py b/tensorflow/python/util/tf_should_use.py
index ab9e82a3cc..d9b2e6fcd7 100644
--- a/tensorflow/python/util/tf_should_use.py
+++ b/tensorflow/python/util/tf_should_use.py
@@ -17,57 +17,17 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
-import collections
import functools
-import itertools
-import traceback
import types
import six # pylint: disable=unused-import
-# pylint: disable=g-bad-import-order,g-import-not-at-top
-try:
- from weakref import finalize
-except ImportError:
- from backports.weakref import finalize
-
-from tensorflow.python.platform import tf_logging
from tensorflow.python.util import tf_decorator
# pylint: enable=g-bad-import-order,g-import-not-at-top
-class _RefInfoField(
- collections.namedtuple(
- '_RefInfoField', ('type_', 'repr_', 'creation_stack', 'object_used'))):
- pass
-
-
-# Thread-safe up to int32max/2 thanks to python's GIL; and may be safe even for
-# higher values in Python 3.4+. We don't expect to ever count higher than this.
-# https://mail.python.org/pipermail/python-list/2005-April/342279.html
-_REF_ITER = itertools.count()
-
-# Dictionary mapping id(obj) => _RefInfoField.
-_REF_INFO = {}
-
-
-def _deleted(obj_id, fatal_error):
- obj = _REF_INFO[obj_id]
- del _REF_INFO[obj_id]
- if not obj.object_used:
- if fatal_error:
- logger = tf_logging.fatal
- else:
- logger = tf_logging.error
- logger(
- '==================================\n'
- 'Object was never used (type %s):\n%s\nIf you want to mark it as '
- 'used call its "mark_used()" method.\nIt was originally created '
- 'here:\n%s\n'
- '==================================' %
- (obj.type_, obj.repr_, obj.creation_stack))
-
-
+# TODO(b/65412899): Re-implement to avoid leaking python objects.
+# This function / class remains since the API is public (mark_used()).
def _add_should_use_warning(x, fatal_error=False):
"""Wraps object x so that if it is never used, a warning is logged.
@@ -80,16 +40,12 @@ def _add_should_use_warning(x, fatal_error=False):
An instance of `TFShouldUseWarningWrapper` which subclasses `type(x)`
and is a very shallow wrapper for `x` which logs access into `x`.
"""
+ del fatal_error
if x is None: # special corner case where x is None
return x
- if hasattr(x, '_tf_ref_id'): # this is already a TFShouldUseWarningWrapper
- return x
def override_method(method):
def fn(self, *args, **kwargs):
- # pylint: disable=protected-access
- _REF_INFO[self._tf_ref_id] = _REF_INFO[self._tf_ref_id]._replace(
- object_used=True)
return method(self, *args, **kwargs)
return fn
@@ -98,38 +54,16 @@ def _add_should_use_warning(x, fatal_error=False):
def __init__(self, true_self):
self.__dict__ = true_self.__dict__
- stack = [s.strip() for s in traceback.format_stack()]
- # Remove top three stack entries from adding the wrapper
- self.creation_stack = '\n'.join(stack[:-3])
- self._tf_ref_id = next(_REF_ITER)
- _REF_INFO[self._tf_ref_id] = _RefInfoField(
- type_=type(x),
- repr_=repr(x),
- creation_stack=stack,
- object_used=False)
-
- # Create a finalizer for self, which will be called when self is
- # garbage collected. Can't add self as the args because the
- # loop will break garbage collection. We keep track of
- # ourselves via python ids.
- finalize(self, _deleted, self._tf_ref_id, fatal_error)
# Not sure why this pylint warning is being used; this is not an
# old class form.
# pylint: disable=super-on-old-class
def __getattribute__(self, name):
- if name == '_tf_ref_id':
- return super(TFShouldUseWarningWrapper, self).__getattribute__(name)
- if self._tf_ref_id in _REF_INFO:
- _REF_INFO[self._tf_ref_id] = _REF_INFO[self._tf_ref_id]._replace(
- object_used=True)
return super(TFShouldUseWarningWrapper, self).__getattribute__(name)
def mark_used(self, *args, **kwargs):
- _REF_INFO[self._tf_ref_id] = _REF_INFO[self._tf_ref_id]._replace(
- object_used=True)
- if hasattr(super(TFShouldUseWarningWrapper, self), 'mark_used'):
- return super(TFShouldUseWarningWrapper, self).mark_used(*args, **kwargs)
+ return
+
# pylint: enable=super-on-old-class
for name in dir(TFShouldUseWarningWrapper):
@@ -143,8 +77,6 @@ def _add_should_use_warning(x, fatal_error=False):
wrapped = TFShouldUseWarningWrapper(x)
wrapped.__doc__ = x.__doc__ # functools.wraps fails on some objects.
- ref_id = wrapped._tf_ref_id # pylint: disable=protected-access
- _REF_INFO[ref_id] = _REF_INFO[ref_id]._replace(object_used=False)
return wrapped
diff --git a/tensorflow/python/util/tf_should_use_test.py b/tensorflow/python/util/tf_should_use_test.py
index c826874400..4c6e48b11c 100644
--- a/tensorflow/python/util/tf_should_use_test.py
+++ b/tensorflow/python/util/tf_should_use_test.py
@@ -46,6 +46,7 @@ def reroute_error(captured):
class TfShouldUseTest(test.TestCase):
def testAddShouldUseWarningWhenNotUsed(self):
+ self.skipTest('b/65412899')
c = constant_op.constant(0, name='blah0')
captured = []
with reroute_error(captured):
@@ -70,6 +71,7 @@ class TfShouldUseTest(test.TestCase):
self.assertNotIn('%s:0' % name, '\n'.join(captured))
def testAddShouldUseWarningWhenUsedWithAdd(self):
+ self.skipTest('b/65412899')
def add(h):
_ = h + 1
self._testAddShouldUseWarningWhenUsed(add, name='blah_add')
@@ -77,6 +79,7 @@ class TfShouldUseTest(test.TestCase):
self.assertFalse(gc.garbage)
def testAddShouldUseWarningWhenUsedWithGetName(self):
+ self.skipTest('b/65412899')
def get_name(h):
_ = h.name
self._testAddShouldUseWarningWhenUsed(get_name, name='blah_get_name')
@@ -84,6 +87,7 @@ class TfShouldUseTest(test.TestCase):
self.assertFalse(gc.garbage)
def testShouldUseResult(self):
+ self.skipTest('b/65412899')
@tf_should_use.should_use_result
def return_const(value):
return constant_op.constant(value, name='blah2')
@@ -97,6 +101,7 @@ class TfShouldUseTest(test.TestCase):
self.assertFalse(gc.garbage)
def testShouldUseResultWhenNotReallyUsed(self):
+ self.skipTest('b/65412899')
@tf_should_use.should_use_result
def return_const(value):
return constant_op.constant(value, name='blah3')
@@ -114,6 +119,13 @@ class TfShouldUseTest(test.TestCase):
gc.collect()
self.assertFalse(gc.garbage)
+ # Tests that mark_used is available in the API.
+ def testMarkUsed(self):
+ @tf_should_use.should_use_result
+ def return_const(value):
+ return constant_op.constant(value, name='blah3')
+ with self.test_session():
+ return_const(0.0).mark_used()
if __name__ == '__main__':
test.main()
diff --git a/tensorflow/stream_executor/cuda/cuda_dnn.cc b/tensorflow/stream_executor/cuda/cuda_dnn.cc
index 904c8c7818..6b5ad1b5fb 100644
--- a/tensorflow/stream_executor/cuda/cuda_dnn.cc
+++ b/tensorflow/stream_executor/cuda/cuda_dnn.cc
@@ -1913,6 +1913,106 @@ bool CudnnSupport::DoRnnBackward(
#endif // CUDNN_VERSION
}
+namespace {
+
+inline cudnnConvolutionFwdAlgo_t GetCudnnConvolutionForwardAlgo(
+ Stream* stream, CUDAExecutor* parent, void* dnn_handle,
+ const ScopedTensorDescriptor& input_nd,
+ const ScopedFilterDescriptor& filter,
+ const ScopedConvolutionDescriptor& conv,
+ const ScopedTensorDescriptor& output_nd, bool specify_workspace_limit,
+ ScratchAllocator* scratch_allocator) {
+ cudnnConvolutionFwdPreference_t preference =
+ specify_workspace_limit ? CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT
+ : CUDNN_CONVOLUTION_FWD_NO_WORKSPACE;
+ auto memory_limit_bytes =
+ scratch_allocator == nullptr
+ ? 0
+ : scratch_allocator->GetMemoryLimitInBytes(stream);
+ if (memory_limit_bytes < 0) {
+ memory_limit_bytes = 0;
+ }
+
+ cudnnConvolutionFwdAlgo_t algo_to_use;
+ auto status = wrap::cudnnGetConvolutionForwardAlgorithm(
+ parent, ToHandle(dnn_handle), input_nd.handle(), filter.handle(),
+ conv.handle(), output_nd.handle(), preference, memory_limit_bytes,
+ &algo_to_use);
+ CHECK_EQ(status, CUDNN_STATUS_SUCCESS)
+ << "Unable to find a suitable algorithm for doing forward convolution";
+ return algo_to_use;
+}
+
+dnn::AlgorithmType GetCudnnConvolutionForwardAlgorithm(
+ Stream* stream, CUDAExecutor* parent, void* dnn_handle,
+ int cudnn_type, // Actually cudnnDataType_t.
+ const dnn::AlgorithmConfig& algorithm_config, bool is_profiling,
+ const ScopedTensorDescriptor& input_nd,
+ const ScopedFilterDescriptor& filter,
+ const ScopedConvolutionDescriptor& conv,
+ const ScopedTensorDescriptor& output_nd,
+ ScratchAllocator* scratch_allocator, DeviceMemory<uint8>* scratch) {
+ cudnnConvolutionFwdAlgo_t algo =
+ (algorithm_config.algorithm() == dnn::kDefaultAlgorithm)
+ ? GetCudnnConvolutionForwardAlgo(
+ stream, parent, dnn_handle, input_nd, filter, conv, output_nd,
+ /*specify_workspace_limit=*/scratch_allocator != nullptr,
+ scratch_allocator)
+ : ToConvForwardAlgo(algorithm_config.algorithm());
+ size_t size_in_bytes;
+ auto status = wrap::cudnnGetConvolutionForwardWorkspaceSize(
+ parent, ToHandle(dnn_handle), /*srcDesc=*/input_nd.handle(),
+ /*filterDesc=*/filter.handle(), /*convDesc=*/conv.handle(),
+ /*destDesc=*/output_nd.handle(), /*algo=*/algo,
+ /*sizeInBytes=*/&size_in_bytes);
+ int64 size_in_bytes_int64 = size_in_bytes;
+ if (TF_PREDICT_FALSE(status != CUDNN_STATUS_SUCCESS)) {
+ CHECK(is_profiling) << "Cannot query the size of workspace needed "
+ "for the specified algorithm: "
+ << algorithm_config.algorithm() << " "
+ << ToString(status);
+ // Silently return when we are profiling.
+ return dnn::kNoSuitableAlgorithmFound;
+ }
+ if (TF_PREDICT_FALSE(size_in_bytes_int64 < 0)) {
+ LOG(WARNING) << "cudnnGetConvolutionForwardWorkspaceSize() returned "
+ "negative sizeInBytes value. This could be a cudnn bug.";
+ if (TF_PREDICT_TRUE(is_profiling)) {
+ return dnn::kNoSuitableAlgorithmFound;
+ }
+ } else if (size_in_bytes_int64 > 0) {
+ port::StatusOr<DeviceMemory<uint8>> allocated;
+ if (TF_PREDICT_TRUE(scratch_allocator)) {
+ allocated = scratch_allocator->AllocateBytes(stream, size_in_bytes);
+ if (TF_PREDICT_TRUE(allocated.ok())) {
+ *scratch = allocated.ValueOrDie();
+ } else {
+ if (TF_PREDICT_TRUE(is_profiling)) {
+ // Silently return when we are profiling.
+ return dnn::kNoSuitableAlgorithmFound;
+ }
+ LOG(WARNING) << allocated.status().error_message();
+ // For the int8 case, we fail at this point since the no_scratch
+ // algorithm should be set to dnn::kDefaultAlgorithm.
+ CHECK(algorithm_config.algorithm_no_scratch() != dnn::kDefaultAlgorithm)
+ << "The primary convolution algorithm failed memory allocation, "
+ "while a secondary algorithm is not provided.";
+ }
+ }
+ if (TF_PREDICT_FALSE(!allocated.ok())) {
+ algo = (algorithm_config.algorithm_no_scratch() == dnn::kDefaultAlgorithm)
+ ? GetCudnnConvolutionForwardAlgo(
+ stream, parent, dnn_handle, input_nd, filter, conv,
+ output_nd, /*specify_workspace_limit=*/false, nullptr)
+ : ToConvForwardAlgo(algorithm_config.algorithm_no_scratch());
+ }
+ }
+
+ return algo;
+}
+
+} // namespace
+
template <class T>
bool CudnnSupport::DoConvolveImpl(
Stream* stream, int cudnn_type, // Actually cudnnDataType_t.
@@ -1920,7 +2020,6 @@ bool CudnnSupport::DoConvolveImpl(
const FilterDescriptor& filter_descriptor,
const DeviceMemory<T>& filter_data,
const ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<T>& biases, dnn::ActivationMode activation_mode,
const BatchDescriptor& output_descriptor, DeviceMemory<T>* output_data,
ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
@@ -1953,6 +2052,8 @@ bool CudnnSupport::DoConvolveImpl(
cudnnConvolutionFwdAlgo_t algo;
DeviceMemory<uint8> scratch;
+ // TODO(pauldonnelly): Replace the following code with a call to
+ // GetCudnnConvolutionForwardAlgorithm().
if (algorithm_config.algorithm() == dnn::kDefaultAlgorithm) {
// With the default algorithm, use Cudnn's heuristics.
auto get_algorithm =
@@ -2059,27 +2160,117 @@ bool CudnnSupport::DoConvolveImpl(
"negative sizeInBytes value. This could be a cudnn bug.";
}
}
- const bool has_biases = (biases != nullptr);
- const bool supported_activation_mode =
- (activation_mode == dnn::ActivationMode::kRelu);
+ std::unique_ptr<CUDATimer> timer;
+ if (is_profiling) {
+ timer.reset(new CUDATimer(parent_)); // NOLINT
+ if (!timer->Init()) {
+ return false;
+ }
+ // The start and stop of the timer should be as close to the Cudnn call as
+ // possible. It is still possible for other threads to issue work onto
+ // this stream, so it could take multiple profiling measurements.
+ if (!timer->Start(AsCUDAStream(stream))) {
+ timer->Destroy();
+ return false;
+ }
+ }
+ status = wrap::cudnnConvolutionForward(
+ parent_, ToHandle(dnn_handle_),
+ /*alpha=*/&alpha, /*srcDesc=*/input_nd.handle(),
+ /*srcData=*/input_data.opaque(), /*filterDesc=*/filter.handle(),
+ /*filterData=*/filter_data.opaque(), /*convDesc=*/conv.handle(),
+ /*algo=*/algo, /*workSpace=*/scratch.opaque(),
+ /*workSpaceSizeInBytes=*/scratch.size(), /*beta=*/&beta,
+ /*destDesc=*/output_nd.handle(), /*destData=*/output_data->opaque());
+
+ if (is_profiling) {
+ if (!timer->Stop(AsCUDAStream(stream))) {
+ timer->Destroy();
+ return false;
+ }
+ if (status == CUDNN_STATUS_SUCCESS) {
+ output_profile_result->set_algorithm(algo);
+ output_profile_result->set_elapsed_time_in_ms(
+ timer->GetElapsedMilliseconds());
+ }
+ timer->Destroy();
+ }
+
+ if (status != CUDNN_STATUS_SUCCESS) {
+ // Silently return when we are profiling.
+ if (!is_profiling) {
+ LOG(ERROR) << "failed to enqueue convolution on stream: "
+ << ToString(status);
+ }
+ return false;
+ }
+
+ return true;
+}
+
+template <typename Type, typename BiasType, typename ScaleType,
+ int cudnn_data_type, int cudnn_compute_type>
+bool CudnnSupport::DoFusedConvolveImpl(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<Type>& conv_input_data, ScaleType conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<Type>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<Type>& side_input_data, ScaleType side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<BiasType>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<Type>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
+#if CUDNN_VERSION < 6000
+ LOG(ERROR) << "cudnnConvolutionBiasActivationForward() is only "
+ "supported for cuDNN version >= 6";
+ return false;
+#else
+ ScopedTensorDescriptor conv_input_nd{
+ parent_, conv_input_descriptor,
+ static_cast<cudnnDataType_t>(cudnn_data_type)};
+ ScopedTensorDescriptor output_nd{
+ parent_, output_descriptor,
+ static_cast<cudnnDataType_t>(cudnn_data_type)};
+ ScopedFilterDescriptor filter{parent_, filter_descriptor,
+ conv_input_descriptor,
+ static_cast<cudnnDataType_t>(cudnn_data_type)};
+ ScopedTensorDescriptor bias_nd{parent_, bias_descriptor, CUDNN_DATA_FLOAT};
+ ScopedConvolutionDescriptor conv{
+ parent_, convolution_descriptor,
+ static_cast<cudnnDataType_t>(cudnn_compute_type)};
- if (has_biases && !supported_activation_mode) {
- LOG(ERROR) << "cudnnConvolutionBiasActivationForward() only "
- "support relu activation.";
+ mutex_lock lock{dnn_handle_mutex_};
+ auto status = wrap::cudnnSetStream(parent_, ToHandle(dnn_handle_),
+ AsCUDAStreamValue(stream));
+ CHECK(status == CUDNN_STATUS_SUCCESS)
+ << "failed to set stream for cudnn handle: " << ToString(status);
+
+ const bool is_profiling = output_profile_result != nullptr;
+ DeviceMemory<uint8> scratch;
+ dnn::AlgorithmType algorithm_type = GetCudnnConvolutionForwardAlgorithm(
+ stream, parent_, dnn_handle_, cudnn_data_type, algorithm_config,
+ is_profiling, conv_input_nd, filter, conv, output_nd, scratch_allocator,
+ &scratch);
+ if (algorithm_type == dnn::kNoSuitableAlgorithmFound) {
+ if (!is_profiling) {
+ LOG(ERROR) << "No suitable algorithm found";
+ }
return false;
}
+ auto algo = static_cast<cudnnConvolutionFwdAlgo_t>(algorithm_type);
- if (has_biases && activation_mode == dnn::ActivationMode::kNone) {
- LOG(ERROR) << "To use cudnnConvolutionBiasActivationForward() "
- "with a valid biases tensor, need to also provide "
- "a valid activation mode (currently only supports "
- "kRelu).";
+ if (activation_mode != dnn::ActivationMode::kRelu) {
+ LOG(ERROR) << "cudnnConvolutionBiasActivationForward() only supports Relu "
+ "activation.";
return false;
}
std::unique_ptr<CUDATimer> timer;
if (is_profiling) {
- timer.reset(new CUDATimer(parent_));
+ timer.reset(new CUDATimer(parent_)); // NOLINT
if (!timer->Init()) {
return false;
}
@@ -2091,50 +2282,44 @@ bool CudnnSupport::DoConvolveImpl(
return false;
}
}
- if (has_biases) {
- CHECK(supported_activation_mode);
-#if CUDNN_VERSION < 6000
- LOG(ERROR) << "cudnnConvolutionBiasActivationForward() is only "
- "supported for cuDNN version >= 6.";
- return false;
-#else
- BatchDescriptor bias_dimensions;
- bias_dimensions.set_count(1)
- .set_feature_map_count(output_descriptor.feature_map_count())
- .set_height(1)
- .set_width(1)
- .set_layout(dnn::DataLayout::kBatchYXDepth);
- ScopedTensorDescriptor bias_descriptor{
- parent_, bias_dimensions, static_cast<cudnnDataType_t>(cudnn_type)};
- // CUDNN v6 only supports CUDNN_NOT_PROPAGATE_NAN as the reluNanOpt for
- // activation descriptor. Note that this will change the nan propagation
- // behavior from separate conv, bias, and relu (which by default is
- // CUDNN_PROPAGATE_NAN.
- ScopedActivationDescriptor activation_desc{parent_, activation_mode,
- CUDNN_NOT_PROPAGATE_NAN,
- output_descriptor.value_max()};
- status = wrap::cudnnConvolutionBiasActivationForward(
- parent_, ToHandle(dnn_handle_),
- /*alpha1=*/&alpha, /*srcDesc=*/input_nd.handle(),
- /*srcData=*/input_data.opaque(), /*filterDesc=*/filter.handle(),
- /*filterData=*/filter_data.opaque(), /*convDesc=*/conv.handle(),
- /*algo=*/algo, /*workSpace=*/scratch.opaque(),
- /*workSpaceSizeInBytes=*/scratch.size(), /*alpha2=*/&beta,
- /*zDesc=*/output_nd.handle(), /*z=*/input_data.opaque(),
- /*biasDesc=*/bias_descriptor.handle(),
- /*bias=*/biases.opaque(), /*activationDesc=*/activation_desc.handle(),
- /*destDesc=*/output_nd.handle(), /*destData=*/output_data->opaque());
-#endif // CUDNN_VERSION < 6000
- } else {
- status = wrap::cudnnConvolutionForward(
- parent_, ToHandle(dnn_handle_),
- /*alpha=*/&alpha, /*srcDesc=*/input_nd.handle(),
- /*srcData=*/input_data.opaque(), /*filterDesc=*/filter.handle(),
- /*filterData=*/filter_data.opaque(), /*convDesc=*/conv.handle(),
- /*algo=*/algo, /*workSpace=*/scratch.opaque(),
- /*workSpaceSizeInBytes=*/scratch.size(), /*beta=*/&beta,
- /*destDesc=*/output_nd.handle(), /*destData=*/output_data->opaque());
- }
+ // CUDNN v6 only supports CUDNN_NOT_PROPAGATE_NAN as the reluNanOpt for
+ // activation descriptor. Note that this will change the nan propagation
+ // behavior from separate conv, bias, and relu (which by default is
+ // CUDNN_PROPAGATE_NAN).
+ ScopedActivationDescriptor activation_desc{parent_, activation_mode,
+ CUDNN_NOT_PROPAGATE_NAN,
+ output_descriptor.value_max()};
+ auto side_input_data_ptr = (side_input_scale == 0) ? output_data->opaque()
+ : side_input_data.opaque();
+
+ VLOG(2) << "\nconv_input_scale = " << conv_input_scale
+ << "\nconv_input_nd.handle() = " << conv_input_nd.handle()
+ << "\nconv_input_data.opaque() = " << conv_input_data.opaque()
+ << "\nfilter.handle() = " << filter.handle()
+ << "\nfilter_data.opaque() = " << filter_data.opaque()
+ << "\nconv.handle() = " << conv.handle() << "\nalgo = " << algo
+ << "\nscratch.opaque() = " << scratch.opaque()
+ << "\nscratch.size() = " << scratch.size()
+ << "\nside_input_scale = " << side_input_scale
+ << "\noutput_nd.handle() = " << output_nd.handle()
+ << "\nside_input_data_ptr = " << side_input_data_ptr
+ << "\nbias_nd.handle() = " << bias_nd.handle()
+ << "\nbiases.opaque() = " << biases.opaque()
+ << "\nactivation_desc.handle() = " << activation_desc.handle()
+ << "\noutput_nd.handle() = " << output_nd.handle()
+ << "\noutput_data->opaque() = " << output_data->opaque();
+
+ status = wrap::cudnnConvolutionBiasActivationForward(
+ parent_, ToHandle(dnn_handle_), /*alpha1=*/&conv_input_scale,
+ /*srcDesc=*/conv_input_nd.handle(), /*srcData=*/conv_input_data.opaque(),
+ /*filterDesc=*/filter.handle(), /*filterData=*/filter_data.opaque(),
+ /*convDesc=*/conv.handle(), algo, /*workSpace=*/scratch.opaque(),
+ /*workSpaceSizeInBytes=*/scratch.size(), /*alpha2=*/&side_input_scale,
+ /*zDesc=*/output_nd.handle(), /*z=*/side_input_data_ptr,
+ /*biasDesc=*/bias_nd.handle(), /*bias=*/biases.opaque(),
+ /*activationDesc=*/activation_desc.handle(),
+ /*destDesc=*/output_nd.handle(), /*destData=*/output_data->opaque());
+
if (is_profiling) {
if (!timer->Stop(AsCUDAStream(stream))) {
timer->Destroy();
@@ -2158,6 +2343,7 @@ bool CudnnSupport::DoConvolveImpl(
}
return true;
+#endif // CUDNN_VERSION < 6000
}
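The fused call above asks cuDNN to compute `activation(alpha1 * conv + alpha2 * z + bias)` in one kernel. As a plain-C++ sketch of that epilogue math (not the cuDNN API; the convolution result is assumed precomputed, the per-element bias is a simplification, and the function name is made up):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Reference for the epilogue cudnnConvolutionBiasActivationForward fuses:
//   out = relu(conv_input_scale * conv_result + side_input_scale * side + bias)
// kRelu is modeled because it is the only mode the code above accepts.
std::vector<float> FusedEpilogue(const std::vector<float>& conv_result,
                                 float conv_input_scale,
                                 const std::vector<float>& side_input,
                                 float side_input_scale,
                                 const std::vector<float>& bias) {
  std::vector<float> out(conv_result.size());
  for (std::size_t i = 0; i < out.size(); ++i) {
    float v = conv_input_scale * conv_result[i] +
              side_input_scale * side_input[i] + bias[i];
    out[i] = std::max(v, 0.0f);  // Relu activation
  }
  return out;
}
```

Note that when `side_input_scale` is zero the implementation above points `z` at the output buffer, since cuDNN still requires a valid pointer even though the scaled term contributes nothing.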
// A helper class to decide whether to enable the WINOGRAD_NONFUSED algorithms.
@@ -2407,32 +2593,13 @@ bool CudnnSupport::DoConvolve(
const FilterDescriptor& filter_descriptor,
const DeviceMemory<float>& filter_data,
const ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
- const BatchDescriptor& output_descriptor, DeviceMemory<float>* output_data,
- ScratchAllocator* scratch_allocator,
- const dnn::AlgorithmConfig& algorithm_config,
- dnn::ProfileResult* output_profile_result) {
- return DoConvolveImpl<float>(
- stream, CUDNN_DATA_FLOAT, batch_descriptor, input_data, filter_descriptor,
- filter_data, convolution_descriptor, biases, activation_mode,
- output_descriptor, output_data, scratch_allocator, algorithm_config,
- output_profile_result);
-}
-
-bool CudnnSupport::DoConvolve(
- Stream* stream, const BatchDescriptor& batch_descriptor,
- const DeviceMemory<float>& input_data,
- const FilterDescriptor& filter_descriptor,
- const DeviceMemory<float>& filter_data,
- const ConvolutionDescriptor& convolution_descriptor,
const BatchDescriptor& output_descriptor, DeviceMemory<float>* output_data,
ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
dnn::ProfileResult* output_profile_result) {
return DoConvolveImpl<float>(
stream, CUDNN_DATA_FLOAT, batch_descriptor, input_data, filter_descriptor,
- filter_data, convolution_descriptor, /*biases=*/nullptr,
- dnn::ActivationMode::kNone, output_descriptor, output_data,
+ filter_data, convolution_descriptor, output_descriptor, output_data,
scratch_allocator, algorithm_config, output_profile_result);
}
@@ -2442,19 +2609,6 @@ bool CudnnSupport::DoConvolve(
const FilterDescriptor& filter_descriptor,
const DeviceMemory<double>& filter_data,
const ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<double>& biases, dnn::ActivationMode activation_mode,
- const BatchDescriptor& output_descriptor,
- DeviceMemory<double>* output_data) {
- LOG(ERROR) << "double-based DNN not yet implemented";
- return false;
-}
-
-bool CudnnSupport::DoConvolve(
- Stream* stream, const BatchDescriptor& batch_descriptor,
- const DeviceMemory<double>& input_data,
- const FilterDescriptor& filter_descriptor,
- const DeviceMemory<double>& filter_data,
- const ConvolutionDescriptor& convolution_descriptor,
const BatchDescriptor& output_descriptor,
DeviceMemory<double>* output_data) {
LOG(ERROR) << "double-based DNN not yet implemented";
@@ -2467,34 +2621,113 @@ bool CudnnSupport::DoConvolve(
const FilterDescriptor& filter_descriptor,
const DeviceMemory<Eigen::half>& filter_data,
const ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<Eigen::half>& biases,
- dnn::ActivationMode activation_mode,
const BatchDescriptor& output_descriptor,
DeviceMemory<Eigen::half>* output_data, ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
dnn::ProfileResult* output_profile_result) {
return DoConvolveImpl<Eigen::half>(
stream, CUDNN_DATA_HALF, batch_descriptor, input_data, filter_descriptor,
- filter_data, convolution_descriptor, biases, activation_mode,
+ filter_data, convolution_descriptor, output_descriptor, output_data,
+ scratch_allocator, algorithm_config, output_profile_result);
+}
+
+bool CudnnSupport::DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<double>& conv_input_data, double conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<double>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<double>& side_input_data, double side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<double>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<double>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
+ return DoFusedConvolveImpl<double, double, double, CUDNN_DATA_DOUBLE,
+ CUDNN_DATA_DOUBLE>(
+ stream, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor, side_input_data,
+ side_input_scale, bias_descriptor, biases, activation_mode,
+ output_descriptor, output_data, scratch_allocator, algorithm_config,
+ output_profile_result);
+}
+
+bool CudnnSupport::DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<float>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<float>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<float>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<float>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
+ return DoFusedConvolveImpl<float, float, float, CUDNN_DATA_FLOAT,
+ CUDNN_DATA_FLOAT>(
+ stream, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor, side_input_data,
+ side_input_scale, bias_descriptor, biases, activation_mode,
output_descriptor, output_data, scratch_allocator, algorithm_config,
output_profile_result);
}
-bool CudnnSupport::DoConvolve(
- Stream* stream, const BatchDescriptor& batch_descriptor,
- const DeviceMemory<Eigen::half>& input_data,
- const FilterDescriptor& filter_descriptor,
+bool CudnnSupport::DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<Eigen::half>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<Eigen::half>& filter_data,
- const ConvolutionDescriptor& convolution_descriptor,
- const BatchDescriptor& output_descriptor,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<Eigen::half>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<Eigen::half>& biases,
+ dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<Eigen::half>* output_data, ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
dnn::ProfileResult* output_profile_result) {
- return DoConvolveImpl<Eigen::half>(
- stream, CUDNN_DATA_HALF, batch_descriptor, input_data, filter_descriptor,
- filter_data, convolution_descriptor, /*biases=*/nullptr,
- dnn::ActivationMode::kNone, output_descriptor, output_data,
- scratch_allocator, algorithm_config, output_profile_result);
+ return DoFusedConvolveImpl<Eigen::half, Eigen::half, float, CUDNN_DATA_HALF,
+ CUDNN_DATA_FLOAT>(
+ stream, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor, side_input_data,
+ side_input_scale, bias_descriptor, biases, activation_mode,
+ output_descriptor, output_data, scratch_allocator, algorithm_config,
+ output_profile_result);
+}
+
+bool CudnnSupport::DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<int8>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<int8>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<int8>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<int8>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
+#if CUDNN_VERSION < 6000
+ LOG(ERROR) << "cudnnConvolutionBiasActivationForward() is only "
+ "supported for cuDNN version >= 6";
+ return false;
+#else
+ return DoFusedConvolveImpl<int8, float, float, CUDNN_DATA_INT8x4,
+ CUDNN_DATA_INT32>(
+ stream, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor, side_input_data,
+ side_input_scale, bias_descriptor, biases, activation_mode,
+ output_descriptor, output_data, scratch_allocator, algorithm_config,
+ output_profile_result);
+#endif
}
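The four overloads above each pin a distinct (element, bias, scale) type triple when forwarding to `DoFusedConvolveImpl`. A hypothetical compile-time recap of that mapping (`Half` stands in for `Eigen::half` and `int8` for the StreamExecutor typedef; the traits struct is illustrative, not part of the API):

```cpp
#include <type_traits>

struct Half {};               // stand-in for Eigen::half
using int8 = signed char;     // stand-in for the StreamExecutor int8 typedef

// (element -> bias, scale) triples used by the DoFusedConvolve overloads.
template <typename Element> struct FusedTypes;
template <> struct FusedTypes<double> { using Bias = double; using Scale = double; };
template <> struct FusedTypes<float>  { using Bias = float;  using Scale = float;  };
template <> struct FusedTypes<Half>   { using Bias = Half;   using Scale = float;  };
template <> struct FusedTypes<int8>   { using Bias = float;  using Scale = float;  };
```

Only the double variant scales in double precision; the half and int8 paths keep their scaling (and, for int8, bias) in float, matching the `CUDNN_DATA_FLOAT` / `CUDNN_DATA_INT32` compute types passed to the implementation.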
template<class T>
@@ -2730,7 +2963,7 @@ bool CudnnSupport::DoConvolveBackwardDataImpl(
std::unique_ptr<CUDATimer> timer;
if (is_profiling) {
- timer.reset(new CUDATimer(parent_));
+ timer.reset(new CUDATimer(parent_)); // NOLINT
timer->Init();
// The start and stop of the timer should be as close to the Cudnn call as
// possible. It is still possible for other threads to issue workload on
@@ -2981,7 +3214,7 @@ bool CudnnSupport::DoConvolveBackwardFilterImpl(
std::unique_ptr<CUDATimer> timer;
if (is_profiling) {
- timer.reset(new CUDATimer(parent_));
+ timer.reset(new CUDATimer(parent_)); // NOLINT
timer->Init();
// The start and stop of the timer should be as close to the Cudnn call as
// possible. It is still possible for other threads to issue workload on
diff --git a/tensorflow/stream_executor/cuda/cuda_dnn.h b/tensorflow/stream_executor/cuda/cuda_dnn.h
index b094cf76e9..db376e2a66 100644
--- a/tensorflow/stream_executor/cuda/cuda_dnn.h
+++ b/tensorflow/stream_executor/cuda/cuda_dnn.h
@@ -183,8 +183,6 @@ class CudnnSupport : public dnn::DnnSupport {
const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<float>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<float>& biases,
- dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<float>* output_data,
ScratchAllocator* scratch_allocator,
@@ -196,8 +194,6 @@ class CudnnSupport : public dnn::DnnSupport {
const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<double>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<double>& biases,
- dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<double>* output_data) override;
@@ -206,43 +202,71 @@ class CudnnSupport : public dnn::DnnSupport {
const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<Eigen::half>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<Eigen::half>& biases,
- dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<Eigen::half>* output_data,
ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
dnn::ProfileResult* output_profile_result) override;
- bool DoConvolve(Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
- const DeviceMemory<float>& input_data,
- const dnn::FilterDescriptor& filter_descriptor,
- const DeviceMemory<float>& filter_data,
- const dnn::ConvolutionDescriptor& convolution_descriptor,
- const dnn::BatchDescriptor& output_descriptor,
- DeviceMemory<float>* output_data,
- ScratchAllocator* scratch_allocator,
- const dnn::AlgorithmConfig& algorithm_config,
- dnn::ProfileResult* output_profile_result) override;
+ bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<double>& conv_input_data, double conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<double>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<double>& side_input_data, double side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<double>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<double>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) override;
- bool DoConvolve(Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
- const DeviceMemory<double>& input_data,
- const dnn::FilterDescriptor& filter_descriptor,
- const DeviceMemory<double>& filter_data,
- const dnn::ConvolutionDescriptor& convolution_descriptor,
- const dnn::BatchDescriptor& output_descriptor,
- DeviceMemory<double>* output_data) override;
+ bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<float>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<float>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<float>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<float>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) override;
- bool DoConvolve(Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
- const DeviceMemory<Eigen::half>& input_data,
- const dnn::FilterDescriptor& filter_descriptor,
- const DeviceMemory<Eigen::half>& filter_data,
- const dnn::ConvolutionDescriptor& convolution_descriptor,
- const dnn::BatchDescriptor& output_descriptor,
- DeviceMemory<Eigen::half>* output_data,
- ScratchAllocator* scratch_allocator,
- const dnn::AlgorithmConfig& algorithm_config,
- dnn::ProfileResult* output_profile_result) override;
+ bool DoFusedConvolve(Stream* stream,
+ const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<Eigen::half>& conv_input_data,
+ float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<Eigen::half>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<Eigen::half>& side_input_data,
+ float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<Eigen::half>& biases,
+ dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<Eigen::half>* output_data,
+ ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) override;
+
+ bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<int8>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<int8>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<int8>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<int8>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) override;
bool DoConvolveQuantized(
Stream* stream, const dnn::BatchDescriptor& input_descriptor,
@@ -561,14 +585,28 @@ class CudnnSupport : public dnn::DnnSupport {
const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<T>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<T>& biases,
- dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<T>* output_data,
ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
dnn::ProfileResult* output_profile_result);
+ template <typename Type, typename BiasType, typename ScaleType,
+ int cudnn_data_type, int cudnn_compute_type>
+ bool DoFusedConvolveImpl(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<Type>& conv_input_data, ScaleType conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<Type>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<Type>& side_input_data, ScaleType side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<BiasType>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<Type>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result);
+
template <class T>
bool DoConvolveBackwardDataImpl(
Stream* stream,
diff --git a/tensorflow/stream_executor/dnn.h b/tensorflow/stream_executor/dnn.h
index 0a0ad7d9fb..0a4525c1b7 100644
--- a/tensorflow/stream_executor/dnn.h
+++ b/tensorflow/stream_executor/dnn.h
@@ -669,6 +669,7 @@ class PoolingDescriptor {
typedef int64 AlgorithmType;
constexpr AlgorithmType kDefaultAlgorithm = -1;
+constexpr AlgorithmType kNoSuitableAlgorithmFound = -2;
// Describes the result from a perf experiment.
//
@@ -912,20 +913,32 @@ class DnnSupport {
return false;
}
- // Enqueues a single-precision convolution operation onto the stream.
+ // Enqueues a fused convolution operation onto the stream.
+ // We provide several variants with different types for inputs, biases and
+ // scaling parameters.
//
// Arguments (all borrowed):
// stream: borrowed pointer to the stream that the 'convolve' operation
// should be enqueued onto.
- // input_descriptor: dimensions of the input layer.
- // input_data: un-owned device memory region which contains the
+ // conv_input_descriptor: dimensions of the convolution input layer.
+ // conv_input_data: un-owned device memory region which contains the
// convolution input.
+ // conv_input_scale: a floating point scale to multiply with each element
+ // of conv_input_data.
// filter_descriptor: dimensions of the convolution filter.
+ // filter_data: un-owned device memory region which contains the
+ // convolution filter weights.
// convolution_descriptor: stride of the convolution filter.
// biases: un-owned device memory region containing biases to add to the
- // input. This can be DeviceMemory pointing to NULL only when activation_mode
- // is kNone.
+ // input.
// activation_mode: Type of activation to perform.
+ // side_input_data: un-owned device memory region which contains optional
+ // side input data. If 'side_input_scale' is non-zero, then this must
+ // point to data in the tensor shape specified by output_descriptor.
+ // It will be scaled by 'side_input_scale' and added to the convolution
+ // result and bias prior to applying the activation function.
+ // side_input_scale: a floating point scale to multiply with each element
+ // of side_input_data.
// output_descriptor: dimensions of the output layer.
// output_data: un-owned device memory region in which to place the
// convolution result.
@@ -938,7 +951,7 @@ class DnnSupport {
// output_profile_result: the output profile result for this call. The
// profiling is only enabled when this is not nullptr.
//
- // input_descriptor, filter_descriptor, convolution_descriptor and
+ // conv_input_descriptor, filter_descriptor, convolution_descriptor and
// output_descriptor together specify exactly how the convolution is aligned
// with the input data:
//
@@ -952,55 +965,115 @@ class DnnSupport {
// that if the inverse of the filter is applied to the output in VALID mode
// the result is the same size as the input - this requires even more
// padding of the input.
- virtual bool DoConvolve(
- Stream* stream, const dnn::BatchDescriptor& input_descriptor,
- const DeviceMemory<float>& input_data,
+ virtual bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<double>& conv_input_data, double conv_input_scale,
const dnn::FilterDescriptor& filter_descriptor,
- const DeviceMemory<float>& filter_data,
+ const DeviceMemory<double>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const DeviceMemory<double>& side_input_data, double side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<double>& biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
- DeviceMemory<float>* output_data, ScratchAllocator* scratch_allocator,
+ DeviceMemory<double>* output_data, ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
- ProfileResult* output_profile_result) {
+ dnn::ProfileResult* output_profile_result) {
return false;
}
- // Enqueues a double-precision fused convolution, bias add, and activation
- // operation onto the stream. See DoConvolve above for argument details.
- virtual bool DoConvolve(
- Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
- const DeviceMemory<double>& input_data,
+ // This is the float version of DoFusedConvolve.
+ virtual bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<float>& conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor& filter_descriptor,
- const DeviceMemory<double>& filter_data,
+ const DeviceMemory<float>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
- const DeviceMemory<double>& biases, dnn::ActivationMode activation_mode,
+ const DeviceMemory<float>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
- DeviceMemory<double>* output_data) {
+ DeviceMemory<float>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
return false;
}
- // Enqueues a half-precision fused convolution, bias add, and activation
- // operation onto the stream. See DoConvolve above for argument details.
- virtual bool DoConvolve(
- Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
- const DeviceMemory<Eigen::half>& input_data,
+ // This is the Eigen::half version of DoFusedConvolve.
+ // The scaling parameters are still floats.
+ virtual bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<Eigen::half>& conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor& filter_descriptor,
const DeviceMemory<Eigen::half>& filter_data,
const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<Eigen::half>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
const DeviceMemory<Eigen::half>& biases,
dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<Eigen::half>* output_data,
ScratchAllocator* scratch_allocator,
const dnn::AlgorithmConfig& algorithm_config,
- ProfileResult* output_profile_result) {
+ dnn::ProfileResult* output_profile_result) {
return false;
}
- // Enqueues a single-precision convolution operation (without bias add
- // or activation) onto the stream.
- // See DoConvolve above for argument details.
+ // This is the int8 version of DoFusedConvolve.
+ // The bias input and scaling parameters are floats.
+ virtual bool DoFusedConvolve(
+ Stream* stream, const dnn::BatchDescriptor& conv_input_descriptor,
+ const DeviceMemory<int8>& conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor& filter_descriptor,
+ const DeviceMemory<int8>& filter_data,
+ const dnn::ConvolutionDescriptor& convolution_descriptor,
+ const DeviceMemory<int8>& side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor& bias_descriptor,
+ const DeviceMemory<float>& biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor& output_descriptor,
+ DeviceMemory<int8>* output_data, ScratchAllocator* scratch_allocator,
+ const dnn::AlgorithmConfig& algorithm_config,
+ dnn::ProfileResult* output_profile_result) {
+ return false;
+ }
+
+ // Enqueues a single-precision convolution operation onto the stream.
+ //
+ // Arguments (all borrowed):
+ // stream: borrowed pointer to the stream that the 'convolve' operation
+ // should be enqueued onto.
+ // input_descriptor: dimensions of the input layer.
+ // input_data: un-owned device memory region which contains the
+ // convolution input.
+ // filter_descriptor: dimensions of the convolution filter.
+ // convolution_descriptor: stride of the convolution filter.
+ // output_descriptor: dimensions of the output layer.
+ // output_data: un-owned device memory region in which to place the
+ // convolution result.
+ // scratch_allocator: un-owned, may-be-null object that may allocate scratch
+ // space in order to speed up the convolution operation.
+ // algorithm: an integer to specify which algorithm should be used for the
+ // operation. kDefaultAlgorithm means the system will pick an algorithm
+ // by default. The coding of the algorithm is interpreted by the
+ // underlying implementation.
+ // output_profile_result: the output profile result for this call. The
+ // profiling is only enabled when this is not nullptr.
+ //
+ // input_descriptor, filter_descriptor, convolution_descriptor and
+ // output_descriptor together specify exactly how the convolution is aligned
+ // with the input data:
+ //
+ // * (input dimensions - filter size + 1) / filter stride == output dimensions
+ // corresponds to dist_belief padding = VALID, i.e. the input is not padded.
+ // * input dimensions / filter stride == output dimensions
+ // corresponds to dist_belief padding = SAME, i.e. input and output are the
+ // same size - this requires padding the input.
+ // * (input dimensions + filter size - 1) / filter stride == output dimensions
+ // corresponds to dist_belief padding = FULL, i.e. the output is sized so
+ // that if the inverse of the filter is applied to the output in VALID mode
+ // the result is the same size as the input - this requires even more
+ // padding of the input.
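The three padding relations in the comment block above can be restated as a tiny helper (illustrative only: the enum and function names are invented, sizes are one-dimensional, and exact division is assumed):

```cpp
// Output size for the dist_belief-style padding schemes described above.
enum class Padding { kValid, kSame, kFull };

int OutputDim(int input, int filter_size, int stride, Padding padding) {
  switch (padding) {
    case Padding::kValid:  // no input padding
      return (input - filter_size + 1) / stride;
    case Padding::kSame:   // input padded so output matches input size
      return input / stride;
    case Padding::kFull:   // input padded even more; inverse filter recovers input size
      return (input + filter_size - 1) / stride;
  }
  return 0;
}
```

For example, with `input = 5`, `filter_size = 3`, `stride = 1`, the three schemes yield output sizes 3, 5, and 7 respectively.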
virtual bool DoConvolve(
Stream* stream, const dnn::BatchDescriptor& input_descriptor,
const DeviceMemory<float>& input_data,
@@ -1012,8 +1085,7 @@ class DnnSupport {
const dnn::AlgorithmConfig& algorithm_config,
ProfileResult* output_profile_result) = 0;
- // Enqueues a double-precision convolution operation (without bias add
- // or activation) onto the stream.
+ // Enqueues a double-precision convolution operation onto the stream.
// See DoConvolve above for argument details.
virtual bool DoConvolve(
Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
@@ -1024,8 +1096,7 @@ class DnnSupport {
const dnn::BatchDescriptor& output_descriptor,
DeviceMemory<double>* output_data) = 0;
- // Enqueues a half-precision convolution operation (without bias add
- // or activation) onto the stream.
+ // Enqueues a half-precision convolution operation onto the stream.
// See DoConvolve above for argument details.
virtual bool DoConvolve(
Stream* stream, const dnn::BatchDescriptor& batch_descriptor,
diff --git a/tensorflow/stream_executor/stream.cc b/tensorflow/stream_executor/stream.cc
index c9b36ba7ab..dc768e0273 100644
--- a/tensorflow/stream_executor/stream.cc
+++ b/tensorflow/stream_executor/stream.cc
@@ -361,28 +361,66 @@ Stream &Stream::ThenBatchNormalizationBackward(
return *this;
}
-Stream &Stream::ThenConvolveWithScratch(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<Eigen::half> &input_data,
+Stream &Stream::ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<int8> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor, DeviceMemory<int8> *output,
+ ScratchAllocator *scratch_allocator) {
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale),
+ PARAM(bias_descriptor), PARAM(biases), PARAM(activation_mode),
+ PARAM(output_descriptor), PARAM(output));
+
+ if (ok()) {
+ if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
+ CheckError(dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ dnn::AlgorithmConfig(), /*output_profile_result=*/nullptr));
+ } else {
+ SetErrorAndLogNoDnnSupport();
+ }
+ }
+ return *this;
+}
+
+Stream &Stream::ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<Eigen::half> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<Eigen::half> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<Eigen::half> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<Eigen::half> &biases,
dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<Eigen::half> *output, ScratchAllocator *scratch_allocator) {
- VLOG_CALL(PARAM(input_descriptor), PARAM(input_data),
- PARAM(filter_descriptor), PARAM(filter_data),
- PARAM(convolution_descriptor), PARAM(biases),
- PARAM(activation_mode), PARAM(output_descriptor), PARAM(output));
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale),
+ PARAM(bias_descriptor), PARAM(biases), PARAM(activation_mode),
+ PARAM(output_descriptor), PARAM(output));
if (ok()) {
if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
- CheckError(dnn->DoConvolve(
- this, input_descriptor, input_data, filter_descriptor, filter_data,
- convolution_descriptor, biases, activation_mode, output_descriptor,
- output, scratch_allocator, dnn::AlgorithmConfig(),
- /*output_profile_result=*/nullptr));
+ CheckError(dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ dnn::AlgorithmConfig(), /*output_profile_result=*/nullptr));
} else {
SetErrorAndLogNoDnnSupport();
}
@@ -390,27 +428,32 @@ Stream &Stream::ThenConvolveWithScratch(
return *this;
}
-Stream &Stream::ThenConvolveWithScratch(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
+Stream &Stream::ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<float> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<float> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<float> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor, DeviceMemory<float> *output,
ScratchAllocator *scratch_allocator) {
- VLOG_CALL(PARAM(input_descriptor), PARAM(input_data),
- PARAM(filter_descriptor), PARAM(filter_data),
- PARAM(convolution_descriptor), PARAM(biases),
- PARAM(activation_mode), PARAM(output_descriptor), PARAM(output));
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale),
+ PARAM(bias_descriptor), PARAM(biases), PARAM(activation_mode),
+ PARAM(output_descriptor), PARAM(output));
if (ok()) {
if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
- CheckError(dnn->DoConvolve(
- this, input_descriptor, input_data, filter_descriptor, filter_data,
- convolution_descriptor, biases, activation_mode, output_descriptor,
- output, scratch_allocator, dnn::AlgorithmConfig(),
- /*output_profile_result=*/nullptr));
+ CheckError(dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ dnn::AlgorithmConfig(), /*output_profile_result=*/nullptr));
} else {
SetErrorAndLogNoDnnSupport();
}
@@ -472,29 +515,34 @@ Stream &Stream::ThenConvolveWithScratch(
return *this;
}
-Stream &Stream::ThenConvolveWithAlgorithm(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
+Stream &Stream::ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<float> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<float> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<float> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor, DeviceMemory<float> *output,
ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
dnn::ProfileResult *output_profile_result) {
- VLOG_CALL(PARAM(input_descriptor), PARAM(input_data),
- PARAM(filter_descriptor), PARAM(filter_data),
- PARAM(convolution_descriptor), PARAM(biases),
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale), PARAM(bias_descriptor), PARAM(biases),
PARAM(activation_mode), PARAM(output_descriptor), PARAM(output),
PARAM(algorithm_config));
if (ok()) {
if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
- auto status = dnn->DoConvolve(
- this, input_descriptor, input_data, filter_descriptor, filter_data,
- convolution_descriptor, biases, activation_mode, output_descriptor,
- output, scratch_allocator, algorithm_config, output_profile_result);
+ auto status = dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ algorithm_config, output_profile_result);
if (!status && !output_profile_result) {
SetError();
}
@@ -505,30 +553,73 @@ Stream &Stream::ThenConvolveWithAlgorithm(
return *this;
}
-Stream &Stream::ThenConvolveWithAlgorithm(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<Eigen::half> &input_data,
+Stream &Stream::ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<Eigen::half> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<Eigen::half> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<Eigen::half> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<Eigen::half> &biases,
dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<Eigen::half> *output, ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
dnn::ProfileResult *output_profile_result) {
- VLOG_CALL(PARAM(input_descriptor), PARAM(input_data),
- PARAM(filter_descriptor), PARAM(filter_data),
- PARAM(convolution_descriptor), PARAM(biases),
- PARAM(activation_mode), PARAM(output_descriptor), PARAM(output),
- PARAM(algorithm_config));
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale),
+ PARAM(bias_descriptor), PARAM(biases), PARAM(activation_mode),
+ PARAM(output_descriptor), PARAM(output), PARAM(algorithm_config));
if (ok()) {
if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
- auto status = dnn->DoConvolve(
- this, input_descriptor, input_data, filter_descriptor, filter_data,
- convolution_descriptor, biases, activation_mode, output_descriptor,
- output, scratch_allocator, algorithm_config, output_profile_result);
+ auto status = dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ algorithm_config, output_profile_result);
+ if (!status && !output_profile_result) {
+ SetError();
+ }
+ } else {
+ SetErrorAndLogNoDnnSupport();
+ }
+ }
+ return *this;
+}
+
+Stream &Stream::ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<int8> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor, DeviceMemory<int8> *output,
+ ScratchAllocator *scratch_allocator,
+ const dnn::AlgorithmConfig &algorithm_config,
+ dnn::ProfileResult *output_profile_result) {
+ VLOG_CALL(PARAM(conv_input_descriptor), PARAM(conv_input_data),
+ PARAM(conv_input_scale), PARAM(filter_descriptor),
+ PARAM(filter_data), PARAM(convolution_descriptor),
+ PARAM(side_input_data), PARAM(side_input_scale),
+ PARAM(bias_descriptor), PARAM(biases), PARAM(activation_mode),
+ PARAM(output_descriptor), PARAM(output), PARAM(algorithm_config));
+
+ if (ok()) {
+ if (dnn::DnnSupport *dnn = parent_->AsDnn()) {
+ auto status = dnn->DoFusedConvolve(
+ this, conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor,
+ side_input_data, side_input_scale, bias_descriptor, biases,
+ activation_mode, output_descriptor, output, scratch_allocator,
+ algorithm_config, output_profile_result);
if (!status && !output_profile_result) {
SetError();
}
@@ -601,19 +692,22 @@ Stream &Stream::ThenConvolveWithAlgorithm(
return *this;
}
-Stream &Stream::ThenConvolve(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
+Stream &Stream::ThenFusedConvolve(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
- const DeviceMemory<float> &filter_data,
+ const DeviceMemory<int8> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
- const dnn::BatchDescriptor &output_descriptor,
- DeviceMemory<float> *output) {
- return ThenConvolveWithScratch(
- input_descriptor, input_data, filter_descriptor, filter_data,
- convolution_descriptor, biases, activation_mode, output_descriptor,
- output, /*scratch_allocator=*/nullptr);
+ const dnn::BatchDescriptor &output_descriptor, DeviceMemory<int8> *output) {
+ return ThenFusedConvolveWithScratch(
+ conv_input_descriptor, conv_input_data, conv_input_scale,
+ filter_descriptor, filter_data, convolution_descriptor, side_input_data,
+ side_input_scale, bias_descriptor, biases, activation_mode,
+ output_descriptor, output,
+ /*scratch_allocator=*/nullptr);
}
Stream &Stream::ThenConvolve(
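Every `Then*` method in stream.cc above follows the same fluent, error-latching shape: do nothing if the stream is already in an error state, dispatch to the DNN backend if one exists, latch an error otherwise, and return `*this` so calls chain. A toy sketch of that pattern (names are illustrative, not the real StreamExecutor classes):

```python
class Stream:
    """Toy model of the Stream::ThenFusedConvolve* call pattern."""

    def __init__(self, dnn=None):
        self._ok = True
        self._dnn = dnn  # backend object, or None when DNN support is absent

    def ok(self):
        return self._ok

    def then_fused_convolve(self, *args):
        if self.ok():  # skip all work once an error has been latched
            if self._dnn is not None:
                if not self._dnn.do_fused_convolve(*args):
                    self._ok = False  # analogous to CheckError(...)
            else:
                self._ok = False  # analogous to SetErrorAndLogNoDnnSupport()
        return self  # enables chaining: s.then_a().then_b()
```

The payoff of latching rather than raising is that a long chain of enqueued operations degrades to a single `ok()` check at the end instead of per-call error handling.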
diff --git a/tensorflow/stream_executor/stream.h b/tensorflow/stream_executor/stream.h
index 9bd4c21a66..a418fe961c 100644
--- a/tensorflow/stream_executor/stream.h
+++ b/tensorflow/stream_executor/stream.h
@@ -240,15 +240,17 @@ class Stream {
DeviceMemory<float> *offset_backprop);
// TODO(leary) add double-precision version of this interface.
- Stream &ThenConvolve(const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
- const dnn::FilterDescriptor &filter_descriptor,
- const DeviceMemory<float> &filter_data,
- const dnn::ConvolutionDescriptor &convolution_descriptor,
- const DeviceMemory<float> &biases,
- dnn::ActivationMode activation_mode,
- const dnn::BatchDescriptor &output_descriptor,
- DeviceMemory<float> *output);
+ Stream &ThenFusedConvolve(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<int8> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor,
+ DeviceMemory<int8> *output);
Stream &ThenConvolve(const dnn::BatchDescriptor &input_descriptor,
const DeviceMemory<float> &input_data,
@@ -278,23 +280,39 @@ class Stream {
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<float> *output_data);
- Stream &ThenConvolveWithScratch(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<Eigen::half> &input_data,
+ Stream &ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<int8> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor, DeviceMemory<int8> *output,
+ ScratchAllocator *scratch_allocator);
+
+ Stream &ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<Eigen::half> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<Eigen::half> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<Eigen::half> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<Eigen::half> &biases,
dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<Eigen::half> *output, ScratchAllocator *scratch_allocator);
- Stream &ThenConvolveWithScratch(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
+ Stream &ThenFusedConvolveWithScratch(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<float> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<float> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<float> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<float> *output, ScratchAllocator *scratch_allocator);
@@ -323,7 +341,6 @@ class Stream {
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<float> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
- const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<float> *output, ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
@@ -335,35 +352,68 @@ class Stream {
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<Eigen::half> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
- const DeviceMemory<Eigen::half> &biases,
- dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<Eigen::half> *output, ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
dnn::ProfileResult *output_profile_result);
- Stream &ThenConvolveWithAlgorithm(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<float> &input_data,
+ Stream &ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<double> &conv_input_data, double conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<double> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<double> &side_input_data, double side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<double> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor,
+ DeviceMemory<double> *output, ScratchAllocator *scratch_allocator,
+ const dnn::AlgorithmConfig &algorithm_config,
+ dnn::ProfileResult *output_profile_result);
+
+ Stream &ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<float> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<float> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<float> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<float> *output, ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
dnn::ProfileResult *output_profile_result);
- Stream &ThenConvolveWithAlgorithm(
- const dnn::BatchDescriptor &input_descriptor,
- const DeviceMemory<Eigen::half> &input_data,
+ Stream &ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<Eigen::half> &conv_input_data, float conv_input_scale,
const dnn::FilterDescriptor &filter_descriptor,
const DeviceMemory<Eigen::half> &filter_data,
const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<Eigen::half> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<Eigen::half> &biases,
+ dnn::ActivationMode activation_mode,
const dnn::BatchDescriptor &output_descriptor,
DeviceMemory<Eigen::half> *output, ScratchAllocator *scratch_allocator,
const dnn::AlgorithmConfig &algorithm_config,
dnn::ProfileResult *output_profile_result);
+ Stream &ThenFusedConvolveWithAlgorithm(
+ const dnn::BatchDescriptor &conv_input_descriptor,
+ const DeviceMemory<int8> &conv_input_data, float conv_input_scale,
+ const dnn::FilterDescriptor &filter_descriptor,
+ const DeviceMemory<int8> &filter_data,
+ const dnn::ConvolutionDescriptor &convolution_descriptor,
+ const DeviceMemory<int8> &side_input_data, float side_input_scale,
+ const dnn::BatchDescriptor &bias_descriptor,
+ const DeviceMemory<float> &biases, dnn::ActivationMode activation_mode,
+ const dnn::BatchDescriptor &output_descriptor, DeviceMemory<int8> *output,
+ ScratchAllocator *scratch_allocator,
+ const dnn::AlgorithmConfig &algorithm_config,
+ dnn::ProfileResult *output_profile_result);
+
Stream &ThenSeparableConvolve(
const dnn::BatchDescriptor &input_descriptor,
const DeviceMemory<float> &input_data,
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.activations.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.activations.pbtxt
new file mode 100644
index 0000000000..2cd83baf65
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.activations.pbtxt
@@ -0,0 +1,55 @@
+path: "tensorflow.keras.activations"
+tf_module {
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'name\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "elu"
+ argspec: "args=[\'x\', \'alpha\'], varargs=None, keywords=None, defaults=[\'1.0\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "hard_sigmoid"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "linear"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "relu"
+ argspec: "args=[\'x\', \'alpha\', \'max_value\'], varargs=None, keywords=None, defaults=[\'0.0\', \'None\'], "
+ }
+ member_method {
+ name: "selu"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'activation\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sigmoid"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "softmax"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "softplus"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "softsign"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "tanh"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+}
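The golden file above pins argspecs such as `relu(x, alpha=0.0, max_value=None)` and `hard_sigmoid(x)`. A hedged pure-Python sketch of two of those signatures, written only to illustrate what the pinned defaults mean (these are scalar reimplementations, not the Keras source; the 0.2/0.5 hard-sigmoid constants are the ones Keras has historically used and are stated here as an assumption):

```python
def relu(x, alpha=0.0, max_value=None):
    # Matches the golden argspec: leaky slope `alpha` below zero,
    # optional ceiling `max_value` above.
    y = x if x > 0 else alpha * x
    if max_value is not None:
        y = min(y, max_value)
    return y

def hard_sigmoid(x):
    # Piecewise-linear sigmoid approximation, clipped to [0, 1].
    return min(1.0, max(0.0, 0.2 * x + 0.5))
```

For example, `relu(-2.0, alpha=0.1)` gives the leaky value `-0.2`, while the default `alpha=0.0` clamps it to `0.0`.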
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.inception_v3.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.inception_v3.pbtxt
new file mode 100644
index 0000000000..b67cee80ab
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.inception_v3.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.inception_v3"
+tf_module {
+ member_method {
+ name: "InceptionV3"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.mobilenet.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.mobilenet.pbtxt
new file mode 100644
index 0000000000..ef774e1dd7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.mobilenet.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.mobilenet"
+tf_module {
+ member_method {
+ name: "MobileNet"
+ argspec: "args=[\'input_shape\', \'alpha\', \'depth_multiplier\', \'dropout\', \'include_top\', \'weights\', \'input_tensor\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'None\', \'1.0\', \'1\', \'0.001\', \'True\', \'imagenet\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.pbtxt
new file mode 100644
index 0000000000..f50dc7d7fe
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.pbtxt
@@ -0,0 +1,51 @@
+path: "tensorflow.keras.applications"
+tf_module {
+ member {
+ name: "inception_v3"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "mobilenet"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "resnet50"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "vgg16"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "vgg19"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "xception"
+ mtype: "<type \'module\'>"
+ }
+ member_method {
+ name: "InceptionV3"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "MobileNet"
+ argspec: "args=[\'input_shape\', \'alpha\', \'depth_multiplier\', \'dropout\', \'include_top\', \'weights\', \'input_tensor\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'None\', \'1.0\', \'1\', \'0.001\', \'True\', \'imagenet\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "ResNet50"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "VGG16"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "VGG19"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "Xception"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.resnet50.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.resnet50.pbtxt
new file mode 100644
index 0000000000..57c48df2e3
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.resnet50.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.resnet50"
+tf_module {
+ member_method {
+ name: "ResNet50"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg16.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg16.pbtxt
new file mode 100644
index 0000000000..29d45daea4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg16.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.vgg16"
+tf_module {
+ member_method {
+ name: "VGG16"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg19.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg19.pbtxt
new file mode 100644
index 0000000000..124aa7e5e5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.vgg19.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.vgg19"
+tf_module {
+ member_method {
+ name: "VGG19"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.applications.xception.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.applications.xception.pbtxt
new file mode 100644
index 0000000000..59dd2108f2
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.applications.xception.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.applications.xception"
+tf_module {
+ member_method {
+ name: "Xception"
+ argspec: "args=[\'include_top\', \'weights\', \'input_tensor\', \'input_shape\', \'pooling\', \'classes\'], varargs=None, keywords=None, defaults=[\'True\', \'imagenet\', \'None\', \'None\', \'None\', \'1000\'], "
+ }
+ member_method {
+ name: "decode_predictions"
+ argspec: "args=[\'preds\', \'top\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.backend.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.backend.pbtxt
new file mode 100644
index 0000000000..6204ffa814
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.backend.pbtxt
@@ -0,0 +1,555 @@
+path: "tensorflow.keras.backend"
+tf_module {
+ member_method {
+ name: "abs"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "all"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "any"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "arange"
+ argspec: "args=[\'start\', \'stop\', \'step\', \'dtype\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'int32\'], "
+ }
+ member_method {
+ name: "argmax"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "argmin"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "backend"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "batch_dot"
+ argspec: "args=[\'x\', \'y\', \'axes\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "batch_flatten"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "batch_get_value"
+ argspec: "args=[\'tensors\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "batch_normalization"
+ argspec: "args=[\'x\', \'mean\', \'var\', \'beta\', \'gamma\', \'epsilon\'], varargs=None, keywords=None, defaults=[\'0.001\'], "
+ }
+ member_method {
+ name: "batch_set_value"
+ argspec: "args=[\'tuples\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "bias_add"
+ argspec: "args=[\'x\', \'bias\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "binary_crossentropy"
+ argspec: "args=[\'target\', \'output\', \'from_logits\'], varargs=None, keywords=None, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "cast"
+ argspec: "args=[\'x\', \'dtype\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "cast_to_floatx"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "categorical_crossentropy"
+ argspec: "args=[\'target\', \'output\', \'from_logits\'], varargs=None, keywords=None, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "clear_session"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "clip"
+ argspec: "args=[\'x\', \'min_value\', \'max_value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "concatenate"
+ argspec: "args=[\'tensors\', \'axis\'], varargs=None, keywords=None, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "constant"
+ argspec: "args=[\'value\', \'dtype\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "conv1d"
+ argspec: "args=[\'x\', \'kernel\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\'], varargs=None, keywords=None, defaults=[\'1\', \'valid\', \'None\', \'1\'], "
+ }
+ member_method {
+ name: "conv2d"
+ argspec: "args=[\'x\', \'kernel\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'None\', \'(1, 1)\'], "
+ }
+ member_method {
+ name: "conv2d_transpose"
+ argspec: "args=[\'x\', \'kernel\', \'output_shape\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "conv3d"
+ argspec: "args=[\'x\', \'kernel\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\'], varargs=None, keywords=None, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'(1, 1, 1)\'], "
+ }
+ member_method {
+ name: "cos"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "ctc_batch_cost"
+ argspec: "args=[\'y_true\', \'y_pred\', \'input_length\', \'label_length\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "ctc_decode"
+ argspec: "args=[\'y_pred\', \'input_length\', \'greedy\', \'beam_width\', \'top_paths\'], varargs=None, keywords=None, defaults=[\'True\', \'100\', \'1\'], "
+ }
+ member_method {
+ name: "ctc_label_dense_to_sparse"
+ argspec: "args=[\'labels\', \'label_lengths\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "dot"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "dropout"
+ argspec: "args=[\'x\', \'level\', \'noise_shape\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "dtype"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "elu"
+ argspec: "args=[\'x\', \'alpha\'], varargs=None, keywords=None, defaults=[\'1.0\'], "
+ }
+ member_method {
+ name: "epsilon"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "equal"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "eval"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "exp"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "expand_dims"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "eye"
+ argspec: "args=[\'size\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "flatten"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "floatx"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "foldl"
+ argspec: "args=[\'fn\', \'elems\', \'initializer\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "foldr"
+ argspec: "args=[\'fn\', \'elems\', \'initializer\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "function"
+ argspec: "args=[\'inputs\', \'outputs\', \'updates\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "gather"
+ argspec: "args=[\'reference\', \'indices\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_session"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_uid"
+ argspec: "args=[\'prefix\'], varargs=None, keywords=None, defaults=[\'\'], "
+ }
+ member_method {
+ name: "get_value"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "gradients"
+ argspec: "args=[\'loss\', \'variables\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "greater"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "greater_equal"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "hard_sigmoid"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "image_data_format"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "in_test_phase"
+ argspec: "args=[\'x\', \'alt\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "in_top_k"
+ argspec: "args=[\'predictions\', \'targets\', \'k\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "in_train_phase"
+ argspec: "args=[\'x\', \'alt\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "int_shape"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "is_sparse"
+ argspec: "args=[\'tensor\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "l2_normalize"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "learning_phase"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "less"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "less_equal"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "log"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "manual_variable_initialization"
+ argspec: "args=[\'value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "map_fn"
+ argspec: "args=[\'fn\', \'elems\', \'name\', \'dtype\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "max"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "maximum"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "min"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "minimum"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "moving_average_update"
+ argspec: "args=[\'x\', \'value\', \'momentum\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "name_scope"
+ argspec: "args=[\'name\', \'default_name\', \'values\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "ndim"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "normalize_batch_in_training"
+ argspec: "args=[\'x\', \'gamma\', \'beta\', \'reduction_axes\', \'epsilon\'], varargs=None, keywords=None, defaults=[\'0.001\'], "
+ }
+ member_method {
+ name: "not_equal"
+ argspec: "args=[\'x\', \'y\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "one_hot"
+ argspec: "args=[\'indices\', \'num_classes\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "ones"
+ argspec: "args=[\'shape\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "ones_like"
+ argspec: "args=[\'x\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "permute_dimensions"
+ argspec: "args=[\'x\', \'pattern\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "placeholder"
+ argspec: "args=[\'shape\', \'ndim\', \'dtype\', \'sparse\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'False\', \'None\'], "
+ }
+ member_method {
+ name: "pool2d"
+ argspec: "args=[\'x\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'pool_mode\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'None\', \'max\'], "
+ }
+ member_method {
+ name: "pool3d"
+ argspec: "args=[\'x\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'pool_mode\'], varargs=None, keywords=None, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'max\'], "
+ }
+ member_method {
+ name: "pow"
+ argspec: "args=[\'x\', \'a\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "print_tensor"
+ argspec: "args=[\'x\', \'message\'], varargs=None, keywords=None, defaults=[\'\'], "
+ }
+ member_method {
+ name: "prod"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "random_binomial"
+ argspec: "args=[\'shape\', \'p\', \'dtype\', \'seed\'], varargs=None, keywords=None, defaults=[\'0.0\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "random_normal"
+ argspec: "args=[\'shape\', \'mean\', \'stddev\', \'dtype\', \'seed\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "random_normal_variable"
+ argspec: "args=[\'shape\', \'mean\', \'scale\', \'dtype\', \'name\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "random_uniform"
+ argspec: "args=[\'shape\', \'minval\', \'maxval\', \'dtype\', \'seed\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "random_uniform_variable"
+ argspec: "args=[\'shape\', \'low\', \'high\', \'dtype\', \'name\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "relu"
+ argspec: "args=[\'x\', \'alpha\', \'max_value\'], varargs=None, keywords=None, defaults=[\'0.0\', \'None\'], "
+ }
+ member_method {
+ name: "repeat"
+ argspec: "args=[\'x\', \'n\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "repeat_elements"
+ argspec: "args=[\'x\', \'rep\', \'axis\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset_uids"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reshape"
+ argspec: "args=[\'x\', \'shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "resize_images"
+ argspec: "args=[\'x\', \'height_factor\', \'width_factor\', \'data_format\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "resize_volumes"
+ argspec: "args=[\'x\', \'depth_factor\', \'height_factor\', \'width_factor\', \'data_format\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reverse"
+ argspec: "args=[\'x\', \'axes\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "rnn"
+ argspec: "args=[\'step_function\', \'inputs\', \'initial_states\', \'go_backwards\', \'mask\', \'constants\', \'unroll\'], varargs=None, keywords=None, defaults=[\'False\', \'None\', \'None\', \'False\'], "
+ }
+ member_method {
+ name: "round"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "separable_conv2d"
+ argspec: "args=[\'x\', \'depthwise_kernel\', \'pointwise_kernel\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'None\', \'(1, 1)\'], "
+ }
+ member_method {
+ name: "set_epsilon"
+ argspec: "args=[\'value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_floatx"
+ argspec: "args=[\'value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_image_data_format"
+ argspec: "args=[\'data_format\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_learning_phase"
+ argspec: "args=[\'value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_session"
+ argspec: "args=[\'session\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_value"
+ argspec: "args=[\'x\', \'value\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "shape"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sigmoid"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sign"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sin"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "softmax"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "softplus"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "softsign"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sparse_categorical_crossentropy"
+ argspec: "args=[\'target\', \'output\', \'from_logits\'], varargs=None, keywords=None, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "spatial_2d_padding"
+ argspec: "args=[\'x\', \'padding\', \'data_format\'], varargs=None, keywords=None, defaults=[\'((1, 1), (1, 1))\', \'None\'], "
+ }
+ member_method {
+ name: "spatial_3d_padding"
+ argspec: "args=[\'x\', \'padding\', \'data_format\'], varargs=None, keywords=None, defaults=[\'((1, 1), (1, 1), (1, 1))\', \'None\'], "
+ }
+ member_method {
+ name: "sqrt"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "square"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "squeeze"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "stack"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=[\'0\'], "
+ }
+ member_method {
+ name: "std"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "stop_gradient"
+ argspec: "args=[\'variables\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sum"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "switch"
+ argspec: "args=[\'condition\', \'then_expression\', \'else_expression\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "tanh"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "temporal_padding"
+ argspec: "args=[\'x\', \'padding\'], varargs=None, keywords=None, defaults=[\'(1, 1)\'], "
+ }
+ member_method {
+ name: "to_dense"
+ argspec: "args=[\'tensor\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "transpose"
+ argspec: "args=[\'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "truncated_normal"
+ argspec: "args=[\'shape\', \'mean\', \'stddev\', \'dtype\', \'seed\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "update"
+ argspec: "args=[\'x\', \'new_x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "update_add"
+ argspec: "args=[\'x\', \'increment\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "update_sub"
+ argspec: "args=[\'x\', \'decrement\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "var"
+ argspec: "args=[\'x\', \'axis\', \'keepdims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+ member_method {
+ name: "variable"
+ argspec: "args=[\'value\', \'dtype\', \'name\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "zeros"
+ argspec: "args=[\'shape\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "zeros_like"
+ argspec: "args=[\'x\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-base-logger.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-base-logger.pbtxt
new file mode 100644
index 0000000000..ea4d514354
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-base-logger.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.BaseLogger"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.BaseLogger\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-c-s-v-logger.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-c-s-v-logger.pbtxt
new file mode 100644
index 0000000000..86b264c79f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-c-s-v-logger.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.CSVLogger"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.CSVLogger\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filename\', \'separator\', \'append\'], varargs=None, keywords=None, defaults=[\',\', \'False\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-callback.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-callback.pbtxt
new file mode 100644
index 0000000000..1474b392ff
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-callback.pbtxt
@@ -0,0 +1,41 @@
+path: "tensorflow.keras.callbacks.Callback"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-early-stopping.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-early-stopping.pbtxt
new file mode 100644
index 0000000000..27d4a208a4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-early-stopping.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.EarlyStopping"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.EarlyStopping\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'monitor\', \'min_delta\', \'patience\', \'verbose\', \'mode\'], varargs=None, keywords=None, defaults=[\'val_loss\', \'0\', \'0\', \'0\', \'auto\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-history.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-history.pbtxt
new file mode 100644
index 0000000000..a7b2deea82
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-history.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.History"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.History\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-lambda-callback.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-lambda-callback.pbtxt
new file mode 100644
index 0000000000..5ee22948ad
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-lambda-callback.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.LambdaCallback"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.LambdaCallback\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'on_epoch_begin\', \'on_epoch_end\', \'on_batch_begin\', \'on_batch_end\', \'on_train_begin\', \'on_train_end\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-learning-rate-scheduler.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-learning-rate-scheduler.pbtxt
new file mode 100644
index 0000000000..8719c07ca3
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-learning-rate-scheduler.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.LearningRateScheduler"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.LearningRateScheduler\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'schedule\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-model-checkpoint.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-model-checkpoint.pbtxt
new file mode 100644
index 0000000000..79f9c88bbc
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-model-checkpoint.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.ModelCheckpoint"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.ModelCheckpoint\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filepath\', \'monitor\', \'verbose\', \'save_best_only\', \'save_weights_only\', \'mode\', \'period\'], varargs=None, keywords=None, defaults=[\'val_loss\', \'0\', \'False\', \'False\', \'auto\', \'1\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-progbar-logger.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-progbar-logger.pbtxt
new file mode 100644
index 0000000000..0e6901f28a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-progbar-logger.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.ProgbarLogger"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.ProgbarLogger\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'count_mode\'], varargs=None, keywords=None, defaults=[\'samples\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-reduce-l-r-on-plateau.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-reduce-l-r-on-plateau.pbtxt
new file mode 100644
index 0000000000..5838d58312
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-reduce-l-r-on-plateau.pbtxt
@@ -0,0 +1,46 @@
+path: "tensorflow.keras.callbacks.ReduceLROnPlateau"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.ReduceLROnPlateau\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'monitor\', \'factor\', \'patience\', \'verbose\', \'mode\', \'epsilon\', \'cooldown\', \'min_lr\'], varargs=None, keywords=None, defaults=[\'val_loss\', \'0.1\', \'10\', \'0\', \'auto\', \'0.0001\', \'0\', \'0\'], "
+ }
+ member_method {
+ name: "in_cooldown"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-remote-monitor.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-remote-monitor.pbtxt
new file mode 100644
index 0000000000..3d0acfed1d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-remote-monitor.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.RemoteMonitor"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.RemoteMonitor\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'root\', \'path\', \'field\', \'headers\'], varargs=None, keywords=None, defaults=[\'http://localhost:9000\', \'/publish/epoch/end/\', \'data\', \'None\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-tensor-board.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-tensor-board.pbtxt
new file mode 100644
index 0000000000..6620a9d308
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-tensor-board.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.TensorBoard"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.TensorBoard\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'log_dir\', \'histogram_freq\', \'batch_size\', \'write_graph\', \'write_grads\', \'write_images\'], varargs=None, keywords=None, defaults=[\'./logs\', \'0\', \'32\', \'True\', \'False\', \'False\'], "
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-terminate-on-na-n.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-terminate-on-na-n.pbtxt
new file mode 100644
index 0000000000..bf17e8736c
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.-terminate-on-na-n.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.callbacks.TerminateOnNaN"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.TerminateOnNaN\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.callbacks.Callback\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "on_batch_begin"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_batch_end"
+ argspec: "args=[\'self\', \'batch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_begin"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\', \'epoch\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_begin"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "on_train_end"
+ argspec: "args=[\'self\', \'logs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_model"
+ argspec: "args=[\'self\', \'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.callbacks.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.pbtxt
new file mode 100644
index 0000000000..1e9085e034
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.callbacks.pbtxt
@@ -0,0 +1,55 @@
+path: "tensorflow.keras.callbacks"
+tf_module {
+ member {
+ name: "BaseLogger"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "CSVLogger"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Callback"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "EarlyStopping"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "History"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LambdaCallback"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LearningRateScheduler"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ModelCheckpoint"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ProgbarLogger"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ReduceLROnPlateau"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "RemoteMonitor"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "TensorBoard"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "TerminateOnNaN"
+ mtype: "<type \'type\'>"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.-constraint.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-constraint.pbtxt
new file mode 100644
index 0000000000..14977c696f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-constraint.pbtxt
@@ -0,0 +1,12 @@
+path: "tensorflow.keras.constraints.Constraint"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.-max-norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-max-norm.pbtxt
new file mode 100644
index 0000000000..a2269f8a18
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-max-norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.MaxNorm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.MaxNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'max_value\', \'axis\'], varargs=None, keywords=None, defaults=[\'2\', \'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.-min-max-norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-min-max-norm.pbtxt
new file mode 100644
index 0000000000..afe0d6478d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-min-max-norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.MinMaxNorm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.MinMaxNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'min_value\', \'max_value\', \'rate\', \'axis\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'1.0\', \'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.-non-neg.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-non-neg.pbtxt
new file mode 100644
index 0000000000..e8c4bb9088
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-non-neg.pbtxt
@@ -0,0 +1,13 @@
+path: "tensorflow.keras.constraints.NonNeg"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.NonNeg\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.-unit-norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-unit-norm.pbtxt
new file mode 100644
index 0000000000..d457cb6419
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.-unit-norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.UnitNorm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.UnitNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'axis\'], varargs=None, keywords=None, defaults=[\'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.max_norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.max_norm.pbtxt
new file mode 100644
index 0000000000..48128096d4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.max_norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.max_norm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.MaxNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'max_value\', \'axis\'], varargs=None, keywords=None, defaults=[\'2\', \'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.min_max_norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.min_max_norm.pbtxt
new file mode 100644
index 0000000000..02eb3fb00c
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.min_max_norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.min_max_norm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.MinMaxNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'min_value\', \'max_value\', \'rate\', \'axis\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'1.0\', \'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.non_neg.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.non_neg.pbtxt
new file mode 100644
index 0000000000..cc1101097c
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.non_neg.pbtxt
@@ -0,0 +1,13 @@
+path: "tensorflow.keras.constraints.non_neg"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.NonNeg\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.pbtxt
new file mode 100644
index 0000000000..655685956f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.pbtxt
@@ -0,0 +1,51 @@
+path: "tensorflow.keras.constraints"
+tf_module {
+ member {
+ name: "Constraint"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxNorm"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MinMaxNorm"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "NonNeg"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "UnitNorm"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "max_norm"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "min_max_norm"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "non_neg"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "unit_norm"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'constraint\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.constraints.unit_norm.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.constraints.unit_norm.pbtxt
new file mode 100644
index 0000000000..086f9f2d43
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.constraints.unit_norm.pbtxt
@@ -0,0 +1,14 @@
+path: "tensorflow.keras.constraints.unit_norm"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.UnitNorm\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.constraints.Constraint\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'axis\'], varargs=None, keywords=None, defaults=[\'0\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.boston_housing.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.boston_housing.pbtxt
new file mode 100644
index 0000000000..ef08f9b20f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.boston_housing.pbtxt
@@ -0,0 +1,7 @@
+path: "tensorflow.keras.datasets.boston_housing"
+tf_module {
+ member_method {
+ name: "load_data"
+ argspec: "args=[\'path\', \'seed\', \'test_split\'], varargs=None, keywords=None, defaults=[\'boston_housing.npz\', \'113\', \'0.2\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar10.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar10.pbtxt
new file mode 100644
index 0000000000..8a5142f793
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar10.pbtxt
@@ -0,0 +1,7 @@
+path: "tensorflow.keras.datasets.cifar10"
+tf_module {
+ member_method {
+ name: "load_data"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar100.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar100.pbtxt
new file mode 100644
index 0000000000..16f184eeb5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.cifar100.pbtxt
@@ -0,0 +1,7 @@
+path: "tensorflow.keras.datasets.cifar100"
+tf_module {
+ member_method {
+ name: "load_data"
+ argspec: "args=[\'label_mode\'], varargs=None, keywords=None, defaults=[\'fine\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.imdb.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.imdb.pbtxt
new file mode 100644
index 0000000000..8b1c17e9da
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.imdb.pbtxt
@@ -0,0 +1,11 @@
+path: "tensorflow.keras.datasets.imdb"
+tf_module {
+ member_method {
+ name: "get_word_index"
+ argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=[\'imdb_word_index.json\'], "
+ }
+ member_method {
+ name: "load_data"
+ argspec: "args=[\'path\', \'num_words\', \'skip_top\', \'maxlen\', \'seed\', \'start_char\', \'oov_char\', \'index_from\'], varargs=None, keywords=None, defaults=[\'imdb.npz\', \'None\', \'0\', \'None\', \'113\', \'1\', \'2\', \'3\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.mnist.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.mnist.pbtxt
new file mode 100644
index 0000000000..530bb07550
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.mnist.pbtxt
@@ -0,0 +1,7 @@
+path: "tensorflow.keras.datasets.mnist"
+tf_module {
+ member_method {
+ name: "load_data"
+ argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=[\'mnist.npz\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.pbtxt
new file mode 100644
index 0000000000..d4aa436f32
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.pbtxt
@@ -0,0 +1,27 @@
+path: "tensorflow.keras.datasets"
+tf_module {
+ member {
+ name: "boston_housing"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "cifar10"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "cifar100"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "imdb"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "mnist"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "reuters"
+ mtype: "<type \'module\'>"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.datasets.reuters.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.datasets.reuters.pbtxt
new file mode 100644
index 0000000000..6b3ed1e9af
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.datasets.reuters.pbtxt
@@ -0,0 +1,11 @@
+path: "tensorflow.keras.datasets.reuters"
+tf_module {
+ member_method {
+ name: "get_word_index"
+ argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=[\'reuters_word_index.json\'], "
+ }
+ member_method {
+ name: "load_data"
+ argspec: "args=[\'path\', \'num_words\', \'skip_top\', \'maxlen\', \'test_split\', \'seed\', \'start_char\', \'oov_char\', \'index_from\'], varargs=None, keywords=None, defaults=[\'reuters.npz\', \'None\', \'0\', \'None\', \'0.2\', \'113\', \'1\', \'2\', \'3\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-constant.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-constant.pbtxt
new file mode 100644
index 0000000000..cbaba78ed5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-constant.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.Constant"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Constant\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'value\', \'dtype\', \'verify_shape\'], varargs=None, keywords=None, defaults=[\'0\', \"<dtype: \'float32\'>\", \'False\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-identity.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-identity.pbtxt
new file mode 100644
index 0000000000..a5f7f348de
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-identity.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.Identity"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Identity\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'gain\', \'dtype\'], varargs=None, keywords=None, defaults=[\'1.0\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-initializer.pbtxt
new file mode 100644
index 0000000000..8f10d1698e
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-initializer.pbtxt
@@ -0,0 +1,16 @@
+path: "tensorflow.keras.initializers.Initializer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-ones.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-ones.pbtxt
new file mode 100644
index 0000000000..2fbfa774f8
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-ones.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.Ones"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Ones\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'dtype\'], varargs=None, keywords=None, defaults=[\"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-orthogonal.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-orthogonal.pbtxt
new file mode 100644
index 0000000000..874d320d73
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-orthogonal.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.Orthogonal"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Orthogonal\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'gain\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-normal.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-normal.pbtxt
new file mode 100644
index 0000000000..23cd02c0b0
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-normal.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.RandomNormal"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.RandomNormal\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'mean\', \'stddev\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-uniform.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-uniform.pbtxt
new file mode 100644
index 0000000000..d98628f422
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-random-uniform.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.RandomUniform"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.RandomUniform\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'minval\', \'maxval\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0\', \'None\', \'None\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-truncated-normal.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-truncated-normal.pbtxt
new file mode 100644
index 0000000000..86d48257c1
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-truncated-normal.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.TruncatedNormal"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.TruncatedNormal\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'mean\', \'stddev\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-variance-scaling.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-variance-scaling.pbtxt
new file mode 100644
index 0000000000..32a6f6ee88
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-variance-scaling.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.VarianceScaling"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.VarianceScaling\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'scale\', \'mode\', \'distribution\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'1.0\', \'fan_in\', \'normal\', \'None\', \"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.-zeros.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-zeros.pbtxt
new file mode 100644
index 0000000000..b6ab68e5be
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.-zeros.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.initializers.Zeros"
+tf_class {
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Zeros\'>"
+ is_instance: "<class \'tensorflow.python.ops.init_ops.Initializer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'dtype\'], varargs=None, keywords=None, defaults=[\"<dtype: \'float32\'>\"], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.initializers.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.initializers.pbtxt
new file mode 100644
index 0000000000..093c56595b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.initializers.pbtxt
@@ -0,0 +1,79 @@
+path: "tensorflow.keras.initializers"
+tf_module {
+ member {
+ name: "Constant"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Identity"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Initializer"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Ones"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Orthogonal"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "RandomNormal"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "RandomUniform"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "TruncatedNormal"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "VarianceScaling"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Zeros"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "glorot_normal"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "glorot_uniform"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "he_normal"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "he_uniform"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "lecun_normal"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "lecun_uniform"
+ argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'initializer\'], varargs=None, keywords=None, defaults=None"
+ }
+}
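
The `argspec` strings in the golden files above are serialized views of each callable's signature. As an illustrative sketch (not TensorFlow's actual serializer — the helper name and formatting details here are assumptions), a similar line can be produced with the standard `inspect` module; the stand-in class below mimics `tf.keras.initializers.RandomUniform`:

```python
import inspect

class RandomUniform:
    """Stand-in for tf.keras.initializers.RandomUniform (illustrative only)."""
    def __init__(self, minval=0, maxval=None, seed=None, dtype="float32"):
        self.minval, self.maxval, self.seed, self.dtype = minval, maxval, seed, dtype

def format_argspec(func):
    # Render a signature roughly in the golden-file style:
    # "args=[...], varargs=..., keywords=..., defaults=[...]"
    spec = inspect.getfullargspec(func)
    args = "[%s]" % ", ".join("'%s'" % a for a in spec.args)
    defaults = (None if spec.defaults is None
                else "[%s]" % ", ".join("'%s'" % (d,) for d in spec.defaults))
    return "args=%s, varargs=%s, keywords=%s, defaults=%s" % (
        args, spec.varargs, spec.varkw, defaults)

print(format_argspec(RandomUniform.__init__))
```

A signature change (renamed parameter, new default) changes the rendered string, which is what lets a snapshot comparison flag backward-incompatible API edits.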
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-activation.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-activation.pbtxt
new file mode 100644
index 0000000000..52b65bb916
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-activation.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Activation"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Activation\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'activation\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-activity-regularization.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-activity-regularization.pbtxt
new file mode 100644
index 0000000000..5ef00eada9
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-activity-regularization.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ActivityRegularization"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.ActivityRegularization\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'l1\', \'l2\'], varargs=None, keywords=kwargs, defaults=[\'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-add.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-add.pbtxt
new file mode 100644
index 0000000000..a75a51a411
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-add.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Add"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Add\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
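
The point of checking in goldens like the `Add` layer snapshot above is that a test can later diff the live class against the recorded surface. A minimal sketch of that idea, assuming nothing about TensorFlow's real test harness (the class, method set, and helper below are all hypothetical):

```python
import inspect

# Method names as they would appear in a golden member_method list.
GOLDEN_METHODS = {"__init__", "from_config", "get_config"}

class TruncatedNormal:
    """Stand-in for an initializer class covered by a golden file."""
    def __init__(self, mean=0.0, stddev=1.0, seed=None, dtype="float32"):
        self.mean, self.stddev, self.seed, self.dtype = mean, stddev, seed, dtype

    @classmethod
    def from_config(cls, config):
        return cls(**config)

    def get_config(self):
        return {"mean": self.mean, "stddev": self.stddev}

def missing_methods(cls, golden):
    # Report golden-listed methods the live class no longer exposes.
    have = {name for name, member in inspect.getmembers(cls) if callable(member)}
    return sorted(golden - have)

print(missing_methods(TruncatedNormal, GOLDEN_METHODS))
```

If a refactor drops or renames a public method, the comparison surfaces it immediately, which is why moving Keras under `tf.keras` (as this commit does) comes paired with a large batch of new golden files.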
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-alpha-dropout.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-alpha-dropout.pbtxt
new file mode 100644
index 0000000000..560295eb3e
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-alpha-dropout.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.AlphaDropout"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.noise.AlphaDropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\', \'noise_shape\', \'seed\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling1-d.pbtxt
new file mode 100644
index 0000000000..f05a216e95
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AveragePooling1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\'], varargs=None, keywords=kwargs, defaults=[\'2\', \'None\', \'valid\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling2-d.pbtxt
new file mode 100644
index 0000000000..2a71a5a2e6
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AveragePooling2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling3-d.pbtxt
new file mode 100644
index 0000000000..8756b96297
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average-pooling3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AveragePooling3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-average.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average.pbtxt
new file mode 100644
index 0000000000..9a2940d298
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-average.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Average"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Average\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool1-d.pbtxt
new file mode 100644
index 0000000000..62a53b8ab6
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AvgPool1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\'], varargs=None, keywords=kwargs, defaults=[\'2\', \'None\', \'valid\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool2-d.pbtxt
new file mode 100644
index 0000000000..d442311087
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AvgPool2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool3-d.pbtxt
new file mode 100644
index 0000000000..812118f340
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-avg-pool3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.AvgPool3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.AveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.AveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-batch-normalization.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-batch-normalization.pbtxt
new file mode 100644
index 0000000000..3aa6a990b6
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-batch-normalization.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.BatchNormalization"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.normalization.BatchNormalization\'>"
+ is_instance: "<class \'tensorflow.python.layers.normalization.BatchNormalization\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'axis\', \'momentum\', \'epsilon\', \'center\', \'scale\', \'beta_initializer\', \'gamma_initializer\', \'moving_mean_initializer\', \'moving_variance_initializer\', \'beta_regularizer\', \'gamma_regularizer\', \'beta_constraint\', \'gamma_constraint\'], varargs=None, keywords=kwargs, defaults=[\'-1\', \'0.99\', \'0.001\', \'True\', \'True\', \'zeros\', \'ones\', \'zeros\', \'ones\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-bidirectional.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-bidirectional.pbtxt
new file mode 100644
index 0000000000..a0f64a8245
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-bidirectional.pbtxt
@@ -0,0 +1,172 @@
+path: "tensorflow.keras.layers.Bidirectional"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.wrappers.Bidirectional\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.wrappers.Wrapper\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "activity_regularizer"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "constraints"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'layer\', \'merge_mode\', \'weights\'], varargs=None, keywords=kwargs, defaults=[\'concat\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-concatenate.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-concatenate.pbtxt
new file mode 100644
index 0000000000..fe8fc4fd6d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-concatenate.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Concatenate"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Concatenate\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'axis\'], varargs=None, keywords=kwargs, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv-l-s-t-m2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv-l-s-t-m2-d.pbtxt
new file mode 100644
index 0000000000..a482dec23f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv-l-s-t-m2-d.pbtxt
@@ -0,0 +1,189 @@
+path: "tensorflow.keras.layers.ConvLSTM2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional_recurrent.ConvLSTM2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional_recurrent.ConvRecurrent2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.Recurrent\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'recurrent_activation\', \'use_bias\', \'kernel_initializer\', \'recurrent_initializer\', \'bias_initializer\', \'unit_forget_bias\', \'kernel_regularizer\', \'recurrent_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'recurrent_constraint\', \'bias_constraint\', \'return_sequences\', \'go_backwards\', \'stateful\', \'dropout\', \'recurrent_dropout\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'(1, 1)\', \'tanh\', \'hard_sigmoid\', \'True\', \'glorot_uniform\', \'orthogonal\', \'zeros\', \'True\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'False\', \'False\', \'False\', \'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\', \'training\', \'initial_state\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_constants"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_initial_state"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "input_conv"
+ argspec: "args=[\'self\', \'x\', \'w\', \'b\', \'padding\'], varargs=None, keywords=None, defaults=[\'None\', \'valid\'], "
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "reccurent_conv"
+ argspec: "args=[\'self\', \'x\', \'w\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "step"
+ argspec: "args=[\'self\', \'inputs\', \'states\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv1-d.pbtxt
new file mode 100644
index 0000000000..977a0035bf
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Conv1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'1\', \'valid\', \'1\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d-transpose.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d-transpose.pbtxt
new file mode 100644
index 0000000000..d63c5a23b4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d-transpose.pbtxt
@@ -0,0 +1,162 @@
+path: "tensorflow.keras.layers.Conv2DTranspose"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv2DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d.pbtxt
new file mode 100644
index 0000000000..3cc9a2267f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Conv2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'(1, 1)\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d-transpose.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d-transpose.pbtxt
new file mode 100644
index 0000000000..3653eb5b3b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d-transpose.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Conv3DTranspose"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv3DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d.pbtxt
new file mode 100644
index 0000000000..e549444986
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-conv3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Conv3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'(1, 1, 1)\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution1-d.pbtxt
new file mode 100644
index 0000000000..a8984deb2b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Convolution1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'1\', \'valid\', \'1\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d-transpose.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d-transpose.pbtxt
new file mode 100644
index 0000000000..bd61143235
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d-transpose.pbtxt
@@ -0,0 +1,162 @@
+path: "tensorflow.keras.layers.Convolution2DTranspose"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv2DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d.pbtxt
new file mode 100644
index 0000000000..0a87c40e27
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Convolution2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'(1, 1)\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d-transpose.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d-transpose.pbtxt
new file mode 100644
index 0000000000..005cec9748
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d-transpose.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Convolution3DTranspose"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv3DTranspose\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d.pbtxt
new file mode 100644
index 0000000000..caf06b130d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-convolution3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.Convolution3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1, 1)\', \'valid\', \'None\', \'(1, 1, 1)\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping1-d.pbtxt
new file mode 100644
index 0000000000..e3287554a6
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping1-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Cropping1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Cropping1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'cropping\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping2-d.pbtxt
new file mode 100644
index 0000000000..7aecf7fe33
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping2-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Cropping2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Cropping2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'cropping\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'((0, 0), (0, 0))\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping3-d.pbtxt
new file mode 100644
index 0000000000..a7bd30675b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-cropping3-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Cropping3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.Cropping3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'cropping\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'((1, 1), (1, 1), (1, 1))\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-dense.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dense.pbtxt
new file mode 100644
index 0000000000..c502083af8
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dense.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Dense"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Dense\'>"
+ is_instance: "<class \'tensorflow.python.layers.core.Dense\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'units\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-dot.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dot.pbtxt
new file mode 100644
index 0000000000..ebc21b0168
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dot.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Dot"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Dot\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'axes\', \'normalize\'], varargs=None, keywords=kwargs, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-dropout.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dropout.pbtxt
new file mode 100644
index 0000000000..19a8a3cc03
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-dropout.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Dropout"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\', \'noise_shape\', \'seed\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-e-l-u.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-e-l-u.pbtxt
new file mode 100644
index 0000000000..2c8f19068b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-e-l-u.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ELU"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.advanced_activations.ELU\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'alpha\'], varargs=None, keywords=kwargs, defaults=[\'1.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-embedding.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-embedding.pbtxt
new file mode 100644
index 0000000000..e5a9273009
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-embedding.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Embedding"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.embeddings.Embedding\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'input_dim\', \'output_dim\', \'embeddings_initializer\', \'embeddings_regularizer\', \'activity_regularizer\', \'embeddings_constraint\', \'mask_zero\', \'input_length\'], varargs=None, keywords=kwargs, defaults=[\'uniform\', \'None\', \'None\', \'None\', \'False\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-flatten.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-flatten.pbtxt
new file mode 100644
index 0000000000..0f1898bcfa
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-flatten.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Flatten"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Flatten\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-g-r-u.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-g-r-u.pbtxt
new file mode 100644
index 0000000000..c8cd8faaac
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-g-r-u.pbtxt
@@ -0,0 +1,180 @@
+path: "tensorflow.keras.layers.GRU"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.GRU\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.Recurrent\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'units\', \'activation\', \'recurrent_activation\', \'use_bias\', \'kernel_initializer\', \'recurrent_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'recurrent_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'recurrent_constraint\', \'bias_constraint\', \'dropout\', \'recurrent_dropout\'], varargs=None, keywords=kwargs, defaults=[\'tanh\', \'hard_sigmoid\', \'True\', \'glorot_uniform\', \'orthogonal\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\', \'training\', \'initial_state\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_constants"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_initial_state"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\', \'states\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "step"
+ argspec: "args=[\'self\', \'inputs\', \'states\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-dropout.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-dropout.pbtxt
new file mode 100644
index 0000000000..98c8b96719
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-dropout.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.GaussianDropout"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.noise.GaussianDropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-noise.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-noise.pbtxt
new file mode 100644
index 0000000000..f961291110
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-gaussian-noise.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.GaussianNoise"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.noise.GaussianNoise\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'stddev\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling1-d.pbtxt
new file mode 100644
index 0000000000..e120da3649
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling1-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAveragePooling1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling2-d.pbtxt
new file mode 100644
index 0000000000..89eb90efd9
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling2-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAveragePooling2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling3-d.pbtxt
new file mode 100644
index 0000000000..d6d35c45df
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-average-pooling3-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAveragePooling3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool1-d.pbtxt
new file mode 100644
index 0000000000..3d28cb068e
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool1-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAvgPool1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool2-d.pbtxt
new file mode 100644
index 0000000000..2bc4297b83
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool2-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAvgPool2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool3-d.pbtxt
new file mode 100644
index 0000000000..83de1acdcf
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-avg-pool3-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalAvgPool3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalAveragePooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool1-d.pbtxt
new file mode 100644
index 0000000000..58dee9406c
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool1-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPool1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool2-d.pbtxt
new file mode 100644
index 0000000000..6490cd4b59
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool2-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPool2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool3-d.pbtxt
new file mode 100644
index 0000000000..15e1a609f3
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pool3-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPool3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling1-d.pbtxt
new file mode 100644
index 0000000000..4a795aa663
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling1-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPooling1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling2-d.pbtxt
new file mode 100644
index 0000000000..dab26b5627
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling2-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPooling2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling3-d.pbtxt
new file mode 100644
index 0000000000..cbe05ed7a4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-global-max-pooling3-d.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.GlobalMaxPooling3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.GlobalMaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling._GlobalPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-layer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-layer.pbtxt
new file mode 100644
index 0000000000..b3f81cc459
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-layer.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.InputLayer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.InputLayer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.InputLayer\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'input_shape\', \'batch_size\', \'dtype\', \'input_tensor\', \'sparse\', \'name\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'False\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-spec.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-spec.pbtxt
new file mode 100644
index 0000000000..3aeef347ae
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-input-spec.pbtxt
@@ -0,0 +1,9 @@
+path: "tensorflow.keras.layers.InputSpec"
+tf_class {
+ is_instance: "<class \'tensorflow.python.layers.base.InputSpec\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'dtype\', \'shape\', \'ndim\', \'max_ndim\', \'min_ndim\', \'axes\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-l-s-t-m.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-l-s-t-m.pbtxt
new file mode 100644
index 0000000000..36a7e4a176
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-l-s-t-m.pbtxt
@@ -0,0 +1,180 @@
+path: "tensorflow.keras.layers.LSTM"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.LSTM\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.Recurrent\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'units\', \'activation\', \'recurrent_activation\', \'use_bias\', \'kernel_initializer\', \'recurrent_initializer\', \'bias_initializer\', \'unit_forget_bias\', \'kernel_regularizer\', \'recurrent_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'recurrent_constraint\', \'bias_constraint\', \'dropout\', \'recurrent_dropout\'], varargs=None, keywords=kwargs, defaults=[\'tanh\', \'hard_sigmoid\', \'True\', \'glorot_uniform\', \'orthogonal\', \'zeros\', \'True\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\', \'training\', \'initial_state\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_constants"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_initial_state"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\', \'states\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "step"
+ argspec: "args=[\'self\', \'inputs\', \'states\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-lambda.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-lambda.pbtxt
new file mode 100644
index 0000000000..1d62867eb4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-lambda.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Lambda"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Lambda\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'function\', \'mask\', \'arguments\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-layer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-layer.pbtxt
new file mode 100644
index 0000000000..7326d87cda
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-layer.pbtxt
@@ -0,0 +1,158 @@
+path: "tensorflow.keras.layers.Layer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-leaky-re-l-u.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-leaky-re-l-u.pbtxt
new file mode 100644
index 0000000000..6a0c72ecdf
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-leaky-re-l-u.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.LeakyReLU"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.advanced_activations.LeakyReLU\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'alpha\'], varargs=None, keywords=kwargs, defaults=[\'0.3\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected1-d.pbtxt
new file mode 100644
index 0000000000..a8338314b8
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected1-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.LocallyConnected1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.local.LocallyConnected1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'1\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
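The argspec above shows that `LocallyConnected1D` accepts the same constructor parameters as `Conv1D`, but unlike a convolution it applies a different (unshared) kernel at each window position. A minimal pure-Python sketch of that forward pass, with no TensorFlow dependency (the helper name and single-channel simplification are illustrative, not part of the API):

```python
# Sketch of the unshared-weights 1-D "convolution" that LocallyConnected1D
# performs: one independent kernel per output position (illustrative only).

def locally_connected_1d(x, kernels, kernel_size, strides=1):
    """x: list of floats (single channel); kernels: one weight list per
    output position, each of length kernel_size."""
    out_len = (len(x) - kernel_size) // strides + 1
    assert len(kernels) == out_len, "one kernel per output position"
    out = []
    for i in range(out_len):
        window = x[i * strides : i * strides + kernel_size]
        # Each position i uses its own kernel -- weights are NOT shared.
        out.append(sum(w * v for w, v in zip(kernels[i], window)))
    return out

x = [1.0, 2.0, 3.0, 4.0]
kernels = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # 3 positions, size-2 kernels
print(locally_connected_1d(x, kernels, kernel_size=2))  # [1.0, 3.0, 3.5]
```

This is why the layer has far more parameters than a `Conv1D` of the same `filters`/`kernel_size`: the kernel count scales with the output length.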
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected2-d.pbtxt
new file mode 100644
index 0000000000..a74f1a7c2a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-locally-connected2-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.LocallyConnected2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.local.LocallyConnected2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'None\', \'True\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-masking.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-masking.pbtxt
new file mode 100644
index 0000000000..8c5d9b0fc9
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-masking.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Masking"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Masking\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'mask_value\'], varargs=None, keywords=kwargs, defaults=[\'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
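Per the `__init__` argspec above, `Masking` takes a single `mask_value` (default `0.0`); its `compute_mask` method marks a timestep as masked when every feature in it equals `mask_value`. A hedged pure-Python sketch of that rule (list-of-lists stand-in for a 3-D tensor, helper name hypothetical):

```python
# Sketch of Masking's behavior: a timestep is dropped from the mask when
# ALL of its features equal mask_value (default 0.0, per the argspec).

def compute_mask(batch, mask_value=0.0):
    """batch: list of samples, each a list of timesteps, each a list of
    features. Returns a boolean mask per timestep (True = keep)."""
    return [
        [any(f != mask_value for f in timestep) for timestep in sample]
        for sample in batch
    ]

sample = [[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]]
print(compute_mask([sample]))  # [[False, True, False]]
```

Note that a timestep with only some features equal to `mask_value` (like `[1.0, 0.0]` above) is kept.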
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool1-d.pbtxt
new file mode 100644
index 0000000000..0d1998dff6
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPool1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\'], varargs=None, keywords=kwargs, defaults=[\'2\', \'None\', \'valid\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
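The `__init__` argspec above gives `MaxPool1D` the defaults `pool_size=2`, `strides=None` (which falls back to `pool_size`), and `padding='valid'`. A minimal single-channel sketch of that pooling rule (helper name is illustrative, not the TensorFlow implementation):

```python
# Sketch of 1-D max pooling with the defaults shown in the argspec above:
# pool_size=2, strides=None (falls back to pool_size), padding='valid'.

def max_pool_1d(x, pool_size=2, strides=None):
    strides = strides or pool_size
    out = []
    i = 0
    while i + pool_size <= len(x):  # 'valid' padding: drop the remainder
        out.append(max(x[i:i + pool_size]))
        i += strides
    return out

print(max_pool_1d([1.0, 3.0, 2.0, 5.0, 4.0]))  # [3.0, 5.0]
```

With `'valid'` padding the trailing element `4.0` is discarded because it cannot fill a complete window.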
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool2-d.pbtxt
new file mode 100644
index 0000000000..4858920ea7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPool2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
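`MaxPool2D` generalizes the same idea to two spatial dimensions, with defaults `pool_size=(2, 2)`, `strides=None`, `padding='valid'` per the argspec above. A hedged pure-Python sketch over a single-channel grid (helper name hypothetical):

```python
# Sketch of 2-D max pooling with the defaults from the argspec above:
# pool_size=(2, 2), strides=None (defaults to pool_size), padding='valid'.

def max_pool_2d(grid, pool_size=(2, 2), strides=None):
    ph, pw = pool_size
    sh, sw = strides or pool_size
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows - ph + 1, sh):
        out.append([
            max(grid[r + i][c + j] for i in range(ph) for j in range(pw))
            for c in range(0, cols - pw + 1, sw)
        ])
    return out

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool_2d(grid))  # [[6, 8], [14, 16]]
```

Each non-overlapping 2x2 window collapses to its maximum, halving both spatial dimensions.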
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool3-d.pbtxt
new file mode 100644
index 0000000000..57df6727cf
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pool3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPool3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling1-d.pbtxt
new file mode 100644
index 0000000000..5ddc879399
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPooling1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling1D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\'], varargs=None, keywords=kwargs, defaults=[\'2\', \'None\', \'valid\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling2-d.pbtxt
new file mode 100644
index 0000000000..b8186c15f3
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPooling2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling3-d.pbtxt
new file mode 100644
index 0000000000..16fe3372f7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-max-pooling3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.MaxPooling3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.pooling.MaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling.MaxPooling3D\'>"
+ is_instance: "<class \'tensorflow.python.layers.pooling._Pooling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'pool_size\', \'strides\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2, 2)\', \'None\', \'valid\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-maximum.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-maximum.pbtxt
new file mode 100644
index 0000000000..baeb3d8353
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-maximum.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Maximum"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Maximum\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-multiply.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-multiply.pbtxt
new file mode 100644
index 0000000000..5c1d511cf7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-multiply.pbtxt
@@ -0,0 +1,160 @@
+path: "tensorflow.keras.layers.Multiply"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge.Multiply\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.merge._Merge\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-p-re-l-u.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-p-re-l-u.pbtxt
new file mode 100644
index 0000000000..a8f938cc6e
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-p-re-l-u.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.PReLU"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.advanced_activations.PReLU\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'alpha_initializer\', \'alpha_regularizer\', \'alpha_constraint\', \'shared_axes\'], varargs=None, keywords=kwargs, defaults=[\'zeros\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-permute.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-permute.pbtxt
new file mode 100644
index 0000000000..eac826b965
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-permute.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Permute"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Permute\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'dims\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-repeat-vector.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-repeat-vector.pbtxt
new file mode 100644
index 0000000000..dfae244356
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-repeat-vector.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.RepeatVector"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.RepeatVector\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'n\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-reshape.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-reshape.pbtxt
new file mode 100644
index 0000000000..5c8192b226
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-reshape.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.Reshape"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Reshape\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'target_shape\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-conv2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-conv2-d.pbtxt
new file mode 100644
index 0000000000..3da1d84060
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-conv2-d.pbtxt
@@ -0,0 +1,162 @@
+path: "tensorflow.keras.layers.SeparableConv2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.SeparableConv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.SeparableConv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'depth_multiplier\', \'activation\', \'use_bias\', \'depthwise_initializer\', \'pointwise_initializer\', \'bias_initializer\', \'depthwise_regularizer\', \'pointwise_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'depthwise_constraint\', \'pointwise_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'1\', \'None\', \'True\', \'glorot_uniform\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-convolution2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-convolution2-d.pbtxt
new file mode 100644
index 0000000000..4b593c19c7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-separable-convolution2-d.pbtxt
@@ -0,0 +1,162 @@
+path: "tensorflow.keras.layers.SeparableConvolution2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.SeparableConv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.SeparableConv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional.Conv2D\'>"
+ is_instance: "<class \'tensorflow.python.layers.convolutional._Conv\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'depth_multiplier\', \'activation\', \'use_bias\', \'depthwise_initializer\', \'pointwise_initializer\', \'bias_initializer\', \'depthwise_regularizer\', \'pointwise_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'depthwise_constraint\', \'pointwise_constraint\', \'bias_constraint\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'valid\', \'None\', \'1\', \'None\', \'True\', \'glorot_uniform\', \'glorot_uniform\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-simple-r-n-n.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-simple-r-n-n.pbtxt
new file mode 100644
index 0000000000..8620322230
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-simple-r-n-n.pbtxt
@@ -0,0 +1,180 @@
+path: "tensorflow.keras.layers.SimpleRNN"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.SimpleRNN\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.recurrent.Recurrent\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'units\', \'activation\', \'use_bias\', \'kernel_initializer\', \'recurrent_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'recurrent_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'kernel_constraint\', \'recurrent_constraint\', \'bias_constraint\', \'dropout\', \'recurrent_dropout\'], varargs=None, keywords=kwargs, defaults=[\'tanh\', \'True\', \'glorot_uniform\', \'orthogonal\', \'zeros\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\', \'training\', \'initial_state\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_constants"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_initial_state"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "preprocess_input"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\', \'states\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "step"
+ argspec: "args=[\'self\', \'inputs\', \'states\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout1-d.pbtxt
new file mode 100644
index 0000000000..156943a201
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout1-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.SpatialDropout1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.SpatialDropout1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout2-d.pbtxt
new file mode 100644
index 0000000000..5368b5468a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout2-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.SpatialDropout2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.SpatialDropout2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout3-d.pbtxt
new file mode 100644
index 0000000000..568b5ad66e
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-spatial-dropout3-d.pbtxt
@@ -0,0 +1,161 @@
+path: "tensorflow.keras.layers.SpatialDropout3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.SpatialDropout3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.layers.core.Dropout\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'rate\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-thresholded-re-l-u.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-thresholded-re-l-u.pbtxt
new file mode 100644
index 0000000000..445f2df59d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-thresholded-re-l-u.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ThresholdedReLU"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.advanced_activations.ThresholdedReLU\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'theta\'], varargs=None, keywords=kwargs, defaults=[\'1.0\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-time-distributed.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-time-distributed.pbtxt
new file mode 100644
index 0000000000..b6ebf02b2a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-time-distributed.pbtxt
@@ -0,0 +1,168 @@
+path: "tensorflow.keras.layers.TimeDistributed"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.wrappers.TimeDistributed\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.wrappers.Wrapper\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "activity_regularizer"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "constraints"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'layer\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'training\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling1-d.pbtxt
new file mode 100644
index 0000000000..868805a563
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling1-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.UpSampling1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.UpSampling1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'size\'], varargs=None, keywords=kwargs, defaults=[\'2\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling2-d.pbtxt
new file mode 100644
index 0000000000..caa85afa15
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling2-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.UpSampling2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.UpSampling2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'size\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2)\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling3-d.pbtxt
new file mode 100644
index 0000000000..d3362faefa
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-up-sampling3-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.UpSampling3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.UpSampling3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'size\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(2, 2, 2)\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-wrapper.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-wrapper.pbtxt
new file mode 100644
index 0000000000..ede827f4ec
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-wrapper.pbtxt
@@ -0,0 +1,167 @@
+path: "tensorflow.keras.layers.Wrapper"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.wrappers.Wrapper\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "activity_regularizer"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "constraints"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'layer\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding1-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding1-d.pbtxt
new file mode 100644
index 0000000000..3472bb4514
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding1-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ZeroPadding1D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.ZeroPadding1D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'padding\'], varargs=None, keywords=kwargs, defaults=[\'1\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding2-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding2-d.pbtxt
new file mode 100644
index 0000000000..5af56bd135
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding2-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ZeroPadding2D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.ZeroPadding2D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1)\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding3-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding3-d.pbtxt
new file mode 100644
index 0000000000..1caf07fedc
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.-zero-padding3-d.pbtxt
@@ -0,0 +1,159 @@
+path: "tensorflow.keras.layers.ZeroPadding3D"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.layers.convolutional.ZeroPadding3D\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'padding\', \'data_format\'], varargs=None, keywords=kwargs, defaults=[\'(1, 1, 1)\', \'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.layers.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.layers.pbtxt
new file mode 100644
index 0000000000..8466c3e039
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.layers.pbtxt
@@ -0,0 +1,371 @@
+path: "tensorflow.keras.layers"
+tf_module {
+ member {
+ name: "Activation"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ActivityRegularization"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Add"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AlphaDropout"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Average"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AveragePooling1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AveragePooling2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AveragePooling3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AvgPool1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AvgPool2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "AvgPool3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "BatchNormalization"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Bidirectional"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Concatenate"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Conv1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Conv2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Conv2DTranspose"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Conv3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Conv3DTranspose"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ConvLSTM2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Convolution1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Convolution2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Convolution2DTranspose"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Convolution3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Convolution3DTranspose"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Cropping1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Cropping2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Cropping3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Dense"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Dot"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Dropout"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ELU"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Embedding"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Flatten"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GRU"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GaussianDropout"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GaussianNoise"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAveragePooling1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAveragePooling2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAveragePooling3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAvgPool1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAvgPool2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalAvgPool3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPool1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPool2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPool3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPooling1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPooling2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GlobalMaxPooling3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "InputLayer"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "InputSpec"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LSTM"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Lambda"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Layer"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LeakyReLU"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LocallyConnected1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "LocallyConnected2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Masking"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPool1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPool2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPool3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPooling1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPooling2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "MaxPooling3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Maximum"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Multiply"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "PReLU"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Permute"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "RepeatVector"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Reshape"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SeparableConv2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SeparableConvolution2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SimpleRNN"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SpatialDropout1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SpatialDropout2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SpatialDropout3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ThresholdedReLU"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "TimeDistributed"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "UpSampling1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "UpSampling2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "UpSampling3D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Wrapper"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ZeroPadding1D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ZeroPadding2D"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ZeroPadding3D"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "Input"
+ argspec: "args=[\'shape\', \'batch_size\', \'name\', \'dtype\', \'sparse\', \'tensor\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'False\', \'None\'], "
+ }
+ member_method {
+ name: "add"
+ argspec: "args=[\'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "average"
+ argspec: "args=[\'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "concatenate"
+ argspec: "args=[\'inputs\', \'axis\'], varargs=None, keywords=kwargs, defaults=[\'-1\'], "
+ }
+ member_method {
+ name: "dot"
+ argspec: "args=[\'inputs\', \'axes\', \'normalize\'], varargs=None, keywords=kwargs, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "maximum"
+ argspec: "args=[\'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "multiply"
+ argspec: "args=[\'inputs\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.losses.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.losses.pbtxt
new file mode 100644
index 0000000000..ae5f6305b7
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.losses.pbtxt
@@ -0,0 +1,71 @@
+path: "tensorflow.keras.losses"
+tf_module {
+ member_method {
+ name: "binary_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "categorical_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "categorical_hinge"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "cosine_proximity"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'name\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "hinge"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "kullback_leibler_divergence"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "logcosh"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_absolute_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_absolute_percentage_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_squared_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_squared_logarithmic_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "poisson"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'loss\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sparse_categorical_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "squared_hinge"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.metrics.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.metrics.pbtxt
new file mode 100644
index 0000000000..de285c1aab
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.metrics.pbtxt
@@ -0,0 +1,79 @@
+path: "tensorflow.keras.metrics"
+tf_module {
+ member_method {
+ name: "binary_accuracy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "binary_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "categorical_accuracy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "categorical_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "cosine_proximity"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'name\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "hinge"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "kullback_leibler_divergence"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_absolute_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_absolute_percentage_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_squared_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "mean_squared_logarithmic_error"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "poisson"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'metric\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sparse_categorical_crossentropy"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sparse_top_k_categorical_accuracy"
+ argspec: "args=[\'y_true\', \'y_pred\', \'k\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+ member_method {
+ name: "squared_hinge"
+ argspec: "args=[\'y_true\', \'y_pred\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "top_k_categorical_accuracy"
+ argspec: "args=[\'y_true\', \'y_pred\', \'k\'], varargs=None, keywords=None, defaults=[\'5\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.models.-model.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.models.-model.pbtxt
new file mode 100644
index 0000000000..ade551d02a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.models.-model.pbtxt
@@ -0,0 +1,249 @@
+path: "tensorflow.keras.models.Model"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.training.Model\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Network\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Network\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_spec"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "state_updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "stateful"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "uses_learning_phase"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'inputs\', \'outputs\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'_\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compile"
+ argspec: "args=[\'self\', \'optimizer\', \'loss\', \'metrics\', \'loss_weights\', \'sample_weight_mode\', \'weighted_metrics\', \'target_tensors\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "evaluate"
+ argspec: "args=[\'self\', \'x\', \'y\', \'batch_size\', \'verbose\', \'sample_weight\', \'steps\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "evaluate_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps\', \'max_queue_size\', \'workers\', \'use_multiprocessing\'], varargs=None, keywords=kwargs, defaults=[\'10\', \'1\', \'False\'], "
+ }
+ member_method {
+ name: "fit"
+ argspec: "args=[\'self\', \'x\', \'y\', \'batch_size\', \'epochs\', \'verbose\', \'callbacks\', \'validation_split\', \'validation_data\', \'shuffle\', \'class_weight\', \'sample_weight\', \'initial_epoch\', \'steps_per_epoch\', \'validation_steps\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'1\', \'1\', \'None\', \'0.0\', \'None\', \'True\', \'None\', \'None\', \'0\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "fit_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps_per_epoch\', \'epochs\', \'verbose\', \'callbacks\', \'validation_data\', \'validation_steps\', \'class_weight\', \'max_queue_size\', \'workers\', \'use_multiprocessing\', \'shuffle\', \'initial_epoch\'], varargs=None, keywords=kwargs, defaults=[\'1\', \'1\', \'None\', \'None\', \'None\', \'None\', \'10\', \'1\', \'False\', \'True\', \'0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_layer"
+ argspec: "args=[\'self\', \'name\', \'index\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "load_weights"
+ argspec: "args=[\'self\', \'filepath\', \'by_name\'], varargs=None, keywords=None, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "predict"
+ argspec: "args=[\'self\', \'x\', \'batch_size\', \'verbose\', \'steps\'], varargs=None, keywords=None, defaults=[\'None\', \'0\', \'None\'], "
+ }
+ member_method {
+ name: "predict_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps\', \'max_queue_size\', \'workers\', \'use_multiprocessing\', \'verbose\'], varargs=None, keywords=kwargs, defaults=[\'10\', \'1\', \'False\', \'0\'], "
+ }
+ member_method {
+ name: "predict_on_batch"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "save"
+ argspec: "args=[\'self\', \'filepath\', \'overwrite\', \'include_optimizer\'], varargs=None, keywords=None, defaults=[\'True\', \'True\'], "
+ }
+ member_method {
+ name: "save_weights"
+ argspec: "args=[\'self\', \'filepath\', \'overwrite\'], varargs=None, keywords=None, defaults=[\'True\'], "
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "summary"
+ argspec: "args=[\'self\', \'line_length\', \'positions\', \'print_fn\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "test_on_batch"
+ argspec: "args=[\'self\', \'x\', \'y\', \'sample_weight\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "to_json"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "to_yaml"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "train_on_batch"
+ argspec: "args=[\'self\', \'x\', \'y\', \'sample_weight\', \'class_weight\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.models.-sequential.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.models.-sequential.pbtxt
new file mode 100644
index 0000000000..cadd74eb5f
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.models.-sequential.pbtxt
@@ -0,0 +1,274 @@
+path: "tensorflow.keras.models.Sequential"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.models.Sequential\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.training.Model\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Network\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Network\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.engine.topology.Layer\'>"
+ is_instance: "<class \'tensorflow.python.layers.base.Layer\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "graph"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "input_spec"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "non_trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_mask"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "output_shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "regularizers"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "scope_name"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "state_updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "stateful"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "trainable_weights"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "updates"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "uses_learning_phase"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "variables"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "weights"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'layers\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "add"
+ argspec: "args=[\'self\', \'layer\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "add_loss"
+ argspec: "args=[\'self\', \'losses\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_update"
+ argspec: "args=[\'self\', \'updates\', \'inputs\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "add_variable"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "add_weight"
+ argspec: "args=[\'self\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'constraint\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'True\', \'None\'], "
+ }
+ member_method {
+ name: "apply"
+ argspec: "args=[\'self\', \'inputs\'], varargs=args, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "build"
+ argspec: "args=[\'self\', \'input_shape\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "call"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "compile"
+ argspec: "args=[\'self\', \'optimizer\', \'loss\', \'metrics\', \'sample_weight_mode\', \'weighted_metrics\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "compute_mask"
+ argspec: "args=[\'self\', \'inputs\', \'mask\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "count_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "evaluate"
+ argspec: "args=[\'self\', \'x\', \'y\', \'batch_size\', \'verbose\', \'sample_weight\'], varargs=None, keywords=None, defaults=[\'32\', \'1\', \'None\'], "
+ }
+ member_method {
+ name: "evaluate_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps\', \'max_queue_size\', \'workers\', \'use_multiprocessing\'], varargs=None, keywords=kwargs, defaults=[\'10\', \'1\', \'False\'], "
+ }
+ member_method {
+ name: "fit"
+ argspec: "args=[\'self\', \'x\', \'y\', \'batch_size\', \'epochs\', \'verbose\', \'callbacks\', \'validation_split\', \'validation_data\', \'shuffle\', \'class_weight\', \'sample_weight\', \'initial_epoch\'], varargs=None, keywords=None, defaults=[\'32\', \'10\', \'1\', \'None\', \'0.0\', \'None\', \'True\', \'None\', \'None\', \'0\'], "
+ }
+ member_method {
+ name: "fit_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps_per_epoch\', \'epochs\', \'verbose\', \'callbacks\', \'validation_data\', \'validation_steps\', \'class_weight\', \'max_queue_size\', \'workers\', \'use_multiprocessing\', \'initial_epoch\'], varargs=None, keywords=kwargs, defaults=[\'1\', \'1\', \'None\', \'None\', \'None\', \'None\', \'10\', \'1\', \'False\', \'0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_input_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_layer"
+ argspec: "args=[\'self\', \'name\', \'index\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+ member_method {
+ name: "get_losses_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_mask_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_output_shape_at"
+ argspec: "args=[\'self\', \'node_index\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates_for"
+ argspec: "args=[\'self\', \'inputs\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "load_weights"
+ argspec: "args=[\'self\', \'filepath\', \'by_name\'], varargs=None, keywords=None, defaults=[\'False\'], "
+ }
+ member_method {
+ name: "pop"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "predict"
+ argspec: "args=[\'self\', \'x\', \'batch_size\', \'verbose\'], varargs=None, keywords=None, defaults=[\'32\', \'0\'], "
+ }
+ member_method {
+ name: "predict_classes"
+ argspec: "args=[\'self\', \'x\', \'batch_size\', \'verbose\'], varargs=None, keywords=None, defaults=[\'32\', \'1\'], "
+ }
+ member_method {
+ name: "predict_generator"
+ argspec: "args=[\'self\', \'generator\', \'steps\', \'max_queue_size\', \'workers\', \'use_multiprocessing\', \'verbose\'], varargs=None, keywords=kwargs, defaults=[\'10\', \'1\', \'False\', \'0\'], "
+ }
+ member_method {
+ name: "predict_on_batch"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "predict_proba"
+ argspec: "args=[\'self\', \'x\', \'batch_size\', \'verbose\'], varargs=None, keywords=None, defaults=[\'32\', \'1\'], "
+ }
+ member_method {
+ name: "reset_states"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "save"
+ argspec: "args=[\'self\', \'filepath\', \'overwrite\', \'include_optimizer\'], varargs=None, keywords=None, defaults=[\'True\', \'True\'], "
+ }
+ member_method {
+ name: "save_weights"
+ argspec: "args=[\'self\', \'filepath\', \'overwrite\'], varargs=None, keywords=None, defaults=[\'True\'], "
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "summary"
+ argspec: "args=[\'self\', \'line_length\', \'positions\', \'print_fn\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "test_on_batch"
+ argspec: "args=[\'self\', \'x\', \'y\', \'sample_weight\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "to_json"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "to_yaml"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "train_on_batch"
+ argspec: "args=[\'self\', \'x\', \'y\', \'class_weight\', \'sample_weight\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.models.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.models.pbtxt
new file mode 100644
index 0000000000..8ba0e7480b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.models.pbtxt
@@ -0,0 +1,31 @@
+path: "tensorflow.keras.models"
+tf_module {
+ member {
+ name: "Model"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Sequential"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "load_model"
+ argspec: "args=[\'filepath\', \'custom_objects\', \'compile\'], varargs=None, keywords=None, defaults=[\'None\', \'True\'], "
+ }
+ member_method {
+ name: "model_from_config"
+ argspec: "args=[\'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "model_from_json"
+ argspec: "args=[\'json_string\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "model_from_yaml"
+ argspec: "args=[\'yaml_string\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "save_model"
+ argspec: "args=[\'model\', \'filepath\', \'overwrite\', \'include_optimizer\'], varargs=None, keywords=None, defaults=[\'True\', \'True\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adadelta.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adadelta.pbtxt
new file mode 100644
index 0000000000..ed040c1586
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adadelta.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.Adadelta"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Adadelta\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'rho\', \'epsilon\', \'decay\'], varargs=None, keywords=kwargs, defaults=[\'1.0\', \'0.95\', \'1e-08\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adagrad.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adagrad.pbtxt
new file mode 100644
index 0000000000..a24651429a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adagrad.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.Adagrad"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Adagrad\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'epsilon\', \'decay\'], varargs=None, keywords=kwargs, defaults=[\'0.01\', \'1e-08\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adam.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adam.pbtxt
new file mode 100644
index 0000000000..a0d978fded
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adam.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.Adam"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Adam\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'beta_1\', \'beta_2\', \'epsilon\', \'decay\'], varargs=None, keywords=kwargs, defaults=[\'0.001\', \'0.9\', \'0.999\', \'1e-08\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adamax.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adamax.pbtxt
new file mode 100644
index 0000000000..1b70c93ad5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-adamax.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.Adamax"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Adamax\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'beta_1\', \'beta_2\', \'epsilon\', \'decay\'], varargs=None, keywords=kwargs, defaults=[\'0.002\', \'0.9\', \'0.999\', \'1e-08\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-nadam.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-nadam.pbtxt
new file mode 100644
index 0000000000..b49dbe5cf8
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-nadam.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.Nadam"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Nadam\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'beta_1\', \'beta_2\', \'epsilon\', \'schedule_decay\'], varargs=None, keywords=kwargs, defaults=[\'0.002\', \'0.9\', \'0.999\', \'1e-08\', \'0.004\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-optimizer.pbtxt
new file mode 100644
index 0000000000..ca47e95228
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-optimizer.pbtxt
@@ -0,0 +1,33 @@
+path: "tensorflow.keras.optimizers.Optimizer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-r-m-sprop.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-r-m-sprop.pbtxt
new file mode 100644
index 0000000000..c8860d80d4
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-r-m-sprop.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.RMSprop"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.RMSprop\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'rho\', \'epsilon\', \'decay\'], varargs=None, keywords=kwargs, defaults=[\'0.001\', \'0.9\', \'1e-08\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-s-g-d.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-s-g-d.pbtxt
new file mode 100644
index 0000000000..25adfd3f0b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.-s-g-d.pbtxt
@@ -0,0 +1,34 @@
+path: "tensorflow.keras.optimizers.SGD"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.SGD\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.optimizers.Optimizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'lr\', \'momentum\', \'decay\', \'nesterov\'], varargs=None, keywords=kwargs, defaults=[\'0.01\', \'0.0\', \'0.0\', \'False\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_gradients"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_updates"
+ argspec: "args=[\'self\', \'loss\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_weights"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "set_weights"
+ argspec: "args=[\'self\', \'weights\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.optimizers.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.pbtxt
new file mode 100644
index 0000000000..7257b02087
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.optimizers.pbtxt
@@ -0,0 +1,47 @@
+path: "tensorflow.keras.optimizers"
+tf_module {
+ member {
+ name: "Adadelta"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Adagrad"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Adam"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Adamax"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Nadam"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Optimizer"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "RMSprop"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SGD"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'optimizer\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.pbtxt
new file mode 100644
index 0000000000..b198bde7af
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.pbtxt
@@ -0,0 +1,71 @@
+path: "tensorflow.keras"
+tf_module {
+ member {
+ name: "activations"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "applications"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "backend"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "callbacks"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "constraints"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "datasets"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "initializers"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "layers"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "losses"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "metrics"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "models"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "optimizers"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "preprocessing"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "regularizers"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "utils"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "wrappers"
+ mtype: "<type \'module\'>"
+ }
+ member_method {
+ name: "Input"
+ argspec: "args=[\'shape\', \'batch_size\', \'name\', \'dtype\', \'sparse\', \'tensor\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'False\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-directory-iterator.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-directory-iterator.pbtxt
new file mode 100644
index 0000000000..8ad1f32551
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-directory-iterator.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.preprocessing.image.DirectoryIterator"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.DirectoryIterator\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.Iterator\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'directory\', \'image_data_generator\', \'target_size\', \'color_mode\', \'classes\', \'class_mode\', \'batch_size\', \'shuffle\', \'seed\', \'data_format\', \'save_to_dir\', \'save_prefix\', \'save_format\', \'follow_links\'], varargs=None, keywords=None, defaults=[\'(256, 256)\', \'rgb\', \'None\', \'categorical\', \'32\', \'True\', \'None\', \'None\', \'None\', \'\', \'png\', \'False\'], "
+ }
+ member_method {
+ name: "next"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-image-data-generator.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-image-data-generator.pbtxt
new file mode 100644
index 0000000000..7e33285e7a
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-image-data-generator.pbtxt
@@ -0,0 +1,29 @@
+path: "tensorflow.keras.preprocessing.image.ImageDataGenerator"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.ImageDataGenerator\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'featurewise_center\', \'samplewise_center\', \'featurewise_std_normalization\', \'samplewise_std_normalization\', \'zca_whitening\', \'zca_epsilon\', \'rotation_range\', \'width_shift_range\', \'height_shift_range\', \'shear_range\', \'zoom_range\', \'channel_shift_range\', \'fill_mode\', \'cval\', \'horizontal_flip\', \'vertical_flip\', \'rescale\', \'preprocessing_function\', \'data_format\'], varargs=None, keywords=None, defaults=[\'False\', \'False\', \'False\', \'False\', \'False\', \'1e-06\', \'0.0\', \'0.0\', \'0.0\', \'0.0\', \'0.0\', \'0.0\', \'nearest\', \'0.0\', \'False\', \'False\', \'None\', \'None\', \'None\'], "
+ }
+ member_method {
+ name: "fit"
+ argspec: "args=[\'self\', \'x\', \'augment\', \'rounds\', \'seed\'], varargs=None, keywords=None, defaults=[\'False\', \'1\', \'None\'], "
+ }
+ member_method {
+ name: "flow"
+ argspec: "args=[\'self\', \'x\', \'y\', \'batch_size\', \'shuffle\', \'seed\', \'save_to_dir\', \'save_prefix\', \'save_format\'], varargs=None, keywords=None, defaults=[\'None\', \'32\', \'True\', \'None\', \'None\', \'\', \'png\'], "
+ }
+ member_method {
+ name: "flow_from_directory"
+ argspec: "args=[\'self\', \'directory\', \'target_size\', \'color_mode\', \'classes\', \'class_mode\', \'batch_size\', \'shuffle\', \'seed\', \'save_to_dir\', \'save_prefix\', \'save_format\', \'follow_links\'], varargs=None, keywords=None, defaults=[\'(256, 256)\', \'rgb\', \'None\', \'categorical\', \'32\', \'True\', \'None\', \'None\', \'\', \'png\', \'False\'], "
+ }
+ member_method {
+ name: "random_transform"
+ argspec: "args=[\'self\', \'x\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "standardize"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-iterator.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-iterator.pbtxt
new file mode 100644
index 0000000000..d30462a8eb
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-iterator.pbtxt
@@ -0,0 +1,13 @@
+path: "tensorflow.keras.preprocessing.image.Iterator"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.Iterator\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'n\', \'batch_size\', \'shuffle\', \'seed\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-numpy-array-iterator.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-numpy-array-iterator.pbtxt
new file mode 100644
index 0000000000..841f1c5585
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.-numpy-array-iterator.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.preprocessing.image.NumpyArrayIterator"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.NumpyArrayIterator\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.image.Iterator\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'x\', \'y\', \'image_data_generator\', \'batch_size\', \'shuffle\', \'seed\', \'data_format\', \'save_to_dir\', \'save_prefix\', \'save_format\'], varargs=None, keywords=None, defaults=[\'32\', \'False\', \'None\', \'None\', \'None\', \'\', \'png\'], "
+ }
+ member_method {
+ name: "next"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "reset"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.pbtxt
new file mode 100644
index 0000000000..5652687033
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.image.pbtxt
@@ -0,0 +1,59 @@
+path: "tensorflow.keras.preprocessing.image"
+tf_module {
+ member {
+ name: "DirectoryIterator"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "ImageDataGenerator"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Iterator"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "NumpyArrayIterator"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "apply_transform"
+ argspec: "args=[\'x\', \'transform_matrix\', \'channel_axis\', \'fill_mode\', \'cval\'], varargs=None, keywords=None, defaults=[\'0\', \'nearest\', \'0.0\'], "
+ }
+ member_method {
+ name: "array_to_img"
+ argspec: "args=[\'x\', \'data_format\', \'scale\'], varargs=None, keywords=None, defaults=[\'None\', \'True\'], "
+ }
+ member_method {
+ name: "flip_axis"
+ argspec: "args=[\'x\', \'axis\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "img_to_array"
+ argspec: "args=[\'img\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "load_img"
+ argspec: "args=[\'path\', \'grayscale\', \'target_size\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], "
+ }
+ member_method {
+ name: "random_channel_shift"
+ argspec: "args=[\'x\', \'intensity\', \'channel_axis\'], varargs=None, keywords=None, defaults=[\'0\'], "
+ }
+ member_method {
+ name: "random_rotation"
+ argspec: "args=[\'x\', \'rg\', \'row_axis\', \'col_axis\', \'channel_axis\', \'fill_mode\', \'cval\'], varargs=None, keywords=None, defaults=[\'1\', \'2\', \'0\', \'nearest\', \'0.0\'], "
+ }
+ member_method {
+ name: "random_shear"
+ argspec: "args=[\'x\', \'intensity\', \'row_axis\', \'col_axis\', \'channel_axis\', \'fill_mode\', \'cval\'], varargs=None, keywords=None, defaults=[\'1\', \'2\', \'0\', \'nearest\', \'0.0\'], "
+ }
+ member_method {
+ name: "random_shift"
+ argspec: "args=[\'x\', \'wrg\', \'hrg\', \'row_axis\', \'col_axis\', \'channel_axis\', \'fill_mode\', \'cval\'], varargs=None, keywords=None, defaults=[\'1\', \'2\', \'0\', \'nearest\', \'0.0\'], "
+ }
+ member_method {
+ name: "random_zoom"
+ argspec: "args=[\'x\', \'zoom_range\', \'row_axis\', \'col_axis\', \'channel_axis\', \'fill_mode\', \'cval\'], varargs=None, keywords=None, defaults=[\'1\', \'2\', \'0\', \'nearest\', \'0.0\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.pbtxt
new file mode 100644
index 0000000000..5a78581fc5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.preprocessing"
+tf_module {
+ member {
+ name: "image"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "sequence"
+ mtype: "<type \'module\'>"
+ }
+ member {
+ name: "text"
+ mtype: "<type \'module\'>"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.sequence.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.sequence.pbtxt
new file mode 100644
index 0000000000..1b01935cc5
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.sequence.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.preprocessing.sequence"
+tf_module {
+ member_method {
+ name: "make_sampling_table"
+ argspec: "args=[\'size\', \'sampling_factor\'], varargs=None, keywords=None, defaults=[\'1e-05\'], "
+ }
+ member_method {
+ name: "pad_sequences"
+ argspec: "args=[\'sequences\', \'maxlen\', \'dtype\', \'padding\', \'truncating\', \'value\'], varargs=None, keywords=None, defaults=[\'None\', \'int32\', \'pre\', \'pre\', \'0.0\'], "
+ }
+ member_method {
+ name: "skipgrams"
+ argspec: "args=[\'sequence\', \'vocabulary_size\', \'window_size\', \'negative_samples\', \'shuffle\', \'categorical\', \'sampling_table\', \'seed\'], varargs=None, keywords=None, defaults=[\'4\', \'1.0\', \'True\', \'False\', \'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.-tokenizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.-tokenizer.pbtxt
new file mode 100644
index 0000000000..5bc8c40120
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.-tokenizer.pbtxt
@@ -0,0 +1,33 @@
+path: "tensorflow.keras.preprocessing.text.Tokenizer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.preprocessing.text.Tokenizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'num_words\', \'filters\', \'lower\', \'split\', \'char_level\'], varargs=None, keywords=None, defaults=[\'None\', \'!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n\', \'True\', \' \', \'False\'], "
+ }
+ member_method {
+ name: "fit_on_sequences"
+ argspec: "args=[\'self\', \'sequences\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "fit_on_texts"
+ argspec: "args=[\'self\', \'texts\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "sequences_to_matrix"
+ argspec: "args=[\'self\', \'sequences\', \'mode\'], varargs=None, keywords=None, defaults=[\'binary\'], "
+ }
+ member_method {
+ name: "texts_to_matrix"
+ argspec: "args=[\'self\', \'texts\', \'mode\'], varargs=None, keywords=None, defaults=[\'binary\'], "
+ }
+ member_method {
+ name: "texts_to_sequences"
+ argspec: "args=[\'self\', \'texts\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "texts_to_sequences_generator"
+ argspec: "args=[\'self\', \'texts\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.pbtxt
new file mode 100644
index 0000000000..d106429df0
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.preprocessing.text.pbtxt
@@ -0,0 +1,15 @@
+path: "tensorflow.keras.preprocessing.text"
+tf_module {
+ member {
+ name: "Tokenizer"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "one_hot"
+ argspec: "args=[\'text\', \'n\', \'filters\', \'lower\', \'split\'], varargs=None, keywords=None, defaults=[\'!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n\', \'True\', \' \'], "
+ }
+ member_method {
+ name: "text_to_word_sequence"
+ argspec: "args=[\'text\', \'filters\', \'lower\', \'split\'], varargs=None, keywords=None, defaults=[\'!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n\', \'True\', \' \'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-l1-l2.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-l1-l2.pbtxt
new file mode 100644
index 0000000000..04dcda3860
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-l1-l2.pbtxt
@@ -0,0 +1,18 @@
+path: "tensorflow.keras.regularizers.L1L2"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.regularizers.L1L2\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.regularizers.Regularizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'l1\', \'l2\'], varargs=None, keywords=None, defaults=[\'0.0\', \'0.0\'], "
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_config"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-regularizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-regularizer.pbtxt
new file mode 100644
index 0000000000..b0a125f238
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.-regularizer.pbtxt
@@ -0,0 +1,12 @@
+path: "tensorflow.keras.regularizers.Regularizer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.regularizers.Regularizer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "from_config"
+ argspec: "args=[\'cls\', \'config\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.regularizers.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.pbtxt
new file mode 100644
index 0000000000..bb10d41d70
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.regularizers.pbtxt
@@ -0,0 +1,35 @@
+path: "tensorflow.keras.regularizers"
+tf_module {
+ member {
+ name: "L1L2"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Regularizer"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "deserialize"
+ argspec: "args=[\'config\', \'custom_objects\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'identifier\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "l1"
+ argspec: "args=[\'l\'], varargs=None, keywords=None, defaults=[\'0.01\'], "
+ }
+ member_method {
+ name: "l1_l2"
+ argspec: "args=[\'l1\', \'l2\'], varargs=None, keywords=None, defaults=[\'0.01\', \'0.01\'], "
+ }
+ member_method {
+ name: "l2"
+ argspec: "args=[\'l\'], varargs=None, keywords=None, defaults=[\'0.01\'], "
+ }
+ member_method {
+ name: "serialize"
+ argspec: "args=[\'regularizer\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-custom-object-scope.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-custom-object-scope.pbtxt
new file mode 100644
index 0000000000..dda39ed221
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-custom-object-scope.pbtxt
@@ -0,0 +1,9 @@
+path: "tensorflow.keras.utils.CustomObjectScope"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.generic_utils.CustomObjectScope\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\'], varargs=args, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-generator-enqueuer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-generator-enqueuer.pbtxt
new file mode 100644
index 0000000000..bf27a97cf2
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-generator-enqueuer.pbtxt
@@ -0,0 +1,26 @@
+path: "tensorflow.keras.utils.GeneratorEnqueuer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.data_utils.GeneratorEnqueuer\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.data_utils.SequenceEnqueuer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'generator\', \'use_multiprocessing\', \'wait_time\', \'random_seed\'], varargs=None, keywords=None, defaults=[\'False\', \'0.05\', \'None\'], "
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "is_running"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "start"
+ argspec: "args=[\'self\', \'workers\', \'max_queue_size\'], varargs=None, keywords=None, defaults=[\'1\', \'10\'], "
+ }
+ member_method {
+ name: "stop"
+ argspec: "args=[\'self\', \'timeout\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-h-d-f5-matrix.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-h-d-f5-matrix.pbtxt
new file mode 100644
index 0000000000..ce62c8bafc
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-h-d-f5-matrix.pbtxt
@@ -0,0 +1,29 @@
+path: "tensorflow.keras.utils.HDF5Matrix"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.io_utils.HDF5Matrix\'>"
+ is_instance: "<type \'object\'>"
+ member {
+ name: "dtype"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "ndim"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "refs"
+ mtype: "<type \'collections.defaultdict\'>"
+ }
+ member {
+ name: "shape"
+ mtype: "<type \'property\'>"
+ }
+ member {
+ name: "size"
+ mtype: "<type \'property\'>"
+ }
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'datapath\', \'dataset\', \'start\', \'end\', \'normalizer\'], varargs=None, keywords=None, defaults=[\'0\', \'None\', \'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-progbar.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-progbar.pbtxt
new file mode 100644
index 0000000000..3adc6b6faa
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-progbar.pbtxt
@@ -0,0 +1,17 @@
+path: "tensorflow.keras.utils.Progbar"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.generic_utils.Progbar\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'target\', \'width\', \'verbose\', \'interval\'], varargs=None, keywords=None, defaults=[\'30\', \'1\', \'0.05\'], "
+ }
+ member_method {
+ name: "add"
+ argspec: "args=[\'self\', \'n\', \'values\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "update"
+ argspec: "args=[\'self\', \'current\', \'values\', \'force\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence-enqueuer.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence-enqueuer.pbtxt
new file mode 100644
index 0000000000..5cf2a07b0b
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence-enqueuer.pbtxt
@@ -0,0 +1,24 @@
+path: "tensorflow.keras.utils.SequenceEnqueuer"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.data_utils.SequenceEnqueuer\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "get"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "is_running"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "start"
+ argspec: "args=[\'self\', \'workers\', \'max_queue_size\'], varargs=None, keywords=None, defaults=[\'1\', \'10\'], "
+ }
+ member_method {
+ name: "stop"
+ argspec: "args=[\'self\', \'timeout\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence.pbtxt
new file mode 100644
index 0000000000..5b272253e3
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.-sequence.pbtxt
@@ -0,0 +1,12 @@
+path: "tensorflow.keras.utils.Sequence"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.utils.data_utils.Sequence\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ }
+ member_method {
+ name: "on_epoch_end"
+ argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.utils.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.utils.pbtxt
new file mode 100644
index 0000000000..e840f33142
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.utils.pbtxt
@@ -0,0 +1,63 @@
+path: "tensorflow.keras.utils"
+tf_module {
+ member {
+ name: "CustomObjectScope"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "GeneratorEnqueuer"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "HDF5Matrix"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Progbar"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "Sequence"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "SequenceEnqueuer"
+ mtype: "<type \'type\'>"
+ }
+ member_method {
+ name: "convert_all_kernels_in_model"
+ argspec: "args=[\'model\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "custom_object_scope"
+ argspec: "args=[], varargs=args, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "deserialize_keras_object"
+ argspec: "args=[\'identifier\', \'module_objects\', \'custom_objects\', \'printable_module_name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'object\'], "
+ }
+ member_method {
+ name: "get_custom_objects"
+ argspec: "args=[], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "get_file"
+ argspec: "args=[\'fname\', \'origin\', \'untar\', \'md5_hash\', \'file_hash\', \'cache_subdir\', \'hash_algorithm\', \'extract\', \'archive_format\', \'cache_dir\'], varargs=None, keywords=None, defaults=[\'False\', \'None\', \'None\', \'datasets\', \'auto\', \'False\', \'auto\', \'None\'], "
+ }
+ member_method {
+ name: "normalize"
+ argspec: "args=[\'x\', \'axis\', \'order\'], varargs=None, keywords=None, defaults=[\'-1\', \'2\'], "
+ }
+ member_method {
+ name: "plot_model"
+ argspec: "args=[\'model\', \'to_file\', \'show_shapes\', \'show_layer_names\', \'rankdir\'], varargs=None, keywords=None, defaults=[\'model.png\', \'False\', \'True\', \'TB\'], "
+ }
+ member_method {
+ name: "serialize_keras_object"
+ argspec: "args=[\'instance\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "to_categorical"
+ argspec: "args=[\'y\', \'num_classes\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.wrappers.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.pbtxt
new file mode 100644
index 0000000000..0b2fac9b7d
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.pbtxt
@@ -0,0 +1,7 @@
+path: "tensorflow.keras.wrappers"
+tf_module {
+ member {
+ name: "scikit_learn"
+ mtype: "<type \'module\'>"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-classifier.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-classifier.pbtxt
new file mode 100644
index 0000000000..8d200f99fd
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-classifier.pbtxt
@@ -0,0 +1,42 @@
+path: "tensorflow.keras.wrappers.scikit_learn.KerasClassifier"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.wrappers.scikit_learn.KerasClassifier\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.wrappers.scikit_learn.BaseWrapper\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'build_fn\'], varargs=None, keywords=sk_params, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "check_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "filter_sk_params"
+ argspec: "args=[\'self\', \'fn\', \'override\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "fit"
+ argspec: "args=[\'self\', \'x\', \'y\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "get_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=params, defaults=None"
+ }
+ member_method {
+ name: "predict"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "predict_proba"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "score"
+ argspec: "args=[\'self\', \'x\', \'y\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=params, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-regressor.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-regressor.pbtxt
new file mode 100644
index 0000000000..7a971346d8
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.-keras-regressor.pbtxt
@@ -0,0 +1,38 @@
+path: "tensorflow.keras.wrappers.scikit_learn.KerasRegressor"
+tf_class {
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.wrappers.scikit_learn.KerasRegressor\'>"
+ is_instance: "<class \'tensorflow.python.keras._impl.keras.wrappers.scikit_learn.BaseWrapper\'>"
+ is_instance: "<type \'object\'>"
+ member_method {
+ name: "__init__"
+ argspec: "args=[\'self\', \'build_fn\'], varargs=None, keywords=sk_params, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "check_params"
+ argspec: "args=[\'self\', \'params\'], varargs=None, keywords=None, defaults=None"
+ }
+ member_method {
+ name: "filter_sk_params"
+ argspec: "args=[\'self\', \'fn\', \'override\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
+ name: "fit"
+ argspec: "args=[\'self\', \'x\', \'y\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "get_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=params, defaults=None"
+ }
+ member_method {
+ name: "predict"
+ argspec: "args=[\'self\', \'x\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "score"
+ argspec: "args=[\'self\', \'x\', \'y\'], varargs=None, keywords=kwargs, defaults=None"
+ }
+ member_method {
+ name: "set_params"
+ argspec: "args=[\'self\'], varargs=None, keywords=params, defaults=None"
+ }
+}
diff --git a/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.pbtxt b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.pbtxt
new file mode 100644
index 0000000000..fbd4d13387
--- /dev/null
+++ b/tensorflow/tools/api/golden/tensorflow.keras.wrappers.scikit_learn.pbtxt
@@ -0,0 +1,11 @@
+path: "tensorflow.keras.wrappers.scikit_learn"
+tf_module {
+ member {
+ name: "KerasClassifier"
+ mtype: "<type \'type\'>"
+ }
+ member {
+ name: "KerasRegressor"
+ mtype: "<type \'type\'>"
+ }
+}
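The `KerasClassifier` and `KerasRegressor` goldens above share the `BaseWrapper` interface (`build_fn` plus `**sk_params` in `__init__`, and `filter_sk_params` / `get_params` / `set_params`). A minimal self-contained sketch of that wrapper pattern, with illustrative logic and a hypothetical `build_model`, not the TensorFlow implementation:

```python
import inspect

class BaseWrapperSketch:
    """Sketch of the scikit-learn wrapper interface recorded in the goldens
    above; method names mirror the argspecs, the bodies are illustrative."""

    def __init__(self, build_fn=None, **sk_params):
        self.build_fn = build_fn
        self.sk_params = sk_params

    def get_params(self, **params):
        # Expose constructor-style params, as scikit-learn estimators do.
        res = dict(self.sk_params)
        res["build_fn"] = self.build_fn
        return res

    def set_params(self, **params):
        self.sk_params.update(params)
        return self

    def filter_sk_params(self, fn, override=None):
        # Keep only the stored params that `fn` actually accepts.
        override = override or {}
        accepted = inspect.signature(fn).parameters
        res = {k: v for k, v in self.sk_params.items() if k in accepted}
        res.update(override)
        return res

def build_model(units=8, dropout=0.0):
    # Hypothetical model-building callable standing in for a Keras model.
    return ("model", units, dropout)

clf = BaseWrapperSketch(build_fn=build_model, units=16, epochs=5)
print(clf.filter_sk_params(build_model))  # → {'units': 16}
```

The `filter_sk_params` split is what lets one flat `**sk_params` bag serve both the build function and `fit`, which is why the recorded `fit`/`predict` argspecs all take `keywords=kwargs`.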
diff --git a/tensorflow/tools/api/golden/tensorflow.pbtxt b/tensorflow/tools/api/golden/tensorflow.pbtxt
index 8893594dc3..8935bcda3d 100644
--- a/tensorflow/tools/api/golden/tensorflow.pbtxt
+++ b/tensorflow/tools/api/golden/tensorflow.pbtxt
@@ -365,6 +365,10 @@ tf_module {
mtype: "<class \'tensorflow.python.framework.dtypes.DType\'>"
}
member {
+ name: "keras"
+ mtype: "<type \'module\'>"
+ }
+ member {
name: "layers"
mtype: "<type \'module\'>"
}
diff --git a/tensorflow/tools/ci_build/ci_sanity.sh b/tensorflow/tools/ci_build/ci_sanity.sh
index b223d7d887..3e0eaa26bc 100755
--- a/tensorflow/tools/ci_build/ci_sanity.sh
+++ b/tensorflow/tools/ci_build/ci_sanity.sh
@@ -96,7 +96,7 @@ do_pylint() {
"^tensorflow/python/feature_column/feature_column_test\.py.*\[E0110.*abstract-class-instantiated "\
"^tensorflow/contrib/layers/python/layers/feature_column\.py.*\[E0110.*abstract-class-instantiated "\
"^tensorflow/python/platform/gfile\.py.*\[E0301.*non-iterator "\
-"^tensorflow/contrib/keras/python/keras/callbacks\.py.*\[E1133.*not-an-iterable"
+"^tensorflow/python/keras/_impl/keras/callbacks\.py.*\[E1133.*not-an-iterable"
echo "ERROR_WHITELIST=\"${ERROR_WHITELIST}\""
diff --git a/tensorflow/tools/ci_build/linux/cpu/run_cc_core.sh b/tensorflow/tools/ci_build/linux/cpu/run_cc_core.sh
index 817df6a434..08fc82d04c 100755
--- a/tensorflow/tools/ci_build/linux/cpu/run_cc_core.sh
+++ b/tensorflow/tools/ci_build/linux/cpu/run_cc_core.sh
@@ -33,7 +33,7 @@ export PYTHON_BIN_PATH=`which python`
yes "" | $PYTHON_BIN_PATH configure.py
# Run bazel test command. Double test timeouts to avoid flakes.
-bazel test --test_tag_filters=-no_oss,-gpu,-benchmark-test --test_lang_filters=cc -k \
+bazel test --test_tag_filters=-no_oss,-gpu,-benchmark-test --test_lang_filters=cc,java -k \
--jobs=${N_JOBS} --test_timeout 300,450,1200,3600 \
--test_output=errors -- \
//tensorflow/... -//tensorflow/compiler/... -//tensorflow/contrib/...
diff --git a/tensorflow/tools/graph_transforms/remove_attribute.cc b/tensorflow/tools/graph_transforms/remove_attribute.cc
index d76c3ff87d..b1a04c0f28 100644
--- a/tensorflow/tools/graph_transforms/remove_attribute.cc
+++ b/tensorflow/tools/graph_transforms/remove_attribute.cc
@@ -34,7 +34,7 @@ Status RemoveAttribute(const GraphDef& input_graph_def,
if (!context.params.count("attribute_name") ||
(context.params.at("attribute_name").size() != 1)) {
return errors::InvalidArgument(
- "remove_nodes expects exactly one 'attribute_name' "
+ "remove_attribute expects exactly one 'attribute_name' "
"argument, e.g. remove_attribute(op_name=Mul, attribute_name=foo)");
}